[
{
"msg_contents": "Hi,\n\nI got a coredump when using hash join on a Postgres derived Database(Greenplum DB).\n\nAnd I find a way to reproduce it on Postgres.\n\nRoot cause:\n\nIn ExecChooseHashTableSize(), commit b154ee63bb uses func pg_nextpower2_size_t\nwhose param must not be 0.\n\n```\nsbuckets = pg_nextpower2_size_t(hash_table_bytes / bucket_size);\n\n```\n\nThere is a potential risk that hash_table_bytes < bucket_size in some corner cases.\n\nReproduce sql:\n\n```\n--create a wide enough table to reproduce the bug\nDO language 'plpgsql'\n$$\nDECLARE var_sql text := 'CREATE TABLE t_1600_columns('\n || string_agg('field' || i::text || ' varchar(255)', ',') || ');'\n FROM generate_series(1,1600) As i;\nBEGIN\n EXECUTE var_sql;\nEND;\n$$ ;\n\ncreate table j1(field1 text);\nset work_mem = 64;\nset hash_mem_multiplier = 1;\nset enable_nestloop = off;\nset enable_mergejoin = off;\n\nexplain select * from j1 inner join t_1600_columns using(field1);\n\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Succeeded\n\n```\n\nPart of core dump file:\n\n```\n#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=139769161559104) at ./nptl/pthread_kill.c:44\n#1 __pthread_kill_internal (signo=6, threadid=139769161559104) at ./nptl/pthread_kill.c:78\n#2 __GI___pthread_kill (threadid=139769161559104, signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n#3 0x00007f1e8b3de476 in __GI_raise (sig=sig@entry=6) at ../sysdeps/posix/raise.c:26\n#4 0x00007f1e8b3c47f3 in __GI_abort () at ./stdlib/abort.c:79\n#5 0x0000558cc8884062 in ExceptionalCondition (conditionName=0x558cc8a21570 \"num > 0 && num <= PG_UINT64_MAX / 2 + 1\",\n errorType=0x558cc8a21528 \"FailedAssertion\", fileName=0x558cc8a21500 \"../../../src/include/port/pg_bitutils.h\", lineNumber=165) at assert.c:69\n#6 0x0000558cc843bb16 in pg_nextpower2_64 (num=0) at ../../../src/include/port/pg_bitutils.h:165\n#7 0x0000558cc843d13a in ExecChooseHashTableSize (ntuples=100, tupwidth=825086, useskew=true, try_combined_hash_mem=false, parallel_workers=0,\n space_allowed=0x7ffdcfa01598, numbuckets=0x7ffdcfa01588, numbatches=0x7ffdcfa0158c, num_skew_mcvs=0x7ffdcfa01590) at nodeHash.c:835\n```\n\nThis patch fixes it easily:\n\n```\n- sbuckets = pg_nextpower2_size_t(hash_table_bytes / bucket_size);\n+ if (hash_table_bytes < bucket_size)\n+ sbuckets = 1;\n+ else\n+ sbuckets = pg_nextpower2_size_t(hash_table_bytes / bucket_size);\n```\n\nOr, we could report an error/hint message to tell users to increase work_mem/hash_mem_multiplier.\n\nAnd I think letting it work is better.\n\nThe issue exists on master, 15, 14, 13.\n\nRegards,\nZhang Mingli",
"msg_date": "Fri, 12 Aug 2022 23:05:06 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "[patch]HashJoin crash"
},
{
"msg_contents": "+ Tom Lane\n\nOn Fri, Aug 12, 2022 at 11:05:06PM +0800, Zhang Mingli wrote:\n> I got a coredump when using hash join on a Postgres derived Database(Greenplum DB).\n> And I find a way to reproduce it on Postgres.\n> \n> Root cause:\n> \n> In ExecChooseHashTableSize(), commit b154ee63bb uses func pg_nextpower2_size_t\n> whose param must not be 0.\n> \n> sbuckets = pg_nextpower2_size_t(hash_table_bytes / bucket_size);\n> \n> There is a potential risk that hash_table_bytes < bucket_size in some corner cases.\n> \n> Reproduce sql:\n\n\n",
"msg_date": "Sat, 13 Aug 2022 14:17:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [patch]HashJoin crash"
},
{
"msg_contents": "Zhang Mingli <zmlpostgres@gmail.com> writes:\n> In ExecChooseHashTableSize(), commit b154ee63bb uses func pg_nextpower2_size_t\n> whose param must not be 0.\n\nRight. Fix pushed, thanks.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 13 Aug 2022 17:01:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch]HashJoin crash"
}
]
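The arithmetic behind the crash in the thread above is easy to see outside the server. Below is a minimal C sketch, not PostgreSQL code: `next_power_of_2()` stands in for `pg_nextpower2_size_t()`, and `choose_sbuckets()` is an invented name wrapping the guarded computation from the proposed patch.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for PostgreSQL's pg_nextpower2_size_t(): smallest power of 2
 * greater than or equal to num.  Like the real function, it requires
 * num > 0; this assertion mirrors the one that fired in the report. */
static size_t
next_power_of_2(size_t num)
{
    size_t result = 1;

    assert(num > 0);
    while (result < num)
        result <<= 1;
    return result;
}

/* The bucket computation with the proposed guard: when the hash table
 * memory budget is smaller than a single bucket, the integer division
 * truncates to 0, so clamp the bucket count to 1 instead of passing 0
 * through to next_power_of_2(). */
static size_t
choose_sbuckets(size_t hash_table_bytes, size_t bucket_size)
{
    if (hash_table_bytes < bucket_size)
        return 1;
    return next_power_of_2(hash_table_bytes / bucket_size);
}
```

With the reproducer's numbers (work_mem = 64, i.e. a 64kB budget, against a tuple roughly 825kB wide), hash_table_bytes / bucket_size truncates to 0; the guard returns 1 bucket instead of tripping the assertion.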
[
{
"msg_contents": "Hello,\n\nI suggest supporting asynchronous execution for Custom Scan.\nSince v14, PostgreSQL supports asynchronous execution for Foreign Scan.\nThis patch enables asynchronous execution by applying the process for\nForeign Scan to Custom Scan .\n\nThe patch is divided into 2 parts, source and documents(sgml).\n\n\nRegards,",
"msg_date": "Sat, 13 Aug 2022 22:42:38 +0900",
"msg_from": "Kazutaka Onishi <onishi@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "v1 patch occurs gcc warnings, I fixed it.\n\n2022年8月13日(土) 22:42 Kazutaka Onishi <onishi@heterodb.com>:\n>\n> Hello,\n>\n> I suggest supporting asynchronous execution for Custom Scan.\n> Since v14, PostgreSQL supports asynchronous execution for Foreign Scan.\n> This patch enables asynchronous execution by applying the process for\n> Foreign Scan to Custom Scan .\n>\n> The patch is divided into 2 parts, source and documents(sgml).\n>\n>\n> Regards,",
"msg_date": "Sun, 14 Aug 2022 12:59:15 +0900",
"msg_from": "Kazutaka Onishi <onishi@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Hi Onishi-san,\n\nOn Sun, Aug 14, 2022 at 12:59 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> v1 patch occurs gcc warnings, I fixed it.\n\nThanks for working on this!\n\nI'd like to review this (though, I'm not sure I can have time for it\nin the next commitfet), but I don't think we can review this without\nany example. Could you provide it? I think a simple example is\nbetter for ease of review.\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 22 Aug 2022 17:55:22 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "I internally suggested him to expand the ctidscan module for the PoC purpose.\nhttps://github.com/kaigai/ctidscan\n\nEven though it does not have asynchronous capability actually, but\nsuitable to ensure\nAPI works and small enough for reviewing.\n\nBest regards,\n\n2022年8月22日(月) 17:55 Etsuro Fujita <etsuro.fujita@gmail.com>:\n>\n> Hi Onishi-san,\n>\n> On Sun, Aug 14, 2022 at 12:59 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > v1 patch occurs gcc warnings, I fixed it.\n>\n> Thanks for working on this!\n>\n> I'd like to review this (though, I'm not sure I can have time for it\n> in the next commitfet), but I don't think we can review this without\n> any example. Could you provide it? I think a simple example is\n> better for ease of review.\n>\n> Best regards,\n> Etsuro Fujita\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:26:29 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Hi KaiGai-san,\n\nOn Tue, Aug 23, 2022 at 6:26 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> I internally suggested him to expand the ctidscan module for the PoC purpose.\n> https://github.com/kaigai/ctidscan\n>\n> Even though it does not have asynchronous capability actually, but\n> suitable to ensure\n> API works and small enough for reviewing.\n\nSeems like a good idea.\n\nThanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Fri, 26 Aug 2022 17:18:04 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Hi, Fujii-san,\n\nThe asynchronous version \"ctidscan\" plugin is ready.\nPlease check this.\nhttps://github.com/0-kaz/ctidscan/tree/async_sample\n\nI've confirmed this works correctly by running SQL shown below.\nThe query plan shows 2 custom scan works asynchronously.\n\npostgres=# LOAD 'ctidscan';\nLOAD\npostgres=# EXPLAIN ANALYZE SELECT * FROM t1 WHERE ctid BETWEEN\n'(2,1)'::tid AND '(3,10)'::tid\nUNION\nSELECT * FROM (SELECT * FROM t1 WHERE ctid BETWEEN '(2,115)'::tid AND\n'(3,10)'::tid);\n QUERY\nPLAN\n------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3.55..5.10 rows=155 width=36) (actual\ntime=0.633..0.646 rows=130 loops=1)\n Group Key: t1.a, t1.b\n Batches: 1 Memory Usage: 48kB\n -> Append (cost=0.01..2.77 rows=155 width=36) (actual\ntime=0.035..0.590 rows=146 loops=1)\n -> Async Custom Scan (ctidscan) on t1 (cost=0.01..1.00\nrows=134 width=37) (actual time=0.009..0.129 rows=130 loops=1)\n Filter: ((ctid >= '(2,1)'::tid) AND (ctid <= '(3,10)'::tid))\n Rows Removed by Filter: 30\n ctid quals: ((ctid >= '(2,1)'::tid) AND (ctid <= '(3,10)'::tid))\n -> Async Custom Scan (ctidscan) on t1 t1_1 (cost=0.01..1.00\nrows=21 width=37) (actual time=0.003..0.025 rows=16 loops=1)\n Filter: ((ctid >= '(2,115)'::tid) AND (ctid <= '(3,10)'::tid))\n Rows Removed by Filter: 144\n ctid quals: ((ctid >= '(2,115)'::tid) AND (ctid <=\n'(3,10)'::tid))\n Planning Time: 0.314 ms\n Execution Time: 0.762 ms\n(14 rows)\n\nRegards,\n\n\n2022年8月26日(金) 17:18 Etsuro Fujita <etsuro.fujita@gmail.com>:\n>\n> Hi KaiGai-san,\n>\n> On Tue, Aug 23, 2022 at 6:26 PM Kohei KaiGai <kaigai@heterodb.com> wrote:\n> > I internally suggested him to expand the ctidscan module for the PoC purpose.\n> > https://github.com/kaigai/ctidscan\n> >\n> > Even though it does not have asynchronous capability actually, but\n> > suitable to ensure\n> > API works and small enough for reviewing.\n>\n> 
Seems like a good idea.\n>\n> Thanks!\n>\n> Best regards,\n> Etsuro Fujita\n\n\n",
"msg_date": "Fri, 2 Sep 2022 22:43:16 +0900",
"msg_from": "Kazutaka Onishi <onishi@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "On Fri, Sep 2, 2022 at 10:43 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> The asynchronous version \"ctidscan\" plugin is ready.\n\nThanks for that!\n\nI looked at the extended version quickly. IIUC, it uses the proposed\nAPIs, but actually executes ctidscans *synchronously*, so it does not\nimprove performance. Right?\n\nAnyway, that version seems to be useful for testing that the proposed\nAPIs works well. So I'll review the proposed patches with it. I'm\nnot Fujii-san, though. :-)\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Mon, 5 Sep 2022 15:27:44 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Fujita-san,\n\nI'm sorry for my error on your name...\n\n> IIUC, it uses the proposed\n> APIs, but actually executes ctidscans *synchronously*, so it does not\n> improve performance. Right?\n\nExactly.\nThe actual CustomScan that supports asynchronous execution will\nstart processing in CustomScanAsyncRequest,\nconfigure to detect completion via file descriptor in\nCustomScanAsyncConfigureWait,\nand receive the result in CustomScanAsyncNotify.\n\n> So I'll review the proposed patches with it.\nThank you!\n\n2022年9月5日(月) 15:27 Etsuro Fujita <etsuro.fujita@gmail.com>:\n>\n> On Fri, Sep 2, 2022 at 10:43 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > The asynchronous version \"ctidscan\" plugin is ready.\n>\n> Thanks for that!\n>\n> I looked at the extended version quickly. IIUC, it uses the proposed\n> APIs, but actually executes ctidscans *synchronously*, so it does not\n> improve performance. Right?\n>\n> Anyway, that version seems to be useful for testing that the proposed\n> APIs works well. So I'll review the proposed patches with it. I'm\n> not Fujii-san, though. :-)\n>\n> Best regards,\n> Etsuro Fujita\n\n\n",
"msg_date": "Mon, 5 Sep 2022 22:32:19 +0900",
"msg_from": "Kazutaka Onishi <onishi@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 10:32 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> I'm sorry for my error on your name...\n\nNo problem.\n\n> > IIUC, it uses the proposed\n> > APIs, but actually executes ctidscans *synchronously*, so it does not\n> > improve performance. Right?\n\n> Exactly.\n> The actual CustomScan that supports asynchronous execution will\n> start processing in CustomScanAsyncRequest,\n> configure to detect completion via file descriptor in\n> CustomScanAsyncConfigureWait,\n> and receive the result in CustomScanAsyncNotify.\n\nOk, thanks!\n\nBest regards,\nEtsuro Fujita\n\n\n",
"msg_date": "Tue, 6 Sep 2022 18:29:55 +0900",
"msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Le mardi 6 septembre 2022, 11:29:55 CET Etsuro Fujita a écrit :\n> On Mon, Sep 5, 2022 at 10:32 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > I'm sorry for my error on your name...\n> \n> No problem.\n> \n> > > IIUC, it uses the proposed\n> > > \n> > > APIs, but actually executes ctidscans *synchronously*, so it does not\n> > > improve performance. Right?\n> > \n> > Exactly.\n> > The actual CustomScan that supports asynchronous execution will\n> > start processing in CustomScanAsyncRequest,\n> > configure to detect completion via file descriptor in\n> > CustomScanAsyncConfigureWait,\n> > and receive the result in CustomScanAsyncNotify.\n> \n> Ok, thanks!\n\nThanks for this patch, seems like a useful addition to the CustomScan API. \nJust to nitpick: there are extraneous tabs in createplan.c on a blank line.\n\nSorry for the digression, but I know your ctidscan module had been proposed \nfor inclusion in contrib a long time ago, and I wonder if the rationale for \nnot including it could have changed. We still don't have tests which cover \nCustomScan, and I can think of at least a few use cases where this customscan \nis helpful and not merely testing code.\n\nOne of those use case is when performing migrations on a table, and one wants \nto update the whole table by filling a new column with a computed value. You \nobviously don't want to do it in a single transaction, so you end up batching \nupdates using an index looking for null values. If you want to do this, it's \nmuch faster to update rows in a range of block, performing a first series of \nbatch updating all such block ranges, and then finally update the ones we \nmissed transactionally (inserted in a block we already processed while in the \nmiddle of the batch, or in new blocks resulting from a relation extension).\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Tue, 22 Nov 2022 10:07:27 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "Thank you for your comment.\nI've removed the tabs.\n\n> I can think of at least a few use cases where this customscan is helpful and not merely testing code.\n\nIIUC, we already can use ctid in the where clause on the latest\nPostgreSQL, can't we?\n\n2022年11月22日(火) 18:07 Ronan Dunklau <ronan.dunklau@aiven.io>:\n>\n> Le mardi 6 septembre 2022, 11:29:55 CET Etsuro Fujita a écrit :\n> > On Mon, Sep 5, 2022 at 10:32 PM Kazutaka Onishi <onishi@heterodb.com> wrote:\n> > > I'm sorry for my error on your name...\n> >\n> > No problem.\n> >\n> > > > IIUC, it uses the proposed\n> > > >\n> > > > APIs, but actually executes ctidscans *synchronously*, so it does not\n> > > > improve performance. Right?\n> > >\n> > > Exactly.\n> > > The actual CustomScan that supports asynchronous execution will\n> > > start processing in CustomScanAsyncRequest,\n> > > configure to detect completion via file descriptor in\n> > > CustomScanAsyncConfigureWait,\n> > > and receive the result in CustomScanAsyncNotify.\n> >\n> > Ok, thanks!\n>\n> Thanks for this patch, seems like a useful addition to the CustomScan API.\n> Just to nitpick: there are extraneous tabs in createplan.c on a blank line.\n>\n> Sorry for the digression, but I know your ctidscan module had been proposed\n> for inclusion in contrib a long time ago, and I wonder if the rationale for\n> not including it could have changed. We still don't have tests which cover\n> CustomScan, and I can think of at least a few use cases where this customscan\n> is helpful and not merely testing code.\n>\n> One of those use case is when performing migrations on a table, and one wants\n> to update the whole table by filling a new column with a computed value. You\n> obviously don't want to do it in a single transaction, so you end up batching\n> updates using an index looking for null values. 
If you want to do this, it's\n> much faster to update rows in a range of block, performing a first series of\n> batch updating all such block ranges, and then finally update the ones we\n> missed transactionally (inserted in a block we already processed while in the\n> middle of the batch, or in new blocks resulting from a relation extension).\n>\n> Best regards,\n>\n> --\n> Ronan Dunklau\n>\n>",
"msg_date": "Thu, 1 Dec 2022 22:13:53 +0900",
"msg_from": "Kazutaka Onishi <onishi@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "> IIUC, we already can use ctid in the where clause on the latest\n> PostgreSQL, can't we?\n\nOh, sorry, I missed the TidRangeScan. My apologies for the noise.\n\nBest regards,\n\n--\nRonan Dunklau\n\n\n\n\n\n\n",
"msg_date": "Thu, 01 Dec 2022 14:27:31 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "> > IIUC, we already can use ctid in the where clause on the latest\n> > PostgreSQL, can't we?\n>\n> Oh, sorry, I missed the TidRangeScan. My apologies for the noise.\n>\nI made the ctidscan extension when we developed CustomScan API\ntowards v9.5 or v9.6, IIRC. It would make sense just an example of\nCustomScan API (e.g, PG-Strom code is too large and complicated\nto learn about this API), however, makes no sense from the standpoint\nof the functionality.\n\nBest regards,\n\n2022年12月1日(木) 22:27 Ronan Dunklau <ronan.dunklau@aiven.io>:\n>\n> > IIUC, we already can use ctid in the where clause on the latest\n> > PostgreSQL, can't we?\n>\n> Oh, sorry, I missed the TidRangeScan. My apologies for the noise.\n>\n> Best regards,\n>\n> --\n> Ronan Dunklau\n>\n>\n>\n>\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Fri, 2 Dec 2022 08:35:13 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "On Fri, 2 Dec 2022 at 05:05, Kohei KaiGai <kaigai@heterodb.com> wrote:\n>\n> > > IIUC, we already can use ctid in the where clause on the latest\n> > > PostgreSQL, can't we?\n> >\n> > Oh, sorry, I missed the TidRangeScan. My apologies for the noise.\n> >\n> I made the ctidscan extension when we developed CustomScan API\n> towards v9.5 or v9.6, IIRC. It would make sense just an example of\n> CustomScan API (e.g, PG-Strom code is too large and complicated\n> to learn about this API), however, makes no sense from the standpoint\n> of the functionality.\n\nThis patch has been moving from one CF to another CF without any\ndiscussion happening. It has been more than one year since any\nactivity in this thread. I don't see there is much interest in this\npatch. I prefer to return this patch in this CF unless someone is\ninterested in the patch and takes it forward.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Jan 2024 11:21:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
},
{
"msg_contents": "On Sun, 14 Jan 2024 at 11:21, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 2 Dec 2022 at 05:05, Kohei KaiGai <kaigai@heterodb.com> wrote:\n> >\n> > > > IIUC, we already can use ctid in the where clause on the latest\n> > > > PostgreSQL, can't we?\n> > >\n> > > Oh, sorry, I missed the TidRangeScan. My apologies for the noise.\n> > >\n> > I made the ctidscan extension when we developed CustomScan API\n> > towards v9.5 or v9.6, IIRC. It would make sense just an example of\n> > CustomScan API (e.g, PG-Strom code is too large and complicated\n> > to learn about this API), however, makes no sense from the standpoint\n> > of the functionality.\n>\n> This patch has been moving from one CF to another CF without any\n> discussion happening. It has been more than one year since any\n> activity in this thread. I don't see there is much interest in this\n> patch. I prefer to return this patch in this CF unless someone is\n> interested in the patch and takes it forward.\n\nSince the author or no one else showed interest in taking it forward\nand the patch had no activity for more than 1 year, I have changed the\nstatus to RWF. Feel free to add a new CF entry when someone is\nplanning to work on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 1 Feb 2024 23:12:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Asynchronous execution support for Custom Scan"
}
]
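The three phases Onishi lists in the thread above (a request that starts the work, a configure-wait that exposes a file descriptor, a notify that collects the result) can be sketched without any PostgreSQL internals. This is plain POSIX C invented for illustration, not the proposed executor API; only the comments tie each function back to the CustomScanAsyncRequest / CustomScanAsyncConfigureWait / CustomScanAsyncNotify callbacks, and a pipe stands in for whatever descriptor would signal completion.

```c
#include <assert.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/select.h>

struct async_scan
{
    int fds[2];   /* fds[0]: executor waits on this end; fds[1]: producer */
    int result;
};

/* Phase 1 (cf. CustomScanAsyncRequest): kick off the work.  Here the
 * "work" is just writing the value to a pipe, standing in for
 * computation happening elsewhere (another process, a GPU, ...). */
static void
scan_async_request(struct async_scan *scan, int value)
{
    if (pipe(scan->fds) != 0 ||
        write(scan->fds[1], &value, sizeof(value)) != (ssize_t) sizeof(value))
        exit(1);
}

/* Phase 2 (cf. CustomScanAsyncConfigureWait): expose the descriptor the
 * executor's wait loop should watch for completion. */
static int
scan_async_configure_wait(struct async_scan *scan)
{
    return scan->fds[0];
}

/* Phase 3 (cf. CustomScanAsyncNotify): the descriptor became readable;
 * collect the result. */
static int
scan_async_notify(struct async_scan *scan)
{
    if (read(scan->fds[0], &scan->result, sizeof(scan->result)) !=
        (ssize_t) sizeof(scan->result))
        exit(1);
    return scan->result;
}
```

An executor-style loop would select() on the descriptor from phase 2 and run phase 3 when it fires; with two such scans in flight the waits overlap, which is where the Append-level asynchrony shown in the EXPLAIN output above comes from.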
[
{
"msg_contents": "\nFor some time I have been nursing along my old Windows XP instance,\nwhich nowadays only builds release 10, which is due to go to EOL in a\nfew months. The machine has suddenly started having issues with git, and\nI'm not really inclined to spend lots of time fixing it. XP itself is\nnow a very long time past EOL, so I think I'm just going to turn it off.\nThat will be the end for frogmouth, currawong and brolga.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 13 Aug 2022 16:49:18 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Goodbye Windows XP"
},
{
"msg_contents": "\nOn 2022-08-13 Sa 16:49, Andrew Dunstan wrote:\n> For some time I have been nursing along my old Windows XP instance,\n> which nowadays only builds release 10, which is due to go to EOL in a\n> few months. The machine has suddenly started having issues with git, and\n> I'm not really inclined to spend lots of time fixing it. XP itself is\n> now a very long time past EOL, so I think I'm just going to turn it off.\n> That will be the end for frogmouth, currawong and brolga.\n>\n>\n\n\nRight after I posted this it mysteriously started working again, so I\nguess we'll limp on till around December.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 20 Aug 2022 11:38:56 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Goodbye Windows XP"
}
] |
[
{
"msg_contents": "Hi,\n\nCurrently we do not include the dependent extension information for\nindex and materialized view in the describe command. I felt it would\nbe useful to include this information as part of the describe command\nlike:\n\\d+ idx_depends\n Index \"public.idx_depends\"\n Column | Type | Key? | Definition | Storage | Stats target\n--------+---------+------+------------+---------+--------------\n a | integer | yes | a | plain |\nbtree, for table \"public.tbl_idx_depends\"\nDepends:\n \"plpgsql\"\n\nAttached a patch for the same. Thoughts?\n\nRegards,\nVignesh",
"msg_date": "Sun, 14 Aug 2022 08:01:14 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Include the dependent extension information in describe command."
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> Currently we do not include the dependent extension information for\n> index and materialized view in the describe command. I felt it would\n> be useful to include this information as part of the describe command\n> like:\n> \\d+ idx_depends\n> Index \"public.idx_depends\"\n> Column | Type | Key? | Definition | Storage | Stats target\n> --------+---------+------+------------+---------+--------------\n> a | integer | yes | a | plain |\n> btree, for table \"public.tbl_idx_depends\"\n> Depends:\n> \"plpgsql\"\n\n> Attached a patch for the same. Thoughts?\n\nThis seems pretty much useless noise to me. Can you point to\nany previous requests for such a feature? If we did do it,\nwhy would we do it in such a narrow fashion (ie, only dependencies\nof two specific kinds of objects on one other specific kind of\nobject)? Why did you do it in this direction rather than\nthe other one, ie show dependencies when examining the extension?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Aug 2022 01:37:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Include the dependent extension information in describe command."
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > Currently we do not include the dependent extension information for\n> > index and materialized view in the describe command. I felt it would\n> > be useful to include this information as part of the describe command\n> > like:\n> > \\d+ idx_depends\n> > Index \"public.idx_depends\"\n> > Column | Type | Key? | Definition | Storage | Stats target\n> > --------+---------+------+------------+---------+--------------\n> > a | integer | yes | a | plain |\n> > btree, for table \"public.tbl_idx_depends\"\n> > Depends:\n> > \"plpgsql\"\n>\n> > Attached a patch for the same. Thoughts?\n>\n> This seems pretty much useless noise to me. Can you point to\n> any previous requests for such a feature? If we did do it,\n> why would we do it in such a narrow fashion (ie, only dependencies\n> of two specific kinds of objects on one other specific kind of\n> object)? Why did you do it in this direction rather than\n> the other one, ie show dependencies when examining the extension?\n\nWhile implementing logical replication of \"index which depends on\nextension\", I found that this information was not available in any of\nthe \\d describe commands. I felt having this information in the \\d\ndescribe command will be useful in validating the \"depends on\nextension\" easily. Now that you pointed out, I agree that it will be\nbetter to show the dependencies from the extension instead of handling\nit in multiple places. I will change it to handle it from extension\nand post an updated version soon for this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 14 Aug 2022 22:24:42 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include the dependent extension information in describe command."
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 10:24 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Aug 14, 2022 at 11:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > vignesh C <vignesh21@gmail.com> writes:\n> > > Currently we do not include the dependent extension information for\n> > > index and materialized view in the describe command. I felt it would\n> > > be useful to include this information as part of the describe command\n> > > like:\n> > > \\d+ idx_depends\n> > > Index \"public.idx_depends\"\n> > > Column | Type | Key? | Definition | Storage | Stats target\n> > > --------+---------+------+------------+---------+--------------\n> > > a | integer | yes | a | plain |\n> > > btree, for table \"public.tbl_idx_depends\"\n> > > Depends:\n> > > \"plpgsql\"\n> >\n> > > Attached a patch for the same. Thoughts?\n> >\n> > This seems pretty much useless noise to me. Can you point to\n> > any previous requests for such a feature? If we did do it,\n> > why would we do it in such a narrow fashion (ie, only dependencies\n> > of two specific kinds of objects on one other specific kind of\n> > object)? Why did you do it in this direction rather than\n> > the other one, ie show dependencies when examining the extension?\n>\n> While implementing logical replication of \"index which depends on\n> extension\", I found that this information was not available in any of\n> the \\d describe commands. I felt having this information in the \\d\n> describe command will be useful in validating the \"depends on\n> extension\" easily. Now that you pointed out, I agree that it will be\n> better to show the dependencies from the extension instead of handling\n> it in multiple places. I will change it to handle it from extension\n> and post an updated version soon for this.\n\nI have updated the patch to display \"Objects depending on extension\"\nas describe extension footer. The changes for the same are available\nin the v2 version patch attached. Thoughts?\n\nRegards,\nVignesh",
"msg_date": "Mon, 15 Aug 2022 22:09:29 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include the dependent extension information in describe command."
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 10:09:29PM +0530, vignesh C wrote:\n> I have updated the patch to display \"Objects depending on extension\"\n> as describe extension footer. The changes for the same are available\n> in the v2 version patch attached. Thoughts?\n\nI wonder if we would be better off with a backslash command that showed\nthe dependencies of any object.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 11:34:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Include the dependent extension information in describe command."
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 9:04 PM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Mon, Aug 15, 2022 at 10:09:29PM +0530, vignesh C wrote:\n> > I have updated the patch to display \"Objects depending on extension\"\n> > as describe extension footer. The changes for the same are available\n> > in the v2 version patch attached. Thoughts?\n>\n> I wonder if we would be better off with a backslash command that showed\n> the dependencies of any object.\n\nYes, If we have a backslash command which could show the dependencies\nof the specified object could be helpful.\nCan we something like below:\na) Index idx1 depend on table t1\ncreate table t1(c1 int);\ncreate index idx1 on t1(c1);\npostgres=# \\dD idx1\nName\n---------\nidx1\nDepends on:\n table t1\n\nb) Index idx1 depend on table t1 and extension ext1\nalter index idx idx1 depends on extension ext1\npostgres=# \\dD idx1\nName\n---------\nidx1\nDepends on:\n table t1\n extension ext1\n\nc) materialized view mv1 depends on table t1\ncreate materialized view mv1 as select * from t1;\npostgres=# \\dD mv1\nName\n---------\nmv1\nDepends on:\n table t1\n\nIf you are ok with this approach, I can implement a patch on similar\nlines. Thoughts?\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 22 Aug 2022 21:43:33 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Include the dependent extension information in describe command."
}
]
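Independent of how a \dD command or a describe-extension footer might render it, the raw data for "what depends on this extension" already lives in the pg_depend catalog. A hedged sketch (the extension name ext1 is taken from the examples in the thread above; deptype 'x' is the entry recorded by ALTER ... DEPENDS ON EXTENSION):

```sql
-- Objects marked as depending on extension ext1 via
-- ALTER ... DEPENDS ON EXTENSION (deptype 'x').
SELECT pg_describe_object(classid, objid, objsubid) AS object
FROM pg_depend
WHERE refclassid = 'pg_extension'::regclass
  AND refobjid = (SELECT oid FROM pg_extension WHERE extname = 'ext1')
  AND deptype = 'x';
```

Any psql command in this area would essentially wrap a query of this shape, so the discussion is about presentation rather than missing catalog data.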
[
{
"msg_contents": "Hi,\n\nThis patch does a couple of things:\na) Tab completion for \"ALTER TYPE typename SET\" was missing. Added tab\ncompletion for the same. b) Tab completion for \"ALTER TYPE <sth>\nRENAME VALUE\" was not along with tab completion of \"ALTER TYPE\"\ncommands, it was present after \"ALTER GROUP <foo>\", rearranged \"ALTER\nTYPE <sth> RENAME VALUE\", so that it is along with \"ALTER TYPE\"\ncommands.\n\nAttached patch has the changes for the same. Thoughts?\n\nRegards,\nVignesh",
"msg_date": "Sun, 14 Aug 2022 08:25:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Tab completion for \"ALTER TYPE typename SET\" and rearranged \"Alter\n TYPE typename RENAME\""
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 08:25:01AM +0530, vignesh C wrote:\n> Attached patch has the changes for the same. Thoughts?\n>\n> a) Add tab completion for \"ALTER TYPE typename SET\" was missing.\n\nWhy not. I can also note that CREATE TYPE lists all the properties\nthat can be set to a new type. We could bother adding these for ALTER\nTYPE, perhaps?\n\n> b) Tab completion for \"ALTER TYPE <sth> RENAME VALUE\" was not along with tab\n> completion of \"ALTER TYPE\" commands, it was present after \"ALTER GROUP\n> <foo>\", rearranged \"ALTER TYPE <sth> RENAME VALUE\", so that it is along with\n> \"ALTER TYPE\" commands.\n\nYeah, no objections to keep that grouped.\n--\nMichael",
"msg_date": "Sun, 14 Aug 2022 19:11:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for \"ALTER TYPE typename SET\" and rearranged\n \"Alter TYPE typename RENAME\""
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 3:41 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Aug 14, 2022 at 08:25:01AM +0530, vignesh C wrote:\n> > Attached patch has the changes for the same. Thoughts?\n> >\n> > a) Add tab completion for \"ALTER TYPE typename SET\" was missing.\n>\n> Why not. I can also note that CREATE TYPE lists all the properties\n> that can be set to a new type. We could bother adding these for ALTER\n> TYPE, perhaps?\n\nModified the patch to list all the properties in case of \"ALTER TYPE\ntypename SET (\". I have included the properties in alphabetical order\nas I notice that the ordering is in alphabetical order in few cases\nex: \"ALTER SUBSCRIPTION <name> SET (\". The attached v2 patch has the\nchanges for the same. Thoughts?\n\nRegards,\nVignesh",
"msg_date": "Sun, 14 Aug 2022 19:56:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for \"ALTER TYPE typename SET\" and rearranged\n \"Alter TYPE typename RENAME\""
},
{
"msg_contents": "On Sun, Aug 14, 2022 at 07:56:00PM +0530, vignesh C wrote:\n> Modified the patch to list all the properties in case of \"ALTER TYPE\n> typename SET (\". I have included the properties in alphabetical order\n> as I notice that the ordering is in alphabetical order in few cases\n> ex: \"ALTER SUBSCRIPTION <name> SET (\". The attached v2 patch has the\n> changes for the same. Thoughts?\n\nSeems fine here, so applied after tweaking a bit the comments, mostly\nfor consistency with the area.\n--\nMichael",
"msg_date": "Mon, 15 Aug 2022 14:12:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tab completion for \"ALTER TYPE typename SET\" and rearranged\n \"Alter TYPE typename RENAME\""
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 10:42 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Aug 14, 2022 at 07:56:00PM +0530, vignesh C wrote:\n> > Modified the patch to list all the properties in case of \"ALTER TYPE\n> > typename SET (\". I have included the properties in alphabetical order\n> > as I notice that the ordering is in alphabetical order in few cases\n> > ex: \"ALTER SUBSCRIPTION <name> SET (\". The attached v2 patch has the\n> > changes for the same. Thoughts?\n>\n> Seems fine here, so applied after tweaking a bit the comments, mostly\n> for consistency with the area.\n\nThanks for pushing this patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 15 Aug 2022 22:05:58 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Tab completion for \"ALTER TYPE typename SET\" and rearranged\n \"Alter TYPE typename RENAME\""
}
] |
[
{
"msg_contents": "Hi,\n\nI am not sure whom I need to contact but the patches were not pushed to the repos as per release notes.\nhttps://www.postgresql.org/docs/release/\nPostgreSQL: Release Notes<https://www.postgresql.org/docs/release/>\n11th August 2022: PostgreSQL 14.5, 13.8, 12.12, 11.17, 10.22, and 15 Beta 3 Released! Quick Links. Documentation; Manuals. Archive; Release Notes; Books; Tutorials ...\nwww.postgresql.org\nThe latest still shows 1 version back.\n[cid:3d2d83b5-db21-489c-bf73-d9ed43743689]\n\nregards\nSelwyn\n\n \n\nThis email is subject to a disclaimer.\n\nVisit the FNB website and view the email disclaimer and privacy notice by clicking the \"About FNB + Legal\" and \"Legal Matters\" links.\nIf you are unable to access our website, please contact us to send you a copy of the email disclaimer or privacy notice.",
"msg_date": "Sun, 14 Aug 2022 10:29:01 +0000",
"msg_from": "\"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za>",
"msg_from_op": true,
"msg_subject": "latest patches not updated in repos"
},
{
"msg_contents": "\"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za> writes:\n> I am not sure whom I need to contact but the patches were not pushed to the repos as per release notes.\n> https://www.postgresql.org/docs/release/\n> PostgreSQL: Release Notes<https://www.postgresql.org/docs/release/>\n> 11th August 2022: PostgreSQL 14.5, 13.8, 12.12, 11.17, 10.22, and 15 Beta 3 Released! Quick Links. Documentation; Manuals. Archive; Release Notes; Books; Tutorials ...\n\nI see them there. Clear your browser cache, perhaps?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 14 Aug 2022 10:10:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: latest patches not updated in repos"
},
{
"msg_contents": "Hi,\n\nOn Sun, 2022-08-14 at 10:10 -0400, Tom Lane wrote:\n> \"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za> writes:\n> > I am not sure whom I need to contact but the patches were not\n> > pushed to the repos as per release notes.\n> > https://www.postgresql.org/docs/release/\n> > PostgreSQL: Release Notes<https://www.postgresql.org/docs/release/>\n> > 11th August 2022: PostgreSQL 14.5, 13.8, 12.12, 11.17, 10.22, and\n> > 15 Beta 3 Released! Quick Links. Documentation; Manuals. Archive;\n> > Release Notes; Books; Tutorials ...\n> \n> I see them there. Clear your browser cache, perhaps?\n\nActually I just pushed them. Only 13.8 updates were missing, sorry\nabout that.\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR",
"msg_date": "Sun, 14 Aug 2022 15:25:58 +0100",
"msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>",
"msg_from_op": false,
"msg_subject": "Re: latest patches not updated in repos"
},
{
"msg_contents": "Thank you.\n\nregards,\nSelwyn\n________________________________\nFrom: Devrim Gündüz <devrim@gunduz.org>\nSent: Sunday, 14 August 2022 16:25\nTo: Tom Lane <tgl@sss.pgh.pa.us>; Graaff, Selwyn <Selwyn.Graaff@fnb.co.za>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: latest patches not updated in repos - [External Email]\n\nHi,\n\nOn Sun, 2022-08-14 at 10:10 -0400, Tom Lane wrote:\n> \"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za> writes:\n> > I am not sure whom I need to contact but the patches were not\n> > pushed to the repos as per release notes.\n> > https://secure-web.cisco.com/12ONKcon5VEXiyKROUzZpHq3zopWQUOtcwCJpKyvlbCg08NMT_7-cA9ebiUSIh1tEjJ6yA5Rx1Dy2qLOy7DazeVIcz20AcBuG5aiC3u_Cv42McjVdQ1OfHHn16HL6WhYhWpmEa-4AL6v1ZCBVEw80WKNrIBM6vRX2NTvmhBzPnvpv4bJthhgiLDUAJGOJGnYX1YqBxmvuwrRmirg5OvCr5DNwmRGm9Ludl7qPGJ-Z4Fw_bPZC82Fusm0CgYirG_P-WW2PMmr4wc-0xdFutMamEYrKgU5bP50e3r71nR6p1EMVrfRnSS4QB6SB7ztsVHpAnGyWFCYTpVdb4TInedo5wA/https%3A%2F%2Fwww.postgresql.org%2Fdocs%2Frelease%2F\n> > PostgreSQL: Release Notes<https://secure-web.cisco.com/12ONKcon5VEXiyKROUzZpHq3zopWQUOtcwCJpKyvlbCg08NMT_7-cA9ebiUSIh1tEjJ6yA5Rx1Dy2qLOy7DazeVIcz20AcBuG5aiC3u_Cv42McjVdQ1OfHHn16HL6WhYhWpmEa-4AL6v1ZCBVEw80WKNrIBM6vRX2NTvmhBzPnvpv4bJthhgiLDUAJGOJGnYX1YqBxmvuwrRmirg5OvCr5DNwmRGm9Ludl7qPGJ-Z4Fw_bPZC82Fusm0CgYirG_P-WW2PMmr4wc-0xdFutMamEYrKgU5bP50e3r71nR6p1EMVrfRnSS4QB6SB7ztsVHpAnGyWFCYTpVdb4TInedo5wA/https%3A%2F%2Fwww.postgresql.org%2Fdocs%2Frelease%2F>\n> > 11th August 2022: PostgreSQL 14.5, 13.8, 12.12, 11.17, 10.22, and\n> > 15 Beta 3 Released! Quick Links. Documentation; Manuals. Archive;\n> > Release Notes; Books; Tutorials ...\n>\n> I see them there. Clear your browser cache, perhaps?\n\nActually I just pushed them. 
Only 13.8 updates were missing, sorry\nabout that.\n\nRegards,\n\n--\nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n \n\nThis email is subject to a disclaimer.\n\nVisit the FNB website and view the email disclaimer and privacy notice by clicking the \"About FNB + Legal\" and \"Legal Matters\" links.\nIf you are unable to access our website, please contact us to send you a copy of the email disclaimer or privacy notice.\n\n\n\n\n\n\n\n\nThank you.\n\n\n\n\nregards,\n\nSelwyn\n\n\n\nFrom: Devrim Gündüz <devrim@gunduz.org>\nSent: Sunday, 14 August 2022 16:25\nTo: Tom Lane <tgl@sss.pgh.pa.us>; Graaff, Selwyn <Selwyn.Graaff@fnb.co.za>\nCc: pgsql-hackers@lists.postgresql.org <pgsql-hackers@lists.postgresql.org>\nSubject: Re: latest patches not updated in repos - [External Email]\n \n\n\nHi,\n\nOn Sun, 2022-08-14 at 10:10 -0400, Tom Lane wrote:\n> \"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za> writes:\n> > I am not sure whom I need to contact but the patches were not\n> > pushed to the repos as per release notes.\n> > \nhttps://secure-web.cisco.com/12ONKcon5VEXiyKROUzZpHq3zopWQUOtcwCJpKyvlbCg08NMT_7-cA9ebiUSIh1tEjJ6yA5Rx1Dy2qLOy7DazeVIcz20AcBuG5aiC3u_Cv42McjVdQ1OfHHn16HL6WhYhWpmEa-4AL6v1ZCBVEw80WKNrIBM6vRX2NTvmhBzPnvpv4bJthhgiLDUAJGOJGnYX1YqBxmvuwrRmirg5OvCr5DNwmRGm9Ludl7qPGJ-Z4Fw_bPZC82Fusm0CgYirG_P-WW2PMmr4wc-0xdFutMamEYrKgU5bP50e3r71nR6p1EMVrfRnSS4QB6SB7ztsVHpAnGyWFCYTpVdb4TInedo5wA/https%3A%2F%2Fwww.postgresql.org%2Fdocs%2Frelease%2F\n> > PostgreSQL: Release Notes<https://secure-web.cisco.com/12ONKcon5VEXiyKROUzZpHq3zopWQUOtcwCJpKyvlbCg08NMT_7-cA9ebiUSIh1tEjJ6yA5Rx1Dy2qLOy7DazeVIcz20AcBuG5aiC3u_Cv42McjVdQ1OfHHn16HL6WhYhWpmEa-4AL6v1ZCBVEw80WKNrIBM6vRX2NTvmhBzPnvpv4bJthhgiLDUAJGOJGnYX1YqBxmvuwrRmirg5OvCr5DNwmRGm9Ludl7qPGJ-Z4Fw_bPZC82Fusm0CgYirG_P-WW2PMmr4wc-0xdFutMamEYrKgU5bP50e3r71nR6p1EMVrfRnSS4QB6SB7ztsVHpAnGyWFCYTpVdb4TInedo5wA/https%3A%2F%2Fwww.postgresql.org%2Fdocs%2Frelease%2F>\n> > 11th August 
2022: PostgreSQL 14.5, 13.8, 12.12, 11.17, 10.22, and\n> > 15 Beta 3 Released! Quick Links. Documentation; Manuals. Archive;\n> > Release Notes; Books; Tutorials ...\n> \n> I see them there. Clear your browser cache, perhaps?\n\nActually I just pushed them. Only 13.8 updates were missing, sorry\nabout that.\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR\n\n\n This email is subject to a disclaimer.\nVisit the FNB website and view the email disclaimer and privacy notice by clicking \nthe \"About FNB + Legal\" and \"Legal Matters\" links.If you are unable to \naccess our website, please contact us to send you a copy of the email \ndisclaimer or privacy notice.",
"msg_date": "Mon, 15 Aug 2022 06:19:27 +0000",
"msg_from": "\"Graaff, Selwyn\" <Selwyn.Graaff@fnb.co.za>",
"msg_from_op": true,
"msg_subject": "Re: latest patches not updated in repos - [External Email]"
}
] |
[
{
"msg_contents": "Hi,\n\nwhile experimenting with logical messages, I ran into this assert in\nlogicalmsg_desc:\n\n Assert(prefix[xlrec->prefix_size] != '\\0');\n\nThis seems to be incorrect, because LogLogicalMessage does this:\n\n xlrec.prefix_size = strlen(prefix) + 1;\n\nSo prefix_size includes the null byte, so the assert points out at the\nfirst payload byte. And of course, the check should be \"==\" because we\nexpect the byte to be \\0, not the other way around.\n\nIt's pretty simple to make this crash by writing a logical message where\nthe first payload byte is \\0, e.g. like this:\n\n select pg_logical_emit_message(true, 'a'::text, '\\x00'::bytea);\n\nand then running pg_waldump on the WAL segment.\n\nAttached is a patch addressing this. This was added in 14, so we should\nbackpatch to that version.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 14 Aug 2022 18:16:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "bogus assert in logicalmsg_desc"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 1:17 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> while experimenting with logical messages, I ran into this assert in\n> logicalmsg_desc:\n>\n> Assert(prefix[xlrec->prefix_size] != '\\0');\n>\n> This seems to be incorrect, because LogLogicalMessage does this:\n>\n> xlrec.prefix_size = strlen(prefix) + 1;\n>\n> So prefix_size includes the null byte, so the assert points out at the\n> first payload byte. And of course, the check should be \"==\" because we\n> expect the byte to be \\0, not the other way around.\n>\n> It's pretty simple to make this crash by writing a logical message where\n> the first payload byte is \\0, e.g. like this:\n>\n> select pg_logical_emit_message(true, 'a'::text, '\\x00'::bytea);\n>\n> and then running pg_waldump on the WAL segment.\n>\n> Attached is a patch addressing this. This was added in 14, so we should\n> backpatch to that version.\n\n+1\n\nThe patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Mon, 15 Aug 2022 10:13:35 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus assert in logicalmsg_desc"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 12:17 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> So prefix_size includes the null byte, so the assert points out at the\n> first payload byte. And of course, the check should be \"==\" because we\n> expect the byte to be \\0, not the other way around.\n\n\nYes, indeed. There is even a comment emphasizing the trailing null byte\nin LogLogicalMessage.\n\n /* trailing zero is critical; see logicalmsg_desc */\n\n\n\n>\n> Attached is a patch addressing this. This was added in 14, so we should\n> backpatch to that version.\n>\n\n+1 for the patch.\n\nThanks\nRichard\n\nOn Mon, Aug 15, 2022 at 12:17 AM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\nSo prefix_size includes the null byte, so the assert points out at the\nfirst payload byte. And of course, the check should be \"==\" because we\nexpect the byte to be \\0, not the other way around.Yes, indeed. There is even a comment emphasizing the trailing null bytein LogLogicalMessage. /* trailing zero is critical; see logicalmsg_desc */ \n\nAttached is a patch addressing this. This was added in 14, so we should\nbackpatch to that version.+1 for the patch.ThanksRichard",
"msg_date": "Mon, 15 Aug 2022 09:20:28 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: bogus assert in logicalmsg_desc"
}
] |
[
{
"msg_contents": "Hi,\n\nI thought commit 81b9f23c9c8 had my back, but nope, we still need to\nmake CI turn red if \"headerscheck\" and \"cpluspluscheck\" don't like our\npatches (crake in the build farm should be a secondary defence...).\nSee attached.",
"msg_date": "Mon, 15 Aug 2022 17:38:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Header checker scripts should fail on failure"
},
{
"msg_contents": "\nOn 2022-08-15 Mo 01:38, Thomas Munro wrote:\n> Hi,\n>\n> I thought commit 81b9f23c9c8 had my back, but nope, we still need to\n> make CI turn red if \"headerscheck\" and \"cpluspluscheck\" don't like our\n> patches (crake in the build farm should be a secondary defence...).\n> See attached.\n\n\nYeah, the buildfarm module works around that by looking for non-empty\noutput, but this is better,\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 15 Aug 2022 09:54:13 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Header checker scripts should fail on failure"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-15 17:38:21 +1200, Thomas Munro wrote:\n> I thought commit 81b9f23c9c8 had my back, but nope, we still need to\n> make CI turn red if \"headerscheck\" and \"cpluspluscheck\" don't like our\n> patches (crake in the build farm should be a secondary defence...).\n> See attached.\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Aug 2022 08:43:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Header checker scripts should fail on failure"
}
] |
[
{
"msg_contents": "The last use of UNSAFE_STAT_OK was removed in \nbed90759fcbcd72d4d06969eebab81e47326f9a2, but the build system(s) still \nmentions it. Is it safe to remove, or does it interact with system \nheader files in some way that isn't obvious here?",
"msg_date": "Mon, 15 Aug 2022 10:41:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Remove remaining mentions of UNSAFE_STAT_OK"
}
] |
[
{
"msg_contents": "There's a smallish backup tool called pg_backupcluster in Debian's\npostgresql-common which also ships a systemd service that runs\npg_receivewal for wal archiving, and supplies a pg_getwal script for\nreading the files back on restore, including support for .partial\nfiles.\n\nSo far the machinery was using plain files and relied on compressing\nthe WALs from time to time, but now I wanted to compress the files\ndirectly from pg_receivewal --compress=5. Unfortunately this broke the\nregression tests that include a test for the .partial files where\npg_receivewal.service is shut down before the segment is full.\n\nThe problem was that systemd's default KillSignal is SIGTERM, while\npg_receivewal flushes the output compression buffers on SIGINT only.\nThe attached patch makes it do the same for SIGTERM as well. (Most\nplaces in PG that install a SIGINT handler also install a SIGTERM\nhandler already.)\n\nChristoph",
"msg_date": "Mon, 15 Aug 2022 14:45:24 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 15 Aug 2022, at 14:45, Christoph Berg <myon@debian.org> wrote:\n\n> The problem was that systemd's default KillSignal is SIGTERM, while\n> pg_receivewal flushes the output compression buffers on SIGINT only.\n\nSupporting SIGTERM here makes sense, especially given how systemd works.\n\n> The attached patch makes it do the same for SIGTERM as well. (Most\n> places in PG that install a SIGINT handler also install a SIGTERM\n> handler already.)\n\nNot really when it comes to utilities though; initdb, pg_dump and pg_test_fsync\nseems to be the ones doing so. (That's probably mostly due to them not running\nin a daemon-like way as what's discussed here.)\n\nDo you think pg_recvlogical should support SIGTERM as well? (The signals which\nit does trap should be added to the documentation which just now says \"until\nterminated by a signal\" but that's a separate thing.)\n\n \tpqsignal(SIGINT, sigint_handler);\n+\tpqsignal(SIGTERM, sigint_handler);\nTiny nitpick, I think we should rename sigint_handler to just sig_handler as it\ndoes handle more than sigint.\n\nIn relation to this. Reading over this and looking around I realized that the\ndocumentation for pg_waldump lacks a closing parenthesis on Ctrl+C so I will be\npushing the below to fix it:\n\n--- a/doc/src/sgml/ref/pg_waldump.sgml\n+++ b/doc/src/sgml/ref/pg_waldump.sgml\n@@ -263,7 +263,7 @@ PostgreSQL documentation\n <para>\n If <application>pg_waldump</application> is terminated by signal\n <systemitem>SIGINT</systemitem>\n- (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>,\n+ (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>),\n the summary of the statistics computed is displayed up to the\n termination point. This operation is not supported on\n <productname>Windows</productname>.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 12:22:44 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Daniel Gustafsson\n> Do you think pg_recvlogical should support SIGTERM as well? (The signals which\n> it does trap should be added to the documentation which just now says \"until\n> terminated by a signal\" but that's a separate thing.)\n\nAck, that makes sense, added in the attached updated patch.\n\n> \tpqsignal(SIGINT, sigint_handler);\n> +\tpqsignal(SIGTERM, sigint_handler);\n> Tiny nitpick, I think we should rename sigint_handler to just sig_handler as it\n> does handle more than sigint.\n\nI went with sigexit_handler since pg_recvlogical has also a\nsighup_handler and \"sig_handler\" would be confusing there.\n\nChristoph",
"msg_date": "Tue, 16 Aug 2022 13:36:15 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 5:06 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Daniel Gustafsson\n> > Do you think pg_recvlogical should support SIGTERM as well? (The signals which\n> > it does trap should be added to the documentation which just now says \"until\n> > terminated by a signal\" but that's a separate thing.)\n>\n> Ack, that makes sense, added in the attached updated patch.\n>\n> > pqsignal(SIGINT, sigint_handler);\n> > + pqsignal(SIGTERM, sigint_handler);\n> > Tiny nitpick, I think we should rename sigint_handler to just sig_handler as it\n> > does handle more than sigint.\n>\n> I went with sigexit_handler since pg_recvlogical has also a\n> sighup_handler and \"sig_handler\" would be confusing there.\n\nCan we move these signal handlers to streamutil.h/.c so that both\npg_receivewal and pg_recvlogical can make use of it avoiding duplicate\ncode?\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:10:54 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 16 Aug 2022, at 13:40, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> \n> On Tue, Aug 16, 2022 at 5:06 PM Christoph Berg <myon@debian.org> wrote:\n\n>> I went with sigexit_handler since pg_recvlogical has also a\n>> sighup_handler and \"sig_handler\" would be confusing there.\n> \n> Can we move these signal handlers to streamutil.h/.c so that both\n> pg_receivewal and pg_recvlogical can make use of it avoiding duplicate\n> code?\n\nIn general that's a good idea, but they are so trivial that I don't really see\nmuch point in doing that in this particular case.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 13:43:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Daniel Gustafsson\n> In general that's a good idea, but they are so trivial that I don't really see\n> much point in doing that in this particular case.\n\nPlus the variable they set is called differently...\n\nChristoph\n\n\n",
"msg_date": "Tue, 16 Aug 2022 13:44:52 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 16 Aug 2022, at 13:36, Christoph Berg <myon@debian.org> wrote:\n\n>> \tpqsignal(SIGINT, sigint_handler);\n>> +\tpqsignal(SIGTERM, sigint_handler);\n>> Tiny nitpick, I think we should rename sigint_handler to just sig_handler as it\n>> does handle more than sigint.\n> \n> I went with sigexit_handler since pg_recvlogical has also a\n> sighup_handler and \"sig_handler\" would be confusing there.\n\nGood point, sigexit_handler is a better name here.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 13:44:55 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 5:15 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 16 Aug 2022, at 13:36, Christoph Berg <myon@debian.org> wrote:\n>\n> >> pqsignal(SIGINT, sigint_handler);\n> >> + pqsignal(SIGTERM, sigint_handler);\n> >> Tiny nitpick, I think we should rename sigint_handler to just sig_handler as it\n> >> does handle more than sigint.\n> >\n> > I went with sigexit_handler since pg_recvlogical has also a\n> > sighup_handler and \"sig_handler\" would be confusing there.\n>\n> Good point, sigexit_handler is a better name here.\n\n+1.\n\nDon't we need a similar explanation [1] for pg_recvlogical docs?\n\n[1]\n <para>\n In the absence of fatal errors, <application>pg_receivewal</application>\n- will run until terminated by the <systemitem>SIGINT</systemitem> signal\n- (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>).\n+ will run until terminated by the <systemitem>SIGINT</systemitem>\n+ (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>)\n+ or <systemitem>SIGTERM</systemitem> signal.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 17:20:05 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Bharath Rupireddy\n> Don't we need a similar explanation [1] for pg_recvlogical docs?\n> \n> [1]\n> <para>\n> In the absence of fatal errors, <application>pg_receivewal</application>\n> - will run until terminated by the <systemitem>SIGINT</systemitem> signal\n> - (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>).\n> + will run until terminated by the <systemitem>SIGINT</systemitem>\n> + (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>)\n> + or <systemitem>SIGTERM</systemitem> signal.\n\nCoped that from pg_receivewal(1) now.\n\nChristoph",
"msg_date": "Tue, 16 Aug 2022 15:56:45 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 7:26 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Bharath Rupireddy\n> > Don't we need a similar explanation [1] for pg_recvlogical docs?\n> >\n> > [1]\n> > <para>\n> > In the absence of fatal errors, <application>pg_receivewal</application>\n> > - will run until terminated by the <systemitem>SIGINT</systemitem> signal\n> > - (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>).\n> > + will run until terminated by the <systemitem>SIGINT</systemitem>\n> > + (<keycombo action=\"simul\"><keycap>Control</keycap><keycap>C</keycap></keycombo>)\n> > + or <systemitem>SIGTERM</systemitem> signal.\n>\n> Coped that from pg_receivewal(1) now.\n\nThanks.\n\n <application>pg_receivewal</application> will exit with status 0 when\n- terminated by the <systemitem>SIGINT</systemitem> signal. (That is the\n+ terminated by the <systemitem>SIGINT</systemitem> or\n+ <systemitem>SIGTERM</systemitem> signal. (That is the\n normal way to end it. Hence it is not an error.) For fatal errors or\n other signals, the exit status will be nonzero.\n\nCan we specify the reason in the docs why a SIGTERM causes (which\ntypically would cause a program to end with non-zero exit code)\npg_receivewal and pg_recvlogical exit with zero exit code? Having this\nin the commit message would help developers but the documentation will\nhelp users out there.\n\nThoughts?\n\n[1]\npg_receivewal, pg_recvlogical: Exit cleanly on SIGTERM\n\nIn pg_receivewal, compressed output is only flushed on clean exits. The\nreason to support SIGTERM here as well is that pg_receivewal might well\nbe running as a daemon, and systemd's default KillSignal is SIGTERM.\n\nSince pg_recvlogical is also supposed to run as a daemon, teach it about\nSIGTERM as well.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 22:03:59 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Bharath Rupireddy\n> <application>pg_receivewal</application> will exit with status 0 when\n> - terminated by the <systemitem>SIGINT</systemitem> signal. (That is the\n> + terminated by the <systemitem>SIGINT</systemitem> or\n> + <systemitem>SIGTERM</systemitem> signal. (That is the\n> normal way to end it. Hence it is not an error.) For fatal errors or\n> other signals, the exit status will be nonzero.\n> \n> Can we specify the reason in the docs why a SIGTERM causes (which\n> typically would cause a program to end with non-zero exit code)\n> pg_receivewal and pg_recvlogical exit with zero exit code? Having this\n> in the commit message would help developers but the documentation will\n> help users out there.\n\nWe could add \"because you want that if it's running as a daemon\", but\nTBH, I'd rather remove the parentheses part. It sounds too much like\n\"it works that way because that way is the sane way\".\n\nChristoph\n\n\n",
"msg_date": "Fri, 19 Aug 2022 12:54:34 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 4:24 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Bharath Rupireddy\n> > <application>pg_receivewal</application> will exit with status 0 when\n> > - terminated by the <systemitem>SIGINT</systemitem> signal. (That is the\n> > + terminated by the <systemitem>SIGINT</systemitem> or\n> > + <systemitem>SIGTERM</systemitem> signal. (That is the\n> > normal way to end it. Hence it is not an error.) For fatal errors or\n> > other signals, the exit status will be nonzero.\n> >\n> > Can we specify the reason in the docs why a SIGTERM causes (which\n> > typically would cause a program to end with non-zero exit code)\n> > pg_receivewal and pg_recvlogical exit with zero exit code? Having this\n> > in the commit message would help developers but the documentation will\n> > help users out there.\n>\n> We could add \"because you want that if it's running as a daemon\", but\n\n+1 to add \"some\" info in the docs (I'm not sure about the better\nwording though), we can try to be more specific of the use case if\nrequired.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Fri, 19 Aug 2022 17:34:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 05:34:56PM +0530, Bharath Rupireddy wrote:\n> +1 to add \"some\" info in the docs (I'm not sure about the better\n> wording though), we can try to be more specific of the use case if\n> required.\n\nYes, the amount of extra docs provided by the patch proposed by\nChristoph looks fine by me.\n\nFWIW, grouping the signal handlers into a common area like\nstreamutil.c seems rather confusing to me, as they set different\nvariable names that rely on their own assumptions in their local file,\nso I would leave that out, like the patch.\n\nWhile looking at the last patch proposed, it strikes me that\ntime_to_stop should be sig_atomic_t in pg_receivewal.c, as the safe\ntype of variable to set in a signal handler. We could change that,\nwhile on it..\n\nBackpatching this stuff is not an issue here.\n--\nMichael",
"msg_date": "Mon, 22 Aug 2022 09:42:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Michael Paquier\n> While looking at the last patch proposed, it strikes me that\n> time_to_stop should be sig_atomic_t in pg_receivewal.c, as the safe\n> type of variable to set in a signal handler. We could change that,\n> while on it..\n\nDone in the attached patch.\n\n> Backpatching this stuff is not an issue here.\n\nDo you mean it can, or can not be backpatched? (I'd argue for\nbackpatching since the behaviour is slightly broken at the moment.)\n\nChristoph",
"msg_date": "Mon, 22 Aug 2022 16:05:16 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 04:05:16PM +0200, Christoph Berg wrote:\n> Do you mean it can, or can not be backpatched? (I'd argue for\n> backpatching since the behaviour is slightly broken at the moment.)\n\nI mean that it is fine to backpatch that, in my opinion.\n--\nMichael",
"msg_date": "Tue, 23 Aug 2022 09:15:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 23 Aug 2022, at 02:15, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Mon, Aug 22, 2022 at 04:05:16PM +0200, Christoph Berg wrote:\n>> Do you mean it can, or can not be backpatched? (I'd argue for\n>> backpatching since the behaviour is slightly broken at the moment.)\n> \n> I mean that it is fine to backpatch that, in my opinion.\n\nI think this can be argued both for and against backpatching. Catching SIGTERM\nmakes a lot of sense, especially given systemd's behavior. On the other hand,\nThis adds functionality to something arguably working as intended, regardless\nof what one thinks about the intent.\n\nThe attached adds the Exit Status section to pg_recvlogical docs which is\npresent in pg_receivewal to make them more aligned, and tweaks comments to\npgindent standards. This is the version I think is ready to commit.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Thu, 25 Aug 2022 11:19:05 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 11:19:05AM +0200, Daniel Gustafsson wrote:\n> I think this can be argued both for and against backpatching. Catching SIGTERM\n> makes a lot of sense, especially given systemd's behavior. On the other hand,\n> This adds functionality to something arguably working as intended, regardless\n> of what one thinks about the intent.\n\nSure. My view on this matter is that the behavior of the patch is\nmore useful to users as, on HEAD, a SIGTERM is equivalent to a drop of\nthe connection followed by a retry when not using -n. Or do you think\nthat there could be cases where the behavior of HEAD (force a\nconnection drop with the backend and handle the retry infinitely in\npg_receivewal/recvlogical) is more useful? systemd can also do\nretries a certain given of times, so that's moving the ball one layer\nto the other, at the end. We could also say to just set KillSignal to\nSIGINT in the docs, but my guess is that few users would actually\nnotice that until they see how pg_receiwal/recvlogical work with\nsystemd's default.\n\nFWIW, I've worked on an archiver integration a few years ago and got\nannoyed that we use SIGINT while SIGTERM was the default (systemd was\nnot directly used there but the signal problem was the same, so we had\nto go through some loops to make the stop signal configurable, like\nsystemd).\n--\nMichael",
"msg_date": "Thu, 25 Aug 2022 20:04:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Michael Paquier\n> FWIW, I've worked on an archiver integration a few years ago and got\n> annoyed that we use SIGINT while SIGTERM was the default (systemd was\n> not directly used there but the signal problem was the same, so we had\n> to go through some loops to make the stop signal configurable, like\n> systemd).\n\nSIGTERM is really the default for any init system or run-a-daemon system.\n\nChristoph\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:13:06 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 5:13 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Michael Paquier\n> > FWIW, I've worked on an archiver integration a few years ago and got\n> > annoyed that we use SIGINT while SIGTERM was the default (systemd was\n> > not directly used there but the signal problem was the same, so we had\n> > to go through some loops to make the stop signal configurable, like\n> > systemd).\n>\n> SIGTERM is really the default for any init system or run-a-daemon system.\n\nIt is, but there is also precedent for not using it for graceful\nshutdown. Apache, for example, will do what we do today on SIGTERM and\nyou use SIGWINCH to make it shut down gracefully (which would be the\nequivalent of us flushing the compression buffers, I'd say).\n\nI'm not saying we shouldn't change -- I fully approve of making the\nchange. But the world is full of fairly prominent examples of the\nother way as well.\n\nI'm leaning towards considering it a feature-change and thus not\nsomething to backpatch (I'd be OK sneaking it into 15 though, as that\none is not released yet and it feels like a perfectly *safe* change).\nNot enough to insist on it, but it seems \"slightly more correct\".\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Thu, 25 Aug 2022 20:45:05 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 08:45:05PM +0200, Magnus Hagander wrote:\n> I'm leaning towards considering it a feature-change and thus not\n> something to backpatch (I'd be OK sneaking it into 15 though, as that\n> one is not released yet and it feels like a perfectly *safe* change).\n> Not enough to insist on it, but it seems \"slightly more correct\".\n\nFine by me if both you and Daniel want to be more careful with this\nchange. We could always argue about a backpatch later if there is\nmore ask for it, as well.\n--\nMichael",
"msg_date": "Fri, 26 Aug 2022 09:51:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "Re: Daniel Gustafsson\n> The attached adds the Exit Status section to pg_recvlogical docs which is\n> present in pg_receivewal to make them more aligned, and tweaks comments to\n> pgindent standards. This is the version I think is ready to commit.\n\nLooks good to me.\n\nThanks,\nChristoph\n\n\n",
"msg_date": "Fri, 26 Aug 2022 10:52:36 +0200",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 09:51:26AM +0900, Michael Paquier wrote:\n> Fine by me if both you and Daniel want to be more careful with this\n> change. We could always argue about a backpatch later if there is\n> more ask for it, as well.\n\nDaniel, are you planning to apply this one on HEAD?\n--\nMichael",
"msg_date": "Fri, 2 Sep 2022 17:00:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 2 Sep 2022, at 10:00, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Aug 26, 2022 at 09:51:26AM +0900, Michael Paquier wrote:\n>> Fine by me if both you and Daniel want to be more careful with this\n>> change. We could always argue about a backpatch later if there is\n>> more ask for it, as well.\n> \n> Daniel, are you planning to apply this one on HEAD?\n\nYes, it's on my TODO for this CF.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 2 Sep 2022 10:01:31 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
},
{
"msg_contents": "> On 2 Sep 2022, at 10:00, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Fri, Aug 26, 2022 at 09:51:26AM +0900, Michael Paquier wrote:\n>> Fine by me if both you and Daniel want to be more careful with this\n>> change. We could always argue about a backpatch later if there is\n>> more ask for it, as well.\n> \n> Daniel, are you planning to apply this one on HEAD?\n\nI had another look over this and pushed it.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 16:37:58 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_receivewal and SIGTERM"
}
] |
[
{
"msg_contents": "Hi,\n\nI ran this test.\n\nDROP TABLE IF EXISTS long_json_as_text;\nCREATE TABLE long_json_as_text AS\nwith long as (\nselect repeat(description, 11)\nfrom pg_description\n)\nselect (select json_agg(row_to_json(long))::text as t from long) from\ngenerate_series(1, 100);\nVACUUM FREEZE long_json_as_text;\n\nselect 1 from long_json_as_text where t::json is null;\n\nhead:\nTime: 161,741ms\n\nv5:\nTime: 270,298 ms\n\nubuntu 64 bits\ngcc 9.4.0\n\nAm I missing something?\n\nregards,\nRanier Vilela\n\nHi,I ran this test.DROP TABLE IF EXISTS long_json_as_text;CREATE TABLE long_json_as_text ASwith long as ( select repeat(description, 11) from pg_description)select (select json_agg(row_to_json(long))::text as t from long) fromgenerate_series(1, 100);VACUUM FREEZE long_json_as_text;\nselect 1 from long_json_as_text where t::json is null;head:Time: 161,741msv5:Time: 270,298 msubuntu 64 bitsgcc 9.4.0Am I missing something?regards,Ranier Vilela",
"msg_date": "Mon, 15 Aug 2022 15:34:31 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying"
},
{
"msg_contents": "Em seg., 15 de ago. de 2022 às 15:34, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n>\n> I ran this test.\n>\n> DROP TABLE IF EXISTS long_json_as_text;\n> CREATE TABLE long_json_as_text AS\n> with long as (\n> select repeat(description, 11)\n> from pg_description\n> )\n> select (select json_agg(row_to_json(long))::text as t from long) from\n> generate_series(1, 100);\n> VACUUM FREEZE long_json_as_text;\n>\n> select 1 from long_json_as_text where t::json is null;\n>\n> head:\n> Time: 161,741ms\n>\n> v5:\n> Time: 270,298 ms\n>\nSorry too fast, 270,298ms with native memchr.\n\nv5\nTime: 208,689 ms\n\nregards,\nRanier Vilela\n\nEm seg., 15 de ago. de 2022 às 15:34, Ranier Vilela <ranier.vf@gmail.com> escreveu:Hi,I ran this test.DROP TABLE IF EXISTS long_json_as_text;CREATE TABLE long_json_as_text ASwith long as ( select repeat(description, 11) from pg_description)select (select json_agg(row_to_json(long))::text as t from long) fromgenerate_series(1, 100);VACUUM FREEZE long_json_as_text;\nselect 1 from long_json_as_text where t::json is null;head:Time: 161,741msv5:Time: 270,298 msSorry too fast, 270,298ms with native memchr.v5Time: 208,689 msregards,Ranier Vilela",
"msg_date": "Mon, 15 Aug 2022 15:38:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Optimize json_lex_string by batching character copying"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAs Greg Stark noted elsewhere [0], it is presently very difficult to\nidentify the PID of the session using a temporary schema, which is\nparticularly unfortunate when a temporary table is putting a cluster in\ndanger of transaction ID wraparound. I noted [1] that the following query\ncan be used to identify the PID for a given backend ID:\n\n\tSELECT bid, pg_stat_get_backend_pid(bid) AS pid FROM pg_stat_get_backend_idset() bid;\n\nBut on closer inspection, this is just plain wrong. The backend IDs\nreturned by pg_stat_get_backend_idset() might initially bear some\nresemblance to the backend IDs stored in PGPROC, so my suggested query\nmight work some of the time, but the pg_stat_get_backend_* backend IDs\ntypically diverge from the PGPROC backend IDs as sessions connect and\ndisconnect.\n\nI think it would be nice to have a reliable way to discover the PID for a\ngiven temporary schema via SQL. The other thread [2] introduces a helpful\nlog message that indicates the PID for temporary tables that are in danger\nof causing transaction ID wraparound, and I intend for this proposal to be\ncomplementary to that work.\n\nAt first, I thought about adding a new function for retrieving the PGPROC\nbackend IDs, but I am worried that having two sets of backend IDs would be\neven more confusing than having one set that can't reliably be used for\ntemporary schemas. Instead, I tried adjusting the pg_stat_get_backend_*()\nsuite of functions to use the PGPROC backend IDs. This ended up being\nsimpler than anticipated. I added a backend_id field to the\nLocalPgBackendStatus struct (which is populated within\npgstat_read_current_status()), and I changed pgstat_fetch_stat_beentry() to\nbsearch() for the entry with the given PGPROC backend ID.\n\nThis does result in a small behavior change. Currently,\npg_stat_get_backend_idset() simply returns a range of numbers (1 to the\nnumber of active backends). 
With the attached patch, this function will\nstill return a set of numbers, but there might be gaps between the IDs, and\nthe maximum backend ID will usually be greater than the number of active\nbackends. I suppose this might break some existing uses, but I'm not sure\nhow much we should worry about that. IMO uniting the backend IDs is a net\nimprovement.\n\nThoughts?\n\n[0] https://postgr.es/m/CAM-w4HPCOuJDs4fdkgNdA8FFMeYMULPCAxjPpsOgvCO24KOAVg%40mail.gmail.com\n[1] https://postgr.es/m/DDF0D1BC-261D-45C2-961C-5CBDBB41EE71%40amazon.com\n[2] https://commitfest.postgresql.org/39/3358/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 15 Aug 2022 13:58:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On 8/15/22 1:58 PM, Nathan Bossart wrote:\n> Hi hackers,\n> \n> As Greg Stark noted elsewhere [0], it is presently very difficult to\n> identify the PID of the session using a temporary schema, which is\n> particularly unfortunate when a temporary table is putting a cluster in\n> danger of transaction ID wraparound. I noted [1] that the following query\n> can be used to identify the PID for a given backend ID:\n> \n> \tSELECT bid, pg_stat_get_backend_pid(bid) AS pid FROM pg_stat_get_backend_idset() bid;\n> \n> But on closer inspection, this is just plain wrong. The backend IDs\n> returned by pg_stat_get_backend_idset() might initially bear some\n> resemblance to the backend IDs stored in PGPROC, so my suggested query\n> might work some of the time, but the pg_stat_get_backend_* backend IDs\n> typically diverge from the PGPROC backend IDs as sessions connect and\n> disconnect.\n\nI didn't review the patch itself yet, but I'd like to chime in with a\nbig \"+1\" for the idea. I've had several past experiences getting called\nto help in situations where a database was getting close to wraparound\nand the culprit was a temp table blocking vacuum. I went down this same\ntrail of pg_stat_get_backend_idset() and I can attest that it did work\nonce or twice, but it didn't work other times.\n\nAFAIK, in PostgreSQL today, there's really no way to reliably get the\nPID of the session holding particular temp tables. (The idea of\niterating through backends with gdb and trying to find & dump some\nobscure data structure seems completely impractical for regular\nproduction ops.)\n\nI'll take a look at the patch if I can... and I'm hopeful that we're\nable to move this idea forward and get this little gap in PG filled once\nand for all!\n\n-Jeremy\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n",
"msg_date": "Mon, 15 Aug 2022 14:47:25 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 02:47:25PM -0700, Jeremy Schneider wrote:\n> I'll take a look at the patch if I can... and I'm hopeful that we're\n> able to move this idea forward and get this little gap in PG filled once\n> and for all!\n\nThanks!\n\nI noticed that the \"result\" variable in pg_stat_get_backend_idset() is kind\nof pointless after my patch is applied, so here is a v2 with it removed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 16 Aug 2022 16:04:23 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "At Tue, 16 Aug 2022 16:04:23 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Mon, Aug 15, 2022 at 02:47:25PM -0700, Jeremy Schneider wrote:\n> > I'll take a look at the patch if I can... and I'm hopeful that we're\n> > able to move this idea forward and get this little gap in PG filled once\n> > and for all!\n> \n> Thanks!\n> \n> I noticed that the \"result\" variable in pg_stat_get_backend_idset() is kind\n> of pointless after my patch is applied, so here is a v2 with it removed.\n\nIt seems to be a sensible way to expose the PGPROC backend ids to SQL\ninterface. It inserts bsearch into relatively frequently-called\nfunction but (I believe) that doesn't seem to matter much (comparing\nto, for example, the size of id->position translation table).\n\nI don't think pgstat_fetch_stat_beentry needs to check for\nout-of-range ids anymore. That case is a kind of rare and bsearch\nproperly handles it.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:19:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "Having this function would be great (I admit I never responded because\nI never figured out if your suggestion was right or not:). But should\nit also be added to the pg_stat_activity view? Perhaps even just in\nthe SQL view using the function.\n\nAlternately should pg_stat_activity show the actual temp schema name\ninstead of the id? I don't recall if it's visible outside the backend\nbut if it is, could pg_stat_activity show whether the temp schema is\nactually attached or not?\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:29:05 +0100",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 05:29, Greg Stark <stark@mit.edu> wrote:\n\n> Having this function would be great (I admit I never responded because\n> I never figured out if your suggestion was right or not:). But should\n> it also be added to the pg_stat_activity view? Perhaps even just in\n> the SQL view using the function.\n>\n> Alternately should pg_stat_activity show the actual temp schema name\n> instead of the id? I don't recall if it's visible outside the backend\n> but if it is, could pg_stat_activity show whether the temp schema is\n> actually attached or not?\n>\n\nWould it work to cast the schema oid to type regnamespace? Then the actual\ndata (numeric oid) would be present in the view, but it would display as\ntext.\n\nOn Tue, 23 Aug 2022 at 05:29, Greg Stark <stark@mit.edu> wrote:Having this function would be great (I admit I never responded because\nI never figured out if your suggestion was right or not:). But should\nit also be added to the pg_stat_activity view? Perhaps even just in\nthe SQL view using the function.\n\nAlternately should pg_stat_activity show the actual temp schema name\ninstead of the id? I don't recall if it's visible outside the backend\nbut if it is, could pg_stat_activity show whether the temp schema is\nactually attached or not?Would it work to cast the schema oid to type regnamespace? Then the actual data (numeric oid) would be present in the view, but it would display as text.",
"msg_date": "Tue, 23 Aug 2022 07:30:57 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:29:05AM +0100, Greg Stark wrote:\n> Having this function would be great (I admit I never responded because\n> I never figured out if your suggestion was right or not:). But should\n> it also be added to the pg_stat_activity view? Perhaps even just in\n> the SQL view using the function.\n> \n> Alternately should pg_stat_activity show the actual temp schema name\n> instead of the id? I don't recall if it's visible outside the backend\n> but if it is, could pg_stat_activity show whether the temp schema is\n> actually attached or not?\n\nI'm open to adding the backend ID or the temp schema name to\npg_stat_activity, but I wouldn't be surprised to learn that others aren't.\nIt'd be great to hear a few more opinions on the idea before I spend too\nmuch time on the patches. IMO we should still adjust the\npg_stat_get_backend_*() functions even if we do end up adjusting\npg_stat_activity.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 16:07:57 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Tue, Aug 23, 2022 at 10:29:05AM +0100, Greg Stark wrote:\n>> Alternately should pg_stat_activity show the actual temp schema name\n>> instead of the id? I don't recall if it's visible outside the backend\n>> but if it is, could pg_stat_activity show whether the temp schema is\n>> actually attached or not?\n\n> I'm open to adding the backend ID or the temp schema name to\n> pg_stat_activity, but I wouldn't be surprised to learn that others aren't.\n\nFWIW, I'd vote against adding the temp schema per se. We can see from\noutside whether the corresponding temp schema exists, but we can't readily\ntell whether the session has decided to use it, so attributing it to the\nsession is a bit dangerous. Maybe there is an argument for having\nsessions report it to pgstats when they do adopt a temp schema, but I\nthink there'd be race conditions, rollback after error, and other issues\nto contend with there.\n\nThe proposed patch seems like an independent first step in any case.\n\nOne thing I don't like about it documentation-wise is that it leaves\nthe concept of backend ID pretty much completely undefined.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Sep 2022 13:41:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 01:41:38PM -0400, Tom Lane wrote:\n> One thing I don't like about it documentation-wise is that it leaves\n> the concept of backend ID pretty much completely undefined.\n\nHow specific do you think this definition ought to be? All I've come up\nwith so far is \"internal identifier for the backend that is independent\nfrom its PID,\" which is what I used in the attached patch. Do we want to\nmention its uses in more detail (e.g., temp schema name), or should we keep\nit vague?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Sep 2022 09:08:22 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sat, Sep 24, 2022 at 01:41:38PM -0400, Tom Lane wrote:\n>> One thing I don't like about it documentation-wise is that it leaves\n>> the concept of backend ID pretty much completely undefined.\n\n> How specific do you think this definition ought to be?\n\nFairly specific, I think, so that people can reason about how it behaves.\nNotably, it seems absolutely critical to be clear that the IDs recycle\nover short time frames. Maybe like\n\n These access functions use the session's backend ID number, which is\n a small integer that is distinct from the backend ID of any concurrent\n session, although an ID can be recycled as soon as the session exits.\n The backend ID is used, among other things, to identify the session's\n temporary schema if it has one.\n\nI'd prefer to use the terminology \"session\" than \"backend\" in the\ndefinition. I suppose we can't get away with actually calling it\na \"session ID\" given that \"backend ID\" is used in so many places;\nbut I think people have a clearer handle on what a session is.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Sep 2022 15:50:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 03:50:09PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Sat, Sep 24, 2022 at 01:41:38PM -0400, Tom Lane wrote:\n>>> One thing I don't like about it documentation-wise is that it leaves\n>>> the concept of backend ID pretty much completely undefined.\n> \n>> How specific do you think this definition ought to be?\n> \n> Fairly specific, I think, so that people can reason about how it behaves.\n> Notably, it seems absolutely critical to be clear that the IDs recycle\n> over short time frames. Maybe like\n> \n> These access functions use the session's backend ID number, which is\n> a small integer that is distinct from the backend ID of any concurrent\n> session, although an ID can be recycled as soon as the session exits.\n> The backend ID is used, among other things, to identify the session's\n> temporary schema if it has one.\n> \n> I'd prefer to use the terminology \"session\" than \"backend\" in the\n> definition. I suppose we can't get away with actually calling it\n> a \"session ID\" given that \"backend ID\" is used in so many places;\n> but I think people have a clearer handle on what a session is.\n\nThanks for the suggestion. I used it in v4 of the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Sep 2022 13:11:13 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Thanks for the suggestion. I used it in v4 of the patch.\n\nI reviewed this and made some changes, some cosmetic some less so.\n\nNotably, I was bemused that of the four calls of\npgstat_fetch_stat_local_beentry, three tested for a NULL result even\nthough they cannot get one, while the fourth (pg_stat_get_backend_idset)\n*is* at hazard of a NULL result but lacked a check. I changed\npg_stat_get_backend_idset so that it too cannot get a NULL, and deleted\nthe dead code from the other callers.\n\nA point that still bothers me a bit about pg_stat_get_backend_idset is\nthat it could miss or duplicate some backend IDs if the user calls\npg_stat_clear_snapshot() partway through the SRF's run, and we reload\na different set of backend entries than we had before. I added a comment\nabout that, with an argument why it's not worth working harder, but\nis the argument convincing? If not, what should we do?\n\nAlso, I realized that the functions we're changing here are mostly\nnot exercised in the current regression tests :-(. So I added a\nsmall test case.\n\nI think this is probably committable if you agree with my changes.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 28 Sep 2022 18:56:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 06:56:20PM -0400, Tom Lane wrote:\n> I reviewed this and made some changes, some cosmetic some less so.\n\nThanks for the detailed review.\n\n> A point that still bothers me a bit about pg_stat_get_backend_idset is\n> that it could miss or duplicate some backend IDs if the user calls\n> pg_stat_clear_snapshot() partway through the SRF's run, and we reload\n> a different set of backend entries than we had before. I added a comment\n> about that, with an argument why it's not worth working harder, but\n> is the argument convincing? If not, what should we do?\n\nIsn't this an existing problem? Granted, it'd manifest differently with\nthis patch, but ISTM we could already end up with too many or too few\nbackend IDs if there's a refresh partway through. I don't know if there's\nan easy way to avoid mismatches altogether besides perhaps ERROR-ing if\nthere's a concurrent refresh.\n\n> -\tif (beid < 1 || beid > localNumBackends)\n> -\t\treturn NULL;\n\nThe reason I'd kept this in was because I was worried about overflow in the\ncomparator function. Upon further inspection, I don't think there's\nactually any way that will produce incorrect results. And I'm not sure we\nshould worry about such cases too much, anyway.\n\nOverall, LGTM.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 19:40:38 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Wed, Sep 28, 2022 at 06:56:20PM -0400, Tom Lane wrote:\n>> A point that still bothers me a bit about pg_stat_get_backend_idset is\n>> that it could miss or duplicate some backend IDs if the user calls\n>> pg_stat_clear_snapshot() partway through the SRF's run, and we reload\n>> a different set of backend entries than we had before. I added a comment\n>> about that, with an argument why it's not worth working harder, but\n>> is the argument convincing? If not, what should we do?\n\n> Isn't this an existing problem? Granted, it'd manifest differently with\n> this patch, but ISTM we could already end up with too many or too few\n> backend IDs if there's a refresh partway through.\n\nRight. I'd been thinking the current code wouldn't generate duplicate IDs\n--- but it might produce duplicate query output anyway, in case a given\nbackend's entry is later in the array than it was before. So really\nthere's not much guarantees here in any case.\n\n>> -\tif (beid < 1 || beid > localNumBackends)\n>> -\t\treturn NULL;\n\n> The reason I'd kept this in was because I was worried about overflow in the\n> comparator function. Upon further inspection, I don't think there's\n> actually any way that will produce incorrect results. And I'm not sure we\n> should worry about such cases too much, anyway.\n\nAh, I see: if the user passes in a \"backend ID\" that is close to INT_MIN,\nthen the comparator's subtraction could overflow. We could fix that by\nwriting out the comparator code longhand (\"if (a < b) return -1;\" etc),\nbut I don't really think it's necessary. bsearch is guaranteed to\ncorrectly report that such a key is not present, even if it takes a\nstrange search path through the array due to inconsistent comparator\nresults. So the test quoted above just serves to fail a bit more quickly,\nbut we certainly shouldn't be optimizing for the case of a bad ID.\n\n> Overall, LGTM.\n\nOK. 
Will push shortly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Sep 2022 10:47:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
},
{
"msg_contents": "On Thu, Sep 29, 2022 at 10:47:06AM -0400, Tom Lane wrote:\n> OK. Will push shortly.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 29 Sep 2022 09:23:32 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: identifying the backend that owns a temporary schema"
}
] |
[
{
"msg_contents": "I happened to notice the following code in\nsrc/backend/commands/statscmds.c, CreateStatistics:\n\n======\n/*\n* Parse the statistics kinds.\n*\n* First check that if this is the case with a single expression, there\n* are no statistics kinds specified (we don't allow that for the simple\n* CREATE STATISTICS form).\n*/\nif ((list_length(stmt->exprs) == 1) && (list_length(stxexprs) == 1))\n{\n/* statistics kinds not specified */\nif (list_length(stmt->stat_types) > 0)\nereport(ERROR,\n(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\nerrmsg(\"when building statistics on a single expression, statistics\nkinds may not be specified\")));\n}\n======\n\n\nAFAICT that one-line comment (/* statistics kinds not specified */) is\nwrong because at that point we don't yet know if kinds are specified\nor not.\n\nSUGGESTION-1\nChange the comment to /* Check there are no statistics kinds specified */\n\nSUGGESTION-2\nSimply remove that one-line comment because the larger comment seems\nto be saying the same thing anyhow.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:46:50 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Wrong comment in statscmds.c/CreateStatistics?"
},
{
"msg_contents": "Yeah, the comments are kind of confusing, see some comments inline.\n\nOn Tue, Aug 16, 2022 at 8:47 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I happened to notice the following code in\n> src/backend/commands/statscmds.c, CreateStatistics:\n>\n> ======\n> /*\n> * Parse the statistics kinds.\n> *\n> * First check that if this is the case with a single expression, there\n> * are no statistics kinds specified (we don't allow that for the simple\n\nmaybe change to *there should be no* is better?\n\n> * CREATE STATISTICS form).\n> */\n> if ((list_length(stmt->exprs) == 1) && (list_length(stxexprs) == 1))\n> {\n> /* statistics kinds not specified */\n\nremove this line or change to *statistics kinds should not be specified*,\nI prefer just removing it.\n\n> if (list_length(stmt->stat_types) > 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"when building statistics on a single expression, statistics\n> kinds may not be specified\")));\n\nchange *may* to *should*?\n\n> }\n> ======\n>\n>\n> AFAICT that one-line comment (/* statistics kinds not specified */) is\n> wrong because at that point we don't yet know if kinds are specified\n> or not.\n>\n> SUGGESTION-1\n> Change the comment to /* Check there are no statistics kinds specified */\n>\n> SUGGESTION-2\n> Simply remove that one-line comment because the larger comment seems\n> to be saying the same thing anyhow.\n>\n> Thoughts?\n>\n> ------\n> Kind Regards,\n> Peter Smith.\n> Fujitsu Australia\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:27:38 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Wrong comment in statscmds.c/CreateStatistics?"
}
] |
[
{
"msg_contents": "During a recent code review I was going to suggest that some new code\nwould be more readable if the following:\nif (list_length(alist) == 0) ...\n\nwas replaced with:\nif (list_is_empty(alist)) ...\n\nbut then I found that actually no such function exists.\n\n~~~\n\nSearching the PG source found many cases using all kinds of\ninconsistent ways to test for empty Lists:\ne.g.1 if (list_length(alist) > 0)\ne.g.2 if (list_length(alist) == 0)\ne.g.3 if (list_length(alist) != 0)\ne.g.4 if (list_length(alist) >= 1)\ne.g.5 if (list_length(alist) < 1)\n\nOf course, all of them work OK as-is, but by using list_is_empty all\nthose can be made consistent and often also more readable as to the\ncode intent.\n\nPatch 0001 adds a new function 'list_is_empty'.\nPatch 0002 makes use of it.\n\nThoughts?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 16 Aug 2022 11:19:47 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Propose a new function - list_is_empty"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> During a recent code review I was going to suggest that some new code\n> would be more readable if the following:\n> if (list_length(alist) == 0) ...\n\n> was replaced with:\n> if (list_is_empty(alist)) ...\n\n> but then I found that actually no such function exists.\n\nThat's because the *correct* way to write it is either \"alist == NIL\"\nor just \"!alist\". I don't think we need yet another way to spell\nthat, and I'm entirely not on board with replacing either of those\nidioms. But if you want to get rid of overcomplicated uses of\nlist_length() in favor of one of those spellings, have at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 15 Aug 2022 21:27:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > During a recent code review I was going to suggest that some new code\n> > would be more readable if the following:\n> > if (list_length(alist) == 0) ...\n>\n> > was replaced with:\n> > if (list_is_empty(alist)) ...\n>\n> > but then I found that actually no such function exists.\n>\n> That's because the *correct* way to write it is either \"alist == NIL\"\n> or just \"!alist\". I don't think we need yet another way to spell\n> that, and I'm entirely not on board with replacing either of those\n> idioms. But if you want to get rid of overcomplicated uses of\n> list_length() in favor of one of those spellings, have at it.\n>\n\nThanks for your advice.\n\nYes, I saw that NIL is the definition of an empty list - that's how I\nimplemented list_is_empty.\n\nOK, I'll ditch the function idea and just look at de-complicating\nthose existing empty List checks.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 16 Aug 2022 11:39:25 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > During a recent code review I was going to suggest that some new code\n> > would be more readable if the following:\n> > if (list_length(alist) == 0) ...\n>\n> > was replaced with:\n> > if (list_is_empty(alist)) ...\n>\n> > but then I found that actually no such function exists.\n>\n> That's because the *correct* way to write it is either \"alist == NIL\"\n> or just \"!alist\". I don't think we need yet another way to spell\n> that, and I'm entirely not on board with replacing either of those\n> idioms. But if you want to get rid of overcomplicated uses of\n> list_length() in favor of one of those spellings, have at it.\n\nDone, and tested OK with make check-world.\n\nPSA.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.",
"msg_date": "Tue, 16 Aug 2022 15:29:29 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "> On 16 Aug 2022, at 07:29, Peter Smith <smithpb2250@gmail.com> wrote:\n> On Tue, Aug 16, 2022 at 11:27 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>> if you want to get rid of overcomplicated uses of\n>> list_length() in favor of one of those spellings, have at it.\n> \n> Done, and tested OK with make check-world.\n\nI think these are nice cleanups to simplify and streamline the code, just a few\nsmall comments from reading the patch:\n\n \t/* If no subcommands, don't collect */\n-\tif (list_length(currentEventTriggerState->currentCommand->d.alterTable.subcmds) != 0)\n+\tif (currentEventTriggerState->currentCommand->d.alterTable.subcmds)\nHere the current coding gives context about the data structure used for the\nsubcmds member which is now lost. I don't mind the change but rewording the\ncomment above to indicate that subcmds is a list would be good IMHO.\n\n\n-\tbuild_expressions = (list_length(stxexprs) > 0);\n+\tbuild_expressions = stxexprs != NIL;\nMight be personal taste, but I think the parenthesis should be kept here as a\nvisual aid for the reader.\n\n\n-\tAssert(list_length(publications) > 0);\n+\tAssert(publications);\nThe more common (and clearer IMO) pattern would be Assert(publications != NIL);\nI think. The same applies for a few hunks in the patch.\n\n\n-\tAssert(clauses != NIL);\n-\tAssert(list_length(clauses) >= 1);\n+\tAssert(clauses);\nJust removing the list_length() assertion would be enough here.\n\n\nmakeIndexArray() in jsonpath_gram.y has another Assert(list_length(list) > 0);\nconstruction as well. The other I found is in create_groupingsets_path() but\nthere I think it makes sense to keep the current coding based on the assertion\njust prior to it being very similar and requiring list_length().\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:39:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I think these are nice cleanups to simplify and streamline the code, just a few\n> small comments from reading the patch:\n\n> \t/* If no subcommands, don't collect */\n> -\tif (list_length(currentEventTriggerState->currentCommand->d.alterTable.subcmds) != 0)\n> +\tif (currentEventTriggerState->currentCommand->d.alterTable.subcmds)\n> Here the current coding gives context about the data structure used for the\n> subcmds member which is now lost. I don't mind the change but rewording the\n> comment above to indicate that subcmds is a list would be good IMHO.\n\nI think testing for equality to NIL is better where that's a concern.\n\n> Might be personal taste, but I think the parenthesis should be kept here as a\n> visual aid for the reader.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 09:37:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "On Mon, Aug 15, 2022 at 9:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > During a recent code review I was going to suggest that some new code\n> > would be more readable if the following:\n> > if (list_length(alist) == 0) ...\n>\n> > was replaced with:\n> > if (list_is_empty(alist)) ...\n>\n> > but then I found that actually no such function exists.\n>\n> That's because the *correct* way to write it is either \"alist == NIL\"\n> or just \"!alist\".\n\nI think the alist == NIL (or alist != NIL) style often makes the code\neasier to read. I recommend we standardize on that one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 09:47:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Aug 15, 2022 at 9:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That's because the *correct* way to write it is either \"alist == NIL\"\n>> or just \"!alist\".\n\n> I think the alist == NIL (or alist != NIL) style often makes the code\n> easier to read. I recommend we standardize on that one.\n\nI have a general preference for comparing to NIL because (as Daniel\nnoted nearby) it reminds you of what data type you're dealing with.\nHowever, I'm not up for trying to forbid the bare-boolean-test style\naltogether. It'd be near impossible to find all the instances;\nbesides which we don't insist that other pointer checks be written\nas explicit comparisons to NULL --- we do whichever of those seems\nclearest in context. So I'm happy for this patch to leave either\nof those existing usages alone. I agree though that while simplifying\nlist_length() calls, I'd lean to using explicit comparisons to NIL.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:03:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "> On 16 Aug 2022, at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I agree though that while simplifying list_length() calls, I'd lean to using\n> explicit comparisons to NIL.\n\n\nAgreed, I prefer that too.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 22:34:17 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 6:34 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 16 Aug 2022, at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > I agree though that while simplifying list_length() calls, I'd lean to using\n> > explicit comparisons to NIL.\n>\n>\n> Agreed, I prefer that too.\n>\n\nThanks for the feedback.\n\nPSA patch v3 which now uses explicit comparisons to NIL everywhere,\nand also addresses the other review comments.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Wed, 17 Aug 2022 11:09:44 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "> On 17 Aug 2022, at 03:09, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n> On Wed, Aug 17, 2022 at 6:34 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> \n>>> On 16 Aug 2022, at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> \n>>> I agree though that while simplifying list_length() calls, I'd lean to using\n>>> explicit comparisons to NIL.\n>> \n>> \n>> Agreed, I prefer that too.\n>> \n> \n> Thanks for the feedback.\n> \n> PSA patch v3 which now uses explicit comparisons to NIL everywhere,\n> and also addresses the other review comments.\n\nFrom reading, this version of the patch looks good to me.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 09:33:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "There are some places that add extra parenthesis like here\n\n--- a/src/backend/optimizer/plan/planner.c\n+++ b/src/backend/optimizer/plan/planner.c\n@@ -3097,7 +3097,7 @@ reorder_grouping_sets(List *groupingsets, List\n*sortclause)\n GroupingSetData *gs = makeNode(GroupingSetData);\n\n while (list_length(sortclause) > list_length(previous) &&\n- list_length(new_elems) > 0)\n+ (new_elems != NIL))\n {\n\nand here,\n\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -3408,7 +3408,7 @@ estimate_num_groups_incremental(PlannerInfo\n*root, List *groupExprs,\n * for normal cases with GROUP BY or DISTINCT, but it is possible for\n * corner cases with set operations.)\n */\n- if (groupExprs == NIL || (pgset && list_length(*pgset) < 1))\n+ if (groupExprs == NIL || (pgset && (*pgset == NIL)))\n return 1.0;\n\nIs it necessary to add that extra parenthesis?\n\nOn Wed, Aug 17, 2022 at 3:33 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 17 Aug 2022, at 03:09, Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Aug 17, 2022 at 6:34 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> >>\n> >>> On 16 Aug 2022, at 16:03, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>\n> >>> I agree though that while simplifying list_length() calls, I'd lean to using\n> >>> explicit comparisons to NIL.\n> >>\n> >>\n> >> Agreed, I prefer that too.\n> >>\n> >\n> > Thanks for the feedback.\n> >\n> > PSA patch v3 which now uses explicit comparisons to NIL everywhere,\n> > and also addresses the other review comments.\n>\n> From reading, this version of the patch looks good to me.\n>\n> --\n> Daniel Gustafsson https://vmware.com/\n>\n>\n>\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Wed, 17 Aug 2022 16:13:45 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "> On 17 Aug 2022, at 10:13, Junwang Zhao <zhjwpku@gmail.com> wrote:\n> \n> There are some places that add extra parenthesis like here\n> \n> ...\n> \n> Is it necessary to add that extra parenthesis?\n\nIt's not necessary unless needed for operator associativity, but also I don't\nobject to grouping with parenthesis as a visual aid for the person reading the\ncode. Going over the patch in more detail I might change my mind on some but I\ndon't object to the practice in general.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 10:23:16 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "Junwang Zhao <zhjwpku@gmail.com> writes:\n> There are some places that add extra parenthesis like here\n> while (list_length(sortclause) > list_length(previous) &&\n> - list_length(new_elems) > 0)\n> + (new_elems != NIL))\n\n> Is it necessary to add that extra parenthesis?\n\nI'd drop the parens in these particular examples because they are\ninconsistent with the other parts of the same \"if\" condition.\nI concur with Daniel's point that parens can be useful as a visual\naid even when they aren't strictly necessary --- but I don't think\nwe should make future readers wonder why one half of the \"if\"\nis parenthesized and the other isn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Aug 2022 09:48:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "I wrote:\n> I'd drop the parens in these particular examples because they are\n> inconsistent with the other parts of the same \"if\" condition.\n\nAfter going through the patch I removed most but not all of the\nnewly-added parens on those grounds. I also found a couple more\nspots that could be converted. Pushed with those changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:14:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Propose a new function - list_is_empty"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 1:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > I'd drop the parens in these particular examples because they are\n> > inconsistent with the other parts of the same \"if\" condition.\n>\n> After going through the patch I removed most but not all of the\n> newly-added parens on those grounds. I also found a couple more\n> spots that could be converted. Pushed with those changes.\n>\n\nThanks for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Thu, 18 Aug 2022 08:19:47 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Propose a new function - list_is_empty"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems like find_in_log() and advance_wal() functions (which are now\nbeing used in at least 2 places). find_in_log() is defined and being\nused in 2 places 019_replslot_limit.pl and 033_replay_tsp_drops.pl.\nThe functionality of advancing WAL is implemented in\n019_replslot_limit.pl with advance_wal() and 001_stream_repl.pl with\nthe same logic as advance_wal() but no function there and an\nin-progress feature [1] needs advance_wal() as-is for tests.\n\nDo these functions qualify to be added to the core test framework in\nCluster.pm? Or do we need more usages of these functions before we\ngeneralize and add to the core test framework? If added, a bit of\nduplicate code can be reduced and they become more usable across the\nentire tests for future use.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/CALj2ACUYz1z6QPduGn5gguCkfd-ko44j4hKcOMtp6fzv9xEWgw@mail.gmail.com\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 07:55:27 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add find_in_log() and advance_wal() perl functions to core test\n framework (?)"
},
{
"msg_contents": "\nOn 2022-08-15 Mo 22:25, Bharath Rupireddy wrote:\n> Hi,\n>\n> It seems like find_in_log() and advance_wal() functions (which are now\n> being used in at least 2 places). find_in_log() is defined and being\n> used in 2 places 019_replslot_limit.pl and 033_replay_tsp_drops.pl.\n> The functionality of advancing WAL is implemented in\n> 019_replslot_limit.pl with advance_wal() and 001_stream_repl.pl with\n> the same logic as advance_wal() but no function there and an\n> in-progress feature [1] needs advance_wal() as-is for tests.\n>\n> Do these functions qualify to be added to the core test framework in\n> Cluster.pm? Or do we need more usages of these functions before we\n> generalize and add to the core test framework? If added, a bit of\n> duplicate code can be reduced and they become more usable across the\n> entire tests for future use.\n>\n> Thoughts?\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACUYz1z6QPduGn5gguCkfd-ko44j4hKcOMtp6fzv9xEWgw@mail.gmail.com\n>\n\nI don't think there's a hard and fast rule about it. Certainly the case\nwould be more compelling if the functions were used across different TAP\nsuites. The SSL suite has suite-specific modules. That's a pattern also\nworth considering. e.g something like.\n\n use FindBin qw($Bin);\n use lib $Bin;\n use MySuite;\n\nand then you put your common routines in MySuite.pm in the same\ndirectory as the TAP test files.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 12:32:30 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add find_in_log() and advance_wal() perl functions to core test\n framework (?)"
},
{
"msg_contents": "On 2022-Aug-16, Andrew Dunstan wrote:\n\n> I don't think there's a hard and fast rule about it. Certainly the case\n> would be more compelling if the functions were used across different TAP\n> suites. The SSL suite has suite-specific modules. That's a pattern also\n> worth considering. e.g something like.\n> \n> use FindBin qw($Bin);\n> use lib $Bin;\n> use MySuite;\n> \n> and then you put your common routines in MySuite.pm in the same\n> directory as the TAP test files.\n\nYeah, I agree with that for advance_wal. Regarding find_in_log, that\none seems general enough to warrant being in Cluster.pm -- consider\nissues_sql_like, which also slurps_file($log). That could be unified a\nlittle bit, I think.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 18:40:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Add find_in_log() and advance_wal() perl functions to core test\n framework (?)"
},
{
"msg_contents": "At Tue, 16 Aug 2022 18:40:49 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Aug-16, Andrew Dunstan wrote:\n> \n> > I don't think there's a hard and fast rule about it. Certainly the case\n> > would be more compelling if the functions were used across different TAP\n> > suites. The SSL suite has suite-specific modules. That's a pattern also\n> > worth considering. e.g something like.\n> > \n> > use FindBin qw($Bin);\n> > use lib $Bin;\n> > use MySuite;\n> > \n> > and then you put your common routines in MySuite.pm in the same\n> > directory as the TAP test files.\n> \n> Yeah, I agree with that for advance_wal. Regarding find_in_log, that\n> one seems general enough to warrant being in Cluster.pm -- consider\n> issues_sql_like, which also slurps_file($log). That could be unified a\n> little bit, I think.\n\n+1\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Aug 2022 10:12:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add find_in_log() and advance_wal() perl functions to core\n test framework (?)"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nSome time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\nI understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\nCan we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace? Full session is attaches as file.\n\n#0 pfree (pointer=0x561850bbee40) at ./build/../src/backend/utils/mmgr/mcxt.c:1032\n#1 0x00005617712530d6 in ReorderBufferReturnTupleBuf (tuple=<optimized out>, rb=<optimized out>) at ./build/../src/backend/replication/logical/reorderbuffer.c:469\n#2 ReorderBufferReturnChange (rb=<optimized out>, change=0x561772456048) at ./build/../src/backend/replication/logical/reorderbuffer.c:398\n#3 0x0000561771253da1 in ReorderBufferRestoreChanges (rb=rb@entry=0x561771c14e10, txn=0x561771c0b078, file=file@entry=0x561771c15168, segno=segno@entry=0x561771c15178) at ./build/../src/backend/replication/logical/reorderbuffer.c:2570\n#4 0x00005617712553ba in ReorderBufferIterTXNNext (state=0x561771c15130, rb=0x561771c14e10) at ./build/../src/backend/replication/logical/reorderbuffer.c:1146\n#5 ReorderBufferCommit (rb=0x561771c14e10, xid=xid@entry=2976347782, commit_lsn=79160378448744, end_lsn=<optimized out>, commit_time=commit_time@entry=686095734290578, origin_id=origin_id@entry=0, origin_lsn=0) at ./build/../src/backend/replication/logical/reorderbuffer.c:1523\n#6 0x000056177124a30a in DecodeCommit (xid=2976347782, parsed=0x7ffc3cb4c240, buf=0x7ffc3cb4c400, ctx=0x561771b10850) at ./build/../src/backend/replication/logical/decode.c:640\n#7 DecodeXactOp (ctx=0x561771b10850, buf=buf@entry=0x7ffc3cb4c400) at ./build/../src/backend/replication/logical/decode.c:248\n#8 0x000056177124a6a9 in LogicalDecodingProcessRecord (ctx=0x561771b10850, record=0x561771b10ae8) at ./build/../src/backend/replication/logical/decode.c:117\n#9 0x000056177125d1e5 in XLogSendLogical () at ./build/../src/backend/replication/walsender.c:2893\n#10 0x000056177125f5f2 in WalSndLoop (send_data=send_data@entry=0x56177125d180 <XLogSendLogical>) at ./build/../src/backend/replication/walsender.c:2242\n#11 0x0000561771260125 in StartLogicalReplication (cmd=<optimized out>) at ./build/../src/backend/replication/walsender.c:1179\n#12 exec_replication_command (cmd_string=cmd_string@entry=0x561771abe590 \"START_REPLICATION SLOT dttsjtaa66crdhbm015h LOGICAL 0/0 ( \\\"include-timestamp\\\" '1', \\\"include-types\\\" '1', \\\"include-xids\\\" '1', \\\"write-in-chunks\\\" '1', \\\"add-tables\\\" '/* sanitized */.claim_audit,public.__consu\"...) at ./build/../src/backend/replication/walsender.c:1612\n#13 0x00005617712b2334 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x561771b2a438, dbname=<optimized out>, username=<optimized out>) at ./build/../src/backend/tcop/postgres.c:4267\n#14 0x000056177123857c in BackendRun (port=0x561771b0d7a0, port=0x561771b0d7a0) at ./build/../src/backend/postmaster/postmaster.c:4484\n#15 BackendStartup (port=0x561771b0d7a0) at ./build/../src/backend/postmaster/postmaster.c:4167\n#16 ServerLoop () at ./build/../src/backend/postmaster/postmaster.c:1725\n#17 0x000056177123954b in PostmasterMain (argc=9, argv=0x561771ab70e0) at ./build/../src/backend/postmaster/postmaster.c:1398\n#18 0x0000561770fae8b6 in main (argc=9, argv=0x561771ab70e0) at ./build/../src/backend/main/main.c:228\n\nWhat do you think?\n\nThank you!\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Tue, 16 Aug 2022 08:57:54 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 9:28 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>\n> Hi hackers!\n>\n> Some time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\n> I understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\n> Can we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace?\n>\n\nI think if we want to do this in this code path then it may be it is\nbetter to add it in ReorderBufferProcessTXN where we are looping to\nprocess each change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:38:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 2:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 9:28 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >\n> > Hi hackers!\n> >\n> > Some time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\n> > I understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\n> > Can we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace?\n> >\n>\n> I think if we want to do this in this code path then it may be it is\n> better to add it in ReorderBufferProcessTXN where we are looping to\n> process each change.\n\n+1\n\nThe same issue is recently reported[1] on -bugs and I proposed the\npatch that adds CHECK_FOR_INTERRUPTS() to the loop in\nReorderBufferProcessTXN(). I think it should be backpatched.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoD%2BaNfLje%2B9JOqWbTiq1GL4BOp9_f7FxLADm8rS8cDhCQ%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 14:25:25 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 2:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Aug 16, 2022 at 9:28 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > >\n> > > Hi hackers!\n> > >\n> > > Some time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\n> > > I understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\n> > > Can we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace?\n> > >\n> >\n> > I think if we want to do this in this code path then it may be it is\n> > better to add it in ReorderBufferProcessTXN where we are looping to\n> > process each change.\n>\n> +1\n>\n> The same issue is recently reported[1] on -bugs and I proposed the\n> patch that adds CHECK_FOR_INTERRUPTS() to the loop in\n> ReorderBufferProcessTXN(). I think it should be backpatched.\n>\n\nI agree that it is better to backpatch this as well. Would you like to\nverify if your patch works for all branches or if it need some tweaks?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 16 Aug 2022 11:01:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Aug 16, 2022 at 2:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 16, 2022 at 9:28 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > > >\n> > > > Hi hackers!\n> > > >\n> > > > Some time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\n> > > > I understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\n> > > > Can we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace?\n> > > >\n> > >\n> > > I think if we want to do this in this code path then it may be it is\n> > > better to add it in ReorderBufferProcessTXN where we are looping to\n> > > process each change.\n> >\n> > +1\n> >\n> > The same issue is recently reported[1] on -bugs and I proposed the\n> > patch that adds CHECK_FOR_INTERRUPTS() to the loop in\n> > ReorderBufferProcessTXN(). I think it should be backpatched.\n> >\n>\n> I agree that it is better to backpatch this as well. Would you like to\n> verify if your patch works for all branches or if it need some tweaks?\n>\n\nYes, I've confirmed v10 and master but will do that for other branches\nand send patches for all supported branches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Tue, 16 Aug 2022 14:32:44 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "\n\n> On 16 Aug 2022, at 10:25, Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> \n> The same issue is recently reported[1] on -bugs\nOh, I missed that thread.\n\n> and I proposed the\n> patch that adds CHECK_FOR_INTERRUPTS() to the loop in\n> ReorderBufferProcessTXN().\nI agree that it's a good place for check.\n\n> I think it should be backpatched.\nYes, I think so too.\n\n> [1] https://www.postgresql.org/message-id/CAD21AoD%2BaNfLje%2B9JOqWbTiq1GL4BOp9_f7FxLADm8rS8cDhCQ%40mail.gmail.com\n\nThe patch in this thread looks good to me.\n\n\nThank you!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Tue, 16 Aug 2022 14:06:01 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 2:32 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 2:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Aug 16, 2022 at 10:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Aug 16, 2022 at 2:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Tue, Aug 16, 2022 at 9:28 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > > > >\n> > > > > Hi hackers!\n> > > > >\n> > > > > Some time ago I've seen a hanging logical replication that was trying to send transaction commit after doing table pg_repack.\n> > > > > I understand that those things do not mix well. Yet walsender was ignoring pg_terminate_backend() and I think this worth fixing.\n> > > > > Can we add CHECK_FOR_INTERRUPTS(); somewhere in this backtrace?\n> > > > >\n> > > >\n> > > > I think if we want to do this in this code path then it may be it is\n> > > > better to add it in ReorderBufferProcessTXN where we are looping to\n> > > > process each change.\n> > >\n> > > +1\n> > >\n> > > The same issue is recently reported[1] on -bugs and I proposed the\n> > > patch that adds CHECK_FOR_INTERRUPTS() to the loop in\n> > > ReorderBufferProcessTXN(). I think it should be backpatched.\n> > >\n> >\n> > I agree that it is better to backpatch this as well. Would you like to\n> > verify if your patch works for all branches or if it need some tweaks?\n> >\n>\n> Yes, I've confirmed v10 and master but will do that for other branches\n> and send patches for all supported branches.\n>\n\nI've attached patches for all supported branches.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Tue, 16 Aug 2022 18:06:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 2:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached patches for all supported branches.\n>\n\nLGTM. I'll push this tomorrow unless there are comments/suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 16:48:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 4:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 2:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached patches for all supported branches.\n> >\n>\n> LGTM. I'll push this tomorrow unless there are comments/suggestions.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 23 Aug 2022 14:10:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 4:40 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Aug 22, 2022 at 4:48 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Tue, Aug 16, 2022 at 2:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > I've attached patches for all supported branches.\n> >\n> > LGTM. I'll push this tomorrow unless there are comments/suggestions.\n>\n> Pushed.\n\nI think this was a good change, but there's at least one other problem\nhere: within ReorderBufferRestoreChanges, the while (restored <\nmax_changes_in_memory && *segno <= last_segno) doesn't seem to contain\na CFI. Note that this can loop either by repeatedly failing to open a\nfile, or by repeatedly reading from a file and passing the data read\nto ReorderBufferRestoreChange. So I think there should just be a CFI\nat the top of this loop to make sure both cases are covered.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 19:47:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 5:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > Pushed.\n>\n> I think this was a good change, but there's at least one other problem\n> here: within ReorderBufferRestoreChanges, the while (restored <\n> max_changes_in_memory && *segno <= last_segno) doesn't seem to contain\n> a CFI. Note that this can loop either by repeatedly failing to open a\n> file, or by repeatedly reading from a file and passing the data read\n> to ReorderBufferRestoreChange. So I think there should just be a CFI\n> at the top of this loop to make sure both cases are covered.\n>\n\nAgreed. The failures due to file operations can make this loop\nunpredictable in terms of time, so it is a good idea to have CFI at\nthe top of this loop.\n\nI can take care of this unless there are any objections or you want to\ndo it. We have backpatched the previous similar change, so I think we\nshould backpatch this as well. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 20 Oct 2022 11:07:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 1:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Oct 20, 2022 at 5:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Pushed.\n> >\n> > I think this was a good change, but there's at least one other problem\n> > here: within ReorderBufferRestoreChanges, the while (restored <\n> > max_changes_in_memory && *segno <= last_segno) doesn't seem to contain\n> > a CFI. Note that this can loop either by repeatedly failing to open a\n> > file, or by repeatedly reading from a file and passing the data read\n> > to ReorderBufferRestoreChange. So I think there should just be a CFI\n> > at the top of this loop to make sure both cases are covered.\n>\n> Agreed. The failures due to file operations can make this loop\n> unpredictable in terms of time, so it is a good idea to have CFI at\n> the top of this loop.\n>\n> I can take care of this unless there are any objections or you want to\n> do it. We have backpatched the previous similar change, so I think we\n> should backpatch this as well. What do you think?\n\nPlease go ahead. +1 for back-patching.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 20 Oct 2022 09:47:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
},
{
"msg_contents": "On Thu, Oct 20, 2022 at 7:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Thu, Oct 20, 2022 at 1:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Thu, Oct 20, 2022 at 5:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > Pushed.\n> > >\n> > > I think this was a good change, but there's at least one other problem\n> > > here: within ReorderBufferRestoreChanges, the while (restored <\n> > > max_changes_in_memory && *segno <= last_segno) doesn't seem to contain\n> > > a CFI. Note that this can loop either by repeatedly failing to open a\n> > > file, or by repeatedly reading from a file and passing the data read\n> > > to ReorderBufferRestoreChange. So I think there should just be a CFI\n> > > at the top of this loop to make sure both cases are covered.\n> >\n> > Agreed. The failures due to file operations can make this loop\n> > unpredictable in terms of time, so it is a good idea to have CFI at\n> > the top of this loop.\n> >\n> > I can take care of this unless there are any objections or you want to\n> > do it. We have backpatched the previous similar change, so I think we\n> > should backpatch this as well. What do you think?\n>\n> Please go ahead. +1 for back-patching.\n>\n\nYesterday, I have pushed this change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 22 Oct 2022 16:27:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical WAL sender unresponsive during decoding commit"
}
] |
[
{
"msg_contents": "Hello, hackers.\n\nAs of PostgreSQL 14, \"tty\" in the libpq connection string has already been removed with the commit below.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=14d9b37607ad30c3848ea0f2955a78436eff1268\n\nBut https://www.postgresql.org/docs/15/libpq-connect.html#LIBPQ-CONNSTRING still says \"Ignored (formerly, this specified where to send server debug output)\". The attached patch removes the \"tty\" item.\n\nRegards,\nNoriyoshi Shinoda",
"msg_date": "Tue, 16 Aug 2022 05:27:36 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": true,
"msg_subject": "[PG15 Doc] remove \"tty\" connect string from manual"
},
{
"msg_contents": "> On 16 Aug 2022, at 07:27, Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\n> \n> Hello, hackers.\n> \n> As of PostgreSQL 14, \"tty\" in the libpq connection string has already been removed with the commit below.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=14d9b37607ad30c3848ea0f2955a78436eff1268\n> \n> But https://www.postgresql.org/docs/15/libpq-connect.html#LIBPQ-CONNSTRING still says \"Ignored (formerly, this specified where to send server debug output)\". The attached patch removes the \"tty\" item.\n\nAh, nice catch, I missed removing this in my original patch. I'll take of\ncommitting this shortly.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 10:45:30 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PG15 Doc] remove \"tty\" connect string from manual"
}
] |
[
{
"msg_contents": "Looking to tidy up c.h a bit, I think the NON_EXEC_STATIC #define \ndoesn't need to be known globally, and it's not related to establishing \na portable C environment, so I propose to move it to a more localized \nheader, such as postmaster.h, as in the attached patch.",
"msg_date": "Tue, 16 Aug 2022 13:23:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Move NON_EXEC_STATIC from c.h"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Looking to tidy up c.h a bit, I think the NON_EXEC_STATIC #define \n> doesn't need to be known globally, and it's not related to establishing \n> a portable C environment, so I propose to move it to a more localized \n> header, such as postmaster.h, as in the attached patch.\n\nHmm, postgres.h seems like a better choice, since in principle any\nbackend file might need this. This arrangement could require\npostmaster.h to be included just for this macro.\n\nAlso, the macro was severely underdocumented already, and I don't\nfind \"no comment at all\" to be better. Can't we afford a couple\nof lines of explanation?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 09:50:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move NON_EXEC_STATIC from c.h"
},
{
"msg_contents": "On 16.08.22 15:50, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Looking to tidy up c.h a bit, I think the NON_EXEC_STATIC #define\n>> doesn't need to be known globally, and it's not related to establishing\n>> a portable C environment, so I propose to move it to a more localized\n>> header, such as postmaster.h, as in the attached patch.\n> \n> Hmm, postgres.h seems like a better choice, since in principle any\n> backend file might need this. This arrangement could require\n> postmaster.h to be included just for this macro.\n\nI picked postmaster.h because the other side of the code, where the \nno-longer-static symbols are used, is in postmaster.c. But postgres.h \nis also ok.\n\n> Also, the macro was severely underdocumented already, and I don't\n> find \"no comment at all\" to be better. Can't we afford a couple\n> of lines of explanation?\n\nHere is a new patch with more comments.",
"msg_date": "Tue, 23 Aug 2022 21:13:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Move NON_EXEC_STATIC from c.h"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Here is a new patch with more comments.\n\nLGTM\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 17:06:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Move NON_EXEC_STATIC from c.h"
}
] |
[
{
"msg_contents": "Hi,\n\nI was looking at the code for pg_walinspect today and I think I may\nhave found a bug (or else I'm confused about how this all works, which\nis also possible). ReadNextXLogRecord() takes an argument first_record\nof type XLogRecPtr which is used only for error reporting purposes: if\nwe fail to read the next record for a reason other than end-of-WAL, we\ncomplain that we couldn't read the WAL at the LSN specified by\nfirst_record.\n\nReadNextXLogRecord() has three callers. In pg_get_wal_record_info(),\nwe're just reading a single record, and the LSN passed as first_record\nis the LSN at which that record starts. Cool. But in the other two\ncallers, GetWALRecordsInfo() and GetWalStats(), we're reading multiple\nrecords, and first_record is always passed as the LSN of the first\nrecord. That's logical enough given the name of the argument, but the\neffect of it seems to be that an error while reading any of the\nrecords will be reported using the LSN of the first record, which does\nnot seem right.\n\nBy contrast, pg_rewind's extractPageMap() reports the error using\nxlogreader->EndRecPtr. I think that's correct. The toplevel xlogreader\nfunction that we're calling here is XLogReadRecord(), which in turn\ncalls XLogNextRecord(), which has this comment:\n\n /*\n * state->EndRecPtr is expected to have been set by the last call to\n * XLogBeginRead() or XLogNextRecord(), and is the location of the\n * error.\n */\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 12:34:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_walinspect: ReadNextXLogRecord's first_record argument"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 10:04 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Hi,\n>\n> I was looking at the code for pg_walinspect today and I think I may\n> have found a bug (or else I'm confused about how this all works, which\n> is also possible). ReadNextXLogRecord() takes an argument first_record\n> of type XLogRecPtr which is used only for error reporting purposes: if\n> we fail to read the next record for a reason other than end-of-WAL, we\n> complain that we couldn't read the WAL at the LSN specified by\n> first_record.\n>\n> ReadNextXLogRecord() has three callers. In pg_get_wal_record_info(),\n> we're just reading a single record, and the LSN passed as first_record\n> is the LSN at which that record starts. Cool. But in the other two\n> callers, GetWALRecordsInfo() and GetWalStats(), we're reading multiple\n> records, and first_record is always passed as the LSN of the first\n> record. That's logical enough given the name of the argument, but the\n> effect of it seems to be that an error while reading any of the\n> records will be reported using the LSN of the first record, which does\n> not seem right.\n\nIndeed. Thanks a lot for finding it.\n\n> By contrast, pg_rewind's extractPageMap() reports the error using\n> xlogreader->EndRecPtr. I think that's correct. The toplevel xlogreader\n> function that we're calling here is XLogReadRecord(), which in turn\n> calls XLogNextRecord(), which has this comment:\n>\n> /*\n> * state->EndRecPtr is expected to have been set by the last call to\n> * XLogBeginRead() or XLogNextRecord(), and is the location of the\n> * error.\n> */\n>\n> Thoughts?\n\nAgreed.\n\nHere's a patch (for V15 as well) fixing this bug, please review.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Wed, 17 Aug 2022 10:10:48 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect: ReadNextXLogRecord's first_record argument"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 12:41 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Agreed.\n>\n> Here's a patch (for V15 as well) fixing this bug, please review.\n\nCouldn't you simplify this further by removing the lsn argument from\nGetWALRecordInfo and using record->ReadRecPtr instead? Then\nInitXLogReaderState's second argument could be XLogRecPtr instead of\nXLogRecPtr *.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:22:32 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_walinspect: ReadNextXLogRecord's first_record argument"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 8:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 17, 2022 at 12:41 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > Agreed.\n> >\n> > Here's a patch (for V15 as well) fixing this bug, please review.\n>\n> Couldn't you simplify this further by removing the lsn argument from\n> GetWALRecordInfo and using record->ReadRecPtr instead? Then\n> InitXLogReaderState's second argument could be XLogRecPtr instead of\n> XLogRecPtr *.\n\nDone. XLogFindNextRecord() stores the first valid record in EndRecPtr\nand the ReadRecPtr is set to InvalidXLogRecPtr by calling\nXLogBeginRead(). And XLogNextRecord() sets ReadRecPtr which we can\nuse.\n\nPSA v2 patches.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Wed, 17 Aug 2022 21:58:56 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_walinspect: ReadNextXLogRecord's first_record argument"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 12:29 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> PSA v2 patches.\n\nThese look OK to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 13:44:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_walinspect: ReadNextXLogRecord's first_record argument"
}
] |
[
{
"msg_contents": "It claims that:\n\n * 'RecPtr' should point to the beginning of a valid WAL record. Pointing at\n * the beginning of a page is also OK, if there is a new record right after\n * the page header, i.e. not a continuation.\n\nBut this actually doesn't seem to work. This function doesn't itself\nhave any problem with any LSNs you want to pass it, so if you call\nthis function with an LSN that is at the beginning of a page, you'll\nend up with EndRecPtr set to the LSN you specify and DecodeRecPtr set\nto NULL. When you then call XLogReadRecord, you'll reach\nXLogDecodeNextRecord, which will do this:\n\n if (state->DecodeRecPtr != InvalidXLogRecPtr)\n {\n /* read the record after the one we just read */\n\n /*\n * NextRecPtr is pointing to end+1 of the previous WAL record. If\n * we're at a page boundary, no more records can fit on the current\n * page. We must skip over the page header, but we can't do that until\n * we've read in the page, since the header size is variable.\n */\n }\n else\n {\n /*\n * Caller supplied a position to start at.\n *\n * In this case, NextRecPtr should already be pointing to a valid\n * record starting position.\n */\n Assert(XRecOffIsValid(RecPtr));\n randAccess = true;\n }\n\nSince DecodeRecPtr is NULL, you take the else branch, and then you\nfail an assertion.\n\nI tried adding a --beginread argument to pg_waldump (patch attached)\nto further verify this:\n\n[rhaas pgsql]$ pg_waldump -n1\n/Users/rhaas/pgstandby/pg_wal/0000000200000005000000A0\nrmgr: Heap len (rec/tot): 72/ 72, tx: 5778572, lsn:\n5/A0000028, prev 5/9FFFFFB8, desc: HOT_UPDATE off 39 xmax 5778572\nflags 0x20 ; new off 62 xmax 0, blkref #0: rel 1663/16388/16402 blk 1\n[rhaas pgsql]$ pg_waldump -n1 --beginread\n/Users/rhaas/pgstandby/pg_wal/0000000200000005000000A0\nAssertion failed: (((RecPtr) % 8192 >= (((uintptr_t)\n((sizeof(XLogPageHeaderData))) + ((8) - 1)) & ~((uintptr_t) ((8) -\n1))))), function XLogDecodeNextRecord, file xlogreader.c, line 582.\nAbort 
trap: 6 (core dumped)\n\nThe WAL record begins at offset 0x28 in the block, which I believe is\nthe length of a long page header, so this is indeed a WAL segment that\nbegins with a brand new record, not a continuation record.\n\nThere are two ways we could fix this, I believe. One is to correct the\ncomment at the start of XLogBeginRead() to reflect the way things\nactually work at present. The other is to correct the code to do what\nthe header comment claims. I would prefer the latter, because I'd like\nto be able to use the EndRecPtr of the last record read by one\nxlogreader as the starting point for a new xlogreader created at a\nlater time. I've found that, when there's no record spanning the block\nboundary, the EndRecPtr points to the start of the next block, not the\nstart of the first record in the next block. I could dodge the problem\nhere by just always using XLogFindNextRecord() rather than\nXLogBeginRecord(), but I'd actually like it to go boom if I somehow\nend up trying to start from an LSN that's in the middle of a record\nsomewhere (or the middle of the page header) because those cases\nshouldn't happen. But if I just have an LSN that happens to be the\nstart of the block header rather than the start of the record that\nfollows the block header, I'd like that case to be tolerated, because\nthe LSN I'm using came from the xlogreader machinery.\n\nThoughts?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 16 Aug 2022 13:58:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "XLogBeginRead's header comment lies"
},
{
"msg_contents": "Forgot the attachment.",
"msg_date": "Tue, 16 Aug 2022 15:04:57 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: XLogBeginRead's header comment lies"
},
{
"msg_contents": "On Tue, Aug 16, 2022 at 11:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> It claims that:\n>\n> * 'RecPtr' should point to the beginning of a valid WAL record. Pointing at\n> * the beginning of a page is also OK, if there is a new record right after\n> * the page header, i.e. not a continuation.\n>\n> But this actually doesn't seem to work. This function doesn't itself\n> have any problem with any LSNs you want to pass it, so if you call\n> this function with an LSN that is at the beginning of a page, you'll\n> end up with EndRecPtr set to the LSN you specify and DecodeRecPtr set\n> to NULL. When you then call XLogReadRecord, you'll reach\n> XLogDecodeNextRecord, which will do this:\n>\n> if (state->DecodeRecPtr != InvalidXLogRecPtr)\n> {\n> /* read the record after the one we just read */\n>\n> /*\n> * NextRecPtr is pointing to end+1 of the previous WAL record. If\n> * we're at a page boundary, no more records can fit on the current\n> * page. We must skip over the page header, but we can't do that until\n> * we've read in the page, since the header size is variable.\n> */\n> }\n> else\n> {\n> /*\n> * Caller supplied a position to start at.\n> *\n> * In this case, NextRecPtr should already be pointing to a valid\n> * record starting position.\n> */\n> Assert(XRecOffIsValid(RecPtr));\n> randAccess = true;\n> }\n>\n> Since DecodeRecPtr is NULL, you take the else branch, and then you\n> fail an assertion.\n>\n> I tried adding a --beginread argument to pg_waldump (patch attached)\n> to further verify this:\n>\n> [rhaas pgsql]$ pg_waldump -n1\n> /Users/rhaas/pgstandby/pg_wal/0000000200000005000000A0\n> rmgr: Heap len (rec/tot): 72/ 72, tx: 5778572, lsn:\n> 5/A0000028, prev 5/9FFFFFB8, desc: HOT_UPDATE off 39 xmax 5778572\n> flags 0x20 ; new off 62 xmax 0, blkref #0: rel 1663/16388/16402 blk 1\n> [rhaas pgsql]$ pg_waldump -n1 --beginread\n> /Users/rhaas/pgstandby/pg_wal/0000000200000005000000A0\n> Assertion failed: (((RecPtr) % 8192 >= 
(((uintptr_t)\n> ((sizeof(XLogPageHeaderData))) + ((8) - 1)) & ~((uintptr_t) ((8) -\n> 1))))), function XLogDecodeNextRecord, file xlogreader.c, line 582.\n> Abort trap: 6 (core dumped)\n>\n> The WAL record begins at offset 0x28 in the block, which I believe is\n> the length of a long page header, so this is indeed a WAL segment that\n> begins with a brand new record, not a continuation record.\n>\n> There are two ways we could fix this, I believe. One is to correct the\n> comment at the start of XLogBeginRead() to reflect the way things\n> actually work at present. The other is to correct the code to do what\n> the header comment claims. I would prefer the latter, because I'd like\n> to be able to use the EndRecPtr of the last record read by one\n> xlogreader as the starting point for a new xlogreader created at a\n> later time. I've found that, when there's no record spanning the block\n> boundary, the EndRecPtr points to the start of the next block, not the\n> start of the first record in the next block. I could dodge the problem\n> here by just always using XLogFindNextRecord() rather than\n> XLogBeginRecord(), but I'd actually like it to go boom if I somehow\n> end up trying to start from an LSN that's in the middle of a record\n> somewhere (or the middle of the page header) because those cases\n> shouldn't happen. But if I just have an LSN that happens to be the\n> start of the block header rather than the start of the record that\n> follows the block header, I'd like that case to be tolerated, because\n> the LSN I'm using came from the xlogreader machinery.\n>\n> Thoughts?\n\nYeah I think it makes sense to make it work as per the comment in\nXLogBeginRecord(). 
I think if we modify the Assert as per the comment\nof XLogBeginRecord() then the remaining code of the\nXLogDecodeNextRecord() is capable enough to take care of skipping the\npage header if we are pointing at the beginning of the block.\n\nSee attached patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 17 Aug 2022 11:18:55 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XLogBeginRead's header comment lies"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 16, 2022 at 11:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n>\n> Yeah I think it makes sense to make it work as per the comment in\n> XLogBeginRecord(). I think if we modify the Assert as per the comment\n> of XLogBeginRecord() then the remaining code of the\n> XLogDecodeNextRecord() is capable enough to take care of skipping the\n> page header if we are pointing at the beginning of the block.\n>\n> See attached patch.\n>\n\nI think that is not sufficient, if there is a record continuing from\nthe previous page and we are pointing to the start of the page then\nthis assertion is not sufficient. I think if the\ntargetRecOff is zero then we should additionally read the header and\nverify that XLP_FIRST_IS_CONTRECORD is not set.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 11:31:45 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XLogBeginRead's header comment lies"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 11:31 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Aug 17, 2022 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Aug 16, 2022 at 11:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> >\n> > Yeah I think it makes sense to make it work as per the comment in\n> > XLogBeginRecord(). I think if we modify the Assert as per the comment\n> > of XLogBeginRecord() then the remaining code of the\n> > XLogDecodeNextRecord() is capable enough to take care of skipping the\n> > page header if we are pointing at the beginning of the block.\n> >\n> > See attached patch.\n> >\n>\n> I think that is not sufficient, if there is a record continuing from\n> the previous page and we are pointing to the start of the page then\n> this assertion is not sufficient. I think if the\n> targetRecOff is zero then we should additionally read the header and\n> verify that XLP_FIRST_IS_CONTRECORD is not set.\n\nThinking again, there is already a code in XLogDecodeNextRecord() to\nerror out if XLP_FIRST_IS_CONTRECORD is set so probably we don't need\nto do anything else and the previous patch with modified assert should\njust work fine?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 16:22:52 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: XLogBeginRead's header comment lies"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 6:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Thinking again, there is already a code in XLogDecodeNextRecord() to\n> error out if XLP_FIRST_IS_CONTRECORD is set so probably we don't need\n> to do anything else and the previous patch with modified assert should\n> just work fine?\n\nYeah, that looks right to me. I'm inclined to commit your patch with\nsome changes to wording of the comments. I'm also inclined not to\nback-patch, since we don't know that this breaks anything for existing\nusers of the xlogreader facility.\n\nIf anyone doesn't want this committed or does want it back-patched,\nplease speak up.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 10:57:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: XLogBeginRead's header comment lies"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 10:57 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Yeah, that looks right to me. I'm inclined to commit your patch with\n> some changes to wording of the comments. I'm also inclined not to\n> back-patch, since we don't know that this breaks anything for existing\n> users of the xlogreader facility.\n\nDone.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Aug 2022 12:26:37 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: XLogBeginRead's header comment lies"
}
]
[
{
"msg_contents": "Hello all,\n\nWith the current implementation of COPY FROM in PostgreSQL we are able to\nload the DEFAULT value/expression of a column if the column is absent in the\nlist of specified columns. We are not able to explicitly ask that\nPostgreSQL uses\nthe DEFAULT value/expression in a column that is being fetched from the\ninput\nfile, though.\n\nThis patch adds support for handling DEFAULT values in COPY FROM. It works\nsimilarly to NULL in COPY FROM: whenever the marker that was set for DEFAULT\nvalue/expression is read from the input stream, it will evaluate the DEFAULT\nvalue/expression of the corresponding column.\n\nI'm currently working as a support engineer, and both me and some customers\nhad\nalready faced a situation where we missed an implementation like this in\nCOPY\nFROM, and had to work around that by using an input file where the column\nwhich\nhas a DEFAULT value/expression was removed.\n\nThat does not solve all issues though, as it might be the case that we just\nwant a\nDEFAULT value to take place if no other value was set for the column in the\ninput\nfile, meaning we would like to have a column in the input file that\nsometimes assume\nthe DEFAULT value/expression, and sometimes assume an actual given value.\n\nThe implementation was performed about one month ago and included all\nregression\ntests regarding the changes that were introduced. It was just rebased on\ntop of the\nmaster branch before submitting this patch, and all tests are still\nsucceeding.\n\nThe implementation takes advantage of the logic that was already\nimplemented to\nhandle DEFAULT values for missing columns in COPY FROM. I just modified it\nto\nmake it available the DEFAULT values/expressions for all columns instead of\nonly\nfor the ones that were missing in the specification. I had to change the\nvariables\naccordingly, so it would index the correct positions in the new array of\nDEFAULT\nvalues/expressions.\n\nBesides that, I also copied and pasted most of the checks that are\nperformed for the\nNULL feature of COPY FROM, as the DEFAULT behaves somehow similarly.\n\nBest regards,\nIsrael.",
"msg_date": "Tue, 16 Aug 2022 15:12:20 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-08-16 Tu 14:12, Israel Barth Rubio wrote:\n> Hello all,\n>\n> With the current implementation of COPY FROM in PostgreSQL we are able to\n> load the DEFAULT value/expression of a column if the column is absent\n> in the\n> list of specified columns. We are not able to explicitly ask that\n> PostgreSQL uses\n> the DEFAULT value/expression in a column that is being fetched from\n> the input\n> file, though.\n>\n> This patch adds support for handling DEFAULT values in COPY FROM. It\n> works\n> similarly to NULL in COPY FROM: whenever the marker that was set for\n> DEFAULT\n> value/expression is read from the input stream, it will evaluate the\n> DEFAULT\n> value/expression of the corresponding column.\n>\n> I'm currently working as a support engineer, and both me and some\n> customers had\n> already faced a situation where we missed an implementation like this\n> in COPY\n> FROM, and had to work around that by using an input file where the\n> column which\n> has a DEFAULT value/expression was removed.\n>\n> That does not solve all issues though, as it might be the case that we\n> just want a\n> DEFAULT value to take place if no other value was set for the column\n> in the input\n> file, meaning we would like to have a column in the input file that\n> sometimes assume\n> the DEFAULT value/expression, and sometimes assume an actual given value.\n>\n> The implementation was performed about one month ago and included all\n> regression\n> tests regarding the changes that were introduced. It was just rebased\n> on top of the\n> master branch before submitting this patch, and all tests are still\n> succeeding.\n>\n> The implementation takes advantage of the logic that was already\n> implemented to\n> handle DEFAULT values for missing columns in COPY FROM. I just\n> modified it to\n> make it available the DEFAULT values/expressions for all columns\n> instead of only\n> for the ones that were missing in the specification. I had to change\n> the variables\n> accordingly, so it would index the correct positions in the new array\n> of DEFAULT\n> values/expressions.\n>\n> Besides that, I also copied and pasted most of the checks that are\n> performed for the\n> NULL feature of COPY FROM, as the DEFAULT behaves somehow similarly.\n>\n>\n\n\nInteresting, and probably useful. I've only had a brief look, but it's\nimportant that the default marker not be quoted in CSV mode (c.f. NULL)\n- if it is it should be taken as a literal rather than a special value.\nMaybe that's taken care of, but there should at least be a test for it,\nwhich I didn't see.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 16 Aug 2022 16:27:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Andrew,\n\nThanks for reviewing this patch.\n\nIt is worth noting that DEFAULT will only take place if explicitly\nspecified, meaning there is\nno default value for the option DEFAULT. The usage of \\D in the tests was\nonly a suggestion.\nAlso, NULL marker will be an unquoted empty string by default in CSV mode.\n\nIn any case I have manually tested it and the behavior is compliant to what\nwe see in NULL\nif it is defined to use \\N both in text and CSV modes.\n\n- NULL as \\N:\n\npostgres=# CREATE TEMP TABLE copy_null (id integer primary key, value text);\nCREATE TABLE\npostgres=# copy copy_null from stdin with (format text, NULL '\\N');\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1 \\N\n>> 2 \\\\N\n>> 3 \"\\N\"\n>> \\.\nCOPY 3\npostgres=# TABLE copy_null ;\n id | value\n----+-------\n 1 |\n 2 | \\N\n 3 | \"N\"\n(3 rows)\n\npostgres=# TRUNCATE copy_null ;\nTRUNCATE TABLE\npostgres=# copy copy_null from stdin with (format csv, NULL '\\N');\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1,\\N\n>> 2,\\\\N\n>> 3,\"\\N\"\n>> \\.\nCOPY 3\npostgres=# TABLE copy_null ;\n id | value\n----+-------\n 1 |\n 2 | \\\\N\n 3 | \\N\n(3 rows)\n\n- DEFAULT as \\D:\n\npostgres=# CREATE TEMP TABLE copy_default (id integer primary key, value\ntext default 'test');\nCREATE TABLE\npostgres=# copy copy_default from stdin with (format text, DEFAULT '\\D');\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1 \\D\n>> 2 \\\\D\n>> 3 \"\\D\"\n>> \\.\nCOPY 3\npostgres=# TABLE copy_default ;\n id | value\n----+-------\n 1 | test\n 2 | \\D\n 3 | \"D\"\n(3 rows)\n\npostgres=# TRUNCATE copy_default ;\nTRUNCATE TABLE\npostgres=# copy copy_default from stdin with (format csv, DEFAULT '\\D');\nEnter data to be copied followed by a newline.\nEnd
with a backslash and a period on a line by itself, or an EOF signal.\n>> 1,\\D\n>> 2,\\\\D\n>> 3,\"\\D\"\n>> \\.\nCOPY 3\npostgres=# TABLE copy_default ;\n id | value\n----+-------\n 1 | test\n 2 | \\\\D\n 3 | \\D\n(3 rows)\n\nIf you do not specify DEFAULT in COPY FROM, it will have no default value\nfor\nthat option. So, if you try to load \\D in CSV mode, then it will load the\nliteral value:\n\npostgres=# CREATE TEMP TABLE copy (id integer primary key, value text\ndefault 'test');\nCREATE TABLE\npostgres=# copy copy from stdin with (format csv);\nEnter data to be copied followed by a newline.\nEnd with a backslash and a period on a line by itself, or an EOF signal.\n>> 1,\\D\n>> 2,\\\\D\n>> 3,\"\\D\"\n>> \\.\nCOPY 3\npostgres=# TABLE copy ;\n id | value\n----+-------\n 1 | \\D\n 2 | \\\\D\n 3 | \\D\n(3 rows)\n\n\nDoes that address your concerns?\n\nI am attaching the new patch, containing the above test in the regress\nsuite.\n\nBest regards,\nIsrael.\n\nEm ter., 16 de ago. de 2022 às 17:27, Andrew Dunstan <andrew@dunslane.net>\nescreveu:\n\n>\n> On 2022-08-16 Tu 14:12, Israel Barth Rubio wrote:\n> > Hello all,\n> >\n> > With the current implementation of COPY FROM in PostgreSQL we are able to\n> > load the DEFAULT value/expression of a column if the column is absent\n> > in the\n> > list of specified columns. We are not able to explicitly ask that\n> > PostgreSQL uses\n> > the DEFAULT value/expression in a column that is being fetched from\n> > the input\n> > file, though.\n> >\n> > This patch adds support for handling DEFAULT values in COPY FROM.
It\n> > works\n> > similarly to NULL in COPY FROM: whenever the marker that was set for\n> > DEFAULT\n> > value/expression is read from the input stream, it will evaluate the\n> > DEFAULT\n> > value/expression of the corresponding column.\n> >\n> > I'm currently working as a support engineer, and both me and some\n> > customers had\n> > already faced a situation where we missed an implementation like this\n> > in COPY\n> > FROM, and had to work around that by using an input file where the\n> > column which\n> > has a DEFAULT value/expression was removed.\n> >\n> > That does not solve all issues though, as it might be the case that we\n> > just want a\n> > DEFAULT value to take place if no other value was set for the column\n> > in the input\n> > file, meaning we would like to have a column in the input file that\n> > sometimes assume\n> > the DEFAULT value/expression, and sometimes assume an actual given value.\n> >\n> > The implementation was performed about one month ago and included all\n> > regression\n> > tests regarding the changes that were introduced. It was just rebased\n> > on top of the\n> > master branch before submitting this patch, and all tests are still\n> > succeeding.\n> >\n> > The implementation takes advantage of the logic that was already\n> > implemented to\n> > handle DEFAULT values for missing columns in COPY FROM. I just\n> > modified it to\n> > make it available the DEFAULT values/expressions for all columns\n> > instead of only\n> > for the ones that were missing in the specification. I had to change\n> > the variables\n> > accordingly, so it would index the correct positions in the new array\n> > of DEFAULT\n> > values/expressions.\n> >\n> > Besides that, I also copied and pasted most of the checks that are\n> > performed for the\n> > NULL feature of COPY FROM, as the DEFAULT behaves somehow similarly.\n> >\n> >\n>\n>\n> Interesting, and probably useful.
I've only had a brief look, but it's\n> important that the default marker not be quoted in CSV mode (c.f. NULL)\n> -f it is it should be taken as a literal rather than a special value.\n> Maybe that's taken care of, but there should at least be a test for it,\n> which I didn't see.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Wed, 17 Aug 2022 18:12:04 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-08-17 We 17:12, Israel Barth Rubio wrote:\n>\n>\n> Does that address your concerns?\n>\n> I am attaching the new patch, containing the above test in the regress\n> suite.\n\n\nThanks, yes, that all looks sane.\n\n\nPlease add this to the next CommitFest if you haven't already done so.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 17:56:07 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n\n> On 2022-08-16 Tu 14:12, Israel Barth Rubio wrote:\n>> Hello all,\n>>\n>> With the current implementation of COPY FROM in PostgreSQL we are\n>> able to load the DEFAULT value/expression of a column if the column\n>> is absent in the list of specified columns. We are not able to\n>> explicitly ask that PostgreSQL uses the DEFAULT value/expression in a\n>> column that is being fetched from the input file, though.\n>>\n>> This patch adds support for handling DEFAULT values in COPY FROM. It\n>> works similarly to NULL in COPY FROM: whenever the marker that was\n>> set for DEFAULT value/expression is read from the input stream, it\n>> will evaluate the DEFAULT value/expression of the corresponding\n>> column.\n[…]\n> Interesting, and probably useful. I've only had a brief look, but it's\n> important that the default marker not be quoted in CSV mode (c.f. NULL)\n> -f it is it should be taken as a literal rather than a special value.\n\nFor the NULL marker that can be overridden for individual columns with\nthe FORCE(_NOT)_NULL option. This feature should have a similar\nFORCE(_NOT)_DEFAULT option to allow the DEFAULT marker to be ignored, or\nrecognised even when quoted, respectively.\n\n- ilmari\n\n\n",
"msg_date": "Thu, 18 Aug 2022 10:55:58 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello,\n\nThanks for your review. I submitted the patch to the next commit fest\n(https://commitfest.postgresql.org/39/3822/).\n\nRegards,\nIsrael.\n\nEm qua., 17 de ago. de 2022 às 18:56, Andrew Dunstan <andrew@dunslane.net>\nescreveu:\n\n>\n> On 2022-08-17 We 17:12, Israel Barth Rubio wrote:\n> >\n> >\n> > Does that address your concerns?\n> >\n> > I am attaching the new patch, containing the above test in the regress\n> > suite.\n>\n>\n> Thanks, yes, that all looks sane.\n>\n>\n> Please add this to the next CommitFest if you haven't already done so.\n>\n>\n> cheers\n>\n>\n> andrew\n>\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n>",
"msg_date": "Thu, 18 Aug 2022 11:36:05 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Ilmari,\n\nThanks for checking it, too. I can study to implement these changes\nto include a way of overriding the behavior for the given columns.\n\nRegards,\nIsrael.\n\nEm qui., 18 de ago. de 2022 às 06:56, Dagfinn Ilmari Mannsåker <\nilmari@ilmari.org> escreveu:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>\n> > On 2022-08-16 Tu 14:12, Israel Barth Rubio wrote:\n> >> Hello all,\n> >>\n> >> With the current implementation of COPY FROM in PostgreSQL we are\n> >> able to load the DEFAULT value/expression of a column if the column\n> >> is absent in the list of specified columns. We are not able to\n> >> explicitly ask that PostgreSQL uses the DEFAULT value/expression in a\n> >> column that is being fetched from the input file, though.\n> >>\n> >> This patch adds support for handling DEFAULT values in COPY FROM. It\n> >> works similarly to NULL in COPY FROM: whenever the marker that was\n> >> set for DEFAULT value/expression is read from the input stream, it\n> >> will evaluate the DEFAULT value/expression of the corresponding\n> >> column.\n> […]\n> > Interesting, and probably useful. I've only had a brief look, but it's\n> > important that the default marker not be quoted in CSV mode (c.f. NULL)\n> > -f it is it should be taken as a literal rather than a special value.\n>\n> For the NULL marker that can be overridden for individual columns with\n> the FORCE(_NOT)_NULL option. This feature should have a similar\n> FORCE(_NOT)_DEFAULT option to allow the DEFAULT marker to be ignored, or\n> recognised even when quoted, respectively.\n>\n> - ilmari\n>",
"msg_date": "Thu, 18 Aug 2022 11:39:45 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-08-18 Th 05:55, Dagfinn Ilmari Mannsåker wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>\n>> On 2022-08-16 Tu 14:12, Israel Barth Rubio wrote:\n>>> Hello all,\n>>>\n>>> With the current implementation of COPY FROM in PostgreSQL we are\n>>> able to load the DEFAULT value/expression of a column if the column\n>>> is absent in the list of specified columns. We are not able to\n>>> explicitly ask that PostgreSQL uses the DEFAULT value/expression in a\n>>> column that is being fetched from the input file, though.\n>>>\n>>> This patch adds support for handling DEFAULT values in COPY FROM. It\n>>> works similarly to NULL in COPY FROM: whenever the marker that was\n>>> set for DEFAULT value/expression is read from the input stream, it\n>>> will evaluate the DEFAULT value/expression of the corresponding\n>>> column.\n> […]\n>> Interesting, and probably useful. I've only had a brief look, but it's\n>> important that the default marker not be quoted in CSV mode (c.f. NULL)\n>> -f it is it should be taken as a literal rather than a special value.\n> For the NULL marker that can be overridden for individual columns with\n> the FORCE(_NOT)_NULL option. This feature should have a similar\n> FORCE(_NOT)_DEFAULT option to allow the DEFAULT marker to be ignored, or\n> recognised even when quoted, respectively.\n>\n\n\nThat seems to be over-egging the pudding somewhat. FORCE_NOT_DEFAULT\nshould not be necessary at all, since here if there's no default\nspecified nothing will be taken as the default. I suppose a quoted\ndefault is just faintly possible, but I'd like a concrete example of a\nproducer that emitted it.\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 18 Aug 2022 11:01:48 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-08-17 We 17:12, Israel Barth Rubio wrote:\n> Hello Andrew,\n>\n> Thanks for reviewing this patch\n[...]\n>\n> I am attaching the new patch, containing the above test in the regress\n> suite.\n>\n\n\nThanks, this looks good but there are some things that need attention:\n\n. There needs to be a check that this is being used with COPY FROM, and\nthe restriction needs to be stated in the docs and tested for. c.f.\nFORCE NULL.\n\n. There needs to be support for this in psql's tab_complete.c, and\nappropriate tests added\n\n. There needs to be support for it in contrib/file_fdw/file_fdw.c, and a\ntest added\n\n. The tests should include psql's \\copy as well as sql COPY\n\n. I'm not sure we need a separate regression test file for this.\nProbably these tests can go at the end of src/test/regress/sql/copy2.sql.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 18:29:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Andrew,\n\n> . There needs to be a check that this is being used with COPY FROM, and\n> the restriction needs to be stated in the docs and tested for. c.f.\n> FORCE NULL.\n>\n> . There needs to be support for this in psql's tab_complete.c, and\n> appropriate tests added\n>\n> . There needs to be support for it in contrib/file_fdw/file_fdw.c, and a\n> test added\n>\n> . The tests should include psql's \\copy as well as sql COPY\n>\n> . I'm not sure we need a separate regression test file for this.\n> Probably these tests can go at the end of src/test/regress/sql/copy2.sql.\n\nThanks for your review! I have applied the suggested changes, and I'm\nsubmitting the new patch version.\n\nKind regards,\nIsrael.",
"msg_date": "Mon, 26 Sep 2022 12:12:15 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 8:12 AM Israel Barth Rubio <barthisrael@gmail.com>\nwrote:\n\n> Hello Andrew,\n>\n> > . There needs to be a check that this is being used with COPY FROM, and\n> > the restriction needs to be stated in the docs and tested for. c.f.\n> > FORCE NULL.\n> >\n> > . There needs to be support for this in psql's tab_complete.c, and\n> > appropriate tests added\n> >\n> > . There needs to be support for it in contrib/file_fdw/file_fdw.c, and a\n> > test added\n> >\n> > . The tests should include psql's \\copy as well as sql COPY\n> >\n> > . I'm not sure we need a separate regression test file for this.\n> > Probably these tests can go at the end of src/test/regress/sql/copy2.sql.\n>\n> Thanks for your review! I have applied the suggested changes, and I'm\n> submitting the new patch version.\n>\n> Kind regards,\n> Israel.\n>\n\nHi,\n\n+ /* attribute is NOT to be copied from input */\n\nI think saying `is NOT copied from input` should suffice.\n\n+ defaults = (bool *) palloc0(num_phys_attrs * sizeof(bool));\n+ MemSet(defaults, false, num_phys_attrs * sizeof(bool));\n\nIs the MemSet() call necessary ?\n\n+ /* fieldno is 0-index and attnum is 1-index */\n\n0-index -> 0-indexed\n\nCheers",
"msg_date": "Mon, 26 Sep 2022 08:23:02 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-26 12:12:15 -0300, Israel Barth Rubio wrote:\n> Thanks for your review! I have applied the suggested changes, and I'm\n> submitting the new patch version.\n\ncfbot shows that tests started failing with this version:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3822\n\nhttps://cirrus-ci.com/task/5354378189078528?logs=test_world#L267\n\n[11:03:09.595] ============== running regression test queries ==============\n[11:03:09.595] test file_fdw ... FAILED (test process exited with exit code 2) 441 ms\n[11:03:09.595] ============== shutting down postmaster ==============\n[11:03:09.595]\n[11:03:09.595] ======================\n[11:03:09.595] 1 of 1 tests failed.\n[11:03:09.595] ======================\n[11:03:09.595]\n[11:03:09.595] The differences that caused some tests to fail can be viewed in the\n[11:03:09.595] file \"/tmp/cirrus-ci-build/build/testrun/file_fdw/regress/regression.diffs\". A copy of the test summary that you see\n[11:03:09.595] above is saved in the file \"/tmp/cirrus-ci-build/build/testrun/file_fdw/regress/regression.out\".\n[11:03:09.595]\n[11:03:09.595] # test failed\n\nThe reason for the failure is a crash:\nhttps://api.cirrus-ci.com/v1/artifact/task/5354378189078528/testrun/build/testrun/file_fdw/regress/log/postmaster.log\n\n2022-09-30 11:01:29.228 UTC client backend[26885] pg_regress/file_fdw ERROR: cannot insert into foreign table \"p1\"\n2022-09-30 11:01:29.228 UTC client backend[26885] pg_regress/file_fdw STATEMENT: UPDATE pt set a = 1 where a = 2;\nTRAP: FailedAssertion(\"CurrentMemoryContext == econtext->ecxt_per_tuple_memory\", File: \"../src/backend/commands/copyfromparse.c\", Line: 956, PID: 26885)\npostgres: postgres regression [local] SELECT(ExceptionalCondition+0x8d)[0x559ed2fdf600]\npostgres: postgres regression [local]
SELECT(NextCopyFrom+0x3e4)[0x559ed2c4e3cb]\n/tmp/cirrus-ci-build/build/tmp_install/usr/local/lib/x86_64-linux-gnu/postgresql/file_fdw.so(+0x2eef)[0x7ff42d072eef]\npostgres: postgres regression [local] SELECT(+0x2cc400)[0x559ed2cff400]\npostgres: postgres regression [local] SELECT(+0x2ba0eb)[0x559ed2ced0eb]\npostgres: postgres regression [local] SELECT(ExecScan+0x6d)[0x559ed2ced178]\npostgres: postgres regression [local] SELECT(+0x2cc43e)[0x559ed2cff43e]\npostgres: postgres regression [local] SELECT(+0x2af6d5)[0x559ed2ce26d5]\npostgres: postgres regression [local] SELECT(standard_ExecutorRun+0x15f)[0x559ed2ce28b0]\npostgres: postgres regression [local] SELECT(ExecutorRun+0x25)[0x559ed2ce297e]\npostgres: postgres regression [local] SELECT(+0x47275b)[0x559ed2ea575b]\npostgres: postgres regression [local] SELECT(PortalRun+0x307)[0x559ed2ea71af]\npostgres: postgres regression [local] SELECT(+0x47013a)[0x559ed2ea313a]\npostgres: postgres regression [local] SELECT(PostgresMain+0x774)[0x559ed2ea5054]\npostgres: postgres regression [local] SELECT(+0x3d41f4)[0x559ed2e071f4]\npostgres: postgres regression [local] SELECT(+0x3d73a5)[0x559ed2e0a3a5]\npostgres: postgres regression [local] SELECT(+0x3d75b7)[0x559ed2e0a5b7]\npostgres: postgres regression [local] SELECT(PostmasterMain+0x1215)[0x559ed2e0bc52]\npostgres: postgres regression [local] SELECT(main+0x231)[0x559ed2d46f17]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xea)[0x7ff43892dd0a]\npostgres: postgres regression [local] SELECT(_start+0x2a)[0x559ed2b0204a]\n\nA full backtrace is at https://api.cirrus-ci.com/v1/task/5354378189078528/logs/cores.log\n\nRegards,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 2 Oct 2022 10:17:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Zhihong,\n\n> + /* attribute is NOT to be copied from input */\n>\n> I think saying `is NOT copied from input` should suffice.\n>\n> + /* fieldno is 0-index and attnum is 1-index */\n>\n> 0-index -> 0-indexed\n\nI have applied both suggestions, thanks! I'll submit a 4th version\nof the patch soon.\n\n> + defaults = (bool *) palloc0(num_phys_attrs * sizeof(bool));\n> + MemSet(defaults, false, num_phys_attrs * sizeof(bool));\n>\n> Is the MemSet() call necessary ?\n\nI would say it is, so it initializes the array with all flags set to false.\nLater, if it detects attributes that should evaluate their default\nexpression,\nit would set the flag to true.\n\nAm I missing something?\n\nRegards,\nIsrael.\n\n>",
"msg_date": "Fri, 7 Oct 2022 16:09:28 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "On Fri, Oct 7, 2022 at 12:09 PM Israel Barth Rubio <barthisrael@gmail.com>\nwrote:\n\n> Hello Zhihong,\n>\n> > + /* attribute is NOT to be copied from input */\n> >\n> > I think saying `is NOT copied from input` should suffice.\n> >\n> > + /* fieldno is 0-index and attnum is 1-index */\n> >\n> > 0-index -> 0-indexed\n>\n> I have applied both suggestions, thanks! I'll submit a 4th version\n> of the patch soon.\n>\n> > + defaults = (bool *) palloc0(num_phys_attrs * sizeof(bool));\n> > + MemSet(defaults, false, num_phys_attrs * sizeof(bool));\n> >\n> > Is the MemSet() call necessary ?\n>\n> I would say it is, so it initializes the array with all flags set to false.\n> Later, if it detects attributes that should evaluate their default\n> expression,\n> it would set the flag to true.\n>\n> Am I missing something?\n>\n> Regards,\n> Israel.\n>\nHi,\nFor the last question, please take a look at:\n\n#define MemSetAligned(start, val, len) \\\n\nwhich is called by palloc0().",
"msg_date": "Fri, 7 Oct 2022 12:16:22 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Andres,\n\n> cfbot shows that tests started failing with this version:\n>\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3822\n> A full backtrace is at\nhttps://api.cirrus-ci.com/v1/task/5354378189078528/logs/cores.log\n\nThanks for pointing this out. I had initially missed this as my local runs\nof *make check*\nwere working fine, sorry!\n\nI'm attaching a new version of the patch, containing the memory context\nswitches.\n\nRegards,\nIsrael.",
"msg_date": "Fri, 7 Oct 2022 16:17:35 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello Zhihong,\n\n> For the last question, please take a look at:\n>\n> #define MemSetAligned(start, val, len) \\\n>\n> which is called by palloc0().\n\nOh, I totally missed that. Thanks for the heads up!\n\nI'm attaching the new patch version, which contains both the fix\nto the problem reported by Andres, and removes this useless\nMemSet call.\n\nBest regards,\nIsrael.",
"msg_date": "Fri, 7 Oct 2022 17:54:46 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello all,\n\nI'm submitting a new version of the patch. Instead of changing signature\nof several functions in order to use the defaults parameter, it is now\nstoring\nthat in the cstate structure, which is already passed to all functions that\nwere previously modified.\n\nBest regards,\nIsrael.\n\nEm sex., 7 de out. de 2022 às 17:54, Israel Barth Rubio <\nbarthisrael@gmail.com> escreveu:\n\n> Hello Zhihong,\n>\n> > For the last question, please take a look at:\n> >\n> > #define MemSetAligned(start, val, len) \\\n> >\n> > which is called by palloc0().\n>\n> Oh, I totally missed that. Thanks for the heads up!\n>\n> I'm attaching the new patch version, which contains both the fix\n> to the problem reported by Andres, and removes this useless\n> MemSet call.\n>\n> Best regards,\n> Israel.\n>",
"msg_date": "Fri, 2 Dec 2022 11:11:28 -0300",
"msg_from": "Israel Barth Rubio <barthisrael@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-12-02 Fr 09:11, Israel Barth Rubio wrote:\n> Hello all,\n>\n> I'm submitting a new version of the patch. Instead of changing signature\n> of several functions in order to use the defaults parameter, it is now\n> storing\n> that in the cstate structure, which is already passed to all functions\n> that\n> were previously modified.\n>\n\nI'm reviewing this and it looks in pretty good shape. I notice that in\nfile_fdw.c:fileIterateForeignScan() we unconditionally generate the\nestate, switch context etc, whether or not there is a default option\nused. I guess there's no harm in that, and the performance impact should\nbe minimal, but I thought it worth mentioning, as it's probably not\nstrictly necessary.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 9 Jan 2023 08:52:41 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "\nOn 2022-12-02 Fr 09:11, Israel Barth Rubio wrote:\n> Hello all,\n>\n> I'm submitting a new version of the patch. Instead of changing signature\n> of several functions in order to use the defaults parameter, it is now \n> storing\n> that in the cstate structure, which is already passed to all functions \n> that\n> were previously modified.\n>\n\nThanks, committed.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 13 Mar 2023 10:15:38 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "Hello,\n13.03.2023 17:15, Andrew Dunstan wrote:\n>\n> On 2022-12-02 Fr 09:11, Israel Barth Rubio wrote:\n>> Hello all,\n>>\n>> I'm submitting a new version of the patch. Instead of changing signature\n>> of several functions in order to use the defaults parameter, it is now storing\n>> that in the cstate structure, which is already passed to all functions that\n>> were previously modified.\n>>\n>\n> Thanks, committed.\n\nPlease look at the query:\ncreate table t (f1 int);\ncopy t from stdin with (format csv, default '\\D');\n1,\\D\n\nthat invokes an assertion failure after 9f8377f7a:\nCore was generated by `postgres: law regression [local] \nCOPY '.\nProgram terminated with signal SIGABRT, Aborted.\n\nwarning: Section `.reg-xstate/3253881' in core file too small.\n#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140665061189440) \nat ./nptl/pthread_kill.c:44\n44 ./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0 __pthread_kill_implementation (no_tid=0, signo=6, threadid=140665061189440) \nat ./nptl/pthread_kill.c:44\n#1 __pthread_kill_internal (signo=6, threadid=140665061189440) at \n./nptl/pthread_kill.c:78\n#2 __GI___pthread_kill (threadid=140665061189440, signo=signo@entry=6) at \n./nptl/pthread_kill.c:89\n#3 0x00007fef2250e476 in __GI_raise (sig=sig@entry=6) at \n../sysdeps/posix/raise.c:26\n#4 0x00007fef224f47f3 in __GI_abort () at ./stdlib/abort.c:79\n#5 0x00005600fd395750 in ExceptionalCondition (\n conditionName=conditionName@entry=0x5600fd3fa751 \"n >= 0 && n < list->length\",\n fileName=fileName@entry=0x5600fd416db8 \n\"../../../src/include/nodes/pg_list.h\", lineNumber=lineNumber@entry=280)\n at assert.c:66\n#6 0x00005600fd02626d in list_nth_cell (n=<optimized out>, list=<optimized out>)\n at ../../../src/include/nodes/pg_list.h:280\n#7 list_nth_int (n=<optimized out>, list=<optimized out>) at \n../../../src/include/nodes/pg_list.h:313\n#8 CopyReadAttributesCSV (cstate=<optimized out>) at 
copyfromparse.c:1905\n#9 0x00005600fd0265a5 in NextCopyFromRawFields (cstate=0x5600febdd238, \nfields=0x7fff12ef7130, nfields=0x7fff12ef712c)\n at copyfromparse.c:833\n#10 0x00005600fd0267f9 in NextCopyFrom (cstate=cstate@entry=0x5600febdd238, \necontext=econtext@entry=0x5600fec9c5c8,\n values=0x5600febdd5c8, nulls=0x5600febdd5d0) at copyfromparse.c:885\n#11 0x00005600fd0234db in CopyFrom (cstate=cstate@entry=0x5600febdd238) at \ncopyfrom.c:989\n#12 0x00005600fd0222e5 in DoCopy (pstate=0x5600febdc568, stmt=0x5600febb2d58, \nstmt_location=0, stmt_len=49,\n processed=0x7fff12ef7340) at copy.c:308\n#13 0x00005600fd25c5e9 in standard_ProcessUtility (pstmt=0x5600febb2e78,\n queryString=0x5600febb2178 \"copy t from stdin with (format csv, default \n'\\\\D');\", readOnlyTree=<optimized out>,\n context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \ndest=0x5600febb3138, qc=0x7fff12ef7600)\n at utility.c:742\n#14 0x00005600fd25a9f1 in PortalRunUtility (portal=portal@entry=0x5600fec4ea48, \npstmt=pstmt@entry=0x5600febb2e78,\n isTopLevel=isTopLevel@entry=true, \nsetHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5600febb3138,\n qc=qc@entry=0x7fff12ef7600) at pquery.c:1158\n#15 0x00005600fd25ab2d in PortalRunMulti (portal=portal@entry=0x5600fec4ea48, \nisTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x5600febb3138,\n altdest=altdest@entry=0x5600febb3138, qc=qc@entry=0x7fff12ef7600) at \npquery.c:1315\n#16 0x00005600fd25b1c1 in PortalRun (portal=portal@entry=0x5600fec4ea48, \ncount=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, \ndest=dest@entry=0x5600febb3138,\n altdest=altdest@entry=0x5600febb3138, qc=0x7fff12ef7600) at pquery.c:791\n#17 0x00005600fd256f34 in exec_simple_query (\n query_string=0x5600febb2178 \"copy t from stdin with (format csv, default \n'\\\\D');\") at postgres.c:1240\n#18 0x00005600fd258ae7 in PostgresMain (dbname=<optimized out>, 
\nusername=<optimized out>) at postgres.c:4572\n#19 0x00005600fd1c2d3f in BackendRun (port=0x5600febe05c0, port=0x5600febe05c0) \nat postmaster.c:4461\n#20 BackendStartup (port=0x5600febe05c0) at postmaster.c:4189\n#21 ServerLoop () at postmaster.c:1779\n#22 0x00005600fd1c3d63 in PostmasterMain (argc=argc@entry=3, \nargv=argv@entry=0x5600febad640) at postmaster.c:1463\n#23 0x00005600fced4fc6 in main (argc=3, argv=0x5600febad640) at main.c:200\n\nBest regards,\nAlexander",
"msg_date": "Wed, 15 Mar 2023 20:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "On 2023-03-15 We 13:00, Alexander Lakhin wrote:\n> Hello,\n> 13.03.2023 17:15, Andrew Dunstan wrote:\n>>\n>> On 2022-12-02 Fr 09:11, Israel Barth Rubio wrote:\n>>> Hello all,\n>>>\n>>> I'm submitting a new version of the patch. Instead of changing \n>>> signature\n>>> of several functions in order to use the defaults parameter, it is \n>>> now storing\n>>> that in the cstate structure, which is already passed to all \n>>> functions that\n>>> were previously modified.\n>>>\n>>\n>> Thanks, committed.\n>\n> Please look at the query:\n> create table t (f1 int);\n> copy t from stdin with (format csv, default '\\D');\n> 1,\\D\n>\n> that invokes an assertion failure after 9f8377f7a:\n> Core was generated by `postgres: law regression [local] \n> COPY '.\n> Program terminated with signal SIGABRT, Aborted.\n>\n> warning: Section `.reg-xstate/3253881' in core file too small.\n> #0 __pthread_kill_implementation (no_tid=0, signo=6, \n> threadid=140665061189440) at ./nptl/pthread_kill.c:44\n> 44 ./nptl/pthread_kill.c: No such file or directory.\n> (gdb) bt\n> #0 __pthread_kill_implementation (no_tid=0, signo=6, \n> threadid=140665061189440) at ./nptl/pthread_kill.c:44\n> #1 __pthread_kill_internal (signo=6, threadid=140665061189440) at \n> ./nptl/pthread_kill.c:78\n> #2 __GI___pthread_kill (threadid=140665061189440, \n> signo=signo@entry=6) at ./nptl/pthread_kill.c:89\n> #3 0x00007fef2250e476 in __GI_raise (sig=sig@entry=6) at \n> ../sysdeps/posix/raise.c:26\n> #4 0x00007fef224f47f3 in __GI_abort () at ./stdlib/abort.c:79\n> #5 0x00005600fd395750 in ExceptionalCondition (\n> conditionName=conditionName@entry=0x5600fd3fa751 \"n >= 0 && n < \n> list->length\",\n> fileName=fileName@entry=0x5600fd416db8 \n> \"../../../src/include/nodes/pg_list.h\", lineNumber=lineNumber@entry=280)\n> at assert.c:66\n> #6 0x00005600fd02626d in list_nth_cell (n=<optimized out>, \n> list=<optimized out>)\n> at ../../../src/include/nodes/pg_list.h:280\n> #7 list_nth_int (n=<optimized 
out>, list=<optimized out>) at \n> ../../../src/include/nodes/pg_list.h:313\n> #8 CopyReadAttributesCSV (cstate=<optimized out>) at copyfromparse.c:1905\n> #9 0x00005600fd0265a5 in NextCopyFromRawFields \n> (cstate=0x5600febdd238, fields=0x7fff12ef7130, nfields=0x7fff12ef712c)\n> at copyfromparse.c:833\n> #10 0x00005600fd0267f9 in NextCopyFrom \n> (cstate=cstate@entry=0x5600febdd238, \n> econtext=econtext@entry=0x5600fec9c5c8,\n> values=0x5600febdd5c8, nulls=0x5600febdd5d0) at copyfromparse.c:885\n> #11 0x00005600fd0234db in CopyFrom \n> (cstate=cstate@entry=0x5600febdd238) at copyfrom.c:989\n> #12 0x00005600fd0222e5 in DoCopy (pstate=0x5600febdc568, \n> stmt=0x5600febb2d58, stmt_location=0, stmt_len=49,\n> processed=0x7fff12ef7340) at copy.c:308\n> #13 0x00005600fd25c5e9 in standard_ProcessUtility (pstmt=0x5600febb2e78,\n> queryString=0x5600febb2178 \"copy t from stdin with (format csv, \n> default '\\\\D');\", readOnlyTree=<optimized out>,\n> context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, \n> dest=0x5600febb3138, qc=0x7fff12ef7600)\n> at utility.c:742\n> #14 0x00005600fd25a9f1 in PortalRunUtility \n> (portal=portal@entry=0x5600fec4ea48, pstmt=pstmt@entry=0x5600febb2e78,\n> isTopLevel=isTopLevel@entry=true, \n> setHoldSnapshot=setHoldSnapshot@entry=false, \n> dest=dest@entry=0x5600febb3138,\n> qc=qc@entry=0x7fff12ef7600) at pquery.c:1158\n> #15 0x00005600fd25ab2d in PortalRunMulti \n> (portal=portal@entry=0x5600fec4ea48, isTopLevel=isTopLevel@entry=true,\n> setHoldSnapshot=setHoldSnapshot@entry=false, \n> dest=dest@entry=0x5600febb3138,\n> altdest=altdest@entry=0x5600febb3138, qc=qc@entry=0x7fff12ef7600) \n> at pquery.c:1315\n> #16 0x00005600fd25b1c1 in PortalRun \n> (portal=portal@entry=0x5600fec4ea48, \n> count=count@entry=9223372036854775807,\n> isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, \n> dest=dest@entry=0x5600febb3138,\n> altdest=altdest@entry=0x5600febb3138, qc=0x7fff12ef7600) at \n> pquery.c:791\n> #17 0x00005600fd256f34 
in exec_simple_query (\n> query_string=0x5600febb2178 \"copy t from stdin with (format csv, \n> default '\\\\D');\") at postgres.c:1240\n> #18 0x00005600fd258ae7 in PostgresMain (dbname=<optimized out>, \n> username=<optimized out>) at postgres.c:4572\n> #19 0x00005600fd1c2d3f in BackendRun (port=0x5600febe05c0, \n> port=0x5600febe05c0) at postmaster.c:4461\n> #20 BackendStartup (port=0x5600febe05c0) at postmaster.c:4189\n> #21 ServerLoop () at postmaster.c:1779\n> #22 0x00005600fd1c3d63 in PostmasterMain (argc=argc@entry=3, \n> argv=argv@entry=0x5600febad640) at postmaster.c:1463\n> #23 0x00005600fced4fc6 in main (argc=3, argv=0x5600febad640) at main.c:200\n>\n>\n\nThanks for the test case. Will fix.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 15 Mar 2023 16:43:33 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
},
{
"msg_contents": "On 2023-03-15 We 13:00, Alexander Lakhin wrote:\n> Hello,\n> 13.03.2023 17:15, Andrew Dunstan wrote:\n>>\n>> On 2022-12-02 Fr 09:11, Israel Barth Rubio wrote:\n>>> Hello all,\n>>>\n>>> I'm submitting a new version of the patch. Instead of changing \n>>> signature\n>>> of several functions in order to use the defaults parameter, it is \n>>> now storing\n>>> that in the cstate structure, which is already passed to all \n>>> functions that\n>>> were previously modified.\n>>>\n>>\n>> Thanks, committed.\n>\n> Please look at the query:\n> create table t (f1 int);\n> copy t from stdin with (format csv, default '\\D');\n> 1,\\D\n>\n> that invokes an assertion failure after 9f8377f7a:\n> Core was generated by `postgres: law regression [local] \n> COPY '.\n> Program terminated with signal SIGABRT, Aborted.\n>\n>\n\nFix pushed, thanks for the report.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Wed, 15 Mar 2023 17:21:49 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Add support for DEFAULT specification in COPY FROM"
}
] |
[
{
"msg_contents": "Hi,\n\n1. When using extended PGroonga\n\nCREATE EXTENSION pgroonga;\n\nCREATE TABLE memos (\n id boolean,\n content varchar\n);\n\nCREATE INDEX idxA ON memos USING pgroonga (id);\n\n2. Disable bitmapscan and seqscan:\n\nSET enable_seqscan=off;\nSET enable_indexscan=on;\nSET enable_bitmapscan=off;\n\n3. Neither ID = 'f' nor id= 't' can use the index correctly.\n\npostgres=# explain select * from memos where id='f';\n QUERY PLAN\n--------------------------------------------------------------------------\n Seq Scan on memos (cost=10000000000.00..10000000001.06 rows=3 width=33)\n Filter: (NOT id)\n(2 rows)\n\npostgres=# explain select * from memos where id='t';\n QUERY PLAN\n--------------------------------------------------------------------------\n Seq Scan on memos (cost=10000000000.00..10000000001.06 rows=3 width=33)\n Filter: id\n(2 rows)\n\npostgres=# explain select * from memos where id>='t';\n QUERY PLAN\n-------------------------------------------------------------------\n Index Scan using idxa on memos (cost=0.00..4.01 rows=2 width=33)\n Index Cond: (id >= true)\n(2 rows)\n\n\n\nThe reason is that these expressions are converted to BoolExpr and Var.\nmatch_clause_to_indexcol does not use them to check boolean-index.\n\npatch attached.\n\n--\nQuan Zongliang\nBeijing Vastdata",
"msg_date": "Wed, 17 Aug 2022 09:43:54 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "Bug: When user-defined AM is used, the index path cannot be selected\n correctly"
},
{
"msg_contents": "Quan Zongliang <quanzongliang@yeah.net> writes:\n> 1. When using extended PGroonga\n> ...\n> 3. Neither ID = 'f' nor id= 't' can use the index correctly.\n\nThis works fine for btree indexes. I think the actual problem\nis that IsBooleanOpfamily only accepts the btree and hash\nopclasses, and that's what needs to be improved. Your proposed\npatch fails to do that, which makes it just a crude hack that\nsolves some aspects of the issue (and probably breaks other\nthings).\n\nIt might work to change IsBooleanOpfamily so that it checks to\nsee whether BooleanEqualOperator is a member of the opclass.\nThat's basically what we need to know before we dare generate\nsubstitute index clauses. It's kind of an expensive test\nthough, and the existing coding assumes that IsBooleanOpfamily\nis cheap ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 16 Aug 2022 22:03:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: When user-defined AM is used,\n the index path cannot be selected correctly"
},
{
"msg_contents": "On 2022/8/17 10:03, Tom Lane wrote:\n> Quan Zongliang <quanzongliang@yeah.net> writes:\n>> 1. When using extended PGroonga\n>> ...\n>> 3. Neither ID = 'f' nor id= 't' can use the index correctly.\n> \n> This works fine for btree indexes. I think the actual problem\n> is that IsBooleanOpfamily only accepts the btree and hash\n> opclasses, and that's what needs to be improved. Your proposed\n> patch fails to do that, which makes it just a crude hack that\n> solves some aspects of the issue (and probably breaks other\n> things).\n> \n> It might work to change IsBooleanOpfamily so that it checks to\n> see whether BooleanEqualOperator is a member of the opclass.\n> That's basically what we need to know before we dare generate\n> substitute index clauses. It's kind of an expensive test\n> though, and the existing coding assumes that IsBooleanOpfamily\n> is cheap ...\n> \n> \t\t\tregards, tom lane\n\nNew patch attached.\n\nIt seems that partitions do not use AM other than btree and hash.\nRewrite only indxpath.c and check if it is a custom AM.",
"msg_date": "Wed, 17 Aug 2022 17:11:43 +0800",
"msg_from": "Quan Zongliang <quanzongliang@yeah.net>",
"msg_from_op": true,
"msg_subject": "Re: Bug: When user-defined AM is used, the index path cannot be\n selected correctly"
},
{
"msg_contents": "Quan Zongliang <quanzongliang@yeah.net> writes:\n> New patch attached.\n> It seems that partitions do not use AM other than btree and hash.\n> Rewrite only indxpath.c and check if it is a custom AM.\n\nThis seems drastically underdocumented, and the test you're using\nfor extension opclasses is wrong. What we need to know before\napplying match_boolean_index_clause is that a clause using\nBooleanEqualOperator will be valid for the index. That's got next\ndoor to nothing to do with whether the opclass is default for\nthe index AM. A non-default opclass might support that operator,\nand conversely a default one might not (although I concede it's\nnot that easy to imagine what other set of operators-on-boolean\nan extension opclass might be interested in).\n\nI think we need something more like the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 01 Sep 2022 16:33:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: When user-defined AM is used,\n the index path cannot be selected correctly"
},
{
"msg_contents": "I wrote:\n> I think we need something more like the attached.\n\nMeh -- serves me right for not doing check-world before sending.\nThe patch causes some plans to change in the btree_gin and btree_gist\nmodules; which is good, because that shows that the patch is actually\ndoing what it's supposed to. The fact that your patch didn't make\nthe cfbot unhappy implies that it wasn't triggering for those modules.\nI think the reason is that you did\n\n+ ((amid) >= FirstNormalObjectId && \\\n+ OidIsValid(GetDefaultOpClass(BOOLOID, (amid)))) \\\n\nso that the FirstNormalObjectId cutoff was applied to the AM's OID,\nnot to the opfamily OID, causing it to do the wrong thing for\nextension opclasses over built-in AMs.\n\nThe good news is this means we don't need to worry about making\na test case ...\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 01 Sep 2022 17:22:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug: When user-defined AM is used,\n the index path cannot be selected correctly"
}
] |
[
{
"msg_contents": "Hi,\n\n\n\nMy name is Aravind and I am part of IBM CICS TX product development and support. We have a requirement from one of our customers to use IBM CICS TX with Postgresql 13/14. IBM CICS TX is a Transaction Manager middleware product that is deployed as container on Kubernetes platforms. IBM CICS TX can interact with database products such as DB2, Oracle, MSSQL, Postgresql through XA/Open standards.\n\nCICS TX is a 32bit C runtime product and uses the databases’ 32bit client libraries to perform embedded SQL operations. The customer applications are Embedded SQL C or COBOL programs deployed on CICS TX and CICS TX runtime executes them as transactions ensuring the data integrity.\n\nWe observed there are no 32bit client binaries/libraries available with Postgresql 13/14 and CICS TX require them to interact with the postgresql server. Currently we have tested with Postgresql 10.12.1 and our customer wants to upgrade to Postgresql 13 or 14.\n\nBased on the above requirements and details, we have few questions which require your support.\n\n\n\n 1. Can we get 32bit client binaries/libraries for postgresql 14 ?\n 2. We also found that the libraries can be built by using the postgresql 14 source. Is it possible to build the 32bit client binaries/libraries from the source available ?\n 3. Is there an official support for 32bit client libraries/binaries built out of source for customers ?\n 4. Can the postgresql 10.12.1 client work with Postgresql 14 server ? Do you still support postgresql 10.12.1 client ?\n\n\n\nThanks & Regards,\nAravind Phaneendra\nCICS TX and TXSeries Development & L3 Support\nIndia Systems Development Labs\nIBM Systems",
"msg_date": "Wed, 17 Aug 2022 03:41:40 +0000",
"msg_from": "Aravind Phaneendra <aphaneen@in.ibm.com>",
"msg_from_op": true,
"msg_subject": "Regarding availability of 32bit client drivers for postgresql 13/14"
}
] |
[
{
"msg_contents": "Hi,\n\n\n\nMy name is Aravind and I am part of IBM CICS TX product development and support. We have a requirement from one of our customers to use IBM CICS TX with Postgresql 13/14. IBM CICS TX is a Transaction Manager middleware product that is deployed as container on Kubernetes platforms. IBM CICS TX can interact with database products such as DB2, Oracle, MSSQL, Postgresql through XA/Open standards.\n\nCICS TX is a 32bit C runtime product and uses the databases’ 32bit client libraries to perform embedded SQL operations. The customer applications are Embedded SQL C or COBOL programs deployed on CICS TX and CICS TX runtime executes them as transactions ensuring the data integrity.\n\nWe observed there are no 32bit client binaries/libraries available with Postgresql 13/14 and CICS TX require them to interact with the postgresql server. Currently we have tested with Postgresql 10.12.1 and our customer wants to upgrade to Postgresql 13 or 14.\n\nBased on the above requirements and details, we have few questions which require your support.\n\n\n\n 1. Can we get 32bit client binaries/libraries for postgresql 14 ?\n 2. We also found that the libraries can be built by using the postgresql 14 source. Is it possible to build the 32bit client binaries/libraries from the source available ?\n 3. Is there an official support for 32bit client libraries/binaries built out of source for customers ?\n 4. Can the postgresql 10.12.1 client work with Postgresql 14 server ? Do you still support postgresql 10.12.1 client ?\n\n\nThanks & Regards,\nAravind Phaneendra\nCICS TX and TXSeries Development & L3 Support\nIndia Systems Development Labs\nIBM Systems",
"msg_date": "Wed, 17 Aug 2022 03:43:55 +0000",
"msg_from": "Aravind Phaneendra <aphaneen@in.ibm.com>",
"msg_from_op": true,
"msg_subject": "Regarding availability of 32bit client drivers for postgresql 13/14"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 03:43:55AM +0000, Aravind Phaneendra wrote:\n> Based on the above requirements and details, we have few questions which\n> require your support. \n> \n> 1. Can we get 32bit client binaries/libraries for postgresql 14 ?\n> 2. We also found that the libraries can be built by using the postgresql 14\n> source. Is it possible to build the 32bit client binaries/libraries from\n> the source available ?\n> 3. Is there an official support for 32bit client libraries/binaries built out\n> of source for customers ?\n> 4. Can the postgresql 10.12.1 client work with Postgresql 14 server ? Do you\n> still support postgresql 10.12.1 client ?\n\nThe community produces the source code, and third parties like Debian,\nRed Hat, EDB, and our own packagers build the binaries you are asking\nabout. I think you need to contact wherever you are getting your\nbinaries and ask them about 32-bit support. You can certainly build\n32-bit binaries yourself if you wish.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 07:39:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Regarding availability of 32bit client drivers for postgresql\n 13/14"
}
] |
[
{
"msg_contents": "dummyret hasn't been used in a while (last use removed by 50d22de932, \nand before that 84b6d5f359), and since we are now preferring inline \nfunctions over complex macros, it's unlikely to be needed again.",
"msg_date": "Wed, 17 Aug 2022 07:26:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Remove dummyret definition"
},
{
"msg_contents": "> On 17 Aug 2022, at 07:26, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> dummyret hasn't been used in a while (last use removed by 50d22de932, and before that 84b6d5f359), and since we are now preferring inline functions over complex macros, it's unlikely to be needed again.\n\n+1, I can't see that making a comeback into the code.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 17 Aug 2022 09:01:36 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Remove dummyret definition"
}
] |
[
{
"msg_contents": "Hello,\n\nI've a slightly modified version of test_shm_mq, that I changed to include\na shared fileset. The motivation to do that came because I hit an\nassertion failure with PG15 while doing some development work on BDR and I\nsuspected it to be a PG15 bug.\n\nThe stack trace looks as below:\n\n(lldb) bt\n* thread #1\n * frame #0: 0x00007ff8187b100e libsystem_kernel.dylib`__pthread_kill + 10\n frame #1: 0x00007ff8187e71ff libsystem_pthread.dylib`pthread_kill + 263\n frame #2: 0x00007ff818732d24 libsystem_c.dylib`abort + 123\n frame #3: 0x000000010fce1bab\npostgres`ExceptionalCondition(conditionName=\"pgstat_is_initialized &&\n!pgstat_is_shutdown\", errorType=\"FailedAssertion\", fileName=\"pgstat.c\",\nlineNumber=1227) at assert.c:69:2\n frame #4: 0x000000010fb06412 postgres`pgstat_assert_is_up at\npgstat.c:1227:2\n frame #5: 0x000000010fb0d2c7\npostgres`pgstat_get_entry_ref(kind=PGSTAT_KIND_DATABASE, dboid=0, objoid=0,\ncreate=true, created_entry=0x0000000000000000) at pgstat_shmem.c:406:2\n frame #6: 0x000000010fb07579\npostgres`pgstat_prep_pending_entry(kind=PGSTAT_KIND_DATABASE, dboid=0,\nobjoid=0, created_entry=0x0000000000000000) at pgstat.c:1068:14\n frame #7: 0x000000010fb09cce\npostgres`pgstat_prep_database_pending(dboid=0) at pgstat_database.c:327:14\n frame #8: 0x000000010fb09dff\npostgres`pgstat_report_tempfile(filesize=0) at pgstat_database.c:179:10\n frame #9: 0x000000010fa8dbe9\npostgres`ReportTemporaryFileUsage(path=\"base/pgsql_tmp/pgsql_tmp17312.0.fileset/test_mq_sharefile.0\",\nsize=0) at fd.c:1521:2\n frame #10: 0x000000010fa8db3c\npostgres`PathNameDeleteTemporaryFile(path=\"base/pgsql_tmp/pgsql_tmp17312.0.fileset/test_mq_sharefile.0\",\nerror_on_failure=false) at fd.c:1945:3\n frame #11: 0x000000010fa8d3a8\npostgres`unlink_if_exists_fname(fname=\"base/pgsql_tmp/pgsql_tmp17312.0.fileset/test_mq_sharefile.0\",\nisdir=false, elevel=15) at fd.c:3674:3\n frame #12:
0x000000010fa8d270\npostgres`walkdir(path=\"base/pgsql_tmp/pgsql_tmp17312.0.fileset\",\naction=(postgres`unlink_if_exists_fname at fd.c:3663),\nprocess_symlinks=false, elevel=15) at fd.c:3573:5\n frame #13: 0x000000010fa8d0e2\npostgres`PathNameDeleteTemporaryDir(dirname=\"base/pgsql_tmp/pgsql_tmp17312.0.fileset\")\nat fd.c:1689:2\n frame #14: 0x000000010fa91ac1\npostgres`FileSetDeleteAll(fileset=0x0000000119240870) at fileset.c:165:3\n frame #15: 0x000000010fa92b08\npostgres`SharedFileSetOnDetach(segment=0x00007f93ff00a7c0,\ndatum=4716759152) at sharedfileset.c:119:3\n frame #16: 0x000000010fa96b00\npostgres`dsm_detach(seg=0x00007f93ff00a7c0) at dsm.c:801:3\n frame #17: 0x000000010fa96f51 postgres`dsm_backend_shutdown at\ndsm.c:738:3\n frame #18: 0x000000010fa99402 postgres`shmem_exit(code=1) at ipc.c:259:2\n frame #19: 0x000000010fa99227 postgres`proc_exit_prepare(code=1) at\nipc.c:194:2\n frame #20: 0x000000010fa99133 postgres`proc_exit(code=1) at ipc.c:107:2\n frame #21: 0x000000010fce318c postgres`errfinish(filename=\"postgres.c\",\nlineno=3204, funcname=\"ProcessInterrupts\") at elog.c:661:3\n frame #22: 0x000000010fad7c51 postgres`ProcessInterrupts at\npostgres.c:3201:4\n frame #23: 0x000000011924d85b\ntest_shm_mq.so`test_shm_mq_main(main_arg=1155036180) at worker.c:159:2\n frame #24: 0x000000010f9da11e postgres`StartBackgroundWorker at\nbgworker.c:858:2\n frame #25: 0x000000010f9e80b4\npostgres`do_start_bgworker(rw=0x00007f93ef904080) at postmaster.c:5823:4\n frame #26: 0x000000010f9e2524 postgres`maybe_start_bgworkers at\npostmaster.c:6047:9\n frame #27: 0x000000010f9e0e63\npostgres`sigusr1_handler(postgres_signal_arg=30) at postmaster.c:5204:3\n frame #28: 0x00007ff8187fcdfd libsystem_platform.dylib`_sigtramp + 29\n frame #29: 0x00007ff8187b2d5b libsystem_kernel.dylib`__select + 11\n frame #30: 0x000000010f9e268d postgres`ServerLoop at\npostmaster.c:1770:13\n frame #31: 0x000000010f9e0157 postgres`PostmasterMain(argc=8,\nargv=0x0000600002f30190) at 
postmaster.c:1478:11\n frame #32: 0x000000010f8bc930 postgres`main(argc=8,\nargv=0x0000600002f30190) at main.c:202:3\n frame #33: 0x000000011f7d651e dyld`start + 462\n\nI notice that pgstat_shutdown_hook() is registered as a before-shmem-exit\ncallback. The callback is responsible for detaching from the pgstat shared\nmemory segment. But looks like other parts of the system still expect it to\nbe available during later stages of proc exit.\n\nIt's not clear to me if pgstat shutdown should happen later or code that\ngets executed later in the cycle should not try to use pgstat. It's also\nentirely possible that my usage of SharedFileSet is completely wrong. If\nthat's the case, please let me know the mistake in the usage.\n\nPatch modifying the test case is attached. In order to reproduce the\nproblem quickly, I added a CHECK_FOR_INTERRUPTS() in the test, but I don't\nsee why that would be a bad coding practice.\n\nThanks,\nPavan",
"msg_date": "Wed, 17 Aug 2022 11:15:28 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assertion failure on PG15 with modified test_shm_mq test"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 11:15:28AM +0530, Pavan Deolasee wrote:\n> I notice that pgstat_shutdown_hook() is registered as a before-shmem-exit\n> callback. The callback is responsible for detaching from the pgstat shared\n> memory segment. But looks like other parts of the system still expect it to\n> be available during later stages of proc exit.\n> \n> It's not clear to me if pgstat shutdown should happen later or code that\n> gets executed later in the cycle should not try to use pgstat. It's also\n> entirely possible that my usage of SharedFileSet is completely wrong. If\n> that's the case, please let me know the mistake in the usage.\n\nThat's visibly an issue with shared memory and the stats. I have\nadded an open item. Andres?\n--\nMichael",
"msg_date": "Wed, 17 Aug 2022 15:02:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure on PG15 with modified test_shm_mq test"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 11:15:28 +0530, Pavan Deolasee wrote:\n> I've a slightly modified version of test_shm_mq, that I changed to include\n> a shared fileset. The motivation to do that came because I hit an\n> assertion failure with PG15 while doing some development work on BDR and I\n> suspected it to be a PG15 bug.\n\n> I notice that pgstat_shutdown_hook() is registered as a before-shmem-exit\n> callback. The callback is responsible for detaching from the pgstat shared\n> memory segment. But looks like other parts of the system still expect it to\n> be available during later stages of proc exit.\n\n> It's not clear to me if pgstat shutdown should happen later or code that\n> gets executed later in the cycle should not try to use pgstat. It's also\n> entirely possible that my usage of SharedFileSet is completely wrong. If\n> that's the case, please let me know the mistake in the usage.\n\nI don't think we have the infrastructure for a nice solution to this at the\nmoment - we need a fairly large overhaul of process initialization / shutdown\nto handle these interdependencies nicely.\n\nWe can't move pgstat shutdown into on_dsm callback because that's too late to\nallocate *new* dsm segments, which we might need to do while flushing\nout pending stats.\n\nSee https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fa91d4c91f28f4819dc54f93adbd413a685e366a\nfor a way to avoid the problem.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 17:08:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure on PG15 with modified test_shm_mq test"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 15:02:28 +0900, Michael Paquier wrote:\n> On Wed, Aug 17, 2022 at 11:15:28AM +0530, Pavan Deolasee wrote:\n> > I notice that pgstat_shutdown_hook() is registered as a before-shmem-exit\n> > callback. The callback is responsible for detaching from the pgstat shared\n> > memory segment. But looks like other parts of the system still expect it to\n> > be available during later stages of proc exit.\n> > \n> > It's not clear to me if pgstat shutdown should happen later or code that\n> > gets executed later in the cycle should not try to use pgstat. It's also\n> > entirely possible that my usage of SharedFileSet is completely wrong. If\n> > that's the case, please let me know the mistake in the usage.\n> \n> That's visibly an issue with shared memory and the stats. I have\n> added an open item. Andres?\n\nI don't think there's anything reasonably done about this for 15, as explained\nupthread. We need a big redesign of the shutdown sequence at some point, but\n...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 18:09:46 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure on PG15 with modified test_shm_mq test"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 18, 2022 at 5:38 AM Andres Freund <andres@anarazel.de> wrote:\n\n> I don't think we have the infrastructure for a nice solution to this at the\n> moment - we need a fairly large overhaul of process initialization /\n> shutdown\n> to handle these interdependencies nicely.\n>\n>\nOk, understood.\n\n\n> We can't move pgstat shutdown into on_dsm callback because that's too late\n> to\n> allocate *new* dsm segments, which we might need to do while flushing\n> out pending stats.\n>\n> See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fa91d4c91f28f4819dc54f93adbd413a685e366a\n> for a way to avoid the problem.\n>\n>\nThanks for the hint. I will try that approach. I wonder though if there is\nsomething more we can do. For example, would it make sense to throw a\nWARNING and avoid segfault if pgstat machinery is already shutdown? Just\nworried if the code can be reached from multiple paths and testing all of\nthose would be difficult for extension developers, especially given this\nmay happen in error recovery path.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Thu, 18 Aug 2022 16:58:24 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assertion failure on PG15 with modified test_shm_mq test"
},
{
"msg_contents": "At Thu, 18 Aug 2022 16:58:24 +0530, Pavan Deolasee <pavan.deolasee@gmail.com> wrote in \n> Hi,\n> \n> On Thu, Aug 18, 2022 at 5:38 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> > We can't move pgstat shutdown into on_dsm callback because that's too late\n> > to\n> > allocate *new* dsm segments, which we might need to do while flushing\n> > out pending stats.\n> >\n> > See\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fa91d4c91f28f4819dc54f93adbd413a685e366a\n> > for a way to avoid the problem.\n> >\n> >\n> Thanks for the hint. I will try that approach. I wonder though if there is\n> something more we can do. For example, would it make sense to throw a\n> WARNING and avoid segfault if pgstat machinery is already shutdown? Just\n> worried if the code can be reached from multiple paths and testing all of\n> those would be difficult for extension developers, especially given this\n> may happen in error recovery path.\n\nI'm not sure how extensions can face this problem, but..\n\npgstat is designed not to lose reported numbers. The assertion is\nmanifets that intention. It is not enabled on non-assertion builds\nand pgstat enters undefined state then maybe crash after the assertion\npoint. On the other hand I don't think we want to perform the same\ncheck for the all places the assertion exists on non-assertion builds.\n\nWe cannot simply replace the assertion with ereport().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Aug 2022 13:05:00 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assertion failure on PG15 with modified test_shm_mq test"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nDo you think it will be useful to specify STORAGE and/or COMPRESSION\nfor domains?\n\nAs an example, this will allow creating an alias for TEXT with\nEXTERNAL storage strategy. In other words, to do the same we do with\nALTER TABLE, but for types. This feature is arguably not something\nmost people are going to use, but it shouldn't be difficult to\nimplement and/or maintain either.\n\nThoughts?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 17 Aug 2022 12:43:29 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Proposal: CREATE/ALTER DOMAIN ... STORAGE/COMPRESSION = ..."
},
{
"msg_contents": "On 17.08.22 11:43, Aleksander Alekseev wrote:\n> Do you think it will be useful to specify STORAGE and/or COMPRESSION\n> for domains?\n\nDomains are supposed to a logical construct that restricts the accepted \nvalues for a data type (it's in the name \"domain\"). Expanding that into \na general \"column definition macro\" seems outside its scope. For \nexample, what would be the semantics of this when such a domain is a \nfunction argument or return value?\n\n> As an example, this will allow creating an alias for TEXT with\n> EXTERNAL storage strategy. In other words, to do the same we do with\n> ALTER TABLE, but for types. This feature is arguably not something\n> most people are going to use, but it shouldn't be difficult to\n> implement and/or maintain either.\n\nConsidering how difficult it has been to maintain domains in just their \ncurrent form, I don't believe that.\n\n\n\n\n",
"msg_date": "Sat, 20 Aug 2022 09:47:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: CREATE/ALTER DOMAIN ... STORAGE/COMPRESSION = ..."
},
{
"msg_contents": "Hi!\nI agree, domains are supposed to define data types only and are not meant\nto impact\nhow these types are stored. Storage and compression strategy differ for one\ngiven type\nfrom table to table and must be defined explicitly, except for default.\nAlso, such implicit-like\nstorage and compression definition would very likely be the source of\nerrors or unpredictable\nbehavior while using such data types.\n\nOn Sat, Aug 20, 2022 at 10:47 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 17.08.22 11:43, Aleksander Alekseev wrote:\n> > Do you think it will be useful to specify STORAGE and/or COMPRESSION\n> > for domains?\n>\n> Domains are supposed to a logical construct that restricts the accepted\n> values for a data type (it's in the name \"domain\"). Expanding that into\n> a general \"column definition macro\" seems outside its scope. For\n> example, what would be the semantics of this when such a domain is a\n> function argument or return value?\n>\n> > As an example, this will allow creating an alias for TEXT with\n> > EXTERNAL storage strategy. In other words, to do the same we do with\n> > ALTER TABLE, but for types. This feature is arguably not something\n> > most people are going to use, but it shouldn't be difficult to\n> > implement and/or maintain either.\n>\n> Considering how difficult it has been to maintain domains in just their\n> current form, I don't believe that.\n>\n>\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Sun, 21 Aug 2022 21:04:50 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: CREATE/ALTER DOMAIN ... STORAGE/COMPRESSION = ..."
}
] |
[
{
"msg_contents": "There's been no progress on this in the past discussions.\n\nhttps://www.postgresql.org/message-id/flat/877k1psmpf.fsf%40mailbox.samurai.com\nhttps://www.postgresql.org/message-id/flat/CAApHDvpqBR7u9yzW4yggjG%3DQfN%3DFZsc8Wo2ckokpQtif-%2BiQ2A%40mail.gmail.com#2d900bfe18fce17f97ec1f00800c8e27\nhttps://www.postgresql.org/message-id/flat/MN2PR18MB2927F7B5F690065E1194B258E35D0%40MN2PR18MB2927.namprd18.prod.outlook.com\n\nBut an unfortunate consequence of not fixing the historic issues is that it\nprecludes the possibility that anyone could be expected to notice if they\nintroduce more instances of the same problem (as in the first half of these\npatches). Then the hole which has already been dug becomes deeper, further\nincreasing the burden of fixing the historic issues before being able to use\n-Wshadow.\n\nThe first half of the patches fix shadow variables newly-introduced in v15\n(including one of my own patches), the rest are fixing the lowest hanging fruit\nof the \"short list\" from COPT=-Wshadow=compatible-local\n\nI can't see that any of these are bugs, but it seems like a good goal to move\ntowards allowing use of the -Wshadow* options to help avoid future errors, as\nwell as cleanliness and readability (rather than allowing it to get harder to\nuse -Wshadow).\n\n-- \nJustin",
"msg_date": "Wed, 17 Aug 2022 09:54:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 12:54 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> There's been no progress on this in the past discussions.\n>\n> https://www.postgresql.org/message-id/flat/877k1psmpf.fsf%40mailbox.samurai.com\n> https://www.postgresql.org/message-id/flat/CAApHDvpqBR7u9yzW4yggjG%3DQfN%3DFZsc8Wo2ckokpQtif-%2BiQ2A%40mail.gmail.com#2d900bfe18fce17f97ec1f00800c8e27\n> https://www.postgresql.org/message-id/flat/MN2PR18MB2927F7B5F690065E1194B258E35D0%40MN2PR18MB2927.namprd18.prod.outlook.com\n>\n> But an unfortunate consequence of not fixing the historic issues is that it\n> precludes the possibility that anyone could be expected to notice if they\n> introduce more instances of the same problem (as in the first half of these\n> patches). Then the hole which has already been dug becomes deeper, further\n> increasing the burden of fixing the historic issues before being able to use\n> -Wshadow.\n>\n> The first half of the patches fix shadow variables newly-introduced in v15\n> (including one of my own patches), the rest are fixing the lowest hanging fruit\n> of the \"short list\" from COPT=-Wshadow=compatible-local\n>\n> I can't see that any of these are bugs, but it seems like a good goal to move\n> towards allowing use of the -Wshadow* options to help avoid future errors, as\n> well as cleanliness and readability (rather than allowing it to get harder to\n> use -Wshadow).\n>\n\nHey, thanks for picking this up!\n\nI'd started looking at these [1] last year and spent a day trying to\ncategorise them all in a spreadsheet (shadows a global, shadows a\nparameter, shadows a local var etc) but I became swamped by the\nvolume, and then other work/life got in the way.\n\n+1 from me.\n\n------\n[1] https://www.postgresql.org/message-id/flat/CAHut%2BPuv4LaQKVQSErtV_%3D3MezUdpipVOMt7tJ3fXHxt_YK-Zw%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 18 Aug 2022 08:49:14 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:\n> I'd started looking at these [1] last year and spent a day trying to\n> categorise them all in a spreadsheet (shadows a global, shadows a\n> parameter, shadows a local var etc) but I became swamped by the\n> volume, and then other work/life got in the way.\n> \n> +1 from me.\n\nA lot of the changes proposed here update the code so as the same\nvariable gets used across more code paths by removing declarations,\nbut we have two variables defined because both are aimed to be used in\na different context (see AttachPartitionEnsureIndexes() in tablecmds.c\nfor example).\n\nWouldn't it be a saner approach in a lot of cases to rename the\nshadowed variables (aka the ones getting removed in your patches) and\nkeep them local to the code paths where we use them?\n--\nMichael",
"msg_date": "Thu, 18 Aug 2022 09:39:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 09:39:02AM +0900, Michael Paquier wrote:\n> On Thu, Aug 18, 2022 at 08:49:14AM +1000, Peter Smith wrote:\n> > I'd started looking at these [1] last year and spent a day trying to\n> > categorise them all in a spreadsheet (shadows a global, shadows a\n> > parameter, shadows a local var etc) but I became swamped by the\n> > volume, and then other work/life got in the way.\n> > \n> > +1 from me.\n> \n> A lot of the changes proposed here update the code so as the same\n> variable gets used across more code paths by removing declarations,\n> but we have two variables defined because both are aimed to be used in\n> a different context (see AttachPartitionEnsureIndexes() in tablecmds.c\n> for example).\n\n> Wouldn't it be a saner approach in a lot of cases to rename the\n> shadowed variables (aka the ones getting removed in your patches) and\n> keep them local to the code paths where we use them?\n\nThe cases where I removed a declaration are ones where the variable either\nhasn't yet been assigned in the outer scope (so it's safe to use first in the\ninner scope, since its value is later overwriten in the outer scope). Or it's\nno longer used in the outer scope, so it's safe to re-use it in the inner scope\n(as in AttachPartitionEnsureIndexes). Since you think it's saner, I changed to\nrename them.\n\nIn the case of \"first\", the var is used in two independent loops, the same way,\nand re-initialized. In the case of found_whole_row, the var is ignored, as the\ncomments say, so it would be silly to declare more vars to be additionally\nignored.\n\n-- \nJustin\n\nPS. I hadn't sent the other patches which rename the variables, having assumed\nthat the discussion would be bikeshedded to death and derail without having\nfixed the lowest hanging fruits. I'm attaching them those now to see what\nhappens.",
"msg_date": "Wed, 17 Aug 2022 21:36:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 18 Aug 2022 at 02:54, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The first half of the patches fix shadow variables newly-introduced in v15\n> (including one of my own patches), the rest are fixing the lowest hanging fruit\n> of the \"short list\" from COPT=-Wshadow=compatible-local\n\nI wonder if it's better to fix the \"big hitters\" first. The idea\nthere would be to try to reduce the number of these warnings as\nquickly and easily as possible. If we can get the numbers down fairly\nsignificantly without too much effort, then that should provide us\nwith a bit more motivation to get rid of the remaining ones.\n\nHere are the warnings grouped by the name of the variable:\n\n$ make -s 2>&1 | grep \"warning: declaration of\" | grep -oP\n\"‘([_a-zA-Z]{1}[_a-zA-Z0-9]*)’\" | sort | uniq -c\n 2 ‘aclresult’\n 3 ‘attnum’\n 1 ‘cell’\n 1 ‘cell__state’\n 2 ‘cmp’\n 2 ‘command’\n 1 ‘constraintOid’\n 1 ‘copyTuple’\n 1 ‘data’\n 1 ‘db’\n 1 ‘_do_rethrow’\n 1 ‘dpns’\n 1 ‘econtext’\n 1 ‘entry’\n 36 ‘expected’\n 1 ‘first’\n 1 ‘found_whole_row’\n 1 ‘host’\n 20 ‘i’\n 1 ‘iclause’\n 1 ‘idxs’\n 1 ‘i_oid’\n 4 ‘isnull’\n 1 ‘it’\n 2 ‘item’\n 1 ‘itemno’\n 1 ‘j’\n 1 ‘jtc’\n 1 ‘k’\n 1 ‘keyno’\n 7 ‘l’\n 13 ‘lc’\n 4 ‘lc__state’\n 1 ‘len’\n 1 ‘_local_sigjmp_buf’\n 1 ‘name’\n 2 ‘now’\n 1 ‘owning_tab’\n 1 ‘page’\n 1 ‘partitionId’\n 2 ‘path’\n 3 ‘proc’\n 1 ‘proclock’\n 1 ‘querytree_list’\n 1 ‘range’\n 1 ‘rel’\n 1 ‘relation’\n 1 ‘relid’\n 1 ‘rightop’\n 2 ‘rinfo’\n 1 ‘_save_context_stack’\n 1 ‘save_errno’\n 1 ‘_save_exception_stack’\n 1 ‘slot’\n 1 ‘sqlca’\n 9 ‘startelem’\n 1 ‘stmt_list’\n 2 ‘str’\n 1 ‘subpath’\n 1 ‘tbinfo’\n 1 ‘ti’\n 1 ‘transno’\n 1 ‘ttype’\n 1 ‘tuple’\n 5 ‘val’\n 1 ‘value2’\n 1 ‘wco’\n 1 ‘xid’\n 1 ‘xlogfname’\n\nThe top 5 by count here account for about half of the warnings, so\nmaybe is best to start with those? 
Likely the ones ending in __state\nwill fix themselves when you fix the variable with the same name\nwithout that suffix.\n\nThe attached patch targets fixing the \"expected\" variable.\n\n$ ./configure --prefix=/home/drowley/pg\nCFLAGS=\"-Wshadow=compatible-local\" > /dev/null\n$ make clean -s\n$ make -j -s 2>&1 | grep \"warning: declaration of\" | wc -l\n153\n$ make clean -s\n$ patch -p1 < reduce_local_variable_shadow_warnings_in_regress.c.patch\n$ make -j -s 2>&1 | grep \"warning: declaration of\" | wc -l\n117\n\nSo 36 fewer warnings with the attached.\n\nI'm probably not the only committer to want to run a mile when they\nsee someone posting 17 or 26 patches in an email. So maybe \"bang for\nbuck\" is a better method for getting the ball rolling here. As you\nknow, I was recently bitten by local shadows in af7d270dd, so I do\nbelieve in the cause.\n\nWhat do you think?\n\nDavid",
"msg_date": "Thu, 18 Aug 2022 15:17:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
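For readers following along, here is a minimal standalone sketch of the bug class these warnings are chasing (the function and variable names are invented for illustration, not taken from any of the patches): an inner declaration silently captures writes meant for the outer variable, which is exactly what -Wshadow=compatible-local flags.

```c
#include <assert.h>

/* Buggy pattern: the inner "len" shadows the outer one, so every
 * update lands on the inner variable and the outer result stays 0.
 * gcc -Wshadow=compatible-local warns on the inner declaration. */
static int total_len_shadowed(const int *lens, int n)
{
    int len = 0;

    for (int i = 0; i < n; i++)
    {
        int len = lens[i];  /* shadows the outer "len" */

        (void) len;         /* the "work" happens on the wrong variable */
    }
    return len;             /* still 0: the outer len was never touched */
}

/* Fixed by renaming the inner variable -- the simplest of the fix
 * categories discussed in this thread. */
static int total_len_fixed(const int *lens, int n)
{
    int len = 0;

    for (int i = 0; i < n; i++)
    {
        int item_len = lens[i];

        len += item_len;
    }
    return len;
}
```

Compiling the buggy version with gcc's -Wshadow=compatible-local produces the same "warning: declaration of ... shadows a previous local" text that the grep above counts.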
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> A lot of the changes proposed here update the code so as the same\n> variable gets used across more code paths by removing declarations,\n> but we have two variables defined because both are aimed to be used in\n> a different context (see AttachPartitionEnsureIndexes() in tablecmds.c\n> for example).\n\n> Wouldn't it be a saner approach in a lot of cases to rename the\n> shadowed variables (aka the ones getting removed in your patches) and\n> keep them local to the code paths where we use them?\n\nYeah. I do not think a patch of this sort has any business changing\nthe scopes of variables. That moves it out of \"cosmetic cleanup\"\nand into \"hm, I wonder if this introduces any bugs\". Most hackers\nare going to decide that they have better ways to spend their time\nthan doing that level of analysis for a very noncritical patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Aug 2022 23:42:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:\n> I'm probably not the only committer to want to run a mile when they\n> see someone posting 17 or 26 patches in an email. So maybe \"bang for\n> buck\" is a better method for getting the ball rolling here. As you\n> know, I was recently bitten by local shadows in af7d270dd, so I do\n> believe in the cause.\n> \n> What do you think?\n\nYou already fixed the shadow var introduced in master/pg16, and I sent patches\nfor the shadow vars added in pg15 (marked as such and presented as 001-008), so\nperhaps it's okay to start with that ?\n\nBTW, one of the remaining warnings seems to be another buglet, which I'll write\nabout at a later date.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 00:16:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:\n> > I'm probably not the only committer to want to run a mile when they\n> > see someone posting 17 or 26 patches in an email. So maybe \"bang for\n> > buck\" is a better method for getting the ball rolling here. As you\n> > know, I was recently bitten by local shadows in af7d270dd, so I do\n> > believe in the cause.\n> >\n> > What do you think?\n>\n> You already fixed the shadow var introduced in master/pg16, and I sent patches\n> for the shadow vars added in pg15 (marked as such and presented as 001-008), so\n> perhaps it's okay to start with that ?\n\nAlright, I made a pass over the 0001-0008 patches.\n\n0001. I'd also rather see these 4 renamed:\n\n+++ b/src/bin/pg_dump/pg_dump.c\n@@ -3144,7 +3144,6 @@ dumpDatabase(Archive *fout)\n PQExpBuffer loHorizonQry = createPQExpBuffer();\n int i_relfrozenxid,\n i_relfilenode,\n- i_oid,\n i_relminmxid;\n\nAdding an extra 'i' (for inner) on the front seems fine to me.\n\n0002. I don't really like the \"my\" name. I also see you've added the\nword \"this\" to many other variables that are shadowing. It feels kinda\nlike you're missing a \"self\" and a \"me\" in there somewhere! :)\n\n@@ -7080,21 +7080,21 @@ getConstraints(Archive *fout, TableInfo\ntblinfo[], int numTables)\n appendPQExpBufferChar(tbloids, '{');\n for (int i = 0; i < numTables; i++)\n {\n- TableInfo *tbinfo = &tblinfo[i];\n+ TableInfo *mytbinfo = &tblinfo[i];\n\nHow about just \"tinfo\"?\n\n0003. The following is used for the exact same purpose as its shadowed\ncounterpart. 
I suggest just using the variable from the outer scope.\n\n@@ -16799,21 +16799,21 @@ dumpSequence(Archive *fout, const TableInfo *tbinfo)\n */\n if (OidIsValid(tbinfo->owning_tab) && !tbinfo->is_identity_sequence)\n {\n- TableInfo *owning_tab = findTableByOid(tbinfo->owning_tab);\n+ TableInfo *this_owning_tab = findTableByOid(tbinfo->owning_tab);\n\n0004. I would rather people used foreach_current_index(lc) > 0 to\ndetermine when we're not doing the first iteration of a foreach loop.\nI understand there are more complex cases with filtering that this\ncannot be done, but these are highly simple and using\nforeach_current_index() removes multiple lines of code and makes it\nlook nicer.\n\n@@ -762,8 +762,8 @@ fetch_remote_table_info(char *nspname, char *relname,\n TupleTableSlot *slot;\n Oid attrsRow[] = {INT2VECTOROID};\n StringInfoData pub_names;\n- bool first = true;\n\n+ first = true;\n initStringInfo(&pub_names);\n foreach(lc, MySubscription->publications)\n\n0005. How about just \"tslot\". I'm not a fan of \"this\".\n\n+++ b/src/backend/replication/logical/tablesync.c\n@@ -759,7 +759,7 @@ fetch_remote_table_info(char *nspname, char *relname,\n if (walrcv_server_version(LogRepWorkerWalRcvConn) >= 150000)\n {\n WalRcvExecResult *pubres;\n- TupleTableSlot *slot;\n+ TupleTableSlot *thisslot;\n\n0006. A see the outer shadowed counterpart is used to add a new backup\ntype. Since I'm not a fan of \"this\", how about the outer one gets\nrenamed to \"newtype\"?\n\n+++ b/src/backend/backup/basebackup_target.c\n@@ -73,9 +73,9 @@ BaseBackupAddTarget(char *name,\n /* Search the target type list for an existing entry with this name. */\n foreach(lc, BaseBackupTargetTypeList)\n {\n- BaseBackupTargetType *ttype = lfirst(lc);\n+ BaseBackupTargetType *this_ttype = lfirst(lc);\n\n0007. Meh, more \"this\". 
How about just \"col\".\n\n+++ b/src/backend/parser/parse_jsontable.c\n@@ -341,13 +341,13 @@ transformJsonTableChildPlan(JsonTableContext\n*cxt, JsonTablePlan *plan,\n /* transform all nested columns into cross/union join */\n foreach(lc, columns)\n {\n- JsonTableColumn *jtc = castNode(JsonTableColumn, lfirst(lc));\n+ JsonTableColumn *thisjtc = castNode(JsonTableColumn, lfirst(lc));\n\nThere's a discussion about reverting this entire patch. Not sure if\npatching master and not backpatching to pg15 would be useful to the\npeople who may be doing that revert.\n\n0008. Sorry, I had to change this one too. I just have an aversion to\nvariables named \"temp\" or \"tmp\".\n\n+++ b/src/backend/utils/adt/jsonpath_exec.c\n@@ -3109,10 +3109,10 @@ JsonItemFromDatum(Datum val, Oid typid, int32\ntypmod, JsonbValue *res)\n\n if (JsonContainerIsScalar(&jb->root))\n {\n- bool res PG_USED_FOR_ASSERTS_ONLY;\n+ bool tmp PG_USED_FOR_ASSERTS_ONLY;\n\n- res = JsonbExtractScalar(&jb->root, jbv);\n- Assert(res);\n+ tmp = JsonbExtractScalar(&jb->root, jbv);\n+ Assert(tmp);\n\nI've attached a patch which does things more along the lines of how I\nwould have done it. I don't think we should be back patching this\nstuff.\n\nAny objections to pushing this to master only?\n\nDavid",
"msg_date": "Thu, 18 Aug 2022 19:27:09 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
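The 0004 suggestion above — deriving "is this the first iteration?" from the loop index instead of carrying a separate `bool first` flag — looks like this outside of PostgreSQL's list API. The `comma_join` helper and all of its names are hypothetical, sketched only to show the shape of the transformation:

```c
#include <stddef.h>
#include <string.h>

/* Build "a, b, c" from an array of strings. Instead of a separate
 * "first" flag (the variable whose redeclaration triggered the
 * warning in tablesync.c), derive first-iteration handling from the
 * loop index -- the same idea as foreach_current_index(lc) > 0. */
static void comma_join(const char **items, int n, char *out, size_t outsz)
{
    out[0] = '\0';
    for (int i = 0; i < n; i++)
    {
        if (i > 0)              /* no flag variable needed */
            strncat(out, ", ", outsz - strlen(out) - 1);
        strncat(out, items[i], outsz - strlen(out) - 1);
    }
}
```

One less variable in the outer scope means one less name available to be shadowed, which is why this pattern is preferred for the simple cases.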
{
"msg_contents": "On Thu, Aug 18, 2022 at 5:27 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 18 Aug 2022 at 17:16, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > On Thu, Aug 18, 2022 at 03:17:33PM +1200, David Rowley wrote:\n> > > I'm probably not the only committer to want to run a mile when they\n> > > see someone posting 17 or 26 patches in an email. So maybe \"bang for\n> > > buck\" is a better method for getting the ball rolling here. As you\n> > > know, I was recently bitten by local shadows in af7d270dd, so I do\n> > > believe in the cause.\n> > >\n> > > What do you think?\n> >\n> > You already fixed the shadow var introduced in master/pg16, and I sent patches\n> > for the shadow vars added in pg15 (marked as such and presented as 001-008), so\n> > perhaps it's okay to start with that ?\n>\n> Alright, I made a pass over the 0001-0008 patches.\n>\n...\n\n>\n> 0005. How about just \"tslot\". I'm not a fan of \"this\".\n>\n\n(I'm sure there are others like this; I just picked this one as an example)\n\nAFAICT the offending 'slot' really should have never been declared at\nall at the local scope in the first place - e.g. the other code in\nthis function seems happy enough with the pattern of just re-using the\nfunction scoped 'slot'.\n\nI understand that for this shadow patch changing the var-name is\nconsidered the saner/safer way than tampering with the scope, but\nperhaps it is still useful to include a comment when changing ones\nlike this?\n\ne.g.\n+ TupleTableSlot *tslot; /* TODO - Why declare this at all? Shouldn't\nit just re-use the 'slot' at function scope? */\n\nOtherwise, such knowledge will be lost, and nobody will ever know to\nrevisit them, which feels a bit more like *hiding* the mistake than\nfixing it.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 18 Aug 2022 18:26:33 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:\n> 0001. I'd also rather see these 4 renamed:\n..\n> 0002. I don't really like the \"my\" name. I also see you've added the\n..\n> How about just \"tinfo\"?\n..\n> 0005. How about just \"tslot\". I'm not a fan of \"this\".\n..\n> Since I'm not a fan of \"this\", how about the outer one gets renamed \n..\n> 0007. Meh, more \"this\". How about just \"col\".\n..\n> 0008. Sorry, I had to change this one too.\n\nI agree that ii_oid and newtype are better names (although it's a bit\nunfortunate to rename the outer \"ttype\" var of wider scope).\n\n> 0003. The following is used for the exact same purpose as its shadowed\n> counterpart. I suggest just using the variable from the outer scope.\n\nAnd that's what my original patch did, before people insisted that the patches\nshouldn't change variable scope. Now it's back to where I stared.\n\n> There's a discussion about reverting this entire patch. Not sure if\n> patching master and not backpatching to pg15 would be useful to the\n> people who may be doing that revert.\n\nI think if it were reverted, it'd be in both branches.\n\n> I've attached a patch which does things more along the lines of how I\n> would have done it. I don't think we should be back patching this\n> stuff.\n> \n> Any objections to pushing this to master only?\n\nI won't object, but some of your changes are what makes backpatching this less\nreasonable (foreach_current_index and newtype). 
I had made these v15 patches\nfirst to simplify backpatching, since having the same code in v15 means that\nthere's no backpatch hazard for this new-in-v15 code.\n\nI am opened to presenting the patches differently, but we need to come up with\na better process than one person writing patches and someone else rewriting it.\nI also don't see the value of debating which order to write the patches in.\nGrouping by variable name or doing other statistical analysis doesn't change\nthe fact that there are 50+ issues to address to allow -Wshadow to be usable.\n\nMaybe these would be helpful ?\n - if I publish the patches on github;\n - if I send the patches with more context;\n - if you have an suggestion/objection/complaint with a patch, I can address it\n and/or re-arrange the patchset so this is later, and all the polished\n patches are presented first.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 18:21:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 9:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:\n> > 0001. I'd also rather see these 4 renamed:\n> ..\n> > 0002. I don't really like the \"my\" name. I also see you've added the\n> ..\n> > How about just \"tinfo\"?\n> ..\n> > 0005. How about just \"tslot\". I'm not a fan of \"this\".\n> ..\n> > Since I'm not a fan of \"this\", how about the outer one gets renamed\n> ..\n> > 0007. Meh, more \"this\". How about just \"col\".\n> ..\n> > 0008. Sorry, I had to change this one too.\n>\n> I agree that ii_oid and newtype are better names (although it's a bit\n> unfortunate to rename the outer \"ttype\" var of wider scope).\n>\n> > 0003. The following is used for the exact same purpose as its shadowed\n> > counterpart. I suggest just using the variable from the outer scope.\n>\n> And that's what my original patch did, before people insisted that the patches\n> shouldn't change variable scope. Now it's back to where I stared.\n>\n> > There's a discussion about reverting this entire patch. Not sure if\n> > patching master and not backpatching to pg15 would be useful to the\n> > people who may be doing that revert.\n>\n> I think if it were reverted, it'd be in both branches.\n>\n> > I've attached a patch which does things more along the lines of how I\n> > would have done it. I don't think we should be back patching this\n> > stuff.\n> >\n> > Any objections to pushing this to master only?\n>\n> I won't object, but some of your changes are what makes backpatching this less\n> reasonable (foreach_current_index and newtype). 
I had made these v15 patches\n> first to simplify backpatching, since having the same code in v15 means that\n> there's no backpatch hazard for this new-in-v15 code.\n>\n> I am opened to presenting the patches differently, but we need to come up with\n> a better process than one person writing patches and someone else rewriting it.\n> I also don't see the value of debating which order to write the patches in.\n> Grouping by variable name or doing other statistical analysis doesn't change\n> the fact that there are 50+ issues to address to allow -Wshadow to be usable.\n>\n> Maybe these would be helpful ?\n> - if I publish the patches on github;\n> - if I send the patches with more context;\n> - if you have an suggestion/objection/complaint with a patch, I can address it\n> and/or re-arrange the patchset so this is later, and all the polished\n> patches are presented first.\n>\n\nStarting off with patches might come to grief, and it won't be much\nfun rearranging patches over and over.\n\nBecause there are so many changes, I think it would be better to\nattack this task methodically:\n\nSTEP 1 - Capture every shadow warning and categorise exactly what kind\nis it. e.g maybe do this as some XLS which can be shared. The last\ntime I looked there were hundreds of instances, but I expect there\nwill be less than a couple of dozen different *categories* of them.\n\ne.g. shadow of a global var\ne.g. shadow of a function param\ne.g. shadow of a function var in a code block for the exact same usage\ne.g. shadow of a function var in a code block for some 'tmp' var\ne.g. shadow of a function var in a code block due to a mistake\ne.g. shadow of a function var by some loop index\ne.g. shadow of a function var for some loop 'first' handling\ne.g. bug\netc...\n\nSTEP 2 - Define your rules for how intend to address each of these\nkinds of shadows (e.g. just simple rename of the var, use\n'foreach_current_index', ...). 
Hopefully, it will be easy to reach an\nagreement now since all instances of the same kind will look pretty\nmuch the same.\n\nSTEP 3 - Fix all of the same kinds of shadows per single patch (using\nthe already agreed fix approach from step 2).\n\nREPEAT STEPS 2,3 until done.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 19 Aug 2022 10:49:25 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Fri, 19 Aug 2022 at 11:21, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 07:27:09PM +1200, David Rowley wrote:\n> > Any objections to pushing this to master only?\n>\n> I won't object, but some of your changes are what makes backpatching this less\n> reasonable (foreach_current_index and newtype). I had made these v15 patches\n> first to simplify backpatching, since having the same code in v15 means that\n> there's no backpatch hazard for this new-in-v15 code.\n\nI spent a bit more time on this and I see that make check-world does\nfail if I change either of the foreach_current_index() changes to be\nincorrect. e.g change the condition from \"> 0\" to be \"== 0\", \"> 1\" or\n\"> -1\".\n\nAs for the newtype change, I was inclined to give the variable name\nwith the most meaning to the one that's in scope for longer.\n\nI'm starting to feel like it would be ok to backpatch these\nnew-to-pg-15 changes back into PG15. The reason I think this is that\nthey all seem low enough risk that it's probably more risky to not\nbackpatch and risk bugs being introduced due to mistakes being made in\nconflict resolution when future patches don't apply. It was the\nfailing tests I mentioned above that swayed me on this.\n\n> I am opened to presenting the patches differently, but we need to come up with\n> a better process than one person writing patches and someone else rewriting it.\n\nIt wasn't my intention to purposefully rewrite everything. It's just\nthat in order to get the work into something I was willing to commit,\nthat's how it ended up. As for why I did that rather than ask you to\nwas the fact that doing it myself required fewer keystrokes, mental\neffort and time than asking you to. It's not my intention to do that\nfor any personal credit. I'm happy for you to take that. I'd just\nrather not be batting such trivial patches over the fence at each\nother for days or weeks. 
The effort-to-reward ratio for that is\nprobably going to drop below my threshold after a few rounds.\n\nDavid\n\n\n",
"msg_date": "Fri, 19 Aug 2022 15:37:52 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 03:37:52PM +1200, David Rowley wrote:\n> I'm happy for you to take that. I'd just rather not be batting such trivial\n> patches over the fence at each other for days or weeks.\n\nYes, thanks for that.\nI read through your patch, which looks fine.\nLet me know what I can do when it's time for round two.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 23:28:16 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Let me know what I can do when it's time for round two.\n\nI pushed the modified 0001-0008 patches earlier today and also the one\nI wrote to fixup the 36 warnings about \"expected\" being shadowed.\n\nI looked through a bunch of your remaining patches and was a bit\nunexcited to see many more renaming such as:\n\n- List *querytree_list;\n+ List *this_querytree_list;\n\nI don't think this sort of thing is an improvement.\n\nHowever, one category of these changes that I do like are the ones\nwhere we can move the variable into an inner scope. Out of your\nrenaming 0009-0026 patches, these are:\n\n0013\n0014\n0017\n0018\n\nI feel like having the variable in scope for the minimal amount of\ntime makes the code cleaner and I feel like these are good next steps\nbecause:\n\na) no variable needs to be renamed\nb) any backpatching issues is more likely to lead to compilation\nfailure rather than using the wrong variable.\n\nLikely 0016 is a subcategory of the above as if you modified that\npatch to follow this rule then you'd have to declare the variable a\nfew times. I think that category is less interesting and we can maybe\nconsider those after we're done with the more simple ones.\n\nDo you want to submit a series of patches that fixes all of the\nremaining warnings that are in this category? Once these are done we\ncan consider the best ways to fix and if we want to fix any of the\nremaining ones.\n\nFeel free to gzip the patches up if the number is large.\n\nDavid\n\n\n",
"msg_date": "Sat, 20 Aug 2022 21:17:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 09:17:41PM +1200, David Rowley wrote:\n> On Fri, 19 Aug 2022 at 16:28, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Let me know what I can do when it's time for round two.\n> \n> I pushed the modified 0001-0008 patches earlier today and also the one\n> I wrote to fixup the 36 warnings about \"expected\" being shadowed.\n\nThank you\n\n> I looked through a bunch of your remaining patches and was a bit\n> unexcited to see many more renaming such as:\n\nYes - after Michael said that was the sane procedure, I had rearranged the\npatch series to present eariler those patches first which renamed variables ..\n\n> However, one category of these changes that I do like are the ones\n> where we can move the variable into an inner scope.\n\nThere are a lot of these, which ISTM is a good thing.\nThis fixes about half of the remaining warnings.\n\nhttps://github.com/justinpryzby/postgres/tree/avoid-shadow-vars\nYou could review without applying the patches, on the webpage or (probably\nbetter) by adding as a git remote. Attached is a squished version.\n\n-- \nJustin",
"msg_date": "Mon, 22 Aug 2022 20:16:59 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Attached is a squished version.\n\nI see there's some renaming ones snuck in there. e.g:\n\n- Relation rel;\n- HeapTuple tuple;\n+ Relation pg_foreign_table;\n+ HeapTuple foreigntuple;\n\nThis one does not seem to be in the category I mentioned:\n\n@@ -3036,8 +3036,6 @@ XLogFileInitInternal(XLogSegNo logsegno,\nTimeLineID logtli,\n pgstat_report_wait_start(WAIT_EVENT_WAL_INIT_SYNC);\n if (pg_fsync(fd) != 0)\n {\n- int save_errno = errno;\n-\n\nMore renaming:\n\n+++ b/src/backend/catalog/heap.c\n@@ -1818,19 +1818,19 @@ heap_drop_with_catalog(Oid relid)\n */\n if (rel->rd_rel->relkind == RELKIND_FOREIGN_TABLE)\n {\n- Relation rel;\n- HeapTuple tuple;\n+ Relation pg_foreign_table;\n+ HeapTuple foreigntuple;\n\nMore renaming:\n\n+++ b/src/backend/commands/publicationcmds.c\n@@ -106,7 +106,7 @@ parse_publication_options(ParseState *pstate,\n {\n char *publish;\n List *publish_list;\n- ListCell *lc;\n+ ListCell *lc2;\n\nand again:\n\n+++ b/src/backend/commands/tablecmds.c\n@@ -10223,7 +10223,7 @@ CloneFkReferencing(List **wqueue, Relation\nparentRel, Relation partRel)\n Oid constrOid;\n ObjectAddress address,\n referenced;\n- ListCell *cell;\n+ ListCell *lc;\n\nI've not checked the context one this, but this does not appear to\nmeet the category of moving to an inner scope:\n\n+++ b/src/backend/executor/execPartition.c\n@@ -768,7 +768,6 @@ ExecInitPartitionInfo(ModifyTableState *mtstate,\nEState *estate,\n {\n List *onconflset;\n List *onconflcols;\n- bool found_whole_row;\n\nLooks like you're just using the one from the wider scope. 
That's not\nthe category we're after for now.\n\nYou've also got some renaming going on in ExecInitAgg()\n\n- phasedata->gset_lengths[i] = perhash->numCols = aggnode->numCols;\n+ phasedata->gset_lengths[setno] = perhash->numCols = aggnode->numCols;\n\nI wondered about this one too:\n\n- i = -1;\n- while ((i = bms_next_member(all_grouped_cols, i)) >= 0)\n- aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);\n+ {\n+ int i = -1;\n+ while ((i = bms_next_member(all_grouped_cols, i)) >= 0)\n+ aggstate->all_grouped_cols = lcons_int(i, aggstate->all_grouped_cols);\n+ }\n\nI had in mind that maybe we should switch those to be something more like:\n\nfor (int i = -1; (i = bms_next_member(all_grouped_cols, i)) >= 0;)\n\nBut I had 2nd thoughts as the \"while\" version has become the standard method.\n\n(Really that code should be using bms_prev_member() and lappend_int()\nso we don't have to memmove() the entire list each lcons_int() call.\n(not for this patch though))\n\nMore renaming being done here:\n\n- int i; /* Index into *ident_user */\n+ int j; /* Index into *ident_user */\n\n... in fact, there's lots of renaming, so I'll just stop looking.\n\nCan you just send a patch that only changes the cases where you can\nremove a variable declaration from an outer scope into a single inner\nscope, or multiple inner scope when the variable can be declared\ninside a for() loop? The mcv_get_match_bitmap() change is an example\nof this. There's still a net reduction in lines of code, so I think\nthe mcv_get_match_bitmap(), and any like it are ok for this next step.\nA counter example is ExecInitPartitionInfo() where the way to do this\nwould be to move the found_whole_row declaration into multiple inner\nscopes. That's a net increase in code lines, for which I think\nrequires more careful thought if we want that or not.\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:38:40 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
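The "move it into an inner scope" category favoured in this exchange boils down to C99-style loop-local declarations. A generic sketch of the pattern (`range_width` is an invented example, not code from any of the patches):

```c
#include <assert.h>

/* Rescope pattern: instead of one function-scoped "i" reused by
 * several loops (which is what leaves it around to be shadowed),
 * declare the index inside each for() loop, C99 style. */
static int range_width(const int *vals, int n)
{
    int lo = vals[0];
    int hi = vals[0];

    for (int i = 1; i < n; i++)     /* i lives only inside this loop */
        if (vals[i] < lo)
            lo = vals[i];

    for (int i = 1; i < n; i++)     /* a fresh i; nothing to shadow */
        if (vals[i] > hi)
            hi = vals[i];

    return hi - lo;
}
```

Because each `i` dies at the end of its loop, a later inner declaration has nothing to collide with, and a botched backpatch tends to fail compilation rather than silently use the wrong variable — the property David cites in favour of this category.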
{
"msg_contents": "On Tue, Aug 23, 2022 at 01:38:40PM +1200, David Rowley wrote:\n> On Tue, 23 Aug 2022 at 13:17, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Attached is a squished version.\n> \n> I see there's some renaming ones snuck in there. e.g:\n> ... in fact, there's lots of renaming, so I'll just stop looking.\n\nActually, they didn't sneak in - what I sent are the patches which are ready to\nbe reviewed, excluding the set of \"this\" and \"tmp\" and other renames which you\ndisliked. In the branch (not the squished patch) the first ~15 patches were\nmostly for C99 for loops - I presented them this way deliberately, so you could\nreview and comment on whatever you're able to bite off, or run with whatever\nparts you think are ready. I rewrote it now to be more bite sized by\ntruncating off the 2nd half of the patches.\n\n> Can you just send a patch that only changes the cases where you can\n> remove a variable declaration from an outer scope into a single inner\n> scope, or multiple inner scope when the variable can be declared\n> inside a for() loop?\n\n> would be to move the found_whole_row declaration into multiple inner\n> scopes. That's a net increase in code lines, for which I think\n> requires more careful thought if we want that or not.\n\nIMO it doesn't make sense to declare multiple integers for something like this\nwhether they're all ignored. Nor for \"save_errno\" nor the third, similar case,\nfor the reason in the commit message.\n\n-- \nJustin",
"msg_date": "Mon, 22 Aug 2022 21:14:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Actually, they didn't sneak in - what I sent are the patches which are ready to\n> be reviewed, excluding the set of \"this\" and \"tmp\" and other renames which you\n> disliked. In the branch (not the squished patch) the first ~15 patches were\n> mostly for C99 for loops - I presented them this way deliberately, so you could\n> review and comment on whatever you're able to bite off, or run with whatever\n> parts you think are ready. I rewrote it now to be more bite sized by\n> truncating off the 2nd half of the patches.\n\nThanks for the updated patch.\n\nI've now pushed it after making some small adjustments.\n\nIt seems there was one leftover rename still there, I removed that.\nThe only other changes I made were to just make the patch mode\nconsistent with what it was doing. There were a few cases where you\nwere doing:\n\n if (typlen == -1) /* varlena */\n {\n- int i;\n-\n- for (i = 0; i < nvalues; i++)\n+ for (int i = 0; i < nvalues; i++)\n\nThat wasn't really required to remove the warning as you'd already\nadjusted the scope of the shadowed variable so there was no longer a\ncollision. The reason I adjusted these was because sometimes you were\ndoing that, and sometimes you were not. I wanted to be consistent, so\nI opted for not doing it as it's not required for this effort. Maybe\none day those can be changed in some other unrelated effort to C99ify\nour code.\n\nThe attached patch is just the portions I didn't commit.\n\nThanks for working on this.\n\nDavid",
"msg_date": "Wed, 24 Aug 2022 12:37:29 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 12:37:29PM +1200, David Rowley wrote:\n> On Tue, 23 Aug 2022 at 14:14, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Actually, they didn't sneak in - what I sent are the patches which are ready to\n> > be reviewed, excluding the set of \"this\" and \"tmp\" and other renames which you\n> > disliked. In the branch (not the squished patch) the first ~15 patches were\n> > mostly for C99 for loops - I presented them this way deliberately, so you could\n> > review and comment on whatever you're able to bite off, or run with whatever\n> > parts you think are ready. I rewrote it now to be more bite sized by\n> > truncating off the 2nd half of the patches.\n> \n> Thanks for the updated patch.\n> \n> I've now pushed it after making some small adjustments.\n\nThanks for handling them.\n\nAttached are half of the remainder of what I've written, ready for review.\n\nI also put it here: https://github.com/justinpryzby/postgres/tree/avoid-shadow-vars\n\nYou may or may not find the associated commit messages to be useful.\nLet me know if you'd like the individual patches included here, instead.\n\nThe first patch removes 2ndary, \"inner\" declarations, where that seems\nreasonably safe and consistent with existing practice (and probably what the\noriginal authors intended or would have written).\n\n-- \nJustin",
"msg_date": "Tue, 23 Aug 2022 21:39:44 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Attached are half of the remainder of what I've written, ready for review.\n\nThanks for the patches.\n\nI started to do some analysis of the remaining warnings and put them\nin the attached spreadsheet. I put each of the remaining warnings into\na category of how I think they should be fixed.\n\nThese categories are:\n\n1. \"Rescope\" (adjust scope of outer variable to move it into a deeper scope)\n2. \"Rename\" (a variable needs to be renamed)\n3. \"RenameOrScope\" (a variable needs renamed or we need to something\nmore extreme to rescope)\n4. \"Repurpose\" (variables have the same purpose and may as well use\nthe same variable)\n5. \"Refactor\" (fix the code to make it better)\n6. \"Remove\" (variable is not needed)\n\nThere's also:\n7. \"Bug?\" (might be a bug)\n8. \"?\" (I don't know)\n\nI was hoping we'd already caught all of the #1s in 421892a19, but I\ncaught a few of those in some of your other patches. One you'd done\nanother way and some you'd done the rescope but just put it in the\nwrong patch. The others had not been done yet. I just pushed\nf959bf9a5 to fix those ones.\n\nI really think #2s should be done last. I'm not as comfortable with\nthe renaming and we might want to discuss tactics on that. We could\neither opt to rename the shadowed or shadowing variable, or both. If\nwe rename the shadowing variable, then pending patches or forward\npatches could use the wrong variable. If we rename the shadowed\nvariable then it's not impossible that backpatching could go wrong\nwhere the new code intends to reference the outer variable using the\nnewly named variable, but when that's backpatched it uses the variable\nwith the same name in the inner scope. Renaming both would make the\nproblem more obvious. I'm not sure which is best. The answer may\ndepend on how many lines the variable is in scope for. 
If it's just\nfor a few lines then the hunk context would conflict and the committer\nwould likely notice the issue when resolving the conflict.\n\nFor #3, I just couldn't decide the best fix. Many of these could be\nmoved into an inner scope, but it would require indenting a large\namount of code, e.g. in a switch() statement's \"case:\" to allow\nvariables to be declared within the case.\n\nI think probably #4 should be next to do (maybe after #5)\n\nI have some ideas on how to fix the two #5s, so I'm going to go and do that now.\n\nThere's only 1 #6. I'm not so sure on that yet. The variable being\nassigned to the variable is the current time and I'm not sure if we\ncan reuse the existing variable or not as time may have moved on\nsufficiently.\n\nI'll study #7 a bit more. My eyes glazed over a bit from doing all\nthat analysis, so I might be mistaken about that being a bug.\n\nFor #8s. These are the PG_TRY() ones. I see you had a go at fixing\nthat by moving the nested PG_TRY()s to a helper function. I don't\nthink that's a good fix. If we were to ever consider making\n-Wshadow=compatible-local in a standard build, then we'd basically be\nsaying that nested PG_TRYs are not allowed. I don't think that'll fly.\nI'd rather find a better way to fix those. I see we can't make use of\n##__LINE__ in the variable name since PG_TRY()'s friends use the\nvariables too and they'd be on a different line. We maybe could have\nan \"ident\" parameter in the macro that we ##ident onto the variables\nnames, but that would break existing code.\n\n> The first patch removes 2ndary, \"inner\" declarations, where that seems\n> reasonably safe and consistent with existing practice (and probably what the\n> original authors intended or would have written).\n\nWould you be able to write a patch for #4. I'll do #5 now. You could\ndo a draft patch for #2 as well, but I think it should be committed\nlast, if we decide it's a good move to make. 
It may be worth having\nthe discussion about if we actually want to run\n-Wshadow=compatible-local as a standard build flag before we rename\nanything.\n\nDavid",
"msg_date": "Wed, 24 Aug 2022 22:47:31 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:\n> I was hoping we'd already caught all of the #1s in 421892a19, but I\n> caught a few of those in some of your other patches. One you'd done\n> another way and some you'd done the rescope but just put it in the\n> wrong patch. The others had not been done yet. I just pushed\n> f959bf9a5 to fix those ones.\n\nThis fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor\npg_get_partkeydef_worker().\n\n(Also, I'd mentioned that my fixes for those deliberately re-used the\nouter-scope vars, which isn't what you did, and it's why I didn't include them\nwith the patch for inner-scope).\n\n> I really think #2s should be done last. I'm not as comfortable with\n> the renaming and we might want to discuss tactics on that. We could\n> either opt to rename the shadowed or shadowing variable, or both. If\n> we rename the shadowing variable, then pending patches or forward\n> patches could use the wrong variable. If we rename the shadowed\n> variable then it's not impossible that backpatching could go wrong\n> where the new code intends to reference the outer variable using the\n> newly named variable, but when that's backpatched it uses the variable\n> with the same name in the inner scope. Renaming both would make the\n> problem more obvious. I'm not sure which is best. The answer may\n> depend on how many lines the variable is in scope for. If it's just\n> for a few lines then the hunk context would conflict and the committer\n> would likely notice the issue when resolving the conflict.\n\nYes, the hope is to limit the change to variables that are only used a couple\ntimes within a few lines. It's also possible that these will break patches in\ndevelopment, but that's normal for any change at all.\n\n> I'll study #7 a bit more. 
My eyes glazed over a bit from doing all\n> that analysis, so I might be mistaken about that being a bug.\n\nI reported this last week.\nhttps://www.postgresql.org/message-id/20220819211824.GX26426@telsasoft.com\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 24 Aug 2022 09:00:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 25 Aug 2022 at 02:00, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:\n> > I was hoping we'd already caught all of the #1s in 421892a19, but I\n> > caught a few of those in some of your other patches. One you'd done\n> > another way and some you'd done the rescope but just put it in the\n> > wrong patch. The others had not been done yet. I just pushed\n> > f959bf9a5 to fix those ones.\n>\n> This fixed pg_get_statisticsobj_worker() but not pg_get_indexdef_worker() nor\n> pg_get_partkeydef_worker().\n\nThe latter two can't be fixed in the same way as\npg_get_statisticsobj_worker(), which is why I left them alone. We can\ndeal with those when getting onto the next category of warnings, which\nI believe should be the \"Repurpose\" category. If you look at the\nshadow_analysis spreadsheet then you can see how I've categorised\neach. I'm not pretending those are all 100% accurate. Various cases\nthe choice of category was subjective. My aim here is to fix as many\nof the warnings as possible in the safest way possible for the\nparticular warning. This is why pg_get_statisticsobj_worker() wasn't\nfixed in the same pass as pg_get_indexdef_worker() and\npg_get_partkeydef_worker().\n\nDavid\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:51:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 22:47, David Rowley <dgrowleyml@gmail.com> wrote:\n> 5. \"Refactor\" (fix the code to make it better)\n\n> I have some ideas on how to fix the two #5s, so I'm going to go and do that now.\n\nI've attached a patch which I think improves the code in\ngistRelocateBuildBuffersOnSplit() so that there's no longer a shadowed\nvariable. I also benchmarked this method in a tight loop and can\nmeasure no performance change from getting the loop index this way vs\nthe old way.\n\nThis only fixes one of the #5s I mentioned. I ended up scraping my\nidea to fix the shadowed 'i' in get_qual_for_range() as it became too\ncomplex. The idea was to use list_cell_number() to find out how far\nwe looped in the forboth() loop. It turned out that 'i' was used in\nthe subsequent loop in \"j = i;\". The fix just became too complex and I\ndidn't think it was worth the risk of breaking something just to get\nrid of the showed 'i'.\n\nDavid",
"msg_date": "Thu, 25 Aug 2022 13:46:11 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:\n> On Wed, 24 Aug 2022 at 14:39, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Attached are half of the remainder of what I've written, ready for review.\n> \n> Thanks for the patches.\n\n> 4. \"Repurpose\" (variables have the same purpose and may as well use\n> the same variable)\n\n> Would you be able to write a patch for #4.\n\nThe first of the patches that I sent yesterday was all about \"repurposed\" vars\nfrom outer scope (lc, l, isnull, save_errno), and was 70% of your list of vars\nto repurpose.\n\nHere, I've included the rest of your list.\n\nPlus another patch for vars which I'd already written patches to repurpose, but\nwhich aren't classified as \"repurpose\" on your list.\n\nFor subselect.c, you could remove some more \"lc\" vars and re-use the \"l\" var\nfor consistency (but I suppose you won't want that).\n\n-- \nJustin",
"msg_date": "Wed, 24 Aug 2022 21:08:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 25 Aug 2022 at 14:08, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Here, I've included the rest of your list.\n\nOK, I've gone through v3-remove-var-declarations.txt, v4-reuse.txt\nv4-reuse-more.txt and committed most of what you had and removed a few\nthat I thought should be renames instead.\n\nI also added some additional ones after reprocessing the RenameOrScope\ncategory from the spreadsheet.\n\nWith some minor adjustments to a small number of your ones, I pushed\nwhat I came up with.\n\nDavid",
"msg_date": "Fri, 26 Aug 2022 02:55:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 25 Aug 2022 at 13:46, David Rowley <dgrowleyml@gmail.com> wrote:\n> I've attached a patch which I think improves the code in\n> gistRelocateBuildBuffersOnSplit() so that there's no longer a shadowed\n> variable. I also benchmarked this method in a tight loop and can\n> measure no performance change from getting the loop index this way vs\n> the old way.\n\nI've now pushed this patch too.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Aug 2022 02:56:15 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 10:47:31PM +1200, David Rowley wrote:\n> I really think #2s should be done last. I'm not as comfortable with\n> the renaming and we might want to discuss tactics on that. We could\n> either opt to rename the shadowed or shadowing variable, or both. If\n> we rename the shadowing variable, then pending patches or forward\n> patches could use the wrong variable. If we rename the shadowed\n> variable then it's not impossible that backpatching could go wrong\n> where the new code intends to reference the outer variable using the\n> newly named variable, but when that's backpatched it uses the variable\n> with the same name in the inner scope. Renaming both would make the\n> problem more obvious.\n\nThe most *likely* outcome of renaming the *outer* variable is that\n*every* cherry-pick involving that variable would fails to compile,\nwhich is an *obvious* failure (good) but also kind of annoying if it\ncould've worked fine if it weren't renamed. I think most of the renames\nshould be applied to the inner var, because it's of narrower scope, and\nmore likely to cause a conflict (good) rather than appearing to apply\ncleanly but then misbehave. But it seems reasonable to consider\nrenaming both if the inner scope is longer than a handful of lines.\n\n> Would you be able to write a patch for #4. I'll do #5 now. You could\n> do a draft patch for #2 as well, but I think it should be committed\n> last, if we decide it's a good move to make. It may be worth having\n> the discussion about if we actually want to run\n> -Wshadow=compatible-local as a standard build flag before we rename\n> anything.\n\nI'm afraid the discussion about default flags would distract from fixing\nthe individual warnings, which itself preclude usability of the flag by\nindividual developers, or buildfarm, even as a local setting.\n\nIt can't be enabled until *all* the shadows are gone, due to -Werror on\nthe buildfarm and cirrusci. 
Unless perhaps we used -Wno-error=shadow.\nI suppose we're only talking about enabling it for gcc?\n\nThe biggest benefit is if we fix *all* the local shadow vars, since that\nallows someone to make use of the option, and thereby avoiding future\nsuch issues. Enabling the option could conceivably avoid issues\ncherry-picking into back branch - if an inner var is re-introduced\nduring conflict resolution, then a new warning would be issued, and\nhopefully the developer would look more closely.\n\nWould you check if any of these changes are good enough ?\n\n-- \nJustin",
"msg_date": "Tue, 30 Aug 2022 00:44:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 17:44, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Would you check if any of these changes are good enough ?\n\nI looked through v5.txt and modified it so that the fix for the shadow\nwarnings are more aligned to the spreadsheet I created.\n\nI also fixed some additional warnings which leaves just 5 warnings. Namely:\n\n../../../src/include/utils/elog.h:317:29: warning: declaration of\n‘_save_exception_stack’ shadows a previous local\n../../../src/include/utils/elog.h:318:39: warning: declaration of\n‘_save_context_stack’ shadows a previous local\n../../../src/include/utils/elog.h:319:28: warning: declaration of\n‘_local_sigjmp_buf’ shadows a previous local\n../../../src/include/utils/elog.h:320:22: warning: declaration of\n‘_do_rethrow’ shadows a previous local\npgbench.c:7509:40: warning: declaration of ‘now’ shadows a previous local\n\nThe first 4 of those are due to a nested PG_TRY(). The final one I\njust ran out of inspiration on what to rename the variable to.\n\nIf there are no objections then I'll push this in the next day or 2.\n\nDavid",
"msg_date": "Tue, 4 Oct 2022 14:27:09 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, Oct 04, 2022 at 02:27:09PM +1300, David Rowley wrote:\n> On Tue, 30 Aug 2022 at 17:44, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Would you check if any of these changes are good enough ?\n> \n> I looked through v5.txt and modified it so that the fix for the shadow\n> warnings are more aligned to the spreadsheet I created.\n\nThanks\n\n> diff --git a/src/backend/utils/adt/datetime.c b/src/backend/utils/adt/datetime.c\n> index 350039cc86..7848deeea9 100644\n> --- a/src/backend/utils/adt/datetime.c\n> +++ b/src/backend/utils/adt/datetime.c\n> @@ -1019,17 +1019,17 @@ DecodeDateTime(char **field, int *ftype, int nf,\n> \t\t\t\tif (ptype == DTK_JULIAN)\n> \t\t\t\t{\n> \t\t\t\t\tchar\t *cp;\n> -\t\t\t\t\tint\t\t\tval;\n> +\t\t\t\t\tint\t\t\tjday;\n> \n> \t\t\t\t\tif (tzp == NULL)\n> \t\t\t\t\t\treturn DTERR_BAD_FORMAT;\n> \n> \t\t\t\t\terrno = 0;\n> -\t\t\t\t\tval = strtoint(field[i], &cp, 10);\n> +\t\t\t\t\tjday = strtoint(field[i], &cp, 10);\n> \t\t\t\t\tif (errno == ERANGE || val < 0)\n> \t\t\t\t\t\treturn DTERR_FIELD_OVERFLOW;\n\nHere, you forgot to change \"val < 0\".\n\nI tried to see how to make that fail (differently) but can't see yet how\npass a negative julian date..\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 3 Oct 2022 21:30:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, 4 Oct 2022 at 15:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Here, you forgot to change \"val < 0\".\n\nThanks. I made another review pass of each change to ensure I didn't\nmiss any others. There were no other issues, so I pushed the adjusted\npatch.\n\n5 warnings remain. 4 of these are for PG_TRY() and co.\n\nDavid\n\n\n",
"msg_date": "Wed, 5 Oct 2022 21:05:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, 5 Oct 2022 at 21:05, David Rowley <dgrowleyml@gmail.com> wrote:\n> 5 warnings remain. 4 of these are for PG_TRY() and co.\n\nI've attached a draft patch for a method I was considering to fix the\nwarnings we're getting from the nested PG_TRY() statement in\nutility.c.\n\nThe C preprocessor does not allow name overloading in macros, but of\ncourse, it does allow variable argument marcos with ... so I just\nused that and added ##__VA_ARGS__ to each variable. I think that\nshould work ok providing callers only supply 0 or 1 arguments to the\nmacro, and of course, make that parameter value the same for each set\nof macros used in the PG_TRY() statement.\n\nThe good thing about the optional argument is that we don't need to\ntouch any existing users of PG_TRY(). The attached just modifies the\ninner-most PG_TRY() in the only nested PG_TRY() we have in the tree in\nutility.c.\n\nThe only warning remaining after applying the attached is the \"now\"\nwarning in pgbench.c:7509. I'd considered changing this to \"thenow\"\nwhich translates to \"right now\" in the part of Scotland that I'm from.\nI also considered \"nownow\", which is used in South Africa [1].\nAnyway, I'm not really being serious, but I didn't come up with\nanything better than \"now2\". It's just I didn't like that as it sort\nof implies there are multiple definitions of \"now\" and I struggle with\nthat... maybe I'm just thinking too much in terms of Newtonian\nRelativity...\n\nDavid\n\n[1] https://www.goodthingsguy.com/fun/now-now-just-now/",
"msg_date": "Wed, 5 Oct 2022 23:22:33 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-Oct-05, David Rowley wrote:\n\n> The only warning remaining after applying the attached is the \"now\"\n> warning in pgbench.c:7509. I'd considered changing this to \"thenow\"\n> which translates to \"right now\" in the part of Scotland that I'm from.\n> I also considered \"nownow\", which is used in South Africa [1].\n> Anyway, I'm not really being serious, but I didn't come up with\n> anything better than \"now2\". It's just I didn't like that as it sort\n> of implies there are multiple definitions of \"now\" and I struggle with\n> that... maybe I'm just thinking too much in terms of Newtonian\n> Relativity...\n\n:-D\n\nA simpler idea might be to just remove the inner declaration, and have\nthat block set the outer var. There's no damage, since the block is\ngoing to end and not access the previous value anymore.\n\ndiff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\nindex aa1a3541fe..91a067859b 100644\n--- a/src/bin/pgbench/pgbench.c\n+++ b/src/bin/pgbench/pgbench.c\n@@ -7506,7 +7506,7 @@ threadRun(void *arg)\n \t\t/* progress report is made by thread 0 for all threads */\n \t\tif (progress && thread->tid == 0)\n \t\t{\n-\t\t\tpg_time_usec_t now = pg_time_now();\n+\t\t\tnow = pg_time_now();\t/* not lazy; clobbers outer value */\n \n \t\t\tif (now >= next_report)\n \t\t\t{\n\n\nThe \"now now\" reference reminded me of \"ahorita\"\nhttps://doorwaytomexico.com/paulina/ahorita-meaning-examples/\nwhich is source of misunderstandings across borders in South America ...\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The important things in the world are problems with society that we don't\nunderstand at all. The machines will become more complicated but they won't\nbe more complicated than the societies that run them.\" (Freeman Dyson)\n\n\n",
"msg_date": "Wed, 5 Oct 2022 15:34:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I've attached a draft patch for a method I was considering to fix the\n> warnings we're getting from the nested PG_TRY() statement in\n> utility.c.\n\n+1\n\n> The only warning remaining after applying the attached is the \"now\"\n> warning in pgbench.c:7509. I'd considered changing this to \"thenow\"\n> which translates to \"right now\" in the part of Scotland that I'm from.\n> I also considered \"nownow\", which is used in South Africa [1].\n> Anyway, I'm not really being serious, but I didn't come up with\n> anything better than \"now2\".\n\nYeah, \"now2\" seems as reasonable as anything.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Oct 2022 10:19:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 03:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > I've attached a draft patch for a method I was considering to fix the\n> > warnings we're getting from the nested PG_TRY() statement in\n> > utility.c.\n>\n> +1\n\nPushed.\n\n> > The only warning remaining after applying the attached is the \"now\"\n> > warning in pgbench.c:7509. I'd considered changing this to \"thenow\"\n> > which translates to \"right now\" in the part of Scotland that I'm from.\n> > I also considered \"nownow\", which is used in South Africa [1].\n> > Anyway, I'm not really being serious, but I didn't come up with\n> > anything better than \"now2\".\n>\n> Yeah, \"now2\" seems as reasonable as anything.\n\nAlso pushed. (Thanks for saving me on that one.)\n\nDavid\n\n\n",
"msg_date": "Thu, 6 Oct 2022 10:21:41 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 10:21:41 +1300, David Rowley wrote:\n> Also pushed. (Thanks for saving me on that one.)\n\nYour commit message said the last shadowed variable. But building with\n-Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at\nthe end). Looks like it \"only\" fixed it for src/, without optional\ndependencies like gssapi and python.\n\nI think we should add -Wshadow=compatible-local to our sets of warning flags\nafter fixing those.\n\n\n[237/1827 42 12%] Compiling C object src/interfaces/libpq/libpq.a.p/fe-secure-gssapi.c.o\n../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c: In function ‘pg_GSS_write’:\n../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c:138:41: warning: declaration of ‘ret’ shadows a previous local [-Wshadow=compatible-local]\n 138 | ssize_t ret;\n | ^~~\n../../../../home/andres/src/postgresql/src/interfaces/libpq/fe-secure-gssapi.c:92:25: note: shadowed declaration is here\n 92 | ssize_t ret = -1;\n | ^~~\n\n\n[1283/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_cursorobject.c.o\nIn file included from ../../../../home/andres/src/postgresql/src/include/postgres.h:48,\n from ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_cursorobject.c:7:\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_cursorobject.c: In function ‘PLy_cursor_plan’:\n../../../../home/andres/src/postgresql/src/include/utils/elog.h:325:29: warning: declaration of ‘_save_exception_stack’ shadows a previous local [-Wshadow=compatible-local]\n 325 | sigjmp_buf *_save_exception_stack##__VA_ARGS__ = PG_exception_stack; \\\n | ^~~~~~~~~~~~~~~~~~~~~\n...\n\n[1289/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_exec.c.o\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c: In function ‘PLy_exec_trigger’:\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:378:46: warning: declaration of 
‘tdata’ shadows a previous local [-Wshadow=compatible-local]\n 378 | TriggerData *tdata = (TriggerData *) fcinfo->context;\n | ^~~~~\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:310:22: note: shadowed declaration is here\n 310 | TriggerData *tdata;\n | ^~~~~\n\n[1291/1827 42 70%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_spi.c.o\nIn file included from ../../../../home/andres/src/postgresql/src/include/postgres.h:48,\n from ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_spi.c:7:\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_spi.c: In function ‘PLy_spi_execute_plan’:\n../../../../home/andres/src/postgresql/src/include/utils/elog.h:325:29: warning: declaration of ‘_save_exception_stack’ shadows a previous local [-Wshadow=compatible-local]\n 325 | sigjmp_buf *_save_exception_stack##__VA_ARGS__ = PG_exception_stack; \\\n | ^~~~~~~~~~~~~~~~~~~~~\n\n\n[1344/1827 42 73%] Compiling C object contrib/bloom/bloom.so.p/blinsert.c.o\n../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c: In function ‘blinsert’:\n../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c:235:33: warning: declaration of ‘page’ shadows a previous local [-Wshadow=compatible-local]\n 235 | Page page;\n | ^~~~\n../../../../home/andres/src/postgresql/contrib/bloom/blinsert.c:210:25: note: shadowed declaration is here\n 210 | Page page,\n | ^~~~\n\n[1415/1827 42 77%] Compiling C object contrib/file_fdw/file_fdw.so.p/file_fdw.c.o\n../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c: In function ‘get_file_fdw_attribute_options’:\n../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c:453:29: warning: declaration of ‘options’ shadows a previous local [-Wshadow=compatible-local]\n 453 | List *options;\n | ^~~~~~~\n../../../../home/andres/src/postgresql/contrib/file_fdw/file_fdw.c:443:21: note: shadowed declaration is here\n 443 | List *options = NIL;\n | ^~~~~~~\n\n[1441/1827 42 78%] Compiling C object 
contrib/hstore/hstore.so.p/hstore_io.c.o\nIn file included from ../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:12:\n../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c: In function ‘hstorePairs’:\n../../../../home/andres/src/postgresql/contrib/hstore/hstore.h:131:21: warning: declaration of ‘buflen’ shadows a parameter [-Wshadow=compatible-local]\n 131 | int buflen = (ptr_) - (buf_); \\\n | ^~~~~~\n../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:411:9: note: in expansion of macro ‘HS_FINALIZE’\n 411 | HS_FINALIZE(out, pcount, buf, ptr);\n | ^~~~~~~~~~~\n../../../../home/andres/src/postgresql/contrib/hstore/hstore_io.c:388:47: note: shadowed declaration is here\n 388 | hstorePairs(Pairs *pairs, int32 pcount, int32 buflen)\n | ~~~~~~^~~~~~\n\n[1564/1827 42 85%] Compiling C object contrib/postgres_fdw/postgres_fdw.so.p/deparse.c.o\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c: In function ‘foreign_expr_walker’:\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c:946:53: warning: declaration of ‘lc’ shadows a previous local [-Wshadow=compatible-local]\n 946 | ListCell *lc;\n | ^~\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/deparse.c:904:45: note: shadowed declaration is here\n 904 | ListCell *lc;\n | ^~\n\n[1575/1827 38 86%] Compiling C object src/test/modules/test_integerset/test_integerset.so.p/test_integerset.c.o\n../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c: In function ‘test_huge_distances’:\n../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c:588:33: warning: declaration of ‘x’ shadows a previous local [-Wshadow=compatible-local]\n 588 | uint64 x = values[i];\n | ^\n../../../../home/andres/src/postgresql/src/test/modules/test_integerset/test_integerset.c:526:25: note: shadowed declaration is here\n 526 | uint64 x;\n | ^\n\n[1633/1827 3 89%] Compiling C object 
contrib/postgres_fdw/postgres_fdw.so.p/postgres_fdw.c.o\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c: In function ‘postgresGetForeignPlan’:\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c:1344:37: warning: declaration of ‘lc’ shadows a previous local [-Wshadow=compatible-local]\n 1344 | ListCell *lc;\n | ^~\n../../../../home/andres/src/postgresql/contrib/postgres_fdw/postgres_fdw.c:1238:21: note: shadowed declaration is here\n 1238 | ListCell *lc;\n | ^~\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Oct 2022 14:40:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 02:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> A simpler idea might be to just remove the inner declaration, and have\n> that block set the outer var. There's no damage, since the block is\n> going to end and not access the previous value anymore.\n>\n> diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> index aa1a3541fe..91a067859b 100644\n> --- a/src/bin/pgbench/pgbench.c\n> +++ b/src/bin/pgbench/pgbench.c\n> @@ -7506,7 +7506,7 @@ threadRun(void *arg)\n> /* progress report is made by thread 0 for all threads */\n> if (progress && thread->tid == 0)\n> {\n> - pg_time_usec_t now = pg_time_now();\n> + now = pg_time_now(); /* not lazy; clobbers outer value */\n\nI didn't want to do it that way because all this code is in a while\nloop and the outer \"now\" will be reused after it's set by the code\nabove. It's not really immediately obvious to me what repercussions\nthat would have, but it didn't seem worth taking any risks.\n\nDavid\n\n\n",
"msg_date": "Thu, 6 Oct 2022 11:46:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 10:40, Andres Freund <andres@anarazel.de> wrote:\n> Your commit message said the last shadowed variable. But building with\n> -Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at\n> the end). Looks like it \"only\" fixed it for src/, without optional\n> dependencies like gssapi and python.\n\nWell, that's embarrassing. You're right. I only fixed the ones I saw\nfrom running make in the base directory of the tree. I'll set about\nfixing these nownow.\n\nDavid\n\n\n",
"msg_date": "Thu, 6 Oct 2022 11:50:27 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 11:50, David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 6 Oct 2022 at 10:40, Andres Freund <andres@anarazel.de> wrote:\n> > Your commit message said the last shadowed variable. But building with\n> > -Wshadow=compatible-local triggers a bunch of warnings for me (see trimmed at\n> > the end). Looks like it \"only\" fixed it for src/, without optional\n> > dependencies like gssapi and python.\n>\n> Well, that's embarrassing. You're right. I only fixed the ones I saw\n> from running make in the base directory of the tree. I'll set about\n> fixing these nownow.\n\nHere's a patch which (I think) fixes the ones I missed.\n\nDavid",
"msg_date": "Thu, 6 Oct 2022 13:00:41 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-06 13:00:41 +1300, David Rowley wrote:\n> Here's a patch which (I think) fixes the ones I missed.\n\nYep, does the trick for me.\n\nI attached a patch to add -Wshadow=compatible-local to our set of warnings.\n\n\n> diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h\n> index 4713e6ea7a..897af244a4 100644\n> --- a/contrib/hstore/hstore.h\n> +++ b/contrib/hstore/hstore.h\n> @@ -128,15 +128,15 @@ typedef struct\n> /* finalize a newly-constructed hstore */\n> #define HS_FINALIZE(hsp_,count_,buf_,ptr_)\t\t\t\t\t\t\t\\\n> \tdo {\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> -\t\tint buflen = (ptr_) - (buf_);\t\t\t\t\t\t\t\t\\\n> +\t\tint _buflen = (ptr_) - (buf_);\t\t\t\t\t\t\t\t\\\n\nNot pretty. Given that HS_FINALIZE already has multiple-eval hazards, perhaps\nwe could just remove the local?\n\n\n\n> --- a/src/interfaces/libpq/fe-secure-gssapi.c\n> +++ b/src/interfaces/libpq/fe-secure-gssapi.c\n> @@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)\n> \t\t */\n> \t\tif (PqGSSSendLength)\n> \t\t{\n> -\t\t\tssize_t\t\tret;\n> +\t\t\tssize_t\t\tretval;\n\nThat looks like it could easily lead to confusion further down the\nline. Wouldn't the better fix here be to remove the inner variable?\n\n\n> --- a/src/pl/plpython/plpy_exec.c\n> +++ b/src/pl/plpython/plpy_exec.c\n> @@ -375,11 +375,11 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)\n> \t\t\t\trv = NULL;\n> \t\t\telse if (pg_strcasecmp(srv, \"MODIFY\") == 0)\n> \t\t\t{\n> -\t\t\t\tTriggerData *tdata = (TriggerData *) fcinfo->context;\n> +\t\t\t\tTriggerData *trigdata = (TriggerData *) fcinfo->context;\n> \n> -\t\t\t\tif (TRIGGER_FIRED_BY_INSERT(tdata->tg_event) ||\n> -\t\t\t\t\tTRIGGER_FIRED_BY_UPDATE(tdata->tg_event))\n> -\t\t\t\t\trv = PLy_modify_tuple(proc, plargs, tdata, rv);\n> +\t\t\t\tif (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event) ||\n> +\t\t\t\t\tTRIGGER_FIRED_BY_UPDATE(trigdata->tg_event))\n> +\t\t\t\t\trv = PLy_modify_tuple(proc, plargs, trigdata, rv);\n> \t\t\t\telse\n> \t\t\t\t\tereport(WARNING,\n> \t\t\t\t\t\t\t(errmsg(\"PL/Python trigger function returned \\\"MODIFY\\\" in a DELETE trigger -- ignored\")));\n\nThis doesn't strike me as a good fix either. Isn't the inner tdata exactly\nthe same as the outer tdata?\n\n\ttdata = (TriggerData *) fcinfo->context;\n...\n\t\t\t\tTriggerData *trigdata = (TriggerData *) fcinfo->context;\n\n\n\n> --- a/src/test/modules/test_integerset/test_integerset.c\n> +++ b/src/test/modules/test_integerset/test_integerset.c\n> @@ -585,26 +585,26 @@ test_huge_distances(void)\n\nThis is one of the cases where our insistence on -Wdeclaration-after-statement\nreally makes this unnecessary ugly... Declaring x at the start of the function\njust makes this harder to read.\n\nAnyway, this isn't important code, and your fix seem ok.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 5 Oct 2022 17:39:20 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 13:39, Andres Freund <andres@anarazel.de> wrote:\n> I attached a patch to add -Wshadow=compatible-local to our set of warnings.\n\nThanks for writing that and for looking at the patch.\n\nFWIW, I'm +1 for having this part of our default compilation flags. I\ndon't want to have to revisit this on a yearly basis. I imagine Justin\ndoesn't want to do that either. I feel that since this work has\nalready uncovered 2 existing bugs that it's worth having this as a\ndefault compilation flag. Additionally, in the cases like in the\nPLy_exec_trigger() trigger case below, I feel this has resulted in\nslightly more simple code that's easier to follow. I feel having to be\nslightly more inventive with variable names in a small number of cases\nis worth the trouble. I feel the cases where this could get annoying\nare probably limited to variables declared in macros. Maybe that's\njust a reason to consider static inline functions instead. That\nwouldn't work for macros such as PG_TRY(), but I think macros in that\ncategory are rare. I think switching it on does not mean we can never\nswitch it off again should we ever find something we're unable to work\naround. That just seems a little unlikely given that with the prior\ncommits plus the attached patch, we've managed to fix ~30 years worth\nof opportunity to introduce shadowed local variables.\n\n> > diff --git a/contrib/hstore/hstore.h b/contrib/hstore/hstore.h\n> > #define HS_FINALIZE(hsp_,count_,buf_,ptr_) \\\n> > do { \\\n> > - int buflen = (ptr_) - (buf_); \\\n> > + int _buflen = (ptr_) - (buf_); \\\n>\n> Not pretty. Given that HS_FINALIZE already has multiple-eval hazards, perhaps\n> we could just remove the local?\n\nYou're right. It's not that pretty, but I don't feel like making the\nhazards any worse is a good idea. This is old code. I'd rather change\nit as little as possible to minimise the risk of introducing any bugs.\nI'm open to other names for the variable, but I just don't want to\nwiden the scope for multiple evaluation hazards.\n\n> > --- a/src/interfaces/libpq/fe-secure-gssapi.c\n> > +++ b/src/interfaces/libpq/fe-secure-gssapi.c\n> > @@ -135,11 +135,11 @@ pg_GSS_write(PGconn *conn, const void *ptr, size_t len)\n> > - ssize_t ret;\n> > + ssize_t retval;\n>\n> That looks like it could easily lead to confusion further down the\n> line. Wouldn't the better fix here be to remove the inner variable?\n\nhmm. You're maybe able to see something I can't there, but to me, it\nlooks like reusing the outer variable could change the behaviour of\nthe function. Note at the end of the function we set \"ret\" just\nbefore the goto label. It looks like it might be possible for the\ngoto to jump to the point after \"ret = bytes_sent;\", in which case we\nshould return -1, the default value for the outer \"ret\". If I go and\nreuse the outer \"ret\" for something else then it'll return whatever\nvalue it's left set to. I could study the code more and perhaps work\nout that that cannot happen, but if it can't then it's really not\nobvious to me and if it's not obvious then I just don't feel the need\nto take any undue risks by reusing the outer variable. I'm open to\nbetter names, but I'd just rather not reuse the outer scoped variable.\n\n> > --- a/src/pl/plpython/plpy_exec.c\n> > +++ b/src/pl/plpython/plpy_exec.c\n> > @@ -375,11 +375,11 @@ PLy_exec_trigger(FunctionCallInfo fcinfo, PLyProcedure *proc)\n> > - TriggerData *tdata = (TriggerData *) fcinfo->context;\n> > + TriggerData *trigdata = (TriggerData *) fcinfo->context;\n\n> This doesn't strike me as a good fix either. Isn't the inner tdata exactly\n> the same as the outer data?\n\nYeah, you're right. I've adjusted the patch to use the outer scoped\nvariable and get rid of the inner scoped one.\n\n> > --- a/src/test/modules/test_integerset/test_integerset.c\n> > +++ b/src/test/modules/test_integerset/test_integerset.c\n> > @@ -585,26 +585,26 @@ test_huge_distances(void)\n>\n> This is one of the cases where our insistence on -Wdeclaration-after-statement\n> really makes this unnecessary ugly... Declaring x at the start of the function\n> just makes this harder to read.\n\nYeah, it's not pretty. Maybe one day we'll relax that rule. Until\nthen, I think it's not worth expending too much thought on a test\nmodule.\n\nDavid",
"msg_date": "Thu, 6 Oct 2022 16:32:25 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-Oct-06, David Rowley wrote:\n\n> On Thu, 6 Oct 2022 at 02:34, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > A simpler idea might be to just remove the inner declaration, and have\n> > that block set the outer var. There's no damage, since the block is\n> > going to end and not access the previous value anymore.\n> >\n> > diff --git a/src/bin/pgbench/pgbench.c b/src/bin/pgbench/pgbench.c\n> > index aa1a3541fe..91a067859b 100644\n> > --- a/src/bin/pgbench/pgbench.c\n> > +++ b/src/bin/pgbench/pgbench.c\n> > @@ -7506,7 +7506,7 @@ threadRun(void *arg)\n> > /* progress report is made by thread 0 for all threads */\n> > if (progress && thread->tid == 0)\n> > {\n> > - pg_time_usec_t now = pg_time_now();\n> > + now = pg_time_now(); /* not lazy; clobbers outer value */\n> \n> I didn't want to do it that way because all this code is in a while\n> loop and the outer \"now\" will be reused after it's set by the code\n> above. It's not really immediately obvious to me what repercussions\n> that would have, but it didn't seem worth taking any risks.\n\nNo, it's re-initialized to zero every time through the loop, so setting\nit to something else at the bottom doesn't have any further effect.\n\nIf it were *not* reinitialized every time through the loop, then what\nwould happen is that every iteration in the loop (and each operation\nwithin) would see exactly the same value of \"now\", because it's only set\n\"lazily\" (meaning, if already set, don't change it.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 6 Oct 2022 09:32:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 20:32, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Oct-06, David Rowley wrote:\n> > I didn't want to do it that way because all this code is in a while\n> > loop and the outer \"now\" will be reused after it's set by the code\n> > above. It's not really immediately obvious to me what repercussions\n> > that would have, but it didn't seem worth taking any risks.\n>\n> No, it's re-initialized to zero every time through the loop, so setting\n> it to something else at the bottom doesn't have any further effect.\n\nOh yeah, you're right.\n\n> If it were *not* reinitialized every time through the loop, then what\n> would happen is that every iteration in the loop (and each operation\n> within) would see exactly the same value of \"now\", because it's only set\n> \"lazily\" (meaning, if already set, don't change it.)\n\nOn my misread, that's what I was afraid of changing, but now seeing\nthat now = 0 at the start of each loop, I understand that\npg_time_now_lazy will get an up-to-date time on each loop.\n\nI'm happy if you want to change it to use the outer scoped variable\ninstead of the now2 one.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Oct 2022 09:13:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Thu, 6 Oct 2022 at 13:39, Andres Freund <andres@anarazel.de> wrote:\n> I attached a patch to add -Wshadow=compatible-local to our set of warnings.\n\nSince I just committed the patch to fix the final warnings, I think we\nshould go ahead and commit the patch you wrote to add\n-Wshadow=compatible-local to the standard build flags. I don't mind\ndoing this.\n\nDoes anyone think we shouldn't do it? Please let it be known soon.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Oct 2022 13:24:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Fri, 7 Oct 2022 at 13:24, David Rowley <dgrowleyml@gmail.com> wrote:\n> Since I just committed the patch to fix the final warnings, I think we\n> should go ahead and commit the patch you wrote to add\n> -Wshadow=compatible-local to the standard build flags. I don't mind\n> doing this.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Fri, 7 Oct 2022 16:51:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Fri, 7 Oct 2022 at 13:24, David Rowley <dgrowleyml@gmail.com> wrote:\n>> Since I just committed the patch to fix the final warnings, I think we\n>> should go ahead and commit the patch you wrote to add\n>> -Wshadow=compatible-local to the standard build flags. I don't mind\n>> doing this.\n\n> Pushed.\n\nThe buildfarm's showing a few instances of this warning, which seem\nto indicate that not all versions of the Perl headers are clean:\n\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n fairywren | 2022-10-10 09:03:50 | C:/Perl64/lib/CORE/cop.h:612:13: warning: declaration of 'av' shadows a previous local [-Wshadow=compatible-local]\n snakefly | 2022-10-10 08:21:05 | Util.c:457:14: warning: declaration of 'cv' shadows a parameter [-Wshadow=compatible-local]\n\nBefore you ask:\n\nfairywren: perl 5.24.3\nsnakefly: perl 5.16.3\n\nwhich are a little old, but not *that* old.\n\nScraping the configure logs also shows that only half of the buildfarm\n(exactly 50 out of 100 reporting animals) knows -Wshadow=compatible-local,\nwhich suggests that we might see more of these if they all did. On the\nother hand, animals with newer compilers probably also have newer Perl\ninstallations, so assuming that the Perl crew have kept this clean\nrecently, maybe not.\n\nNot sure if this is problematic enough to justify removing the switch.\nA plausible alternative is to have a few animals with known-clean Perl\ninstallations add the switch manually (and use -Werror), so that we find\nout about violations without having warnings in the face of developers\nwho can't fix them. I'm willing to wait to see if anyone complains of\nsuch warnings, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 12:06:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-10 12:06:22 -0400, Tom Lane wrote:\n> Scraping the configure logs also shows that only half of the buildfarm\n> (exactly 50 out of 100 reporting animals) knows -Wshadow=compatible-local,\n> which suggests that we might see more of these if they all did.\n\nI think it's not just newness - only gcc has compatible-local, even very new\nclang doesn't.\n\n\nThis was fixed ~6 years ago in perl:\n\ncommit f2b9631d5d19d2b71c1776e1193173d13f3620bf\nAuthor: David Mitchell <davem@iabyn.com>\nDate: 2016-05-23 14:43:56 +0100\n\n CX_POP_SAVEARRAY(): use more distinctive var name\n\n Under -Wshadow, CX_POP_SAVEARRAY's local var 'av' can generate this\n warning:\n\n warning: declaration shadows a local variable [-Wshadow]\n\n So rename it to cx_pop_savearay_av to reduce the risk of a clash.\n\n (See http://nntp.perl.org/group/perl.perl5.porters/236444)\n\n\n> Not sure if this is problematic enough to justify removing the switch.\n> A plausible alternative is to have a few animals with known-clean Perl\n> installations add the switch manually (and use -Werror), so that we find\n> out about violations without having warnings in the face of developers\n> who can't fix them. I'm willing to wait to see if anyone complains of\n> such warnings, though.\n\nGiven the age of affected perl instances I suspect there'll not be a lot of\ndevelopers affected, and the number of warnings is reasonably small too. It'd\nlikely hurt more developers to not see the warnings locally, given that such\nshadowing often causes bugs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Oct 2022 09:27:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-Oct-10, Andres Freund wrote:\n\n> Given the age of affected perl instances I suspect there'll not be a lot of\n> developers affected, and the number of warnings is reasonably small too. It'd\n> likely hurt more developers to not see the warnings locally, given that such\n> shadowing often causes bugs.\n\nMaybe we can install a filter-out in src/pl/plperl's Makefile for the\ntime being.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Por suerte hoy explotó el califont porque si no me habría muerto\n de aburrido\" (Papelucho)\n\n\n",
"msg_date": "Mon, 10 Oct 2022 18:33:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:\n> On 2022-Oct-10, Andres Freund wrote:\n> \n> > Given the age of affected perl instances I suspect there'll not be a lot of\n> > developers affected, and the number of warnings is reasonably small too. It'd\n> > likely hurt more developers to not see the warnings locally, given that such\n> > shadowing often causes bugs.\n> \n> Maybe we can install a filter-out in src/pl/plperl's Makefile for the\n> time being.\n\nWe could, but is it really a useful thing for something fixed 6 years ago?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 10 Oct 2022 09:37:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-10-10 09:37:38 -0700, Andres Freund wrote:\n> On 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:\n> > On 2022-Oct-10, Andres Freund wrote:\n> > \n> > > Given the age of affected perl instances I suspect there'll not be a lot of\n> > > developers affected, and the number of warnings is reasonably small too. It'd\n> > > likely hurt more developers to not see the warnings locally, given that such\n> > > shadowing often causes bugs.\n> > \n> > Maybe we can install a filter-out in src/pl/plperl's Makefile for the\n> > time being.\n> \n> We could, but is it really a useful thing for something fixed 6 years ago?\n\nAs an out, a hypothetical dev could add -Wno-shadow=compatible-local to their\nCFLAGS.\n\n\n",
"msg_date": "Mon, 10 Oct 2022 09:45:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-Oct-10, Andres Freund wrote:\n\n> On 2022-10-10 09:37:38 -0700, Andres Freund wrote:\n> > On 2022-10-10 18:33:11 +0200, Alvaro Herrera wrote:\n> > > On 2022-Oct-10, Andres Freund wrote:\n> > > \n> > > > Given the age of affected perl instances I suspect there'll not be a lot of\n> > > > developers affected, and the number of warnings is reasonably small too. It'd\n> > > > likely hurt more developers to not see the warnings locally, given that such\n> > > > shadowing often causes bugs.\n> > > \n> > > Maybe we can install a filter-out in src/pl/plperl's Makefile for the\n> > > time being.\n> > \n> > We could, but is it really a useful thing for something fixed 6 years ago?\n\nWell, for people purposefully installing using older installs of Perl\n(not me, admittedly), it does seem useful, because you get the benefit\nof checking shadow vars for the rest of the tree and still get no\nwarnings if everything is clean.\n\n> As an out, a hypothetical dev could add -Wno-shadow=compatible-local to their\n> CFLAGS.\n\nBut that disables it for the tree as a whole, which is not better.\n\nWe can remove the filter-out when we decide to move the Perl version\nrequirement up, say 4 years from now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El hombre nunca sabe de lo que es capaz hasta que lo intenta\" (C. Dickens)\n\n\n",
"msg_date": "Mon, 10 Oct 2022 18:53:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Oct-10, Andres Freund wrote:\n>> We could, but is it really a useful thing for something fixed 6 years ago?\n\n> Well, for people purposefully installing using older installs of Perl\n> (not me, admittedly), it does seem useful, because you get the benefit\n> of checking shadow vars for the rest of the tree and still get no\n> warnings if everything is clean.\n\nMeh --- people purposefully using old Perls are likely using old\ncompilers too. Let's wait and see if any devs actually complain.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 10 Oct 2022 13:02:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, 11 Oct 2022 at 06:02, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> > On 2022-Oct-10, Andres Freund wrote:\n> >> We could, but is it really a useful thing for something fixed 6 years ago?\n>\n> > Well, for people purposefully installing using older installs of Perl\n> > (not me, admittedly), it does seem useful, because you get the benefit\n> > of checking shadow vars for the rest of the tree and still get no\n> > warnings if everything is clean.\n>\n> Meh --- people purposefully using old Perls are likely using old\n> compilers too. Let's wait and see if any devs actually complain.\n\nI can't really add much here, apart from I think it would be a shame\nif some 3rd party 6 year old code was to hold us back on this.\n\nI'm also keen to wait for complaints and only if we really have to,\nremove the shadow flag from being used only in the places where we\nneed to.\n\nAside from this issue, if anything I'd be keen to go a little further\nwith this and upgrade to -Wshadow=local. The reason being is that I\nnoticed that the const qualifier is not classed as \"compatible\" with\nthe equivalently named and typed variable without the const qualifier.\nISTM that there's close to as much opportunity to mix up two variables\nwith the same name that are const and non-const as there are two\nvariables with the same const qualifier. However, I'll be waiting for\nthe dust to settle on the current flags before thinking any more about\nthat.\n\nDavid\n\n\n",
"msg_date": "Tue, 11 Oct 2022 13:16:50 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Tue, Oct 11, 2022 at 01:16:50PM +1300, David Rowley wrote:\n> Aside from this issue, if anything I'd be keen to go a little further\n> with this and upgrade to -Wshadow=local. The reason being is that I\n> noticed that the const qualifier is not classed as \"compatible\" with\n> the equivalently named and typed variable without the const qualifier.\n> ISTM that there's close to as much opportunity to mix up two variables\n> with the same name that are const and non-const as there are two\n> variables with the same const qualifier. However, I'll be waiting for\n> the dust to settle on the current flags before thinking any more about\n> that.\n\n-Wshadow=compatible-local causes one extra warning in postgres.c with\n-DWRITE_READ_PARSE_PLAN_TREES:\npostgres.c: In function ‘pg_rewrite_query’:\npostgres.c:818:37: warning: declaration of ‘query’ shadows a parameter [-Wshadow=compatible-local]\n 818 | Query *query = lfirst_node(Query, lc);\n | ^~~~~\npostgres.c:771:25: note: shadowed declaration is here\n 771 | pg_rewrite_query(Query *query)\n | ~~~~~~~^~~~~\n\nSomething like the patch attached would deal with this one.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 10:39:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, 12 Oct 2022 at 14:39, Michael Paquier <michael@paquier.xyz> wrote:\n> -Wshadow=compatible-local causes one extra warning in postgres.c with\n> -DWRITE_READ_PARSE_PLAN_TREES:\n> postgres.c: In function ‘pg_rewrite_query’:\n> postgres.c:818:37: warning: declaration of ‘query’ shadows a parameter [-Wshadow=compatible-local]\n> 818 | Query *query = lfirst_node(Query, lc);\n> | ^~~~~\n> postgres.c:771:25: note: shadowed declaration is here\n> 771 | pg_rewrite_query(Query *query)\n> | ~~~~~~~^~~~~\n>\n> Something like the patch attached would deal with this one.\n\nThanks for finding that and coming up with the patch. It looks fine to\nme. Do you want to push it?\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Oct 2022 14:50:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 02:50:58PM +1300, David Rowley wrote:\n> Thanks for finding that and coming up with the patch. It looks fine to\n> me. Do you want to push it?\n\nThanks for double-checking. I'll do so shortly, I just got annoyed by\nthat for a few days :)\n\nThanks for your work on this thread to be able to push the switch by\ndefault, by the way.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 11:12:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
},
{
"msg_contents": "On 2022-Oct-11, David Rowley wrote:\n\n> I'm also keen to wait for complaints and only if we really have to,\n> remove the shadow flag from being used only in the places where we\n> need to.\n\n+1\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The problem with the future is that it keeps turning into the present\"\n(Hobbes)\n\n\n",
"msg_date": "Wed, 12 Oct 2022 15:26:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: shadow variables - pg15 edition"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that the comments regarding bit layouts for varlena headers\nin postgres.h are somewhat misleading. For instance, when reading:\n\n```\n00xxxxxx 4-byte length word, aligned, uncompressed data (up to 1G)\n```\n\n... one can assume this is a 00xxxxxx byte followed by another 4 bytes\n(which is wrong). Also one can read this as \"aligned, uncompressed\ndata\" (which again is wrong).\n\n```\n10000000 1-byte length word, unaligned, TOAST pointer\n```\n\nThis is misleading too. The comments above this line say that `struct\nvaratt_external` is a TOAST pointer. sizeof(varatt_external) = 16,\nplus 1 byte equals 17, right? However the documentation [1] claims the\nresult should be 18:\n\n\"\"\"\nAllowing for the varlena header bytes, the total size of an on-disk\nTOAST pointer datum is therefore 18 bytes regardless of the actual\nsize of the represented value.\n\"\"\"\n\nI did my best to get rid of any ambiguity. The patch is attached.\n\n[1]: https://www.postgresql.org/docs/current/storage-toast.html\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 17 Aug 2022 21:06:38 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 1:06 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi hackers,\n>\n> I noticed that the comments regarding bit layouts for varlena headers\n> in postgres.h are somewhat misleading. For instance, when reading:\n\nI agree it's confusing, but I don't think this patch is the right direction.\n\n> ```\n> 00xxxxxx 4-byte length word, aligned, uncompressed data (up to 1G)\n> ```\n>\n> ... one can assume this is a 00xxxxxx byte followed by another 4 bytes\n> (which is wrong). Also one can read this as \"aligned, uncompressed\n> data\" (which again is wrong).\n\n- * 00xxxxxx 4-byte length word, aligned, uncompressed data (up to 1G)\n+ * 00xxxxxx xxxxxxxx xxxxxxxx xxxxxxxx, uncompressed data (up to 1G)\n\nMaybe \"00xxxxxx 4-byte length word (aligned),\" is more clear about\nwhat is aligned. Also, adding all those xxx's obscures the point that\nwe only need to examine one byte to figure out what to do next.\n\n> ```\n> 10000000 1-byte length word, unaligned, TOAST pointer\n> ```\n>\n> This is misleading too. The comments above this line say that `struct\n> varatt_external` is a TOAST pointer. sizeof(varatt_external) = 16,\n> plus 1 byte equals 17, right? However the documentation [1] claims the\n> result should be 18:\n\nThe patch has:\n\n+ * In the third case the va_tag field (see varattrib_1b_e) is used to discern\n+ * the specific type and length of the pointer datum. On disk the \"xxx\" bits\n+ * currently always store sizeof(varatt_external) + 2.\n\n...so not sure where 17 came from.\n\n- * 10000000 1-byte length word, unaligned, TOAST pointer\n+ * 10000000 xxxxxxxx, TOAST pointer (struct varatt_external)\n\nThis implies that the header is two bytes, which is not accurate. That\nnext byte is a type tag:\n\n/* TOAST pointers are a subset of varattrib_1b with an identifying tag byte */\ntypedef struct\n{\nuint8 va_header; /* Always 0x80 or 0x01 */\nuint8 va_tag; /* Type of datum */\nchar va_data[FLEXIBLE_ARRAY_MEMBER]; /* Type-specific data */\n} varattrib_1b_e;\n\n...and does not always represent the on-disk length:\n\n/*\n * Type tag for the various sorts of \"TOAST pointer\" datums. The peculiar\n * value for VARTAG_ONDISK comes from a requirement for on-disk compatibility\n * with a previous notion that the tag field was the pointer datum's length.\n */\ntypedef enum vartag_external\n{\nVARTAG_INDIRECT = 1,\nVARTAG_EXPANDED_RO = 2,\nVARTAG_EXPANDED_RW = 3,\nVARTAG_ONDISK = 18\n} vartag_external;\n\nAnd I don't think the new comments referring to \"third case\", \"first\ntwo cases\", etc make it easier to follow.\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Aug 2022 11:14:34 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "Hi John,\n\nThanks for the feedback.\n\n> Maybe \"00xxxxxx 4-byte length word (aligned),\" is more clear about\n> what is aligned. Also, adding all those xxx's obscures the point that\n> we only need to examine one byte to figure out what to do next.\n\nIMO \"00xxxxxx 4-byte length word\" is still confusing. One can misread\nthis as a 00-xx-xx-xx hex value, where the first byte (not two bits)\nis 00h.\n\n> The patch has:\n>\n> + * In the third case the va_tag field (see varattrib_1b_e) is used to discern\n> + * the specific type and length of the pointer datum. On disk the \"xxx\" bits\n> + * currently always store sizeof(varatt_external) + 2.\n>\n> ...so not sure where 17 came from.\n\nRight, AFTER applying the patch it's clear that it's actually 18\nbytes. Currently the comment says \"1 byte followed by a TOAST pointer\n(16 bytes)\" which is wrong.\n\n> - * 10000000 1-byte length word, unaligned, TOAST pointer\n> + * 10000000 xxxxxxxx, TOAST pointer (struct varatt_external)\n>\n> This implies that the header is two bytes, which is not accurate. That\n> next byte is a type tag:\n> [...]\n> ...and does not always represent the on-disk length:\n\nWell, the comments don't say what is the header and what is the type\ntag. They merely describe the bit layouts. The patch doesn't seem to\nmake things worse in this respect. Do you think we should address this\ntoo? I suspect that describing the difference between the header and\nthe type tag here will create even more confusion.\n\n> And I don't think the new comments referring to \"third case\", \"first\n> two cases\", etc make it easier to follow.\n\nMaybe you are right. I'm open to suggestions.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 6 Sep 2022 13:18:52 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 5:19 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi John,\n>\n> Thanks for the feedback.\n>\n> > Maybe \"00xxxxxx 4-byte length word (aligned),\" is more clear about\n> > what is aligned. Also, adding all those xxx's obscures the point that\n> > we only need to examine one byte to figure out what to do next.\n>\n> IMO \"00xxxxxx 4-byte length word\" is still confusing. One can misread\n> this as a 00-xx-xx-xx hex value, where the first byte (not two bits)\n> is 00h.\n\nThe top of the comment literally says\n\n * Bit layouts for varlena headers on big-endian machines:\n\n...but maybe we can say at the top that we inspect the first byte to\ndetermine what kind of header it is. Or put the now-standard 0b in\nfront.\n\n> > The patch has:\n> >\n> > + * In the third case the va_tag field (see varattrib_1b_e) is used to discern\n> > + * the specific type and length of the pointer datum. On disk the \"xxx\" bits\n> > + * currently always store sizeof(varatt_external) + 2.\n> >\n> > ...so not sure where 17 came from.\n>\n> Right, AFTER applying the patch it's clear that it's actually 18\n> bytes.\n\nOkay, I see now that this quote from your first email:\n\n\"This is misleading too. The comments above this line say that `struct\nvaratt_external` is a TOAST pointer. sizeof(varatt_external) = 16,\nplus 1 byte equals 17, right? However the documentation [1] claims the\nresult should be 18:\"\n\n...is not your thought, but one of a fictional misled reader. I\nactually found this phrase more misleading than the header comments.\n:-)\n\nI think the problem is ambiguity about what a \"toast pointer\" is. This comment:\n\n * struct varatt_external is a traditional \"TOAST pointer\", that is, the\n\nhas caused people to think a toasted value in the main relation takes\nup 16 bytes on disk sizeof(varatt_external) = 16, when it's actually\n18. Is the 16 the \"toast pointer\" or the 18?\n\n> > - * 10000000 1-byte length word, unaligned, TOAST pointer\n> > + * 10000000 xxxxxxxx, TOAST pointer (struct varatt_external)\n> >\n> > This implies that the header is two bytes, which is not accurate. That\n> > next byte is a type tag:\n> > [...]\n> > ...and does not always represent the on-disk length:\n>\n> Well, the comments don't say what is the header and what is the type\n> tag.\n\nBecause the comments explain the following macros that read bits in\nthe *first* byte of a 1- or 4-byte header to determine what kind it\nis.\n\n> They merely describe the bit layouts. The patch doesn't seem to\n> make things worse in this respect. Do you think we should address this\n> too? I suspect that describing the difference between the header and\n> the type tag here will create even more confusion.\n\nI said nothing about describing the difference between the header and\ntype tag. The patch added xxx's for the type tag in a comment about\nthe header. This is more misleading than what is there now.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:54:43 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "Hi John,\n\nMany thanks for the feedback!\n\n> Or put the now-standard 0b in front.\n\nGood idea.\n\n> I think the problem is ambiguity about what a \"toast pointer\" is. This comment:\n>\n> * struct varatt_external is a traditional \"TOAST pointer\", that is, the\n\nRight. The comment for varatt_external says that it IS a TOAST\npointer. However the comments for varlena headers bit layout\nimplicitly include it into a TOAST pointer, which contradicts the\nprevious comments. I suggest we fix this ambiguity by explicitly\nenumerating the type tag in the comments for varlena headers.\n\n> The patch added xxx's for the type tag in a comment about\n> the header. This is more misleading than what is there now.\n\nOK, here is another attempt. Changes compared to v1:\n\n* \"xxx xxx xxx\" were removed, according to the feedback\n* 0b prefix was added in order to make sure the reader will not\nmisread this as a hex value\n* The clarification about the type tag was added\n* The references to \"first case\", \"second case\", etc were removed\n\nHopefully it's better now.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Sun, 11 Sep 2022 13:06:07 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "On Sun, Sep 11, 2022 at 5:06 PM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi John,\n>\n> Many thanks for the feedback!\n>\n> > Or put the now-standard 0b in front.\n>\n> Good idea.\n\nNow that I look at the results, though, it's distracting and not good\nfor readability. I'm not actually sure we need to do anything here,\nbut I am somewhat in favor of putting [un]aligned in parentheses, as\nalready discussed. Even there, in the first email you said:\n\n> Also one can read this as \"aligned, uncompressed\n> data\" (which again is wrong).\n\nI'm not sure it rises to the level of \"wrong\", because a blob of bytes\nimmediately after an aligned uint32 is in fact aligned. The important\nthing is: a zero byte is always either a padding byte or part of a\n4-byte header, so it's the alignment of the header we really care\nabout.\n\n> > I think the problem is ambiguity about what a \"toast pointer\" is. This comment:\n> >\n> > * struct varatt_external is a traditional \"TOAST pointer\", that is, the\n>\n> Right. The comment for varatt_external says that it IS a TOAST\n> pointer.\n\nWell, the word \"traditional\" is not very informative, but it is there.\nAnd afterwards there is also varatt_indirect, varatt_expanded, and\nvarattrib_1b_e, which all mention \"TOAST pointer\".\n\n> However the comments for varlena headers bit layout\n> implicitly include it into a TOAST pointer, which contradicts the\n> previous comments. I suggest we fix this ambiguity by explicitly\n> enumerating the type tag in the comments for varlena headers.\n\n- * 10000000 1-byte length word, unaligned, TOAST pointer\n+ * 0b10000000 1-byte length word (unaligned), type tag, TOAST pointer\n\nThis is distracting from the point of this whole comment, which, I\nwill say again is: How to look at the first byte to determine what\nkind of varlena we're looking at. There is no reason to mention the\ntype tag here, at all.\n\n- * In TOAST pointers the va_tag field (see varattrib_1b_e) is used to discern\n- * the specific type and length of the pointer datum.\n+ * For the TOAST pointers the type tag (see varattrib_1b_e.va_tag field) is\n+ * used to discern the specific type and length of the pointer datum.\n\nI don't think this clarifies anything, it's just a rephrasing.\n\nMore broadly, I think the description of varlenas in this header is at\na kind of \"local maximum\" -- minor adjustments are more likely to make\nit worse. To significantly improve clarity might require a larger\nrewriting, but I'm not personally interested in taking part in that.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 12 Sep 2022 11:47:07 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
},
{
"msg_contents": "On Mon, Sep 12, 2022 at 11:47:07AM +0700, John Naylor wrote:\n> I don't think this clarifies anything, it's just a rephrasing.\n> \n> More broadly, I think the description of varlenas in this header is at\n> a kind of \"local maximum\" -- minor adjustments are more likely to make\n> it worse. To significantly improve clarity might require a larger\n> rewriting, but I'm not personally interested in taking part in that.\n\nThis has remained unanswered for four weeks now, so marked as returned\nwith feedback in the 2022-09 CF.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:26:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Clarify the comments about varlena header encoding"
}
] |
[
{
"msg_contents": "I've been working on having NOT NULL constraints have pg_constraint\nrows.\n\nEverything is working now. Some things are a bit weird, and I would\nlike opinions on them:\n\n1. In my implementation, you can have more than one NOT NULL\n pg_constraint row for a column. What should happen if the user does\n ALTER TABLE .. ALTER COLUMN .. DROP NOT NULL;\n ? Currently it throws an error about the ambiguity (ie. which\n constraint to drop).\n Using ALTER TABLE DROP CONSTRAINT works fine, and the 'attnotnull'\n bit is lost when the last such constraint goes away.\n\n2. If a table has a primary key, and a table is created that inherits\n from it, then the child has its column(s) marked attnotnull but there\n is no pg_constraint row for that. This is not okay. But what should\n happen?\n\n 1. a CHECK(col IS NOT NULL) constraint is created for each column\n 2. a PRIMARY KEY () constraint is created\n\nNote that I've chosen not to create CHECK(foo IS NOT NULL) pg_constraint\nrows for columns in the primary key, unless an explicit NOT NULL\ndeclaration is also given. Adding them would be a very easy solution\nto problem 2 above, but ISTM that such constraints would be redundant\nand not very nice.\n\nAfter gathering input on these things, I'll finish the patch and post it.\nAs far as I can tell, everything else is working (except the annoying\npg_dump tests, see below).\n\nThanks\n\nImplementation notes:\n\nIn the current implementation I am using CHECK constraints, so these\nconstraints are contype='c', conkey={col} and the corresponding\nexpression.\n\npg_attribute.attnotnull is still there, and it is set true when at least\none \"CHECK (col IS NOT NULL)\" constraint (and it's been validated) or\nPRIMARY KEY constraint exists for the column.\n\nCHECK constraint names are no longer \"tab_col_check\" when the expression\nis CHECK (foo IS NOT NULL). The constraint is now going to be named\n\"tab_col_not_null\"\n\nIf you say CREATE TABLE (a int NOT NULL), you'll get a CHECK constraint\nprinted by psql: (this is a bit more noisy than previously and it\nchanges a lot of regression test output).\n\n55489 16devel 1776237=# create table tab (a int not null);\nCREATE TABLE\n55489 16devel 1776237=# \\d tab\n Tabla «public.tab»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión \n─────────┼─────────┼──────────────┼──────────┼─────────────\n a │ integer │ │ not null │ \nRestricciones CHECK:\n \"tab_a_not_null\" CHECK (a IS NOT NULL)\n\n\npg_dump no longer prints NOT NULL in the table definition; rather, the\nCHECK constraint is dumped as a separate table constraint (still within\nthe CREATE TABLE statement though). This preserves any possible\nconstraint name, in case one was specified by the user at creation time.\n\nIn order to search for the correct constraint for each column for\nvarious DDL actions, I just inspect each pg_constraint row for the table\nand match conkey and the CHECK expression. Some things would be easier\nwith a new pg_attribute column that carries a pg_constraint.oid of the\nconstraint for that column; however, that seems to be just catalog bloat\nand is not normalized, so I decided not to do it.\n\nNice side-effect: if you add CHECK (foo IS NOT NULL) NOT VALID, and\nlater validate that constraint, the attnotnull bit becomes set.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Aug 2022 20:12:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "cataloguing NOT NULL constraints"
},
{
"msg_contents": "References to past discussions and patches:\n\nhttps://postgr.es/m/CAKOSWNkN6HSyatuys8xZxzRCR-KL1OkHS5-b9qd9bf1Rad3PLA@mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/1343682669-sup-2532@alvh.no-ip.org\nhttps://www.postgresql.org/message-id/20160109030002.GA671800@alvherre.pgsql\n\nI started this time around from the newest of my patches in those\nthreads, but the implementation has changed considerably from what's\nthere.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 17 Aug 2022 20:24:30 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": ">\n> I started this time around from the newest of my patches in those\n> threads, but the implementation has changed considerably from what's\n> there.\n>\n\nI don't know exactly what will be the scope of this process you're working\non, but there is a gap on foreign key constraint too.\nIt is possible to have wrong values on a FK constraint if you disable\nchecking of it with session_replication_role or disable trigger all\nI know you can create that constraint with \"not valid\" and it'll be checked\nwhen turned on. But if I just forgot that ...\nSo would be good to have validate constraints which checks, even if it's\nalready valid\n\ndrop table if exists tb_pk cascade;create table tb_pk(key integer not null\nprimary key);\ndrop table if exists tb_fk cascade;create table tb_fk(fk_key integer);\nalter table tb_fk add constraint fk_pk foreign key (fk_key) references\ntb_pk (key);\ninsert into tb_pk values(1);\nalter table tb_fk disable trigger all; --can be with\nsession_replication_role too.\ninsert into tb_fk values(5); --wrong values on that table\n\nThen, you could check\n\nalter table tb_fk validate constraint fk_pk\nor\nalter table tb_fk validate all constraints",
"msg_date": "Wed, 17 Aug 2022 17:09:22 -0300",
"msg_from": "Marcos Pegoraro <marcos@f10.com.br>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, 2022-08-17 at 20:12 +0200, Alvaro Herrera wrote:\n> I've been working on having NOT NULL constraints have pg_constraint\n> rows.\n> \n> Everything is working now. Some things are a bit weird, and I would\n> like opinions on them:\n> \n> 1. In my implementation, you can have more than one NOT NULL\n> pg_constraint row for a column. What should happen if the user does\n> ALTER TABLE .. ALTER COLUMN .. DROP NOT NULL;\n> ? Currently it throws an error about the ambiguity (ie. which\n> constraint to drop).\n\nI'd say that is a good solution, particularly if there is a hint to drop\nthe constraint instead, similar to when you try to drop an index that\nimplements a constraint.\n\n> Using ALTER TABLE DROP CONSTRAINT works fine, and the 'attnotnull'\n> bit is lost when the last one such constraint goes away.\n\nWouldn't it be the correct solution to set \"attnotnull\" to FALSE only\nwhen the last NOT NULL constraint is dropped?\n\n> 2. If a table has a primary key, and a table is created that inherits\n> from it, then the child has its column(s) marked attnotnull but there\n> is no pg_constraint row for that. This is not okay. But what should\n> happen?\n> \n> 1. a CHECK(col IS NOT NULL) constraint is created for each column\n> 2. a PRIMARY KEY () constraint is created\n\nI think it would be best to create a primary key constraint on the\npartition.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 18 Aug 2022 10:17:08 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Aug-18, Laurenz Albe wrote:\n\n> On Wed, 2022-08-17 at 20:12 +0200, Alvaro Herrera wrote:\n\n> > 1. In my implementation, you can have more than one NOT NULL\n> > pg_constraint row for a column. What should happen if the user does\n> > ALTER TABLE .. ALTER COLUMN .. DROP NOT NULL;\n> > ? Currently it throws an error about the ambiguity (ie. which\n> > constraint to drop).\n> \n> I'd say that is a good solution, particularly if there is a hint to drop\n> the constraint instead, similar to when you try to drop an index that\n> implements a constraint.\n\nAh, I didn't think about the hint. I'll add that, thanks.\n\n> > Using ALTER TABLE DROP CONSTRAINT works fine, and the 'attnotnull'\n> > bit is lost when the last one such constraint goes away.\n> \n> Wouldn't it be the correct solution to set \"attnotnull\" to FALSE only\n> when the last NOT NULL constraint is dropped?\n\n... when the last NOT NULL or PRIMARY KEY constraint is dropped. We\nhave to keep attnotnull set when a PK exists even if there's no specific\nNOT NULL constraint.\n\n> > 2. If a table has a primary key, and a table is created that inherits\n> > from it, then the child has its column(s) marked attnotnull but there\n> > is no pg_constraint row for that. This is not okay. But what should\n> > happen?\n> > \n> > 1. a CHECK(col IS NOT NULL) constraint is created for each column\n> > 2. a PRIMARY KEY () constraint is created\n> \n> I think it would be best to create a primary key constraint on the\n> partition.\n\nSorry, I wasn't specific enough. This applies to legacy inheritance\nonly; partitioning has its own solution (as you say: the PK constraint\nexists), but legacy inheritance works differently. Creating a PK in\nchildren tables is not feasible (because unicity cannot be maintained),\nbut creating a CHECK (NOT NULL) constraint is possible.\n\nI think a PRIMARY KEY should not be allowed to exist in an inheritance\nparent, precisely because of this problem, but it seems too late to add\nthat restriction now. This behavior is absurd, but longstanding:\n\n55432 16devel 1787364=# create table parent (a int primary key);\nCREATE TABLE\n55432 16devel 1787364=# create table child () inherits (parent);\nCREATE TABLE\n55432 16devel 1787364=# insert into parent values (1);\nINSERT 0 1\n55432 16devel 1787364=# insert into child values (1);\nINSERT 0 1\n55432 16devel 1787364=# select * from parent;\n a \n───\n 1\n 1\n(2 filas)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n",
"msg_date": "Thu, 18 Aug 2022 11:04:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Thu, 2022-08-18 at 11:04 +0200, Alvaro Herrera wrote:\n> On 2022-Aug-18, Laurenz Albe wrote:\n> > On Wed, 2022-08-17 at 20:12 +0200, Alvaro Herrera wrote:\n> > > Using ALTER TABLE DROP CONSTRAINT works fine, and the 'attnotnull'\n> > > bit is lost when the last one such constraint goes away.\n> > \n> > Wouldn't it be the correct solution to set \"attnotnull\" to FALSE only\n> > when the last NOT NULL constraint is dropped?\n> \n> ... when the last NOT NULL or PRIMARY KEY constraint is dropped. We\n> have to keep attnotnull set when a PK exists even if there's no specific\n> NOT NULL constraint.\n\nOf course, I forgot that.\nI hope that is not too hard to implement.\n\n> > > 2. If a table has a primary key, and a table is created that inherits\n> > > from it, then the child has its column(s) marked attnotnull but there\n> > > is no pg_constraint row for that. This is not okay. But what should\n> > > happen?\n> > > \n> > > 1. a CHECK(col IS NOT NULL) constraint is created for each column\n> > > 2. a PRIMARY KEY () constraint is created\n> > \n> > I think it would be best to create a primary key constraint on the\n> > partition.\n> \n> Sorry, I wasn't specific enough. This applies to legacy inheritance\n> only; partitioning has its own solution (as you say: the PK constraint\n> exists), but legacy inheritance works differently. Creating a PK in\n> children tables is not feasible (because unicity cannot be maintained),\n> but creating a CHECK (NOT NULL) constraint is possible.\n> \n> I think a PRIMARY KEY should not be allowed to exist in an inheritance\n> parent, precisely because of this problem, but it seems too late to add\n> that restriction now. This behavior is absurd, but longstanding:\n\nMy mistake; you clearly said \"inherits\".\n\nSince such an inheritance child currently does not have a primary key, you\ncan insert duplicates. So automatically adding a NOT NULL constraint on the\ninheritance child seems the only solution that does not break backwards\ncompatibility. pg_upgrade would have to be able to cope with that.\n\nForcing a primary key constraint on the inheritance child could present an\nupgrade problem. Even if that is probably a rare and strange case, I don't\nthink we should risk that. Moreover, if we force a primary key on the\ninheritance child, using ALTER TABLE ... INHERIT might have to create a\nunique index on the table, which can be cumbersome if the table is large.\n\nSo I think a NOT NULL constraint is the least evil.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 18 Aug 2022 17:00:52 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 6:04 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Aug-18, Laurenz Albe wrote:\n> > On Wed, 2022-08-17 at 20:12 +0200, Alvaro Herrera wrote:\n> > > 2. If a table has a primary key, and a table is created that inherits\n> > > from it, then the child has its column(s) marked attnotnull but there\n> > > is no pg_constraint row for that. This is not okay. But what should\n> > > happen?\n> > >\n> > > 1. a CHECK(col IS NOT NULL) constraint is created for each column\n> > > 2. a PRIMARY KEY () constraint is created\n> >\n> > I think it would be best to create a primary key constraint on the\n> > partition.\n>\n> Sorry, I wasn't specific enough. This applies to legacy inheritance\n> only; partitioning has its own solution (as you say: the PK constraint\n> exists), but legacy inheritance works differently. Creating a PK in\n> children tables is not feasible (because unicity cannot be maintained),\n> but creating a CHECK (NOT NULL) constraint is possible.\n\nYeah, I think it makes sense to think of the NOT NULL constraints on\ntheir own in this case, without worrying about the PK constraint that\ncreated them in the first place.\n\nBTW, maybe you are aware, but the legacy inheritance implementation is\nnot very consistent about wanting to maintain the same NULLness for a\ngiven column in all members of the inheritance tree. For example, it\nallows one to alter the NULLness of an inherited column:\n\ncreate table p (a int not null);\ncreate table c (a int) inherits (p);\n\\d c\n Table \"public.c\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\nInherits: p\n\nalter table c alter a drop not null ;\nALTER TABLE\n\\d c\n Table \"public.c\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\nInherits: p\n\nContrast that with the partitioning implementation:\n\ncreate table pp (a int not null) partition by list (a);\ncreate table cc partition of pp default;\n\\d cc\n Table \"public.cc\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\nPartition of: pp DEFAULT\n\nalter table cc alter a drop not null ;\nERROR: column \"a\" is marked NOT NULL in parent table\n\nIIRC, I had tried to propose implementing the same behavior for legacy\ninheritance back in the day, but maybe we left it alone for not\nbreaking compatibility.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Aug 2022 12:42:02 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Aug-22, Amit Langote wrote:\n\n> Yeah, I think it makes sense to think of the NOT NULL constraints on\n> their own in this case, without worrying about the PK constraint that\n> created them in the first place.\n\nCool, that's enough votes that I'm comfortable implementing things that\nway.\n\n> BTW, maybe you are aware, but the legacy inheritance implementation is\n> not very consistent about wanting to maintain the same NULLness for a\n> given column in all members of the inheritance tree. For example, it\n> allows one to alter the NULLness of an inherited column:\n\nRight ... I think what gives this patch most of its complexity is the\nnumber of odd, inconsistent cases that have to preserve historical\nbehavior. Luckily I think this particular behavior is easy to\nimplement.\n\n> IIRC, I had tried to propose implementing the same behavior for legacy\n> inheritance back in the day, but maybe we left it alone for not\n> breaking compatibility.\n\nYeah, that wouldn't be surprising.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:49:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "So I was wrong in thinking that \"this case was simple to implement\" as I\nreplied upthread. Doing that actually required me to rewrite large\nparts of the patch. I think it ended up being a good thing, because in\nhindsight the approach I was using was somewhat bogus anyway, and the\ncurrent one should be better. Please find it attached.\n\nThere are still a few problems, sadly. Most notably, I ran out of time\ntrying to fix a pg_upgrade issue with pg_dump in binary-upgrade mode.\nI have to review that again, but I think it'll need a deeper rethink of\nhow we pg_upgrade inherited constraints. So the pg_upgrade tests are\nknown to fail. I'm not aware of any other tests failing, but I'm sure\nthe cfbot will prove me wrong.\n\nI reluctantly added a new ALTER TABLE subcommand type, AT_SetAttNotNull,\nto allow setting pg_attribute.attnotnull without adding a CHECK\nconstraint (only used internally). I would like to find a better way to\ngo about this, so I may remove it again, therefore it's not fully\nimplemented.\n\nThere are *many* changed regress expect files and I didn't carefully vet\nall of them. Mostly it's the addition of CHECK constraints in the\nfooters of many \\d listings and stuff like that. At a quick glance they\nappear valid, but I need to review them more carefully still.\n\nWe've had pg_constraint.conparentid for a while now, but for some\nconstraints we continue to use conislocal/coninhcount. I think we\nshould get rid of that and rely on conparentid completely.\n\nAn easily fixed issue is that of constraint naming.\nChooseConstraintName has an argument for passing known constraint names,\nbut this patch doesn't use it and it must.\n\nOne issue that I don't currently know how to fix, is the fact that we\nneed to know whether a column is a row type or not (because they need a\ndifferent null test). At table creation time that's easy to know,\nbecause we have the descriptor already built by the time we add the\nconstraints; but if you do ALTER TABLE .. ADD COLUMN .., ADD CONSTRAINT\nthen we don't.\n\nSome ancient code comments suggest that allowing a child table's NOT\nNULL constraint acquired from parent shouldn't be independently\ndroppable. This patch doesn't change that, but it's easy to do if we\ndecide to. However, that'd be a compatibility break, so I'd rather not\ndo it in the same patch that introduces the feature.\n\nOverall, there's a lot more work required to get this to a good shape.\nThat said, I think it's the right direction.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)",
"msg_date": "Thu, 1 Sep 2022 00:19:10 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 3:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> So I was wrong in thinking that \"this case was simple to implement\" as I\n> replied upthread. Doing that actually required me to rewrite large\n> parts of the patch. I think it ended up being a good thing, because in\n> hindsight the approach I was using was somewhat bogus anyway, and the\n> current one should be better. Please find it attached.\n>\n> There are still a few problems, sadly. Most notably, I ran out of time\n> trying to fix a pg_upgrade issue with pg_dump in binary-upgrade mode.\n> I have to review that again, but I think it'll need a deeper rethink of\n> how we pg_upgrade inherited constraints. So the pg_upgrade tests are\n> known to fail. I'm not aware of any other tests failing, but I'm sure\n> the cfbot will prove me wrong.\n>\n> I reluctantly added a new ALTER TABLE subcommand type, AT_SetAttNotNull,\n> to allow setting pg_attribute.attnotnull without adding a CHECK\n> constraint (only used internally). I would like to find a better way to\n> go about this, so I may remove it again, therefore it's not fully\n> implemented.\n>\n> There are *many* changed regress expect files and I didn't carefully vet\n> all of them. Mostly it's the addition of CHECK constraints in the\n> footers of many \\d listings and stuff like that. At a quick glance they\n> appear valid, but I need to review them more carefully still.\n>\n> We've had pg_constraint.conparentid for a while now, but for some\n> constraints we continue to use conislocal/coninhcount. 
I think we\n> should get rid of that and rely on conparentid completely.\n>\n> An easily fixed issue is that of constraint naming.\n> ChooseConstraintName has an argument for passing known constraint names,\n> but this patch doesn't use it and it must.\n>\n> One issue that I don't currently know how to fix, is the fact that we\n> need to know whether a column is a row type or not (because they need a\n> different null test). At table creation time that's easy to know,\n> because we have the descriptor already built by the time we add the\n> constraints; but if you do ALTER TABLE .. ADD COLUMN .., ADD CONSTRAINT\n> then we don't.\n>\n> Some ancient code comments suggest that allowing a child table's NOT\n> NULL constraint acquired from parent shouldn't be independently\n> droppable. This patch doesn't change that, but it's easy to do if we\n> decide to. However, that'd be a compatibility break, so I'd rather not\n> do it in the same patch that introduces the feature.\n>\n> Overall, there's a lot more work required to get this to a good shape.\n> That said, I think it's the right direction.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n> \"La primera ley de las demostraciones en vivo es: no trate de usar el\n> sistema.\n> Escriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n>\n\nHi,\nFor findNotNullConstraintAttnum():\n\n+ if (multiple == NULL)\n+ break;\n\nShouldn't `pfree(arr)` be called before breaking ?\n\n+static Constraint *makeNNCheckConstraint(Oid nspid, char *constraint_name,\n\nYou used `NN` because there is method makeCheckNotNullConstraint, right ?\nI think it would be better to expand `NN` so that its meaning is easy to\nunderstand.\n\nCheers",
"msg_date": "Wed, 31 Aug 2022 16:08:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 4:08 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n>\n>\n> On Wed, Aug 31, 2022 at 3:19 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> So I was wrong in thinking that \"this case was simple to implement\" as I\n>> replied upthread. Doing that actually required me to rewrite large\n>> parts of the patch. I think it ended up being a good thing, because in\n>> hindsight the approach I was using was somewhat bogus anyway, and the\n>> current one should be better. Please find it attached.\n>>\n>> There are still a few problems, sadly. Most notably, I ran out of time\n>> trying to fix a pg_upgrade issue with pg_dump in binary-upgrade mode.\n>> I have to review that again, but I think it'll need a deeper rethink of\n>> how we pg_upgrade inherited constraints. So the pg_upgrade tests are\n>> known to fail. I'm not aware of any other tests failing, but I'm sure\n>> the cfbot will prove me wrong.\n>>\n>> I reluctantly added a new ALTER TABLE subcommand type, AT_SetAttNotNull,\n>> to allow setting pg_attribute.attnotnull without adding a CHECK\n>> constraint (only used internally). I would like to find a better way to\n>> go about this, so I may remove it again, therefore it's not fully\n>> implemented.\n>>\n>> There are *many* changed regress expect files and I didn't carefully vet\n>> all of them. Mostly it's the addition of CHECK constraints in the\n>> footers of many \\d listings and stuff like that. At a quick glance they\n>> appear valid, but I need to review them more carefully still.\n>>\n>> We've had pg_constraint.conparentid for a while now, but for some\n>> constraints we continue to use conislocal/coninhcount. 
I think we\n>> should get rid of that and rely on conparentid completely.\n>>\n>> An easily fixed issue is that of constraint naming.\n>> ChooseConstraintName has an argument for passing known constraint names,\n>> but this patch doesn't use it and it must.\n>>\n>> One issue that I don't currently know how to fix, is the fact that we\n>> need to know whether a column is a row type or not (because they need a\n>> different null test). At table creation time that's easy to know,\n>> because we have the descriptor already built by the time we add the\n>> constraints; but if you do ALTER TABLE .. ADD COLUMN .., ADD CONSTRAINT\n>> then we don't.\n>>\n>> Some ancient code comments suggest that allowing a child table's NOT\n>> NULL constraint acquired from parent shouldn't be independently\n>> droppable. This patch doesn't change that, but it's easy to do if we\n>> decide to. However, that'd be a compatibility break, so I'd rather not\n>> do it in the same patch that introduces the feature.\n>>\n>> Overall, there's a lot more work required to get this to a good shape.\n>> That said, I think it's the right direction.\n>>\n>> --\n>> Álvaro Herrera 48°01'N 7°57'E —\n>> https://www.EnterpriseDB.com/\n>> \"La primera ley de las demostraciones en vivo es: no trate de usar el\n>> sistema.\n>> Escriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n>>\n>\n> Hi,\n> For findNotNullConstraintAttnum():\n>\n> + if (multiple == NULL)\n> + break;\n>\n> Shouldn't `pfree(arr)` be called before breaking ?\n>\n> +static Constraint *makeNNCheckConstraint(Oid nspid, char *constraint_name,\n>\n> You used `NN` because there is method makeCheckNotNullConstraint, right ?\n> I think it would be better to expand `NN` so that its meaning is easy to\n> understand.\n>\n> Cheers\n>\nHi,\nFor tryExtractNotNullFromNode , in the block for `if (rel == NULL)`:\n\n+ return false;\n\nI think you meant returning NULL since false is for boolean.\n\nCheers",
"msg_date": "Wed, 31 Aug 2022 16:11:42 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "There were a lot more problems in that submission than I at first\nrealized, and I had to rewrite a lot of code in order to fix them. I\nhave fixed all the user-visible problems I found in this version, and\nreviewed the tests results more carefully so I am now more confident\nthat behaviourally it's doing the right thing; but\n\n1. the pg_upgrade test problem is still unaddressed,\n2. I haven't verified that catalog contents is correct, especially\n regarding dependencies,\n3. there are way too many XXX and FIXME comments sprinkled everywhere.\n\nI'm sure a couple of these XXX comments can be left for later work, and\nthere's a few that should be dealt with by merely removing them; but the\nothers (and all FIXMEs) represent pending work.\n\nAlso, I'm not at all happy about having this new ConstraintNotNull\nartificial node there; perhaps this can be solved by using a regular\nConstraint with some new flag, or maybe it will even work without any\nextra flags by the fact that the node appears where it appears. Anyway,\nrequires investigation. Also, the AT_SetAttNotNull continues to irk me.\n\ntest_ddl_deparse is also unhappy. This is probably an easy fix;\napparently, ATExecDropConstraint has been doing things wrong forever.\n\nAnyway, here's version 2 of this, with apologies for those who spent\ntime reviewing version 1 with all its brokenness.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"On the other flipper, one wrong move and we're Fatal Exceptions\"\n(T.U.X.: Term Unit X - http://www.thelinuxreview.com/TUX/)",
"msg_date": "Fri, 9 Sep 2022 19:58:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\nw.r.t. the while loop in findNotNullConstraintAttnum():\n\n+ if (multiple == NULL)\n+ break;\n\nI think `pfree(arr)` should be called before breaking.\n\n+ if (constraint->cooked_expr != NULL)\n+ return\ntryExtractNotNullFromNode(stringToNode(constraint->cooked_expr), rel);\n+ else\n+ return tryExtractNotNullFromNode(constraint->raw_expr, rel);\n\nnit: the `else` keyword is not needed.\n\n+ if (isnull)\n+ elog(ERROR, \"null conbin for constraint %u\", conForm->oid);\n\nIt would be better to expand `conbin` so that the user can better\nunderstand the error.\n\nCheers",
"msg_date": "Fri, 9 Sep 2022 17:28:19 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": " Hi Alvaro,\n\nOn Sat, Sep 10, 2022 at 2:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> There were a lot more problems in that submission than I at first\n> realized, and I had to rewrite a lot of code in order to fix them. I\n> have fixed all the user-visible problems I found in this version, and\n> reviewed the tests results more carefully so I am now more confident\n> that behaviourally it's doing the right thing; but\n>\n> 1. the pg_upgrade test problem is still unaddressed,\n> 2. I haven't verified that catalog contents is correct, especially\n> regarding dependencies,\n> 3. there are way too many XXX and FIXME comments sprinkled everywhere.\n>\n> I'm sure a couple of these XXX comments can be left for later work, and\n> there's a few that should be dealt with by merely removing them; but the\n> others (and all FIXMEs) represent pending work.\n>\n> Also, I'm not at all happy about having this new ConstraintNotNull\n> artificial node there; perhaps this can be solved by using a regular\n> Constraint with some new flag, or maybe it will even work without any\n> extra flags by the fact that the node appears where it appears. Anyway,\n> requires investigation. Also, the AT_SetAttNotNull continues to irk me.\n>\n> test_ddl_deparse is also unhappy. This is probably an easy fix;\n> apparently, ATExecDropConstraint has been doing things wrong forever.\n>\n> Anyway, here's version 2 of this, with apologies for those who spent\n> time reviewing version 1 with all its brokenness.\n\nI have been testing this with the intention of understanding how you\nmade this work with inheritance. While doing so with the previous\nversion, I ran into an existing issue (bug?) that I reported at [1].\n\nI ran into another while testing version 2 that I think has to do with\nthis patch. 
So this happens:\n\n-- regular inheritance\ncreate table foo (a int not null);\ncreate table foo1 (a int not null);\nalter table foo1 inherit foo;\nalter table foo alter a drop not null ;\nERROR: constraint \"foo_a_not_null\" of relation \"foo1\" does not exist\n\n-- partitioning\ncreate table parted (a int not null) partition by list (a);\ncreate table part1 (a int not null);\nalter table parted attach partition part1 default;\nalter table parted alter a drop not null;\nERROR: constraint \"parted_a_not_null\" of relation \"part1\" does not exist\n\nIn both of these cases, MergeConstraintsIntoExisting(), called by\nCreateInheritance() when attaching the child to the parent, marks the\nchild's NOT NULL check constraint as the child constraint of the\ncorresponding constraint in parent, which seems fine and necessary.\n\nHowever, ATExecDropConstraint_internal(), the new function called by\nATExecDropNotNull(), doesn't seem to recognize when recursing to the\nchild tables that a child's copy NOT NULL check constraint attached to\nthe parent's may have a different name, so scanning pg_constraint with\nthe parent's name is what gives the above error. 
I wonder if it\nwouldn't be better for ATExecDropNotNull() to handle its own recursion\nrather than delegating it to the DropConstraint()?\n\nThe same error does not occur when the NOT NULL constraint is added to\nparent after-the-fact and thus recursively to the children:\n\n-- regular inheritance\ncreate table foo (a int);\ncreate table foo1 (a int not null) inherits (foo);\nalter table foo alter a set not null;\nalter table foo alter a drop not null ;\nALTER TABLE\n\n-- partitioning\ncreate table parted (a int) partition by list (a);\ncreate table part1 partition of parted (a not null) default;\nalter table parted alter a set not null;\nalter table parted alter a drop not null;\nALTER TABLE\n\nAnd the reason for that seems a bit accidental, because\nMergeWithExistingConstraint(), called by AddRelationNewConstraints()\nwhen recursively adding the NOT NULL check constraint to a child, does\nnot have the code to find the child's already existing constraint that\nmatches with it. So, in this case, we get a copy of the parent's\nconstraint with the same name in the child. There is a line in the\nheader comments of both MergeWithExistingConstraint() and\nMergeConstraintsIntoExisting() asking to keep their code in sync, so\nmaybe the patch missed adding the new NOT NULL check constraint logic\nto the former?\n\nAlso, it seems that the inheritance recursion for SET NOT NULL is now\noccurring both in the prep phase and exec phase due to the following\nnew code added to ATExecSetNotNull():\n\n@@ -7485,6 +7653,50 @@ ATExecSetNotNull(AlteredTableInfo *tab, Relation rel,\n InvokeObjectPostAlterHook(RelationRelationId,\n RelationGetRelid(rel), attnum);\n ...\n+ /* See if there's one already, and skip this if so. 
*/\n+ constr = findNotNullConstraintAttnum(rel, attnum, NULL);\n+ if (constr && direct)\n+ heap_freetuple(constr); /* nothing to do */\n+ else\n+ {\n+ Constraint *newconstr;\n+ ObjectAddress addr;\n+ List *children;\n+ List *already_done_rels;\n+\n+ newconstr = makeCheckNotNullConstraint(rel->rd_rel->relnamespace,\n+ constrname,\n+\nNameStr(rel->rd_rel->relname),\n+ colName,\n+ false, /* XXX is_row */\n+ InvalidOid);\n+\n+ addr = ATAddCheckConstraint_internal(wqueue, tab, rel, newconstr,\n+ false, false, false, lockmode);\n+ already_done_rels = list_make1_oid(RelationGetRelid(rel));\n+\n+ /* and recurse into children, if there are any */\n+ children =\nfind_inheritance_children(RelationGetRelid(rel), lockmode);\n+ ATAddCheckConstraint_recurse(wqueue, children, newconstr,\n\nIt seems harmless because ATExecSetNotNull() set up to run on the\nchildren by the prep phase becomes a no-op due to the work done by the\nabove code, but maybe we should keep one or the other.\n\nRegarding the following bit:\n\n- /* If rel is partition, shouldn't drop NOT NULL if parent has the same */\n+ /*\n+ * If rel is partition, shouldn't drop NOT NULL if parent has the same.\n+ * XXX is this consideration still valid? 
Can we get rid of this by\n+ * changing the type of dependency between the two constraints instead?\n+ */\n if (rel->rd_rel->relispartition)\n {\n Oid parentId =\nget_partition_parent(RelationGetRelid(rel), false);\n\nYes, it seems we can now prevent dropping a partition's NOT NULL\nconstraint by seeing that it is inherited, so no need for this block\nwhich was written when the NOT NULL constraints didn't have the\ninherited marking.\n\nBTW, have you thought about making DROP NOT NULL command emit a\ndifferent error message than what one gets now:\n\ncreate table foo (a int);\ncreate table foo1 (a int) inherits (foo);\nalter table foo alter a set not null;\nalter table foo1 alter a drop not null ;\nERROR: cannot drop inherited constraint \"foo_a_not_null\" of relation \"foo1\"\n\nLike, say:\n\nERROR: cannot drop an inherited NOT NULL constraint\n\nMaybe you did and thought that it's OK for it to spell out the\ninternally generated constraint name, because we already require users\nto know that they exist, say to drop it using DROP CONSTRAINT.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqFggpjAvsVqNV06HUF6CUrU0Vo3pLgGWCViGbPkzTiofg%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 13 Sep 2022 12:17:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 09.09.22 19:58, Alvaro Herrera wrote:\n> There were a lot more problems in that submission than I at first\n> realized, and I had to rewrite a lot of code in order to fix them. I\n> have fixed all the user-visible problems I found in this version, and\n> reviewed the tests results more carefully so I am now more confident\n> that behaviourally it's doing the right thing; but\n\nReading through the SQL standard again, I think this patch goes a bit \ntoo far in folding NOT NULL and CHECK constraints together. The spec \nsays that you need to remember whether a column was defined as NOT NULL, \nand that the commands DROP NOT NULL and SET NOT NULL only affect \nconstraints defined in that way. In this implementation, a constraint \ndefined as NOT NULL is converted to a CHECK (x IS NOT NULL) constraint \nand the original definition is forgotten.\n\nBesides that, I think that users are not going to like that pg_dump \nrewrites their NOT NULL constraints into CHECK table constraints.\n\nI suspect that this needs a separate contype for NOT NULL constraints \nthat is separate from CONSTRAINT_CHECK.\n\n\n\n",
"msg_date": "Wed, 14 Sep 2022 22:03:50 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Sep-14, Peter Eisentraut wrote:\n\n> Reading through the SQL standard again, I think this patch goes a bit too\n> far in folding NOT NULL and CHECK constraints together. The spec says that\n> you need to remember whether a column was defined as NOT NULL, and that the\n> commands DROP NOT NULL and SET NOT NULL only affect constraints defined in\n> that way. In this implementation, a constraint defined as NOT NULL is\n> converted to a CHECK (x IS NOT NULL) constraint and the original definition\n> is forgotten.\n\nHmm, I don't read it the same way. Reading SQL:2016, they talk about a\nnullability characteristic (/known not nullable/ or /possibly\nnullable/):\n\n: 4.13 Columns, fields, and attributes\n: [...]\n: Every column has a nullability characteristic that indicates whether the\n: value from that column can be the null value. A nullability characteristic\n: is either known not nullable or possibly nullable.\n: Let C be a column of a base table T. C is known not nullable if and only\n: if at least one of the following is true:\n: — There exists at least one constraint NNC that is enforced and not\n: deferrable and that simply contains a <search condition> that is a\n: <boolean value expression> that is a readily-known-not-null condition for C.\n: [other possibilities]\n\nthen in the same section they explain that this is derived from a\ntable constraint:\n\n: A column C is described by a column descriptor. A column descriptor\n: includes:\n: [...]\n: — If C is a column of a base table, then an indication of whether it is\n: defined as NOT NULL and, if so, the constraint name of the associated\n: table constraint definition.\n\n [aside: note that elsewhere (<boolean value expression>), they define\n \"readily-known-not-null\" in Syntax Rule 3), of 6.39 <boolean value\n expression>:\n\n : 3) Let X denote either a column C or the <key word> VALUE. 
Given a\n : <boolean value expression> BVE and X, the notion “BVE is a\n : readily-known-not-null condition for X” is defined as follows.\n : Case:\n : a) If BVE is a <predicate> of the form “RVE IS NOT NULL”, where RVE is a\n : <row value predicand> that is a <row value constructor predicand> that\n : simply contains a <common value expression>, <boolean predicand>, or\n : <row value constructor element> that is a <column reference> that\n : references C, then BVE is a readily-known-not-null condition for C.\n : b) If BVE is the <predicate> “VALUE IS NOT NULL”, then BVE is a\n : readily-known-not-null condition for VALUE.\n : c) Otherwise, BVE is not a readily-known-not-null condition for X.\n edisa]\n\nLater, <column definition> says literally that specifying NOT NULL in a\ncolumn is equivalent to the CHECK (.. IS NOT NULL) table constraint:\n\n: 11.4 <column definition>\n: \n: Syntax Rules,\n: 17) If a <column constraint definition> is specified, then let CND be\n: the <constraint name definition> if one is specified and let CND be the\n: zero-length character character string otherwise; let CA be the\n: <constraint characteristics> if specified and let CA be the zero-length\n: character string otherwise. The <column constraint definition> is\n: equivalent to a <table constraint definition> as follows.\n: \n: Case:\n: \n: a) If a <column constraint definition> is specified that contains the\n: <column constraint> NOT NULL, then it is equivalent to the following\n: <table constraint definition>:\n: CND CHECK ( C IS NOT NULL ) CA\n\nIn my reading of it, it doesn't follow that you have to remember whether\nthe table constraint was created by saying explicitly by doing CHECK (c\nIS NOT NULL) or as a plain NOT NULL column constraint. 
The idea of\nbeing able to do DROP NOT NULL when only a constraint defined as CHECK\n(c IS NOT NULL) exists seems to follow from there; and also that you can\nuse DROP CONSTRAINT to remove one added via plain NOT NULL; and that\nboth these operations change the nullability characteristic of the\ncolumn. This is made more explicit by the fact that they do state that\nthe nullability characteristic can *not* be \"destroyed\" for other types\nof constraints, in 11.26 <drop table constraint definition>, Syntax Rule\n11)\n\n: 11) Destruction of TC shall not cause the nullability characteristic of\n: any of the following columns of T to change from known not nullable to\n: possibly nullable:\n: \n: a) A column that is a constituent of the primary key of T, if any.\n: b) The system-time period start column, if any.\n: c) The system-time period end column, if any.\n: d) The identity column, if any.\n\nthen General Rule 7) explains that this does indeed happen for columns\ndeclared to have some sort of NOT NULL constraint, without saying\nexactly how was that constraint defined:\n\n: 7) If TC causes some column COL to be known not nullable and no other\n: constraint causes COL to be known not nullable, then the nullability\n: characteristic of the column descriptor of COL is changed to possibly\n: nullable.\n\n> Besides that, I think that users are not going to like that pg_dump rewrites\n> their NOT NULL constraints into CHECK table constraints.\n\nThis is a good point, but we could get around it by decreeing that\npg_dump dumps the NOT NULL in the old way if the name is not changed\nfrom whatever would be generated normally. 
This would require some\ngames to remove the CHECK one; and it would also mean that partitions\nwould not use the same constraint as the parent, but rather it'd have to\ngenerate a new constraint name that uses its own table name, rather than\nthe parent's.\n\n(This makes me wonder what should happen if you rename a table: should\nwe go around and rename all the automatically-named constraints as well?\nProbably not, but this may annoy people that creates table under one\nname, then rename them into their final places afterwards. pg_dump may\nbehave funny for those. We can tackle that later, if ever. But\nconsider that moving the table across schemas might cause even weirder\nproblems, since the standard says constraint names must not conflict\nwithin a schema ...)\n\n> I suspect that this needs a separate contype for NOT NULL constraints that\n> is separate from CONSTRAINT_CHECK.\n\nMaybe it is possible to read this in the way you propose, but I think\nthat interpretation is strictly less useful than the one I propose.\nAlso, see this reply from Tom to Vitaly Burovoy who was proposing\nsomething that seems to derivate from this interpretation:\nhttps://www.postgresql.org/message-id/flat/17684.1462339177%40sss.pgh.pa.us\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Hay dos momentos en la vida de un hombre en los que no debería\nespecular: cuando puede permitírselo y cuando no puede\" (Mark Twain)\n\n\n",
"msg_date": "Mon, 19 Sep 2022 13:19:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, Aug 17, 2022 at 2:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> If you say CREATE TABLE (a int NOT NULL), you'll get a CHECK constraint\n> printed by psql: (this is a bit more noisy that previously and it\n> changes a lot of regression tests output).\n>\n> 55489 16devel 1776237=# create table tab (a int not null);\n> CREATE TABLE\n> 55489 16devel 1776237=# \\\\d tab\n> Tabla «public.tab»\n> Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión\n> ─────────┼─────────┼──────────────┼──────────┼─────────────\n> a │ integer │ │ not null │\n> Restricciones CHECK:\n> \"tab_a_not_null\" CHECK (a IS NOT NULL)\n\nIn a table with many columns, most of which are NOT NULL, this is\ngoing to produce a ton of clutter. I don't like that.\n\nI'm not sure what a good alternative would be, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n",
"msg_date": "Mon, 19 Sep 2022 09:32:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, 19 Sept 2022 at 09:32, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Aug 17, 2022 at 2:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> > If you say CREATE TABLE (a int NOT NULL), you'll get a CHECK constraint\n> > printed by psql: (this is a bit more noisy that previously and it\n> > changes a lot of regression tests output).\n> >\n> > 55489 16devel 1776237=# create table tab (a int not null);\n> > CREATE TABLE\n> > 55489 16devel 1776237=# \\d tab\n> > Tabla «public.tab»\n> > Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión\n> > ─────────┼─────────┼──────────────┼──────────┼─────────────\n> > a │ integer │ │ not null │\n> > Restricciones CHECK:\n> > \"tab_a_not_null\" CHECK (a IS NOT NULL)\n>\n> In a table with many columns, most of which are NOT NULL, this is\n> going to produce a ton of clutter. I don't like that.\n>\n> I'm not sure what a good alternative would be, though.\n>\n\nI thought I saw some discussion about the SQL standard saying that there is\na difference between putting NOT NULL in a column definition, and CHECK\n(column_name NOT NULL). So if we're going to take this seriously, I think\nthat means there needs to be a field in pg_constraint which identifies\nwhether a constraint is a \"real\" one created explicitly as a constraint, or\nif it is just one created because a field is marked NOT NULL.\n\nIf this is correct, the answer is easy: don't show constraints that are\nthere only because of a NOT NULL in the \\d or \\d+ listings. 
I certainly\ndon't want to see that clutter and I'm having trouble seeing why anybody\nelse would want to see it either; the information is already there in the\n\"Nullable\" column of the field listing.\n\nThe error message for a duplicate constraint name when creating a\nconstraint needs however to be very clear that the conflict is with a NOT\nNULL constraint and which one, since I'm proposing leaving those ones off\nthe visible listing, and it would be very bad for somebody to get\n\"duplicate name\" and then be unable to see the conflicting entry.",
"msg_date": "Mon, 19 Sep 2022 09:54:41 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> I thought I saw some discussion about the SQL standard saying that there is\n> a difference between putting NOT NULL in a column definition, and CHECK\n> (column_name NOT NULL). So if we're going to take this seriously, I think\n> that means there needs to be a field in pg_constraint which identifies\n> whether a constraint is a \"real\" one created explicitly as a constraint, or\n> if it is just one created because a field is marked NOT NULL.\n\nIf we're going to go that way, I think that we should take the further\nstep of making not-null constraints be their own contype rather than\nan artificially generated CHECK. The bloat in pg_constraint from CHECK\nexpressions made this way seems like an additional reason not to like\ndoing it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Sep 2022 10:08:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, 19 Sept 2022 at 15:32, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Aug 17, 2022 at 2:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > If you say CREATE TABLE (a int NOT NULL), you'll get a CHECK constraint\n> > printed by psql: (this is a bit more noisy that previously and it\n> > changes a lot of regression tests output).\n> >\n> > 55489 16devel 1776237=# create table tab (a int not null);\n> > CREATE TABLE\n> > 55489 16devel 1776237=# \\d tab\n> > Tabla «public.tab»\n> > Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión\n> > ─────────┼─────────┼──────────────┼──────────┼─────────────\n> > a │ integer │ │ not null │\n> > Restricciones CHECK:\n> > \"tab_a_not_null\" CHECK (a IS NOT NULL)\n>\n> In a table with many columns, most of which are NOT NULL, this is\n> going to produce a ton of clutter. I don't like that.\n>\n> I'm not sure what a good alternative would be, though.\n\nI'm not sure on the 'good' part of this alternative, but we could go\nwith a single row-based IS NOT NULL to reduce such clutter, utilizing\nthe `ROW() IS NOT NULL` requirement of a row only matching IS NOT NULL\nwhen all attributes are also IS NOT NULL:\n\n Check constraints:\n \"tab_notnull_check\" CHECK (ROW(a, b, c, d, e) IS NOT NULL)\n\ninstead of:\n\n Check constraints:\n \"tab_a_not_null\" CHECK (a IS NOT NULL)\n \"tab_b_not_null\" CHECK (b IS NOT NULL)\n \"tab_c_not_null\" CHECK (c IS NOT NULL)\n \"tab_d_not_null\" CHECK (d IS NOT NULL)\n \"tab_e_not_null\" CHECK (e IS NOT NULL)\n\nBut the performance of repeated row-casting would probably not be as\ngood as our current NULL checks if we'd use the current row\ninfrastructure, and constraint failure reports wouldn't be as helpful\nas the current attribute NOT NULL failures.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Mon, 19 Sep 2022 16:10:17 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
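(The `ROW() IS NOT NULL` requirement Matthias relies on above can be checked directly in any PostgreSQL session; this illustrative sketch is not part of the original thread.)

```sql
-- A row value satisfies IS NOT NULL only when every field is non-null,
-- which is what would make a single row-based CHECK equivalent to
-- per-column NOT NULL checks:
SELECT ROW(1, 2) IS NOT NULL;     -- true
SELECT ROW(1, NULL) IS NOT NULL;  -- false
-- Note the asymmetry: a partially-null row is neither IS NULL nor IS NOT NULL.
SELECT ROW(1, NULL) IS NULL;      -- false
```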
{
"msg_contents": "On 2022-Sep-19, Robert Haas wrote:\n\n> On Wed, Aug 17, 2022 at 2:12 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > 55489 16devel 1776237=# \\d tab\n> > Tabla «public.tab»\n> > Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión\n> > ─────────┼─────────┼──────────────┼──────────┼─────────────\n> > a │ integer │ │ not null │\n> > Restricciones CHECK:\n> > \"tab_a_not_null\" CHECK (a IS NOT NULL)\n> \n> In a table with many columns, most of which are NOT NULL, this is\n> going to produce a ton of clutter. I don't like that.\n> \n> I'm not sure what a good alternative would be, though.\n\nPerhaps that can be solved by displaying the constraint name in the\ntable:\n\n 55489 16devel 1776237=# \\d tab\n Tabla «public.tab»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión\n ─────────┼─────────┼──────────────┼────────────────┼─────────────\n a │ integer │ │ tab_a_not_null │ \n\n(Perhaps we can change the column title also, not sure.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"The Gord often wonders why people threaten never to come back after they've\nbeen told never to return\" (www.actsofgord.com)\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:51:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Sep-19, Isaac Morland wrote:\n\n> I thought I saw some discussion about the SQL standard saying that there is\n> a difference between putting NOT NULL in a column definition, and CHECK\n> (column_name NOT NULL). So if we're going to take this seriously,\n\nWas it Peter E.'s reply to this thread?\n\nhttps://postgr.es/m/bac841ed-b86d-e3c2-030d-02a3db067307@enterprisedb.com\n\nbecause if it wasn't there, then I would like to know what your source\nis.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Thou shalt not follow the NULL pointer, for chaos and madness await\nthee at its end.\" (2nd Commandment for C programmers)\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:53:02 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Sep-19, Matthias van de Meent wrote:\n\n> I'm not sure on the 'good' part of this alternative, but we could go\n> with a single row-based IS NOT NULL to reduce such clutter, utilizing\n> the `ROW() IS NOT NULL` requirement of a row only matching IS NOT NULL\n> when all attributes are also IS NOT NULL:\n> \n> Check constraints:\n> \"tab_notnull_check\" CHECK (ROW(a, b, c, d, e) IS NOT NULL)\n\nThere's no way to mark this NOT VALID individually or validate it\nafterwards, though.\n\n> But the performance of repeated row-casting would probably not be as\n> good as our current NULL checks\n\nThe NULL checks would still be mostly done by the attnotnull checks\ninternally, so there shouldn't be too much of a difference.\n\n.. though I'm now wondering if there's additional overhead from checking\nthe constraint twice on each row: first the attnotnull bit, then the\nCHECK itself. Hmm. That's probably quite bad.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:56:19 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, 20 Sept 2022 at 06:56, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> The NULL checks would still be mostly done by the attnotnull checks\n> internally, so there shouldn't be too much of a difference.\n>\n> .. though I'm now wondering if there's additional overhead from checking\n> the constraint twice on each row: first the attnotnull bit, then the\n> CHECK itself. Hmm. That's probably quite bad.\n>\n\nAnother reason to treat NOT NULL-implementing constraints differently.\n\nMy thinking is that pg_constraint entries for NOT NULL columns are mostly\nan implementation detail. I've certainly never cared whether I had an\nactual constraint corresponding to my NOT NULL columns. So I think marking\nthem as such, or a different contype, and excluding them from \\d+ display,\nprobably makes sense. Just need to deal with the issue of trying to create\na constraint and having its name conflict with a NOT NULL constraint. Could\nit work to reserve [field name]_notnull for NOT NULL-implementing\nconstraints? I'd be worried about what happens with field renames; renaming\nthe constraint automatically seems a bit weird, but maybe…",
"msg_date": "Tue, 20 Sep 2022 10:15:29 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2022-Sep-20, Isaac Morland wrote:\n\n> On Tue, 20 Sept 2022 at 06:56, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n> \n> > .. though I'm now wondering if there's additional overhead from checking\n> > the constraint twice on each row: first the attnotnull bit, then the\n> > CHECK itself. Hmm. That's probably quite bad.\n> \n> Another reason to treat NOT NULL-implementing constraints differently.\n\nYeah.\n\n> My thinking is that pg_constraint entries for NOT NULL columns are mostly\n> an implementation detail. I've certainly never cared whether I had an\n> actual constraint corresponding to my NOT NULL columns.\n\nNaturally, all catalog entries are implementation details; a user never\nreally cares if an entry exists or not, only that the desired semantics\nare provided. In this case, we want the constraint row because it gives\nus some additional features, such as the ability to mark NOT NULL\nconstraints NOT VALID and validate them later, which is a useful thing\nto do in large production databases. We have some hacks to provide part\nof that functionality using straight CHECK constraints, but you cannot\ncleanly get the `attnotnull` flag set for a column (which means it's\nhard to add a primary key, for example).\n\nIt is also supposed to fix some inconsistencies such as disallowing removal\nof a constraint on a table when it is implied from a constraint on\nan ancestor table. Right now we have ad-hoc protections for partitions,\nbut we don't do that for legacy inheritance.\n\nThat said, the patch I posted for this ~10 years ago used a separate\ncontype and was simpler than what I ended up with now, but amusingly\nenough it was returned at the time with the argument that it would be\nbetter to treat them as normal CHECK constraints; so I want to be very\nsure that we're not just going around in circles.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 20 Sep 2022 16:39:46 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
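(The CHECK-based workaround alluded to above looks like this on released versions; a sketch added for illustration, with made-up table and constraint names.)

```sql
-- On a large table, a CHECK constraint can be added without an immediate
-- full-table scan and validated later:
ALTER TABLE big ADD CONSTRAINT big_a_not_null CHECK (a IS NOT NULL) NOT VALID;
ALTER TABLE big VALIDATE CONSTRAINT big_a_not_null;

-- But this never sets pg_attribute.attnotnull for the column, so it
-- cannot back a primary key:
SELECT attnotnull FROM pg_attribute
WHERE attrelid = 'big'::regclass AND attname = 'a';   -- still false
```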
{
"msg_contents": "On Tue, Sep 20, 2022 at 10:39 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> That said, the patch I posted for this ~10 years ago used a separate\n> contype and was simpler than what I ended up with now, but amusingly\n> enough it was returned at the time with the argument that it would be\n> better to treat them as normal CHECK constraints; so I want to be very\n> sure that we're not just going around in circles.\n\nI don't have an intrinsic view on whether we ought to have one contype\nor two, but I think this comment of yours from a few messages ago is\nright on point: \".. though I'm now wondering if there's additional\noverhead from checking\nthe constraint twice on each row: first the attnotnull bit, then the\nCHECK itself. Hmm. That's probably quite bad.\" For that exact\nreason, it seems absolutely necessary to be able to somehow identify\nthe \"redundant\" check constraints, so that we don't pay the expense of\nrevalidating them. Another contype would be one way of identifying\nsuch constraints, but it could probably be done in other ways, too.\nPerhaps it could even be discovered dynamically, like when we build a\nrelcache entry. I actually have no idea what design is best.\n\nI am a little confused as to why we want to do this, though. While\nwe're on the topic of what is more complicated and simpler, what\nfunctionality do we get out of adding all of these additional catalog\nentries that then have to be linked back to the corresponding\nattnotnull markings? And couldn't we get that functionality in some\nmuch simpler way? 
Like, if you want to track whether the NOT NULL\nconstraint has been validated, we could add an attnotnullvalidated\ncolumn, or probably better, change the existing attnotnull column to a\ncharacter used as an enum, or maybe an integer bit-field of some kind.\nI'm not trying to blow up your patch with dynamite or anything, but\nI'm a little suspicious that this may be one of those cases where\npgsql-hackers discussion turns a complicated project into an even more\ncomplicated project to no real benefit.\n\nOne thing that I don't particularly like about this whole design is\nthat it feels like it creates a bunch of catalog bloat. Now all of the\nattnotnull flags also generate additional pg_constraint rows. The\ncatalogs in the default install will be bigger than before, and the\ncatalogs after user tables are created will be bigger still. If we get\nsome nifty benefit out of all that, cool! But if we're just\nanti-optimizing the catalog storage out of some feeling that the\nresult will be intellectually purer than some competing design, maybe\nwe should reconsider. It's not stupid to optimize for common special\ncases, and marking a column as NOT NULL is probably at least one and\nmaybe several orders of magnitude more common than putting some other\nCHECK constraint on it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 12:49:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
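(A separate contype would make the "redundant constraint" distinction trivially queryable, whether discovered dynamically or not. A hypothetical sketch; the letter 'n' for not-null constraints is an assumption here, not something this part of the thread settles.)

```sql
-- With a dedicated contype for not-null constraints, relcache building
-- (or any client) could separate them from real CHECK constraints with
-- a plain catalog scan:
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'tab'::regclass
  AND contype IN ('c', 'n');   -- 'c' = CHECK, 'n' = not-null (hypothetical)
```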
{
"msg_contents": "So I reworked this to use a new contype value for the NOT NULL\npg_constraint rows; I attach it here. I think it's fairly clean.\n\n0001 is just a trivial change that seemed obvious as soon as I ran into\nthe problem.\n\n0002 is the most interesting part.\n\nThings that are curious:\n\n- Inheritance and primary keys. If you have a table with a primary key,\nand create a child of it, that child is going to have a NOT NULL in the\ncolumn that is the primary key.\n\n- Inheritance and plain constraints. It is not allowed to remove the\nNOT NULL constraint from a child; currently, NO INHERIT constraints are\nnot supported. I would say this is a useless feature, but perhaps not.\n\n0003:\nSince nobody liked the idea of listing the constraints in psql \\d's\nfooter, I changed \\d+ so that the \"not null\" column shows the name of\nthe constraint if there is one, or the string \"(primary key)\" if the\nattnotnull marking for the column comes from the primary key. The new\ncolumn is going to be quite wide in some cases; if we want to hide it\nfurther, we could add the mythical \\d++ and have *that* list the\nconstraint name, keeping \\d+ as current.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Los trabajadores menos efectivos son sistematicamente llevados al lugar\ndonde pueden hacer el menor daño posible: gerencia.\" (El principio Dilbert)",
"msg_date": "Tue, 28 Feb 2023 20:15:37 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hmm, so it turned out that cfbot didn't like this because I didn't patch\none of the compression.out alternate files. Fixed here. I think in the\nfuture I'm not going to submit the 0003 patch, because it's not very\ninteresting while being way too bulky and also the one most likely to\nhave conflicts.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Hay que recordar que la existencia en el cosmos, y particularmente la\nelaboración de civilizaciones dentro de él no son, por desgracia,\nnada idílicas\" (Ijon Tichy)",
"msg_date": "Wed, 1 Mar 2023 13:03:48 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 08:15:37PM +0100, Alvaro Herrera wrote:\n> Since nobody liked the idea of listing the constraints in psql \\d's\n> footer, I changed \\d+ so that the \"not null\" column shows the name of\n> the constraint if there is one, or the string \"(primary key)\" if the\n> attnotnull marking for the column comes from the primary key. The new\n> column is going to be quite wide in some cases; if we want to hide it\n> further, we could add the mythical \\d++ and have *that* list the\n> constraint name, keeping \\d+ as current.\n\nOne concern here is that the title \"NOT NULL Constraint\" is itself\npretty wide, which is an issue for tables which have no not-null\nconstraints.\n\nOn Wed, Mar 01, 2023 at 01:03:48PM +0100, Alvaro Herrera wrote:\n> Hmm, so it turned out that cfbot didn't like this because I didn't patch\n> one of the compression.out alternate files. Fixed here. I think in the\n> future I'm not going to submit the 0003 patch, because it's not very\n> interesting while being way too bulky and also the one most likely to\n> have conflicts.\n\nI like \\dt++, and it seems like the obvious thing to do here, to avoid\nchanging lots of regression test output, which seems worth avoiding in\nany case, due to ensuing conflicts in other patches being developed, and\nin backpatching.\n\nRight now, \\dt+ includes a bit too much output, including things like\nsizes, which makes it hard to test. Moving some things into \\dt++ would\nmake \\dt+ more testable (and more usable BTW). Even if that's not true\nof (or not a good idea) for \\dt+, I'm sure it applies to other slash\ncommands. 
Currently, forty-five (45) psql commands support verbose\n\"plus\" variants, and the sql regression tests exercise fifteen (15) of\nthem.\n\nI proposed \\dn++, \\dA++, and \\db++ in 2ndary patches here:\nhttps://commitfest.postgresql.org/42/3256/\n\nI've considered sending a patch with \"plusplus\" commands as 001, to\npropose that on its own merits rather than in the context of \\d[Abn]++\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 1 Mar 2023 16:32:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 28.02.23 20:15, Alvaro Herrera wrote:\n> So I reworked this to use a new contype value for the NOT NULL\n> pg_constraint rows; I attach it here. I think it's fairly clean.\n> \n> 0001 is just a trivial change that seemed obvious as soon as I ran into\n> the problem.\n\nThis looks harmless enough, but I wonder what the reason for it is. \nWhat command can cause this error (no test case?)? Is there ever a \nconfusion about what table is in play?\n\n> 0002 is the most interesting part.\n\nWhere did this syntax come from:\n\n--- a/doc/src/sgml/ref/create_table.sgml\n+++ b/doc/src/sgml/ref/create_table.sgml\n@@ -77,6 +77,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | \nUNLOGGED ] TABLE [ IF NOT EXI\n\n [ CONSTRAINT <replaceable \nclass=\"parameter\">constraint_name</replaceable> ]\n { CHECK ( <replaceable class=\"parameter\">expression</replaceable> ) [ \nNO INHERIT ] |\n+ NOT NULL <replaceable class=\"parameter\">column_name</replaceable> |\n UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable \nclass=\"parameter\">column_name</replaceable> [, ... ] ) <replaceable \nclass=\"parameter\">in>\n PRIMARY KEY ( <replaceable \nclass=\"parameter\">column_name</replaceable> [, ... ] ) <replaceable \nclass=\"parameter\">index_parameters</replac>\n EXCLUDE [ USING <replaceable \nclass=\"parameter\">index_method</replaceable> ] ( <replaceable \nclass=\"parameter\">exclude_element</replaceable>\n\nI don't see that in the standard.\n\nIf we need it for something, we should at least document that it's an \nextension.\n\nThe test tables in foreign_key.sql are declared with columns like\n\n id bigint NOT NULL PRIMARY KEY,\n\nwhich is a bit weird and causes expected output diffs in your patch. Is \nthat interesting for this patch? 
Otherwise I suggest dropping the NOT \nNULL from those table definitions to avoid these extra diffs.\n\n> 0003:\n> Since nobody liked the idea of listing the constraints in psql \\d's\n> footer, I changed \\d+ so that the \"not null\" column shows the name of\n> the constraint if there is one, or the string \"(primary key)\" if the\n> attnotnull marking for the column comes from the primary key. The new\n> column is going to be quite wide in some cases; if we want to hide it\n> further, we could add the mythical \\d++ and have *that* list the\n> constraint name, keeping \\d+ as current.\n\nI think my rough preference here would be to leave the existing output \nstyle (column header \"Nullable\", content \"not null\") alone and display \nthe constraint name somewhere in the footer optionally. In practice, \nthe name of the constraint is rarely needed.\n\nI do like the idea of mentioning primary key-ness inside the table somehow.\n\nAs you wrote elsewhere, we can leave this patch alone for now.\n\n\n\n",
"msg_date": "Fri, 3 Mar 2023 11:15:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
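(For reference, the two forms under discussion; the second is the patch's table-constraint syntax questioned above, which the SQL standard does not have. Sketched from the quoted create_table.sgml hunk; the names are made up.)

```sql
-- Standard column-constraint form:
CREATE TABLE t1 (a int NOT NULL);

-- Table-constraint form added by the patch (a PostgreSQL extension),
-- which allows naming the constraint explicitly:
CREATE TABLE t2 (a int, CONSTRAINT t2_a_not_null NOT NULL a);
```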
{
"msg_contents": "On 2023-Mar-03, Peter Eisentraut wrote:\n\n> On 28.02.23 20:15, Alvaro Herrera wrote:\n> > So I reworked this to use a new contype value for the NOT NULL\n> > pg_constraint rows; I attach it here. I think it's fairly clean.\n> > \n> > 0001 is just a trivial change that seemed obvious as soon as I ran into\n> > the problem.\n> \n> This looks harmless enough, but I wonder what the reason for it is. What\n> command can cause this error (no test case?)? Is there ever a confusion\n> about what table is in play?\n\nHmm, I realize now that the only reason I have this is that I had a bug\nat some point: the case where it's not evident which table it is, is\nwhen you're adding a PK to a partitioned table and one of the partitions\ndoesn't have the NOT NULL marking. But if you add a PK with the patch,\nthe partitions are supposed to get the nullability marking\nautomatically; the bug is that they didn't. So we don't need patch 0001\nat all.\n\n> > 0002 is the most interesting part.\n\nAnother thing I realized after posting, is that the constraint naming\nbusiness is mistaken. It's currently written to work similarly to CHECK\nconstraints, that is: each descendent needs to have the constraint named\nthe same (this is so that descent works correctly when altering/dropping\nthe constraint afterwards). But for NOT NULL constraints, that is not\nnecessary, because when descending down the hierarchy, we can just match\nthe constraint based on column name, since each column has at most one\nNOT NULL constraint. So the games with constraint renaming are\naltogether unnecessary and can be removed from the patch. 
We just need\nto ensure that coninhcount/conislocal is updated appropriately.\n\n\n> Where did this syntax come from:\n> \n> --- a/doc/src/sgml/ref/create_table.sgml\n> +++ b/doc/src/sgml/ref/create_table.sgml\n> @@ -77,6 +77,7 @@ CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } |\n> UNLOGGED ] TABLE [ IF NOT EXI\n> \n> [ CONSTRAINT <replaceable class=\"parameter\">constraint_name</replaceable> ]\n> { CHECK ( <replaceable class=\"parameter\">expression</replaceable> ) [ NO\n> INHERIT ] |\n> + NOT NULL <replaceable class=\"parameter\">column_name</replaceable> |\n> UNIQUE [ NULLS [ NOT ] DISTINCT ] ( <replaceable\n> class=\"parameter\">column_name</replaceable> [, ... ] ) <replaceable\n> class=\"parameter\">in>\n> PRIMARY KEY ( <replaceable class=\"parameter\">column_name</replaceable> [,\n> ... ] ) <replaceable class=\"parameter\">index_parameters</replac>\n> EXCLUDE [ USING <replaceable class=\"parameter\">index_method</replaceable>\n> ] ( <replaceable class=\"parameter\">exclude_element</replaceable>\n> \n> I don't see that in the standard.\n\nYeah, I made it up because I needed table-level constraints for some\nreason that doesn't come to mind right now.\n\n> If we need it for something, we should at least document that it's an\n> extension.\n\nOK.\n\n> The test tables in foreign_key.sql are declared with columns like\n> \n> id bigint NOT NULL PRIMARY KEY,\n> \n> which is a bit weird and causes expected output diffs in your patch. Is\n> that interesting for this patch? Otherwise I suggest dropping the NOT NULL\n> from those table definitions to avoid these extra diffs.\n\nThe behavior is completely different if you drop the primary key. If\nyou don't have NOT NULL, then when you drop the PK the columns becomes\nnullable. 
If you do have a NOT NULL constraint in addition to the PK,\nand drop the PK, then the column remains non nullable.\n\nNow, if you want to suggest that dropping the PK ought to leave the\ncolumn as NOT NULL (that is, it automatically acquires a NOT NULL\nconstraint), then let's discuss that. But I understand the standard as\nsaying otherwise.\n\n\n> > 0003:\n> > Since nobody liked the idea of listing the constraints in psql \\d's\n> > footer, I changed \\d+ so that the \"not null\" column shows the name of\n> > the constraint if there is one, or the string \"(primary key)\" if the\n> > attnotnull marking for the column comes from the primary key. The new\n> > column is going to be quite wide in some cases; if we want to hide it\n> > further, we could add the mythical \\d++ and have *that* list the\n> > constraint name, keeping \\d+ as current.\n> \n> I think my rough preference here would be to leave the existing output style\n> (column header \"Nullable\", content \"not null\") alone and display the\n> constraint name somewhere in the footer optionally.\n\nWell, there is resistance to showing the name of the constraint in the\nfooter also because it's too verbose. In the end, I think a\n\"super-verbose\" mode is the most convincing way forward. (I think the\nlist of partitions in the footer of a partitioned table is a terrible\ndesign. Let's not repeat that.)\n\n> In practice, the name of the constraint is rarely needed.\n\nThat is true.\n\n> I do like the idea of mentioning primary key-ness inside the table somehow.\n\nMaybe change the \"not null\" to \"primary key\" in the Nullable column and\nnothing else.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. Bradbury)\n\n\n",
"msg_date": "Fri, 3 Mar 2023 11:47:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
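(The point above about dropping the primary key can be sketched like this, assuming the semantics described in the message; table names are made up.)

```sql
CREATE TABLE t1 (id bigint PRIMARY KEY);           -- non-nullability comes only from the PK
CREATE TABLE t2 (id bigint NOT NULL PRIMARY KEY);  -- plus an explicit NOT NULL constraint

ALTER TABLE t1 DROP CONSTRAINT t1_pkey;  -- id becomes nullable again
ALTER TABLE t2 DROP CONSTRAINT t2_pkey;  -- id stays NOT NULL
```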
{
"msg_contents": "Here's v5. I removed the business of renaming constraints in child\nrelations: recursing now just relies on matching column names. Each\ncolumn has only one NOT NULL constraint; if you try to add another,\nnothing happens. All in all, this code is pretty similar to how we\nhandle inheritance of columns, which I think is good.\n\nI added a mention that this funny syntax\n ALTER TABLE tab ADD CONSTRAINT NOT NULL col;\nis not standard. Maybe it's OK, but it seems a bit too prominent to me.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 15 Mar 2023 23:44:40 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 15.03.23 23:44, Alvaro Herrera wrote:\n> Here's v5. I removed the business of renaming constraints in child\n> relations: recursing now just relies on matching column names. Each\n> column has only one NOT NULL constraint; if you try to add another,\n> nothing happens. All in all, this code is pretty similar to how we\n> handle inheritance of columns, which I think is good.\n\nThis patch looks pretty okay to me now. It matches all the functional \nexpectations.\n\nI suggest going through the tests carefully again and make sure all the \nchanges are sensible and all the comments are correct. There are a few \nplaces where the behavior of tests has changed (intentionally) but the \nsurrounding comments don't match anymore, or objects that previously \nweren't created now succeed but then affect following tests. Also, it \nseems some tests are left over from the first variant of this patch \n(where not-null constraints were converted to check constraints), and \ntest names or comments should be updated to the current behavior.\n\nI suppose we don't need any changes in pg_dump, since ruleutils.c \nhandles that?\n\nThe information schema should be updated. I think the following views:\n\n- CHECK_CONSTRAINTS\n- CONSTRAINT_COLUMN_USAGE\n- DOMAIN_CONSTRAINTS\n- TABLE_CONSTRAINTS\n\nIt looks like these have no test coverage; maybe that could be addressed \nat the same time.\n\n\n\n",
"msg_date": "Mon, 27 Mar 2023 15:55:27 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 27.03.23 15:55, Peter Eisentraut wrote:\n> The information schema should be updated. I think the following views:\n> \n> - CHECK_CONSTRAINTS\n> - CONSTRAINT_COLUMN_USAGE\n> - DOMAIN_CONSTRAINTS\n> - TABLE_CONSTRAINTS\n> \n> It looks like these have no test coverage; maybe that could be addressed \n> at the same time.\n\nHere are patches for this. I haven't included the expected files for \nthe tests; this should be checked again that output is correct or the \nchanges introduced by this patch set are as expected.\n\nThe reason we didn't have tests for this before was probably in part \nbecause the information schema made up names for not-null constraints \ninvolving OIDs, so the test wouldn't have been stable.\n\nFeel free to integrate this, or we can add it on afterwards.",
"msg_date": "Wed, 29 Mar 2023 16:46:16 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Mar-27, Peter Eisentraut wrote:\n\n> I suggest going through the tests carefully again and make sure all the\n> changes are sensible and all the comments are correct. There are a few\n> places where the behavior of tests has changed (intentionally) but the\n> surrounding comments don't match anymore, or objects that previously weren't\n> created now succeed but then affect following tests. Also, it seems some\n> tests are left over from the first variant of this patch (where not-null\n> constraints were converted to check constraints), and test names or comments\n> should be updated to the current behavior.\n\nThanks for reviewing!\n\nYeah, there were some obsolete tests. I fixed those, added a couple\nmore, and while doing that I realized that failing to have NO INHERIT\nconstraints may be seen as regressing feature-wise, because there would\nbe no way to return to the situation where a parent table has a NOT NULL\nbut the children don't necessarily. So I added that, and that led me to\nchanging the code structure a bit more in order to support *not* copying\nthe attnotnull flag in the cases where the parent only has it because of\na NO INHERIT constraint.\n\nI'll go over this again tomorrow with fresh eyes, but I think it should\nbe pretty close to ready. (Need to amend docs to note the new NO\nINHERIT option for NOT NULL table constraints, and make sure pg_dump\ncomplies.)\n\nTests are currently running: https://cirrus-ci.com/build/6261827823206400\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Las navajas y los monos deben estar siempre distantes\" (Germán Poo)",
"msg_date": "Thu, 6 Apr 2023 01:33:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-06 01:33:56 +0200, Alvaro Herrera wrote:\n> I'll go over this again tomorrow with fresh eyes, but I think it should\n> be pretty close to ready. (Need to amend docs to note the new NO\n> INHERIT option for NOT NULL table constraints, and make sure pg_dump\n> complies.)\n\nMissed this thread somehow. This is not a review - I just want to say that I\nam very excited that we might finally catalogue NOT NULL constraints. This has\nbeen a long time in the making...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 5 Apr 2023 18:54:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, Apr 05, 2023 at 06:54:54PM -0700, Andres Freund wrote:\n> On 2023-04-06 01:33:56 +0200, Alvaro Herrera wrote:\n>> I'll go over this again tomorrow with fresh eyes, but I think it should\n>> be pretty close to ready. (Need to amend docs to note the new NO\n>> INHERIT option for NOT NULL table constraints, and make sure pg_dump\n>> complies.)\n> \n> Missed this thread somehow. This is not a review - I just want to say that I\n> am very excited that we might finally catalogue NOT NULL constraints. This has\n> been a long time in the making...\n\n+1!\n--\nMichael",
"msg_date": "Thu, 6 Apr 2023 11:08:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Thu, Apr 06, 2023 at 01:33:56AM +0200, Alvaro Herrera wrote:\n> - The forms <literal>ADD</literal> (without <literal>USING INDEX</literal>),\n> + The forms <literal>ADD</literal> (without <literal>USING INDEX</literal>, and\n> + except for the <literal>NOT NULL <replaceable>column_name</replaceable></literal>\n> + form to add a table constraint),\n\nThe \"except\" part seems pretty incoherent to me :(\n\n> + if (isnull)\n> + elog(ERROR, \"null conkey for NOT NULL constraint %u\", conForm->oid);\n\ncould use SysCacheGetAttrNotNull()\n\n> +\t\tif (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\terrcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +\t\t\t\t\terrmsg(\"cannot add constraint to only the partitioned table when partitions exist\"),\n> +\t\t\t\t\terrhint(\"Do not specify the ONLY keyword.\"));\n> +\t\telse\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\terrcode(ERRCODE_INVALID_TABLE_DEFINITION),\n> +\t\t\t\t\terrmsg(\"cannot add constraint to table with inheritance children\"),\n\nmissing \"only\" ?\n\n> +\tconrel = table_open(ConstraintRelationId, RowExclusiveLock);\n\nShould this be opened after the following error check ?\n\n> +\t\tarr = DatumGetArrayTypeP(adatum);\t/* ensure not toasted */\n> +\t\tnumkeys = ARR_DIMS(arr)[0];\n> +\t\tif (ARR_NDIM(arr) != 1 ||\n> +\t\t\tnumkeys < 0 ||\n> +\t\t\tARR_HASNULL(arr) ||\n> +\t\t\tARR_ELEMTYPE(arr) != INT2OID)\n> +\t\t\telog(ERROR, \"conkey is not a 1-D smallint array\");\n> +\t\tattnums = (int16 *) ARR_DATA_PTR(arr);\n> +\n> +\t\tfor (int i = 0; i < numkeys; i++)\n> +\t\t\tunconstrained_cols = lappend_int(unconstrained_cols, attnums[i]);\n> +\t}\n\nDoes \"arr\" need to be freed ?\n\n> +\t\t\t * Since the above deletion has been made visible, we can now\n> +\t\t\t * search for any remaining constraints on this column (or these\n> +\t\t\t * columns, in the case we're dropping a multicol primary key.)\n> +\t\t\t * Then, verify whether any further NOT NULL or primary 
key exist,\n\nIf I'm reading it right, I think it should say \"exists\"\n\n> +/*\n> + * When a primary key index on a partitioned table is to be attached an index\n> + * on a partition, the partition's columns should also be marked NOT NULL.\n> + * Ensure that is the case.\n\nI think the comment may be missing words, or backwards.\nThe index on the *partitioned* table wouldn't be attached.\nIs the index on the *partition* that's attached *to* the former index.\n\n> +create table c() inherits(inh_p1, inh_p2, inh_p3, inh_p4);\n> +NOTICE: merging multiple inherited definitions of column \"f1\"\n> +NOTICE: merging multiple inherited definitions of column \"f1\"\n> +ERROR: relation \"c\" already exists\n\nDo you intend to make an error here ?\n\nAlso, I think these table names may be too generic, and conflict with\nother parallel tests, now or in the future.\n\n> +create table d(a int not null, f1 int) inherits(inh_p3, c);\n> +ERROR: relation \"d\" already exists\n\nAnd here ?\n\n> +-- with explicitely specified not null constraints\n\nsp: explicitly\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 6 Apr 2023 13:20:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Apr-06, Justin Pryzby wrote:\n\n> On Thu, Apr 06, 2023 at 01:33:56AM +0200, Alvaro Herrera wrote:\n> > - The forms <literal>ADD</literal> (without <literal>USING INDEX</literal>),\n> > + The forms <literal>ADD</literal> (without <literal>USING INDEX</literal>, and\n> > + except for the <literal>NOT NULL <replaceable>column_name</replaceable></literal>\n> > + form to add a table constraint),\n> \n> The \"except\" part seems pretty incoherent to me :(\n\nYeah, I feared that would be the case. I can't think of a wording\nthat doesn't take two lines, so suggestions welcome.\n\nI handled your other comments, except these:\n\n> > +\tconrel = table_open(ConstraintRelationId, RowExclusiveLock);\n> \n> Should this be opened after the following error check ?\n\nAdded new code in the middle when I found a small problem, so now the\ntable_open is necessary there. (To wit: if we DROP NOT NULL a\nconstraint that is both locally defined in the table and inherited, we\nshould remove the \"conislocal\" flag and it's done. Previously, we were\nthrowing an error that the constraint is inherited, but that's wrong.)\n\n> > +\t\tarr = DatumGetArrayTypeP(adatum);\t/* ensure not toasted */\n> \n> Does \"arr\" need to be freed ?\n\nI see this pattern in one or two other places and we don't worry about\nsuch small allocations too much. (I copied this code almost verbatim\nfrom somewhere IIRC).\n\nAnyway, I found a couple of additional minor problems when playing with\nsome additional corner case scenarios; I cleaned up the test cases, per\nPeter. Then I realized that pg_dump support was missing completely, so\nI filled that in. 
Sadly, the binary-upgrade mode is a bit of a mess and\nthus the pg_upgrade test is failing.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)",
"msg_date": "Fri, 7 Apr 2023 04:14:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "I think this should fix the pg_upgrade issues.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)",
"msg_date": "Fri, 7 Apr 2023 14:49:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Fri, Apr 07, 2023 at 04:14:13AM +0200, Alvaro Herrera wrote:\n> On 2023-Apr-06, Justin Pryzby wrote:\n\n> > +ERROR: relation \"c\" already exists\n>\n> Do you intend to make an error here ?\n\nThese still look like mistakes in the tests.\n\n> Also, I think these table names may be too generic, and conflict with\n> other parallel tests, now or in the future.\n>\n> > +create table d(a int not null, f1 int) inherits(inh_p3, c);\n> > +ERROR: relation \"d\" already exists\n\n> Sadly, the binary-upgrade mode is a bit of a mess and thus the\n> pg_upgrade test is failing.\n\n\n",
"msg_date": "Fri, 7 Apr 2023 09:55:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nI think there's some test instability:\n\nFail:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2023-04-07%2018%3A43%3A02\nSubsequent success, without relevant changes:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2023-04-07%2020%3A22%3A01\nFollowed by a failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=parula&dt=2023-04-07%2020%3A31%3A02\n\nSimilar failures on other animals:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=komodoensis&dt=2023-04-07%2020%3A27%3A43\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=siskin&dt=2023-04-07%2020%3A09%3A25\n\nThere's also as second type of failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=dragonet&dt=2023-04-07%2020%3A23%3A35\n..\n\nI suspect there's a naming conflict between tests in different groups.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 13:38:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 13:38:43 -0700, Andres Freund wrote:\n> I suspect there's a naming conflict between tests in different groups.\n\nYep:\n\ntest: create_aggregate create_function_sql create_cast constraints triggers select inherit typed_table vacuum drop_if_exists updatable_views roleattributes create_am hash_func errors infinite_recurse\n\nsrc/test/regress/sql/inherit.sql\n851:create table child(f1 int not null, f2 text not null) inherits(inh_parent_1, inh_parent_2);\n\nsrc/test/regress/sql/triggers.sql\n2127:create table child partition of parent for values in ('AAA');\n2266:create table child () inherits (parent);\n2759:create table child () inherits (parent);\n\nThe inherit.sql part is new.\n\nI'll see how hard it is to fix.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 13:45:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Apr-07, Andres Freund wrote:\n\n> src/test/regress/sql/triggers.sql\n> 2127:create table child partition of parent for values in ('AAA');\n> 2266:create table child () inherits (parent);\n> 2759:create table child () inherits (parent);\n> \n> The inherit.sql part is new.\n\nYeah.\n\n> I'll see how hard it is to fix.\n\nRunning the tests for it now -- it's a short fix.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n",
"msg_date": "Fri, 7 Apr 2023 23:00:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 23:00:01 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-07, Andres Freund wrote:\n> \n> > src/test/regress/sql/triggers.sql\n> > 2127:create table child partition of parent for values in ('AAA');\n> > 2266:create table child () inherits (parent);\n> > 2759:create table child () inherits (parent);\n> > \n> > The inherit.sql part is new.\n> \n> Yeah.\n> \n> > I'll see how hard it is to fix.\n> \n> Running the tests for it now -- it's a short fix.\n\nI just pushed a fix - sorry, I thought you might have stopped working for the\nday and CI finished with the modification a few seconds before your email\narrived...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 14:10:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Apr-07, Andres Freund wrote:\n\n> I just pushed a fix - sorry, I thought you might have stopped working for the\n> day and CI finished with the modification a few seconds before your email\n> arrived...\n\nAh, cool, no worries. I would have stopped indeed, but I had to stay\naround in case of any test failures.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)\n\n\n",
"msg_date": "Fri, 7 Apr 2023 23:11:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 23:11:55 +0200, Alvaro Herrera wrote:\n> On 2023-Apr-07, Andres Freund wrote:\n> \n> > I just pushed a fix - sorry, I thought you might have stopped working for the\n> > day and CI finished with the modification a few seconds before your email\n> > arrived...\n> \n> Ah, cool, no worries. I would have stopped indeed, but I had to stay\n> around in case of any test failures.\n\nLooks like there's work for you if you want ;)\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2023-04-07%2018%3A52%3A13\n\nBut IMO fixing sepgsql can easily wait till tomorrow.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 14:19:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-07 23:11:55 +0200, Alvaro Herrera wrote:\n>> Ah, cool, no worries. I would have stopped indeed, but I had to stay\n>> around in case of any test failures.\n\n> Looks like there's work for you if you want ;)\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2023-04-07%2018%3A52%3A13\n\n> But IMO fixing sepgsql can easily wait till tomorrow.\n\nI can deal with that one -- it's a bit annoying to work with sepgsql\nif you're not on a Red Hat platform.\n\nAfter quickly eyeing the diffs, I'm just going to take the new output\nas good. I'm not surprised that there are additional output messages\ngiven the additional catalog entries this made. I *am* a bit surprised\nthat some messages seem to have disappeared --- are there places where\nthis resulted in fewer catalog accesses than before? Nonetheless,\nthere's no good reason to assume this test is exposing any bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 17:46:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 17:46:33 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-07 23:11:55 +0200, Alvaro Herrera wrote:\n> >> Ah, cool, no worries. I would have stopped indeed, but I had to stay\n> >> around in case of any test failures.\n> \n> > Looks like there's work for you if you want ;)\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=rhinoceros&dt=2023-04-07%2018%3A52%3A13\n> \n> > But IMO fixing sepgsql can easily wait till tomorrow.\n> \n> I can deal with that one -- it's a bit annoying to work with sepgsql\n> if you're not on a Red Hat platform.\n\nIndeed. I tried to get them running a while back, to enable the tests with\nmeson, without lot of success. Then I realized that they're also not wired up\nin make... ;)\n\n\n> After quickly eyeing the diffs, I'm just going to take the new output\n> as good. I'm not surprised that there are additional output messages\n> given the additional catalog entries this made. I *am* a bit surprised\n> that some messages seem to have disappeared --- are there places where\n> this resulted in fewer catalog accesses than before? 
Nonetheless,\n> there's no good reason to assume this test is exposing any bugs.\n\nI wonder if the issue is that the new paths miss a hook invocation.\n\n@@ -160,11 +160,7 @@\n ALTER TABLE regtest_table ALTER b SET DEFAULT 'XYZ'; -- not supported yet\n ALTER TABLE regtest_table ALTER b DROP DEFAULT; -- not supported yet\n ALTER TABLE regtest_table ALTER b SET NOT NULL;\n-LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema_2.regtest_table.b\" permissive=0\n-LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema.regtest_table_2.b\" permissive=0\n ALTER TABLE regtest_table ALTER b DROP NOT NULL;\n-LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema_2.regtest_table.b\" permissive=0\n-LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema.regtest_table_2.b\" permissive=0\n ALTER TABLE regtest_table ALTER b SET STATISTICS -1;\n LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema_2.regtest_table.b\" permissive=0\n LOG: SELinux: allowed { setattr } scontext=unconfined_u:unconfined_r:sepgsql_regtest_superuser_t:s0 tcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column name=\"regtest_schema.regtest_table_2.b\" permissive=0\n\nThe 'not supported yet' cases don't emit messages. Previously SET NOT NULL\nwasn't among that set, but seemingly it now is.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 15:23:26 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-04-07 17:46:33 -0400, Tom Lane wrote:\n>> After quickly eyeing the diffs, I'm just going to take the new output\n>> as good. I'm not surprised that there are additional output messages\n>> given the additional catalog entries this made. I *am* a bit surprised\n>> that some messages seem to have disappeared --- are there places where\n>> this resulted in fewer catalog accesses than before? Nonetheless,\n>> there's no good reason to assume this test is exposing any bugs.\n\n> I wonder if the issue is that the new paths miss a hook invocation.\n\nPerhaps. I'm content to silence the buildfarm for today; we can\ninvestigate more closely later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 18:26:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "... BTW, shouldn't\nhttps://commitfest.postgresql.org/42/3869/\nnow get closed as committed?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 07 Apr 2023 18:39:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 18:26:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-04-07 17:46:33 -0400, Tom Lane wrote:\n> >> After quickly eyeing the diffs, I'm just going to take the new output\n> >> as good. I'm not surprised that there are additional output messages\n> >> given the additional catalog entries this made. I *am* a bit surprised\n> >> that some messages seem to have disappeared --- are there places where\n> >> this resulted in fewer catalog accesses than before? Nonetheless,\n> >> there's no good reason to assume this test is exposing any bugs.\n> \n> > I wonder if the issue is that the new paths miss a hook invocation.\n> \n> Perhaps. I'm content to silence the buildfarm for today; we can\n> investigate more closely later.\n\nMakes sense.\n\nI think\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-04-07%2021%3A16%3A04\nmight point out a problem with the pg_dump or pg_upgrade backward compat\npaths:\n\n--- C:\\\\prog\\\\bf/root/upgrade.drongo/HEAD/origin-REL9_5_STABLE.sql.fixed\t2023-04-07 23:51:27.641328600 +0000\n+++ C:\\\\prog\\\\bf/root/upgrade.drongo/HEAD/converted-REL9_5_STABLE-to-HEAD.sql.fixed\t2023-04-07 23:51:27.672571900 +0000\n@@ -416,9 +416,9 @@\n -- Name: entry; Type: TABLE; Schema: public; Owner: buildfarm\n --\n CREATE TABLE public.entry (\n- accession text,\n- eid integer,\n- txid smallint\n+ accession text NOT NULL,\n+ eid integer NOT NULL,\n+ txid smallint NOT NULL\n );\n ALTER TABLE public.entry OWNER TO buildfarm;\n --\n\nLooks like we're making up NOT NULL constraints when migrating from 9.5, for\nsome reason?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 17:19:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-04-07 17:19:42 -0700, Andres Freund wrote:\n> I think\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-04-07%2021%3A16%3A04\n> might point out a problem with the pg_dump or pg_upgrade backward compat\n> paths:\n> \n> --- C:\\\\prog\\\\bf/root/upgrade.drongo/HEAD/origin-REL9_5_STABLE.sql.fixed\t2023-04-07 23:51:27.641328600 +0000\n> +++ C:\\\\prog\\\\bf/root/upgrade.drongo/HEAD/converted-REL9_5_STABLE-to-HEAD.sql.fixed\t2023-04-07 23:51:27.672571900 +0000\n> @@ -416,9 +416,9 @@\n> -- Name: entry; Type: TABLE; Schema: public; Owner: buildfarm\n> --\n> CREATE TABLE public.entry (\n> - accession text,\n> - eid integer,\n> - txid smallint\n> + accession text NOT NULL,\n> + eid integer NOT NULL,\n> + txid smallint NOT NULL\n> );\n> ALTER TABLE public.entry OWNER TO buildfarm;\n> --\n> \n> Looks like we're making up NOT NULL constraints when migrating from 9.5, for\n> some reason?\n\nMy compiler complains:\n\n../../../../home/andres/src/postgresql/src/backend/catalog/heap.c: In function ‘AddRelationNotNullConstraints’:\n../../../../home/andres/src/postgresql/src/backend/catalog/heap.c:2829:37: warning: ‘conname’ may be used uninitialized [-Wmaybe-uninitialized]\n 2829 | if (strcmp(lfirst(lc2), conname) == 0)\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~\n../../../../home/andres/src/postgresql/src/backend/catalog/heap.c:2802:29: note: ‘conname’ was declared here\n 2802 | char *conname;\n | ^~~~~~~\n\nI think the compiler may be right - I think the first use of conname might\nhave been intended as constr->conname?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 7 Apr 2023 17:41:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=drongo&dt=2023-04-07%2021%3A16%3A04\n> might point out a problem with the pg_dump or pg_upgrade backward compat\n> paths:\n\nYeah, this patch has broken every single upgrade-from-back-branch test.\n\nI think there's a second problem, though: even without considering\nback branches, this has changed pg_dump output in a way that\nI fear is unacceptable. Consider for instance this table definition\n(from rules.sql):\n\ncreate table rule_and_refint_t1 (\n\tid1a integer,\n\tid1b integer,\n\tprimary key (id1a, id1b)\n);\n\nThis used to be dumped as\n\nCREATE TABLE public.rule_and_refint_t1 (\n id1a integer NOT NULL,\n id1b integer NOT NULL\n);\n...\n... load data ...\n...\nALTER TABLE ONLY public.rule_and_refint_t1\n ADD CONSTRAINT rule_and_refint_t1_pkey PRIMARY KEY (id1a, id1b);\n\nIn the new dispensation, pg_dump omits the NOT NULL clauses.\nGreat, you say, that makes the output more like what the user wrote.\nI'm not so sure. This means that the ALTER TABLE will be compelled\nto perform a full-table scan to verify that there are no nulls in the\nalready-loaded data before it can add the missing NOT NULL constraint.\nThe old dump output was carefully designed to avoid the need for that\nscan. Admittedly, we have to do a scan anyway to build the index,\nso this is strictly less than a 2X penalty on the ALTER, but is\nthat acceptable? It might be all right in the context of regular\ndump/restore, where we're surely doing a lot of per-row work anyway\nto load the data and make the index. In the context of pg_upgrade,\nthough, it seems absolutely disastrous: there will now be a per-row\ncost where there was none before, and that is surely a deal-breaker.\n\nBTW, I note from testing that the NOT NULL clauses *are* still\nemitted in at least some cases when doing --binary-upgrade from an old\nversion. 
(This may be directly related to the buildfarm failures,\nnot sure.) That's no solution though, because now what you get in\npg_constraint will differ depending on which way you upgraded,\nwhich seems unacceptable too.\n\nI'm inclined to think that this idea of suppressing the implied\nNOT NULL from PRIMARY KEY is a nonstarter and we should just\ngo ahead and make such a constraint. Another idea could be for\npg_dump to emit the NOT NULL, load data, do the ALTER ADD PRIMARY\nKEY, and then ALTER DROP NOT NULL.\n\nIn any case, I wonder whether that's the sort of redesign we should\nbe doing post-feature-freeze. It might be best to revert and try\nagain in v17.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Apr 2023 12:23:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Apr-09, Tom Lane wrote:\n\n> In the new dispensation, pg_dump omits the NOT NULL clauses.\n> Great, you say, that makes the output more like what the user wrote.\n> I'm not so sure. This means that the ALTER TABLE will be compelled\n> to perform a full-table scan to verify that there are no nulls in the\n> already-loaded data before it can add the missing NOT NULL constraint.\n\nYeah, I agree that this unintended consequence isn't very palatable. I\nthink the other pg_upgrade problem is easily fixed (haven't tried yet),\nbut having to rethink the pg_dump representation would likely take\nlonger than we'd like.\n\n> I'm inclined to think that this idea of suppressing the implied\n> NOT NULL from PRIMARY KEY is a nonstarter and we should just\n> go ahead and make such a constraint. Another idea could be for\n> pg_dump to emit the NOT NULL, load data, do the ALTER ADD PRIMARY\n> KEY, and then ALTER DROP NOT NULL.\n\nI like that second idea, yeah. It might be tough to make it work, but\nI'll try.\n\n> In any case, I wonder whether that's the sort of redesign we should\n> be doing post-feature-freeze. It might be best to revert and try\n> again in v17.\n\nYeah, sounds like reverting for now and retrying in v17 with the\ndiscussed changes might be better.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Sun, 9 Apr 2023 21:11:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> I'm inclined to think that this idea of suppressing the implied\n>> NOT NULL from PRIMARY KEY is a nonstarter and we should just\n>> go ahead and make such a constraint. Another idea could be for\n>> pg_dump to emit the NOT NULL, load data, do the ALTER ADD PRIMARY\n>> KEY, and then ALTER DROP NOT NULL.\n\n> I like that second idea, yeah. It might be tough to make it work, but\n> I'll try.\n\nYeah, I've been thinking more about it, and this might also yield a\nworkable solution for the TestUpgrade breakage. The idea would be,\nroughly, for pg_dump to emit NOT NULL column decoration in all the\nsame places it does now, and then to drop it again immediately after\ndoing ADD PRIMARY KEY if it judges that there was no other reason\nto have it. This gets rid of the inconsistency for --binary-upgrade\nwhich I think is what is causing the breakage.\n\nI also ran into something else I didn't much care for:\n\nregression=# create table foo(f1 int primary key, f2 int);\nCREATE TABLE\nregression=# create table foochild() inherits(foo);\nCREATE TABLE\nregression=# alter table only foo alter column f2 set not null;\nERROR: cannot add constraint only to table with inheritance children\nHINT: Do not specify the ONLY keyword.\n\nPrevious versions accepted this case, and I don't really see why\nwe can't do so with this new implementation -- isn't this exactly\nwhat pg_constraint.connoinherit was invented to represent? Moreover,\nexisting pg_dump files can contain precisely this construct, so\nblowing it off isn't going to be zero-cost.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 09 Apr 2023 16:11:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "OK, so here's a new attempt to get this working correctly. This time I\ndid try the new pg_upgrade when starting with a pg_dumpall produced by a\nserver in branch 14 after running the regression tests. The pg_upgrade\nsupport is *really* finicky ...\n\nThe main novelty in this version of the patch, is that we now emit\n\"throwaway\" NOT NULL constraints when a column is part of the primary\nkey. Then, after the PK is created, we run a DROP for that constraint.\nThat lets us create the PK without having to scan the table during\npg_upgrade. (I thought about adding a new dump object, either one per\ntable or just a single one for the whole dump, which would carry the\nALTER TABLE .. DROP CONSTRAINT commands for those throwaway constraints.\nI decided that this is unnecessary, so the code the command in the same\ndump object that does ALTER TABLE ADD PRIMARY KEY seems good enough. If\nsomebody sees a reason to do it differently, we can.)\n\n\nThere's new funny business with RelationGetIndexList and primary keys of\npartitioned tables. With the patch, we continue to store the OID of the\nPK even when that index is marked invalid. The reason for this is\npg_dump: when it does the ALTER TABLE to drop the NOT NULLs, the columns\nwould become marked nullable, because since the PK is invalid, it's not\nconsidered to protect the columns. I guess it might be possible to\nimplement this in some other way, but I found none that were reasonable.\nI didn't find that did had any undesirable side-effects anyway.\n\n\nScanning this thread, I think I left one reported issue unfixed related\nto tables created LIKE others. I'll give it a look later. 
Other than\nthat I think all bases are covered, but I intend to leave the patch open\nuntil near the end of the CF, in case someone wants to play with it.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html",
"msg_date": "Fri, 30 Jun 2023 13:44:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi,\n\nOn 2023-06-30 13:44:03 +0200, Alvaro Herrera wrote:\n> OK, so here's a new attempt to get this working correctly.\n\nThanks for continuing to work on this!\n\n\n> The main novelty in this version of the patch, is that we now emit\n> \"throwaway\" NOT NULL constraints when a column is part of the primary\n> key. Then, after the PK is created, we run a DROP for that constraint.\n> That lets us create the PK without having to scan the table during\n> pg_upgrade.\n\nHave you considered extending the DDL statement for this purpose? We have\n ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY USING INDEX ...;\nwe could just do something similar for the NOT NULL constraint? Which would\nthen delete the separate constraint NOT NULL constraint.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Jun 2023 18:12:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jun-30, Andres Freund wrote:\n\n> On 2023-06-30 13:44:03 +0200, Alvaro Herrera wrote:\n> \n> > The main novelty in this version of the patch, is that we now emit\n> > \"throwaway\" NOT NULL constraints when a column is part of the primary\n> > key. Then, after the PK is created, we run a DROP for that constraint.\n> > That lets us create the PK without having to scan the table during\n> > pg_upgrade.\n> \n> Have you considered extending the DDL statement for this purpose? We have\n> ALTER TABLE ... ADD CONSTRAINT ... PRIMARY KEY USING INDEX ...;\n> we could just do something similar for the NOT NULL constraint? Which would\n> then delete the separate constraint NOT NULL constraint.\n\nHmm, I hadn't. I think if we have to explicitly list the constraint\nthat we want dropped, then it's pretty much the same than as if we used\na comma-separated list of subcommands, like \n\nALTER TABLE ... ADD CONSTRAINT .. PRIMARY KEY (a,b),\n DROP CONSTRAINT pgdump_throwaway_notnull_0,\n DROP CONSTRAINT pgdump_throwaway_notnull_1;\n\nHowever, I think it would be ideal if we *don't* have to specify the\nlist of constraints: we would do this on any ALTER TABLE .. ADD\nCONSTRAINT PRIMARY KEY, without having any additional clause.\n\nBut how to distinguish which NOT NULL markings to drop? Maybe we would\nhave to specify a flag at NOT NULL constraint creation time. So pg_dump\nwould emit something like\n\nCREATE TABLE foo (a int CONSTRAINT NOT NULL THROWAWAY);\n... (much later) ...\nALTER TABLE foo ADD CONSTRAINT .. PRIMARY KEY;\n\nand by the time this second command is run, those throwaway constraints\nare removed. The problems now are 1) how to make this CREATE statement\nmore SQL-conformant (answer: make pg_dump emit a separate ALTER TABLE\ncommand for the constraint addition; it already knows how to do this, so\nit'd be very little code); but also 2) where to store the flag\nserver-side flag that says this constraint has this property. 
I think\nit'd have to be a new pg_constraint column, and I don't like to add one\nfor such a minor issue.\n\nOn 2023-Jun-30, Alvaro Herrera wrote:\n\n> Scanning this thread, I think I left one reported issue unfixed related\n> to tables created LIKE others. I'll give it a look later. Other than\n> that I think all bases are covered, but I intend to leave the patch open\n> until near the end of the CF, in case someone wants to play with it.\n\nSo it was [1] that I meant, where this example was provided:\n\n# create table t1 (c int primary key null unique); \n# create table t2 (like t1); \n# alter table t2 alter c drop not null; \nERROR: no NOT NULL constraint found to drop\n\nThe problem here is that because we didn't give INCLUDING INDEXES in the\nLIKE clause, we end up with a column marked NOT NULL for which we have\nno pg_constraint row. Okay, I thought, we can just make sure *not* to\nmark that case as not null; that works fine and looks reasonable.\nHowever, it breaks the following use case, which is already in use in\nthe regression tests and possibly by users:\n\n CREATE TABLE pk (a int PRIMARY KEY) PARTITION BY RANGE (a);\n CREATE TABLE pk4 (LIKE pk);\n ALTER TABLE pk ATTACH PARTITION pk4 FOR VALUES FROM (3000) TO (4000);\n+ERROR: column \"a\" in child table must be marked NOT NULL\n\nThe problem here is that we were assuming, by the time the third command\nis run, that the column had been marked NOT NULL by the second command.\nSo my solution above is simply not acceptable. What we must do, in\norder to handle this backward-compatibly, is to ensure that a column\npart of a PK automatically gets a NOT NULL constraint for all the PK\ncolumns, for the case where INCLUDING INDEXES is not given. 
This is the\nsame we do for regular INHERITS children and PKs.\n\nI'll go write this code now; should be simple enough.\n\n[1] https://postgr.es/m/CAMbWs48astPDb3K+L89wb8Yju0jM_Czm8svmU=Uzd+WM61Cr6Q@mail.gmail.com\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 14:58:49 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 30.06.23 13:44, Alvaro Herrera wrote:\n> OK, so here's a new attempt to get this working correctly.\n\nAttached is a small fixup patch for the documentation.\n\nFurthermore, there are a few outdated comments that are probably left \nover from previous versions of this patch set.\n\n\n* src/backend/catalog/pg_constraint.c\n\nOutdated comment:\n\n+ /* only tuples for CHECK constraints should be given */\n+ Assert(((Form_pg_constraint) GETSTRUCT(constrTup))->contype == \nCONSTRAINT_NOTNULL);\n\n\n* src/backend/parser/gram.y\n\nShould processCASbits() set &n->skip_validation, like in the CHECK\ncase? _outConstraint() looks at the field, so it seems relevant.\n\n\n* src/backend/parser/parse_utilcmd.c\n\nThe updated comment says\n\n List *ckconstraints; /* CHECK and NOT NULL constraints */\n\nbut it seems to me that NOT NULL constraints are not actually added\nthere but in nnconstraints instead.\n\nIt would be nice if the field nnconstraints was listed after\nckconstraints consistently throughout the file. It's a bit random\nright now.\n\nThis comment is outdated:\n\n+ /*\n+ * For NOT NULL declarations, we need to mark the column as\n+ * not nullable, and set things up to have a CHECK \nconstraint\n+ * created. Also, duplicate NOT NULL declarations are not\n+ * allowed.\n+ */\n\nAbout this:\n\n case CONSTR_CHECK:\n cxt->ckconstraints = lappend(cxt->ckconstraints, \nconstraint);\n+\n+ /*\n+ * XXX If the user says CHECK (IS NOT NULL), should we turn\n+ * that into a regular NOT NULL constraint?\n+ */\n break;\n\nI think this was decided against.\n\n+ /*\n+ * Copy NOT NULL constraints, too (these do not require any option \nto have\n+ * been given).\n+ */\n\nShouldn't that be governed by the INCLUDING CONSTRAINTS option?\n\nBtw., there is some asymmetry here between check constraints and\nnot-null constraints: Check constraints are in the tuple descriptor,\nbut not-null constraints are not. Should that be unified? 
Or at\nleast explained?\n\n\n* src/include/nodes/parsenodes.h\n\nThis comment appears to be outdated:\n\n+ * intermixed in tableElts, and constraints and notnullcols are NIL. After\n+ * parse analysis, tableElts contains just ColumnDefs, notnullcols has been\n+ * filled with not-nullable column names from various sources, and \nconstraints\n+ * contains just Constraint nodes (in fact, only CONSTR_CHECK nodes, in the\n+ * present implementation).\n\nThere is no \"notnullcolls\".\n\n\n* src/test/regress/parallel_schedule\n\nThis change appears to be correct, but unrelated to this patch, so I\nsuggest committing this separately.\n\n\n* src/test/regress/sql/create_table.sql\n\n-SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = \n'part_b'::regclass;\n+SELECT conislocal, coninhcount FROM pg_constraint WHERE conrelid = \n'part_b'::regclass ORDER BY coninhcount DESC, conname;\n\nMaybe add conname to the select list here as well, for consistency with \nnearby queries.",
"msg_date": "Mon, 3 Jul 2023 16:02:37 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-03, Peter Eisentraut wrote:\n\n> On 30.06.23 13:44, Alvaro Herrera wrote:\n> > OK, so here's a new attempt to get this working correctly.\n> \n> Attached is a small fixup patch for the documentation.\n> \n> Furthermore, there are a few outdated comments that are probably left over\n> from previous versions of this patch set.\n\nThanks! I've incorporated your doc fixes and applied fixes for almost\nall the other issues you listed; and fixed a couple of additional\nissues, such as\n\n* adding a test to regress for an error message that wasn't covered (and\n removed the XXX comment about that)\n* remove a pointless variable addition to pg_dump (leftover from a\n previous implementation of constraint capture)\n* adapt the sepgsql tests again (we don't recurse to children when\n there's nothing to do, so an object hook invocation doesn't happen\n anymore -- I think)\n* made ATExecSetAttNotNull return the constraint address\n* more outdated comments adjustment in MergeAttributes\n\nMost importantly, I fixed table creation for LIKE inheritance, as I\ndescribed upthread already.\n\nThe one thing I have not touched is add ¬_valid to processCASbits()\nin gram.y; rather I added a comment that NOT VALID is not yet suported.\nI think adding support for that is a reasonably easy on top of this\npatch, but since it also requires more pg_dump support and stuff, I'd\nrather not mix it in at this point. The pg_upgrade support is already\nquite a house of cards and it drove me crazy.\n\nSo, attached is v10.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)",
"msg_date": "Mon, 10 Jul 2023 18:52:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "I left two questions unanswered here, so here I respond to them while\ngiving one more revision of the patch.\n\nI realized that the AT_CheckNotNull stuff is now dead code, so in this\nversion I remove it. I also changed on heap_getattr to\nSysCacheGetAttrNotNull, per a very old review comment from Justin that I\nhadn't acted upon. The other changes are minor code comments and test\nadjustments.\n\nAt this point I think this is committable.\n\nOn 2023-Jul-03, Peter Eisentraut wrote:\n\n> + /*\n> + * Copy NOT NULL constraints, too (these do not require any option to have\n> + * been given).\n> + */\n> \n> Shouldn't that be governed by the INCLUDING CONSTRAINTS option?\n\nTo clarify: this is in LIKE, such as \nCREATE TABLE (LIKE someother);\nand the reason we don't want to make this behavior depend on INCLUDING\nCONSTRAINTS, is backwards compatibility; NOT NULL markings have\ntraditionally been propagated, so it can be used to create partitions\nbased on the parent table, and if we made that require the option to be\nspecified, that would no longer occur in the default case. Maybe we can\nchange that behavior, but I'm pretty sure it would be resisted.\n\n> Btw., there is some asymmetry here between check constraints and\n> not-null constraints: Check constraints are in the tuple descriptor,\n> but not-null constraints are not. Should that be unified? Or at\n> least explained?\n\nWell, the reason check constraints are in the descriptor, is that they\nare needed to verify a table. NOT NULL constraint as catalog objects\nare (at present) only useful from a DDL point of view; they won't change\nthe underlying implementation, which still depends on just the\nattnotnull markings.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 11 Jul 2023 16:17:20 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "In this version I mistakenly included an unwanted change, which broke\nthe test_ddl_deparse test. Here's v12 with that removed.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)",
"msg_date": "Tue, 11 Jul 2023 19:29:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "v13, because a conflict was just committed to alter_table.sql.\n\nHere I also call out the relcache.c change by making it a separate\ncommit. I'm likely to commit it that way, too. To recap: the change is\nto have a partitioned table's index list include the primary key, even\nwhen said primary key is marked invalid. This turns out to be necessary\nfor the currently proposed pg_dump strategy to work; if this is not in\nplace, attaching the per-partition PK indexes to the parent index fails\nbecause it sees that the columns are not marked NOT NULL.\n\nI don't see any obvious problem with this change; but if someone does\nand this turns out to be unacceptable, then the pg_dump stuff would need\nsome surgery.\n\nThere are no other changes from v12. One thing I should probably get\nto, is fixing the constraint name comparison code in pg_dump. Right now\nit's a bit dumb and will get in silly trouble with overlength\ntable/column names (nothing that would actually break, just that it will\nemit constraint names when there's no need to.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)",
"msg_date": "Wed, 12 Jul 2023 19:10:59 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, 12 Jul 2023 at 18:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> v13, because a conflict was just committed to alter_table.sql.\n>\n> Here I also call out the relcache.c change by making it a separate\n> commit. I'm likely to commit it that way, too. To recap: the change is\n> to have a partitioned table's index list include the primary key, even\n> when said primary key is marked invalid. This turns out to be necessary\n> for the currently proposed pg_dump strategy to work; if this is not in\n> place, attaching the per-partition PK indexes to the parent index fails\n> because it sees that the columns are not marked NOT NULL.\n>\n\nHmm, looking at that change, it looks a little ugly. I think someone\nreading that code in the future will have no idea why it's including\nsome invalid indexes, and not others.\n\n> There are no other changes from v12. One thing I should probably get\n> to, is fixing the constraint name comparison code in pg_dump. Right now\n> it's a bit dumb and will get in silly trouble with overlength\n> table/column names (nothing that would actually break, just that it will\n> emit constraint names when there's no need to.)\n>\n\nYeah, that seems a bit ugly. Presumably, also, if something like a\ncolumn rename happens, the constraint name will no longer match.\n\nI see that it's already been discussed, but I don't like the fact that\nthere is no way to get hold of the new constraint names in psql. I\nthink for the purposes of dropping named constraints, and also\npossible future stuff like NOT VALID / DEFERRABLE constraints, having\nsome way to get their names will be important.\n\nSomething else I noticed is that the result from \"ALTER TABLE ...\nALTER COLUMN ... DROP NOT NULL\" is no longer easily predictable -- if\nthere are multiple NOT NULL constraints on the column, it just drops\none (chosen at random?) and leaves the others. I think that it should\neither drop all the constraints, or throw an error. 
Either way, I\nwould expect that if DROP NOT NULL succeeds, the result is that the\ncolumn is nullable.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 13 Jul 2023 14:56:44 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-13, Dean Rasheed wrote:\n\n> Hmm, looking at that change, it looks a little ugly. I think someone\n> reading that code in the future will have no idea why it's including\n> some invalid indexes, and not others.\n\nTrue. I've added a longish comment in 0001 to explain why we do this.\n0002 has two bugfixes, described below.\n\n> On Wed, 12 Jul 2023 at 18:11, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > There are no other changes from v12. One thing I should probably get\n> > to, is fixing the constraint name comparison code in pg_dump. Right now\n> > it's a bit dumb and will get in silly trouble with overlength\n> > table/column names (nothing that would actually break, just that it will\n> > emit constraint names when there's no need to.)\n> \n> Yeah, that seems a bit ugly. Presumably, also, if something like a\n> column rename happens, the constraint name will no longer match.\n\nWell, we never rename constraints (except AFAIR for unique constraints\nwhen the unique index is renamed), and I'm not sure that it's a good\nidea to automatically rename a not null constraint when the column or\nthe table are renamed.\n\n(I think trying to make pg_dump be smarter about the constraint name\nwhen the table/column names are very long, would require exporting\nmakeObjectName() for frontend use. It looks an easy task, but I haven't\ndone it.)\n\n(Maybe it would be reasonable to rename the NOT NULL constraint when the\ntable or column are renamed, iff the original constraint name is the\ndefault one. Initially I lean towards not doing it, though.)\n\n\nAnyway, what does happen when the name doesn't match what pg_dump thinks\nis the default name (<table>_<column>_not_null) is that the constraint\nname is printed in the output. 
So if you have this table\n\n create table one (a int not null, b int not null);\nand rename column b to c, then pg_dump will print the table like this:\n\nCREATE TABLE public.one (\n a integer NOT NULL,\n c integer CONSTRAINT one_b_not_null NOT NULL\n);\n\nIn other words, the name is preserved across a dump. I think this is\nnot terrible.\n\n\n> I see that it's already been discussed, but I don't like the fact that\n> there is no way to get hold of the new constraint names in psql. I\n> think for the purposes of dropping named constraints, and also\n> possible future stuff like NOT VALID / DEFERRABLE constraints, having\n> some way to get their names will be important.\n\nYeah, so there are two proposals:\n\n1. Have \\d+ replace the \"not null\" literal in the \\d+ display with the\nconstraint name; if the column is not nullable because of the primary\nkey, it says \"(primary key)\" instead. There's a patch for this in the\nthread somewhere.\n\n2. Same, but use \\d++ for this purpose\n\nUsing ++ would be a novelty in psql, so I'm hesitant to make that an\nintegral part of the current proposal. However, to me (2) seems to most\ncomfortable way forward, because while you're right that people do need\nthe constraint name from time to time, this is very seldom the case, so\npolluting \\d+ might inconvenience people for not much gain. And people\ndidn't like having the constraint name in \\d+.\n\nDo you have an opinion on these ideas?\n\n> Something else I noticed is that the result from \"ALTER TABLE ...\n> ALTER COLUMN ... DROP NOT NULL\" is no longer easily predictable -- if\n> there are multiple NOT NULL constraints on the column, it just drops\n> one (chosen at random?) and leaves the others. I think that it should\n> either drop all the constraints, or throw an error. 
Either way, I\n> would expect that if DROP NOT NULL succeeds, the result is that the\n> column is nullable.\n\nHmm, there shouldn't be multiple NOT NULL constraints for the same\ncolumn; if there's one, a further SET NOT NULL should do nothing. At\nsome point the code was creating two constraints, but I realized that\ntrying to support multiple constraints caused other problems, and it\nseems to serve no purpose, so I removed it. Maybe there are ways to end\nup with multiple constraints, but at this stage I would say that those\nare bugs to be fixed, rather than something we want to keep.\n\n... oh, I did find a bug here -- indeed,\n\n ALTER TABLE tab ADD CONSTRAINT NOT NULL col;\n\nwas not checking whether a constraint already existed, and created a\nduplicate. In v14-0002 I made that throw an error instead. And having\ndone that, I discovered another bug: in test_ddl_deparse we CREATE TABLE\nLIKE from SERIAL PRIMARY KEY column, so that was creating two NOT NULL\nconstraints, one for the lack of INCLUDING INDEXES on the PK, and\nanother for the NOT NULL itself which comes implicit with SERIAL. So I\nfixed that too, by having transformTableLikeClause skip creating a NOT\nNULL for PK columns if we're going to create one for a NOT NULL\nconstraint.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El número de instalaciones de UNIX se ha elevado a 10,\ny se espera que este número aumente\" (UPM, 1972)",
"msg_date": "Thu, 20 Jul 2023 17:31:48 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Thu, 20 Jul 2023 at 16:31, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jul-13, Dean Rasheed wrote:\n>\n> > I see that it's already been discussed, but I don't like the fact that\n> > there is no way to get hold of the new constraint names in psql. I\n> > think for the purposes of dropping named constraints, and also\n> > possible future stuff like NOT VALID / DEFERRABLE constraints, having\n> > some way to get their names will be important.\n>\n> Yeah, so there are two proposals:\n>\n> 1. Have \\d+ replace the \"not null\" literal in the \\d+ display with the\n> constraint name; if the column is not nullable because of the primary\n> key, it says \"(primary key)\" instead. There's a patch for this in the\n> thread somewhere.\n>\n> 2. Same, but use \\d++ for this purpose\n>\n> Using ++ would be a novelty in psql, so I'm hesitant to make that an\n> integral part of the current proposal. However, to me (2) seems to most\n> comfortable way forward, because while you're right that people do need\n> the constraint name from time to time, this is very seldom the case, so\n> polluting \\d+ might inconvenience people for not much gain. And people\n> didn't like having the constraint name in \\d+.\n>\n> Do you have an opinion on these ideas?\n>\n\nHmm, I don't particularly like that approach, because I think it will\nbe difficult to cram any additional details into the table, and also I\ndon't know whether having multiple not null constraints for a\nparticular column can be entirely ruled out.\n\nI may well be in the minority here, but I think the best way is to\nlist them in a separate footer section, in the same way as CHECK\nconstraints, allowing other constraint properties to be included. 
So\nit might look something like:\n\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\n b | integer | | not null |\n c | integer | | not null |\n d | integer | | not null |\nIndexes:\n \"foo_pkey\" PRIMARY KEY, btree (a, b)\nCheck constraints:\n \"foo_a_check\" CHECK (a > 0)\n \"foo_b_check\" CHECK (b > 0) NO INHERIT NOT VALID\nNot null constraints:\n \"foo_c_not_null\" NOT NULL c\n \"foo_d_not_null\" NOT NULL d NO INHERIT\n\nAs for CHECK constraints, the contents of each constraint line would\nmatch the \"table_constraint\" SQL syntax needed to reconstruct the\nconstraint. Doing it this way allows for things like NOT VALID and\nDEFERRABLE to be added in the future.\n\nI think that's probably too verbose for a plain \"\\d\", but I think it\nwould be OK with \"\\d+\".\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 24 Jul 2023 09:42:57 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Thu, 20 Jul 2023 at 16:31, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jul-13, Dean Rasheed wrote:\n>\n> > Something else I noticed is that the result from \"ALTER TABLE ...\n> > ALTER COLUMN ... DROP NOT NULL\" is no longer easily predictable -- if\n> > there are multiple NOT NULL constraints on the column, it just drops\n> > one (chosen at random?) and leaves the others. I think that it should\n> > either drop all the constraints, or throw an error. Either way, I\n> > would expect that if DROP NOT NULL succeeds, the result is that the\n> > column is nullable.\n>\n> Hmm, there shouldn't be multiple NOT NULL constraints for the same\n> column; if there's one, a further SET NOT NULL should do nothing. At\n> some point the code was creating two constraints, but I realized that\n> trying to support multiple constraints caused other problems, and it\n> seems to serve no purpose, so I removed it. Maybe there are ways to end\n> up with multiple constraints, but at this stage I would say that those\n> are bugs to be fixed, rather than something we want to keep.\n>\n\nHmm, I'm not so sure. I think perhaps multiple NOT NULL constraints on\nthe same column should just be allowed, otherwise things might get\nconfusing. For example:\n\ncreate table p1 (a int not null check (a > 0));\ncreate table p2 (a int not null check (a > 0));\ncreate table foo () inherits (p1, p2);\n\ncauses foo to have 2 CHECK constraints, but only 1 NOT NULL constraint:\n\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\nCheck constraints:\n \"p1_a_check\" CHECK (a > 0)\n \"p2_a_check\" CHECK (a > 0)\nInherits: p1,\n p2\n\nselect conname from pg_constraint where conrelid = 'foo'::regclass;\n conname\n---------------\n p1_a_not_null\n p2_a_check\n p1_a_check\n(3 rows)\n\nwhich I find a little counter-intuitive / inconsistent. 
If I then drop\nthe p1 constraints:\n\nalter table p1 drop constraint p1_a_check;\nalter table p1 drop constraint p1_a_not_null;\n\nI end up with column \"a\" still being not null, and the \"p1_a_not_null\"\nconstraint still being there on foo, which seems even more\ncounter-intuitive, because I just dropped that constraint, and it\nreally should now be the \"p2_a_not_null\" constraint that makes \"a\" not\nnull:\n\n\\d foo\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | not null |\nCheck constraints:\n \"p2_a_check\" CHECK (a > 0)\nInherits: p1,\n p2\n\nselect conname from pg_constraint where conrelid = 'foo'::regclass;\n conname\n---------------\n p1_a_not_null\n p2_a_check\n(2 rows)\n\nI haven't thought through various other cases in any detail, but I\ncan't help feeling that it would be simpler and more logical /\nconsistent to just allow multiple NOT NULL constraints on a column,\nrather than trying to enforce a rule that only one is allowed. That\nway, I think it would be easier for the user to keep track of why a\ncolumn is not null.\n\nSo I'd say that ALTER TABLE ... ADD NOT NULL should always add a\nconstraint, even if there already is one. For example ALTER TABLE ...\nADD UNIQUE does nothing to prevent multiple unique constraints on the\nsame column(s). It seems pretty dumb, but maybe there is a reason to\nallow it, and it doesn't feel like we should be second-guessing what\nthe user wants.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 24 Jul 2023 11:06:06 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Something else I noticed: the error message from ALTER TABLE ... ADD\nCONSTRAINT in the case of a duplicate constraint name is not very\nfriendly:\n\nERROR: duplicate key value violates unique constraint\n\"pg_constraint_conrelid_contypid_conname_index\"\nDETAIL: Key (conrelid, contypid, conname)=(16540, 0, nn) already exists.\n\nTo match the error message for other constraint types, this should be:\n\nERROR: constraint \"nn\" for relation \"foo\" already exists\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 24 Jul 2023 11:10:51 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-24, Dean Rasheed wrote:\n\n> Hmm, I don't particularly like that approach, because I think it will\n> be difficult to cram any additional details into the table, and also I\n> don't know whether having multiple not null constraints for a\n> particular column can be entirely ruled out.\n> \n> I may well be in the minority here, but I think the best way is to\n> list them in a separate footer section, in the same way as CHECK\n> constraints, allowing other constraint properties to be included. So\n> it might look something like:\n\nThat's the first thing I proposed actually. I got one vote down from\nRobert Haas[1], but while the idea seems to have had support from Justin\nPryzby (in \\dt++) [2] and definitely did from Peter Eisentraut [3], I do\nnot like it too much myself, mainly because the partition list has a\nvery similar treatment and I find that one an annoyance.\n\n> and also I don't know whether having multiple not null constraints for\n> a particular column can be entirely ruled out.\n\nI had another look at the standard. In 11.26 (<drop table\nconstraint definition>) it says that \"If [the constraint being removed]\ncauses some column COL to be known not nullable and no other constraint\ncauses COL to be known not nullable, then the nullability characteristic\nof the column descriptor of COL is changed to possibly nullable\". Which\nsupports the idea that there might be multiple such constraints.\n(However, we could also read this as meaning that the PK could be one\nsuch constraint while NOT NULL is another one.)\n\nHowever, 11.16 (<drop column not null clause> as part of 11.12 <alter\ncolumn definition>), says that DROP NOT NULL causes the indication of\nthe column as NOT NULL to be removed. This, to me, says that if you do\nhave multiple such constraints, you'd better remove them all with that\ncommand. 
All in all, I lean towards allowing just one as best as we\ncan.\n\n[1] https://postgr.es/m/CA+Tgmobnoxt83y1QesBNVArhFm-fLwWkDUyiV84e+psayDwB7A@mail.gmail.com\n[2] https://postgr.es/m/20230301223214.GC4268%40telsasoft.com\n[3] https://postgr.es/m/1c4f3755-2d10-cae9-647f-91a9f006410e%40enterprisedb.com\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Mon, 24 Jul 2023 12:32:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-24, Dean Rasheed wrote:\n\n> Hmm, I'm not so sure. I think perhaps multiple NOT NULL constraints on\n> the same column should just be allowed, otherwise things might get\n> confusing. For example:\n> \n create table p1 (a int not null check (a > 0));\n create table p2 (a int not null check (a > 0));\n create table foo () inherits (p1, p2);\n\nHave a look at the conislocal / coninhcount values. These should\nreflect the fact that the constraint has multiple sources; and the\nconstraint does disappear if you drop it from both sources.\n\n> If I then drop the p1 constraints:\n> \n> alter table p1 drop constraint p1_a_check;\n> alter table p1 drop constraint p1_a_not_null;\n> \n> I end up with column \"a\" still being not null, and the \"p1_a_not_null\"\n> constraint still being there on foo, which seems even more\n> counter-intuitive, because I just dropped that constraint, and it\n> really should now be the \"p2_a_not_null\" constraint that makes \"a\" not\n> null:\n\nI can see that it might make sense to not inherit the constraint name in\nsome cases. Perhaps:\n\n1. never inherit a name. Each table has its own constraint name always\n2. only inherit if there's a single parent\n3. always inherit the name from the first parent (current implementation)\n\n> So I'd say that ALTER TABLE ... ADD NOT NULL should always add a\n> constraint, even if there already is one. For example ALTER TABLE ...\n> ADD UNIQUE does nothing to prevent multiple unique constraints on the\n> same column(s). 
It seems pretty dumb, but maybe there is a reason to\n> allow it, and it doesn't feel like we should be second-guessing what\n> the user wants.\n\nThat was my initial implementation but I changed it to allowing a single\nconstraint because of the way the standard describes SET NOT NULL;\nspecifically, 11.15 <set column not null clause> says that \"If the\ncolumn descriptor of C does not contain an indication that C is defined\nas NOT NULL, then:\" a constraint is added; otherwise (i.e., such an\nindication does exist), nothing happens.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La virtud es el justo medio entre dos defectos\" (Aristóteles)\n\n\n",
"msg_date": "Mon, 24 Jul 2023 12:55:26 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-24, Dean Rasheed wrote:\n\n> Something else I noticed: the error message from ALTER TABLE ... ADD\n> CONSTRAINT in the case of a duplicate constraint name is not very\n> friendly:\n> \n> ERROR: duplicate key value violates unique constraint\n> \"pg_constraint_conrelid_contypid_conname_index\"\n> DETAIL: Key (conrelid, contypid, conname)=(16540, 0, nn) already exists.\n> \n> To match the error message for other constraint types, this should be:\n> \n> ERROR: constraint \"nn\" for relation \"foo\" already exists\n\nHmm, how did you get this one? I can't reproduce it:\n\n55490 17devel 3166154=# create table foo (a int constraint nn not null);\nCREATE TABLE\n55490 17devel 3166154=# alter table foo add constraint nn not null a;\nERROR: column \"a\" of table \"foo\" is already NOT NULL\n\n55490 17devel 3166154=# drop table foo;\nDROP TABLE\n\n55490 17devel 3166154=# create table foo (a int);\nCREATE TABLE\nDuración: 1,472 ms\n55490 17devel 3166154=# alter table foo add constraint nn not null a, add constraint nn not null a;\nERROR: column \"a\" of table \"foo\" is already NOT NULL\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El número de instalaciones de UNIX se ha elevado a 10,\ny se espera que este número aumente\" (UPM, 1972)\n\n\n",
"msg_date": "Mon, 24 Jul 2023 18:42:28 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hello,\n\nWhile discussing the matter of multiple constraints with Vik Fearing, I\nnoticed that we were throwing an unnecessary error if you used\n\nCREATE TABLE foo (a int NOT NULL NOT NULL);\n\nThat would die with \"redundant NOT NULL declarations\", but current\nmaster doesn't do that; and we don't do it for UNIQUE UNIQUE either.\nSo I modified the patch to make it ignore the dupe and create a single\nconstraint. This (and rebasing to current master) are the only changes\nin v15.\n\nI have not changed the psql presentation, but I'll do as soon as we have\nrough consensus on what to do. To reiterate, the options are:\n\n1. Don't show the constraint names. This is what the current patch does\n\n2. Show the constraint name in \\d+ in the \"nullable\" column.\n I did this early on, to much booing.\n\n3. Show the constraint name in \\d++ (a new command) tabular output\n\n4. Show the constraint name in the footer of \\d+\n I also did this at some point; there are some +1s and some -1s.\n\n5. Show the constraint name in the footer of \\d++\n\n\nMany thanks, Dean, for the discussion so far.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Mon, 24 Jul 2023 19:17:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-24, Alvaro Herrera wrote:\n\n> That would die with \"redundant NOT NULL declarations\", but current\n> master doesn't do that; and we don't do it for UNIQUE UNIQUE either.\n> So I modified the patch to make it ignore the dupe and create a single\n> constraint. This (and rebasing to current master) are the only changes\n> in v15.\n\nDid I forget the attachments once more? Yup, I did. Here they are.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 24 Jul 2023 19:18:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 7/24/23 18:42, Alvaro Herrera wrote:\n> 55490 17devel 3166154=# create table foo (a int constraint nn not null);\n> CREATE TABLE\n> 55490 17devel 3166154=# alter table foo add constraint nn not null a;\n> ERROR: column \"a\" of table \"foo\" is already NOT NULL\n\nSurely this should be a WARNING or INFO? I see no reason to ERROR here.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 24 Jul 2023 19:19:48 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, 24 Jul 2023 at 17:42, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jul-24, Dean Rasheed wrote:\n>\n> > Something else I noticed: the error message from ALTER TABLE ... ADD\n> > CONSTRAINT in the case of a duplicate constraint name is not very\n> > friendly:\n> >\n> > ERROR: duplicate key value violates unique constraint\n> > \"pg_constraint_conrelid_contypid_conname_index\"\n> > DETAIL: Key (conrelid, contypid, conname)=(16540, 0, nn) already exists.\n> >\n\nTo reproduce this error, try to create 2 constraints with the same\nname on different columns:\n\ncreate table foo(a int, b int);\nalter table foo add constraint nn not null a;\nalter table foo add constraint nn not null b;\n\nI found another, separate issue:\n\ncreate table p1(a int not null);\ncreate table p2(a int);\ncreate table foo () inherits (p1,p2);\nalter table p2 add not null a;\n\nERROR: column \"a\" of table \"foo\" is already NOT NULL\n\nwhereas doing \"alter table p2 alter column a set not null\" works OK,\nmerging the constraints as expected.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 24 Jul 2023 20:05:39 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 6:33 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> That's the first thing I proposed actually. I got one vote down from\n> Robert Haas[1], but while the idea seems to have had support from Justin\n> Pryzby (in \\dt++) [2] and definitely did from Peter Eisentraut [3], I do\n> not like it too much myself, mainly because the partition list has a\n> very similar treatment and I find that one an annoyance.\n\nI think I might want to retract my earlier -1 vote. I mean, I agree\nwith former me that having the \\d+ output get a whole lot longer is\nnot super-appealing. But I also agree with Dean that having this\ninformation available somewhere is probably important, and I also\nagree with your point that inventing \\d++ for this isn't necessarily a\ngood idea. I fear that will just result in having to type an extra\nplus sign any time you want to see all of the table details, to make\nsure that psql knows that you really mean it. So, maybe showing it in\nthe \\d+ output as Dean proposes is the least of evils.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 24 Jul 2023 15:22:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-24, Robert Haas wrote:\n\n> I think I might want to retract my earlier -1 vote. I mean, I agree\n> with former me that having the \\d+ output get a whole lot longer is\n> not super-appealing. But I also agree with Dean that having this\n> information available somewhere is probably important, and I also\n> agree with your point that inventing \\d++ for this isn't necessarily a\n> good idea. I fear that will just result in having to type an extra\n> plus sign any time you want to see all of the table details, to make\n> sure that psql knows that you really mean it. So, maybe showing it in\n> the \\d+ output as Dean proposes is the least of evils.\n\nOkay then, I've made these show up in the footer of \\d+. This is in\npatch 0003 here. Please let me know what do you think of the regression\nchanges.\n\nOn 2023-Jul-24, Dean Rasheed wrote:\n\n> To reproduce this error, try to create 2 constraints with the same\n> name on different columns:\n> \n> create table foo(a int, b int);\n> alter table foo add constraint nn not null a;\n> alter table foo add constraint nn not null b;\n\nAh, of course. Fixed.\n\n> I found another, separate issue:\n> \n> create table p1(a int not null);\n> create table p2(a int);\n> create table foo () inherits (p1,p2);\n> alter table p2 add not null a;\n> \n> ERROR: column \"a\" of table \"foo\" is already NOT NULL\n> \n> whereas doing \"alter table p2 alter column a set not null\" works OK,\n> merging the constraints as expected.\n\nTrue. I made it a non-error. I initially changed the message to INFO,\nas suggested by Vik nearby; but after noticing that SET NOT NULL just\ndoes the same thing with no message, I removed this message altogether,\nfor consistence. 
Now that I did it, though, I wonder: if the user\nspecified a constraint name, and that name does not match the existing\nconstraint, maybe we should have an INFO or NOTICE or WARNING message\nthat the requested constraint name was not satisfied.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Tue, 25 Jul 2023 14:35:54 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 8:36 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> Okay then, I've made these show up in the footer of \\d+. This is in\n> patch 0003 here. Please let me know what do you think of the regression\n> changes.\n\nSeems OK.\n\nI'm not really thrilled with the idea of every not-null constraint\nhaving a name, to be honest. Of all the kinds of constraints that we\nhave in the system, NOT NULL constraints are probably the ones where\nnaming them is least likely to be interesting, because they don't\nreally have any interesting properties. A CHECK constraint has an\nexpression; a foreign key constraint has columns that it applies to on\neach side plus the identity of the table and opclass information, but\na NOT NULL constraint seems like it can never have any property other\nthan which column. So it sort of seems like a waste to name it. But if\nwe want it catalogued then we don't really have an option, so I\nsuppose we just have to accept a bit of clutter as the price of doing\nbusiness.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 11:39:13 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, 25 Jul 2023 at 11:39, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I'm not really thrilled with the idea of every not-null constraint\n> having a name, to be honest. Of all the kinds of constraints that we\n> have in the system, NOT NULL constraints are probably the ones where\n> naming them is least likely to be interesting, because they don't\n> really have any interesting properties. A CHECK constraint has an\n> expression; a foreign key constraint has columns that it applies to on\n> each side plus the identity of the table and opclass information, but\n> a NOT NULL constraint seems like it can never have any property other\n> than which column. So it sort of seems like a waste to name it. But if\n> we want it catalogued then we don't really have an option, so I\n> suppose we just have to accept a bit of clutter as the price of doing\n> business.\n>\n\nI agree. I definitely do *not* want a bunch of NOT NULL constraint names\ncluttering up displays. Can we legislate that all NOT NULL implementing\nconstraints are named by mashing together the table name, column name, and\nsomething to identify it as a NOT NULL constraint? Maybe even something\nlike pg_not_null_[relname]_[attname] (with some escaping), using the pg_\nprefix to make the name reserved similar to schemas and tables? And then\ndon't show such constraints in \\d, not even \\d+ - just indicate it in\nthe Nullable column of the column listing as done now. Show a NOT NULL\nconstraint if there is something odd about it - for example, if it gets\nrenamed, or not renamed when the table is renamed.\n\nSorry for the noise if this has already been decided otherwise.\n\nOn Tue, 25 Jul 2023 at 11:39, Robert Haas <robertmhaas@gmail.com> wrote:\nI'm not really thrilled with the idea of every not-null constraint\nhaving a name, to be honest. 
Of all the kinds of constraints that we\nhave in the system, NOT NULL constraints are probably the ones where\nnaming them is least likely to be interesting, because they don't\nreally have any interesting properties. A CHECK constraint has an\nexpression; a foreign key constraint has columns that it applies to on\neach side plus the identity of the table and opclass information, but\na NOT NULL constraint seems like it can never have any property other\nthan which column. So it sort of seems like a waste to name it. But if\nwe want it catalogued then we don't really have an option, so I\nsuppose we just have to accept a bit of clutter as the price of doing\nbusiness.I agree. I definitely do *not* want a bunch of NOT NULL constraint names cluttering up displays. Can we legislate that all NOT NULL implementing constraints are named by mashing together the table name, column name, and something to identify it as a NOT NULL constraint? Maybe even something like pg_not_null_[relname]_[attname] (with some escaping), using the pg_ prefix to make the name reserved similar to schemas and tables? And then don't show such constraints in \\d, not even \\d+ - just indicate it in the Nullable column of the column listing as done now. Show a NOT NULL constraint if there is something odd about it - for example, if it gets renamed, or not renamed when the table is renamed.Sorry for the noise if this has already been decided otherwise.",
"msg_date": "Tue, 25 Jul 2023 11:54:39 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-25, Isaac Morland wrote:\n\n> I agree. I definitely do *not* want a bunch of NOT NULL constraint names\n> cluttering up displays. Can we legislate that all NOT NULL implementing\n> constraints are named by mashing together the table name, column name, and\n> something to identify it as a NOT NULL constraint?\n\nAll constraints are named like that already, and NOT NULL constraints\njust inherited the same idea. The names are <table>_<column>_not_null\nfor NOT NULL constraints. pg_dump goes great lengths to avoid printing\nconstraint names when they have this pattern.\n\nI do not want these constraint names cluttering the output either.\nThat's why I propose moving them to a new \\d++ command, where they will\nonly bother you if you absolutely need them. But so far I have only one\nvote supporting that idea.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 25 Jul 2023 18:24:38 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, 25 Jul 2023 at 12:24, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2023-Jul-25, Isaac Morland wrote:\n>\n> > I agree. I definitely do *not* want a bunch of NOT NULL constraint names\n> > cluttering up displays. Can we legislate that all NOT NULL implementing\n> > constraints are named by mashing together the table name, column name,\n> and\n> > something to identify it as a NOT NULL constraint?\n>\n> All constraints are named like that already, and NOT NULL constraints\n> just inherited the same idea. The names are <table>_<column>_not_null\n> for NOT NULL constraints. pg_dump goes great lengths to avoid printing\n> constraint names when they have this pattern.\n>\n\nOK, this is helpful. Can \\d do the same thing? I use a lot of NOT NULL\nconstraints and I very seriously do not want \\d (including \\d+) to have an\nextra line for almost every column. It's just noise, and while my screen is\nlarge, it's still not infinite.\n\nI do not want these constraint names cluttering the output either.\n> That's why I propose moving them to a new \\d++ command, where they will\n> only bother you if you absolutely need them. But so far I have only one\n> vote supporting that idea.\n\n\nMy suggestion is for \\d+ to show NOT NULL constraints only if there is\nsomething weird going on (wrong name, duplicate constraints, …). If there\nis nothing weird about the constraint then explicitly listing it provides\nabsolutely no information that is not given by \"not null\" in the \"Nullable\"\ncolumn. Easier said than done I suppose. I'm just worried about my \\d+\ndisplays becoming less useful.\n\nOn Tue, 25 Jul 2023 at 12:24, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2023-Jul-25, Isaac Morland wrote:\n\n> I agree. I definitely do *not* want a bunch of NOT NULL constraint names\n> cluttering up displays. 
Can we legislate that all NOT NULL implementing\n> constraints are named by mashing together the table name, column name, and\n> something to identify it as a NOT NULL constraint?\n\nAll constraints are named like that already, and NOT NULL constraints\njust inherited the same idea. The names are <table>_<column>_not_null\nfor NOT NULL constraints. pg_dump goes great lengths to avoid printing\nconstraint names when they have this pattern.OK, this is helpful. Can \\d do the same thing? I use a lot of NOT NULL constraints and I very seriously do not want \\d (including \\d+) to have an extra line for almost every column. It's just noise, and while my screen is large, it's still not infinite.\nI do not want these constraint names cluttering the output either.\nThat's why I propose moving them to a new \\d++ command, where they will\nonly bother you if you absolutely need them. But so far I have only one\nvote supporting that idea.My suggestion is for \\d+ to show NOT NULL constraints only if there is something weird going on (wrong name, duplicate constraints, …). If there is nothing weird about the constraint then explicitly listing it provides absolutely no information that is not given by \"not null\" in the \"Nullable\" column. Easier said than done I suppose. I'm just worried about my \\d+ displays becoming less useful.",
"msg_date": "Tue, 25 Jul 2023 13:32:56 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 1:33 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n> My suggestion is for \\d+ to show NOT NULL constraints only if there is something weird going on (wrong name, duplicate constraints, …). If there is nothing weird about the constraint then explicitly listing it provides absolutely no information that is not given by \"not null\" in the \"Nullable\" column. Easier said than done I suppose. I'm just worried about my \\d+ displays becoming less useful.\n\nI mean, the problem is that if you want to ALTER TABLE .. DROP\nCONSTRAINT, you need to know what the valid arguments to that command\nare, and the names of these constraints will be just as valid as the\nnames of any other constraints.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 14:59:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, 25 Jul 2023 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Jul 25, 2023 at 1:33 PM Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n> > My suggestion is for \\d+ to show NOT NULL constraints only if there is\n> something weird going on (wrong name, duplicate constraints, …). If there\n> is nothing weird about the constraint then explicitly listing it provides\n> absolutely no information that is not given by \"not null\" in the \"Nullable\"\n> column. Easier said than done I suppose. I'm just worried about my \\d+\n> displays becoming less useful.\n>\n> I mean, the problem is that if you want to ALTER TABLE .. DROP\n> CONSTRAINT, you need to know what the valid arguments to that command\n> are, and the names of these constraints will be just as valid as the\n> names of any other constraints.\n>\n\nCan't I just ALTER TABLE … DROP NOT NULL still?\n\nOK, I suppose ALTER CONSTRAINT to change the deferrable status and validity\n(that is why we're doing this, right?) needs the constraint name. But the\nconstraint name is formulaic by default, and my proposal is to suppress it\nonly when it matches the formula, so you could just construct the\nconstraint name using the documented formula if it's not explicitly listed.\n\nI really don’t see it as a good use of space to add n lines to the \\d+\ndisplay just to confirm that the \"not null\" designations in the \"Nullable\"\ncolumn are implemented by named constraints with the expected names.\n\nOn Tue, 25 Jul 2023 at 14:59, Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Jul 25, 2023 at 1:33 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n> My suggestion is for \\d+ to show NOT NULL constraints only if there is something weird going on (wrong name, duplicate constraints, …). If there is nothing weird about the constraint then explicitly listing it provides absolutely no information that is not given by \"not null\" in the \"Nullable\" column. Easier said than done I suppose. 
I'm just worried about my \\d+ displays becoming less useful.\n\nI mean, the problem is that if you want to ALTER TABLE .. DROP\nCONSTRAINT, you need to know what the valid arguments to that command\nare, and the names of these constraints will be just as valid as the\nnames of any other constraints.Can't I just ALTER TABLE … DROP NOT NULL still?OK, I suppose ALTER CONSTRAINT to change the deferrable status and validity (that is why we're doing this, right?) needs the constraint name. But the constraint name is formulaic by default, and my proposal is to suppress it only when it matches the formula, so you could just construct the constraint name using the documented formula if it's not explicitly listed.I really don’t see it as a good use of space to add n lines to the \\d+ display just to confirm that the \"not null\" designations in the \"Nullable\" column are implemented by named constraints with the expected names.",
"msg_date": "Tue, 25 Jul 2023 15:06:52 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 3:07 PM Isaac Morland <isaac.morland@gmail.com> wrote:\n> OK, I suppose ALTER CONSTRAINT to change the deferrable status and validity (that is why we're doing this, right?) needs the constraint name. But the constraint name is formulaic by default, and my proposal is to suppress it only when it matches the formula, so you could just construct the constraint name using the documented formula if it's not explicitly listed.\n>\n> I really don’t see it as a good use of space to add n lines to the \\d+ display just to confirm that the \"not null\" designations in the \"Nullable\" column are implemented by named constraints with the expected names.\n\nYeah, I mean, I get that. That was my initial concern, too. But I also\nthink if there's some complicated rule that determines what gets\ndisplayed and what doesn't, nobody's going to remember it, and then\nwhen you don't see something, you're never going to be sure exactly\nwhat's going on. Displaying everything is going to be clunky\nespecially if, like me, you tend to be careful to mark columns NOT\nNULL when they are, but when something goes wrong, the last thing you\nwant to do is run a \\d command and have it show you incomplete\ninformation.\n\nI can't count the number of times that somebody's shown me the output\nof a query against pg_locks or pg_stat_activity that had been filtered\nto remove irrelevant information and it turned out that the hidden\ninformation was not so irrelevant as the person who wrote the query\nthought. It happens all the time. I don't want to create the same kind\nof situation here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 16:05:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Tue, 25 Jul 2023 at 13:36, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Okay then, I've made these show up in the footer of \\d+. This is in\n> patch 0003 here. Please let me know what do you think of the regression\n> changes.\n>\n\nThe new \\d+ output certainly makes testing and reviewing easier,\nthough I do understand people's concerns that this may make the output\nsignificantly longer in many real-world cases. I don't think it would\nbe a good idea to filter the list in any way though, because I think\nthat will only lead to confusion. I think it should be all-or-nothing,\nthough I'm not necessarily opposed to using something like \\d++ to\nenable it, if that turns out to be the least-bad option.\n\nGoing back to this example:\n\ndrop table if exists p1, p2, foo;\ncreate table p1(a int not null check (a > 0));\ncreate table p2(a int not null check (a > 0));\ncreate table foo () inherits (p1,p2);\n\\d+ foo\n\n Table \"public.foo\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | not null | | plain |\n | |\nCheck constraints:\n \"p1_a_check\" CHECK (a > 0)\n \"p2_a_check\" CHECK (a > 0)\nNot null constraints:\n \"p1_a_not_null\" NOT NULL \"a\" (inherited)\nInherits: p1,\n p2\nAccess method: heap\n\nI remain of the opinion that that should create 2 NOT NULL constraints\non foo, for consistency with CHECK constraints, and the misleading\nname that results if p1_a_not_null is dropped from p1. 
That way, the\nnames of inherited NOT NULL constraints could be kept in sync, as they\nare for other constraint types, making it easier to keep track of\nwhere they come from, and it wouldn't be necessary to treat them\ndifferently (e.g., matching by column number, when dropping NOT NULL\nconstraints).\n\nDoing a little more testing, I found some other issues.\n\n\nGiven the following sequence:\n\ndrop table if exists p,c;\ncreate table p(a int primary key);\ncreate table c() inherits (p);\nalter table p drop constraint p_pkey;\n\np.a ends up being nullable, where previously it would have been left\nnon-nullable. That change makes sense, and is presumably one of the\nbenefits of tying the nullability of columns to pg_constraint entries.\nHowever, c.a remains non-nullable, with a NOT NULL constraint that\nclaims to be inherited:\n\n\\d+ c\n Table \"public.c\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | not null | | plain |\n | |\nNot null constraints:\n \"c_a_not_null\" NOT NULL \"a\" (inherited)\nInherits: p\nAccess method: heap\n\nThat's a problem, because now the NOT NULL constraint on c cannot be\ndropped (attempting to drop it on c errors out because it thinks it's\ninherited, but it can't be dropped via p, because p.a is already\nnullable).\n\nI wonder if NOT NULL constraints created as a result of inherited PKs\nshould have names based on the PK name (e.g.,\n<PK_name>_<col_name>_not_null), to make it more obvious where they\ncame from. 
That would be more consistent with the way NOT NULL\nconstraint names are inherited.\n\n\nGiven the following sequence:\n\ndrop table if exists p,c;\ncreate table p(a int);\ncreate table c() inherits (p);\nalter table p add primary key (a);\n\nc.a ends up non-nullable, but there is no pg_constraint entry\nenforcing the constraint:\n\n\\d+ c\n Table \"public.c\"\n Column | Type | Collation | Nullable | Default | Storage |\nCompression | Stats target | Description\n--------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n a | integer | | not null | | plain |\n | |\nInherits: p\nAccess method: heap\n\n\nGiven a database containing these 2 tables:\n\ncreate table p(a int primary key);\ncreate table c() inherits (p);\n\ndoing a pg_dump and restore fails to restore the NOT NULL constraint\non c, because all constraints created by the dump are local to p.\n\n\nThat's it for now. I'll try to do more testing later.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 26 Jul 2023 14:09:02 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Thanks for spending so much time with this patch -- really appreciated.\n\nOn 2023-Jul-26, Dean Rasheed wrote:\n\n> On Tue, 25 Jul 2023 at 13:36, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Okay then, I've made these show up in the footer of \\d+. This is in\n> > patch 0003 here. Please let me know what do you think of the regression\n> > changes.\n> \n> The new \\d+ output certainly makes testing and reviewing easier,\n> though I do understand people's concerns that this may make the output\n> significantly longer in many real-world cases. I don't think it would\n> be a good idea to filter the list in any way though, because I think\n> that will only lead to confusion. I think it should be all-or-nothing,\n> though I'm not necessarily opposed to using something like \\d++ to\n> enable it, if that turns out to be the least-bad option.\n\nYeah, at this point I'm inclined to get the \\d+ version committed\nimmediately after the main patch, and we can tweak the psql UI after the\nfact -- for instance so that they are only shown in \\d++, or some other\nidea we may come across.\n\n> Going back to this example:\n> \n> drop table if exists p1, p2, foo;\n> create table p1(a int not null check (a > 0));\n> create table p2(a int not null check (a > 0));\n> create table foo () inherits (p1,p2);\n\n> I remain of the opinion that that should create 2 NOT NULL constraints\n> on foo, for consistency with CHECK constraints, and the misleading\n> name that results if p1_a_not_null is dropped from p1. That way, the\n> names of inherited NOT NULL constraints could be kept in sync, as they\n> are for other constraint types, making it easier to keep track of\n> where they come from, and it wouldn't be necessary to treat them\n> differently (e.g., matching by column number, when dropping NOT NULL\n> constraints).\n\nI think having two constraints is more problematic, UI-wise. 
Previous\nversions of this patchset did it that way, and it's not great: for\nexample ALTER TABLE ALTER COLUMN DROP NOT NULL fails and tells you to\nchoose which exact constraint you want to drop and use DROP CONSTRAINT\ninstead. And when searching for the not-null constraints for a column,\nthe code had to consider the case of there being multiple ones, which\nled to strange contortions. Allowing a single one is simpler and covers\nall important cases well.\n\nAnyway, you still can't drop the doubly-inherited constraint directly,\nbecause it'll complain that it is an inherited constraint. So you have\nto deinherit first and only then can you drop the constraint.\n\nNow, one possible improvement here would be to ignore the parent\nconstraint's name, and have 'foo' recompute its own constraint name from\nscratch, inheriting the name only if one of the parents had a\nmanually-specified constraint name (and we would choose the first one,\nif there's more than one). I think complicating things more than that\nis unnecessary -- particularly considering that legacy inheritance is,\nwell, legacy, and I doubt people are relying too much on it.\n\n\n> Given the following sequence:\n> \n> drop table if exists p,c;\n> create table p(a int primary key);\n> create table c() inherits (p);\n> alter table p drop constraint p_pkey;\n> \n> p.a ends up being nullable, where previously it would have been left\n> non-nullable. 
That change makes sense, and is presumably one of the\n> benefits of tying the nullability of columns to pg_constraint entries.\n\nRight.\n\n> However, c.a remains non-nullable, with a NOT NULL constraint that\n> claims to be inherited:\n> \n> \\d+ c\n> Table \"public.c\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | not null | | plain |\n> | |\n> Not null constraints:\n> \"c_a_not_null\" NOT NULL \"a\" (inherited)\n> Inherits: p\n> Access method: heap\n> \n> That's a problem, because now the NOT NULL constraint on c cannot be\n> dropped (attempting to drop it on c errors out because it thinks it's\n> inherited, but it can't be dropped via p, because p.a is already\n> nullable).\n\nOh, I think the bug here is just that this constraint should not claim\nto be inherited, but standalone. So you can drop it afterwards; but if\nyou drop it and end up with NULL values in your PK-labelled column in\nthe parent table, that's on you.\n\n> I wonder if NOT NULL constraints created as a result of inherited PKs\n> should have names based on the PK name (e.g.,\n> <PK_name>_<col_name>_not_null), to make it more obvious where they\n> came from. That would be more consistent with the way NOT NULL\n> constraint names are inherited.\n\nHmm, interesting idea. I'll play with it. 
(It may quickly lead to\nconstraint names that are too long, though.)\n\n> Given the following sequence:\n> \n> drop table if exists p,c;\n> create table p(a int);\n> create table c() inherits (p);\n> alter table p add primary key (a);\n> \n> c.a ends up non-nullable, but there is no pg_constraint entry\n> enforcing the constraint:\n> \n> \\d+ c\n> Table \"public.c\"\n> Column | Type | Collation | Nullable | Default | Storage |\n> Compression | Stats target | Description\n> --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> a | integer | | not null | | plain |\n> | |\n> Inherits: p\n> Access method: heap\n\nOh, this one's a bad omission. I'll fix it.\n\n\n> Given a database containing these 2 tables:\n> \n> create table p(a int primary key);\n> create table c() inherits (p);\n> \n> doing a pg_dump and restore fails to restore the NOT NULL constraint\n> on c, because all constraints created by the dump are local to p.\n\nStrange. I'll see about fixing this one too.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n\n",
"msg_date": "Wed, 26 Jul 2023 15:49:39 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-26, Alvaro Herrera wrote:\n\n> On 2023-Jul-26, Dean Rasheed wrote:\n> \n> > The new \\d+ output certainly makes testing and reviewing easier,\n> > though I do understand people's concerns that this may make the output\n> > significantly longer in many real-world cases.\n> \n> Yeah, at this point I'm inclined to get the \\d+ version committed\n> immediately after the main patch, and we can tweak the psql UI after the\n> fact -- for instance so that they are only shown in \\d++, or some other\n> idea we may come across.\n\n(For example, maybe we could add \\dtc [PATTERN] or some such, that lists\nall the constraints of all kinds in tables matching PATTERN.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"If you want to have good ideas, you must have many ideas. Most of them\nwill be wrong, and what you have to learn is which ones to throw away.\"\n (Linus Pauling)\n\n\n",
"msg_date": "Wed, 26 Jul 2023 16:06:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "> > Given the following sequence:\n> > \n> > drop table if exists p,c;\n> > create table p(a int primary key);\n> > create table c() inherits (p);\n> > alter table p drop constraint p_pkey;\n\n> > However, c.a remains non-nullable, with a NOT NULL constraint that\n> > claims to be inherited:\n> > \n> > \\d+ c\n> > Table \"public.c\"\n> > Column | Type | Collation | Nullable | Default | Storage |\n> > Compression | Stats target | Description\n> > --------+---------+-----------+----------+---------+---------+-------------+--------------+-------------\n> > a | integer | | not null | | plain |\n> > | |\n> > Not null constraints:\n> > \"c_a_not_null\" NOT NULL \"a\" (inherited)\n> > Inherits: p\n> > Access method: heap\n> > \n> > That's a problem, because now the NOT NULL constraint on c cannot be\n> > dropped (attempting to drop it on c errors out because it thinks it's\n> > inherited, but it can't be dropped via p, because p.a is already\n> > nullable).\n\nSo I implemented a fix for this (namely: fix the inhcount to be 0\ninitially), and it works well, but it does cause a definitional problem:\nany time we create a child table that inherits from another table that\nhas a primary key, all the columns in the child table will get normal,\nvisible, droppable NOT NULL constraints. Thus, pg_dump for example will\noutput that constraint exactly as if the user had specified it in the\nchild's CREATE TABLE command. By itself this doesn't bother me, though\nI admit it seems a little odd.\n\nWhen you restore such a setup from pg_dump, things work perfectly -- I\nmean, you don't get a second constraint. But if you do drop the\nconstraint, then it will be reinstated by the next pg_dump as if you\nhadn't dropped it, by way of it springing to life from the PK.\n\nTo avoid that, one option would be to make this NN constraint\nundroppable ... but I don't see how. 
One option might be to add a\npg_depend row that links the NOT NULL constraint to its PK constraint.\nBut this will be a strange case that occurs nowhere else, since other\nNOT NULL constraint don't have such pg_depend rows. Also, I won't know\nhow pg_dump likes this until I implement it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 28 Jul 2023 12:47:44 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 24.07.23 12:32, Alvaro Herrera wrote:\n> However, 11.16 (<drop column not null clause> as part of 11.12 <alter\n> column definition>), says that DROP NOT NULL causes the indication of\n> the column as NOT NULL to be removed. This, to me, says that if you do\n> have multiple such constraints, you'd better remove them all with that\n> command. All in all, I lean towards allowing just one as best as we\n> can.\n\nAnother clue is in 11.15 <set column not null clause>, which says\n\n 1) Let C be the column identified by the <column name> CN in the\n containing <alter column definition>. If the column descriptor of C\n does not contain an indication that C is defined as NOT NULL, then:\n\n [do things]\n\nOtherwise it does nothing. So there can only be one such constraint per \ntable.\n\n\n\n",
"msg_date": "Wed, 2 Aug 2023 10:29:39 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Jul-28, Alvaro Herrera wrote:\n\n> To avoid that, one option would be to make this NN constraint\n> undroppable ... but I don't see how. One option might be to add a\n> pg_depend row that links the NOT NULL constraint to its PK constraint.\n> But this will be a strange case that occurs nowhere else, since other\n> NOT NULL constraint don't have such pg_depend rows. Also, I won't know\n> how pg_dump likes this until I implement it.\n\nI've been completing the implementation for this. It seems to work\nreasonably okay; pg_dump requires somewhat strange contortions, but they\nare similar to what we do in flagInhTables already, so I don't feel too\nbad about that.\n\nWhat *is* odd and bothersome is that it also causes a problem dropping\nthe child table. For example,\n\nCREATE TABLE parent (a int primary key);\nCREATE TABLE child () INHERITS (parent);\n\\d+ child\n\n Tabla «public.child»\n Columna │ Tipo │ Ordenamiento │ Nulable │ Por omisión │ Almacenamiento │ Compresión │ Estadísticas │ Descripción \n─────────┼─────────┼──────────────┼──────────┼─────────────┼────────────────┼────────────┼──────────────┼─────────────\n a │ integer │ │ not null │ │ plain │ │ │ \nNot null constraints:\n \"child_a_not_null\" NOT NULL \"a\"\nHereda: parent\nMétodo de acceso: heap\n\nThis is the behavior that I think we wanted to prevent drop of the child\nconstraint, and it seems okay to me:\n\n=# alter table child drop constraint child_a_not_null;\nERROR: cannot drop constraint child_a_not_null on table child because constraint parent_pkey on table parent requires it\nSUGERENCIA: You can drop constraint parent_pkey on table parent instead.\n\nBut the problem is this:\n\n=# drop table child;\nERROR: cannot drop table child because other objects depend on it\nDETALLE: constraint parent_pkey on table parent depends on table child\nSUGERENCIA: Use DROP ... 
CASCADE to drop the dependent objects too.\n\n\nTo be clear, what my patch is doing is add one new dependency:\n\n dep │ ref │ deptype \n────────────────────────────────────────────┼────────────────────────────────────────┼─────────\n type foo │ table foo │ i\n table foo │ schema public │ n\n constraint foo_pkey on table foo │ column a of table foo │ a\n type bar │ table bar │ i\n table bar │ schema public │ n\n table bar │ table foo │ n\n constraint bar_a_not_null on table bar │ column a of table bar │ a\n constraint child_a_not_null on table child │ column a of table child │ a\n constraint child_a_not_null on table child │ constraint parent_pkey on table parent │ i\n\nthe last row here is what is new. I'm not sure what's the right fix.\nMaybe I need to invert the direction of that dependency.\n\n\nEven with that fixed, I'd still need to write more code so that ALTER\nTABLE INHERIT adds the link (I already patched the DROP INHERIT part).\nNot sure what else might I be missing.\n\nSeparately, I also noticed that some code that's currently\ndropconstraint_internal needs to be moved to DropConstraintById, because\nif the PK is dropped for some other reason than ALTER TABLE DROP\nCONSTRAINT, some ancillary actions are not taken.\n\nSigh.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n Are you not unsure you want to delete Firefox?\n [Not unsure] [Not not unsure] [Cancel]\n http://smylers.hates-software.com/2008/01/03/566e45b2.html\n\n\n",
"msg_date": "Fri, 4 Aug 2023 20:10:42 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Fri, 4 Aug 2023 at 19:10, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jul-28, Alvaro Herrera wrote:\n>\n> > To avoid that, one option would be to make this NN constraint\n> > undroppable ... but I don't see how. One option might be to add a\n> > pg_depend row that links the NOT NULL constraint to its PK constraint.\n> > But this will be a strange case that occurs nowhere else, since other\n> > NOT NULL constraint don't have such pg_depend rows. Also, I won't know\n> > how pg_dump likes this until I implement it.\n>\n> I've been completing the implementation for this. It seems to work\n> reasonably okay; pg_dump requires somewhat strange contortions, but they\n> are similar to what we do in flagInhTables already, so I don't feel too\n> bad about that.\n>\n> What *is* odd and bothersome is that it also causes a problem dropping\n> the child table.\n>\n\nHmm, thinking about this some more, I think this might be the wrong\napproach to fixing the original problem. I think it was probably OK\nthat the NOT NULL constraint on the child was marked as inherited, but\nI think what should have happened is that dropping the PRIMARY KEY\nconstraint on the parent should have caused the NOT NULL constraint on\nthe child to have been deleted (in the same way as it would have been,\nif it had been a NOT NULL constraint on the parent).\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 5 Aug 2023 12:35:02 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-05, Dean Rasheed wrote:\n\n> Hmm, thinking about this some more, I think this might be the wrong\n> approach to fixing the original problem. I think it was probably OK\n> that the NOT NULL constraint on the child was marked as inherited, but\n> I think what should have happened is that dropping the PRIMARY KEY\n> constraint on the parent should have caused the NOT NULL constraint on\n> the child to have been deleted (in the same way as it would have been,\n> if it had been a NOT NULL constraint on the parent).\n\nYeah, something like that. However, if the child had a NOT NULL\nconstraint of its own, then it should not be deleted when the\nPK-on-parent is, but merely marked as no longer inherited. (This is\nalso what happens with a straight NOT NULL constraint.) I think what\nthis means is that at some point during the deletion of the PK we must\nremove the dependency link rather than letting it be followed. I'm not\nyet sure how to do this.\n\nAnyway, I was at the same time fixing the other problem you reported\nwith inheritance (namely, adding a PK ends up with the child column\nbeing marked NOT NULL but no corresponding constraint).\n\nAt some point I wondered if the easy way out wouldn't be to give up on\nthe idea that creating a PK causes the child columns to be marked\nnot-nullable. However, IIRC I decided against that because it breaks\nrestoring of old dumps, so it wouldn't be acceptable.\n\nTo make matters worse: pg_dump creates the PK as \n\n ALTER TABLE ONLY parent ADD PRIMARY KEY ( ... )\n\nnote the ONLY there. It seems I'm forced to cause the PK to affect\nchildren even though ONLY is given. This is undesirable but I don't see\na way out of that.\n\nIt is all a bit of a rat's nest.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Nunca se desea ardientemente lo que solo se desea por razón\" (F. Alexandre)\n\n\n",
"msg_date": "Sat, 5 Aug 2023 19:37:50 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Sat, 5 Aug 2023 at 18:37, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Yeah, something like that. However, if the child had a NOT NULL\n> constraint of its own, then it should not be deleted when the\n> PK-on-parent is, but merely marked as no longer inherited. (This is\n> also what happens with a straight NOT NULL constraint.) I think what\n> this means is that at some point during the deletion of the PK we must\n> remove the dependency link rather than letting it be followed. I'm not\n> yet sure how to do this.\n>\n\nI'm not sure that adding that new dependency was the right thing to\ndo. I think perhaps this could just be made to work using conislocal\nand coninhcount to track whether the child constraint needs to be\ndeleted, or just updated.\n\n> Anyway, I was at the same time fixing the other problem you reported\n> with inheritance (namely, adding a PK ends up with the child column\n> being marked NOT NULL but no corresponding constraint).\n>\n> At some point I wondered if the easy way out wouldn't be to give up on\n> the idea that creating a PK causes the child columns to be marked\n> not-nullable. However, IIRC I decided against that because it breaks\n> restoring of old dumps, so it wouldn't be acceptable.\n>\n> To make matters worse: pg_dump creates the PK as\n>\n> ALTER TABLE ONLY parent ADD PRIMARY KEY ( ... )\n>\n> note the ONLY there. It seems I'm forced to cause the PK to affect\n> children even though ONLY is given. This is undesirable but I don't see\n> a way out of that.\n>\n> It is all a bit of a rat's nest.\n>\n\nI wonder if that could be made to work in the same way as inherited\nCHECK constraints -- dump the child's inherited NOT NULL constraints,\nand then manually update conislocal in pg_constraint.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 5 Aug 2023 20:50:01 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 05.08.23 21:50, Dean Rasheed wrote:\n>> Anyway, I was at the same time fixing the other problem you reported\n>> with inheritance (namely, adding a PK ends up with the child column\n>> being marked NOT NULL but no corresponding constraint).\n>>\n>> At some point I wondered if the easy way out wouldn't be to give up on\n>> the idea that creating a PK causes the child columns to be marked\n>> not-nullable. However, IIRC I decided against that because it breaks\n>> restoring of old dumps, so it wouldn't be acceptable.\n>>\n>> To make matters worse: pg_dump creates the PK as\n>>\n>> ALTER TABLE ONLY parent ADD PRIMARY KEY ( ... )\n>>\n>> note the ONLY there. It seems I'm forced to cause the PK to affect\n>> children even though ONLY is given. This is undesirable but I don't see\n>> a way out of that.\n>>\n>> It is all a bit of a rat's nest.\n>>\n> \n> I wonder if that could be made to work in the same way as inherited\n> CHECK constraints -- dump the child's inherited NOT NULL constraints,\n> and then manually update conislocal in pg_constraint.\n\nI wonder whether the root of these problems is that we mix together \nprimary key constraints and not-null constraints. I understand that \nright now, with the proposed patch, when a table inherits from a parent \ntable with a primary key constraint, we generate not-null constraints on \nthe child, in order to enforce the not-nullness. What if we did \nsomething like this instead: In the child table, we don't generate a \nnot-null constraint, but instead a primary key constraint entry. But we \nmark the primary key constraint somehow to say, this is just for the \npurpose of inheritance, don't enforce uniqueness, but enforce \nnot-nullness. Would that work?\n\n\n\n",
"msg_date": "Wed, 9 Aug 2023 09:55:09 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-09, Peter Eisentraut wrote:\n\n> I wonder whether the root of these problems is that we mix together primary\n> key constraints and not-null constraints. I understand that right now, with\n> the proposed patch, when a table inherits from a parent table with a primary\n> key constraint, we generate not-null constraints on the child, in order to\n> enforce the not-nullness. What if we did something like this instead: In\n> the child table, we don't generate a not-null constraint, but instead a\n> primary key constraint entry. But we mark the primary key constraint\n> somehow to say, this is just for the purpose of inheritance, don't enforce\n> uniqueness, but enforce not-nullness. Would that work?\n\nHmm. One table can have many parents, and many of them can have primary\nkeys. If we tried to model it the way you suggest, the child table\nwould need to have several primary keys. I don't think this would work.\n\nBut I think I just need to stare at the dependency graph a little while\nlonger. Maybe I just need to add some extra edges to make it work\ncorrectly.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 9 Aug 2023 13:10:29 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-05, Dean Rasheed wrote:\n\n> On Sat, 5 Aug 2023 at 18:37, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Yeah, something like that. However, if the child had a NOT NULL\n> > constraint of its own, then it should not be deleted when the\n> > PK-on-parent is, but merely marked as no longer inherited. (This is\n> > also what happens with a straight NOT NULL constraint.) I think what\n> > this means is that at some point during the deletion of the PK we must\n> > remove the dependency link rather than letting it be followed. I'm not\n> > yet sure how to do this.\n> \n> I'm not sure that adding that new dependency was the right thing to\n> do. I think perhaps this could just be made to work using conislocal\n> and coninhcount to track whether the child constraint needs to be\n> deleted, or just updated.\n\nRight, in the end I got around to that point of view. I abandoned the\nidea of adding these dependency links, and I'm back at relying on the\nconinhcount/conislocal markers. But there were a couple of bugs in the\naccounting for that, so I've fixed some of those, but it's not yet\ncomplete:\n\n- ALTER TABLE parent ADD PRIMARY KEY\n needs to create NOT NULL constraints in children. I added this, but\n I'm not yet sure it works correctly (for example, if a child already\n has a NOT NULL constraint, we need to bump its inhcount, but we\n don't.)\n- ALTER TABLE parent ADD PRIMARY KEY USING index\n Not sure if this is just as above or needs separate handling\n- ALTER TABLE DROP PRIMARY KEY\n needs to decrement inhcount or drop the constraint if there are no\n other sources for that constraint to exist. I've adjusted the drop\n constraint code to do this.\n- ALTER TABLE INHERIT\n needs to create a constraint on the new child, if parent has PK. 
Not\n implemented\n- ALTER TABLE NO INHERIT\n needs to delink any constraints (decrement inhcount, possibly drop\n the constraint).\n\nI also need to add tests for those scenarios, because I think there\naren't any for most of them.\n\nThere's also another a pg_upgrade problem: we now get spurious ALTER\nTABLE SET NOT NULL commands in a dump after pg_upgrade for the columns\nthat get the constraint from a primary key. (This causes a pg_upgrade\ntest failure). I need to adjust pg_dump to suppress those; I think\nsomething like flagInhTables would do.\n\n(I had mentioned that I needed to move code from dropconstraint_internal\nto RemoveConstraintById. However, now I can't figure out exactly what\ncase was having a problem, so I've left it alone.)\n\nHere's v17, which is a step forward, but several holes remain.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I can't go to a restaurant and order food because I keep looking at the\nfonts on the menu. Five minutes later I realize that it's also talking\nabout food\" (Donald Knuth)",
"msg_date": "Fri, 11 Aug 2023 15:54:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Fri, 11 Aug 2023 at 14:54, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Right, in the end I got around to that point of view. I abandoned the\n> idea of adding these dependency links, and I'm back at relying on the\n> coninhcount/conislocal markers. But there were a couple of bugs in the\n> accounting for that, so I've fixed some of those, but it's not yet\n> complete:\n>\n> - ALTER TABLE parent ADD PRIMARY KEY\n> needs to create NOT NULL constraints in children. I added this, but\n> I'm not yet sure it works correctly (for example, if a child already\n> has a NOT NULL constraint, we need to bump its inhcount, but we\n> don't.)\n> - ALTER TABLE parent ADD PRIMARY KEY USING index\n> Not sure if this is just as above or needs separate handling\n> - ALTER TABLE DROP PRIMARY KEY\n> needs to decrement inhcount or drop the constraint if there are no\n> other sources for that constraint to exist. I've adjusted the drop\n> constraint code to do this.\n> - ALTER TABLE INHERIT\n> needs to create a constraint on the new child, if parent has PK. Not\n> implemented\n> - ALTER TABLE NO INHERIT\n> needs to delink any constraints (decrement inhcount, possibly drop\n> the constraint).\n>\n\nI think perhaps for ALTER TABLE INHERIT, it should check that the\nchild has a NOT NULL constraint, and error out if not. That's the\ncurrent behaviour, and also matches other constraints types (e.g.,\nCHECK constraints).\n\nMore generally though, I'm worried that this is starting to get very\ncomplicated. 
I wonder if there might be a different, simpler approach.\nOne vague idea is to have a new attribute on the column that counts\nthe number of constraints (local and inherited PK and NOT NULL\nconstraints) that make the column not null.\n\nSomething else I noticed when reading the SQL standard is that a\nuser-defined CHECK (col IS NOT NULL) constraint should be recognised\nby the system as also making the column not null (setting its\n\"nullability characteristic\" to \"known not nullable\"). I think that's\nmore than just an artefact of how they say NOT NULL constraints should\nbe implemented, because the effect of such a CHECK constraint should\nbe exposed in the \"columns\" view of the information schema -- the\nvalue of \"is_nullable\" should be \"NO\" if the column is \"known not\nnullable\".\n\nIn this sense, the standard does allow multiple not null constraints\non a column, independently of whether the column is \"defined as NOT\nNULL\". My understanding of the standard is that ALTER COLUMN ...\nSET/DROP NOT NULL change whether or not the column is \"defined as NOT\nNULL\", and manage a single system-generated constraint, but there may\nbe any number of other user-defined constraints that also make the\ncolumn \"known not nullable\", and they need to be tracked in some way.\n\nI'm also wondering whether creating a pg_constraint entry for *every*\nnot-nullable column is actually going too far. If we were to\ndistinguish between \"defined as NOT NULL\" and being not null as a\nresult of one or more constraints, in the way that the standard seems\nto suggest, perhaps the former (likely to be much more common) could\nsimply be a new attribute stored on the column. I think we actually\nonly need to create pg_constraint entries if a constraint name or any\nadditional constraint properties such as NOT VALID are specified. That\nwould lead to far fewer new constraints, less catalog bloat, and less\nnoise in the \\d output.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 15 Aug 2023 10:57:34 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-15, Dean Rasheed wrote:\n\n> I think perhaps for ALTER TABLE INHERIT, it should check that the\n> child has a NOT NULL constraint, and error out if not. That's the\n> current behaviour, and also matches other constraints types (e.g.,\n> CHECK constraints).\n\nYeah, I reached the same conclusion yesterday while trying it out, so\nthat's what I implemented. I'll post later today.\n\n> More generally though, I'm worried that this is starting to get very\n> complicated. I wonder if there might be a different, simpler approach.\n> One vague idea is to have a new attribute on the column that counts\n> the number of constraints (local and inherited PK and NOT NULL\n> constraints) that make the column not null.\n\nHmm. I grant that this is different, but I don't see that it is\nsimpler.\n\n> Something else I noticed when reading the SQL standard is that a\n> user-defined CHECK (col IS NOT NULL) constraint should be recognised\n> by the system as also making the column not null (setting its\n> \"nullability characteristic\" to \"known not nullable\").\n\nI agree with this view actually, but I've refrained from implementing\nit(*) because our SQL-standards people have advised against it. Insider\nknowledge? I don't know. I think this is a comparatively smaller\nconsideration though, and we can adjust for it afterwards.\n\n(*) Rather: at some point I removed the implementation of that from the\npatch.\n\n> I'm also wondering whether creating a pg_constraint entry for *every*\n> not-nullable column is actually going too far. If we were to\n> distinguish between \"defined as NOT NULL\" and being not null as a\n> result of one or more constraints, in the way that the standard seems\n> to suggest, perhaps the former (likely to be much more common) could\n> simply be a new attribute stored on the column. 
I think we actually\n> only need to create pg_constraint entries if a constraint name or any\n> additional constraint properties such as NOT VALID are specified. That\n> would lead to far fewer new constraints, less catalog bloat, and less\n> noise in the \\d output.\n\nThere is a problem if we do this, though, which is that we cannot use\nthe constraints for the things that we want them for -- for example,\nremove_useless_groupby_columns() would like to use unique constraints,\nnot just primary keys; but it depends on the NOT NULL rows being there\nfor invalidation reasons (namely: if the NOT NULL constraint is dropped,\nwe need to be able to replan. Without catalog rows, we don't have a\nmechanism to let that happen).\n\nIf we don't add all those redundant catalog rows, then this is all for\nnaught.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n",
"msg_date": "Tue, 15 Aug 2023 12:15:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 15.08.23 11:57, Dean Rasheed wrote:\n> Something else I noticed when reading the SQL standard is that a\n> user-defined CHECK (col IS NOT NULL) constraint should be recognised\n> by the system as also making the column not null (setting its\n> \"nullability characteristic\" to \"known not nullable\"). I think that's\n> more than just an artefact of how they say NOT NULL constraints should\n> be implemented, because the effect of such a CHECK constraint should\n> be exposed in the \"columns\" view of the information schema -- the\n> value of \"is_nullable\" should be \"NO\" if the column is \"known not\n> nullable\".\n\nNullability determination is different from not-null constraints. The \nnullability characteristic of a column can be derived from multiple \nsources, including not-null constraints, check constraints, primary key \nconstraints, domain constraints, as well as more complex rules in case \nof views, joins, etc. But this is all distinct and separate from the \nissue of not-null constraints that we are discussing here.\n\n\n",
"msg_date": "Wed, 16 Aug 2023 10:02:11 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "I have two small patches that you can integrate into your patch set:\n\nThe first just changes the punctuation of \"Not-null constraints\" in the \npsql output to match what the documentation mostly uses.\n\nThe second has some changes to ddl.sgml to reflect that not-null \nconstraints are now named and can be operated on like other constraints. \n You might want to read that again to make sure it matches your latest \nintentions, but I think it catches all the places that are required to \nchange.",
"msg_date": "Wed, 16 Aug 2023 12:09:33 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Okay, so here's another version of this, where I fixed the creation of\nNOT NULLs derived from PKs. It turned out that what I was doing wasn't\ndoing recursion correctly, so for example if you have NOT NULLs in\ngrand-child tables they would not be marked as inherited from the PK\n(thus wrongly droppable). I had to rewrite it to go through ATPrepCmd\nand friends; and we had no way to indicate inheritance that way, so I\nhad to add an \"int inhcount\" to the Constraint node. (I think it might\nbe OK to make it just a \"bool inherited\" instead).\n\nThere is one good thing about this, which is that currently\nAddRelationNewConstraints() has a strange \"bool is_local\" parameter\n(added by commit cd902b331d, 2008), which is somewhat strange, and which\nwe could remove to instead use this new Constraint->inhcount mechanism\nto pass down the flag.\n\n\nAlso: it turns out that you can do this\nCREATE TABLE parent (a int);\nCREATE TABLE child (NOT NULL a) INHERITS (parent);\n\nthat is, the column has no local definition on the child, but the\nconstraint does. This required some special fixes but also works\ncorrectly now AFAICT.\n\nOn 2023-Aug-16, Peter Eisentraut wrote:\n\n> I have two small patches that you can integrate into your patch set:\n> \n> The first just changes the punctuation of \"Not-null constraints\" in the psql\n> output to match what the documentation mostly uses.\n> \n> The second has some changes to ddl.sgml to reflect that not-null constraints\n> are now named and can be operated on like other constraints. You might want\n> to read that again to make sure it matches your latest intentions, but I\n> think it catches all the places that are required to change.\n\nI've incorporated both of those, verbatim for now; I'll give the docs\nanother look tomorrow.\n\nOn 2023-Aug-11, Alvaro Herrera wrote:\n\n> - ALTER TABLE parent ADD PRIMARY KEY\n> needs to create NOT NULL constraints in children. I added this, but\n> I'm not yet sure it works correctly (for example, if a child already\n> has a NOT NULL constraint, we need to bump its inhcount, but we\n> don't.)\n> - ALTER TABLE parent ADD PRIMARY KEY USING index\n> Not sure if this is just as above or needs separate handling\n> - ALTER TABLE DROP PRIMARY KEY\n> needs to decrement inhcount or drop the constraint if there are no\n> other sources for that constraint to exist. I've adjusted the drop\n> constraint code to do this.\n> - ALTER TABLE INHERIT\n> needs to create a constraint on the new child, if parent has PK. Not\n> implemented\n> - ALTER TABLE NO INHERIT\n> needs to delink any constraints (decrement inhcount, possibly drop\n> the constraint).\n>\n> I also need to add tests for those scenarios, because I think there\n> aren't any for most of them.\n\nI've added tests for the ones I caught missing, including leaving some\ntables to exercise the pg_upgrade side of things.\n\n> There's also another a pg_upgrade problem: we now get spurious ALTER\n> TABLE SET NOT NULL commands in a dump after pg_upgrade for the columns\n> that get the constraint from a primary key.\n\nI fixed this too.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 21 Aug 2023 20:01:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "I went over the whole patch and made a very large number of additional\ncleanups[1], to the point where I think this is truly ready for commit now.\nThere are some relatively minor things that could still be subject of\ndebate, such as what to name constraints that derive from PKs or from\nmultiple inheritance parents. I have one commented out Assert() because\nof that. But other than those and a couple of not-terribly-important\nXXX comments, this is as ready as it'll ever be.\n\nI'll put it through CI soon. It's been a while since I tested using\npg_upgrade from older versions, so I'll do that too. If no problems\nemerge, I intend to get this committed soon.\n\n[1] https://github.com/alvherre/postgres/tree/catalog-notnull-9\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"If you want to have good ideas, you must have many ideas. Most of them\nwill be wrong, and what you have to learn is which ones to throw away.\"\n (Linus Pauling)",
"msg_date": "Wed, 23 Aug 2023 19:08:18 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "I have now pushed this again. Hopefully it'll stick this time.\n\nWe may want to make some further tweaks to the behavior in some cases --\nfor example, don't disallow ALTER TABLE DROP NOT NULL when the\nconstraint is both inherited and has a local definition; the other\noption is to mark the constraint as no longer having a local definition.\nI left it the other way because that's what CHECK does; maybe we would\nlike to change both at once.\n\nI ran it through CI, and the pg_upgrade test with a dump from 14's\nregression test database and everything worked well, but it's been a\nwhile since I tested the sepgsql part of it, so that might be the first\nthing to explode.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 25 Aug 2023 13:38:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-25, Alvaro Herrera wrote:\n\n> I have now pushed this again. Hopefully it'll stick this time.\n\nHmm, failed under the Czech locale[1]; apparently \"inh_grandchld\" sorts\nearlier than \"inh_child1\" there. I think I'll rename inh_grandchld to\ninh_child3 or something like that.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hippopotamus&dt=2023-08-25%2011%3A33%3A07\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 25 Aug 2023 14:00:41 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 25.08.23 13:38, Alvaro Herrera wrote:\n> I have now pushed this again. Hopefully it'll stick this time.\n> \n> We may want to make some further tweaks to the behavior in some cases --\n> for example, don't disallow ALTER TABLE DROP NOT NULL when the\n> constraint is both inherited and has a local definition; the other\n> option is to mark the constraint as no longer having a local definition.\n> I left it the other way because that's what CHECK does; maybe we would\n> like to change both at once.\n> \n> I ran it through CI, and the pg_upgrade test with a dump from 14's\n> regression test database and everything worked well, but it's been a\n> while since I tested the sepgsql part of it, so that might the first\n> thing to explode.\n\nIt looks like we forgot about domain constraints? For example,\n\ncreate domain testdomain as int not null;\n\nshould create a row in pg_constraint?\n\n\n\n",
"msg_date": "Mon, 28 Aug 2023 14:55:54 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-28, Peter Eisentraut wrote:\n\n> It looks like we forgot about domain constraints? For example,\n> \n> create domain testdomain as int not null;\n> \n> should create a row in pg_constraint?\n\nWell, at some point I purposefully left them out; they were sufficiently\ndifferent from the ones in tables that doing both things at the same\ntime was not saving any effort. I guess we could try to bake them in\nnow.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Doing what he did amounts to sticking his fingers under the hood of the\nimplementation; if he gets his fingers burnt, it's his problem.\" (Tom Lane)\n\n\n",
"msg_date": "Mon, 28 Aug 2023 17:44:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hi Alvaro,\n\n25.08.2023 14:38, Alvaro Herrera wrote:\n> I have now pushed this again. Hopefully it'll stick this time.\n\nI've found that after that commit the following query:\nCREATE TABLE t(a int PRIMARY KEY) PARTITION BY RANGE (a);\nCREATE TABLE tp1(a int);\nALTER TABLE t ATTACH PARTITION tp1 FOR VALUES FROM (0) to (1);\n\ntriggers a server crash:\nCore was generated by `postgres: law regression [local] ALTER TABLE '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n\nwarning: Section `.reg-xstate/2194811' in core file too small.\n#0 0x0000556007711d77 in MergeAttributesIntoExisting (child_rel=0x7fc30ba309d8,\n parent_rel=0x7fc30ba33f18) at tablecmds.c:15771\n15771 if (!((Form_pg_constraint) GETSTRUCT(contup))->connoinherit)\n(gdb) bt\n#0 0x0000556007711d77 in MergeAttributesIntoExisting (child_rel=0x7fc30ba309d8,\n parent_rel=0x7fc30ba33f18) at tablecmds.c:15771\n#1 0x00005560077118d4 in CreateInheritance (child_rel=0x7fc30ba309d8, parent_rel=0x7fc30ba33f18)\n at tablecmds.c:15631\n...\n\n(gdb) print contup\n$1 = (HeapTuple) 0x0\n\nOn b0e96f311~1 I get:\nERROR: column \"a\" in child table must be marked NOT NULL\n\nBest regards,\nAlexander",
"msg_date": "Thu, 31 Aug 2023 13:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Mar-29, Peter Eisentraut wrote:\n\n> On 27.03.23 15:55, Peter Eisentraut wrote:\n> > The information schema should be updated. I think the following views:\n> > \n> > - CHECK_CONSTRAINTS\n> > - CONSTRAINT_COLUMN_USAGE\n> > - DOMAIN_CONSTRAINTS\n> > - TABLE_CONSTRAINTS\n> > \n> > It looks like these have no test coverage; maybe that could be addressed\n> > at the same time.\n> \n> Here are patches for this. I haven't included the expected files for the\n> tests; this should be checked again that output is correct or the changes\n> introduced by this patch set are as expected.\n> \n> The reason we didn't have tests for this before was probably in part because\n> the information schema made up names for not-null constraints involving\n> OIDs, so the test wouldn't have been stable.\n> \n> Feel free to integrate this, or we can add it on afterwards.\n\nI'm eyeing patch 0002 here. I noticed that in view check_constraints it\ndefines the not-null constraint definition as substrings over the\npg_get_constraintdef() function[q1], so I wondered whether it might be\nbetter to join to pg_attribute instead. I see two options:\n\n1. add a scalar subselect in the select list for each constraint [q2]\n2. add a LEFT JOIN to pg_attribute to the main FROM list [q3]\n ON con.conrelid=att.attrelid AND con.conkey[1] = con.attrelid\n\nWith just the regression test tables in place, these forms are all\npretty much the same in execution time. I then created 20k tables with\n6 columns each and NOT NULL constraint on each column[4]. That's not a\nhuge amount but it's credible for a medium-size database with a bunch of\npartitions (it's amazing what passes for a medium-size database these\ndays). I was surprised to find out that q3 (~130ms) is three times\nfaster than q2 (~390ms), which is in turn more than twice faster than\nyour proposed q1 (~870ms). So unless you have another reason to prefer\nit, I think we should use q3 here.\n\n\nIn constraint_column_usage, you're adding a relkind to the catalog scan\nthat goes through pg_depend for CHECK constraints. Here we can rely on\na simple conkey[1] check and a separate UNION ALL arm[q5]; this is also\nfaster when there are many tables.\n\nThe third view definition looks ok. It's certainly very nice to be able\nto delete XXX comments there.\n\n\n[q1]\nSELECT current_database()::information_schema.sql_identifier AS constraint_catalog,\n rs.nspname::information_schema.sql_identifier AS constraint_schema,\n con.conname::information_schema.sql_identifier AS constraint_name,\n CASE con.contype\n WHEN 'c'::\"char\" THEN \"left\"(SUBSTRING(pg_get_constraintdef(con.oid) FROM 8), '-1'::integer)\n WHEN 'n'::\"char\" THEN SUBSTRING(pg_get_constraintdef(con.oid) FROM 10) || ' IS NOT NULL'::text \n ELSE NULL::text\n END::information_schema.character_data AS check_clause\n FROM pg_constraint con\n LEFT JOIN pg_namespace rs ON rs.oid = con.connamespace\n LEFT JOIN pg_class c ON c.oid = con.conrelid\n LEFT JOIN pg_type t ON t.oid = con.contypid \n WHERE pg_has_role(COALESCE(c.relowner, t.typowner), 'USAGE'::text) AND (con.contype = ANY (ARRAY['c'::\"char\", 'n'::\"char\"]));\n\n[q2]\nSELECT current_database()::information_schema.sql_identifier AS constraint_catalog,\n rs.nspname::information_schema.sql_identifier AS constraint_schema,\n con.conname::information_schema.sql_identifier AS constraint_name,\n CASE con.contype\n WHEN 'c'::\"char\" THEN \"left\"(SUBSTRING(pg_get_constraintdef(con.oid) FROM 8), '-1'::integer)\n WHEN 'n'::\"char\" THEN FORMAT('CHECK (%s IS NOT NULL)',\n (SELECT attname FROM pg_attribute WHERE attrelid = conrelid AND attnum = conkey[1]))\n ELSE NULL::text\n END::information_schema.character_data AS check_clause\n FROM pg_constraint con\n LEFT JOIN pg_namespace rs ON rs.oid = con.connamespace\n LEFT JOIN pg_class c ON c.oid = con.conrelid\n LEFT JOIN pg_type t ON t.oid = con.contypid\n WHERE pg_has_role(COALESCE(c.relowner, t.typowner), 'USAGE'::text) AND (con.contype = ANY (ARRAY['c'::\"char\", 'n'::\"char\"]));\n\n[q3]\nSELECT current_database()::information_schema.sql_identifier AS constraint_catalog,\n rs.nspname::information_schema.sql_identifier AS constraint_schema,\n con.conname::information_schema.sql_identifier AS constraint_name,\n CASE con.contype\n WHEN 'c'::\"char\" THEN \"left\"(SUBSTRING(pg_get_constraintdef(con.oid) FROM 8), '-1'::integer)\n WHEN 'n'::\"char\" THEN FORMAT('CHECK (%s IS NOT NULL)', at.attname) \n ELSE NULL::text\n END::information_schema.character_data AS check_clause\n FROM pg_constraint con\n LEFT JOIN pg_namespace rs ON rs.oid = con.connamespace\n LEFT JOIN pg_class c ON c.oid = con.conrelid\n LEFT JOIN pg_type t ON t.oid = con.contypid\n LEFT JOIN pg_attribute at ON (con.conrelid = at.attrelid AND con.conkey[1] = at.attnum)\n WHERE pg_has_role(COALESCE(c.relowner, t.typowner), 'USAGE'::text) AND (con.contype = ANY (ARRAY['c'::\"char\", 'n'::\"char\"]));\n\n[4]\ndo $$ begin for i in 0 .. 20000 loop\n execute format('create table t_%s (a1 int not null, a2 int not null, a3 int not null,\n a4 int not null, a5 int not null, a6 int not null);',\n i);\n if i % 1000 = 0 then commit; end if;\nend loop; end $$;\n\n[q5]\nSELECT CAST(current_database() AS sql_identifier) AS table_catalog,\n CAST(tblschema AS sql_identifier) AS table_schema,\n CAST(tblname AS sql_identifier) AS table_name,\n CAST(colname AS sql_identifier) AS column_name,\n CAST(current_database() AS sql_identifier) AS constraint_catalog,\n CAST(cstrschema AS sql_identifier) AS constraint_schema,\n CAST(cstrname AS sql_identifier) AS constraint_name\n\n FROM (\n /* check constraints */\n SELECT DISTINCT nr.nspname, r.relname, r.relowner, a.attname, nc.nspname, c.conname\n FROM pg_namespace nr, pg_class r, pg_attribute a, pg_depend d, pg_namespace nc, pg_constraint c\n WHERE nr.oid = r.relnamespace\n AND r.oid = a.attrelid\n AND d.refclassid = 'pg_catalog.pg_class'::regclass\n AND d.refobjid = r.oid\n AND d.refobjsubid = a.attnum\n AND d.classid = 'pg_catalog.pg_constraint'::regclass\n AND d.objid = c.oid\n AND c.connamespace = nc.oid\n AND c.contype = 'c'\n AND r.relkind IN ('r', 'p')\n AND NOT a.attisdropped\n\n UNION ALL\n\n /* not-null constraints */\n SELECT DISTINCT nr.nspname, r.relname, r.relowner, a.attname, nc.nspname, c.conname\n FROM pg_namespace nr, pg_class r, pg_attribute a, pg_namespace nc, pg_constraint c\n WHERE nr.oid = r.relnamespace\n\t AND r.oid = a.attrelid\n\t AND r.oid = c.conrelid\n\t AND a.attnum = c.conkey[1]\n\t AND c.connamespace = nc.oid\n\t AND c.contype = 'n'\n\t AND r.relkind in ('r', 'p')\n\t AND not a.attisdropped\n\n UNION ALL\n\n /* unique/primary key/foreign key constraints */\n SELECT nr.nspname, r.relname, r.relowner, a.attname, nc.nspname, c.conname\n FROM pg_namespace nr, pg_class r, pg_attribute a, pg_namespace nc,\n pg_constraint c\n WHERE nr.oid = r.relnamespace\n AND r.oid = a.attrelid\n AND nc.oid = c.connamespace\n AND r.oid = CASE c.contype WHEN 'f' THEN c.confrelid ELSE c.conrelid END\n AND a.attnum = ANY (CASE c.contype WHEN 'f' THEN c.confkey ELSE c.conkey END)\n AND NOT a.attisdropped\n AND c.contype IN ('p', 'u', 'f')\n AND r.relkind IN ('r', 'p')\n\n ) AS x (tblschema, tblname, tblowner, colname, cstrschema, cstrname)\n\n WHERE pg_has_role(x.tblowner, 'USAGE') ;\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 31 Aug 2023 12:02:39 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hello Alexander,\n\nThanks for testing.\n\nOn 2023-Aug-31, Alexander Lakhin wrote:\n\n> 25.08.2023 14:38, Alvaro Herrera wrote:\n> > I have now pushed this again. Hopefully it'll stick this time.\n> \n> I've found that after that commit the following query:\n> CREATE TABLE t(a int PRIMARY KEY) PARTITION BY RANGE (a);\n> CREATE TABLE tp1(a int);\n> ALTER TABLE t ATTACH PARTITION tp1 FOR VALUES FROM (0) to (1);\n> \n> triggers a server crash:\n\nHmm, that's some weird code I left there all right. Can you please try\nthis patch? (Not final; I'll review it more completely later,\nparticularly to add this test case.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.",
"msg_date": "Thu, 31 Aug 2023 12:26:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "31.08.2023 13:26, Alvaro Herrera wrote:\n> Hmm, that's some weird code I left there all right. Can you please try\n> this patch? (Not final; I'll review it more completely later,\n> particularly to add this test case.)\n\nYes, your patch fixes the issue. I get the same error now:\nERROR: column \"a\" in child table must be marked NOT NULL\n\nThank you!\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 31 Aug 2023 14:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Aug-31, Alvaro Herrera wrote:\n\n> Hmm, that's some weird code I left there all right. Can you please try\n> this patch? (Not final; I'll review it more completely later,\n> particularly to add this test case.)\n\nThe change in MergeAttributesIntoExisting turned out to be close but not\nquite there, so I pushed another version of the fix.\n\nIn case you're wondering, the change in MergeConstraintsIntoExisting is\na related but different case, for which I added the other test case you\nsee there.\n\nI also noticed, while looking at this, that there's another problem when\na child has a NO INHERIT not-null constraint and the parent has a\nprimary key in the same column. It should refuse, or take over by\nmarking it no longer NO INHERIT. But it just accepts silently and all\nappears to be good. The problems appear when you add a child to that\nchild. I'll look into this later; it's not exactly the same code. At\nleast it's not a crasher.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 1 Sep 2023 19:55:52 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Looking at your 0001 patch, which adds tests for some of the\ninformation_schema views, I think it's a bad idea to put them in\nwhatever other regression .sql files; they would be subject to\nconcurrent changes depending on what other tests are being executed in\nthe same parallel test. I suggest to put them all in a separate .sql\nfile, and schedule that to run in the last concurrent group, together\nwith the tablespace test. This way, it would capture all the objects\nleft over by other test files.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 4 Sep 2023 13:00:05 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "In reference to [1], 0001 attached to this email contains the updated\nview definitions that I propose.\n\nIn 0002, I took the tests added by Peter's proposed patch and put them\nin a separate test file that runs at the end. There are some issues,\nhowever. One is that the ORDER BY clause in the check_constraints view\nis not fully deterministic, because the table name is not part of the\nview definition, so we cannot sort by table name. In the current\nregression database there is only one case[2] where two constraints have\nthe same name and different definition:\n\n inh_check_constraint │ 2 │ ((f1 > 0)) NOT VALID ↵\n │ │ ((f1 > 0))\n\n(on tables invalid_check_con and invalid_check_con_child). I assume\nthis is going to bite us at some point. We could just add a WHERE\nclause to omit that one constraint.\n\nAnother issue I notice eyeballing at the results is that foreign keys on\npartitioned tables are listing the rows used to implement the\nconstraints on partitions, which are sort-of \"internal\" constraints (and\nare not displayed by psql's \\d). I hope this is a relatively simple fix\nthat we could extract from the code used by psql.\n\nAnyway, I think I'm going to get 0001 committed sometime tomorrow, and\nthen play a bit more with 0002 to try and get it pushed soon also.\n\nThanks\n\n[1] https://postgr.es/m/81b461c4-edab-5d8c-2f88-203108425340@enterprisedb.com\n\n[2]\nselect constraint_name, count(*),\n string_agg(distinct check_clause, E'\\n')\nfrom information_schema.check_constraints\ngroup by constraint_name\nhaving count(*) > 1;\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You don't solve a bad join with SELECT DISTINCT\" #CupsOfFail\nhttps://twitter.com/connor_mc_d/status/1431240081726115845",
"msg_date": "Mon, 4 Sep 2023 19:10:06 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "information_schema and not-null constraints"
},
{
"msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> In 0002, I took the tests added by Peter's proposed patch and put them\n> in a separate test file that runs at the end. There are some issues,\n> however. One is that the ORDER BY clause in the check_constraints view\n> is not fully deterministic, because the table name is not part of the\n> view definition, so we cannot sort by table name.\n\nI object very very strongly to this proposed test method. It\ncompletely undoes the work I did in v15 (cc50080a8 and related)\nto make the core regression test scripts mostly independent of each\nother. Even without considering the use-case of running a subset of\nthe tests, the new test's expected output will constantly be needing\nupdates as side effects of unrelated changes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 04 Sep 2023 16:43:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 31.08.23 12:02, Alvaro Herrera wrote:\n> In constraint_column_usage, you're adding a relkind to the catalog scan\n> that goes through pg_depend for CHECK constraints. Here we can rely on\n> a simple conkey[1] check and a separate UNION ALL arm[q5]; this is also\n> faster when there are many tables.\n> \n> The third view definition looks ok. It's certainly very nice to be able\n> to delete XXX comments there.\n\nThe following information schema views are affected by the not-null \nconstraint catalog entries:\n\n1. CHECK_CONSTRAINTS\n2. CONSTRAINT_COLUMN_USAGE\n3. DOMAIN_CONSTRAINTS\n4. TABLE_CONSTRAINTS\n\nNote that 1 and 3 also contain domain constraints. So as long as the \ndomain not-null constraints are not similarly catalogued, we can't \ndelete the separate not-null union branch. (3 never had one, so \narguably a bit buggy.)\n\nI think we can fix up 4 by just deleting the not-null union branch.\n\nFor 2, the simple fix is also easy, but there are some other options, as \nyou discuss above.\n\nHow do you want to proceed?\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 13:29:47 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Sep-05, Peter Eisentraut wrote:\n\n> The following information schema views are affected by the not-null\n> constraint catalog entries:\n> \n> 1. CHECK_CONSTRAINTS\n> 2. CONSTRAINT_COLUMN_USAGE\n> 3. DOMAIN_CONSTRAINTS\n> 4. TABLE_CONSTRAINTS\n> \n> Note that 1 and 3 also contain domain constraints. So as long as the domain\n> not-null constraints are not similarly catalogued, we can't delete the\n> separate not-null union branch. (3 never had one, so arguably a bit buggy.)\n> \n> I think we can fix up 4 by just deleting the not-null union branch.\n> \n> For 2, the simple fix is also easy, but there are some other options, as you\n> discuss above.\n> \n> How do you want to proceed?\n\nI posted as a patch in a separate thread[1]. Let me fix up the\ndefinitions for views 1 and 3 for domains per your comments, and I'll\npost in that thread again.\n\n[1] https://postgr.es/m/202309041710.psytrxlsiqex@alvherre.pgsql\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n",
"msg_date": "Tue, 5 Sep 2023 17:29:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Sep-05, Peter Eisentraut wrote:\n\n> The following information schema views are affected by the not-null\n> constraint catalog entries:\n> \n> 1. CHECK_CONSTRAINTS\n> 2. CONSTRAINT_COLUMN_USAGE\n> 3. DOMAIN_CONSTRAINTS\n> 4. TABLE_CONSTRAINTS\n> \n> Note that 1 and 3 also contain domain constraints.\n\nAfter looking at what happens for domain constraints in older versions\n(I tested 15, but I suppose this applies everywhere), I notice that we\ndon't seem to handle them anywhere that I can see. My quick exercise is\njust\n\ncreate domain nnint as int not null;\ncreate table foo (a nnint);\n\nand then verify that this constraint shows nowhere -- it's not in\nDOMAIN_CONSTRAINTS for starters, which is I think the most obvious place.\nAnd nothing is shown in CHECK_CONSTRAINTS nor TABLE_CONSTRAINTS either.\n\nThis did ever work in the past? I tested with 9.3 and didn't see\nanything there either.\n\nI am hesitant to try to add domain not-null constraint support to\ninformation_schema in the same commit as these changes. I think this\nshould be fixed separately.\n\n(Note that if, in older versions, you change the table to be\n create table foo (a nnint NOT NULL);\n then you do get a row in table_constraints, but nothing in\n check_constraints. With my proposed definition this constraint appears\n in check_constraints, table_constraints and constraint_column_usage.)\n\nOn 2023-Sep-04, Tom Lane wrote:\n\n> I object very very strongly to this proposed test method. It\n> completely undoes the work I did in v15 (cc50080a8 and related)\n> to make the core regression test scripts mostly independent of each\n> other. Even without considering the use-case of running a subset of\n> the tests, the new test's expected output will constantly be needing\n> updates as side effects of unrelated changes.\n\nYou're absolutely right, this would be disastrous. A better alternative\nis that the new test file creates a few objects for itself, either by\nusing a separate role or by using a separate schema, and we examine the\ninformation_schema display for those objects only. Then it'll be better\nisolated.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)\n\n\n",
"msg_date": "Tue, 5 Sep 2023 18:24:37 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 2023-Sep-05, Alvaro Herrera wrote:\n\n> After looking at what happens for domain constraints in older versions\n> (I tested 15, but I suppose this applies everywhere), I notice that we\n> don't seem to handle them anywhere that I can see. My quick exercise is\n> just\n> \n> create domain nnint as int not null;\n> create table foo (a nnint);\n> \n> and then verify that this constraint shows nowhere -- it's not in\n> DOMAIN_CONSTRAINTS for starters, which is I think the most obvious place.\n> And nothing is shown in CHECK_CONSTRAINTS nor TABLE_CONSTRAINTS either.\n\nLooking now at what to do for CHECK_CONSTRAINTS with domain constraints,\nI admit I'm completely confused about what this view is supposed to\nshow. Currently, we show the constraint name and a definition like\n\"CHECK (column IS NOT NULL)\". But since the table name is not given, it\nis not possible to know to what table the column name refers to. For\ndomains, we could show \"CHECK (VALUE IS NOT NULL)\" but again with no\nindication of what domain it applies to, or anything at all that would\nmake this useful in any way whatsoever.\n\nSo this whole thing seems pretty futile and I'm disinclined to waste\nmuch time on it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 5 Sep 2023 19:15:43 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 9/5/23 19:15, Alvaro Herrera wrote:\n> On 2023-Sep-05, Alvaro Herrera wrote:\n> \n> Looking now at what to do for CHECK_CONSTRAINTS with domain constraints,\n> I admit I'm completely confused about what this view is supposed to\n> show. Currently, we show the constraint name and a definition like\n> \"CHECK (column IS NOT NULL)\". But since the table name is not given, it\n> is not possible to know to what table the column name refers to. For\n> domains, we could show \"CHECK (VALUE IS NOT NULL)\" but again with no\n> indication of what domain it applies to, or anything at all that would\n> make this useful in any way whatsoever.\n\nConstraint names are supposed to be unique per schema[1] so the view \ncontains the minimum required information to identify the constraint.\n\n> So this whole thing seems pretty futile and I'm disinclined to waste\n> much time on it.\n\nUntil PostgreSQL either\n A) obeys the spec on this uniqueness, or\n B) decides to deviate from the information_schema spec;\nthis view will be completely useless for actually getting any useful \ninformation.\n\nI would like to see us do A because it is the right thing to do. Our \nautogenerated names obey this rule, but who knows how many duplicate \nnames per schema are out there in the wild from people specifying their \nown names.\n\nI don't know what the project would think about doing B.\n\n\n[1] SQL:2023-2 11.4 <table constraint definition> Syntax Rule 4\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 5 Sep 2023 23:50:04 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On Tue, Sep 5, 2023 at 2:50 PM Vik Fearing <vik@postgresfriends.org> wrote:\n\n> On 9/5/23 19:15, Alvaro Herrera wrote:\n> > On 2023-Sep-05, Alvaro Herrera wrote:\n> >\n> > Looking now at what to do for CHECK_CONSTRAINTS with domain constraints,\n> > I admit I'm completely confused about what this view is supposed to\n> > show. Currently, we show the constraint name and a definition like\n> > \"CHECK (column IS NOT NULL)\". But since the table name is not given, it\n> > is not possible to know to what table the column name refers to. For\n> > domains, we could show \"CHECK (VALUE IS NOT NULL)\" but again with no\n> > indication of what domain it applies to, or anything at all that would\n> > make this useful in any way whatsoever.\n>\n> Constraint names are supposed to be unique per schema[1] so the view\n> contains the minimum required information to identify the constraint.\n>\n\nI'm presuming that the view constraint_column_usage [1] is an integral part\nof all this though I haven't taken the time to figure out exactly how we\nare implementing it today.\n\nI'm not all that for either A or B since the status quo seems workable.\nThough ideally if the system has unique names per schema then everything\nshould just work - having the views produce duplicated information (as\nopposed to nothing) if they are used when the DBA doesn't enforce the\nstandard's requirements seems plausible.\n\nDavid J.\n\n[1]\nhttps://www.postgresql.org/docs/current/infoschema-constraint-column-usage.html",
"msg_date": "Tue, 5 Sep 2023 15:14:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 9/6/23 00:14, David G. Johnston wrote:\n> \n> I'm not all that for either A or B since the status quo seems workable.\n\nPray tell, how is it workable? The view does not identify a specific \nconstraint because we don't obey the rules on one side and we do obey \nthe rules on the other side. It is completely useless and unworkable.\n\n> Though ideally if the system has unique names per schema then everything\n> should just work - having the views produce duplicated information (as\n> opposed to nothing) if they are used when the DBA doesn't enforce the\n> standard's requirements seems plausible.\nLet us not engage in victim blaming. Postgres is the problem here.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 01:35:24 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 9/6/23 00:14, David G. Johnston wrote:\n>> I'm not all that for either A or B since the status quo seems workable.\n\n> Pray tell, how is it workable? The view does not identify a specific \n> constraint because we don't obey the rules on one side and we do obey \n> the rules on the other side. It is completely useless and unworkable.\n\nWhat solution do you propose? Starting to enforce the spec's rather\narbitrary requirement that constraint names be unique per-schema is\na complete nonstarter. Changing the set of columns in a spec-defined\nview is also a nonstarter, or at least we've always taken it as such.\n\nIf you'd like to see some forward progress in this area, maybe you\ncould lobby the SQL committee to make constraint names unique per-table\nnot per-schema, and then make the information_schema changes that would\nbe required to support that.\n\nIn general though, the fact that we have any DDL extensions at all\ncompared to the standard means that there will be Postgres databases\nthat are not adequately represented by the information_schema views.\nI'm not sure it's worth being more outraged about constraint names\nthan anything else. Or do you also want us to rip out (for starters)\nunique indexes on expressions, or unique partial indexes?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Sep 2023 20:53:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
    "msg_contents": "On 9/6/23 02:53, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 9/6/23 00:14, David G. Johnston wrote:\n>>> I'm not all that for either A or B since the status quo seems workable.\n> \n>> Pray tell, how is it workable? The view does not identify a specific\n>> constraint because we don't obey the rules on one side and we do obey\n>> the rules on the other side. It is completely useless and unworkable.\n> \n> What solution do you propose? Starting to enforce the spec's rather\n> arbitrary requirement that constraint names be unique per-schema is\n> a complete nonstarter. Changing the set of columns in a spec-defined\n> view is also a nonstarter, or at least we've always taken it as such.\n\nI both semi-agree and semi-disagree that these are nonstarters. One of \nthem has to give.\n\n> If you'd like to see some forward progress in this area, maybe you\n> could lobby the SQL committee to make constraint names unique per-table\n> not per-schema, and then make the information_schema changes that would\n> be required to support that.\n\nI could easily do that; but now you are asking to denormalize the \nstandard, because the constraints could be from tables, domains, or \nassertions.\n\nI don't think that will go over well, starting with my own opinion.\n\nAnd for this reason, I do not believe that this is a \"rather arbitrary \nrequirement\".\n\n> In general though, the fact that we have any DDL extensions at all\n> compared to the standard means that there will be Postgres databases\n> that are not adequately represented by the information_schema views.\n\nSure.\n\n> I'm not sure it's worth being more outraged about constraint names\n> than anything else. Or do you also want us to rip out (for starters)\n> unique indexes on expressions, or unique partial indexes?\n\nIndexes of any kind are not part of the standard so these examples are \nbasically invalid.\n\nSQL:2023-11 Schemata is not the part I am most familiar with, but I \ndon't even see where regular multi-column unique constraints are listed \nout, so that is both a lack in the standard and a knockdown of this \nargument. I am happy to be shown wrong about this.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 04:31:44 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 9/6/23 02:53, Tom Lane wrote:\n>> What solution do you propose? Starting to enforce the spec's rather\n>> arbitrary requirement that constraint names be unique per-schema is\n>> a complete nonstarter. Changing the set of columns in a spec-defined\n>> view is also a nonstarter, or at least we've always taken it as such.\n\n> I both semi-agree and semi-disagree that these are nonstarters. One of \n> them has to give.\n\n[ shrug... ] if you stick to a SQL-compliant schema setup, then the\ninformation_schema views will serve for introspection. If you don't,\nthey won't, and you'll need to look at Postgres-specific catalog data.\nThis compromise has served for twenty years or so, and I'm not in a\nhurry to change it. I think the odds of changing to the spec's\nrestriction without enormous pushback are nil, and I do not think\nthat the benefit could possibly be worth the ensuing pain to users.\n(It's not even the absolute pain level that is a problem, so much\nas the asymmetry: the pain would fall exclusively on users who get\nno benefit, because they weren't relying on these views anyway.\nIf you think that's an easy sell, you're mistaken.)\n\nIt could possibly be a little more palatable to add column(s) to the\ninformation_schema views, but I'm having a hard time seeing how that\nmoves the needle. The situation would still be precisely describable\nas \"if you stick to a SQL-compliant schema setup, then the standard\ncolumns of the information_schema views will serve for introspection.\nIf you don't, they won't, and you'll need to look at Postgres-specific\ncolumns\". That doesn't seem like a big improvement. Also, given your\npoint about normalization, how would we define the additions exactly?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 05 Sep 2023 23:40:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 05.09.23 18:24, Alvaro Herrera wrote:\n> On 2023-Sep-05, Peter Eisentraut wrote:\n> \n>> The following information schema views are affected by the not-null\n>> constraint catalog entries:\n>>\n>> 1. CHECK_CONSTRAINTS\n>> 2. CONSTRAINT_COLUMN_USAGE\n>> 3. DOMAIN_CONSTRAINTS\n>> 4. TABLE_CONSTRAINTS\n>>\n>> Note that 1 and 3 also contain domain constraints.\n> \n> After looking at what happens for domain constraints in older versions\n> (I tested 15, but I suppose this applies everywhere), I notice that we\n> don't seem to handle them anywhere that I can see. My quick exercise is\n> just\n> \n> create domain nnint as int not null;\n> create table foo (a nnint);\n> \n> and then verify that this constraint shows nowhere -- it's not in\n> DOMAIN_CONSTRAINTS for starters, which is I think the most obvious place.\n> And nothing is shown in CHECK_CONSTRAINTS nor TABLE_CONSTRAINTS either.\n> \n> This did ever work in the past? I tested with 9.3 and didn't see\n> anything there either.\n\nNo, this was never implemented. (As I wrote in my other message on the \nother thread, arguably a bit buggy.) We could fix this separately, \nunless we are going to implement catalogued domain not-null constraints \nsoon.\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 13:02:50 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 2023-Sep-04, Alvaro Herrera wrote:\n\n> In reference to [1], 0001 attached to this email contains the updated\n> view definitions that I propose.\n\nGiven the downthread discussion, I propose the attached. There are no\nchanges to v2, other than dropping the test part.\n\nWe can improve the situation for domains separately and likewise for\ntesting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 6 Sep 2023 19:52:37 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
    "msg_contents": "On 9/6/23 05:40, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> On 9/6/23 02:53, Tom Lane wrote:\n>>> What solution do you propose? Starting to enforce the spec's rather\n>>> arbitrary requirement that constraint names be unique per-schema is\n>>> a complete nonstarter. Changing the set of columns in a spec-defined\n>>> view is also a nonstarter, or at least we've always taken it as such.\n> \n>> I both semi-agree and semi-disagree that these are nonstarters. One of\n>> them has to give.\n> \n> [ shrug... ] if you stick to a SQL-compliant schema setup, then the\n> information_schema views will serve for introspection. If you don't,\n> they won't, and you'll need to look at Postgres-specific catalog data.\n\n\nAs someone who regularly asks people to cite chapter and verse of the \nstandard, do you not see this as a problem?\n\nIf there is /one thing/ I wish we were 100% compliant on, it's \ninformation_schema.\n\n\n> This compromise has served for twenty years or so, and I'm not in a\n> hurry to change it. \n\n\nHas it? Or is this just the first time someone has complained?\n\n\n> I think the odds of changing to the spec's\n> restriction without enormous pushback are nil, and I do not think\n> that the benefit could possibly be worth the ensuing pain to users.\n\n\nThat is a valid opinion, and probably one that will win out for quite a \nwhile.\n\n\n> (It's not even the absolute pain level that is a problem, so much\n> as the asymmetry: the pain would fall exclusively on users who get\n> no benefit, because they weren't relying on these views anyway.\n> If you think that's an easy sell, you're mistaken.)\n\n\nI am curious how many people we are selling this to. In my career as a \nconsultant, I have never once come across anyone specifying their own \nconstraint names. That is certainly anecdotal, and by no means means it \ndoesn't happen, but my personal experience says that it is very low.\n\nAnd since our generated names obey the spec (see ChooseConstraintName() \nwhich even says some apps depend on this), I don't see making this \nchange being a big problem in the real world.\n\nMind you, I am not pushing (right now) to make this change; I am just \nsaying that it is the right thing to do.\n\n\n> It could possibly be a little more palatable to add column(s) to the\n> information_schema views, but I'm having a hard time seeing how that\n> moves the needle. The situation would still be precisely describable\n> as \"if you stick to a SQL-compliant schema setup, then the standard\n> columns of the information_schema views will serve for introspection.\n> If you don't, they won't, and you'll need to look at Postgres-specific\n> columns\". That doesn't seem like a big improvement. Also, given your\n> point about normalization, how would we define the additions exactly?\n\n\nThis is precisely my point.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Wed, 6 Sep 2023 21:09:20 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 2023-Sep-06, Alvaro Herrera wrote:\n\n> On 2023-Sep-04, Alvaro Herrera wrote:\n> \n> > In reference to [1], 0001 attached to this email contains the updated\n> > view definitions that I propose.\n> \n> Given the downthread discussion, I propose the attached. There are no\n> changes to v2, other than dropping the test part.\n\nPushed.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 7 Sep 2023 11:40:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 06.09.23 19:52, Alvaro Herrera wrote:\n> + SELECT current_database()::information_schema.sql_identifier AS constraint_catalog,\n> + rs.nspname::information_schema.sql_identifier AS constraint_schema,\n> + con.conname::information_schema.sql_identifier AS constraint_name,\n> + format('CHECK (%s IS NOT NULL)', at.attname)::information_schema.character_data AS check_clause\n\nSmall correction here: This should be\n\npg_catalog.format('%s IS NOT NULL', at.attname)::information_schema.character_data AS check_clause\n\nThat is, the word \"CHECK\" and the parentheses should not be part of the\nproduced value.\n\n\n",
"msg_date": "Thu, 14 Sep 2023 10:20:01 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 14.09.23 10:20, Peter Eisentraut wrote:\n> On 06.09.23 19:52, Alvaro Herrera wrote:\n>> + SELECT current_database()::information_schema.sql_identifier AS \n>> constraint_catalog,\n>> + rs.nspname::information_schema.sql_identifier AS \n>> constraint_schema,\n>> + con.conname::information_schema.sql_identifier AS \n>> constraint_name,\n>> + format('CHECK (%s IS NOT NULL)', \n>> at.attname)::information_schema.character_data AS check_clause\n> \n> Small correction here: This should be\n> \n> pg_catalog.format('%s IS NOT NULL', \n> at.attname)::information_schema.character_data AS check_clause\n> \n> That is, the word \"CHECK\" and the parentheses should not be part of the\n> produced value.\n\nI have committed this fix.\n\n\n",
"msg_date": "Mon, 18 Sep 2023 08:15:53 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 2023-Sep-18, Peter Eisentraut wrote:\n\n> On 14.09.23 10:20, Peter Eisentraut wrote:\n\n> > Small correction here: This should be\n> > \n> > pg_catalog.format('%s IS NOT NULL',\n> > at.attname)::information_schema.character_data AS check_clause\n> > \n> > That is, the word \"CHECK\" and the parentheses should not be part of the\n> > produced value.\n> \n> I have committed this fix.\n\nThanks.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 18 Sep 2023 09:43:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 14.09.23 10:20, Peter Eisentraut wrote:\n> On 06.09.23 19:52, Alvaro Herrera wrote:\n>> + SELECT current_database()::information_schema.sql_identifier AS \n>> constraint_catalog,\n>> + rs.nspname::information_schema.sql_identifier AS \n>> constraint_schema,\n>> + con.conname::information_schema.sql_identifier AS \n>> constraint_name,\n>> + format('CHECK (%s IS NOT NULL)', \n>> at.attname)::information_schema.character_data AS check_clause\n> \n> Small correction here: This should be\n> \n> pg_catalog.format('%s IS NOT NULL', \n> at.attname)::information_schema.character_data AS check_clause\n> \n> That is, the word \"CHECK\" and the parentheses should not be part of the\n> produced value.\n\nSlightly related, so let's just tack it on here:\n\nWhile testing this, I noticed that the way the check_clause of regular \ncheck constraints is computed appears to be suboptimal. It currently does\n\nCAST(substring(pg_get_constraintdef(con.oid) from 7) AS character_data)\n\nwhich ends up with an extra set of parentheses, which is ignorable, but \nit also leaves in suffixes like \"NOT VALID\", which don't belong into \nthat column. Earlier in this thread I had contemplated a fix for the \nfirst issue, but that wouldn't address the second issue. I think we can \nfix this quite simply by using pg_get_expr() instead. I don't know why \nit wasn't done like that to begin with, maybe it was just a (my?) \nmistake. See attached patch.",
"msg_date": "Tue, 19 Sep 2023 09:01:56 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
"msg_contents": "On 19.09.23 09:01, Peter Eisentraut wrote:\n> While testing this, I noticed that the way the check_clause of regular \n> check constraints is computed appears to be suboptimal. It currently does\n> \n> CAST(substring(pg_get_constraintdef(con.oid) from 7) AS character_data)\n> \n> which ends up with an extra set of parentheses, which is ignorable, but \n> it also leaves in suffixes like \"NOT VALID\", which don't belong into \n> that column. Earlier in this thread I had contemplated a fix for the \n> first issue, but that wouldn't address the second issue. I think we can \n> fix this quite simply by using pg_get_expr() instead. I don't know why \n> it wasn't done like that to begin with, maybe it was just a (my?) \n> mistake. See attached patch.\n\ncommitted\n\n\n",
"msg_date": "Fri, 22 Sep 2023 07:59:56 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: information_schema and not-null constraints"
},
{
    "msg_contents": "Hi Alvaro,\n\n25.08.2023 14:38, Alvaro Herrera wrote:\n> I have now pushed this again. Hopefully it'll stick this time.\n\nI've discovered that that commit added several recursive functions, and\nsome of them are not protected from stack overflow.\n\nNamely, with \"max_locks_per_transaction = 600\" and default ulimit -s (8192),\nI observe server crashes with the following scripts:\n# ATExecSetNotNull()\n(n=40000; printf \"create table t0 (a int, b int);\";\nfor ((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1 ))); \"; done;\nprintf \"alter table t0 alter b set not null;\" ) | psql >psql.log\n\n# dropconstraint_internal()\n(n=20000; printf \"create table t0 (a int, b int not null);\";\nfor ((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1 ))); \"; done;\nprintf \"alter table t0 alter b drop not null;\" ) | psql >psql.log\n\n# set_attnotnull()\n(n=110000; printf \"create table tp (a int, b int, primary key(a, b)) partition by range (a); create table tp0 (a int \nprimary key, b int) partition by range (a);\";\nfor ((i=1;i<=$n;i++)); do printf \"create table tp$i partition of tp$(( $i - 1 )) for values from ($i) to (1000000) \npartition by range (a);\"; done;\nprintf \"alter table tp attach partition tp0 for values from (0) to (1000000);\") | psql >psql.log # this takes half an \nhour on my machine\n\nMaybe you would find it appropriate to add check_stack_depth() to these\nfunctions.\n\n(ATAddCheckNNConstraint() is protected because it calls\nAddRelationNewConstraints(), which in turn calls StoreRelCheck() ->\nCreateConstraintEntry() -> recordDependencyOnSingleRelExpr() ->\nfind_expr_references_walker() -> expression_tree_walker() ->\nexpression_tree_walker() -> check_stack_depth().)\n\n(There were patches prepared for similar cases [1], but they don't cover new\nfunctions, of course, and I'm not sure how to handle all such instances.)\n\n[1] https://commitfest.postgresql.org/45/4239/\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 12 Oct 2023 13:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2023-Oct-12, Alexander Lakhin wrote:\n\nHello,\n\n> I've discovered that that commit added several recursive functions, and\n> some of them are not protected from stack overflow.\n\nTrue. I reproduced the first two, but didn't attempt to reproduce the\nthird one -- patching all these to check for stack depth is cheap\nprotection. I also patched ATAddCheckNNConstraint:\n\n> (ATAddCheckNNConstraint() is protected because it calls\n> AddRelationNewConstraints(), which in turn calls StoreRelCheck() ->\n> CreateConstraintEntry() -> recordDependencyOnSingleRelExpr() ->\n> find_expr_references_walker() -> expression_tree_walker() ->\n> expression_tree_walker() -> check_stack_depth().)\n\nbecause it seems uselessly risky to rely on depth checks that exist on\ncompletely unrelated pieces of code, when the function visibly recurses\non itself. Especially so since the test cases that demonstrate crashes\nare so expensive to run, which means we're not going to detect it if at\nsome point that other stack depth check stops being called for whatever\nreason.\n\nBTW probably the tests could be made much cheaper by running the server\nwith a lower \"ulimit -s\" setting. I didn't try.\n\nI noticed one more crash while trying to \"drop table\" one of the\nhierarchies your scripts create. But it's a preexisting issue which\nneeds a backpatched fix, and I think Egor already reported it in the\nother thread.\n\nThank you\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n",
"msg_date": "Wed, 8 Nov 2023 18:53:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "Hi Alvaro,\n25.08.2023 14:38, Alvaro Herrera wrote:\n> I have now pushed this again. Hopefully it'll stick this time.\n\nStarting from b0e96f31, pg_upgrade fails with inherited NOT NULL constraint:\nFor example upgrade from 9c13b6814a (or REL_12_STABLE .. REL_16_STABLE) to\nb0e96f31 (or master) with following two tables (excerpt from\nsrc/test/regress/sql/rules.sql)\n\ncreate table test_0 (id serial primary key);\ncreate table test_1 (id integer primary key) inherits (test_0);\n\nI get the failure:\n\nSetting frozenxid and minmxid counters in new cluster ok\nRestoring global objects in the new cluster ok\nRestoring database schemas in the new cluster\n test\n*failure*\n\nConsult the last few lines of\n\"new/pg_upgrade_output.d/20240125T151231.112/log/pg_upgrade_dump_16384.log\"\nfor\nthe probable cause of the failure.\nFailure, exiting\n\nIn log:\n\npg_restore: connecting to database for restore\npg_restore: creating DATABASE \"test\"\npg_restore: connecting to new database \"test\"\npg_restore: creating DATABASE PROPERTIES \"test\"\npg_restore: connecting to new database \"test\"\npg_restore: creating pg_largeobject \"pg_largeobject\"\npg_restore: creating COMMENT \"SCHEMA \"public\"\"\npg_restore: creating TABLE \"public.test_0\"\npg_restore: creating SEQUENCE \"public.test_0_id_seq\"\npg_restore: creating SEQUENCE OWNED BY \"public.test_0_id_seq\"\npg_restore: creating TABLE \"public.test_1\"\npg_restore: creating DEFAULT \"public.test_0 id\"\npg_restore: executing SEQUENCE SET test_0_id_seq\npg_restore: creating CONSTRAINT \"public.test_0 test_0_pkey\"\npg_restore: creating CONSTRAINT \"public.test_1 test_1_pkey\"\npg_restore: while PROCESSING TOC:\npg_restore: from TOC entry 3200; 2606 16397 CONSTRAINT test_1 test_1_pkey\nandrew\npg_restore: error: could not execute query: ERROR: cannot drop inherited\nconstraint \"pgdump_throwaway_notnull_0\" of relation \"test_1\"\nCommand was:\n-- For binary upgrade, must preserve pg_class oids and relfilenodes\nSELECT\npg_catalog.binary_upgrade_set_next_index_pg_class_oid('16396'::pg_catalog.oid);\n\nSELECT\npg_catalog.binary_upgrade_set_next_index_relfilenode('16396'::pg_catalog.oid);\n\n\nALTER TABLE ONLY \"public\".\"test_1\"\n ADD CONSTRAINT \"test_1_pkey\" PRIMARY KEY (\"id\");\n\nALTER TABLE ONLY \"public\".\"test_1\" DROP CONSTRAINT\npgdump_throwaway_notnull_0;\n\nThanks!\n\n\n\n\nOn Thu, Jan 25, 2024 at 3:06 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> I have now pushed this again. Hopefully it'll stick this time.\n>\n> We may want to make some further tweaks to the behavior in some cases --\n> for example, don't disallow ALTER TABLE DROP NOT NULL when the\n> constraint is both inherited and has a local definition; the other\n> option is to mark the constraint as no longer having a local definition.\n> I left it the other way because that's what CHECK does; maybe we would\n> like to change both at once.\n>\n> I ran it through CI, and the pg_upgrade test with a dump from 14's\n> regression test database and everything worked well, but it's been a\n> while since I tested the sepgsql part of it, so that might the first\n> thing to explode.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/",
"msg_date": "Thu, 25 Jan 2024 15:21:35 +0700",
"msg_from": "Andrew Bille <andrewbille@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hello Alvaro,\n\nPlease look at an anomaly introduced with b0e96f311.\nThe following script:\nCREATE TABLE a ();\nCREATE TABLE b (i int) INHERITS (a);\nCREATE TABLE c () INHERITS (a, b);\n\nALTER TABLE a ADD COLUMN i int NOT NULL;\n\nresults in:\nNOTICE: merging definition of column \"i\" for child \"b\"\nNOTICE: merging definition of column \"i\" for child \"c\"\nERROR: tuple already updated by self\n\n(This is similar to bug #18297, but ATExecAddColumn() isn't guilty in this\ncase.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 2 Feb 2024 19:00:01 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "On Fri, Feb 02, 2024 at 07:00:01PM +0300, Alexander Lakhin wrote:\n> results in:\n> NOTICE: merging definition of column \"i\" for child \"b\"\n> NOTICE: merging definition of column \"i\" for child \"c\"\n> ERROR: tuple already updated by self\n> \n> (This is similar to bug #18297, but ATExecAddColumn() isn't guilty in this\n> case.)\n\nStill I suspect that the fix should be similar, so I'll go put a coin\non a missing CCI().\n--\nMichael",
"msg_date": "Mon, 5 Feb 2024 16:21:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Feb-05, Michael Paquier wrote:\n\n> On Fri, Feb 02, 2024 at 07:00:01PM +0300, Alexander Lakhin wrote:\n> > results in:\n> > NOTICE: merging definition of column \"i\" for child \"b\"\n> > NOTICE: merging definition of column \"i\" for child \"c\"\n> > ERROR: tuple already updated by self\n> > \n> > (This is similar to bug #18297, but ATExecAddColumn() isn't guilty in this\n> > case.)\n> \n> Still I suspect that the fix should be similar, so I'll go put a coin\n> on a missing CCI().\n\nHmm, let me have a look, I can probably get this one fixed today before\nembarking on a larger fix elsewhere in the same feature.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"After a quick R of TFM, all I can say is HOLY CR** THAT IS COOL! PostgreSQL was\namazing when I first started using it at 7.2, and I'm continually astounded by\nlearning new features and techniques made available by the continuing work of\nthe development team.\"\nBerend Tober, http://archives.postgresql.org/pgsql-hackers/2007-08/msg01009.php\n\n\n",
"msg_date": "Mon, 5 Feb 2024 09:51:33 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Feb-05, Alvaro Herrera wrote:\n\n> Hmm, let me have a look, I can probably get this one fixed today before\n> embarking on a larger fix elsewhere in the same feature.\n\nYou know what -- this missing CCI has a much more visible impact, which\nis that the attnotnull marker that a primary key imposes on a partition\nis propagated early. So this regression test no longer fails:\n\ncreate table cnn2_parted(a int primary key) partition by list (a);\ncreate table cnn2_part1(a int);\nalter table cnn2_parted attach partition cnn2_part1 for values in (1);\n\nHere, in the existing code the ALTER TABLE ATTACH fails with the error\nmessage that\n ERROR: primary key column \"a\" is not marked NOT NULL\nbut with the patch, this no longer occurs.\n\nI'm not sure that this behavior change is desirable ... I have vague\nmemories of people complaining that this sort of error was not very\nwelcome ... but on the other hand it seems now pretty clear that if it\n*is* desirable, then its implementation is no good, because a single\nadded CCI breaks it.\n\nI'm leaning towards accepting the behavior change, but I'd like to\ninvestigate a little bit more first, but what do others think?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 5 Feb 2024 10:50:56 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Feb-05, Alvaro Herrera wrote:\n\n> Subject: [PATCH v1] Fix failure to merge NOT NULL constraints in inheritance\n> \n> set_attnotnull() was not careful to CommandCounterIncrement() in cases\n> of multiple recursion. Omission in b0e96f311985.\n\nEh, this needs to read \"multiple inheritance\" rather than \"multiple\nrecursion\". (I'd also need to describe the change for the partitioning\ncases in the commit message.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Someone said that it is at least an order of magnitude more work to do\nproduction software than a prototype. I think he is wrong by at least\nan order of magnitude.\" (Brian Kernighan)\n\n\n",
"msg_date": "Mon, 5 Feb 2024 10:53:17 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Feb-05, Alvaro Herrera wrote:\n\n> So this regression test no longer fails:\n> \n> create table cnn2_parted(a int primary key) partition by list (a);\n> create table cnn2_part1(a int);\n> alter table cnn2_parted attach partition cnn2_part1 for values in (1);\n\n> Here, in the existing code the ALTER TABLE ATTACH fails with the error\n> message that\n> ERROR: primary key column \"a\" is not marked NOT NULL\n> but with the patch, this no longer occurs.\n\nI think this change is OK. In the partition, the primary key is created\nin the partition anyway (as expected) which marks the column as\nattnotnull[*], and the table is scanned for presence of NULLs if there's\nno not-null constraint, and not scanned if there's one. (The actual\nscan is inevitable anyway because we must check the partition\nconstraint). This seems the behavior we want.\n\n[*] This attnotnull constraint is lost if you DETACH the partition and\ndrop the primary key, which is also the behavior we want.\n\n\nWhile playing with it I noticed this other behavior change from 16,\n\ncreate table pa (a int primary key) partition by list (a);\ncreate table pe (a int unique);\nalter table pa attach partition pe for values in (1, null);\n\nIn 16, we get the error:\nERROR: column \"a\" in child table must be marked NOT NULL\nwhich is correct (because the PK requires not-null). In master we just\nlet that through, but that seems to be a separate bug.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Saca el libro que tu religión considere como el indicado para encontrar la\noración que traiga paz a tu alma. Luego rebootea el computador\ny ve si funciona\" (Carlos Duclós)\n\n\n",
"msg_date": "Mon, 5 Feb 2024 15:47:19 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Feb-05, Alvaro Herrera wrote:\n\n> While playing with it I noticed this other behavior change from 16,\n> \n> create table pa (a int primary key) partition by list (a);\n> create table pe (a int unique);\n> alter table pa attach partition pe for values in (1, null);\n> \n> In 16, we get the error:\n> ERROR: column \"a\" in child table must be marked NOT NULL\n> which is correct (because the PK requires not-null). In master we just\n> let that through, but that seems to be a separate bug.\n\nHmm, so my initial reaction was to make the constraint-matching code\nignore the constraint in the partition-to-be if it's not the same type\n(this is what patch 0002 here does) ... but what ends up happening is\nthat we create a separate, identical constraint+index for the primary\nkey. I don't like that behavior too much myself, as it seems too\nmagical and surprising, since it could cause the ALTER TABLE ATTACH\noperation of a large partition become costly and slower, since it needs\nto create an index instead of merely scanning the whole data.\n\nI'll look again at the idea of raising an error if the not-null\nconstraint is not already present. That seems safer (and also, it's\nwhat we've been doing all along).\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Mon, 5 Feb 2024 19:11:18 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "On Mon, Feb 5, 2024 at 5:51 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2024-Feb-05, Alvaro Herrera wrote:\n>\n> > Hmm, let me have a look, I can probably get this one fixed today before\n> > embarking on a larger fix elsewhere in the same feature.\n>\n> You know what -- this missing CCI has a much more visible impact, which\n> is that the attnotnull marker that a primary key imposes on a partition\n> is propagated early. So this regression test no longer fails:\n>\n> create table cnn2_parted(a int primary key) partition by list (a);\n> create table cnn2_part1(a int);\n> alter table cnn2_parted attach partition cnn2_part1 for values in (1);\n>\n> Here, in the existing code the ALTER TABLE ATTACH fails with the error\n> message that\n> ERROR: primary key column \"a\" is not marked NOT NULL\n> but with the patch, this no longer occurs.\n>\n> I'm not sure that this behavior change is desirable ... I have vague\n> memories of people complaining that this sort of error was not very\n> welcome ... but on the other hand it seems now pretty clear that if it\n> *is* desirable, then its implementation is no good, because a single\n> added CCI breaks it.\n>\n> I'm leaning towards accepting the behavior change, but I'd like to\n> investigate a little bit more first, but what do others think?\n>\n\nif you place CommandCounterIncrement inside the `if (recurse)` branch,\nthen the regression test will be ok.\n\ndiff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c\nindex 9f516967..25e225c2 100644\n--- a/src/backend/commands/tablecmds.c\n+++ b/src/backend/commands/tablecmds.c\n@@ -7719,6 +7719,9 @@ set_attnotnull(List **wqueue, Relation rel, AttrNumber attnum, bool recurse,\n                                                           false));\n         retval |= set_attnotnull(wqueue, childrel, childattno,\n                                  recurse, lockmode);\n+\n+        CommandCounterIncrement();\n",
"msg_date": "Wed, 7 Feb 2024 17:44:24 +0800",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "(I think I had already argued this point, but I don't see it in the\narchives, so here it is again).\n\nOn 2024-Feb-07, jian he wrote:\n\n> if you place CommandCounterIncrement inside the `if (recurse)` branch,\n> then the regression test will be ok.\n\nYeah, but don't you think this is too magical? I mean, randomly added\nCCIs in the execution path for other reasons would break this. Worse --\nhow can we _ensure_ that no CCIs occur at all? I mean, it's possible\nthat an especially crafted multi-subcommand ALTER TABLE could contain\njust the right CCI to break things in the opposite way. The difference\nin behavior would be difficult to justify. (For good or ill, ALTER\nTABLE ATTACH PARTITION cannot run in a multi-subcommand ALTER TABLE, so\nthis concern might be misplaced. Still, more certainty seems better\nthan less.)\n\nI've pushed both these patches now, adding what seemed a reasonable set\nof test cases. If there still are cases behaving in unexpected ways,\nplease let me know.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n\n",
"msg_date": "Mon, 15 Apr 2024 15:20:55 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Jan-25, Andrew Bille wrote:\n\n> Starting from b0e96f31, pg_upgrade fails with inherited NOT NULL constraint:\n> For example upgrade from 9c13b6814a (or REL_12_STABLE .. REL_16_STABLE) to\n> b0e96f31 (or master) with following two tables (excerpt from\n> src/test/regress/sql/rules.sql)\n> \n> create table test_0 (id serial primary key);\n> create table test_1 (id integer primary key) inherits (test_0);\n\nI have pushed a fix which should hopefully fix this problem\n(d9f686a72e). Please give this a look. Thanks for reporting the issue.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"I apologize for the confusion in my previous responses.\n There appears to be an error.\" (ChatGPT)\n\n\n",
"msg_date": "Thu, 18 Apr 2024 15:39:12 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hello Alvaro,\n\n18.04.2024 16:39, Alvaro Herrera wrote:\n> I have pushed a fix which should hopefully fix this problem\n> (d9f686a72e). Please give this a look. Thanks for reporting the issue.\n\nPlease look at an assertion failure, introduced with d9f686a72:\nCREATE TABLE t(a int, NOT NULL a NO INHERIT);\nCREATE TABLE t2() INHERITS (t);\n\nALTER TABLE t ADD CONSTRAINT nna NOT NULL a;\nTRAP: failed Assert(\"lockmode != NoLock || IsBootstrapProcessingMode() || CheckRelationLockedByMe(r, AccessShareLock, \ntrue)\"), File: \"relation.c\", Line: 67, PID: 2980258\n\nOn d9f686a72~1 this script results in:\nERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"t_a_not_null\" on relation \"t\"\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 18 Apr 2024 23:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "Hi Alexander,\n\nOn 2024-Apr-18, Alexander Lakhin wrote:\n\n> 18.04.2024 16:39, Alvaro Herrera wrote:\n> > I have pushed a fix which should hopefully fix this problem\n> > (d9f686a72e). Please give this a look. Thanks for reporting the issue.\n> \n> Please look at an assertion failure, introduced with d9f686a72:\n> CREATE TABLE t(a int, NOT NULL a NO INHERIT);\n> CREATE TABLE t2() INHERITS (t);\n> \n> ALTER TABLE t ADD CONSTRAINT nna NOT NULL a;\n> TRAP: failed Assert(\"lockmode != NoLock || IsBootstrapProcessingMode() ||\n> CheckRelationLockedByMe(r, AccessShareLock, true)\"), File: \"relation.c\",\n> Line: 67, PID: 2980258\n\nAh, of course -- we're missing acquiring locks during the prep phase for\nthe recursive case of ADD CONSTRAINT. So we just need to add\nfind_all_inheritors() to do so in the AT_AddConstraint case in\nATPrepCmd(). However these naked find_all_inheritors() calls look a bit\nugly to me, so I couldn't resist the temptation of adding a static\nfunction ATLockAllDescendants to clean it up a bit. I'll also add your\nscript to the tests and push shortly.\n\n> On d9f686a72~1 this script results in:\n> ERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"t_a_not_null\" on relation \"t\"\n\nRight. Now I'm beginning to wonder if allowing ADD CONSTRAINT to mutate\na pre-existing NO INHERIT constraint into an inheritable constraint\n(while accepting a constraint name in the command that we don't heed) is\nreally what we want. Maybe we should throw some error when the affected\nconstraint is the topmost one, and only accept the inheritance status\nchange when we're recursing.\n\nAlso I just noticed that in 9b581c534186 (which introduced this error\nmessage) I used ERRCODE_DATATYPE_MISMATCH ... Is that really appropriate\nhere?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Once again, thank you and all of the developers for your hard work on\nPostgreSQL. This is by far the most pleasant management experience of\nany database I've worked on.\" (Dan Harris)\nhttp://archives.postgresql.org/pgsql-performance/2006-04/msg00247.php",
"msg_date": "Mon, 22 Apr 2024 12:22:23 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "On 2024-Apr-22, Alvaro Herrera wrote:\n\n> > On d9f686a72~1 this script results in:\n> > ERROR: cannot change NO INHERIT status of inherited NOT NULL constraint \"t_a_not_null\" on relation \"t\"\n> \n> Right. Now I'm beginning to wonder if allowing ADD CONSTRAINT to mutate\n> a pre-existing NO INHERIT constraint into an inheritable constraint\n> (while accepting a constraint name in the command that we don't heed) is\n> really what we want. Maybe we should throw some error when the affected\n> constraint is the topmost one, and only accept the inheritance status\n> change when we're recursing.\n\nSo I added a restriction that we only accept such a change when\nrecursively adding a constraint, or during binary upgrade. This should\nlimit the damage: you're no longer able to change an existing constraint\nfrom NO INHERIT to YES INHERIT merely by doing another ALTER TABLE ADD\nCONSTRAINT.\n\nOne thing that has me a little nervous about this whole business is\nwhether we're set up to error out where some child table down the\nhierarchy has nulls, and we add a not-null constraint to it but fail to\ndo a verification scan. I tried a couple of cases and AFAICS it works\ncorrectly, but maybe there are other cases I haven't thought about where\nit doesn't.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"You're _really_ hosed if the person doing the hiring doesn't understand\nrelational systems: you end up with a whole raft of programmers, none of\nwhom has had a Date with the clue stick.\" (Andrew Sullivan)\nhttps://postgr.es/m/20050809113420.GD2768@phlogiston.dyndns.org",
"msg_date": "Wed, 24 Apr 2024 19:36:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "24.04.2024 20:36, Alvaro Herrera wrote:\n> So I added a restriction that we only accept such a change when\n> recursively adding a constraint, or during binary upgrade. This should\n> limit the damage: you're no longer able to change an existing constraint\n> from NO INHERIT to YES INHERIT merely by doing another ALTER TABLE ADD\n> CONSTRAINT.\n>\n> One thing that has me a little nervous about this whole business is\n> whether we're set up to error out where some child table down the\n> hierarchy has nulls, and we add a not-null constraint to it but fail to\n> do a verification scan. I tried a couple of cases and AFAICS it works\n> correctly, but maybe there are other cases I haven't thought about where\n> it doesn't.\n>\n\nThank you for the fix!\n\nWhile studying the NO INHERIT option, I've noticed that the documentation\nprobably misses its specification for NOT NULL:\nhttps://www.postgresql.org/docs/devel/sql-createtable.html\n\nwhere column_constraint is:\n...\n[ CONSTRAINT constraint_name ]\n{ NOT NULL |\n NULL |\n CHECK ( expression ) [ NO INHERIT ] |\n\nAlso, I've found a weird behaviour with a non-inherited NOT NULL\nconstraint for a partitioned table:\nCREATE TABLE pt(a int NOT NULL NO INHERIT) PARTITION BY LIST (a);\nCREATE TABLE dp(a int NOT NULL);\nALTER TABLE pt ATTACH PARTITION dp DEFAULT;\nALTER TABLE pt DETACH PARTITION dp;\nfails with:\nERROR: relation 16389 has non-inherited constraint \"dp_a_not_null\"\n\nThough with an analogous check constraint, I get:\nCREATE TABLE pt(a int, CONSTRAINT nna CHECK (a IS NOT NULL) NO INHERIT) PARTITION BY LIST (a);\nERROR: cannot add NO INHERIT constraint to partitioned table \"pt\"\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 25 Apr 2024 08:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "On 2024-Apr-25, Alexander Lakhin wrote:\n\n> While studying the NO INHERIT option, I've noticed that the documentation\n> probably misses its specification for NOT NULL:\n> https://www.postgresql.org/docs/devel/sql-createtable.html\n> \n> where column_constraint is:\n> ...\n> [ CONSTRAINT constraint_name ]\n> { NOT NULL |\n> NULL |\n> CHECK ( expression ) [ NO INHERIT ] |\n\nHmm, okay, will fix.\n\n> Also, I've found a weird behaviour with a non-inherited NOT NULL\n> constraint for a partitioned table:\n> CREATE TABLE pt(a int NOT NULL NO INHERIT) PARTITION BY LIST (a);\n> CREATE TABLE dp(a int NOT NULL);\n> ALTER TABLE pt ATTACH PARTITION dp DEFAULT;\n> ALTER TABLE pt DETACH PARTITION dp;\n> fails with:\n> ERROR: relation 16389 has non-inherited constraint \"dp_a_not_null\"\n\nUgh. Maybe a way to handle this is to disallow NO INHERIT in\nconstraints on partitioned tables altogether. I mean, they are a\ncompletely useless gimmick, aren't they?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 25 Apr 2024 12:16:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-Apr-25, Alvaro Herrera wrote:\n\n> > Also, I've found a weird behaviour with a non-inherited NOT NULL\n> > constraint for a partitioned table:\n> > CREATE TABLE pt(a int NOT NULL NO INHERIT) PARTITION BY LIST (a);\n\n> Ugh. Maybe a way to handle this is to disallow NO INHERIT in\n> constraints on partitioned tables altogether. I mean, they are a\n> completely useless gimmick, aren't they?\n\nHere are two patches that I intend to push soon (hopefully tomorrow).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No me acuerdo, pero no es cierto. No es cierto, y si fuera cierto,\n no me acuerdo.\" (Augusto Pinochet a una corte de justicia)",
"msg_date": "Wed, 1 May 2024 19:49:35 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "Hello Alvaro,\n\n01.05.2024 20:49, Alvaro Herrera wrote:\n> Here are two patches that I intend to push soon (hopefully tomorrow).\n>\n\nThank you for fixing those issues!\n\nCould you also clarify, please, how CREATE TABLE ... LIKE is expected to\nwork with NOT NULL constraints?\n\nI wonder whether EXCLUDING CONSTRAINTS (ALL) should cover not-null\nconstraints too. What I'm seeing now, is that:\nCREATE TABLE t1 (i int, CONSTRAINT nn NOT NULL i);\nCREATE TABLE t2 (LIKE t1 EXCLUDING ALL);\n\\d+ t2\n-- ends with:\nNot-null constraints:\n \"nn\" NOT NULL \"i\"\n\nOr a similar case with PRIMARY KEY:\nCREATE TABLE t1 (i int PRIMARY KEY);\nCREATE TABLE t2 (LIKE t1 EXCLUDING CONSTRAINTS EXCLUDING INDEXES);\n\\d+ t2\n-- leaves:\nNot-null constraints:\n \"t2_i_not_null\" NOT NULL \"i\"\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Thu, 2 May 2024 18:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "Hello Alexander\n\nOn 2024-May-02, Alexander Lakhin wrote:\n\n> Could you also clarify, please, how CREATE TABLE ... LIKE is expected to\n> work with NOT NULL constraints?\n\nIt should behave identically to 16. If in 16 you end up with a\nnot-nullable column, then in 17 you should get a not-null constraint.\n\n> I wonder whether EXCLUDING CONSTRAINTS (ALL) should cover not-null\n> constraints too. What I'm seeing now, is that:\n> CREATE TABLE t1 (i int, CONSTRAINT nn NOT NULL i);\n> CREATE TABLE t2 (LIKE t1 EXCLUDING ALL);\n> \\d+ t2\n> -- ends with:\n> Not-null constraints:\n> \"nn\" NOT NULL \"i\"\n\nIn 16, this results in\n Table \"public.t2\"\n Column │ Type │ Collation │ Nullable │ Default │ Storage │ Compression │ Stats target │ Description \n────────┼─────────┼───────────┼──────────┼─────────┼─────────┼─────────────┼──────────────┼─────────────\n i │ integer │ │ not null │ │ plain │ │ │ \nAccess method: heap\n\nso the fact that we have a not-null constraint in pg17 is correct.\n\n\n> Or a similar case with PRIMARY KEY:\n> CREATE TABLE t1 (i int PRIMARY KEY);\n> CREATE TABLE t2 (LIKE t1 EXCLUDING CONSTRAINTS EXCLUDING INDEXES);\n> \\d+ t2\n> -- leaves:\n> Not-null constraints:\n> \"t2_i_not_null\" NOT NULL \"i\"\n\nHere you also end up with a not-nullable column in 16, so I made it do\nthat.\n\nNow you could argue that EXCLUDING CONSTRAINTS is explicit in saying\nthat we don't want the constraints; but in that case why did 16 mark the\ncolumns as not-null? The answer seems to be that the standard requires\nthis. Look at 11.3 <table definition> syntax rule 9) b) iii) 4):\n\n 4) If the nullability characteristic included in LCDi is known not\n nullable, then let LNCi be NOT NULL; otherwise, let LNCi be the\n zero-length character string.\n\nwhere LCDi is \"1) Let LCDi be the column descriptor of the i-th column\nof LT.\" and then\n\n 5) Let CDi be the <column definition>\n LCNi LDTi LNCi\n\n\nNow, you could claim that the standard doesn't mention\nINCLUDING/EXCLUDING CONSTRAINTS, therefore since we have come up with\nits definition then we should make it affect not-null constraints.\nHowever, there's also this note:\n\n NOTE 520 — <column constraint>s, except for NOT NULL, are not included in\n CDi; <column constraint definition>s are effectively transformed to <table\n constraint definition>s and are thereby also excluded.\n\nwhich is explicitly saying that not-null constraints are treated\ndifferently; in essence, with INCLUDING CONSTRAINTS we choose to affect\nthe constraints that the standard says to ignore.\n\n\nThanks for looking!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n",
"msg_date": "Thu, 2 May 2024 18:21:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "02.05.2024 19:21, Alvaro Herrera wrote:\n\n> Now, you could claim that the standard doesn't mention\n> INCLUDING/EXCLUDING CONSTRAINTS, therefore since we have come up with\n> its definition then we should make it affect not-null constraints.\n> However, there's also this note:\n>\n> NOTE 520 — <column constraint>s, except for NOT NULL, are not included in\n> CDi; <column constraint definition>s are effectively transformed to <table\n> constraint definition>s and are thereby also excluded.\n>\n> which is explicitly saying that not-null constraints are treated\n> differently; in essence, with INCLUDING CONSTRAINTS we choose to affect\n> the constraints that the standard says to ignore.\n\nThank you for very detailed and convincing explanation!\n\nNow I see what the last sentence here (from [1]) means:\nINCLUDING CONSTRAINTS\n\n CHECK constraints will be copied. No distinction is made between\n column constraints and table constraints. _Not-null constraints are\n always copied to the new table._\n\n(I hadn't paid enough attention to it, because this exact paragraph is\nalso presented in previous versions...)\n\n[1] https://www.postgresql.org/docs/devel/sql-createtable.html\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Fri, 3 May 2024 07:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "Hello,\n\nAt Wed, 1 May 2024 19:49:35 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Here are two patches that I intend to push soon (hopefully tomorrow).\n\nThis commit added and edited two error messages, resulting in using\nslightly different wordings \"in\" and \"on\" for relation constraints.\n\n+ errmsg(\"cannot change NO INHERIT status of NOT NULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n===\n+ errmsg(\"cannot change NO INHERIT status of NOT NULL constraint \\\"%s\\\" in relation \\\"%s\\\"\",\n\nI think we usually use \"on\" in this case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 07 May 2024 17:17:24 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
    "msg_contents": "On 2024-May-07, Kyotaro Horiguchi wrote:\n\n> Hello,\n> \n> At Wed, 1 May 2024 19:49:35 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Here are two patches that I intend to push soon (hopefully tomorrow).\n> \n> This commit added and edited two error messages, resulting in using\n> slightly different wordings \"in\" and \"on\" for relation constraints.\n> \n> + errmsg(\"cannot change NO INHERIT status of NOT NULL constraint \\\"%s\\\" on relation \\\"%s\\\"\",\n> ===\n> + errmsg(\"cannot change NO INHERIT status of NOT NULL constraint \\\"%s\\\" in relation \\\"%s\\\"\",\n\nThank you, I hadn't noticed the inconsistency -- I fix this in the\nattached series.\n\nWhile trying to convince myself that I could mark the remaining open\nitem for this work closed, I discovered that pg_dump fails to produce\nworking output for some combinations. Notably, if I create Andrew\nBille's example in 16:\n\ncreate table test_0 (id serial primary key);\ncreate table test_1 (id integer primary key) inherits (test_0);\n\nthen current master's pg_dump produces output that the current server\nfails to restore, failing the PK creation in test_0:\n\nALTER TABLE ONLY public.test_0\n ADD CONSTRAINT test_0_pkey PRIMARY KEY (id);\nERROR: cannot change NO INHERIT status of NOT NULL constraint \"pgdump_throwaway_notnull_0\" in relation \"test_1\"\n\nbecause we have already created the NOT NULL NO INHERIT constraint in\ntest_1 when we created it, and because of d45597f72fe5, we refuse to\nchange it into a regular inheritable constraint, which the PK in its\nparent table needs.\n\nI spent a long time trying to think how to fix this, and I had despaired\nwanting to write that I would need to revert the whole NOT NULL business\nfor pg17 -- but that was until I realized that we don't actually need\nthis NOT NULL NO INHERIT business except during pg_upgrade, and that\nsimplifies things enough to give me confidence that the whole feature\ncan be kept.\n\nBecause, remember: the idea of those NO INHERIT \"throwaway\" constraints\nis that we can skip reading the data when we create the PRIMARY KEY\nduring binary upgrade. We don't actually need the NO INHERIT\nconstraints for anything during regular pg_dump. So what we can do, is\nrestrict the usage of NOT NULL NO INHERIT so that they occur only during\npg_upgrade. I think this will make Justin P. happier, because we no\nlonger have this unsightly NOT NULL NO INHERIT nonstandard syntax in\ndumps.\n\nThe attached patch series does that. Actually, it does a little more,\nbut it's not really much:\n\n0001: fix the typos pointed out by Kyotaro.\n\n0002: A mechanical code movement that takes some ugly ballast out of\ngetTableAttrs into its own routine. I realized that this new code was\nfar too ugly and messy to be in the middle of filling the tbinfo struct\nof attributes. If you use \"git show --color-moved\n--color-moved-ws=ignore-all-space\" with this commit you can see that\nnothing happens apart from the code move.\n\n0003: pgindent, fixes the comments just moved to account for different\nindentation depth.\n\n0004: moves again the moved PQfnumber() calls back to getTableAttrs(),\nfor efficiency (we don't want to search the result for those resnums for\nevery single attribute of all tables being dumped).\n\n0005: This is the actual code change I describe above. We restrict\nuse_throwaway_nulls so that it's only set during binary upgrade mode.\nThis changes pg_dump output; in the normal case, we no longer have NOT\nNULL NO INHERIT. I added one test stanza to verify that pg_upgrade\nretains these clauses, where they are critical.\n\n0006: Tighten up what d45597f72fe5 did, in that outside of binary\nupgrade mode, we no longer accept changes to NOT NULL NO INHERIT\nconstraints so that they become INHERIT. Previously we accepted that\nduring recursion, but this isn't really very principled. (I had\naccepted this because pg_dump required it for some other cases). This\nchanges some test output, and I also simplify some test cases that were\ntesting stuff that's no longer interesting.\n\n(To push, I'll squash 0002+0003+0004 as a single one, and perhaps 0005\nwith them; I produced them like this only to make them easy to see\nwhat's changing.)\n\n\nI also have a pending patch for 16 that adds tables like the problematic\nones so that they remain for future pg_upgrade testing. With the\nchanges in this series, the whole thing finally works AFAICT.\n\nI did notice one more small bit of weirdness, which is that at the end\nof the process you may end up with constraints that retain the throwaway\nname. This doesn't seem at all critical, considering that you can't\ndrop them anyway and such names do not survive a further dump (because\nthey are marked as inherited constraint without a \"local\" definition, so\nthey're not dumped separately). I would still like to fix it, but it\nseems to require undue contortions so I may end up not doing anything\nabout it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 8 May 2024 22:42:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, May 8, 2024 at 4:42 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I spent a long time trying to think how to fix this, and I had despaired\n> wanting to write that I would need to revert the whole NOT NULL business\n> for pg17 -- but that was until I realized that we don't actually need\n> this NOT NULL NO INHERIT business except during pg_upgrade, and that\n> simplifies things enough to give me confidence that the whole feature\n> can be kept.\n\nYeah, I have to admit that the ongoing bug fixing here has started to\nmake me a bit nervous, but I also can't totally follow everything\nthat's under discussion, so I don't want to rush to judgement. I feel\nlike we might need some documentation or a README or something that\nexplains the takeaway from the recent commits dealing with no-inherit\nconstraints. None of those commits updated the documentation, which\nmay be fine, but neither the resulting behavior nor the reasoning\nbehind it is obvious. It's not enough for it to be correct -- it has\nto be understandable enough to the hive mind that we can maintain it\ngoing forward.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 May 2024 16:05:58 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-09, Robert Haas wrote:\n\n> Yeah, I have to admit that the ongoing bug fixing here has started to\n> make me a bit nervous, but I also can't totally follow everything\n> that's under discussion, so I don't want to rush to judgement.\n\nI have found two more problems that I think are going to require some\nmore work to fix, so I've decided to cut my losses now and revert the\nwhole. I'll come back again in 18 with these problems fixed.\n\nSpecifically, the problem is that I mentioned that we could restrict the\nNOT NULL NO INHERIT addition in pg_dump for primary keys to occur only\nin pg_upgrade; but it turns out this is not correct. In normal\ndump/restore, there's an additional table scan to check for nulls when\nthe constraint is not there, so the PK creation would become measurably\nslower. (In a table with a million single-int rows, PK creation goes\nfrom 2000ms to 2300ms due to the second scan to check for nulls).\n\nThe addition of NOT NULL NO INHERIT constraints for this purpose\ncollides with addition of constraints for other reasons, and it forces\nus to do unpleasant things such as altering an existing constraint to go\nfrom NO INHERIT to INHERIT. If this happens only during pg_upgrade,\nthat would be okay IMV; but if we're forced to allow it in normal operation\n(and in some cases we are), it could cause inconsistencies, so I don't\nwant to do that. I see a way to fix this (adding another query in\npg_dump that detects which columns descend from ones used in PKs in\nancestor tables), but that's definitely too much additional mechanism to\nbe adding this late in the cycle.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Sat, 11 May 2024 11:40:01 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-11, Alvaro Herrera wrote:\n\n> I have found two more problems that [] require some more work to fix,\n> so I've decided to cut my losses now and revert the whole.\n\nHere's the revert patch, which I intend to push early tomorrow.\n\nCommits reverted are:\n21ac38f498b33f0231842238b83847ec63dfe07b\nd45597f72fe53a53f6271d5ba4e7acf8fc9308a1\n13daa33fa5a6d340f9be280db14e7b07ed11f92e\n0cd711271d42b0888d36f8eda50e1092c2fed4b3\nd72d32f52d26c9588256de90b9bc54fe312cee60\nd9f686a72ee91f6773e5d2bc52994db8d7157a8e\nc3709100be73ad5af7ff536476d4d713bca41b1a\n3af7217942722369a6eb7629e0fb1cbbef889a9b\nb0f7dd915bca6243f3daf52a81b8d0682a38ee3b\nac22a9545ca906e70a819b54e76de38817c93aaf\nd0ec2ddbe088f6da35444fad688a62eae4fbd840\n9b581c53418666205938311ef86047aa3c6b741f\nb0e96f311985bceba79825214f8e43f65afa653a\n\nwith some significant conflict fixes (mostly in the last one).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El sentido de las cosas no viene de las cosas, sino de\nlas inteligencias que las aplican a sus problemas diarios\nen busca del progreso.\" (Ernesto Hernández-Novich)",
"msg_date": "Sun, 12 May 2024 16:56:09 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Sat, May 11, 2024 at 5:40 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I have found two more problems that I think are going to require some\n> more work to fix, so I've decided to cut my losses now and revert the\n> whole. I'll come back again in 18 with these problems fixed.\n\nBummer, but makes sense.\n\n> Specifically, the problem is that I mentioned that we could restrict the\n> NOT NULL NO INHERIT addition in pg_dump for primary keys to occur only\n> in pg_upgrade; but it turns out this is not correct. In normal\n> dump/restore, there's an additional table scan to check for nulls when\n> the constraint is not there, so the PK creation would become measurably\n> slower. (In a table with a million single-int rows, PK creation goes\n> from 2000ms to 2300ms due to the second scan to check for nulls).\n\nI have a feeling that any theory of the form \"X only needs to happen\nduring pg_upgrade\" is likely to be wrong. pg_upgrade isn't really\ndoing anything especially unusual: just creating some objects and\nloading data. Those things can also be done at other times, so\nwhatever is needed during pg_upgrade is also likely to be needed at\nother times. Maybe that's not sound reasoning for some reason or\nother, but that's my intuition.\n\n> The addition of NOT NULL NO INHERIT constraints for this purpose\n> collides with addition of constraints for other reasons, and it forces\n> us to do unpleasant things such as altering an existing constraint to go\n> from NO INHERIT to INHERIT. If this happens only during pg_upgrade,\n> that would be okay IMV; but if we're forced to allow it in normal operation\n> (and in some cases we are), it could cause inconsistencies, so I don't\n> want to do that. I see a way to fix this (adding another query in\n> pg_dump that detects which columns descend from ones used in PKs in\n> ancestor tables), but that's definitely too much additional mechanism to\n> be adding this late in the cycle.\n\nI'm sorry that I haven't been following this thread closely, but I'm\nconfused about how we ended up here. What exactly are the user-visible\nbehavior changes wrought by this patch, and how do they give rise to\nthese issues? One change I know about is that a constraint that is\nexplicitly catalogued (vs. just existing implicitly) has a name. But\nit isn't obvious to me that such a difference, by itself, is enough to\ncause all of these problems: if a NOT NULL constraint is created\nwithout a name, then I suppose we just have to generate one. Maybe the\nfact that the constraints have names somehow causes ugliness later,\nbut I can't quite understand why it would.\n\nThe other possibility that occurs to me is that I think the motivation\nfor cataloging NOT NULL constraints was that we wanted to be able to\ntrack dependencies on them, or something like that, which seems like\nit might be able to create issues of the type that you're facing, but\nthe details aren't clear to me. Changing any behavior in this area\nseems like it could be quite tricky, because of things like the\ninteraction between PRIMARY KEY and NOT NULL, which is rather\nidiosyncratic but upon which a lot of existing SQL (including SQL not\ncontrolled by us) likely depends. If there's not a clear plan for how\nwe keep all the stuff that works today working, I fear we'll end up in\nan endless game of whack-a-mole. If you've already written the design\nideas down someplace, I'd appreciate a pointer in the right direction.\n\nOr maybe there's some other issue entirely. In any case, sorry about\nthe revert, and sorry that I haven't paid more attention to this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 May 2024 09:00:28 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-13, Robert Haas wrote:\n\n> On Sat, May 11, 2024 at 5:40 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Specifically, the problem is that I mentioned that we could restrict the\n> > NOT NULL NO INHERIT addition in pg_dump for primary keys to occur only\n> > in pg_upgrade; but it turns out this is not correct. In normal\n> > dump/restore, there's an additional table scan to check for nulls when\n> > the constraint is not there, so the PK creation would become measurably\n> > slower. (In a table with a million single-int rows, PK creation goes\n> > from 2000ms to 2300ms due to the second scan to check for nulls).\n> \n> I have a feeling that any theory of the form \"X only needs to happen\n> during pg_upgrade\" is likely to be wrong. pg_upgrade isn't really\n> doing anything especially unusual: just creating some objects and\n> loading data. Those things can also be done at other times, so\n> whatever is needed during pg_upgrade is also likely to be needed at\n> other times. Maybe that's not sound reasoning for some reason or\n> other, but that's my intuition.\n\nTrue. It may be that by setting up the upgrade SQL script differently,\nwe don't need to make the distinction at all. I hope to be able to do\nthat.\n\n> I'm sorry that I haven't been following this thread closely, but I'm\n> confused about how we ended up here. What exactly are the user-visible\n> behavior changes wrought by this patch, and how do they give rise to\n> these issues?\n\nThe problematic point is the need to add NOT NULL constraints during\ntable creation that don't exist in the table being dumped, for\nperformance of primary key creation -- I called this a throwaway\nconstraint. We needed to be able to drop those constraints after the PK\nwas created. These were marked NO INHERIT to allow them to be dropped,\nwhich is easier if the children don't have them.
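In dump-script terms this is roughly the following sequence (a sketch\nusing a made-up single-column table; the throwaway name is the one the\ngenerated scripts use):\n\nCREATE TABLE t (\n a integer CONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT\n);\n\nCOPY t (a) FROM stdin;\n\\.\n\nALTER TABLE ONLY t\n ADD CONSTRAINT t_pkey PRIMARY KEY (a);\nALTER TABLE ONLY t\n DROP CONSTRAINT pgdump_throwaway_notnull_0;\n\n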
This all worked fine.\n\nHowever, at some point we realized that we needed to add NOT NULL\nconstraints in child tables for the columns in which the parent had a\nprimary key. Then things become messy because we had the throwaway\nconstraints on one hand and the not-nulls that descend from the PK on\nthe other hand, where one was NO INHERIT and the other wasn't; worse if\nthe child also has a primary key.\n\nIt turned out that we didn't have any mechanism to transform a NO\nINHERIT constraint into a regular one that would be inherited. I added\none, didn't like the way it worked, tried to restrict it but that caused\nother problems; this is the mess that led to the revert (pg_dump in\nnormal mode would emit scripts that fail for some legitimate cases).\n\nOne possible way forward might be to make pg_dump smarter by adding one\nmore query to know the relationship between constraints that must be\ndropped and those that don't. Another might be to allow multiple\nnot-null constraints on the same column (one inherits, the other\ndoesn't, and you can drop them independently). There may be others.\n\n> The other possibility that occurs to me is that I think the motivation\n> for cataloging NOT NULL constraints was that we wanted to be able to\n> track dependencies on them, or something like that, which seems like\n> it might be able to create issues of the type that you're facing, but\n> the details aren't clear to me.\n\nNOT VALID constraints would be extremely useful, for one thing (because\nthen you don't need to exclusively-lock the table during a long scan in\norder to add a constraint), and it's just one step away from having\nthese constraints be catalogued. It was also fixing some inconsistent\nhandling of inheritance cases.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 13 May 2024 15:44:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, May 13, 2024 at 9:44 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> The problematic point is the need to add NOT NULL constraints during\n> table creation that don't exist in the table being dumped, for\n> performance of primary key creation -- I called this a throwaway\n> constraint. We needed to be able to drop those constraints after the PK\n> was created. These were marked NO INHERIT to allow them to be dropped,\n> which is easier if the children don't have them. This all worked fine.\n\nThis seems really weird to me. Why is it necessary? I mean, in\nexisting releases, if you declare a column as PRIMARY KEY, the columns\nincluded in the key are forced to be NOT NULL, and you can't change\nthat for so long as they are included in the PRIMARY KEY. So I would\nhave thought that after this patch, you'd end up with the same thing.\nOne way of doing that would be to make the PRIMARY KEY depend on the\nnow-catalogued NOT NULL constraints, and the other way would be to\nkeep it as an ad-hoc prohibition, same as now. In PostgreSQL 16, I get\na dump like this:\n\nCREATE TABLE public.foo (\n a integer NOT NULL,\n b text\n);\n\nCOPY public.foo (a, b) FROM stdin;\n\\.\n\nALTER TABLE ONLY public.foo\n ADD CONSTRAINT foo_pkey PRIMARY KEY (a);\n\nIf I'm dumping from an existing release, I don't see why any of that\nneeds to change. The NOT NULL decoration should lead to a\nsystem-generated constraint name. If I'm dumping from a new release,\nthe NOT NULL decoration needs to be replaced with CONSTRAINT\nexisting_constraint_name NOT NULL. But I don't see why I need to end\nup with what the patch generates, which seems to be something like\nCONSTRAINT pgdump_throwaway_notnull_0 NOT NULL NO INHERIT.
That kind\nof thing suggests that we're changing around the order of operations\nin pg_dump, probably by adding the NOT NULL constraints at a later\nstage than currently, and I think the proper solution is most likely\nto be to avoid doing that in the first place.\n\n> However, at some point we realized that we needed to add NOT NULL\n> constraints in child tables for the columns in which the parent had a\n> primary key. Then things become messy because we had the throwaway\n> constraints on one hand and the not-nulls that descend from the PK on\n> the other hand, where one was NO INHERIT and the other wasn't; worse if\n> the child also has a primary key.\n\nThis seems like another problem that is created by changing the order\nof operations in pg_dump.\n\n> > The other possibility that occurs to me is that I think the motivation\n> > for cataloging NOT NULL constraints was that we wanted to be able to\n> > track dependencies on them, or something like that, which seems like\n> > it might be able to create issues of the type that you're facing, but\n> > the details aren't clear to me.\n>\n> NOT VALID constraints would be extremely useful, for one thing (because\n> then you don't need to exclusively-lock the table during a long scan in\n> order to add a constraint), and it's just one step away from having\n> these constraints be catalogued. It was also fixing some inconsistent\n> handling of inheritance cases.\n\nI agree that NOT VALID constraints would be very useful. I'm a little\nscared by the idea of fixing inconsistent handling of inheritance\ncases, just for fear that there may be more things relying on the\ninconsistent behavior than we realize. I feel like this is an area\nwhere it's easy for changes to be scarier than they at first seem. I\nstill have memories of discovering some of the current behavior back\nin the mid-2000s when I was learning PostgreSQL (and databases\ngenerally). It struck me as fiddly back then, and it still does.
I\nfeel like there are probably some behaviors that look like arbitrary\ndecisions but are actually very important for some undocumented\nreason. That's not to say that we shouldn't try to make improvements,\njust that it may be hard to get right.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 May 2024 11:14:42 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-13, Robert Haas wrote:\n\n> On Mon, May 13, 2024 at 9:44 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > The problematic point is the need to add NOT NULL constraints during\n> > table creation that don't exist in the table being dumped, for\n> > performance of primary key creation -- I called this a throwaway\n> > constraint. We needed to be able to drop those constraints after the PK\n> > was created. These were marked NO INHERIT to allow them to be dropped,\n> > which is easier if the children don't have them. This all worked fine.\n> \n> This seems really weird to me. Why is it necessary? I mean, in\n> existing releases, if you declare a column as PRIMARY KEY, the columns\n> included in the key are forced to be NOT NULL, and you can't change\n> that for so long as they are included in the PRIMARY KEY.\n\nThe point is that a column can be in a primary key and not have an\nexplicit not-null constraint. This is different from having a column be\nNOT NULL and having a primary key on top. In both cases the attnotnull\nflag is set; the difference between these two scenarios is what happens\nif you drop the primary key. If you do not have an explicit not-null\nconstraint, then the attnotnull flag is lost as soon as you drop the\nprimary key.
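With the patch, a session would behave roughly like this (a sketch;\ntable names are made up):\n\nCREATE TABLE t (a integer PRIMARY KEY);\n-- attnotnull is set, but there is no explicit not-null constraint\nALTER TABLE t DROP CONSTRAINT t_pkey;\n-- attnotnull is cleared again\n\nCREATE TABLE u (a integer NOT NULL PRIMARY KEY);\nALTER TABLE u DROP CONSTRAINT u_pkey;\n-- attnotnull stays set, because the explicit constraint remains\n\n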
You don't have to do DROP NOT NULL for that to happen.\n\nThis means that if you have a column that's in the primary key but does\nnot have an explicit not-null constraint, then we shouldn't make one up.\n(Which we would, if we were to keep an unadorned NOT NULL that we can't\ndrop at the end of the dump.)\n\n> So I would have thought that after this patch, you'd end up with the\n> same thing.\n\nAt least as I interpret the standard, you wouldn't.\n\n> One way of doing that would be to make the PRIMARY KEY depend on the\n> now-catalogued NOT NULL constraints, and the other way would be to\n> keep it as an ad-hoc prohibition, same as now.\n\nThat would be against what [I think] the standard says.\n\n> But I don't see why I need to end up with what the patch generates,\n> which seems to be something like CONSTRAINT pgdump_throwaway_notnull_0\n> NOT NULL NO INHERIT. That kind of thing suggests that we're changing\n> around the order of operations in pg_dump, probably by adding the NOT\n> NULL constraints at a later stage than currently, and I think the\n> proper solution is most likely to be to avoid doing that in the first\n> place.\n\nThe point of the throwaway constraints is that they don't remain after\nthe dump has restored completely. They are there only so that we don't\nhave to scan the data looking for possible nulls when we create the\nprimary key. We have a DROP CONSTRAINT for the throwaway not-nulls as\nsoon as the PK is created.\n\nWe're not changing any order of operations as such.\n\n> That's not to say that we shouldn't try to make improvements, just\n> that it may be hard to get right.\n\nSure, that's why this patch has now been reverted twice :-) and has been\nin the works for ... how many years now?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 13 May 2024 18:45:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, May 13, 2024 at 12:45 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> The point is that a column can be in a primary key and not have an\n> explicit not-null constraint. This is different from having a column be\n> NOT NULL and having a primary key on top. In both cases the attnotnull\n> flag is set; the difference between these two scenarios is what happens\n> if you drop the primary key. If you do not have an explicit not-null\n> constraint, then the attnotnull flag is lost as soon as you drop the\n> primary key. You don't have to do DROP NOT NULL for that to happen.\n>\n> This means that if you have a column that's in the primary key but does\n> not have an explicit not-null constraint, then we shouldn't make one up.\n> (Which we would, if we were to keep an unadorned NOT NULL that we can't\n> drop at the end of the dump.)\n\nIt seems to me that the practical thing to do about this problem is\njust decide not to solve it. I mean, it's currently the case that if\nyou establish a PRIMARY KEY when you create a table, the columns of\nthat key are marked NOT NULL and remain NOT NULL even if the primary\nkey is later dropped. So, if that didn't change, we would be no less\ncompliant with the SQL standard (or your reading of it) than we are\nnow. And if you do really want to make that change, why not split it\nout into its own patch, so that the patch that does $SUBJECT is\nchanging the minimal number of other things at the same time? That\nway, reverting something might not involve reverting everything, plus\nyou could have a separate design discussion about what that fix ought\nto look like, separate from the issues that are truly inherent to\ncataloging NOT NULL constraints per se.\n\nWhat I meant about changing the order of operations is that,\ncurrently, the database knows that the column is NOT NULL before the\nCOPY happens, and I don't think we can change that. I think you agree\n-- that's why you invented the throwaway constraints.
As far as I can\nsee, the problems all have to do with getting the \"throwaway\" part to\nhappen correctly. It can't be a problem to just mark the relevant\ncolumns NOT NULL in the relevant tables -- we already do that. But if\nyou want to discard some of those NOT NULL markings once the PRIMARY\nKEY is added, you have to know which ones to discard. If we just\nconsider the most straightforward scenario where somebody does a full\ndump-and-restore, getting that right may be annoying, but it seems\nlike it surely has to be possible. The dump will just have to\nunderstand which child tables (or, more generally, descendant tables)\ngot a NOT NULL marking on a column because of the PK and which ones\nhad an explicit marking in the old database and do the right thing in\neach case.\n\nBut what if somebody does a selective restore of one table from a\npartitioning hierarchy? Currently, the columns that would have been\npart of the primary key end up NOT NULL, but the primary key itself is\nnot restored because it can't be. What will happen in this new system?\nIf you don't apply any NOT NULL constraints to those columns, then a\nuser who restores one partition from an old dump and tries to reattach\nit to the correct partitioned table has to recheck the NOT NULL\nconstraint, unlike now. If you apply a normal-looking garden-variety\nNOT NULL constraint to that column, you've invented a constraint that\ndidn't exist in the source database. And if you apply a throwaway NOT\nNULL constraint but the user never attaches that table anywhere, then\nthe throwaway constraint survives. None of those options sound very\ngood to me.\n\nAnother scenario: Say that you have a table with a PRIMARY KEY. For\nsome reason, you want to drop the primary key and then add it back.\nWell, with this definitional change, as soon as you drop it, you\nforget that the underlying columns don't contain any nulls, so when\nyou add it back, you have to check them again.
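Sketched as a session (again under the patched behavior, with a made-up\ntable):\n\nALTER TABLE t DROP CONSTRAINT t_pkey; -- implied not-null is forgotten\nALTER TABLE t ADD PRIMARY KEY (a); -- must re-scan the table for nulls\n\n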
I don't know who would\nfind that behavior an improvement over what we have today.\n\nSo I don't really think it's a great idea to change this behavior, but\neven if it is, is it such a good idea that we want to sink the whole\npatch set repeatedly over it, as has already happened twice now? I\nfeel that if we did what Tom suggested a year ago in\nhttps://www.postgresql.org/message-id/3801207.1681057430@sss.pgh.pa.us\n-- \"I'm inclined to think that this idea of suppressing the implied\nNOT NULL from PRIMARY KEY is a nonstarter and we should just go ahead\nand make such a constraint\" -- there's a very good chance that a\nrevert would have been avoided here and it would still be just as\nvalid to think of revisiting this particular question in a future\nrelease as it is now.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 May 2024 14:58:25 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-13, Robert Haas wrote:\n\n> It seems to me that the practical thing to do about this problem is\n> just decide not to solve it. I mean, it's currently the case that if\n> you establish a PRIMARY KEY when you create a table, the columns of\n> that key are marked NOT NULL and remain NOT NULL even if the primary\n> key is later dropped. So, if that didn't change, we would be no less\n> compliant with the SQL standard (or your reading of it) than we are\n> now.\n[...]\n> So I don't really think it's a great idea to change this behavior, but\n> even if it is, is it such a good idea that we want to sink the whole\n> patch set repeatedly over it, as has already happened twice now? I\n> feel that if we did what Tom suggested a year ago in\n> https://www.postgresql.org/message-id/3801207.1681057430@sss.pgh.pa.us\n> -- \"I'm inclined to think that this idea of suppressing the implied\n> NOT NULL from PRIMARY KEY is a nonstarter and we should just go ahead\n> and make such a constraint\" [...]\n\nHmm, I hadn't interpreted Tom's message the way you suggest, and you may\nbe right that it might be a good way forward. I'll keep this in mind\nfor next time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"No es bueno caminar con un hombre muerto\"\n\n\n",
"msg_date": "Tue, 14 May 2024 11:58:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Sun, May 12, 2024 at 04:56:09PM +0200, Álvaro Herrera wrote:\n> On 2024-May-11, Alvaro Herrera wrote:\n> \n> > I have found two more problems that [] require some more work to fix,\n> > so I've decided to cut my losses now and revert the whole.\n> \n> Here's the revert patch, which I intend to push early tomorrow.\n> \n> Commits reverted are:\n> 21ac38f498b33f0231842238b83847ec63dfe07b\n> d45597f72fe53a53f6271d5ba4e7acf8fc9308a1\n> 13daa33fa5a6d340f9be280db14e7b07ed11f92e\n> 0cd711271d42b0888d36f8eda50e1092c2fed4b3\n> d72d32f52d26c9588256de90b9bc54fe312cee60\n> d9f686a72ee91f6773e5d2bc52994db8d7157a8e\n> c3709100be73ad5af7ff536476d4d713bca41b1a\n> 3af7217942722369a6eb7629e0fb1cbbef889a9b\n> b0f7dd915bca6243f3daf52a81b8d0682a38ee3b\n> ac22a9545ca906e70a819b54e76de38817c93aaf\n> d0ec2ddbe088f6da35444fad688a62eae4fbd840\n> 9b581c53418666205938311ef86047aa3c6b741f\n> b0e96f311985bceba79825214f8e43f65afa653a\n> \n> with some significant conflict fixes (mostly in the last one).\n\nTurns out these commits generated a single release note item, which I\nhave now removed with the attached committed patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 14 May 2024 21:32:49 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Mon, May 13, 2024 at 09:00:28AM -0400, Robert Haas wrote:\n> > Specifically, the problem is that I mentioned that we could restrict the\n> > NOT NULL NO INHERIT addition in pg_dump for primary keys to occur only\n> > in pg_upgrade; but it turns out this is not correct. In normal\n> > dump/restore, there's an additional table scan to check for nulls when\n> > the constraint is not there, so the PK creation would become measurably\n> > slower. (In a table with a million single-int rows, PK creation goes\n> > from 2000ms to 2300ms due to the second scan to check for nulls).\n> \n> I have a feeling that any theory of the form \"X only needs to happen\n> during pg_upgrade\" is likely to be wrong. pg_upgrade isn't really\n> doing anything especially unusual: just creating some objects and\n> loading data. Those things can also be done at other times, so\n> whatever is needed during pg_upgrade is also likely to be needed at\n> other times. Maybe that's not sound reasoning for some reason or\n> other, but that's my intuition.\n\nI assume Alvaro is saying that pg_upgrade has only a single session,\nwhich is unique and might make things easier for him.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 14 May 2024 21:34:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 2024-May-14, Bruce Momjian wrote:\n\n> Turns out these commits generated a single release note item, which I\n> have now removed with the attached committed patch.\n\nHmm, but the commits about not-null constraints for domains were not\nreverted, only the ones for constraints on relations. I think the\nrelease notes don't properly address the ones on domains. I think it's\nat least these two commits:\n\n> -Author: Peter Eisentraut <peter@eisentraut.org>\n> -2024-03-20 [e5da0fe3c] Catalog domain not-null constraints\n> -Author: Peter Eisentraut <peter@eisentraut.org>\n> -2024-04-15 [9895b35cb] Fix ALTER DOMAIN NOT NULL syntax\n\nIt may still be a good idea to make a note about those, at least to\npoint out that information_schema now lists them. For example, pg11\nrelease notes had this item\n\n<!--\n2018-02-07 [32ff26911] Add more information_schema columns\n-->\n\n <para>\n Add <literal>information_schema</literal> columns related to table\n constraints and triggers (Peter Eisentraut)\n </para>\n\n <para>\n Specifically,\n <structname>triggers</structname>.<structfield>action_order</structfield>,\n <structname>triggers</structname>.<structfield>action_reference_old_table</structfield>,\n and\n <structname>triggers</structname>.<structfield>action_reference_new_table</structfield>\n are now populated, where before they were always null. Also,\n <structname>table_constraints</structname>.<structfield>enforced</structfield>\n now exists but is not yet usefully populated.\n </para>\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 15 May 2024 09:50:36 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On 15.05.24 09:50, Alvaro Herrera wrote:\n> On 2024-May-14, Bruce Momjian wrote:\n> \n>> Turns out these commits generated a single release note item, which I\n>> have now removed with the attached committed patch.\n> \n> Hmm, but the commits about not-null constraints for domains were not\n> reverted, only the ones for constraints on relations. I think the\n> release notes don't properly address the ones on domains. I think it's\n> at least these two commits:\n> \n>> -Author: Peter Eisentraut <peter@eisentraut.org>\n>> -2024-03-20 [e5da0fe3c] Catalog domain not-null constraints\n>> -Author: Peter Eisentraut <peter@eisentraut.org>\n>> -2024-04-15 [9895b35cb] Fix ALTER DOMAIN NOT NULL syntax\n\nI'm confused that these were kept. The first one was specifically to \nmake the catalog representation of domain not-null constraints \nconsistent with table not-null constraints. But the table part was \nreverted, so now the domain constraints are inconsistent again.\n\nThe second one refers to the first one, but it might also fix some \nadditional older issue, so it would need more investigation.\n\n\n\n",
"msg_date": "Wed, 15 May 2024 14:37:48 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
},
{
"msg_contents": "On Wed, May 15, 2024 at 09:50:36AM +0200, Álvaro Herrera wrote:\n> On 2024-May-14, Bruce Momjian wrote:\n> \n> > Turns out these commits generated a single release note item, which I\n> > have now removed with the attached committed patch.\n> \n> Hmm, but the commits about not-null constraints for domains were not\n> reverted, only the ones for constraints on relations. I think the\n> release notes don't properly address the ones on domains. I think it's\n> at least these two commits:\n> \n> > -Author: Peter Eisentraut <peter@eisentraut.org>\n> > -2024-03-20 [e5da0fe3c] Catalog domain not-null constraints\n> > -Author: Peter Eisentraut <peter@eisentraut.org>\n> > -2024-04-15 [9895b35cb] Fix ALTER DOMAIN NOT NULL syntax\n> \n> It may still be a good idea to make a note about those, at least to\n> point out that information_schema now lists them. For example, pg11\n> release notes had this item\n\nLet me explain what I did to adjust the release notes. I took your\ncommit hashes, which were longer than mine, and got the commit subject\ntext from them. I then searched the release notes to see which commit\nsubjects existed in the document. Only the first three did, and the\nrelease note item has five commits.\n\nI then tested if the last two patches could be reverted, and 'patch'\nthought they could be, so that confirmed they were not reverted.\n\nHowever, there was no text in the release note item that corresponded to\nthe commits, so I just removed the entire item.\n\nWhat I now think happened is that the last two commits were considered\npart of the larger NOT NULL change, and not worth mentioning separately,\nbut now that the NOT NULL part is reverted, we might need to mention\nthem.\n\nI rarely handle such complex cases so I don't think I was totally\ncorrect in my handling.
Let's get a reply to Peter Eisentraut's\nquestion and we can figure out what to do.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Wed, 15 May 2024 23:09:48 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: cataloguing NOT NULL constraints"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nWhile working on Pluggable TOAST we extended the PG_ATTRIBUTE table with a\nnew\ncolumn 'atttoaster'. But it is obvious that this column is related to\ntables, columns and datatypes\nonly, and is not needed for other attributes.\nYou can find full discussion on Pluggable TOAST here\nhttps://www.postgresql.org/message-id/flat/224711f9-83b7-a307-b17f-4457ab73aa0a@sigaev.ru\n\nWe already had some thoughts on storing, let's call them \"optional\"\nattributes into 'attoptions'\ninstead of extending the PG_ATTRIBUTE table, and here came feedback from\nAndres Freund\nwith a remark that we're increasing the largest catalog table. So we\ndecided to propose moving\nthese \"optional\" attributes from being the PG_ATTRIBUTE column to be the\npart of 'attoptions'\ncolumn of this table.\nThe first candidates to store in the attoptions column are the\n'atttoaster' and\n'attcompression', because they are related to datatypes and table columns.\nAlso, this change\nwill allow setting options for custom Toasters, which makes a lot of sense\ntoo, along with\nan important [as we see it] 'force TOAST' option which is meant to force a\ngiven value to be\nTOASTed bypassing existing logic (reference depends on tuple and value\nsize).\n\nAlso, we suggest that options stored in 'attoptions' column could be packed\nas JSON values.\n\nIt seems to make a lot of sense to optimize PG_ATTRIBUTE structure and size\nwith attributes\nrelated only to specific types, etc.\n\nWe'd welcome any opinions, suggestions and advice!\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/",
"msg_date": "Wed, 17 Aug 2022 21:14:13 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "RFC: Moving specific attributes from pg_attribute column into attoptions"
},
{
"msg_contents": "Nikita Malakhov <hukutoc@gmail.com> writes:\n> We already had some thoughts on storing, let's call them \"optional\"\n> attributes into 'attoptions' instead of extending the PG_ATTRIBUTE\n> table, and here came feedback from Andres Freund with a remark that\n> we're increasing the largest catalog table. So we decided to propose\n> moving these \"optional\" attributes from being the PG_ATTRIBUTE column to\n> be the part of 'attoptions' column of this table.\n\nThis smells very much like what was done eons ago to create the\npg_attrdef catalog. I don't have any concrete comments to make,\nonly to suggest that that's an instructive parallel case. One\nthing that comes to mind immediately is whether this stuff could\nbe unified with pg_attrdef instead of creating Yet Another catalog\nthat has to be consulted on the way to getting any real work done.\n\nI think that pg_attrdef was originally separated to keep large\ndefault expressions from overrunning the maximum tuple size,\na motivation that disappeared once we could TOAST system tables.\nHowever, nowadays it's still useful for it to be separate because\nit simplifies representation of dependencies of default expressions\n(pg_depend refers to OIDs of pg_attrdef entries for that).\nIf we're thinking of moving anything that would need dependency\nmanagement then it might need its own catalog, maybe?\n\nOn the whole I'm not convinced that what you suggest will be a\nnet win. pg_attrdef wins to the extent that there are a lot of\ncolumns with no non-null default and hence no need for any pg_attrdef\nentry. But the minute you move something that most tables need, like\nattcompression, you'll just have another bloated catalog to deal with.\n\n> Also, we suggest that options stored in 'attoptions' column could be packed\n> as JSON values.\n\nPlease, no. Use of JSON in a SQL database pretty much always\nrepresents a failure to think hard enough about what you need\nto store. Sometimes it's not worth thinking all that hard;\nbut I strenuously oppose applying that sort of standard in\nthe system catalogs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 17 Aug 2022 16:51:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RFC: Moving specific attributes from pg_attribute column into attoptions"
},
{
"msg_contents": "Hi hackers!\n\nTom, thank you for your feedback!\nWe thought about this because it already seems that custom Toasters\ncould have a bunch of options, so we are already thinking about how to\nstore them.\n\nI'll check if we can implement storing Toaster options in PG_ATTRDEF.\n\nAndres Freund complained that the 'atttoaster' column extends what is\nalready the largest catalog table. It is a reasonable complaint because\nthe atttoaster option only makes sense for columns and datatypes, and the\nDefault Toaster is accessible by global constant\nDEFAULT_TOASTER_OID\nand does not require accessing the PG_ATTRDEF table.\n\nAlso, we thought about making Toaster responsible for column compression\nand thus moving 'attcompression' out from PG_ATTRIBUTE column to\nToaster options. What do you think about this?\n\nUsing JSON - accepted, we won't do it.\n\nOn Wed, Aug 17, 2022 at 11:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Nikita Malakhov <hukutoc@gmail.com> writes:\n> > We already had some thoughts on storing, let's call them \"optional\"\n> > attributes into 'attoptions' instead of extending the PG_ATTRIBUTE\n> > table, and here came feedback from Andres Freund with a remark that\n> > we're increasing the largest catalog table. So we decided to propose\n> > moving these \"optional\" attributes from being the PG_ATTRIBUTE column to\n> > be the part of 'attoptions' column of this table.\n>\n> This smells very much like what was done eons ago to create the\n> pg_attrdef catalog. I don't have any concrete comments to make,\n> only to suggest that that's an instructive parallel case. One\n> thing that comes to mind immediately is whether this stuff could\n> be unified with pg_attrdef instead of creating Yet Another catalog\n> that has to be consulted on the way to getting any real work done.\n>\n> I think that pg_attrdef was originally separated to keep large\n> default expressions from overrunning the maximum tuple size,\n> a motivation that disappeared once we could TOAST system tables.\n> However, nowadays it's still useful for it to be separate because\n> it simplifies representation of dependencies of default expressions\n> (pg_depend refers to OIDs of pg_attrdef entries for that).\n> If we're thinking of moving anything that would need dependency\n> management then it might need its own catalog, maybe?\n>\n> On the whole I'm not convinced that what you suggest will be a\n> net win. pg_attrdef wins to the extent that there are a lot of\n> columns with no non-null default and hence no need for any pg_attrdef\n> entry. But the minute you move something that most tables need, like\n> attcompression, you'll just have another bloated catalog to deal with.\n>\n> > Also, we suggest that options stored in 'attoptions' column could be\n> packed\n> > as JSON values.\n>\n> Please, no. Use of JSON in a SQL database pretty much always\n> represents a failure to think hard enough about what you need\n> to store. Sometimes it's not worth thinking all that hard;\n> but I strenuously oppose applying that sort of standard in\n> the system catalogs.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/",
"msg_date": "Thu, 18 Aug 2022 16:38:31 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RFC: Moving specific attributes from pg_attribute column into attoptions"
}
] |
[
{
"msg_contents": "Hi,\n\nI was hacking on making aix work with the meson patchset last night when I\nnoticed this delightful bit:\n\ngmake -C src/interfaces/libpq\n...\n\nrm -f libpq.a\nar crs libpq.a fe-auth-scram.o fe-connect.o fe-exec.o fe-lobj.o fe-misc.o fe-print.o fe-protocol3.o fe-secure.o fe-trace.o legacy-pqsignal.o libpq-events.o pqexpbuffer.o fe-auth.o\ntouch libpq.a\n\n( echo '#! libpq.so.5'; gawk '/^[^#]/ {printf \"%s\\n\",$1}' /home/andres/src/postgres/build-ac/../src/interfaces/libpq/exports.txt ) >libpq.exp\ngcc -maix64 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wimplicit-fallthrough=3 -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -Wno-format-truncation -O2 -pthread -D_REENTRANT -D_THREAD_SAFE -o libpq.so.5 libpq.a -Wl,-bE:libpq.exp -L../../../src/port -L../../../src/common -lpgcommon_shlib -lpgport_shlib -Wl,-bbigtoc -Wl,-blibpath:'/usr/local/pgsql/lib:/usr/lib:/lib' -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -lm\n\nrm -f libpq.a\nar crs libpq.a libpq.so.5\n\n\nWe first create a static library libpq.a as normal, but then we overwrite it\nwith the special aix way of packing up shared libraries, by packing them up in\na static library. That part is correct, it's apparently the easiest way of\ngetting applications to link to shared libraries on AIX (I think the\n-Wl,-bM:SRE is relevant for ensuring it'll be a dynamic link, rather than a\nstatic one).\n\nThis likely has been going on for approximately forever.\n\nTwo questions:\n1) Do we continue building static libraries for libpq etc?\n2) Do we care about static libraries not surviving on AIX? There could also be\n a race in the build rules leading to sometimes static libs sometimes shared\n libs winning, I think.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 12:01:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "static libpq (and other libraries) overwritten on aix"
},
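The export-file generation step in the build log above can be exercised on its own. This is a sketch using plain awk (the quoted rule uses gawk) against a stand-in exports.txt; the file contents here are illustrative, not the real libpq export list:

```shell
# Recreate the libpq.exp step from the quoted link rule: the first token of
# every non-comment line in exports.txt becomes one exported symbol, after a
# '#! libpq.so.5' first line naming the shared object.
set -e
cd "$(mktemp -d)"
printf '# stand-in for src/interfaces/libpq/exports.txt\nPQconnectdb               1\nPQfinish                  2\n' > exports.txt
( echo '#! libpq.so.5'; awk '/^[^#]/ {printf "%s\n",$1}' exports.txt ) > libpq.exp
cat libpq.exp
```

The resulting libpq.exp is what the `-Wl,-bE:libpq.exp` option in the quoted gcc command hands to the AIX linker.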
{
"msg_contents": "On Wed, Aug 17, 2022 at 3:02 PM Andres Freund <andres@anarazel.de> wrote:\n> 2) Do we care about static libraries not suriving on AIX? There could also be\n> a race in the buildrules leading to sometimes static libs sometimes shared\n> libs winning, I think.\n\nInstead of overwriting the same file, can we not use different\nfilenames for different things?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 17 Aug 2022 15:28:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 15:28:18 -0400, Robert Haas wrote:\n> On Wed, Aug 17, 2022 at 3:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > 2) Do we care about static libraries not suriving on AIX? There could also be\n> > a race in the buildrules leading to sometimes static libs sometimes shared\n> > libs winning, I think.\n>\n> Instead of overwriting the same file, can we not use different\n> filenames for different things?\n\nNot easily, as far as I understand. The way one customarily links to shared\nlibraries on aix is to have an .a archive containing the shared library. That\nway the -lpq picks up libpq.a, which then triggers the shared library to be\nreferenced.\n\nE.g.\nandres@gcc119:[/home/andres/src/postgres/build-ac]$ LIBPATH=$(pwd)/src/interfaces/libpq ldd src/bin/scripts/clusterdb\nsrc/bin/scripts/clusterdb needs:\n /usr/lib/libc.a(shr_64.o)\n /usr/lib/libpthread.a(shr_xpg5_64.o)\n /usr/lib/libreadline.a(libreadline.so.6)\n /home/andres/src/postgres/build-ac/src/interfaces/libpq/libpq.a(libpq.so.5)\n /unix\n /usr/lib/libcrypt.a(shr_64.o)\n /usr/lib/libcurses.a(shr42_64.o)\n /usr/lib/libpthreads.a(shr_xpg5_64.o)\n\nNote the .a(libpq.so.5) bit.\n\n\nUnfortunately that's exactly how one links to a static library as well.\n\nSo we'd have to change the name used as -l$this between linking to a shared\nlibpq and a static libpq.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Aug 2022 13:08:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "The AIX sections of Makefile.shlib misuse the terms \"static\" and \"shared\".\nImagine s/static library/library ending in .a/ and s/shared library/library\nending in .so/. That yields an accurate description of the makefile rules.\n\nOn Wed, Aug 17, 2022 at 12:01:54PM -0700, Andres Freund wrote:\n> Two questions:\n> 1) Do we continue building static libraries for libpq etc?\n\nEssentially, we don't build static libpq today, and we should continue not\nbuilding it. (The first-built libpq.a is static, but that file is an\nimplementation detail of the makefile rules. The surviving libpq.a is a\nnormal AIX shared library.)\n\n> 2) Do we care about static libraries not suriving on AIX?\n\nNo.\n\n> There could also be\n> a race in the buildrules leading to sometimes static libs sometimes shared\n> libs winning, I think.\n\nNot since commit e8564ef, to my knowledge.\n\n\nAlong the lines of Robert's comment, it could be a nice code beautification to\nuse a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n\nI found this useful years ago:\nhttps://web.archive.org/web/20151003130212/http://seriousbirder.com/blogs/aix-shared-and-static-libraries-explained/\n\n\n",
"msg_date": "Wed, 17 Aug 2022 21:59:29 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 12:59 AM Noah Misch <noah@leadboat.com> wrote:\n> Along the lines of Robert's comment, it could be a nice code beautification to\n> use a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n\nYeah, this is the kind of thing I was thinking about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 18 Aug 2022 10:10:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "\nOn 2022-08-18 Th 10:10, Robert Haas wrote:\n> On Thu, Aug 18, 2022 at 12:59 AM Noah Misch <noah@leadboat.com> wrote:\n>> Along the lines of Robert's comment, it could be a nice code beautification to\n>> use a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n> Yeah, this is the kind of thing I was thinking about.\n\n\n+1 for that and clarifying Makefile.shlib.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 18 Aug 2022 10:31:49 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-17 21:59:29 -0700, Noah Misch wrote:\n> The AIX sections of Makefile.shlib misuse the terms \"static\" and \"shared\".\n>\n> Imagine s/static library/library ending in .a/ and s/shared library/library\n> ending in .so/. That yields an accurate description of the makefile rules.\n\nI realize that aspect.\n\nMy point is that we currently, across most of the other platforms, support\nbuilding a \"proper\" static library, and install it too. But on AIX (and I\nthink mingw), we don't, but without an explicit comment about not doing so. In\nfact, the all-static-lib target on those platforms will build a non-static\nlibrary, which seems not great.\n\n\n> > There could also be\n> > a race in the buildrules leading to sometimes static libs sometimes shared\n> > libs winning, I think.\n>\n> Not since commit e8564ef, to my knowledge.\n\nI'd missed that the $(stlib): ... bit is not defined due to haslibarule being\ndefined...\n\n\n> Along the lines of Robert's comment, it could be a nice code beautification to\n> use a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n\nAgreed, it'd be an improvement.\n\nAfaict we could just stop building the intermediary static lib. Afaict the\nMKLDEXPORT path isn't needed for libraries without an exports.txt because the\nlinker defaults to exporting \"most\" symbols, and for symbols with an\nexports.txt we don't need it either.\n\nThe only path that really needs MKLDEXPORT is postgres. Not really for the\nexport side either, but for the import side.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:03:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 09:03:57AM -0700, Andres Freund wrote:\n> My point is that we currently, across most of the other platforms, support\n> building a \"proper\" static library, and install it too. But on AIX (and I\n> think mingw), we don't, but without an explicit comment about not doing so. In\n> fact, the all-static-lib target on those platforms will build a non-static\n> library, which seems not great.\n\nYep. If someone had just pushed a correct patch to make AIX match our\nGNU/Linux static linking assistance, I wouldn't be arguing to revert that\npatch. At the same time, if someone asks me to choose high-value projects for\n20 people, doing more for static linking on AIX won't be on the list.\n\n> On 2022-08-17 21:59:29 -0700, Noah Misch wrote:\n> > Along the lines of Robert's comment, it could be a nice code beautification to\n> > use a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n> \n> Agreed, it'd be an improvement.\n> \n> Afaict we could just stop building the intermediary static lib. Afaict the\n> MKLDEXPORT path isn't needed for libraries without an exports.txt because the\n> linker defaults to exporting \"most\" symbols\n\nIf that works, great.\n\n\n",
"msg_date": "Thu, 18 Aug 2022 22:56:43 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-18 22:56:43 -0700, Noah Misch wrote:\n> > On 2022-08-17 21:59:29 -0700, Noah Misch wrote:\n> > > Along the lines of Robert's comment, it could be a nice code beautification to\n> > > use a different suffix for the short-lived .a file. Perhaps _so_inputs.a.\n> >\n> > Agreed, it'd be an improvement.\n> >\n> > Afaict we could just stop building the intermediary static lib. Afaict the\n> > MKLDEXPORT path isn't needed for libraries without an exports.txt because the\n> > linker defaults to exporting \"most\" symbols\n>\n> If that works, great.\n\nI looked at that. It's not too hard to make it work. But while doing so I\nencountered some funny bits.\n\nAs far as I can tell the way we build shared libraries on aix with gcc isn't\ncorrect:\n\nWithout -shared gcc won't know that it's building a shared library, which\nafaict will prevent gcc from generating correct unwind info and we end up with\na statically linked copy of libgcc each time.\n\nThe naive thing of just adding -shared fails, but that's our fault:\n\nldd pgoutput.so\npgoutput.so needs:\nCannot find libgcc_s.a(shr.o)\n /usr/lib/libc.a(shr_64.o)\n /unix\n /usr/lib/libcrypt.a(shr_64.o)\n\nMakefile.aix has:\n# -blibpath must contain ALL directories where we should look for libraries\nlibpath := $(shell echo $(subst -L,:,$(filter -L/%,$(LDFLAGS))) | sed -e's/ //g'):/usr/lib:/lib\n\nbut that's insufficient for gcc, because it won't find gcc's runtime lib. We\ncould force a build of the statically linked libgcc, but once it knows it's\ngenerating with a shared library, a static libgcc unfortunately blows up the\nsize of the output considerably.\n\nSo I think we need something like\n\nifeq ($(GCC), yes)\nlibpath := $(libpath):$(dir $(shell gcc -print-libgcc-file-name))\nendif\n\nalthough deferring the computation of that would be nicer, but would require\nsome cleanup before.\n\n\nWith that libraries do shrink a bit. E.g. cube.so goes from 140k to 96k.\n\n\nAfaict there's no reason to generate lib<name>.a for extension .so's, right?\n\n\nWe have plenty of detritus that's vaguely AIX related. The common.mk rule to\ngenerate SUBSYS.o isn't used (mea culpa), and backend/Makefile's postgres.o\nrule hasn't been used for well over 20 years.\n\n\nI'll send in a patch series tomorrow, too tired for today.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 20 Aug 2022 01:35:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 01:35:22AM -0700, Andres Freund wrote:\n> Afaict there's no reason to generate lib<name>.a for extension .so's, right?\n\nRight. We install cube.so, not any *cube.a file.\n\n\n",
"msg_date": "Sat, 20 Aug 2022 09:57:13 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-20 01:35:22 -0700, Andres Freund wrote:\n> I'll send in a patch series tomorrow, too tired for today.\n\nHere it goes.\n\n0001 aix: Fix SHLIB_EXPORTS reference in VPATH builds\n\n That's mostly so I could even build. It's not quite right in the sense that\n we don't depend on the file, but that's a preexisting issue. Could be folded\n in with 0005, which fixes that aspect. Or it could be backpatched as the\n minimal fix.\n\n\n0002 Remove SUBSYS.o rule in common.mk, hasn't been used in a long time\n0003 Remove rule to generate postgres.o, not needed for 20+ years\n\n Both obvious, I think.\n\n\n0004 aix: when building with gcc, tell gcc we're building a shared library\n\n That's the gcc -shared issue I explained in the email I'm replying to.\n\n We should probably consider building executables with -shared-libgcc too,\n that shrinks them a decent amount (e.g. 1371684 -> 1126765 for psql). But\n I've not done that here.\n\n\n0005 aix: No need to use mkldexport when we want to export all symbols\n\n This makes the building of shared libraries a lot more similar to other\n platforms. Export files are only used when an exports.txt is present and\n there are no more intermediary static libraries.\n\n\n0006 configure: Expand -fvisibility checks to more compilers, add -qvisibility\n\n This isn't strictly speaking part of the same \"thread\" of work, but I don't\n want to touch aix more often than I have to... I'll post it in the other\n thread too.\n\n I did just test that this passes at least some tests on aix with xlc and\n solaris with sunpro.\n\nGreetings,\n\nAndres",
"msg_date": "Sat, 20 Aug 2022 10:42:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-20 10:42:13 -0700, Andres Freund wrote:\n> On 2022-08-20 01:35:22 -0700, Andres Freund wrote:\n> > I'll send in a patch series tomorrow, too tired for today.\n> \n> Here it goes.\n\n> 0001 aix: Fix SHLIB_EXPORTS reference in VPATH builds\n> \n> That's mostly so I could even build. It's not quite right in the sense that\n> we don't depend on the file, but that's a preexisting issue. Could be folded\n> in with 0005, which fixes that aspect. Or it could be backpatched as the\n> minimal fix.\n> \n> \n> 0002 Remove SUBSYS.o rule in common.mk, hasn't been used in a long time\n> 0003 Remove rule to generate postgres.o, not needed for 20+ years\n> \n> Both obvious, I think.\n\nPushed these, given that they're all pretty trivial.\n\n\n\n> 0004 aix: when building with gcc, tell gcc we're building a shared library\n> \n> That's the gcc -shared issue I explained in the email I'm replying to.\n> \n> We should probably consider building executables with -shared-libgcc too,\n> that shrinks them a decent amount (e.g. 1371684 -> 1126765 for psql). But\n> I've not done that here.\n> \n> \n> 0005 aix: No need to use mkldexport when we want to export all symbols\n> \n> This makes the building of shared libraries a lot more similar to other\n> platforms. Export files are only used when an exports.txt is present and\n> there's no more intermediary static libraries.\n> \n> \n> 0006 configure: Expand -fvisibility checks to more compilers, add -qvisibility\n> \n> This isn't strictly speaking part of the same \"thread\" of work, but I don't\n> want to touch aix more often than I have too... I'll post it in the other\n> thread too.\n> \n> I did just test that this passes at least some tests on aix with xlc and\n> solaris with sunpro.\n\nAny comments here?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Aug 2022 20:43:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 08:43:04PM -0700, Andres Freund wrote:\n> On 2022-08-20 10:42:13 -0700, Andres Freund wrote:\n> > 0004 aix: when building with gcc, tell gcc we're building a shared library\n> > \n> > That's the gcc -shared issue I explained in the email I'm replying to.\n> > \n> > We should probably consider building executables with -shared-libgcc too,\n> > that shrinks them a decent amount (e.g. 1371684 -> 1126765 for psql). But\n> > I've not done that here.\n> > \n> > \n> > 0005 aix: No need to use mkldexport when we want to export all symbols\n> > \n> > This makes the building of shared libraries a lot more similar to other\n> > platforms. Export files are only used when an exports.txt is present and\n> > there's no more intermediary static libraries.\n> > \n> > \n> > 0006 configure: Expand -fvisibility checks to more compilers, add -qvisibility\n> > \n> > This isn't strictly speaking part of the same \"thread\" of work, but I don't\n> > want to touch aix more often than I have too... I'll post it in the other\n> > thread too.\n> > \n> > I did just test that this passes at least some tests on aix with xlc and\n> > solaris with sunpro.\n> \n> Any comments here?\n\nI don't know much about them, but they sound like the sort of thing that can't\ncause subtle bugs. If they build and test the first time, they're probably\nvalid. You may as well push them.\n\n\n",
"msg_date": "Wed, 24 Aug 2022 21:14:18 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: static libpq (and other libraries) overwritten on aix"
}
] |
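The mechanism this thread revolves around — an installed libpq.a that is an ar archive containing the shared object rather than a static library — can be sketched with plain ar. A placeholder file stands in for a real shared object here, since only AIX's linker gives archive members the dynamic-link treatment shown in the ldd output upthread:

```shell
# Reproduce the packaging step from the quoted build rules: remove any static
# libpq.a and overwrite it with an archive whose only member is the shared
# object. On AIX, linking with -lpq then records a dynamic reference that ldd
# reports as libpq.a(libpq.so.5).
set -e
cd "$(mktemp -d)"
printf 'placeholder for a real shared object\n' > libpq.so.5
rm -f libpq.a
ar crs libpq.a libpq.so.5
ar t libpq.a    # lists the single member: libpq.so.5
```

This is why the thread concludes that the surviving libpq.a is not a static library at all: it is just the container format AIX uses to publish the shared object.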
[
{
"msg_contents": "Hello, hackers.\n\nI was investigating Valgrind issues with plpython. It turns out\npython itself doesn't play well with Valgrind in its default build.\n\nTherefore I built python with valgrind-related flags\n\t--with-valgrind --without-pymalloc\nand added debug flags just to be sure\n\t--with-pydebug --with-assertions\n\nThis causes plpython's tests to fail on python's internal\nassertions.\nExample backtrace (python version 3.7, postgresql master branch):\n\n#8 0x00007fbf02851662 in __GI___assert_fail \"!PyErr_Occurred()\"\n\tat assert.c:101\n#9 0x00007fbef9060d31 in _PyType_Lookup\n\tat Objects/typeobject.c:3117 \n#10 0x00007fbef90461be in _PyObject_GenericGetAttrWithDict\n\tat Objects/object.c:1231 \n#11 0x00007fbef9046707 in PyObject_GenericGetAttr\n\tat Objects/object.c:1309 \n#12 0x00007fbef9043cdf in PyObject_GetAttr\n\tat Objects/object.c:913\n#13 0x00007fbef90458d9 in PyObject_GetAttrString\n\tat Objects/object.c:818\n#14 0x00007fbf02499636 in get_string_attr\n\tat plpy_elog.c:569\n#15 0x00007fbf02498ea5 in PLy_get_error_data\n\tat plpy_elog.c:420\n#16 0x00007fbf0249763b in PLy_elog_impl\n\tat plpy_elog.c:77\n\nLooks like there are several places where the code tries to get\nattributes from error objects, and while the code is ready for\nattribute absence, it doesn't clear the AttributeError exception\nin that case.\n\nThe attached patch adds 3 calls to PyErr_Clear() in places where\nthe code reacts to attribute absence. With this patch the tests\npass.\n\nThere were similar findings before. Calls to PyErr_Clear that were\nclose to, but not exactly at, the same places were removed in\n 7e3bb08038 Fix access-to-already-freed-memory issue in plpython's error handling.\nThen one PyErr_Clear call was added back in\n 1d2f9de38d Fix freshly-introduced PL/Python portability bug.\nBut it looks like more are needed.\n\nPS. When python is compiled with `--with-valgrind --without-pymalloc`,\nValgrind doesn't complain, so there are no memory related\nissues in plpython.\n\nregards\n\n------\n\nYura Sokolov\ny.sokolov",
"msg_date": "Wed, 17 Aug 2022 23:36:59 +0300",
"msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "plpython causes assertions with python debug build"
}
] |
[
{
"msg_contents": "When building on macOS against a Homebrew-provided Perl installation, I \nget these warnings during the build:\n\nld: warning: object file (SPI.o) was built for newer macOS version \n(12.4) than being linked (11.3)\nld: warning: object file (plperl.o) was built for newer macOS version \n(12.4) than being linked (11.3)\n...\n\nThis is because the link command uses the option \n-mmacosx-version-min=11.3, which comes in from perl_embed_ldflags (perl \n-MExtUtils::Embed -e ldopts), but the compile commands don't use that \noption, which creates a situation that ld considers inconsistent.\n\nI think an appropriate fix is to strip out the undesired option from \nperl_embed_ldflags. We already do that for other options. Proposed \npatch attached.",
"msg_date": "Thu, 18 Aug 2022 08:57:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This is because the link command uses the option \n> -mmacosx-version-min=11.3, which comes in from perl_embed_ldflags (perl \n> -MExtUtils::Embed -e ldopts), but the compile commands don't use that \n> option, which creates a situation that ld considers inconsistent.\n\n> I think an appropriate fix is to strip out the undesired option from \n> perl_embed_ldflags. We already do that for other options. Proposed \n> patch attached.\n\nAgreed on rejecting -mmacosx-version-min, but I wonder if we should\nthink about adopting a whitelist-instead-of-blacklist approach to\nadopting stuff from perl_embed_ldflags. ISTR that in pltcl we already\nuse the approach of accepting only -L and -l, and perhaps similar\nstrictness would serve us well here.\n\nAs an example, on a not-too-new MacPorts install, I see\n\n$ /opt/local/bin/perl -MExtUtils::Embed -e ldopts\n -L/opt/local/lib -Wl,-headerpad_max_install_names -fstack-protector-strong -L/opt/local/lib/perl5/5.28/darwin-thread-multi-2level/CORE -lperl\n\nI can't see any really good reason why we should allow perl\nto be injecting that sort of -f option into the plperl build,\nand I'm pretty dubious about the -headerpad_max_install_names\nbit too.\n\nI think also that this would allow us to drop the weird dance of\ntrying to subtract ccdlflags.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:53:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "On 18.08.22 15:53, Tom Lane wrote:\n> Agreed on rejecting -mmacosx-version-min, but I wonder if we should\n> think about adopting a whitelist-instead-of-blacklist approach to\n> adopting stuff from perl_embed_ldflags. ISTR that in pltcl we already\n> use the approach of accepting only -L and -l, and perhaps similar\n> strictness would serve us well here.\n> \n> As an example, on a not-too-new MacPorts install, I see\n> \n> $ /opt/local/bin/perl -MExtUtils::Embed -e ldopts\n> -L/opt/local/lib -Wl,-headerpad_max_install_names -fstack-protector-strong -L/opt/local/lib/perl5/5.28/darwin-thread-multi-2level/CORE -lperl\n> \n> I can't see any really good reason why we should allow perl\n> to be injecting that sort of -f option into the plperl build,\n> and I'm pretty dubious about the -headerpad_max_install_names\n> bit too.\n> \n> I think also that this would allow us to drop the weird dance of\n> trying to subtract ccdlflags.\n\nAfter analyzing the source code of ExtUtils::Embed's ldopts, I think we \ncan also do this by subtracting $Config{ldflags}, since\n\nmy $linkage = \"$ccdlflags $ldflags @archives $ld_or_bs\";\n\nand we really just want the $ld_or_bs part. (@archives should be empty \nfor our uses.)\n\nThis would get rid of -mmacosx-version-min and -arch and all the things \nyou showed, including -L/opt/local/lib, which is probably there so that \nthe build of Perl itself could look there for things, but we don't need it.",
"msg_date": "Fri, 19 Aug 2022 09:12:06 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> After analyzing the source code of ExtUtils::Embed's ldopts, I think we \n> can also do this by subtracting $Config{ldflags}, since\n> my $linkage = \"$ccdlflags $ldflags @archives $ld_or_bs\";\n> and we really just want the $ld_or_bs part. (@archives should be empty \n> for our uses.)\n\n+1, this looks like a nice clean solution. I see that it gets rid\nof stuff we don't really want on RHEL8 as well as various generations\nof macOS.\n\n> This would get rid of -mmacosx-version-min and -arch and all the things \n> you showed, including -L/opt/local/lib, which is probably there so that \n> the build of Perl itself could look there for things, but we don't need it.\n\nIt is a little weird that they are inserting -L/opt/local/lib or\n-L/usr/local/lib on so many different platforms. But I concur\nthat if we need that, we likely should be inserting it ourselves\nrather than absorbing it from their $ldflags.\n\nBTW, I think the -arch business is dead code anyway now that we\ndesupported PPC-era macOS; I do not see any such switches from\nmodern macOS' perl. So not having a special case for that is an\nadditional win.\n\nPatch LGTM; I noted only a trivial typo in the commit message:\n\n-like we already do with $Config{ccdlflags}. Those flags the choices\n+like we already do with $Config{ccdlflags}. Those flags are the choices\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Aug 2022 10:00:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-19 10:00:35 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > After analyzing the source code of ExtUtils::Embed's ldopts, I think we\n> > can also do this by subtracting $Config{ldflags}, since\n> > my $linkage = \"$ccdlflags $ldflags @archives $ld_or_bs\";\n> > and we really just want the $ld_or_bs part. (@archives should be empty\n> > for our uses.)\n>\n> +1, this looks like a nice clean solution. I see that it gets rid\n> of stuff we don't really want on RHEL8 as well as various generations\n> of macOS.\n\nLooks like it'd also get rid of the bogus\n-bE:/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE/perl.exp we're\nwe're picking up on AIX (we had a thread about filtering that out, but I've only\ndone so inside the meson patch, round tuits).\n\nSo +1 from that front.\n\n\nMaybe a daft question: Why do want any of the -l flags other than -lperl? With\nthe patch configure spits out the following on my debian system:\n\nchecking for CFLAGS to compile embedded Perl... -DDEBIAN\nchecking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n\nthose libraries were likely relevant to build libperl, but don't look relevant\nfor linking to it dynamically. Statically would be a different story, but we\nalready insist on a shared build.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 20 Aug 2022 13:44:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> the patch configure spits out the following on my debian system:\n\n> checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n\n> those libraries were likely relevant to build libperl, but don't look relevant\n> for linking to it dynamically.\n\nI'm certain that there are/were platforms that insist on those libraries\nbeing mentioned anyway. Maybe they are all obsolete now?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Aug 2022 16:53:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nFWIW, looks like Peter's patch unbreaks building plperl on AIX using gcc and\nsystem perl. Before we picked up a bunch of xlc specific flags that prevented\nthat.\n\nbefore:\nchecking for flags to link embedded Perl... -brtl -bdynamic -b64 -L/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE -lperl -lpthread -lbind -lnsl -ldl -lld -lm -lcrypt -lpthreads -lc\nnow:\nchecking for flags to link embedded Perl... -L/usr/opt/perl5/lib64/5.28.1/aix-thread-multi-64all/CORE -lperl -lpthread -lbind -lnsl -ldl -lld -lm -lcrypt -lpthreads -lc\n\n\nOn 2022-08-20 16:53:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> > the patch configure spits out the following on my debian system:\n>\n> > checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> > checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n>\n> > those libraries were likely relevant to build libperl, but don't look relevant\n> > for linking to it dynamically.\n>\n> I'm certain that there are/were platforms that insist on those libraries\n> being mentioned anyway. Maybe they are all obsolete now?\n\nI don't think any of the supported platforms require it for stuff used inside\nthe shared library (and we'd be in trouble if so, check e.g. libpq.pc). But of\ncourse that's different if there's inline function / macros getting pulled in.\n\nWhich turns out to be an issue on AIX. All the -l flags added by perl can be\nremoved for xlc, but for gcc, -lpthreads (or -pthread) it is required.\n\nTried it on Solaris (32 bit, not sure if there's a 64bit perl available),\nworks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 20 Aug 2022 14:44:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "On 20.08.22 22:44, Andres Freund wrote:\n> Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> the patch configure spits out the following on my debian system:\n> \n> checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n> \n> those libraries were likely relevant to build libperl, but don't look relevant\n> for linking to it dynamically. Statically would be a different story, but we\n> already insist on a shared build.\n\nLooking inside the ExtUtils::Embed source code, I wonder if there are \nsome installations that have things like -lperl538 or something like \nthat that it wants to deal with.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 16:31:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "On 20.08.22 23:44, Andres Freund wrote:\n> On 2022-08-20 16:53:31 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n>>> the patch configure spits out the following on my debian system:\n>>\n>>> checking for CFLAGS to compile embedded Perl... -DDEBIAN\n>>> checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n>>\n>>> those libraries were likely relevant to build libperl, but don't look relevant\n>>> for linking to it dynamically.\n>>\n>> I'm certain that there are/were platforms that insist on those libraries\n>> being mentioned anyway. Maybe they are all obsolete now?\n> \n> I don't think any of the supported platforms require it for stuff used inside\n> the shared library (and we'd be in trouble if so, check e.g. libpq.pc). But of\n> course that's different if there's inline function / macros getting pulled in.\n> \n> Which turns out to be an issue on AIX. All the -l flags added by perl can be\n> removed for xlc, but for gcc, -lpthreads (or -pthread) it is required.\n> \n> Tried it on Solaris (32 bit, not sure if there's a 64bit perl available),\n> works.\n\nDoes that mean my proposed patch (v2) is adequate for these platforms, \nor does it need further analysis?\n\n\n",
"msg_date": "Mon, 22 Aug 2022 16:32:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 16:32:36 +0200, Peter Eisentraut wrote:\n> On 20.08.22 23:44, Andres Freund wrote:\n> > On 2022-08-20 16:53:31 -0400, Tom Lane wrote:\n> > > Andres Freund <andres@anarazel.de> writes:\n> > > > Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> > > > the patch configure spits out the following on my debian system:\n> > > \n> > > > checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> > > > checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n> > > \n> > > > those libraries were likely relevant to build libperl, but don't look relevant\n> > > > for linking to it dynamically.\n> > > \n> > > I'm certain that there are/were platforms that insist on those libraries\n> > > being mentioned anyway. Maybe they are all obsolete now?\n> > \n> > I don't think any of the supported platforms require it for stuff used inside\n> > the shared library (and we'd be in trouble if so, check e.g. libpq.pc). But of\n> > course that's different if there's inline function / macros getting pulled in.\n> > \n> > Which turns out to be an issue on AIX. All the -l flags added by perl can be\n> > removed for xlc, but for gcc, -lpthreads (or -pthread) it is required.\n> > \n> > Tried it on Solaris (32 bit, not sure if there's a 64bit perl available),\n> > works.\n> \n> Does that mean my proposed patch (v2) is adequate for these platforms, or\n> does it need further analysis?\n\nI think it's a clear improvement over the status quo. Unnecessary -l's are\npretty harmless compared to random other flags.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 08:37:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 16:31:53 +0200, Peter Eisentraut wrote:\n> On 20.08.22 22:44, Andres Freund wrote:\n> > Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> > the patch configure spits out the following on my debian system:\n> > \n> > checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> > checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n> > \n> > those libraries were likely relevant to build libperl, but don't look relevant\n> > for linking to it dynamically. Statically would be a different story, but we\n> > already insist on a shared build.\n> \n> Looking inside the ExtUtils::Embed source code, I wonder if there are some\n> installations that have things like -lperl538 or something like that that it\n> wants to deal with.\n\nThere definitely are - I wasn't trying to suggest we'd add -lperl ourselves,\njust that we'd try to only add -lperl* based on some Config variable.\n\nWe have plenty fragile windows specific logic that would be nice to get rid\nof. But there's more exciting things to wrangle, I have to admit.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 08:41:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 08:37:40 -0700, Andres Freund wrote:\n> On 2022-08-22 16:32:36 +0200, Peter Eisentraut wrote:\n> > On 20.08.22 23:44, Andres Freund wrote:\n> > > On 2022-08-20 16:53:31 -0400, Tom Lane wrote:\n> > > > Andres Freund <andres@anarazel.de> writes:\n> > > > > Maybe a daft question: Why do want any of the -l flags other than -lperl? With\n> > > > > the patch configure spits out the following on my debian system:\n> > > > \n> > > > > checking for CFLAGS to compile embedded Perl... -DDEBIAN\n> > > > > checking for flags to link embedded Perl... -L/usr/lib/x86_64-linux-gnu/perl/5.34/CORE -lperl -ldl -lm -lpthread -lc -lcrypt\n> > > > \n> > > > > those libraries were likely relevant to build libperl, but don't look relevant\n> > > > > for linking to it dynamically.\n> > > > \n> > > > I'm certain that there are/were platforms that insist on those libraries\n> > > > being mentioned anyway. Maybe they are all obsolete now?\n> > > \n> > > I don't think any of the supported platforms require it for stuff used inside\n> > > the shared library (and we'd be in trouble if so, check e.g. libpq.pc). But of\n> > > course that's different if there's inline function / macros getting pulled in.\n> > > \n> > > Which turns out to be an issue on AIX. All the -l flags added by perl can be\n> > > removed for xlc, but for gcc, -lpthreads (or -pthread) it is required.\n> > > \n> > > Tried it on Solaris (32 bit, not sure if there's a 64bit perl available),\n> > > works.\n> > \n> > Does that mean my proposed patch (v2) is adequate for these platforms, or\n> > does it need further analysis?\n> \n> I think it's a clear improvement over the status quo. Unnecessary -l's are\n> pretty harmless compared to random other flags.\n\nFWIW, while trying to mirror the same logic in meson I learned that the new\nlogic removes the rpath setting from the parameters on at least netbsd and\nsuse. We'll add them back - unless --disable-rpath is used. I think some\ndistributions build with --disable-rpath. Not sure if worth worrying about.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 19:11:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> FWIW, while trying to mirror the same logic in meson I learned that the new\n> logic removes the rpath setting from the parameters on at least netbsd and\n> suse. We'll add them back - unless --disable-rpath is used.\n\nHmm ... on my shiny new netbsd buildfarm animal, I see:\n\n$ perl -MConfig -e 'print \"$Config{ccdlflags}\"'\n-Wl,-E -Wl,-R/usr/pkg/lib/perl5/5.34.0/powerpc-netbsd-thread-multi/CORE\n\n$ perl -MConfig -e 'print \"$Config{ldflags}\"'\n -pthread -L/usr/lib -Wl,-R/usr/lib -Wl,-R/usr/pkg/lib -L/usr/pkg/lib\n\n$ perl -MExtUtils::Embed -e ldopts\n-Wl,-E -Wl,-R/usr/pkg/lib/perl5/5.34.0/powerpc-netbsd-thread-multi/CORE -pthread -L/usr/lib -Wl,-R/usr/lib -Wl,-R/usr/pkg/lib -L/usr/pkg/lib -L/usr/pkg/lib/perl5/5.34.0/powerpc-netbsd-thread-multi/CORE -lperl -lm -lcrypt -lpthread\n\nSo we were *already* stripping the rpath for where libperl.so is,\nand now we also strip -L and rpath for /usr/lib (which surely is\npointless) and for /usr/pkg/lib (which in point of fact have to\nbe added to the PG configuration options anyway). These Perl\noptions seem a bit inconsistent ...\n\n> I think some\n> distributions build with --disable-rpath. Not sure if worth worrying about.\n\nI believe that policy only makes sense for distros that expect every\nshared library to appear in /usr/lib (or /usr/lib64 perhaps). Red Hat\nfor one does that --- but they follow through: libperl.so is there.\n\nI think we're good here, at least till somebody points out a platform\nwhere we aren't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 22:49:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "On 19.08.22 09:12, Peter Eisentraut wrote:\n> After analyzing the source code of ExtUtils::Embed's ldopts, I think we \n> can also do this by subtracting $Config{ldflags}, since\n> \n> my $linkage = \"$ccdlflags $ldflags @archives $ld_or_bs\";\n> \n> and we really just want the $ld_or_bs part. (@archives should be empty \n> for our uses.)\n> \n> This would get rid of -mmacosx-version-min and -arch and all the things \n> you showed, including -L/opt/local/lib, which is probably there so that \n> the build of Perl itself could look there for things, but we don't need it.\n\nThis patch has failed on Cygwin lorikeet:\n\nBefore:\n\nchecking for flags to link embedded Perl...\n -Wl,--enable-auto-import -Wl,--export-all-symbols \n-Wl,--enable-auto-image-base -fstack-protector-strong \n-L/usr/lib/perl5/5.32/x86_64-cygwin-threads/CORE -lperl -lpthread -ldl \n-lcrypt\n\nAfter:\n\nchecking for flags to link embedded Perl... \n-L/usr/lib/perl5/5.32/x86_64-cygwin-threads/CORE -lperl -lpthread -ldl \n-lcrypt\n\nThat's as designed. But the plperl tests fail:\n\nCREATE EXTENSION plperl;\n+ERROR: incompatible library \n\"/home/andrew/bf/root/HEAD/inst/lib/postgresql/plperl.dll\": missing \nmagic block\n+HINT: Extension libraries are required to use the PG_MODULE_MAGIC macro.\n\nAmong the now-dropped options, we can discount -Wl,--enable-auto-import, \nbecause that is used anyway via src/template/cygwin.\n\nSo one of the options\n\n-Wl,--export-all-symbols\n-Wl,--enable-auto-image-base\n-fstack-protector-strong\n\nis needed. These options aren't used for any other shared libraries \nAFAICT, so nothing is clear to me.\n\n\n",
"msg_date": "Wed, 24 Aug 2022 12:12:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> This patch has failed on Cygwin lorikeet:\n> CREATE EXTENSION plperl;\n> +ERROR: incompatible library \n> \"/home/andrew/bf/root/HEAD/inst/lib/postgresql/plperl.dll\": missing \n> magic block\n\nPresumably this is caused by not having\n\n> -Wl,--export-all-symbols\n\nwhich is something we ought to be injecting for ourselves if we\naren't doing anything to export the magic-block constant explicitly.\nBut I too am confused why we haven't seen this elsewhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 09:30:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-24 We 09:30, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> This patch has failed on Cygwin lorikeet:\n>> CREATE EXTENSION plperl;\n>> +ERROR: incompatible library \n>> \"/home/andrew/bf/root/HEAD/inst/lib/postgresql/plperl.dll\": missing \n>> magic block\n> Presumably this is caused by not having\n>\n>> -Wl,--export-all-symbols\n> which is something we ought to be injecting for ourselves if we\n> aren't doing anything to export the magic-block constant explicitly.\n> But I too am confused why we haven't seen this elsewhere.\n>\n> \t\t\t\n\n\nMe too. I note that we have -Wl,--out-implib=libplperl.a but we don't\nappear to do anything with libplperl.a.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Aug 2022 10:24:50 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-24 We 09:30, Tom Lane wrote:\n>> Presumably this is caused by not having\n>> > -Wl,--export-all-symbols\n>> which is something we ought to be injecting for ourselves if we\n>> aren't doing anything to export the magic-block constant explicitly.\n>> But I too am confused why we haven't seen this elsewhere.\n\n> Me too. I note that we have -Wl,--out-implib=libplperl.a but we don't\n> appear to do anything with libplperl.a.\n\nI've poked around and formed a vague theory, based on noting this\nfrom the ld(1) man page:\n\n --export-all-symbols\n ... When symbols are\n explicitly exported via DEF files or implicitly exported via\n function attributes, the default is to not export anything else\n unless this option is given.\n\nSo we could explain the behavior if, say, plperl's _PG_init were\nexplicitly marked with __attribute__((visibility(\"default\"))) while\nits Pg_magic_func was not. That would work anyway as long as\n--export-all-symbols was being used at link time, and would produce\nthe observed symptom as soon as it wasn't.\n\nNow, seeing that both of those functions are surely marked with\nPGDLLEXPORT in the source code, how could such a state of affairs\narise? What I'm thinking about is that _PG_init's marking will be\ndetermined by the extern declaration for it in fmgr.h, while\nPg_magic_func's marking will be determined by the extern declaration\nobtained from expanding PG_MODULE_MAGIC. And there are a boatload\nof Perl-specific header files read between those points in plperl.c.\n\nIn short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\nor somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n(perhaps more likely) then we could explain this failure, and that\nwould also explain why it doesn't fail elsewhere.\n\nI can't readily check this, since I have no idea exactly which version\nof the Perl headers lorikeet uses.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 18:56:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-24 We 18:56, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-08-24 We 09:30, Tom Lane wrote:\n>>> Presumably this is caused by not having\n>>>> -Wl,--export-all-symbols\n>>> which is something we ought to be injecting for ourselves if we\n>>> aren't doing anything to export the magic-block constant explicitly.\n>>> But I too am confused why we haven't seen this elsewhere.\n>> Me too. I note that we have -Wl,--out-implib=libplperl.a but we don't\n>> appear to do anything with libplperl.a.\n> I've poked around and formed a vague theory, based on noting this\n> from the ld(1) man page:\n>\n> --export-all-symbols\n> ... When symbols are\n> explicitly exported via DEF files or implicitly exported via\n> function attributes, the default is to not export anything else\n> unless this option is given.\n>\n> So we could explain the behavior if, say, plperl's _PG_init were\n> explicitly marked with __attribute__((visibility(\"default\"))) while\n> its Pg_magic_func was not. That would work anyway as long as\n> --export-all-symbols was being used at link time, and would produce\n> the observed symptom as soon as it wasn't.\n>\n> Now, seeing that both of those functions are surely marked with\n> PGDLLEXPORT in the source code, how could such a state of affairs\n> arise? What I'm thinking about is that _PG_init's marking will be\n> determined by the extern declaration for it in fmgr.h, while\n> Pg_magic_func's marking will be determined by the extern declaration\n> obtained from expanding PG_MODULE_MAGIC. And there are a boatload\n> of Perl-specific header files read between those points in plperl.c.\n>\n> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n> (perhaps more likely) then we could explain this failure, and that\n> would also explain why it doesn't fail elsewhere.\n>\n> I can't readily check this, since I have no idea exactly which version\n> of the Perl headers lorikeet uses.\n>\n> \t\t\t\n\n\n\nIt's built against cygwin perl 5.32.\n\n\nI don't see anything like that in perl.h. It's certainly using\n__attribute__() a lot.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 24 Aug 2022 20:14:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "On 25.08.22 02:14, Andrew Dunstan wrote:\n>> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n>> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n>> (perhaps more likely) then we could explain this failure, and that\n>> would also explain why it doesn't fail elsewhere.\n>>\n>> I can't readily check this, since I have no idea exactly which version\n>> of the Perl headers lorikeet uses.\n> \n> It's built against cygwin perl 5.32.\n> \n> I don't see anything like that in perl.h. It's certainly using\n> __attribute__() a lot.\n\nThis could be checked by running plperl.c through the preprocessor \n(replace gcc -c plperl.c -o plperl.o by gcc -E plperl.c -o plperl.i) and \nseeing what becomes of those symbols.\n\nIf we want to get the buildfarm green again sooner, we could force a \n--export-all-symbols directly.\n\n\n",
"msg_date": "Thu, 25 Aug 2022 15:01:14 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n>>> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n>>> (perhaps more likely) then we could explain this failure, and that\n>>> would also explain why it doesn't fail elsewhere.\n\n> This could be checked by running plperl.c through the preprocessor \n> (replace gcc -c plperl.c -o plperl.o by gcc -E plperl.c -o plperl.i) and \n> seeing what becomes of those symbols.\n\nYeah, that was what I was going to suggest: grep the \"-E\" output for\n_PG_init and Pg_magic_func and confirm what their extern declarations\nlook like.\n\n> If we want to get the buildfarm green again sooner, we could force a \n> --export-all-symbols directly.\n\nI'm not hugely upset as long as it's just the one machine failing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 09:43:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-25 Th 09:43, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n>>>> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n>>>> (perhaps more likely) then we could explain this failure, and that\n>>>> would also explain why it doesn't fail elsewhere.\n>> This could be checked by running plperl.c through the preprocessor \n>> (replace gcc -c plperl.c -o plperl.o by gcc -E plperl.c -o plperl.i) and \n>> seeing what becomes of those symbols.\n> Yeah, that was what I was going to suggest: grep the \"-E\" output for\n> _PG_init and Pg_magic_func and confirm what their extern declarations\n> look like.\n\n\n$ egrep '_PG_init|Pg_magic_func' plperl.i\nextern __attribute__((visibility(\"default\"))) void _PG_init(void);\nextern __attribute__((visibility(\"default\"))) const Pg_magic_struct\n*Pg_magic_func(void); const Pg_magic_struct * Pg_magic_func(void) {\nstatic const Pg_magic_struct Pg_magic_data = { sizeof(Pg_magic_struct),\n160000 / 100, 100, 32, 64,\n_PG_init(void)\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:39:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-25 17:39:35 -0400, Andrew Dunstan wrote:\n> On 2022-08-25 Th 09:43, Tom Lane wrote:\n> > Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >>>> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n> >>>> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n> >>>> (perhaps more likely) then we could explain this failure, and that\n> >>>> would also explain why it doesn't fail elsewhere.\n> >> This could be checked by running plperl.c through the preprocessor \n> >> (replace gcc -c plperl.c -o plperl.o by gcc -E plperl.c -o plperl.i) and \n> >> seeing what becomes of those symbols.\n> > Yeah, that was what I was going to suggest: grep the \"-E\" output for\n> > _PG_init and Pg_magic_func and confirm what their extern declarations\n> > look like.\n> \n> \n> $ egrep '_PG_init|Pg_magic_func' plperl.i\n> extern __attribute__((visibility(\"default\"))) void _PG_init(void);\n> extern __attribute__((visibility(\"default\"))) const Pg_magic_struct\n> *Pg_magic_func(void); const Pg_magic_struct * Pg_magic_func(void) {\n> static const Pg_magic_struct Pg_magic_data = { sizeof(Pg_magic_struct),\n> 160000 / 100, 100, 32, 64,\n> _PG_init(void)\n\nCould you show objdump -t of the library? Perhaps once with the flags as now,\nand once relinking with the \"old\" flags that we're now omitting?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Aug 2022 14:47:39 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-25 Th 17:47, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-25 17:39:35 -0400, Andrew Dunstan wrote:\n>> On 2022-08-25 Th 09:43, Tom Lane wrote:\n>>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>>>> In short: if the Cygwin Perl headers redefine PGDLLEXPORT (unlikely)\n>>>>>> or somehow #define \"__attribute__()\" or \"visibility()\" into no-ops\n>>>>>> (perhaps more likely) then we could explain this failure, and that\n>>>>>> would also explain why it doesn't fail elsewhere.\n>>>> This could be checked by running plperl.c through the preprocessor \n>>>> (replace gcc -c plperl.c -o plperl.o by gcc -E plperl.c -o plperl.i) and \n>>>> seeing what becomes of those symbols.\n>>> Yeah, that was what I was going to suggest: grep the \"-E\" output for\n>>> _PG_init and Pg_magic_func and confirm what their extern declarations\n>>> look like.\n>>\n>> $ egrep '_PG_init|Pg_magic_func' plperl.i\n>> extern __attribute__((visibility(\"default\"))) void _PG_init(void);\n>> extern __attribute__((visibility(\"default\"))) const Pg_magic_struct\n>> *Pg_magic_func(void); const Pg_magic_struct * Pg_magic_func(void) {\n>> static const Pg_magic_struct Pg_magic_data = { sizeof(Pg_magic_struct),\n>> 160000 / 100, 100, 32, 64,\n>> _PG_init(void)\n> Could you show objdump -t of the library? Perhaps once with the flags as now,\n> and once relinking with the \"old\" flags that we're now omitting?\n\n\ncurrent:\n\n\n$ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n[103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040a0\nPg_magic_func\n[105](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040b0 _PG_init\n\n\nfrom July 11th build:\n\n\n$ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n[101](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040d0\nPg_magic_func\n[103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040e0 _PG_init\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Thu, 25 Aug 2022 18:04:34 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-25 18:04:34 -0400, Andrew Dunstan wrote:\n> On 2022-08-25 Th 17:47, Andres Freund wrote:\n> >> $ egrep '_PG_init|Pg_magic_func' plperl.i\n> >> extern __attribute__((visibility(\"default\"))) void _PG_init(void);\n> >> extern __attribute__((visibility(\"default\"))) const Pg_magic_struct\n> >> *Pg_magic_func(void); const Pg_magic_struct * Pg_magic_func(void) {\n> >> static const Pg_magic_struct Pg_magic_data = { sizeof(Pg_magic_struct),\n> >> 160000 / 100, 100, 32, 64,\n> >> _PG_init(void)\n> > Could you show objdump -t of the library? Perhaps once with the flags as now,\n> > and once relinking with the \"old\" flags that we're now omitting?\n> \n> \n> current:\n> \n> \n> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040a0\n> Pg_magic_func\n> [105](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040b0 _PG_init\n> \n> \n> from July 11th build:\n> \n> \n> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n> [101](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040d0\n> Pg_magic_func\n> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040e0 _PG_init\n\nThanks.\n\nSo it looks like it's not the symbol not being exported. I wonder if the image\nbase thing is somehow the problem? Sounds like it should just be an efficiency\ndifference, by avoiding some relocations, not a functional difference.\n\nCan you try adding just that to the flags for building and whether that then\nallows a LOAD 'plperl' to succeed?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Aug 2022 15:13:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-25 Th 18:13, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-25 18:04:34 -0400, Andrew Dunstan wrote:\n>> On 2022-08-25 Th 17:47, Andres Freund wrote:\n>>>> $ egrep '_PG_init|Pg_magic_func' plperl.i\n>>>> extern __attribute__((visibility(\"default\"))) void _PG_init(void);\n>>>> extern __attribute__((visibility(\"default\"))) const Pg_magic_struct\n>>>> *Pg_magic_func(void); const Pg_magic_struct * Pg_magic_func(void) {\n>>>> static const Pg_magic_struct Pg_magic_data = { sizeof(Pg_magic_struct),\n>>>> 160000 / 100, 100, 32, 64,\n>>>> _PG_init(void)\n>>> Could you show objdump -t of the library? Perhaps once with the flags as now,\n>>> and once relinking with the \"old\" flags that we're now omitting?\n>>\n>> current:\n>>\n>>\n>> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n>> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040a0\n>> Pg_magic_func\n>> [105](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040b0 _PG_init\n>>\n>>\n>> from July 11th build:\n>>\n>>\n>> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n>> [101](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040d0\n>> Pg_magic_func\n>> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040e0 _PG_init\n> Thanks.\n>\n> So it looks like it's not the symbol not being exported. I wonder if the image\n> base thing is somehow the problem? Sounds like it should just be an efficiency\n> difference, by avoiding some relocations, not a functional difference.\n>\n> Can you try adding just that to the flags for building and whether that then\n> allows a LOAD 'plperl' to succeed?\n>\n\n\nAdding what?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 10:04:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 10:04:35 -0400, Andrew Dunstan wrote:\n> On 2022-08-25 Th 18:13, Andres Freund wrote:\n> >>> Could you show objdump -t of the library? Perhaps once with the flags as now,\n> >>> and once relinking with the \"old\" flags that we're now omitting?\n> >>\n> >> current:\n> >>\n> >>\n> >> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n> >> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040a0\n> >> Pg_magic_func\n> >> [105](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040b0 _PG_init\n> >>\n> >>\n> >> from July 11th build:\n> >>\n> >>\n> >> $ objdump -t plperl.dll | egrep '_PG_init|Pg_magic_func'\n> >> [101](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040d0\n> >> Pg_magic_func\n> >> [103](sec 1)(fl 0x00)(ty 20)(scl 2) (nx 0) 0x00000000000040e0 _PG_init\n> > Thanks.\n> >\n> > So it looks like it's not the symbol not being exported. I wonder if the image\n> > base thing is somehow the problem? Sounds like it should just be an efficiency\n> > difference, by avoiding some relocations, not a functional difference.\n> >\n> > Can you try adding just that to the flags for building and whether that then\n> > allows a LOAD 'plperl' to succeed?\n> >\n> \n> \n> Adding what?\n\n-Wl,--enable-auto-image-base\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 07:14:32 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-26 10:04:35 -0400, Andrew Dunstan wrote:\n>> On 2022-08-25 Th 18:13, Andres Freund wrote:\n>>> Can you try adding just that to the flags for building and whether that then\n>>> allows a LOAD 'plperl' to succeed?\n\n>> Adding what?\n\n> -Wl,--enable-auto-image-base\n\nAnd if that doesn't help, try -Wl,--export-all-symbols\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:11:42 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 12:11, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-08-26 10:04:35 -0400, Andrew Dunstan wrote:\n>>> On 2022-08-25 Th 18:13, Andres Freund wrote:\n>>>> Can you try adding just that to the flags for building and whether that then\n>>>> allows a LOAD 'plperl' to succeed?\n>>> Adding what?\n>> -Wl,--enable-auto-image-base\n\n\ndidn't work\n\n\n> And if that doesn't help, try -Wl,--export-all-symbols\n\n\nworked\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 15:36:16 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2022-08-26 Fr 12:11, Tom Lane wrote:\n>> And if that doesn't help, try -Wl,--export-all-symbols\n\n> worked\n\nHmph. Hard to see how that isn't a linker bug. As a stopgap\nto get the farm green again, I propose adding something like\n\nifeq ($(PORTNAME), cygwin)\nSHLIB_LINK += -Wl,--export-all-symbols\nendif\n\nto plperl's makefile.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:00:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 16:00, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2022-08-26 Fr 12:11, Tom Lane wrote:\n>>> And if that doesn't help, try -Wl,--export-all-symbols\n>> worked\n> Hmph. Hard to see how that isn't a linker bug. As a stopgap\n> to get the farm green again, I propose adding something like\n>\n> ifeq ($(PORTNAME), cygwin)\n> SHLIB_LINK += -Wl,--export-all-symbols\n> endif\n>\n> to plperl's makefile.\n>\n> \t\t\t\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:07:32 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 16:00:31 -0400, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 2022-08-26 Fr 12:11, Tom Lane wrote:\n> >> And if that doesn't help, try -Wl,--export-all-symbols\n>\n> > worked\n\nExcept that it's only happening for plperl, I'd wonder if it's possibly\nrelated to our magic symbols being prefixed with _. I noticed that the\nunderscore prefix e.g. changes the behaviour of gcc's \"collect2\" on AIX, which\nis responsible for exporting symbols etc.\n\n\n> Hmph. Hard to see how that isn't a linker bug.\n\nAgreed, given that this is only happening with plperl, and not with any of the\nother extensions...\n\n\n> As a stopgap to get the farm green again, I propose adding something like\n>\n> ifeq ($(PORTNAME), cygwin)\n> SHLIB_LINK += -Wl,--export-all-symbols\n> endif\n>\n> to plperl's makefile.\n\n:(\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 13:25:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 16:25, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-26 16:00:31 -0400, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> On 2022-08-26 Fr 12:11, Tom Lane wrote:\n>>>> And if that doesn't help, try -Wl,--export-all-symbols\n>>> worked\n> Except that it's only happening for plperl, I'd wonder if it's possibly\n> related to our magic symbols being prefixed with _. I noticed that the\n> underscore prefix e.g. changes the behaviour of gcc's \"collect2\" on AIX, which\n> is responsible for exporting symbols etc.\n>\n>\n>> Hmph. Hard to see how that isn't a linker bug.\n> Agreed, given that this is only happening with plperl, and not with any of the\n> other extensions...\n>\n>\n>> As a stopgap to get the farm green again, I propose adding something like\n>>\n>> ifeq ($(PORTNAME), cygwin)\n>> SHLIB_LINK += -Wl,--export-all-symbols\n>> endif\n>>\n>> to plperl's makefile.\n> :(\n>\n\nIt doesn't make me very happy either, but nobody seems to have a better\nidea.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:35:51 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-30 09:35:51 -0400, Andrew Dunstan wrote:\n> On 2022-08-26 Fr 16:25, Andres Freund wrote:\n> > On 2022-08-26 16:00:31 -0400, Tom Lane wrote:\n> >> Andrew Dunstan <andrew@dunslane.net> writes:\n> >>> On 2022-08-26 Fr 12:11, Tom Lane wrote:\n> >>>> And if that doesn't help, try -Wl,--export-all-symbols\n> >>> worked\n> > Except that it's only happening for plperl, I'd wonder if it's possibly\n> > related to our magic symbols being prefixed with _. I noticed that the\n> > underscore prefix e.g. changes the behaviour of gcc's \"collect2\" on AIX, which\n> > is responsible for exporting symbols etc.\n> >\n> >\n> >> Hmph. Hard to see how that isn't a linker bug.\n> > Agreed, given that this is only happening with plperl, and not with any of the\n> > other extensions...\n> >\n> >\n> >> As a stopgap to get the farm green again, I propose adding something like\n> >>\n> >> ifeq ($(PORTNAME), cygwin)\n> >> SHLIB_LINK += -Wl,--export-all-symbols\n> >> endif\n> >>\n> >> to plperl's makefile.\n> > :(\n> >\n>\n> It doesn't make me very happy either, but nobody seems to have a better\n> idea.\n\nThe plpython issue I was investigating in\nhttps://postgr.es/m/20220928022724.erzuk5v4ai4b53do%40awork3.anarazel.de\nfeels eerily similar to the issue here.\n\nI wonder if it's the same problem - __attribute__((visibility(\"default\")))\nworks to export - unless another symbol uses __declspec (dllexport). In the\nreferenced thread that was PyInit_plpy(), here it could be some perl generated\none.\n\nDoes this issue resolved if you add\n#define PGDLLEXPORT __declspec (dllexport)\nto cygwin.h? Without the -Wl,--export-all-symbols of course.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Sep 2022 19:52:42 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Strip -mmacosx-version-min options from plperl build"
}
] |
[
{
"msg_contents": "Hello\n\nIs there a postgres extension or project related to\napplication-level/foreign-table data caching ? The postgres_fdw extension\nfetches data from foreign table for each command.\n\nI have seen previous messages in archive about caching in form of global\ntemp tables, query cache etc. There are good discussions about\nwhether support should be built-in but did not find any implementation.\n\nI have seen the 44 postgres extensions that come pre-installed with ubuntu\n16.04 but none of them do this.\n\nThanks.\nAnant.\n\nHelloIs there a postgres extension or project related to application-level/foreign-table data caching ? The postgres_fdw extension fetches data from foreign table for each command.I have seen previous messages in archive about caching in form of global temp tables, query cache etc. There are good discussions about whether support should be built-in but did not find any implementation.I have seen the 44 postgres extensions that come pre-installed with ubuntu 16.04 but none of them do this.Thanks.Anant.",
"msg_date": "Thu, 18 Aug 2022 16:12:45 +0530",
"msg_from": "Anant ngo <anant.ietf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Data caching"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 04:12:45PM +0530, Anant ngo wrote:\n> Hello\n> \n> Is there a postgres extension or project related to application-level/\n> foreign-table data caching ? The postgres_fdw extension fetches data from\n> foreign table for each command.\n> \n> I have seen previous messages in archive about caching in form of global temp\n> tables, query cache etc. There are good discussions about whether support\n> should be built-in but did not find any implementation.\n> \n> I have seen the 44 postgres extensions that come pre-installed with ubuntu\n> 16.04 but none of them do this.\n\nYou can do foreign-table data caching via materialized views:\n\n\thttps://momjian.us/main/blogs/pgblog/2017.html#September_1_2017\n\nAlso, this is more of a question for pgsql-general@postgresql.org.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:57:55 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Data caching"
},
{
"msg_contents": "Hello\n\nIs there a postgres extension or project related to\napplication-level/foreign-table data caching ? The postgres_fdw extension\nfetches data from foreign table for each command.\n\nI have seen previous messages in archive about caching in form of global\ntemp tables, query cache etc. There are good discussions about\nwhether support should be built-in but did not find any implementation.\n\nI have seen the 44 postgres extensions that come pre-installed with ubuntu\n16.04 but none of them do this.\n\nThanks.\nAnant.\n\nHelloIs there a postgres extension or project related to application-level/foreign-table data caching ? The postgres_fdw extension fetches data from foreign table for each command.I have seen previous messages in archive about caching in form of global temp tables, query cache etc. There are good discussions about whether support should be built-in but did not find any implementation.I have seen the 44 postgres extensions that come pre-installed with ubuntu 16.04 but none of them do this.Thanks.Anant.",
"msg_date": "Thu, 18 Aug 2022 22:09:45 +0530",
"msg_from": "Anant ngo <anant.ietf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fwd: Data caching"
},
{
"msg_contents": "On 8/18/22 09:39, Anant ngo wrote:\n> Hello\n> \n> Is there a postgres extension or project related to \n> application-level/foreign-table data caching ? The postgres_fdw \n> extension fetches data from foreign table for each command.\n> \n> I have seen previous messages in archive about caching in form of global \n> temp tables, query cache etc. There are good discussions about \n> whether support should be built-in but did not find any implementation.\n\nCursors?\n\nhttps://www.postgresql.org/docs/current/sql-declare.html\n\n\"A cursor created with WITH HOLD is closed when an explicit CLOSE \ncommand is issued on it, or the session ends. In the current \nimplementation, the rows represented by a held cursor are copied into a \ntemporary file or memory area so that they remain available for \nsubsequent transactions.\"\n\n> \n> I have seen the 44 postgres extensions that come pre-installed with \n> ubuntu 16.04 but none of them do this.\n> \n> Thanks.\n> Anant.\n\n\n-- \nAdrian Klaver\nadrian.klaver@aklaver.com\n\n\n",
"msg_date": "Thu, 18 Aug 2022 12:49:47 -0700",
"msg_from": "Adrian Klaver <adrian.klaver@aklaver.com>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Data caching"
}
] |
[
{
"msg_contents": "Immediately after upgrading an internal instance, a loop around \"vacuum\" did\nthis:\n\nTRAP: FailedAssertion(\"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", File: \"vacuumparallel.c\", Line: 611, PID: 27635)\npostgres: postgres pryzbyj [local] VACUUM(ExceptionalCondition+0x8d)[0x99d9fd]\npostgres: postgres pryzbyj [local] VACUUM[0x6915db]\npostgres: postgres pryzbyj [local] VACUUM(heap_vacuum_rel+0x12b6)[0x5083e6]\npostgres: postgres pryzbyj [local] VACUUM[0x68e97a]\npostgres: postgres pryzbyj [local] VACUUM(vacuum+0x48e)[0x68fe9e]\npostgres: postgres pryzbyj [local] VACUUM(ExecVacuum+0x2ae)[0x69065e]\npostgres: postgres pryzbyj [local] VACUUM(standard_ProcessUtility+0x530)[0x8567b0]\n/usr/pgsql-15/lib/pg_stat_statements.so(+0x5450)[0x7f52b891c450]\npostgres: postgres pryzbyj [local] VACUUM[0x85490a]\npostgres: postgres pryzbyj [local] VACUUM[0x854a53]\npostgres: postgres pryzbyj [local] VACUUM(PortalRun+0x179)[0x855029]\npostgres: postgres pryzbyj [local] VACUUM[0x85099b]\npostgres: postgres pryzbyj [local] VACUUM(PostgresMain+0x199a)[0x85268a]\npostgres: postgres pryzbyj [local] VACUUM[0x496a21]\npostgres: postgres pryzbyj [local] VACUUM(PostmasterMain+0x11c0)[0x7b3980]\npostgres: postgres pryzbyj [local] VACUUM(main+0x1c6)[0x4986a6]\n/lib64/libc.so.6(__libc_start_main+0xf5)[0x7f52c4b893d5]\npostgres: postgres pryzbyj [local] VACUUM[0x498c59]\n< 2022-08-18 07:56:51.963 CDT >LOG: server process (PID 27635) was terminated by signal 6: Aborted\n< 2022-08-18 07:56:51.963 CDT >DETAIL: Failed process was running: VACUUM ANALYZE alarms\n\nUnfortunately, it looks like the RPM packages are compiled with -O2, so this is\nof limited use.\n\nCore was generated by `postgres: postgres pryzbyj [local] VACUUM '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f52c4b9d207 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install audit-libs-2.8.4-4.el7.x86_64 bzip2-libs-1.0.6-13.el7.x86_64 cyrus-sasl-lib-2.1.26-23.el7.x86_64 elfutils-libelf-0.176-5.el7.x86_64 elfutils-libs-0.176-5.el7.x86_64 glibc-2.17-260.el7_6.3.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-51.el7_9.x86_64 libattr-2.4.46-13.el7.x86_64 libcap-2.22-9.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64 libcom_err-1.42.9-19.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libgcrypt-1.5.3-14.el7.x86_64 libgpg-error-1.12-3.el7.x86_64 libicu-50.1.2-17.el7.x86_64 libselinux-2.5-15.el7.x86_64 libstdc++-4.8.5-39.el7.x86_64 libxml2-2.9.1-6.el7_9.6.x86_64 libzstd-1.5.2-1.el7.x86_64 lz4-1.7.5-2.el7.x86_64 nspr-4.19.0-1.el7_5.x86_64 nss-3.36.0-7.1.el7_6.x86_64 nss-softokn-freebl-3.36.0-5.el7_5.x86_64 nss-util-3.36.0-1.1.el7_6.x86_64 openldap-2.4.44-21.el7_6.x86_64 openssl-libs-1.0.2k-22.el7_9.x86_64 pam-1.1.8-22.el7.x86_64 pcre-8.32-17.el7.x86_64 systemd-libs-219-62.el7_6.5.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f52c4b9d207 in raise () from /lib64/libc.so.6\n#1 0x00007f52c4b9e8f8 in abort () from /lib64/libc.so.6\n#2 0x000000000099da1e in ExceptionalCondition (conditionName=conditionName@entry=0xafae40 \"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", errorType=errorType@entry=0x9fb4b7 \"FailedAssertion\", \n fileName=fileName@entry=0xafb0c0 \"vacuumparallel.c\", lineNumber=lineNumber@entry=611) at assert.c:69\n#3 0x00000000006915db in parallel_vacuum_process_all_indexes (pvs=0x2e85f80, num_index_scans=<optimized out>, vacuum=<optimized out>) at vacuumparallel.c:611\n#4 0x00000000005083e6 in heap_vacuum_rel (rel=<optimized out>, params=<optimized out>, bstrategy=<optimized out>) at vacuumlazy.c:2679\n#5 0x000000000068e97a in table_relation_vacuum (bstrategy=<optimized out>, params=0x7fff46de9a80, rel=0x7f52c7bc2c10) at ../../../src/include/access/tableam.h:1680\n#6 vacuum_rel (relid=52187497, relation=<optimized out>, params=0x7fff46de9a80) at vacuum.c:2092\n#7 0x000000000068fe9e in vacuum (relations=0x2dbeee8, params=params@entry=0x7fff46de9a80, bstrategy=<optimized out>, bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:475\n#8 0x000000000069065e in ExecVacuum (pstate=pstate@entry=0x2dc38d0, vacstmt=vacstmt@entry=0x2d9f3a0, isTopLevel=isTopLevel@entry=true) at vacuum.c:275\n#9 0x00000000008567b0 in standard_ProcessUtility (pstmt=pstmt@entry=0x2d9f7a0, queryString=queryString@entry=0x2d9e8a0 \"VACUUM ANALYZE alarms\", readOnlyTree=<optimized out>, context=context@entry=PROCESS_UTILITY_TOPLEVEL, \n params=params@entry=0x0, queryEnv=queryEnv@entry=0x0, dest=dest@entry=0x2d9f890, qc=qc@entry=0x7fff46dea0c0) at utility.c:866\n#10 0x00007f52b891c450 in pgss_ProcessUtility (pstmt=0x2d9f7a0, queryString=0x2d9e8a0 \"VACUUM ANALYZE alarms\", readOnlyTree=<optimized out>, context=PROCESS_UTILITY_TOPLEVEL, params=0x0, queryEnv=0x0, dest=0x2d9f890, \n qc=0x7fff46dea0c0) at pg_stat_statements.c:1143\n#11 0x000000000085490a in PortalRunUtility (portal=portal@entry=0x2e20fc0, pstmt=0x2d9f7a0, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=0x2d9f890, qc=0x7fff46dea0c0) at pquery.c:1158\n#12 0x0000000000854a53 in PortalRunMulti (portal=portal@entry=0x2e20fc0, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x2d9f890, altdest=altdest@entry=0x2d9f890, \n qc=qc@entry=0x7fff46dea0c0) at pquery.c:1322\n#13 0x0000000000855029 in PortalRun (portal=0x2e20fc0, count=9223372036854775807, isTopLevel=<optimized out>, run_once=<optimized out>, dest=0x2d9f890, altdest=0x2d9f890, qc=0x7fff46dea0c0) at pquery.c:791\n#14 0x000000000085099b in exec_simple_query (query_string=0x2d9e8a0 \"VACUUM ANALYZE alarms\") at postgres.c:1250\n#15 0x000000000085268a in PostgresMain (dbname=<optimized out>, username=<optimized out>) at postgres.c:4581\n#16 0x0000000000496a21 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4504\n#17 BackendStartup (port=0x2dbe9c0) at postmaster.c:4232\n#18 ServerLoop () at postmaster.c:1806\n#19 0x00000000007b3980 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x2d99280) at postmaster.c:1478\n#20 0x00000000004986a6 in main (argc=3, argv=0x2d99280) at main.c:202\n\n(gdb) p *pvs\n$2 = {pcxt = 0x2e84490, indrels = 0x2e84220, nindexes = 8, shared = 0x2aaaaf142380, indstats = 0x2aaaaf1423c0, dead_items = 0x2aaaab142380, buffer_usage = 0x2aaaab142260, wal_usage = 0x2aaaab142220, \n will_parallel_vacuum = 0x2e8f750, nindexes_parallel_bulkdel = 5, nindexes_parallel_cleanup = 0, nindexes_parallel_condcleanup = 5, bstrategy = 0x2dbed40, relnamespace = 0x0, relname = 0x0, indname = 0x0, \n status = PARALLEL_INDVAC_STATUS_INITIAL}\n\n(gdb) info locals \nindstats = <optimized out>\ni = <optimized out>\nnworkers = 2\n\n(gdb) p *pvs\n$4 = {pcxt = 0x2e84490, indrels = 0x2e84220, nindexes = 8, shared = 0x2aaaaf142380, indstats = 0x2aaaaf1423c0, dead_items = 0x2aaaab142380, buffer_usage = 0x2aaaab142260, wal_usage = 0x2aaaab142220, \n will_parallel_vacuum = 0x2e8f750, nindexes_parallel_bulkdel = 5, nindexes_parallel_cleanup = 0, nindexes_parallel_condcleanup = 5, bstrategy = 0x2dbed40, relnamespace = 0x0, relname = 0x0, indname = 0x0, \n status = PARALLEL_INDVAC_STATUS_INITIAL}\n\nI reproduced it like this:\n\npryzbyj=# VACUUM (PARALLEL 2,VERBOSE,INDEX_CLEANUP on) alarms; -- DISABLE_PAGE_SKIPPING true\nINFO: vacuuming \"pryzbyj.public.alarms\"\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. Attempting reset: Failed.\n\nSo I'll be back shortly with more...\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 08:34:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "pg15b3: crash in paralell vacuum"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 08:34:06AM -0500, Justin Pryzby wrote:\n> Unfortunately, it looks like the RPM packages are compiled with -O2, so this is\n> of limited use. So I'll be back shortly with more...\n\n#3 0x00000000006874f1 in parallel_vacuum_process_all_indexes (pvs=0x25bdce0, num_index_scans=0, vacuum=vacuum@entry=false) at vacuumparallel.c:611\n611 Assert(indstats->status == PARALLEL_INDVAC_STATUS_INITIAL);\n\n(gdb) p *pvs\n$1 = {pcxt = 0x25bc1e0, indrels = 0x25bbf70, nindexes = 8, shared = 0x7fc5184393a0, indstats = 0x7fc5184393e0, dead_items = 0x7fc5144393a0, buffer_usage = 0x7fc514439280, wal_usage = 0x7fc514439240, \n will_parallel_vacuum = 0x266d818, nindexes_parallel_bulkdel = 5, nindexes_parallel_cleanup = 0, nindexes_parallel_condcleanup = 5, bstrategy = 0x264f120, relnamespace = 0x0, relname = 0x0, indname = 0x0, \n status = PARALLEL_INDVAC_STATUS_INITIAL}\n\n(gdb) p *indstats\n$2 = {status = 11, parallel_workers_can_process = false, istat_updated = false, istat = {num_pages = 0, estimated_count = false, num_index_tuples = 0, tuples_removed = 0, pages_newly_deleted = 0, pages_deleted = 1, \n pages_free = 0}}\n\n(gdb) bt f\n...\n#3 0x00000000006874f1 in parallel_vacuum_process_all_indexes (pvs=0x25bdce0, num_index_scans=0, vacuum=vacuum@entry=false) at vacuumparallel.c:611\n indstats = 0x7fc5184393e0\n i = 0\n nworkers = 2\n new_status = PARALLEL_INDVAC_STATUS_NEED_CLEANUP\n __func__ = \"parallel_vacuum_process_all_indexes\"\n#4 0x0000000000687ef0 in parallel_vacuum_cleanup_all_indexes (pvs=<optimized out>, num_table_tuples=num_table_tuples@entry=409149, num_index_scans=<optimized out>, estimated_count=estimated_count@entry=true)\n at vacuumparallel.c:486\nNo locals.\n#5 0x00000000004f80b8 in lazy_cleanup_all_indexes (vacrel=vacrel@entry=0x25bc510) at vacuumlazy.c:2679\n reltuples = 409149\n estimated_count = true\n#6 0x00000000004f884a in lazy_scan_heap (vacrel=vacrel@entry=0x25bc510) at vacuumlazy.c:1278\n rel_pages = 67334\n blkno = 67334\n next_unskippable_block = 67334\n next_failsafe_block = 0\n next_fsm_block_to_vacuum = 0\n dead_items = 0x7fc5144393a0\n vmbuffer = 1300\n next_unskippable_allvis = true\n skipping_current_range = false\n initprog_index = {0, 1, 5}\n initprog_val = {1, 67334, 11184809}\n __func__ = \"lazy_scan_heap\"\n#7 0x00000000004f925f in heap_vacuum_rel (rel=0x7fc52df6b820, params=0x7ffd74f74620, bstrategy=0x264f120) at vacuumlazy.c:534\n vacrel = 0x25bc510\n verbose = true\n instrument = <optimized out>\n aggressive = false\n skipwithvm = true\n frozenxid_updated = false\n minmulti_updated = false\n OldestXmin = 32759288\n FreezeLimit = 4277726584\n OldestMxact = 157411\n MultiXactCutoff = 4290124707\n orig_rel_pages = 67334\n new_rel_pages = <optimized out>\n new_rel_allvisible = 4\n ru0 = {tv = {tv_sec = 1660830451, tv_usec = 473980}, ru = {ru_utime = {tv_sec = 0, tv_usec = 317891}, ru_stime = {tv_sec = 1, tv_usec = 212372}, {ru_maxrss = 74524, __ru_maxrss_word = 74524}, {ru_ixrss = 0, \n __ru_ixrss_word = 0}, {ru_idrss = 0, __ru_idrss_word = 0}, {ru_isrss = 0, __ru_isrss_word = 0}, {ru_minflt = 18870, __ru_minflt_word = 18870}, {ru_majflt = 0, __ru_majflt_word = 0}, {ru_nswap = 0, \n __ru_nswap_word = 0}, {ru_inblock = 1124750, __ru_inblock_word = 1124750}, {ru_oublock = 0, __ru_oublock_word = 0}, {ru_msgsnd = 0, __ru_msgsnd_word = 0}, {ru_msgrcv = 0, __ru_msgrcv_word = 0}, {ru_nsignals = 0, \n __ru_nsignals_word = 0}, {ru_nvcsw = 42, __ru_nvcsw_word = 42}, {ru_nivcsw = 35, __ru_nivcsw_word = 35}}}\n starttime = 714145651473980\n startreadtime = 0\n startwritetime = 0\n startwalusage = {wal_records = 2, wal_fpi = 0, wal_bytes = 421}\n StartPageHit = 50\n StartPageMiss = 0\n StartPageDirty = 0\n errcallback = {previous = 0x0, callback = 0x4f5f41 <vacuum_error_callback>, arg = 0x25bc510}\n indnames = 0x266d838\n __func__ = \"heap_vacuum_rel\"\n\nThis is a qemu VM which (full disclosure) has crashed a few times recently due\nto OOM. This is probably a postgres bug, but conceivably it's being tickled by\nbad data (although the vm crashing shouldn't cause that, either, following\nrecovery). This is also an instance that was pg_upgraded from v14 (and earlier\nversions) to v15b1 and then b2, so it's conceivably possible there's weird data\npages that wouldn't be written by beta3. But that doesn't seem to be the issue\nhere anyway.\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:04:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
},
{
"msg_contents": "Hi,\n\nOn Thu, Aug 18, 2022 at 10:34 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Immediately after upgrading an internal instance, a loop around \"vacuum\" did\n> this:\n\nThank you for the report!\n\n>\n> TRAP: FailedAssertion(\"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", File: \"vacuumparallel.c\", Line: 611, PID: 27635)\n> postgres: postgres pryzbyj [local] VACUUM(ExceptionalCondition+0x8d)[0x99d9fd]\n> postgres: postgres pryzbyj [local] VACUUM[0x6915db]\n> postgres: postgres pryzbyj [local] VACUUM(heap_vacuum_rel+0x12b6)[0x5083e6]\n> postgres: postgres pryzbyj [local] VACUUM[0x68e97a]\n> postgres: postgres pryzbyj [local] VACUUM(vacuum+0x48e)[0x68fe9e]\n> postgres: postgres pryzbyj [local] VACUUM(ExecVacuum+0x2ae)[0x69065e]\n> postgres: postgres pryzbyj [local] VACUUM(standard_ProcessUtility+0x530)[0x8567b0]\n> /usr/pgsql-15/lib/pg_stat_statements.so(+0x5450)[0x7f52b891c450]\n> postgres: postgres pryzbyj [local] VACUUM[0x85490a]\n> postgres: postgres pryzbyj [local] VACUUM[0x854a53]\n> postgres: postgres pryzbyj [local] VACUUM(PortalRun+0x179)[0x855029]\n> postgres: postgres pryzbyj [local] VACUUM[0x85099b]\n> postgres: postgres pryzbyj [local] VACUUM(PostgresMain+0x199a)[0x85268a]\n> postgres: postgres pryzbyj [local] VACUUM[0x496a21]\n> postgres: postgres pryzbyj [local] VACUUM(PostmasterMain+0x11c0)[0x7b3980]\n> postgres: postgres pryzbyj [local] VACUUM(main+0x1c6)[0x4986a6]\n> /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f52c4b893d5]\n> postgres: postgres pryzbyj [local] VACUUM[0x498c59]\n> < 2022-08-18 07:56:51.963 CDT >LOG: server process (PID 27635) was terminated by signal 6: Aborted\n> < 2022-08-18 07:56:51.963 CDT >DETAIL: Failed process was running: VACUUM ANALYZE alarms\n>\n> Unfortunately, it looks like the RPM packages are compiled with -O2, so this is\n> of limited use.\n>\n> Core was generated by `postgres: postgres pryzbyj [local] VACUUM '.\n> Program terminated with signal 6, Aborted.\n> #0 0x00007f52c4b9d207 
in raise () from /lib64/libc.so.6\n> Missing separate debuginfos, use: debuginfo-install audit-libs-2.8.4-4.el7.x86_64 bzip2-libs-1.0.6-13.el7.x86_64 cyrus-sasl-lib-2.1.26-23.el7.x86_64 elfutils-libelf-0.176-5.el7.x86_64 elfutils-libs-0.176-5.el7.x86_64 glibc-2.17-260.el7_6.3.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-51.el7_9.x86_64 libattr-2.4.46-13.el7.x86_64 libcap-2.22-9.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64 libcom_err-1.42.9-19.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libgcrypt-1.5.3-14.el7.x86_64 libgpg-error-1.12-3.el7.x86_64 libicu-50.1.2-17.el7.x86_64 libselinux-2.5-15.el7.x86_64 libstdc++-4.8.5-39.el7.x86_64 libxml2-2.9.1-6.el7_9.6.x86_64 libzstd-1.5.2-1.el7.x86_64 lz4-1.7.5-2.el7.x86_64 nspr-4.19.0-1.el7_5.x86_64 nss-3.36.0-7.1.el7_6.x86_64 nss-softokn-freebl-3.36.0-5.el7_5.x86_64 nss-util-3.36.0-1.1.el7_6.x86_64 openldap-2.4.44-21.el7_6.x86_64 openssl-libs-1.0.2k-22.el7_9.x86_64 pam-1.1.8-22.el7.x86_64 pcre-8.32-17.el7.x86_64 systemd-libs-219-62.el7_6.5.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> (gdb) bt\n> #0 0x00007f52c4b9d207 in raise () from /lib64/libc.so.6\n> #1 0x00007f52c4b9e8f8 in abort () from /lib64/libc.so.6\n> #2 0x000000000099da1e in ExceptionalCondition (conditionName=conditionName@entry=0xafae40 \"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", errorType=errorType@entry=0x9fb4b7 \"FailedAssertion\",\n> fileName=fileName@entry=0xafb0c0 \"vacuumparallel.c\", lineNumber=lineNumber@entry=611) at assert.c:69\n> #3 0x00000000006915db in parallel_vacuum_process_all_indexes (pvs=0x2e85f80, num_index_scans=<optimized out>, vacuum=<optimized out>) at vacuumparallel.c:611\n> #4 0x00000000005083e6 in heap_vacuum_rel (rel=<optimized out>, params=<optimized out>, bstrategy=<optimized out>) at vacuumlazy.c:2679\n\nIt seems that parallel_vacuum_cleanup_all_indexes() got called[1],\nwhich means this was the first time to perform parallel vacuum (i.e.,\nindex cleanup).\n\nI'm not convinced yet but it could be a 
culprit that we missed doing\nmemset(0) for the shared array of PVIndStats in\nparallel_vacuum_init(). This shared array was introduced in PG15.\n\n[1] https://github.com/postgres/postgres/blob/REL_15_STABLE/src/backend/access/heap/vacuumlazy.c#L2679\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 18 Aug 2022 23:06:50 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
},
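[Editor's note] The memset(0) fix Sawada-san describes can be sketched in isolation. The following is a simplified, self-contained C illustration, not the real PG15 code: the enum and struct only loosely mirror vacuumparallel.c, and mock_shm_alloc() stands in for shm_toc_allocate(), which likewise hands back unzeroed memory.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Loosely mirrors the PG15 structures (simplified for illustration). */
typedef enum PVIndVacStatus
{
	PARALLEL_INDVAC_STATUS_INITIAL = 0,
	PARALLEL_INDVAC_STATUS_NEED_BULKDELETE,
	PARALLEL_INDVAC_STATUS_NEED_CLEANUP,
	PARALLEL_INDVAC_STATUS_COMPLETED
} PVIndVacStatus;

typedef struct PVIndStats
{
	PVIndVacStatus status;
	bool		parallel_workers_can_process;
} PVIndStats;

/*
 * Stand-in for shm_toc_allocate(): returns raw, *uninitialized* bytes,
 * just as a recycled shared memory segment may.
 */
static void *
mock_shm_alloc(size_t size)
{
	void	   *p = malloc(size);

	assert(p != NULL);
	return p;
}

/*
 * The fix: explicitly zero the shared array so every entry starts out
 * with status == PARALLEL_INDVAC_STATUS_INITIAL (0), instead of whatever
 * garbage (e.g. 11) happened to be in the segment.
 */
PVIndStats *
init_indstats(int nindexes)
{
	PVIndStats *indstats = mock_shm_alloc(sizeof(PVIndStats) * nindexes);

	memset(indstats, 0, sizeof(PVIndStats) * nindexes);
	return indstats;
}
```

Without the memset, the later Assert(indstats->status == PARALLEL_INDVAC_STATUS_INITIAL) reads indeterminate bytes, which is exactly the reported crash.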
{
"msg_contents": "On Thu, Aug 18, 2022 at 11:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Thu, Aug 18, 2022 at 10:34 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Immediately after upgrading an internal instance, a loop around \"vacuum\" did\n> > this:\n>\n> Thank you for the report!\n>\n> >\n> > TRAP: FailedAssertion(\"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", File: \"vacuumparallel.c\", Line: 611, PID: 27635)\n> > postgres: postgres pryzbyj [local] VACUUM(ExceptionalCondition+0x8d)[0x99d9fd]\n> > postgres: postgres pryzbyj [local] VACUUM[0x6915db]\n> > postgres: postgres pryzbyj [local] VACUUM(heap_vacuum_rel+0x12b6)[0x5083e6]\n> > postgres: postgres pryzbyj [local] VACUUM[0x68e97a]\n> > postgres: postgres pryzbyj [local] VACUUM(vacuum+0x48e)[0x68fe9e]\n> > postgres: postgres pryzbyj [local] VACUUM(ExecVacuum+0x2ae)[0x69065e]\n> > postgres: postgres pryzbyj [local] VACUUM(standard_ProcessUtility+0x530)[0x8567b0]\n> > /usr/pgsql-15/lib/pg_stat_statements.so(+0x5450)[0x7f52b891c450]\n> > postgres: postgres pryzbyj [local] VACUUM[0x85490a]\n> > postgres: postgres pryzbyj [local] VACUUM[0x854a53]\n> > postgres: postgres pryzbyj [local] VACUUM(PortalRun+0x179)[0x855029]\n> > postgres: postgres pryzbyj [local] VACUUM[0x85099b]\n> > postgres: postgres pryzbyj [local] VACUUM(PostgresMain+0x199a)[0x85268a]\n> > postgres: postgres pryzbyj [local] VACUUM[0x496a21]\n> > postgres: postgres pryzbyj [local] VACUUM(PostmasterMain+0x11c0)[0x7b3980]\n> > postgres: postgres pryzbyj [local] VACUUM(main+0x1c6)[0x4986a6]\n> > /lib64/libc.so.6(__libc_start_main+0xf5)[0x7f52c4b893d5]\n> > postgres: postgres pryzbyj [local] VACUUM[0x498c59]\n> > < 2022-08-18 07:56:51.963 CDT >LOG: server process (PID 27635) was terminated by signal 6: Aborted\n> > < 2022-08-18 07:56:51.963 CDT >DETAIL: Failed process was running: VACUUM ANALYZE alarms\n> >\n> > Unfortunately, it looks like the RPM packages are compiled with -O2, so this is\n> > of limited 
use.\n> >\n> > Core was generated by `postgres: postgres pryzbyj [local] VACUUM '.\n> > Program terminated with signal 6, Aborted.\n> > #0 0x00007f52c4b9d207 in raise () from /lib64/libc.so.6\n> > Missing separate debuginfos, use: debuginfo-install audit-libs-2.8.4-4.el7.x86_64 bzip2-libs-1.0.6-13.el7.x86_64 cyrus-sasl-lib-2.1.26-23.el7.x86_64 elfutils-libelf-0.176-5.el7.x86_64 elfutils-libs-0.176-5.el7.x86_64 glibc-2.17-260.el7_6.3.x86_64 keyutils-libs-1.5.8-3.el7.x86_64 krb5-libs-1.15.1-51.el7_9.x86_64 libattr-2.4.46-13.el7.x86_64 libcap-2.22-9.el7.x86_64 libcap-ng-0.7.5-4.el7.x86_64 libcom_err-1.42.9-19.el7.x86_64 libgcc-4.8.5-39.el7.x86_64 libgcrypt-1.5.3-14.el7.x86_64 libgpg-error-1.12-3.el7.x86_64 libicu-50.1.2-17.el7.x86_64 libselinux-2.5-15.el7.x86_64 libstdc++-4.8.5-39.el7.x86_64 libxml2-2.9.1-6.el7_9.6.x86_64 libzstd-1.5.2-1.el7.x86_64 lz4-1.7.5-2.el7.x86_64 nspr-4.19.0-1.el7_5.x86_64 nss-3.36.0-7.1.el7_6.x86_64 nss-softokn-freebl-3.36.0-5.el7_5.x86_64 nss-util-3.36.0-1.1.el7_6.x86_64 openldap-2.4.44-21.el7_6.x86_64 openssl-libs-1.0.2k-22.el7_9.x86_64 pam-1.1.8-22.el7.x86_64 pcre-8.32-17.el7.x86_64 systemd-libs-219-62.el7_6.5.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n> > (gdb) bt\n> > #0 0x00007f52c4b9d207 in raise () from /lib64/libc.so.6\n> > #1 0x00007f52c4b9e8f8 in abort () from /lib64/libc.so.6\n> > #2 0x000000000099da1e in ExceptionalCondition (conditionName=conditionName@entry=0xafae40 \"indstats->status == PARALLEL_INDVAC_STATUS_INITIAL\", errorType=errorType@entry=0x9fb4b7 \"FailedAssertion\",\n> > fileName=fileName@entry=0xafb0c0 \"vacuumparallel.c\", lineNumber=lineNumber@entry=611) at assert.c:69\n> > #3 0x00000000006915db in parallel_vacuum_process_all_indexes (pvs=0x2e85f80, num_index_scans=<optimized out>, vacuum=<optimized out>) at vacuumparallel.c:611\n> > #4 0x00000000005083e6 in heap_vacuum_rel (rel=<optimized out>, params=<optimized out>, bstrategy=<optimized out>) at vacuumlazy.c:2679\n>\n> It seems that 
parallel_vacuum_cleanup_all_indexes() got called[1],\n> which means this was the first time to perform parallel vacuum (i.e.,\n> index cleanup).\n\nSorry, this explanation is wrong. But according to the recent\ninformation from Justin it was the first time to perform parallel\nvacuum:\n\n#3 0x00000000006874f1 in parallel_vacuum_process_all_indexes\n(pvs=0x25bdce0, num_index_scans=0, vacuum=vacuum@entry=false) at\nvacuumparallel.c:611\n611 Assert(indstats->status ==\nPARALLEL_INDVAC_STATUS_INITIAL);\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n",
"msg_date": "Thu, 18 Aug 2022 23:14:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 11:04 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Aug 18, 2022 at 08:34:06AM -0500, Justin Pryzby wrote:\n> > Unfortunately, it looks like the RPM packages are compiled with -O2, so this is\n> > of limited use. So I'll be back shortly with more...\n>\n> #3 0x00000000006874f1 in parallel_vacuum_process_all_indexes (pvs=0x25bdce0, num_index_scans=0, vacuum=vacuum@entry=false) at vacuumparallel.c:611\n> 611 Assert(indstats->status == PARALLEL_INDVAC_STATUS_INITIAL);\n>\n> (gdb) p *pvs\n> $1 = {pcxt = 0x25bc1e0, indrels = 0x25bbf70, nindexes = 8, shared = 0x7fc5184393a0, indstats = 0x7fc5184393e0, dead_items = 0x7fc5144393a0, buffer_usage = 0x7fc514439280, wal_usage = 0x7fc514439240,\n> will_parallel_vacuum = 0x266d818, nindexes_parallel_bulkdel = 5, nindexes_parallel_cleanup = 0, nindexes_parallel_condcleanup = 5, bstrategy = 0x264f120, relnamespace = 0x0, relname = 0x0, indname = 0x0,\n> status = PARALLEL_INDVAC_STATUS_INITIAL}\n>\n> (gdb) p *indstats\n> $2 = {status = 11, parallel_workers_can_process = false, istat_updated = false, istat = {num_pages = 0, estimated_count = false, num_index_tuples = 0, tuples_removed = 0, pages_newly_deleted = 0, pages_deleted = 1,\n> pages_free = 0}}\n\nThe status = 11 is invalid value. Probably because indstats was not\ninitialized to 0 as I mentioned.\n\nJustin, if it's reproducible in your environment, could you please try\nit again with the attached patch?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/",
"msg_date": "Thu, 18 Aug 2022 23:24:22 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 11:24:22PM +0900, Masahiko Sawada wrote:\n> The status = 11 is invalid value. Probably because indstats was not\n> initialized to 0 as I mentioned.\n> \n> Justin, if it's reproducible in your environment, could you please try\n> it again with the attached patch?\n\nYes, this seems to resolve the problem.\n\nThanks,\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:52:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 09:52:36AM -0500, Justin Pryzby wrote:\n> On Thu, Aug 18, 2022 at 11:24:22PM +0900, Masahiko Sawada wrote:\n> > The status = 11 is invalid value. Probably because indstats was not\n> > initialized to 0 as I mentioned.\n> > \n> > Justin, if it's reproducible in your environment, could you please try\n> > it again with the attached patch?\n> \n> Yes, this seems to resolve the problem.\n\nIt seems a bit crazy that this escaped detection until now.\nAre these allocations especially vulnerable to uninitialized data ?\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 18 Aug 2022 18:04:47 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg15b3: crash in parallel vacuum"
},
{
"msg_contents": "On Thu, Aug 18, 2022 at 7:25 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> Justin, if it's reproducible in your environment, could you please try\n> it again with the attached patch?\n\nPushed, thanks.\n\nI wonder how this issue could have been caught earlier, or even\navoided in the first place. Would the bug have been caught if Valgrind\nhad known to mark dynamic shared memory VALGRIND_MAKE_MEM_UNDEFINED()\nwhen it is first allocated? ISTM that we should do something that is\nanalogous to aset.c's Valgrind handling for palloc() requests.\n\nSimilar work on buffers in shared memory led to us catching a tricky\nbug involving unsafe access to a buffer, a little while ago -- see\nbugfix commit 7b7ed046. The bug in question would probably have taken\nmuch longer to catch without the instrumentation. In fact, it seems\nlike a good idea to use Valgrind for *anything* where it *might* catch\nbugs, just in case.\n\nValgrind can work well for shared memory without any extra work. The\nbackend's own idea of the memory (the memory mapping used by the\nprocess) is all that Valgrind cares about. You don't have to worry\nabout Valgrind instrumentation in one backend causing confusion in\nanother backend. It's very practical, and very general purpose. I\nthink that most of the protection comes from a basic understanding of\n\"this memory is unsafe to access, this memory contains uninitialized\ndata that cannot be assumed to have any particular value, this memory\nis initialized and safe\".\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 18 Aug 2022 17:50:24 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pg15b3: crash in paralell vacuum"
}
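[Editor's note] Peter's idea of marking freshly allocated dynamic shared memory as undefined follows the pattern PostgreSQL already uses for palloc (see src/include/utils/memdebug.h, where the client requests compile to no-ops without Valgrind). A hedged sketch of that fallback pattern; dsm_mark_undefined is an invented name for illustration, not an existing hook:

```c
#include <assert.h>
#include <stddef.h>

/*
 * When built without Valgrind support, make the client requests no-ops so
 * call sites can be sprinkled freely; under Valgrind, any read of the
 * region before it is initialized gets reported.
 */
#ifdef USE_VALGRIND
#include <valgrind/memcheck.h>
#else
#define VALGRIND_MAKE_MEM_UNDEFINED(addr, size) \
	do { (void) (addr); (void) (size); } while (0)
#define VALGRIND_MAKE_MEM_DEFINED(addr, size) \
	do { (void) (addr); (void) (size); } while (0)
#endif

/* Hypothetical hook for freshly created shared memory. */
static void
dsm_mark_undefined(void *base, size_t size)
{
	VALGRIND_MAKE_MEM_UNDEFINED(base, size);
}
```

In a plain build this costs nothing; under Valgrind it would have flagged the uninitialized PVIndStats read at the first access rather than at the Assert.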
] |
[
{
"msg_contents": "Hi hackers,\n\nAdded a pg_buffercache_summary() function to retrieve an aggregated summary\ninformation with less cost.\n\nIt's often useful to know only how many buffers are used, how many of them\nare dirty etc. for monitoring purposes.\nThis info can already be retrieved by pg_buffercache. The extension\ncurrently creates a row with many details for each buffer, then summary\ninfo can be aggregated from that returned table.\nBut it is quite expensive to run regularly for monitoring.\n\nThe attached patch adds a pg_buffercache_summary() function to get this\nsummary info faster.\nNew function only collects following info and returns them in a single row:\n- used_buffers = number of buffers with a valid relfilenode (both dirty and\nnot)\n- unused_buffers = number of buffers with invalid relfilenode\n- dirty_buffers = number of dirty buffers.\n- pinned_buffers = number of buffers that have at least one pinning backend\n(i.e. refcount > 0)\n- average usagecount of used buffers\n\nOne other difference between pg_buffercache_summary and\npg_buffercache_pages is that pg_buffercache_summary does not get locks on\nbuffer headers as opposed to pg_buffercache_pages.\nSince the purpose of pg_buffercache_summary is just to give us an overall\nidea about shared buffers and to be a cheaper function, locks are not\nstrictly needed.\n\nTo compare pg_buffercache_summary() and pg_buffercache_pages(), I used a\nsimple query to aggregate the summary information above by calling\n pg_buffercache_pages().\nHere is the result:\n\npostgres=# show shared_buffers;\n shared_buffers\n----------------\n 16GB\n(1 row)\n\nTime: 0.756 ms\npostgres=# SELECT relfilenode <> 0 AS is_valid, isdirty, count(*) FROM\npg_buffercache GROUP BY relfilenode <> 0, isdirty;\n is_valid | isdirty | count\n----------+---------+---------\n t | f | 209\n | | 2096904\n t | t | 39\n(3 rows)\n\nTime: 1434.870 ms (00:01.435)\npostgres=# select * from pg_buffercache_summary();\n used_buffers | 
unused_buffers | dirty_buffers | pinned_buffers |\navg_usagecount\n--------------+----------------+---------------+----------------+----------------\n 248 | 2096904 | 39 | 0 |\n 3.141129\n(1 row)\n\nTime: 9.712 ms\n\nThere is a significant difference between timings of those two functions,\neven though they return similar results.\n\nI would appreciate any feedback/comment on this change.\n\nThanks,\nMelih",
"msg_date": "Thu, 18 Aug 2022 16:57:16 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Summary function for pg_buffercache"
},
{
"msg_contents": "Hi hackers,\n\nI also added documentation changes into the patch.\nYou can find it attached.\n\n I would appreciate any feedback about this pg_buffercache_summary function.\n\nBest,\nMelih",
"msg_date": "Fri, 9 Sep 2022 16:41:07 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Melih,\n\n> I would appreciate any feedback/comment on this change.\n\nAnother benefit of pg_buffercache_summary() you didn't mention is that\nit allocates much less memory than pg_buffercache_pages() does.\n\nHere is v3 where I added this to the documentation. The patch didn't\napply to the current master branch with the following error:\n\n```\npg_buffercache_pages.c:286:19: error: no member named 'rlocator' in\n'struct buftag'\n if (bufHdr->tag.rlocator.relNumber != InvalidOid)\n ~~~~~~~~~~~ ^\n1 error generated.\n```\n\nI fixed this too. Additionally, the patch was pgindent'ed and some\ntypos were fixed.\n\nHowever I'm afraid you can't examine BufferDesc's without taking\nlocks. This is explicitly stated in buf_internals.h:\n\n\"\"\"\nBuffer header lock (BM_LOCKED flag) must be held to EXAMINE or change\nTAG, state or wait_backend_pgprocno fields.\n\"\"\"\n\nLet's consider this code again (this is after my fix):\n\n```\nif (RelFileNumberIsValid(BufTagGetRelNumber(bufHdr))) {\n /* ... */\n}\n```\n\nWhen somebody modifies relNumber concurrently (e.g. calls\nClearBufferTag()) this will cause an undefined behaviour.\n\nI suggest we focus on saving the memory first and then think about the\nperformance, if necessary.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 9 Sep 2022 17:36:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 05:36:45PM +0300, Aleksander Alekseev wrote:\n> However I'm afraid you can't examine BufferDesc's without taking\n> locks. This is explicitly stated in buf_internals.h:\n\nYeah, when I glanced at this patch earlier, I wondered about this.\n\n> I suggest we focus on saving the memory first and then think about the\n> performance, if necessary.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 10:23:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi hackers,\n\n> > I suggest we focus on saving the memory first and then think about the\n> > performance, if necessary.\n>\n> +1\n\nI made a mistake in v3 cfbot complained about. It should have been:\n\n```\nif (RelFileNumberIsValid(BufTagGetRelNumber(&bufHdr->tag)))\n```\n\nHere is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Sat, 10 Sep 2022 00:14:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Aleksander and Nathan,\n\nThanks for your comments.\n\nAleksander Alekseev <aleksander@timescale.com>, 9 Eyl 2022 Cum, 17:36\ntarihinde şunu yazdı:\n\n> However I'm afraid you can't examine BufferDesc's without taking\n> locks. This is explicitly stated in buf_internals.h:\n>\n> \"\"\"\n> Buffer header lock (BM_LOCKED flag) must be held to EXAMINE or change\n> TAG, state or wait_backend_pgprocno fields.\n> \"\"\"\n>\n\nI wasn't aware of this explanation. Thanks for pointing it out.\n\nWhen somebody modifies relNumber concurrently (e.g. calls\n> ClearBufferTag()) this will cause an undefined behaviour.\n>\n\nI thought that it wouldn't really be a problem even if relNumber is\nmodified concurrently, since the function does not actually rely on the\nactual values.\nI'm not sure about what undefined behaviour could harm this badly. It\nseemed to me that it would read an invalid relNumber in the worst case\nscenario.\nBut I'm not actually familiar with buffer related parts of the code, so I\nmight be wrong.\nAnd I'm okay with taking header locks if necessary.\n\nIn the attached patch, I added buffer header locks just before examining\ntag as follows:\n\n+ buf_state = LockBufHdr(bufHdr);\n> +\n> + /* Invalid RelFileNumber means the buffer is unused */\n> + if (RelFileNumberIsValid(BufTagGetRelNumber(&bufHdr->tag)))\n> + {\n> ...\n> + }\n> ...\n> + UnlockBufHdr(bufHdr, buf_state);\n>\n\n\n> > I suggest we focus on saving the memory first and then think about the\n> > > performance, if necessary.\n> >\n> > +1\n>\n\nI again did the same quick benchmarking, here are the numbers with locks.\n\npostgres=# show shared_buffers;\n shared_buffers\n----------------\n 16GB\n(1 row)\n\npostgres=# SELECT relfilenode <> 0 AS is_valid, isdirty, count(*) FROM\npg_buffercache GROUP BY relfilenode <> 0, isdirty;\n is_valid | isdirty | count\n----------+---------+---------\n t | f | 256\n | | 2096876\n t | t | 20\n(3 rows)\n\nTime: 1024.456 ms (00:01.024)\n\npostgres=# 
select * from pg_buffercache_summary();\n used_buffers | unused_buffers | dirty_buffers | pinned_buffers |\navg_usagecount\n--------------+----------------+---------------+----------------+----------------\n 282 | 2096870 | 20 | 0 |\n 3.4574468\n(1 row)\n\nTime: 33.074 ms\n\nYes, locks slowed pg_buffercache_summary down. But there is still quite a\nbit of performance improvement, plus memory saving as you mentioned.\n\n\n> Here is the corrected patch.\n>\n\nAlso thanks for corrections.\n\nBest,\nMelih",
"msg_date": "Sat, 10 Sep 2022 02:59:50 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
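[Editor's note] The lock-then-examine loop Melih describes can be sketched with mocked descriptors. This is an illustration only: BufferDescMock and its plain fields are stand-ins, not the real packed buf_internals.h layout, and lock_hdr/unlock_hdr merely model LockBufHdr/UnlockBufHdr.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MOCK_NBUFFERS 4

typedef struct BufferDescMock
{
	uint32_t	relnumber;		/* 0 = invalid, i.e. buffer unused */
	bool		dirty;
	int			refcount;
	int			usagecount;
	bool		locked;			/* stand-in for the BM_LOCKED header bit */
} BufferDescMock;

typedef struct BufSummary
{
	int32_t		buffers_used;
	int32_t		buffers_unused;
	int32_t		buffers_dirty;
	int32_t		buffers_pinned;
	int64_t		usagecount_sum;
} BufSummary;

static void lock_hdr(BufferDescMock *b)   { b->locked = true; }
static void unlock_hdr(BufferDescMock *b) { b->locked = false; }

/* Single pass over all descriptors, examining tag/state only under the lock. */
BufSummary
summarize(BufferDescMock *bufs, int nbuffers)
{
	BufSummary	s = {0};

	for (int i = 0; i < nbuffers; i++)
	{
		BufferDescMock *buf = &bufs[i];

		lock_hdr(buf);
		if (buf->relnumber != 0)
		{
			s.buffers_used++;
			if (buf->dirty)
				s.buffers_dirty++;
			if (buf->refcount > 0)
				s.buffers_pinned++;
			s.usagecount_sum += buf->usagecount;
		}
		else
			s.buffers_unused++;
		unlock_hdr(buf);
	}
	return s;
}
```

The point of the shape is that no per-buffer tuple is built and no memory proportional to shared_buffers is allocated; only a handful of counters live on the stack.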
{
"msg_contents": "Hi Melih,\n\n> I'm not sure about what undefined behaviour could harm this badly.\n\nYou are right that in practice nothing wrong will (probably) happen on\nx86/x64 architecture with (most?) modern C compilers. This is not true in\nthe general case though. It's up to the compiler to decide how reading the\nbufHdr->tag is going to be actually implemented. This can be one assembly\ninstruction or several instructions. This reading can be optimized-out if\nthe compiler believes the required value is already in the register, etc.\nSince the result will be different depending on the assembly code used this\nis an undefined behaviour and we can't use code like this.\n\n> In the attached patch, I added buffer header locks just before examining\ntag as follows\n\nMany thanks for the updated patch! It looks better now.\n\nHowever I have somewhat mixed feelings about avg_usagecount. Generally\nAVG() is a relatively useless methric for monitoring. What if the user\nwants MIN(), MAX() or let's say a 99th percentile? I suggest splitting it\ninto usagecount_min, usagecount_max and usagecount_sum. AVG() can be\nderived as usercount_sum / used_buffers.\n\nAlso I suggest changing the names of the columns in order to make them\nconsistent with the rest of the system. If you consider pg_stat_activity\nand family [1] you will notice that the columns are named\n(entity)_(property), e.g. backend_xid, backend_type, client_addr, etc. So\ninstead of used_buffers and unused_buffers the naming should be\nbuffers_used and buffers_unused.\n\n[1]: https://www.postgresql.org/docs/current/monitoring-stats.html\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Melih,> I'm not sure about what undefined behaviour could harm this badly.You are right that in practice nothing wrong will (probably) happen on x86/x64 architecture with (most?) modern C compilers. This is not true in the general case though. 
It's up to the compiler to decide how reading the bufHdr->tag is going to be actually implemented. This can be one assembly instruction or several instructions. This reading can be optimized-out if the compiler believes the required value is already in the register, etc. Since the result will be different depending on the assembly code used this is an undefined behaviour and we can't use code like this.> In the attached patch, I added buffer header locks just before examining tag as followsMany thanks for the updated patch! It looks better now.However I have somewhat mixed feelings about avg_usagecount. Generally AVG() is a relatively useless methric for monitoring. What if the user wants MIN(), MAX() or let's say a 99th percentile? I suggest splitting it into usagecount_min, usagecount_max and usagecount_sum. AVG() can be derived as usercount_sum / used_buffers.Also I suggest changing the names of the columns in order to make them consistent with the rest of the system. If you consider pg_stat_activity and family [1] you will notice that the columns are named (entity)_(property), e.g. backend_xid, backend_type, client_addr, etc. So instead of used_buffers and unused_buffers the naming should be buffers_used and buffers_unused.[1]: https://www.postgresql.org/docs/current/monitoring-stats.html-- Best regards,Aleksander Alekseev",
"msg_date": "Sat, 10 Sep 2022 12:28:30 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
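[Editor's note] If the function exported a usagecount_sum column as suggested, the average becomes derivable on the consumer side; the only subtlety is guarding the buffers_used = 0 case (the division-by-zero defect noted later in the thread). A trivial sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Average usage count derived from the proposed sum column; 0 when no
 * buffers are in use, avoiding division by zero. */
static double
avg_usagecount(int64_t usagecount_sum, int32_t buffers_used)
{
	return buffers_used > 0 ? (double) usagecount_sum / buffers_used : 0.0;
}
```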
{
"msg_contents": "Hello Aleksander,\n\n> I'm not sure about what undefined behaviour could harm this badly.\n>\n> You are right that in practice nothing wrong will (probably) happen on\n> x86/x64 architecture with (most?) modern C compilers. This is not true in\n> the general case though. It's up to the compiler to decide how reading the\n> bufHdr->tag is going to be actually implemented. This can be one assembly\n> instruction or several instructions. This reading can be optimized-out if\n> the compiler believes the required value is already in the register, etc.\n> Since the result will be different depending on the assembly code used this\n> is an undefined behaviour and we can't use code like this.\n>\n\nGot it. Thanks for explaining.\n\n\n> However I have somewhat mixed feelings about avg_usagecount. Generally\n> AVG() is a relatively useless methric for monitoring. What if the user\n> wants MIN(), MAX() or let's say a 99th percentile? I suggest splitting it\n> into usagecount_min, usagecount_max and usagecount_sum. AVG() can be\n> derived as usercount_sum / used_buffers.\n>\n\nWon't be usagecount_max almost always 5 as \"BM_MAX_USAGE_COUNT\" set to 5 in\nbuf_internals.h? I'm not sure about how much usagecount_min would add\neither.\nA usagecount is always an integer between 0 and 5, it's not\nsomething unbounded. I think the 99th percentile would be much better than\naverage if strong outlier values could occur. But in this case, I feel like\nan average value would be sufficiently useful as well.\nusagecount_sum would actually be useful since average can be derived from\nit. If you think that the sum of usagecounts has a meaning just by itself,\nit makes sense to include it. Otherwise, wouldn't showing directly averaged\nvalue be more useful?\n\n\n\n> Also I suggest changing the names of the columns in order to make them\n> consistent with the rest of the system. 
If you consider pg_stat_activity\n> and family [1] you will notice that the columns are named\n> (entity)_(property), e.g. backend_xid, backend_type, client_addr, etc. So\n> instead of used_buffers and unused_buffers the naming should be\n> buffers_used and buffers_unused.\n>\n> [1]: https://www.postgresql.org/docs/current/monitoring-stats.html\n>\n\nYou're right. I will change the names accordingly. Thanks.\n\n\nRegards,\nMelih",
"msg_date": "Sat, 10 Sep 2022 15:55:30 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-09 17:36:45 +0300, Aleksander Alekseev wrote:\n> I suggest we focus on saving the memory first and then think about the\n> performance, if necessary.\n\nPersonally I think the locks part is at least as important - it's what makes\nthe production impact higher.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 15 Sep 2022 13:25:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nAlso I suggest changing the names of the columns in order to make them\n> consistent with the rest of the system. If you consider pg_stat_activity\n> and family [1] you will notice that the columns are named\n> (entity)_(property), e.g. backend_xid, backend_type, client_addr, etc. So\n> instead of used_buffers and unused_buffers the naming should be\n> buffers_used and buffers_unused.\n>\n> [1]: https://www.postgresql.org/docs/current/monitoring-stats.html\n\n\nI changed these names and updated the patch.\n\nHowever I have somewhat mixed feelings about avg_usagecount. Generally\n>> AVG() is a relatively useless methric for monitoring. What if the user\n>> wants MIN(), MAX() or let's say a 99th percentile? I suggest splitting it\n>> into usagecount_min, usagecount_max and usagecount_sum. AVG() can be\n>> derived as usercount_sum / used_buffers.\n>>\n>\n> Won't be usagecount_max almost always 5 as \"BM_MAX_USAGE_COUNT\" set to 5\n> in buf_internals.h? I'm not sure about how much usagecount_min would add\n> either.\n> A usagecount is always an integer between 0 and 5, it's not\n> something unbounded. I think the 99th percentile would be much better than\n> average if strong outlier values could occur. But in this case, I feel like\n> an average value would be sufficiently useful as well.\n> usagecount_sum would actually be useful since average can be derived from\n> it. If you think that the sum of usagecounts has a meaning just by itself,\n> it makes sense to include it. Otherwise, wouldn't showing directly averaged\n> value be more useful?\n>\n\nAleksander, do you still think the average usagecount is a bit useless? 
Or\ndoes it make sense to you to keep it like this?\n\n> I suggest we focus on saving the memory first and then think about the\n> > performance, if necessary.\n>\n> Personally I think the locks part is at least as important - it's what\n> makes\n> the production impact higher.\n>\n\nI agree that it's important due to its high impact. I'm not sure how to\navoid any undefined behaviour without locks though.\nEven with locks, performance is much better. But is it good enough for\nproduction?\n\n\nThanks,\nMelih",
"msg_date": "Tue, 20 Sep 2022 11:47:40 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Melih,\n\n> I changed these names and updated the patch.\n\nThanks for the updated patch!\n\n> Aleksander, do you still think the average usagecount is a bit useless? Or does it make sense to you to keep it like this?\n\nI don't mind keeping the average.\n\n> I'm not sure how to avoid any undefined behaviour without locks though.\n> Even with locks, performance is much better. But is it good enough for production?\n\nPotentially you could avoid taking locks by utilizing atomic\noperations and lock-free algorithms. But these algorithms are\ntypically error-prone and not always produce a faster code than the\nlock-based ones. I'm pretty confident this is out of scope of this\nparticular patch.\n\nThe patch v6 had several defacts:\n\n* Trailing whitespaces (can be checked by applying the patch with `git am`)\n* Wrong code formatting (can be fixed with pgindent)\n* Several empty lines were removed which is not related to the\nproposed change (can be seen with `git diff`)\n* An unlikely division by zero if buffers_used = 0\n* Missing part of the commit message added in v4\n\nHere is a corrected patch v7. To me it seems to be in pretty good\nshape, unless cfbot and/or other hackers will report any issues.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 20 Sep 2022 12:45:24 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi hackers,\n\n> Here is a corrected patch v7. To me it seems to be in pretty good\n> shape, unless cfbot and/or other hackers will report any issues.\n\nThere was a missing empty line in pg_buffercache.out which made the\ntests fail. Here is a corrected v8 patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 20 Sep 2022 13:57:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57\ntarihinde şunu yazdı:\n\n> There was a missing empty line in pg_buffercache.out which made the\n> tests fail. Here is a corrected v8 patch.\n>\n\nI was just sending a corrected patch without the missing line.\n\nThanks a lot for all these reviews and the corrected patch.\n\nBest,\nMelih\n\nAleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57 tarihinde şunu yazdı:\nThere was a missing empty line in pg_buffercache.out which made the\ntests fail. Here is a corrected v8 patch.I was just sending a corrected patch without the missing line. Thanks a lot for all these reviews and the corrected patch.Best,Melih",
"msg_date": "Tue, 20 Sep 2022 14:00:19 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nSeems like cfbot tests are passing now:\nhttps://cirrus-ci.com/build/4727923671302144\n\nBest,\nMelih\n\nMelih Mutlu <m.melihmutlu@gmail.com>, 20 Eyl 2022 Sal, 14:00 tarihinde şunu\nyazdı:\n\n> Aleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57\n> tarihinde şunu yazdı:\n>\n>> There was a missing empty line in pg_buffercache.out which made the\n>> tests fail. Here is a corrected v8 patch.\n>>\n>\n> I was just sending a corrected patch without the missing line.\n>\n> Thanks a lot for all these reviews and the corrected patch.\n>\n> Best,\n> Melih\n>\n\nHi,Seems like cfbot tests are passing now:https://cirrus-ci.com/build/4727923671302144Best,MelihMelih Mutlu <m.melihmutlu@gmail.com>, 20 Eyl 2022 Sal, 14:00 tarihinde şunu yazdı:Aleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57 tarihinde şunu yazdı:\nThere was a missing empty line in pg_buffercache.out which made the\ntests fail. Here is a corrected v8 patch.I was just sending a corrected patch without the missing line. Thanks a lot for all these reviews and the corrected patch.Best,Melih",
"msg_date": "Tue, 20 Sep 2022 15:10:16 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nCorrect me if I’m wrong.\n\nThe doc says we don’t take lock during pg_buffercache_summary, but I see locks in the v8 patch, Isn’t it?\n\n```\nSimilar to <function>pg_buffercache_pages</function> function\n <function>pg_buffercache_summary</function> doesn't take buffer manager\n locks, thus the result is not consistent across all buffers. This is\n intentional. The purpose of this function is to provide a general idea about\n the state of shared buffers as fast as possible. Additionally,\n <function>pg_buffercache_summary</function> allocates much less memory.\n\n```\n\n\n\n\nRegards,\nZhang Mingli\nOn Sep 20, 2022, 20:10 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> Hi,\n>\n> Seems like cfbot tests are passing now:\n> https://cirrus-ci.com/build/4727923671302144\n>\n> Best,\n> Melih\n>\n> > Melih Mutlu <m.melihmutlu@gmail.com>, 20 Eyl 2022 Sal, 14:00 tarihinde şunu yazdı:\n> > > Aleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57 tarihinde şunu yazdı:\n> > > > > There was a missing empty line in pg_buffercache.out which made the\n> > > > > tests fail. Here is a corrected v8 patch.\n> > > >\n> > > > I was just sending a corrected patch without the missing line.\n> > > >\n> > > > Thanks a lot for all these reviews and the corrected patch.\n> > > >\n> > > > Best,\n> > > > Melih\n\n\n\n\n\n\n\nHi,Correct me if I’m wrong.\n\nThe doc says we don’t take lock during pg_buffercache_summary, but I see locks in the v8 patch, Isn’t it?\n\n```Similar to <function>pg_buffercache_pages</function> function\n <function>pg_buffercache_summary</function> doesn't take buffer manager\n locks, thus the result is not consistent across all buffers. This is\n intentional. The purpose of this function is to provide a general idea about\n the state of shared buffers as fast as possible. 
Additionally,\n <function>pg_buffercache_summary</function> allocates much less memory.\n\n```\n\n\n\n\n\nRegards,\nZhang Mingli\n\n\nOn Sep 20, 2022, 20:10 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n\nHi,\n\nSeems like cfbot tests are passing now:\nhttps://cirrus-ci.com/build/4727923671302144\n\nBest,\nMelih\n\n\n\nMelih Mutlu <m.melihmutlu@gmail.com>, 20 Eyl 2022 Sal, 14:00 tarihinde şunu yazdı:\n\n\nAleksander Alekseev <aleksander@timescale.com>, 20 Eyl 2022 Sal, 13:57 tarihinde şunu yazdı:\n\nThere was a missing empty line in pg_buffercache.out which made the\ntests fail. Here is a corrected v8 patch.\n\nI was just sending a corrected patch without the missing line. \n\nThanks a lot for all these reviews and the corrected patch.\n\nBest,\nMelih",
"msg_date": "Tue, 20 Sep 2022 20:29:30 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Zhang,\n\n> The doc says we don’t take lock during pg_buffercache_summary, but I see locks in the v8 patch, Isn’t it?\n>\n> ```\n> Similar to <function>pg_buffercache_pages</function> function\n> <function>pg_buffercache_summary</function> doesn't take buffer manager\n> locks [...]\n> ```\n\nCorrect, the procedure doesn't take the locks of the buffer manager.\nIt does take the locks of every individual buffer.\n\nI agree that the text is somewhat confusing, but it is consistent with\nthe current description of pg_buffercache [1]. I think this is a\nproblem worth addressing but it also seems to be out of scope of the\nproposed patch.\n\n[1]: https://www.postgresql.org/docs/current/pgbuffercache.html\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 20 Sep 2022 15:43:25 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nRegards,\nZhang Mingli\nOn Sep 20, 2022, 20:43 +0800, Aleksander Alekseev <aleksander@timescale.com>, wrote:\n>\n> Correct, the procedure doesn't take the locks of the buffer manager.\n> It does take the locks of every individual buffer.\nAh, now I get it, thanks.\n\n\n\n\n\n\n\nHi,\n\n\nRegards,\nZhang Mingli\n\n\n\nOn Sep 20, 2022, 20:43 +0800, Aleksander Alekseev <aleksander@timescale.com>, wrote:\n\nCorrect, the procedure doesn't take the locks of the buffer manager.\nIt does take the locks of every individual buffer.\nAh, now I get it, thanks.",
"msg_date": "Tue, 20 Sep 2022 20:48:59 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Zhang,\n\nThose are two different locks.\nThe locks that are taken in the patch are for buffer headers. This locks\nonly the current buffer and makes that particular buffer's info consistent\nwithin itself.\n\nHowever, the lock mentioned in the doc is for buffer manager which would\nprevent changes on any buffer if it's held.\npg_buffercache_summary (and pg_buffercache_pages) does not hold buffer\nmanager lock. Therefore, consistency across all buffers is not guaranteed.\n\nFor pg_buffercache_pages, self-consistent buffer information is useful\nsince it shows each buffer separately.\n\nFor pg_buffercache_summary, even self-consistency may not matter much since\nresults are aggregated and we can't see individual buffer information.\nConsistency across all buffers is also not a concern since its purpose is\nto give an overall idea about the state of buffers.\n\nI see that these two different locks in the same context can be confusing.\nI hope it is a bit more clear now.\n\nBest,\nMelih\n\n>\n\nHi Zhang,Those are two different locks.The locks that are taken in the patch are for buffer headers. This locks only the current buffer and makes that particular buffer's info consistent within itself.However, the lock mentioned in the doc is for buffer manager which would prevent changes on any buffer if it's held. pg_buffercache_summary (and pg_buffercache_pages) does not hold buffer manager lock. Therefore, consistency across all buffers is not guaranteed.For pg_buffercache_pages, self-consistent buffer information is useful since it shows each buffer separately.For pg_buffercache_summary, even self-consistency may not matter much since results are aggregated and we can't see individual buffer information.Consistency across all buffers is also not a concern since its purpose is to give an overall idea about the state of buffers.I see that these two different locks in the same context can be confusing. I hope it is a bit more clear now.Best,Melih",
"msg_date": "Tue, 20 Sep 2022 15:49:39 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\nOn Sep 20, 2022, 20:49 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> Hi Zhang,\n>\n> Those are two different locks.\n> The locks that are taken in the patch are for buffer headers. This locks only the current buffer and makes that particular buffer's info consistent within itself.\n>\n> However, the lock mentioned in the doc is for buffer manager which would prevent changes on any buffer if it's held.\n> pg_buffercache_summary (and pg_buffercache_pages) does not hold buffer manager lock. Therefore, consistency across all buffers is not guaranteed.\n>\n> For pg_buffercache_pages, self-consistent buffer information is useful since it shows each buffer separately.\n>\n> For pg_buffercache_summary, even self-consistency may not matter much since results are aggregated and we can't see individual buffer information.\n> Consistency across all buffers is also not a concern since its purpose is to give an overall idea about the state of buffers.\n>\n> I see that these two different locks in the same context can be confusing. I hope it is a bit more clear now.\n>\n> Best,\n> Melih\nThanks for your explanation, LGTM.\n\n\n\n\n\n\n\nHi,\n\n\nOn Sep 20, 2022, 20:49 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\nHi Zhang,\n\nThose are two different locks.\nThe locks that are taken in the patch are for buffer headers. This locks only the current buffer and makes that particular buffer's info consistent within itself.\n\nHowever, the lock mentioned in the doc is for buffer manager which would prevent changes on any buffer if it's held. \npg_buffercache_summary (and pg_buffercache_pages) does not hold buffer manager lock. 
Therefore, consistency across all buffers is not guaranteed.\n\nFor pg_buffercache_pages, self-consistent buffer information is useful since it shows each buffer separately.\n\nFor pg_buffercache_summary, even self-consistency may not matter much since results are aggregated and we can't see individual buffer information.\nConsistency across all buffers is also not a concern since its purpose is to give an overall idea about the state of buffers.\n\nI see that these two different locks in the same context can be confusing. I hope it is a bit more clear now.\n\nBest,\nMelih\nThanks for your explanation, LGTM.",
"msg_date": "Tue, 20 Sep 2022 20:52:13 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-20 12:45:24 +0300, Aleksander Alekseev wrote:\n> > I'm not sure how to avoid any undefined behaviour without locks though.\n> > Even with locks, performance is much better. But is it good enough for production?\n>\n> Potentially you could avoid taking locks by utilizing atomic\n> operations and lock-free algorithms. But these algorithms are\n> typically error-prone and not always produce a faster code than the\n> lock-based ones. I'm pretty confident this is out of scope of this\n> particular patch.\n\nWhy would you need lockfree operations? All you need to do is to read\nBufferDesc->state into a local variable and then make decisions based on that?\n\n\n> +\tfor (int i = 0; i < NBuffers; i++)\n> +\t{\n> +\t\tBufferDesc *bufHdr;\n> +\t\tuint32\t\tbuf_state;\n> +\n> +\t\tbufHdr = GetBufferDescriptor(i);\n> +\n> +\t\t/* Lock each buffer header before inspecting. */\n> +\t\tbuf_state = LockBufHdr(bufHdr);\n> +\n> +\t\t/* Invalid RelFileNumber means the buffer is unused */\n> +\t\tif (RelFileNumberIsValid(BufTagGetRelNumber(&bufHdr->tag)))\n> +\t\t{\n> +\t\t\tbuffers_used++;\n> +\t\t\tusagecount_avg += BUF_STATE_GET_USAGECOUNT(buf_state);\n> +\n> +\t\t\tif (buf_state & BM_DIRTY)\n> +\t\t\t\tbuffers_dirty++;\n> +\t\t}\n> +\t\telse\n> +\t\t\tbuffers_unused++;\n> +\n> +\t\tif (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n> +\t\t\tbuffers_pinned++;\n> +\n> +\t\tUnlockBufHdr(bufHdr, buf_state);\n> +\t}\n\nI.e. 
instead of locking the buffer header as done above, this could just do\nsomething along these lines:\n\n BufferDesc *bufHdr;\n uint32 buf_state;\n\n bufHdr = GetBufferDescriptor(i);\n\n\t\tbuf_state = pg_atomic_read_u32(&bufHdr->state);\n\n\t\tif (buf_state & BM_VALID)\n {\n buffers_used++;\n usagecount_avg += BUF_STATE_GET_USAGECOUNT(buf_state);\n\n if (buf_state & BM_DIRTY)\n buffers_dirty++;\n }\n else\n buffers_unused++;\n\n if (BUF_STATE_GET_REFCOUNT(buf_state) > 0)\n buffers_pinned++;\n\n\nWithout a memory barrier you can get very slightly \"out-of-date\" values of the\nstate, but that's fine in this case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Sep 2022 17:58:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Andres,\n\n> All you need to do is to read BufferDesc->state into a local variable and then make decisions based on that\n\nYou are right, thanks.\n\nHere is the corrected patch.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Wed, 21 Sep 2022 16:08:51 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nSince header locks are removed again, I put some doc changes and comments\nback.\n\nThanks,\nMelih",
"msg_date": "Thu, 22 Sep 2022 18:22:44 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-22 18:22:44 +0300, Melih Mutlu wrote:\n> Since header locks are removed again, I put some doc changes and comments\n> back.\n\nDue to the merge of the meson build system, this needs to adjust meson.build\nas well.\n\n\n> --- a/contrib/pg_buffercache/expected/pg_buffercache.out\n> +++ b/contrib/pg_buffercache/expected/pg_buffercache.out\n> @@ -8,3 +8,12 @@ from pg_buffercache;\n> t\n> (1 row)\n>\n> +select buffers_used + buffers_unused > 0,\n> + buffers_dirty < buffers_used,\n> + buffers_pinned < buffers_used\n\nDoesn't these have to be \"<=\" instead of \"<\"?\n\n\n> +\tfor (int i = 0; i < NBuffers; i++)\n> +\t{\n> +\t\tBufferDesc *bufHdr;\n> +\t\tuint32\t\tbuf_state;\n> +\n> +\t\t/*\n> +\t\t * No need to get locks on buffer headers as we don't rely on the\n> +\t\t * results in detail. Therefore, we don't get a consistent snapshot\n> +\t\t * across all buffers and it is not guaranteed that the information of\n> +\t\t * each buffer is self-consistent as opposed to pg_buffercache_pages.\n> +\t\t */\n\nI think the \"consistent snapshot\" bit is misleading - even taking buffer\nheader locks wouldn't give you that.\n\n\n> +\tif (buffers_used != 0)\n> +\t\tusagecount_avg = usagecount_avg / buffers_used;\n\nPerhaps the average should be NULL in the buffers_used == 0 case?\n\n\n> + <para>\n> + <function>pg_buffercache_pages</function> function\n> + returns a set of records, plus a view <structname>pg_buffercache</structname> that wraps the function for\n> + convenient use is provided.\n> + </para>\n> +\n> + <para>\n> + <function>pg_buffercache_summary</function> function returns a table with a single row\n> + that contains summarized and aggregated information about shared buffer caches.\n> </para>\n\nI think these sentences are missing a \"The \" at the start?\n\n\"shared buffer caches\" isn't right - I think I'd just drop the \"caches\".\n\n\n> + <para>\n> + There is a single row to show summarized information of all shared buffers.\n> 
+ <function>pg_buffercache_summary</function> is not interested\n> + in the state of each shared buffer, only shows aggregated information.\n> + </para>\n> +\n> + <para>\n> + <function>pg_buffercache_summary</function> doesn't take buffer manager\n> + locks. Unlike <function>pg_buffercache_pages</function> function\n> + <function>pg_buffercache_summary</function> doesn't take buffer headers locks\n> + either, thus the result is not consistent. This is intentional. The purpose\n> + of this function is to provide a general idea about the state of shared\n> + buffers as fast as possible. Additionally, <function>pg_buffercache_summary</function>\n> + allocates much less memory.\n> + </para>\n> + </sect2>\n\nI don't think this mentioning of buffer header locks is useful for users - nor\nis it I think correct. Acquiring the buffer header locks wouldn't add *any*\nadditional consistency.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 09:10:14 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi Andres,\n\nAdjusted the patch so that it will work with meson now.\n\nAlso addressed your other reviews as well.\nI hope explanations in comments/docs are better now.\n\nBest,\nMelih",
"msg_date": "Fri, 23 Sep 2022 23:14:09 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi all,\n\nThe patch needed a rebase due to recent changes on pg_buffercache.\nYou can find the updated version attached.\n\nBest,\nMelih",
"msg_date": "Wed, 28 Sep 2022 16:49:49 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Regards,\nZhang Mingli\nOn Sep 28, 2022, 21:50 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> Hi all,\n>\n> The patch needed a rebase due to recent changes on pg_buffercache.\n> You can find the updated version attached.\n>\n> Best,\n> Melih\n>\n>\n```\n+\n+\tif (buffers_used != 0)\n+ usagecount_avg = usagecount_avg / buffers_used;\n+\n+\tmemset(nulls, 0, sizeof(nulls));\n+\tvalues[0] = Int32GetDatum(buffers_used);\n+\tvalues[1] = Int32GetDatum(buffers_unused);\n+\tvalues[2] = Int32GetDatum(buffers_dirty);\n+\tvalues[3] = Int32GetDatum(buffers_pinned);\n+\n+\tif (buffers_used != 0)\n+\t{\n+ usagecount_avg = usagecount_avg / buffers_used;\n+ values[4] = Float4GetDatum(usagecount_avg);\n+\t}\n+\telse\n+\t{\n+ nulls[4] = true;\n+\t}\n```\n\nWhy compute usagecount_avg twice?\n\n\n\n\n\n\n\nRegards,\nZhang Mingli\n\n\n\nOn Sep 28, 2022, 21:50 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\nHi all,\n\nThe patch needed a rebase due to recent changes on pg_buffercache.\nYou can find the updated version attached.\n\nBest,\nMelih\n\n\n```\n+\n+\tif (buffers_used != 0)\n+ usagecount_avg = usagecount_avg / buffers_used;\n+\n+\tmemset(nulls, 0, sizeof(nulls));\n+\tvalues[0] = Int32GetDatum(buffers_used);\n+\tvalues[1] = Int32GetDatum(buffers_unused);\n+\tvalues[2] = Int32GetDatum(buffers_dirty);\n+\tvalues[3] = Int32GetDatum(buffers_pinned);\n+\n+\tif (buffers_used != 0)\n+\t{\n+ usagecount_avg = usagecount_avg / buffers_used;\n+ values[4] = Float4GetDatum(usagecount_avg);\n+\t}\n+\telse\n+\t{\n+ nulls[4] = true;\n+\t}\n```\n\nWhy compute usagecount_avg twice?",
"msg_date": "Wed, 28 Sep 2022 22:31:34 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Zhang Mingli <zmlpostgres@gmail.com>, 28 Eyl 2022 Çar, 17:31 tarihinde şunu\nyazdı:\n\n> Why compute usagecount_avg twice?\n>\n\nI should have removed the first one, but I think I missed it.\nNice catch.\n\nAttached an updated version.\n\nThanks,\nMelih",
"msg_date": "Wed, 28 Sep 2022 17:41:45 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn Sep 28, 2022, 22:41 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n>\n>\n> Zhang Mingli <zmlpostgres@gmail.com>, 28 Eyl 2022 Çar, 17:31 tarihinde şunu yazdı:\n> > Why compute usagecount_avg twice?\n>\n> I should have removed the first one, but I think I missed it.\n> Nice catch.\n>\n> Attached an updated version.\n>\n> Thanks,\n> Melih\n>\nHmm, I just apply v13 patch but failed.\n\nPart of errors:\n```\nerror: patch failed: contrib/pg_buffercache/pg_buffercache.control:1 error: contrib/pg_buffercache/pg_buffercache.control: patch does not apply\nChecking patch contrib/pg_buffercache/pg_buffercache_pages.c...\nerror: while searching for:\n */\nPG_FUNCTION_INFO_V1(pg_buffercache_pages);\nPG_FUNCTION_INFO_V1(pg_buffercache_pages_v1_4);\n```\n\nRebase on master and then apply our changes again?\n\nRegards,\nZhang Mingli\n>\n\n\n\n\n\n\n\nHi,\n\nOn Sep 28, 2022, 22:41 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n\n\nZhang Mingli <zmlpostgres@gmail.com>, 28 Eyl 2022 Çar, 17:31 tarihinde şunu yazdı:\nWhy compute usagecount_avg twice? \n\nI should have removed the first one, but I think I missed it.\nNice catch.\n\nAttached an updated version.\n\nThanks,\nMelih\n\nHmm, I just apply v13 patch but failed.\n\nPart of errors:\n```\nerror: patch failed: contrib/pg_buffercache/pg_buffercache.control:1 error: contrib/pg_buffercache/pg_buffercache.control: patch does not apply\nChecking patch contrib/pg_buffercache/pg_buffercache_pages.c...\nerror: while searching for:\n */\nPG_FUNCTION_INFO_V1(pg_buffercache_pages);\nPG_FUNCTION_INFO_V1(pg_buffercache_pages_v1_4);\n```\n\nRebase on master and then apply our changes again?\n\n\nRegards,\nZhang Mingli",
"msg_date": "Wed, 28 Sep 2022 23:07:49 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nSeems like the commit a448e49bcbe40fb72e1ed85af910dd216d45bad8 reverts the\nchanges on pg_buffercache.\n\nWhy compute usagecount_avg twice?\n>\nThen, I'm going back to v11 + the fix for this.\n\nThanks,\nMelih",
"msg_date": "Wed, 28 Sep 2022 18:19:57 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn Sep 28, 2022, 23:20 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\n> Hi,\n>\n> Seems like the commit a448e49bcbe40fb72e1ed85af910dd216d45bad8 reverts the changes on pg_buffercache.\n>\n> > Why compute usagecount_avg twice?\n> Then, I'm going back to v11 + the fix for this.\n>\n> Thanks,\n> Melih\nLooks good to me.\n\nRegards,\nZhang Mingli\n\n\n\n\n\n\n\nHi,\n\nOn Sep 28, 2022, 23:20 +0800, Melih Mutlu <m.melihmutlu@gmail.com>, wrote:\nHi,\n\nSeems like the commit a448e49bcbe40fb72e1ed85af910dd216d45bad8 reverts the changes on pg_buffercache.\n\nWhy compute usagecount_avg twice? \nThen, I'm going back to v11 + the fix for this.\n\nThanks,\nMelih\nLooks good to me.\n\n\nRegards,\nZhang Mingli",
"msg_date": "Thu, 29 Sep 2022 14:11:30 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-28 18:19:57 +0300, Melih Mutlu wrote:\n> diff --git a/contrib/pg_buffercache/pg_buffercache--1.3--1.4.sql b/contrib/pg_buffercache/pg_buffercache--1.3--1.4.sql\n> new file mode 100644\n> index 0000000000..77e250b430\n> --- /dev/null\n> +++ b/contrib/pg_buffercache/pg_buffercache--1.3--1.4.sql\n> @@ -0,0 +1,13 @@\n> +/* contrib/pg_buffercache/pg_buffercache--1.3--1.4.sql */\n> +\n> +-- complain if script is sourced in psql, rather than via ALTER EXTENSION\n> +\\echo Use \"ALTER EXTENSION pg_buffercache UPDATE TO '1.4'\" to load this file. \\quit\n> +\n> +CREATE FUNCTION pg_buffercache_summary()\n> +RETURNS TABLE (buffers_used int4, buffers_unused int4, buffers_dirty int4,\n> +\t\t\t\tbuffers_pinned int4, usagecount_avg real)\n> +AS 'MODULE_PATHNAME', 'pg_buffercache_summary'\n> +LANGUAGE C PARALLEL SAFE;\n\nI think using RETURNS TABLE isn't quite right here, as it implies 'SETOF'. But\nthe function doesn't return a set of rows. I changed this to use OUT\nparameters.\n\n\n> +-- Don't want these to be available to public.\n> +REVOKE ALL ON FUNCTION pg_buffercache_summary() FROM PUBLIC;\n\nI think this needs to grant to pg_monitor too. See\npg_buffercache--1.2--1.3.sql\n\nI added a test verifying the permissions are right, with the hope that it'll\nmake future contributors try to add a parallel test and notice the permissions\naren't right.\n\n\n> +\t/* Construct a tuple descriptor for the result rows. */\n> +\ttupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_SUMMARY_ELEM);\n\nGiven that we define the return type on the SQL level, it imo is nicer to use\nget_call_result_type() here.\n\n\n> +\tTupleDescInitEntry(tupledesc, (AttrNumber) 5, \"usagecount_avg\",\n> +\t\t\t\t\t FLOAT4OID, -1, 0);\n\nI changed this to FLOAT8. 
Not that the precision will commonly be useful, but\nit doesn't seem worth having to even think about whether there are cases where\nit'd matter.\n\nI also changed it so that the accumulation happens in an int64 variable named\nusagecount_total, which gets converted to a double only when actually\ncomputing the result.\n\n\n> <para>\n> The <filename>pg_buffercache</filename> module provides a means for\n> - examining what's happening in the shared buffer cache in real time.\n> + examining what's happening in the shared buffer in real time.\n> </para>\n\nThis seems to be an unnecessary / unrelated change. I suspect you made it in\nresponse to\nhttps://postgr.es/m/20220922161014.copbzwdl3ja4nt6z%40awork3.anarazel.de\nbut that was about a different sentence, where you said 'shared buffer caches'\n(even though there is only a single shared buffer cache).\n\n\n> <indexterm>\n> @@ -17,10 +17,19 @@\n> </indexterm>\n> \n> <para>\n> - The module provides a C function <function>pg_buffercache_pages</function>\n> - that returns a set of records, plus a view\n> - <structname>pg_buffercache</structname> that wraps the function for\n> - convenient use.\n> + The module provides C functions <function>pg_buffercache_pages</function>\n> + and <function>pg_buffercache_summary</function>.\n> + </para>\n> +\n> + <para>\n> + The <function>pg_buffercache_pages</function> function\n> + returns a set of records, plus a view <structname>pg_buffercache</structname> that wraps the function for\n> + convenient use is provided.\n> + </para>\n\nI rephrased this, because it sounds like the function returns a set of records\nand a view.\n\n\n> + <para>\n> + The <function>pg_buffercache_summary</function> function returns a table with a single row\n> + that contains summarized and aggregated information about shared buffer.\n> </para>\n\n\"summarized and aggregated\" is quite redundant.\n\n\n> + <table id=\"pgbuffercachesummary-columns\">\n> + <title><structname>pg_buffercachesummary</structname> 
Columns</title>\n\nMissing underscore.\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>buffers_unused</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Number of shared buffers that not currently being used\n> + </para></entry>\n> + </row>\n\nThere's a missing 'are' in here, I think. I rephrased all of these to\n\"Number of (used|unused|dirty|pinned) shared buffers\"\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>buffers_dirty</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Number of dirty shared buffers\n> + </para></entry>\n> + </row>\n> +\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>buffers_pinned</structfield> <type>int4</type>\n> + </para>\n> + <para>\n> + Number of shared buffers that has a pinned backend\n> + </para></entry>\n> + </row>\n\nBackends pin buffers, not the other way round...\n\n\n> + <para>\n> + There is a single row to show summarized information of all shared buffers.\n> + <function>pg_buffercache_summary</function> is not interested\n> + in the state of each shared buffer, only shows aggregated information.\n> + </para>\n> +\n> + <para>\n> + The <function>pg_buffercache_summary</function> doesn't provide a result\n> + that is consistent across all buffers. This is intentional. The purpose\n> + of this function is to provide a general idea about the state of shared\n> + buffers as fast as possible. Additionally, <function>pg_buffercache_summary</function>\n> + allocates much less memory.\n> + </para>\n\nI still didn't like this comment. Please see the attached.\n\n\nI intentionally put my changes into a fixup commit, in case you want to look\nat the differences.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 12 Oct 2022 12:27:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-12 12:27:54 -0700, Andres Freund wrote:\n> I intentionally put my changes into a fixup commit, in case you want to look\n> at the differences.\n\nI pushed the (combined) patch now. Thanks for your contribution!\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 13 Oct 2022 10:04:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Summary function for pg_buffercache"
}
] |
[
{
"msg_contents": "I happened to notice that configure extracts TCL_SHLIB_LD_LIBS\nfrom tclConfig.sh, and puts the value into Makefile.global,\nbut then we never use it anywhere. AFAICT the only use went\naway in cd75f94da, in 2003. I propose the attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 18 Aug 2022 11:04:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Another dead configure test"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-18 11:04:03 -0400, Tom Lane wrote:\n> I happened to notice that configure extracts TCL_SHLIB_LD_LIBS\n> from tclConfig.sh, and puts the value into Makefile.global,\n> but then we never use it anywhere. AFAICT the only use went\n> away in cd75f94da, in 2003. I propose the attached.\n\nLooks good, except that it perhaps could go a tad further: TCL_SHARED_BUILD\nisn't used either afaics?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Aug 2022 09:56:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Another dead configure test"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Looks good, except that it perhaps could go a tad further: TCL_SHARED_BUILD\n> isn't used either afaics?\n\nI wondered about that, but we do need TCL_SHARED_BUILD in configure\nitself, and the PGAC_EVAL_TCLCONFIGSH macro is going to AC_SUBST it.\nWe could remove the line in Makefile.global but I don't think that\nbuys much, and it might be more confusing not less so.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Aug 2022 13:00:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another dead configure test"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-18 13:00:28 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Looks good, except that it perhaps could go a tad further: TCL_SHARED_BUILD\n> > isn't used either afaics?\n> \n> I wondered about that, but we do need TCL_SHARED_BUILD in configure\n> itself, and the PGAC_EVAL_TCLCONFIGSH macro is going to AC_SUBST it.\n> We could remove the line in Makefile.global but I don't think that\n> buys much, and it might be more confusing not less so.\n\n From the meson-generates-Makefile.global angle I like fewer symbols that have\nto be considered in Makefile.global.in :). But even leaving that aside, I\nthink it's clearer to not have things in Makefile.global if they're not used.\n\nBut it's obviously not important.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 18 Aug 2022 10:20:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Another dead configure test"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-18 13:00:28 -0400, Tom Lane wrote:\n>> I wondered about that, but we do need TCL_SHARED_BUILD in configure\n>> itself, and the PGAC_EVAL_TCLCONFIGSH macro is going to AC_SUBST it.\n>> We could remove the line in Makefile.global but I don't think that\n>> buys much, and it might be more confusing not less so.\n\n>> From the meson-generates-Makefile.global angle I like fewer symbols that have\n> to be considered in Makefile.global.in :). But even leaving that aside, I\n> think it's clearer to not have things in Makefile.global if they're not used.\n\n> But it's obviously not important.\n\nYeah, I'm not excited about it either way --- feel free to change\nif you'd rather.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 18 Aug 2022 14:20:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Another dead configure test"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a rebased version of the patch adding logical decoding of\nsequences. The previous attempt [1] ended up getting reverted, due to\nrunning into issues with non-transactional nature of sequences when\ndecoding the existing WAL records. See [2] for details.\n\nThis patch uses a different approach, proposed by Hannu Krosing [3],\nbased on tracking sequences actually modified in each transaction, and\nthen WAL-logging the state at the end.\n\nThis does work, but I'm not very happy about WAL-logging all sequences\nat the end. The \"problem\" is we have to re-read the current state of the\nsequence from disk, because it might be concurrently updated by another\ntransaction.\n\nImagine two transactions, T1 and T2:\n\nT1: BEGIN\n\nT1: SELECT nextval('s') FROM generate_series(1,1000)\n\nT2: BEGIN\n\nT2: SELECT nextval('s') FROM generate_series(1,1000)\n\nT2: COMMIT\n\nT1: COMMIT\n\nThe expected outcome is that the sequence value is ~2000. We must not\nblindly apply the changes from T2 by the increments in T1. So the patch\nsimply reads \"current\" state of the transaction at commit time. Which is\nannoying, because it involves I/O, increases the commit duration, etc.\n\nOn the other hand, this is likely cheaper than the other approach based\non WAL-logging every sequence increment (that would have to be careful\nabout obsoleted increments too, when applying them transactionally).\n\n\nI wonder if we might deal with this by simply WAL-logging LSN of the\nlast change for each sequence (in the given xact), which would allow\ndiscarding the \"obsolete\" changes quite easily I think. nextval() would\nsimply look at LSN in the page header.\n\nAnd maybe we could then use the LSN to read the increment from the WAL\nduring decoding, instead of having to read it and WAL-log it during\ncommit. Essentially, we'd run a local XLogReader. 
Of course, we'd have\nto be careful about checkpoints, not sure what to do about that.\n\nAnother idea that just occurred to me is that if we end up having to\nread the sequence state during commit, maybe we could at least optimize\nit somehow. For example we might track LSN of the last logged state for\neach sequence (in shared memory or something), and the other sessions\ncould just skip the WAL-log if their \"local\" LSN is <= than this LSN.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/d045f3c2-6cfb-06d3-5540-e63c320df8bc@enterprisedb.com\n\n[2]\nhttps://www.postgresql.org/message-id/00708727-d856-1886-48e3-811296c7ba8c%40enterprisedb.com\n\n[3]\nhttps://www.postgresql.org/message-id/CAMT0RQQeDR51xs8zTa25YpfKB1B34nS-Q4hhsRPznVsjMB_P1w%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 18 Aug 2022 23:10:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "I've been thinking about the two optimizations mentioned at the end a\nbit more, so let me share my thoughts before I forget that:\n\nOn 8/18/22 23:10, Tomas Vondra wrote:\n>\n> ...\n>\n> And maybe we could then use the LSN to read the increment from the WAL\n> during decoding, instead of having to read it and WAL-log it during\n> commit. Essentially, we'd run a local XLogReader. Of course, we'd have\n> to be careful about checkpoints, not sure what to do about that.\n> \n\nI think logging just the LSN is workable.\n\nI was worried about dealing with checkpoints, because imagine you do\nnextval() on sequence that was last WAL-logged a couple checkpoints\nback. Then you wouldn't be able to read the LSN (when decoding), because\nthe WAL might have been recycled. But that can't happen, because we\nalways force WAL-logging the first time nextval() is called after a\ncheckpoint. So we know the LSN is guaranteed to be available.\n\nOf course, this would not reduce the amount of WAL messages, because\nwe'd still log all sequences touched by the transaction. We wouldn't\nneed to read the state from disk, though, and we could ignore \"old\"\nstuff in decoding (with LSN lower than the last LSN we decoded).\n\nFor frequently used sequences that seems like a win.\n\n\n> Another idea that just occurred to me is that if we end up having to\n> read the sequence state during commit, maybe we could at least optimize\n> it somehow. For example we might track LSN of the last logged state for\n> each sequence (in shared memory or something), and the other sessions\n> could just skip the WAL-log if their \"local\" LSN is <= than this LSN.\n> \n\nTracking the last LSN for each sequence (in a SLRU or something) should\nwork too, I guess. In principle this just moves the skipping of \"old\"\nincrements from decoding to writing, so that we don't even have to write\nthose into WAL.\n\nWe don't even need persistence, nor to keep all the records, I think. 
If\nyou don't find a record for a given sequence, assume it wasn't logged\nyet and just log it. Of course, it requires a bit of shared memory for\neach sequence, say ~32B. Not sure about the overhead, but I'd bet if you\nhave many (~thousands) frequently used sequences, there'll be a lot of\nother overhead making this irrelevant.\n\nOf course, if we're doing the skipping when writing the WAL, maybe we\nshould just read the sequence state - we'd do the I/O, but only in\nfraction of the transactions, and we wouldn't need to read old WAL in\nlogical decoding.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Aug 2022 13:11:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nI noticed on cfbot the patch no longer applies, so here's a rebased\nversion. Most of the breakage was due to the column filtering reworks,\ngrammar changes etc. A lot of bitrot, but mostly mechanical stuff.\n\nI haven't looked into the optimizations / improvements I discussed in my\nprevious post (logging only LSN of the last WAL-logged increment),\nbecause while fixing \"make check-world\" I ran into a more serious issue\nthat I think needs to be discussed first. And I suspect it might also\naffect the feasibility of the LSN optimization.\n\nSo, what's the issue - the current solution is based on WAL-logging\nstate of all sequences incremented by the transaction at COMMIT. To do\nthat, we read the state from disk, and write that into WAL. However,\nthese WAL messages are not necessarily correlated to COMMIT records, so\nstuff like this might happen:\n\n1. transaction T1 increments sequence S\n2. transaction T2 increments sequence S\n3. both T1 and T2 start to COMMIT\n4. T1 reads state of S from disk, writes it into WAL\n5. transaction T3 increments sequence S\n6. T2 reads state of S from disk, writes it into WAL\n7. T2 write COMMIT into WAL\n8. T1 write COMMIT into WAL\n\nBecause the apply order is determined by ordering of COMMIT records,\nthis means we'd apply the increments logged by T2, and then by T1. But\nthat undoes the increment by T3, and the sequence would go backwards.\n\nThe previous patch version addressed that by acquiring lock on the\nsequence, holding it until transaction end. This effectively ensures the\norder of sequence messages and COMMIT matches. 
But that's problematic\nfor a number of reasons:\n\n1) throughput reduction, because the COMMIT records need to serialize\n\n2) deadlock risk, if we happen to lock sequences in different order\n (in different transactions)\n\n3) problem for prepared transactions - the sequences are locked and\n logged in PrepareTransaction, because we may not have seqhashtab\n beyond that point. This is a much worse variant of (1).\n\nNote: I also wonder what happens if someone does DISCARD SEQUENCES. I\nguess we'll forget the sequences, which is bad - so we'd have to invent\na separate cache that does not have this issue.\n\n\nI realized (3) because one of the test_decoding TAP tests got stuck\nexactly because of a sequence locked by a prepared transaction.\n\nThis patch simply releases the lock after writing the WAL message, but\nthat just makes it vulnerable to the reordering. And this would have\nbeen true even with the LSN optimization.\n\nHowever, I was thinking that maybe we could use the LSN of the WAL\nmessage (XLOG_LOGICAL_SEQUENCE) to deal with the ordering issue, because\n*this* is the sensible sequence increment ordering.\n\nIn the example above, we'd first apply the WAL message from T2 (because\nthat commits first). And then we'd get to apply T1, but the WAL message\nhas an older LSN, so we'd skip it.\n\nBut this requires us remembering LSN of the already applied WAL sequence\nmessages, which could be tricky - we'd need to persist it in some way\nbecause of restarts, etc. We can't do this while decoding but on the\napply side, I think, because of streaming, aborts.\n\nThe other option might be to make these messages non-transactional, in\nwhich case we'd separate the ordering from COMMIT ordering, evading the\nreordering problem.\n\nThat'd mean we'd ignore rollbacks (which seems fine), we could probably\noptimize this by checking if the state actually changed, etc. But we'd\nalso need to deal with transactions created in the (still uncommitted)\ntransaction. 
But I'm also worried it might lead to the same issue with\nnon-transactional behaviors that forced revert in v15.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 11 Nov 2022 23:49:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "2022年11月12日(土) 7:49 Tomas Vondra <tomas.vondra@enterprisedb.com>:\n>\n> Hi,\n>\n> I noticed on cfbot the patch no longer applies, so here's a rebased\n> version. Most of the breakage was due to the column filtering reworks,\n> grammar changes etc. A lot of bitrot, but mostly mechanical stuff.\n\n(...)\n\nHi\n\nThanks for the update patch.\n\nWhile reviewing the patch backlog, we have determined that this patch adds\none or more TAP tests but has not added the test to the \"meson.build\" file.\n\nTo do this, locate the relevant \"meson.build\" file for each test and add it\nin the 'tests' dictionary, which will look something like this:\n\n 'tap': {\n 'tests': [\n 't/001_basic.pl',\n ],\n },\n\nFor some additional details please see this Wiki article:\n\n https://wiki.postgresql.org/wiki/Meson_for_patch_authors\n\nFor more information on the meson build system for PostgreSQL see:\n\n https://wiki.postgresql.org/wiki/Meson\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 16 Nov 2022 13:43:51 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Nov 11, 2022 at 5:49 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> The other option might be to make these messages non-transactional, in\n> which case we'd separate the ordering from COMMIT ordering, evading the\n> reordering problem.\n>\n> That'd mean we'd ignore rollbacks (which seems fine), we could probably\n> optimize this by checking if the state actually changed, etc. But we'd\n> also need to deal with transactions created in the (still uncommitted)\n> transaction. But I'm also worried it might lead to the same issue with\n> non-transactional behaviors that forced revert in v15.\n\nI think it might be a good idea to step back slightly from\nimplementation details and try to agree on a theoretical model of\nwhat's happening here. Let's start by banishing the words\ntransactional and non-transactional from the conversation and talk\nabout what logical replication is trying to do.\n\nWe can imagine that the replicated objects on the primary pass through\na series of states S1, S2, ..., Sn, where n keeps going up as new\nstate changes occur. The state, for our purposes here, is the contents\nof the database as they could be observed by a user running SELECT\nqueries at some moment in time chosen by the user. For instance, if\nthe initial state of the database is S1, and then the user executes\nBEGIN, 2 single-row INSERT statements, and a COMMIT, then S2 is the\nstate that differs from S1 in that both of those rows are now part of\nthe database contents. There is no state where one of those rows is\nvisible and the other is not. That was never observable by the user,\nexcept from within the transaction as it was executing, which we can\nand should discount. 
I believe that the goal of logical replication is\nto bring about a state of affairs where the set of states observable\non the standby is a subset of the states observable on the primary.\nThat is, if the primary goes from S1 to S2 to S3, the standby can do\nthe same thing, or it can go straight from S1 to S3 without ever\nmaking it possible for the user to observe S2. Either is correct\nbehavior. But the standby cannot invent any new states that didn't\noccur on the primary. It can't decide to go from S1 to S1.5 to S2.5 to\nS3, or something like that. It can only consolidate changes that\noccurred separately on the primary, never split them up. Neither can\nit reorder them.\n\nNow, if you accept this as a reasonable definition of correctness,\nthen the next question is what consequences it has for transactional\nand non-transactional behavior. If all behavior is transactional, then\nwe've basically got to replay each primary transaction in a single\nstandby transaction, and commit those transactions in the same order\nthat the corresponding primary transactions committed. We could\nlegally choose to merge a group of transactions that committed one\nafter the other on the primary into a single transaction on the\nstandby, and it might even be a good idea if they're all very tiny,\nbut it's not required. But if there are non-transactional things\nhappening, then there are changes that become visible at some time\nother than at a transaction commit. 
For example, consider this\nsequence of events, in which each \"thing\" that happens is\ntransactional except where the contrary is noted:\n\nT1: BEGIN;\nT2: BEGIN;\nT1: Do thing 1;\nT2: Do thing 2;\nT1: Do a non-transactional thing;\nT1: Do thing 3;\nT2: Do thing 4;\nT2: COMMIT;\nT1: COMMIT;\n\nFrom the point of the user here, there are 4 observable states here:\n\nS1: Initial state.\nS2: State after the non-transactional thing happens.\nS3: State after T2 commits (reflects the non-transactional thing plus\nthings 2 and 4).\nS4: State after T1 commits.\n\nBasically, the non-transactional thing behaves a whole lot like a\nseparate transaction. That non-transactional operation ought to be\nreplicated before T2, which ought to be replicated before T1. Maybe\nlogical replication ought to treat it in exactly that way: as a\nseparate operation that needs to be replicated after any earlier\ntransactions that completed prior to the history shown here, but\nbefore T2 or T1. Alternatively, you can merge the non-transactional\nchange into T2, i.e. the first transaction that committed after it\nhappened. But you can't merge it into T1, even though it happened in\nT1. If you do that, then you're creating states on the standby that\nnever existed on the primary, which is wrong. You could argue that\nthis is just nitpicking: who cares if the change in the sequence value\ndoesn't get replicated at exactly the right moment? But I don't think\nit's a technicality at all: I think if we don't make the operation\nappear to happen at the same point in the sequence as it became\nvisible on the master, then there will be endless artifacts and corner\ncases to the bottom of which we will never get. 
Just like if we\nreplicated the actual transactions out of order, chaos would ensue,\nbecause there can be logical dependencies between them, so too can\nthere be logical dependencies between non-transactional operations, or\nbetween a non-transactional operation and a transactional operation.\n\nTo make it more concrete, consider two sessions concurrently running this SQL:\n\ninsert into t1 select nextval('s1') from generate_series(1,1000000) g;\n\nThere are, in effect, 2000002 transaction-like things here. The\nsequence gets incremented 2 million times, and then there are 2\ncommits that each insert a million rows. Perhaps the actual order of\nevents looks something like this:\n\n1. nextval the sequence N times, where N >= 1 million\n2. commit the first transaction, adding a million rows to t1\n3. nextval the sequence 2 million - N times\n4. commit the second transaction, adding another million rows to t1\n\nUnless we replicate all of the nextval operations that occur in step 1\nat the same time or prior to replicating the first transaction in step\n2, we might end up making visible a state where the next value of the\nsequence is less than the highest value present in the table, which\nwould be bad.\n\nWith that perhaps overly-long set of preliminaries, I'm going to move\non to talking about the implementation ideas which you mention. You\nwrite that \"the current solution is based on WAL-logging state of all\nsequences incremented by the transaction at COMMIT\" and then, it seems\nto me, go on to demonstrate that it's simply incorrect. In my opinion,\nthe fundamental problem is that it doesn't look at the order that\nthings happened on the primary and do them in the same order on the\nstandby. 
Instead, it accepts that the non-transactional operations are\ngoing to be replicated at the wrong time, and then tries to patch\naround the issue by attempting to scrounge up the correct values at\nsome convenient point and use that data to compensate for our failure\nto do the right thing at an earlier point. That doesn't seem like a\nsatisfying solution, and I think it will be hard to make it fully\ncorrect.\n\nYour alternative proposal says \"The other option might be to make\nthese messages non-transactional, in which case we'd separate the\nordering from COMMIT ordering, evading the reordering problem.\" But I\ndon't think that avoids the reordering problem at all. Nor do I think\nit's correct. I don't think you *can* separate the ordering of these\noperations from the COMMIT ordering. They are, as I argue here,\nessentially mini-commits that only bump the sequence value, and they\nneed to be replicated after the transactions that commit prior to the\nsequence value bump and before those that commit afterward. If they\naren't handled that way, I don't think you're going to get fully\ncorrect behavior.\n\nI'm going to confess that I have no really specific idea how to\nimplement that. I'm just not sufficiently familiar with this code.\nHowever, I suspect that the solution lies in changing things on the\ndecoding side rather than in the WAL format. I feel like the\ninformation that we need in order to do the right thing must already\nbe present in the WAL. If it weren't, then how could crash recovery\nwork correctly, or physical replication? At any given moment, you can\nchoose to promote a physical standby, and at that point the state you\nobserve on the new primary had better be some state that existed on\nthe primary at some point in its history. At any moment, you can\nunplug the primary, restart it, and run crash recovery, and if you do,\nyou had better end up with some state that existed on the primary at\nsome point shortly before the crash. 
I think that there are actually a\nfew subtle inaccuracies in the last two sentences, because actually\nthe order in which transactions become visible on a physical standby\ncan differ from the order in which it happens on the primary, but I\ndon't think that actually changes the picture much. The point is that\nthe WAL is the definitive source of information about what happened\nand in what order it happened, and we use it in that way already in\nthe context of physical replication, and of standbys. If logical\ndecoding has a problem with some case that those systems handle\ncorrectly, the problem is with logical decoding, not the WAL format.\n\nIn particular, I think it's likely that the \"non-transactional\nmessages\" that you mention earlier don't get applied at the point in\nthe commit sequence where they were found in the WAL. Not sure why\nexactly, but perhaps the point at which we're reading WAL runs ahead\nof the decoding per se, or something like that, and thus those\nnon-transactional messages arrive too early relative to the commit\nordering. Possibly that could be changed, and they could be buffered\nuntil earlier commits are replicated. Or else, when we see a WAL\nrecord for a non-transactional sequence operation, we could arrange to\nbundle that operation into an \"adjacent\" replicated transaction i.e.\nthe transaction whose commit record occurs most nearly prior to, or\nmost nearly after, the WAL record for the operation itself. Or else,\nwe could create \"virtual\" transactions for such operations and make\nsure those get replayed at the right point in the commit sequence. Or\nelse, I don't know, maybe something else. But I think the overall\npicture is that we need to approach the problem by replicating changes\nin WAL order, as a physical standby would do. 
Saying that a change is\n\"nontransactional\" doesn't mean that it's exempt from ordering\nrequirements; rather, it means that that change has its own place in\nthat ordering, distinct from the transaction in which it occurred.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Nov 2022 16:05:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 11/16/22 22:05, Robert Haas wrote:\n> On Fri, Nov 11, 2022 at 5:49 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> The other option might be to make these messages non-transactional, in\n>> which case we'd separate the ordering from COMMIT ordering, evading the\n>> reordering problem.\n>>\n>> That'd mean we'd ignore rollbacks (which seems fine), we could probably\n>> optimize this by checking if the state actually changed, etc. But we'd\n>> also need to deal with transactions created in the (still uncommitted)\n>> transaction. But I'm also worried it might lead to the same issue with\n>> non-transactional behaviors that forced revert in v15.\n> \n> I think it might be a good idea to step back slightly from\n> implementation details and try to agree on a theoretical model of\n> what's happening here. Let's start by banishing the words\n> transactional and non-transactional from the conversation and talk\n> about what logical replication is trying to do.\n> \n\nOK, let's try.\n\n> We can imagine that the replicated objects on the primary pass through\n> a series of states S1, S2, ..., Sn, where n keeps going up as new\n> state changes occur. The state, for our purposes here, is the contents\n> of the database as they could be observed by a user running SELECT\n> queries at some moment in time chosen by the user. For instance, if\n> the initial state of the database is S1, and then the user executes\n> BEGIN, 2 single-row INSERT statements, and a COMMIT, then S2 is the\n> state that differs from S1 in that both of those rows are now part of\n> the database contents. There is no state where one of those rows is\n> visible and the other is not. That was never observable by the user,\n> except from within the transaction as it was executing, which we can\n> and should discount. 
I believe that the goal of logical replication is\n> to bring about a state of affairs where the set of states observable\n> on the standby is a subset of the states observable on the primary.\n> That is, if the primary goes from S1 to S2 to S3, the standby can do\n> the same thing, or it can go straight from S1 to S3 without ever\n> making it possible for the user to observe S2. Either is correct\n> behavior. But the standby cannot invent any new states that didn't\n> occur on the primary. It can't decide to go from S1 to S1.5 to S2.5 to\n> S3, or something like that. It can only consolidate changes that\n> occurred separately on the primary, never split them up. Neither can\n> it reorder them.\n> \n\nI mostly agree, and in a way the last patch aims to do roughly this,\ni.e. make sure that the state after each transaction matches the state a\nuser might observe on the primary (modulo implementation challenges).\n\nThere's a couple of caveats, though:\n\n1) Maybe we should focus more on \"actually observed\" state instead of\n\"observable\". Who cares if the sequence moved forward in a transaction\nthat was ultimately rolled back? No committed transaction should have\nobserved those values - in a way, the last \"valid\" state of the sequence\nis the last value generated in a transaction that ultimately committed.\n\n2) I think what matters more is that we never generate duplicate values.\nThat is, if you generate a value from a sequence, commit a transaction\nand replicate it, then the logical standby should not generate the same\nvalue from the sequence. This guarantee seems necessary for \"failover\"\nto logical standby.\n\n> Now, if you accept this as a reasonable definition of correctness,\n> then the next question is what consequences it has for transactional\n> and non-transactional behavior. 
If all behavior is transactional, then\n> we've basically got to replay each primary transaction in a single\n> standby transaction, and commit those transactions in the same order\n> that the corresponding primary transactions committed. We could\n> legally choose to merge a group of transactions that committed one\n> after the other on the primary into a single transaction on the\n> standby, and it might even be a good idea if they're all very tiny,\n> but it's not required. But if there are non-transactional things\n> happening, then there are changes that become visible at some time\n> other than at a transaction commit. For example, consider this\n> sequence of events, in which each \"thing\" that happens is\n> transactional except where the contrary is noted:\n> \n> T1: BEGIN;\n> T2: BEGIN;\n> T1: Do thing 1;\n> T2: Do thing 2;\n> T1: Do a non-transactional thing;\n> T1: Do thing 3;\n> T2: Do thing 4;\n> T2: COMMIT;\n> T1: COMMIT;\n> \n> From the point of the user here, there are 4 observable states here:\n> \n> S1: Initiate state.\n> S2: State after the non-transactional thing happens.\n> S3: State after T2 commits (reflects the non-transactional thing plus\n> things 2 and 4).\n> S4: State after T1 commits.\n> \n> Basically, the non-transactional thing behaves a whole lot like a\n> separate transaction. That non-transactional operation ought to be\n> replicated before T2, which ought to be replicated before T1. Maybe\n> logical replication ought to treat it in exactly that way: as a\n> separate operation that needs to be replicated after any earlier\n> transactions that completed prior to the history shown here, but\n> before T2 or T1. Alternatively, you can merge the non-transactional\n> change into T2, i.e. the first transaction that committed after it\n> happened. But you can't merge it into T1, even though it happened in\n> T1. If you do that, then you're creating states on the standby that\n> never existed on the primary, which is wrong. 
You could argue that\n> this is just nitpicking: who cares if the change in the sequence value\n> doesn't get replicated at exactly the right moment? But I don't think\n> it's a technicality at all: I think if we don't make the operation\n> appear to happen at the same point in the sequence as it became\n> visible on the master, then there will be endless artifacts and corner\n> cases to the bottom of which we will never get. Just like if we\n> replicated the actual transactions out of order, chaos would ensue,\n> because there can be logical dependencies between them, so too can\n> there be logical dependencies between non-transactional operations, or\n> between a non-transactional operation and a transactional operation.\n> \n\nWell, yeah - we can either try to perform the stuff independently of the\ntransactions that triggered it, or we can try making it part of some of\nthe transactions. Each of those options has problems, though :-(\n\nThe first version of the patch tried the first approach, i.e. decode the\nincrements and apply that independently. But:\n\n (a) What would you do with increments of sequences created/reset in a\n transaction? Can't apply those outside the transaction, because it\n might be rolled back (and that state is not visible on primary).\n\n (b) What about increments created before we have a proper snapshot?\n There may be transactions dependent on the increment. This is what\n ultimately led to revert of the patch.\n\nThis version of the patch tries to do the opposite thing - make sure\nthat the state after each commit matches what the transaction might have\nseen (for sequences it accessed). 
It's imperfect, because it might log a\nstate generated \"after\" the sequence got accessed - it focuses on the\nguarantee not to generate duplicate values.\n\n> To make it more concrete, consider two sessions concurrently running this SQL:\n> \n> insert into t1 select nextval('s1') from generate_series(1,1000000) g;\n> \n> There are, in effect, 2000002 transaction-like things here. The\n> sequence gets incremented 2 million times, and then there are 2\n> commits that each insert a million rows. Perhaps the actual order of\n> events looks something like this:\n> \n> 1. nextval the sequence N times, where N >= 1 million\n> 2. commit the first transaction, adding a million rows to t1\n> 3. nextval the sequence 2 million - N times\n> 4. commit the second transaction, adding another million rows to t1\n> \n> Unless we replicate all of the nextval operations that occur in step 1\n> at the same time or prior to replicating the first transaction in step\n> 2, we might end up making visible a state where the next value of the\n> sequence is less than the highest value present in the table, which\n> would be bad.\n> \n\nRight, that's the \"guarantee\" I've mentioned above, more or less.\n\n> With that perhaps overly-long set of preliminaries, I'm going to move\n> on to talking about the implementation ideas which you mention. You\n> write that \"the current solution is based on WAL-logging state of all\n> sequences incremented by the transaction at COMMIT\" and then, it seems\n> to me, go on to demonstrate that it's simply incorrect. In my opinion,\n> the fundamental problem is that it doesn't look at the order that\n> things happened on the primary and do them in the same order on the\n> standby. 
Instead, it accepts that the non-transactional operations are\n> going to be replicated at the wrong time, and then tries to patch\n> around the issue by attempting to scrounge up the correct values at\n> some convenient point and use that data to compensate for our failure\n> to do the right thing at an earlier point. That doesn't seem like a\n> satisfying solution, and I think it will be hard to make it fully\n> correct.\n> \n\nI understand what you're saying, but I'm not sure I agree with you.\n\nYes, this would mean we accept we may end up with something like this:\n\n1: T1 logs sequence state S1\n2: someone increments sequence\n3: T2 logs sequence state S2\n4: T2 commits\n5: T1 commits\n\nwhich \"inverts\" the apply order of S1 vs. S2, because we first apply S2\nand then the \"old\" S1. But as long as we're smart enough to \"discard\"\napplying S1, I think that's acceptable - because it guarantees we'll not\ngenerate duplicate values (with values in the committed transaction).\n\nI'd also argue it does not actually generate invalid state, because once\nwe commit either transaction, S2 is what's visible.\n\nYes, if you do \"SELECT * FROM sequence\" you'll see some intermediate\nstate, but that's not how sequences are accessed. And you can't do\ncurrval('s') from a transaction that never accessed the sequence.\n\nAnd if it did, we'd write S2 (or whatever it saw) as part of its commit.\n\nSo I think the main issue of this approach is how to decide which\nsequence states are obsolete and should be skipped.\n\n> Your alternative proposal says \"The other option might be to make\n> these messages non-transactional, in which case we'd separate the\n> ordering from COMMIT ordering, evading the reordering problem.\" But I\n> don't think that avoids the reordering problem at all.\n\nI don't understand why. Why would it not address the reordering issue?\n\n> Nor do I think it's correct.\n\nNor do I understand this. 
I mean, isn't it essentially the option you\nmentioned earlier - treating the non-transactional actions as\nindependent transactions? Yes, we'd be batching them so that we'd not\nsee \"intermediate\" states, but those are not observed by anyone.\n\n> I don't think you *can* separate the ordering of these\n> operations from the COMMIT ordering. They are, as I argue here,\n> essentially mini-commits that only bump the sequence value, and they\n> need to be replicated after the transactions that commit prior to the\n> sequence value bump and before those that commit afterward. If they\n> aren't handled that way, I don't think you're going to get fully\n> correct behavior.\n\nI'm confused. Isn't that pretty much exactly what I'm proposing? Imagine\nyou have something like this:\n\n1: T1 does something and also increments a sequence\n2: T1 logs state of the sequence (right before commit)\n3: T1 writes COMMIT\n\nNow when we decode/apply this, we end up doing this:\n\n1: decode all T1 changes, stash them\n2: decode the sequence state and apply it separately\n3: decode COMMIT, apply all T1 changes\n\nThere might be other transactions interleaving with this, but I think\nit'd behave correctly. What example would not work?\n\n> \n> I'm going to confess that I have no really specific idea how to\n> implement that. I'm just not sufficiently familiar with this code.\n> However, I suspect that the solution lies in changing things on the\n> decoding side rather than in the WAL format. I feel like the\n> information that we need in order to do the right thing must already\n> be present in the WAL. If it weren't, then how could crash recovery\n> work correctly, or physical replication? At any given moment, you can\n> choose to promote a physical standby, and at that point the state you\n> observe on the new primary had better be some state that existed on\n> the primary at some point in its history. 
At any moment, you can\n> unplug the primary, restart it, and run crash recovery, and if you do,\n> you had better end up with some state that existed on the primary at\n> some point shortly before the crash. I think that there are actually a\n> few subtle inaccuracies in the last two sentences, because actually\n> the order in which transactions become visible on a physical standby\n> can differ from the order in which it happens on the primary, but I\n> don't think that actually changes the picture much. The point is that\n> the WAL is the definitive source of information about what happened\n> and in what order it happened, and we use it in that way already in\n> the context of physical replication, and of standbys. If logical\n> decoding has a problem with some case that those systems handle\n> correctly, the problem is with logical decoding, not the WAL format.\n> \n\nThe problem lies in how we log sequences. If we wrote each individual\nincrement to WAL, it might work the way you propose (except for cases\nwith sequences created in a transaction, etc.). 
But that's not what we\ndo - we log sequence increments in batches of 32 values, and then only\nmodify the sequence relfilenode.\n\nThis works for physical replication, because the WAL describes the\n\"next\" state of the sequence (so if you do \"SELECT * FROM sequence\"\nyou'll not see the same state, and the sequence value may \"jump ahead\"\nafter a failover).\n\nBut for logical replication this does not work, because the transaction\nmight depend on a state created (WAL-logged) by some other transaction.\nAnd perhaps that transaction actually happened *before* we even built\nthe first snapshot for decoding :-/\n\nThere's also the issue with what snapshot to use when decoding these\ntransactional changes in logical decoding (see\n\n\n> In particular, I think it's likely that the \"non-transactional\n> messages\" that you mention earlier don't get applied at the point in\n> the commit sequence where they were found in the WAL. Not sure why\n> exactly, but perhaps the point at which we're reading WAL runs ahead\n> of the decoding per se, or something like that, and thus those\n> non-transactional messages arrive too early relative to the commit\n> ordering. Possibly that could be changed, and they could be buffered\n\nI'm not sure which case of \"non-transactional messages\" this refers to,\nso I can't quite respond to these comments. Perhaps you mean the\nproblems that killed the previous patch [1]?\n\n[1]\nhttps://www.postgresql.org/message-id/00708727-d856-1886-48e3-811296c7ba8c%40enterprisedb.com\n\n\n> until earlier commits are replicated. Or else, when we see a WAL\n> record for a non-transactional sequence operation, we could arrange to\n> bundle that operation into an \"adjacent\" replicated transaction i.e.\n\nIIRC moving stuff between transactions during decoding is problematic,\nbecause of snapshots.\n\n> the transaction whose commit record occurs most nearly prior to, or\n> most nearly after, the WAL record for the operation itself. 
Or else,\n> we could create \"virtual\" transactions for such operations and make\n> sure those get replayed at the right point in the commit sequence. Or\n> else, I don't know, maybe something else. But I think the overall\n> picture is that we need to approach the problem by replicating changes\n> in WAL order, as a physical standby would do. Saying that a change is\n> \"nontransactional\" doesn't mean that it's exempt from ordering\n> requirements; rather, it means that that change has its own place in\n> that ordering, distinct from the transaction in which it occurred.\n> \n\nBut doesn't the approach with WAL-logging sequence state before COMMIT,\nand then applying it independently in WAL-order, do pretty much this?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 17 Nov 2022 02:41:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\n\nOn 2022-11-17 02:41:14 +0100, Tomas Vondra wrote:\n> Well, yeah - we can either try to perform the stuff independently of the\n> transactions that triggered it, or we can try making it part of some of\n> the transactions. Each of those options has problems, though :-(\n>\n> The first version of the patch tried the first approach, i.e. decode the\n> increments and apply that independently. But:\n>\n> (a) What would you do with increments of sequences created/reset in a\n> transaction? Can't apply those outside the transaction, because it\n> might be rolled back (and that state is not visible on primary).\n\nI think a reasonable approach could be to actually perform different WAL\nlogging for that case. It'll require a bit of machinery, but could actually\nresult in *less* WAL logging overall, because we don't need to emit a WAL\nrecord for each SEQ_LOG_VALS sequence values.\n\n\n\n> (b) What about increments created before we have a proper snapshot?\n> There may be transactions dependent on the increment. This is what\n> ultimately led to revert of the patch.\n\nI don't understand this - why would we ever need to process those increments\nfrom before we have a snapshot? Wouldn't they, by definition, be before the\nslot was active?\n\nTo me this is the rough equivalent of logical decoding not giving the initial\nstate of all tables. You need some process outside of logical decoding to get\nthat (obviously we have some support for that via the exported data snapshot\nduring slot creation).\n\nI assume that part of the initial sync would have to be a new sequence\nsynchronization step that reads all the sequence states on the publisher and\nensures that the subscriber sequences are at the same point. There's a bit of\ntrickiness there, but it seems entirely doable. 
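A toy sketch of that synchronization step, plus the apply-side guard it implies (hypothetical code, not from any patch - the names and shapes here are made up purely for illustration):

```python
# Hypothetical toy model of (a) an initial sequence-sync step that
# copies publisher sequence states wholesale, much like tablesync does
# for tables, and (b) a replay-side guard that never moves a subscriber
# sequence backwards, since the synced state may already be ahead of
# increments still arriving from the WAL.
publisher = {"s1": 1000}

def initial_sync(subscriber):
    # copy the current publisher state as of the sync snapshot
    subscriber.update(publisher)

def replay_sequence_state(subscriber, seq, replicated_value):
    # discard stale replicated states instead of applying them
    subscriber[seq] = max(subscriber.get(seq, 0), replicated_value)

sub = {}
initial_sync(sub)                         # sub["s1"] is now 1000
replay_sequence_state(sub, "s1", 960)     # older WAL state: discarded
replay_sequence_state(sub, "s1", 1040)    # newer state: applied
print(sub["s1"])                          # -> 1040
```

A real implementation would presumably compare LSNs rather than raw values, but the "never move the sequence backwards" rule is the same.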
The logical replication replay\nsupport for sequences will have to be a bit careful about not decreasing the\nsubscriber's sequence values - the standby initially will be ahead of the\nincrements we'll see in the WAL. But that seems inevitable given the\nnon-transactional nature of sequences.\n\n\n\n> This version of the patch tries to do the opposite thing - make sure\n> that the state after each commit matches what the transaction might have\n> seen (for sequences it accessed). It's imperfect, because it might log a\n> state generated \"after\" the sequence got accessed - it focuses on the\n> guarantee not to generate duplicate values.\n\nThat approach seems quite wrong to me.\n\n\n> > I'm going to confess that I have no really specific idea how to\n> > implement that. I'm just not sufficiently familiar with this code.\n> > However, I suspect that the solution lies in changing things on the\n> > decoding side rather than in the WAL format. I feel like the\n> > information that we need in order to do the right thing must already\n> > be present in the WAL. If it weren't, then how could crash recovery\n> > work correctly, or physical replication? At any given moment, you can\n> > choose to promote a physical standby, and at that point the state you\n> > observe on the new primary had better be some state that existed on\n> > the primary at some point in its history. 
At any moment, you can\n> > unplug the primary, restart it, and run crash recovery, and if you do,\n> > you had better end up with some state that existed on the primary at\n> > some point shortly before the crash.\n\nOne minor exception here is that there's no real time bound to see the last\nfew sequence increments if nothing after the XLOG_SEQ_LOG records forces a WAL\nflush.\n\n\n> > I think that there are actually a\n> > few subtle inaccuracies in the last two sentences, because actually\n> > the order in which transactions become visible on a physical standby\n> > can differ from the order in which it happens on the primary, but I\n> > don't think that actually changes the picture much. The point is that\n> > the WAL is the definitive source of information about what happened\n> > and in what order it happened, and we use it in that way already in\n> > the context of physical replication, and of standbys. If logical\n> > decoding has a problem with some case that those systems handle\n> > correctly, the problem is with logical decoding, not the WAL format.\n> >\n>\n> The problem lies in how we log sequences. If we wrote each individual\n> increment to WAL, it might work the way you propose (except for cases\n> with sequences created in a transaction, etc.). But that's not what we\n> do - we log sequence increments in batches of 32 values, and then only\n> modify the sequence relfilenode.\n\n> This works for physical replication, because the WAL describes the\n> \"next\" state of the sequence (so if you do \"SELECT * FROM sequence\"\n> you'll not see the same state, and the sequence value may \"jump ahead\"\n> after a failover).\n>\n> But for logical replication this does not work, because the transaction\n> might depend on a state created (WAL-logged) by some other transaction.\n> And perhaps that transaction actually happened *before* we even built\n> the first snapshot for decoding :-/\n\nI really can't follow the \"depend on state ... 
by some other transaction\"\naspect.\n\n\nEven the case of a sequence that is renamed inside a transaction that did\n*not* create / reset the sequence and then also triggers increment of the\nsequence seems to be dealt with reasonably by processing sequence increments\noutside a transaction - the old name will be used for the increments, replay\nof the renaming transaction would then implement the rename in a hypothetical\nDDL-replay future.\n\n\n> There's also the issue with what snapshot to use when decoding these\n> transactional changes in logical decoding (see\n\nIncomplete parenthetical? Or were you referencing the next paragraph?\n\nWhat are the transactional changes you're referring to here?\n\n\nI did some skimming of the referenced thread about the reversal of the last\napproach, but I couldn't really understand what the fundamental issues were\nwith the reverted implementation - it's a very long thread and references\nother threads.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 16 Nov 2022 18:43:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 11/17/22 03:43, Andres Freund wrote:\n> Hi,\n> \n> \n> On 2022-11-17 02:41:14 +0100, Tomas Vondra wrote:\n>> Well, yeah - we can either try to perform the stuff independently of the\n>> transactions that triggered it, or we can try making it part of some of\n>> the transactions. Each of those options has problems, though :-(\n>>\n>> The first version of the patch tried the first approach, i.e. decode the\n>> increments and apply that independently. But:\n>>\n>> (a) What would you do with increments of sequences created/reset in a\n>> transaction? Can't apply those outside the transaction, because it\n>> might be rolled back (and that state is not visible on primary).\n> \n> I think a reasonable approach could be to actually perform different WAL\n> logging for that case. It'll require a bit of machinery, but could actually\n> result in *less* WAL logging overall, because we don't need to emit a WAL\n> record for each SEQ_LOG_VALS sequence values.\n> \n\nCould you elaborate? Hard to comment without knowing more ...\n\nMy point was that stuff like this (creating a new sequence or at least a\nnew relfilenode) means we can't apply that independently of the\ntransaction (unlike regular increments). I'm not sure how a change to\nWAL logging would make that go away.\n\n> \n> \n>> (b) What about increments created before we have a proper snapshot?\n>> There may be transactions dependent on the increment. This is what\n>> ultimately led to revert of the patch.\n> \n> I don't understand this - why would we ever need to process those increments\n> from before we have a snapshot? Wouldn't they, by definition, be before the\n> slot was active?\n> \n> To me this is the rough equivalent of logical decoding not giving the initial\n> state of all tables. 
You need some process outside of logical decoding to get\n> that (obviously we have some support for that via the exported data snapshot\n> during slot creation).\n> \n\nWhich is what already happens during tablesync, no? We more or less copy\nsequences as if they were tables.\n\n> I assume that part of the initial sync would have to be a new sequence\n> synchronization step that reads all the sequence states on the publisher and\n> ensures that the subscriber sequences are at the same point. There's a bit of\n> trickiness there, but it seems entirely doable. The logical replication replay\n> support for sequences will have to be a bit careful about not decreasing the\n> subscriber's sequence values - the standby initially will be ahead of the\n> increments we'll see in the WAL. But that seems inevitable given the\n> non-transactional nature of sequences.\n> \n\nSee fetch_sequence_data / copy_sequence in the patch. The bit about\nensuring the sequence does not go away (say, using page LSN and/or LSN\nof the increment) is not there; however, isn't that pretty much what I\nproposed doing for \"reconciling\" the sequence state logged at COMMIT?\n\n> \n>> This version of the patch tries to do the opposite thing - make sure\n>> that the state after each commit matches what the transaction might have\n>> seen (for sequences it accessed). It's imperfect, because it might log a\n>> state generated \"after\" the sequence got accessed - it focuses on the\n>> guarantee not to generate duplicate values.\n> \n> That approach seems quite wrong to me.\n> \n\nWhy? Because it might log a state for the sequence as of COMMIT, when the\ntransaction accessed the sequence much earlier? That is, this may happen:\n\nT1: nextval('s') -> 1\nT2: call nextval('s') 1000000x\nT1: commit\n\nand T1 will log sequence state ~1000001, give or take. I don't think\nthere's a way around that, given the non-transactional nature of\nsequences. 
And I'm not convinced this is an issue, as it ensures\nuniqueness of values generated on the subscriber. And I think it's\nreasonable to replicate the sequence state as of the commit (because\nthat's what you'd see on the primary).\n\n> \n>>> I'm going to confess that I have no really specific idea how to\n>>> implement that. I'm just not sufficiently familiar with this code.\n>>> However, I suspect that the solution lies in changing things on the\n>>> decoding side rather than in the WAL format. I feel like the\n>>> information that we need in order to do the right thing must already\n>>> be present in the WAL. If it weren't, then how could crash recovery\n>>> work correctly, or physical replication? At any given moment, you can\n>>> choose to promote a physical standby, and at that point the state you\n>>> observe on the new primary had better be some state that existed on\n>>> the primary at some point in its history. At any moment, you can\n>>> unplug the primary, restart it, and run crash recovery, and if you do,\n>>> you had better end up with some state that existed on the primary at\n>>> some point shortly before the crash.\n> \n> One minor exception here is that there's no real time bound to see the last\n> few sequence increments if nothing after the XLOG_SEQ_LOG records forces a WAL\n> flush.\n> \n\nRight. Another issue is we ignore stuff that happened in aborted\ntransactions, so then nextval('s') in another transaction may not wait\nfor syncrep to confirm receiving that WAL. Which is a data loss case,\nsee [1]:\n\n[1]\nhttps://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9%40enterprisedb.com\n\n> \n>>> I think that there are actually a\n>>> few subtle inaccuracies in the last two sentences, because actually\n>>> the order in which transactions become visible on a physical standby\n>>> can differ from the order in which it happens on the primary, but I\n>>> don't think that actually changes the picture much. 
The point is that\n>>> the WAL is the definitive source of information about what happened\n>>> and in what order it happened, and we use it in that way already in\n>>> the context of physical replication, and of standbys. If logical\n>>> decoding has a problem with some case that those systems handle\n>>> correctly, the problem is with logical decoding, not the WAL format.\n>>>\n>>\n>> The problem lies in how we log sequences. If we wrote each individual\n>> increment to WAL, it might work the way you propose (except for cases\n>> with sequences created in a transaction, etc.). But that's not what we\n>> do - we log sequence increments in batches of 32 values, and then only\n>> modify the sequence relfilenode.\n> \n>> This works for physical replication, because the WAL describes the\n>> \"next\" state of the sequence (so if you do \"SELECT * FROM sequence\"\n>> you'll not see the same state, and the sequence value may \"jump ahead\"\n>> after a failover).\n>>\n>> But for logical replication this does not work, because the transaction\n>> might depend on a state created (WAL-logged) by some other transaction.\n>> And perhaps that transaction actually happened *before* we even built\n>> the first snapshot for decoding :-/\n> \n> I really can't follow the \"depend on state ... by some other transaction\"\n> aspect.\n> \n\nT1: nextval('s') -> writes WAL, covering the next 32 increments\nT2: nextval('s') -> no WAL generated, covered by T1 WAL\n\nThis is what I mean by \"dependency\" on state logged by another\ntransaction. 
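To make the batching concrete, here's a toy model of that behaviour (hypothetical code - not the real nextval()/WAL machinery, just the SEQ_LOG_VALS idea):

```python
# Toy model of SEQ_LOG_VALS-style sequence WAL logging: a WAL record is
# emitted only when the previous record's "coverage" is exhausted, and
# it records a value 32 ahead of the current one. Increments that fall
# inside the covered range emit no WAL at all, so they depend on the
# record written on behalf of an earlier (possibly other) transaction.
SEQ_LOG_VALS = 32

last_value = 0
log_cnt = 0     # values still covered by the last WAL record
wal = []        # simulated WAL stream: (xid, logged_value)

def nextval(xid):
    global last_value, log_cnt
    last_value += 1
    if log_cnt == 0:
        wal.append((xid, last_value + SEQ_LOG_VALS))  # log the "future" state
        log_cnt = SEQ_LOG_VALS
    log_cnt -= 1
    return last_value

nextval("T1")   # T1 emits a WAL record covering the next 32 values
nextval("T2")   # T2 emits nothing: its increment is covered by T1's record
print(wal)      # -> [('T1', 33)]
```

Here the value handed to T2 is durable only because of WAL written on behalf of T1, which is exactly the cross-transaction dependency at issue.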
It already causes problems with streaming replication (see\nthe reference to syncrep above), logical replication has the same issue.\n\n> \n> Even the case of a sequence that is renamed inside a transaction that did\n> *not* create / reset the sequence and then also triggers increment of the\n> sequence seems to be dealt with reasonably by processing sequence increments\n> outside a transaction - the old name will be used for the increments, replay\n> of the renaming transaction would then implement the rename in a hypothetical\n> DDL-replay future.\n> \n> \n>> There's also the issue with what snapshot to use when decoding these\n>> transactional changes in logical decoding (see\n> \n> Incomplete parenthetical? Or were you referencing the next paragraph?\n> \n> What are the transactional changes you're referring to here?\n> \n\nSorry, IIRC I merely wanted to mention/reference the snapshot issue in\nthe thread [2] that I ended up referencing in the next paragraph.\n\n\n[2]\nhttps://www.postgresql.org/message-id/00708727-d856-1886-48e3-811296c7ba8c%40enterprisedb.com\n\n> \n> I did some skimming of the referenced thread about the reversal of the last\n> approach, but I couldn't really understand what the fundamental issues were\n> with the reverted implementation - it's a very long thread and references\n> other threads.\n> \n\nYes, it's long/complex, but I intentionally linked to a specific message\nwhich describes the issue ...\n\nIt's entirely possible there is a simple fix for the issue, and I just\ngot confused / unable to see the solution. The whole issue was due to\nhaving a mix of transactional and non-transactional cases, similarly to\nlogical messages - and logicalmsg_decode() has the same issue, so maybe\nlet's talk about that for a moment.\n\nSee [3] and imagine you're dealing with a transactional message, but\nyou're still building a consistent snapshot. 
So the first branch applies:\n\n if (transactional &&\n !SnapBuildProcessChange(builder, xid, buf->origptr))\n return;\n\nbut because we don't have a snapshot, SnapBuildProcessChange does this:\n\n if (builder->state < SNAPBUILD_FULL_SNAPSHOT)\n return false;\n\nwhich however means logicalmsg_decode() does\n\n snapshot = SnapBuildGetOrBuildSnapshot(builder);\n\nwhich crashes, because it hits this assert:\n\n Assert(builder->state == SNAPBUILD_CONSISTENT);\n\nThe sequence decoding did almost the same thing, with the same issue.\nMaybe the correct thing to do is to just ignore the change in this case?\nPresumably it'd be replicated by tablesync. But we've been unable to\nconvince ourselves that's correct, or what snapshot to pass to\nReorderBufferQueueMessage/ReorderBufferQueueSequence.\n\n\n[3]\nhttps://github.com/postgres/postgres/blob/master/src/backend/replication/logical/decode.c#L585\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 17 Nov 2022 12:39:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 8:41 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> There's a couple of caveats, though:\n>\n> 1) Maybe we should focus more on \"actually observed\" state instead of\n> \"observable\". Who cares if the sequence moved forward in a transaction\n> that was ultimately rolled back? No committed transaction should have\n> observer those values - in a way, the last \"valid\" state of the sequence\n> is the last value generated in a transaction that ultimately committed.\n\nWhen I say \"observable\" I mean from a separate transaction, not one\nthat is making changes to things.\n\nI said \"observable\" rather than \"actually observed\" because we neither\nknow nor care whether someone actually ran a SELECT statement at any\ngiven moment in time, just what they would have seen if they did.\n\n> 2) I think what matters more is that we never generate duplicate value.\n> That is, if you generate a value from a sequence, commit a transaction\n> and replicate it, then the logical standby should not generate the same\n> value from the sequence. This guarantee seems necessary for \"failover\"\n> to logical standby.\n\nI think that matters, but I don't think it's sufficient. We need to\npreserve the order in which things appear to happen, and which changes\nare and are not atomic, not just the final result.\n\n> Well, yeah - we can either try to perform the stuff independently of the\n> transactions that triggered it, or we can try making it part of some of\n> the transactions. Each of those options has problems, though :-(\n>\n> The first version of the patch tried the first approach, i.e. decode the\n> increments and apply that independently. But:\n>\n> (a) What would you do with increments of sequences created/reset in a\n> transaction? 
Can't apply those outside the transaction, because it\n> might be rolled back (and that state is not visible on primary).\n\nIf the state isn't going to be visible until the transaction commits,\nit has to be replicated as part of the transaction. If I create a\nsequence and then nextval it a bunch of times, I can't replicate that\nby first creating the sequence, and then later, as a separate\noperation, replicating the nextvals. If I do that, then there's an\nintermediate state visible on the replica that was never visible on\nthe origin server. That's broken.\n\n> (b) What about increments created before we have a proper snapshot?\n> There may be transactions dependent on the increment. This is what\n> ultimately led to revert of the patch.\n\nWhatever problem exists here is with the implementation, not the\nconcept. If you copy the initial state as it exists at some moment in\ntime to a replica, and then replicate all the changes that happen\nafterward to that replica without messing up the order, the replica\nWILL be in sync with the origin server. The things that happen before\nyou copy the initial state do not and cannot matter.\n\nBut what you're describing sounds like the changes aren't really\nreplicated in visibility order, and then it is easy to see how a\nproblem like this can happen. Because now, an operation that actually\nbecame visible just before or just after the initial copy was taken\nmight be thought to belong on the other side of that boundary, and\nthen everything will break. And it sounds like that is what you are\ndescribing.\n\n> This version of the patch tries to do the opposite thing - make sure\n> that the state after each commit matches what the transaction might have\n> seen (for sequences it accessed). It's imperfect, because it might log a\n> state generated \"after\" the sequence got accessed - it focuses on the\n> guarantee not to generate duplicate values.\n\nLike Andres, I just can't imagine this being correct. 
It feels like\nit's trying to paper over the failure to do the replication properly\nduring the transaction by overwriting state at the end.\n\n> Yes, this would mean we accept we may end up with something like this:\n>\n> 1: T1 logs sequence state S1\n> 2: someone increments sequence\n> 3: T2 logs sequence stats S2\n> 4: T2 commits\n> 5: T1 commits\n>\n> which \"inverts\" the apply order of S1 vs. S2, because we first apply S2\n> and then the \"old\" S1. But as long as we're smart enough to \"discard\"\n> applying S1, I think that's acceptable - because it guarantees we'll not\n> generate duplicate values (with values in the committed transaction).\n>\n> I'd also argue it does not actually generate invalid state, because once\n> we commit either transaction, S2 is what's visible.\n\nI agree that it's OK if the sequence increment gets merged into the\ncommit that immediately follows. However, I disagree with the idea of\ndiscarding the second update on the grounds that it would make the\nsequence go backward and we know that can't be right. That algorithm\nworks in the really specific case where the only operations are\nincrements. As soon as anyone does anything else to the sequence, such\nan algorithm can no longer work. Nor can it work for objects that are\nnot sequences. The alternative strategy of replicating each change\nexactly once and in the correct order works for all current and future\nobject types in all cases.\n\n> > Your alternative proposal says \"The other option might be to make\n> > these messages non-transactional, in which case we'd separate the\n> > ordering from COMMIT ordering, evading the reordering problem.\" But I\n> > don't think that avoids the reordering problem at all.\n>\n> I don't understand why. Why would it not address the reordering issue?\n>\n> > Nor do I think it's correct.\n>\n> Nor do I understand this. 
I mean, isn't it essentially the option you\n> mentioned earlier - treating the non-transactional actions as\n> independent transactions? Yes, we'd be batching them so that we'd not\n> see \"intermediate\" states, but those are not observed by anyone.\n\nI don't think that batching them is a bad idea, in fact I think it's\nnecessary. But those batches still have to be applied at the right\ntime relative to the sequence of commits.\n\n> I'm confused. Isn't that pretty much exactly what I'm proposing? Imagine\n> you have something like this:\n>\n> 1: T1 does something and also increments a sequence\n> 2: T1 logs state of the sequence (right before commit)\n> 3: T1 writes COMMIT\n>\n> Now when we decode/apply this, we end up doing this:\n>\n> 1: decode all T1 changes, stash them\n> 2: decode the sequence state and apply it separately\n> 3: decode COMMIT, apply all T1 changes\n>\n> There might be other transactions interleaving with this, but I think\n> it'd behave correctly. What example would not work?\n\nWhat if one of the other transactions renames the sequence, or changes\nthe current value, or does basically anything to it other than\nnextval?\n\n> The problem lies in how we log sequences. If we wrote each individual\n> increment to WAL, it might work the way you propose (except for cases\n> with sequences created in a transaction, etc.). 
But that's not what we\n> do - we log sequence increments in batches of 32 values, and then only\n> modify the sequence relfilenode.\n>\n> This works for physical replication, because the WAL describes the\n> \"next\" state of the sequence (so if you do \"SELECT * FROM sequence\"\n> you'll not see the same state, and the sequence value may \"jump ahead\"\n> after a failover).\n>\n> But for logical replication this does not work, because the transaction\n> might depend on a state created (WAL-logged) by some other transaction.\n> And perhaps that transaction actually happened *before* we even built\n> the first snapshot for decoding :-/\n\nI agree that there's a problem here but I don't think that it's a huge\nproblem. I think that it's not QUITE right to think about what state\nis visible on the primary. It's better to think about what state would\nbe visible on the primary if it crashed and restarted after writing\nany given amount of WAL, or what would be visible on a physical\nstandby after replaying any given amount of WAL. If logical\nreplication mimics that, I think it's as correct as it needs to be. If\nnot, those other systems are broken, too.\n\nSo I think what should happen is that when we write a WAL record\nsaying that the sequence has been incremented by 32, that should be\nlogically replicated after all commits whose commit record precedes\nthat WAL record and before commits whose commit record follows that\nWAL record. 
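To make that rule concrete, here is a purely illustrative sketch (hypothetical Python, not code from any patch): reduce the WAL to commit records and non-transactional sequence records, then apply them strictly in LSN order, so each sequence record acts as its own tiny apply step:

```python
# Illustrative model only. Each record is (lsn, kind, payload):
# 'commit' carries a transaction id whose buffered changes become visible;
# 'seq' is a non-transactional sequence advancement (name, last_value).
# Applying strictly in LSN order puts every 'seq' record after all commits
# whose records precede it and before all commits whose records follow it.

def apply_in_wal_order(wal):
    applied = []       # transactions in the order they become visible
    sequences = {}     # last replicated state of each sequence
    for lsn, kind, payload in wal:
        if kind == 'commit':
            applied.append(payload)
        elif kind == 'seq':
            name, last_value = payload
            sequences[name] = last_value
    return applied, sequences

# Two open transactions (T2, T3) advance sequences; their commit records
# land after another transaction (T4) has already committed.
wal = [
    (1, 'commit', 'T1'),
    (2, 'seq', ('a_seq', 32)),   # logged while T2 is still open
    (3, 'seq', ('b_seq', 32)),   # logged while T3 is still open
    (4, 'commit', 'T4'),
    (5, 'commit', 'T2'),
    (6, 'commit', 'T3'),
]

applied, sequences = apply_in_wal_order(wal)
```

Both advancements become visible before T4 is applied - which is what a physical standby replaying the same WAL would show - even though T2 and T3 commit later.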
It is OK to merge the replication of that record into one\nof either the immediately preceding or the immediately following\ncommit, but you can't do it as part of any other commit because then\nyou're changing the order of operations.\n\nFor instance, consider:\n\nT1: BEGIN; INSERT; COMMIT;\nT2: BEGIN; nextval('a_seq') causing a logged advancement to the sequence;\nT3: BEGIN; nextval('b_seq') causing a logged advancement to the sequence;\nT4: BEGIN; INSERT; COMMIT;\nT2: COMMIT;\nT3: COMMIT;\n\nThe sequence increments can be replicated as part of T1 or part of T4\nor in between applying T1 and T4. They cannot be applied as part of T2\nor T3. Otherwise, suppose T4 read the current value of one of those\nsequences and included that value in the inserted row, and the target\ntable happened to be the sequence_value_at_end_of_period table. Then\nimagine that after receiving the data for T4 and replicating it, the\nprimary server is hit by a meteor and the replica is promoted. Well,\nit's now possible for some new transaction to get a smaller value from that\nsequence than what has already been written to the\nsequence_value_at_end_of_period table, which will presumably break the\napplication.\n\n> > In particular, I think it's likely that the \"non-transactional\n> > messages\" that you mention earlier don't get applied at the point in\n> > the commit sequence where they were found in the WAL. Not sure why\n> > exactly, but perhaps the point at which we're reading WAL runs ahead\n> > of the decoding per se, or something like that, and thus those\n> > non-transactional messages arrive too early relative to the commit\n> > ordering. Possibly that could be changed, and they could be buffered\n>\n> I'm not sure which case of \"non-transactional messages\" this refers to,\n> so I can't quite respond to these comments. 
Perhaps you mean the\n> problems that killed the previous patch [1]?\n\nIn http://postgr.es/m/8bf1c518-b886-fe1b-5c42-09f9c663146d@enterprisedb.com\nyou said \"The other option might be to make these messages\nnon-transactional\". I was referring to that.\n\n> > the transaction whose commit record occurs most nearly prior to, or\n> > most nearly after, the WAL record for the operation itself. Or else,\n> > we could create \"virtual\" transactions for such operations and make\n> > sure those get replayed at the right point in the commit sequence. Or\n> > else, I don't know, maybe something else. But I think the overall\n> > picture is that we need to approach the problem by replicating changes\n> > in WAL order, as a physical standby would do. Saying that a change is\n> > \"nontransactional\" doesn't mean that it's exempt from ordering\n> > requirements; rather, it means that that change has its own place in\n> > that ordering, distinct from the transaction in which it occurred.\n>\n> But doesn't the approach with WAL-logging sequence state before COMMIT,\n> and then applying it independently in WAL-order, do pretty much this?\n\nI'm sort of repeating myself here, but: only if the only operations\nthat ever get performed on sequences are increments. Which is just not\ntrue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Nov 2022 11:04:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-17 12:39:49 +0100, Tomas Vondra wrote:\n> On 11/17/22 03:43, Andres Freund wrote:\n> > On 2022-11-17 02:41:14 +0100, Tomas Vondra wrote:\n> >> Well, yeah - we can either try to perform the stuff independently of the\n> >> transactions that triggered it, or we can try making it part of some of\n> >> the transactions. Each of those options has problems, though :-(\n> >>\n> >> The first version of the patch tried the first approach, i.e. decode the\n> >> increments and apply that independently. But:\n> >>\n> >> (a) What would you do with increments of sequences created/reset in a\n> >> transaction? Can't apply those outside the transaction, because it\n> >> might be rolled back (and that state is not visible on primary).\n> >\n> > I think a reasonable approach could be to actually perform different WAL\n> > logging for that case. It'll require a bit of machinery, but could actually\n> > result in *less* WAL logging overall, because we don't need to emit a WAL\n> > record for each SEQ_LOG_VALS sequence values.\n> >\n>\n> Could you elaborate? Hard to comment without knowing more ...\n>\n> My point was that stuff like this (creating a new sequence or at least a\n> new relfilenode) means we can't apply that independently of the\n> transaction (unlike regular increments). I'm not sure how a change to\n> WAL logging would make that go away.\n\nDifferent WAL logging would make it easy to handle that on the logical\ndecoding level. We don't need to emit WAL records each time a\ncreated-in-this-toplevel-xact sequences gets incremented as they're not\npersisting anyway if the surrounding xact aborts. We already need to remember\nthe filenode so it can be dropped at the end of the transaction, so we could\nemit a single record for each sequence at that point.\n\n\n> >> (b) What about increments created before we have a proper snapshot?\n> >> There may be transactions dependent on the increment. 
This is what\n> >> ultimately led to revert of the patch.\n> >\n> > I don't understand this - why would we ever need to process those increments\n> > from before we have a snapshot? Wouldn't they, by definition, be before the\n> > slot was active?\n> >\n> > To me this is the rough equivalent of logical decoding not giving the initial\n> > state of all tables. You need some process outside of logical decoding to get\n> > that (obviously we have some support for that via the exported data snapshot\n> > during slot creation).\n> >\n>\n> Which is what already happens during tablesync, no? We more or less copy\n> sequences as if they were tables.\n\nI think you might have to copy sequences after tables, but I'm not sure. But\notherwise, yea.\n\n\n> > I assume that part of the initial sync would have to be a new sequence\n> > synchronization step that reads all the sequence states on the publisher and\n> > ensures that the subscriber sequences are at the same point. There's a bit of\n> > trickiness there, but it seems entirely doable. The logical replication replay\n> > support for sequences will have to be a bit careful about not decreasing the\n> > subscriber's sequence values - the standby initially will be ahead of the\n> > increments we'll see in the WAL. But that seems inevitable given the\n> > non-transactional nature of sequences.\n> >\n>\n> See fetch_sequence_data / copy_sequence in the patch. The bit about\n> ensuring the sequence does not go away (say, using page LSN and/or LSN\n> of the increment) is not there, however isn't that pretty much what I\n> proposed doing for \"reconciling\" the sequence state logged at COMMIT?\n\nWell, I think the approach of logging all sequence increments at commit is the\nwrong idea...\n\nCreating a new relfilenode whenever a sequence is incremented seems like a\ncomplete no-go to me. 
That increases sequence overhead by several orders of\nmagnitude and will lead to *awful* catalog bloat on the subscriber.\n\n\n> >\n> >> This version of the patch tries to do the opposite thing - make sure\n> >> that the state after each commit matches what the transaction might have\n> >> seen (for sequences it accessed). It's imperfect, because it might log a\n> >> state generated \"after\" the sequence got accessed - it focuses on the\n> >> guarantee not to generate duplicate values.\n> >\n> > That approach seems quite wrong to me.\n> >\n>\n> Why? Because it might log a state for sequence as of COMMIT, when the\n> transaction accessed the sequence much earlier?\n\nMainly because sequences aren't transactional and trying to make them will\nrequire awful contortions.\n\nWhile there are cases where we don't flush the WAL / wait for syncrep for\nsequences, we do replicate their state correctly on physical replication. If\nan LSN has been acknowledged as having been replicated, we won't just lose a\nprior sequence increment after promotion, even if the transaction didn't [yet]\ncommit.\n\nIt's completely valid for an application to call nextval() in one transaction,\npotentially even abort it, and then only use that sequence value in another\ntransaction.\n\n\n\n> > I did some skimming of the referenced thread about the reversal of the last\n> > approach, but I couldn't really understand what the fundamental issues were\n> > with the reverted implementation - it's a very long thread and references\n> > other threads.\n> >\n>\n> Yes, it's long/complex, but I intentionally linked to a specific message\n> which describes the issue ...\n>\n> It's entirely possible there is a simple fix for the issue, and I just\n> got confused / unable to see the solution. 
The whole issue was due to\n> having a mix of transactional and non-transactional cases, similarly to\n> logical messages - and logicalmsg_decode() has the same issue, so maybe\n> let's talk about that for a moment.\n>\n> See [3] and imagine you're dealing with a transactional message, but\n> you're still building a consistent snapshot. So the first branch applies:\n>\n> if (transactional &&\n> !SnapBuildProcessChange(builder, xid, buf->origptr))\n> return;\n>\n> but because we don't have a snapshot, SnapBuildProcessChange does this:\n>\n> if (builder->state < SNAPBUILD_FULL_SNAPSHOT)\n> return false;\n\nIn this case we'd just return without further work in logicalmsg_decode(). The\nproblematic case presumably is when we have a full snapshot but aren't yet\nconsistent, but xid is >= next_phase_at. Then SnapBuildProcessChange() returns\ntrue. And we reach:\n\n> which however means logicalmsg_decode() does\n>\n> snapshot = SnapBuildGetOrBuildSnapshot(builder);\n>\n> which crashes, because it hits this assert:\n>\n> Assert(builder->state == SNAPBUILD_CONSISTENT);\n\nI think the problem here is just that we shouldn't even try to get a snapshot\nin the transactional case - note that it's not even used in\nReorderBufferQueueMessage() for transactional messages. The transactional case\nneeds to behave like a \"normal\" change - we might never decode the message if\nthe transaction ends up committing before we've reached a consistent point.\n\n\n> The sequence decoding did almost the same thing, with the same issue.\n> Maybe the correct thing to do is to just ignore the change in this case?\n\nNo, I don't think that'd be correct, the message | sequence needs to be queued\nfor the transaction. If the transaction ends up committing after we've reached\nconsistency, we'll get the correct snapshot from the base snapshot set in\nSnapBuildProcessChange().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 17 Nov 2022 09:07:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 11/17/22 18:07, Andres Freund wrote:\n> Hi,\n> \n> On 2022-11-17 12:39:49 +0100, Tomas Vondra wrote:\n>> On 11/17/22 03:43, Andres Freund wrote:\n>>> On 2022-11-17 02:41:14 +0100, Tomas Vondra wrote:\n>>>> Well, yeah - we can either try to perform the stuff independently of the\n>>>> transactions that triggered it, or we can try making it part of some of\n>>>> the transactions. Each of those options has problems, though :-(\n>>>>\n>>>> The first version of the patch tried the first approach, i.e. decode the\n>>>> increments and apply that independently. But:\n>>>>\n>>>> (a) What would you do with increments of sequences created/reset in a\n>>>> transaction? Can't apply those outside the transaction, because it\n>>>> might be rolled back (and that state is not visible on primary).\n>>>\n>>> I think a reasonable approach could be to actually perform different WAL\n>>> logging for that case. It'll require a bit of machinery, but could actually\n>>> result in *less* WAL logging overall, because we don't need to emit a WAL\n>>> record for each SEQ_LOG_VALS sequence values.\n>>>\n>>\n>> Could you elaborate? Hard to comment without knowing more ...\n>>\n>> My point was that stuff like this (creating a new sequence or at least a\n>> new relfilenode) means we can't apply that independently of the\n>> transaction (unlike regular increments). I'm not sure how a change to\n>> WAL logging would make that go away.\n> \n> Different WAL logging would make it easy to handle that on the logical\n> decoding level. We don't need to emit WAL records each time a\n> created-in-this-toplevel-xact sequences gets incremented as they're not\n> persisting anyway if the surrounding xact aborts. 
We already need to remember\n> the filenode so it can be dropped at the end of the transaction, so we could\n> emit a single record for each sequence at that point.\n> \n> \n>>>> (b) What about increments created before we have a proper snapshot?\n>>>> There may be transactions dependent on the increment. This is what\n>>>> ultimately led to revert of the patch.\n>>>\n>>> I don't understand this - why would we ever need to process those increments\n>>> from before we have a snapshot? Wouldn't they, by definition, be before the\n>>> slot was active?\n>>>\n>>> To me this is the rough equivalent of logical decoding not giving the initial\n>>> state of all tables. You need some process outside of logical decoding to get\n>>> that (obviously we have some support for that via the exported data snapshot\n>>> during slot creation).\n>>>\n>>\n>> Which is what already happens during tablesync, no? We more or less copy\n>> sequences as if they were tables.\n> \n> I think you might have to copy sequences after tables, but I'm not sure. But\n> otherwise, yea.\n> \n> \n>>> I assume that part of the initial sync would have to be a new sequence\n>>> synchronization step that reads all the sequence states on the publisher and\n>>> ensures that the subscriber sequences are at the same point. There's a bit of\n>>> trickiness there, but it seems entirely doable. The logical replication replay\n>>> support for sequences will have to be a bit careful about not decreasing the\n>>> subscriber's sequence values - the standby initially will be ahead of the\n>>> increments we'll see in the WAL. But that seems inevitable given the\n>>> non-transactional nature of sequences.\n>>>\n>>\n>> See fetch_sequence_data / copy_sequence in the patch. 
The bit about\n>> ensuring the sequence does not go away (say, using page LSN and/or LSN\n>> of the increment) is not there, however isn't that pretty much what I\n>> proposed doing for \"reconciling\" the sequence state logged at COMMIT?\n> \n> Well, I think the approach of logging all sequence increments at commit is the\n> wrong idea...\n> \n\nBut we're not logging all sequence increments, no?\n\nWe're logging the state for each sequence touched by the transaction,\nbut only once - if the transaction incremented the sequence 1000000x\ntimes, we'll still log it just once (at least for this particular purpose).\n\nYes, if transactions touch each sequence just once, then we're logging\nindividual increments.\n\nThe only more efficient solution would be to decode the existing WAL\n(every ~32 increments), and perhaps also tracking which sequences were\naccessed by a transaction. And then simply stashing the increments in a\nglobal reorderbuffer hash table, and then applying only the last one at\ncommit time. This would require the transactional / non-transactional\nbehavior (I think), but perhaps we can make that work.\n\nOr are you thinking about some other scheme?\n\n> Creating a new relfilenode whenever a sequence is incremented seems like a\n> complete no-go to me. That increases sequence overhead by several orders of\n> magnitude and will lead to *awful* catalog bloat on the subscriber.\n> \n\nYou mean on the apply side? Yes, I agree this needs a better\napproach, I've focused on the decoding side so far.\n\n> \n>>>\n>>>> This version of the patch tries to do the opposite thing - make sure\n>>>> that the state after each commit matches what the transaction might have\n>>>> seen (for sequences it accessed). It's imperfect, because it might log a\n>>>> state generated \"after\" the sequence got accessed - it focuses on the\n>>>> guarantee not to generate duplicate values.\n>>>\n>>> That approach seems quite wrong to me.\n>>>\n>>\n>> Why? 
Because it might log a state for sequence as of COMMIT, when the\n>> transaction accessed the sequence much earlier?\n> \n> Mainly because sequences aren't transactional and trying to make them will\n> require awful contortions.\n> \n> While there are cases where we don't flush the WAL / wait for syncrep for\n> sequences, we do replicate their state correctly on physical replication. If\n> an LSN has been acknowledged as having been replicated, we won't just lose a\n> prior sequence increment after promotion, even if the transaction didn't [yet]\n> commit.\n> \n\nTrue, I agree we should aim to achieve that.\n\n> It's completely valid for an application to call nextval() in one transaction,\n> potentially even abort it, and then only use that sequence value in another\n> transaction.\n> \n\nI don't quite agree with that - we make no promises about what happens\nto sequence changes in aborted transactions. I don't think I've ever\nseen an application using such a pattern either.\n\nAnd I'd argue we already fail to uphold such a guarantee, because we don't\nwait for syncrep if the sequence WAL happened in an aborted transaction. So\nif you use the value elsewhere (outside PG), you may lose it.\n\nAnyway, I think the scheme I outlined above (with stashing decoded\nincrements, logged once every 32 values) and applying the latest\nincrement for all sequences at commit, would work.\n\n> \n> \n>>> I did some skimming of the referenced thread about the reversal of the last\n>>> approach, but I couldn't really understand what the fundamental issues were\n>>> with the reverted implementation - it's a very long thread and references\n>>> other threads.\n>>>\n>>\n>> Yes, it's long/complex, but I intentionally linked to a specific message\n>> which describes the issue ...\n>>\n>> It's entirely possible there is a simple fix for the issue, and I just\n>> got confused / unable to see the solution. 
The whole issue was due to\n>> having a mix of transactional and non-transactional cases, similarly to\n>> logical messages - and logicalmsg_decode() has the same issue, so maybe\n>> let's talk about that for a moment.\n>>\n>> See [3] and imagine you're dealing with a transactional message, but\n>> you're still building a consistent snapshot. So the first branch applies:\n>>\n>> if (transactional &&\n>> !SnapBuildProcessChange(builder, xid, buf->origptr))\n>> return;\n>>\n>> but because we don't have a snapshot, SnapBuildProcessChange does this:\n>>\n>> if (builder->state < SNAPBUILD_FULL_SNAPSHOT)\n>> return false;\n> \n> In this case we'd just return without further work in logicalmsg_decode(). The\n> problematic case presumably is is when we have a full snapshot but aren't yet\n> consistent, but xid is >= next_phase_at. Then SnapBuildProcessChange() returns\n> true. And we reach:\n> \n>> which however means logicalmsg_decode() does\n>>\n>> snapshot = SnapBuildGetOrBuildSnapshot(builder);\n>>\n>> which crashes, because it hits this assert:\n>>\n>> Assert(builder->state == SNAPBUILD_CONSISTENT);\n> \n> I think the problem here is just that we shouldn't even try to get a snapshot\n> in the transactional case - note that it's not even used in\n> ReorderBufferQueueMessage() for transactional message. The transactional case\n> needs to behave like a \"normal\" change - we might never decode the message if\n> the transaction ends up committing before we've reached a consistent point.\n> \n> \n>> The sequence decoding did almost the same thing, with the same issue.\n>> Maybe the correct thing to do is to just ignore the change in this case?\n> \n> No, I don't think that'd be correct, the message | sequence needs to be queued\n> for the transaction. If the transaction ends up committing after we've reached\n> consistency, we'll get the correct snapshot from the base snapshot set in\n> SnapBuildProcessChange().\n> \n\nYeah, I think you're right. 
I looked at this again, with a fresh mind, and\nI came to the same conclusion. Roughly what the attached patch does.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 17 Nov 2022 22:13:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-17 22:13:23 +0100, Tomas Vondra wrote:\n> On 11/17/22 18:07, Andres Freund wrote:\n> > On 2022-11-17 12:39:49 +0100, Tomas Vondra wrote:\n> >> On 11/17/22 03:43, Andres Freund wrote:\n> >>> I assume that part of the initial sync would have to be a new sequence\n> >>> synchronization step that reads all the sequence states on the publisher and\n> >>> ensures that the subscriber sequences are at the same point. There's a bit of\n> >>> trickiness there, but it seems entirely doable. The logical replication replay\n> >>> support for sequences will have to be a bit careful about not decreasing the\n> >>> subscriber's sequence values - the standby initially will be ahead of the\n> >>> increments we'll see in the WAL. But that seems inevitable given the\n> >>> non-transactional nature of sequences.\n> >>>\n> >>\n> >> See fetch_sequence_data / copy_sequence in the patch. The bit about\n> >> ensuring the sequence does not go away (say, using page LSN and/or LSN\n> >> of the increment) is not there, however isn't that pretty much what I\n> >> proposed doing for \"reconciling\" the sequence state logged at COMMIT?\n> >\n> > Well, I think the approach of logging all sequence increments at commit is the\n> > wrong idea...\n> >\n>\n> But we're not logging all sequence increments, no?\n\nI was imprecise - I meant streaming them out at commit.\n\n\n\n> Yeah, I think you're right. I looked at this again, with fresh mind, and\n> I came to the same conclusion. Roughly what the attached patch does.\n\nTo me it seems a bit nicer to keep the SnapBuildGetOrBuildSnapshot() call in\ndecode.c instead of moving it to reorderbuffer.c. 
Perhaps we should add a\nsnapbuild.c helper similar to SnapBuildProcessChange() for non-transactional\nchanges that also gets a snapshot?\n\nCould look something like\n\n Snapshot snapshot = NULL;\n\n if (message->transactional &&\n !SnapBuildProcessChange(builder, xid, buf->origptr))\n return;\n else if (!SnapBuildProcessStateNonTx(builder, &snapshot))\n return;\n\n ...\n\nOr perhaps we should just bite the bullet and add an argument to\nSnapBuildProcessChange to deal with that?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 17 Nov 2022 19:03:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nHere's a rebased version of the sequence decoding patch.\n\n0001 is a fix for the pre-existing issue in logicalmsg_decode,\nattempting to build a snapshot before getting into a consistent state.\nAFAICS this only affects assert-enabled builds and is otherwise\nharmless, because we are not actually using the snapshot (apply gets a\nvalid snapshot from the transaction).\n\nThis is mostly the fix I shared in November, except that I kept the call\nin decode.c (per comment from Andres). I haven't added any argument to\nSnapBuildProcessChange because we may need to backpatch this (and it\ndidn't seem much simpler, IMHO).\n\n0002 is a rebased version of the original approach, committed as\n0da92dc530 (and then reverted in 2c7ea57e56). This includes the same fix\nas 0001 (for the sequence messages), the primary reason for the revert.\n\nThe rebase was not quite straightforward, due to extensive changes in\nhow publications deal with tables/schemas, and so on. So this adopts\nthem, but other than that it behaves just like the original patch.\n\nSo this abandons the approach with COMMIT-time logging for sequences\naccessed/modified by the transaction, proposed in response to the\nrevert. It seemed like a good (and simpler) alternative, but there were\nfar too many issues - higher overhead, ordering of records for\nconcurrent transactions, making it reliable, etc.\n\nI think the main remaining question is what's the goal of this patch, or\nrather what \"guarantees\" we expect from it - what we expect to see on\nthe replica after incrementing a sequence on the primary.\n\nRobert described [1] a model and argued the standby should not \"invent\"\nnew states. It's a long / detailed explanation, I'm not going to try to\nshorten it here because that'd inevitably omit various details. 
So\nbetter read it whole ...\n\nAnyway, I don't think this approach (essentially treating most sequence\nincrements as non-transactional) breaks any consistency guarantees or\nintroduces any \"new\" states that would not be observable on the primary.\nIn a way, this treats non-transactional sequence increments as separate\ntransactions, and applies them directly. If you read the sequence in\nbetween two commits, you might see any \"intermediate\" state of the\nsequence - that's the nature of non-transactional changes.\n\nWe could \"postpone\" applying the decoded changes until the first next\ncommit, which might improve performance if a transaction is long enough\nto cover many sequence increments. But that's more a performance\noptimization than a matter of correctness, IMHO.\n\nOne caveat is that because of how WAL works for sequences, we're\nactually decoding changes \"ahead\" so if you read the sequence on the\nsubscriber it'll actually seem to be slightly ahead (up to ~32 values).\nThis could be eliminated by setting SEQ_LOG_VALS to 0, which however\nincreases the sequence costs, of course.\n\nThis however brings me to the original question what's the purpose of\nthis patch - and that's essentially keeping sequences up to date to make\nthem usable after a failover. We can't generate values from the sequence\non the subscriber, because it'd just get overwritten. And from this\npoint of view, it's also fine that the sequence is slightly ahead,\nbecause that's what happens after crash recovery anyway. And we're not\nguaranteeing the sequences to be gap-less.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoaYG7672OgdwpGm5cOwy8_ftbs%3D3u-YMvR9fiJwQUzgrQ%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 10 Jan 2023 19:32:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 1:32 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> 0001 is a fix for the pre-existing issue in logicalmsg_decode,\n> attempting to build a snapshot before getting into a consistent state.\n> AFAICS this only affects assert-enabled builds and is otherwise\n> harmless, because we are not actually using the snapshot (apply gets a\n> valid snapshot from the transaction).\n>\n> This is mostly the fix I shared in November, except that I kept the call\n> in decode.c (per comment from Andres). I haven't added any argument to\n> SnapBuildProcessChange because we may need to backpatch this (and it\n> didn't seem much simpler, IMHO).\n\nI tend to associate transactional behavior with snapshots, so it looks\nodd to see code that builds a snapshot only when the message is\nnon-transactional. I think that a more detailed comment spelling out\nthe reasoning would be useful here.\n\n> This however brings me to the original question what's the purpose of\n> this patch - and that's essentially keeping sequences up to date to make\n> them usable after a failover. We can't generate values from the sequence\n> on the subscriber, because it'd just get overwritten. And from this\n> point of view, it's also fine that the sequence is slightly ahead,\n> because that's what happens after crash recovery anyway. And we're not\n> guaranteeing the sequences to be gap-less.\n\nI agree that it's fine for the sequence to be slightly ahead, but I\nthink that it can't be too far ahead without causing problems. Suppose\nfor example that transaction #1 creates a sequence. Transaction #2\ndoes nextval on the sequence a bunch of times and inserts rows into a\ntable using the sequence values as the PK. 
It's fine if the nextval\noperations are replicated ahead of the commit of transaction #2 -- in\nfact I'd say it's necessary for correctness -- but they can't precede\nthe commit of transaction #1, since then the sequence won't exist yet.\nLikewise, if there's an ALTER SEQUENCE that creates a new relfilenode,\nI think that needs to act as a barrier: non-transactional changes that\nhappened before that transaction must also be replicated before that\ntransaction is replicated, and those that happened after that\ntransaction must be replayed after that transaction is\nreplicated. Otherwise, at the very least, there will be states visible\non the standby that were never visible on the origin server, and maybe\nwe'll just straight up get the wrong answer. For instance:\n\n1. nextval, setting last_value to 3\n2. ALTER SEQUENCE, getting a new relfilenode, and also setting last_value to 19\n3. nextval, setting last_value to 20\n\nIf 3 happens before 2, the sequence ends up in the wrong state.\n\nMaybe you've already got this and similar cases totally correctly\nhandled, I'm not sure, just throwing it out there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 14:52:20 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 1/10/23 20:52, Robert Haas wrote:\n> On Tue, Jan 10, 2023 at 1:32 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> 0001 is a fix for the pre-existing issue in logicalmsg_decode,\n>> attempting to build a snapshot before getting into a consistent state.\n>> AFAICS this only affects assert-enabled builds and is otherwise\n>> harmless, because we are not actually using the snapshot (apply gets a\n>> valid snapshot from the transaction).\n>>\n>> This is mostly the fix I shared in November, except that I kept the call\n>> in decode.c (per comment from Andres). I haven't added any argument to\n>> SnapBuildProcessChange because we may need to backpatch this (and it\n>> didn't seem much simpler, IMHO).\n> \n> I tend to associate transactional behavior with snapshots, so it looks\n> odd to see code that builds a snapshot only when the message is\n> non-transactional. I think that a more detailed comment spelling out\n> the reasoning would be useful here.\n> \n\nI'll try adding a comment explaining this, but the reasoning is fairly\nsimple AFAICS:\n\n1) We don't actually need to build the snapshot for transactional\nchanges, because if we end up applying the change, we'll use snapshot\nprovided/maintained by reorderbuffer.\n\n2) But we don't know if we end up applying the change - it may happen\nthis is one of the transactions we're waiting to finish / skipped, in\nwhich case the snapshot is kinda bogus anyway. What \"saved\" us is that\nwe'll not actually use the snapshot in the end. It's just the assert\nthat causes issues.\n\n3) For non-transactional changes, we need a snapshot because we're going\nto execute the callback right away. But in this case the code actually\nprotects against building inconsistent snapshots.\n\n>> This however brings me to the original question what's the purpose of\n>> this patch - and that's essentially keeping sequences up to date to make\n>> them usable after a failover. 
We can't generate values from the sequence\n>> on the subscriber, because it'd just get overwritten. And from this\n>> point of view, it's also fine that the sequence is slightly ahead,\n>> because that's what happens after crash recovery anyway. And we're not\n>> guaranteeing the sequences to be gap-less.\n> \n> I agree that it's fine for the sequence to be slightly ahead, but I\n> think that it can't be too far ahead without causing problems. Suppose\n> for example that transaction #1 creates a sequence. Transaction #2\n> does nextval on the sequence a bunch of times and inserts rows into a\n> table using the sequence values as the PK. It's fine if the nextval\n> operations are replicated ahead of the commit of transaction #2 -- in\n> fact I'd say it's necessary for correctness -- but they can't precede\n> the commit of transaction #1, since then the sequence won't exist yet.\n\nIt's not clear to me how could that even happen. If transaction #1\ncreates a sequence, it's invisible for transaction #2. So how could it\ndo nextval() on it? #2 has to wait for #1 to commit before it can do\nanything on the sequence, which enforces the correct ordering, no?\n\n> Likewise, if there's an ALTER SEQUENCE that creates a new relfilenode,\n> I think that needs to act as a barrier: non-transactional changes that\n> happened before that transaction must also be replicated before that\n> transaction is replicated, and those that happened after that\n> transaction is replicated must be replayed after that transaction is\n> replicated. Otherwise, at the very least, there will be states visible\n> on the standby that were never visible on the origin server, and maybe\n> we'll just straight up get the wrong answer. For instance:\n> \n> 1. nextval, setting last_value to 3\n> 2. ALTER SEQUENCE, getting a new relfilenode, and also set last_value to 19\n> 3. 
nextval, setting last_value to 20\n> \n> If 3 happens before 2, the sequence ends up in the wrong state.\n> \n> Maybe you've already got this and similar cases totally correctly\n> handled, I'm not sure, just throwing it out there.\n> \n\nI believe this should behave correctly too, thanks to locking.\n\nIf a transaction does ALTER SEQUENCE, that locks the sequence, so only\nthat transaction can do stuff with that sequence (and changes from that\npoint are treated as transactional). And everyone else is waiting for #1\nto commit.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Jan 2023 19:29:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\n\nHeikki, CCed you due to the point about 2c03216d8311 below.\n\n\nOn 2023-01-10 19:32:12 +0100, Tomas Vondra wrote:\n> 0001 is a fix for the pre-existing issue in logicalmsg_decode,\n> attempting to build a snapshot before getting into a consistent state.\n> AFAICS this only affects assert-enabled builds and is otherwise\n> harmless, because we are not actually using the snapshot (apply gets a\n> valid snapshot from the transaction).\n\nLGTM.\n\n\n> 0002 is a rebased version of the original approach, committed as\n> 0da92dc530 (and then reverted in 2c7ea57e56). This includes the same fix\n> as 0001 (for the sequence messages), the primary reason for the revert.\n> \n> The rebase was not quite straightforward, due to extensive changes in\n> how publications deal with tables/schemas, and so on. So this adopts\n> them, but other than that it behaves just like the original patch.\n\nThis is a huge diff:\n> 72 files changed, 4715 insertions(+), 612 deletions(-)\n\nIt'd be nice to split it to make review easier. 
Perhaps the sequence decoding\nsupport could be split from the whole publication rigamarole?\n\n\n> This does not include any changes to test_decoding and/or the built-in\n> replication - those will be committed in separate patches.\n\nLooks like that's not the case anymore?\n\n\n> +/*\n> + * Update the sequence state by modifying the existing sequence data row.\n> + *\n> + * This keeps the same relfilenode, so the behavior is non-transactional.\n> + */\n> +static void\n> +SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)\n> +{\n> +\tSeqTable\telm;\n> +\tRelation\tseqrel;\n> +\tBuffer\t\tbuf;\n> +\tHeapTupleData seqdatatuple;\n> +\tForm_pg_sequence_data seq;\n> +\n> +\t/* open and lock sequence */\n> +\tinit_sequence(seqrelid, &elm, &seqrel);\n> +\n> +\t/* lock page' buffer and read tuple */\n> +\tseq = read_seq_tuple(seqrel, &buf, &seqdatatuple);\n> +\n> +\t/* check the comment above nextval_internal()'s equivalent call. */\n> +\tif (RelationNeedsWAL(seqrel))\n> +\t{\n> +\t\tGetTopTransactionId();\n> +\n> +\t\tif (XLogLogicalInfoActive())\n> +\t\t\tGetCurrentTransactionId();\n> +\t}\n> +\n> +\t/* ready to change the on-disk (or really, in-buffer) tuple */\n> +\tSTART_CRIT_SECTION();\n> +\n> +\tseq->last_value = last_value;\n> +\tseq->is_called = is_called;\n> +\tseq->log_cnt = log_cnt;\n> +\n> +\tMarkBufferDirty(buf);\n> +\n> +\t/* XLOG stuff */\n> +\tif (RelationNeedsWAL(seqrel))\n> +\t{\n> +\t\txl_seq_rec\txlrec;\n> +\t\tXLogRecPtr\trecptr;\n> +\t\tPage\t\tpage = BufferGetPage(buf);\n> +\n> +\t\tXLogBeginInsert();\n> +\t\tXLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);\n> +\n> +\t\txlrec.locator = seqrel->rd_locator;\n> +\t\txlrec.created = false;\n> +\n> +\t\tXLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));\n> +\t\tXLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);\n> +\n> +\t\trecptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n> +\n> +\t\tPageSetLSN(page, recptr);\n> +\t}\n> 
+\tEND_CRIT_SECTION();\n> +\n> +\tUnlockReleaseBuffer(buf);\n> +\n> +\t/* Clear local cache so that we don't think we have cached numbers */\n> +\t/* Note that we do not change the currval() state */\n> +\telm->cached = elm->last;\n> +\n> +\trelation_close(seqrel, NoLock);\n> +}\n> +\n> +/*\n> + * Update the sequence state by creating a new relfilenode.\n> + *\n> + * This creates a new relfilenode, to allow transactional behavior.\n> + */\n> +static void\n> +SetSequence_transactional(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)\n> +{\n> +\tSeqTable\telm;\n> +\tRelation\tseqrel;\n> +\tBuffer\t\tbuf;\n> +\tHeapTupleData seqdatatuple;\n> +\tForm_pg_sequence_data seq;\n> +\tHeapTuple\ttuple;\n> +\n> +\t/* open and lock sequence */\n> +\tinit_sequence(seq_relid, &elm, &seqrel);\n> +\n> +\t/* lock page' buffer and read tuple */\n> +\tseq = read_seq_tuple(seqrel, &buf, &seqdatatuple);\n> +\n> +\t/* Copy the existing sequence tuple. */\n> +\ttuple = heap_copytuple(&seqdatatuple);\n> +\n> +\t/* Now we're done with the old page */\n> +\tUnlockReleaseBuffer(buf);\n> +\n> +\t/*\n> +\t * Modify the copied tuple to update the sequence state (similar to what\n> +\t * ResetSequence does).\n> +\t */\n> +\tseq = (Form_pg_sequence_data) GETSTRUCT(tuple);\n> +\tseq->last_value = last_value;\n> +\tseq->is_called = is_called;\n> +\tseq->log_cnt = log_cnt;\n> +\n> +\t/*\n> +\t * Create a new storage file for the sequence - this is needed for the\n> +\t * transactional behavior.\n> +\t */\n> +\tRelationSetNewRelfilenumber(seqrel, seqrel->rd_rel->relpersistence);\n> +\n> +\t/*\n> +\t * Ensure sequence's relfrozenxid is at 0, since it won't contain any\n> +\t * unfrozen XIDs. Same with relminmxid, since a sequence will never\n> +\t * contain multixacts.\n> +\t */\n> +\tAssert(seqrel->rd_rel->relfrozenxid == InvalidTransactionId);\n> +\tAssert(seqrel->rd_rel->relminmxid == InvalidMultiXactId);\n> +\n> +\t/*\n> +\t * Insert the modified tuple into the new storage file. 
This does all the\n> +\t * necessary WAL-logging etc.\n> +\t */\n> +\tfill_seq_with_data(seqrel, tuple);\n> +\n> +\t/* Clear local cache so that we don't think we have cached numbers */\n> +\t/* Note that we do not change the currval() state */\n> +\telm->cached = elm->last;\n> +\n> +\trelation_close(seqrel, NoLock);\n> +}\n> +\n> +/*\n> + * Set a sequence to a specified internal state.\n> + *\n> + * The change is made transactionally, so that on failure of the current\n> + * transaction, the sequence will be restored to its previous state.\n> + * We do that by creating a whole new relfilenode for the sequence; so this\n> + * works much like the rewriting forms of ALTER TABLE.\n> + *\n> + * Caller is assumed to have acquired AccessExclusiveLock on the sequence,\n> + * which must not be released until end of transaction. Caller is also\n> + * responsible for permissions checking.\n> + */\n> +void\n> +SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)\n> +{\n> +\tif (transactional)\n> +\t\tSetSequence_transactional(seq_relid, last_value, log_cnt, is_called);\n> +\telse\n> +\t\tSetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);\n> +}\n\nThat's a lot of duplication with existing code. There's no explanation why\nSetSequence() as well as do_setval() exists.\n\n\n> /*\n> * Initialize a sequence's relation with the specified tuple as content\n> *\n> @@ -406,8 +560,13 @@ fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n> \n> \t/* check the comment above nextval_internal()'s equivalent call. 
*/\n> \tif (RelationNeedsWAL(rel))\n> +\t{\n> \t\tGetTopTransactionId();\n> \n> +\t\tif (XLogLogicalInfoActive())\n> +\t\t\tGetCurrentTransactionId();\n> +\t}\n\nIs it actually possible to reach this without an xid already having been\nassigned for the current xact?\n\n\n\n> @@ -806,10 +966,28 @@ nextval_internal(Oid relid, bool check_permissions)\n> \t * It's sufficient to ensure the toplevel transaction has an xid, no need\n> \t * to assign xids subxacts, that'll already trigger an appropriate wait.\n> \t * (Have to do that here, so we're outside the critical section)\n> +\t *\n> +\t * We have to ensure we have a proper XID, which will be included in\n> +\t * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()\n> +\t * in a subxact (without any preceding changes) would get XID 0, and it\n> +\t * would then be impossible to decide which top xact it belongs to.\n> +\t * It'd also trigger assert in DecodeSequence. We only do that with\n> +\t * wal_level=logical, though.\n> +\t *\n> +\t * XXX This might seem unnecessary, because if there's no XID the xact\n> +\t * couldn't have done anything important yet, e.g. it could not have\n> +\t * created a sequence. But that's incorrect, because of subxacts. The\n> +\t * current subtransaction might not have done anything yet (thus no XID),\n> +\t * but an earlier one might have created the sequence.\n> \t */\n\nWhat about restricting this to the case you're mentioning,\ni.e. subtransactions?\n\n\n> @@ -845,6 +1023,7 @@ nextval_internal(Oid relid, bool check_permissions)\n> \t\tseq->log_cnt = 0;\n> \n> \t\txlrec.locator = seqrel->rd_locator;\n\nI realize this isn't from this patch, but:\n\nWhy do we include the locator in the record? We already have it via\nXLogRegisterBuffer(), no? 
And afaict we don't even use it, as we read the page\nvia XLogInitBufferForRedo() during recovery.\n\nKinda looks like an oversight in 2c03216d8311\n\n\n\n\n> +/*\n> + * Handle sequence decode\n> + *\n> + * Decoding sequences is a bit tricky, because while most sequence actions\n> + * are non-transactional (not subject to rollback), some need to be handled\n> + * as transactional.\n> + *\n> + * By default, a sequence increment is non-transactional - we must not queue\n> + * it in a transaction as other changes, because the transaction might get\n> + * rolled back and we'd discard the increment. The downstream would not be\n> + * notified about the increment, which is wrong.\n> + *\n> + * On the other hand, the sequence may be created in a transaction. In this\n> + * case we *should* queue the change as other changes in the transaction,\n> + * because we don't want to send the increments for unknown sequence to the\n> + * plugin - it might get confused about which sequence it's related to etc.\n> + */\n> +void\n> +sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> +{\n\n> +\t/* extract the WAL record, with \"created\" flag */\n> +\txlrec = (xl_seq_rec *) XLogRecGetData(r);\n> +\n> +\t/* XXX how could we have sequence change without data? */\n> +\tif(!datalen || !tupledata)\n> +\t\treturn;\n\nYea, I think we should error out here instead, something has gone quite wrong\nif this happens.\n\n\n> +\ttuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);\n> +\tDecodeSeqTuple(tupledata, datalen, tuplebuf);\n> +\n> +\t/*\n> +\t * Should we handle the sequence increment as transactional or not?\n> +\t *\n> +\t * If the sequence was created in a still-running transaction, treat\n> +\t * it as transactional and queue the increments. 
Otherwise it needs\n> +\t * to be treated as non-transactional, in which case we send it to\n> +\t * the plugin right away.\n> +\t */\n> +\ttransactional = ReorderBufferSequenceIsTransactional(ctx->reorder,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t target_locator,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t xlrec->created);\n\nWhy re-create this information during decoding, when we basically already have\nit available on the primary? I think we already pay the price for that\ntracking, which we e.g. use for doing a non-transactional truncate:\n\n\t\t/*\n\t\t * Normally, we need a transaction-safe truncation here. However, if\n\t\t * the table was either created in the current (sub)transaction or has\n\t\t * a new relfilenumber in the current (sub)transaction, then we can\n\t\t * just truncate it in-place, because a rollback would cause the whole\n\t\t * table or the current physical file to be thrown away anyway.\n\t\t */\n\t\tif (rel->rd_createSubid == mySubid ||\n\t\t\trel->rd_newRelfilelocatorSubid == mySubid)\n\t\t{\n\t\t\t/* Immediate, non-rollbackable truncation is OK */\n\t\t\theap_truncate_one_rel(rel);\n\t\t}\n\nAfaict we could do something similar for sequences, except that I think we\nwould just check if the sequence was created in the current transaction\n(i.e. any of the fields are set).\n\n\n> +/*\n> + * A transactional sequence increment is queued to be processed upon commit\n> + * and a non-transactional increment gets processed immediately.\n> + *\n> + * A sequence update may be both transactional and non-transactional. When\n> + * created in a running transaction, treat it as transactional and queue\n> + * the change in it. 
Otherwise treat it as non-transactional, so that we\n> + * don't forget the increment in case of a rollback.\n> + */\n> +void\n> +ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,\n> +\t\t\t\t\t\t Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,\n> +\t\t\t\t\t\t RelFileLocator rlocator, bool transactional, bool created,\n> +\t\t\t\t\t\t ReorderBufferTupleBuf *tuplebuf)\n\n\n> +\t\t/*\n> +\t\t * Decoding needs access to syscaches et al., which in turn use\n> +\t\t * heavyweight locks and such. Thus we need to have enough state around to\n> +\t\t * keep track of those. The easiest way is to simply use a transaction\n> +\t\t * internally. That also allows us to easily enforce that nothing writes\n> +\t\t * to the database by checking for xid assignments.\n> +\t\t *\n> +\t\t * When we're called via the SQL SRF there's already a transaction\n> +\t\t * started, so start an explicit subtransaction there.\n> +\t\t */\n> +\t\tusing_subtxn = IsTransactionOrTransactionBlock();\n\nThis duplicates a lot of the code from ReorderBufferProcessTXN(). But only\ndoes so partially. It's hard to tell whether some of the differences are\nintentional. Could we de-duplicate that code with ReorderBufferProcessTXN()?\n\nMaybe something like\n\nvoid\nReorderBufferSetupXactEnv(ReorderBufferXactEnv *, bool process_invals);\n\nvoid\nReorderBufferTeardownXactEnv(ReorderBufferXactEnv *, bool is_error);\n\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:12:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 1:29 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> > I agree that it's fine for the sequence to be slightly ahead, but I\n> > think that it can't be too far ahead without causing problems. Suppose\n> > for example that transaction #1 creates a sequence. Transaction #2\n> > does nextval on the sequence a bunch of times and inserts rows into a\n> > table using the sequence values as the PK. It's fine if the nextval\n> > operations are replicated ahead of the commit of transaction #2 -- in\n> > fact I'd say it's necessary for correctness -- but they can't precede\n> > the commit of transaction #1, since then the sequence won't exist yet.\n>\n> It's not clear to me how could that even happen. If transaction #1\n> creates a sequence, it's invisible for transaction #2. So how could it\n> do nextval() on it? #2 has to wait for #1 to commit before it can do\n> anything on the sequence, which enforces the correct ordering, no?\n\nYeah, I meant if #1 had committed and then #2 started to do its thing.\nI was worried that decoding might reach the nextval operations in\ntransaction #2 before it replayed #1.\n\nThis worry may be entirely based on me not understanding how this\nactually works. Do we always apply a transaction as soon as we see the\ncommit record for it, before decoding any further?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:23:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 15:23:18 -0500, Robert Haas wrote:\n> Yeah, I meant if #1 had committed and then #2 started to do its thing.\n> I was worried that decoding might reach the nextval operations in\n> transaction #2 before it replayed #1.\n>\n> This worry may be entirely based on me not understanding how this\n> actually works. Do we always apply a transaction as soon as we see the\n> commit record for it, before decoding any further?\n\nYes.\n\nOtherwise we'd have a really hard time figuring out the correct historical\nsnapshot to use for subsequent transactions - they'd have been able to see the\ncatalog modifications made by the committing transaction.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:28:51 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 3:28 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-11 15:23:18 -0500, Robert Haas wrote:\n> > Yeah, I meant if #1 had committed and then #2 started to do its thing.\n> > I was worried that decoding might reach the nextval operations in\n> > transaction #2 before it replayed #1.\n> >\n> > This worry may be entirely based on me not understanding how this\n> > actually works. Do we always apply a transaction as soon as we see the\n> > commit record for it, before decoding any further?\n>\n> Yes.\n>\n> Otherwise we'd have a really hard time figuring out the correct historical\n> snapshot to use for subsequent transactions - they'd have been able to see the\n> catalog modifications made by the committing transaction.\n\nI wonder, then, what happens if somebody wants to do parallel apply.\nThat would seem to require some relaxation of this rule, but then\ndoesn't that break what this patch wants to do?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:41:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 15:41:45 -0500, Robert Haas wrote:\n> I wonder, then, what happens if somebody wants to do parallel apply. That\n> would seem to require some relaxation of this rule, but then doesn't that\n> break what this patch wants to do?\n\nI don't think it'd pose a direct problem - presumably you'd only parallelize\napplying changes, not committing the transactions containing them. You'd get a\nlot of inconsistencies otherwise.\n\nIf you're thinking of decoding changes in parallel (rather than streaming out\nlarge changes before commit when possible), you'd only be able to do that in\ncases when transaction haven't performed catalog changes, I think. In which\ncase there'd also be no issue wrt transactional sequence changes.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:58:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 1/11/23 21:58, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-11 15:41:45 -0500, Robert Haas wrote:\n>> I wonder, then, what happens if somebody wants to do parallel apply. That\n>> would seem to require some relaxation of this rule, but then doesn't that\n>> break what this patch wants to do?\n> \n> I don't think it'd pose a direct problem - presumably you'd only parallelize\n> applying changes, not committing the transactions containing them. You'd get a\n> lot of inconsistencies otherwise.\n> \n\nRight. It's the commit order that matters - as long as that's\nmaintained, the result should be consistent etc.\n\nThere's plenty of other hard problems, though - for example it's trivial\nfor the apply workers to apply the changes in the incorrect order\n(contradicting commit order) and then a deadlock. And the deadlock\ndetector may easily keep aborting the incorrect worker (the oldest one),\nso that the replication grinds down to a halt.\n\nI was wondering recently how far would we get by just doing prefetch for\nlogical apply - instead of applying the changes, just try doing a lookup\non he replica identity values, and then simple serial apply.\n\n> If you're thinking of decoding changes in parallel (rather than streaming out\n> large changes before commit when possible), you'd only be able to do that in\n> cases when transaction haven't performed catalog changes, I think. In which\n> case there'd also be no issue wrt transactional sequence changes.\n> \n\nPerhaps, although it's not clear to me how would you know that in\nadvance? I mean, you could start decoding changes in parallel, and then\nyou find one of the earlier transactions touched a catalog.\n\nBu maybe I misunderstand what \"decoding\" refers to - don't we need the\nsnapshot only in reorderbuffer? 
In which case all the other stuff could\nbe parallelized (not sure if that's really expensive).\n\nAnyway, all of this is far out of scope of this patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:30:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 1/11/23 21:12, Andres Freund wrote:\n> Hi,\n> \n> \n> Heikki, CCed you due to the point about 2c03216d8311 below.\n> \n> \n> On 2023-01-10 19:32:12 +0100, Tomas Vondra wrote:\n>> 0001 is a fix for the pre-existing issue in logicalmsg_decode,\n>> attempting to build a snapshot before getting into a consistent state.\n>> AFAICS this only affects assert-enabled builds and is otherwise\n>> harmless, because we are not actually using the snapshot (apply gets a\n>> valid snapshot from the transaction).\n> \n> LGTM.\n> \n> \n>> 0002 is a rebased version of the original approach, committed as\n>> 0da92dc530 (and then reverted in 2c7ea57e56). This includes the same fix\n>> as 0001 (for the sequence messages), the primary reason for the revert.\n>>\n>> The rebase was not quite straightforward, due to extensive changes in\n>> how publications deal with tables/schemas, and so on. So this adopts\n>> them, but other than that it behaves just like the original patch.\n> \n> This is a huge diff:\n>> 72 files changed, 4715 insertions(+), 612 deletions(-)\n> \n> It'd be nice to split it to make review easier. Perhaps the sequence decoding\n> support could be split from the whole publication rigamarole?\n> \n> \n>> This does not include any changes to test_decoding and/or the built-in\n>> replication - those will be committed in separate patches.\n> \n> Looks like that's not the case anymore?\n> \n\nAh, right! Now I realized I originally committed this in chunks, but\nthe revert was a single commit. And I just \"reverted the revert\" to\ncreate this patch.\n\nI'll definitely split this into smaller patches. 
This also explains the\nobsolete commit message about test_decoding not being included, etc.\n\n> \n>> +/*\n>> + * Update the sequence state by modifying the existing sequence data row.\n>> + *\n>> + * This keeps the same relfilenode, so the behavior is non-transactional.\n>> + */\n>> +static void\n>> +SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)\n>> +{\n>> +\tSeqTable\telm;\n>> +\tRelation\tseqrel;\n>> +\tBuffer\t\tbuf;\n>> +\tHeapTupleData seqdatatuple;\n>> +\tForm_pg_sequence_data seq;\n>> +\n>> +\t/* open and lock sequence */\n>> +\tinit_sequence(seqrelid, &elm, &seqrel);\n>> +\n>> +\t/* lock page' buffer and read tuple */\n>> +\tseq = read_seq_tuple(seqrel, &buf, &seqdatatuple);\n>> +\n>> +\t/* check the comment above nextval_internal()'s equivalent call. */\n>> +\tif (RelationNeedsWAL(seqrel))\n>> +\t{\n>> +\t\tGetTopTransactionId();\n>> +\n>> +\t\tif (XLogLogicalInfoActive())\n>> +\t\t\tGetCurrentTransactionId();\n>> +\t}\n>> +\n>> +\t/* ready to change the on-disk (or really, in-buffer) tuple */\n>> +\tSTART_CRIT_SECTION();\n>> +\n>> +\tseq->last_value = last_value;\n>> +\tseq->is_called = is_called;\n>> +\tseq->log_cnt = log_cnt;\n>> +\n>> +\tMarkBufferDirty(buf);\n>> +\n>> +\t/* XLOG stuff */\n>> +\tif (RelationNeedsWAL(seqrel))\n>> +\t{\n>> +\t\txl_seq_rec\txlrec;\n>> +\t\tXLogRecPtr\trecptr;\n>> +\t\tPage\t\tpage = BufferGetPage(buf);\n>> +\n>> +\t\tXLogBeginInsert();\n>> +\t\tXLogRegisterBuffer(0, buf, REGBUF_WILL_INIT);\n>> +\n>> +\t\txlrec.locator = seqrel->rd_locator;\n>> +\t\txlrec.created = false;\n>> +\n>> +\t\tXLogRegisterData((char *) &xlrec, sizeof(xl_seq_rec));\n>> +\t\tXLogRegisterData((char *) seqdatatuple.t_data, seqdatatuple.t_len);\n>> +\n>> +\t\trecptr = XLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\n>> +\n>> +\t\tPageSetLSN(page, recptr);\n>> +\t}\n>> +\n>> +\tEND_CRIT_SECTION();\n>> +\n>> +\tUnlockReleaseBuffer(buf);\n>> +\n>> +\t/* Clear local cache so that we don't think we have cached numbers 
*/\n>> +\t/* Note that we do not change the currval() state */\n>> +\telm->cached = elm->last;\n>> +\n>> +\trelation_close(seqrel, NoLock);\n>> +}\n>> +\n>> +/*\n>> + * Update the sequence state by creating a new relfilenode.\n>> + *\n>> + * This creates a new relfilenode, to allow transactional behavior.\n>> + */\n>> +static void\n>> +SetSequence_transactional(Oid seq_relid, int64 last_value, int64 log_cnt, bool is_called)\n>> +{\n>> +\tSeqTable\telm;\n>> +\tRelation\tseqrel;\n>> +\tBuffer\t\tbuf;\n>> +\tHeapTupleData seqdatatuple;\n>> +\tForm_pg_sequence_data seq;\n>> +\tHeapTuple\ttuple;\n>> +\n>> +\t/* open and lock sequence */\n>> +\tinit_sequence(seq_relid, &elm, &seqrel);\n>> +\n>> +\t/* lock page' buffer and read tuple */\n>> +\tseq = read_seq_tuple(seqrel, &buf, &seqdatatuple);\n>> +\n>> +\t/* Copy the existing sequence tuple. */\n>> +\ttuple = heap_copytuple(&seqdatatuple);\n>> +\n>> +\t/* Now we're done with the old page */\n>> +\tUnlockReleaseBuffer(buf);\n>> +\n>> +\t/*\n>> +\t * Modify the copied tuple to update the sequence state (similar to what\n>> +\t * ResetSequence does).\n>> +\t */\n>> +\tseq = (Form_pg_sequence_data) GETSTRUCT(tuple);\n>> +\tseq->last_value = last_value;\n>> +\tseq->is_called = is_called;\n>> +\tseq->log_cnt = log_cnt;\n>> +\n>> +\t/*\n>> +\t * Create a new storage file for the sequence - this is needed for the\n>> +\t * transactional behavior.\n>> +\t */\n>> +\tRelationSetNewRelfilenumber(seqrel, seqrel->rd_rel->relpersistence);\n>> +\n>> +\t/*\n>> +\t * Ensure sequence's relfrozenxid is at 0, since it won't contain any\n>> +\t * unfrozen XIDs. Same with relminmxid, since a sequence will never\n>> +\t * contain multixacts.\n>> +\t */\n>> +\tAssert(seqrel->rd_rel->relfrozenxid == InvalidTransactionId);\n>> +\tAssert(seqrel->rd_rel->relminmxid == InvalidMultiXactId);\n>> +\n>> +\t/*\n>> +\t * Insert the modified tuple into the new storage file. 
This does all the\n>> +\t * necessary WAL-logging etc.\n>> +\t */\n>> +\tfill_seq_with_data(seqrel, tuple);\n>> +\n>> +\t/* Clear local cache so that we don't think we have cached numbers */\n>> +\t/* Note that we do not change the currval() state */\n>> +\telm->cached = elm->last;\n>> +\n>> +\trelation_close(seqrel, NoLock);\n>> +}\n>> +\n>> +/*\n>> + * Set a sequence to a specified internal state.\n>> + *\n>> + * The change is made transactionally, so that on failure of the current\n>> + * transaction, the sequence will be restored to its previous state.\n>> + * We do that by creating a whole new relfilenode for the sequence; so this\n>> + * works much like the rewriting forms of ALTER TABLE.\n>> + *\n>> + * Caller is assumed to have acquired AccessExclusiveLock on the sequence,\n>> + * which must not be released until end of transaction. Caller is also\n>> + * responsible for permissions checking.\n>> + */\n>> +void\n>> +SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)\n>> +{\n>> +\tif (transactional)\n>> +\t\tSetSequence_transactional(seq_relid, last_value, log_cnt, is_called);\n>> +\telse\n>> +\t\tSetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);\n>> +}\n> \n> That's a lot of duplication with existing code. There's no explanation why\n> SetSequence() as well as do_setval() exists.\n> \n\nThanks, I'll look into this.\n\n> \n>> /*\n>> * Initialize a sequence's relation with the specified tuple as content\n>> *\n>> @@ -406,8 +560,13 @@ fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n>> \n>> \t/* check the comment above nextval_internal()'s equivalent call. */\n>> \tif (RelationNeedsWAL(rel))\n>> +\t{\n>> \t\tGetTopTransactionId();\n>> \n>> +\t\tif (XLogLogicalInfoActive())\n>> +\t\t\tGetCurrentTransactionId();\n>> +\t}\n> \n> Is it actually possible to reach this without an xid already having been\n> assigned for the current xact?\n> \n\nI believe it is. 
That's probably how I found this change is needed,\nactually.\n\n> \n> \n>> @@ -806,10 +966,28 @@ nextval_internal(Oid relid, bool check_permissions)\n>> \t * It's sufficient to ensure the toplevel transaction has an xid, no need\n>> \t * to assign xids subxacts, that'll already trigger an appropriate wait.\n>> \t * (Have to do that here, so we're outside the critical section)\n>> +\t *\n>> +\t * We have to ensure we have a proper XID, which will be included in\n>> +\t * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()\n>> +\t * in a subxact (without any preceding changes) would get XID 0, and it\n>> +\t * would then be impossible to decide which top xact it belongs to.\n>> +\t * It'd also trigger assert in DecodeSequence. We only do that with\n>> +\t * wal_level=logical, though.\n>> +\t *\n>> +\t * XXX This might seem unnecessary, because if there's no XID the xact\n>> +\t * couldn't have done anything important yet, e.g. it could not have\n>> +\t * created a sequence. But that's incorrect, because of subxacts. The\n>> +\t * current subtransaction might not have done anything yet (thus no XID),\n>> +\t * but an earlier one might have created the sequence.\n>> \t */\n> \n> What about restricting this to the case you're mentioning,\n> i.e. subtransactions?\n> \n\nThat might work, but I need to think about it a bit.\n\nI don't think it'd save us much, though. I mean, vast majority of\ntransactions (and subtransactions) calling nextval() will then do\nsomething else which requires a XID. This just moves the XID a bit,\nthat's all.\n\n> \n>> @@ -845,6 +1023,7 @@ nextval_internal(Oid relid, bool check_permissions)\n>> \t\tseq->log_cnt = 0;\n>> \n>> \t\txlrec.locator = seqrel->rd_locator;\n> \n> I realize this isn't from this patch, but:\n> \n> Why do we include the locator in the record? We already have it via\n> XLogRegisterBuffer(), no? 
And afaict we don't even use it, as we read the page\n> via XLogInitBufferForRedo() during recovery.\n> \n> Kinda looks like an oversight in 2c03216d8311\n> \n\nI don't know, it's what the code did.\n\n> \n> \n> \n>> +/*\n>> + * Handle sequence decode\n>> + *\n>> + * Decoding sequences is a bit tricky, because while most sequence actions\n>> + * are non-transactional (not subject to rollback), some need to be handled\n>> + * as transactional.\n>> + *\n>> + * By default, a sequence increment is non-transactional - we must not queue\n>> + * it in a transaction as other changes, because the transaction might get\n>> + * rolled back and we'd discard the increment. The downstream would not be\n>> + * notified about the increment, which is wrong.\n>> + *\n>> + * On the other hand, the sequence may be created in a transaction. In this\n>> + * case we *should* queue the change as other changes in the transaction,\n>> + * because we don't want to send the increments for unknown sequence to the\n>> + * plugin - it might get confused about which sequence it's related to etc.\n>> + */\n>> +void\n>> +sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n>> +{\n> \n>> +\t/* extract the WAL record, with \"created\" flag */\n>> +\txlrec = (xl_seq_rec *) XLogRecGetData(r);\n>> +\n>> +\t/* XXX how could we have sequence change without data? */\n>> +\tif(!datalen || !tupledata)\n>> +\t\treturn;\n> \n> Yea, I think we should error out here instead, something has gone quite wrong\n> if this happens.\n> \n\nOK\n\n> \n>> +\ttuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);\n>> +\tDecodeSeqTuple(tupledata, datalen, tuplebuf);\n>> +\n>> +\t/*\n>> +\t * Should we handle the sequence increment as transactional or not?\n>> +\t *\n>> +\t * If the sequence was created in a still-running transaction, treat\n>> +\t * it as transactional and queue the increments. 
Otherwise it needs\n>> +\t * to be treated as non-transactional, in which case we send it to\n>> +\t * the plugin right away.\n>> +\t */\n>> +\ttransactional = ReorderBufferSequenceIsTransactional(ctx->reorder,\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t target_locator,\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t xlrec->created);\n> \n> Why re-create this information during decoding, when we basically already have\n> it available on the primary? I think we already pay the price for that\n> tracking, which we e.g. use for doing a non-transactional truncate:\n> \n> \t\t/*\n> \t\t * Normally, we need a transaction-safe truncation here. However, if\n> \t\t * the table was either created in the current (sub)transaction or has\n> \t\t * a new relfilenumber in the current (sub)transaction, then we can\n> \t\t * just truncate it in-place, because a rollback would cause the whole\n> \t\t * table or the current physical file to be thrown away anyway.\n> \t\t */\n> \t\tif (rel->rd_createSubid == mySubid ||\n> \t\t\trel->rd_newRelfilelocatorSubid == mySubid)\n> \t\t{\n> \t\t\t/* Immediate, non-rollbackable truncation is OK */\n> \t\t\theap_truncate_one_rel(rel);\n> \t\t}\n> \n> Afaict we could do something similar for sequences, except that I think we\n> would just check if the sequence was created in the current transaction\n> (i.e. any of the fields are set).\n> \n\nHmm, good point.\n\n> \n>> +/*\n>> + * A transactional sequence increment is queued to be processed upon commit\n>> + * and a non-transactional increment gets processed immediately.\n>> + *\n>> + * A sequence update may be both transactional and non-transactional. When\n>> + * created in a running transaction, treat it as transactional and queue\n>> + * the change in it. 
Otherwise treat it as non-transactional, so that we\n>> + * don't forget the increment in case of a rollback.\n>> + */\n>> +void\n>> +ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,\n>> +\t\t\t\t\t\t Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,\n>> +\t\t\t\t\t\t RelFileLocator rlocator, bool transactional, bool created,\n>> +\t\t\t\t\t\t ReorderBufferTupleBuf *tuplebuf)\n> \n> \n>> +\t\t/*\n>> +\t\t * Decoding needs access to syscaches et al., which in turn use\n>> +\t\t * heavyweight locks and such. Thus we need to have enough state around to\n>> +\t\t * keep track of those. The easiest way is to simply use a transaction\n>> +\t\t * internally. That also allows us to easily enforce that nothing writes\n>> +\t\t * to the database by checking for xid assignments.\n>> +\t\t *\n>> +\t\t * When we're called via the SQL SRF there's already a transaction\n>> +\t\t * started, so start an explicit subtransaction there.\n>> +\t\t */\n>> +\t\tusing_subtxn = IsTransactionOrTransactionBlock();\n> \n> This duplicates a lot of the code from ReorderBufferProcessTXN(). But only\n> does so partially. It's hard to tell whether some of the differences are\n> intentional. Could we de-duplicate that code with ReorderBufferProcessTXN()?\n> \n> Maybe something like\n> \n> void\n> ReorderBufferSetupXactEnv(ReorderBufferXactEnv *, bool process_invals);\n> \n> void\n> ReorderBufferTeardownXactEnv(ReorderBufferXactEnv *, bool is_error);\n> \n\nThanks for the suggestion, I'll definitely consider that in the next\nversion of the patch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:46:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 22:30:42 +0100, Tomas Vondra wrote:\n> On 1/11/23 21:58, Andres Freund wrote:\n> > If you're thinking of decoding changes in parallel (rather than streaming out\n> > large changes before commit when possible), you'd only be able to do that in\n> > cases when transaction haven't performed catalog changes, I think. In which\n> > case there'd also be no issue wrt transactional sequence changes.\n> > \n> \n> Perhaps, although it's not clear to me how would you know that in\n> advance? I mean, you could start decoding changes in parallel, and then\n> you find one of the earlier transactions touched a catalog.\n\nYou could have a running count of in-progress catalog modifying transactions\nand not allow parallelized processing when that's not 0.\n\n\n> Bu maybe I misunderstand what \"decoding\" refers to - don't we need the\n> snapshot only in reorderbuffer? In which case all the other stuff could\n> be parallelized (not sure if that's really expensive).\n\nCalling output functions is pretty expensive, so being able to call those in\nparallel has some benefits. But I don't think we're there.\n\n\n> Anyway, all of this is far out of scope of this patch.\n\nYea, clearly that's independent work. And I don't think relying on commit\norder in one more place, i.e. for sequences, would make it harder.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 13:53:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nhere's a slightly updated version - the main change is splitting the\npatch into multiple parts, along the lines of the original patch\nreverted in 2c7ea57e56ca5f668c32d4266e0a3e45b455bef5:\n\n- basic sequence decoding infrastructure\n- support in test_decoding\n- support in built-in logical replication\n\nThe revert mentions a couple additional parts, but those were mostly\nfixes / improvements. And those are not merged into the three parts.\n\n\nOn 1/11/23 22:46, Tomas Vondra wrote:\n> \n>>...\n>>\n>>> +/*\n>>> + * Update the sequence state by modifying the existing sequence data row.\n>>> + *\n>>> + * This keeps the same relfilenode, so the behavior is non-transactional.\n>>> + */\n>>> +static void\n>>> +SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)\n>>> +{\n>>> ...\n>>>\n>>> +void\n>>> +SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)\n>>> +{\n>>> +\tif (transactional)\n>>> +\t\tSetSequence_transactional(seq_relid, last_value, log_cnt, is_called);\n>>> +\telse\n>>> +\t\tSetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);\n>>> +}\n>>\n>> That's a lot of duplication with existing code. There's no explanation why\n>> SetSequence() as well as do_setval() exists.\n>>\n> \n> Thanks, I'll look into this.\n> \n\nI haven't done anything about this yet. The functions are doing similar\nthings, but there's also a fair amount of differences so I haven't found\na good way to merge them yet.\n\n>>\n>>> /*\n>>> * Initialize a sequence's relation with the specified tuple as content\n>>> *\n>>> @@ -406,8 +560,13 @@ fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n>>> \n>>> \t/* check the comment above nextval_internal()'s equivalent call. 
*/\n>>> \tif (RelationNeedsWAL(rel))\n>>> +\t{\n>>> \t\tGetTopTransactionId();\n>>> \n>>> +\t\tif (XLogLogicalInfoActive())\n>>> +\t\t\tGetCurrentTransactionId();\n>>> +\t}\n>>\n>> Is it actually possible to reach this without an xid already having been\n>> assigned for the current xact?\n>>\n> \n> I believe it is. That's probably how I found this change is needed,\n> actually.\n> \n\nI've added a comment explaining why this is needed. I don't think it's\nworth trying to optimize this, because in plausible workloads we'd just\ndelay the work a little bit.\n\n>>\n>>\n>>> @@ -806,10 +966,28 @@ nextval_internal(Oid relid, bool check_permissions)\n>>> \t * It's sufficient to ensure the toplevel transaction has an xid, no need\n>>> \t * to assign xids subxacts, that'll already trigger an appropriate wait.\n>>> \t * (Have to do that here, so we're outside the critical section)\n>>> +\t *\n>>> +\t * We have to ensure we have a proper XID, which will be included in\n>>> +\t * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()\n>>> +\t * in a subxact (without any preceding changes) would get XID 0, and it\n>>> +\t * would then be impossible to decide which top xact it belongs to.\n>>> +\t * It'd also trigger assert in DecodeSequence. We only do that with\n>>> +\t * wal_level=logical, though.\n>>> +\t *\n>>> +\t * XXX This might seem unnecessary, because if there's no XID the xact\n>>> +\t * couldn't have done anything important yet, e.g. it could not have\n>>> +\t * created a sequence. But that's incorrect, because of subxacts. The\n>>> +\t * current subtransaction might not have done anything yet (thus no XID),\n>>> +\t * but an earlier one might have created the sequence.\n>>> \t */\n>>\n>> What about restricting this to the case you're mentioning,\n>> i.e. subtransactions?\n>>\n> \n> That might work, but I need to think about it a bit.\n> \n> I don't think it'd save us much, though. 
I mean, vast majority of\n> transactions (and subtransactions) calling nextval() will then do\n> something else which requires a XID. This just moves the XID a bit,\n> that's all.\n> \n\nAfter thinking about this a bit more, I don't think the optimization is\nworth it, for the reasons explained above.\n\n>>\n>>> +/*\n>>> + * Handle sequence decode\n>>> + *\n>>> + * Decoding sequences is a bit tricky, because while most sequence actions\n>>> + * are non-transactional (not subject to rollback), some need to be handled\n>>> + * as transactional.\n>>> + *\n>>> + * By default, a sequence increment is non-transactional - we must not queue\n>>> + * it in a transaction as other changes, because the transaction might get\n>>> + * rolled back and we'd discard the increment. The downstream would not be\n>>> + * notified about the increment, which is wrong.\n>>> + *\n>>> + * On the other hand, the sequence may be created in a transaction. In this\n>>> + * case we *should* queue the change as other changes in the transaction,\n>>> + * because we don't want to send the increments for unknown sequence to the\n>>> + * plugin - it might get confused about which sequence it's related to etc.\n>>> + */\n>>> +void\n>>> +sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n>>> +{\n>>\n>>> +\t/* extract the WAL record, with \"created\" flag */\n>>> +\txlrec = (xl_seq_rec *) XLogRecGetData(r);\n>>> +\n>>> +\t/* XXX how could we have sequence change without data? 
*/\n>>> +\tif(!datalen || !tupledata)\n>>> +\t\treturn;\n>>\n>> Yea, I think we should error out here instead, something has gone quite wrong\n>> if this happens.\n>>\n> \n> OK\n>\n\nDone.\n\n>>\n>>> +\ttuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);\n>>> +\tDecodeSeqTuple(tupledata, datalen, tuplebuf);\n>>> +\n>>> +\t/*\n>>> +\t * Should we handle the sequence increment as transactional or not?\n>>> +\t *\n>>> +\t * If the sequence was created in a still-running transaction, treat\n>>> +\t * it as transactional and queue the increments. Otherwise it needs\n>>> +\t * to be treated as non-transactional, in which case we send it to\n>>> +\t * the plugin right away.\n>>> +\t */\n>>> +\ttransactional = ReorderBufferSequenceIsTransactional(ctx->reorder,\n>>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t target_locator,\n>>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t xlrec->created);\n>>\n>> Why re-create this information during decoding, when we basically already have\n>> it available on the primary? I think we already pay the price for that\n>> tracking, which we e.g. use for doing a non-transactional truncate:\n>>\n>> \t\t/*\n>> \t\t * Normally, we need a transaction-safe truncation here. However, if\n>> \t\t * the table was either created in the current (sub)transaction or has\n>> \t\t * a new relfilenumber in the current (sub)transaction, then we can\n>> \t\t * just truncate it in-place, because a rollback would cause the whole\n>> \t\t * table or the current physical file to be thrown away anyway.\n>> \t\t */\n>> \t\tif (rel->rd_createSubid == mySubid ||\n>> \t\t\trel->rd_newRelfilelocatorSubid == mySubid)\n>> \t\t{\n>> \t\t\t/* Immediate, non-rollbackable truncation is OK */\n>> \t\t\theap_truncate_one_rel(rel);\n>> \t\t}\n>>\n>> Afaict we could do something similar for sequences, except that I think we\n>> would just check if the sequence was created in the current transaction\n>> (i.e. 
any of the fields are set).\n>>\n> \n> Hmm, good point.\n> \n\nBut rd_createSubid/rd_newRelfilelocatorSubid fields are available only\nin the original transaction, not during decoding. So we'd have to do\nthis check there and add the result to the WAL record. Is that what you\nhad in mind?\n\n>>\n>>> +/*\n>>> + * A transactional sequence increment is queued to be processed upon commit\n>>> + * and a non-transactional increment gets processed immediately.\n>>> + *\n>>> + * A sequence update may be both transactional and non-transactional. When\n>>> + * created in a running transaction, treat it as transactional and queue\n>>> + * the change in it. Otherwise treat it as non-transactional, so that we\n>>> + * don't forget the increment in case of a rollback.\n>>> + */\n>>> +void\n>>> +ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,\n>>> +\t\t\t\t\t\t Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,\n>>> +\t\t\t\t\t\t RelFileLocator rlocator, bool transactional, bool created,\n>>> +\t\t\t\t\t\t ReorderBufferTupleBuf *tuplebuf)\n>>\n>>\n>>> +\t\t/*\n>>> +\t\t * Decoding needs access to syscaches et al., which in turn use\n>>> +\t\t * heavyweight locks and such. Thus we need to have enough state around to\n>>> +\t\t * keep track of those. The easiest way is to simply use a transaction\n>>> +\t\t * internally. That also allows us to easily enforce that nothing writes\n>>> +\t\t * to the database by checking for xid assignments.\n>>> +\t\t *\n>>> +\t\t * When we're called via the SQL SRF there's already a transaction\n>>> +\t\t * started, so start an explicit subtransaction there.\n>>> +\t\t */\n>>> +\t\tusing_subtxn = IsTransactionOrTransactionBlock();\n>>\n>> This duplicates a lot of the code from ReorderBufferProcessTXN(). But only\n>> does so partially. It's hard to tell whether some of the differences are\n>> intentional. 
Could we de-duplicate that code with ReorderBufferProcessTXN()?\n>>\n>> Maybe something like\n>>\n>> void\n>> ReorderBufferSetupXactEnv(ReorderBufferXactEnv *, bool process_invals);\n>>\n>> void\n>> ReorderBufferTeardownXactEnv(ReorderBufferXactEnv *, bool is_error);\n>>\n> \n> Thanks for the suggestion, I'll definitely consider that in the next\n> version of the patch.\n\nI did look at the code a bit, but I'm not sure there really is a lot of\nduplicated code - yes, we start/abort the (sub)transaction, setup and\ntear down the snapshot, etc. Or what else would you put into the two new\nfunctions?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 15 Jan 2023 00:39:06 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "cfbot didn't like the rebased / split patch, and after looking at it I\nbelieve it's a bug in parallel apply of large transactions (216a784829),\nwhich seems to have changed interpretation of in_remote_transaction and\nin_streamed_transaction. I've reported the issue on that thread [1], but\nhere's a version with a temporary workaround so that we can continue\nreviewing it.\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/984ff689-adde-9977-affe-cd6029e850be%40enterprisedb.com\n\nOn 1/15/23 00:39, Tomas Vondra wrote:\n> Hi,\n> \n> here's a slightly updated version - the main change is splitting the\n> patch into multiple parts, along the lines of the original patch\n> reverted in 2c7ea57e56ca5f668c32d4266e0a3e45b455bef5:\n> \n> - basic sequence decoding infrastructure\n> - support in test_decoding\n> - support in built-in logical replication\n> \n> The revert mentions a couple additional parts, but those were mostly\n> fixes / improvements. And those are not merged into the three parts.\n> \n> \n> On 1/11/23 22:46, Tomas Vondra wrote:\n>>\n>>> ...\n>>>\n>>>> +/*\n>>>> + * Update the sequence state by modifying the existing sequence data row.\n>>>> + *\n>>>> + * This keeps the same relfilenode, so the behavior is non-transactional.\n>>>> + */\n>>>> +static void\n>>>> +SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64 log_cnt, bool is_called)\n>>>> +{\n>>>> ...\n>>>>\n>>>> +void\n>>>> +SetSequence(Oid seq_relid, bool transactional, int64 last_value, int64 log_cnt, bool is_called)\n>>>> +{\n>>>> +\tif (transactional)\n>>>> +\t\tSetSequence_transactional(seq_relid, last_value, log_cnt, is_called);\n>>>> +\telse\n>>>> +\t\tSetSequence_non_transactional(seq_relid, last_value, log_cnt, is_called);\n>>>> +}\n>>>\n>>> That's a lot of duplication with existing code. There's no explanation why\n>>> SetSequence() as well as do_setval() exists.\n>>>\n>>\n>> Thanks, I'll look into this.\n>>\n> \n> I haven't done anything about this yet. 
The functions are doing similar\n> things, but there's also a fair amount of differences so I haven't found\n> a good way to merge them yet.\n> \n>>>\n>>>> /*\n>>>> * Initialize a sequence's relation with the specified tuple as content\n>>>> *\n>>>> @@ -406,8 +560,13 @@ fill_seq_fork_with_data(Relation rel, HeapTuple tuple, ForkNumber forkNum)\n>>>> \n>>>> \t/* check the comment above nextval_internal()'s equivalent call. */\n>>>> \tif (RelationNeedsWAL(rel))\n>>>> +\t{\n>>>> \t\tGetTopTransactionId();\n>>>> \n>>>> +\t\tif (XLogLogicalInfoActive())\n>>>> +\t\t\tGetCurrentTransactionId();\n>>>> +\t}\n>>>\n>>> Is it actually possible to reach this without an xid already having been\n>>> assigned for the current xact?\n>>>\n>>\n>> I believe it is. That's probably how I found this change is needed,\n>> actually.\n>>\n> \n> I've added a comment explaining why this needed. I don't think it's\n> worth trying to optimize this, because in plausible workloads we'd just\n> delay the work a little bit.\n> \n>>>\n>>>\n>>>> @@ -806,10 +966,28 @@ nextval_internal(Oid relid, bool check_permissions)\n>>>> \t * It's sufficient to ensure the toplevel transaction has an xid, no need\n>>>> \t * to assign xids subxacts, that'll already trigger an appropriate wait.\n>>>> \t * (Have to do that here, so we're outside the critical section)\n>>>> +\t *\n>>>> +\t * We have to ensure we have a proper XID, which will be included in\n>>>> +\t * the XLOG record by XLogRecordAssemble. Otherwise the first nextval()\n>>>> +\t * in a subxact (without any preceding changes) would get XID 0, and it\n>>>> +\t * would then be impossible to decide which top xact it belongs to.\n>>>> +\t * It'd also trigger assert in DecodeSequence. We only do that with\n>>>> +\t * wal_level=logical, though.\n>>>> +\t *\n>>>> +\t * XXX This might seem unnecessary, because if there's no XID the xact\n>>>> +\t * couldn't have done anything important yet, e.g. it could not have\n>>>> +\t * created a sequence. 
But that's incorrect, because of subxacts. The\n>>>> +\t * current subtransaction might not have done anything yet (thus no XID),\n>>>> +\t * but an earlier one might have created the sequence.\n>>>> \t */\n>>>\n>>> What about restricting this to the case you're mentioning,\n>>> i.e. subtransactions?\n>>>\n>>\n>> That might work, but I need to think about it a bit.\n>>\n>> I don't think it'd save us much, though. I mean, vast majority of\n>> transactions (and subtransactions) calling nextval() will then do\n>> something else which requires a XID. This just moves the XID a bit,\n>> that's all.\n>>\n> \n> After thinking about this a bit more, I don't think the optimization is\n> worth it, for the reasons explained above.\n> \n>>>\n>>>> +/*\n>>>> + * Handle sequence decode\n>>>> + *\n>>>> + * Decoding sequences is a bit tricky, because while most sequence actions\n>>>> + * are non-transactional (not subject to rollback), some need to be handled\n>>>> + * as transactional.\n>>>> + *\n>>>> + * By default, a sequence increment is non-transactional - we must not queue\n>>>> + * it in a transaction as other changes, because the transaction might get\n>>>> + * rolled back and we'd discard the increment. The downstream would not be\n>>>> + * notified about the increment, which is wrong.\n>>>> + *\n>>>> + * On the other hand, the sequence may be created in a transaction. In this\n>>>> + * case we *should* queue the change as other changes in the transaction,\n>>>> + * because we don't want to send the increments for unknown sequence to the\n>>>> + * plugin - it might get confused about which sequence it's related to etc.\n>>>> + */\n>>>> +void\n>>>> +sequence_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n>>>> +{\n>>>\n>>>> +\t/* extract the WAL record, with \"created\" flag */\n>>>> +\txlrec = (xl_seq_rec *) XLogRecGetData(r);\n>>>> +\n>>>> +\t/* XXX how could we have sequence change without data? 
*/\n>>>> +\tif(!datalen || !tupledata)\n>>>> +\t\treturn;\n>>>\n>>> Yea, I think we should error out here instead, something has gone quite wrong\n>>> if this happens.\n>>>\n>>\n>> OK\n>>\n> \n> Done.\n> \n>>>\n>>>> +\ttuplebuf = ReorderBufferGetTupleBuf(ctx->reorder, tuplelen);\n>>>> +\tDecodeSeqTuple(tupledata, datalen, tuplebuf);\n>>>> +\n>>>> +\t/*\n>>>> +\t * Should we handle the sequence increment as transactional or not?\n>>>> +\t *\n>>>> +\t * If the sequence was created in a still-running transaction, treat\n>>>> +\t * it as transactional and queue the increments. Otherwise it needs\n>>>> +\t * to be treated as non-transactional, in which case we send it to\n>>>> +\t * the plugin right away.\n>>>> +\t */\n>>>> +\ttransactional = ReorderBufferSequenceIsTransactional(ctx->reorder,\n>>>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t target_locator,\n>>>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t xlrec->created);\n>>>\n>>> Why re-create this information during decoding, when we basically already have\n>>> it available on the primary? I think we already pay the price for that\n>>> tracking, which we e.g. use for doing a non-transactional truncate:\n>>>\n>>> \t\t/*\n>>> \t\t * Normally, we need a transaction-safe truncation here. However, if\n>>> \t\t * the table was either created in the current (sub)transaction or has\n>>> \t\t * a new relfilenumber in the current (sub)transaction, then we can\n>>> \t\t * just truncate it in-place, because a rollback would cause the whole\n>>> \t\t * table or the current physical file to be thrown away anyway.\n>>> \t\t */\n>>> \t\tif (rel->rd_createSubid == mySubid ||\n>>> \t\t\trel->rd_newRelfilelocatorSubid == mySubid)\n>>> \t\t{\n>>> \t\t\t/* Immediate, non-rollbackable truncation is OK */\n>>> \t\t\theap_truncate_one_rel(rel);\n>>> \t\t}\n>>>\n>>> Afaict we could do something similar for sequences, except that I think we\n>>> would just check if the sequence was created in the current transaction\n>>> (i.e. 
any of the fields are set).\n>>>\n>>\n>> Hmm, good point.\n>>\n> \n> But rd_createSubid/rd_newRelfilelocatorSubid fields are available only\n> in the original transaction, not during decoding. So we'd have to do\n> this check there and add the result to the WAL record. Is that what you\n> had in mind?\n> \n>>>\n>>>> +/*\n>>>> + * A transactional sequence increment is queued to be processed upon commit\n>>>> + * and a non-transactional increment gets processed immediately.\n>>>> + *\n>>>> + * A sequence update may be both transactional and non-transactional. When\n>>>> + * created in a running transaction, treat it as transactional and queue\n>>>> + * the change in it. Otherwise treat it as non-transactional, so that we\n>>>> + * don't forget the increment in case of a rollback.\n>>>> + */\n>>>> +void\n>>>> +ReorderBufferQueueSequence(ReorderBuffer *rb, TransactionId xid,\n>>>> +\t\t\t\t\t\t Snapshot snapshot, XLogRecPtr lsn, RepOriginId origin_id,\n>>>> +\t\t\t\t\t\t RelFileLocator rlocator, bool transactional, bool created,\n>>>> +\t\t\t\t\t\t ReorderBufferTupleBuf *tuplebuf)\n>>>\n>>>\n>>>> +\t\t/*\n>>>> +\t\t * Decoding needs access to syscaches et al., which in turn use\n>>>> +\t\t * heavyweight locks and such. Thus we need to have enough state around to\n>>>> +\t\t * keep track of those. The easiest way is to simply use a transaction\n>>>> +\t\t * internally. That also allows us to easily enforce that nothing writes\n>>>> +\t\t * to the database by checking for xid assignments.\n>>>> +\t\t *\n>>>> +\t\t * When we're called via the SQL SRF there's already a transaction\n>>>> +\t\t * started, so start an explicit subtransaction there.\n>>>> +\t\t */\n>>>> +\t\tusing_subtxn = IsTransactionOrTransactionBlock();\n>>>\n>>> This duplicates a lot of the code from ReorderBufferProcessTXN(). But only\n>>> does so partially. It's hard to tell whether some of the differences are\n>>> intentional. 
Could we de-duplicate that code with ReorderBufferProcessTXN()?\n>>>\n>>> Maybe something like\n>>>\n>>> void\n>>> ReorderBufferSetupXactEnv(ReorderBufferXactEnv *, bool process_invals);\n>>>\n>>> void\n>>> ReorderBufferTeardownXactEnv(ReorderBufferXactEnv *, bool is_error);\n>>>\n>>\n>> Thanks for the suggestion, I'll definitely consider that in the next\n>> version of the patch.\n> \n> I did look at the code a bit, but I'm not sure there really is a lot of\n> duplicated code - yes, we start/abort the (sub)transaction, setup and\n> tear down the snapshot, etc. Or what else would you put into the two new\n> functions?\n> \n> \n> regards\n> \n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 16 Jan 2023 00:18:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, 16 Jan 2023 at 04:49, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> cfbot didn't like the rebased / split patch, and after looking at it I\n> believe it's a bug in parallel apply of large transactions (216a784829),\n> which seems to have changed interpretation of in_remote_transaction and\n> in_streamed_transaction. I've reported the issue on that thread [1], but\n> here's a version with a temporary workaround so that we can continue\n> reviewing it.\n>\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\n17e72ec45d313b98bd90b95bc71b4cc77c2c89c3 ===\n=== applying patch\n./0001-Fix-snapshot-handling-in-logicalmsg_decode-20230116.patch\npatching file src/backend/replication/logical/decode.c\npatching file src/backend/replication/logical/reorderbuffer.c\n=== applying patch ./0002-Logical-decoding-of-sequences-20230116.patch\npatching file doc/src/sgml/logicaldecoding.sgml\nHunk #3 FAILED at 483.\nHunk #4 FAILED at 494.\nHunk #7 succeeded at 1252 (offset 4 lines).\n2 out of 7 hunks FAILED -- saving rejects to file\ndoc/src/sgml/logicaldecoding.sgml.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3823.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 27 Jan 2023 20:11:45 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nHere's a rebased patch, without the last bit which is now unnecessary\nthanks to c981d9145dea.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 16 Feb 2023 16:50:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\r\n\r\nOn 2/16/23 10:50 AM, Tomas Vondra wrote:\r\n> Hi,\r\n> \r\n> Here's a rebased patch, without the last bit which is now unnecessary\r\n> thanks to c981d9145dea.\r\n\r\nThanks for continuing to work on this patch! I tested the latest version \r\nand have some feedback/clarifications.\r\n\r\nI did some testing using a demo-app-based-on-a-real-world app I had \r\nconjured up[1]. This uses integer sequences as surrogate keys.\r\n\r\nIn general things seemed to work, but I had a couple of \r\nobservations/questions.\r\n\r\n1. Sequence IDs after a \"failover\". I believe this is a design decision, \r\nbut I noticed that after simulating a failover, the IDs were replicating \r\nfrom a higher value, e.g.\r\n\r\nINSERT INTO room (name) VALUES ('room 1');\r\nINSERT INTO room (name) VALUES ('room 2');\r\nINSERT INTO room (name) VALUES ('room 3');\r\nINSERT INTO room (name) VALUES ('room 4');\r\n\r\nThe values of room_id_seq on each instance:\r\n\r\ninstance 1:\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 4 | 29 | t\r\n\r\n instance 2:\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 33 | 0 | t\r\n\r\nAfter the switchover on instance 2:\r\n\r\nINSERT INTO room (name) VALUES ('room 5') RETURNING id;\r\n\r\n id\r\n----\r\n 34\r\n\r\nI don't see this as an issue for most applications, but we should at \r\nleast document the behavior somewhere.\r\n\r\n2. Using with origin=none with nonconflicting sequences.\r\n\r\nI modified the example in [1] to set up two schemas with non-conflicting \r\nsequences[2], e.g. 
on instance 1:\r\n\r\nCREATE TABLE public.room (\r\n id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 1) \r\nPRIMARY KEY,\r\n name text NOT NULL\r\n);\r\n\r\nand instance 2:\r\n\r\nCREATE TABLE public.room (\r\n id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 2) \r\nPRIMARY KEY,\r\n name text NOT NULL\r\n);\r\n\r\nI ran the following on instance 1:\r\n\r\nINSERT INTO public.room (name) VALUES ('room 1-e');\r\n\r\nThis committed and successfully replicated.\r\n\r\nHowever, when I ran the following on instance 2, I received a conflict \r\nerror:\r\n\r\nINSERT INTO public.room (name) VALUES ('room 1-w');\r\n\r\nThe conflict came further down the trigger change, i.e. to a change in \r\nthe `public.calendar` table:\r\n\r\n2023-02-22 01:49:12.293 UTC [87235] ERROR: duplicate key value violates \r\nunique constraint \"calendar_pkey\"\r\n2023-02-22 01:49:12.293 UTC [87235] DETAIL: Key (id)=(661) already exists.\r\n\r\nAfter futzing with the logging and restarting, I was also able to \r\nreproduce a similar conflict with the same insert pattern into 'room'.\r\n\r\nI did notice that the sequence values kept bouncing around between the \r\nservers. Without any activity, this is what \"SELECT * FROM room_id_seq\" \r\nwould return with queries run ~4s apart:\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 131 | 0 | t\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 65 | 0 | t\r\n\r\nThe values were more varying on \"calendar\". 
Again, this is under no \r\nadditional write activity, these numbers kept fluctuating:\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 197 | 0 | t\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 461 | 0 | t\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 263 | 0 | t\r\n\r\n last_value | log_cnt | is_called\r\n------------+---------+-----------\r\n 527 | 0 | t\r\n\r\nTo handle this case for now, I adapted the schema to create sequences \r\nthat we clearly independently named[3]. I did learn that I had to create \r\nsequences on both instances to support this behavior, e.g.:\r\n\r\n-- instance 1\r\nCREATE SEQUENCE public.room_id_1_seq AS int INCREMENT BY 2 START WITH 1;\r\nCREATE SEQUENCE public.room_id_2_seq AS int INCREMENT BY 2 START WITH 2;\r\nCREATE TABLE public.room (\r\n id int DEFAULT nextval('room_id_1_seq') PRIMARY KEY,\r\n name text NOT NULL\r\n);\r\n\r\n-- instance 2\r\nCREATE SEQUENCE public.room_id_1_seq AS int INCREMENT BY 2 START WITH 1;\r\nCREATE SEQUENCE public.room_id_2_seq AS int INCREMENT BY 2 START WITH 2;\r\nCREATE TABLE public.room (\r\n id int DEFAULT nextval('room_id_2_seq') PRIMARY KEY,\r\n name text NOT NULL\r\n);\r\n\r\nAfter building out [3] this did work, but it was more tedious.\r\n\r\nIs it possible to support IDENTITY columns (or serial columns) where the \r\nvalues of the sequence are set to different intervals on the \r\npublisher/subscriber?\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://github.com/CrunchyData/postgres-realtime-demo/blob/main/examples/demo/demo1.sql\r\n[2] https://gist.github.com/jkatz/5c34bf1e401b3376dfe8e627fcd30af3\r\n[3] https://gist.github.com/jkatz/1599e467d55abec88ab487d8ac9dc7c3",
"msg_date": "Tue, 21 Feb 2023 21:28:55 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\nOn 2/22/23 03:28, Jonathan S. Katz wrote:\n> Hi,\n> \n> On 2/16/23 10:50 AM, Tomas Vondra wrote:\n>> Hi,\n>>\n>> Here's a rebased patch, without the last bit which is now unnecessary\n>> thanks to c981d9145dea.\n> \n> Thanks for continuing to work on this patch! I tested the latest version\n> and have some feedback/clarifications.\n> \n\nThanks!\n\n> I did some testing using a demo-app-based-on-a-real-world app I had\n> conjured up[1]. This uses integer sequences as surrogate keys.\n> \n> In general things seemed to work, but I had a couple of\n> observations/questions.\n> \n> 1. Sequence IDs after a \"failover\". I believe this is a design decision,\n> but I noticed that after simulating a failover, the IDs were replicating\n> from a higher value, e.g.\n> \n> INSERT INTO room (name) VALUES ('room 1');\n> INSERT INTO room (name) VALUES ('room 2');\n> INSERT INTO room (name) VALUES ('room 3');\n> INSERT INTO room (name) VALUES ('room 4');\n> \n> The values of room_id_seq on each instance:\n> \n> instance 1:\n> \n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 4 | 29 | t\n> \n> instance 2:\n> \n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 33 | 0 | t\n> \n> After the switchover on instance 2:\n> \n> INSERT INTO room (name) VALUES ('room 5') RETURNING id;\n> \n> id\n> ----\n> 34\n> \n> I don't see this as an issue for most applications, but we should at\n> least document the behavior somewhere.\n> \n\nYes, this is due to how we WAL-log sequences. We don't log individual\nincrements, but every 32nd increment and we log the \"future\" sequence\nstate so that after a crash/recovery we don't generate duplicates.\n\nSo you do nextval() and it returns 1. But into WAL we record 32. 
And\nthere will be no WAL records until nextval reaches 32 and needs to\ngenerate another batch.\n\nAnd because logical replication relies on these WAL records, it inherits\nthis batching behavior with a \"jump\" on recovery/failover. IMHO it's OK,\nit works for the \"logical failover\" use case and if you need gapless\nsequences then regular sequences are not an issue anyway.\n\nIt's possible to reduce the jump a bit by reducing the batch size (from\n32 to 0) so that every increment is logged. But it doesn't eliminate it\nbecause of rollbacks.\n\n> 2. Using with origin=none with nonconflicting sequences.\n> \n> I modified the example in [1] to set up two schemas with non-conflicting\n> sequences[2], e.g. on instance 1:\n> \n> CREATE TABLE public.room (\n> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 1)\n> PRIMARY KEY,\n> name text NOT NULL\n> );\n> \n> and instance 2:\n> \n> CREATE TABLE public.room (\n> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 2)\n> PRIMARY KEY,\n> name text NOT NULL\n> );\n> \n\nWell, yeah. We don't support active-active logical replication (at least\nnot with the built-in). You can easily get into similar issues without\nsequences.\n\nReplicating a sequence overwrites the state of the sequence on the other\nside, which may result in it generating duplicate values with the other\nnode, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 22 Feb 2023 11:02:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2/22/23 5:02 AM, Tomas Vondra wrote:\r\n> \r\n> On 2/22/23 03:28, Jonathan S. Katz wrote:\r\n\r\n>> Thanks for continuing to work on this patch! I tested the latest version\r\n>> and have some feedback/clarifications.\r\n>>\r\n> \r\n> Thanks!\r\n\r\nAlso I should mention I've been testing with both async/sync logical \r\nreplication. I didn't have any specific comments on either as it seemed \r\nto just work and behaviors aligned with existing expectations.\r\n\r\nGenerally it's been a good experience and it seems to be working. :) At \r\nthis point I'm trying to understand the limitations and tripwires so we \r\ncan guide users appropriately.\r\n\r\n> Yes, this is due to how we WAL-log sequences. We don't log individual\r\n> increments, but every 32nd increment and we log the \"future\" sequence\r\n> state so that after a crash/recovery we don't generate duplicates.\r\n> \r\n> So you do nextval() and it returns 1. But into WAL we record 32. And\r\n> there will be no WAL records until nextval reaches 32 and needs to\r\n> generate another batch.\r\n> \r\n> And because logical replication relies on these WAL records, it inherits\r\n> this batching behavior with a \"jump\" on recovery/failover. IMHO it's OK,\r\n> it works for the \"logical failover\" use case and if you need gapless\r\n> sequences then regular sequences are not an issue anyway.\r\n> \r\n> It's possible to reduce the jump a bit by reducing the batch size (from\r\n> 32 to 0) so that every increment is logged. But it doesn't eliminate it\r\n> because of rollbacks.\r\n\r\nI generally agree. I think it's mainly something we should capture in \r\nthe user docs that they can be a jump on the subscriber side, so people \r\nare not surprised.\r\n\r\nInterestingly, in systems that tend to have higher rates of failover \r\n(I'm thinking of a few distributed systems), this may cause int4 \r\nsequences to exhaust numbers slightly (marginally?) more quickly. 
Likely \r\nnot too big of an issue, but something to keep in mind.\r\n\r\n>> 2. Using with origin=none with nonconflicting sequences.\r\n>>\r\n>> I modified the example in [1] to set up two schemas with non-conflicting\r\n>> sequences[2], e.g. on instance 1:\r\n>>\r\n>> CREATE TABLE public.room (\r\n>> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 1)\r\n>> PRIMARY KEY,\r\n>> name text NOT NULL\r\n>> );\r\n>>\r\n>> and instance 2:\r\n>>\r\n>> CREATE TABLE public.room (\r\n>> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 2)\r\n>> PRIMARY KEY,\r\n>> name text NOT NULL\r\n>> );\r\n>>\r\n> \r\n> Well, yeah. We don't support active-active logical replication (at least\r\n> not with the built-in). You can easily get into similar issues without\r\n> sequences.\r\n\r\nThe \"origin=none\" feature lets you replicate tables bidirectionally. \r\nWhile it's not full \"active-active\", this is a starting point and a \r\nfeature for v16. We'll definitely have users replicating data \r\nbidirectionally with this.\r\n\r\n> Replicating a sequence overwrites the state of the sequence on the other\r\n> side, which may result in it generating duplicate values with the other\r\n> node, etc.\r\n\r\nI understand that we don't currently support global sequences, but I am \r\nconcerned there may be a tripwire here in the origin=none case given \r\nit's fairly common to use serial/GENERATED BY to set primary keys. And \r\nit's fairly trivial to set them to be nonconflicting, or at least give \r\nthe user the appearance that they are nonconflicting.\r\n\r\n From my high level understanding of how sequences work, this sounds like \r\nit would be a lift to support the example in [1]. 
Or maybe the answer is \r\nthat you can bidirectionally replicate the changes in the tables, but \r\nnot sequences?\r\n\r\nIn any case, we should update the restrictions in [2] to state: while \r\nsequences can be replicated, there is additional work required if you \r\nare bidirectionally replicating tables that use sequences, esp. if used \r\nin a PK or a constraint. We can provide alternatives to how a user could \r\nset that up, i.e. not replicate the sequences or do something like in [3].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://gist.github.com/jkatz/5c34bf1e401b3376dfe8e627fcd30af3\r\n[2] \r\nhttps://www.postgresql.org/docs/devel/logical-replication-restrictions.html\r\n[3] https://gist.github.com/jkatz/1599e467d55abec88ab487d8ac9dc7c3",
"msg_date": "Wed, 22 Feb 2023 12:04:29 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2/22/23 18:04, Jonathan S. Katz wrote:\n> On 2/22/23 5:02 AM, Tomas Vondra wrote:\n>>\n>> On 2/22/23 03:28, Jonathan S. Katz wrote:\n> \n>>> Thanks for continuing to work on this patch! I tested the latest version\n>>> and have some feedback/clarifications.\n>>>\n>>\n>> Thanks!\n> \n> Also I should mention I've been testing with both async/sync logical\n> replication. I didn't have any specific comments on either as it seemed\n> to just work and behaviors aligned with existing expectations.\n> \n> Generally it's been a good experience and it seems to be working. :) At\n> this point I'm trying to understand the limitations and tripwires so we\n> can guide users appropriately.\n> \n\nGood to hear.\n\n>> Yes, this is due to how we WAL-log sequences. We don't log individual\n>> increments, but every 32nd increment and we log the \"future\" sequence\n>> state so that after a crash/recovery we don't generate duplicates.\n>>\n>> So you do nextval() and it returns 1. But into WAL we record 32. And\n>> there will be no WAL records until nextval reaches 32 and needs to\n>> generate another batch.\n>>\n>> And because logical replication relies on these WAL records, it inherits\n>> this batching behavior with a \"jump\" on recovery/failover. IMHO it's OK,\n>> it works for the \"logical failover\" use case and if you need gapless\n>> sequences then regular sequences are not an issue anyway.\n>>\n>> It's possible to reduce the jump a bit by reducing the batch size (from\n>> 32 to 0) so that every increment is logged. But it doesn't eliminate it\n>> because of rollbacks.\n> \n> I generally agree. I think it's mainly something we should capture in\n> the user docs that they can be a jump on the subscriber side, so people\n> are not surprised.\n> \n> Interestingly, in systems that tend to have higher rates of failover\n> (I'm thinking of a few distributed systems), this may cause int4\n> sequences to exhaust numbers slightly (marginally?) more quickly. 
Likely\n> not too big of an issue, but something to keep in mind.\n> \n\nIMHO the number of systems that would work fine with int4 sequences but\nthis change results in the sequences being \"exhausted\" too quickly is\nindistinguishable from 0. I don't think this is an issue.\n\n>>> 2. Using with origin=none with nonconflicting sequences.\n>>>\n>>> I modified the example in [1] to set up two schemas with non-conflicting\n>>> sequences[2], e.g. on instance 1:\n>>>\n>>> CREATE TABLE public.room (\n>>> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 1)\n>>> PRIMARY KEY,\n>>> name text NOT NULL\n>>> );\n>>>\n>>> and instance 2:\n>>>\n>>> CREATE TABLE public.room (\n>>> id int GENERATED BY DEFAULT AS IDENTITY (INCREMENT 2 START WITH 2)\n>>> PRIMARY KEY,\n>>> name text NOT NULL\n>>> );\n>>>\n>>\n>> Well, yeah. We don't support active-active logical replication (at least\n>> not with the built-in). You can easily get into similar issues without\n>> sequences.\n> \n> The \"origin=none\" feature lets you replicate tables bidirectionally.\n> While it's not full \"active-active\", this is a starting point and a\n> feature for v16. We'll definitely have users replicating data\n> bidirectionally with this.\n> \n\nWell, then the users need to use some other way to generate IDs, not\nlocal sequences. Either some sort of distributed/global sequence, UUIDs\nor something like that.\n\n>> Replicating a sequence overwrites the state of the sequence on the other\n>> side, which may result in it generating duplicate values with the other\n>> node, etc.\n> \n> I understand that we don't currently support global sequences, but I am\n> concerned there may be a tripwire here in the origin=none case given\n> it's fairly common to use serial/GENERATED BY to set primary keys. 
And\n> it's fairly trivial to set them to be nonconflicting, or at least give\n> the user the appearance that they are nonconflicting.\n> \n> From my high level understand of how sequences work, this sounds like it\n> would be a lift to support the example in [1]. Or maybe the answer is\n> that you can bidirectionally replicate the changes in the tables, but\n> not sequences?\n> \n\nYes, I don't think local sequences can work in such setups.\n\n> In any case, we should update the restrictions in [2] to state: while\n> sequences can be replicated, there is additional work required if you\n> are bidirectionally replicating tables that use sequences, esp. if used\n> in a PK or a constraint. We can provide alternatives to how a user could\n> set that up, i.e. not replicates the sequences or do something like in [3].\n> \n\nI agree. I see this as mostly a documentation issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 23 Feb 2023 13:56:13 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2/23/23 7:56 AM, Tomas Vondra wrote:\r\n> On 2/22/23 18:04, Jonathan S. Katz wrote:\r\n>> On 2/22/23 5:02 AM, Tomas Vondra wrote:\r\n>>>\r\n\r\n>> Interestingly, in systems that tend to have higher rates of failover\r\n>> (I'm thinking of a few distributed systems), this may cause int4\r\n>> sequences to exhaust numbers slightly (marginally?) more quickly. Likely\r\n>> not too big of an issue, but something to keep in mind.\r\n>>\r\n> \r\n> IMHO the number of systems that would work fine with int4 sequences but\r\n> this change results in the sequences being \"exhausted\" too quickly is\r\n> indistinguishable from 0. I don't think this is an issue.\r\n\r\nI agree it's an edge case. I do think it's a number greater than 0, \r\nhaving seen some incredibly flaky setups, particularly in distributed \r\nsystems. I would not worry about it, but only mentioned it to try and \r\nprobe edge cases.\r\n\r\n>>> Well, yeah. We don't support active-active logical replication (at least\r\n>>> not with the built-in). You can easily get into similar issues without\r\n>>> sequences.\r\n>>\r\n>> The \"origin=none\" feature lets you replicate tables bidirectionally.\r\n>> While it's not full \"active-active\", this is a starting point and a\r\n>> feature for v16. We'll definitely have users replicating data\r\n>> bidirectionally with this.\r\n>>\r\n> \r\n> Well, then the users need to use some other way to generate IDs, not\r\n> local sequences. Either some sort of distributed/global sequence, UUIDs\r\n> or something like that.\r\n[snip]\r\n\r\n>> In any case, we should update the restrictions in [2] to state: while\r\n>> sequences can be replicated, there is additional work required if you\r\n>> are bidirectionally replicating tables that use sequences, esp. if used\r\n>> in a PK or a constraint. We can provide alternatives to how a user could\r\n>> set that up, i.e. not replicates the sequences or do something like in [3].\r\n>>\r\n> \r\n> I agree. 
I see this as mostly a documentation issue.\r\n\r\nGreat. I agree that users need other mechanisms to generate IDs, but we \r\nshould ensure we document that. If needed, I'm happy to help with the \r\ndocs here.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sun, 26 Feb 2023 14:11:53 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nhere's a rebased patch to make cfbot happy, dropping the first part that\nis now unnecessary thanks to 7fe1aa991b.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Feb 2023 19:01:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 1, 2023 at 1:02 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n> here's a rebased patch to make cfbot happy, dropping the first part that\n> is now unnecessary thanks to 7fe1aa991b.\n\nHi Tomas,\n\nI'm looking into doing some \"in situ\" testing, but for now I'll mention\nsome minor nits I found:\n\n0001\n\n+ * so we simply do a lookup (the sequence is identified by relfilende). If\n\nrelfilenode? Or should it be called a relfilelocator, which is the\nparameter type? I see some other references to relfilenode in comments and\ncommit message, and I'm not sure which need to be updated.\n\n+ /* XXX Maybe check that we're still in the same top-level xact? */\n\nAny ideas on what should happen here?\n\n+ /* XXX how could we have sequence change without data? */\n+ if(!datalen || !tupledata)\n+ elog(ERROR, \"sequence decode missing tuple data\");\n\nSince the ERROR is new based on feedback, we can get rid of XXX I think.\n\nMore generally, I associate XXX comments to highlight problems or\nunpleasantness in the code that don't quite rise to the level of FIXME, but\nare perhaps more serious than \"NB:\", \"Note:\", or \"Important:\"\n\n+ * When we're called via the SQL SRF there's already a transaction\n\nI see this was copied from existing code, but I found it confusing -- does\nthis function have a stable name?\n\n+ /* Only ever called from ReorderBufferApplySequence, so transational. */\n\nTypo: transactional\n\n0002\n\nI see a few SERIAL types in the tests but no GENERATED ... 
AS IDENTITY --\nnot sure if it matters, but seems good for completeness.\n\nReminder for later: Patches 0002 and 0003 still refer to 0da92dc530, which\nis a reverted commit -- I assume it intends to refer to the content of 0001?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 10 Mar 2023 17:03:04 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "I tried a couple toy examples with various combinations of use styles.\n\nThree with \"automatic\" reading from sequences:\n\ncreate table test(i serial);\ncreate table test(i int GENERATED BY DEFAULT AS IDENTITY);\ncreate table test(i int default nextval('s1'));\n\n...where s1 has some non-default parameters:\n\nCREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n\n...and then two with explicit use of s1, one inserting the 'nextval' into a\ntable with no default, and one with no table at all, just selecting from\nthe sequence.\n\nThe last two seem to work similarly to the first three, so it seems like\nFOR ALL TABLES adds all sequences as well. Is that expected? The\ndocumentation for CREATE PUBLICATION mentions sequence options, but doesn't\nreally say how these options should be used.\n\nHere's the script:\n\n# alter system set wal_level='logical';\n# restart\n# port 7777 is subscriber\n\necho\necho \"PUB:\"\npsql -c \"drop sequence if exists s1;\"\npsql -c \"drop publication if exists pub1;\"\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"drop sequence if exists s1;\"\npsql -p 7777 -c \"drop subscription if exists sub1 ;\"\n\necho\necho \"PUB:\"\npsql -c \"CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\"\npsql -c \"CREATE PUBLICATION pub1 FOR ALL TABLES;\"\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\"\npsql -p 7777 -c \"CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=john application_name=sub1 port=5432' PUBLICATION pub1;\"\n\n\necho\necho \"PUB:\"\npsql -c \"select nextval('s1');\"\npsql -c \"select nextval('s1');\"\npsql -c \"select * from s1;\"\n\nsleep 1\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"select * from s1;\"\n\npsql -p 7777 -c \"drop subscription sub1 ;\"\n\npsql -p 7777 -c \"select nextval('s1');\"\npsql -p 7777 -c \"select * from s1;\"\n\n\n...with the last two queries returning\n\n nextval\n---------\n 67\n(1 row)\n\n last_value | log_cnt | 
is_called\n------------+---------+-----------\n 67 | 32 | t\n\nSo, I interpret that the decrement by 32 got logged here.\n\nAlso, running\n\nCREATE PUBLICATION pub2 FOR ALL SEQUENCES WITH (publish = 'insert, update,\ndelete, truncate, sequence');\n\n...reports success, but do non-default values of \"publish = ...\" have an\neffect (or should they), or are these just ignored? It seems like these\ncases shouldn't be treated orthogonally.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 14 Mar 2023 14:30:02 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/10/23 11:03, John Naylor wrote:\n> \n> On Wed, Mar 1, 2023 at 1:02 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n>> here's a rebased patch to make cfbot happy, dropping the first part that\n>> is now unnecessary thanks to 7fe1aa991b.\n> \n> Hi Tomas,\n> \n> I'm looking into doing some \"in situ\" testing, but for now I'll mention\n> some minor nits I found:\n> \n> 0001\n> \n> + * so we simply do a lookup (the sequence is identified by relfilende). If\n> \n> relfilenode? Or should it be called a relfilelocator, which is the\n> parameter type? I see some other references to relfilenode in comments\n> and commit message, and I'm not sure which need to be updated.\n> \n\nYeah, that's a leftover from the original patch, before the relfilenode\nwas renamed to relfilelocator.\n\n> + /* XXX Maybe check that we're still in the same top-level xact? */\n> \n> Any ideas on what should happen here?\n> \n\nI don't recall why I added this comment, but I don't think there's\nanything we need to do (so drop the comment).\n\n> + /* XXX how could we have sequence change without data? */\n> + if(!datalen || !tupledata)\n> + elog(ERROR, \"sequence decode missing tuple data\");\n> \n> Since the ERROR is new based on feedback, we can get rid of XXX I think.\n> \n> More generally, I associate XXX comments to highlight problems or\n> unpleasantness in the code that don't quite rise to the level of FIXME,\n> but are perhaps more serious than \"NB:\", \"Note:\", or \"Important:\"\n> \n\nUnderstood. I keep adding XXX in places where I have some open\nquestions, or something that may need to be improved (so kinda less\nserious than a FIXME).\n\n> + * When we're called via the SQL SRF there's already a transaction\n> \n> I see this was copied from existing code, but I found it confusing --\n> does this function have a stable name?\n> \n\nWhat do you mean by \"stable name\"? 
It certainly is not exposed as a\nuser-callable SQL function, so I think this comment is misleading and\nshould be removed.\n\n> + /* Only ever called from ReorderBufferApplySequence, so transational. */\n> \n> Typo: transactional\n> \n> 0002\n> \n> I see a few SERIAL types in the tests but no GENERATED ... AS IDENTITY\n> -- not sure if it matters, but seems good for completeness.\n> \n\nThat's a good point. Adding tests for GENERATED ... AS IDENTITY is a\ngood idea.\n\n> Reminder for later: Patches 0002 and 0003 still refer to 0da92dc530,\n> which is a reverted commit -- I assume it intends to refer to the\n> content of 0001?\n> \n\nCorrect. That needs to be adjusted at commit time.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 15 Mar 2023 13:00:22 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/14/23 08:30, John Naylor wrote:\n> I tried a couple toy examples with various combinations of use styles.\n> \n> Three with \"automatic\" reading from sequences:\n> \n> create table test(i serial);\n> create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n> create table test(i int default nextval('s1'));\n> \n> ...where s1 has some non-default parameters:\n> \n> CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n> \n> ...and then two with explicit use of s1, one inserting the 'nextval'\n> into a table with no default, and one with no table at all, just\n> selecting from the sequence.\n> \n> The last two seem to work similarly to the first three, so it seems like\n> FOR ALL TABLES adds all sequences as well. Is that expected?\n\nYeah, that's a bug - we shouldn't replicate the sequence changes, unless\nthe sequence is actually added to the publication. I tracked this down\nto a thinko in get_rel_sync_entry() which failed to check the object\ntype when puballtables or puballsequences was set.\n\nAttached is a patch fixing this.\n\n> The documentation for CREATE PUBLICATION mentions sequence options,\n> but doesn't really say how these options should be used.\nGood point. The idea is that we handle tables and sequences the same\nway, i.e. if you specify 'sequence' then we'll replicate increments for\nsequences explicitly added to the publication.\n\nIf this is not clear, the docs may need some improvements.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 15 Mar 2023 13:51:32 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn Wed, Mar 15, 2023 at 9:52 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 3/14/23 08:30, John Naylor wrote:\n> > I tried a couple toy examples with various combinations of use styles.\n> >\n> > Three with \"automatic\" reading from sequences:\n> >\n> > create table test(i serial);\n> > create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n> > create table test(i int default nextval('s1'));\n> >\n> > ...where s1 has some non-default parameters:\n> >\n> > CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n> >\n> > ...and then two with explicit use of s1, one inserting the 'nextval'\n> > into a table with no default, and one with no table at all, just\n> > selecting from the sequence.\n> >\n> > The last two seem to work similarly to the first three, so it seems like\n> > FOR ALL TABLES adds all sequences as well. Is that expected?\n>\n> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n> the sequence is actually added to the publication. I tracked this down\n> to a thinko in get_rel_sync_entry() which failed to check the object\n> type when puballtables or puballsequences was set.\n>\n> Attached is a patch fixing this.\n>\n> > The documentation for CREATE PUBLICATION mentions sequence options,\n> > but doesn't really say how these options should be used.\n> Good point. The idea is that we handle tables and sequences the same\n> way, i.e. if you specify 'sequence' then we'll replicate increments for\n> sequences explicitly added to the publication.\n>\n> If this is not clear, the docs may need some improvements.\n>\n\nI'm late to this thread, but I have some questions and review comments.\n\nRegarding sequence logical replication, it seems that changes of\nsequence created after CREATE SUBSCRIPTION are applied on the\nsubscriber even without REFRESH PUBLICATION command on the subscriber.\nWhich is a different behavior than tables. 
For example, I set both\npublisher and subscriber as follows:\n\n1. On publisher\ncreate publication test_pub for all sequences;\n\n2. On subscriber\ncreate subscription test_sub connection 'dbname=postgres port=5551'\npublication test_pub; -- port=5551 is the publisher\n\n3. On publisher\ncreate sequence s1;\nselect nextval('s1');\n\nI got the error \"ERROR: relation \"public.s1\" does not exist on the\nsubscriber\". Probably we need to do should_apply_changes_for_rel()\ncheck in apply_handle_sequence().\n\nIf my understanding is correct, is there any case where the subscriber\nneeds to apply transactional sequence changes? The commit message of\n0001 patch says:\n\n * Changes for sequences created in the same top-level transaction are\n treated as transactional, i.e. just like any other change from that\n transaction, and discarded in case of a rollback.\n\nIIUC such sequences are not visible to the subscriber, so it cannot\nsubscribe to them until the commit.\n\n---\nI got an assertion failure. The reproducible steps are:\n\n1. On publisher\nalter system set logical_replication_mode = 'immediate';\nselect pg_reload_conf();\ncreate publication test_pub for all sequences;\n\n2. On subscriber\ncreate subscription test_sub connection 'dbname=postgres port=5551'\npublication test_pub with (streaming='parall\\el')\n\n3. 
On publisher\nbegin;\ncreate table bar (c int, d serial);\ninsert into bar(c) values (100);\ncommit;\n\nI got the following assertion failure:\n\nTRAP: failed Assert(\"(!seq.transactional) || in_remote_transaction\"),\nFile: \"worker.c\", Line: 1458, PID: 508056\npostgres: logical replication parallel apply worker for subscription\n16388 (ExceptionalCondition+0x9e)[0xb6c0af]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x92f7fe]\npostgres: logical replication parallel apply worker for subscription\n16388 (apply_dispatch+0xed)[0x932925]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x90d927]\npostgres: logical replication parallel apply worker for subscription\n16388 (ParallelApplyWorkerMain+0x34f)[0x90dd8d]\npostgres: logical replication parallel apply worker for subscription\n16388 (StartBackgroundWorker+0x1f3)[0x8e7b19]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x8f1798]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x8f1b53]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x8f0bed]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x8ecca4]\npostgres: logical replication parallel apply worker for subscription\n16388 (PostmasterMain+0x1246)[0x8ec6d7]\npostgres: logical replication parallel apply worker for subscription\n16388 [0x7bbe5c]\n/lib64/libc.so.6(__libc_start_main+0xf3)[0x7f69094cbcf3]\npostgres: logical replication parallel apply worker for subscription\n16388 (_start+0x2e)[0x49d15e]\n2023-03-16 12:33:19.471 JST [507974] LOG: background worker \"logical\nreplication parallel worker\" (PID 508056) was terminated by signal 6:\nAborted\n\nseq.transactional is true and in_remote_transaction is false. 
It might\nbe an issue of the parallel apply feature rather than this patch.\n\n---\nThere is no documentation about the new 'sequence' value of the\npublish option in CREATE/ALTER PUBLICATION. It seems to be possible to\nspecify something like \"CREATE PUBLICATION ... FOR ALL SEQUENCES WITH\n(publish = 'truncate')\" (i.e., not specifying 'sequence' value in the\npublish option). How does logical replication work with this setting?\nNothing is replicated?\n\n---\nIt seems that sequence replication does't work well together with\nALTER SUBSCRIPTION ... SKIP command. IIUC these changes are not\nskipped even if these are transactional changes. The reproducible\nsteps are:\n\n1. On both nodes\ncreate table a (c int primary key);\n\n2. On publisher\ncreate publication hoge_pub for all sequences, tables\n\n3. On subscriber\ncreate subscription hoge_sub connection 'dbname=postgres port=5551'\npublication hoge_pub;\ninsert into a values (1);\n\n4. On publisher\nbegin;\ncreate sequence s2;\ninsert into a values (nextval('s2'));\ncommit;\n\nAt step 4, applying INSERT conflicts with the existing row on the\nsubscriber. If I skip this transaction using ALTER SUBSCRIPTION ...\nSKIP command, I got:\n\nERROR: relation \"public.s2\" does not exist\nCONTEXT: processing remote data for replication origin \"pg_16390\"\nduring message type \"BEGIN\" in transaction 734, finished at 0/1751698\n\nIf I create the sequence s2 in advance on the subscriber, the sequence\nchange is applied on the subscriber.\n\nIf the subscriber doesn't need to apply transactional sequence changes\nin the first place, this problem will disappear.\n\n---\nThere are two typos in 0001 patch:\n\nIn the commit message:\n\n ensure the sequence record has a valid XID - until now the the increment\n\ns/the the/ the/\n\nAnd,\n\n+ /* Only ever called from ReorderBufferApplySequence, so transational. */\n\ns/transational/transactional/\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Mar 2023 16:38:07 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Mar 16, 2023 at 1:08 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Mar 15, 2023 at 9:52 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> >\n> >\n> > On 3/14/23 08:30, John Naylor wrote:\n> ---\n> I got an assertion failure. The reproducible steps are:\n>\n> 1. On publisher\n> alter system set logical_replication_mode = 'immediate';\n> select pg_reload_conf();\n> create publication test_pub for all sequences;\n>\n> 2. On subscriber\n> create subscription test_sub connection 'dbname=postgres port=5551'\n> publication test_pub with (streaming='parall\\el')\n>\n> 3. On publisher\n> begin;\n> create table bar (c int, d serial);\n> insert into bar(c) values (100);\n> commit;\n>\n> I got the following assertion failure:\n>\n> TRAP: failed Assert(\"(!seq.transactional) || in_remote_transaction\"),\n...\n>\n> seq.transactional is true and in_remote_transaction is false. It might\n> be an issue of the parallel apply feature rather than this patch.\n>\n\nDuring parallel apply we didn't need to rely on in_remote_transaction,\nso it was not set. I haven't checked the patch in detail but am\nwondering, isn't it sufficient to instead check IsTransactionState()\nand or IsTransactionOrTransactionBlock()?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 16 Mar 2023 16:26:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi!\n\nOn 3/16/23 08:38, Masahiko Sawada wrote:\n> Hi,\n> \n> On Wed, Mar 15, 2023 at 9:52 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> On 3/14/23 08:30, John Naylor wrote:\n>>> I tried a couple toy examples with various combinations of use styles.\n>>>\n>>> Three with \"automatic\" reading from sequences:\n>>>\n>>> create table test(i serial);\n>>> create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n>>> create table test(i int default nextval('s1'));\n>>>\n>>> ...where s1 has some non-default parameters:\n>>>\n>>> CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n>>>\n>>> ...and then two with explicit use of s1, one inserting the 'nextval'\n>>> into a table with no default, and one with no table at all, just\n>>> selecting from the sequence.\n>>>\n>>> The last two seem to work similarly to the first three, so it seems like\n>>> FOR ALL TABLES adds all sequences as well. Is that expected?\n>>\n>> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n>> the sequence is actually added to the publication. I tracked this down\n>> to a thinko in get_rel_sync_entry() which failed to check the object\n>> type when puballtables or puballsequences was set.\n>>\n>> Attached is a patch fixing this.\n>>\n>>> The documentation for CREATE PUBLICATION mentions sequence options,\n>>> but doesn't really say how these options should be used.\n>> Good point. The idea is that we handle tables and sequences the same\n>> way, i.e. 
if you specify 'sequence' then we'll replicate increments for\n>> sequences explicitly added to the publication.\n>>\n>> If this is not clear, the docs may need some improvements.\n>>\n> \n> I'm late to this thread, but I have some questions and review comments.\n> \n> Regarding sequence logical replication, it seems that changes of\n> sequence created after CREATE SUBSCRIPTION are applied on the\n> subscriber even without REFRESH PUBLICATION command on the subscriber.\n> Which is a different behavior than tables. For example, I set both\n> publisher and subscriber as follows:\n> \n> 1. On publisher\n> create publication test_pub for all sequences;\n> \n> 2. On subscriber\n> create subscription test_sub connection 'dbname=postgres port=5551'\n> publication test_pub; -- port=5551 is the publisher\n> \n> 3. On publisher\n> create sequence s1;\n> select nextval('s1');\n> \n> I got the error \"ERROR: relation \"public.s1\" does not exist on the\n> subscriber\". Probably we need to do should_apply_changes_for_rel()\n> check in apply_handle_sequence().\n> \n\nYes, you're right - the sequence handling should have been calling the\nshould_apply_changes_for_rel() etc.\n\nThe attached 0005 patch should fix that - I still need to test it a bit\nmore and maybe clean it up a bit, but hopefully it'll allow you to\ncontinue the review.\n\nI had to tweak the protocol a bit, so that this uses the same cache as\ntables. I wonder if maybe we should make it even more similar, by\nessentially treating sequences as tables with (last_value, log_cnt,\ncalled) columns.\n\n> If my understanding is correct, is there any case where the subscriber\n> needs to apply transactional sequence changes? The commit message of\n> 0001 patch says:\n> \n> * Changes for sequences created in the same top-level transaction are\n> treated as transactional, i.e. 
just like any other change from that\n> transaction, and discarded in case of a rollback.\n> \n> IIUC such sequences are not visible to the subscriber, so it cannot\n> subscribe to them until the commit.\n> \n\nThe comment is slightly misleading, as it talks about creation of\nsequences, but it should be talking about relfilenodes. For example, if\nyou create a sequence, add it to publication, and then in a later\ntransaction you do\n\n ALTER SEQUENCE x RESTART\n\nor something else that creates a new relfilenode, then the subsequent\nincrements are visible only in that transaction. But we still need to\napply those on the subscriber, but only as part of the transaction,\nbecause it might roll back.\n\n> ---\n> I got an assertion failure. The reproducible steps are:\n> \n\nI do believe this was due to a thinko in apply_handle_sequence, which\nsometimes started transaction and didn't terminate it correctly. I've\nchanged it to use the begin_replication_step() etc. and it seems to be\nworking fine now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 16 Mar 2023 17:25:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, 16 Mar 2023 at 21:55, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi!\n>\n> On 3/16/23 08:38, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > On Wed, Mar 15, 2023 at 9:52 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >>\n> >>\n> >> On 3/14/23 08:30, John Naylor wrote:\n> >>> I tried a couple toy examples with various combinations of use styles.\n> >>>\n> >>> Three with \"automatic\" reading from sequences:\n> >>>\n> >>> create table test(i serial);\n> >>> create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n> >>> create table test(i int default nextval('s1'));\n> >>>\n> >>> ...where s1 has some non-default parameters:\n> >>>\n> >>> CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n> >>>\n> >>> ...and then two with explicit use of s1, one inserting the 'nextval'\n> >>> into a table with no default, and one with no table at all, just\n> >>> selecting from the sequence.\n> >>>\n> >>> The last two seem to work similarly to the first three, so it seems like\n> >>> FOR ALL TABLES adds all sequences as well. Is that expected?\n> >>\n> >> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n> >> the sequence is actually added to the publication. I tracked this down\n> >> to a thinko in get_rel_sync_entry() which failed to check the object\n> >> type when puballtables or puballsequences was set.\n> >>\n> >> Attached is a patch fixing this.\n> >>\n> >>> The documentation for CREATE PUBLICATION mentions sequence options,\n> >>> but doesn't really say how these options should be used.\n> >> Good point. The idea is that we handle tables and sequences the same\n> >> way, i.e. 
if you specify 'sequence' then we'll replicate increments for\n> >> sequences explicitly added to the publication.\n> >>\n> >> If this is not clear, the docs may need some improvements.\n> >>\n> >\n> > I'm late to this thread, but I have some questions and review comments.\n> >\n> > Regarding sequence logical replication, it seems that changes of\n> > sequence created after CREATE SUBSCRIPTION are applied on the\n> > subscriber even without REFRESH PUBLICATION command on the subscriber.\n> > Which is a different behavior than tables. For example, I set both\n> > publisher and subscriber as follows:\n> >\n> > 1. On publisher\n> > create publication test_pub for all sequences;\n> >\n> > 2. On subscriber\n> > create subscription test_sub connection 'dbname=postgres port=5551'\n> > publication test_pub; -- port=5551 is the publisher\n> >\n> > 3. On publisher\n> > create sequence s1;\n> > select nextval('s1');\n> >\n> > I got the error \"ERROR: relation \"public.s1\" does not exist on the\n> > subscriber\". Probably we need to do should_apply_changes_for_rel()\n> > check in apply_handle_sequence().\n> >\n>\n> Yes, you're right - the sequence handling should have been calling the\n> should_apply_changes_for_rel() etc.\n>\n> The attached 0005 patch should fix that - I still need to test it a bit\n> more and maybe clean it up a bit, but hopefully it'll allow you to\n> continue the review.\n>\n> I had to tweak the protocol a bit, so that this uses the same cache as\n> tables. I wonder if maybe we should make it even more similar, by\n> essentially treating sequences as tables with (last_value, log_cnt,\n> called) columns.\n>\n> > If my understanding is correct, is there any case where the subscriber\n> > needs to apply transactional sequence changes? The commit message of\n> > 0001 patch says:\n> >\n> > * Changes for sequences created in the same top-level transaction are\n> > treated as transactional, i.e. 
just like any other change from that\n> > transaction, and discarded in case of a rollback.\n> >\n> > IIUC such sequences are not visible to the subscriber, so it cannot\n> > subscribe to them until the commit.\n> >\n>\n> The comment is slightly misleading, as it talks about creation of\n> sequences, but it should be talking about relfilenodes. For example, if\n> you create a sequence, add it to publication, and then in a later\n> transaction you do\n>\n> ALTER SEQUENCE x RESTART\n>\n> or something else that creates a new relfilenode, then the subsequent\n> increments are visible only in that transaction. But we still need to\n> apply those on the subscriber, but only as part of the transaction,\n> because it might roll back.\n>\n> > ---\n> > I got an assertion failure. The reproducible steps are:\n> >\n>\n> I do believe this was due to a thinko in apply_handle_sequence, which\n> sometimes started transaction and didn't terminate it correctly. I've\n> changed it to use the begin_replication_step() etc. and it seems to be\n> working fine now.\n\nOne of the patch does not apply on HEAD, because of a recent commit,\nwe might have to rebase the patch:\ngit am 0005-fixup-syncing-refresh-sequences-20230316.patch\nApplying: fixup syncing/refresh sequences\nerror: patch failed: src/backend/replication/pgoutput/pgoutput.c:711\nerror: src/backend/replication/pgoutput/pgoutput.c: patch does not apply\nPatch failed at 0001 fixup syncing/refresh sequences\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 17 Mar 2023 11:04:00 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:51 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n>\n>\n> On 3/14/23 08:30, John Naylor wrote:\n> > I tried a couple toy examples with various combinations of use styles.\n> >\n> > Three with \"automatic\" reading from sequences:\n> >\n> > create table test(i serial);\n> > create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n> > create table test(i int default nextval('s1'));\n> >\n> > ...where s1 has some non-default parameters:\n> >\n> > CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n> >\n> > ...and then two with explicit use of s1, one inserting the 'nextval'\n> > into a table with no default, and one with no table at all, just\n> > selecting from the sequence.\n> >\n> > The last two seem to work similarly to the first three, so it seems like\n> > FOR ALL TABLES adds all sequences as well. Is that expected?\n>\n> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n> the sequence is actually added to the publication. I tracked this down\n> to a thinko in get_rel_sync_entry() which failed to check the object\n> type when puballtables or puballsequences was set.\n>\n> Attached is a patch fixing this.\n\nOkay, I can verify that with 0001-0006, sequences don't replicate unless\nspecified. 
I do see an additional change that doesn't make sense: On the\nsubscriber I no longer see a jump to the logged 32 increment, I see the\nvery next value:\n\n# alter system set wal_level='logical';\n# port 7777 is subscriber\n\necho\necho \"PUB:\"\npsql -c \"drop table if exists test;\"\npsql -c \"drop publication if exists pub1;\"\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"drop table if exists test;\"\npsql -p 7777 -c \"drop subscription if exists sub1 ;\"\n\necho\necho \"PUB:\"\npsql -c \"create table test(i int GENERATED BY DEFAULT AS IDENTITY);\"\npsql -c \"CREATE PUBLICATION pub1 FOR ALL TABLES;\"\npsql -c \"CREATE PUBLICATION pub2 FOR ALL SEQUENCES;\"\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"create table test(i int GENERATED BY DEFAULT AS IDENTITY);\"\npsql -p 7777 -c \"CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\ndbname=postgres application_name=sub1 port=5432' PUBLICATION pub1;\"\npsql -p 7777 -c \"CREATE SUBSCRIPTION sub2 CONNECTION 'host=localhost\ndbname=postgres application_name=sub2 port=5432' PUBLICATION pub2;\"\n\necho\necho \"PUB:\"\npsql -c \"insert into test default values;\"\npsql -c \"insert into test default values;\"\npsql -c \"select * from test;\"\npsql -c \"select * from test_i_seq;\"\n\nsleep 1\n\necho\necho \"SUB:\"\npsql -p 7777 -c \"select * from test;\"\npsql -p 7777 -c \"select * from test_i_seq;\"\n\npsql -p 7777 -c \"drop subscription sub1 ;\"\npsql -p 7777 -c \"drop subscription sub2 ;\"\n\npsql -p 7777 -c \"insert into test default values;\"\npsql -p 7777 -c \"select * from test;\"\npsql -p 7777 -c \"select * from test_i_seq;\"\n\nThe last two queries on the subscriber show:\n\n i\n---\n 1\n 2\n 3\n(3 rows)\n\n last_value | log_cnt | is_called\n------------+---------+-----------\n 3 | 30 | t\n(1 row)\n\n...whereas before with 0001-0003 I saw:\n\n i\n----\n 1\n 2\n 34\n(3 rows)\n\n last_value | log_cnt | is_called\n------------+---------+-----------\n 34 | 32 | t\n\n> > The documentation for CREATE PUBLICATION mentions 
sequence options,\n> > but doesn't really say how these options should be used.\n> Good point. The idea is that we handle tables and sequences the same\n> way, i.e. if you specify 'sequence' then we'll replicate increments for\n> sequences explicitly added to the publication.\n>\n> If this is not clear, the docs may need some improvements.\n\nAside from docs, I'm not clear what some of the tests are doing:\n\n+CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH (publish\n= 'sequence');\n+RESET client_min_messages;\n+ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert,\nsequence');\n\nWhat does it mean to add 'insert' to a sequence publication?\n\nLikewise, from a brief change in my test above, 'sequence' seems to be a\nnoise word for table publications. I'm not fully read up on the background\nof this topic, but wanted to make sure I understood the design of the\nsyntax.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 17 Mar 2023 12:53:46 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 7:00 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n>\n> On 3/10/23 11:03, John Naylor wrote:\n\n> > + * When we're called via the SQL SRF there's already a transaction\n> >\n> > I see this was copied from existing code, but I found it confusing --\n> > does this function have a stable name?\n>\n> What do you mean by \"stable name\"? It certainly is not exposed as a\n> user-callable SQL function, so I think this comment it misleading and\n> should be removed.\n\nOkay, I was just trying to think of why it was phrased this way...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Mar 15, 2023 at 7:00 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:>> On 3/10/23 11:03, John Naylor wrote:> > + * When we're called via the SQL SRF there's already a transaction> >> > I see this was copied from existing code, but I found it confusing --> > does this function have a stable name?>> What do you mean by \"stable name\"? It certainly is not exposed as a> user-callable SQL function, so I think this comment it misleading and> should be removed.Okay, I was just trying to think of why it was phrased this way...-- John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 17 Mar 2023 12:54:30 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, 16 Mar 2023 at 21:55, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi!\n>\n> On 3/16/23 08:38, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > On Wed, Mar 15, 2023 at 9:52 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >>\n> >>\n> >> On 3/14/23 08:30, John Naylor wrote:\n> >>> I tried a couple toy examples with various combinations of use styles.\n> >>>\n> >>> Three with \"automatic\" reading from sequences:\n> >>>\n> >>> create table test(i serial);\n> >>> create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n> >>> create table test(i int default nextval('s1'));\n> >>>\n> >>> ...where s1 has some non-default parameters:\n> >>>\n> >>> CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n> >>>\n> >>> ...and then two with explicit use of s1, one inserting the 'nextval'\n> >>> into a table with no default, and one with no table at all, just\n> >>> selecting from the sequence.\n> >>>\n> >>> The last two seem to work similarly to the first three, so it seems like\n> >>> FOR ALL TABLES adds all sequences as well. Is that expected?\n> >>\n> >> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n> >> the sequence is actually added to the publication. I tracked this down\n> >> to a thinko in get_rel_sync_entry() which failed to check the object\n> >> type when puballtables or puballsequences was set.\n> >>\n> >> Attached is a patch fixing this.\n> >>\n> >>> The documentation for CREATE PUBLICATION mentions sequence options,\n> >>> but doesn't really say how these options should be used.\n> >> Good point. The idea is that we handle tables and sequences the same\n> >> way, i.e. 
if you specify 'sequence' then we'll replicate increments for\n> >> sequences explicitly added to the publication.\n> >>\n> >> If this is not clear, the docs may need some improvements.\n> >>\n> >\n> > I'm late to this thread, but I have some questions and review comments.\n> >\n> > Regarding sequence logical replication, it seems that changes of\n> > sequence created after CREATE SUBSCRIPTION are applied on the\n> > subscriber even without REFRESH PUBLICATION command on the subscriber.\n> > Which is a different behavior than tables. For example, I set both\n> > publisher and subscriber as follows:\n> >\n> > 1. On publisher\n> > create publication test_pub for all sequences;\n> >\n> > 2. On subscriber\n> > create subscription test_sub connection 'dbname=postgres port=5551'\n> > publication test_pub; -- port=5551 is the publisher\n> >\n> > 3. On publisher\n> > create sequence s1;\n> > select nextval('s1');\n> >\n> > I got the error \"ERROR: relation \"public.s1\" does not exist on the\n> > subscriber\". Probably we need to do should_apply_changes_for_rel()\n> > check in apply_handle_sequence().\n> >\n>\n> Yes, you're right - the sequence handling should have been calling the\n> should_apply_changes_for_rel() etc.\n>\n> The attached 0005 patch should fix that - I still need to test it a bit\n> more and maybe clean it up a bit, but hopefully it'll allow you to\n> continue the review.\n>\n> I had to tweak the protocol a bit, so that this uses the same cache as\n> tables. I wonder if maybe we should make it even more similar, by\n> essentially treating sequences as tables with (last_value, log_cnt,\n> called) columns.\n>\n> > If my understanding is correct, is there any case where the subscriber\n> > needs to apply transactional sequence changes? The commit message of\n> > 0001 patch says:\n> >\n> > * Changes for sequences created in the same top-level transaction are\n> > treated as transactional, i.e. 
just like any other change from that\n> > transaction, and discarded in case of a rollback.\n> >\n> > IIUC such sequences are not visible to the subscriber, so it cannot\n> > subscribe to them until the commit.\n> >\n>\n> The comment is slightly misleading, as it talks about creation of\n> sequences, but it should be talking about relfilenodes. For example, if\n> you create a sequence, add it to publication, and then in a later\n> transaction you do\n>\n> ALTER SEQUENCE x RESTART\n>\n> or something else that creates a new relfilenode, then the subsequent\n> increments are visible only in that transaction. But we still need to\n> apply those on the subscriber, but only as part of the transaction,\n> because it might roll back.\n>\n> > ---\n> > I got an assertion failure. The reproducible steps are:\n> >\n>\n> I do believe this was due to a thinko in apply_handle_sequence, which\n> sometimes started transaction and didn't terminate it correctly. I've\n> changed it to use the begin_replication_step() etc. and it seems to be\n> working fine now.\n\nA few comments:\n1) One of the tests is failing for me; I had also seen the same failure\nin CFBOT at [1]:\n# Failed test 'create sequence, advance it in rolled-back\ntransaction, but commit the create'\n# at t/030_sequences.pl line 152.\n# got: '1|0|f'\n# expected: '132|0|t'\nt/030_sequences.pl ................. 5/? 
?\n# Failed test 'advance the new sequence in a transaction and roll it back'\n# at t/030_sequences.pl line 175.\n# got: '1|0|f'\n# expected: '231|0|t'\n\n# Failed test 'advance sequence in a subtransaction'\n# at t/030_sequences.pl line 198.\n# got: '1|0|f'\n# expected: '330|0|t'\n# Looks like you failed 3 tests of 6.\n\n2) We could replace the below:\n$node_publisher->wait_for_catchup('seq_sub');\n\n# Wait for initial sync to finish as well\nmy $synced_query =\n \"SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT\nIN ('s', 'r');\";\n$node_subscriber->poll_query_until('postgres', $synced_query)\n or die \"Timed out while waiting for subscriber to synchronize data\";\n\nwith:\n$node_subscriber->wait_for_subscription_sync;\n\n3) We could change 030_sequences to 033_sequences.pl as 030 is already used:\ndiff --git a/src/test/subscription/t/030_sequences.pl\nb/src/test/subscription/t/030_sequences.pl\nnew file mode 100644\nindex 00000000000..9ae3c03d7d1\n--- /dev/null\n+++ b/src/test/subscription/t/030_sequences.pl\n\n4) Copyright year should be changed to 2023:\n@@ -0,0 +1,202 @@\n+\n+# Copyright (c) 2021, PostgreSQL Global Development Group\n+\n+# This tests that sequences are replicated correctly by logical replication\n+use strict;\n+use warnings;\n\n[1] - https://cirrus-ci.com/task/5032679352041472\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 17 Mar 2023 14:21:56 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/17/23 06:53, John Naylor wrote:\n> On Wed, Mar 15, 2023 at 7:51 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com <mailto:tomas.vondra@enterprisedb.com>>\n> wrote:\n>>\n>>\n>>\n>> On 3/14/23 08:30, John Naylor wrote:\n>> > I tried a couple toy examples with various combinations of use styles.\n>> >\n>> > Three with \"automatic\" reading from sequences:\n>> >\n>> > create table test(i serial);\n>> > create table test(i int GENERATED BY DEFAULT AS IDENTITY);\n>> > create table test(i int default nextval('s1'));\n>> >\n>> > ...where s1 has some non-default parameters:\n>> >\n>> > CREATE SEQUENCE s1 START 100 MAXVALUE 100 INCREMENT BY -1;\n>> >\n>> > ...and then two with explicit use of s1, one inserting the 'nextval'\n>> > into a table with no default, and one with no table at all, just\n>> > selecting from the sequence.\n>> >\n>> > The last two seem to work similarly to the first three, so it seems like\n>> > FOR ALL TABLES adds all sequences as well. Is that expected?\n>>\n>> Yeah, that's a bug - we shouldn't replicate the sequence changes, unless\n>> the sequence is actually added to the publication. I tracked this down\n>> to a thinko in get_rel_sync_entry() which failed to check the object\n>> type when puballtables or puballsequences was set.\n>>\n>> Attached is a patch fixing this.\n> \n> Okay, I can verify that with 0001-0006, sequences don't replicate unless\n> specified. 
I do see an additional change that doesn't make sense: On the\n> subscriber I no longer see a jump to the logged 32 increment, I see the\n> very next value:\n> \n> # alter system set wal_level='logical';\n> # port 7777 is subscriber\n> \n> echo\n> echo \"PUB:\"\n> psql -c \"drop table if exists test;\"\n> psql -c \"drop publication if exists pub1;\"\n> \n> echo\n> echo \"SUB:\"\n> psql -p 7777 -c \"drop table if exists test;\"\n> psql -p 7777 -c \"drop subscription if exists sub1 ;\"\n> \n> echo\n> echo \"PUB:\"\n> psql -c \"create table test(i int GENERATED BY DEFAULT AS IDENTITY);\"\n> psql -c \"CREATE PUBLICATION pub1 FOR ALL TABLES;\"\n> psql -c \"CREATE PUBLICATION pub2 FOR ALL SEQUENCES;\"\n> \n> echo\n> echo \"SUB:\"\n> psql -p 7777 -c \"create table test(i int GENERATED BY DEFAULT AS IDENTITY);\"\n> psql -p 7777 -c \"CREATE SUBSCRIPTION sub1 CONNECTION 'host=localhost\n> dbname=postgres application_name=sub1 port=5432' PUBLICATION pub1;\"\n> psql -p 7777 -c \"CREATE SUBSCRIPTION sub2 CONNECTION 'host=localhost\n> dbname=postgres application_name=sub2 port=5432' PUBLICATION pub2;\"\n> \n> echo\n> echo \"PUB:\"\n> psql -c \"insert into test default values;\"\n> psql -c \"insert into test default values;\"\n> psql -c \"select * from test;\"\n> psql -c \"select * from test_i_seq;\"\n> \n> sleep 1\n> \n> echo\n> echo \"SUB:\"\n> psql -p 7777 -c \"select * from test;\"\n> psql -p 7777 -c \"select * from test_i_seq;\"\n> \n> psql -p 7777 -c \"drop subscription sub1 ;\"\n> psql -p 7777 -c \"drop subscription sub2 ;\"\n> \n> psql -p 7777 -c \"insert into test default values;\"\n> psql -p 7777 -c \"select * from test;\"\n> psql -p 7777 -c \"select * from test_i_seq;\"\n> \n> The last two queries on the subscriber show:\n> \n> i\n> ---\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 3 | 30 | t\n> (1 row)\n> \n> ...whereas before with 0001-0003 I saw:\n> \n> i \n> ----\n> 1\n> 2\n> 34\n> (3 rows)\n> \n> 
last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 34 | 32 | t\n> \n\nOh, this is a silly thinko in how sequences are synced at the beginning\n(or maybe a combination of two issues).\n\nfetch_sequence_data() simply runs a select from the sequence\n\n SELECT last_value, log_cnt, is_called\n\nbut that's wrong, because that's the *current* state of the sequence, at\nthe moment it's initially synced. To make this \"correct\" with respect\nto the decoding, we'd need to deduce what was the last WAL record, so\nsomething like\n\n last_value += log_cnt + 1\n\nThat should produce 34 again.\n\nFWIW the older patch has this issue too, I believe the difference is\nmerely due to a slightly different timing between the sync and decoding\nthe first insert. If you insert a sleep after the CREATE SUBSCRIPTION\ncommands, it should disappear.\n\n\nThis however made me realize the initial sync of sequences may not be\ncorrect. I mean, the idea of tablesync is syncing the data in REPEATABLE\nREAD transaction, and then applying decoded changes. But sequences are\nnot transactional in this way - if you select from a sequence, you'll\nalways see the latest data, even in REPEATABLE READ.\n\nI wonder if this might result in losing some of the sequence increments,\nand/or applying them in the wrong order (so that the sequence goes\nbackward for a while).\n\n\n>> > The documentation for CREATE PUBLICATION mentions sequence options,\n>> > but doesn't really say how these options should be used.\n>> Good point. The idea is that we handle tables and sequences the same\n>> way, i.e. 
if you specify 'sequence' then we'll replicate increments for\n>> sequences explicitly added to the publication.\n>>\n>> If this is not clear, the docs may need some improvements.\n> \n> Aside from docs, I'm not clear what some of the tests are doing:\n> \n> +CREATE PUBLICATION testpub_forallsequences FOR ALL SEQUENCES WITH\n> (publish = 'sequence');\n> +RESET client_min_messages;\n> +ALTER PUBLICATION testpub_forallsequences SET (publish = 'insert,\n> sequence');\n> \n> What does it mean to add 'insert' to a sequence publication?\n> \n\nI don't recall why this particular test exists, but you can still add\ntables to \"for all sequences\" publication. IMO it's fine to allow adding\nactions that are irrelevant for currently published objects, we don't\nhave a cross-check to prevent that (how would you even do that e.g. for\nFOR ALL TABLES publications?).\n\n> Likewise, from a brief change in my test above, 'sequence' seems to be a\n> noise word for table publications. I'm not fully read up on the\n> background of this topic, but wanted to make sure I understood the\n> design of the syntax.\n> \n\nI think it's fine, for the same reason as above.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Mar 2023 18:55:23 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/17/23 18:55, Tomas Vondra wrote:\n> \n> ...\n> \n> This however made me realize the initial sync of sequences may not be\n> correct. I mean, the idea of tablesync is syncing the data in REPEATABLE\n> READ transaction, and then applying decoded changes. But sequences are\n> not transactional in this way - if you select from a sequence, you'll\n> always see the latest data, even in REPEATABLE READ.\n> \n> I wonder if this might result in losing some of the sequence increments,\n> and/or applying them in the wrong order (so that the sequence goes\n> backward for a while).\n> \n\nYeah, I think my suspicion was warranted - it's pretty easy to make the\nsequence go backwards for a while by adding a sleep between the slot\ncreation and the copy_sequence() call, and incrementing the sequence in\nbetween (enough to do some WAL logging).\n\nThe copy_sequence() then reads the current on-disk state (because of the\nnon-transactional nature w.r.t. REPEATABLE READ), applies it, and then\nwe start processing the WAL added since the slot creation. 
But those are\nolder, so stuff like this happens:\n\n 21:52:54.147 CET [35404] WARNING: copy_sequence 1222 0 1\n 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 990 0 1\n 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 1023 0 1\n 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 1056 0 1\n 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1089 0 1\n 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1122 0 1\n 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1155 0 1\n 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1188 0 1\n 21:52:54.175 CET [35404] WARNING: apply_handle_sequence 1221 0 1\n 21:52:54.898 CET [35402] WARNING: apply_handle_sequence 1254 0 1\n\nClearly, for sequences we can't quite rely on snapshots/slots, we need\nto get the LSN to decide what changes to apply/skip from somewhere else.\nI wonder if we can just ignore the queued changes in tablesync, but I\nguess not - there can be queued increments after reading the sequence\nstate, and we need to apply those. But maybe we could use the page LSN\nfrom the relfilenode - that should be the LSN of the last WAL record.\n\nOr maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\nuse to read the sequence state ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 17 Mar 2023 22:43:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/17/23 18:55, Tomas Vondra wrote:\n> >\n> > ...\n> >\n> > This however made me realize the initial sync of sequences may not be\n> > correct. I mean, the idea of tablesync is syncing the data in REPEATABLE\n> > READ transaction, and then applying decoded changes. But sequences are\n> > not transactional in this way - if you select from a sequence, you'll\n> > always see the latest data, even in REPEATABLE READ.\n> >\n> > I wonder if this might result in losing some of the sequence increments,\n> > and/or applying them in the wrong order (so that the sequence goes\n> > backward for a while).\n> >\n>\n> Yeah, I think my suspicion was warranted - it's pretty easy to make the\n> sequence go backwards for a while by adding a sleep between the slot\n> creation and the copy_sequence() call, and increment the sequence in\n> between (enough to do some WAL logging).\n>\n> The copy_sequence() then reads the current on-disk state (because of the\n> non-transactional nature w.r.t. REPEATABLE READ), applies it, and then\n> we start processing the WAL added since the slot creation. 
But those are\n> older, so stuff like this happens:\n>\n> 21:52:54.147 CET [35404] WARNING: copy_sequence 1222 0 1\n> 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 990 0 1\n> 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 1023 0 1\n> 21:52:54.163 CET [35404] WARNING: apply_handle_sequence 1056 0 1\n> 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1089 0 1\n> 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1122 0 1\n> 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1155 0 1\n> 21:52:54.174 CET [35404] WARNING: apply_handle_sequence 1188 0 1\n> 21:52:54.175 CET [35404] WARNING: apply_handle_sequence 1221 0 1\n> 21:52:54.898 CET [35402] WARNING: apply_handle_sequence 1254 0 1\n>\n> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n> to get the LSN to decide what changes to apply/skip from somewhere else.\n> I wonder if we can just ignore the queued changes in tablesync, but I\n> guess not - there can be queued increments after reading the sequence\n> state, and we need to apply those. But maybe we could use the page LSN\n> from the relfilenode - that should be the LSN of the last WAL record.\n>\n> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n> use to read the sequence state ...\n>\n\nWhat if some Alter Sequence is performed before the copy starts and\nafter the copy is finished, the containing transaction rolled back?\nWon't it copy something which shouldn't have been copied?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 18 Mar 2023 11:05:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/18/23 06:35, Amit Kapila wrote:\n> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> ...\n>>\n>> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n>> to get the LSN to decide what changes to apply/skip from somewhere else.\n>> I wonder if we can just ignore the queued changes in tablesync, but I\n>> guess not - there can be queued increments after reading the sequence\n>> state, and we need to apply those. But maybe we could use the page LSN\n>> from the relfilenode - that should be the LSN of the last WAL record.\n>>\n>> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n>> use to read the sequence state ...\n>>\n> \n> What if some Alter Sequence is performed before the copy starts and\n> after the copy is finished, the containing transaction rolled back?\n> Won't it copy something which shouldn't have been copied?\n> \n\nThat shouldn't be possible - the alter creates a new relfilenode and\nit's invisible until commit. So either it gets committed (and then\nreplicated), or it remains invisible to the SELECT during sync.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Mar 2023 16:19:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/18/23 06:35, Amit Kapila wrote:\n> > On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> ...\n> >>\n> >> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n> >> to get the LSN to decide what changes to apply/skip from somewhere else.\n> >> I wonder if we can just ignore the queued changes in tablesync, but I\n> >> guess not - there can be queued increments after reading the sequence\n> >> state, and we need to apply those. But maybe we could use the page LSN\n> >> from the relfilenode - that should be the LSN of the last WAL record.\n> >>\n> >> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n> >> use to read the sequence state ...\n> >>\n> >\n> > What if some Alter Sequence is performed before the copy starts and\n> > after the copy is finished, the containing transaction rolled back?\n> > Won't it copy something which shouldn't have been copied?\n> >\n>\n> That shouldn't be possible - the alter creates a new relfilenode and\n> it's invisible until commit. So either it gets committed (and then\n> replicated), or it remains invisible to the SELECT during sync.\n>\n\nOkay, however, we need to ensure that such a change will later be\nreplicated and also need to ensure that the required WAL doesn't get\nremoved.\n\nSay, if we use your first idea of page LSN from the relfilenode, then\nhow do we ensure that the corresponding WAL doesn't get removed when\nlater the sync worker tries to start replication from that LSN? I am\nimagining here the sync_sequence_slot will be created before\ncopy_sequence but even then it is possible that the sequence has not\nbeen updated for a long time and the LSN location will be in the past\n(as compared to the slot's LSN) which means the corresponding WAL\ncould be removed. 
Now, here we can't directly start using the slot's\nLSN to stream changes because there is no correlation of it with the\nLSN (page LSN of sequence's relfilenode) where we want to start\nstreaming.\n\nNow, for the second idea which is to directly use\npg_current_wal_insert_lsn(), I think we won't be able to ensure that\nthe changes covered by in-progress transactions like the one with\nAlter Sequence I have given example would be streamed later after the\ninitial copy. Because the LSN returned by pg_current_wal_insert_lsn()\ncould be an LSN after the LSN associated with Alter Sequence but\nbefore the corresponding xact's commit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:12:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/20/23 04:42, Amit Kapila wrote:\n> On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/18/23 06:35, Amit Kapila wrote:\n>>> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> ...\n>>>>\n>>>> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n>>>> to get the LSN to decide what changes to apply/skip from somewhere else.\n>>>> I wonder if we can just ignore the queued changes in tablesync, but I\n>>>> guess not - there can be queued increments after reading the sequence\n>>>> state, and we need to apply those. But maybe we could use the page LSN\n>>>> from the relfilenode - that should be the LSN of the last WAL record.\n>>>>\n>>>> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n>>>> use to read the sequence state ...\n>>>>\n>>>\n>>> What if some Alter Sequence is performed before the copy starts and\n>>> after the copy is finished, the containing transaction rolled back?\n>>> Won't it copy something which shouldn't have been copied?\n>>>\n>>\n>> That shouldn't be possible - the alter creates a new relfilenode and\n>> it's invisible until commit. So either it gets committed (and then\n>> replicated), or it remains invisible to the SELECT during sync.\n>>\n> \n> Okay, however, we need to ensure that such a change will later be\n> replicated and also need to ensure that the required WAL doesn't get\n> removed.\n> \n> Say, if we use your first idea of page LSN from the relfilenode, then\n> how do we ensure that the corresponding WAL doesn't get removed when\n> later the sync worker tries to start replication from that LSN? 
I am\n> imagining here the sync_sequence_slot will be created before\n> copy_sequence but even then it is possible that the sequence has not\n> been updated for a long time and the LSN location will be in the past\n> (as compared to the slot's LSN) which means the corresponding WAL\n> could be removed. Now, here we can't directly start using the slot's\n> LSN to stream changes because there is no correlation of it with the\n> LSN (page LSN of sequence's relfilenode) where we want to start\n> streaming.\n> \n\nI don't understand why we'd need WAL from before the slot is created,\nwhich happens before copy_sequence so the sync will see a more recent\nstate (reflecting all changes up to the slot LSN).\n\nI think the only \"issue\" is the WAL records after the slot LSN, or more\nprecisely deciding which of the decoded changes to apply.\n\n\n> Now, for the second idea which is to directly use\n> pg_current_wal_insert_lsn(), I think we won't be able to ensure that\n> the changes covered by in-progress transactions like the one with\n> Alter Sequence I have given example would be streamed later after the\n> initial copy. Because the LSN returned by pg_current_wal_insert_lsn()\n> could be an LSN after the LSN associated with Alter Sequence but\n> before the corresponding xact's commit.\n\nYeah, I think you're right - the locking itself is not sufficient to\nprevent this ordering of operations. copy_sequence would have to lock\nthe sequence exclusively, which seems a bit disruptive.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:19:41 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n> On 3/20/23 04:42, Amit Kapila wrote:\n> > On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 3/18/23 06:35, Amit Kapila wrote:\n> >>> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n> >>> <tomas.vondra@enterprisedb.com> wrote:\n> >>>>\n> >>>> ...\n> >>>>\n> >>>> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n> >>>> to get the LSN to decide what changes to apply/skip from somewhere else.\n> >>>> I wonder if we can just ignore the queued changes in tablesync, but I\n> >>>> guess not - there can be queued increments after reading the sequence\n> >>>> state, and we need to apply those. But maybe we could use the page LSN\n> >>>> from the relfilenode - that should be the LSN of the last WAL record.\n> >>>>\n> >>>> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n> >>>> use to read the sequence state ...\n> >>>>\n> >>>\n> >>> What if some Alter Sequence is performed before the copy starts and\n> >>> after the copy is finished, the containing transaction rolled back?\n> >>> Won't it copy something which shouldn't have been copied?\n> >>>\n> >>\n> >> That shouldn't be possible - the alter creates a new relfilenode and\n> >> it's invisible until commit. So either it gets committed (and then\n> >> replicated), or it remains invisible to the SELECT during sync.\n> >>\n> >\n> > Okay, however, we need to ensure that such a change will later be\n> > replicated and also need to ensure that the required WAL doesn't get\n> > removed.\n> >\n> > Say, if we use your first idea of page LSN from the relfilenode, then\n> > how do we ensure that the corresponding WAL doesn't get removed when\n> > later the sync worker tries to start replication from that LSN? 
I am\n> > imagining here the sync_sequence_slot will be created before\n> > copy_sequence but even then it is possible that the sequence has not\n> > been updated for a long time and the LSN location will be in the past\n> > (as compared to the slot's LSN) which means the corresponding WAL\n> > could be removed. Now, here we can't directly start using the slot's\n> > LSN to stream changes because there is no correlation of it with the\n> > LSN (page LSN of sequence's relfilenode) where we want to start\n> > streaming.\n> >\n>\n> I don't understand why we'd need WAL from before the slot is created,\n> which happens before copy_sequence so the sync will see a more recent\n> state (reflecting all changes up to the slot LSN).\n>\n\nImagine the following sequence of events:\n1. Operation on a sequence seq-1 which requires WAL. Say, this is done\nat LSN 1000.\n2. Some other random operations on unrelated objects. This would\nincrease LSN to 2000.\n3. Create a slot that uses current LSN 2000.\n4. Copy sequence seq-1 where you will get the LSN value as 1000. Then\nyou will use LSN 1000 as a starting point to start replication in\nsequence sync worker.\n\nIt is quite possible that WAL from LSN 1000 may not be present. Now,\nit may be possible that we use the slot's LSN in this case but\ncurrently, it may not be possible without some changes in the slot\nmachinery. Even if we somehow solve this, we have the below problem\nwhere we can miss some concurrent activity.\n\n> I think the only \"issue\" are the WAL records after the slot LSN, or more\n> precisely deciding which of the decoded changes to apply.\n>\n>\n> > Now, for the second idea which is to directly use\n> > pg_current_wal_insert_lsn(), I think we won't be able to ensure that\n> > the changes covered by in-progress transactions like the one with\n> > Alter Sequence I have given example would be streamed later after the\n> > initial copy. 
Because the LSN returned by pg_current_wal_insert_lsn()\n> > could be an LSN after the LSN associated with Alter Sequence but\n> > before the corresponding xact's commit.\n>\n> Yeah, I think you're right - the locking itself is not sufficient to\n> prevent this ordering of operations. copy_sequence would have to lock\n> the sequence exclusively, which seems bit disruptive.\n>\n\nRight, that doesn't sound like a good idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:30:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/20/23 12:00, Amit Kapila wrote:\n> On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>> On 3/20/23 04:42, Amit Kapila wrote:\n>>> On Sat, Mar 18, 2023 at 8:49 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 3/18/23 06:35, Amit Kapila wrote:\n>>>>> On Sat, Mar 18, 2023 at 3:13 AM Tomas Vondra\n>>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>>\n>>>>>> ...\n>>>>>>\n>>>>>> Clearly, for sequences we can't quite rely on snapshots/slots, we need\n>>>>>> to get the LSN to decide what changes to apply/skip from somewhere else.\n>>>>>> I wonder if we can just ignore the queued changes in tablesync, but I\n>>>>>> guess not - there can be queued increments after reading the sequence\n>>>>>> state, and we need to apply those. But maybe we could use the page LSN\n>>>>>> from the relfilenode - that should be the LSN of the last WAL record.\n>>>>>>\n>>>>>> Or maybe we could simply add pg_current_wal_insert_lsn() into the SQL we\n>>>>>> use to read the sequence state ...\n>>>>>>\n>>>>>\n>>>>> What if some Alter Sequence is performed before the copy starts and\n>>>>> after the copy is finished, the containing transaction rolled back?\n>>>>> Won't it copy something which shouldn't have been copied?\n>>>>>\n>>>>\n>>>> That shouldn't be possible - the alter creates a new relfilenode and\n>>>> it's invisible until commit. So either it gets committed (and then\n>>>> replicated), or it remains invisible to the SELECT during sync.\n>>>>\n>>>\n>>> Okay, however, we need to ensure that such a change will later be\n>>> replicated and also need to ensure that the required WAL doesn't get\n>>> removed.\n>>>\n>>> Say, if we use your first idea of page LSN from the relfilenode, then\n>>> how do we ensure that the corresponding WAL doesn't get removed when\n>>> later the sync worker tries to start replication from that LSN? 
I am\n>>> imagining here the sync_sequence_slot will be created before\n>>> copy_sequence but even then it is possible that the sequence has not\n>>> been updated for a long time and the LSN location will be in the past\n>>> (as compared to the slot's LSN) which means the corresponding WAL\n>>> could be removed. Now, here we can't directly start using the slot's\n>>> LSN to stream changes because there is no correlation of it with the\n>>> LSN (page LSN of sequence's relfilnode) where we want to start\n>>> streaming.\n>>>\n>>\n>> I don't understand why we'd need WAL from before the slot is created,\n>> which happens before copy_sequence so the sync will see a more recent\n>> state (reflecting all changes up to the slot LSN).\n>>\n> \n> Imagine the following sequence of events:\n> 1. Operation on a sequence seq-1 which requires WAL. Say, this is done\n> at LSN 1000.\n> 2. Some other random operations on unrelated objects. This would\n> increase LSN to 2000.\n> 3. Create a slot that uses current LSN 2000.\n> 4. Copy sequence seq-1 where you will get the LSN value as 1000. Then\n> you will use LSN 1000 as a starting point to start replication in\n> sequence sync worker.\n> \n> It is quite possible that WAL from LSN 1000 may not be present. Now,\n> it may be possible that we use the slot's LSN in this case but\n> currently, it may not be possible without some changes in the slot\n> machinery. Even, if we somehow solve this, we have the below problem\n> where we can miss some concurrent activity.\n> \n\nI think the question is what would be the WAL-requiring operation at LSN\n1000. If it's just regular nextval(), then we *will* see it during\ncopy_sequence - sequences are not transactional in the MVCC sense.\n\nIf it's an ALTER SEQUENCE, I guess it might create a new relfilenode,\nand then we might fail to apply this - that'd be bad.\n\nI wonder if we'd allow actually discarding the WAL while building the\nconsistent snapshot, though. 
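FWIW, to make the LSN-cutoff idea from earlier in the thread concrete, the copy query might look roughly like this - just a sketch of the idea, not actual patch code (s1 stands in for whatever sequence is being synced):

```sql
-- Hypothetical variant of the copy_sequence query: read the sequence
-- state together with a cutoff LSN in a single round trip. Decoded
-- sequence changes with an LSN at or below copy_lsn could then be
-- skipped by the sync worker.
SELECT last_value, log_cnt, is_called,
       pg_current_wal_insert_lsn() AS copy_lsn
FROM s1;
```

Even then, the returned LSN can fall inside a still-uncommitted ALTER SEQUENCE transaction, so it is not race-free on its own.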
You're however right we can't just decide\nthis based on LSN, we'd probably need to compare the relfilenodes too or\nsomething like that ...\n\n>> I think the only \"issue\" are the WAL records after the slot LSN, or more\n>> precisely deciding which of the decoded changes to apply.\n>>\n>>\n>>> Now, for the second idea which is to directly use\n>>> pg_current_wal_insert_lsn(), I think we won't be able to ensure that\n>>> the changes covered by in-progress transactions like the one with\n>>> Alter Sequence I have given example would be streamed later after the\n>>> initial copy. Because the LSN returned by pg_current_wal_insert_lsn()\n>>> could be an LSN after the LSN associated with Alter Sequence but\n>>> before the corresponding xact's commit.\n>>\n>> Yeah, I think you're right - the locking itself is not sufficient to\n>> prevent this ordering of operations. copy_sequence would have to lock\n>> the sequence exclusively, which seems bit disruptive.\n>>\n> \n> Right, that doesn't sound like a good idea.\n> \n\nAlthough, maybe we could use a less strict lock level? I mean, one that\nallows nextval() to continue, but would conflict with ALTER SEQUENCE.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 12:43:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Mar 20, 2023 at 5:13 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/20/23 12:00, Amit Kapila wrote:\n> > On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >>\n> >> I don't understand why we'd need WAL from before the slot is created,\n> >> which happens before copy_sequence so the sync will see a more recent\n> >> state (reflecting all changes up to the slot LSN).\n> >>\n> >\n> > Imagine the following sequence of events:\n> > 1. Operation on a sequence seq-1 which requires WAL. Say, this is done\n> > at LSN 1000.\n> > 2. Some other random operations on unrelated objects. This would\n> > increase LSN to 2000.\n> > 3. Create a slot that uses current LSN 2000.\n> > 4. Copy sequence seq-1 where you will get the LSN value as 1000. Then\n> > you will use LSN 1000 as a starting point to start replication in\n> > sequence sync worker.\n> >\n> > It is quite possible that WAL from LSN 1000 may not be present. Now,\n> > it may be possible that we use the slot's LSN in this case but\n> > currently, it may not be possible without some changes in the slot\n> > machinery. Even, if we somehow solve this, we have the below problem\n> > where we can miss some concurrent activity.\n> >\n>\n> I think the question is what would be the WAL-requiring operation at LSN\n> 1000. 
If it's just regular nextval(), then we *will* see it during\n> copy_sequence - sequences are not transactional in the MVCC sense.\n>\n> If it's an ALTER SEQUENCE, I guess it might create a new relfilenode,\n> and then we might fail to apply this - that'd be bad.\n>\n> I wonder if we'd allow actually discarding the WAL while building the\n> consistent snapshot, though.\n>\n\nNo, as soon as we reserve the WAL location, we update the slot's\nminLSN (replicationSlotMinLSN) which would prevent the required WAL\nfrom being removed.\n\n> You're however right we can't just decide\n> this based on LSN, we'd probably need to compare the relfilenodes too or\n> something like that ...\n>\n> >> I think the only \"issue\" are the WAL records after the slot LSN, or more\n> >> precisely deciding which of the decoded changes to apply.\n> >>\n> >>\n> >>> Now, for the second idea which is to directly use\n> >>> pg_current_wal_insert_lsn(), I think we won't be able to ensure that\n> >>> the changes covered by in-progress transactions like the one with\n> >>> Alter Sequence I have given example would be streamed later after the\n> >>> initial copy. Because the LSN returned by pg_current_wal_insert_lsn()\n> >>> could be an LSN after the LSN associated with Alter Sequence but\n> >>> before the corresponding xact's commit.\n> >>\n> >> Yeah, I think you're right - the locking itself is not sufficient to\n> >> prevent this ordering of operations. copy_sequence would have to lock\n> >> the sequence exclusively, which seems bit disruptive.\n> >>\n> >\n> > Right, that doesn't sound like a good idea.\n> >\n>\n> Although, maybe we could use a less strict lock level? 
I mean, one that\n> allows nextval() to continue, but would conflict with ALTER SEQUENCE.\n>\n\nI don't know if that is a good idea, but are you imagining a special\ninterface/mechanism just for logical replication? Because as far as I\ncan see, you have used SELECT to fetch the sequence values.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 17:56:16 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/20/23 13:26, Amit Kapila wrote:\n> On Mon, Mar 20, 2023 at 5:13 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/20/23 12:00, Amit Kapila wrote:\n>>> On Mon, Mar 20, 2023 at 1:49 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>>\n>>>> I don't understand why we'd need WAL from before the slot is created,\n>>>> which happens before copy_sequence so the sync will see a more recent\n>>>> state (reflecting all changes up to the slot LSN).\n>>>>\n>>>\n>>> Imagine the following sequence of events:\n>>> 1. Operation on a sequence seq-1 which requires WAL. Say, this is done\n>>> at LSN 1000.\n>>> 2. Some other random operations on unrelated objects. This would\n>>> increase LSN to 2000.\n>>> 3. Create a slot that uses current LSN 2000.\n>>> 4. Copy sequence seq-1 where you will get the LSN value as 1000. Then\n>>> you will use LSN 1000 as a starting point to start replication in\n>>> sequence sync worker.\n>>>\n>>> It is quite possible that WAL from LSN 1000 may not be present. Now,\n>>> it may be possible that we use the slot's LSN in this case but\n>>> currently, it may not be possible without some changes in the slot\n>>> machinery. Even, if we somehow solve this, we have the below problem\n>>> where we can miss some concurrent activity.\n>>>\n>>\n>> I think the question is what would be the WAL-requiring operation at LSN\n>> 1000. 
If it's just regular nextval(), then we *will* see it during\n>> copy_sequence - sequences are not transactional in the MVCC sense.\n>>\n>> If it's an ALTER SEQUENCE, I guess it might create a new relfilenode,\n>> and then we might fail to apply this - that'd be bad.\n>>\n>> I wonder if we'd allow actually discarding the WAL while building the\n>> consistent snapshot, though.\n>>\n> \n> No, as soon as we reserve the WAL location, we update the slot's\n> minLSN (replicationSlotMinLSN) which would prevent the required WAL\n> from being removed.\n> \n>> You're however right we can't just decide\n>> this based on LSN, we'd probably need to compare the relfilenodes too or\n>> something like that ...\n>>\n>>>> I think the only \"issue\" are the WAL records after the slot LSN, or more\n>>>> precisely deciding which of the decoded changes to apply.\n>>>>\n>>>>\n>>>>> Now, for the second idea which is to directly use\n>>>>> pg_current_wal_insert_lsn(), I think we won't be able to ensure that\n>>>>> the changes covered by in-progress transactions like the one with\n>>>>> Alter Sequence I have given example would be streamed later after the\n>>>>> initial copy. Because the LSN returned by pg_current_wal_insert_lsn()\n>>>>> could be an LSN after the LSN associated with Alter Sequence but\n>>>>> before the corresponding xact's commit.\n>>>>\n>>>> Yeah, I think you're right - the locking itself is not sufficient to\n>>>> prevent this ordering of operations. copy_sequence would have to lock\n>>>> the sequence exclusively, which seems bit disruptive.\n>>>>\n>>>\n>>> Right, that doesn't sound like a good idea.\n>>>\n>>\n>> Although, maybe we could use a less strict lock level? 
I mean, one that\n>> allows nextval() to continue, but would conflict with ALTER SEQUENCE.\n>>\n> \n> I don't know if that is a good idea but are you imagining a special\n> interface/mechanism just for logical replication because as far as I\n> can see you have used SELECT to fetch the sequence values?\n> \n\nNot sure what the special mechanism would be. I don't think it could\nread the sequence from somewhere else, and due to the lack of MVCC we'd\njust read the same sequence data from the current relfilenode. Or what else\nwould it do?\n\nThe one thing we can't quite do at the moment is locking the sequence,\nbecause LOCK is only supported for tables. So we could either provide a\nfunction to lock a sequence, or one that locks it and then returns the current\nstate (as if we did a SELECT).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 20 Mar 2023 18:03:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/20/23 18:03, Tomas Vondra wrote:\n> \n> ...\n>>\n>> I don't know if that is a good idea but are you imagining a special\n>> interface/mechanism just for logical replication because as far as I\n>> can see you have used SELECT to fetch the sequence values?\n>>\n> \n> Not sure what would the special mechanism be? I don't think it could\n> read the sequence from somewhere else, and due the lack of MVCC we'd\n> just read same sequence data from the current relfilenode. Or what else\n> would it do?\n> \n\nI was thinking about alternative ways to do this, but I couldn't think\nof anything. The non-MVCC behavior of sequences means it's not really\npossible to do this based on snapshots / slots or stuff like that ...\n\n> The one thing we can't quite do at the moment is locking the sequence,\n> because LOCK is only supported for tables. So we could either provide a\n> function to lock a sequence, or locks it and then returns the current\n> state (as if we did a SELECT).\n> \n\n... so I took a stab at doing it like this. I didn't feel relaxing LOCK\nrestrictions to also allow locking sequences would be the right choice,\nso I added a new function pg_sequence_lock_for_sync(). I wonder if we\ncould/should restrict this to logical replication use, somehow.\n\nThe interlock happens right after creating the slot - I was thinking\nabout doing it even before the slot gets created, but that's not\npossible, because that installs a snapshot (so it has to be the first\ncommand in the transaction). It acquires RowExclusiveLock, which is\nenough to conflict with ALTER SEQUENCE, but allows nextval().\n\nAFAICS this does the trick - if there's ALTER SEQUENCE, we'll wait for\nit to complete. And copy_sequence() will read the resulting state, even\nthough this is REPEATABLE READ - remember, sequences are not subject to\nthat consistency.\n\nThe one anomaly I can think of is the sequence might seem to go\n\"backwards\" for a little bit during the sync.
Imagine this sequence of\noperations:\n\n1) tablesync creates slot\n2) S1 does ALTER SEQUENCE ... RESTART WITH 20 (gets lock)\n3) S2 tries ALTER SEQUENCE ... RESTART WITH 100 (waits for lock)\n4) tablesync requests lock\n5) S1 does the thing, commits\n6) S2 acquires lock, does the thing, commits\n7) tablesync gets lock, reads current sequence state\n8) tablesync decodes changes from S1 and S2, applies them\n\nBut I think this is fine - it's part of the catchup, and until that's\ndone the sync is not considered completed.\n\n\nI merged the earlier \"fixup\" patches into the relevant parts, and left\ntwo patches with new tweaks (deducing the correct \"WAL\" state from the\ncurrent state read by copy_sequence), and the interlock discussed here.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 23 Mar 2023 23:25:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nOn Fri, Mar 24, 2023 at 7:26 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I merged the earlier \"fixup\" patches into the relevant parts, and left\n> two patches with new tweaks (deducing the corrent \"WAL\" state from the\n> current state read by copy_sequence), and the interlock discussed here.\n>\n\nApart from that, how does a publication containing sequences work with\nsubscribers that cannot handle sequence changes, e.g. when the\npublisher's PostgreSQL version is newer than the subscriber's? As far\nas I tested the latest patches, the subscriber (v15) errors out with\nthe error 'invalid logical replication message type \"Q\"' when\nreceiving a sequence change. I'm not sure that's sensible behavior. I\nthink we should instead either (1) deny starting the replication if\nthe subscriber cannot handle sequence changes and the publication\nincludes them, or (2) not send sequence changes to such subscribers.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 10:32:27 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/27/23 03:32, Masahiko Sawada wrote:\n> Hi,\n> \n> On Fri, Mar 24, 2023 at 7:26 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I merged the earlier \"fixup\" patches into the relevant parts, and left\n>> two patches with new tweaks (deducing the corrent \"WAL\" state from the\n>> current state read by copy_sequence), and the interlock discussed here.\n>>\n> \n> Apart from that, how does the publication having sequences work with\n> subscribers who are not able to handle sequence changes, e.g. in a\n> case where PostgreSQL version of publication is newer than the\n> subscriber? As far as I tested the latest patches, the subscriber\n> (v15) errors out with the error 'invalid logical replication message\n> type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n> behavior. I think we should instead either (1) deny starting the\n> replication if the subscriber isn't able to handle sequence changes\n> and the publication includes that, or (2) not send sequence changes to\n> such subscribers.\n> \n\nI agree the \"invalid message\" error is not great, but it's not clear to\nme how to do either (1). The trouble is we don't really know if the\npublication contains (or will contain) sequences. I mean, what would\nhappen if the replication starts and then someone adds a sequence?\n\nFor (2), I think that's not something we should do - silently discarding\nsome messages seems error-prone. If the publication includes sequences,\npresumably the user wanted to replicate those. If they want to replicate\nto an older subscriber, create a publication without sequences.\n\nPerhaps the right solution would be to check if the subscriber supports\nreplication of sequences in the output plugin, while attempting to write\nthe \"Q\" message. 
And error-out if the subscriber does not support it.\n\nWhat do you think?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:46:09 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Mar 27, 2023 at 11:46 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 3/27/23 03:32, Masahiko Sawada wrote:\n> > Hi,\n> >\n> > On Fri, Mar 24, 2023 at 7:26 AM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> I merged the earlier \"fixup\" patches into the relevant parts, and left\n> >> two patches with new tweaks (deducing the corrent \"WAL\" state from the\n> >> current state read by copy_sequence), and the interlock discussed here.\n> >>\n> >\n> > Apart from that, how does the publication having sequences work with\n> > subscribers who are not able to handle sequence changes, e.g. in a\n> > case where PostgreSQL version of publication is newer than the\n> > subscriber? As far as I tested the latest patches, the subscriber\n> > (v15) errors out with the error 'invalid logical replication message\n> > type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n> > behavior. I think we should instead either (1) deny starting the\n> > replication if the subscriber isn't able to handle sequence changes\n> > and the publication includes that, or (2) not send sequence changes to\n> > such subscribers.\n> >\n>\n> I agree the \"invalid message\" error is not great, but it's not clear to\n> me how to do either (1). The trouble is we don't really know if the\n> publication contains (or will contain) sequences. I mean, what would\n> happen if the replication starts and then someone adds a sequence?\n>\n> For (2), I think that's not something we should do - silently discarding\n> some messages seems error-prone. If the publication includes sequences,\n> presumably the user wanted to replicate those. If they want to replicate\n> to an older subscriber, create a publication without sequences.\n>\n> Perhaps the right solution would be to check if the subscriber supports\n> replication of sequences in the output plugin, while attempting to write\n> the \"Q\" message. 
And error-out if the subscriber does not support it.\n\nIt might be related to this topic; do we need to bump the protocol\nversion? The commit 64824323e57d introduced new streaming callbacks\nand bumped the protocol version. I think the same is true for this\nchange, as it adds the sequence_cb callback.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 29 Mar 2023 01:34:39 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/28/23 18:34, Masahiko Sawada wrote:\n> On Mon, Mar 27, 2023 at 11:46 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>>\n>>\n>> On 3/27/23 03:32, Masahiko Sawada wrote:\n>>> Hi,\n>>>\n>>> On Fri, Mar 24, 2023 at 7:26 AM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> I merged the earlier \"fixup\" patches into the relevant parts, and left\n>>>> two patches with new tweaks (deducing the corrent \"WAL\" state from the\n>>>> current state read by copy_sequence), and the interlock discussed here.\n>>>>\n>>>\n>>> Apart from that, how does the publication having sequences work with\n>>> subscribers who are not able to handle sequence changes, e.g. in a\n>>> case where PostgreSQL version of publication is newer than the\n>>> subscriber? As far as I tested the latest patches, the subscriber\n>>> (v15) errors out with the error 'invalid logical replication message\n>>> type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n>>> behavior. I think we should instead either (1) deny starting the\n>>> replication if the subscriber isn't able to handle sequence changes\n>>> and the publication includes that, or (2) not send sequence changes to\n>>> such subscribers.\n>>>\n>>\n>> I agree the \"invalid message\" error is not great, but it's not clear to\n>> me how to do either (1). The trouble is we don't really know if the\n>> publication contains (or will contain) sequences. I mean, what would\n>> happen if the replication starts and then someone adds a sequence?\n>>\n>> For (2), I think that's not something we should do - silently discarding\n>> some messages seems error-prone. If the publication includes sequences,\n>> presumably the user wanted to replicate those. 
If they want to replicate\n>> to an older subscriber, create a publication without sequences.\n>>\n>> Perhaps the right solution would be to check if the subscriber supports\n>> replication of sequences in the output plugin, while attempting to write\n>> the \"Q\" message. And error-out if the subscriber does not support it.\n> \n> It might be related to this topic; do we need to bump the protocol\n> version? The commit 64824323e57d introduced new streaming callbacks\n> and bumped the protocol version. I think the same seems to be true for\n> this change as it adds sequence_cb callback.\n> \n\nIt's not clear to me what the exact behavior should be.\n\nI mean, imagine we're opening a connection for logical replication, and\nthe subscriber does not handle sequences. What should the publisher do?\n\n(Note: The correct commit hash is 464824323e57d.)\n\nI don't think the streaming is a good match for sequences, because of a\ncouple of important differences ...\n\nFirstly, streaming determines *how* the changes are replicated, not what\ngets replicated. It doesn't (silently) filter out \"bad\" events that the\nsubscriber doesn't know how to apply. If the subscriber does not know\nhow to deal with streamed xacts, it'll still get the same changes\nexactly per the publication definition.\n\nSecondly, the default value is \"streaming=off\", i.e. the subscriber has\nto explicitly request streaming when opening the connection. And we\nsimply check it against the negotiated protocol version, i.e.
the check\nin pgoutput_startup() protects against subscriber requesting a protocol\nv1 but also streaming=on.\n\nI don't think we can/should do more check at this point - we don't know\nwhat's included in the requested publications at that point, and I doubt\nit's worth adding because we certainly can't predict if the publication\nwill be altered to include/decode sequences in the future.\n\n\nSpeaking of precedents, TRUNCATE is probably a better one, because it's\na new action and it determines *what* the subscriber can handle. But\nthat does exactly the thing we do for sequences - if you open a\nconnection from PG10 subscriber (truncate was added in PG11), and the\npublisher decodes a truncate, subscriber will do:\n\n2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n replication message type \"T\"\n2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n replication worker for subscription 16390 (PID 2357609) exited with\n exit code 1\n\nI don't see why sequences should do anything else. If you need to\nreplicate to such subscriber, create a publication that does not have\n'sequence' in the publish option ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Mar 2023 20:34:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 3:34 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/28/23 18:34, Masahiko Sawada wrote:\n> > On Mon, Mar 27, 2023 at 11:46 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >>\n> >>\n> >> On 3/27/23 03:32, Masahiko Sawada wrote:\n> >>> Hi,\n> >>>\n> >>> On Fri, Mar 24, 2023 at 7:26 AM Tomas Vondra\n> >>> <tomas.vondra@enterprisedb.com> wrote:\n> >>>>\n> >>>> I merged the earlier \"fixup\" patches into the relevant parts, and left\n> >>>> two patches with new tweaks (deducing the corrent \"WAL\" state from the\n> >>>> current state read by copy_sequence), and the interlock discussed here.\n> >>>>\n> >>>\n> >>> Apart from that, how does the publication having sequences work with\n> >>> subscribers who are not able to handle sequence changes, e.g. in a\n> >>> case where PostgreSQL version of publication is newer than the\n> >>> subscriber? As far as I tested the latest patches, the subscriber\n> >>> (v15) errors out with the error 'invalid logical replication message\n> >>> type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n> >>> behavior. I think we should instead either (1) deny starting the\n> >>> replication if the subscriber isn't able to handle sequence changes\n> >>> and the publication includes that, or (2) not send sequence changes to\n> >>> such subscribers.\n> >>>\n> >>\n> >> I agree the \"invalid message\" error is not great, but it's not clear to\n> >> me how to do either (1). The trouble is we don't really know if the\n> >> publication contains (or will contain) sequences. I mean, what would\n> >> happen if the replication starts and then someone adds a sequence?\n> >>\n> >> For (2), I think that's not something we should do - silently discarding\n> >> some messages seems error-prone. If the publication includes sequences,\n> >> presumably the user wanted to replicate those. 
If they want to replicate\n> >> to an older subscriber, create a publication without sequences.\n> >>\n> >> Perhaps the right solution would be to check if the subscriber supports\n> >> replication of sequences in the output plugin, while attempting to write\n> >> the \"Q\" message. And error-out if the subscriber does not support it.\n> >\n> > It might be related to this topic; do we need to bump the protocol\n> > version? The commit 64824323e57d introduced new streaming callbacks\n> > and bumped the protocol version. I think the same seems to be true for\n> > this change as it adds sequence_cb callback.\n> >\n>\n> It's not clear to me what should be the exact behavior?\n>\n> I mean, imagine we're opening a connection for logical replication, and\n> the subscriber does not handle sequences. What should the publisher do?\n>\n> (Note: The correct commit hash is 464824323e57d.)\n\nThanks.\n\n>\n> I don't think the streaming is a good match for sequences, because of a\n> couple important differences ...\n>\n> Firstly, streaming determines *how* the changes are replicated, not what\n> gets replicated. It doesn't (silently) filter out \"bad\" events that the\n> subscriber doesn't know how to apply. If the subscriber does not know\n> how to deal with streamed xacts, it'll still get the same changes\n> exactly per the publication definition.\n>\n> Secondly, the default value is \"streming=off\", i.e. the subscriber has\n> to explicitly request streaming when opening the connection. And we\n> simply check it against the negotiated protocol version, i.e. the check\n> in pgoutput_startup() protects against subscriber requesting a protocol\n> v1 but also streaming=on.\n>\n> I don't think we can/should do more check at this point - we don't know\n> what's included in the requested publications at that point, and I doubt\n> it's worth adding because we certainly can't predict if the publication\n> will be altered to include/decode sequences in the future.\n\nTrue. 
That's a valid argument.\n\n>\n> Speaking of precedents, TRUNCATE is probably a better one, because it's\n> a new action and it determines *what* the subscriber can handle. But\n> that does exactly the thing we do for sequences - if you open a\n> connection from PG10 subscriber (truncate was added in PG11), and the\n> publisher decodes a truncate, subscriber will do:\n>\n> 2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n> replication message type \"T\"\n> 2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n> replication worker for subscription 16390 (PID 2357609) exited with\n> exit code 1\n>\n> I don't see why sequences should do anything else. If you need to\n> replicate to such subscriber, create a publication that does not have\n> 'sequence' in the publish option ...\n>\n\nI didn't check the TRUNCATE cases; yes, that's a good precedent for\nsequence replication. So it seems we don't need to do anything.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 29 Mar 2023 14:44:53 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 12:04 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/28/23 18:34, Masahiko Sawada wrote:\n> > On Mon, Mar 27, 2023 at 11:46 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>>\n> >>> Apart from that, how does the publication having sequences work with\n> >>> subscribers who are not able to handle sequence changes, e.g. in a\n> >>> case where PostgreSQL version of publication is newer than the\n> >>> subscriber? As far as I tested the latest patches, the subscriber\n> >>> (v15) errors out with the error 'invalid logical replication message\n> >>> type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n> >>> behavior. I think we should instead either (1) deny starting the\n> >>> replication if the subscriber isn't able to handle sequence changes\n> >>> and the publication includes that, or (2) not send sequence changes to\n> >>> such subscribers.\n> >>>\n> >>\n> >> I agree the \"invalid message\" error is not great, but it's not clear to\n> >> me how to do either (1). The trouble is we don't really know if the\n> >> publication contains (or will contain) sequences. I mean, what would\n> >> happen if the replication starts and then someone adds a sequence?\n> >>\n> >> For (2), I think that's not something we should do - silently discarding\n> >> some messages seems error-prone. If the publication includes sequences,\n> >> presumably the user wanted to replicate those. If they want to replicate\n> >> to an older subscriber, create a publication without sequences.\n> >>\n> >> Perhaps the right solution would be to check if the subscriber supports\n> >> replication of sequences in the output plugin, while attempting to write\n> >> the \"Q\" message. And error-out if the subscriber does not support it.\n> >\n> > It might be related to this topic; do we need to bump the protocol\n> > version? 
The commit 64824323e57d introduced new streaming callbacks\n> > and bumped the protocol version. I think the same seems to be true for\n> > this change as it adds sequence_cb callback.\n> >\n>\n> It's not clear to me what should be the exact behavior?\n>\n> I mean, imagine we're opening a connection for logical replication, and\n> the subscriber does not handle sequences. What should the publisher do?\n>\n\nI think deciding anything at the publisher would be tricky, but wouldn't\nit be better if, by default, we disallow connections from the subscriber\nwhen the publisher's version is higher? We could then allow them only\nbased on some subscription option; or, alternatively, allow connections\nto a higher version by default but disallow them based on an option.\n\n>\n> Speaking of precedents, TRUNCATE is probably a better one, because it's\n> a new action and it determines *what* the subscriber can handle. But\n> that does exactly the thing we do for sequences - if you open a\n> connection from PG10 subscriber (truncate was added in PG11), and the\n> publisher decodes a truncate, subscriber will do:\n>\n> 2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n> replication message type \"T\"\n> 2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n> replication worker for subscription 16390 (PID 2357609) exited with\n> exit code 1\n>\n> I don't see why sequences should do anything else.\n>\n\nIs this behavior of TRUNCATE known or discussed previously? I can't\nsee any mention of this in the docs or commit message. I guess if we\nwant to follow such behavior it should be well documented so that it\nwon't be a surprise for users. I think we would face such cases in the\nfuture as well.
One similar case is the one we are discussing for DDL\nreplication, where a higher-version publisher could send some DDL\nsyntax that lower-version subscribers won't support, which will lead to\nan error [1].\n\n[1] - https://www.postgresql.org/message-id/OS0PR01MB5716088E497BDCBCED7FC3DA94849%40OS0PR01MB5716.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Mar 2023 15:21:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 3/29/23 11:51, Amit Kapila wrote:\n> On Wed, Mar 29, 2023 at 12:04 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 3/28/23 18:34, Masahiko Sawada wrote:\n>>> On Mon, Mar 27, 2023 at 11:46 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> Apart from that, how does the publication having sequences work with\n>>>>> subscribers who are not able to handle sequence changes, e.g. in a\n>>>>> case where PostgreSQL version of publication is newer than the\n>>>>> subscriber? As far as I tested the latest patches, the subscriber\n>>>>> (v15) errors out with the error 'invalid logical replication message\n>>>>> type \"Q\"' when receiving a sequence change. I'm not sure it's sensible\n>>>>> behavior. I think we should instead either (1) deny starting the\n>>>>> replication if the subscriber isn't able to handle sequence changes\n>>>>> and the publication includes that, or (2) not send sequence changes to\n>>>>> such subscribers.\n>>>>>\n>>>>\n>>>> I agree the \"invalid message\" error is not great, but it's not clear to\n>>>> me how to do either (1). The trouble is we don't really know if the\n>>>> publication contains (or will contain) sequences. I mean, what would\n>>>> happen if the replication starts and then someone adds a sequence?\n>>>>\n>>>> For (2), I think that's not something we should do - silently discarding\n>>>> some messages seems error-prone. If the publication includes sequences,\n>>>> presumably the user wanted to replicate those. If they want to replicate\n>>>> to an older subscriber, create a publication without sequences.\n>>>>\n>>>> Perhaps the right solution would be to check if the subscriber supports\n>>>> replication of sequences in the output plugin, while attempting to write\n>>>> the \"Q\" message. And error-out if the subscriber does not support it.\n>>>\n>>> It might be related to this topic; do we need to bump the protocol\n>>> version? 
The commit 64824323e57d introduced new streaming callbacks\n>>> and bumped the protocol version. I think the same seems to be true for\n>>> this change as it adds sequence_cb callback.\n>>>\n>>\n>> It's not clear to me what should be the exact behavior?\n>>\n>> I mean, imagine we're opening a connection for logical replication, and\n>> the subscriber does not handle sequences. What should the publisher do?\n>>\n> \n> I think deciding anything at the publisher would be tricky but won't\n> it be better if by default we disallow connection from subscriber to\n> the publisher when the publisher's version is higher? And then allow\n> it only based on some subscription option or maybe by default allow\n> the connection to a higher version but based on option disallows the\n> connection.\n> \n>>\n>> Speaking of precedents, TRUNCATE is probably a better one, because it's\n>> a new action and it determines *what* the subscriber can handle. But\n>> that does exactly the thing we do for sequences - if you open a\n>> connection from PG10 subscriber (truncate was added in PG11), and the\n>> publisher decodes a truncate, subscriber will do:\n>>\n>> 2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n>> replication message type \"T\"\n>> 2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n>> replication worker for subscription 16390 (PID 2357609) exited with\n>> exit code 1\n>>\n>> I don't see why sequences should do anything else.\n>>\n> \n> Is this behavior of TRUNCATE known or discussed previously? I can't\n> see any mention of this in the docs or commit message. I guess if we\n> want to follow such behavior it should be well documented so that it\n> won't be a surprise for users. I think we would face such cases in the\n> future as well. 
One of the similar cases we are discussing for DDL\n> replication where a higher version publisher could send some DDL\n> syntax that lower version subscribers won't support and will lead to\n> an error [1].\n> \n\nI don't know where/how it's documented, TBH.\n\nFWIW I agree the TRUNCATE-like behavior (failing on subscriber after\nreceiving unknown message type) is a bit annoying.\n\nPerhaps it'd be reasonable to tie the \"protocol version\" to subscriber\ncapabilities, so that a protocol version guarantees what message types\nthe subscriber understands. So we could increment the protocol version,\ncheck it in pgoutput_startup and then error-out in the sequence callback\nif the subscriber version is too old.\n\nThat'd be nicer in the sense that we'd generate nicer error message on\nthe publisher, not an \"unknown message type\" on the subscriber. That's\ndoable, the main problem being it'd be inconsistent with the TRUNCATE\nbehavior. OTOH that was introduced in PG11, which is the oldest version\nstill under support ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Mar 2023 16:28:16 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 29.03.23 16:28, Tomas Vondra wrote:\n> Perhaps it'd be reasonable to tie the \"protocol version\" to subscriber\n> capabilities, so that a protocol version guarantees what message types\n> the subscriber understands. So we could increment the protocol version,\n> check it in pgoutput_startup and then error-out in the sequence callback\n> if the subscriber version is too old.\n\nThat would make sense.\n\n> That'd be nicer in the sense that we'd generate nicer error message on\n> the publisher, not an \"unknown message type\" on the subscriber. That's\n> doable, the main problem being it'd be inconsistent with the TRUNCATE\n> behavior. OTOH that was introduced in PG11, which is the oldest version\n> still under support ...\n\nI think at the time TRUNCATE support was added, we didn't have a strong \nsense of how the protocol versioning would work or whether it would work \nat all, so doing nothing was the easiest way out.\n\n\n\n",
"msg_date": "Wed, 29 Mar 2023 16:49:04 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Mar 29, 2023 at 7:58 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 3/29/23 11:51, Amit Kapila wrote:\n> >>\n> >> It's not clear to me what should be the exact behavior?\n> >>\n> >> I mean, imagine we're opening a connection for logical replication, and\n> >> the subscriber does not handle sequences. What should the publisher do?\n> >>\n> >\n> > I think deciding anything at the publisher would be tricky but won't\n> > it be better if by default we disallow connection from subscriber to\n> > the publisher when the publisher's version is higher? And then allow\n> > it only based on some subscription option or maybe by default allow\n> > the connection to a higher version but based on option disallows the\n> > connection.\n> >\n> >>\n> >> Speaking of precedents, TRUNCATE is probably a better one, because it's\n> >> a new action and it determines *what* the subscriber can handle. But\n> >> that does exactly the thing we do for sequences - if you open a\n> >> connection from PG10 subscriber (truncate was added in PG11), and the\n> >> publisher decodes a truncate, subscriber will do:\n> >>\n> >> 2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n> >> replication message type \"T\"\n> >> 2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n> >> replication worker for subscription 16390 (PID 2357609) exited with\n> >> exit code 1\n> >>\n> >> I don't see why sequences should do anything else.\n> >>\n> >\n> > Is this behavior of TRUNCATE known or discussed previously? I can't\n> > see any mention of this in the docs or commit message. I guess if we\n> > want to follow such behavior it should be well documented so that it\n> > won't be a surprise for users. I think we would face such cases in the\n> > future as well. 
One of the similar cases we are discussing for DDL\n> > replication where a higher version publisher could send some DDL\n> > syntax that lower version subscribers won't support and will lead to\n> > an error [1].\n> >\n>\n> I don't know where/how it's documented, TBH.\n>\n> FWIW I agree the TRUNCATE-like behavior (failing on subscriber after\n> receiving unknown message type) is a bit annoying.\n>\n> Perhaps it'd be reasonable to tie the \"protocol version\" to subscriber\n> capabilities, so that a protocol version guarantees what message types\n> the subscriber understands. So we could increment the protocol version,\n> check it in pgoutput_startup and then error-out in the sequence callback\n> if the subscriber version is too old.\n>\n> That'd be nicer in the sense that we'd generate nicer error message on\n> the publisher, not an \"unknown message type\" on the subscriber.\n>\n\nAgreed. So, we can probably formalize this rule such that whenever in\na newer version publisher we want to send additional information which\nthe old version subscriber won't be able to handle, the error should\nbe raised at the publisher by using protocol version number.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Mar 2023 08:31:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 29, 2023 at 7:58 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 3/29/23 11:51, Amit Kapila wrote:\n> > >>\n> > >> It's not clear to me what should be the exact behavior?\n> > >>\n> > >> I mean, imagine we're opening a connection for logical replication, and\n> > >> the subscriber does not handle sequences. What should the publisher do?\n> > >>\n> > >\n> > > I think deciding anything at the publisher would be tricky but won't\n> > > it be better if by default we disallow connection from subscriber to\n> > > the publisher when the publisher's version is higher? And then allow\n> > > it only based on some subscription option or maybe by default allow\n> > > the connection to a higher version but based on option disallows the\n> > > connection.\n> > >\n> > >>\n> > >> Speaking of precedents, TRUNCATE is probably a better one, because it's\n> > >> a new action and it determines *what* the subscriber can handle. But\n> > >> that does exactly the thing we do for sequences - if you open a\n> > >> connection from PG10 subscriber (truncate was added in PG11), and the\n> > >> publisher decodes a truncate, subscriber will do:\n> > >>\n> > >> 2023-03-28 20:29:46.921 CEST [2357609] ERROR: invalid logical\n> > >> replication message type \"T\"\n> > >> 2023-03-28 20:29:46.922 CEST [2356534] LOG: worker process: logical\n> > >> replication worker for subscription 16390 (PID 2357609) exited with\n> > >> exit code 1\n> > >>\n> > >> I don't see why sequences should do anything else.\n> > >>\n> > >\n> > > Is this behavior of TRUNCATE known or discussed previously? I can't\n> > > see any mention of this in the docs or commit message. I guess if we\n> > > want to follow such behavior it should be well documented so that it\n> > > won't be a surprise for users. I think we would face such cases in the\n> > > future as well. 
One of the similar cases we are discussing for DDL\n> > > replication where a higher version publisher could send some DDL\n> > > syntax that lower version subscribers won't support and will lead to\n> > > an error [1].\n> > >\n> >\n> > I don't know where/how it's documented, TBH.\n> >\n> > FWIW I agree the TRUNCATE-like behavior (failing on subscriber after\n> > receiving unknown message type) is a bit annoying.\n> >\n> > Perhaps it'd be reasonable to tie the \"protocol version\" to subscriber\n> > capabilities, so that a protocol version guarantees what message types\n> > the subscriber understands. So we could increment the protocol version,\n> > check it in pgoutput_startup and then error-out in the sequence callback\n> > if the subscriber version is too old.\n> >\n> > That'd be nicer in the sense that we'd generate nicer error message on\n> > the publisher, not an \"unknown message type\" on the subscriber.\n> >\n>\n> Agreed. So, we can probably formalize this rule such that whenever in\n> a newer version publisher we want to send additional information which\n> the old version subscriber won't be able to handle, the error should\n> be raised at the publisher by using protocol version number.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 30 Mar 2023 12:15:29 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 3/30/23 05:15, Masahiko Sawada wrote:\n>\n> ...\n>\n>>>\n>>> Perhaps it'd be reasonable to tie the \"protocol version\" to subscriber\n>>> capabilities, so that a protocol version guarantees what message types\n>>> the subscriber understands. So we could increment the protocol version,\n>>> check it in pgoutput_startup and then error-out in the sequence callback\n>>> if the subscriber version is too old.\n>>>\n>>> That'd be nicer in the sense that we'd generate nicer error message on\n>>> the publisher, not an \"unknown message type\" on the subscriber.\n>>>\n>>\n>> Agreed. So, we can probably formalize this rule such that whenever in\n>> a newer version publisher we want to send additional information which\n>> the old version subscriber won't be able to handle, the error should\n>> be raised at the publisher by using protocol version number.\n> \n> +1\n> \n\nOK, I took a stab at this, see the attached 0007 patch which bumps the\nprotocol version, and allows the subscriber to specify \"sequences\" when\nstarting the replication, similar to what we do for the two-phase stuff.\n\nThe patch essentially adds 'sequences' to the replication start command,\ndepending on the server version, but it can be overridden by \"sequences\"\nsubscription option. 
The patch is pretty small, but I wonder how much\nsmarter this should be ...\n\n\nI think there are about 4 cases that we need to consider\n\n1) there are no sequences in the publication -> OK\n\n2) publication with sequences, subscriber knows how to apply (and\nspecifies \"sequences on\" either automatically or explicitly) -> OK\n\n3) publication with sequences, subscriber explicitly disabled them by\nspecifying \"sequences off\" in startup -> OK\n\n4) publication with sequences, subscriber without sequence support (e.g.\nolder Postgres release) -> PROBLEM (?)\n\n\nThe reason why I think (4) may be a problem is that my opinion is we\nshouldn't silently drop stuff that is meant to be part of the\npublication. That is, if someone creates a publication and adds a\nsequence to it, he wants to replicate the sequence.\n\nBut the current behavior is the old subscriber connects, doesn't specify\nthe 'sequences on' so the publisher disables that and then simply\nignores sequence increments during decoding.\n\nI think we might want to detect this and error out instead of just\nskipping the change, but that needs to happen later, only when the\npublication actually has any sequences ...\n\nI don't want to over-think / over-engineer this, though, so I wonder\nwhat are your opinions on this?\n\nThere's a couple XXX comments in the code, mostly about stuff I left out\nwhen copying the two-phase stuff. For example, we store two-phase stuff\nin the replication slot itself - I don't think we need to do that for\nsequences, though.\n\nAnother thing what to do about ALTER SUBSCRIPTION - at the moment it's\nnot possible to change the \"sequences\" option, but maybe we should allow\nthat? But then we'd need to re-sync all the sequences, somehow ...\n\n\nAside from that, I've also added 0005, which does the sync interlock in\na slightly different way - instead of a custom function for locking\nsequence, it allows LOCK on sequences. 
Peter Eisentraut suggested doing\nit like this, it's simpler, and I can't see what issues it might cause.\nThe patch should update LOCK documentation, I haven't done that yet.\nUltimately it should all be merged into 0003, of course.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 2 Apr 2023 19:46:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Fwiw the cfbot seems to have some failing tests with this patch:\n\n\n[19:05:11.398] # Failed test 'initial test data replicated'\n[19:05:11.398] # at t/030_sequences.pl line 75.\n[19:05:11.398] # got: '1|0|f'\n[19:05:11.398] # expected: '132|0|t'\n[19:05:11.398]\n[19:05:11.398] # Failed test 'advance sequence in rolled-back transaction'\n[19:05:11.398] # at t/030_sequences.pl line 98.\n[19:05:11.398] # got: '1|0|f'\n[19:05:11.398] # expected: '231|0|t'\n[19:05:11.398]\n[19:05:11.398] # Failed test 'create sequence, advance it in\nrolled-back transaction, but commit the create'\n[19:05:11.398] # at t/030_sequences.pl line 152.\n[19:05:11.398] # got: '1|0|f'\n[19:05:11.398] # expected: '132|0|t'\n[19:05:11.398]\n[19:05:11.398] # Failed test 'advance the new sequence in a\ntransaction and roll it back'\n[19:05:11.398] # at t/030_sequences.pl line 175.\n[19:05:11.398] # got: '1|0|f'\n[19:05:11.398] # expected: '231|0|t'\n[19:05:11.398]\n[19:05:11.398] # Failed test 'advance sequence in a subtransaction'\n[19:05:11.398] # at t/030_sequences.pl line 198.\n[19:05:11.398] # got: '1|0|f'\n[19:05:11.398] # expected: '330|0|t'\n[19:05:11.398] # Looks like you failed 5 tests of 6.\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 4 Apr 2023 11:45:56 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Patch 0002 is very annoying to scroll, and I realized that it's because\npsql is writing 200kB of dashes in one of the test_decoding test cases.\nI propose to set psql's printing format to 'unaligned' to avoid that,\nwhich should cut the size of that patch to a tenth.\n\nI wonder if there's a similar issue in 0003, but I didn't check.\n\nIt's annoying that git doesn't seem to have a way of reporting length of\nlongest lines.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I'm always right, but sometimes I'm more right than other times.\"\n (Linus Torvalds)",
"msg_date": "Wed, 5 Apr 2023 12:39:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 4/5/23 12:39, Alvaro Herrera wrote:\n> Patch 0002 is very annoying to scroll, and I realized that it's because\n> psql is writing 200kB of dashes in one of the test_decoding test cases.\n> I propose to set psql's printing format to 'unaligned' to avoid that,\n> which should cut the size of that patch to a tenth.\n> \n\nYeah, that's a good idea, I think. It shrunk the diff to ~90kB, which is\nmuch better.\n\n> I wonder if there's a similar issue in 0003, but I didn't check.\n> \n\nI don't think so, there just seems to be enough code changes to generate\n~260kB diff with all the context.\n\nAs for the cfbot failures reported by Greg, that turned out to be a\nminor thinko in the protocol version negotiation, introduced by part\n0008 (current part, after adding Alvaro's patch tweaking test output).\nThe subscriber failed to send 'sequences on' when starting the stream.\nIt also forgot to refresh the subscription after a sequence was added.\n\nThe attached patch version fixes all of this, but I think at this point\nit's better to just postpone this for PG17 - if it was something we\ncould fix within a single release, maybe. But the replication protocol\nis something we can't easily change after release, so if we find out the\nversioning (and sequence negotiation) should work differently, we can't\nchange it. In fact, we'd be probably stuck with it until PG16 gets out\nof support, not just until PG17 ...\n\nI've thought about pushing at least the first two parts (adding the\nsequence decoding infrastructure and test_decoding support), but I'm not\nsure that's quite worth it without the built-in replication stuff.\n\nOr we could push it and then tweak it after feature freeze, if we\nconclude the protocol versioning should work differently. I recall we\ndid changes in the column and row filtering in PG15. 
But that seems\nquite wrong, obviously.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 5 Apr 2023 23:26:33 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 02.04.23 19:46, Tomas Vondra wrote:\n> OK, I took a stab at this, see the attached 0007 patch which bumps the\n> protocol version, and allows the subscriber to specify \"sequences\" when\n> starting the replication, similar to what we do for the two-phase stuff.\n> \n> The patch essentially adds 'sequences' to the replication start command,\n> depending on the server version, but it can be overridden by \"sequences\"\n> subscription option. The patch is pretty small, but I wonder how much\n> smarter this should be ...\n\nI think this should actually be much simpler.\n\nAll the code needs to do is:\n\n- Raise protocol version (4->5) (Your patch does that.)\n\n- pgoutput_sequence() checks whether the protocol version is >=5 and if \nnot it raises an error.\n\n- Subscriber uses old protocol if the remote end is an older PG version. \n (Your patch does that.)\n\nI don't see the need for the subscriber to toggle sequences explicitly \nor anything like that.\n\n\n\n",
"msg_date": "Thu, 11 May 2023 15:54:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\nSorry for jumping late in this thread.\n\nI started experimenting with the functionality. Maybe something that\nwas already discussed earlier. Given that the thread is being\ndiscussed for so long and has gone several changes, revalidating the\nfunctionality is useful.\n\nI considered following aspects:\nChanges to the sequence on subscriber\n-----------------------------------------------------\n1. Since this is logical decoding, logical replica is writable. So the\nlogically replicated sequence can be manipulated on the subscriber as\nwell. This implementation consolidates the changes on subscriber and\npublisher rather than replicating the publisher state as is. That's\ngood. See example command sequence below\na. publisher calls nextval() - this sets the sequence state on\npublisher as (1, 32, t) which is replicated to the subscriber.\nb. subscriber calls nextval() once - this sets the sequence state on\nsubscriber as (34, 32, t)\nc. subscriber calls nextval() 32 times - on-disk state of sequence\ndoesn't change on subscriber\nd. subscriber calls nextval() 33 times - this sets the sequence state\non subscriber as (99, 0, t)\ne. publisher calls nextval() 32 times - this sets the sequence state\non publisher as (33, 0, t)\n\nThe on-disk state on publisher at the end of e. is replicated to the\nsubscriber but subscriber doesn't apply it. The state there is still\n(99, 0, t). I think this is closer to how logical replication of\nsequence should look like. This is aso good enough as long as we\nexpect the replication of sequences to be used for failover and\nswitchover.\n\nBut it might not help if we want to consolidate the INSERTs that use\nnextvals(). If we were to treat sequences as accumulating the\nincrements, we might be able to resolve the conflicts by adjusting the\ncolumns values considering the increments made on subscriber. IIUC,\nconflict resolution is not part of built-in logical replication. So we\nmay not want to go this route. 
But worth considering.\n\nImplementation agnostic decoded change\n--------------------------------------------------------\nCurrent method of decoding and replicating the sequences is tied to\nthe implementation - it replicates the sequence row as is. If the\nimplementation changes in future, we might need to revise the decoded\npresentation of sequence. I think only nextval() matters for sequence.\nSo as long as we are replicating information enough to calculate the\nnextval we should be good. Current implementation does that by\nreplicating the log_value and is_called. is_called can be consolidated\ninto log_value itself. The implemented protocol, thus requires two\nextra values to be replicated. Those can be ignored right now. But\nthey might pose a problem in future, if some downstream starts using\nthem. We will be forced to provide fake but sane values even if a\nfuture upstream implementation does not produce those values. Of\ncourse we can't predict the future implementation enough to decide\nwhat would be an implementation independent format. E.g. if a\npluggable storage were to be used to implement sequences or if we come\naround implementing distributed sequences, their shape can't be\npredicted right now. So a change in protocol seems to be unavoidable\nwhatever we do. But starting with bare minimum might save us from\nlarger troubles. I think, it's better to just replicate the nextval()\nand craft the representation on subscriber so that it produces that\nnextval().\n\n3. Primary key sequences\n-----------------------------------\nI am not experimented with this. But I think we will need to add the\nsequences associated with the primary keys to the publications\npublishing the owner tables. Otherwise, we will have problems with the\nfailover. And it needs to be done automatically since a. the names of\nthese sequences are generated automatically b. publications with FOR\nALL TABLES will add tables automatically and start replicating the\nchanges. 
Users may not be able to intercept the replication activity\nto add the associated sequences are also addedto the publication.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 18 May 2023 19:53:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Patch set needs a rebase, PFA rebased patch-set.\n\nThe conflict was in commit \"Add decoding of sequences to built-in\nreplication\", in files tablesync.c and 002_pg_dump.pl.\n\nOn Thu, May 18, 2023 at 7:53 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi,\n> Sorry for jumping late in this thread.\n>\n> I started experimenting with the functionality. Maybe something that\n> was already discussed earlier. Given that the thread is being\n> discussed for so long and has gone several changes, revalidating the\n> functionality is useful.\n>\n> I considered following aspects:\n> Changes to the sequence on subscriber\n> -----------------------------------------------------\n> 1. Since this is logical decoding, logical replica is writable. So the\n> logically replicated sequence can be manipulated on the subscriber as\n> well. This implementation consolidates the changes on subscriber and\n> publisher rather than replicating the publisher state as is. That's\n> good. See example command sequence below\n> a. publisher calls nextval() - this sets the sequence state on\n> publisher as (1, 32, t) which is replicated to the subscriber.\n> b. subscriber calls nextval() once - this sets the sequence state on\n> subscriber as (34, 32, t)\n> c. subscriber calls nextval() 32 times - on-disk state of sequence\n> doesn't change on subscriber\n> d. subscriber calls nextval() 33 times - this sets the sequence state\n> on subscriber as (99, 0, t)\n> e. publisher calls nextval() 32 times - this sets the sequence state\n> on publisher as (33, 0, t)\n>\n> The on-disk state on publisher at the end of e. is replicated to the\n> subscriber but subscriber doesn't apply it. The state there is still\n> (99, 0, t). I think this is closer to how logical replication of\n> sequence should look like. 
This is aso good enough as long as we\n> expect the replication of sequences to be used for failover and\n> switchover.\n>\n> But it might not help if we want to consolidate the INSERTs that use\n> nextvals(). If we were to treat sequences as accumulating the\n> increments, we might be able to resolve the conflicts by adjusting the\n> columns values considering the increments made on subscriber. IIUC,\n> conflict resolution is not part of built-in logical replication. So we\n> may not want to go this route. But worth considering.\n>\n> Implementation agnostic decoded change\n> --------------------------------------------------------\n> Current method of decoding and replicating the sequences is tied to\n> the implementation - it replicates the sequence row as is. If the\n> implementation changes in future, we might need to revise the decoded\n> presentation of sequence. I think only nextval() matters for sequence.\n> So as long as we are replicating information enough to calculate the\n> nextval we should be good. Current implementation does that by\n> replicating the log_value and is_called. is_called can be consolidated\n> into log_value itself. The implemented protocol, thus requires two\n> extra values to be replicated. Those can be ignored right now. But\n> they might pose a problem in future, if some downstream starts using\n> them. We will be forced to provide fake but sane values even if a\n> future upstream implementation does not produce those values. Of\n> course we can't predict the future implementation enough to decide\n> what would be an implementation independent format. E.g. if a\n> pluggable storage were to be used to implement sequences or if we come\n> around implementing distributed sequences, their shape can't be\n> predicted right now. So a change in protocol seems to be unavoidable\n> whatever we do. But starting with bare minimum might save us from\n> larger troubles. 
I think, it's better to just replicate the nextval()\n> and craft the representation on subscriber so that it produces that\n> nextval().\n>\n> 3. Primary key sequences\n> -----------------------------------\n> I am not experimented with this. But I think we will need to add the\n> sequences associated with the primary keys to the publications\n> publishing the owner tables. Otherwise, we will have problems with the\n> failover. And it needs to be done automatically since a. the names of\n> these sequences are generated automatically b. publications with FOR\n> ALL TABLES will add tables automatically and start replicating the\n> changes. Users may not be able to intercept the replication activity\n> to add the associated sequences are also addedto the publication.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Tue, 13 Jun 2023 19:04:05 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 5/18/23 16:23, Ashutosh Bapat wrote:\n> Hi,\n> Sorry for jumping late in this thread.\n> \n> I started experimenting with the functionality. Maybe something that\n> was already discussed earlier. Given that the thread is being\n> discussed for so long and has gone several changes, revalidating the\n> functionality is useful.\n> \n> I considered following aspects:\n> Changes to the sequence on subscriber\n> -----------------------------------------------------\n> 1. Since this is logical decoding, logical replica is writable. So the\n> logically replicated sequence can be manipulated on the subscriber as\n> well. This implementation consolidates the changes on subscriber and\n> publisher rather than replicating the publisher state as is. That's\n> good. See example command sequence below\n> a. publisher calls nextval() - this sets the sequence state on\n> publisher as (1, 32, t) which is replicated to the subscriber.\n> b. subscriber calls nextval() once - this sets the sequence state on\n> subscriber as (34, 32, t)\n> c. subscriber calls nextval() 32 times - on-disk state of sequence\n> doesn't change on subscriber\n> d. subscriber calls nextval() 33 times - this sets the sequence state\n> on subscriber as (99, 0, t)\n> e. publisher calls nextval() 32 times - this sets the sequence state\n> on publisher as (33, 0, t)\n> \n> The on-disk state on publisher at the end of e. is replicated to the\n> subscriber but subscriber doesn't apply it. The state there is still\n> (99, 0, t). I think this is closer to how logical replication of\n> sequence should look like. This is aso good enough as long as we\n> expect the replication of sequences to be used for failover and\n> switchover.\n> \n\nI'm really confused - are you describing what the patch is doing, or\nwhat you think it should be doing? 
Because right now there's nothing\nthat'd \"consolidate\" the changes (in the sense of reconciling write\nconflicts), and there's absolutely no way to do that.\n\nSo if the subscriber advances the sequence (which it technically can),\nthe subscriber state will be eventually be discarded and overwritten\nwhen the next increment gets decoded from WAL on the publisher.\n\nThere's no way to fix this with type of sequences - it requires some\nsort of global consensus (consensus on range assignment, locking or\nwhatever), which we don't have.\n\nIf the sequence is the only thing replicated, this may go unnoticed. But\nchances are the user is also replicating the table with PK populated by\nthe sequence, at which point it'll lead to constraint violation.\n\n> But it might not help if we want to consolidate the INSERTs that use\n> nextvals(). If we were to treat sequences as accumulating the\n> increments, we might be able to resolve the conflicts by adjusting the\n> columns values considering the increments made on subscriber. IIUC,\n> conflict resolution is not part of built-in logical replication. So we\n> may not want to go this route. But worth considering.\n\nWe can't just adjust values in columns that may be used externally.\n\n> \n> Implementation agnostic decoded change\n> --------------------------------------------------------\n> Current method of decoding and replicating the sequences is tied to\n> the implementation - it replicates the sequence row as is. If the\n> implementation changes in future, we might need to revise the decoded\n> presentation of sequence. I think only nextval() matters for sequence.\n> So as long as we are replicating information enough to calculate the\n> nextval we should be good. Current implementation does that by\n> replicating the log_value and is_called. is_called can be consolidated\n> into log_value itself. The implemented protocol, thus requires two\n> extra values to be replicated. Those can be ignored right now. 
But\n> they might pose a problem in future, if some downstream starts using\n> them. We will be forced to provide fake but sane values even if a\n> future upstream implementation does not produce those values. Of\n> course we can't predict the future implementation enough to decide\n> what would be an implementation independent format. E.g. if a\n> pluggable storage were to be used to implement sequences or if we come\n> around implementing distributed sequences, their shape can't be\n> predicted right now. So a change in protocol seems to be unavoidable\n> whatever we do. But starting with bare minimum might save us from\n> larger troubles. I think, it's better to just replicate the nextval()\n> and craft the representation on subscriber so that it produces that\n> nextval().\n\nYes, I agree with this. It's probably better to replicate just the next\nvalue, without the log_cnt / is_called fields (which are implementation\nspecific).\n\n> \n> 3. Primary key sequences\n> -----------------------------------\n> I am not experimented with this. But I think we will need to add the\n> sequences associated with the primary keys to the publications\n> publishing the owner tables. Otherwise, we will have problems with the\n> failover. And it needs to be done automatically since a. the names of\n> these sequences are generated automatically b. publications with FOR\n> ALL TABLES will add tables automatically and start replicating the\n> changes. Users may not be able to intercept the replication activity\n> to add the associated sequences are also addedto the publication.\n> \n\nRight, this idea was mentioned before, and I agree maybe we should\nconsider adding some of those \"automatic\" sequences automatically.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 13 Jun 2023 19:31:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Jun 13, 2023 at 11:01 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/18/23 16:23, Ashutosh Bapat wrote:\n> > Hi,\n> > Sorry for jumping late in this thread.\n> >\n> > I started experimenting with the functionality. Maybe something that\n> > was already discussed earlier. Given that the thread is being\n> > discussed for so long and has gone several changes, revalidating the\n> > functionality is useful.\n> >\n> > I considered following aspects:\n> > Changes to the sequence on subscriber\n> > -----------------------------------------------------\n> > 1. Since this is logical decoding, logical replica is writable. So the\n> > logically replicated sequence can be manipulated on the subscriber as\n> > well. This implementation consolidates the changes on subscriber and\n> > publisher rather than replicating the publisher state as is. That's\n> > good. See example command sequence below\n> > a. publisher calls nextval() - this sets the sequence state on\n> > publisher as (1, 32, t) which is replicated to the subscriber.\n> > b. subscriber calls nextval() once - this sets the sequence state on\n> > subscriber as (34, 32, t)\n> > c. subscriber calls nextval() 32 times - on-disk state of sequence\n> > doesn't change on subscriber\n> > d. subscriber calls nextval() 33 times - this sets the sequence state\n> > on subscriber as (99, 0, t)\n> > e. publisher calls nextval() 32 times - this sets the sequence state\n> > on publisher as (33, 0, t)\n> >\n> > The on-disk state on publisher at the end of e. is replicated to the\n> > subscriber but subscriber doesn't apply it. The state there is still\n> > (99, 0, t). I think this is closer to how logical replication of\n> > sequence should look like. This is aso good enough as long as we\n> > expect the replication of sequences to be used for failover and\n> > switchover.\n> >\n>\n> I'm really confused - are you describing what the patch is doing, or\n> what you think it should be doing? 
Because right now there's nothing\n> that'd \"consolidate\" the changes (in the sense of reconciling write\n> conflicts), and there's absolutely no way to do that.\n>\n> So if the subscriber advances the sequence (which it technically can),\n> the subscriber state will be eventually be discarded and overwritten\n> when the next increment gets decoded from WAL on the publisher.\n\nI described what I observed in my experiments. My observation doesn't\nagree with your description. I will revisit this when I review the\noutput plugin changes and the WAL receiver changes.\n\n>\n> Yes, I agree with this. It's probably better to replicate just the next\n> value, without the log_cnt / is_called fields (which are implementation\n> specific).\n\nOk. I will review the logic once you revise the patches.\n\n>\n> >\n> > 3. Primary key sequences\n> > -----------------------------------\n> > I am not experimented with this. But I think we will need to add the\n> > sequences associated with the primary keys to the publications\n> > publishing the owner tables. Otherwise, we will have problems with the\n> > failover. And it needs to be done automatically since a. the names of\n> > these sequences are generated automatically b. publications with FOR\n> > ALL TABLES will add tables automatically and start replicating the\n> > changes. 
Users may not be able to intercept the replication activity\n> > to add the associated sequences are also addedto the publication.\n> >\n>\n> Right, this idea was mentioned before, and I agree maybe we should\n> consider adding some of those \"automatic\" sequences automatically.\n>\n\nAre you planning to add this in the same patch set or separately?\n\nI reviewed 0001 and related parts of 0004 and 0008 in detail.\n\nI have only one major change request, about\ntypedef struct xl_seq_rec\n{\nRelFileLocator locator;\n+ bool created; /* creates a new relfilenode (CREATE/ALTER) */\n\nI am not sure what are the repercussions of adding a member to an existing WAL\nrecord. I didn't see any code which handles the old WAL format which doesn't\ncontain the \"created\" flag. IIUC, the logical decoding may come across\na WAL record written in the old format after upgrade and restart. Is\nthat not possible?\n\nBut I don't think it's necessary. We can add a\ndecoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\nin XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\nas is. Of course we will add non-sequence relfilelocators as well but that\nshould be fine. Creating a new relfilelocator shouldn't be a frequent\noperation. If at all we are worried about that, we can add only the\nrelfilenodes associated with sequences to the hash table.\n\nIf this idea has been discussed earlier, please point me to the relevant\ndiscussion.\n\nSome other minor comments and nitpicks.\n\n<function>stream_stop_cb</function>, <function>stream_abort_cb</function>,\n<function>stream_commit_cb</function>, and <function>stream_change_cb</function>\n- are required, while <function>stream_message_cb</function> and\n+ are required, while <function>stream_message_cb</function>,\n+ <function>stream_sequence_cb</function> and\n\nLike the non-streaming counterpart, should we also mention what happens if those\ncallbacks are not defined? 
That applies to stream_message_cb and\nstream_truncate_cb too.\n+ /*\n+ * Make sure the subtransaction has a XID assigned, so that the sequence\n+ * increment WAL record is properly associated with it. This matters for\n+ * increments of sequences created/altered in the transaction, which are\n+ * handled as transactional.\n+ */\n+ if (XLogLogicalInfoActive())\n+ GetCurrentTransactionId();\n\nGetCurrentTransactionId() will also assign xids to all the parents so it\ndoesn't seem necessary to call both GetTopTransactionId() and\nGetCurrentTransactionId(). Calling only the latter should suffice. Applies to\nall the calls to GetCurrentTransactionId().\n\n+\n+ memcpy(((char *) tuple->tuple.t_data),\n+ data + sizeof(xl_seq_rec),\n+ SizeofHeapTupleHeader);\n+\n+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,\n+ data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,\n+ datalen);\n\nThe memory chunks being copied in these memcpy calls are contiguous. Why don't\nwe use a single memcpy? For readability?\n\n+ * If we don't have snapshot or we are just fast-forwarding, there is no\n+ * point in decoding messages.\n\ns/decoding messages/decoding sequence changes/\n\n+ tupledata = XLogRecGetData(r);\n+ datalen = XLogRecGetDataLen(r);\n+ tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);\n+\n+ /* extract the WAL record, with \"created\" flag */\n+ xlrec = (xl_seq_rec *) XLogRecGetData(r);\n\nI think we should set tupledata = xlrec + sizeof(xl_seq_rec) so that it points\nto actual tuple data. This will also simplify the calculations in\nDecodeSeqTule().\n+/* entry for hash table we use to track sequences created in running xacts */\n\ns/running/transaction being decoded/ ?\n\n+\n+ /* search the lookup table (we ignore the return value, found is enough) */\n+ ent = hash_search(rb->sequences,\n+ (void *) &rlocator,\n+ created ? HASH_ENTER : HASH_FIND,\n+ &found);\n\nMisleading comment. 
We seem to be using the return value later.\n\n+ /*\n+ * When creating the sequence, remember the XID of the transaction\n+ * that created id.\n+ */\n+ if (created)\n+ ent->xid = xid;\n\nShould we set ent->locator as well? The sequence won't get cleaned otherwise.\n\n+\n+ TeardownHistoricSnapshot(false);\n+\n+ AbortCurrentTransaction();\n\nThis call to AbortCurrentTransaction() in PG_TRY should be called if only this\nblock started the transaction?\n\n+ PG_CATCH();\n+ {\n+ TeardownHistoricSnapshot(true);\n+\n+ AbortCurrentTransaction();\n\nShouldn't we do this only if this block started the transaction? And in that\ncase, wouldn't PG_RE_THROW take care of it?\n\n+/*\n+ * Helper function for ReorderBufferProcessTXN for applying sequences.\n+ */\n+static inline void\n+ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,\n+ Relation relation, ReorderBufferChange *change,\n+ bool streaming)\n\nPossibly we should find a way to call this function from\nReorderBufferQueueSequence() when processing non-transactional sequence change.\nIt should probably absorb logic common to both the cases.\n\n+\n+ if (RelationIsLogicallyLogged(relation))\n+ ReorderBufferApplySequence(rb, txn, relation, change, streaming);\n\nThis condition is not used in ReorderBufferQueueSequence() when processing\nnon-transactional change there. Why?\n+\n+ if (len)\n+ {\n+ memcpy(data, &tup->tuple, sizeof(HeapTupleData));\n+ data += sizeof(HeapTupleData);\n+\n+ memcpy(data, tup->tuple.t_data, len);\n+ data += len;\n+ }\n+\n\nWe are just copying the sequence data. Shouldn't we copy the file locator as\nwell or that's not needed once the change has been queued? Similarly for\nReorderBufferChangeSize() and ReorderBufferChangeSize()\n\n+ /*\n+ * relfilenode => XID lookup table for sequences created in a transaction\n+ * (also includes altered sequences, which assigns new relfilenode)\n+ */\n+ HTAB *sequences;\n+\n\nBetter renamed as seq_rel_locator or some such. 
Shouldn't this be part of\nReorderBufferTxn which has similar transaction specific hashes.\n\nI will continue reviewing the remaining patches.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 23 Jun 2023 18:48:25 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Regarding the patchsets, I think we will need to rearrange the\ncommits. Right now 0004 has some parts that should have been in 0001.\nAlso the logic to assign XID to a subtrasaction be better a separate\ncommit. That piece is independent of logical decoding of sequences.\n\nOn Fri, Jun 23, 2023 at 6:48 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Jun 13, 2023 at 11:01 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 5/18/23 16:23, Ashutosh Bapat wrote:\n> > > Hi,\n> > > Sorry for jumping late in this thread.\n> > >\n> > > I started experimenting with the functionality. Maybe something that\n> > > was already discussed earlier. Given that the thread is being\n> > > discussed for so long and has gone several changes, revalidating the\n> > > functionality is useful.\n> > >\n> > > I considered following aspects:\n> > > Changes to the sequence on subscriber\n> > > -----------------------------------------------------\n> > > 1. Since this is logical decoding, logical replica is writable. So the\n> > > logically replicated sequence can be manipulated on the subscriber as\n> > > well. This implementation consolidates the changes on subscriber and\n> > > publisher rather than replicating the publisher state as is. That's\n> > > good. See example command sequence below\n> > > a. publisher calls nextval() - this sets the sequence state on\n> > > publisher as (1, 32, t) which is replicated to the subscriber.\n> > > b. subscriber calls nextval() once - this sets the sequence state on\n> > > subscriber as (34, 32, t)\n> > > c. subscriber calls nextval() 32 times - on-disk state of sequence\n> > > doesn't change on subscriber\n> > > d. subscriber calls nextval() 33 times - this sets the sequence state\n> > > on subscriber as (99, 0, t)\n> > > e. publisher calls nextval() 32 times - this sets the sequence state\n> > > on publisher as (33, 0, t)\n> > >\n> > > The on-disk state on publisher at the end of e. 
is replicated to the\n> > > subscriber but subscriber doesn't apply it. The state there is still\n> > > (99, 0, t). I think this is closer to how logical replication of\n> > > sequence should look like. This is aso good enough as long as we\n> > > expect the replication of sequences to be used for failover and\n> > > switchover.\n> > >\n> >\n> > I'm really confused - are you describing what the patch is doing, or\n> > what you think it should be doing? Because right now there's nothing\n> > that'd \"consolidate\" the changes (in the sense of reconciling write\n> > conflicts), and there's absolutely no way to do that.\n> >\n> > So if the subscriber advances the sequence (which it technically can),\n> > the subscriber state will be eventually be discarded and overwritten\n> > when the next increment gets decoded from WAL on the publisher.\n>\n> I described what I observed in my experiments. My observation doesn't\n> agree with your description. I will revisit this when I review the\n> output plugin changes and the WAL receiver changes.\n>\n> >\n> > Yes, I agree with this. It's probably better to replicate just the next\n> > value, without the log_cnt / is_called fields (which are implementation\n> > specific).\n>\n> Ok. I will review the logic once you revise the patches.\n>\n> >\n> > >\n> > > 3. Primary key sequences\n> > > -----------------------------------\n> > > I am not experimented with this. But I think we will need to add the\n> > > sequences associated with the primary keys to the publications\n> > > publishing the owner tables. Otherwise, we will have problems with the\n> > > failover. And it needs to be done automatically since a. the names of\n> > > these sequences are generated automatically b. publications with FOR\n> > > ALL TABLES will add tables automatically and start replicating the\n> > > changes. 
Users may not be able to intercept the replication activity\n> > > to add the associated sequences are also addedto the publication.\n> > >\n> >\n> > Right, this idea was mentioned before, and I agree maybe we should\n> > consider adding some of those \"automatic\" sequences automatically.\n> >\n>\n> Are you planning to add this in the same patch set or separately?\n>\n> I reviewed 0001 and related parts of 0004 and 0008 in detail.\n>\n> I have only one major change request, about\n> typedef struct xl_seq_rec\n> {\n> RelFileLocator locator;\n> + bool created; /* creates a new relfilenode (CREATE/ALTER) */\n>\n> I am not sure what are the repercussions of adding a member to an existing WAL\n> record. I didn't see any code which handles the old WAL format which doesn't\n> contain the \"created\" flag. IIUC, the logical decoding may come across\n> a WAL record written in the old format after upgrade and restart. Is\n> that not possible?\n>\n> But I don't think it's necessary. We can add a\n> decoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\n> in XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\n> as is. Of course we will add non-sequence relfilelocators as well but that\n> should be fine. Creating a new relfilelocator shouldn't be a frequent\n> operation. 
If at all we are worried about that, we can add only the\n> relfilenodes associated with sequences to the hash table.\n>\n> If this idea has been discussed earlier, please point me to the relevant\n> discussion.\n>\n> Some other minor comments and nitpicks.\n>\n> <function>stream_stop_cb</function>, <function>stream_abort_cb</function>,\n> <function>stream_commit_cb</function>, and <function>stream_change_cb</function>\n> - are required, while <function>stream_message_cb</function> and\n> + are required, while <function>stream_message_cb</function>,\n> + <function>stream_sequence_cb</function> and\n>\n> Like the non-streaming counterpart, should we also mention what happens if those\n> callbacks are not defined? That applies to stream_message_cb and\n> stream_truncate_cb too.\n> + /*\n> + * Make sure the subtransaction has a XID assigned, so that the sequence\n> + * increment WAL record is properly associated with it. This matters for\n> + * increments of sequences created/altered in the transaction, which are\n> + * handled as transactional.\n> + */\n> + if (XLogLogicalInfoActive())\n> + GetCurrentTransactionId();\n>\n> GetCurrentTransactionId() will also assign xids to all the parents so it\n> doesn't seem necessary to call both GetTopTransactionId() and\n> GetCurrentTransactionId(). Calling only the latter should suffice. Applies to\n> all the calls to GetCurrentTransactionId().\n>\n> +\n> + memcpy(((char *) tuple->tuple.t_data),\n> + data + sizeof(xl_seq_rec),\n> + SizeofHeapTupleHeader);\n> +\n> + memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,\n> + data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,\n> + datalen);\n>\n> The memory chunks being copied in these memcpy calls are contiguous. Why don't\n> we use a single memcpy? 
For readability?\n>\n> + * If we don't have snapshot or we are just fast-forwarding, there is no\n> + * point in decoding messages.\n>\n> s/decoding messages/decoding sequence changes/\n>\n> + tupledata = XLogRecGetData(r);\n> + datalen = XLogRecGetDataLen(r);\n> + tuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);\n> +\n> + /* extract the WAL record, with \"created\" flag */\n> + xlrec = (xl_seq_rec *) XLogRecGetData(r);\n>\n> I think we should set tupledata = xlrec + sizeof(xl_seq_rec) so that it points\n> to actual tuple data. This will also simplify the calculations in\n> DecodeSeqTule().\n> +/* entry for hash table we use to track sequences created in running xacts */\n>\n> s/running/transaction being decoded/ ?\n>\n> +\n> + /* search the lookup table (we ignore the return value, found is enough) */\n> + ent = hash_search(rb->sequences,\n> + (void *) &rlocator,\n> + created ? HASH_ENTER : HASH_FIND,\n> + &found);\n>\n> Misleading comment. We seem to be using the return value later.\n>\n> + /*\n> + * When creating the sequence, remember the XID of the transaction\n> + * that created id.\n> + */\n> + if (created)\n> + ent->xid = xid;\n>\n> Should we set ent->locator as well? The sequence won't get cleaned otherwise.\n>\n> +\n> + TeardownHistoricSnapshot(false);\n> +\n> + AbortCurrentTransaction();\n>\n> This call to AbortCurrentTransaction() in PG_TRY should be called if only this\n> block started the transaction?\n>\n> + PG_CATCH();\n> + {\n> + TeardownHistoricSnapshot(true);\n> +\n> + AbortCurrentTransaction();\n>\n> Shouldn't we do this only if this block started the transaction? 
And in that\n> case, wouldn't PG_RE_THROW take care of it?\n>\n> +/*\n> + * Helper function for ReorderBufferProcessTXN for applying sequences.\n> + */\n> +static inline void\n> +ReorderBufferApplySequence(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> + Relation relation, ReorderBufferChange *change,\n> + bool streaming)\n>\n> Possibly we should find a way to call this function from\n> ReorderBufferQueueSequence() when processing non-transactional sequence change.\n> It should probably absorb logic common to both the cases.\n>\n> +\n> + if (RelationIsLogicallyLogged(relation))\n> + ReorderBufferApplySequence(rb, txn, relation, change, streaming);\n>\n> This condition is not used in ReorderBufferQueueSequence() when processing\n> non-transactional change there. Why?\n> +\n> + if (len)\n> + {\n> + memcpy(data, &tup->tuple, sizeof(HeapTupleData));\n> + data += sizeof(HeapTupleData);\n> +\n> + memcpy(data, tup->tuple.t_data, len);\n> + data += len;\n> + }\n> +\n>\n> We are just copying the sequence data. Shouldn't we copy the file locator as\n> well or that's not needed once the change has been queued? Similarly for\n> ReorderBufferChangeSize() and ReorderBufferChangeSize()\n>\n> + /*\n> + * relfilenode => XID lookup table for sequences created in a transaction\n> + * (also includes altered sequences, which assigns new relfilenode)\n> + */\n> + HTAB *sequences;\n> +\n>\n> Better renamed as seq_rel_locator or some such. Shouldn't this be part of\n> ReorderBufferTxn which has similar transaction specific hashes.\n>\n> I will continue reviewing the remaining patches.\n>\n> --\n> Best Wishes,\n> Ashutosh Bapat\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 23 Jun 2023 18:54:08 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "This is review of 0003 patch. Overall the patch looks good and helps\nunderstand the decoding logic better.\n\n+ data\n+----------------------------------------------------------------------------------------\n+ BEGIN\n+ sequence public.test_sequence: transactional:1 last_value: 1\nlog_cnt: 0 is_called:0\n+ COMMIT\n\nLooking at this output, I am wondering how would this patch work with DDL\nreplication. I should have noticed this earlier, sorry. A sequence DDL has two\nparts, changes to the catalogs and changes to the data file. Support for\nreplicating the data file changes is added by these patches. The catalog\nchanges will need to be supported by DDL replication patch. When applying the\nDDL changes, there are two ways 1. just apply the catalog changes and let the\nsupport added here apply the data changes. 2. Apply both the changes. If the\nsecond route is chosen, all the \"transactional\" decoding and application\nsupport added by this patch will need to be ripped out. That will make the\n\"transactional\" field in the protocol will become useless. It has potential to\nbe waste bandwidth in future.\n\nOTOH, I feel that waiting for the DDL repliation patch set to be commtted will\ncause this patchset to be delayed for an unknown duration. That's undesirable\ntoo.\n\nOne solution I see is to use Storage RMID WAL again. While decoding it we send\na message to the subscriber telling it that a new relfilenode is being\nallocated to a sequence. The subscriber too then allocates new relfilenode to\nthe sequence. The sequence data changes are decoded without \"transactional\"\nflag; but they are decoded as transactional or non-transactional using the same\nlogic as the current patch-set. The subscriber will always apply these changes\nto the reflilenode associated with the sequence at that point in time. This\nwould have the same effect as the current patch-set. 
But then there is\npotential that the DDL replication patchset will render the Storage decoding\nuseless. So not an option. But anyway, I will leave this as a comment, an\nalternative thought considered and discarded. Also this might trigger a better idea.\n\nWhat do you think?\n\n+-- savepoint test on table with serial column\n+BEGIN;\n+CREATE TABLE test_table (a SERIAL, b INT);\n+INSERT INTO test_table (b) VALUES (100);\n+INSERT INTO test_table (b) VALUES (200);\n+SAVEPOINT a;\n+INSERT INTO test_table (b) VALUES (300);\n+ROLLBACK TO SAVEPOINT a;\n\nThe third implicit nextval won't be logged so whether the subtransaction is rolled\nback or committed, it won't have much effect on the decoding. Adding a\nsubtransaction around the first INSERT itself might be useful to test that the\nsubtransaction rollback does not roll back the sequence changes.\n\nAfter adding {'include_sequences', false} to the calls to\npg_logical_slot_get_changes() in other tests, the SQL statement has grown\nbeyond 80 characters. Need to split it into multiple lines.\n\n }\n+ else if (strcmp(elem->defname, \"include-sequences\") == 0)\n+ {\n+\n+ if (elem->arg == NULL)\n+ data->include_sequences = false;\n\nBy default include_sequences = true. Shouldn't it then be set to true here?\n\nAfter looking at the option processing code in\npg_logical_slot_get_changes_guts(), it looks like an argument can never be\nNULL. But I see we have checks for NULL values of other arguments so it's ok to\nkeep a NULL check here.\n\nI will look at 0004 next.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 26 Jun 2023 18:48:59 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 6/26/23 15:18, Ashutosh Bapat wrote:\n> This is review of 0003 patch. Overall the patch looks good and helps\n> understand the decoding logic better.\n> \n> + data\n> +----------------------------------------------------------------------------------------\n> + BEGIN\n> + sequence public.test_sequence: transactional:1 last_value: 1\n> log_cnt: 0 is_called:0\n> + COMMIT\n> \n> Looking at this output, I am wondering how would this patch work with DDL\n> replication. I should have noticed this earlier, sorry. A sequence DDL has two\n> parts, changes to the catalogs and changes to the data file. Support for\n> replicating the data file changes is added by these patches. The catalog\n> changes will need to be supported by DDL replication patch. When applying the\n> DDL changes, there are two ways 1. just apply the catalog changes and let the\n> support added here apply the data changes. 2. Apply both the changes. If the\n> second route is chosen, all the \"transactional\" decoding and application\n> support added by this patch will need to be ripped out. That will make the\n> \"transactional\" field in the protocol will become useless. It has potential to\n> be waste bandwidth in future.\n> \n\nI don't understand why would it need to be ripped out. Why would it make\nthe transactional behavior useless? Can you explain?\n\nIMHO we replicate either changes (and then DDL replication does not\ninterfere with that), or DDL (and then this patch should not interfere).\n\n> OTOH, I feel that waiting for the DDL repliation patch set to be commtted will\n> cause this patchset to be delayed for an unknown duration. That's undesirable\n> too.\n> \n> One solution I see is to use Storage RMID WAL again. While decoding it we send\n> a message to the subscriber telling it that a new relfilenode is being\n> allocated to a sequence. The subscriber too then allocates new relfilenode to\n> the sequence. 
The sequence data changes are decoded without \"transactional\"\n> flag; but they are decoded as transactional or non-transactional using the same\n> logic as the current patch-set. The subscriber will always apply these changes\n> to the reflilenode associated with the sequence at that point in time. This\n> would have the same effect as the current patch-set. But then there is\n> potential that the DDL replication patchset will render the Storage decoding\n> useless. So not an option. But anyway, I will leave this as a comment as an\n> alternative thought and discarded. Also this might trigger a better idea.\n> \n> What do you think?\n> \n\n\nI don't understand what the problem with DDL is, so I can't judge how\nthis is supposed to solve it.\n\n> +-- savepoint test on table with serial column\n> +BEGIN;\n> +CREATE TABLE test_table (a SERIAL, b INT);\n> +INSERT INTO test_table (b) VALUES (100);\n> +INSERT INTO test_table (b) VALUES (200);\n> +SAVEPOINT a;\n> +INSERT INTO test_table (b) VALUES (300);\n> +ROLLBACK TO SAVEPOINT a;\n> \n> The third implicit nextval won't be logged so whether subtransaction is rolled\n> back or committed, it won't have much effect on the decoding. Adding\n> subtransaction around the first INSERT itself might be useful to test that the\n> subtransaction rollback does not rollback the sequence changes.\n> \n> After adding {'include_sequences', false} to the calls to\n> pg_logical_slot_get_changes() in other tests, the SQL statement has grown\n> beyond 80 characters. Need to split it into multiple lines.\n> \n> }\n> + else if (strcmp(elem->defname, \"include-sequences\") == 0)\n> + {\n> +\n> + if (elem->arg == NULL)\n> + data->include_sequences = false;\n> \n> By default inlclude_sequences = true. Shouldn't then it be set to true here?\n> \n\nI don't follow. 
Is this still related to the DDL replication, or are you\ndescribing some new issue with savepoints?\n\n> After looking at the option processing code in\n> pg_logical_slot_get_changes_guts(), it looks like an argument can never be\n> NULL. But I see we have checks for NULL values of other arguments so it's ok to\n> keep a NULL check here.\n> \n> I will look at 0004 next.\n> \n\nOK\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 26 Jun 2023 17:05:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Jun 26, 2023 at 8:35 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 6/26/23 15:18, Ashutosh Bapat wrote:\n> > This is review of 0003 patch. Overall the patch looks good and helps\n> > understand the decoding logic better.\n> >\n> > + data\n> > +----------------------------------------------------------------------------------------\n> > + BEGIN\n> > + sequence public.test_sequence: transactional:1 last_value: 1\n> > log_cnt: 0 is_called:0\n> > + COMMIT\n> >\n> > Looking at this output, I am wondering how would this patch work with DDL\n> > replication. I should have noticed this earlier, sorry. A sequence DDL has two\n> > parts, changes to the catalogs and changes to the data file. Support for\n> > replicating the data file changes is added by these patches. The catalog\n> > changes will need to be supported by DDL replication patch. When applying the\n> > DDL changes, there are two ways 1. just apply the catalog changes and let the\n> > support added here apply the data changes. 2. Apply both the changes. If the\n> > second route is chosen, all the \"transactional\" decoding and application\n> > support added by this patch will need to be ripped out. That will make the\n> > \"transactional\" field in the protocol will become useless. It has potential to\n> > be waste bandwidth in future.\n> >\n>\n> I don't understand why would it need to be ripped out. Why would it make\n> the transactional behavior useless? Can you explain?\n>\n> IMHO we replicate either changes (and then DDL replication does not\n> interfere with that), or DDL (and then this patch should not interfere).\n>\n> > OTOH, I feel that waiting for the DDL repliation patch set to be commtted will\n> > cause this patchset to be delayed for an unknown duration. That's undesirable\n> > too.\n> >\n> > One solution I see is to use Storage RMID WAL again. 
While decoding it we send\n> > a message to the subscriber telling it that a new relfilenode is being\n> > allocated to a sequence. The subscriber too then allocates new relfilenode to\n> > the sequence. The sequence data changes are decoded without \"transactional\"\n> > flag; but they are decoded as transactional or non-transactional using the same\n> > logic as the current patch-set. The subscriber will always apply these changes\n> > to the reflilenode associated with the sequence at that point in time. This\n> > would have the same effect as the current patch-set. But then there is\n> > potential that the DDL replication patchset will render the Storage decoding\n> > useless. So not an option. But anyway, I will leave this as a comment as an\n> > alternative thought and discarded. Also this might trigger a better idea.\n> >\n> > What do you think?\n> >\n>\n>\n> I don't understand what the problem with DDL is, so I can't judge how\n> this is supposed to solve it.\n\nI have not looked at the DDL replication patch in detail so I may be\nmissing something. IIUC, that patch replicates the DDL statement in\nsome form: parse tree or statement. But it doesn't replicate the some\nor all WAL records that the DDL execution generates.\n\nConsider DDL \"ALTER SEQUENCE test_sequence RESTART WITH 4000;\". It\nupdates the catalogs with a new relfilenode and also the START VALUE.\nIt also writes to the new relfilenode. When publisher replicates the\nDDL and the subscriber applies it, it will do the same - update the\ncatalogs and write to new relfilenode. We don't want the sequence data\nto be replicated again when it's changed by a DDL. All the\ntransactional changes are associated with a DDL. Other changes to the\ndata sequence are non-transactional. So when replicating the sequence\ndata changes, \"transactional\" field becomes useless. 
What I am\npointing to is: if we add \"transactional\" field in the protocol today\nand in future DDL replication is implemented in a way that\n\"transactional\" field becomes redundant, we have introduced a\nredundant field which will eat a byte on wire. Of course we can\nremove it by bumping protocol version, but that's some work.\n\nPlease note we will still need the code to determine whether a change\nin sequence data is transactional or not IOW whether it's associated\nwith DDL or not. So that code remains.\n\n> >\n> > }\n> > + else if (strcmp(elem->defname, \"include-sequences\") == 0)\n> > + {\n> > +\n> > + if (elem->arg == NULL)\n> > + data->include_sequences = false;\n> >\n> > By default inlclude_sequences = true. Shouldn't then it be set to true here?\n> >\n>\n> I don't follow. Is this still related to the DDL replication, or are you\n> describing some new issue with savepoints?\n\nNot related to DDL replication. Not an issue with savepoints either.\nJust a comment about that particular change. So for not being clear.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 27 Jun 2023 11:30:40 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
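The message above distinguishes sequence changes that are tied to a DDL (transactional) from ordinary `nextval()` advances (non-transactional). As a rough illustration of that distinction — this is a hypothetical sketch, not PostgreSQL source, and `created_by_xact` stands in for the patch's `txn->sequences` hash:

```python
# Hypothetical sketch: classifying a decoded sequence change.
# In the patch-set under discussion, a change is transactional when it
# targets a relfilenode created by a still-in-progress transaction
# (e.g. ALTER SEQUENCE ... RESTART assigns a new relfilenode); any
# change to a pre-existing relfilenode is applied immediately.

def classify_sequence_change(relfilenode, created_by_xact):
    """Return 'transactional' if the change targets a relfilenode the
    current transaction created, else 'non-transactional'.

    created_by_xact: set of relfilenodes created by the in-progress
    transaction (analogous to txn->sequences in the patch).
    """
    if relfilenode in created_by_xact:
        return "transactional"
    return "non-transactional"

# ALTER SEQUENCE created relfilenode 42 inside the transaction:
xact_created = {42}
assert classify_sequence_change(42, xact_created) == "transactional"
# A plain nextval() on an old relfilenode is non-transactional:
assert classify_sequence_change(17, xact_created) == "non-transactional"
```

As the message notes, this classification code would still be needed even if DDL replication later made the on-wire `transactional` flag redundant.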
{
"msg_contents": "On Mon, Jun 26, 2023 at 8:35 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 6/26/23 15:18, Ashutosh Bapat wrote:\n\n> > I will look at 0004 next.\n> >\n>\n> OK\n\n\n0004- is quite large. I think if we split this into two or even three\n1. publication and\nsubscription catalog handling 2. built-in replication protocol changes, it\nmight be easier to review. But anyway, I have given it one read. I have\nreviewed the parts which deal with the replication-proper in detail. I have\n*not* thoroughly reviewed the parts which deal with the catalogs, pg_dump,\ndescribe and tab completion. Similarly tests. If those parts need a\nthorough review, please let\nme know.\n\nBut before jumping into the comments, a weird scenario I tried. On publisher I\ncreated a table t1(a int, b int) and a sequence s and added both to a\npublication. On subscriber I swapped their names i.e. created a table s(a int, b\nint) and a sequence t1 and subscribed to the publication. The subscription was\ncreated, and during replication it threw error \"logical replication target\nrelation \"public.t1\" is missing replicated columns: \"a\", \"b\" and logical\nreplication target relation \"public.s\" is missing replicated columns:\n\"last_value\", \"lo g_cnt\", \"is_called\". I think it's good that it at least\nthrew an error. But it would be good if it detected that the reltypes\nthemselves are different and mentioned that in the error. Something like\n\"logical replication target \"public.s\" is not a sequence like source\n\"public.s\".\n\nComments on the patch itself.\n\nI didn't find any mention of 'sequence' in the documentation of publish option\nin CREATE or ALTER PUBLICATION. Something missing in the documentation? But do\nwe really need to record \"sequence\" as an operation? Just adding the sequences\nto the publication should be fine right? 
There's only one operation on\nsequences, updating the sequence row.\n\n+CREATE VIEW pg_publication_sequences AS\n+ SELECT\n+ P.pubname AS pubname,\n+ N.nspname AS schemaname,\n+ C.relname AS sequencename\n\nIf we report oid or regclass for sequences it might be easier to join the view\nfurther. We don't have reg* for publication so we report both oid and\nname of publication.\n\n+/*\n+ * Update the sequence state by modifying the existing sequence data row.\n+ *\n+ * This keeps the same relfilenode, so the behavior is non-transactional.\n+ */\n+static void\n+SetSequence_non_transactional(Oid seqrelid, int64 last_value, int64\nlog_cnt, bool is_called)\n\nThis function has some code similar to nextval but with the sequence\nof operations (viz. changes to buffer, WAL insert and cache update) changed.\nGiven the comments in nextval_internal() the difference in sequence of\noperations should not make a difference in the end result. But I think it will\nbe good to deduplicate the code to avoid confusion and also for ease of\nmaintenance.\n\n+\n+/*\n+ * Update the sequence state by creating a new relfilenode.\n+ *\n+ * This creates a new relfilenode, to allow transactional behavior.\n+ */\n+static void\n+SetSequence_transactional(Oid seq_relid, int64 last_value, int64\nlog_cnt, bool is_called)\n\nNeed some deduplication here as well. But the similarities with AlterSequence,\nResetSequence or DefineSequence are less.\n\n@@ -730,9 +731,9 @@ CreateSubscription(ParseState *pstate,\nCreateSubscriptionStmt *stmt,\n {\n /*\n- * Get the table list from publisher and build local table status\n- * info.\n+ * Get the table and sequence list from publisher and build\n+ * local relation sync status info.\n */\n- tables = fetch_table_list(wrconn, publications);\n- foreach(lc, tables)\n+ relations = fetch_table_list(wrconn, publications);\n\nIs it allowed to connect a newer subscriber to an old publisher? 
If\nyes the query\nto fetch sequences will throw an error since it won't find the catalog.\n\n@@ -882,8 +886,10 @@ AlterSubscription_refresh(Subscription *sub, bool\ncopy_data,\n- /* Get the table list from publisher. */\n+ /* Get the list of relations from publisher. */\n pubrel_names = fetch_table_list(wrconn, sub->publications);\n+ pubrel_names = list_concat(pubrel_names,\n+ fetch_sequence_list(wrconn,\nsub->publications));\n\nSimilarly here.\n\n+void\n+logicalrep_write_sequence(StringInfo out, Relation rel, TransactionId xid,\n+\n... snip ...\n+ pq_sendint8(out, flags);\n+ pq_sendint64(out, lsn);\n... snip ...\n+LogicalRepRelId\n+logicalrep_read_sequence(StringInfo in, LogicalRepSequence *seqdata)\n+{\n... snip ...\n+ /* XXX skipping flags and lsn */\n+ pq_getmsgint(in, 1);\n+ pq_getmsgint64(in);\n\nWe are ignoring these two fields on the WAL receiver side. I don't see such\nfields being part of INSERT, UPDATE or DELETE messages. Should we just drop\nthose or do they have some future use? Two lsns are written by\nOutputPrepareWrite() as prologue to the logical message. If this LSN\nis one of them, it could be dropped anyway.\n\n\n+static void\n+fetch_sequence_data(char *nspname, char *relname,\n... snip ...\n+ appendStringInfo(&cmd, \"SELECT last_value, log_cnt, is_called\\n\"\n+ \" FROM %s\",\nquote_qualified_identifier(nspname, relname));\n\nWe are using an undocumented interface here. SELECT ... FROM <sequence> is not\ndocumented. This code will break if we change the way a sequence is stored.\nThat is quite unlikely but not impossible. Ideally we should use one of the\nmethods documented at [1]. But none of them provide us what is needed per your\ncomment in copy_sequence() i.e the state of sequence as of last WAL record on\nthat sequence. 
So I don't have any better ideas that what's done in the patch.\nMay be we can use \"nextval() + 32\" as an approximation.\n\nSome minor comments and nitpicks:\n\n@@ -1958,12 +1958,14 @@ get_object_address_publication_schema(List\n*object, bool missing_ok)\n\nNeed an update to the function prologue with the description of the third\nelement. Also the error message at the end of the function needs to mention the\nobject type.\n\n- appendStringInfo(&buffer, _(\"publication of schema %s\nin publication %s\"),\n- nspname, pubname);\n+ appendStringInfo(&buffer, _(\"publication of schema %s\nin publication %s type %s\"),\n+ nspname, pubname, objtype);\n\ns/type/for object type/ ?\n\n\n@@ -5826,18 +5842,24 @@ getObjectIdentityParts(const ObjectAddress *object,\n\n break;\n- appendStringInfo(&buffer, \"%s in publication %s\",\n- nspname, pubname);\n+ appendStringInfo(&buffer, \"%s in publication %s type %s\",\n+ nspname, pubname, objtype);\n\ns/type/object type/? ... in some other places as well?\n\n\n+/*\n+ * Check the character is a valid object type for schema publication.\n+ *\n+ * This recognizes either 't' for tables or 's' for sequences. Places that\n+ * need to handle 'u' for unsupported relkinds need to do that explicitlyl\n\ns/explicitlyl/explicitly/\n\n+Datum\n+pg_get_publication_sequences(PG_FUNCTION_ARGS)\n+{\n ... 
snip ...\n+ /*\n+ * Publications support partitioned tables, although all changes are\n+ * replicated using leaf partition identity and schema, so we only\n+ * need those.\n+ */\n\nNot relevant here.\n\n+ if (publication->allsequences)\n+ sequences = GetAllSequencesPublicationRelations();\n+ else\n+ {\n+ List *relids,\n+ *schemarelids;\n+\n+ relids = GetPublicationRelations(publication->oid,\n+ PUB_OBJTYPE_SEQUENCE,\n+ publication->pubviaroot ?\n+ PUBLICATION_PART_ROOT :\n+ PUBLICATION_PART_LEAF);\n+ schemarelids = GetAllSchemaPublicationRelations(publication->oid,\n+\nPUB_OBJTYPE_SEQUENCE,\n+\npublication->pubviaroot ?\n+\nPUBLICATION_PART_ROOT :\n+\nPUBLICATION_PART_LEAF);\n\nI think we should just pass PUBLICATION_PART_ALL since that parameter is\nirrelevant to sequences anyway. Otherwise this code would be confusing.\n\nI think we should rename PublicationTable structure to PublicationRelation\nsince it can now contain information about a table or a sequence, both of which\nare relations.\n\n+/*\n+ * Add or remove table to/from publication.\n\ns/table/sequence/. Generally this applies to all the code, working for tables,\ncopied and modified for sequences.\n\n@@ -18826,6 +18867,30 @@ preprocess_pubobj_list(List *pubobjspec_list,\ncore_yyscan_t yyscanner)\n errmsg(\"invalid schema name\"),\n parser_errposition(pubobj->location));\n }\n+ else if (pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_SCHEMA ||\n+ pubobj->pubobjtype == PUBLICATIONOBJ_SEQUENCES_IN_CUR_SCHEMA)\n+ {\n+ /* WHERE clause is not allowed on a schema object */\n+ if (pubobj->pubtable && pubobj->pubtable->whereClause)\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"WHERE clause not allowed for schema\"),\n+ parser_errposition(pubobj->location));\n\nGrammar doesn't allow specifying whereClause with ALL TABLES IN SCHEMA\nspecification but we have code to throw error if that happens. We also have\nsimilar code for ALL SEQUENCES IN SCHEMA. 
Should we add for SEQUENCE\nspecification as well?\n\n+static void\n+fetch_sequence_data(char *nspname, char *relname,\n... snip ...\n+ /* tablesync sets the sequences in non-transactional way */\n+ SetSequence(RelationGetRelid(rel), false, last_value, log_cnt, is_called);\nWhy? In case of a regular table, in case the sync fails, the table will retain\nits state before sync. Similarly it will be expected that the sequence retains\nits state before sync, No?\n\n@@ -1467,10 +1557,21 @@ LogicalRepSyncTableStart(XLogRecPtr *origin_startpos)\n\nNow that it syncs sequences as well, should we rename this as\nLogicalRepSyncRelationStart?\n\n+static void\n+apply_handle_sequence(StringInfo s)\n... snip ...\n+ /*\n+ * Commit the per-stream transaction (we only do this when not in\n+ * remote transaction, i.e. for non-transactional sequence updates.)\n+ */\n+ if (!in_remote_transaction)\n+ CommitTransactionCommand();\n\nI understand the purpose of if block. It commits the transaction that was\nstarted when applying a non-transactional sequence change. But didn't\nunderstand the term \"per-stream transaction\".\n\n@@ -5683,8 +5686,15 @@ RelationBuildPublicationDesc(Relation relation,\nPublicationDesc *pubdesc)\n\nThanks for the additional comments. Those are useful.\n\n@@ -1716,28 +1716,19 @@ describeOneTableDetails(const char *schemaname,\n\nI think these changes make it easy to print the publication description per the\ncode changes later. But May be we should commit the refactoring patch\nseparately.\n\n-DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_index,\n6239, PublicationNamespacePnnspidPnpubidIndexId, on\npg_publication_namespace using btree(pnnspid oid_ops, pnpubid\noid_ops));\n+DECLARE_UNIQUE_INDEX(pg_publication_namespace_pnnspid_pnpubid_pntype_index,\n8903, PublicationNamespacePnnspidPnpubidPntypeIndexId, on\npg_publication_namespace using btree(pnnspid oid_ops, pnpubid oid_ops,\npntype char_ops));\n\nWhy do we need a new OID? 
The old index should not be there in a cluster\ncreated using this version and hence this OID will not be used.\n\n[1] https://www.postgresql.org/docs/current/functions-sequence.html\n\nNext I will review 0005.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 4 Jul 2023 18:43:16 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
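The review above flags that `copy_sequence()` relies on the undocumented `SELECT last_value, log_cnt, is_called FROM <sequence>` form, and floats `nextval() + 32` as an approximation. A hypothetical sketch of the two alternatives — the query strings here are illustrative only, and the simplistic identifier quoting is an assumption, not the patch's `quote_qualified_identifier()`:

```python
# Hypothetical sketch of the initial-sync state fetch discussed above.
# The exact query depends on the internal storage of sequences; the
# approximation uses only documented interfaces but consumes values
# and assumes the default WAL prefetch of 32 (SEQ_LOG_VALS).

def initial_sync_query(nspname, relname, use_approximation=False):
    qualified = f'"{nspname}"."{relname}"'  # simplistic quoting
    if use_approximation:
        # Documented interface, but only approximate (and has side effects):
        return f"SELECT nextval('{qualified}') + 32"
    # Exact on-disk state, but tied to the internal storage layout:
    return f"SELECT last_value, log_cnt, is_called FROM {qualified}"

q = initial_sync_query("public", "s")
assert q == 'SELECT last_value, log_cnt, is_called FROM "public"."s"'
assert initial_sync_query("public", "s", use_approximation=True).endswith("+ 32")
```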
{
"msg_contents": "0005, 0006 and 0007 are all related to the initial sequence sync. [3]\nresulted in 0007 and I think we need it. That leaves 0005 and 0006 to\nbe reviewed in this response.\n\nI followed the discussion starting [1] till [2]. The second one\nmentions the interlock mechanism which has been implemented in 0005\nand 0006. While I don't have an objection to allowing LOCKing a\nsequence using the LOCK command, I am not sure whether it will\nactually work or is even needed.\n\nThe problem described in [1] seems to be the same as the problem\ndescribed in [2]. In both cases we see the sequence moving backwards\nduring CATCHUP. At the end of catchup the sequence is in the right\nstate in both the cases. [2] actually deems this behaviour OK. I also\nagree that the behaviour is ok. I am confused whether we have solved\nanything using interlocking and it's really needed.\n\nI see that the idea of using an LSN to decide whether or not to apply\na change to sequence started in [4]. In [5] Tomas proposed to use page\nLSN. Looking at [6], it actually seems like a good idea. In [7] Tomas\nagreed that LSN won't be sufficient. But I don't understand why. There\nare three LSNs in the picture - restart LSN of sync slot,\nconfirmed_flush LSN of sync slot and page LSN of the sequence page\nfrom where we read the initial state of the sequence. I think they can\nbe used with the following rules:\n1. The publisher will not send any changes with LSN less than\nconfirmed_flush so we are good there.\n2. Any non-transactional changes that happened between confirmed_flush\nand page LSN should be discarded while syncing. They are already\nvisible to SELECT.\n3. Any transactional changes with commit LSN between confirmed_flush\nand page LSN should be discarded while syncing. They are already\nvisible to SELECT.\n4. A DDL acquires a lock on sequence. 
Thus no other change to that\nsequence can have an LSN between the LSN of the change made by DDL and\nthe commit LSN of that transaction. Only DDL changes to sequence are\ntransactional. Hence any transactional changes with commit LSN beyond\npage LSN would not have been seen by the SELECT otherwise SELECT would\nsee the page LSN committed by that transaction. so they need to be\napplied while syncing.\n5. Any non-transactional changes beyond page LSN should be applied.\nThey are not seen by SELECT.\n\nAm I missing something?\n\nI don't have an idea how to get page LSN via a SQL query (while also\nfetching data on that page). That may or may not be a challenge.\n\n[1] https://www.postgresql.org/message-id/c2799362-9098-c7bf-c315-4d7975acafa3%40enterprisedb.com\n[2] https://www.postgresql.org/message-id/2d4bee7b-31be-8b36-2847-a21a5d56e04f%40enterprisedb.com\n[3] https://www.postgresql.org/message-id/f5a9d63d-a6fe-59a9-d1ed-38f6a5582c13%40enterprisedb.com\n[4] https://www.postgresql.org/message-id/CAA4eK1KUYrXFq25xyjBKU1UDh7Dkzw74RXN1d3UAYhd4NzDcsg%40mail.gmail.com\n[5] https://www.postgresql.org/message-id/CAA4eK1LiA8nV_ZT7gNHShgtFVpoiOvwoxNsmP_fryP%3DPsYPvmA%40mail.gmail.com\n[6] https://www.postgresql.org/docs/current/storage-page-layout.html\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 5 Jul 2023 20:21:34 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
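The five rules proposed in the message above reduce to a single comparison once the relevant LSN for a change is fixed (the change's own LSN for non-transactional updates, the commit LSN for transactional ones). A minimal sketch, assuming plain integer LSNs:

```python
# Hypothetical sketch of the LSN-based apply/discard rules from the
# message above. A change already reflected in the sequence page copied
# at initial sync (LSN <= page_lsn) is discarded; anything newer is
# applied. confirmed_flush is the lower bound below which the publisher
# sends nothing at all (rule 1).

def should_apply(change_lsn, confirmed_flush_lsn, page_lsn):
    """Decide whether a sequence change streamed during catchup must be
    applied on the subscriber. change_lsn is the change's LSN; for
    transactional changes, use the commit LSN of the containing xact."""
    assert change_lsn >= confirmed_flush_lsn, \
        "publisher never sends changes below confirmed_flush (rule 1)"
    # Rules 2 and 3: already visible to the initial-sync SELECT.
    if change_lsn <= page_lsn:
        return False
    # Rules 4 and 5: not yet reflected in the copied page.
    return True

assert should_apply(150, 100, 200) is False  # already in the page
assert should_apply(250, 100, 200) is True   # beyond the page LSN
```

The open question the message ends with — how to read the page LSN in the same query that fetches the sequence row — is not addressed by this sketch.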
{
"msg_contents": "And the last patch 0008.\n\n@@ -1180,6 +1194,13 @@ AlterSubscription(ParseState *pstate,\nAlterSubscriptionStmt *stmt,\n... snip ...\n+ if (IsSet(opts.specified_opts, SUBOPT_SEQUENCES))\n+ {\n+ values[Anum_pg_subscription_subsequences - 1] =\n+ BoolGetDatum(opts.sequences);\n+ replaces[Anum_pg_subscription_subsequences - 1] = true;\n+ }\n+\n\nThe list of allowed options set a few lines above this code does not contain\n\"sequences\". Is this option missing there or this code is unnecessary? If we\nintend to add \"sequence\" at a later time after a subscription is created, will\nthe sequences be synced after ALTER SUBSCRIPTION?\n\n+ /*\n+ * ignore sequences when not requested\n+ *\n+ * XXX Maybe we should differentiate between \"callbacks not defined\" or\n+ * \"subscriber disabled sequence replication\" and \"subscriber does not\n+ * know about sequence replication\" (e.g. old subscriber version).\n+ *\n+ * For the first two it'd be fine to bail out here, but for the last it\n\nIt's not clear which two you are talking about. Maybe that's because the\nparagraph above is ambiguious. It is in the form of A or B and C; so not clear\nwhich cases we are differentiating between: (A, B, C), ((A or B) and C) or (A or\n(B and C)) or something else.\n\n+ * might be better to continue and error out only when the sequence\n+ * would be replicated (e.g. as part of the publication). We don't know\n+ * that here, unfortunately.\n\nPlease see comments on changes to pgoutput_startup() below. We may\nwant to change the paragraph accordingly.\n\n@@ -298,6 +298,20 @@ StartupDecodingContext(List *output_plugin_options,\n */\n ctx->reorder->update_progress_txn = update_progress_txn_cb_wrapper;\n\n+ /*\n+ * To support logical decoding of sequences, we require the sequence\n+ * callback. We decide it here, but only check it later in the wrappers.\n+ *\n+ * XXX Isn't it wrong to define only one of those callbacks? 
Say we\n+ * only define the stream_sequence_cb() - that may get strange results\n+ * depending on what gets streamed. Either none or both?\n\nI don't think the current condition is correct; it will consider sequence\nchanges to be streamed even when sequence_cb is not defined and actually not\nsend those. sequence_cb is needed to send sequence changes irrespective of\nwhether transaction streaming is supported. But stream_sequence_cb is required\nif other stream callbacks are available. Something like\n\nif (ctx->callbacks.sequence_cb)\n{\n if (ctx->streaming)\n {\n if ctx->callbacks.stream_sequence_cb == NULL)\n ctx->sequences = false;\n else\n ctx->sequences = true;\n }\n else\n ctx->sequences = true;\n}\nelse\n ctx->sequences = false;\n\n+ *\n+ * XXX Shouldn't sequence be defined at slot creation time, similar\n+ * to two_phase? Probably not.\n\nI don't know why two_phase is defined at the slot creation time, so can't\ncomment on this. But looks like something we need to answer before committing\nthe patches.\n\n+ /*\n+ * We allow decoding of sequences when the option is given at the streaming\n+ * start, provided the plugin supports all the callbacks for two-phase.\n\ns/two-phase/sequences/\n\n+ *\n+ * XXX Similar behavior to the two-phase block below.\n\nI think we need to describe sequence specific behaviour instead of pointing to\nthe two-phase. two-phase is part of in replication slot's on disk specification\nbut sequence is not. Given that it's XXX, I think you are planning to do that.\n\n+ *\n+ * XXX Shouldn't this error out if the callbacks are not defined?\n\nIsn't this already being done in pgoutput_startup()? Should we remove this XXX.\n\n+ /*\n+ * Here, we just check whether the sequences decoding option is passed\n+ * by plugin and decide whether to enable it at later point of time. It\n+ * remains enabled if the previous start-up has done so. 
But we only\n+ * allow the option to be passed in with sufficient version of the\n+ * protocol, and when the output plugin supports it.\n+ */\n+ if (!data->sequences)\n+ ctx->sequences_opt_given = false;\n+ else if (data->protocol_version <\nLOGICALREP_PROTO_SEQUENCES_VERSION_NUM)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"requested proto_version=%d does not\nsupport sequences, need %d or higher\",\n+ data->protocol_version,\nLOGICALREP_PROTO_SEQUENCES_VERSION_NUM)));\n+ else if (!ctx->sequences)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"sequences requested, but not supported\nby output plugin\")));\n\nIf a given output plugin doesn't implement the callbacks but subscription\nspecifies sequences, the code will throw an error whether or not publication is\npublishing sequences. Instead I think the behaviour should be same as the case\nwhen publication doesn't include sequences even if the publisher node has\nsequences. In either case publisher (the plugin or the publication) doesn't want\nto publish sequence data. So subscriber's request can be ignored.\n\nWhat might be good is to throw an error if the publication publishes the\nsequences but there are no callbacks - both output plugin and the publication\nare part of publisher node, thus it's easy for users to setup them consistently.\nGetPublicationRelations can be tweaked a bit to return just tables or sequences.\nThat along with publication's all sequences flag should tell us whether\npublication publishes any sequences or not.\n\nThat ends my first round of reviews.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 5 Jul 2023 20:24:34 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
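The callback check suggested in the review above (the C-like fragment with the nested `if`s) can be stated more compactly; here is a hedged Python rendering of the same decision — the callback names follow the patch, but this is a sketch, not the proposed C code:

```python
# Hypothetical sketch of the suggested check: sequence decoding is
# enabled only if sequence_cb is defined, and, when transaction
# streaming is enabled, stream_sequence_cb must be defined as well.

def sequences_enabled(sequence_cb, streaming, stream_sequence_cb):
    if sequence_cb is None:
        return False
    if streaming and stream_sequence_cb is None:
        return False
    return True

cb = lambda *args: None          # stand-in for a real output-plugin callback
assert sequences_enabled(cb, False, None) is True   # no streaming, base cb enough
assert sequences_enabled(cb, True, None) is False   # streaming needs stream cb
assert sequences_enabled(cb, True, cb) is True
assert sequences_enabled(None, True, cb) is False   # base callback missing
```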
{
"msg_contents": "On Tue, Jun 27, 2023 at 11:30 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> I have not looked at the DDL replication patch in detail so I may be\n> missing something. IIUC, that patch replicates the DDL statement in\n> some form: parse tree or statement. But it doesn't replicate the some\n> or all WAL records that the DDL execution generates.\n>\n\nYes, the DDL replication patch uses the parse tree and catalog\ninformation to generate a deparsed form of DDL statement which is WAL\nlogged and used to replicate DDLs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 13:11:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nhere's a rebased and significantly reworked version of this patch\nseries, based on the recent reviews and discussion. Let me go through\nthe main differences:\n\n\n1) reorder the patches to have the \"shortening\" of test output first\n\n\n2) merge the various \"fix\" patches in to the three main patches\n\n 0002 - introduce sequence decoding infrastructure\n 0003 - add sequences to test_decoding\n 0004 - add sequences to built-in replication\n\nI've kept those patches separate to make the evolution easier to follow\nand discuss, but it was necessary to cleanup the patch series and make\nit clearer what the current state is.\n\n\n3) simplify the replicated state\n\nAs suggested by Ashutosh, it may not be a good idea to replicate the\n(last_value, log_cnt, is_called) tuple, as that's pretty tightly tied to\nour internal implementation. Which may not be the right thing for other\nplugins. So this new patch replicates just \"value\" which is pretty much\n(last_value + log_cnt), representing the next value that should be safe\nto generate on the subscriber (in case of a failover).\n\n\n4) simplify test_decoding code & tests\n\nI realized I can ditch some of the test_decoding changes, because at\nsome point we chose to only include sequences in test_decoding when\nexplicitly requested. 
So the tests don't need to disable that, it's the\nother way - one test needs to enable it.\n\nThis now also prints the single value, instead of the three values.\n\n\n5) minor tweaks in the built-in replication\n\nThis adopts the relaxed LOCK code to allow locking sequences during the\ninitial sync, and also adopts the replication of a single value (this\naffects the \"apply\" side of that change too).\n\n\n6) simplified protocol versioning\n\nThe main open question I had was what to do about protocol versioning\nfor the built-in replication - how to decide whether the subscriber can\napply sequences, and what should happen if we decode sequence but the\nsubscriber does not support that.\n\nI was not entirely sure we want to handle this by a simple version\ncheck, because that maps capabilities to a linear scale, which seems\npretty limiting. That is, each protocol version just grows, and new\nversion number means support of a new capability - like replication of\ntwo-phase commits, or sequences. Which is nice, but it does not allow\nsupporting just the later feature, for example - you can't skip one.\nWhich is why 2PC decoding has both a version and a subscription flag,\nwhich allows exactly that ...\n\nWhen discussing this off-list with Peter Eisentraut, he reminded me of\nhis old message in the thread:\n\nhttps://www.postgresql.org/message-id/8046273f-ea88-5c97-5540-0ccd5d244fd4@enterprisedb.com\n\nwhere he advocates for exactly this simplified behavior. So I took a\nstab at it and 0005 should be doing that. 
I keep it as a separate patch\nfor now, to make the changes clearer, but ultimately it should be merged\ninto 0003 and 0004 parts.\n\nIt's not particularly complex change, it mostly ditches the subscription\noption (which also means columns in the pg_subscription catalog), and a\nflag in the decoding context.\n\nBut the main change is in pgoutput_sequence(), where we protocol_version\nand error-out if it's not the right version (instead of just ignoring\nthe sequence). AFAICS this behaves as expected - with PG15 subscriber, I\nget an ERROR on the publisher side from the sequence callback.\n\nBut it no occurred to me we could do the same thing with the original\napproach - allow the per-subscription \"sequences\" flag, but error out\nwhen the subscriber did not enable that capability ...\n\n\nHopefully, I haven't forgotten to address any important point from the\nreviews ...\n\nThe one thing I'm not really sure about is how it interferes with the\nreplication of DDL. But in principle, if it decodes DDL for ALTER\nSEQUENCE, I don't see why it would be a problem that we then decode and\nreplicate the WAL for the sequence state. But if it is a problem, we\nshould be able to skip this WAL record with the initial sequence state\n(which I think should be possible thanks to the \"created\" flag this\npatch adds to the WAL record).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 12 Jul 2023 21:05:15 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
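The simplified versioning described above replaces the per-subscription flag with a hard error in `pgoutput_sequence()` when the negotiated protocol is too old. A minimal sketch of that behavior — the version constant `4` is an assumption for illustration, not the value the patch defines for `LOGICALREP_PROTO_SEQUENCES_VERSION_NUM`:

```python
# Hypothetical sketch of the simplified protocol-version handling:
# instead of silently skipping sequences for old subscribers, the
# output plugin errors out when asked to send a sequence change over a
# protocol version that predates sequence support.

LOGICALREP_PROTO_SEQUENCES_VERSION_NUM = 4  # assumed value

def send_sequence_change(protocol_version):
    if protocol_version < LOGICALREP_PROTO_SEQUENCES_VERSION_NUM:
        raise RuntimeError(
            f"proto_version={protocol_version} does not support sequences, "
            f"need {LOGICALREP_PROTO_SEQUENCES_VERSION_NUM} or higher")
    return "sequence message sent"

assert send_sequence_change(4) == "sequence message sent"
try:
    send_sequence_change(3)          # e.g. a PG15 subscriber
except RuntimeError as err:
    assert "does not support sequences" in str(err)
```

This matches the observed behavior in the message: with a PG15 subscriber, the publisher errors out in the sequence callback rather than dropping the change.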
{
"msg_contents": "Thanks for the updated patches. I haven't looked at the patches yet\nbut have some responses below.\n\nOn Thu, Jul 13, 2023 at 12:35 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n>\n>\n> 3) simplify the replicated state\n>\n> As suggested by Ashutosh, it may not be a good idea to replicate the\n> (last_value, log_cnt, is_called) tuple, as that's pretty tightly tied to\n> our internal implementation. Which may not be the right thing for other\n> plugins. So this new patch replicates just \"value\" which is pretty much\n> (last_value + log_cnt), representing the next value that should be safe\n> to generate on the subscriber (in case of a failover).\n>\n\nThanks. That will help.\n\n\n> 5) minor tweaks in the built-in replication\n>\n> This adopts the relaxed LOCK code to allow locking sequences during the\n> initial sync, and also adopts the replication of a single value (this\n> affects the \"apply\" side of that change too).\n>\n\nI think the problem we are trying to solve with LOCK is not actually\ngetting solved. See [2]. Instead your earlier idea of using page LSN\nlooks better.\n\n>\n> 6) simplified protocol versioning\n\nI had tested the cross-version logical replication with older set of\npatches. Didn't see any unexpected behaviour then. I will test again.\n>\n> The one thing I'm not really sure about is how it interferes with the\n> replication of DDL. But in principle, if it decodes DDL for ALTER\n> SEQUENCE, I don't see why it would be a problem that we then decode and\n> replicate the WAL for the sequence state. But if it is a problem, we\n> should be able to skip this WAL record with the initial sequence state\n> (which I think should be possible thanks to the \"created\" flag this\n> patch adds to the WAL record).\n\nI had suggested a solution in [1] to avoid adding a flag to the WAL\nrecord. Did you consider it? If you considered it and rejected, I\nwould be interested in knowing reasons behind rejecting it. 
Let me\nrepeat here again:\n\n```\nWe can add a\ndecoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\nin XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\nas is. Of course we will add non-sequence relfilelocators as well but that\nshould be fine. Creating a new relfilelocator shouldn't be a frequent\noperation. If at all we are worried about that, we can add only the\nrelfilenodes associated with sequences to the hash table.\n```\n\nIf the DDL replication takes care of replicating and applying sequence\nchanges, I think we don't need the changes tracking \"transactional\"\nsequence changes in this patch-set. That also makes a case for not\nadding a new field to WAL which may not be used.\n\n[1] https://www.postgresql.org/message-id/CAExHW5v_vVqkhF4ehST9EzpX1L3bemD1S%2BkTk_-ZVu_ir-nKDw%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAExHW5vHRgjWzi6zZbgCs97eW9U7xMtzXEQK%2BaepuzoGDsDNtg%40mail.gmail.com\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 13 Jul 2023 19:54:36 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 6/23/23 15:18, Ashutosh Bapat wrote:\n> ...\n>\n> I reviewed 0001 and related parts of 0004 and 0008 in detail.\n> \n> I have only one major change request, about\n> typedef struct xl_seq_rec\n> {\n> RelFileLocator locator;\n> + bool created; /* creates a new relfilenode (CREATE/ALTER) */\n> \n> I am not sure what are the repercussions of adding a member to an existing WAL\n> record. I didn't see any code which handles the old WAL format which doesn't\n> contain the \"created\" flag. IIUC, the logical decoding may come across\n> a WAL record written in the old format after upgrade and restart. Is\n> that not possible?\n> \n\nI don't understand why would adding a new field to xl_seq_rec be an\nissue, considering it's done in a new major version. Sure, if you\ngenerate WAL with old build, and start with a patched version, that\nwould break things. But that's true for many other patches, and it's\nirrelevant for releases.\n\n> But I don't think it's necessary. We can add a\n> decoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\n> in XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\n> as is. Of course we will add non-sequence relfilelocators as well but that\n> should be fine. Creating a new relfilelocator shouldn't be a frequent\n> operation. If at all we are worried about that, we can add only the\n> relfilenodes associated with sequences to the hash table.\n> \n\nHmmmm, that might work. I feel a bit uneasy about having to keep all\nrelfilenodes, not just sequences ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jul 2023 16:59:17 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/5/23 16:51, Ashutosh Bapat wrote:\n> 0005, 0006 and 0007 are all related to the initial sequence sync. [3]\n> resulted in 0007 and I think we need it. That leaves 0005 and 0006 to\n> be reviewed in this response.\n> \n> I followed the discussion starting [1] till [2]. The second one\n> mentions the interlock mechanism which has been implemented in 0005\n> and 0006. While I don't have an objection to allowing LOCKing a\n> sequence using the LOCK command, I am not sure whether it will\n> actually work or is even needed.\n> \n> The problem described in [1] seems to be the same as the problem\n> described in [2]. In both cases we see the sequence moving backwards\n> during CATCHUP. At the end of catchup the sequence is in the right\n> state in both the cases. [2] actually deems this behaviour OK. I also\n> agree that the behaviour is ok. I am confused whether we have solved\n> anything using interlocking and it's really needed.\n> \n> I see that the idea of using an LSN to decide whether or not to apply\n> a change to sequence started in [4]. In [5] Tomas proposed to use page\n> LSN. Looking at [6], it actually seems like a good idea. In [7] Tomas\n> agreed that LSN won't be sufficient. But I don't understand why. There\n> are three LSNs in the picture - restart LSN of sync slot,\n> confirmed_flush LSN of sync slot and page LSN of the sequence page\n> from where we read the initial state of the sequence. I think they can\n> be used with the following rules:\n> 1. The publisher will not send any changes with LSN less than\n> confirmed_flush so we are good there.\n> 2. Any non-transactional changes that happened between confirmed_flush\n> and page LSN should be discarded while syncing. They are already\n> visible to SELECT.\n> 3. Any transactional changes with commit LSN between confirmed_flush\n> and page LSN should be discarded while syncing. They are already\n> visible to SELECT.\n> 4. A DDL acquires a lock on sequence. 
Thus no other change to that\n> sequence can have an LSN between the LSN of the change made by DDL and\n> the commit LSN of that transaction. Only DDL changes to sequence are\n> transactional. Hence any transactional changes with commit LSN beyond\n> page LSN would not have been seen by the SELECT otherwise SELECT would\n> see the page LSN committed by that transaction. so they need to be\n> applied while syncing.\n> 5. Any non-transactional changes beyond page LSN should be applied.\n> They are not seen by SELECT.\n> \n> Am I missing something?\n> \n\nHmmm, I think you're onto something and the interlock may not be\nactually necessary ...\n\nIIRC there were two examples of the non-MVCC sequence behavior, leading\nme to add the interlock.\n\n\n1) going \"backwards\" during catchup\n\nSequences are not MVCC, and if there are increments between the slot\ncreation and the SELECT, the sequence will go backwards. But it will\nultimately end with the correct value. The LSN checks were an attempt to\nprevent this.\n\nI don't recall why I concluded this would not be sufficient (there's no\nlink for [7] in your message), but maybe it was related to the sequence\nincrements not being WAL-logged and thus not guaranteed to update the\npage LSN, or something like that.\n\nBut if we agree we only guarantee consistency at the end of the catchup,\nthis does not matter - it's OK to go backwards as long as the sequence\nends with the correct value.\n\n\n2) missing an increment because of ALTER SEQUENCE\n\nMy concern here was that we might have a transaction that does ALTER\nSEQUENCE before the tablesync slot gets created, and the SELECT still\nsees the old sequence state because we start decoding after the ALTER.\n\nBut now that I think about it again, this probably can't happen, because\nthe slot won't be created until the ALTER commits. 
So we shouldn't miss\nanything.\n\nI suspect I got confused by some other bug in the patch at that time,\nleading me to a faulty conclusion.\n\n\nI'll try removing the interlock, and make sure it actually works OK.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jul 2023 17:41:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/13/23 16:24, Ashutosh Bapat wrote:\n> Thanks for the updated patches. I haven't looked at the patches yet\n> but have some responses below.\n> \n> On Thu, Jul 13, 2023 at 12:35 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>\n>>\n>> 3) simplify the replicated state\n>>\n>> As suggested by Ashutosh, it may not be a good idea to replicate the\n>> (last_value, log_cnt, is_called) tuple, as that's pretty tightly tied to\n>> our internal implementation. Which may not be the right thing for other\n>> plugins. So this new patch replicates just \"value\" which is pretty much\n>> (last_value + log_cnt), representing the next value that should be safe\n>> to generate on the subscriber (in case of a failover).\n>>\n> \n> Thanks. That will help.\n> \n> \n>> 5) minor tweaks in the built-in replication\n>>\n>> This adopts the relaxed LOCK code to allow locking sequences during the\n>> initial sync, and also adopts the replication of a single value (this\n>> affects the \"apply\" side of that change too).\n>>\n> \n> I think the problem we are trying to solve with LOCK is not actually\n> getting solved. See [2]. Instead your earlier idea of using page LSN\n> looks better.\n> \n\nThanks. I think you may be right, and the interlock may not be\nnecessary. I've responded to the linked threads, that's probably easier\nto follow as it keeps the context.\n\n>>\n>> 6) simplified protocol versioning\n> \n> I had tested the cross-version logical replication with older set of\n> patches. Didn't see any unexpected behaviour then. I will test again.\n>>\n\nI think the question is what's the expected behavior. What behavior did\nyou expect/observe?\n\nIIRC with the previous version of the patch, if you connected an old\nsubscriber (without sequence replication), it just ignored/skipped the\nsequence increments and replicated the other changes.\n\nThe new patch detects that, and triggers ERROR on the publisher. 
And I\nthink that's the correct thing to do.\n\nThere was a lengthy discussion about making this more flexible (by not\ntying this to \"linear\" protocol version) and/or permissive. I tried\ndoing that by doing similar thing to decoding of 2PC, which allows\nchoosing when creating a subscription.\n\nBut ultimately that just chooses where to throw an error - whether on\nthe publisher (in the output plugin callback) or on apply side (when\ntrying to apply change to non-existent sequence).\n\nI still think it might be useful to have these \"capabilities\" orthogonal\nto the protocol version, but it's a matter for a separate patch. It's\nenough not to fail with \"unknown message\" on the subscriber.\n\n>> The one thing I'm not really sure about is how it interferes with the\n>> replication of DDL. But in principle, if it decodes DDL for ALTER\n>> SEQUENCE, I don't see why it would be a problem that we then decode and\n>> replicate the WAL for the sequence state. But if it is a problem, we\n>> should be able to skip this WAL record with the initial sequence state\n>> (which I think should be possible thanks to the \"created\" flag this\n>> patch adds to the WAL record).\n> \n> I had suggested a solution in [1] to avoid adding a flag to the WAL\n> record. Did you consider it? If you considered it and rejected, I\n> would be interested in knowing reasons behind rejecting it. Let me\n> repeat here again:\n> \n> ```\n> We can add a\n> decoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\n> in XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\n> as is. Of course we will add non-sequence relfilelocators as well but that\n> should be fine. Creating a new relfilelocator shouldn't be a frequent\n> operation. If at all we are worried about that, we can add only the\n> relfilenodes associated with sequences to the hash table.\n> ```\n> \n\nThanks for reminding me. 
In principle I'm not against using the proposed\napproach - tracking all relfilenodes created by a transaction, although\nI don't think the new flag in xl_seq_rec is a problem, and it's probably\ncheaper than having to decode all relfilenode creations.\n\n> If the DDL replication takes care of replicating and applying sequence\n> changes, I think we don't need the changes tracking \"transactional\"\n> sequence changes in this patch-set. That also makes a case for not\n> adding a new field to WAL which may not be used.\n> \n\nMaybe, but the DDL replication patch is not there yet, and I'm not sure\nit's a good idea to make this patch wait for a much larger/complex\npatch. If the DDL replication patch gets committed, it may ditch this\npart (assuming it happens in the same development cycle).\n\nHowever, my impression was DDL replication would be optional. In which\ncase we still need to handle the transactional case, to support sequence\nreplication without DDL replication enabled.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 13 Jul 2023 18:17:28 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 8:29 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 6/23/23 15:18, Ashutosh Bapat wrote:\n> > ...\n> >\n> > I reviewed 0001 and related parts of 0004 and 0008 in detail.\n> >\n> > I have only one major change request, about\n> > typedef struct xl_seq_rec\n> > {\n> > RelFileLocator locator;\n> > + bool created; /* creates a new relfilenode (CREATE/ALTER) */\n> >\n> > I am not sure what are the repercussions of adding a member to an existing WAL\n> > record. I didn't see any code which handles the old WAL format which doesn't\n> > contain the \"created\" flag. IIUC, the logical decoding may come across\n> > a WAL record written in the old format after upgrade and restart. Is\n> > that not possible?\n> >\n>\n> I don't understand why would adding a new field to xl_seq_rec be an\n> issue, considering it's done in a new major version. Sure, if you\n> generate WAL with old build, and start with a patched version, that\n> would break things. But that's true for many other patches, and it's\n> irrelevant for releases.\n\nThere are two issues\n1. the name of the field \"created\" - what does created mean in a\n\"sequence status\" WAL record? Consider following sequence of events\nBegin;\nCreate sequence ('s');\nselect nextval('s') from generate_series(1, 1000);\n\n...\ncommit\n\nThis is going to create 1000/32 WAL records with \"created\" = true. But\nonly the first one created the relfilenode. We might fix this little\nannoyance by changing the name to \"transactional\".\n\n2. Consider following scenario\nv15 running logical decoding has restart_lsn before a \"sequence\nchange\" WAL record written in old format\nstop the server\nupgrade to v16\nlogical decoding will stat from restart_lsn pointing to a WAL record\nwritten by v15. When it tries to read \"sequence change\" WAL record it\nwon't be able to get \"created\" flag.\n\nAm I missing something here?\n\n>\n> > But I don't think it's necessary. 
We can add a\n> > decoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\n> > in XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\n> > as is. Of course we will add non-sequence relfilelocators as well but that\n> > should be fine. Creating a new relfilelocator shouldn't be a frequent\n> > operation. If at all we are worried about that, we can add only the\n> > relfilenodes associated with sequences to the hash table.\n> >\n>\n> Hmmmm, that might work. I feel a bit uneasy about having to keep all\n> relfilenodes, not just sequences ...\n\n From relfilenode it should be easy to get to rel and then see if it's\na sequence. Only add relfilenodes for the sequence.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 14 Jul 2023 12:21:17 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jul 13, 2023 at 9:47 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n>\n> >>\n> >> 6) simplified protocol versioning\n> >\n> > I had tested the cross-version logical replication with older set of\n> > patches. Didn't see any unexpected behaviour then. I will test again.\n> >>\n>\n> I think the question is what's the expected behavior. What behavior did\n> you expect/observe?\n\nLet me run my test again and respond.\n\n>\n> IIRC with the previous version of the patch, if you connected an old\n> subscriber (without sequence replication), it just ignored/skipped the\n> sequence increments and replicated the other changes.\n\nI liked that.\n\n>\n> The new patch detects that, and triggers ERROR on the publisher. And I\n> think that's the correct thing to do.\n\nWith this behaviour users will never be able to setup logical\nreplication between old and new servers considering almost every setup\nhas sequences.\n\n>\n> There was a lengthy discussion about making this more flexible (by not\n> tying this to \"linear\" protocol version) and/or permissive. I tried\n> doing that by doing similar thing to decoding of 2PC, which allows\n> choosing when creating a subscription.\n>\n> But ultimately that just chooses where to throw an error - whether on\n> the publisher (in the output plugin callback) or on apply side (when\n> trying to apply change to non-existent sequence).\n\nI had some comments on throwing error in [1], esp. towards the end.\n\n>\n> I still think it might be useful to have these \"capabilities\" orthogonal\n> to the protocol version, but it's a matter for a separate patch. 
It's\n> enough not to fail with \"unknown message\" on the subscriber.\n\nYes, We should avoid breaking replication with \"unknown message\".\n\nI also agree that improving things in this area can be done in a\nseparate patch, but as far as possible in this release itself.\n\n> > If the DDL replication takes care of replicating and applying sequence\n> > changes, I think we don't need the changes tracking \"transactional\"\n> > sequence changes in this patch-set. That also makes a case for not\n> > adding a new field to WAL which may not be used.\n> >\n>\n> Maybe, but the DDL replication patch is not there yet, and I'm not sure\n> it's a good idea to make this patch wait for a much larger/complex\n> patch. If the DDL replication patch gets committed, it may ditch this\n> part (assuming it happens in the same development cycle).\n>\n> However, my impression was DDL replication would be optional. In which\n> case we still need to handle the transactional case, to support sequence\n> replication without DDL replication enabled.\n\nAs I said before, I don't think this patchset needs to wait for DDL\nreplication patch. Let's hope that the later lands in the same release\nand straightens protocol instead of carrying it forever.\n\n[1] https://www.postgresql.org/message-id/CAExHW5vScYKKb0RZoiNEPfbaQ60hihfuWeLuZF4JKrwPJXPcUw%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 14 Jul 2023 13:04:33 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/14/23 09:34, Ashutosh Bapat wrote:\n> On Thu, Jul 13, 2023 at 9:47 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>\n>>>>\n>>>> 6) simplified protocol versioning\n>>>\n>>> I had tested the cross-version logical replication with older set of\n>>> patches. Didn't see any unexpected behaviour then. I will test again.\n>>>>\n>>\n>> I think the question is what's the expected behavior. What behavior did\n>> you expect/observe?\n> \n> Let me run my test again and respond.\n> \n>>\n>> IIRC with the previous version of the patch, if you connected an old\n>> subscriber (without sequence replication), it just ignored/skipped the\n>> sequence increments and replicated the other changes.\n> \n> I liked that.\n> \n\nI liked that too, initially (which is why I did it that way). But I\nchanged my mind, because it's likely to cause more harm than good.\n\n>>\n>> The new patch detects that, and triggers ERROR on the publisher. And I\n>> think that's the correct thing to do.\n> \n> With this behaviour users will never be able to setup logical\n> replication between old and new servers considering almost every setup\n> has sequences.\n> \n\nThat's not true.\n\nReplication to older versions works fine as long as the publication does\nnot include sequences (which need to be added explicitly). If you have a\npublication with sequences, you clearly want to replicate them, ignoring\nit is just confusing \"magic\".\n\nIf you have a publication with sequences and still want to replicate to\nan older server, create a new publication without sequences.\n\n>>\n>> There was a lengthy discussion about making this more flexible (by not\n>> tying this to \"linear\" protocol version) and/or permissive. 
I tried\n>> doing that by doing similar thing to decoding of 2PC, which allows\n>> choosing when creating a subscription.\n>>\n>> But ultimately that just chooses where to throw an error - whether on\n>> the publisher (in the output plugin callback) or on apply side (when\n>> trying to apply change to non-existent sequence).\n> \n> I had some comments on throwing error in [1], esp. towards the end.\n> \n\nYes. You said:\n\n If a given output plugin doesn't implement the callbacks but\n subscription specifies sequences, the code will throw an error\n whether or not publication is publishing sequences.\n\nThis refers to situation when the subscriber says \"sequences\" when\nopening the connection. And this happens *in the plugin* which also\ndefines the callbacks, so I don't see how we could not have the\ncallbacks defined ...\n\nFurthermore, the simplified protocol versioning does away with the\n\"sequences\" option, so in that case this can't even happen.\n\n>>\n>> I still think it might be useful to have these \"capabilities\" orthogonal\n>> to the protocol version, but it's a matter for a separate patch. It's\n>> enough not to fail with \"unknown message\" on the subscriber.\n> \n> Yes, We should avoid breaking replication with \"unknown message\".\n> \n> I also agree that improving things in this area can be done in a\n> separate patch, but as far as possible in this release itself.\n> \n>>> If the DDL replication takes care of replicating and applying sequence\n>>> changes, I think we don't need the changes tracking \"transactional\"\n>>> sequence changes in this patch-set. That also makes a case for not\n>>> adding a new field to WAL which may not be used.\n>>>\n>>\n>> Maybe, but the DDL replication patch is not there yet, and I'm not sure\n>> it's a good idea to make this patch wait for a much larger/complex\n>> patch. 
If the DDL replication patch gets committed, it may ditch this\n>> part (assuming it happens in the same development cycle).\n>>\n>> However, my impression was DDL replication would be optional. In which\n>> case we still need to handle the transactional case, to support sequence\n>> replication without DDL replication enabled.\n> \n> As I said before, I don't think this patchset needs to wait for DDL\n> replication patch. Let's hope that the later lands in the same release\n> and straightens protocol instead of carrying it forever.\n> \n\nOK, I agree with that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jul 2023 12:28:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/14/23 08:51, Ashutosh Bapat wrote:\n> On Thu, Jul 13, 2023 at 8:29 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 6/23/23 15:18, Ashutosh Bapat wrote:\n>>> ...\n>>>\n>>> I reviewed 0001 and related parts of 0004 and 0008 in detail.\n>>>\n>>> I have only one major change request, about\n>>> typedef struct xl_seq_rec\n>>> {\n>>> RelFileLocator locator;\n>>> + bool created; /* creates a new relfilenode (CREATE/ALTER) */\n>>>\n>>> I am not sure what are the repercussions of adding a member to an existing WAL\n>>> record. I didn't see any code which handles the old WAL format which doesn't\n>>> contain the \"created\" flag. IIUC, the logical decoding may come across\n>>> a WAL record written in the old format after upgrade and restart. Is\n>>> that not possible?\n>>>\n>>\n>> I don't understand why would adding a new field to xl_seq_rec be an\n>> issue, considering it's done in a new major version. Sure, if you\n>> generate WAL with old build, and start with a patched version, that\n>> would break things. But that's true for many other patches, and it's\n>> irrelevant for releases.\n> \n> There are two issues\n> 1. the name of the field \"created\" - what does created mean in a\n> \"sequence status\" WAL record? Consider following sequence of events\n> Begin;\n> Create sequence ('s');\n> select nextval('s') from generate_series(1, 1000);\n> \n> ...\n> commit\n> \n> This is going to create 1000/32 WAL records with \"created\" = true. But\n> only the first one created the relfilenode. 
We might fix this little\n> annoyance by changing the name to \"transactional\".\n> \n\nI don't think that's true - this will create 1 record with\n\"created=true\" (the one right after the CREATE SEQUENCE) and the rest\nwill have \"created=false\".\n\nI realized I haven't modified seq_desc to show this flag, so I did that\nin the updated patch version, which makes this easy to see.\n\nAnd all of them need to be handled in a transactional way, because they\nmodify relfilenode visible only to that transaction. So calling the flag\n\"transactional\" would be misleading, because the increments can be\ntransactional even with \"created=false\".\n\n\n> 2. Consider following scenario\n> v15 running logical decoding has restart_lsn before a \"sequence\n> change\" WAL record written in old format\n> stop the server\n> upgrade to v16\n> logical decoding will stat from restart_lsn pointing to a WAL record\n> written by v15. When it tries to read \"sequence change\" WAL record it\n> won't be able to get \"created\" flag.\n> \n> Am I missing something here?\n> \n\nYou're missing the fact that pg_upgrade does not copy replication slots,\nso the restart_lsn does not matter.\n\n(Yes, this is pretty annoying consequence of using pg_upgrade. And maybe\nwe'll improve that in the future - but I'm pretty sure we won't allow\ndecoding old WAL.)\n\n>>\n>>> But I don't think it's necessary. We can add a\n>>> decoding routine for RM_SMGR_ID. The decoding routine will add relfilelocator\n>>> in XLOG_SMGR_CREATE record to txn->sequences hash. Rest of the logic will work\n>>> as is. Of course we will add non-sequence relfilelocators as well but that\n>>> should be fine. Creating a new relfilelocator shouldn't be a frequent\n>>> operation. If at all we are worried about that, we can add only the\n>>> relfilenodes associated with sequences to the hash table.\n>>>\n>>\n>> Hmmmm, that might work. 
I feel a bit uneasy about having to keep all\n>> relfilenodes, not just sequences ...\n> \n> From relfilenode it should be easy to get to rel and then see if it's\n> a sequence. Only add relfilenodes for the sequence.\n> \n\nWill try.\n\nAttached is an updated version with pg_waldump printing the \"created\"\nflag in seq_desc, and removing the unnecessary interlock. I've kept the\nprotocol changes in a separate commit for now.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 14 Jul 2023 12:40:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 3:59 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n>\n> >>\n> >> The new patch detects that, and triggers ERROR on the publisher. And I\n> >> think that's the correct thing to do.\n> >\n> > With this behaviour users will never be able to setup logical\n> > replication between old and new servers considering almost every setup\n> > has sequences.\n> >\n>\n> That's not true.\n>\n> Replication to older versions works fine as long as the publication does\n> not include sequences (which need to be added explicitly). If you have a\n> publication with sequences, you clearly want to replicate them, ignoring\n> it is just confusing \"magic\".\n\nI was looking at it from a different angle. Publishers publish what\nthey want, subscribers choose what they want and what gets replicated\nis intersection of these two sets. Both live happily.\n\nBut I am fine with that too. It's just that users need to create more\npublications.\n\n>\n> If you have a publication with sequences and still want to replicate to\n> an older server, create a new publication without sequences.\n>\n\nI tested the current patches with subscriber at PG 14 and publisher at\nmaster + these patches. I created one table and a sequence on both\npublisher and subscriber. I created two publications, one with\nsequence and other without it. Both have the table in it. 
When the\nsubscriber subscribes to the publication with sequence, following\nERROR is repeated in the subscriber logs and nothing gets replicated\n```\n[2023-07-14 18:55:41.307 IST] [916293] [] [] [3/30:0] LOG: 00000:\nlogical replication apply worker for subscription \"sub5433\" has\nstarted\n[2023-07-14 18:55:41.307 IST] [916293] [] [] [3/30:0] LOCATION:\nApplyWorkerMain, worker.c:3169\n[2023-07-14 18:55:41.322 IST] [916293] [] [] [3/0:0] ERROR: 08P01:\ncould not receive data from WAL stream: ERROR: protocol version does\nnot support sequence replication\n CONTEXT: slot \"sub5433\", output plugin \"pgoutput\", in the\nsequence callback, associated LSN 0/1513718\n[2023-07-14 18:55:41.322 IST] [916293] [] [] [3/0:0] LOCATION:\nlibpqrcv_receive, libpqwalreceiver.c:818\n[2023-07-14 18:55:41.325 IST] [916213] [] [] [:0] LOG: 00000:\nbackground worker \"logical replication worker\" (PID 916293) exited\nwith exit code 1\n[2023-07-14 18:55:41.325 IST] [916213] [] [] [:0] LOCATION:\nLogChildExit, postmaster.c:3737\n```\n\nWhen the subscriber subscribes to the publication without sequence,\nthings work normally.\n\nThe cross-version replication is working as expected then.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 14 Jul 2023 19:20:59 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 4:10 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I don't think that's true - this will create 1 record with\n> \"created=true\" (the one right after the CREATE SEQUENCE) and the rest\n> will have \"created=false\".\n\nI may have misread the code.\n\n>\n> I realized I haven't modified seq_desc to show this flag, so I did that\n> in the updated patch version, which makes this easy to see.\n\nNow I see it. Thanks for the clarification.\n\n> >\n> > Am I missing something here?\n> >\n>\n> You're missing the fact that pg_upgrade does not copy replication slots,\n> so the restart_lsn does not matter.\n>\n> (Yes, this is pretty annoying consequence of using pg_upgrade. And maybe\n> we'll improve that in the future - but I'm pretty sure we won't allow\n> decoding old WAL.)\n\nAh, I see. Thanks for correcting me.\n\n> >>>\n> >>\n> >> Hmmmm, that might work. I feel a bit uneasy about having to keep all\n> >> relfilenodes, not just sequences ...\n> >\n> > From relfilenode it should be easy to get to rel and then see if it's\n> > a sequence. Only add relfilenodes for the sequence.\n> >\n>\n> Will try.\n>\n\nActually, adding all relfilenodes to hash may not be that bad. There\nshouldn't be many of those. So the extra step to lookup reltype may\nnot be necessary. What's your reason for uneasiness? But yeah, there's\na way to avoid that as well.\n\nShould I wait for this before the second round of review?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 14 Jul 2023 19:32:27 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/14/23 15:50, Ashutosh Bapat wrote:\n> On Fri, Jul 14, 2023 at 3:59 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>\n>>>>\n>>>> The new patch detects that, and triggers ERROR on the publisher. And I\n>>>> think that's the correct thing to do.\n>>>\n>>> With this behaviour users will never be able to setup logical\n>>> replication between old and new servers considering almost every setup\n>>> has sequences.\n>>>\n>>\n>> That's not true.\n>>\n>> Replication to older versions works fine as long as the publication does\n>> not include sequences (which need to be added explicitly). If you have a\n>> publication with sequences, you clearly want to replicate them, ignoring\n>> it is just confusing \"magic\".\n> \n> I was looking at it from a different angle. Publishers publish what\n> they want, subscribers choose what they want and what gets replicated\n> is intersection of these two sets. Both live happily.\n> \n> But I am fine with that too. It's just that users need to create more\n> publications.\n> \n\nI think you might make essentially the same argument about replicating\njust some of the tables in the publication. That is, the publication has\ntables t1 and t2, but subscriber only has t1. That will fail too, we\ndon't allow the subscriber to ignore changes for t2.\n\nI think it'd be rather weird (and confusing) to do this differently for\ndifferent types of replicated objects.\n\n>>\n>> If you have a publication with sequences and still want to replicate to\n>> an older server, create a new publication without sequences.\n>>\n> \n> I tested the current patches with subscriber at PG 14 and publisher at\n> master + these patches. I created one table and a sequence on both\n> publisher and subscriber. I created two publications, one with\n> sequence and other without it. Both have the table in it. 
When the\n> subscriber subscribes to the publication with sequence, following\n> ERROR is repeated in the subscriber logs and nothing gets replicated\n> ```\n> [2023-07-14 18:55:41.307 IST] [916293] [] [] [3/30:0] LOG: 00000:\n> logical replication apply worker for subscription \"sub5433\" has\n> started\n> [2023-07-14 18:55:41.307 IST] [916293] [] [] [3/30:0] LOCATION:\n> ApplyWorkerMain, worker.c:3169\n> [2023-07-14 18:55:41.322 IST] [916293] [] [] [3/0:0] ERROR: 08P01:\n> could not receive data from WAL stream: ERROR: protocol version does\n> not support sequence replication\n> CONTEXT: slot \"sub5433\", output plugin \"pgoutput\", in the\n> sequence callback, associated LSN 0/1513718\n> [2023-07-14 18:55:41.322 IST] [916293] [] [] [3/0:0] LOCATION:\n> libpqrcv_receive, libpqwalreceiver.c:818\n> [2023-07-14 18:55:41.325 IST] [916213] [] [] [:0] LOG: 00000:\n> background worker \"logical replication worker\" (PID 916293) exited\n> with exit code 1\n> [2023-07-14 18:55:41.325 IST] [916213] [] [] [:0] LOCATION:\n> LogChildExit, postmaster.c:3737\n> ```\n> \n> When the subscriber subscribes to the publication without sequence,\n> things work normally.\n> \n> The cross-version replication is working as expected then.\n> \n\nThanks for testing / confirming this! So, do we agree this behavior is\nreasonable?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:03:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/14/23 16:02, Ashutosh Bapat wrote:\n> ...\n>>>>>\n>>>>\n>>>> Hmmmm, that might work. I feel a bit uneasy about having to keep all\n>>>> relfilenodes, not just sequences ...\n>>>\n>>> From relfilenode it should be easy to get to rel and then see if it's\n>>> a sequence. Only add relfilenodes for the sequence.\n>>>\n>>\n>> Will try.\n>>\n> \n> Actually, adding all relfilenodes to hash may not be that bad. There\n> shouldn't be many of those. So the extra step to lookup reltype may\n> not be necessary. What's your reason for uneasiness? But yeah, there's\n> a way to avoid that as well.\n> \n> Should I wait for this before the second round of review?\n> \n\nI don't think you have to wait - just ignore the part that changes the\nWAL record, which is a pretty tiny bit of the patch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 14 Jul 2023 16:31:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Here's a slightly improved version of the patch, fixing two minor issues\nreported by cfbot:\n\n- compiler warning about fetch_sequence_data maybe not initializing a\nvariable (not true, but silence the warning)\n\n- missing \"id\" for an element in SGML cocs\n\n\n\nregards\n\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 15 Jul 2023 17:08:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Jul 14, 2023 at 7:33 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n>\n> Thanks for testing / confirming this! So, do we agree this behavior is\n> reasonable?\n>\n\nThis behaviour doesn't need any on-disk changes or has nothing in it\nwhich prohibits us from changing it in future. So I think it's good as\na v0. If required we can add the protocol option to provide more\nflexible behaviour.\n\nOne thing I am worried about is that the subscriber will get an error\nonly when a sequence change is decoded. All the prior changes will be\nreplicated and applied on the subscriber. Thus by the time the user\nrealises this mistake, they may have replicated data. At this point if\nthey want to subscribe to a publication without sequences they will\nneed to clean the already replicated data. But they may not be in a\nposition to know which is which esp when the subscriber has its own\ndata in those tables. Example,\n\npublisher: create publication pub with sequences and tables\nsubscriber: subscribe to pub\npublisher: modify data in tables and sequences\nsubscriber: replicates some data and errors out\npublisher: delete some data from tables\npublisher: create a publication pub_tab without sequences\nsubscriber: subscribe to pub_tab\nsubscriber: replicates the data but rows which were deleted on\npublisher remain on the subscriber\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 18 Jul 2023 19:22:12 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/18/23 15:52, Ashutosh Bapat wrote:\n> On Fri, Jul 14, 2023 at 7:33 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>\n>> Thanks for testing / confirming this! So, do we agree this behavior is\n>> reasonable?\n>>\n> \n> This behaviour doesn't need any on-disk changes or has nothing in it\n> which prohibits us from changing it in future. So I think it's good as\n> a v0. If required we can add the protocol option to provide more\n> flexible behaviour.\n> \n\nTrue, although \"no on-disk changes\" does not exactly mean we can just\nchange it at will. Essentially, once it gets released, the behavior is\nsomewhat fixed for the next ~5 years, until that release gets EOL. And\nlikely longer, because more features are likely to do the same thing.\n\nThat's essentially why the patch was reverted from PG16 - I was worried\nthe elaborate protocol versioning/negotiation was not the right thing.\n\n> One thing I am worried about is that the subscriber will get an error\n> only when a sequence change is decoded. All the prior changes will be\n> replicated and applied on the subscriber. Thus by the time the user\n> realises this mistake, they may have replicated data. At this point if\n> they want to subscribe to a publication without sequences they will\n> need to clean the already replicated data. But they may not be in a\n> position to know which is which esp when the subscriber has its own\n> data in those tables. Example,\n> \n> publisher: create publication pub with sequences and tables\n> subscriber: subscribe to pub\n> publisher: modify data in tables and sequences\n> subscriber: replicates some data and errors out\n> publisher: delete some data from tables\n> publisher: create a publication pub_tab without sequences\n> subscriber: subscribe to pub_tab\n> subscriber: replicates the data but rows which were deleted on\n> publisher remain on the subscriber\n> \n\nSure, but I'd argue that's correct. 
If the replication stream has\nsomething the subscriber can't apply, what else would you do? We had\nexactly the same thing with TRUNCATE, for example (except that it failed\nwith \"unknown message\" on the subscriber).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 18 Jul 2023 21:50:44 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jul 19, 2023 at 1:20 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >\n> > This behaviour doesn't need any on-disk changes or has nothing in it\n> > which prohibits us from changing it in future. So I think it's good as\n> > a v0. If required we can add the protocol option to provide more\n> > flexible behaviour.\n> >\n>\n> True, although \"no on-disk changes\" does not exactly mean we can just\n> change it at will. Essentially, once it gets released, the behavior is\n> somewhat fixed for the next ~5 years, until that release gets EOL. And\n> likely longer, because more features are likely to do the same thing.\n>\n> That's essentially why the patch was reverted from PG16 - I was worried\n> the elaborate protocol versioning/negotiation was not the right thing.\n\nI agree that elaborate protocol would pose roadblocks in future. It's\nbetter not to add that burden right now, esp. when usage is not clear.\n\nHere's behavriour and extension matrix as I understand it and as of\nthe last set of patches.\n\nPublisher PG 17, Subscriber PG 17 - changes to sequences are\nreplicated, downstream is capable of applying them\n\nPublisher PG 16-, Subscriber PG 17 changes to sequences are never replicated\n\nPublisher PG 18+, Subscriber PG 17 - same as 17, 17 case. Any changes\nin PG 18+ need to make sure that PG 17 subscriber receives sequence\nchanges irrespective of changes in protocol. That may pose some\nmaintenance burden but doesn't seem to be any harder than usual\nbackward compatibility burden.\n\nMoreover users can control whether changes to sequences get replicated\nor not by controlling the objects contained in publication.\n\nI don't see any downside to this. Looks all good. Please correct me if wrong.\n\n>\n> > One thing I am worried about is that the subscriber will get an error\n> > only when a sequence change is decoded. All the prior changes will be\n> > replicated and applied on the subscriber. 
Thus by the time the user\n> > realises this mistake, they may have replicated data. At this point if\n> > they want to subscribe to a publication without sequences they will\n> > need to clean the already replicated data. But they may not be in a\n> > position to know which is which esp when the subscriber has its own\n> > data in those tables. Example,\n> >\n> > publisher: create publication pub with sequences and tables\n> > subscriber: subscribe to pub\n> > publisher: modify data in tables and sequences\n> > subscriber: replicates some data and errors out\n> > publisher: delete some data from tables\n> > publisher: create a publication pub_tab without sequences\n> > subscriber: subscribe to pub_tab\n> > subscriber: replicates the data but rows which were deleted on\n> > publisher remain on the subscriber\n> >\n>\n> Sure, but I'd argue that's correct. If the replication stream has\n> something the subscriber can't apply, what else would you do? We had\n> exactly the same thing with TRUNCATE, for example (except that it failed\n> with \"unknown message\" on the subscriber).\n\nWhen the replication starts, the publisher knows what publication is\nbeing used, it also knows what protocol is being used. From\npublication it knows what objects will be replicated. So we could fail\nbefore any changes are replicated when executing START_REPLICATION\ncommand. According to [1], if an object is added or removed from\npublication the subscriber is required to REFRESH SUBSCRIPTION in\nwhich case there will be fresh START_REPLICATION command sent. So we\nshould fail the START_REPLICATION command before sending any change\nrather than when a change is being replicated. That's more\ndeterministic and easy to handle. Of course any changes that were sent\nbefore ALTER PUBLICATION can not be reverted, but that's expected.\n\nComing back to TRUNCATE, I don't think it's possible to know whether a\npublication will send a truncate downstream or not. 
So we can't throw\nan error before TRUNCATE change is decoded.\n\nAnyway, I think this behaviour should be documented. I didn't see this\nmentioned in PUBLICATION or SUBSCRIPTION documentation.\n\n[1] https://www.postgresql.org/docs/current/sql-alterpublication.html\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 19 Jul 2023 11:12:28 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/19/23 07:42, Ashutosh Bapat wrote:\n> On Wed, Jul 19, 2023 at 1:20 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>\n>>> This behaviour doesn't need any on-disk changes or has nothing in it\n>>> which prohibits us from changing it in future. So I think it's good as\n>>> a v0. If required we can add the protocol option to provide more\n>>> flexible behaviour.\n>>>\n>>\n>> True, although \"no on-disk changes\" does not exactly mean we can just\n>> change it at will. Essentially, once it gets released, the behavior is\n>> somewhat fixed for the next ~5 years, until that release gets EOL. And\n>> likely longer, because more features are likely to do the same thing.\n>>\n>> That's essentially why the patch was reverted from PG16 - I was worried\n>> the elaborate protocol versioning/negotiation was not the right thing.\n> \n> I agree that elaborate protocol would pose roadblocks in future. It's\n> better not to add that burden right now, esp. when usage is not clear.\n> \n> Here's behavriour and extension matrix as I understand it and as of\n> the last set of patches.\n> \n> Publisher PG 17, Subscriber PG 17 - changes to sequences are\n> replicated, downstream is capable of applying them\n> \n> Publisher PG 16-, Subscriber PG 17 changes to sequences are never replicated\n> \n> Publisher PG 18+, Subscriber PG 17 - same as 17, 17 case. Any changes\n> in PG 18+ need to make sure that PG 17 subscriber receives sequence\n> changes irrespective of changes in protocol. That may pose some\n> maintenance burden but doesn't seem to be any harder than usual\n> backward compatibility burden.\n> \n> Moreover users can control whether changes to sequences get replicated\n> or not by controlling the objects contained in publication.\n> \n> I don't see any downside to this. Looks all good. 
Please correct me if wrong.\n> \n\nI think this is an accurate description of what the current patch does.\nAnd I think it's a reasonable behavior.\n\nMy point is that if this gets released in PG17, it'll be difficult to\nchange, even if it does not change on-disk format.\n\n>>\n>>> One thing I am worried about is that the subscriber will get an error\n>>> only when a sequence change is decoded. All the prior changes will be\n>>> replicated and applied on the subscriber. Thus by the time the user\n>>> realises this mistake, they may have replicated data. At this point if\n>>> they want to subscribe to a publication without sequences they will\n>>> need to clean the already replicated data. But they may not be in a\n>>> position to know which is which esp when the subscriber has its own\n>>> data in those tables. Example,\n>>>\n>>> publisher: create publication pub with sequences and tables\n>>> subscriber: subscribe to pub\n>>> publisher: modify data in tables and sequences\n>>> subscriber: replicates some data and errors out\n>>> publisher: delete some data from tables\n>>> publisher: create a publication pub_tab without sequences\n>>> subscriber: subscribe to pub_tab\n>>> subscriber: replicates the data but rows which were deleted on\n>>> publisher remain on the subscriber\n>>>\n>>\n>> Sure, but I'd argue that's correct. If the replication stream has\n>> something the subscriber can't apply, what else would you do? We had\n>> exactly the same thing with TRUNCATE, for example (except that it failed\n>> with \"unknown message\" on the subscriber).\n> \n> When the replication starts, the publisher knows what publication is\n> being used, it also knows what protocol is being used. From\n> publication it knows what objects will be replicated. So we could fail\n> before any changes are replicated when executing START_REPLICATION\n> command. 
According to [1], if an object is added or removed from\n> publication the subscriber is required to REFRESH SUBSCRIPTION in\n> which case there will be fresh START_REPLICATION command sent. So we\n> should fail the START_REPLICATION command before sending any change\n> rather than when a change is being replicated. That's more\n> deterministic and easy to handle. Of course any changes that were sent\n> before ALTER PUBLICATION can not be reverted, but that's expected.\n> \n> Coming back to TRUNCATE, I don't think it's possible to know whether a\n> publication will send a truncate downstream or not. So we can't throw\n> an error before TRUNCATE change is decoded.\n> \n> Anyway, I think this behaviour should be documented. I didn't see this\n> mentioned in PUBLICATION or SUBSCRIPTION documentation.\n> \n\nI need to think behavior about this a bit more, and maybe check how\ndifficult would be implementing it.\n\nI did however look at the proposed alternative to the \"created\" flag.\nThe attached 0006 part ditches the flag with XLOG_SMGR_CREATE decoding.\nThe smgr_decode code needs a review (I'm not sure the\nskipping/fast-forwarding part is correct), but it seems to be working\nfine overall, although we need to ensure the WAL record has the correct XID.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 19 Jul 2023 12:53:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/19/23 12:53, Tomas Vondra wrote:\n> ...\n> \n> I did however look at the proposed alternative to the \"created\" flag.\n> The attached 0006 part ditches the flag with XLOG_SMGR_CREATE decoding.\n> The smgr_decode code needs a review (I'm not sure the\n> skipping/fast-forwarding part is correct), but it seems to be working\n> fine overall, although we need to ensure the WAL record has the correct XID.\n> \n\ncfbot reported two issues in the patch - compilation warning, due to\nunused variable in sequence_decode, and a failing test in test_decoding.\n\nThe second thing happens because when creating the relfilenode, it may\nhappen before we know the XID. The patch already does ensure the WAL\nwith the sequence data has XID, but that's later. And when the CREATE\nrecord did not have the correct XID, that broke the logic deciding which\nincrements should be \"transactional\".\n\nThis forces us to assign XID a bit earlier (it'd happen anyway, when\nlogging the increment). There's a bit of a drawback, because we don't\nhave the relation yet, so we can't do RelationNeedsWAL ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 19 Jul 2023 23:01:04 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Thanks Tomas for the updated patches.\n\nHere are my comments on 0006 patch as well as 0002 patch.\n\nOn Wed, Jul 19, 2023 at 4:23 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I think this is an accurate description of what the current patch does.\n> And I think it's a reasonable behavior.\n>\n> My point is that if this gets released in PG17, it'll be difficult to\n> change, even if it does not change on-disk format.\n>\n\nYes. I agree. And I don't see any problem even if we are not able to change it.\n\n>\n> I need to think behavior about this a bit more, and maybe check how\n> difficult would be implementing it.\n\nOk.\n\nIn most of the comments and in documentation, there are some phrases\nwhich do not look accurate.\n\nChange to a sequence is being refered to as \"sequence increment\". While\nascending sequences are common, PostgreSQL supports descending sequences as\nwell. The changes there will be decrements. But that's not the only case. A\nsequence may be restarted with an older value, in which case the change could\nincrement or a decrement. I think correct usage is 'changes to sequence' or\n'sequence changes'.\n\nSequence being assigned a new relfilenode is referred to as sequence\nbeing created. This is confusing. When an existing sequence is ALTERed, we\nwill not \"create\" a new sequence but we will \"create\" a new relfilenode and\n\"assign\" it to that sequence.\n\nPFA such edits in 0002 and 0006 patches. Let me know if those look\ncorrect. I think we\nneed similar changes to the documentation and comments in other places.\n\n>\n> I did however look at the proposed alternative to the \"created\" flag.\n> The attached 0006 part ditches the flag with XLOG_SMGR_CREATE decoding.\n> The smgr_decode code needs a review (I'm not sure the\n> skipping/fast-forwarding part is correct), but it seems to be working\n> fine overall, although we need to ensure the WAL record has the correct XID.\n>\n\nBriefly describing the patch. 
When decoding a XLOG_SMGR_CREATE WAL\nrecord, it adds the relfilenode mentioned in it to the sequences hash.\nWhen decoding a sequence change record, it checks whether the\nrelfilenode in the WAL record is in hash table. If it is, the sequence\nchange is deemed transactional, otherwise non-transactional. The\nchange looks good to me. It simplifies the logic to decide whether a\nsequence change is transactional or not.\n\nIn sequence_decode() we skip sequence changes when fast forwarding.\nGiven that smgr_decode() is only to supplement sequence_decode(), I\nthink it's correct to do the same in smgr_decode() as well. Similarly\nskipping when we don't have full snapshot.\n\nSome minor comments on 0006 patch\n\n+ /* make sure the relfilenode creation is associated with the XID */\n+ if (XLogLogicalInfoActive())\n+ GetCurrentTransactionId();\n\nI think this change is correct and is inline with similar changes in 0002. But\nI looked at other places from where DefineRelation() is called. For regular\ntables it is called from ProcessUtilitySlow() which in turn does not call\nGetCurrentTransactionId(). I am wondering whether we are just discovering a\nclass of bugs caused by not associating an xid with a newly created\nrelfilenode.\n\n+ /*\n+ * If we don't have snapshot or we are just fast-forwarding, there is no\n+ * point in decoding changes.\n+ */\n+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n+ ctx->fast_forward)\n+ return;\n\nThis code block is repeated.\n\n+void\n+ReorderBufferAddRelFileLocator(ReorderBuffer *rb, TransactionId xid,\n+ RelFileLocator rlocator)\n+{\n ... snip ...\n+\n+ /* sequence changes require a transaction */\n+ if (xid == InvalidTransactionId)\n+ return;\n\nIIUC, with your changes in DefineSequence() in this patch, this should not\nhappen. So this condition will never be true. But in case it happens, this code\nwill not add the relfilelocation to the hash table and we will deem the\nsequence change as non-transactional. 
Isn't it better to just throw an error\nand stop replication if that (ever) happens?\n\nAlso some comments on 0002 patch\n\n@@ -405,8 +405,19 @@ fill_seq_fork_with_data(Relation rel, HeapTuple\ntuple, ForkNumber forkNum)\n\n /* check the comment above nextval_internal()'s equivalent call. */\n if (RelationNeedsWAL(rel))\n+ {\n GetTopTransactionId();\n\n+ /*\n+ * Make sure the subtransaction has a XID assigned, so that\nthe sequence\n+ * increment WAL record is properly associated with it. This\nmatters for\n+ * increments of sequences created/altered in the\ntransaction, which are\n+ * handled as transactional.\n+ */\n+ if (XLogLogicalInfoActive())\n+ GetCurrentTransactionId();\n+ }\n+\n\nI think we should separately commit the changes which add a call to\nGetCurrentTransactionId(). That looks like an existing bug/anomaly\nwhich can stay irrespective of this patch.\n\n+ /*\n+ * To support logical decoding of sequences, we require the sequence\n+ * callback. We decide it here, but only check it later in the wrappers.\n+ *\n+ * XXX Isn't it wrong to define only one of those callbacks? Say we\n+ * only define the stream_sequence_cb() - that may get strange results\n+ * depending on what gets streamed. Either none or both?\n+ *\n+ * XXX Shouldn't sequence be defined at slot creation time, similar\n+ * to two_phase? Probably not.\n+ */\n\nDo you intend to keep these XXX's as is? My previous comments on this comment\nblock are in [1].\n\nIn fact, given that whether or not sequences are replicated is decided by the\nprotocol version, do we really need LogicalDecodingContext::sequences? Drawing\nparallel with WAL messages, I don't think it's needed.\n\n[1] https://www.postgresql.org/message-id/CAExHW5vScYKKb0RZoiNEPfbaQ60hihfuWeLuZF4JKrwPJXPcUw%40mail.gmail.com\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 20 Jul 2023 12:54:21 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/20/23 09:24, Ashutosh Bapat wrote:\n> Thanks Tomas for the updated patches.\n> \n> Here are my comments on 0006 patch as well as 0002 patch.\n> \n> On Wed, Jul 19, 2023 at 4:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I think this is an accurate description of what the current patch does.\n>> And I think it's a reasonable behavior.\n>>\n>> My point is that if this gets released in PG17, it'll be difficult to\n>> change, even if it does not change on-disk format.\n>>\n> \n> Yes. I agree. And I don't see any problem even if we are not able to change it.\n> \n>>\n>> I need to think behavior about this a bit more, and maybe check how\n>> difficult would be implementing it.\n> \n> Ok.\n> \n> In most of the comments and in documentation, there are some phrases\n> which do not look accurate.\n> \n> Change to a sequence is being refered to as \"sequence increment\". While\n> ascending sequences are common, PostgreSQL supports descending sequences as\n> well. The changes there will be decrements. But that's not the only case. A\n> sequence may be restarted with an older value, in which case the change could\n> increment or a decrement. I think correct usage is 'changes to sequence' or\n> 'sequence changes'.\n> \n> Sequence being assigned a new relfilenode is referred to as sequence\n> being created. This is confusing. When an existing sequence is ALTERed, we\n> will not \"create\" a new sequence but we will \"create\" a new relfilenode and\n> \"assign\" it to that sequence.\n> \n> PFA such edits in 0002 and 0006 patches. Let me know if those look\n> correct. 
I think we\n> need similar changes to the documentation and comments in other places.\n> \n\nOK, I merged the changes into the patches, with some minor changes to\nthe wording etc.\n\n>>\n>> I did however look at the proposed alternative to the \"created\" flag.\n>> The attached 0006 part ditches the flag with XLOG_SMGR_CREATE decoding.\n>> The smgr_decode code needs a review (I'm not sure the\n>> skipping/fast-forwarding part is correct), but it seems to be working\n>> fine overall, although we need to ensure the WAL record has the correct XID.\n>>\n> \n> Briefly describing the patch. When decoding a XLOG_SMGR_CREATE WAL\n> record, it adds the relfilenode mentioned in it to the sequences hash.\n> When decoding a sequence change record, it checks whether the\n> relfilenode in the WAL record is in hash table. If it is, the sequence\n> change is deemed transactional, otherwise non-transactional. The\n> change looks good to me. It simplifies the logic to decide whether a\n> sequence change is transactional or not.\n> \n\nRight.\n\n> In sequence_decode() we skip sequence changes when fast forwarding.\n> Given that smgr_decode() is only to supplement sequence_decode(), I\n> think it's correct to do the same in smgr_decode() as well. Similarly\n> skipping when we don't have full snapshot.\n> \n\nI don't follow, smgr_decode already checks ctx->fast_forward.\n\n> Some minor comments on 0006 patch\n> \n> + /* make sure the relfilenode creation is associated with the XID */\n> + if (XLogLogicalInfoActive())\n> + GetCurrentTransactionId();\n> \n> I think this change is correct and is inline with similar changes in 0002. But\n> I looked at other places from where DefineRelation() is called. For regular\n> tables it is called from ProcessUtilitySlow() which in turn does not call\n> GetCurrentTransactionId(). I am wondering whether we are just discovering a\n> class of bugs caused by not associating an xid with a newly created\n> relfilenode.\n> \n\nNot sure. 
Why would it be a bug?\n\n> + /*\n> + * If we don't have snapshot or we are just fast-forwarding, there is no\n> + * point in decoding changes.\n> + */\n> + if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n> + ctx->fast_forward)\n> + return;\n> \n> This code block is repeated.\n> \n\nFixed.\n\n> +void\n> +ReorderBufferAddRelFileLocator(ReorderBuffer *rb, TransactionId xid,\n> + RelFileLocator rlocator)\n> +{\n> ... snip ...\n> +\n> + /* sequence changes require a transaction */\n> + if (xid == InvalidTransactionId)\n> + return;\n> \n> IIUC, with your changes in DefineSequence() in this patch, this should not\n> happen. So this condition will never be true. But in case it happens, this code\n> will not add the relfilelocation to the hash table and we will deem the\n> sequence change as non-transactional. Isn't it better to just throw an error\n> and stop replication if that (ever) happens?\n> \n\nIt can't happen for sequence, but it may happen when creating a\nnon-sequence relfilenode. In a way, it's a way to skip (some)\nunnecessary relfilenodes.\n\n> Also some comments on 0002 patch\n> \n> @@ -405,8 +405,19 @@ fill_seq_fork_with_data(Relation rel, HeapTuple\n> tuple, ForkNumber forkNum)\n> \n> /* check the comment above nextval_internal()'s equivalent call. */\n> if (RelationNeedsWAL(rel))\n> + {\n> GetTopTransactionId();\n> \n> + /*\n> + * Make sure the subtransaction has a XID assigned, so that\n> the sequence\n> + * increment WAL record is properly associated with it. This\n> matters for\n> + * increments of sequences created/altered in the\n> transaction, which are\n> + * handled as transactional.\n> + */\n> + if (XLogLogicalInfoActive())\n> + GetCurrentTransactionId();\n> + }\n> +\n> \n> I think we should separately commit the changes which add a call to\n> GetCurrentTransactionId(). 
That looks like an existing bug/anomaly\n> which can stay irrespective of this patch.\n> \n\nNot sure, but I don't see this as a bug.\n\n> + /*\n> + * To support logical decoding of sequences, we require the sequence\n> + * callback. We decide it here, but only check it later in the wrappers.\n> + *\n> + * XXX Isn't it wrong to define only one of those callbacks? Say we\n> + * only define the stream_sequence_cb() - that may get strange results\n> + * depending on what gets streamed. Either none or both?\n> + *\n> + * XXX Shouldn't sequence be defined at slot creation time, similar\n> + * to two_phase? Probably not.\n> + */\n> \n> Do you intend to keep these XXX's as is? My previous comments on this comment\n> block are in [1].\n> \n> In fact, given that whether or not sequences are replicated is decided by the\n> protocol version, do we really need LogicalDecodingContext::sequences? Drawing\n> parallel with WAL messages, I don't think it's needed.\n> \n\nRight. We do that for two_phase because you can override that when\ncreating the subscription - sequences allowed that too initially, but\nthen we ditched that. So I don't think we need this.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 20 Jul 2023 16:51:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "FWIW there's two questions related to the switch to XLOG_SMGR_CREATE.\n\n1) Does smgr_decode() need to do the same block as sequence_decode()?\n\n /* Skip the change if already processed (per the snapshot). */\n if (transactional &&\n !SnapBuildProcessChange(builder, xid, buf->origptr))\n return;\n else if (!transactional &&\n (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n SnapBuildXactNeedsSkip(builder, buf->origptr)))\n return;\n\nI don't think it does. Also, we don't have any transactional flag here.\nOr rather, everything is transactional ...\n\n\n2) Currently, the sequences hash table is in reorderbuffer, i.e. global.\nI was thinking maybe we should have it in the transaction (because we\nneed to do cleanup at the end). It seem a bit inconvenient, because then\nwe'd need to either search htabs in all subxacts, or transfer the\nentries to the top-level xact (otoh, we already do that with snapshots),\nand cleanup on abort.\n\nWhat do you think?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 20 Jul 2023 18:49:55 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> OK, I merged the changes into the patches, with some minor changes to\n> the wording etc.\n>\n\nI think we can do 0001-Make-test_decoding-ddl.out-shorter-20230720\neven without the rest of the patches. Isn't it a separate improvement?\n\nI see that origin filtering (origin=none) doesn't work with this\npatch. You can see this by using the following statements:\nNode-1:\npostgres=# create sequence s;\nCREATE SEQUENCE\npostgres=# create publication mypub for all sequences;\nCREATE PUBLICATION\n\nNode-2:\npostgres=# create sequence s;\nCREATE SEQUENCE\npostgres=# create subscription mysub_sub connection '....' publication\nmypub with (origin=none);\nNOTICE: created replication slot \"mysub_sub\" on publisher\nCREATE SUBSCRIPTION\npostgres=# create publication mypub_sub for all sequences;\nCREATE PUBLICATION\n\nNode-1:\ncreate subscription mysub_pub connection '...' publication mypub_sub\nwith (origin=none);\nNOTICE: created replication slot \"mysub_pub\" on publisher\nCREATE SUBSCRIPTION\n\nSELECT nextval('s') FROM generate_series(1,100);\n\nAfter that, you can check on the subscriber that sequences values are\noverridden with older values:\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 67 | 0 | t\n(1 row)\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 100 | 0 | t\n(1 row)\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 133 | 0 | t\n(1 row)\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 67 | 0 | t\n(1 row)\n\nI haven't verified all the details but I think that is because we\ndon't set XLOG_INCLUDE_ORIGIN while logging sequence values.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jul 2023 12:01:20 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > OK, I merged the changes into the patches, with some minor changes to\n> > the wording etc.\n> >\n>\n> I think we can do 0001-Make-test_decoding-ddl.out-shorter-20230720\n> even without the rest of the patches. Isn't it a separate improvement?\n\n+1. Yes, it can go separately. It would even be better if the test can\nbe modified to capture the toasted data into a psql variable before\ninsert into the table, and compare it with output of\npg_logical_slot_get_changes.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 24 Jul 2023 15:58:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> 0005, 0006 and 0007 are all related to the initial sequence sync. [3]\n> resulted in 0007 and I think we need it. That leaves 0005 and 0006 to\n> be reviewed in this response.\n>\n> I followed the discussion starting [1] till [2]. The second one\n> mentions the interlock mechanism which has been implemented in 0005\n> and 0006. While I don't have an objection to allowing LOCKing a\n> sequence using the LOCK command, I am not sure whether it will\n> actually work or is even needed.\n>\n> The problem described in [1] seems to be the same as the problem\n> described in [2]. In both cases we see the sequence moving backwards\n> during CATCHUP. At the end of catchup the sequence is in the right\n> state in both the cases.\n>\n\nI think we could see backward sequence value even after the catchup\nphase (after the sync worker is exited and or the state of rel is\nmarked as 'ready' in pg_subscription_rel). The point is that there is\nno guarantee that we will process all the pending WAL before\nconsidering the sequence state is 'SYNCDONE' and or 'READY'. 
For\nexample, after copy_sequence, I see values like:\n\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 165 | 0 | t\n(1 row)\npostgres=# select nextval('s');\n nextval\n---------\n 166\n(1 row)\npostgres=# select nextval('s');\n nextval\n---------\n 167\n(1 row)\npostgres=# select currval('s');\n currval\n---------\n 167\n(1 row)\n\nThen during the catchup phase:\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 33 | 0 | t\n(1 row)\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 66 | 0 | t\n(1 row)\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+-----------\n 16394 | 16390 | r | 0/16374E8\n 16394 | 16393 | s | 0/1637700\n(2 rows)\n\npostgres=# select * from pg_subscription_rel;\n srsubid | srrelid | srsubstate | srsublsn\n---------+---------+------------+-----------\n 16394 | 16390 | r | 0/16374E8\n 16394 | 16393 | r | 0/1637700\n(2 rows)\n\nHere Sequence relid id 16393. You can see sequence state is marked as ready.\n\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 66 | 0 | t\n(1 row)\n\nEven after that, see below the value of the sequence is still not\ncaught up. Later, when the apply worker processes all the WAL, the\nsequence state will be caught up.\n\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 165 | 0 | t\n(1 row)\n\nSo, there will be a window where the sequence won't be caught up for a\ncertain period of time and any usage of it (even after the sync is\nfinished) during that time could result in inconsistent behaviour.\n\nThe other question is whether it is okay to allow the sequence to go\nbackwards even during the initial sync phase? 
The reason I am asking\nthis question is that for the time sequence value moves backwards, one\nis allowed to use it on the subscriber which will result in using\nout-of-sequence values. For example, immediately, after copy_sequence\nthe values look like this:\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 133 | 32 | t\n(1 row)\npostgres=# select nextval('s');\n nextval\n---------\n 134\n(1 row)\npostgres=# select currval('s');\n currval\n---------\n 134\n(1 row)\n\nBut then during the sync phase, it can go backwards and one is allowed\nto use it on the subscriber:\npostgres=# select * from s;\n last_value | log_cnt | is_called\n------------+---------+-----------\n 66 | 0 | t\n(1 row)\npostgres=# select nextval('s');\n nextval\n---------\n 67\n(1 row)\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 24 Jul 2023 16:10:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/24/23 12:40, Amit Kapila wrote:\n> On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> 0005, 0006 and 0007 are all related to the initial sequence sync. [3]\n>> resulted in 0007 and I think we need it. That leaves 0005 and 0006 to\n>> be reviewed in this response.\n>>\n>> I followed the discussion starting [1] till [2]. The second one\n>> mentions the interlock mechanism which has been implemented in 0005\n>> and 0006. While I don't have an objection to allowing LOCKing a\n>> sequence using the LOCK command, I am not sure whether it will\n>> actually work or is even needed.\n>>\n>> The problem described in [1] seems to be the same as the problem\n>> described in [2]. In both cases we see the sequence moving backwards\n>> during CATCHUP. At the end of catchup the sequence is in the right\n>> state in both the cases.\n>>\n> \n> I think we could see backward sequence value even after the catchup\n> phase (after the sync worker is exited and or the state of rel is\n> marked as 'ready' in pg_subscription_rel). The point is that there is\n> no guarantee that we will process all the pending WAL before\n> considering the sequence state is 'SYNCDONE' and or 'READY'. 
For\n> example, after copy_sequence, I see values like:\n> \n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 165 | 0 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 166\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 167\n> (1 row)\n> postgres=# select currval('s');\n> currval\n> ---------\n> 167\n> (1 row)\n> \n> Then during the catchup phase:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 33 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> \n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16394 | 16390 | r | 0/16374E8\n> 16394 | 16393 | s | 0/1637700\n> (2 rows)\n> \n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16394 | 16390 | r | 0/16374E8\n> 16394 | 16393 | r | 0/1637700\n> (2 rows)\n> \n> Here Sequence relid id 16393. You can see sequence state is marked as ready.\n> \n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> \n> Even after that, see below the value of the sequence is still not\n> caught up. 
Later, when the apply worker processes all the WAL, the\n> sequence state will be caught up.\n> \n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 165 | 0 | t\n> (1 row)\n> \n> So, there will be a window where the sequence won't be caught up for a\n> certain period of time and any usage of it (even after the sync is\n> finished) during that time could result in inconsistent behaviour.\n> \n\nI'm rather confused about which node these queries are executed on.\nPresumably some of it is on publisher, some on subscriber?\n\nCan you create a reproducer (TAP test demonstrating this?) I guess it\nmight require adding some sleeps to hit the right timing ...\n\n> The other question is whether it is okay to allow the sequence to go\n> backwards even during the initial sync phase? The reason I am asking\n> this question is that for the time sequence value moves backwards, one\n> is allowed to use it on the subscriber which will result in using\n> out-of-sequence values. For example, immediately, after copy_sequence\n> the values look like this:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 133 | 32 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 134\n> (1 row)\n> postgres=# select currval('s');\n> currval\n> ---------\n> 134\n> (1 row)\n> \n> But then during the sync phase, it can go backwards and one is allowed\n> to use it on the subscriber:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 67\n> (1 row)\n> \n\nWell, as for going back during the sync phase, I think the agreement was\nthat's acceptable, as we don't make guarantees about that. 
The question\nis what's the state at the end of the sync (which I think leads to the\nfirst part of your message).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jul 2023 12:52:08 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/24/23 08:31, Amit Kapila wrote:\n> On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> OK, I merged the changes into the patches, with some minor changes to\n>> the wording etc.\n>>\n> \n> I think we can do 0001-Make-test_decoding-ddl.out-shorter-20230720\n> even without the rest of the patches. Isn't it a separate improvement?\n> \n\nTrue.\n\n> I see that origin filtering (origin=none) doesn't work with this\n> patch. You can see this by using the following statements:\n> Node-1:\n> postgres=# create sequence s;\n> CREATE SEQUENCE\n> postgres=# create publication mypub for all sequences;\n> CREATE PUBLICATION\n> \n> Node-2:\n> postgres=# create sequence s;\n> CREATE SEQUENCE\n> postgres=# create subscription mysub_sub connection '....' publication\n> mypub with (origin=none);\n> NOTICE: created replication slot \"mysub_sub\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=# create publication mypub_sub for all sequences;\n> CREATE PUBLICATION\n> \n> Node-1:\n> create subscription mysub_pub connection '...' 
publication mypub_sub\n> with (origin=none);\n> NOTICE: created replication slot \"mysub_pub\" on publisher\n> CREATE SUBSCRIPTION\n> \n> SELECT nextval('s') FROM generate_series(1,100);\n> \n> After that, you can check on the subscriber that sequences values are\n> overridden with older values:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 67 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 100 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 133 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 67 | 0 | t\n> (1 row)\n> \n> I haven't verified all the details but I think that is because we\n> don't set XLOG_INCLUDE_ORIGIN while logging sequence values.\n> \n\nHmmm, yeah. I guess we'll need to set XLOG_INCLUDE_ORIGIN with\nwal_level=logical.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jul 2023 12:54:36 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2023-Jul-20, Tomas Vondra wrote:\n\n> From 809d60be7e636b8505027ad87bcb9fc65224c47b Mon Sep 17 00:00:00 2001\n> From: Tomas Vondra <tomas.vondra@postgresql.org>\n> Date: Wed, 5 Apr 2023 22:49:41 +0200\n> Subject: [PATCH 1/6] Make test_decoding ddl.out shorter\n> \n> Some of the test_decoding test output was extremely wide, because it\n> deals with toasted values, and the aligned mode causes psql to produce\n> 200kB of dashes. Turn that off temporarily using \\pset to avoid it.\n\nDo you mind if I get this one pushed later today? Or feel free to push\nit yourself, if you want. It's an annoying patch to keep seeing posted\nover and over, with no further value. \n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n",
"msg_date": "Mon, 24 Jul 2023 13:14:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n> >\n> > PFA such edits in 0002 and 0006 patches. Let me know if those look\n> > correct. I think we\n> > need similar changes to the documentation and comments in other places.\n> >\n>\n> OK, I merged the changes into the patches, with some minor changes to\n> the wording etc.\n\nThanks.\n\n\n>\n> > In sequence_decode() we skip sequence changes when fast forwarding.\n> > Given that smgr_decode() is only to supplement sequence_decode(), I\n> > think it's correct to do the same in smgr_decode() as well. Simillarly\n> > skipping when we don't have full snapshot.\n> >\n>\n> I don't follow, smgr_decode already checks ctx->fast_forward.\n\nIn your earlier email you seemed to expressed some doubts about the\nchange skipping code in smgr_decode(). To that, I gave my own\nperspective of why the change skipping code in smgr_decode() is\ncorrect. I think smgr_decode is doing the right thing, IMO. No change\nrequired there.\n\n>\n> > Some minor comments on 0006 patch\n> >\n> > + /* make sure the relfilenode creation is associated with the XID */\n> > + if (XLogLogicalInfoActive())\n> > + GetCurrentTransactionId();\n> >\n> > I think this change is correct and is inline with similar changes in 0002. But\n> > I looked at other places from where DefineRelation() is called. For regular\n> > tables it is called from ProcessUtilitySlow() which in turn does not call\n> > GetCurrentTransactionId(). I am wondering whether we are just discovering a\n> > class of bugs caused by not associating an xid with a newly created\n> > relfilenode.\n> >\n>\n> Not sure. Why would it be a bug?\n\nThis discussion is unrelated to sequence decoding but let me add it\nhere. If we don't know the transaction ID that created a relfilenode,\nwe wouldn't know whether to roll back that creation if the transaction\ngets rolled back during recovery. 
But maybe that doesn't matter since\nthe relfilenode is not visible in any of the catalogs, so it just lies\nthere unused.\n\n\n>\n> > +void\n> > +ReorderBufferAddRelFileLocator(ReorderBuffer *rb, TransactionId xid,\n> > + RelFileLocator rlocator)\n> > +{\n> > ... snip ...\n> > +\n> > + /* sequence changes require a transaction */\n> > + if (xid == InvalidTransactionId)\n> > + return;\n> >\n> > IIUC, with your changes in DefineSequence() in this patch, this should not\n> > happen. So this condition will never be true. But in case it happens, this code\n> > will not add the relfilelocation to the hash table and we will deem the\n> > sequence change as non-transactional. Isn't it better to just throw an error\n> > and stop replication if that (ever) happens?\n> >\n>\n> It can't happen for sequence, but it may happen when creating a\n> non-sequence relfilenode. In a way, it's a way to skip (some)\n> unnecessary relfilenodes.\n\nAh! The comment is correct but cryptic. I didn't read it to mean this.\n\n> > + /*\n> > + * To support logical decoding of sequences, we require the sequence\n> > + * callback. We decide it here, but only check it later in the wrappers.\n> > + *\n> > + * XXX Isn't it wrong to define only one of those callbacks? Say we\n> > + * only define the stream_sequence_cb() - that may get strange results\n> > + * depending on what gets streamed. Either none or both?\n> > + *\n> > + * XXX Shouldn't sequence be defined at slot creation time, similar\n> > + * to two_phase? Probably not.\n> > + */\n> >\n> > Do you intend to keep these XXX's as is? My previous comments on this comment\n> > block are in [1].\n\nThis comment remains unanswered.\n\n> >\n> > In fact, given that whether or not sequences are replicated is decided by the\n> > protocol version, do we really need LogicalDecodingContext::sequences? Drawing\n> > parallel with WAL messages, I don't think it's needed.\n> >\n>\n> Right. 
We do that for two_phase because you can override that when\n> creating the subscription - sequences allowed that too initially, but\n> then we ditched that. So I don't think we need this.\n\nThen we should just remove that member and its references.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 24 Jul 2023 18:23:11 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/24/23 13:14, Alvaro Herrera wrote:\n> On 2023-Jul-20, Tomas Vondra wrote:\n> \n>> From 809d60be7e636b8505027ad87bcb9fc65224c47b Mon Sep 17 00:00:00 2001\n>> From: Tomas Vondra <tomas.vondra@postgresql.org>\n>> Date: Wed, 5 Apr 2023 22:49:41 +0200\n>> Subject: [PATCH 1/6] Make test_decoding ddl.out shorter\n>>\n>> Some of the test_decoding test output was extremely wide, because it\n>> deals with toasted values, and the aligned mode causes psql to produce\n>> 200kB of dashes. Turn that off temporarily using \\pset to avoid it.\n> \n> Do you mind if I get this one pushed later today? Or feel free to push\n> it yourself, if you want. It's an annoying patch to keep seeing posted\n> over and over, with no further value. \n> \n\nFeel free to push. It's your patch, after all.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jul 2023 14:54:37 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jul 20, 2023 at 10:19 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> FWIW there's two questions related to the switch to XLOG_SMGR_CREATE.\n>\n> 1) Does smgr_decode() need to do the same block as sequence_decode()?\n>\n> /* Skip the change if already processed (per the snapshot). */\n> if (transactional &&\n> !SnapBuildProcessChange(builder, xid, buf->origptr))\n> return;\n> else if (!transactional &&\n> (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n> SnapBuildXactNeedsSkip(builder, buf->origptr)))\n> return;\n>\n> I don't think it does. Also, we don't have any transactional flag here.\n> Or rather, everything is transactional ...\n\nRight.\n\n>\n>\n> 2) Currently, the sequences hash table is in reorderbuffer, i.e. global.\n> I was thinking maybe we should have it in the transaction (because we\n> need to do cleanup at the end). It seem a bit inconvenient, because then\n> we'd need to either search htabs in all subxacts, or transfer the\n> entries to the top-level xact (otoh, we already do that with snapshots),\n> and cleanup on abort.\n>\n> What do you think?\n\nHash table per transaction seems saner design. Adding it to the top\nlevel transaction should be fine. The entry will contain an XID\nanyway. If we add it to every subtransaction we will need to search\nhash table in each of the subtransactions when deciding whether a\nsequence change is transactional or not. Top transaction is a\nreasonable trade off.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 24 Jul 2023 18:27:13 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/24/23 08:31, Amit Kapila wrote:\n> On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> OK, I merged the changes into the patches, with some minor changes to\n>> the wording etc.\n>>\n> \n> I think we can do 0001-Make-test_decoding-ddl.out-shorter-20230720\n> even without the rest of the patches. Isn't it a separate improvement?\n> \n> I see that origin filtering (origin=none) doesn't work with this\n> patch. You can see this by using the following statements:\n> Node-1:\n> postgres=# create sequence s;\n> CREATE SEQUENCE\n> postgres=# create publication mypub for all sequences;\n> CREATE PUBLICATION\n> \n> Node-2:\n> postgres=# create sequence s;\n> CREATE SEQUENCE\n> postgres=# create subscription mysub_sub connection '....' publication\n> mypub with (origin=none);\n> NOTICE: created replication slot \"mysub_sub\" on publisher\n> CREATE SUBSCRIPTION\n> postgres=# create publication mypub_sub for all sequences;\n> CREATE PUBLICATION\n> \n> Node-1:\n> create subscription mysub_pub connection '...' 
publication mypub_sub\n> with (origin=none);\n> NOTICE: created replication slot \"mysub_pub\" on publisher\n> CREATE SUBSCRIPTION\n> \n> SELECT nextval('s') FROM generate_series(1,100);\n> \n> After that, you can check on the subscriber that sequences values are\n> overridden with older values:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 67 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 100 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 133 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 67 | 0 | t\n> (1 row)\n> \n> I haven't verified all the details but I think that is because we\n> don't set XLOG_INCLUDE_ORIGIN while logging sequence values.\n> \n\nGood point. Attached is a patch that adds XLOG_INCLUDE_ORIGIN to\nsequence changes. I considered doing that only for wal_level=logical,\nbut we don't do that elsewhere. Also, I didn't do that for smgr_create,\nbecause we don't actually replicate that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 24 Jul 2023 16:52:05 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/24/23 14:53, Ashutosh Bapat wrote:\n> On Thu, Jul 20, 2023 at 8:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>>\n>>> PFA such edits in 0002 and 0006 patches. Let me know if those look\n>>> correct. I think we\n>>> need similar changes to the documentation and comments in other places.\n>>>\n>>\n>> OK, I merged the changes into the patches, with some minor changes to\n>> the wording etc.\n> \n> Thanks.\n> \n> \n>>\n>>> In sequence_decode() we skip sequence changes when fast forwarding.\n>>> Given that smgr_decode() is only to supplement sequence_decode(), I\n>>> think it's correct to do the same in smgr_decode() as well. Simillarly\n>>> skipping when we don't have full snapshot.\n>>>\n>>\n>> I don't follow, smgr_decode already checks ctx->fast_forward.\n> \n> In your earlier email you seemed to expressed some doubts about the\n> change skipping code in smgr_decode(). To that, I gave my own\n> perspective of why the change skipping code in smgr_decode() is\n> correct. I think smgr_decode is doing the right thing, IMO. No change\n> required there.\n> \n\nI think that was referring to the skipping we do for logical messages:\n\n if (message->transactional &&\n !SnapBuildProcessChange(builder, xid, buf->origptr))\n return;\n else if (!message->transactional &&\n (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n SnapBuildXactNeedsSkip(builder, buf->origptr)))\n return;\n\nI concluded we don't need to do that here.\n\n>>\n>>> Some minor comments on 0006 patch\n>>>\n>>> + /* make sure the relfilenode creation is associated with the XID */\n>>> + if (XLogLogicalInfoActive())\n>>> + GetCurrentTransactionId();\n>>>\n>>> I think this change is correct and is inline with similar changes in 0002. But\n>>> I looked at other places from where DefineRelation() is called. For regular\n>>> tables it is called from ProcessUtilitySlow() which in turn does not call\n>>> GetCurrentTransactionId(). 
I am wondering whether we are just discovering a\n>>> class of bugs caused by not associating an xid with a newly created\n>>> relfilenode.\n>>>\n>>\n>> Not sure. Why would it be a bug?\n> \n> This discussion is unrelated to sequence decoding but let me add it\n> here. If we don't know the transaction ID that created a relfilenode,\n> we wouldn't know whether to roll back that creation if the transaction\n> gets rolled back during recovery. But maybe that doesn't matter since\n> the relfilenode is not visible in any of the catalogs, so it just lies\n> there unused.\n> \n\nI think that's unrelated to this patch.\n\n> \n>>\n>>> +void\n>>> +ReorderBufferAddRelFileLocator(ReorderBuffer *rb, TransactionId xid,\n>>> + RelFileLocator rlocator)\n>>> +{\n>>> ... snip ...\n>>> +\n>>> + /* sequence changes require a transaction */\n>>> + if (xid == InvalidTransactionId)\n>>> + return;\n>>>\n>>> IIUC, with your changes in DefineSequence() in this patch, this should not\n>>> happen. So this condition will never be true. But in case it happens, this code\n>>> will not add the relfilelocation to the hash table and we will deem the\n>>> sequence change as non-transactional. Isn't it better to just throw an error\n>>> and stop replication if that (ever) happens?\n>>>\n>>\n>> It can't happen for sequence, but it may happen when creating a\n>> non-sequence relfilenode. In a way, it's a way to skip (some)\n>> unnecessary relfilenodes.\n> \n> Ah! The comment is correct but cryptic. I didn't read it to mean this.\n> \n\nOK, I'll improve the comment.\n\n>>> + /*\n>>> + * To support logical decoding of sequences, we require the sequence\n>>> + * callback. We decide it here, but only check it later in the wrappers.\n>>> + *\n>>> + * XXX Isn't it wrong to define only one of those callbacks? Say we\n>>> + * only define the stream_sequence_cb() - that may get strange results\n>>> + * depending on what gets streamed. 
Either none or both?\n>>> + *\n>>> + * XXX Shouldn't sequence be defined at slot creation time, similar\n>>> + * to two_phase? Probably not.\n>>> + */\n>>>\n>>> Do you intend to keep these XXX's as is? My previous comments on this comment\n>>> block are in [1].\n> \n> This comment remains unanswered.\n> \n\nI think the conclusion was we don't need to do that. I forgot to remove\nthe comment, though.\n\n>>>\n>>> In fact, given that whether or not sequences are replicated is decided by the\n>>> protocol version, do we really need LogicalDecodingContext::sequences? Drawing\n>>> parallel with WAL messages, I don't think it's needed.\n>>>\n>>\n>> Right. We do that for two_phase because you can override that when\n>> creating the subscription - sequences allowed that too initially, but\n>> then we ditched that. So I don't think we need this.\n> \n> Then we should just remove that member and its references.\n> \n\nThe member is still needed - it says whether the plugin has callbacks\nfor sequence decoding or not (just like we have a flag for streaming,\nfor example). I see the XXX comment in sequence_decode() is no longer\nneeded, we rely on protocol versioning.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jul 2023 17:27:40 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2023-Jul-24, Tomas Vondra wrote:\n\n> On 7/24/23 13:14, Alvaro Herrera wrote:\n\n> > Do you mind if I get this one pushed later today? Or feel free to push\n> > it yourself, if you want. It's an annoying patch to keep seeing posted\n> > over and over, with no further value. \n> \n> Feel free to push. It's your patch, after all.\n\nThanks, done.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Learn about compilers. Then everything looks like either a compiler or\na database, and now you have two problems but one of them is fun.\"\n https://twitter.com/thingskatedid/status/1456027786158776329\n\n\n",
"msg_date": "Mon, 24 Jul 2023 17:57:33 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/24/23 12:40, Amit Kapila wrote:\n> On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> 0005, 0006 and 0007 are all related to the initial sequence sync. [3]\n>> resulted in 0007 and I think we need it. That leaves 0005 and 0006 to\n>> be reviewed in this response.\n>>\n>> I followed the discussion starting [1] till [2]. The second one\n>> mentions the interlock mechanism which has been implemented in 0005\n>> and 0006. While I don't have an objection to allowing LOCKing a\n>> sequence using the LOCK command, I am not sure whether it will\n>> actually work or is even needed.\n>>\n>> The problem described in [1] seems to be the same as the problem\n>> described in [2]. In both cases we see the sequence moving backwards\n>> during CATCHUP. At the end of catchup the sequence is in the right\n>> state in both the cases.\n>>\n> \n> I think we could see backward sequence value even after the catchup\n> phase (after the sync worker is exited and or the state of rel is\n> marked as 'ready' in pg_subscription_rel). The point is that there is\n> no guarantee that we will process all the pending WAL before\n> considering the sequence state is 'SYNCDONE' and or 'READY'. For\n> example, after copy_sequence, I see values like:\n> \n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 165 | 0 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 166\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 167\n> (1 row)\n> postgres=# select currval('s');\n> currval\n> ---------\n> 167\n> (1 row)\n> \n> Then during the catchup phase:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 33 | 0 | t\n> (1 row)\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> \n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16394 | 16390 | r | 0/16374E8\n> 16394 | 16393 | s | 0/1637700\n> (2 rows)\n> \n> postgres=# select * from pg_subscription_rel;\n> srsubid | srrelid | srsubstate | srsublsn\n> ---------+---------+------------+-----------\n> 16394 | 16390 | r | 0/16374E8\n> 16394 | 16393 | r | 0/1637700\n> (2 rows)\n> \n> Here Sequence relid id 16393. You can see sequence state is marked as ready.\n> \n\nRight, but \"READY\" just means the apply caught up to the LSN where the\nsync finished ...\n\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> \n> Even after that, see below the value of the sequence is still not\n> caught up. Later, when the apply worker processes all the WAL, the\n> sequence state will be caught up.\n> \n\nAnd how is this different from what tablesync does for tables? For that\n'r' also does not mean it's fully caught up, IIRC. What matters is\nwhether the sequence since this moment can go back. And I don't think it\ncan, because that would require replaying changes from before we did\ncopy_sequence ...\n\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 165 | 0 | t\n> (1 row)\n> \n> So, there will be a window where the sequence won't be caught up for a\n> certain period of time and any usage of it (even after the sync is\n> finished) during that time could result in inconsistent behaviour.\n> \n> The other question is whether it is okay to allow the sequence to go\n> backwards even during the initial sync phase? The reason I am asking\n> this question is that for the time sequence value moves backwards, one\n> is allowed to use it on the subscriber which will result in using\n> out-of-sequence values. For example, immediately, after copy_sequence\n> the values look like this:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 133 | 32 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 134\n> (1 row)\n> postgres=# select currval('s');\n> currval\n> ---------\n> 134\n> (1 row)\n> \n> But then during the sync phase, it can go backwards and one is allowed\n> to use it on the subscriber:\n> postgres=# select * from s;\n> last_value | log_cnt | is_called\n> ------------+---------+-----------\n> 66 | 0 | t\n> (1 row)\n> postgres=# select nextval('s');\n> nextval\n> ---------\n> 67\n> (1 row)\n> \n\nAs I wrote earlier, I think the agreement was we make no guarantees\nabout what happens during the sync.\n\nAlso, not sure what you mean by \"no one is allowed to use it on\nsubscriber\" - that is only allowed after a failover/switchover, after\nsequence sync completes.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 24 Jul 2023 18:01:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 9:32 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/24/23 12:40, Amit Kapila wrote:\n> > On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Even after that, see below the value of the sequence is still not\n> > caught up. Later, when the apply worker processes all the WAL, the\n> > sequence state will be caught up.\n> >\n>\n> And how is this different from what tablesync does for tables? For that\n> 'r' also does not mean it's fully caught up, IIRC. What matters is\n> whether the sequence since this moment can go back. And I don't think it\n> can, because that would require replaying changes from before we did\n> copy_sequence ...\n>\n\nFor sequences, it is quite possible that we replay WAL from before the\ncopy_sequence whereas the same is not true for tables (w.r.t\ncopy_table()). This is because for tables we have a kind of interlock\nw.r.t LSN returned via create_slot (say this value of LSN is LSN1),\nbasically, the walsender corresponding to tablesync worker in\npublisher won't send any WAL before that LSN whereas the same is not\ntrue for sequences. Also, even if apply worker can receive WAL before\ncopy_table, it won't apply that as that would be behind the LSN1 and\nthe same is not true for sequences. So, for tables, we will never go\nback to a state before the copy_table() but for sequences, we can go\nback to a state before copy_sequence().\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jul 2023 11:58:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Jul 24, 2023 at 4:22 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/24/23 12:40, Amit Kapila wrote:\n> > On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> >>\n> >> 0005, 0006 and 0007 are all related to the initial sequence sync. [3]\n> >> resulted in 0007 and I think we need it. That leaves 0005 and 0006 to\n> >> be reviewed in this response.\n> >>\n> >> I followed the discussion starting [1] till [2]. The second one\n> >> mentions the interlock mechanism which has been implemented in 0005\n> >> and 0006. While I don't have an objection to allowing LOCKing a\n> >> sequence using the LOCK command, I am not sure whether it will\n> >> actually work or is even needed.\n> >>\n> >> The problem described in [1] seems to be the same as the problem\n> >> described in [2]. In both cases we see the sequence moving backwards\n> >> during CATCHUP. At the end of catchup the sequence is in the right\n> >> state in both the cases.\n> >>\n> >\n> > I think we could see backward sequence value even after the catchup\n> > phase (after the sync worker is exited and or the state of rel is\n> > marked as 'ready' in pg_subscription_rel). The point is that there is\n> > no guarantee that we will process all the pending WAL before\n> > considering the sequence state is 'SYNCDONE' and or 'READY'. For\n> > example, after copy_sequence, I see values like:\n> >\n> > postgres=# select * from s;\n> > last_value | log_cnt | is_called\n> > ------------+---------+-----------\n> > 165 | 0 | t\n> > (1 row)\n> > postgres=# select nextval('s');\n> > nextval\n> > ---------\n> > 166\n> > (1 row)\n> > postgres=# select nextval('s');\n> > nextval\n> > ---------\n> > 167\n> > (1 row)\n> > postgres=# select currval('s');\n> > currval\n> > ---------\n> > 167\n> > (1 row)\n> >\n> > Then during the catchup phase:\n> > postgres=# select * from s;\n> > last_value | log_cnt | is_called\n> > ------------+---------+-----------\n> > 33 | 0 | t\n> > (1 row)\n> > postgres=# select * from s;\n> > last_value | log_cnt | is_called\n> > ------------+---------+-----------\n> > 66 | 0 | t\n> > (1 row)\n> >\n> > postgres=# select * from pg_subscription_rel;\n> > srsubid | srrelid | srsubstate | srsublsn\n> > ---------+---------+------------+-----------\n> > 16394 | 16390 | r | 0/16374E8\n> > 16394 | 16393 | s | 0/1637700\n> > (2 rows)\n> >\n> > postgres=# select * from pg_subscription_rel;\n> > srsubid | srrelid | srsubstate | srsublsn\n> > ---------+---------+------------+-----------\n> > 16394 | 16390 | r | 0/16374E8\n> > 16394 | 16393 | r | 0/1637700\n> > (2 rows)\n> >\n> > Here Sequence relid id 16393. You can see sequence state is marked as ready.\n> >\n> > postgres=# select * from s;\n> > last_value | log_cnt | is_called\n> > ------------+---------+-----------\n> > 66 | 0 | t\n> > (1 row)\n> >\n> > Even after that, see below the value of the sequence is still not\n> > caught up. Later, when the apply worker processes all the WAL, the\n> > sequence state will be caught up.\n> >\n> > postgres=# select * from s;\n> > last_value | log_cnt | is_called\n> > ------------+---------+-----------\n> > 165 | 0 | t\n> > (1 row)\n> >\n> > So, there will be a window where the sequence won't be caught up for a\n> > certain period of time and any usage of it (even after the sync is\n> > finished) during that time could result in inconsistent behaviour.\n> >\n>\n> I'm rather confused about which node these queries are executed on.\n> Presumably some of it is on publisher, some on subscriber?\n>\n\nThese are all on the subscriber.\n\n> Can you create a reproducer (TAP test demonstrating this?) I guess it\n> might require adding some sleeps to hit the right timing ...\n>\n\nI have used the debugger to reproduce this as it needs quite some\ncoordination. I just wanted to see if the sequence can go backward and\ndidn't catch up completely before the sequence state is marked\n'ready'. On the publisher side, I created a publication with a table\nand a sequence. Then did the following steps:\nSELECT nextval('s') FROM generate_series(1,50);\ninsert into t1 values(1);\nSELECT nextval('s') FROM generate_series(51,150);\n\nThen on the subscriber side with some debugging aid, I could find the\nvalues in the sequence shown in the previous email. Sorry, I haven't\nrecorded each and every step but, if you think it helps, I can again\ntry to reproduce it and share the steps.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 25 Jul 2023 15:50:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/25/23 08:28, Amit Kapila wrote:\n> On Mon, Jul 24, 2023 at 9:32 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/24/23 12:40, Amit Kapila wrote:\n>>> On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n>>> <ashutosh.bapat.oss@gmail.com> wrote:\n>>>\n>>> Even after that, see below the value of the sequence is still not\n>>> caught up. Later, when the apply worker processes all the WAL, the\n>>> sequence state will be caught up.\n>>>\n>>\n>> And how is this different from what tablesync does for tables? For that\n>> 'r' also does not mean it's fully caught up, IIRC. What matters is\n>> whether the sequence since this moment can go back. And I don't think it\n>> can, because that would require replaying changes from before we did\n>> copy_sequence ...\n>>\n> \n> For sequences, it is quite possible that we replay WAL from before the\n> copy_sequence whereas the same is not true for tables (w.r.t\n> copy_table()). This is because for tables we have a kind of interlock\n> w.r.t LSN returned via create_slot (say this value of LSN is LSN1),\n> basically, the walsender corresponding to tablesync worker in\n> publisher won't send any WAL before that LSN whereas the same is not\n> true for sequences. Also, even if apply worker can receive WAL before\n> copy_table, it won't apply that as that would be behind the LSN1 and\n> the same is not true for sequences. So, for tables, we will never go\n> back to a state before the copy_table() but for sequences, we can go\n> back to a state before copy_sequence().\n> \n\nRight. I think the important detail is that during sync we have three\nimportant LSNs\n\n- LSN1 where the slot is created\n- LSN2 where the copy happens\n- LSN3 where we consider the sync completed\n\nFor tables, LSN1 == LSN2, because the data is copied using the\nsnapshot from the temporary slot. And (LSN1 <= LSN3).\n\nBut for sequences, the copy happens after the slot creation, possibly\nwith (LSN1 < LSN2). And because LSN3 comes from the main subscription\n(which may be a bit behind, for whatever reason), it may happen that\n\n (LSN1 < LSN3 < LSN2)\n\nThen the sync ends at LSN3, but that means all sequence changes between\nLSN3 and LSN2 will be applied \"again\" making the sequence go backwards.\n\nIMHO the right fix is to make sure LSN3 >= LSN2 (for sequences).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 Jul 2023 13:59:42 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 5:29 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Right. I think the important detail is that during sync we have three\n> important LSNs\n>\n> - LSN1 where the slot is created\n> - LSN2 where the copy happens\n> - LSN3 where we consider the sync completed\n>\n> For tables, LSN1 == LSN2, because the data is completed using the\n> snapshot from the temporary slot. And (LSN1 <= LSN3).\n>\n> But for sequences, the copy happens after the slot creation, possibly\n> with (LSN1 < LSN2). And because LSN3 comes from the main subscription\n> (which may be a bit behind, for whatever reason), it may happen that\n>\n> (LSN1 < LSN3 < LSN2)\n>\n> The the sync ends at LSN3, but that means all sequence changes between\n> LSN3 and LSN2 will be applied \"again\" making the sequence go away.\n>\n> IMHO the right fix is to make sure LSN3 >= LSN2 (for sequences).\n\nBack in this thread, an approach was proposed to use the page LSN (LSN2\nabove) to make sure that no change before LSN2 is applied on the\nsubscriber. The approach was discussed in emails around [1] and\ndiscarded later for no reason. I think that approach has some merit.\n\n[1] https://www.postgresql.org/message-id/flat/21c87ea8-86c9-80d6-bc78-9b95033ca00b%40enterprisedb.com#36bb9c7968b7af577dc080950761290d\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Tue, 25 Jul 2023 18:48:27 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/25/23 15:18, Ashutosh Bapat wrote:\n>\n> ...\n>\n>> But for sequences, the copy happens after the slot creation, possibly\n>> with (LSN1 < LSN2). And because LSN3 comes from the main subscription\n>> (which may be a bit behind, for whatever reason), it may happen that\n>>\n>> (LSN1 < LSN3 < LSN2)\n>>\n>> The the sync ends at LSN3, but that means all sequence changes between\n>> LSN3 and LSN2 will be applied \"again\" making the sequence go away.\n>>\n>> IMHO the right fix is to make sure LSN3 >= LSN2 (for sequences).\n> \n\nDo you agree this scheme would be correct?\n\n> Back in this thread, an approach to use page LSN (LSN2 above) to make\n> sure that no change before LSN2 is applied on subscriber. The approach\n> was discussed in emails around [1] and discarded later for no reason.\n> I think that approach has some merit.\n> \n> [1] https://www.postgresql.org/message-id/flat/21c87ea8-86c9-80d6-bc78-9b95033ca00b%40enterprisedb.com#36bb9c7968b7af577dc080950761290d\n> \n\nThat doesn't seem to be the correct link ... IIRC the page LSN was\ndiscussed as a way to skip changes up to the point when the COPY was\ndone. I believe it might work with the scheme I described above too.\n\nThe trouble is we don't have an interface to select both the sequence\nstate and the page LSN. It's probably not hard to add (extend the\nread_seq_tuple() to also return the LSN, and adding a SQL function), but\nI don't think it'd add much value, compared to just getting the current\ninsert LSN after the COPY.\n\nYes, the current LSN may be a bit higher, so we may need to apply a\ncouple changes to get into \"ready\" state. But we read it right after\ncopy_sequence() so how much can happen in between?\n\nAlso, we can get into similar state anyway - the main subscription can\nget ahead, at which point the sync has to catchup to it.\n\nThe attached patch (part 0007) does it this way. Can you try if you can\nstill reproduce the \"backwards\" movement with this version?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 25 Jul 2023 18:24:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/24/23 14:57, Ashutosh Bapat wrote:\n> ...\n> \n>>\n>>\n>> 2) Currently, the sequences hash table is in reorderbuffer, i.e. global.\n>> I was thinking maybe we should have it in the transaction (because we\n>> need to do cleanup at the end). It seem a bit inconvenient, because then\n>> we'd need to either search htabs in all subxacts, or transfer the\n>> entries to the top-level xact (otoh, we already do that with snapshots),\n>> and cleanup on abort.\n>>\n>> What do you think?\n> \n> Hash table per transaction seems saner design. Adding it to the top\n> level transaction should be fine. The entry will contain an XID\n> anyway. If we add it to every subtransaction we will need to search\n> hash table in each of the subtransactions when deciding whether a\n> sequence change is transactional or not. Top transaction is a\n> reasonable trade off.\n> \n\nIt's not clear to me what design you're proposing, exactly.\n\nIf we track it in top-level transactions, then we'd need copy the data\nwhenever a transaction is assigned as a child, and perhaps also remove\nit when there's a subxact abort.\n\nAnd we'd need to still search the hashes in all toplevel transactions on\nevery sequence increment - in principle we can't have increment for a\nsequence created in another in-progress transaction, but maybe it's just\nnot assigned yet.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 25 Jul 2023 18:32:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Here's a somewhat cleaned up version of the patch series, with some of\nthe smaller \"rework\" patches (protocol versioning, origins, smgr_create,\n...) merged into the appropriate part. I've kept the bit adding separate\ntablesync LSN.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 25 Jul 2023 23:12:53 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 5:29 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/25/23 08:28, Amit Kapila wrote:\n> > On Mon, Jul 24, 2023 at 9:32 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 7/24/23 12:40, Amit Kapila wrote:\n> >>> On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> >>> <ashutosh.bapat.oss@gmail.com> wrote:\n> >>>\n> >>> Even after that, see below the value of the sequence is still not\n> >>> caught up. Later, when the apply worker processes all the WAL, the\n> >>> sequence state will be caught up.\n> >>>\n> >>\n> >> And how is this different from what tablesync does for tables? For that\n> >> 'r' also does not mean it's fully caught up, IIRC. What matters is\n> >> whether the sequence since this moment can go back. And I don't think it\n> >> can, because that would require replaying changes from before we did\n> >> copy_sequence ...\n> >>\n> >\n> > For sequences, it is quite possible that we replay WAL from before the\n> > copy_sequence whereas the same is not true for tables (w.r.t\n> > copy_table()). This is because for tables we have a kind of interlock\n> > w.r.t LSN returned via create_slot (say this value of LSN is LSN1),\n> > basically, the walsender corresponding to tablesync worker in\n> > publisher won't send any WAL before that LSN whereas the same is not\n> > true for sequences. Also, even if apply worker can receive WAL before\n> > copy_table, it won't apply that as that would be behind the LSN1 and\n> > the same is not true for sequences. So, for tables, we will never go\n> > back to a state before the copy_table() but for sequences, we can go\n> > back to a state before copy_sequence().\n> >\n>\n> Right. I think the important detail is that during sync we have three\n> important LSNs\n>\n> - LSN1 where the slot is created\n> - LSN2 where the copy happens\n> - LSN3 where we consider the sync completed\n>\n> For tables, LSN1 == LSN2, because the data is completed using the\n> snapshot from the temporary slot. And (LSN1 <= LSN3).\n>\n> But for sequences, the copy happens after the slot creation, possibly\n> with (LSN1 < LSN2). And because LSN3 comes from the main subscription\n> (which may be a bit behind, for whatever reason), it may happen that\n>\n> (LSN1 < LSN3 < LSN2)\n>\n> The the sync ends at LSN3, but that means all sequence changes between\n> LSN3 and LSN2 will be applied \"again\" making the sequence go away.\n>\n\nYeah, the problem is something as you explained but an additional\nminor point is that for sequences we also do end up applying the WAL\nbetween LSN1 and LSN3 which makes it go backwards. The ideal way is\nthat sequences on subscribers never go backward in a way that is\nvisible to users. I will share my thoughts after studying your\nproposal in a later email.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jul 2023 09:37:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 25, 2023 at 5:29 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 7/25/23 08:28, Amit Kapila wrote:\n> > > On Mon, Jul 24, 2023 at 9:32 PM Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >> On 7/24/23 12:40, Amit Kapila wrote:\n> > >>> On Wed, Jul 5, 2023 at 8:21 PM Ashutosh Bapat\n> > >>> <ashutosh.bapat.oss@gmail.com> wrote:\n> > >>>\n> > >>> Even after that, see below the value of the sequence is still not\n> > >>> caught up. Later, when the apply worker processes all the WAL, the\n> > >>> sequence state will be caught up.\n> > >>>\n> > >>\n> > >> And how is this different from what tablesync does for tables? For that\n> > >> 'r' also does not mean it's fully caught up, IIRC. What matters is\n> > >> whether the sequence since this moment can go back. And I don't think it\n> > >> can, because that would require replaying changes from before we did\n> > >> copy_sequence ...\n> > >>\n> > >\n> > > For sequences, it is quite possible that we replay WAL from before the\n> > > copy_sequence whereas the same is not true for tables (w.r.t\n> > > copy_table()). This is because for tables we have a kind of interlock\n> > > w.r.t LSN returned via create_slot (say this value of LSN is LSN1),\n> > > basically, the walsender corresponding to tablesync worker in\n> > > publisher won't send any WAL before that LSN whereas the same is not\n> > > true for sequences. Also, even if apply worker can receive WAL before\n> > > copy_table, it won't apply that as that would be behind the LSN1 and\n> > > the same is not true for sequences. So, for tables, we will never go\n> > > back to a state before the copy_table() but for sequences, we can go\n> > > back to a state before copy_sequence().\n> > >\n> >\n> > Right. I think the important detail is that during sync we have three\n> > important LSNs\n> >\n> > - LSN1 where the slot is created\n> > - LSN2 where the copy happens\n> > - LSN3 where we consider the sync completed\n> >\n> > For tables, LSN1 == LSN2, because the data is completed using the\n> > snapshot from the temporary slot. And (LSN1 <= LSN3).\n> >\n> > But for sequences, the copy happens after the slot creation, possibly\n> > with (LSN1 < LSN2). And because LSN3 comes from the main subscription\n> > (which may be a bit behind, for whatever reason), it may happen that\n> >\n> > (LSN1 < LSN3 < LSN2)\n> >\n> > The the sync ends at LSN3, but that means all sequence changes between\n> > LSN3 and LSN2 will be applied \"again\" making the sequence go away.\n> >\n>\n> Yeah, the problem is something as you explained but an additional\n> minor point is that for sequences we also do end up applying the WAL\n> between LSN1 and LSN3 which makes it go backwards.\n>\n\nI was reading this email thread and found the email by Andres [1]\nwhich seems to me to say the same thing: \"I assume that part of the\ninitial sync would have to be a new sequence synchronization step that\nreads all the sequence states on the publisher and ensures that the\nsubscriber sequences are at the same point. There's a bit of\ntrickiness there, but it seems entirely doable. The logical\nreplication replay support for sequences will have to be a bit careful\nabout not decreasing the subscriber's sequence values - the standby\ninitially will be ahead of the\nincrements we'll see in the WAL.\". Now, IIUC this means that even\nbefore the sequence is marked as SYNCDONE, it shouldn't go backward.\n\n[1]: https://www.postgresql.org/message-id/20221117024357.ljjme6v75mny2j6u%40awork3.anarazel.de\n\nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 26 Jul 2023 12:57:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/26/23 09:27, Amit Kapila wrote:\n> On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> ...\n>>\n> \n> I was reading this email thread and found the email by Andres [1]\n> which seems to me to say the same thing: \"I assume that part of the\n> initial sync would have to be a new sequence synchronization step that\n> reads all the sequence states on the publisher and ensures that the\n> subscriber sequences are at the same point. There's a bit of\n> trickiness there, but it seems entirely doable. The logical\n> replication replay support for sequences will have to be a bit careful\n> about not decreasing the subscriber's sequence values - the standby\n> initially will be ahead of the\n> increments we'll see in the WAL.\". Now, IIUC this means that even\n> before the sequence is marked as SYNCDONE, it shouldn't go backward.\n> \n\nWell, I could argue that's more an opinion, and I'm not sure it really\ncontradicts the idea that the sequence should not go backwards only\nafter the sync completes.\n\nAnyway, I was thinking about this a bit more, and it seems it's not as\ndifficult to use the page LSN to ensure sequences don't go backwards.\nThe 0005 change does that, by:\n\n1) adding pg_sequence_state, that returns both the sequence state and\n the page LSN\n\n2) copy_sequence returns the page LSN\n\n3) tablesync then sets this LSN as origin_startpos (which for tables is\n just the LSN of the replication slot)\n\nAFAICS this makes it work - we start decoding at the page LSN, so that\nwe skip the increments that could lead to the sequence going backwards.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 26 Jul 2023 17:18:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/26/23 09:27, Amit Kapila wrote:\n> > On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Anyway, I was thinking about this a bit more, and it seems it's not as\n> difficult to use the page LSN to ensure sequences don't go backwards.\n>\n\nWhile studying the changes for this proposal and related areas, I have\na few comments:\n1. I think you need to advance the origin if it is changed due to\ncopy_sequence(), otherwise, if the sync worker restarts after\nSUBREL_STATE_FINISHEDCOPY, then it will restart from the slot's LSN\nvalue.\n\n2. Between the time of SYNCDONE and READY state, the patch can skip\napplying non-transactional sequence changes even if it should apply\nit. The reason is that during that state change\nshould_apply_changes_for_rel() decides whether to apply change based\non the value of remote_final_lsn which won't be set for\nnon-transactional change. I think we need to send the start LSN of a\nnon-transactional record and then use that as remote_final_lsn for\nsuch a change.\n\n3. For non-transactional sequence change apply, we don't set\nreplorigin_session_origin_lsn/replorigin_session_origin_timestamp as\nwe are doing in apply_handle_commit_internal() before calling\nCommitTransactionCommand(). So, that can lead to the origin moving\nbackwards after restart which will lead to requesting and applying the\nsame changes again and for that period of time sequence can go\nbackwards. This needs some more thought as to what is the correct\nbehaviour/solution for this.\n\n4. BTW, while checking this behaviour, I noticed that the initial sync\nworker for sequence mentions the table in the LOG message: \"LOG:\nlogical replication table synchronization worker for subscription\n\"mysub\", table \"s\" has finished\". Won't it be better here to refer to\nit as a sequence?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 28 Jul 2023 15:12:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 10:02 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/24/23 14:57, Ashutosh Bapat wrote:\n> > ...\n> >\n> >>\n> >>\n> >> 2) Currently, the sequences hash table is in reorderbuffer, i.e. global.\n> >> I was thinking maybe we should have it in the transaction (because we\n> >> need to do cleanup at the end). It seem a bit inconvenient, because then\n> >> we'd need to either search htabs in all subxacts, or transfer the\n> >> entries to the top-level xact (otoh, we already do that with snapshots),\n> >> and cleanup on abort.\n> >>\n> >> What do you think?\n> >\n> > Hash table per transaction seems saner design. Adding it to the top\n> > level transaction should be fine. The entry will contain an XID\n> > anyway. If we add it to every subtransaction we will need to search\n> > hash table in each of the subtransactions when deciding whether a\n> > sequence change is transactional or not. Top transaction is a\n> > reasonable trade off.\n> >\n>\n> It's not clear to me what design you're proposing, exactly.\n>\n> If we track it in top-level transactions, then we'd need copy the data\n> whenever a transaction is assigned as a child, and perhaps also remove\n> it when there's a subxact abort.\n\nI thought, esp. with your changes to assign xid, we will always know\nthe top level transaction when a sequence is assigned a relfilenode.\nSo the relfilenodes will always get added to the correct hash directly.\nI didn't imagine a case where we will need to copy the hash table from\nsub-transaction to top transaction. If that's true, yes it's\ninconvenient.\n\nAs to the abort, don't we already remove entries on subtxn abort?\nHaving per transaction hash table doesn't seem to change anything\nmuch.\n\n>\n> And we'd need to still search the hashes in all toplevel transactions on\n> every sequence increment - in principle we can't have increment for a\n> sequence created in another in-progress transaction, but maybe it's just\n> not assigned yet.\n\nWe hold a strong lock on sequence when changing its relfilenode. The\nsequence whose relfilenode is being changed can not be accessed by any\nconcurrent transaction. So I am not able to understand what you are\ntrying to say.\n\nI think per (top level) transaction hash table is cleaner design. It\nputs the hash table where it should be. But if that makes code\ndifficult, current design works too.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 28 Jul 2023 18:05:54 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/28/23 11:42, Amit Kapila wrote:\n> On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/26/23 09:27, Amit Kapila wrote:\n>>> On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> Anyway, I was thinking about this a bit more, and it seems it's not as\n>> difficult to use the page LSN to ensure sequences don't go backwards.\n>>\n> \n> While studying the changes for this proposal and related areas, I have\n> a few comments:\n> 1. I think you need to advance the origin if it is changed due to\n> copy_sequence(), otherwise, if the sync worker restarts after\n> SUBREL_STATE_FINISHEDCOPY, then it will restart from the slot's LSN\n> value.\n> \n\nTrue, we want to restart at the new origin_startpos.\n\n> 2. Between the time of SYNCDONE and READY state, the patch can skip\n> applying non-transactional sequence changes even if it should apply\n> it. The reason is that during that state change\n> should_apply_changes_for_rel() decides whether to apply change based\n> on the value of remote_final_lsn which won't be set for\n> non-transactional change. I think we need to send the start LSN of a\n> non-transactional record and then use that as remote_final_lsn for\n> such a change.\n\nGood catch. remote_final_lsn is set in apply_handle_begin, but that\nwon't happen for sequences. We're already sending the LSN, but\nlogicalrep_read_sequence ignores it - it should be enough to add it to\nLogicalRepSequence and then set it in apply_handle_sequence().\n\n> \n> 3. For non-transactional sequence change apply, we don't set\n> replorigin_session_origin_lsn/replorigin_session_origin_timestamp as\n> we are doing in apply_handle_commit_internal() before calling\n> CommitTransactionCommand(). So, that can lead to the origin moving\n> backwards after restart which will lead to requesting and applying the\n> same changes again and for that period of time sequence can go\n> backwards. 
This needs some more thought as to what is the correct\n> behaviour/solution for this.\n> \n\nI think saying \"origin moves backwards\" is a bit misleading. AFAICS the\norigin position is not actually moving backwards, it's more that we\ndon't (and can't) move it forwards for each non-transactional change. So\nyeah, we may re-apply those, and IMHO that's expected - the sequence is\nallowed to be \"ahead\" on the subscriber.\n\nI don't see a way to improve this, except maybe having a separate LSN\nfor non-transactional changes (for each origin).\n\n> 4. BTW, while checking this behaviour, I noticed that the initial sync\n> worker for sequence mentions the table in the LOG message: \"LOG:\n> logical replication table synchronization worker for subscription\n> \"mysub\", table \"s\" has finished\". Won't it be better here to refer to\n> it as a sequence?\n> \n\nThanks, I'll fix that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 Jul 2023 14:42:39 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Anyway, I was thinking about this a bit more, and it seems it's not as\n> difficult to use the page LSN to ensure sequences don't go backwards.\n> The 0005 change does that, by:\n>\n> 1) adding pg_sequence_state, that returns both the sequence state and\n> the page LSN\n>\n> 2) copy_sequence returns the page LSN\n>\n> 3) tablesync then sets this LSN as origin_startpos (which for tables is\n> just the LSN of the replication slot)\n>\n> AFAICS this makes it work - we start decoding at the page LSN, so that\n> we skip the increments that could lead to the sequence going backwards.\n>\n\nI like this design very much. It makes things simpler than complex.\nThanks for doing this.\n\nI am wondering whether we could reuse pg_sequence_last_value() instead\nof adding a new function. But the name of the function doesn't leave\nmuch space for expanding its functionality. So we are good with a new\none. Probably some code deduplication.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 28 Jul 2023 18:14:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/28/23 14:35, Ashutosh Bapat wrote:\n> On Tue, Jul 25, 2023 at 10:02 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/24/23 14:57, Ashutosh Bapat wrote:\n>>> ...\n>>>\n>>>>\n>>>>\n>>>> 2) Currently, the sequences hash table is in reorderbuffer, i.e. global.\n>>>> I was thinking maybe we should have it in the transaction (because we\n>>>> need to do cleanup at the end). It seem a bit inconvenient, because then\n>>>> we'd need to either search htabs in all subxacts, or transfer the\n>>>> entries to the top-level xact (otoh, we already do that with snapshots),\n>>>> and cleanup on abort.\n>>>>\n>>>> What do you think?\n>>>\n>>> Hash table per transaction seems saner design. Adding it to the top\n>>> level transaction should be fine. The entry will contain an XID\n>>> anyway. If we add it to every subtransaction we will need to search\n>>> hash table in each of the subtransactions when deciding whether a\n>>> sequence change is transactional or not. Top transaction is a\n>>> reasonable trade off.\n>>>\n>>\n>> It's not clear to me what design you're proposing, exactly.\n>>\n>> If we track it in top-level transactions, then we'd need copy the data\n>> whenever a transaction is assigned as a child, and perhaps also remove\n>> it when there's a subxact abort.\n> \n> I thought, esp. with your changes to assign xid, we will always know\n> the top level transaction when a sequence is assigned a relfilenode.\n> So the refilenodes will always get added to the correct hash directly.\n> I didn't imagine a case where we will need to copy the hash table from\n> sub-transaction to top transaction. 
If that's true, yes it's\n> inconvenient.\n> \n\nWell, it's a matter of efficiency.\n\nTo check if a sequence change is transactional, we need to check if it's\nfor a relfilenode created in the current transaction (it can't be for\nrelfilenode created in a concurrent top-level transaction, due to MVCC).\n\nIf you don't copy the entries into the top-level xact, you have to walk\nall subxacts and search all of those, for each sequence change. And\nthere may be quite a few of both subxacts and sequence changes ...\n\nI wonder if we need to search the other top-level xacts, but we probably\nneed to do that. Because it might be a subxact without an assignment, or\nsomething like that.\n\n> As to the abort, don't we already remove entries on subtxn abort?\n> Having per transaction hash table doesn't seem to change anything\n> much.\n> \n\nWhat entries are we removing? My point is that if we copy the entries to\nthe top-level xact, we probably need to remove them on abort. Or we\ncould leave them in the top-level xact hash.\n\n>>\n>> And we'd need to still search the hashes in all toplevel transactions on\n>> every sequence increment - in principle we can't have increment for a\n>> sequence created in another in-progress transaction, but maybe it's just\n>> not assigned yet.\n> \n> We hold a strong lock on sequence when changing its relfilenode. The\n> sequence whose relfilenode is being changed can not be accessed by any\n> concurrent transaction. So I am not able to understand what you are\n> trying to say.\n> \n\nHow do you know the subxact has already been recognized as such? It may\nbe treated as top-level transaction for a while, until the assignment.\n\n> I think per (top level) transaction hash table is cleaner design. It\n> puts the hash table where it should be. But if that makes code\n> difficult, current design works too.\n> \n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 28 Jul 2023 14:56:34 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Jul 28, 2023 at 6:12 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/28/23 11:42, Amit Kapila wrote:\n> > On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 7/26/23 09:27, Amit Kapila wrote:\n> >>> On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> Anyway, I was thinking about this a bit more, and it seems it's not as\n> >> difficult to use the page LSN to ensure sequences don't go backwards.\n> >>\n> >\n> > While studying the changes for this proposal and related areas, I have\n> > a few comments:\n> > 1. I think you need to advance the origin if it is changed due to\n> > copy_sequence(), otherwise, if the sync worker restarts after\n> > SUBREL_STATE_FINISHEDCOPY, then it will restart from the slot's LSN\n> > value.\n> >\n>\n> True, we want to restart at the new origin_startpos.\n>\n> > 2. Between the time of SYNCDONE and READY state, the patch can skip\n> > applying non-transactional sequence changes even if it should apply\n> > it. The reason is that during that state change\n> > should_apply_changes_for_rel() decides whether to apply change based\n> > on the value of remote_final_lsn which won't be set for\n> > non-transactional change. I think we need to send the start LSN of a\n> > non-transactional record and then use that as remote_final_lsn for\n> > such a change.\n>\n> Good catch. remote_final_lsn is set in apply_handle_begin, but that\n> won't happen for sequences. We're already sending the LSN, but\n> logicalrep_read_sequence ignores it - it should be enough to add it to\n> LogicalRepSequence and then set it in apply_handle_sequence().\n>\n\nAs per my understanding, the LSN sent is EndRecPtr of record which is\nthe beginning of the next record (means current_record_end + 1). 
For\ncomparing the current record, we use the start_position of the record\nas we do when we use the remote_final_lsn via apply_handle_begin().\n\n> >\n> > 3. For non-transactional sequence change apply, we don't set\n> > replorigin_session_origin_lsn/replorigin_session_origin_timestamp as\n> > we are doing in apply_handle_commit_internal() before calling\n> > CommitTransactionCommand(). So, that can lead to the origin moving\n> > backwards after restart which will lead to requesting and applying the\n> > same changes again and for that period of time sequence can go\n> > backwards. This needs some more thought as to what is the correct\n> > behaviour/solution for this.\n> >\n>\n> I think saying \"origin moves backwards\" is a bit misleading. AFAICS the\n> origin position is not actually moving backwards, it's more that we\n> don't (and can't) move it forwards for each non-transactional change. So\n> yeah, we may re-apply those, and IMHO that's expected - the sequence is\n> allowed to be \"ahead\" on the subscriber.\n>\n\nBut, if this happens then for a period of time the sequence will go\nbackwards relative to what one would have observed before restart.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 29 Jul 2023 10:24:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/28/23 14:44, Ashutosh Bapat wrote:\n> On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Anyway, I was thinking about this a bit more, and it seems it's not as\n>> difficult to use the page LSN to ensure sequences don't go backwards.\n>> The 0005 change does that, by:\n>>\n>> 1) adding pg_sequence_state, that returns both the sequence state and\n>> the page LSN\n>>\n>> 2) copy_sequence returns the page LSN\n>>\n>> 3) tablesync then sets this LSN as origin_startpos (which for tables is\n>> just the LSN of the replication slot)\n>>\n>> AFAICS this makes it work - we start decoding at the page LSN, so that\n>> we skip the increments that could lead to the sequence going backwards.\n>>\n> \n> I like this design very much. It makes things simpler than complex.\n> Thanks for doing this.\n> \n\nI agree it seems simpler. It'd be good to try testing / reviewing it a\nbit more, so that it doesn't misbehave in some way.\n\n> I am wondering whether we could reuse pg_sequence_last_value() instead\n> of adding a new function. But the name of the function doesn't leave\n> much space for expanding its functionality. So we are good with a new\n> one. Probably some code deduplication.\n> \n\nI don't think we should do that, the pg_sequence_last_value() function\nis meant to do something different. I don't think it'd be any simpler to\nalso make it do what pg_sequence_state() does would make it any simpler.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 29 Jul 2023 14:23:29 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/29/23 06:54, Amit Kapila wrote:\n> On Fri, Jul 28, 2023 at 6:12 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/28/23 11:42, Amit Kapila wrote:\n>>> On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> On 7/26/23 09:27, Amit Kapila wrote:\n>>>>> On Wed, Jul 26, 2023 at 9:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> Anyway, I was thinking about this a bit more, and it seems it's not as\n>>>> difficult to use the page LSN to ensure sequences don't go backwards.\n>>>>\n>>>\n>>> While studying the changes for this proposal and related areas, I have\n>>> a few comments:\n>>> 1. I think you need to advance the origin if it is changed due to\n>>> copy_sequence(), otherwise, if the sync worker restarts after\n>>> SUBREL_STATE_FINISHEDCOPY, then it will restart from the slot's LSN\n>>> value.\n>>>\n>>\n>> True, we want to restart at the new origin_startpos.\n>>\n>>> 2. Between the time of SYNCDONE and READY state, the patch can skip\n>>> applying non-transactional sequence changes even if it should apply\n>>> it. The reason is that during that state change\n>>> should_apply_changes_for_rel() decides whether to apply change based\n>>> on the value of remote_final_lsn which won't be set for\n>>> non-transactional change. I think we need to send the start LSN of a\n>>> non-transactional record and then use that as remote_final_lsn for\n>>> such a change.\n>>\n>> Good catch. remote_final_lsn is set in apply_handle_begin, but that\n>> won't happen for sequences. We're already sending the LSN, but\n>> logicalrep_read_sequence ignores it - it should be enough to add it to\n>> LogicalRepSequence and then set it in apply_handle_sequence().\n>>\n> \n> As per my understanding, the LSN sent is EndRecPtr of record which is\n> the beginning of the next record (means current_record_end + 1). 
For\n> comparing the current record, we use the start_position of the record\n> as we do when we use the remote_final_lsn via apply_handle_begin().\n> \n>>>\n>>> 3. For non-transactional sequence change apply, we don't set\n>>> replorigin_session_origin_lsn/replorigin_session_origin_timestamp as\n>>> we are doing in apply_handle_commit_internal() before calling\n>>> CommitTransactionCommand(). So, that can lead to the origin moving\n>>> backwards after restart which will lead to requesting and applying the\n>>> same changes again and for that period of time sequence can go\n>>> backwards. This needs some more thought as to what is the correct\n>>> behaviour/solution for this.\n>>>\n>>\n>> I think saying \"origin moves backwards\" is a bit misleading. AFAICS the\n>> origin position is not actually moving backwards, it's more that we\n>> don't (and can't) move it forwards for each non-transactional change. So\n>> yeah, we may re-apply those, and IMHO that's expected - the sequence is\n>> allowed to be \"ahead\" on the subscriber.\n>>\n> \n> But, if this happens then for a period of time the sequence will go\n> backwards relative to what one would have observed before restart.\n> \n\nThat is true, but is it really a problem? This whole sequence decoding\nthing was meant to allow logical failover - make sure that after switch\nto the subscriber, the sequences don't generate duplicate values. From\nthis POV, the sequence going backwards (back to the confirmed origin\nposition) is not an issue - it's still far enough (ahead of publisher).\n\nIs that great / ideal? No, I agree with that. But it was considered\nacceptable and good enough for the failover use case ...\n\nThe only idea how to improve that is we could keep the non-transactional\nchanges (instead of applying them immediately), and then apply them on\nthe nearest \"commit\". 
That'd mean it's subject to the position tracking,\nand the sequence would not go backwards, I think.\n\nSo every time we decode a commit, we'd check if we decoded any sequence\nchanges since the last commit, and merge them (a bit like a subxact).\n\nThis would however also mean sequence changes from rolled-back xacts may\nnot be replictated. I think that'd be fine, but IIRC Andres suggested\nit's a valid use case.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 29 Jul 2023 14:38:07 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/28/23 14:35, Ashutosh Bapat wrote:\n>\n> ...\n>\n> We hold a strong lock on sequence when changing its relfilenode. The\n> sequence whose relfilenode is being changed can not be accessed by any\n> concurrent transaction. So I am not able to understand what you are\n> trying to say.\n> \n> I think per (top level) transaction hash table is cleaner design. It\n> puts the hash table where it should be. But if that makes code\n> difficult, current design works too.\n> \n\nI was thinking about switching to the per-txn hash, so here's a patch\nadopting that approach (in part 0006). I can't say it's much simpler,\nbut maybe it can be simplified a bit. Most of the complexity comes from\nassignments maybe happening with a delay, so it's hard to say what's a\ntop-level xact.\n\nThe patch essentially does this:\n\n1) the HTAB is moved to ReorderBufferTXN\n\n2) after decoding SGMR_CREATE, we add an entry to the current TXN and\n(for subtransactions) to the parent TXN (even the copy references the\nsubxact)\n\n3) when processing an assignment, we copy the HTAB entries from the\nsubxact to the parent\n\n4) after a subxact abort, we remove the HTAB entries from the parent\n\n5) while searching for the relfilenode, we only scan the HTAB in the\ntop-level xacts (this is possible due to the copying)\n\nThis could work without the copy in parent HTAB, but then we'd have to\nscan all the transactions for every increment. And there may be many\nlookups and many (sub)transactions, but only a small number of new\nrelfilenodes. So it seems like a good tradeoff.\n\nIf we could convince ourselves the subxact has to be already assigned\nwhile decoding the sequence change, then we could simply search only the\ncurrent transaction (and the parent). But I've been unable to convince\nmyself that's guaranteed.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 29 Jul 2023 15:03:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/29/23 14:38, Tomas Vondra wrote:\n>\n> ...\n>\n> The only idea how to improve that is we could keep the non-transactional\n> changes (instead of applying them immediately), and then apply them on\n> the nearest \"commit\". That'd mean it's subject to the position tracking,\n> and the sequence would not go backwards, I think.\n> \n> So every time we decode a commit, we'd check if we decoded any sequence\n> changes since the last commit, and merge them (a bit like a subxact).\n> \n> This would however also mean sequence changes from rolled-back xacts may\n> not be replictated. I think that'd be fine, but IIRC Andres suggested\n> it's a valid use case.\n> \n\nI wasn't sure how difficult would this approach be, so I experimented\nwith this today, and it's waaaay more complicated than I thought. In\nfact, I'm not even sure how to do that ...\n\nThe part 0008 is an WIP patch where ReorderBufferQueueSequence does not\napply the non-transactional changes immediately, and instead adds the\nchanges to a top-level list. And then ReorderBufferCommit adds a fake\nsubxact with all sequence changes up to the commit LSN.\n\nThe challenging part is snapshot management - when applying the changes\nimmediately, we can simply build and use the current snapshot. But with\n0008 it's not that simple - we don't even know into which transaction\nwill the sequence change get \"injected\". In fact, we don't even know if\nthe parent transaction will have a snapshot (if it only does nextval()\nit may seem empty). I was thinking maybe we could \"keep\" the snapshots\nfor non-transactional changes, but I suspect it might confuse the main\ntransaction in some way.\n\nI'm still not convinced this behavior would actually be desirable ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 30 Jul 2023 02:00:31 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sat, Jul 29, 2023 at 5:53 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/28/23 14:44, Ashutosh Bapat wrote:\n> > On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> Anyway, I was thinking about this a bit more, and it seems it's not as\n> >> difficult to use the page LSN to ensure sequences don't go backwards.\n> >> The 0005 change does that, by:\n> >>\n> >> 1) adding pg_sequence_state, that returns both the sequence state and\n> >> the page LSN\n> >>\n> >> 2) copy_sequence returns the page LSN\n> >>\n> >> 3) tablesync then sets this LSN as origin_startpos (which for tables is\n> >> just the LSN of the replication slot)\n> >>\n> >> AFAICS this makes it work - we start decoding at the page LSN, so that\n> >> we skip the increments that could lead to the sequence going backwards.\n> >>\n> >\n> > I like this design very much. It makes things simpler than complex.\n> > Thanks for doing this.\n> >\n>\n> I agree it seems simpler. It'd be good to try testing / reviewing it a\n> bit more, so that it doesn't misbehave in some way.\n>\n\nYeah, I also think this needs a review. This is a sort of new concept\nwhere we don't use the LSN of the slot (for cases where copy returned\na larger value of LSN) or a full_snapshot created corresponding to the\nsync slot by Walsender. For the case of the table, we build a full\nsnapshot because we use that for copying the table but why do we need\nto build that for copying the sequence especially when we directly\ncopy it from the sequence relation without caring for any snapshot?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 31 Jul 2023 14:55:43 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 7/31/23 11:25, Amit Kapila wrote:\n> On Sat, Jul 29, 2023 at 5:53 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/28/23 14:44, Ashutosh Bapat wrote:\n>>> On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>\n>>>> Anyway, I was thinking about this a bit more, and it seems it's not as\n>>>> difficult to use the page LSN to ensure sequences don't go backwards.\n>>>> The 0005 change does that, by:\n>>>>\n>>>> 1) adding pg_sequence_state, that returns both the sequence state and\n>>>> the page LSN\n>>>>\n>>>> 2) copy_sequence returns the page LSN\n>>>>\n>>>> 3) tablesync then sets this LSN as origin_startpos (which for tables is\n>>>> just the LSN of the replication slot)\n>>>>\n>>>> AFAICS this makes it work - we start decoding at the page LSN, so that\n>>>> we skip the increments that could lead to the sequence going backwards.\n>>>>\n>>>\n>>> I like this design very much. It makes things simpler than complex.\n>>> Thanks for doing this.\n>>>\n>>\n>> I agree it seems simpler. It'd be good to try testing / reviewing it a\n>> bit more, so that it doesn't misbehave in some way.\n>>\n> \n> Yeah, I also think this needs a review. This is a sort of new concept\n> where we don't use the LSN of the slot (for cases where copy returned\n> a larger value of LSN) or a full_snapshot created corresponding to the\n> sync slot by Walsender. For the case of the table, we build a full\n> snapshot because we use that for copying the table but why do we need\n> to build that for copying the sequence especially when we directly\n> copy it from the sequence relation without caring for any snapshot?\n> \n\nWe need the slot to decode/apply changes during catchup. The main\nsubscription may get ahead, and we need to ensure the WAL is not\ndiscarded or something like that. 
This applies even if the initial sync\nstep does not use the slot/snapshot directly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 31 Jul 2023 13:34:47 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Jul 31, 2023 at 5:04 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/31/23 11:25, Amit Kapila wrote:\n> > On Sat, Jul 29, 2023 at 5:53 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 7/28/23 14:44, Ashutosh Bapat wrote:\n> >>> On Wed, Jul 26, 2023 at 8:48 PM Tomas Vondra\n> >>> <tomas.vondra@enterprisedb.com> wrote:\n> >>>>\n> >>>> Anyway, I was thinking about this a bit more, and it seems it's not as\n> >>>> difficult to use the page LSN to ensure sequences don't go backwards.\n> >>>> The 0005 change does that, by:\n> >>>>\n> >>>> 1) adding pg_sequence_state, that returns both the sequence state and\n> >>>> the page LSN\n> >>>>\n> >>>> 2) copy_sequence returns the page LSN\n> >>>>\n> >>>> 3) tablesync then sets this LSN as origin_startpos (which for tables is\n> >>>> just the LSN of the replication slot)\n> >>>>\n> >>>> AFAICS this makes it work - we start decoding at the page LSN, so that\n> >>>> we skip the increments that could lead to the sequence going backwards.\n> >>>>\n> >>>\n> >>> I like this design very much. It makes things simpler than complex.\n> >>> Thanks for doing this.\n> >>>\n> >>\n> >> I agree it seems simpler. It'd be good to try testing / reviewing it a\n> >> bit more, so that it doesn't misbehave in some way.\n> >>\n> >\n> > Yeah, I also think this needs a review. This is a sort of new concept\n> > where we don't use the LSN of the slot (for cases where copy returned\n> > a larger value of LSN) or a full_snapshot created corresponding to the\n> > sync slot by Walsender. For the case of the table, we build a full\n> > snapshot because we use that for copying the table but why do we need\n> > to build that for copying the sequence especially when we directly\n> > copy it from the sequence relation without caring for any snapshot?\n> >\n>\n> We need the slot to decode/apply changes during catchup. 
The main\n> subscription may get ahead, and we need to ensure the WAL is not\n> discarded or something like that. This applies even if the initial sync\n> step does not use the slot/snapshot directly.\n>\n\nAFAIK, none of these needs a full_snapshot (see usage of\nSnapBuild->building_full_snapshot). The full_snapshot tracks both\ncatalog and non-catalog xacts in the snapshot where we require to\ntrack non-catalog ones because we want to copy the table using that\nsnapshot. It is relatively expensive to build a full snapshot and we\ndon't do that unless it is required. For the current usage of this\npatch, I think using CRS_NOEXPORT_SNAPSHOT would be sufficient.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 1 Aug 2023 08:29:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 8/1/23 04:59, Amit Kapila wrote:\n> On Mon, Jul 31, 2023 at 5:04 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 7/31/23 11:25, Amit Kapila wrote:\n>>> ...\n>>>\n>>> Yeah, I also think this needs a review. This is a sort of new concept\n>>> where we don't use the LSN of the slot (for cases where copy returned\n>>> a larger value of LSN) or a full_snapshot created corresponding to the\n>>> sync slot by Walsender. For the case of the table, we build a full\n>>> snapshot because we use that for copying the table but why do we need\n>>> to build that for copying the sequence especially when we directly\n>>> copy it from the sequence relation without caring for any snapshot?\n>>>\n>>\n>> We need the slot to decode/apply changes during catchup. The main\n>> subscription may get ahead, and we need to ensure the WAL is not\n>> discarded or something like that. This applies even if the initial sync\n>> step does not use the slot/snapshot directly.\n>>\n> \n> AFAIK, none of these needs a full_snapshot (see usage of\n> SnapBuild->building_full_snapshot). The full_snapshot tracks both\n> catalog and non-catalog xacts in the snapshot where we require to\n> track non-catalog ones because we want to copy the table using that\n> snapshot. It is relatively expensive to build a full snapshot and we\n> don't do that unless it is required. For the current usage of this\n> patch, I think using CRS_NOEXPORT_SNAPSHOT would be sufficient.\n> \n\nYeah, you may be right we don't need a full snapshot, because we don't\nneed to export it. We however still need a snapshot, and it wasn't clear\nto me whether you suggest we don't need the slot / snapshot at all.\n\nAnyway, I think this is \"just\" a matter of efficiency, not correctness.\nIMHO there are bigger questions regarding the \"going back\" behavior\nafter apply restart.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 1 Aug 2023 17:16:02 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Aug 1, 2023 at 8:46 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Anyway, I think this is \"just\" a matter of efficiency, not correctness.\n> IMHO there are bigger questions regarding the \"going back\" behavior\n> after apply restart.\n\n\nsequence_decode() has the following code\n/* Skip the change if already processed (per the snapshot). */\nif (transactional &&\n!SnapBuildProcessChange(builder, xid, buf->origptr))\nreturn;\nelse if (!transactional &&\n(SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\nSnapBuildXactNeedsSkip(builder, buf->origptr)))\nreturn;\n\nThis means that if the subscription restarts, the upstream will *not*\nsend any non-transactional sequence changes with LSN prior to the LSN\nspecified by START_REPLICATION command. That should avoid replicating\nall the non-transactional sequence changes since\nReplicationSlot::restart_lsn if the subscription restarts.\n\nBut in apply_handle_sequence(), we do not update the\nreplorigin_session_origin_lsn with LSN of the non-transactional\nsequence change when it's applied. This means that if a subscription\nrestarts while it is half way through applying a transaction, those\nchanges will be replicated again. This will move the sequence\nbackward. If the subscription keeps restarting again and again while\napplying that transaction, we will see the sequence \"rubber banding\"\n[1] on subscription. So untill the transaction is completely applied,\nthe other users of the sequence may see duplicate values during this\ntime. I think this is undesirable.\n\nBut I am not able to find a case where this can lead to conflicting\nvalues after failover. If there's only one transaction which is\nrepeatedly being applied, the rows which use sequence values were\nnever committed so there's no conflicting value present on the\nsubscription. The same reasoning can be extended to multiple in-flight\ntransactions. 
If another transaction (T2) uses the sequence values\nchanged by in-flight transaction T1 and if T2 commits before T1, the\nsequence changes used by T2 must have LSNs before commit of T2 and\nthus they will never be replicated. (See example below).\n\nT1\ninsert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q1\nT2\ninsert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q2\nCOMMIT;\nT1\ninsert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q13\nCOMMIT;\n\nSo I am not able to imagine a case when a sequence going backward can\ncause conflicting values.\n\nBut whether or not that's the case, downstream should not request (and\nhence receive) any changes that have been already applied (and\ncommitted) downstream as a principle. I think a way to achieve this is\nto update the replorigin_session_origin_lsn so that a sequence change\napplied once is not requested (and hence sent) again.\n\n[1] https://en.wikipedia.org/wiki/Rubber_banding\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Fri, 11 Aug 2023 12:02:59 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 8/11/23 08:32, Ashutosh Bapat wrote:\n> On Tue, Aug 1, 2023 at 8:46 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> Anyway, I think this is \"just\" a matter of efficiency, not correctness.\n>> IMHO there are bigger questions regarding the \"going back\" behavior\n>> after apply restart.\n> \n> \n> sequence_decode() has the following code\n> /* Skip the change if already processed (per the snapshot). */\n> if (transactional &&\n> !SnapBuildProcessChange(builder, xid, buf->origptr))\n> return;\n> else if (!transactional &&\n> (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n> SnapBuildXactNeedsSkip(builder, buf->origptr)))\n> return;\n> \n> This means that if the subscription restarts, the upstream will *not*\n> send any non-transactional sequence changes with LSN prior to the LSN\n> specified by START_REPLICATION command. That should avoid replicating\n> all the non-transactional sequence changes since\n> ReplicationSlot::restart_lsn if the subscription restarts.\n> \n\nAh, right, I got confused and mixed restart_lsn and the LSN passed in\nthe START_REPLICATION COMMAND. Thanks for the details, I think this\nworks fine.\n\n> But in apply_handle_sequence(), we do not update the\n> replorigin_session_origin_lsn with LSN of the non-transactional\n> sequence change when it's applied. This means that if a subscription\n> restarts while it is half way through applying a transaction, those\n> changes will be replicated again. This will move the sequence\n> backward. If the subscription keeps restarting again and again while\n> applying that transaction, we will see the sequence \"rubber banding\"\n> [1] on subscription. So untill the transaction is completely applied,\n> the other users of the sequence may see duplicate values during this\n> time. 
I think this is undesirable.\n> \n\nWell, but as I said earlier, this is not expected to support using the\nsequence on the subscriber until after the failover, so there's not real\nrisk of \"duplicate values\". Yes, you might select the data from the\nsequence directly, but that would have all sorts of issues even without\nreplication - users are required to use nextval/currval and so on.\n\n> But I am not able to find a case where this can lead to conflicting\n> values after failover. If there's only one transaction which is\n> repeatedly being applied, the rows which use sequence values were\n> never committed so there's no conflicting value present on the\n> subscription. The same reasoning can be extended to multiple in-flight\n> transactions. If another transaction (T2) uses the sequence values\n> changed by in-flight transaction T1 and if T2 commits before T1, the\n> sequence changes used by T2 must have LSNs before commit of T2 and\n> thus they will never be replicated. (See example below).\n> \n> T1\n> insert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q1\n> T2\n> insert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q2\n> COMMIT;\n> T1\n> insert into t1 (nextval('seq'), ...) from generate_series(1, 100); - Q13\n> COMMIT;\n> \n> So I am not able to imagine a case when a sequence going backward can\n> cause conflicting values.\n\nRight, I agree this \"rubber banding\" can happen. But as long as we don't\ngo back too far (before the last applied commit) I think that'd fine. We\nonly need to make guarantees about committed transactions, and I don't\nthink we need to worry about this too much ...\n\n> \n> But whether or not that's the case, downstream should not request (and\n> hence receive) any changes that have been already applied (and\n> committed) downstream as a principle. 
I think a way to achieve this is\n> to update the replorigin_session_origin_lsn so that a sequence change\n> applied once is not requested (and hence sent) again.\n> \n\nI guess we could update the origin, per attached 0004. We don't have\ntimestamp to set replorigin_session_origin_timestamp, but it seems we\ndon't need that.\n\nThe attached patch merges the earlier improvements, except for the part\nthat experimented with adding a \"fake\" transaction (which turned out to\nhave a number of difficult issues).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 16 Aug 2023 16:26:46 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 7:56 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> >\n> > But whether or not that's the case, downstream should not request (and\n> > hence receive) any changes that have been already applied (and\n> > committed) downstream as a principle. I think a way to achieve this is\n> > to update the replorigin_session_origin_lsn so that a sequence change\n> > applied once is not requested (and hence sent) again.\n> >\n>\n> I guess we could update the origin, per attached 0004. We don't have\n> timestamp to set replorigin_session_origin_timestamp, but it seems we\n> don't need that.\n>\n> The attached patch merges the earlier improvements, except for the part\n> that experimented with adding a \"fake\" transaction (which turned out to\n> have a number of difficult issues).\n\n0004 looks good to me. But I need to review the impact of not setting\nreplorigin_session_origin_timestamp.\n\nWhat fake transaction experiment are you talking about?\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 17 Aug 2023 19:13:45 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 7:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Aug 16, 2023 at 7:56 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > >\n> > > But whether or not that's the case, downstream should not request (and\n> > > hence receive) any changes that have been already applied (and\n> > > committed) downstream as a principle. I think a way to achieve this is\n> > > to update the replorigin_session_origin_lsn so that a sequence change\n> > > applied once is not requested (and hence sent) again.\n> > >\n> >\n> > I guess we could update the origin, per attached 0004. We don't have\n> > timestamp to set replorigin_session_origin_timestamp, but it seems we\n> > don't need that.\n> >\n> > The attached patch merges the earlier improvements, except for the part\n> > that experimented with adding a \"fake\" transaction (which turned out to\n> > have a number of difficult issues).\n>\n> 0004 looks good to me.\n\n\n+ {\n CommitTransactionCommand();\n+\n+ /*\n+ * Update origin state so we don't try applying this sequence\n+ * change in case of crash.\n+ *\n+ * XXX We don't have replorigin_session_origin_timestamp, but we\n+ * can just leave that set to 0.\n+ */\n+ replorigin_session_origin_lsn = seq.lsn;\n\nIIUC, your proposal is to update the replorigin_session_origin_lsn, so\nthat after restart, it doesn't use some prior origin LSN to start with\nwhich can in turn lead the sequence to go backward. If so, it should\nbe updated before calling CommitTransactionCommand() as we are doing\nin apply_handle_commit_internal(). 
If that is not the intention then\nit is not clear to me how updating replorigin_session_origin_lsn after\ncommit is helpful.\n\n>\n But I need to review the impact of not setting\n> replorigin_session_origin_timestamp.\n>\n\nThis may not have a direct impact on built-in replication as I think\nwe don't rely on it yet but we need to think of out-of-core solutions.\nI am not sure if I understood your proposal as per my previous comment\nbut once you clarify the same, I'll also try to think on the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 18 Aug 2023 10:37:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Aug 17, 2023 at 7:13 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Wed, Aug 16, 2023 at 7:56 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> > >\n> > > >\n> > > > But whether or not that's the case, downstream should not request (and\n> > > > hence receive) any changes that have been already applied (and\n> > > > committed) downstream as a principle. I think a way to achieve this is\n> > > > to update the replorigin_session_origin_lsn so that a sequence change\n> > > > applied once is not requested (and hence sent) again.\n> > > >\n> > >\n> > > I guess we could update the origin, per attached 0004. We don't have\n> > > timestamp to set replorigin_session_origin_timestamp, but it seems we\n> > > don't need that.\n> > >\n> > > The attached patch merges the earlier improvements, except for the part\n> > > that experimented with adding a \"fake\" transaction (which turned out to\n> > > have a number of difficult issues).\n> >\n> > 0004 looks good to me.\n>\n>\n> + {\n> CommitTransactionCommand();\n> +\n> + /*\n> + * Update origin state so we don't try applying this sequence\n> + * change in case of crash.\n> + *\n> + * XXX We don't have replorigin_session_origin_timestamp, but we\n> + * can just leave that set to 0.\n> + */\n> + replorigin_session_origin_lsn = seq.lsn;\n>\n> IIUC, your proposal is to update the replorigin_session_origin_lsn, so\n> that after restart, it doesn't use some prior origin LSN to start with\n> which can in turn lead the sequence to go backward. If so, it should\n> be updated before calling CommitTransactionCommand() as we are doing\n> in apply_handle_commit_internal(). 
If that is not the intention then\n> it is not clear to me how updating replorigin_session_origin_lsn after\n> commit is helpful.\n>\n\ntypedef struct ReplicationState\n{\n...\n /*\n * Location of the latest commit from the remote side.\n */\n XLogRecPtr remote_lsn;\n\nThis is the variable that will be updated with the value of\nreplorigin_session_origin_lsn. This means we will now track some\narbitrary LSN location of the remote side in this variable. The above\ncomment makes me wonder if there is anything we are missing or if it\nis just a matter of updating this comment because before the patch we\nalways adhere to what is written in the comment.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 18 Aug 2023 16:28:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 7:13 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > The attached patch merges the earlier improvements, except for the part\n> > that experimented with adding a \"fake\" transaction (which turned out to\n> > have a number of difficult issues).\n>\n> 0004 looks good to me. But I need to review the impact of not setting\n> replorigin_session_origin_timestamp.\n\nI think it will be good to set replorigin_session_origin_timestamp = 0\nexplicitly so as not to pick up a garbage value. The timestamp is\nwritten to the commit record. Beyond that I don't see any use of it.\nIt is further passed downstream if there is cascaded logical\nreplication setup. But I don't see it being used. So it should be fine\nto leave it 0. I don't think we can use logically replicated sequences\nin a mult-master environment where the timestamp may be used to\nresolve conflict. Such a setup will require a distributed sequence\nmanagement which can not be achieved by logical replication alone.\n\nIn short, I didn't find any hazard in leaving the\nreplorigin_session_origin_timestamp as 0.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 13 Sep 2023 17:08:14 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 4:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Aug 18, 2023 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Aug 17, 2023 at 7:13 PM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > On Wed, Aug 16, 2023 at 7:56 PM Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > > >\n> > > > >\n> > > > > But whether or not that's the case, downstream should not request (and\n> > > > > hence receive) any changes that have been already applied (and\n> > > > > committed) downstream as a principle. I think a way to achieve this is\n> > > > > to update the replorigin_session_origin_lsn so that a sequence change\n> > > > > applied once is not requested (and hence sent) again.\n> > > > >\n> > > >\n> > > > I guess we could update the origin, per attached 0004. We don't have\n> > > > timestamp to set replorigin_session_origin_timestamp, but it seems we\n> > > > don't need that.\n> > > >\n> > > > The attached patch merges the earlier improvements, except for the part\n> > > > that experimented with adding a \"fake\" transaction (which turned out to\n> > > > have a number of difficult issues).\n> > >\n> > > 0004 looks good to me.\n> >\n> >\n> > + {\n> > CommitTransactionCommand();\n> > +\n> > + /*\n> > + * Update origin state so we don't try applying this sequence\n> > + * change in case of crash.\n> > + *\n> > + * XXX We don't have replorigin_session_origin_timestamp, but we\n> > + * can just leave that set to 0.\n> > + */\n> > + replorigin_session_origin_lsn = seq.lsn;\n> >\n> > IIUC, your proposal is to update the replorigin_session_origin_lsn, so\n> > that after restart, it doesn't use some prior origin LSN to start with\n> > which can in turn lead the sequence to go backward. If so, it should\n> > be updated before calling CommitTransactionCommand() as we are doing\n> > in apply_handle_commit_internal(). 
If that is not the intention then\n> > it is not clear to me how updating replorigin_session_origin_lsn after\n> > commit is helpful.\n> >\n>\n> typedef struct ReplicationState\n> {\n> ...\n> /*\n> * Location of the latest commit from the remote side.\n> */\n> XLogRecPtr remote_lsn;\n>\n> This is the variable that will be updated with the value of\n> replorigin_session_origin_lsn. This means we will now track some\n> arbitrary LSN location of the remote side in this variable. The above\n> comment makes me wonder if there is anything we are missing or if it\n> is just a matter of updating this comment because before the patch we\n> always adhere to what is written in the comment.\n\nI don't think we are missing anything. This value is used to track the\nremote LSN upto which all the commits from upstream have been applied\nlocally. Since a non-transactional sequence change is like a single\nWAL record transaction, it's LSN acts as the LSN of the mini-commit.\nSo it should be fine to update remote_lsn with sequence WAL record's\nend LSN. That's what the patches do. I don't see any hazard. But you\nare right, we need to update comments. Here and also at other places\nlike\nreplorigin_session_advance() which uses remote_commit as name of the\nargument which gets assigned to ReplicationState::remote_lsn.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Wed, 13 Sep 2023 18:48:24 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wednesday, August 16, 2023 10:27 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\r\n\r\nHi,\r\n\r\n> \r\n> \r\n> I guess we could update the origin, per attached 0004. We don't have\r\n> timestamp to set replorigin_session_origin_timestamp, but it seems we don't\r\n> need that.\r\n> \r\n> The attached patch merges the earlier improvements, except for the part that\r\n> experimented with adding a \"fake\" transaction (which turned out to have a\r\n> number of difficult issues).\r\n\r\nI tried to test the patch and found a crash when calling\r\npg_logical_slot_get_changes() to consume sequence changes.\r\n\r\nSteps:\r\n----\r\ncreate table t1_seq(a int);\r\ncreate sequence seq1;\r\nSELECT 'init' FROM pg_create_logical_replication_slot('test_slot',\r\n'test_decoding', false, true);\r\nINSERT INTO t1_seq SELECT nextval('seq1') FROM generate_series(1,100);\r\nSELECT data FROM pg_logical_slot_get_changes('test_slot', NULL, NULL,\r\n'include-xids', 'false', 'skip-empty-xacts', '1');\r\n----\r\n\r\nAttach the backtrace in bt.txt.\r\n\r\nBest Regards,\r\nHou zj",
"msg_date": "Fri, 15 Sep 2023 03:11:16 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 7:57 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n\nI was reading through 0001, I noticed this comment in\nReorderBufferSequenceIsTransactional() function\n\n+ * To decide if a sequence change should be handled as transactional or applied\n+ * immediately, we track (sequence) relfilenodes created by each transaction.\n+ * We don't know if the current sub-transaction was already assigned to the\n+ * top-level transaction, so we need to check all transactions.\n\nIt says \"We don't know if the current sub-transaction was already\nassigned to the top-level transaction, so we need to check all\ntransactions\". But IIRC as part of the steaming of in-progress\ntransactions we have ensured that whenever we are logging the first\nchange by any subtransaction we include the top transaction ID in it.\n\nRefer this code\n\nLogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\nXLogReaderState *record)\n{\n...\n/*\n* If the top-level xid is valid, we need to assign the subxact to the\n* top-level xact. We need to do this for all records, hence we do it\n* before the switch.\n*/\nif (TransactionIdIsValid(txid))\n{\nReorderBufferAssignChild(ctx->reorder,\ntxid,\nXLogRecGetXid(record),\nbuf.origptr);\n}\n}\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 20 Sep 2023 15:23:43 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Sep 20, 2023 at 3:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Aug 16, 2023 at 7:57 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n>\n> I was reading through 0001, I noticed this comment in\n> ReorderBufferSequenceIsTransactional() function\n>\n> + * To decide if a sequence change should be handled as transactional or applied\n> + * immediately, we track (sequence) relfilenodes created by each transaction.\n> + * We don't know if the current sub-transaction was already assigned to the\n> + * top-level transaction, so we need to check all transactions.\n>\n> It says \"We don't know if the current sub-transaction was already\n> assigned to the top-level transaction, so we need to check all\n> transactions\". But IIRC as part of the steaming of in-progress\n> transactions we have ensured that whenever we are logging the first\n> change by any subtransaction we include the top transaction ID in it.\n>\n> Refer this code\n>\n> LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\n> XLogReaderState *record)\n> {\n> ...\n> /*\n> * If the top-level xid is valid, we need to assign the subxact to the\n> * top-level xact. 
We need to do this for all records, hence we do it\n> * before the switch.\n> */\n> if (TransactionIdIsValid(txid))\n> {\n> ReorderBufferAssignChild(ctx->reorder,\n> txid,\n> XLogRecGetXid(record),\n> buf.origptr);\n> }\n> }\n\nSome more comments\n\n1.\nReorderBufferSequenceIsTransactional and ReorderBufferSequenceGetXid\nare duplicated except the first one is just confirming whether\nrelfilelocator was created in the transaction or not and the other is\nreturning the XID as well so I think these two could be easily merged\nso that we can avoid duplicate codes.\n\n2.\n/*\n+ * ReorderBufferTransferSequencesToParent\n+ * Copy the relfilenode entries to the parent after assignment.\n+ */\n+static void\n+ReorderBufferTransferSequencesToParent(ReorderBuffer *rb,\n+ ReorderBufferTXN *txn,\n+ ReorderBufferTXN *subtxn)\n\nIf we agree with my comment in the previous email (i.e. the first WAL\nby a subxid will always include topxid) then we do not need this\nfunction at all and always add relfilelocator directly to the top\ntransaction and we never need to transfer.\n\nThat is all I have for now while first pass of 0001, later I will do a\nmore detailed review and will look into other patches also.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 22 Sep 2023 16:54:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Friday, September 15, 2023 11:11 AM Zhijie Hou (Fujitsu) <houzj.fnst@fujitsu.com> wrote:\r\n> \r\n> On Wednesday, August 16, 2023 10:27 PM Tomas Vondra\r\n> <tomas.vondra@enterprisedb.com> wrote:\r\n> \r\n> Hi,\r\n> \r\n> >\r\n> >\r\n> > I guess we could update the origin, per attached 0004. We don't have\r\n> > timestamp to set replorigin_session_origin_timestamp, but it seems we\r\n> > don't need that.\r\n> >\r\n> > The attached patch merges the earlier improvements, except for the\r\n> > part that experimented with adding a \"fake\" transaction (which turned\r\n> > out to have a number of difficult issues).\r\n> \r\n> I tried to test the patch and found a crash when calling\r\n> pg_logical_slot_get_changes() to consume sequence changes.\r\n\r\nOh, after confirming again, I realize it's my fault that my build environment\r\nwas not clean. This case passed after rebuilding. Sorry for the noise.\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Mon, 25 Sep 2023 06:33:50 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 9/22/23 13:24, Dilip Kumar wrote:\n> On Wed, Sep 20, 2023 at 3:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Wed, Aug 16, 2023 at 7:57 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>\n>> I was reading through 0001, I noticed this comment in\n>> ReorderBufferSequenceIsTransactional() function\n>>\n>> + * To decide if a sequence change should be handled as transactional or applied\n>> + * immediately, we track (sequence) relfilenodes created by each transaction.\n>> + * We don't know if the current sub-transaction was already assigned to the\n>> + * top-level transaction, so we need to check all transactions.\n>>\n>> It says \"We don't know if the current sub-transaction was already\n>> assigned to the top-level transaction, so we need to check all\n>> transactions\". But IIRC as part of the steaming of in-progress\n>> transactions we have ensured that whenever we are logging the first\n>> change by any subtransaction we include the top transaction ID in it.\n>>\n>> Refer this code\n>>\n>> LogicalDecodingProcessRecord(LogicalDecodingContext *ctx,\n>> XLogReaderState *record)\n>> {\n>> ...\n>> /*\n>> * If the top-level xid is valid, we need to assign the subxact to the\n>> * top-level xact. We need to do this for all records, hence we do it\n>> * before the switch.\n>> */\n>> if (TransactionIdIsValid(txid))\n>> {\n>> ReorderBufferAssignChild(ctx->reorder,\n>> txid,\n>> XLogRecGetXid(record),\n>> buf.origptr);\n>> }\n>> }\n> \n> Some more comments\n> \n> 1.\n> ReorderBufferSequenceIsTransactional and ReorderBufferSequenceGetXid\n> are duplicated except the first one is just confirming whether\n> relfilelocator was created in the transaction or not and the other is\n> returning the XID as well so I think these two could be easily merged\n> so that we can avoid duplicate codes.\n> \n\nRight. The attached patch modifies the IsTransactional function to also\nreturn the XID, and removes the GetXid one. 
It feels a bit weird because\nnow the IsTransactional function is called even in places where we know\nthe change is transactional. It's true two separate functions duplicated\na bit of code, ofc.\n\n> 2.\n> /*\n> + * ReorderBufferTransferSequencesToParent\n> + * Copy the relfilenode entries to the parent after assignment.\n> + */\n> +static void\n> +ReorderBufferTransferSequencesToParent(ReorderBuffer *rb,\n> + ReorderBufferTXN *txn,\n> + ReorderBufferTXN *subtxn)\n> \n> If we agree with my comment in the previous email (i.e. the first WAL\n> by a subxid will always include topxid) then we do not need this\n> function at all and always add relfilelocator directly to the top\n> transaction and we never need to transfer.\n> \n\nGood point! I don't recall why I thought this was necessary. I suspect\nit was before I added the GetCurrentTransactionId() calls to ensure the\nsubxact has a XID. I replaced the ReorderBufferTransferSequencesToParent\ncall with an assert that the relfilenode hash table is empty, and I've\nbeen unable to trigger any failures.\n\n> That is all I have for now while first pass of 0001, later I will do a\n> more detailed review and will look into other patches also.\n> \n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 12 Oct 2023 17:05:30 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 9/20/23 11:53, Dilip Kumar wrote:\n> On Wed, Aug 16, 2023 at 7:57 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n> \n> I was reading through 0001, I noticed this comment in\n> ReorderBufferSequenceIsTransactional() function\n> \n> + * To decide if a sequence change should be handled as transactional or applied\n> + * immediately, we track (sequence) relfilenodes created by each transaction.\n> + * We don't know if the current sub-transaction was already assigned to the\n> + * top-level transaction, so we need to check all transactions.\n> \n> It says \"We don't know if the current sub-transaction was already\n> assigned to the top-level transaction, so we need to check all\n> transactions\". But IIRC as part of the steaming of in-progress\n> transactions we have ensured that whenever we are logging the first\n> change by any subtransaction we include the top transaction ID in it.\n> \n\nYeah, that's a stale comment - the actual code only searched through the\ntop-level ones (and thus relying on the immediate assignment). As I\nwrote in the earlier response, I suspect this code originates from\nbefore I added the GetCurrentTransactionId() calls.\n\nThat being said, I do wonder why with the immediate assignments we still\nneed the bit in ReorderBufferAssignChild that says:\n\n /*\n * We already saw this transaction, but initially added it to the\n * list of top-level txns. Now that we know it's not top-level,\n * remove it from there.\n */\n dlist_delete(&subtxn->node);\n\nI don't think that affects this patch, but it's a bit confusing.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Oct 2023 17:13:48 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 9/13/23 15:18, Ashutosh Bapat wrote:\n> On Fri, Aug 18, 2023 at 4:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Fri, Aug 18, 2023 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> On Thu, Aug 17, 2023 at 7:13 PM Ashutosh Bapat\n>>> <ashutosh.bapat.oss@gmail.com> wrote:\n>>>>\n>>>> On Wed, Aug 16, 2023 at 7:56 PM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>>>\n>>>>>> But whether or not that's the case, downstream should not request (and\n>>>>>> hence receive) any changes that have been already applied (and\n>>>>>> committed) downstream as a principle. I think a way to achieve this is\n>>>>>> to update the replorigin_session_origin_lsn so that a sequence change\n>>>>>> applied once is not requested (and hence sent) again.\n>>>>>>\n>>>>>\n>>>>> I guess we could update the origin, per attached 0004. We don't have\n>>>>> timestamp to set replorigin_session_origin_timestamp, but it seems we\n>>>>> don't need that.\n>>>>>\n>>>>> The attached patch merges the earlier improvements, except for the part\n>>>>> that experimented with adding a \"fake\" transaction (which turned out to\n>>>>> have a number of difficult issues).\n>>>>\n>>>> 0004 looks good to me.\n>>>\n>>>\n>>> + {\n>>> CommitTransactionCommand();\n>>> +\n>>> + /*\n>>> + * Update origin state so we don't try applying this sequence\n>>> + * change in case of crash.\n>>> + *\n>>> + * XXX We don't have replorigin_session_origin_timestamp, but we\n>>> + * can just leave that set to 0.\n>>> + */\n>>> + replorigin_session_origin_lsn = seq.lsn;\n>>>\n>>> IIUC, your proposal is to update the replorigin_session_origin_lsn, so\n>>> that after restart, it doesn't use some prior origin LSN to start with\n>>> which can in turn lead the sequence to go backward. If so, it should\n>>> be updated before calling CommitTransactionCommand() as we are doing\n>>> in apply_handle_commit_internal(). 
If that is not the intention then\n>>> it is not clear to me how updating replorigin_session_origin_lsn after\n>>> commit is helpful.\n>>>\n>>\n>> typedef struct ReplicationState\n>> {\n>> ...\n>> /*\n>> * Location of the latest commit from the remote side.\n>> */\n>> XLogRecPtr remote_lsn;\n>>\n>> This is the variable that will be updated with the value of\n>> replorigin_session_origin_lsn. This means we will now track some\n>> arbitrary LSN location of the remote side in this variable. The above\n>> comment makes me wonder if there is anything we are missing or if it\n>> is just a matter of updating this comment because before the patch we\n>> always adhere to what is written in the comment.\n> \n> I don't think we are missing anything. This value is used to track the\n> remote LSN upto which all the commits from upstream have been applied\n> locally. Since a non-transactional sequence change is like a single\n> WAL record transaction, it's LSN acts as the LSN of the mini-commit.\n> So it should be fine to update remote_lsn with sequence WAL record's\n> end LSN. That's what the patches do. I don't see any hazard. But you\n> are right, we need to update comments. Here and also at other places\n> like\n> replorigin_session_advance() which uses remote_commit as name of the\n> argument which gets assigned to ReplicationState::remote_lsn.\n> \n\nI agree - updating the replorigin_session_origin_lsn shouldn't break\nanything. As you write, it's essentially a \"mini-commit\" and the commit\norder remains the same.\n\nI'm not sure about resetting replorigin_session_origin_timestamp to 0\nthough. It's not something we rely on very much (it may not correlated\nwith the commit order etc.). But why should we set it to 0? We don't do\nthat for regular commits, right? 
And IMO it makes sense to just use the\ntimestamp of the last commit before the sequence change.\n\nFWIW I've left this in a separate commit, but I'll merge that into 0002\nin the next patch version.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Oct 2023 17:26:49 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 7/25/23 12:20, Amit Kapila wrote:\n> ...\n>\n> I have used the debugger to reproduce this as it needs quite some\n> coordination. I just wanted to see if the sequence can go backward and\n> didn't catch up completely before the sequence state is marked\n> 'ready'. On the publisher side, I created a publication with a table\n> and a sequence. Then did the following steps:\n> SELECT nextval('s') FROM generate_series(1,50);\n> insert into t1 values(1);\n> SELECT nextval('s') FROM generate_series(51,150);\n> \n> Then on the subscriber side with some debugging aid, I could find the\n> values in the sequence shown in the previous email. Sorry, I haven't\n> recorded each and every step but, if you think it helps, I can again\n> try to reproduce it and share the steps.\n> \n\nAmit, can you try to reproduce this backwards movement with the latest\nversion of the patch? I have tried triggering that (mis)behavior, but I\nhaven't been successful so far. I'm hesitant to declare it resolved, as\nit's dependent on timing etc. and you mentioned it required quite some\ncoordination.\n\n\nThanks!\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 12 Oct 2023 17:33:51 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Oct 12, 2023 at 9:03 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 7/25/23 12:20, Amit Kapila wrote:\n> > ...\n> >\n> > I have used the debugger to reproduce this as it needs quite some\n> > coordination. I just wanted to see if the sequence can go backward and\n> > didn't catch up completely before the sequence state is marked\n> > 'ready'. On the publisher side, I created a publication with a table\n> > and a sequence. Then did the following steps:\n> > SELECT nextval('s') FROM generate_series(1,50);\n> > insert into t1 values(1);\n> > SELECT nextval('s') FROM generate_series(51,150);\n> >\n> > Then on the subscriber side with some debugging aid, I could find the\n> > values in the sequence shown in the previous email. Sorry, I haven't\n> > recorded each and every step but, if you think it helps, I can again\n> > try to reproduce it and share the steps.\n> >\n>\n> Amit, can you try to reproduce this backwards movement with the latest\n> version of the patch?\n>\n\nI lost touch with this patch but IIRC the quoted problem per se\nshouldn't occur after the idea to use page LSN instead of slot's LSN\nfor synchronization between sync and apply worker.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 14 Oct 2023 17:23:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thursday, October 12, 2023 11:06 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\r\n>\r\n\r\nHi,\r\n\r\nI have been reviewing the patch set, and here are some initial comments.\r\n\r\n1.\r\n\r\nI think we need to mark the RBTXN_HAS_STREAMABLE_CHANGE flag for transactional\r\nsequence change in ReorderBufferQueueChange().\r\n\r\n2.\r\n\r\nReorderBufferSequenceIsTransactional\r\n\r\nIt seems we call the above function once in sequence_decode() and call it again\r\nin ReorderBufferQueueSequence(), would it better to avoid the second call as\r\nthe hashtable search looks not cheap.\r\n\r\n3.\r\n\r\nThe patch cleans up the sequence hash table when COMMIT or ABORT a transaction\r\n(via ReorderBufferAbort() and ReorderBufferReturnTXN()), while it doesn't seem\r\ndestory the hash table when PREPARE the transaction. It's not a big porblem but\r\nwould it be better to release the memory earlier by destory the table for\r\nprepare ?\r\n\r\n4.\r\n\r\n+pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\r\n...\r\n+\t/* output BEGIN if we haven't yet, but only for the transactional case */\r\n+\tif (transactional)\r\n+\t{\r\n+\t\tif (data->skip_empty_xacts && !txndata->xact_wrote_changes)\r\n+\t\t{\r\n+\t\t\tpg_output_begin(ctx, data, txn, false);\r\n+\t\t}\r\n+\t\ttxndata->xact_wrote_changes = true;\r\n+\t}\r\n\r\nI think we should call pg_output_stream_start() instead of pg_output_begin()\r\nfor streaming sequence changes.\r\n\r\n5.\r\n+\t/*\r\n+\t * Schema should be sent using the original relation because it\r\n+\t * also sends the ancestor's relation.\r\n+\t */\r\n+\tmaybe_send_schema(ctx, txn, relation, relentry);\r\n\r\nThe comment seems a bit misleading here, I think it was used for the partition\r\nlogic in pgoutput_change().\r\n\r\nBest Regards,\r\nHou zj\r\n",
"msg_date": "Tue, 24 Oct 2023 11:31:29 +0000",
"msg_from": "\"Zhijie Hou (Fujitsu)\" <houzj.fnst@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi!\n\nOn 10/24/23 13:31, Zhijie Hou (Fujitsu) wrote:\n> On Thursday, October 12, 2023 11:06 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>\n> \n> Hi,\n> \n> I have been reviewing the patch set, and here are some initial comments.\n> \n> 1.\n> \n> I think we need to mark the RBTXN_HAS_STREAMABLE_CHANGE flag for transactional\n> sequence change in ReorderBufferQueueChange().\n> \n\nTrue. It's unlikely for a transaction to only have sequence increments\nand be large enough to get streamed, and other changes would make it to\nhave this flag. But it's certainly more correct to set the flag even for\nsequence changes.\n\nThe updated patch modifies ReorderBufferQueueChange to do this.\n\n> 2.\n> \n> ReorderBufferSequenceIsTransactional\n> \n> It seems we call the above function once in sequence_decode() and call it again\n> in ReorderBufferQueueSequence(), would it better to avoid the second call as\n> the hashtable search looks not cheap.\n> \n\nIn principle yes, but I don't think it's worth it - I doubt the overhead\nis going to be measurable.\n\nBased on earlier reviews I tried to reduce the code duplication (there\nused to be two separate functions doing the lookup), and I did consider\ndoing just one call in sequence_decode() and passing the XID to\nReorderBufferQueueSequence() - determining the XID is the only purpose\nof the call there. But it didn't seem nice/worth it.\n\n> 3.\n> \n> The patch cleans up the sequence hash table when COMMIT or ABORT a transaction\n> (via ReorderBufferAbort() and ReorderBufferReturnTXN()), while it doesn't seem\n> destory the hash table when PREPARE the transaction. It's not a big porblem but\n> would it be better to release the memory earlier by destory the table for\n> prepare ?\n> \n\nI think you're right. I added the sequence cleanup to a couple places,\nright before cleanup of the transaction. 
I wonder if we should simply\ncall ReorderBufferSequenceCleanup() from ReorderBufferCleanupTXN().\n\n> 4.\n> \n> +pg_decode_stream_sequence(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> ...\n> +\t/* output BEGIN if we haven't yet, but only for the transactional case */\n> +\tif (transactional)\n> +\t{\n> +\t\tif (data->skip_empty_xacts && !txndata->xact_wrote_changes)\n> +\t\t{\n> +\t\t\tpg_output_begin(ctx, data, txn, false);\n> +\t\t}\n> +\t\ttxndata->xact_wrote_changes = true;\n> +\t}\n> \n> I think we should call pg_output_stream_start() instead of pg_output_begin()\n> for streaming sequence changes.\n> \n\nGood catch! Fixed.\n\n> 5.\n> +\t/*\n> +\t * Schema should be sent using the original relation because it\n> +\t * also sends the ancestor's relation.\n> +\t */\n> +\tmaybe_send_schema(ctx, txn, relation, relentry);\n> \n> The comment seems a bit misleading here, I think it was used for the partition\n> logic in pgoutput_change().\n\nTrue. I've removed the comment.\n\n\nAttached is an updated patch, with all those tweaks/fixes.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 2 Nov 2023 16:30:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nI've been cleaning up the first two patches to get them committed soon\n(adding the decoding infrastructure + test_decoding), cleaning up stale\ncomments, updating commit messages etc. And I think it's ready to go,\nbut it's too late over, so I plan going over once more tomorrow and then\nlikely push. But if someone wants to take a look, I'd welcome that.\n\nThe one issue I found during this cleanup is that the patch was missing\nthe changes introduced by 29d0a77fa660 for decoding of other stuff.\n\n commit 29d0a77fa6606f9c01ba17311fc452dabd3f793d\n Author: Amit Kapila <akapila@postgresql.org>\n Date: Thu Oct 26 06:54:16 2023 +0530\n\n Migrate logical slots to the new node during an upgrade.\n ...\n\nI fixed that, but perhaps someone might want to double check ...\n\n\n0003 is here just for completeness - that's the part adding sequences to\nbuilt-in replication. I haven't done much with it, it needs some cleanup\ntoo to get it committable. I don't intend to push that right after\n0001+0002, though.\n\n\nWhile going over 0001, I realized there might be an optimization for\nReorderBufferSequenceIsTransactional. As coded in 0001, it always\nsearches through all top-level transactions, and if there's many of them\nthat might be expensive, even if very few of them have any relfilenodes\nin the hash table. It's still linear search, and it needs to happen for\neach sequence change.\n\nBut can the relfilenode even be in some other top-level transaction? How\ncould it be - our transaction would not see it, and wouldn't be able to\ngenerate the sequence change. So we should be able to simply check *our*\ntransaction (or if it's a subxact, the top-level transaction). Either\nit's there (and it's transactional change), or not (and then it's\nnon-transactional change). The 0004 does this.\n\nThis of course hinges on when exactly the transactions get created, and\nassignments processed. 
For example if this would fire before the txn\ngets assigned to the top-level one, this would break. I don't think this\ncan happen thanks to the immediate logging of assignments, but I'm too\ntired to think about it now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 27 Nov 2023 02:11:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 6:41 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I've been cleaning up the first two patches to get them committed soon\n> (adding the decoding infrastructure + test_decoding), cleaning up stale\n> comments, updating commit messages etc. And I think it's ready to go,\n> but it's too late over, so I plan going over once more tomorrow and then\n> likely push. But if someone wants to take a look, I'd welcome that.\n>\n> The one issue I found during this cleanup is that the patch was missing\n> the changes introduced by 29d0a77fa660 for decoding of other stuff.\n>\n> commit 29d0a77fa6606f9c01ba17311fc452dabd3f793d\n> Author: Amit Kapila <akapila@postgresql.org>\n> Date: Thu Oct 26 06:54:16 2023 +0530\n>\n> Migrate logical slots to the new node during an upgrade.\n> ...\n>\n> I fixed that, but perhaps someone might want to double check ...\n>\n>\n> 0003 is here just for completeness - that's the part adding sequences to\n> built-in replication. I haven't done much with it, it needs some cleanup\n> too to get it committable. I don't intend to push that right after\n> 0001+0002, though.\n>\n>\n> While going over 0001, I realized there might be an optimization for\n> ReorderBufferSequenceIsTransactional. As coded in 0001, it always\n> searches through all top-level transactions, and if there's many of them\n> that might be expensive, even if very few of them have any relfilenodes\n> in the hash table. It's still linear search, and it needs to happen for\n> each sequence change.\n>\n> But can the relfilenode even be in some other top-level transaction? How\n> could it be - our transaction would not see it, and wouldn't be able to\n> generate the sequence change. So we should be able to simply check *our*\n> transaction (or if it's a subxact, the top-level transaction). 
Either\n> it's there (and it's transactional change), or not (and then it's\n> non-transactional change).\n>\n\nI also think the relfilenode should be part of either the current\ntop-level xact or one of its subxact, so looking at all the top-level\ntransactions for each change doesn't seem advisable.\n\n> The 0004 does this.\n>\n> This of course hinges on when exactly the transactions get created, and\n> assignments processed. For example if this would fire before the txn\n> gets assigned to the top-level one, this would break. I don't think this\n> can happen thanks to the immediate logging of assignments, but I'm too\n> tired to think about it now.\n>\n\nThis needs some thought because I think we can't guarantee the\nassociation till we reach the point where we can actually decode the\nxact. See comments in AssertTXNLsnOrder() [1].\n\nI noticed few minor comments while reading the patch:\n1.\n+ * turned on here because the non-transactional logical message is\n+ * decoded without waiting for these records.\n\nInstead of '.. logical message', shouldn't we say sequence change message?\n\n2.\n+ /*\n+ * If we found an entry with matchine relfilenode,\n\ntypo (matchine)\n\n3.\n+ Note that this may not the value obtained by the process updating the\n+ process, but the future sequence value written to WAL (typically about\n+ 32 values ahead).\n\n/may not the value/may not be the value\n\n[1] -\n/*\n* Skip the verification if we don't reach the LSN at which we start\n* decoding the contents of transactions yet because until we reach the\n* LSN, we could have transactions that don't have the association between\n* the top-level transaction and subtransaction yet and consequently have\n* the same LSN. We don't guarantee this association until we try to\n* decode the actual contents of transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 11:34:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 27, 2023 at 6:41 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > While going over 0001, I realized there might be an optimization for\n> > ReorderBufferSequenceIsTransactional. As coded in 0001, it always\n> > searches through all top-level transactions, and if there's many of them\n> > that might be expensive, even if very few of them have any relfilenodes\n> > in the hash table. It's still linear search, and it needs to happen for\n> > each sequence change.\n> >\n> > But can the relfilenode even be in some other top-level transaction? How\n> > could it be - our transaction would not see it, and wouldn't be able to\n> > generate the sequence change. So we should be able to simply check *our*\n> > transaction (or if it's a subxact, the top-level transaction). Either\n> > it's there (and it's transactional change), or not (and then it's\n> > non-transactional change).\n> >\n>\n> I also think the relfilenode should be part of either the current\n> top-level xact or one of its subxact, so looking at all the top-level\n> transactions for each change doesn't seem advisable.\n>\n> > The 0004 does this.\n> >\n> > This of course hinges on when exactly the transactions get created, and\n> > assignments processed. For example if this would fire before the txn\n> > gets assigned to the top-level one, this would break. I don't think this\n> > can happen thanks to the immediate logging of assignments, but I'm too\n> > tired to think about it now.\n> >\n>\n> This needs some thought because I think we can't guarantee the\n> association till we reach the point where we can actually decode the\n> xact. 
See comments in AssertTXNLsnOrder() [1].\n>\n\nI am wondering that instead of building the infrastructure to know\nwhether a particular change is transactional on the decoding side,\ncan't we have some flag in the WAL record to note whether the change\nis transactional or not? I have discussed this point with my colleague\nKuroda-San and we thought that it may be worth exploring whether we\ncan use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\ndetermine if the sequence is created/changed in the current\nsubtransaction and then record that in WAL record. By this, we need to\nhave additional information in the WAL record like XLOG_SEQ_LOG but we\ncan probably do it only with wal_level as logical.\n\nOne minor point:\nIt'd also\n+ * trigger assert in DecodeSequence.\n\nI don't see DecodeSequence() in the patch. Which exact assert/function\nare you referring to here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 15:43:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 11/27/23 11:13, Amit Kapila wrote:\n> On Mon, Nov 27, 2023 at 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Mon, Nov 27, 2023 at 6:41 AM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n>>> While going over 0001, I realized there might be an optimization for\n>>> ReorderBufferSequenceIsTransactional. As coded in 0001, it always\n>>> searches through all top-level transactions, and if there's many of them\n>>> that might be expensive, even if very few of them have any relfilenodes\n>>> in the hash table. It's still linear search, and it needs to happen for\n>>> each sequence change.\n>>>\n>>> But can the relfilenode even be in some other top-level transaction? How\n>>> could it be - our transaction would not see it, and wouldn't be able to\n>>> generate the sequence change. So we should be able to simply check *our*\n>>> transaction (or if it's a subxact, the top-level transaction). Either\n>>> it's there (and it's transactional change), or not (and then it's\n>>> non-transactional change).\n>>>\n>>\n>> I also think the relfilenode should be part of either the current\n>> top-level xact or one of its subxact, so looking at all the top-level\n>> transactions for each change doesn't seem advisable.\n>>\n>>> The 0004 does this.\n>>>\n>>> This of course hinges on when exactly the transactions get created, and\n>>> assignments processed. For example if this would fire before the txn\n>>> gets assigned to the top-level one, this would break. I don't think this\n>>> can happen thanks to the immediate logging of assignments, but I'm too\n>>> tired to think about it now.\n>>>\n>>\n>> This needs some thought because I think we can't guarantee the\n>> association till we reach the point where we can actually decode the\n>> xact. 
See comments in AssertTXNLsnOrder() [1].\n>>\n\nI suppose you mean the comment before the SnapBuildXactNeedsSkip call,\nwhich says:\n\n /*\n * Skip the verification if we don't reach the LSN at which we start\n * decoding the contents of transactions yet because until we reach\n * the LSN, we could have transactions that don't have the association\n * between the top-level transaction and subtransaction yet and\n * consequently have the same LSN. We don't guarantee this\n * association until we try to decode the actual contents of\n * transaction. The ordering of the records prior to the\n * start_decoding_at LSN should have been checked before the restart.\n */\n\nBut doesn't this say that after we actually start decoding / stop\nskipping, we should have seen the assignment? We're already decoding\ntransaction contents (because sequence change *is* part of xact, even if\nwe decide to replay it in the non-transactional way).\n\n> \n> I am wondering that instead of building the infrastructure to know\n> whether a particular change is transactional on the decoding side,\n> can't we have some flag in the WAL record to note whether the change\n> is transactional or not? I have discussed this point with my colleague\n> Kuroda-San and we thought that it may be worth exploring whether we\n> can use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\n> determine if the sequence is created/changed in the current\n> subtransaction and then record that in WAL record. By this, we need to\n> have additional information in the WAL record like XLOG_SEQ_LOG but we\n> can probably do it only with wal_level as logical.\n> \n\nI may not understand the proposal exactly, but it's not enough to know\nif it was created in the same subxact. It might have been created in\nsome earlier subxact in the same top-level xact.\n\nFWIW I think one of the earlier patch versions did something like this,\nby adding a \"created\" flag in the xlog record. 
And we concluded doing\nthis on the decoding side is a better solution.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Nov 2023 11:47:53 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 4:17 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 11/27/23 11:13, Amit Kapila wrote:\n> > On Mon, Nov 27, 2023 at 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Mon, Nov 27, 2023 at 6:41 AM Tomas Vondra\n> >> <tomas.vondra@enterprisedb.com> wrote:\n> >>>\n> >>> While going over 0001, I realized there might be an optimization for\n> >>> ReorderBufferSequenceIsTransactional. As coded in 0001, it always\n> >>> searches through all top-level transactions, and if there's many of them\n> >>> that might be expensive, even if very few of them have any relfilenodes\n> >>> in the hash table. It's still linear search, and it needs to happen for\n> >>> each sequence change.\n> >>>\n> >>> But can the relfilenode even be in some other top-level transaction? How\n> >>> could it be - our transaction would not see it, and wouldn't be able to\n> >>> generate the sequence change. So we should be able to simply check *our*\n> >>> transaction (or if it's a subxact, the top-level transaction). Either\n> >>> it's there (and it's transactional change), or not (and then it's\n> >>> non-transactional change).\n> >>>\n> >>\n> >> I also think the relfilenode should be part of either the current\n> >> top-level xact or one of its subxact, so looking at all the top-level\n> >> transactions for each change doesn't seem advisable.\n> >>\n> >>> The 0004 does this.\n> >>>\n> >>> This of course hinges on when exactly the transactions get created, and\n> >>> assignments processed. For example if this would fire before the txn\n> >>> gets assigned to the top-level one, this would break. I don't think this\n> >>> can happen thanks to the immediate logging of assignments, but I'm too\n> >>> tired to think about it now.\n> >>>\n> >>\n> >> This needs some thought because I think we can't guarantee the\n> >> association till we reach the point where we can actually decode the\n> >> xact. 
See comments in AssertTXNLsnOrder() [1].\n> >>\n>\n> I suppose you mean the comment before the SnapBuildXactNeedsSkip call,\n> which says:\n>\n> /*\n> * Skip the verification if we don't reach the LSN at which we start\n> * decoding the contents of transactions yet because until we reach\n> * the LSN, we could have transactions that don't have the association\n> * between the top-level transaction and subtransaction yet and\n> * consequently have the same LSN. We don't guarantee this\n> * association until we try to decode the actual contents of\n> * transaction. The ordering of the records prior to the\n> * start_decoding_at LSN should have been checked before the restart.\n> */\n>\n> But doesn't this say that after we actually start decoding / stop\n> skipping, we should have seen the assignment? We're already decoding\n> transaction contents (because sequence change *is* part of xact, even if\n> we decide to replay it in the non-transactional way).\n>\n\nIt means to say that the assignment is decided after start_decoding_at\npoint. We haven't decided that we are past start_decoding_at by the\ntime the patch is computing the transactional flag.\n\n> >\n> > I am wondering that instead of building the infrastructure to know\n> > whether a particular change is transactional on the decoding side,\n> > can't we have some flag in the WAL record to note whether the change\n> > is transactional or not? I have discussed this point with my colleague\n> > Kuroda-San and we thought that it may be worth exploring whether we\n> > can use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\n> > determine if the sequence is created/changed in the current\n> > subtransaction and then record that in WAL record. 
By this, we need to\n> > have additional information in the WAL record like XLOG_SEQ_LOG but we\n> > can probably do it only with wal_level as logical.\n> >\n>\n> I may not understand the proposal exactly, but it's not enough to know\n> if it was created in the same subxact. It might have been created in\n> some earlier subxact in the same top-level xact.\n>\n\nWe should be able to detect even some earlier subxact or top-level\nxact based on rd_createSubid/rd_newRelfilelocatorSubid.\n\n> FWIW I think one of the earlier patch versions did something like this,\n> by adding a \"created\" flag in the xlog record. And we concluded doing\n> this on the decoding side is a better solution.\n>\n\noh, I thought it would be much simpler than what we are doing on the\ndecoding-side. Can you please point me to the email discussion where\nthis is concluded or share the reason?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:41:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 4:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Nov 27, 2023 at 4:17 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n>\n> > FWIW I think one of the earlier patch versions did something like this,\n> > by adding a \"created\" flag in the xlog record. And we concluded doing\n> > this on the decoding side is a better solution.\n> >\n>\n> oh, I thought it would be much simpler than what we are doing on the\n> decoding-side. Can you please point me to the email discussion where\n> this is concluded or share the reason?\n>\n\nI'll check the thread about this point by myself as well but if by\nchance you remember it then kindly share it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 27 Nov 2023 16:57:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Dear Amit, Tomas,\r\n\r\n> > >\r\n> > > I am wondering that instead of building the infrastructure to know\r\n> > > whether a particular change is transactional on the decoding side,\r\n> > > can't we have some flag in the WAL record to note whether the change\r\n> > > is transactional or not? I have discussed this point with my colleague\r\n> > > Kuroda-San and we thought that it may be worth exploring whether we\r\n> > > can use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\r\n> > > determine if the sequence is created/changed in the current\r\n> > > subtransaction and then record that in WAL record. By this, we need to\r\n> > > have additional information in the WAL record like XLOG_SEQ_LOG but we\r\n> > > can probably do it only with wal_level as logical.\r\n> > >\r\n> >\r\n> > I may not understand the proposal exactly, but it's not enough to know\r\n> > if it was created in the same subxact. It might have been created in\r\n> > some earlier subxact in the same top-level xact.\r\n> >\r\n> \r\n> We should be able to detect even some earlier subxact or top-level\r\n> xact based on rd_createSubid/rd_newRelfilelocatorSubid.\r\n\r\nHere is a small PoC patchset to help your understanding. Please see attached\r\nfiles.\r\n\r\n0001, 0002 were not changed, and 0004 was reassigned to 0003.\r\n(For now, I focused only on test_decoding, because it is only for evaluation purpose.)\r\n\r\n0004 is what we really wanted to say. is_transactional is added in WAL record, and it stores\r\nwhether the operations is transactional. In order to distinguish the status, rd_createSubid and\r\nrd_newRelfilelocatorSubid are used. According to the comment, they would be a valid value\r\nonly when the relation was changed within the transaction\r\nAlso, sequences_hash was not needed anymore, so it and related functions were removed.\r\n\r\nHow do you think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Mon, 27 Nov 2023 12:08:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "\n\nOn 11/27/23 12:11, Amit Kapila wrote:\n> On Mon, Nov 27, 2023 at 4:17 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 11/27/23 11:13, Amit Kapila wrote:\n>>> On Mon, Nov 27, 2023 at 11:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> On Mon, Nov 27, 2023 at 6:41 AM Tomas Vondra\n>>>> <tomas.vondra@enterprisedb.com> wrote:\n>>>>>\n>>>>> While going over 0001, I realized there might be an optimization for\n>>>>> ReorderBufferSequenceIsTransactional. As coded in 0001, it always\n>>>>> searches through all top-level transactions, and if there's many of them\n>>>>> that might be expensive, even if very few of them have any relfilenodes\n>>>>> in the hash table. It's still linear search, and it needs to happen for\n>>>>> each sequence change.\n>>>>>\n>>>>> But can the relfilenode even be in some other top-level transaction? How\n>>>>> could it be - our transaction would not see it, and wouldn't be able to\n>>>>> generate the sequence change. So we should be able to simply check *our*\n>>>>> transaction (or if it's a subxact, the top-level transaction). Either\n>>>>> it's there (and it's transactional change), or not (and then it's\n>>>>> non-transactional change).\n>>>>>\n>>>>\n>>>> I also think the relfilenode should be part of either the current\n>>>> top-level xact or one of its subxact, so looking at all the top-level\n>>>> transactions for each change doesn't seem advisable.\n>>>>\n>>>>> The 0004 does this.\n>>>>>\n>>>>> This of course hinges on when exactly the transactions get created, and\n>>>>> assignments processed. For example if this would fire before the txn\n>>>>> gets assigned to the top-level one, this would break. I don't think this\n>>>>> can happen thanks to the immediate logging of assignments, but I'm too\n>>>>> tired to think about it now.\n>>>>>\n>>>>\n>>>> This needs some thought because I think we can't guarantee the\n>>>> association till we reach the point where we can actually decode the\n>>>> xact. See comments in AssertTXNLsnOrder() [1].\n>>>>\n>>\n>> I suppose you mean the comment before the SnapBuildXactNeedsSkip call,\n>> which says:\n>>\n>> /*\n>> * Skip the verification if we don't reach the LSN at which we start\n>> * decoding the contents of transactions yet because until we reach\n>> * the LSN, we could have transactions that don't have the association\n>> * between the top-level transaction and subtransaction yet and\n>> * consequently have the same LSN. We don't guarantee this\n>> * association until we try to decode the actual contents of\n>> * transaction. The ordering of the records prior to the\n>> * start_decoding_at LSN should have been checked before the restart.\n>> */\n>>\n>> But doesn't this say that after we actually start decoding / stop\n>> skipping, we should have seen the assignment? We're already decoding\n>> transaction contents (because sequence change *is* part of xact, even if\n>> we decide to replay it in the non-transactional way).\n>>\n> \n> It means to say that the assignment is decided after start_decoding_at\n> point. We haven't decided that we are past start_decoding_at by the\n> time the patch is computing the transactional flag.\n> \n\nAh, I see. We're deciding if the change is transactional before calling\nSnapBuildXactNeedsSkip. That's a bit unfortunate.\n\n>>>\n>>> I am wondering that instead of building the infrastructure to know\n>>> whether a particular change is transactional on the decoding side,\n>>> can't we have some flag in the WAL record to note whether the change\n>>> is transactional or not? I have discussed this point with my colleague\n>>> Kuroda-San and we thought that it may be worth exploring whether we\n>>> can use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\n>>> determine if the sequence is created/changed in the current\n>>> subtransaction and then record that in WAL record. By this, we need to\n>>> have additional information in the WAL record like XLOG_SEQ_LOG but we\n>>> can probably do it only with wal_level as logical.\n>>>\n>>\n>> I may not understand the proposal exactly, but it's not enough to know\n>> if it was created in the same subxact. It might have been created in\n>> some earlier subxact in the same top-level xact.\n>>\n> \n> We should be able to detect even some earlier subxact or top-level\n> xact based on rd_createSubid/rd_newRelfilelocatorSubid.\n> \n\nInteresting. I admit I haven't considered using these fields before, so\nI need to familiarize myself with them a bit, and try if it'd work.\n\n>> FWIW I think one of the earlier patch versions did something like this,\n>> by adding a \"created\" flag in the xlog record. And we concluded doing\n>> this on the decoding side is a better solution.\n>>\n> \n> oh, I thought it would be much simpler than what we are doing on the\n> decoding-side. Can you please point me to the email discussion where\n> this is concluded or share the reason?\n> \n\nI think the discussion started around [1], and then in a bunch of\nfollowing messages (search for \"relfilenode\").\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5v_vVqkhF4ehST9EzpX1L3bemD1S%2BkTk_-ZVu_ir-nKDw%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Nov 2023 14:41:40 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "On 11/27/23 13:08, Hayato Kuroda (Fujitsu) wrote:\n> Dear Amit, Tomas,\n> \n>>>>\n>>>> I am wondering that instead of building the infrastructure to know\n>>>> whether a particular change is transactional on the decoding side,\n>>>> can't we have some flag in the WAL record to note whether the change\n>>>> is transactional or not? I have discussed this point with my colleague\n>>>> Kuroda-San and we thought that it may be worth exploring whether we\n>>>> can use rd_createSubid/rd_newRelfilelocatorSubid in RelationData to\n>>>> determine if the sequence is created/changed in the current\n>>>> subtransaction and then record that in WAL record. By this, we need to\n>>>> have additional information in the WAL record like XLOG_SEQ_LOG but we\n>>>> can probably do it only with wal_level as logical.\n>>>>\n>>>\n>>> I may not understand the proposal exactly, but it's not enough to know\n>>> if it was created in the same subxact. It might have been created in\n>>> some earlier subxact in the same top-level xact.\n>>>\n>>\n>> We should be able to detect even some earlier subxact or top-level\n>> xact based on rd_createSubid/rd_newRelfilelocatorSubid.\n> \n> Here is a small PoC patchset to help your understanding. Please see attached\n> files.\n> \n> 0001, 0002 were not changed, and 0004 was reassigned to 0003.\n> (For now, I focused only on test_decoding, because it is only for evaluation purpose.)\n> \n> 0004 is what we really wanted to say. is_transactional is added in WAL record, and it stores\n> whether the operations is transactional. In order to distinguish the status, rd_createSubid and\n> rd_newRelfilelocatorSubid are used. According to the comment, they would be a valid value\n> only when the relation was changed within the transaction\n> Also, sequences_hash was not needed anymore, so it and related functions were removed.\n> \n> How do you think?\n> \n\nI think it's a very nice idea, assuming it maintains the current\nbehavior. It makes a lot of code unnecessary, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 27 Nov 2023 14:54:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "Hi,\n\nI spent a bit of time looking at the proposed change, and unfortunately\nlogging just the boolean flag does not work. A good example is this bit\nfrom a TAP test added by the patch for built-in replication (which was\nnot included with the WIP patch):\n\n  BEGIN;\n  ALTER SEQUENCE s RESTART WITH 1000;\n  SAVEPOINT sp1;\n  INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);\n  ROLLBACK TO sp1;\n  COMMIT;\n\nThis is expected to produce:\n\n  1131|0|t\n\nbut produces\n\n  1000|0|f\n\ninstead. The reason is very simple - as implemented, the patch simply\nchecks if the relfilenode is from the same top-level transaction, which\nit is, and sets the flag to \"true\". So we know the sequence changes need\nto be queued and replayed as part of this transaction.\n\nBut then during decoding, we still queue the changes into the subxact,\nwhich then aborts, and the changes are discarded. That is not how it's\nsupposed to work, because the new relfilenode is still valid, someone\nmight do nextval() and commit. And the nextval() may not get WAL-logged,\nso we'd lose this.\n\nWhat I guess we might do is log not just a boolean flag, but the XID of\nthe subtransaction that created the relfilenode. And then during\ndecoding we'd queue the changes into this subtransaction ...\n\n0006 in the attached patch series does this, and it seems to fix the TAP\ntest failure. I left it at the end, to make it easier to run tests\nwithout the patch applied.\n\nThere's a couple of open questions, though.\n\n- I'm not sure it's a good idea to log XIDs of subxacts into WAL like\nthis. I think it'd be OK, and there are other records that do that (like\nRunningXacts or commit record), but maybe I'm missing something.\n\n- We need the actual XID, not just the SubTransactionId. I wrote\nSubTransactionGetXid() to do this, but I have not worked with subxacts\nmuch, so it'd be better if someone checked it's dealing with XID and\nFullTransactionId correctly.\n\n- I'm a bit concerned how this will perform with deeply nested\nsubtransactions. SubTransactionGetXid() does pretty much a linear\nsearch, which might be somewhat expensive. And it's a cost put on\neveryone who writes WAL, not just the decoding process. Maybe we should\nat least limit this to wal_level=logical?\n\n- seq_decode() then uses this XID (for transactional changes) instead of\nthe XID logged in the record itself. I think that's fine - it's the TXN\nwhere we want to queue the change, after all, right?\n\n- (unrelated) I also noticed that maybe ReorderBufferQueueSequence()\nshould always expect a valid XID. The code seems to suggest people can\npass InvalidTransactionId in the non-transactional case, but that's not\ntrue because the rb->sequence() then fails.\n\n\nThe attached patches should also fix all the typos reported by Amit\nearlier today.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Mon, 27 Nov 2023 19:15:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "FWIW, here are some more minor review comments for v20231127-3-0001\n\n======\ndoc/src/sgml/logicaldecoding.sgml\n\n1.\n+      The <parameter>txn</parameter> parameter contains meta information about\n+      the transaction the sequence change is part of. Note however that for\n+      non-transactional updates, the transaction may be NULL, depending on\n+      if the transaction already has an XID assigned.\n+      The <parameter>sequence_lsn</parameter> has the WAL location of the\n+      sequence update. <parameter>transactional</parameter> says if the\n+      sequence has to be replayed as part of the transaction or directly.\n\n/says if/specifies whether/\n\n======\nsrc/backend/commands/sequence.c\n\n2. DecodeSeqTuple\n\n+ memcpy(((char *) tuple->tuple.t_data),\n+    data + sizeof(xl_seq_rec),\n+    SizeofHeapTupleHeader);\n+\n+ memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,\n+    data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,\n+    datalen);\n\nMaybe I am misreading but isn't this just copying 2 contiguous pieces\nof data? Won't a single memcpy of (SizeofHeapTupleHeader + datalen)\nachieve the same?\n\n======\n.../replication/logical/reorderbuffer.c\n\n3.\n+ * To decide if a sequence change is transactional, we maintain a hash\n+ * table of relfilenodes created in each (sub)transactions, along with\n+ * the XID of the (sub)transaction that created the relfilenode. The\n+ * entries from substransactions are copied to the top-level transaction\n+ * to make checks cheaper. The hash table gets cleaned up when the\n+ * transaction completes (commit/abort).\n\n/substransactions/subtransactions/\n\n~~~\n\n4.\n+ * A naive approach would be to just loop through all transactions and check\n+ * each of them, but there may be (easily thousands) of subtransactions, and\n+ * the check happens for each sequence change. So this could be very costly.\n\n/may be (easily thousands) of/may be (easily thousands of)/\n\n~~~\n\n5. ReorderBufferSequenceCleanup\n\n+ while ((ent = (ReorderBufferSequenceEnt *)\nhash_seq_search(&scan_status)) != NULL)\n+ {\n+ (void) hash_search(txn->toptxn->sequences_hash,\n+    (void *) &ent->rlocator,\n+    HASH_REMOVE, NULL);\n+ }\n\nTypically, other HASH_REMOVE code I saw would check result for NULL to\ngive elog(ERROR, \"hash table corrupted\");\n\n~~~\n\n6. ReorderBufferQueueSequence\n\n+ if (xid != InvalidTransactionId)\n+ txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);\n\nHow about using the macro: TransactionIdIsValid\n\n~~~\n\n7. ReorderBufferQueueSequence\n\n+ if (reloid == InvalidOid)\n+ elog(ERROR, \"could not map filenode \\\"%s\\\" to relation OID\",\n+ relpathperm(rlocator,\n+ MAIN_FORKNUM));\n\nHow about using the macro: OidIsValid\n\n~~~\n\n8.\n+ /*\n+ * Calculate the first value of the next batch (at which point we\n+ * generate and decode another WAL record.\n+ */\n\nMissing ')'\n\n~~~\n\n9. ReorderBufferAddRelFileLocator\n\n+ /*\n+ * We only care about sequence relfilenodes for now, and those always have\n+ * a XID. So if there's no XID, don't bother adding them to the hash.\n+ */\n+ if (xid == InvalidTransactionId)\n+ return;\n\nHow about using the macro: TransactionIdIsValid\n\n~~~\n\n10. ReorderBufferProcessTXN\n\n+ if (reloid == InvalidOid)\n+ elog(ERROR, \"could not map filenode \\\"%s\\\" to relation OID\",\n+ relpathperm(change->data.sequence.locator,\n+ MAIN_FORKNUM));\n\nHow about using the macro: OidIsValid\n\n~~~\n\n11. ReorderBufferChangeSize\n\n+ if (tup)\n+ {\n+ sz += sizeof(HeapTupleData);\n+ len = tup->tuple.t_len;\n+ sz += len;\n+ }\n\nWhy is the 'sz' increment split into 2 parts?\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Tue, 28 Nov 2023 09:06:07 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Mon, Nov 27, 2023 at 11:45 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I spent a bit of time looking at the proposed change, and unfortunately\n> logging just the boolean flag does not work. A good example is this bit\n> from a TAP test added by the patch for built-in replication (which was\n> not included with the WIP patch):\n>\n> BEGIN;\n> ALTER SEQUENCE s RESTART WITH 1000;\n> SAVEPOINT sp1;\n> INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);\n> ROLLBACK TO sp1;\n> COMMIT;\n>\n> This is expected to produce:\n>\n> 1131|0|t\n>\n> but produces\n>\n> 1000|0|f\n>\n> instead. The reason is very simple - as implemented, the patch simply\n> checks if the relfilenode is from the same top-level transaction, which\n> it is, and sets the flag to \"true\". So we know the sequence changes need\n> to be queued and replayed as part of this transaction.\n>\n> But then during decoding, we still queue the changes into the subxact,\n> which then aborts, and the changes are discarded. That is not how it's\n> supposed to work, because the new relfilenode is still valid, someone\n> might do nextval() and commit. And the nextval() may not get WAL-logged,\n> so we'd lose this.\n>\n> What I guess we might do is log not just a boolean flag, but the XID of\n> the subtransaction that created the relfilenode. And then during\n> decoding we'd queue the changes into this subtransaction ...\n>\n> 0006 in the attached patch series does this, and it seems to fix the TAP\n> test failure. I left it at the end, to make it easier to run tests\n> without the patch applied.\n>\n\nOffhand, I don't have any better idea than what you have suggested for\nthe problem but this needs some thoughts including the questions asked\nby you. I'll spend some time on it and respond back.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 28 Nov 2023 17:02:03 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "On 11/28/23 12:32, Amit Kapila wrote:\n> On Mon, Nov 27, 2023 at 11:45 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I spent a bit of time looking at the proposed change, and unfortunately\n>> logging just the boolean flag does not work. A good example is this bit\n>> from a TAP test added by the patch for built-in replication (which was\n>> not included with the WIP patch):\n>>\n>>   BEGIN;\n>>   ALTER SEQUENCE s RESTART WITH 1000;\n>>   SAVEPOINT sp1;\n>>   INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,100);\n>>   ROLLBACK TO sp1;\n>>   COMMIT;\n>>\n>> This is expected to produce:\n>>\n>>   1131|0|t\n>>\n>> but produces\n>>\n>>   1000|0|f\n>>\n>> instead. The reason is very simple - as implemented, the patch simply\n>> checks if the relfilenode is from the same top-level transaction, which\n>> it is, and sets the flag to \"true\". So we know the sequence changes need\n>> to be queued and replayed as part of this transaction.\n>>\n>> But then during decoding, we still queue the changes into the subxact,\n>> which then aborts, and the changes are discarded. That is not how it's\n>> supposed to work, because the new relfilenode is still valid, someone\n>> might do nextval() and commit. And the nextval() may not get WAL-logged,\n>> so we'd lose this.\n>>\n>> What I guess we might do is log not just a boolean flag, but the XID of\n>> the subtransaction that created the relfilenode. And then during\n>> decoding we'd queue the changes into this subtransaction ...\n>>\n>> 0006 in the attached patch series does this, and it seems to fix the TAP\n>> test failure. I left it at the end, to make it easier to run tests\n>> without the patch applied.\n>>\n> \n> Offhand, I don't have any better idea than what you have suggested for\n> the problem but this needs some thoughts including the questions asked\n> by you. I'll spend some time on it and respond back.\n> \n\nI've been experimenting with the idea to log the XID, and for a moment I\nwas worried it actually can't work, because subtransactions may not\nactually be just nested in simple way, but form a tree. And what if the\nsequence was altered in a different branch (sibling subxact), not in the\nimmediate parent. In which case the new SubTransactionGetXid() would\nfail, because it just walks the current chain of subtransactions.\n\nI've been thinking about cases like this:\n\n  BEGIN;\n  CREATE SEQUENCE s;            # XID 1000\n  SELECT alter_sequence();      # XID 1001\n  SAVEPOINT s1;\n  SELECT COUNT(nextval('s')) FROM generate_series(1,100);  # XID 1000\n  ROLLBACK TO s1;\n  SELECT COUNT(nextval('s')) FROM generate_series(1,100);  # XID 1000\n  COMMIT;\n\nThe XID values are what the sequence wal record will reference, assuming\nthat the main transaction XID is 1000.\n\nInitially, I thought it's wrong that the nextval() calls reference XID\nof the main transaction, because the last relfilenode comes from 1001,\nwhich is the subxact created by alter_sequence() thanks to the exception\nhandling block. And that's where the approach in reorderbuffer would\nqueue the changes.\n\nBut I think this is actually correct too. When a subtransaction commits\n(e.g. when alter_sequence() completes), it essentially becomes part of\nthe parent. And AtEOSubXact_cleanup() updates rd_newRelfilelocatorSubid\naccordingly, setting it to parentSubid.\n\nThis also means that SubTransactionGetXid() can't actually fail, because\nthe ID has to reference an active subtransaction in the current stack.\nI'm still concerned about the cost of the lookup, because the list may\nbe long and the subxact we're looking for may be quite high, but I guess\nwe might have another field, caching the XID. It'd need to be updated\nonly in AtEOSubXact_cleanup, and at that point we know it's the\nimmediate parent, so it'd be pretty cheap I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 28 Nov 2023 15:41:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "Hi,\n\nI have been hacking on the improvements outlined in my\npreceding e-mail, but I have some bad news - I ran into an issue that I\ndon't know how to solve :-(\n\nConsider this transaction:\n\n  BEGIN;\n  ALTER SEQUENCE s RESTART 1000;\n\n  SAVEPOINT s1;\n  ALTER SEQUENCE s RESTART 2000;\n  ROLLBACK TO s1;\n\n  INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,40);\n  COMMIT;\n\nIf you try this with the approach relying on rd_newRelfilelocatorSubid\nand rd_createSubid, it fails like this on the subscriber:\n\n  ERROR:  could not map filenode \"base/5/16394\" to relation OID\n\nThis happens because ReorderBufferQueueSequence tries to do this in the\nnon-transactional branch:\n\n  reloid = RelidByRelfilenumber(rlocator.spcOid, rlocator.relNumber);\n\nand the relfilenode is the one created by the first ALTER. But this is\nobviously wrong - the changes should have been treated as transactional,\nbecause they are tied to the first ALTER. So how did we get there?\n\nWell, the whole problem is that in case of abort, AtEOSubXact_cleanup\nresets the two fields to InvalidSubTransactionId. Which means the\nrollback in the above transaction also forgets about the first ALTER.\nNow that I look at the RelationData comments, it actually describes\nexactly this situation:\n\n *\n * rd_newRelfilelocatorSubid is the ID of the highest subtransaction\n * the most-recent relfilenumber change has survived into or zero if\n * not changed in the current transaction (or we have forgotten\n * changing it). This field is accurate when non-zero, but it can be\n * zero when a relation has multiple new relfilenumbers within a\n * single transaction, with one of them occurring in a subsequently\n * aborted subtransaction, e.g.\n * BEGIN;\n * TRUNCATE t;\n * SAVEPOINT save;\n * TRUNCATE t;\n * ROLLBACK TO save;\n * -- rd_newRelfilelocatorSubid is now forgotten\n *\n\nThe root of this problem is that we'd need some sort of \"history\" for\nthe field, so that when a subxact aborts, we can restore the previous\nvalue. But we obviously don't have that, and I doubt we want to add that\nto relcache - for example, it'd either need to impose some limit on the\nhistory (and thus a failure when we reach the limit), or it'd need to\nhandle histories of arbitrary length.\n\nAt this point I don't see a solution for this, which means the best way\nforward with the sequence decoding patch seems to be the original\napproach, on the decoding side.\n\nI'm attaching the patch with 0005 and 0006, adding two simple tests (no\nother changes compared to yesterday's version).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 28 Nov 2023 22:29:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "\n\nOn 11/27/23 23:06, Peter Smith wrote:\n> FWIW, here are some more minor review comments for v20231127-3-0001\n> \n> ======\n> doc/src/sgml/logicaldecoding.sgml\n> \n> 1.\n> +      The <parameter>txn</parameter> parameter contains meta information about\n> +      the transaction the sequence change is part of. Note however that for\n> +      non-transactional updates, the transaction may be NULL, depending on\n> +      if the transaction already has an XID assigned.\n> +      The <parameter>sequence_lsn</parameter> has the WAL location of the\n> +      sequence update. <parameter>transactional</parameter> says if the\n> +      sequence has to be replayed as part of the transaction or directly.\n> \n> /says if/specifies whether/\n> \n\nWill fix.\n\n> ======\n> src/backend/commands/sequence.c\n> \n> 2. DecodeSeqTuple\n> \n> + memcpy(((char *) tuple->tuple.t_data),\n> +    data + sizeof(xl_seq_rec),\n> +    SizeofHeapTupleHeader);\n> +\n> + memcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,\n> +    data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,\n> +    datalen);\n> \n> Maybe I am misreading but isn't this just copying 2 contiguous pieces\n> of data? Won't a single memcpy of (SizeofHeapTupleHeader + datalen)\n> achieve the same?\n> \n\nYou're right, will fix. I think the code looked different before, got\nsimplified, and I hadn't noticed this can be a single memcpy().\n\n> ======\n> .../replication/logical/reorderbuffer.c\n> \n> 3.\n> + * To decide if a sequence change is transactional, we maintain a hash\n> + * table of relfilenodes created in each (sub)transactions, along with\n> + * the XID of the (sub)transaction that created the relfilenode. The\n> + * entries from substransactions are copied to the top-level transaction\n> + * to make checks cheaper. The hash table gets cleaned up when the\n> + * transaction completes (commit/abort).\n> \n> /substransactions/subtransactions/\n> \n\nWill fix.\n\n> ~~~\n> \n> 4.\n> + * A naive approach would be to just loop through all transactions and check\n> + * each of them, but there may be (easily thousands) of subtransactions, and\n> + * the check happens for each sequence change. So this could be very costly.\n> \n> /may be (easily thousands) of/may be (easily thousands of)/\n> \n> ~~~\n\nThanks. I've reworded this to\n\n  ... may be many (easily thousands of) subtransactions ...\n\n> \n> 5. ReorderBufferSequenceCleanup\n> \n> + while ((ent = (ReorderBufferSequenceEnt *)\n> hash_seq_search(&scan_status)) != NULL)\n> + {\n> + (void) hash_search(txn->toptxn->sequences_hash,\n> +    (void *) &ent->rlocator,\n> +    HASH_REMOVE, NULL);\n> + }\n> \n> Typically, other HASH_REMOVE code I saw would check result for NULL to\n> give elog(ERROR, \"hash table corrupted\");\n> \n\nGood point, I'll add the error check.\n\n> ~~~\n> \n> 6. ReorderBufferQueueSequence\n> \n> + if (xid != InvalidTransactionId)\n> + txn = ReorderBufferTXNByXid(rb, xid, true, NULL, lsn, true);\n> \n> How about using the macro: TransactionIdIsValid\n> \n\nActually, as I wrote in some other message, I think the check is not\nnecessary. Or rather it should be an assert that XID is valid. And yeah,\nthe macro is a good idea.\n\n> ~~~\n> \n> 7. ReorderBufferQueueSequence\n> \n> + if (reloid == InvalidOid)\n> + elog(ERROR, \"could not map filenode \\\"%s\\\" to relation OID\",\n> + relpathperm(rlocator,\n> + MAIN_FORKNUM));\n> \n> How about using the macro: OidIsValid\n> \n\nI chose to keep this consistent with other places in reorderbuffer, and\nall of them use the equality check.\n\n> ~~~\n> \n> 8.\n> + /*\n> + * Calculate the first value of the next batch (at which point we\n> + * generate and decode another WAL record.\n> + */\n> \n> Missing ')'\n> \n\nWill fix.\n\n> ~~~\n> \n> 9. ReorderBufferAddRelFileLocator\n> \n> + /*\n> + * We only care about sequence relfilenodes for now, and those always have\n> + * a XID. So if there's no XID, don't bother adding them to the hash.\n> + */\n> + if (xid == InvalidTransactionId)\n> + return;\n> \n> How about using the macro: TransactionIdIsValid\n> \n\nWill change.\n\n> ~~~\n> \n> 10. ReorderBufferProcessTXN\n> \n> + if (reloid == InvalidOid)\n> + elog(ERROR, \"could not map filenode \\\"%s\\\" to relation OID\",\n> + relpathperm(change->data.sequence.locator,\n> + MAIN_FORKNUM));\n> \n> How about using the macro: OidIsValid\n> \n\nSame as the other Oid check - consistency.\n\n> ~~~\n> \n> 11. ReorderBufferChangeSize\n> \n> + if (tup)\n> + {\n> + sz += sizeof(HeapTupleData);\n> + len = tup->tuple.t_len;\n> + sz += len;\n> + }\n> \n> Why is the 'sz' increment split into 2 parts?\n> \n\nBecause the other branches in ReorderBufferChangeSize do it that way.\nYou're right it might be coded on a single line.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Nov 2023 13:45:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "Hi!\n\nConsidering my findings about issues with the rd_newRelfilelocatorSubid\nfield and how it makes that approach impossible, I decided to rip out\nthose patches, and go back to the approach where reorderbuffer tracks\nnew relfilenodes. This means the open questions I listed two days ago\ndisappear, because all of that was about the alternative approach.\n\nI've also added a couple more tests into 034_sequences.pl, testing the\nbasic cases with subtransactions that rollback (or not), etc. The\nattached patch also addresses the review comments by Peter Smith.\n\nThe one remaining open question is ReorderBufferSequenceIsTransactional\nand whether it can do better than searching through all top-level\ntransactions. The idea of 0002 was to only search the current top-level\nxact, but Amit pointed out we can't rely on seeing the assignment until\nwe know we're in a consistent snapshot.\n\nI'm yet to try doing some tests to measure how expensive this lookup can\nbe in practice. But let's assume it's measurable and significant enough\nto matter. I wonder if we could salvage this optimization somehow. I'm\nthinking about three options:\n\n1) Could ReorderBufferSequenceIsTransactional check the snapshot is\nalready consistent etc. and use the optimized variant (looking only at\nthe same top-level xact) in that case? And if not, fallback to the\nsearch of all top-level xacts. In practice, the full search would be\nused only for a short initial period.\n\n2) We could also make ReorderBufferSequenceIsTransactional to always\ncheck the same top-level transaction first and then fallback, no matter\nwhether the snapshot is consistent or not. The problem is this doesn't\nreally optimize the common case where there are no new relfilenodes, so\nwe won't find a match in the top-level xact, and will always search\neverything anyway.\n\n3) Alternatively, we could maintain a global hash table, instead of in\nthe top-level transaction. So there'd always be two copies, one in the\nxact itself and then in the global hash. Now there's either one (in\ncurrent top-level xact), or two (subxact + top-level xact).\n\nI kinda like (3), because it just works and doesn't require the snapshot\nbeing consistent etc.\n\n\nOpinions?\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Wed, 29 Nov 2023 14:28:47 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
    "msg_contents": "On Wed, Nov 29, 2023 at 2:59 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I have been hacking on improving the improvements outlined in my\n> preceding e-mail, but I have some bad news - I ran into an issue that I\n> don't know how to solve :-(\n>\n> Consider this transaction:\n>\n>   BEGIN;\n>   ALTER SEQUENCE s RESTART 1000;\n>\n>   SAVEPOINT s1;\n>   ALTER SEQUENCE s RESTART 2000;\n>   ROLLBACK TO s1;\n>\n>   INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,40);\n>   COMMIT;\n>\n> If you try this with the approach relying on rd_newRelfilelocatorSubid\n> and rd_createSubid, it fails like this on the subscriber:\n>\n>   ERROR:  could not map filenode \"base/5/16394\" to relation OID\n>\n> This happens because ReorderBufferQueueSequence tries to do this in the\n> non-transactional branch:\n>\n>   reloid = RelidByRelfilenumber(rlocator.spcOid, rlocator.relNumber);\n>\n> and the relfilenode is the one created by the first ALTER. But this is\n> obviously wrong - the changes should have been treated as transactional,\n> because they are tied to the first ALTER. So how did we get there?\n>\n> Well, the whole problem is that in case of abort, AtEOSubXact_cleanup\n> resets the two fields to InvalidSubTransactionId. Which means the\n> rollback in the above transaction also forgets about the first ALTER.\n> Now that I look at the RelationData comments, it actually describes\n> exactly this situation:\n>\n>  *\n>  * rd_newRelfilelocatorSubid is the ID of the highest subtransaction\n>  * the most-recent relfilenumber change has survived into or zero if\n>  * not changed in the current transaction (or we have forgotten\n>  * changing it). This field is accurate when non-zero, but it can be\n>  * zero when a relation has multiple new relfilenumbers within a\n>  * single transaction, with one of them occurring in a subsequently\n>  * aborted subtransaction, e.g.\n>  * BEGIN;\n>  * TRUNCATE t;\n>  * SAVEPOINT save;\n>  * TRUNCATE t;\n>  * ROLLBACK TO save;\n>  * -- rd_newRelfilelocatorSubid is now forgotten\n>  *\n>\n> The root of this problem is that we'd need some sort of \"history\" for\n> the field, so that when a subxact aborts, we can restore the previous\n> value. But we obviously don't have that, and I doubt we want to add that\n> to relcache - for example, it'd either need to impose some limit on the\n> history (and thus a failure when we reach the limit), or it'd need to\n> handle histories of arbitrary length.\n>\n\nYeah, I think that would be really tricky and we may not want to go there.\n\n> At this point I don't see a solution for this, which means the best way\n> forward with the sequence decoding patch seems to be the original\n> approach, on the decoding side.\n>\n\nOne thing that worries me about that approach is that it can suck with\nthe workload that has a lot of DDLs that create XLOG_SMGR_CREATE\nrecords. We have previously fixed some such workloads in logical\ndecoding where decoding a transaction containing truncation of a table\nwith a lot of partitions (1000 or more) used to take a very long time.\nDon't we face performance issues in such scenarios?\n\nHow do we see this work w.r.t to some sort of global sequences? There\nis some recent discussion where I have raised a similar point [1].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JF%3D4_Eoq7FFjHSe98-_ooJ5QWd0s2_pj8gR%2B_dvwKxvA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 29 Nov 2023 19:12:18 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 11/29/23 14:42, Amit Kapila wrote:\n> On Wed, Nov 29, 2023 at 2:59 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I have been hacking on improving the improvements outlined in my\n>> preceding e-mail, but I have some bad news - I ran into an issue that I\n>> don't know how to solve :-(\n>>\n>> Consider this transaction:\n>>\n>> BEGIN;\n>> ALTER SEQUENCE s RESTART 1000;\n>>\n>> SAVEPOINT s1;\n>> ALTER SEQUENCE s RESTART 2000;\n>> ROLLBACK TO s1;\n>>\n>> INSERT INTO seq_test SELECT nextval('s') FROM generate_series(1,40);\n>> COMMIT;\n>>\n>> If you try this with the approach relying on rd_newRelfilelocatorSubid\n>> and rd_createSubid, it fails like this on the subscriber:\n>>\n>> ERROR: could not map filenode \"base/5/16394\" to relation OID\n>>\n>> This happens because ReorderBufferQueueSequence tries to do this in the\n>> non-transactional branch:\n>>\n>> reloid = RelidByRelfilenumber(rlocator.spcOid, rlocator.relNumber);\n>>\n>> and the relfilenode is the one created by the first ALTER. But this is\n>> obviously wrong - the changes should have been treated as transactional,\n>> because they are tied to the first ALTER. So how did we get there?\n>>\n>> Well, the whole problem is that in case of abort, AtEOSubXact_cleanup\n>> resets the two fields to InvalidSubTransactionId. Which means the\n>> rollback in the above transaction also forgets about the first ALTER.\n>> Now that I look at the RelationData comments, it actually describes\n>> exactly this situation:\n>>\n>> *\n>> * rd_newRelfilelocatorSubid is the ID of the highest subtransaction\n>> * the most-recent relfilenumber change has survived into or zero if\n>> * not changed in the current transaction (or we have forgotten\n>> * changing it). 
This field is accurate when non-zero, but it can be\n>> * zero when a relation has multiple new relfilenumbers within a\n>> * single transaction, with one of them occurring in a subsequently\n>> * aborted subtransaction, e.g.\n>> * BEGIN;\n>> * TRUNCATE t;\n>> * SAVEPOINT save;\n>> * TRUNCATE t;\n>> * ROLLBACK TO save;\n>> * -- rd_newRelfilelocatorSubid is now forgotten\n>> *\n>>\n>> The root of this problem is that we'd need some sort of \"history\" for\n>> the field, so that when a subxact aborts, we can restore the previous\n>> value. But we obviously don't have that, and I doubt we want to add that\n>> to relcache - for example, it'd either need to impose some limit on the\n>> history (and thus a failure when we reach the limit), or it'd need to\n>> handle histories of arbitrary length.\n>>\n> \n> Yeah, I think that would be really tricky and we may not want to go there.\n> \n>> At this point I don't see a solution for this, which means the best way\n>> forward with the sequence decoding patch seems to be the original\n>> approach, on the decoding side.\n>>\n> \n> One thing that worries me about that approach is that it can suck with\n> the workload that has a lot of DDLs that create XLOG_SMGR_CREATE\n> records. We have previously fixed some such workloads in logical\n> decoding where decoding a transaction containing truncation of a table\n> with a lot of partitions (1000 or more) used to take a very long time.\n> Don't we face performance issues in such scenarios?\n> \n\nI don't think we do, really. We will have to decode the SMGR records and\nadd the relfilenodes to the hash table(s), but I think that affects the\nlookup performance too much. What I think might be a problem is if we\nhave many top-level transactions, especially if those transactions do\nsomething that creates a relfilenode. Because then we'll have to do a\nhash_search for each of them, and that might be measurable even if each\nlookup is O(1). 
And we do the lookup for every sequence change ...\n\n> How do we see this work w.r.t to some sort of global sequences? There\n> is some recent discussion where I have raised a similar point [1].\n> \n> [1] - https://www.postgresql.org/message-id/CAA4eK1JF%3D4_Eoq7FFjHSe98-_ooJ5QWd0s2_pj8gR%2B_dvwKxvA%40mail.gmail.com\n> \n\nI think those are very different things, even though called \"sequences\".\nAFAIK solutions like snowflakeID or UUIDs don't require replication of\nany shared state (that's kinda the whole point), so I don't see why\nwould it need some special support in logical decoding.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 29 Nov 2023 15:41:30 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 11/29/23 15:41, Tomas Vondra wrote:\n> ...\n>>\n>> One thing that worries me about that approach is that it can suck with\n>> the workload that has a lot of DDLs that create XLOG_SMGR_CREATE\n>> records. We have previously fixed some such workloads in logical\n>> decoding where decoding a transaction containing truncation of a table\n>> with a lot of partitions (1000 or more) used to take a very long time.\n>> Don't we face performance issues in such scenarios?\n>>\n> \n> I don't think we do, really. We will have to decode the SMGR records and\n> add the relfilenodes to the hash table(s), but I think that affects the\n> lookup performance too much. What I think might be a problem is if we\n> have many top-level transactions, especially if those transactions do\n> something that creates a relfilenode. Because then we'll have to do a\n> hash_search for each of them, and that might be measurable even if each\n> lookup is O(1). And we do the lookup for every sequence change ...\n> \n\nI did some micro-benchmarking today, trying to identify cases where this\nwould cause unexpected problems, either due to having to maintain all\nthe relfilenodes, or due to having to do hash lookups for every sequence\nchange. But I think it's fine, mostly ...\n\nI did all the following tests with 64 clients. I may try more, but even\nwith this there should be fair number of concurrent transactions, which\ndetermines the number of top-level transactions in reorderbuffer. I'll\ntry with more clients tomorrow, but I don't think it'll change stuff.\n\nThe test is fairly simple - run a particular number of transactions\n(might be 1000 * 64, or more). 
And then measure how long it takes to\ndecode the changes using test_decoding.\n\nNow, the various workloads I tried:\n\n1) \"good case\" - small OLTP transactions, a couple nextval('s') calls\n\n begin;\n insert into t (1);\n select nextval('s');\n insert into t (1);\n commit;\n\nThis is pretty fine, the sequence part of reorderbuffer is really not\nmeasurable, it's like 1% of the total CPU time. Which is expected,\nbecause we only wal-log every 32-nd increment or so.\n\n2) \"good case\" - same as (1) but more nextval calls to always do wal\n\n\n begin;\n insert into t (1);\n select nextval('s') from generate_series(1,40);\n insert into t (1);\n commit;\n\nHere sequences are more measurable, it's like 15% of CPU time, but most\nof that comes to AbortCurrentTransaction() in the non-transactional\nbranch of ReorderBufferQueueSequence. I don't think there's a way around\nthat, and it's entirely unrelated to relfilenodes. The function checking\nif the change is transactional (ReorderBufferSequenceIsTransactional) is\nless than 1% of the profile - and this is the version that always walks\nall top-level transactions.\n\n3) \"bad case\" - small transactions that generate a lot of relfilenodes\n\n select alter_sequence();\n\nwhere the function is defined like this (I did create 1000 sequences\nbefore the test):\n\n CREATE OR REPLACE FUNCTION alter_sequence() RETURNS void AS $$\n DECLARE\n v INT;\n BEGIN\n v := 1 + (random() * 999)::int;\n execute format('alter sequence s%s restart with 1000', v);\n perform nextval('s');\n END;\n $$ LANGUAGE plpgsql;\n\nThis performs terribly, but it's entirely unrelated to sequences.\nCurrent master has exactly the same problem, if transactions do DDL.\nLike this, for example:\n\n CREATE OR REPLACE FUNCTION create_table() RETURNS void AS $$\n DECLARE\n v INT;\n BEGIN\n v := 1 + (random() * 999)::int;\n execute format('create table t%s (a int)', v);\n execute format('drop table t%s', v);\n insert into t values (1);\n END;\n $$ LANGUAGE 
plpgsql;\n\nThis has the same impact on master. The perf report shows this:\n\n --98.06%--pg_logical_slot_get_changes_guts\n |\n --97.88%--LogicalDecodingProcessRecord\n |\n --97.56%--xact_decode\n |\n --97.51%--DecodeCommit\n |\n |--91.92%--SnapBuildCommitTxn\n | |\n | --91.65%--SnapBuildBuildSnapshot\n | |\n | --91.14%--pg_qsort\n\nThe sequence decoding is maybe ~1%. The reason why SnapBuildSnapshot\ntakes so long is because:\n\n-----------------\n Breakpoint 1, SnapBuildBuildSnapshot (builder=0x21f60f8)\n at snapbuild.c:498\n 498 + sizeof(TransactionId) * builder->committed.xcnt\n (gdb) p builder->committed.xcnt\n $4 = 11532\n-----------------\n\nAnd with each iteration it grows by 1. That looks quite weird, possibly\na bug worth fixing, but unrelated to this patch. I can't investigate\nthis more at the moment, not sure when/if I'll get to that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 30 Nov 2023 00:58:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Nov 29, 2023 at 11:45 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n>\n>\n> On 11/27/23 23:06, Peter Smith wrote:\n> > FWIW, here are some more minor review comments for v20231127-3-0001\n> >\n> > ======\n> > .../replication/logical/reorderbuffer.c\n> >\n> > 3.\n> > + * To decide if a sequence change is transactional, we maintain a hash\n> > + * table of relfilenodes created in each (sub)transactions, along with\n> > + * the XID of the (sub)transaction that created the relfilenode. The\n> > + * entries from substransactions are copied to the top-level transaction\n> > + * to make checks cheaper. The hash table gets cleaned up when the\n> > + * transaction completes (commit/abort).\n> >\n> > /substransactions/subtransactions/\n> >\n>\n> Will fix.\n\nFYI - I think this typo still exists in the patch v20231128-0001.\n\n======\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Thu, 30 Nov 2023 12:47:51 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Nov 30, 2023 at 5:28 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> 3) \"bad case\" - small transactions that generate a lot of relfilenodes\n>\n> select alter_sequence();\n>\n> where the function is defined like this (I did create 1000 sequences\n> before the test):\n>\n> CREATE OR REPLACE FUNCTION alter_sequence() RETURNS void AS $$\n> DECLARE\n> v INT;\n> BEGIN\n> v := 1 + (random() * 999)::int;\n> execute format('alter sequence s%s restart with 1000', v);\n> perform nextval('s');\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> This performs terribly, but it's entirely unrelated to sequences.\n> Current master has exactly the same problem, if transactions do DDL.\n> Like this, for example:\n>\n> CREATE OR REPLACE FUNCTION create_table() RETURNS void AS $$\n> DECLARE\n> v INT;\n> BEGIN\n> v := 1 + (random() * 999)::int;\n> execute format('create table t%s (a int)', v);\n> execute format('drop table t%s', v);\n> insert into t values (1);\n> END;\n> $$ LANGUAGE plpgsql;\n>\n> This has the same impact on master. The perf report shows this:\n>\n> --98.06%--pg_logical_slot_get_changes_guts\n> |\n> --97.88%--LogicalDecodingProcessRecord\n> |\n> --97.56%--xact_decode\n> |\n> --97.51%--DecodeCommit\n> |\n> |--91.92%--SnapBuildCommitTxn\n> | |\n> | --91.65%--SnapBuildBuildSnapshot\n> | |\n> | --91.14%--pg_qsort\n>\n> The sequence decoding is maybe ~1%. The reason why SnapBuildSnapshot\n> takes so long is because:\n>\n> -----------------\n> Breakpoint 1, SnapBuildBuildSnapshot (builder=0x21f60f8)\n> at snapbuild.c:498\n> 498 + sizeof(TransactionId) * builder->committed.xcnt\n> (gdb) p builder->committed.xcnt\n> $4 = 11532\n> -----------------\n>\n> And with each iteration it grows by 1.\n>\n\nCan we somehow avoid this either by keeping DDL-related xacts open or\naborting them? Also, will it make any difference to use setval as\ndo_setval() seems to be logging each time?\n\nIf possible, can you share the scripts? 
Kuroda-San has access to the\nperformance machine, he may be able to try it as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 30 Nov 2023 17:26:22 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Dear Tomas,\r\n\r\n> I did some micro-benchmarking today, trying to identify cases where this\r\n> would cause unexpected problems, either due to having to maintain all\r\n> the relfilenodes, or due to having to do hash lookups for every sequence\r\n> change. But I think it's fine, mostly ...\r\n>\r\n\r\nI did also performance tests (especially case 3). First of all, there are some\r\nvariants from yours.\r\n\r\n1. patch 0002 was reverted because it has an issue. So this test checks whether\r\n refactoring around ReorderBufferSequenceIsTransactional seems really needed.\r\n2. per comments from Amit, I also measured the abort case. In this case, the\r\n alter_sequence() is called but the transaction is aborted.\r\n3. I measured with changing number of clients {8, 16, 32, 64, 128}. In any cases,\r\n clients executed 1000 transactions. The performance machine has 128 core so that\r\n result for 128 clients might be saturated.\r\n4. a short sleep (0.1s) was added in alter_sequence(), especially between\r\n \"alter sequence\" and nextval(). Because while testing, I found that the\r\n transaction is too short to execute in parallel. I think it is reasonable\r\n because ReorderBufferSequenceIsTransactional() might be worse when the parallelism\r\n is increased.\r\n\r\nI attached one backend process via perf and executed pg_slot_logical_get_changes().\r\nAttached txt file shows which function occupied CPU time, especially from\r\npg_logical_slot_get_changes_guts() and ReorderBufferSequenceIsTransactional().\r\nHere are my observations about them.\r\n\r\n* In case of commit, as you said, SnapBuildCommitTxn() seems dominant for 8-64\r\n clients case.\r\n* For (commit, 128 clients) case, however, ReorderBufferRestoreChanges() waste\r\n many times. I think this is because changes exceed logical_decoding_work_mem,\r\n so we do not have to analyze anymore.\r\n* In case of abort, CPU time used by ReorderBufferSequenceIsTransactional() is linearly\r\n longer. 
This means that we need to think some solution to avoid the overhead by\r\n ReorderBufferSequenceIsTransactional().\r\n\r\n```\r\n8 clients 3.73% occupied time\r\n16 7.26%\r\n32 15.82%\r\n64 29.14%\r\n128 46.27%\r\n```\r\n\r\n* In case of abort, I also checked CPU time used by ReorderBufferAddRelFileLocator(), but\r\n it seems not so depends on the number of clients.\r\n\r\n```\r\n8 clients 3.66% occupied time\r\n16 6.94%\r\n32 4.65%\r\n64 5.39%\r\n128 3.06%\r\n```\r\n\r\nAs next step, I've planned to run the case which uses setval() function, because it\r\ngenerates more WALs than normal nextval();\r\nHow do you think?\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 1 Dec 2023 11:08:16 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 11/30/23 12:56, Amit Kapila wrote:\n> On Thu, Nov 30, 2023 at 5:28 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> 3) \"bad case\" - small transactions that generate a lot of relfilenodes\n>>\n>> select alter_sequence();\n>>\n>> where the function is defined like this (I did create 1000 sequences\n>> before the test):\n>>\n>> CREATE OR REPLACE FUNCTION alter_sequence() RETURNS void AS $$\n>> DECLARE\n>> v INT;\n>> BEGIN\n>> v := 1 + (random() * 999)::int;\n>> execute format('alter sequence s%s restart with 1000', v);\n>> perform nextval('s');\n>> END;\n>> $$ LANGUAGE plpgsql;\n>>\n>> This performs terribly, but it's entirely unrelated to sequences.\n>> Current master has exactly the same problem, if transactions do DDL.\n>> Like this, for example:\n>>\n>> CREATE OR REPLACE FUNCTION create_table() RETURNS void AS $$\n>> DECLARE\n>> v INT;\n>> BEGIN\n>> v := 1 + (random() * 999)::int;\n>> execute format('create table t%s (a int)', v);\n>> execute format('drop table t%s', v);\n>> insert into t values (1);\n>> END;\n>> $$ LANGUAGE plpgsql;\n>>\n>> This has the same impact on master. The perf report shows this:\n>>\n>> --98.06%--pg_logical_slot_get_changes_guts\n>> |\n>> --97.88%--LogicalDecodingProcessRecord\n>> |\n>> --97.56%--xact_decode\n>> |\n>> --97.51%--DecodeCommit\n>> |\n>> |--91.92%--SnapBuildCommitTxn\n>> | |\n>> | --91.65%--SnapBuildBuildSnapshot\n>> | |\n>> | --91.14%--pg_qsort\n>>\n>> The sequence decoding is maybe ~1%. The reason why SnapBuildSnapshot\n>> takes so long is because:\n>>\n>> -----------------\n>> Breakpoint 1, SnapBuildBuildSnapshot (builder=0x21f60f8)\n>> at snapbuild.c:498\n>> 498 + sizeof(TransactionId) * builder->committed.xcnt\n>> (gdb) p builder->committed.xcnt\n>> $4 = 11532\n>> -----------------\n>>\n>> And with each iteration it grows by 1.\n>>\n> \n> Can we somehow avoid this either by keeping DDL-related xacts open or\n> aborting them?\nI\nI'm not sure why the snapshot builder does this, i.e. 
why we end up\naccumulating that many xids, and I didn't have time to look closer. So I\ndon't know if this would be a solution or not.\n\n> Also, will it make any difference to use setval as\n> do_setval() seems to be logging each time?\n> \n\nI think that's pretty much what case (2) does, as it calls nextval()\nenough time for each transaction do generate WAL. But I don't think this\nis a very sensible benchmark - it's an extreme case, but practical cases\nare far closer to case (1) because sequences are intermixed with other\nactivity. No one really does just nextval() calls.\n\n> If possible, can you share the scripts? Kuroda-San has access to the\n> performance machine, he may be able to try it as well.\n> \n\nSure, attached. But it's a very primitive script, nothing fancy.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sat, 2 Dec 2023 01:10:57 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 12/1/23 12:08, Hayato Kuroda (Fujitsu) wrote:\n> Dear Tomas,\n> \n>> I did some micro-benchmarking today, trying to identify cases where this\n>> would cause unexpected problems, either due to having to maintain all\n>> the relfilenodes, or due to having to do hash lookups for every sequence\n>> change. But I think it's fine, mostly ...\n>>\n> \n> I did also performance tests (especially case 3). First of all, there are some\n> variants from yours.\n> \n> 1. patch 0002 was reverted because it has an issue. So this test checks whether\n> refactoring around ReorderBufferSequenceIsTransactional seems really needed.\n\nFWIW I also did the benchmarks without the 0002 patch, for the same\nreason. I forgot to mention that.\n\n> 2. per comments from Amit, I also measured the abort case. In this case, the\n> alter_sequence() is called but the transaction is aborted.\n> 3. I measured with changing number of clients {8, 16, 32, 64, 128}. In any cases,\n> clients executed 1000 transactions. The performance machine has 128 core so that\n> result for 128 clients might be saturated.\n> 4. a short sleep (0.1s) was added in alter_sequence(), especially between\n> \"alter sequence\" and nextval(). Because while testing, I found that the\n> transaction is too short to execute in parallel. I think it is reasonable\n> because ReorderBufferSequenceIsTransactional() might be worse when the parallelism\n> is increased.\n> \n> I attached one backend process via perf and executed pg_slot_logical_get_changes().\n> Attached txt file shows which function occupied CPU time, especially from\n> pg_logical_slot_get_changes_guts() and ReorderBufferSequenceIsTransactional().\n> Here are my observations about them.\n> \n> * In case of commit, as you said, SnapBuildCommitTxn() seems dominant for 8-64\n> clients case.\n> * For (commit, 128 clients) case, however, ReorderBufferRestoreChanges() waste\n> many times. 
I think this is because changes exceed logical_decoding_work_mem,\n> so we do not have to analyze anymore.\n> * In case of abort, CPU time used by ReorderBufferSequenceIsTransactional() is linearly\n> longer. This means that we need to think some solution to avoid the overhead by\n> ReorderBufferSequenceIsTransactional().\n> \n> ```\n> 8 clients 3.73% occupied time\n> 16 7.26%\n> 32 15.82%\n> 64 29.14%\n> 128 46.27%\n> ```\n\nInteresting, so what exactly does the transaction do? Anyway, I don't\nthink this is very surprising - I believe it behaves like this because\nof having to search in many hash tables (one in each toplevel xact). And\nI think the solution I explained before (maintaining a single toplevel\nhash, instead of many per-top-level hashes).\n\nFWIW I find this case interesting, but not very practical, because no\npractical workload has that many aborts.\n\n> \n> * In case of abort, I also checked CPU time used by ReorderBufferAddRelFileLocator(), but\n> it seems not so depends on the number of clients.\n> \n> ```\n> 8 clients 3.66% occupied time\n> 16 6.94%\n> 32 4.65%\n> 64 5.39%\n> 128 3.06%\n> ```\n> \n> As next step, I've planned to run the case which uses setval() function, because it\n> generates more WALs than normal nextval();\n> How do you think?\n> \n\nSure, although I don't think it's much different from the test selecting\n40 values from the sequence (in each transaction). That generates about\nthe same amount of WAL.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 2 Dec 2023 01:23:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Dear Tomas,\r\n\r\n> > I did also performance tests (especially case 3). First of all, there are some\r\n> > variants from yours.\r\n> >\r\n> > 1. patch 0002 was reverted because it has an issue. So this test checks whether\r\n> > refactoring around ReorderBufferSequenceIsTransactional seems really\r\n> needed.\r\n> \r\n> FWIW I also did the benchmarks without the 0002 patch, for the same\r\n> reason. I forgot to mention that.\r\n\r\nOh, good news. So your bench markings are quite meaningful.\r\n\r\n> \r\n> Interesting, so what exactly does the transaction do?\r\n\r\nIt is quite simple - PSA the script file. It was executed with 64 multiplicity.\r\nThe definition of alter_sequence() is same as you said.\r\n(I did use normal bash script for running them, but your approach may be smarter)\r\n\r\n> Anyway, I don't\r\n> think this is very surprising - I believe it behaves like this because\r\n> of having to search in many hash tables (one in each toplevel xact). And\r\n> I think the solution I explained before (maintaining a single toplevel\r\n> hash, instead of many per-top-level hashes).\r\n\r\nAgreed. And I can benchmark again for new ones, maybe when we decide new\r\napproach.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Sun, 3 Dec 2023 12:55:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/3/23 13:55, Hayato Kuroda (Fujitsu) wrote:\n> Dear Tomas,\n> \n>>> I did also performance tests (especially case 3). First of all, there are some\n>>> variants from yours.\n>>>\n>>> 1. patch 0002 was reverted because it has an issue. So this test checks whether\n>>> refactoring around ReorderBufferSequenceIsTransactional seems really\n>> needed.\n>>\n>> FWIW I also did the benchmarks without the 0002 patch, for the same\n>> reason. I forgot to mention that.\n> \n> Oh, good news. So your bench markings are quite meaningful.\n> \n>>\n>> Interesting, so what exactly does the transaction do?\n> \n> It is quite simple - PSA the script file. It was executed with 64 multiplicity.\n> The definition of alter_sequence() is same as you said.\n> (I did use normal bash script for running them, but your approach may be smarter)\n> \n>> Anyway, I don't\n>> think this is very surprising - I believe it behaves like this because\n>> of having to search in many hash tables (one in each toplevel xact). And\n>> I think the solution I explained before (maintaining a single toplevel\n>> hash, instead of many per-top-level hashes).\n> \n> Agreed. And I can benchmark again for new ones, maybe when we decide new\n> approach.\n> \n\nThanks for the script. Are you also measuring the time it takes to\ndecode this using test_decoding?\n\nFWIW I did more comprehensive suite of tests over the weekend, with a\ncouple more variations. I'm attaching the updated scripts, running it\nshould be as simple as\n\n ./run.sh BRANCH TRANSACTIONS RUNS\n\nso perhaps\n\n ./run.sh master 1000 3\n\nto do 3 runs with 1000 transactions per client. And it'll run a bunch of\ncombinations hard-coded in the script, and write the timings into a CSV\nfile (with \"master\" in each row).\n\nI did this on two machines (i5 with 4 cores, xeon with 16/32 cores). 
I\ndid this with current master, the basic patch (without the 0002 part),\nand then with the optimized approach (single global hash table, see the\n0004 part). That's what master / patched / optimized in the results is.\n\nInterestingly enough, the i5 handled this much faster, it seems to be\nbetter in single-core tasks. The xeon is still running, so the results\nfor \"optimized\" only have one run (out of 3), but shouldn't change much.\n\nAttached is also a table summarizing this, and visualizing the timing\nchange (vs. master) in the last couple columns. Green is \"faster\" than\nmaster (but we don't really expect that), and \"red\" means slower than\nmaster (the more red, the slower).\n\nThere results are grouped by script (see the attached .tgz), with either\n32 or 96 clients (which does affect the timing, but not between master\nand patch). Some executions have no pg_sleep() calls, some have 0.001\nwait (but that doesn't seem to make much difference).\n\nOverall, I'd group the results into about three groups:\n\n1) good cases [nextval, nextval-40, nextval-abort]\n\nThese are cases that slow down a bit, but the slowdown is mostly within\nreasonable bounds (we're making the decoding to do more stuff, so it'd\nbe a bit silly to require that extra work to make no impact). And I do\nthink this is reasonable, because this is pretty much an extreme / worst\ncase behavior. People don't really do just nextval() calls, without\ndoing anything else. Not to mention doing aborts for 100% transactions.\n\nSo in practice this is going to be within noise (and in those cases the\nresults even show speedup, which seems a bit surprising). It's somewhat\ndependent on CPU too - on xeon there's hardly any regression.\n\n\n2) nextval-40-abort\n\nHere the slowdown is clear, but I'd argue it generally falls in the same\ngroup as (1). 
Yes, I'd be happier if it didn't behave like this, but if\nsomeone can show me a practical workload affected by this ...\n\n\n3) irrelevant cases [all the alters taking insane amounts of time]\n\nI absolutely refuse to care about these extreme cases where decoding\n100k transactions takes 5-10 minutes (on i5), or up to 30 minutes (on\nxeon). If this was a problem for some practical workload, we'd have\nalready heard about it I guess. And even if there was such workload, it\nwouldn't be up to this patch to fix that. There's clearly something\nmisbehaving in the snapshot builder.\n\n\nI was hopeful the global hash table would be an improvement, but that\ndoesn't seem to be the case. I haven't done much profiling yet, but I'd\nguess most of the overhead is due to ReorderBufferQueueSequence()\nstarting and aborting a transaction in the non-transactinal case. Which\nis unfortunate, but I don't know if there's a way to optimize that.\n\nSome time ago I floated the idea of maybe \"queuing\" the sequence changes\nand only replay them on the next commit, somehow. But we did ran into\nproblems with which snapshot to use, that I didn't know how to solve.\nMaybe we should try again. The idea is we'd queue the non-transactional\nchanges somewhere (can't be in the transaction, because we must keep\nthem even if it aborts), and then \"inject\" them into the next commit.\nThat'd mean we wouldn't do the separate start/abort for each change.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Sun, 3 Dec 2023 18:52:12 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/3/23 18:52, Tomas Vondra wrote:\n> ...\n> \n> Some time ago I floated the idea of maybe \"queuing\" the sequence changes\n> and only replay them on the next commit, somehow. But we did ran into\n> problems with which snapshot to use, that I didn't know how to solve.\n> Maybe we should try again. The idea is we'd queue the non-transactional\n> changes somewhere (can't be in the transaction, because we must keep\n> them even if it aborts), and then \"inject\" them into the next commit.\n> That'd mean we wouldn't do the separate start/abort for each change.\n> \n\nAnother idea is that maybe we could somehow inform ReorderBuffer whether\nthe output plugin even is interested in sequences. That'd help with\ncases where we don't even want/need to replicate sequences, e.g. because\nthe publication does not specify (publish=sequence).\n\nWhat happens now in that case is we call ReorderBufferQueueSequence(),\nit does the whole dance with starting/aborting the transaction, calls\nrb->sequence() which just does \"meh\" and doesn't do anything. Maybe we\ncould just short-circuit this by asking the output plugin somehow.\n\nIn an extreme case the plugin may not even specify the sequence\ncallbacks, and we're still doing all of this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 3 Dec 2023 19:26:16 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sun, Dec 3, 2023 at 11:22 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Thanks for the script. Are you also measuring the time it takes to\n> decode this using test_decoding?\n>\n> FWIW I did a more comprehensive suite of tests over the weekend, with a\n> couple more variations. I'm attaching the updated scripts, running it\n> should be as simple as\n>\n> ./run.sh BRANCH TRANSACTIONS RUNS\n>\n> so perhaps\n>\n> ./run.sh master 1000 3\n>\n> to do 3 runs with 1000 transactions per client. And it'll run a bunch of\n> combinations hard-coded in the script, and write the timings into a CSV\n> file (with \"master\" in each row).\n>\n> I did this on two machines (i5 with 4 cores, xeon with 16/32 cores). I\n> did this with current master, the basic patch (without the 0002 part),\n> and then with the optimized approach (single global hash table, see the\n> 0004 part). That's what master / patched / optimized in the results is.\n>\n> Interestingly enough, the i5 handled this much faster, it seems to be\n> better in single-core tasks. The xeon is still running, so the results\n> for \"optimized\" only have one run (out of 3), but shouldn't change much.\n>\n> Attached is also a table summarizing this, and visualizing the timing\n> change (vs. master) in the last couple columns. Green is \"faster\" than\n> master (but we don't really expect that), and \"red\" means slower than\n> master (the more red, the slower).\n>\n> The results are grouped by script (see the attached .tgz), with either\n> 32 or 96 clients (which does affect the timing, but not between master\n> and patch). Some executions have no pg_sleep() calls, some have 0.001\n> wait (but that doesn't seem to make much difference).\n>\n> Overall, I'd group the results into about three groups:\n>\n> 1) good cases [nextval, nextval-40, nextval-abort]\n>\n> These are cases that slow down a bit, but the slowdown is mostly within\n> reasonable bounds (we're making the decoding do more stuff, so it'd\n> be a bit silly to require that extra work to make no impact). And I do\n> think this is reasonable, because this is pretty much an extreme / worst\n> case behavior. People don't really do just nextval() calls, without\n> doing anything else. Not to mention doing aborts for 100% transactions.\n>\n> So in practice this is going to be within noise (and in those cases the\n> results even show speedup, which seems a bit surprising). It's somewhat\n> dependent on CPU too - on xeon there's hardly any regression.\n>\n>\n> 2) nextval-40-abort\n>\n> Here the slowdown is clear, but I'd argue it generally falls in the same\n> group as (1). Yes, I'd be happier if it didn't behave like this, but if\n> someone can show me a practical workload affected by this ...\n>\n>\n> 3) irrelevant cases [all the alters taking insane amounts of time]\n>\n> I absolutely refuse to care about these extreme cases where decoding\n> 100k transactions takes 5-10 minutes (on i5), or up to 30 minutes (on\n> xeon). If this was a problem for some practical workload, we'd have\n> already heard about it I guess. And even if there was such workload, it\n> wouldn't be up to this patch to fix that. There's clearly something\n> misbehaving in the snapshot builder.\n>\n>\n> I was hopeful the global hash table would be an improvement, but that\n> doesn't seem to be the case. I haven't done much profiling yet, but I'd\n> guess most of the overhead is due to ReorderBufferQueueSequence()\n> starting and aborting a transaction in the non-transactional case. Which\n> is unfortunate, but I don't know if there's a way to optimize that.\n>\n\nBefore discussing the alternative ideas you shared, let me try to\nclarify my understanding so that we are on the same page. I see two\nobservations based on the testing and discussion we had (a) for\nnon-transactional cases, the overhead observed is mainly due to\nstarting/aborting a transaction for each change; (b) for transactional\ncases, we see overhead due to traversing all the top-level txns and\nchecking the hash table for each one to find whether a change is\ntransactional.\n\nAm I missing something?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 5 Dec 2023 17:47:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/5/23 13:17, Amit Kapila wrote:\n> ...\n>> I was hopeful the global hash table would be an improvement, but that\n>> doesn't seem to be the case. I haven't done much profiling yet, but I'd\n>> guess most of the overhead is due to ReorderBufferQueueSequence()\n>> starting and aborting a transaction in the non-transactional case. Which\n>> is unfortunate, but I don't know if there's a way to optimize that.\n>>\n> \n> Before discussing the alternative ideas you shared, let me try to\n> clarify my understanding so that we are on the same page. I see two\n> observations based on the testing and discussion we had (a) for\n> non-transactional cases, the overhead observed is mainly due to\n> starting/aborting a transaction for each change;\n\nYes, I believe that's true. See the attached profiles for nextval.sql\nand nextval-40.sql from master and optimized build (with the global\nhash), and also a perf-diff. I only include the top 1000 lines for each\nprofile, that should be enough.\n\nmaster - current master without patches applied\noptimized - master + sequence decoding with global hash table\n\nFor nextval, there's almost no difference in the profile. Decoding the\nother changes (inserts) is the dominant part, as we only log sequences\nevery 32 increments.\n\nFor nextval-40, the main increase is likely due to this part\n\n |--11.09%--seq_decode\n | |\n | |--9.25%--ReorderBufferQueueSequence\n | | |\n | | |--3.56%--AbortCurrentTransaction\n | | | |\n | | | --3.53%--AbortSubTransaction\n | | | |\n | | | |--0.95%--AtSubAbort_Portals\n | | | | |\n | | | | --0.83%--hash_seq_search\n | | | |\n | | | --0.83%--ResourceOwnerReleaseInternal\n | | |\n | | |--2.06%--BeginInternalSubTransaction\n | | | |\n | | | --1.10%--CommitTransactionCommand\n | | | |\n | | | --1.07%--StartSubTransaction\n | | |\n | | |--1.28%--CleanupSubTransaction\n | | | |\n | | | --0.64%--AtSubCleanup_Portals\n | | | |\n | | | --0.55%--hash_seq_search\n | | |\n | | --0.67%--RelidByRelfilenumber\n\nSo yeah, that's the transaction stuff in ReorderBufferQueueSequence.\n\nThere's also a perf-diff, comparing individual functions.\n\n> (b) for transactional\n> cases, we see overhead due to traversing all the top-level txns and\n> check the hash table for each one to find whether change is\n> transactional.\n> \n\nNot really, no. As I explained in my preceding e-mail, this check makes\nalmost no difference - I did expect it to matter, but it doesn't. And I\nwas a bit disappointed the global hash table didn't move the needle.\n\nMost of the time is spent in\n\n 78.81% 0.00% postgres postgres [.] DecodeCommit (inlined)\n |\n ---DecodeCommit (inlined)\n |\n |--72.65%--SnapBuildCommitTxn\n | |\n | --72.61%--SnapBuildBuildSnapshot\n | |\n | --72.09%--pg_qsort\n | |\n | |--66.24%--pg_qsort\n | | |\n\nAnd there's almost no difference between master and build with sequence\ndecoding - see the attached diff-alter-sequence.perf, comparing the two\nbranches (perf diff -c delta-abs).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Tue, 5 Dec 2023 17:53:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sun, Dec 3, 2023 at 11:22 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n\n> Some time ago I floated the idea of maybe \"queuing\" the sequence changes\n> and only replay them on the next commit, somehow. But we did ran into\n> problems with which snapshot to use, that I didn't know how to solve.\n> Maybe we should try again. The idea is we'd queue the non-transactional\n> changes somewhere (can't be in the transaction, because we must keep\n> them even if it aborts), and then \"inject\" them into the next commit.\n> That'd mean we wouldn't do the separate start/abort for each change.\n\nWhy can't we use the same concept of\nSnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\nnon-transactional changes (have some base snapshot before the first\nchange), and whenever there is any catalog change, queue new snapshot\nchange also in the queue of the non-transactional sequence change so\nthat while sending it to downstream whenever it is necessary we will\nchange the historic snapshot?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 11:12:35 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Dec 5, 2023 at 10:23 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/5/23 13:17, Amit Kapila wrote:\n>\n> > (b) for transactional\n> > cases, we see overhead due to traversing all the top-level txns and\n> > check the hash table for each one to find whether change is\n> > transactional.\n> >\n>\n> Not really, no. As I explained in my preceding e-mail, this check makes\n> almost no difference - I did expect it to matter, but it doesn't. And I\n> was a bit disappointed the global hash table didn't move the needle.\n>\n> Most of the time is spent in\n>\n> 78.81% 0.00% postgres postgres [.] DecodeCommit (inlined)\n> |\n> ---DecodeCommit (inlined)\n> |\n> |--72.65%--SnapBuildCommitTxn\n> | |\n> | --72.61%--SnapBuildBuildSnapshot\n> | |\n> | --72.09%--pg_qsort\n> | |\n> | |--66.24%--pg_qsort\n> | | |\n>\n> And there's almost no difference between master and build with sequence\n> decoding - see the attached diff-alter-sequence.perf, comparing the two\n> branches (perf diff -c delta-abs).\n>\n\nI think in this the commit time predominates which hides the overhead.\nWe didn't investigate in detail if that can be improved but if we see\na similar case of abort [1], it shows the overhead of\nReorderBufferSequenceIsTransactional(). I understand that aborts won't\nbe frequent and it is sort of unrealistic test but still helps to show\nthat there is overhead in ReorderBufferSequenceIsTransactional(). Now,\nI am not sure if we can ignore that case because theoretically, the\noverhead can increase based on the number of top-level transactions.\n\n[1]: https://www.postgresql.org/message-id/TY3PR01MB9889D457278B254CA87D1325F581A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Dec 2023 14:26:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 11:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Dec 3, 2023 at 11:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n\nI was also wondering what happens if the sequence changes are\ntransactional but somehow the snap builder state changes to\nSNAPBUILD_FULL_SNAPSHOT in between processing of the smgr_decode() and\nthe seq_decode(), which means the RelFileLocator will not be added to the\nhash table and during the seq_decode() we will consider the change as\nnon-transactional. I haven't fully analyzed what the real\nproblem in this case is, but have we considered this case? What happens if\nthe transaction having both ALTER SEQUENCE and nextval() gets aborted,\nbut the nextval() has been considered as non-transactional because the\nsmgr_decode() changes were not processed, because the snap builder state\nwas not yet SNAPBUILD_FULL_SNAPSHOT?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 14:35:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 11:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sun, Dec 3, 2023 at 11:22 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n>\n> > Some time ago I floated the idea of maybe \"queuing\" the sequence changes\n> > and only replay them on the next commit, somehow. But we did ran into\n> > problems with which snapshot to use, that I didn't know how to solve.\n> > Maybe we should try again. The idea is we'd queue the non-transactional\n> > changes somewhere (can't be in the transaction, because we must keep\n> > them even if it aborts), and then \"inject\" them into the next commit.\n> > That'd mean we wouldn't do the separate start/abort for each change.\n>\n> Why can't we use the same concept of\n> SnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\n> non-transactional changes (have some base snapshot before the first\n> change), and whenever there is any catalog change, queue new snapshot\n> change also in the queue of the non-transactional sequence change so\n> that while sending it to downstream whenever it is necessary we will\n> change the historic snapshot?\n>\n\nOh, do you mean maintain different historic snapshots and then switch\nbased on the change we are processing? I guess the other thing we need\nto consider is the order of processing the changes if we maintain\nseparate queues that need to be processed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Dec 2023 15:35:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sun, Dec 3, 2023 at 11:56 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/3/23 18:52, Tomas Vondra wrote:\n> > ...\n> >\n>\n> Another idea is that maybe we could somehow inform ReorderBuffer whether\n> the output plugin even is interested in sequences. That'd help with\n> cases where we don't even want/need to replicate sequences, e.g. because\n> the publication does not specify (publish=sequence).\n>\n> What happens now in that case is we call ReorderBufferQueueSequence(),\n> it does the whole dance with starting/aborting the transaction, calls\n> rb->sequence() which just does \"meh\" and doesn't do anything. Maybe we\n> could just short-circuit this by asking the output plugin somehow.\n>\n> In an extreme case the plugin may not even specify the sequence\n> callbacks, and we're still doing all of this.\n>\n\nWe could explore this but I guess it won't solve the problem we are\nfacing in cases where all sequences are published and plugin has\nspecified the sequence callbacks. I think it would add some overhead\nof this check in positive cases where we decide to anyway do send the\nchanges.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 6 Dec 2023 15:49:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Why can't we use the same concept of\n> > SnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\n> > non-transactional changes (have some base snapshot before the first\n> > change), and whenever there is any catalog change, queue new snapshot\n> > change also in the queue of the non-transactional sequence change so\n> > that while sending it to downstream whenever it is necessary we will\n> > change the historic snapshot?\n>\n> Oh, do you mean maintain different historic snapshots and then switch\n> based on the change we are processing? I guess the other thing we need\n> to consider is the order of processing the changes if we maintain\n> separate queues that need to be processed.\n\nI mean we will not specifically maintain the historic changes, but if\nthere is any catalog change where we are pushing the snapshot to all\nthe transaction's change queue, at the same time we will push this\nsnapshot in the non-transactional sequence queue as well. I am not\nsure what the problem with the ordering is, because we will be\nqueueing all non-transactional sequence changes in a separate queue in\nthe order they arrive and as soon as we process the next commit we\nwill process all the non-transactional changes at that time. Do you\nsee an issue with that?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 6 Dec 2023 16:35:27 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
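Dilip's proposal above — keep non-transactional sequence changes in their own queue, push a "new snapshot" marker into that queue whenever SnapBuildDistributeNewCatalogSnapshot() distributes one to the per-transaction queues, and drain everything at the next commit — can be sketched as a toy model. This is an illustrative Python simulation under stated assumptions (queue contents, marker shape, and payload strings are all invented, not PostgreSQL code):

```python
# Toy model: non-transactional sequence changes queued with interleaved
# snapshot-change markers, drained in arrival order at the next commit.

seq_queue = []   # entries: ("change", payload) or ("snapshot", snap_id)
sent = []        # what was streamed downstream, with the snapshot used

def queue_nontxn_change(payload):
    seq_queue.append(("change", payload))

def distribute_new_snapshot(snap_id):
    # Analogue of SnapBuildDistributeNewCatalogSnapshot(): also push the
    # new snapshot into the sequence queue, preserving ordering.
    seq_queue.append(("snapshot", snap_id))

def on_commit():
    # Drain in arrival order, switching the historic snapshot on markers.
    current_snap = "base"
    for kind, val in seq_queue:
        if kind == "snapshot":
            current_snap = val
        else:
            sent.append((val, current_snap))
    seq_queue.clear()

queue_nontxn_change("seq1: log_cnt=32")
distribute_new_snapshot("snap-after-ddl")   # a catalog-changing txn committed
queue_nontxn_change("seq1: log_cnt=64")
on_commit()
print(sent)
# [('seq1: log_cnt=32', 'base'), ('seq1: log_cnt=64', 'snap-after-ddl')]
```

The change queued before the catalog change is replayed under the older snapshot, the later one under the newer snapshot, which is the ordering property the thread is debating.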
{
"msg_contents": "On 12/6/23 10:05, Dilip Kumar wrote:\n> On Wed, Dec 6, 2023 at 11:12 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Sun, Dec 3, 2023 at 11:22 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n>>>\n> \n> I was also wondering what happens if the sequence changes are\n> transactional but somehow the snap builder state changes to\n> SNAPBUILD_FULL_SNAPSHOT in between processing of the smgr_decode() and\n> the seq_decode() which means RelFileLocator will not be added to the\n> hash table and during the seq_decode() we will consider the change as\n> non-transactional. I haven't fully analyzed that what is the real\n> problem in this case but have we considered this case? what happens if\n> the transaction having both ALTER SEQUENCE and nextval() gets aborted\n> but the nextval() has been considered as non-transactional because\n> smgr_decode() changes were not processed because snap builder state\n> was not yet SNAPBUILD_FULL_SNAPSHOT.\n> \n\nYes, if something like this happens, that'd be a problem:\n\n1) decoding starts, with\n\n SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT\n\n2) transaction that creates a new relfilenode gets decoded, but we skip\n it because we don't have the correct snapshot\n\n3) snapshot changes to SNAPBUILD_FULL_SNAPSHOT\n\n4) we decode a sequence change from nextval() for the sequence\n\nThis would lead to us attempting to apply a sequence change for a\nrelfilenode that's not visible yet (and may even get aborted).\n\nBut can this even happen? Can we start decoding in the middle of a\ntransaction? How come this wouldn't affect e.g. XLOG_HEAP2_NEW_CID,\nwhich is also skipped until SNAPBUILD_FULL_SNAPSHOT? Or logical\nmessages, where we also call the output plugin in non-transactional cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Dec 2023 14:39:42 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
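The hazard in steps 1-4 above boils down to the relfilenode-tracking hash missing an entry because the smgr record was skipped while the snapshot builder was still catching up. A minimal toy reproduction (Python, illustrative only — the real logic lives in C in the patch's reorderbuffer/snapbuild code, and the state names here are simplified assumptions):

```python
# Toy model: a sequence change is classified as "transactional" iff its
# relfilenode is found in a hash of relfilenodes created by in-progress
# transactions. Skipping the smgr record breaks that classification.

SNAPBUILD_START, SNAPBUILD_FULL_SNAPSHOT = 0, 2

created_relfilenodes = set()   # stands in for the tracking hash table

def smgr_decode(state, relfilenode):
    # Before SNAPBUILD_FULL_SNAPSHOT the record is skipped entirely,
    # so the relfilenode is never remembered.
    if state < SNAPBUILD_FULL_SNAPSHOT:
        return
    created_relfilenodes.add(relfilenode)

def seq_change_is_transactional(relfilenode):
    return relfilenode in created_relfilenodes

# ALTER SEQUENCE creates relfilenode 123 while the builder is catching up:
smgr_decode(SNAPBUILD_START, 123)
# ... builder reaches FULL_SNAPSHOT, then nextval() on the same sequence:
print(seq_change_is_transactional(123))   # False -> wrongly non-transactional

# Had the smgr record been decoded after the state change, it would work:
smgr_decode(SNAPBUILD_FULL_SNAPSHOT, 456)
print(seq_change_is_transactional(456))   # True
```

The misclassified change would then be applied outside the (possibly aborted) creating transaction, which is exactly the scenario being questioned.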
{
"msg_contents": "On 12/6/23 12:05, Dilip Kumar wrote:\n> On Wed, Dec 6, 2023 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>>> Why can't we use the same concept of\n>>> SnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\n>>> non-transactional changes (have some base snapshot before the first\n>>> change), and whenever there is any catalog change, queue new snapshot\n>>> change also in the queue of the non-transactional sequence change so\n>>> that while sending it to downstream whenever it is necessary we will\n>>> change the historic snapshot?\n>>>\n>>\n>> Oh, do you mean maintain different historic snapshots and then switch\n>> based on the change we are processing? I guess the other thing we need\n>> to consider is the order of processing the changes if we maintain\n>> separate queues that need to be processed.\n> \n> I mean we will not specifically maintain the historic changes, but if\n> there is any catalog change where we are pushing the snapshot to all\n> the transaction's change queue, at the same time we will push this\n> snapshot in the non-transactional sequence queue as well. I am not\n> sure what is the problem with the ordering? because we will be\n> queueing all non-transactional sequence changes in a separate queue in\n> the order they arrive and as soon as we process the next commit we\n> will process all the non-transactional changes at that time. Do you\n> see issue with that?\n> \n\nIsn't this (in principle) the idea of queuing the non-transactional\nchanges and then applying them on the next commit? Yes, I didn't get\nvery far with that, but I got stuck exactly on tracking which snapshot\nto use, so if there's a way to do that, that'd fix my issue.\n\nAlso, would this mean we don't need to track the relfilenodes, if we're\nable to query the catalog? Would we be able to check if the relfilenode\nwas created by the current xact?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Dec 2023 14:47:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/6/23 11:19, Amit Kapila wrote:\n> On Sun, Dec 3, 2023 at 11:56 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/3/23 18:52, Tomas Vondra wrote:\n>>> ...\n>>>\n>>\n>> Another idea is that maybe we could somehow inform ReorderBuffer whether\n>> the output plugin even is interested in sequences. That'd help with\n>> cases where we don't even want/need to replicate sequences, e.g. because\n>> the publication does not specify (publish=sequence).\n>>\n>> What happens now in that case is we call ReorderBufferQueueSequence(),\n>> it does the whole dance with starting/aborting the transaction, calls\n>> rb->sequence() which just does \"meh\" and doesn't do anything. Maybe we\n>> could just short-circuit this by asking the output plugin somehow.\n>>\n>> In an extreme case the plugin may not even specify the sequence\n>> callbacks, and we're still doing all of this.\n>>\n> \n> We could explore this but I guess it won't solve the problem we are\n> facing in cases where all sequences are published and plugin has\n> specified the sequence callbacks. I think it would add some overhead\n> of this check in positive cases where we decide to anyway do send the\n> changes.\n\nWell, the idea is the check would be very simple (essentially just a\nboolean flag somewhere), so not really measurable.\n\nAnd if the plugin requests decoding sequences, I guess it's natural it\nmay have a bit of overhead. It needs to do more things, after all. It\nneeds to be acceptable, ofc.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Dec 2023 14:50:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/6/23 09:56, Amit Kapila wrote:\n> On Tue, Dec 5, 2023 at 10:23 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/5/23 13:17, Amit Kapila wrote:\n>>\n>>> (b) for transactional\n>>> cases, we see overhead due to traversing all the top-level txns and\n>>> check the hash table for each one to find whether change is\n>>> transactional.\n>>>\n>>\n>> Not really, no. As I explained in my preceding e-mail, this check makes\n>> almost no difference - I did expect it to matter, but it doesn't. And I\n>> was a bit disappointed the global hash table didn't move the needle.\n>>\n>> Most of the time is spent in\n>>\n>> 78.81% 0.00% postgres postgres [.] DecodeCommit (inlined)\n>> |\n>> ---DecodeCommit (inlined)\n>> |\n>> |--72.65%--SnapBuildCommitTxn\n>> | |\n>> | --72.61%--SnapBuildBuildSnapshot\n>> | |\n>> | --72.09%--pg_qsort\n>> | |\n>> | |--66.24%--pg_qsort\n>> | | |\n>>\n>> And there's almost no difference between master and build with sequence\n>> decoding - see the attached diff-alter-sequence.perf, comparing the two\n>> branches (perf diff -c delta-abs).\n>>\n> \n> I think in this the commit time predominates which hides the overhead.\n> We didn't investigate in detail if that can be improved but if we see\n> a similar case of abort [1], it shows the overhead of\n> ReorderBufferSequenceIsTransactional(). I understand that aborts won't\n> be frequent and it is sort of unrealistic test but still helps to show\n> that there is overhead in ReorderBufferSequenceIsTransactional(). Now,\n> I am not sure if we can ignore that case because theoretically, the\n> overhead can increase based on the number of top-level transactions.\n> \n> [1]: https://www.postgresql.org/message-id/TY3PR01MB9889D457278B254CA87D1325F581A%40TY3PR01MB9889.jpnprd01.prod.outlook.com\n> \n\nBut those profiles were with the \"old\" patch, with one hash table per\ntop-level transaction. I see nothing like that with the patch [1] that\nreplaces that with a single global hash table. With that patch, the\nReorderBufferSequenceIsTransactional() took ~0.5% in any tests I did.\n\nWhat did have bigger impact is this:\n\n 46.12% 1.47% postgres [.] pg_logical_slot_get_changes_guts\n |\n |--45.12%--pg_logical_slot_get_changes_guts\n | |\n | |--42.34%--LogicalDecodingProcessRecord\n | | |\n | | |--12.82%--xact_decode\n | | | |\n | | | |--9.46%--DecodeAbort (inlined)\n | | | | |\n | | | | |--8.44%--ReorderBufferCleanupTXN\n | | | | | |\n | | | | | |--3.25%--ReorderBufferSequenceCleanup (in)\n | | | | | | |\n | | | | | | |--1.59%--hash_seq_search\n | | | | | | |\n | | | | | | |--0.80%--hash_search_with_hash_value\n | | | | | | |\n | | | | | | --0.59%--hash_search\n | | | | | | hash_bytes\n\nI guess that could be optimized, but it's also a direct consequence of\nthe huge number of aborts for transactions that create relfilenode. For\nany other workload this will be negligible.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Dec 2023 15:18:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 7:20 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/6/23 11:19, Amit Kapila wrote:\n> > On Sun, Dec 3, 2023 at 11:56 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 12/3/23 18:52, Tomas Vondra wrote:\n> >>> ...\n> >>>\n> >>\n> >> Another idea is that maybe we could somehow inform ReorderBuffer whether\n> >> the output plugin even is interested in sequences. That'd help with\n> >> cases where we don't even want/need to replicate sequences, e.g. because\n> >> the publication does not specify (publish=sequence).\n> >>\n> >> What happens now in that case is we call ReorderBufferQueueSequence(),\n> >> it does the whole dance with starting/aborting the transaction, calls\n> >> rb->sequence() which just does \"meh\" and doesn't do anything. Maybe we\n> >> could just short-circuit this by asking the output plugin somehow.\n> >>\n> >> In an extreme case the plugin may not even specify the sequence\n> >> callbacks, and we're still doing all of this.\n> >>\n> >\n> > We could explore this but I guess it won't solve the problem we are\n> > facing in cases where all sequences are published and plugin has\n> > specified the sequence callbacks. I think it would add some overhead\n> > of this check in positive cases where we decide to anyway do send the\n> > changes.\n>\n> Well, the idea is the check would be very simple (essentially just a\n> boolean flag somewhere), so not really measurable.\n>\n> And if the plugin requests decoding sequences, I guess it's natural it\n> may have a bit of overhead. It needs to do more things, after all. It\n> needs to be acceptable, ofc.\n>\n\nI agree with you that if it can be done cheaply or without a\nmeasurable overhead then it would be a good idea and can serve other\npurposes as well. For example, see discussion [1]. I had in mind more of\nwhat the patch in email [1] is doing, where it needs to start/stop a xact,\ndo some relcache access, etc., which seems like it can add some overhead\nif done for each change, though I haven't measured so can't be sure.\n\n[1] - https://www.postgresql.org/message-id/CAGfChW5Qo2SrjJ7rU9YYtZbRaWv6v-Z8MJn%3DdQNx4uCSqDEOHA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 09:33:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 7:17 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/6/23 12:05, Dilip Kumar wrote:\n> > On Wed, Dec 6, 2023 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >>> Why can't we use the same concept of\n> >>> SnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\n> >>> non-transactional changes (have some base snapshot before the first\n> >>> change), and whenever there is any catalog change, queue new snapshot\n> >>> change also in the queue of the non-transactional sequence change so\n> >>> that while sending it to downstream whenever it is necessary we will\n> >>> change the historic snapshot?\n> >>>\n> >>\n> >> Oh, do you mean maintain different historic snapshots and then switch\n> >> based on the change we are processing? I guess the other thing we need\n> >> to consider is the order of processing the changes if we maintain\n> >> separate queues that need to be processed.\n> >\n> > I mean we will not specifically maintain the historic changes, but if\n> > there is any catalog change where we are pushing the snapshot to all\n> > the transaction's change queue, at the same time we will push this\n> > snapshot in the non-transactional sequence queue as well. I am not\n> > sure what is the problem with the ordering? because we will be\n> > queueing all non-transactional sequence changes in a separate queue in\n> > the order they arrive and as soon as we process the next commit we\n> > will process all the non-transactional changes at that time. Do you\n> > see issue with that?\n> >\n>\n> Isn't this (in principle) the idea of queuing the non-transactional\n> changes and then applying them on the next commit?\n\nYes, it is.\n\n> Yes, I didn't get\n> very far with that, but I got stuck exactly on tracking which snapshot\n> to use, so if there's a way to do that, that'd fix my issue.\n\nThinking more about the snapshot issue, do we need to even bother about\nchanging the snapshot at all while streaming the non-transactional\nsequence changes, or can we send all the non-transactional changes with\na single snapshot? So mainly the snapshot logically gets changed due to\nthese 2 events: case1: When any transaction gets committed which has\ndone a catalog operation (this changes the global snapshot) and case2:\nWhen within a transaction, there is some catalog change (this just\nupdates the 'curcid' in the base snapshot of the transaction).\n\nNow, if we are thinking that we are streaming all the\nnon-transactional sequence changes right before the next commit then\nwe are not bothered about (case1) at all because all changes we\nhave queued so far are before this commit. And if we come to\n(case2), if we are performing any catalog change on the sequence then\nthe following changes on the same sequence will be considered\ntransactional, and if the changes are just on some other catalog (not\nrelevant to our sequence operation) then also we should not be worried\nabout the command_id change because the visibility of the catalog lookup\nfor our sequence will be unaffected by this.\n\nIn short, I am trying to say that we can safely queue the\nnon-transactional sequence changes and stream them based on the\nsnapshot we got when we decoded the first change, and as long as we are\nplanning to stream just before the next commit (or next in-progress\nstream), we don't ever need to update the snapshot.\n\n> Also, would this mean we don't need to track the relfilenodes, if we're\n> able to query the catalog? Would we be able to check if the relfilenode\n> was created by the current xact?\n\nI think by querying the catalog and checking the xmin we should be\nable to figure that out, but isn't that costlier than looking up the\nrelfilenode in the hash? Because just for identifying whether the changes\nare transactional or non-transactional you would have to query the\ncatalog, that means for each change, before we decide whether we add it to\nthe transaction's change queue or the non-transactional change queue, we\nwill have to query the catalog, i.e. you will have to start/stop a\ntransaction?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 10:05:20 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 7:17 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/6/23 12:05, Dilip Kumar wrote:\n> > On Wed, Dec 6, 2023 at 3:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >>> Why can't we use the same concept of\n> >>> SnapBuildDistributeNewCatalogSnapshot(), I mean we keep queuing the\n> >>> non-transactional changes (have some base snapshot before the first\n> >>> change), and whenever there is any catalog change, queue new snapshot\n> >>> change also in the queue of the non-transactional sequence change so\n> >>> that while sending it to downstream whenever it is necessary we will\n> >>> change the historic snapshot?\n> >>>\n> >>\n> >> Oh, do you mean maintain different historic snapshots and then switch\n> >> based on the change we are processing? I guess the other thing we need\n> >> to consider is the order of processing the changes if we maintain\n> >> separate queues that need to be processed.\n> >\n> > I mean we will not specifically maintain the historic changes, but if\n> > there is any catalog change where we are pushing the snapshot to all\n> > the transaction's change queue, at the same time we will push this\n> > snapshot in the non-transactional sequence queue as well. I am not\n> > sure what is the problem with the ordering?\n> >\n\nCurrently, we set up the historic snapshot before starting a\ntransaction to process the change and then adapt the updates to it\nwhile processing the changes for the transaction. Now, while\nprocessing this new queue of non-transactional sequence messages, we\nprobably need a separate snapshot and updates to it. So, either we\nneed some sort of switching between snapshots or do it in different\ntransactions.\n\n> > because we will be\n> > queueing all non-transactional sequence changes in a separate queue in\n> > the order they arrive and as soon as we process the next commit we\n> > will process all the non-transactional changes at that time. Do you\n> > see issue with that?\n> >\n>\n> Isn't this (in principle) the idea of queuing the non-transactional\n> changes and then applying them on the next commit? Yes, I didn't get\n> very far with that, but I got stuck exactly on tracking which snapshot\n> to use, so if there's a way to do that, that'd fix my issue.\n>\n> Also, would this mean we don't need to track the relfilenodes, if we're\n> able to query the catalog? Would we be able to check if the relfilenode\n> was created by the current xact?\n>\n\nI thought this new mechanism was for processing a queue of\nnon-transactional sequence changes. The tracking of relfilenodes is to\ndistinguish between transactional and non-transactional messages, so I\nthink we probably still need that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 7 Dec 2023 10:26:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 6, 2023 at 7:09 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Yes, if something like this happens, that'd be a problem:\n>\n> 1) decoding starts, with\n>\n> SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT\n>\n> 2) transaction that creates a new refilenode gets decoded, but we skip\n> it because we don't have the correct snapshot\n>\n> 3) snapshot changes to SNAPBUILD_FULL_SNAPSHOT\n>\n> 4) we decode sequence change from nextval() for the sequence\n>\n> This would lead to us attempting to apply sequence change for a\n> relfilenode that's not visible yet (and may even get aborted).\n>\n> But can this even happen? Can we start decoding in the middle of a\n> transaction? How come this wouldn't affect e.g. XLOG_HEAP2_NEW_CID,\n> which is also skipped until SNAPBUILD_FULL_SNAPSHOT. Or logical\n> messages, where we also call the output plugin in non-transactional cases.\n\nIt's not a problem for logical messages because whether the message is\ntransaction or non-transactional is decided while WAL logs the message\nitself. But here our problem starts with deciding whether the change\nis transactional vs non-transactional, because if we insert the\n'relfilenode' in hash then the subsequent sequence change in the same\ntransaction would be considered transactional otherwise\nnon-transactional. And XLOG_HEAP2_NEW_CID is just for changing the\nsnapshot->curcid which will only affect the catalog visibility of the\nupcoming operation in the same transaction, but that's not an issue\nbecause if some of the changes of this transaction are seen when\nsnapbuild state < SNAPBUILD_FULL_SNAPSHOT then this transaction has to\nget committed before the state change to SNAPBUILD_CONSISTENT_SNAPSHOT\ni.e. the commit LSN of this transaction is going to be <\nstart_decoding_at.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 7 Dec 2023 10:41:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nThere's been a lot discussed over the past month or so, and it's become\ndifficult to get a good idea what's the current state - what issues\nremain to be solved, what's unrelated to this patch, and how to move if\nforward. Long-running threads tend to be confusing, so I had a short\ncall with Amit to discuss the current state yesterday, and to make sure\nwe're on the same page. I believe it was very helpful, and I've promised\nto post a short summary of the call - issues, what we agreed seems like\na path forward, etc.\n\nObviously, I might have misunderstood something, in which case Amit can\ncorrect me. And I'd certainly welcome opinions from others.\n\nIn general, we discussed three areas - desirability of the feature,\ncorrectness and performance. I believe a brief summary of the agreement\nwould be this:\n\n- desirability of the feature: Random IDs (UUIDs etc.) are likely a much\nbetter solution for distributed (esp. active-active) systems. But there\nare important use cases that are likely to keep using regular sequences\n(online upgrades of single-node instances, existing systems, ...).\n\n- correctness: There's one possible correctness issue, when the snapshot\nchanges to FULL between record creating a sequence relfilenode and that\nsequence advancing. This needs to be verified/reproduced, and fixed.\n\n- performance issues: We've agreed the case with a lot of aborts (when\nDecodeCommit consumes a lot of CPU) is unrelated to this patch. We've\ndiscussed whether the overhead with many sequence changes (nextval-40)\nis acceptable, and/or how to improve it.\n\nNext, I'll go over these points in more details, with my understanding\nof what the challenges are, possible solutions etc. Most of this was\ndiscussed/agreed on the call, but some are ideas I had only after the\ncall when writing this summary.\n\n\n1) desirability of the feature\n\nFirstly, do we actually want/need this feature? 
I believe that's very\nmuch a question of what use cases we're targeting.\n\nIf we only focus on distributed databases (particularly those with\nmultiple active nodes), then we probably agree that the right solution\nis to not use sequences (~generators of incrementing values) but UUIDs\nor similar random identifiers (better not call them sequences, there's\nnot much sequential about it). The huge advantage is this does not\nrequire replicating any state between the nodes, so logical decoding can\nsimply ignore them and replicate just the generated values. I don't\nthink there's any argument about that. If I were building such a distributed\nsystem, I'd certainly use such random IDs.\n\nThe question is what to do about the other use cases - online upgrades\nrelying on logical decoding, failovers to logical replicas, and so on.\nOr what to do about existing systems that can't be easily changed to use\ndifferent/random identifiers. Those are not really distributed systems\nand therefore don't quite need random IDs.\n\nFurthermore, it's not like random IDs have no drawbacks - UUIDv4 can\neasily lead to massive write amplification, for example. There are\nvariants like UUIDv7 reducing the impact, but there are other trade-offs.\n\nMy takeaway from this is there's still value in having this feature.\n\n\n2) correctness\n\nThe only correctness issue I'm aware of is the question of what happens\nwhen the snapshot switches to SNAPBUILD_FULL_SNAPSHOT between decoding\nthe relfilenode creation and the sequence increment, pointed out by\nDilip in [1].\n\nIf this happens (and while I don't have a reproducer, I also don't have\na very clear idea why it couldn't happen), it breaks how the patch\ndecides between transactional and non-transactional sequence changes.\n\nSo this seems like a fatal flaw - it definitely needs to be solved. I\ndon't have a good idea how to do that, unfortunately. 
The problem is the\ndependency on an earlier record, and that this needs to be evaluated\nimmediately (in the decode phase). Logical messages don't have the same\nissue because the \"transactional\" flag does not depend on earlier stuff,\nand other records are not interpreted until apply/commit, when we know\neverything relevant was decoded.\n\nI don't know what the solution is. Either we find a way to make sure not\nto lose/skip the smgr record, or we need to rethink how we determine the\ntransactional flag (perhaps even try again adding it to the WAL record,\nbut we didn't find a way to do that earlier).\n\n\n3) performance issues\n\nWe have discussed two cases - \"ddl-abort\" and \"nextval-40\".\n\nThe \"ddl-abort\" is when the workload does a lot of DDL and then aborts\nthem, leading to profiles dominated by DecodeCommit. The agreement here\nis that while this is a valid issue and we should try fixing it, it's\nunrelated to this patch. The issue exists even on master. So in the\ncontext of this patch we can ignore this issue.\n\nThe \"nextval-40\" applies to workloads doing a lot of regular sequence\nchanges. We only decode/apply changes written to WAL, and that happens\nonly for every 32 increments or so. The test was with a very simple\ntransaction (just sequence advanced to write WAL + 1-row insert), which\nmeans it's pretty much a worst case impact. For larger transactions,\nit's going to be hardly measurable. Also, this only measured decoding,\nnot apply (which also will make this less significant).\n\nMost of the overhead comes from ReorderBufferQueueSequence() starting\nand then aborting a transaction, per the profile in [2]. This only\nhappens in the non-transactional case, but we expect that in regular\n\nAnyway, let's say we want to mitigate this overhead. 
I think there are\nthree ways to do that:\n\n\na) find a way to not have to apply sequence changes immediately, but\nqueue them until the next commit\n\nThis would give a chance to combine multiple sequence changes into a\nsingle \"replay change\", reducing the overhead. There are a couple of problems\nwith this, though. Firstly, it can't help OLTP workloads because the\ntransactions are short so sequence changes are unlikely to combine. It's\nalso not clear how expensive this would be - could it be expensive enough to\noutweigh the benefits?\n\nAll of this is assuming it can be implemented; we don't have such a patch\nyet. I was speculating about something like this earlier, but I haven't\nmanaged to make that work. Doesn't mean it's impossible, ofc.\n\n\nb) provide a way for the output plugin to skip sequence decoding early\n\nThe way the decoding is coded now, ReorderBufferQueueSequence does all\nthe expensive dance even if the output plugin does not implement the\nsequence callbacks.\n\nMaybe we should have a way to allow skipping all of this early, right at\nthe beginning of ReorderBufferQueueSequence (and thus before we even try\nto start/abort the transaction).\n\nOfc, this is not a perfect solution either - it won't help workloads\nthat actually need/want sequence decoding but the workload is such that\nthe decoding has significant overhead, or with plugins that choose to\nsupport decoding sequences in general. For example the built-in output\nplugin would certainly support sequences - and the overhead would still\nbe there (even if no sequences are added to the publication).\n\n\nc) instruct people to increase the sequence cache from 32 to 1024\n\nThis would reduce the number of WAL messages that need to be decoded and\nreplayed, reducing the overhead proportionally. Of course, this also\nmeans the sequence will \"jump forward\" more in case of a crash or failover\nto the logical replica, but I think that's an acceptable tradeoff. 
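For instance, something along these lines on the publisher (just a sketch - the sequence name here is made up, and the right cache value depends on the workload):

```sql
-- Trade a larger post-crash/failover "gap" for fewer WAL records
-- (and thus fewer sequence changes that have to be decoded):
ALTER SEQUENCE my_seq CACHE 1024;
```
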
People\nshould not expect sequences to be gap-less anyway.\n\nConsidering nextval-40 is pretty much a worst-case behavior, I think\nthis might actually be an acceptable solution/workaround.\n\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/CAFiTN-vAx-Y%2B19ROKOcWnGf7ix2VOTUebpzteaGw9XQyCAeK6g%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/0bc34f71-7745-dc16-d765-5ba1f0776a3f%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 12 Dec 2023 11:01:38 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 7, 2023 at 10:41 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Dec 6, 2023 at 7:09 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Yes, if something like this happens, that'd be a problem:\n> >\n> > 1) decoding starts, with\n> >\n> > SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT\n> >\n> > 2) transaction that creates a new refilenode gets decoded, but we skip\n> > it because we don't have the correct snapshot\n> >\n> > 3) snapshot changes to SNAPBUILD_FULL_SNAPSHOT\n> >\n> > 4) we decode sequence change from nextval() for the sequence\n> >\n> > This would lead to us attempting to apply sequence change for a\n> > relfilenode that's not visible yet (and may even get aborted).\n> >\n> > But can this even happen? Can we start decoding in the middle of a\n> > transaction? How come this wouldn't affect e.g. XLOG_HEAP2_NEW_CID,\n> > which is also skipped until SNAPBUILD_FULL_SNAPSHOT. Or logical\n> > messages, where we also call the output plugin in non-transactional cases.\n>\n> It's not a problem for logical messages because whether the message is\n> transaction or non-transactional is decided while WAL logs the message\n> itself. But here our problem starts with deciding whether the change\n> is transactional vs non-transactional, because if we insert the\n> 'relfilenode' in hash then the subsequent sequence change in the same\n> transaction would be considered transactional otherwise\n> non-transactional.\n>\n\nIt is correct that we can make a wrong decision about whether a change\nis transactional or non-transactional when sequence DDL happens before\nthe SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\nafter that state. However, one thing to note here is that we won't try\nto stream such a change because for non-transactional cases we don't\nproceed unless the snapshot is in a consistent state. 
Now, if the\ndecision had been correct then we would probably have queued the\nsequence change and discarded it at commit.\n\nOne thing where we deviate here is that for non-sequence transactional\ncases (including logical messages), we immediately start queuing the\nchanges as soon as we reach the SNAPBUILD_FULL_SNAPSHOT state (provided\nSnapBuildProcessChange() returns true, which is quite possible) and\ntake the final decision at commit/prepare/abort time. However, that won't\nbe the case for sequences because of the dependency of determining\ntransactional cases on one of the prior records. Now, I am not\ncompletely sure at this stage if such a deviation can cause any\nproblem and/or whether we are okay to have such a deviation for\nsequences.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 13 Dec 2023 18:26:34 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Dear hackers,\r\n\r\n> It is correct that we can make a wrong decision about whether a change\r\n> is transactional or non-transactional when sequence DDL happens before\r\n> the SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\r\n> after that state.\r\n\r\nI found a workload which decoder distinguish wrongly.\r\n\r\n# Prerequisite\r\n\r\nApply an attached patch for inspecting the sequence status. It can be applied atop v20231203 patch set.\r\nAlso, a table and a sequence must be defined:\r\n\r\n```\r\nCREATE TABLE foo (var int);\r\nCREATE SEQUENCE s;\r\n```\r\n\r\n# Workload\r\n\r\nThen, you can execute concurrent transactions from three clients like below:\r\n\r\nClient-1\r\n\r\nBEGIN;\r\nINSERT INTO foo VALUES (1);\r\n\r\n\t\t\tClient-2\r\n\r\n\t\t\tSELECT pg_create_logical_replication_slot('slot', 'test_decoding');\r\n\r\n\t\t\t\t\t\tClient-3\r\n\r\n\t\t\t\t\t\tBEGIN;\r\n\t\t\t\t\t\tALTER SEQUENCE s MAXVALUE 5000;\r\nCOMMIT;\r\n\t\t\t\t\t\tSAVEPOINT s1;\r\n\t\t\t\t\t\tSELECT setval('s', 2000);\r\n\t\t\t\t\t\tROLLBACK;\r\n\r\n\t\t\tSELECT pg_logical_slot_get_changes('slot', 'test_decoding');\r\n\r\n# Result and analysis\r\n\r\nAt first, below lines would be output on the log. This meant that WAL records\r\nfor ALTER SEQUENCE were decoded but skipped because the snapshot had been building.\r\n\r\n```\r\n...\r\nLOG: logical decoding found initial starting point at 0/154D238\r\nDETAIL: Waiting for transactions (approximately 1) older than 741 to end.\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: smgr_decode. snapshot is SNAPBUILD_BUILDING_SNAPSHOT\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: skipped\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: seq_decode. 
snapshot is SNAPBUILD_BUILDING_SNAPSHOT\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: skipped\r\n...\r\n```\r\n\r\nNote that the above `seq_decode...` line was not output via `setval()`; it was emitted\r\nby the ALTER SEQUENCE statement. Below is the call stack for inserting the WAL record.\r\n\r\n```\r\nXLogInsert(RM_SEQ_ID, XLOG_SEQ_LOG);\r\nfill_seq_fork_with_data\r\nfill_seq_with_data\r\nAlterSequence\r\n```\r\n\r\nThen, subsequent lines would look like the following. This means that the snapshot becomes\r\nFULL and `setval()` is wrongly regarded as non-transactional.\r\n\r\n```\r\nLOG: logical decoding found initial consistent point at 0/154D658\r\nDETAIL: Waiting for transactions (approximately 1) older than 742 to end.\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: seq_decode. snapshot is SNAPBUILD_FULL_SNAPSHOT\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: the sequence is non-transactional\r\nSTATEMENT: SELECT * FROM pg_create_logical_replication_slot('slot', 'test_decoding');\r\nLOG: XXX: not consistent: skipped\r\n```\r\n\r\nThe change would be discarded because the snapshot has not become CONSISTENT yet,\r\nper the code below. If it had been deemed transactional, we would have queued this\r\nchange, though the transaction would be skipped at commit.\r\n\r\n```\r\n\telse if (!transactional &&\r\n\t\t\t (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\r\n\t\t\t SnapBuildXactNeedsSkip(builder, buf->origptr)))\r\n\t\treturn;\r\n```\r\n\r\nBut anyway, we found a case in which we can make a wrong decision. This example\r\nis lucky - it does not produce wrong output, but I'm not sure all cases are like that.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 14 Dec 2023 03:44:22 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Dec 13, 2023 at 6:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > > But can this even happen? Can we start decoding in the middle of a\n> > > transaction? How come this wouldn't affect e.g. XLOG_HEAP2_NEW_CID,\n> > > which is also skipped until SNAPBUILD_FULL_SNAPSHOT. Or logical\n> > > messages, where we also call the output plugin in non-transactional cases.\n> >\n> > It's not a problem for logical messages because whether the message is\n> > transaction or non-transactional is decided while WAL logs the message\n> > itself. But here our problem starts with deciding whether the change\n> > is transactional vs non-transactional, because if we insert the\n> > 'relfilenode' in hash then the subsequent sequence change in the same\n> > transaction would be considered transactional otherwise\n> > non-transactional.\n> >\n>\n> It is correct that we can make a wrong decision about whether a change\n> is transactional or non-transactional when sequence DDL happens before\n> the SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\n> after that state. However, one thing to note here is that we won't try\n> to stream such a change because for non-transactional cases we don't\n> proceed unless the snapshot is in a consistent state. Now, if the\n> decision had been correct then we would probably have queued the\n> sequence change and discarded at commit.\n>\n> One thing that we deviate here is that for non-sequence transactional\n> cases (including logical messages), we immediately start queuing the\n> changes as soon as we reach SNAPBUILD_FULL_SNAPSHOT state (provided\n> SnapBuildProcessChange() returns true which is quite possible) and\n> take final decision at commit/prepare/abort time. However, that won't\n> be the case for sequences because of the dependency of determining\n> transactional cases on one of the prior records. 
Now, I am not\n> completely sure at this stage if such a deviation can cause any\n> problem and or whether we are okay to have such a deviation for\n> sequences.\n\nOkay, so this particular scenario that I raised is somehow saved; I\nmean although we are considering a transactional sequence operation as\nnon-transactional, we also know that if some of the changes for a\ntransaction are skipped because the snapshot was not FULL, that means\nthat transaction can not be streamed, because that transaction has to\nbe committed before the snapshot becomes CONSISTENT (based on the snapshot\nstate change machinery). Ideally, based on the same logic that the\nsnapshot is not consistent, the non-transactional sequence changes are\nalso skipped. But the only thing that makes me a bit uncomfortable is\nthat even though the result is not wrong we have made some wrong\nintermediate decisions, i.e. considered a transactional change as\nnon-transactional.\n\nOne solution to this issue is that, even if the snapshot state does\nnot reach FULL, just add the sequence relids to the hash; I mean that\nhash is only maintained for deciding whether the sequence is changed\nin that transaction or not. So not adding such relids to the hash seems\nlike the root cause of the issue. Honestly, I haven't analyzed this\nidea in detail about how easy it would be to add only these changes to\nthe hash and what are the other dependencies, but this seems like a\nworthwhile direction IMHO.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Dec 2023 10:52:57 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 10:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> > >\n> >\n> > It is correct that we can make a wrong decision about whether a change\n> > is transactional or non-transactional when sequence DDL happens before\n> > the SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\n> > after that state. However, one thing to note here is that we won't try\n> > to stream such a change because for non-transactional cases we don't\n> > proceed unless the snapshot is in a consistent state. Now, if the\n> > decision had been correct then we would probably have queued the\n> > sequence change and discarded at commit.\n> >\n> > One thing that we deviate here is that for non-sequence transactional\n> > cases (including logical messages), we immediately start queuing the\n> > changes as soon as we reach SNAPBUILD_FULL_SNAPSHOT state (provided\n> > SnapBuildProcessChange() returns true which is quite possible) and\n> > take final decision at commit/prepare/abort time. However, that won't\n> > be the case for sequences because of the dependency of determining\n> > transactional cases on one of the prior records. Now, I am not\n> > completely sure at this stage if such a deviation can cause any\n> > problem and or whether we are okay to have such a deviation for\n> > sequences.\n>\n> Okay, so this particular scenario that I raised is somehow saved, I\n> mean although we are considering transactional sequence operation as\n> non-transactional we also know that if some of the changes for a\n> transaction are skipped because the snapshot was not FULL that means\n> that transaction can not be streamed because that transaction has to\n> be committed before snapshot become CONSISTENT (based on the snapshot\n> state change machinery). Ideally based on the same logic that the\n> snapshot is not consistent the non-transactional sequence changes are\n> also skipped. 
But the only thing that makes me a bit uncomfortable is\n> that even though the result is not wrong we have made some wrong\n> intermediate decisions i.e. considered transactional change as\n> non-transactions.\n>\n> One solution to this issue is that, even if the snapshot state does\n> not reach FULL just add the sequence relids to the hash, I mean that\n> hash is only maintained for deciding whether the sequence is changed\n> in that transaction or not. So no adding such relids to hash seems\n> like a root cause of the issue. Honestly, I haven't analyzed this\n> idea in detail about how easy it would be to add only these changes to\n> the hash and what are the other dependencies, but this seems like a\n> worthwhile direction IMHO.\n\nI also thought about the same solution. I tried this solution as the\nattached patch on top of Hayato's diagnostic changes. Following log\nmessages are seen in server error log. Those indicate that the\nsequence change was correctly deemed as a transactional change (line\n2023-12-14 12:14:55.591 IST [321229] LOG: XXX: the sequence is\ntransactional).\n2023-12-14 12:12:50.550 IST [321229] ERROR: relation\n\"pg_replication_slot\" does not exist at character 15\n2023-12-14 12:12:50.550 IST [321229] STATEMENT: select * from\npg_replication_slot;\n2023-12-14 12:12:57.289 IST [321229] LOG: logical decoding found\ninitial starting point at 0/1598D50\n2023-12-14 12:12:57.289 IST [321229] DETAIL: Waiting for transactions\n(approximately 1) older than 759 to end.\n2023-12-14 12:12:57.289 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:13:49.551 IST [321229] LOG: XXX: smgr_decode. snapshot\nis SNAPBUILD_BUILDING_SNAPSHOT\n2023-12-14 12:13:49.551 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:13:49.551 IST [321229] LOG: XXX: seq_decode. 
snapshot is\nSNAPBUILD_BUILDING_SNAPSHOT\n2023-12-14 12:13:49.551 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:13:49.551 IST [321229] LOG: XXX: skipped\n2023-12-14 12:13:49.551 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:13:49.552 IST [321229] LOG: logical decoding found\ninitial consistent point at 0/1599170\n2023-12-14 12:13:49.552 IST [321229] DETAIL: Waiting for transactions\n(approximately 1) older than 760 to end.\n2023-12-14 12:13:49.552 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:14:55.591 IST [321229] LOG: XXX: seq_decode. snapshot is\nSNAPBUILD_FULL_SNAPSHOT\n2023-12-14 12:14:55.591 IST [321230] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:14:55.591 IST [321229] LOG: XXX: the sequence is transactional\n2023-12-14 12:14:55.591 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 12:14:55.813 IST [321229] LOG: logical decoding found\nconsistent point at 0/15992E8\n2023-12-14 12:14:55.813 IST [321229] DETAIL: There are no running transactions.\n2023-12-14 12:14:55.813 IST [321229] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n\nIt looks like the solution works. But this is the only place where we\nprocess a change before the snapshot reaches FULL. But this is also the\nonly record which affects a decision to queue/not queue a following change.\nSo it should be ok. The sequence_hash'es are separate for each\ntransaction and they are cleaned when processing the COMMIT record. So I\nthink we don't have any side effects of adding the relfilenode to the sequence\nhash even though the snapshot is not FULL.\n\n\n\nAs a side note\n1. the prologue of ReorderBufferSequenceCleanup() mentions only abort,\nbut this function will be called for COMMIT as well. 
Prologue needs to\nbe fixed.\n2. Now that sequence hashes are per transaction, do we need\nReorderBufferTXN in ReorderBufferSequenceEnt?\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 14 Dec 2023 12:31:03 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 12:31 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 14, 2023 at 10:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > >\n> > >\n> > > It is correct that we can make a wrong decision about whether a change\n> > > is transactional or non-transactional when sequence DDL happens before\n> > > the SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\n> > > after that state. However, one thing to note here is that we won't try\n> > > to stream such a change because for non-transactional cases we don't\n> > > proceed unless the snapshot is in a consistent state. Now, if the\n> > > decision had been correct then we would probably have queued the\n> > > sequence change and discarded at commit.\n> > >\n> > > One thing that we deviate here is that for non-sequence transactional\n> > > cases (including logical messages), we immediately start queuing the\n> > > changes as soon as we reach SNAPBUILD_FULL_SNAPSHOT state (provided\n> > > SnapBuildProcessChange() returns true which is quite possible) and\n> > > take final decision at commit/prepare/abort time. However, that won't\n> > > be the case for sequences because of the dependency of determining\n> > > transactional cases on one of the prior records. Now, I am not\n> > > completely sure at this stage if such a deviation can cause any\n> > > problem and or whether we are okay to have such a deviation for\n> > > sequences.\n> >\n> > Okay, so this particular scenario that I raised is somehow saved, I\n> > mean although we are considering transactional sequence operation as\n> > non-transactional we also know that if some of the changes for a\n> > transaction are skipped because the snapshot was not FULL that means\n> > that transaction can not be streamed because that transaction has to\n> > be committed before snapshot become CONSISTENT (based on the snapshot\n> > state change machinery). 
Ideally based on the same logic that the\n> > snapshot is not consistent the non-transactional sequence changes are\n> > also skipped. But the only thing that makes me a bit uncomfortable is\n> > that even though the result is not wrong we have made some wrong\n> > intermediate decisions i.e. considered transactional change as\n> > non-transactions.\n> >\n> > One solution to this issue is that, even if the snapshot state does\n> > not reach FULL just add the sequence relids to the hash, I mean that\n> > hash is only maintained for deciding whether the sequence is changed\n> > in that transaction or not. So no adding such relids to hash seems\n> > like a root cause of the issue. Honestly, I haven't analyzed this\n> > idea in detail about how easy it would be to add only these changes to\n> > the hash and what are the other dependencies, but this seems like a\n> > worthwhile direction IMHO.\n>\n> I also thought about the same solution. I tried this solution as the\n> attached patch on top of Hayato's diagnostic changes.\n\nI think you forgot to attach the patch.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 14 Dec 2023 12:37:07 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 12:31 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 14, 2023 at 10:53 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> > > >\n> > >\n> > > It is correct that we can make a wrong decision about whether a change\n> > > is transactional or non-transactional when sequence DDL happens before\n> > > the SNAPBUILD_FULL_SNAPSHOT state and the sequence operation happens\n> > > after that state. However, one thing to note here is that we won't try\n> > > to stream such a change because for non-transactional cases we don't\n> > > proceed unless the snapshot is in a consistent state. Now, if the\n> > > decision had been correct then we would probably have queued the\n> > > sequence change and discarded at commit.\n> > >\n> > > One thing that we deviate here is that for non-sequence transactional\n> > > cases (including logical messages), we immediately start queuing the\n> > > changes as soon as we reach SNAPBUILD_FULL_SNAPSHOT state (provided\n> > > SnapBuildProcessChange() returns true which is quite possible) and\n> > > take final decision at commit/prepare/abort time. However, that won't\n> > > be the case for sequences because of the dependency of determining\n> > > transactional cases on one of the prior records. Now, I am not\n> > > completely sure at this stage if such a deviation can cause any\n> > > problem and or whether we are okay to have such a deviation for\n> > > sequences.\n> >\n> > Okay, so this particular scenario that I raised is somehow saved, I\n> > mean although we are considering transactional sequence operation as\n> > non-transactional we also know that if some of the changes for a\n> > transaction are skipped because the snapshot was not FULL that means\n> > that transaction can not be streamed because that transaction has to\n> > be committed before snapshot become CONSISTENT (based on the snapshot\n> > state change machinery). 
Ideally based on the same logic that the\n> > snapshot is not consistent the non-transactional sequence changes are\n> > also skipped. But the only thing that makes me a bit uncomfortable is\n> > that even though the result is not wrong we have made some wrong\n> > intermediate decisions i.e. considered transactional change as\n> > non-transactions.\n> >\n> > One solution to this issue is that, even if the snapshot state does\n> > not reach FULL just add the sequence relids to the hash, I mean that\n> > hash is only maintained for deciding whether the sequence is changed\n> > in that transaction or not. So no adding such relids to hash seems\n> > like a root cause of the issue. Honestly, I haven't analyzed this\n> > idea in detail about how easy it would be to add only these changes to\n> > the hash and what are the other dependencies, but this seems like a\n> > worthwhile direction IMHO.\n>\n>\n...\n> It looks like the solution works. But this is the only place where we\n> process a change before SNAPSHOT reaches FULL. But this is also the\n> only record which affects a decision to queue/not a following change.\n> So it should be ok. The sequence_hash'es as separate for each\n> transaction and they are cleaned when processing COMMIT record.\n>\n\n>\nIt looks like the solution works. But this is the only place where we\nprocess a change before SNAPSHOT reaches FULL. But this is also the\nonly record which affects a decision to queue/not a following change.\nSo it should be ok. The sequence_hash'es as separate for each\ntransaction and they are cleaned when processing COMMIT record.\n>\n\nBut it is possible that even commit or abort also happens before the\nsnapshot reaches full state in which case the hash table will have\nstale or invalid (for aborts) entries. That will probably be cleaned\nat a later point by running_xact records.
Now, I think in theory, it\nis possible that the same RelFileLocator can again be allocated before\nwe clean up the existing entry which can probably confuse the system.\nIt might or might not be a problem in practice but I think the more\nassumptions we add for sequences, the more difficult it will become to\nensure its correctness.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 14:36:31 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 12:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I think you forgot to attach the patch.\n\nSorry. Here it is.\n\nOn Thu, Dec 14, 2023 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> It looks like the solution works. But this is the only place where we\n> process a change before SNAPSHOT reaches FULL. But this is also the\n> only record which affects a decision to queue/not a following change.\n> So it should be ok. The sequence_hash'es as separate for each\n> transaction and they are cleaned when processing COMMIT record.\n> >\n>\n> But it is possible that even commit or abort also happens before the\n> snapshot reaches full state in which case the hash table will have\n> stale or invalid (for aborts) entries. That will probably be cleaned\n> at a later point by running_xact records.\n\nWhy would cleaning wait till running_xact records? Won't txn entry\nitself be removed when processing commit/abort record? At the same the\nsequence hash will be cleaned as well.\n\n> Now, I think in theory, it\n> is possible that the same RelFileLocator can again be allocated before\n> we clean up the existing entry which can probably confuse the system.\n\nHow? The transaction allocating the first time would be cleaned before\nit happens the second time. So shouldn't matter.\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 14 Dec 2023 14:44:56 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 2:45 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 14, 2023 at 12:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I think you forgot to attach the patch.\n>\n> Sorry. Here it is.\n>\n> On Thu, Dec 14, 2023 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > It looks like the solution works. But this is the only place where we\n> > process a change before SNAPSHOT reaches FULL. But this is also the\n> > only record which affects a decision to queue/not a following change.\n> > So it should be ok. The sequence_hash'es as separate for each\n> > transaction and they are cleaned when processing COMMIT record.\n> > >\n> >\n> > But it is possible that even commit or abort also happens before the\n> > snapshot reaches full state in which case the hash table will have\n> > stale or invalid (for aborts) entries. That will probably be cleaned\n> > at a later point by running_xact records.\n>\n> Why would cleaning wait till running_xact records? Won't txn entry\n> itself be removed when processing commit/abort record? At the same the\n> sequence hash will be cleaned as well.\n>\n> > Now, I think in theory, it\n> > is possible that the same RelFileLocator can again be allocated before\n> > we clean up the existing entry which can probably confuse the system.\n>\n> How? The transaction allocating the first time would be cleaned before\n> it happens the second time. So shouldn't matter.\n>\n\nIt can only be cleaned if we process it but xact_decode won't allow us\nto process it and I don't think it would be a good idea to add another\nhack for sequences here.
See below code:\n\nxact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n{\nSnapBuild *builder = ctx->snapshot_builder;\nReorderBuffer *reorder = ctx->reorder;\nXLogReaderState *r = buf->record;\nuint8 info = XLogRecGetInfo(r) & XLOG_XACT_OPMASK;\n\n/*\n* If the snapshot isn't yet fully built, we cannot decode anything, so\n* bail out.\n*/\nif (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\nreturn;\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 14 Dec 2023 14:50:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 2:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Dec 14, 2023 at 2:45 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > On Thu, Dec 14, 2023 at 12:37 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I think you forgot to attach the patch.\n> >\n> > Sorry. Here it is.\n> >\n> > On Thu, Dec 14, 2023 at 2:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > It looks like the solution works. But this is the only place where we\n> > > process a change before SNAPSHOT reaches FULL. But this is also the\n> > > only record which affects a decision to queue/not a following change.\n> > > So it should be ok. The sequence_hash'es as separate for each\n> > > transaction and they are cleaned when processing COMMIT record.\n> > > >\n> > >\n> > > But it is possible that even commit or abort also happens before the\n> > > snapshot reaches full state in which case the hash table will have\n> > > stale or invalid (for aborts) entries. That will probably be cleaned\n> > > at a later point by running_xact records.\n> >\n> > Why would cleaning wait till running_xact records? Won't txn entry\n> > itself be removed when processing commit/abort record? At the same the\n> > sequence hash will be cleaned as well.\n> >\n> > > Now, I think in theory, it\n> > > is possible that the same RelFileLocator can again be allocated before\n> > > we clean up the existing entry which can probably confuse the system.\n> >\n> > How? The transaction allocating the first time would be cleaned before\n> > it happens the second time. So shouldn't matter.\n> >\n>\n> It can only be cleaned if we process it but xact_decode won't allow us\n> to process it and I don't think it would be a good idea to add another\n> hack for sequences here.
See below code:\n>\n> xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> {\n> SnapBuild *builder = ctx->snapshot_builder;\n> ReorderBuffer *reorder = ctx->reorder;\n> XLogReaderState *r = buf->record;\n> uint8 info = XLogRecGetInfo(r) & XLOG_XACT_OPMASK;\n>\n> /*\n> * If the snapshot isn't yet fully built, we cannot decode anything, so\n> * bail out.\n> */\n> if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\n> return;\n\nThat may be true for a transaction which is decoded, but I think all\nthe transactions which are added to ReorderBuffer should be cleaned up\nonce they have been processed irrespective of whether they are\ndecoded/sent downstream or not. In this case I see the sequence hash\nbeing cleaned up for the sequence related transaction in Hayato's\nreproducer. See attached patch with a diagnostic change and the output\nbelow (notice sequence cleanup called on transaction 767).\n2023-12-14 21:06:36.756 IST [386957] LOG: logical decoding found\ninitial starting point at 0/15B2F68\n2023-12-14 21:06:36.756 IST [386957] DETAIL: Waiting for transactions\n(approximately 1) older than 767 to end.\n2023-12-14 21:06:36.756 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:05.679 IST [386957] LOG: XXX: smgr_decode. snapshot\nis SNAPBUILD_BUILDING_SNAPSHOT\n2023-12-14 21:07:05.679 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:05.679 IST [386957] LOG: XXX: seq_decode.
snapshot\nis SNAPBUILD_BUILDING_SNAPSHOT\n2023-12-14 21:07:05.679 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:05.679 IST [386957] LOG: XXX: skipped\n2023-12-14 21:07:05.679 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:05.710 IST [386957] LOG: logical decoding found\ninitial consistent point at 0/15B3388\n2023-12-14 21:07:05.710 IST [386957] DETAIL: Waiting for transactions\n(approximately 1) older than 768 to end.\n2023-12-14 21:07:05.710 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:39.292 IST [386298] LOG: checkpoint starting: time\n2023-12-14 21:07:40.919 IST [386957] LOG: XXX: seq_decode. snapshot\nis SNAPBUILD_FULL_SNAPSHOT\n2023-12-14 21:07:40.919 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:40.919 IST [386957] LOG: XXX: the sequence is transactional\n2023-12-14 21:07:40.919 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:40.919 IST [386957] LOG: sequence cleanup called on\ntransaction 767\n2023-12-14 21:07:40.919 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n2023-12-14 21:07:40.919 IST [386957] LOG: logical decoding found\nconsistent point at 0/15B3518\n2023-12-14 21:07:40.919 IST [386957] DETAIL: There are no running transactions.\n2023-12-14 21:07:40.919 IST [386957] STATEMENT: SELECT\npg_create_logical_replication_slot('slot', 'test_decoding');\n\nWe see similar output when pg_logical_slot_get_changes() is called.\n\nI haven't found the code path from where the sequence cleanup gets\ncalled. But it's being called. Am I missing something?\n\n-- \nBest Wishes,\nAshutosh Bapat",
"msg_date": "Thu, 14 Dec 2023 21:14:23 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023, at 12:44 PM, Ashutosh Bapat wrote:\n> I haven't found the code path from where the sequence cleanup gets\n> called. But it's being called. Am I missing something?\n\nReorderBufferCleanupTXN.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Thu, 14 Dec 2023 16:05:44 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 14, 2023 at 9:14 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Thu, Dec 14, 2023 at 2:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > It can only be cleaned if we process it but xact_decode won't allow us\n> > to process it and I don't think it would be a good idea to add another\n> > hack for sequences here. See below code:\n> >\n> > xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> > {\n> > SnapBuild *builder = ctx->snapshot_builder;\n> > ReorderBuffer *reorder = ctx->reorder;\n> > XLogReaderState *r = buf->record;\n> > uint8 info = XLogRecGetInfo(r) & XLOG_XACT_OPMASK;\n> >\n> > /*\n> > * If the snapshot isn't yet fully built, we cannot decode anything, so\n> > * bail out.\n> > */\n> > if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\n> > return;\n>\n> That may be true for a transaction which is decoded, but I think all\n> the transactions which are added to ReorderBuffer should be cleaned up\n> once they have been processed irrespective of whether they are\n> decoded/sent downstream or not. In this case I see the sequence hash\n> being cleaned up for the sequence related transaction in Hayato's\n> reproducer.\n>\n\nIt was because the test you are using was not designed to show the\nproblem I mentioned. In this case, the rollback was after a full\nsnapshot state was reached.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 15 Dec 2023 08:03:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nI wanted to hop in here on one particular issue:\n\n> On Dec 12, 2023, at 02:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much\n> better solution for distributed (esp. active-active) systems. But there\n> are important use cases that are likely to keep using regular sequences\n> (online upgrades of single-node instances, existing systems, ...).\n\n+1.\n\nRight now, the lack of sequence replication is a rather large foot-gun on logical replication upgrades. Copying the sequences over during the cutover period is doable, of course, but:\n\n(a) There's no out-of-the-box tooling that does it, so everyone has to write some scripts just for that one function.\n(b) It's one more thing that extends the cutover window.\n\nI don't think it is a good idea to make it mandatory: for example, there's a strong use case for replicating a table but not a sequence associated with it. But it's definitely a missing feature in logical replication.\n\n",
"msg_date": "Tue, 19 Dec 2023 04:54:32 -0800",
"msg_from": "Christophe Pettus <xof@thebuild.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/19/23 13:54, Christophe Pettus wrote:\n> Hi,\n> \n> I wanted to hop in here on one particular issue:\n> \n>> On Dec 12, 2023, at 02:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much\n>> better solution for distributed (esp. active-active) systems. But there\n>> are important use cases that are likely to keep using regular sequences\n>> (online upgrades of single-node instances, existing systems, ...).\n> \n> +1.\n> \n> Right now, the lack of sequence replication is a rather large \n> foot-gun on logical replication upgrades. Copying the sequences\n> over during the cutover period is doable, of course, but:\n> \n> (a) There's no out-of-the-box tooling that does it, so everyone has\n> to write some scripts just for that one function.\n>\n> (b) It's one more thing that extends the cutover window.\n> \n\nI agree it's an annoying gap for this use case. But if this is the only\nuse cases, maybe a better solution would be to provide such tooling\ninstead of adding it to the logical decoding?\n\nIt might seem a bit strange if most data is copied by replication\ndirectly, while sequences need special handling, ofc.\n\n> I don't think it is a good idea to make it mandatory: for example, \n> there's a strong use case for replicating a table but not a sequence \n> associated with it. But it's definitely a missing feature in\n> logical replication.\n\nI don't think the plan was to make replication of sequences mandatory,\ncertainly not with the built-in replication. If you don't add sequences\nto the publication, the sequence changes will be skipped.\n\nBut it still needs to be part of the decoding, which adds overhead for\nall logical decoding uses, even if the sequence changes end up being\ndiscarded.
That's somewhat annoying, especially considering sequences\nare fairly common part of the WAL stream.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 14:17:29 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 12/15/23 03:33, Amit Kapila wrote:\n> On Thu, Dec 14, 2023 at 9:14 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>>\n>> On Thu, Dec 14, 2023 at 2:51 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>\n>>> It can only be cleaned if we process it but xact_decode won't allow us\n>>> to process it and I don't think it would be a good idea to add another\n>>> hack for sequences here. See below code:\n>>>\n>>> xact_decode(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n>>> {\n>>> SnapBuild *builder = ctx->snapshot_builder;\n>>> ReorderBuffer *reorder = ctx->reorder;\n>>> XLogReaderState *r = buf->record;\n>>> uint8 info = XLogRecGetInfo(r) & XLOG_XACT_OPMASK;\n>>>\n>>> /*\n>>> * If the snapshot isn't yet fully built, we cannot decode anything, so\n>>> * bail out.\n>>> */\n>>> if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT)\n>>> return;\n>>\n>> That may be true for a transaction which is decoded, but I think all\n>> the transactions which are added to ReorderBuffer should be cleaned up\n>> once they have been processed irrespective of whether they are\n>> decoded/sent downstream or not. In this case I see the sequence hash\n>> being cleaned up for the sequence related transaction in Hayato's\n>> reproducer.\n>>\n> \n> It was because the test you are using was not designed to show the\n> problem I mentioned. In this case, the rollback was after a full\n> snapshot state was reached.\n> \n\nRight, I haven't tried to reproduce this, but it very much looks like we\nthe entry would not be removed if the xact aborts/commits before the\nsnapshot reaches FULL state.\n\nI suppose one way to deal with this would be to first check if an entry\nfor the same relfilenode exists. If it does, the original transaction\nmust have terminated, but we haven't cleaned it up yet - in which case\nwe can just \"move\" the relfilenode to the new one.\n\nHowever, can't that happen even with full snapshots?
I mean, let's say a\ntransaction creates a relfilenode and terminates without writing an\nabort record (surely that's possible, right?). And then another xact\ncomes and generates the same relfilenode (presumably that's unlikely,\nbut perhaps possible?). Aren't we in pretty much the same situation,\nuntil the next RUNNING_XACTS cleans up the hash table?\n\n\nI think tracking all relfilenodes would fix the original issue (with\ntreating some changes as transactional), and the tweak that \"moves\" the\nrelfilenode to the new xact would fix this other issue too.\n\nThat being said, I feel a bit uneasy about it, for similar reasons as\nAmit. If we start processing records before full snapshot, that seems\nlike moving the assumptions a bit. For example it means we'd create\nReorderBufferTXN entries for cases that'd have skipped before. OTOH this\nis (or should be) only a very temporary period while starting the\nreplication, I believe.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 21 Dec 2023 15:04:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nHere's new version of this patch series. It rebases the 2023/12/03\nversion, and there's a couple improvements to address the performance\nand correctness questions.\n\nSince the 2023/12/03 version was posted, there were a couple off-list\ndiscussions with several people - with Amit, as mentioned in [1], and\nthen also internally and at pgconf.eu.\n\nMy personal (very brief) takeaway from these discussions is this:\n\n1) desirability: We want a built-in way to handle sequences in logical\nreplication. I think everyone agrees this is not a way to do distributed\nsequences in an active-active setups, but that there are other use cases\nthat need this feature - typically upgrades / logical failover.\n\nMultiple approaches were discussed (support in logical replication or a\nseparate tool to be executed on the logical replica). Both might work,\npeople usually end up with some sort of custom tool anyway. But it's\ncumbersome, and the consensus seems the logical rep feature is better.\n\n\n2) performance: There was concern about the performance impact, and that\nit affects everyone, including those who don't replicate sequences (as\nthe overhead is mostly incurred before calls to output plugin etc.).\n\nI do agree with this, but I don't think sequences can be decoded in a\nmuch cheaper way. There was a proposal [2] that maybe we could batch the\nnon-transactional sequences changes in the \"next\" transaction, and\ndistribute them similarly to SnapBuildDistributeNewCatalogSnapshot()\ndistributes catalog snapshots.\n\nBut I doubt that'd actually work. Or more precisely - if we can make the\ncode work, I think it would not solve the issue for some common cases.\nConsider for example a case with many concurrent top-level transactions,\nmaking this quite expensive.
And I'd bet sequence changes are far more\ncommon than catalog changes.\n\nHowever, I think we ultimately agreed that the overhead is acceptable if\nit only applies to use cases that actually need to decode sequences. So\nif there was a way to skip sequence decoding when not necessary, that\nwould work. Unfortunately, that can't be based on simply checking which\ncallbacks are defined by the output plugin, because e.g. pgoutput needs\nto handle both cases (so the callbacks need to be defined). Nor it can\nbe determined based on what's included in the publication (as that's not\navailable that early).\n\nThe agreement was that the best way is to have a CREATE SUBSCRIPTION\noption that would instruct the upstream to decode sequences. By default\nthis option is 'off' (because that's the no-overhead case), but it can\nbe enabled for each subscription.\n\nThis is what 0005 implements, and interestingly enough, this is what an\nearlier version [3] from 2023/04/02 did.\n\nThis means that if you add a sequence to the publication, but leave\n\"sequences=off\" in CREATE SUBSCRIPTION, the sequence won't be replicated\nafter all. That may seems a bit surprising, and I don't like it, but I\ndon't think there's a better way to do this.\n\n\n3) correctness: The last point is about making \"transactional\" flag\ncorrect when the snapshot state changes mid-transaction, originally\npointed out by Dilip [4]. Per [5] this however happens to work\ncorrectly, because while we identify the change as 'non-transactional'\n(which is incorrect), we immediately throw it again (so we don't try to\napply it, which would error-out).\n\nOne option would be to document/describe this in the comments, per 0006.\nThis means that when ReorderBufferSequenceIsTransactional() returns\ntrue, it's correct. But if it returns 'false', it means 'maybe'. I agree\nit seems a bit strange, but with the extra comments I think it's OK.
It\nsimply means that if we get transactional=false incorrectly, we're\nguaranteed to not process it. Maybe we could rename the function to make\nthis clear from the name.\n\nThe other solution proposed in the thread [6] was to always decode the\nrelfilenode, and add it to the hash table. 0007 does this, and it works.\nBut I agree this seems possibly worse than 0006 - it means we may be\nadding entries to the hash table, and it's not clear when exactly we'll\nclean them up etc. It'd be the only place processing stuff before the\nsnapshots reaches FULL.\n\nI personally would go with 0006, i.e. just explaining why doing it this\nway is correct.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/12822961-b7de-9d59-dd27-2e3dc3980c7e%40enterprisedb.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAFiTN-vm3-bGfm-uJdzRLERMHozW8xjZHu4rdmtWR-rP-SJYMQ%40mail.gmail.com\n\n[3]\nhttps://www.postgresql.org/message-id/1f96b282-cb90-8302-cee8-7b3f5576a31c%40enterprisedb.com\n\n[4]\nhttps://www.postgresql.org/message-id/CAFiTN-vAx-Y%2B19ROKOcWnGf7ix2VOTUebpzteaGw9XQyCAeK6g%40mail.gmail.com\n\n[5]\nhttps://www.postgresql.org/message-id/CAA4eK1LFise9iN%2BNN%3Dagrk4prR1qD%2BebvzNjKAWUog2%2Bhy3HxQ%40mail.gmail.com\n\n[6]\nhttps://www.postgresql.org/message-id/CAFiTN-sYpyUBabxopJysqH3DAp4OZUCTi6m_qtgt8d32vDcWSA%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 11 Jan 2024 17:26:56 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Jan 11, 2024 at 11:27 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> 1) desirability: We want a built-in way to handle sequences in logical\n> replication. I think everyone agrees this is not a way to do distributed\n> sequences in an active-active setups, but that there are other use cases\n> that need this feature - typically upgrades / logical failover.\n\nYeah. I find it extremely hard to take seriously the idea that this\nisn't a valuable feature. How else are you supposed to do a logical\nfailover without having your entire application break?\n\n> 2) performance: There was concern about the performance impact, and that\n> it affects everyone, including those who don't replicate sequences (as\n> the overhead is mostly incurred before calls to output plugin etc.).\n>\n> The agreement was that the best way is to have a CREATE SUBSCRIPTION\n> option that would instruct the upstream to decode sequences. By default\n> this option is 'off' (because that's the no-overhead case), but it can\n> be enabled for each subscription.\n\nSeems reasonable, at least unless and until we come up with something better.\n\n> 3) correctness: The last point is about making \"transactional\" flag\n> correct when the snapshot state changes mid-transaction, originally\n> pointed out by Dilip [4]. Per [5] this however happens to work\n> correctly, because while we identify the change as 'non-transactional'\n> (which is incorrect), we immediately throw it again (so we don't try to\n> apply it, which would error-out).\n\nI've said this before, but I still find this really scary. It's\nunclear to me that we can simply classify updates as transactional or\nnon-transactional and expect things to work. If it's possible, I hope\nwe have a really good explanation somewhere of how and why it's\npossible.
If we do, can somebody point me to it so I can read it?\n\nTo be possibly slightly more clear about my concern, I think the scary\ncase is where we have transactional and non-transactional things\nhappening to the same sequence in close temporal proximity, either\nwithin the same session or across two or more sessions. If a\nnon-transactional change can get reordered ahead of some transactional\nchange upon which it logically depends, or behind some transactional\nchange that logically depends on it, then we have trouble. I also\nwonder if there are any cases where the same operation is partly\ntransactional and partly non-transactional.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Jan 2024 15:47:24 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 1/23/24 21:47, Robert Haas wrote:\n> On Thu, Jan 11, 2024 at 11:27 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> 1) desirability: We want a built-in way to handle sequences in logical\n>> replication. I think everyone agrees this is not a way to do distributed\n>> sequences in an active-active setups, but that there are other use cases\n>> that need this feature - typically upgrades / logical failover.\n> \n> Yeah. I find it extremely hard to take seriously the idea that this\n> isn't a valuable feature. How else are you supposed to do a logical\n> failover without having your entire application break?\n> \n>> 2) performance: There was concern about the performance impact, and that\n>> it affects everyone, including those who don't replicate sequences (as\n>> the overhead is mostly incurred before calls to output plugin etc.).\n>>\n>> The agreement was that the best way is to have a CREATE SUBSCRIPTION\n>> option that would instruct the upstream to decode sequences. By default\n>> this option is 'off' (because that's the no-overhead case), but it can\n>> be enabled for each subscription.\n> \n> Seems reasonable, at least unless and until we come up with something better.\n> \n>> 3) correctness: The last point is about making \"transactional\" flag\n>> correct when the snapshot state changes mid-transaction, originally\n>> pointed out by Dilip [4]. Per [5] this however happens to work\n>> correctly, because while we identify the change as 'non-transactional'\n>> (which is incorrect), we immediately throw it again (so we don't try to\n>> apply it, which would error-out).\n> \n> I've said this before, but I still find this really scary. It's\n> unclear to me that we can simply classify updates as transactional or\n> non-transactional and expect things to work. If it's possible, I hope\n> we have a really good explanation somewhere of how and why it's\n> possible.
If we do, can somebody point me to it so I can read it?\n> \n\nI did try to explain how this works (and why) in a couple places:\n\n1) the commit message\n2) reorderbuffer header comment\n3) ReorderBufferSequenceIsTransactional comment (and nearby)\n\nIt's possible this does not meet your expectations, ofc. Maybe there\nshould be a separate README for this - I haven't found anything like\nthat for logical decoding in general, which is why I did (1)-(3).\n\n> To be possibly slightly more clear about my concern, I think the scary\n> case is where we have transactional and non-transactional things\n> happening to the same sequence in close temporal proximity, either\n> within the same session or across two or more sessions. If a\n> non-transactional change can get reordered ahead of some transactional\n> change upon which it logically depends, or behind some transactional\n> change that logically depends on it, then we have trouble. I also\n> wonder if there are any cases where the same operation is partly\n> transactional and partly non-transactional.\n> \n\nI certainly understand this concern, and to some extent I even share it.\nHaving to differentiate between transactional and non-transactional\nchanges certainly confused me more than once. It's especially confusing,\nbecause the decoding implicitly changes the perceived ordering/atomicity\nof the events.\n\nThat being said, I don't think it get reordered the way you're concerned\nabout. The \"transactionality\" is determined by relfilenode change, so\nhow could the reordering happen? We'd have to misidentify change in\neither direction - and for nontransactional->transactional change that's\nclearly not possible. There has to be a new relfilenode in that xact.\n\nIn the other direction (transactional->nontransactional), it can happen\nif we fail to decode the relfilenode record. Which is what we discussed\nearlier, but came to the conclusion that it actually works OK.\n\nOf course, there might be bugs.
I spent quite a bit of effort reviewing\nand testing this, but there still might be something wrong. But I think\nthat applies to any feature.\n\nWhat would be worse is some sort of thinko in the approach in general. I\ndon't have a good answer to that, unfortunately - I think it works, but\nhow would I know for sure? We explored multiple alternative approaches\nand all of them crashed and burned ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 24 Jan 2024 18:46:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Jan 24, 2024 at 12:46 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> I did try to explain how this works (and why) in a couple places:\n>\n> 1) the commit message\n> 2) reorderbuffer header comment\n> 3) ReorderBufferSequenceIsTransactional comment (and nearby)\n>\n> It's possible this does not meet your expectations, ofc. Maybe there\n> should be a separate README for this - I haven't found anything like\n> that for logical decoding in general, which is why I did (1)-(3).\n\nI read over these and I do think they answer a bunch of questions, but\nI don't think they answer all of the questions.\n\nSuppose T1 creates a sequence and commits. Then T2 calls nextval().\nThen T3 drops the sequence. According to the commit message, T2's\nchange will be \"replayed immediately after decoding\". But it's\nessential to replay T2's change after we replay T1 and before we\nreplay T3, and the comments don't explain why that's guaranteed.\n\nThe answer might be \"locks\". If we always replay a transaction\nimmediately when we see it's commit record then in the example above\nwe're fine, because the commit record for the transaction that creates\nthe sequence must precede the nextval() call, since the sequence won't\nbe visible until the transaction commits, and also because T1 holds a\nlock on it at that point sufficient to hedge out nextval. And the\nnextval record must precede the point where T3 takes an exclusive lock\non the sequence.\n\nNote, however, that this change of reasoning critically depends on us\nnever delaying application of a transaction. If we might reach T1's\ncommit record and say \"hey, let's hold on to this for a bit and replay\nit after we've decoded some more,\" everything immediately breaks,\nunless we also delay application of T2's non-transactional update in\nsuch a way that it's still guaranteed to happen after T1. I wonder if\nthis kind of situation would be a problem for a future parallel-apply\nfeature. 
It wouldn't work, for example, to hand T1 and T3 off (in that\norder) to a separate apply process but handle T2's \"non-transactional\"\nmessage directly, because it might handle that message before the\napplication of T1 got completed.\n\nThis also seems to depend on every transactional operation that might\naffect a future non-transactional operation holding a lock that would\nconflict with that non-transactional operation. For example, if ALTER\nSEQUENCE .. RESTART WITH didn't take a strong lock on the sequence,\nthen you could have: T1 does nextval, T2 does ALTER SEQUENCE RESTART\nWITH, T1 does nextval again, T1 commits, T2 commits. It's unclear what\nthe semantics of that would be -- would T1's second nextval() see the\nsequence restart, or what? But if the effect of T1's second nextval\ndoes depend in some way on the ALTER SEQUENCE operation which precedes\nit in the WAL stream, then we might have some trouble here, because\nboth nextvals precede the commit of T2. Fortunately, this sequence of\nevents is foreclosed by locking.\n\nBut I did find one somewhat-similar case in which that's not so.\n\nS1: create table withseq (a bigint generated always as identity);\nS1: begin;\nS2: select nextval('withseq_a_seq');\nS1: alter table withseq set unlogged;\nS2: select nextval('withseq_a_seq');\n\nI think this is a bug in the code that supports owned sequences rather\nthan a problem that this patch should have to do something about. When\na sequence is flipped between logged and unlogged directly, we take a\nstronger lock than we do here when it's done in this indirect way.\nAlso, I'm not quite sure if it would pose a problem for sequence\ndecoding anyway: it changes the relfilenode, but not the value. 
But\nthis is the *kind* of problem that could make the approach unsafe:\nsupposedly transactional changes being interleaved with supposedly\nnon-transactional changes, in such a way that the non-transactional\nchanges might get applied at the wrong time relative to the\ntransactional changes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Jan 2024 09:39:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 1/26/24 15:39, Robert Haas wrote:\n> On Wed, Jan 24, 2024 at 12:46 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> I did try to explain how this works (and why) in a couple places:\n>>\n>> 1) the commit message\n>> 2) reorderbuffer header comment\n>> 3) ReorderBufferSequenceIsTransactional comment (and nearby)\n>>\n>> It's possible this does not meet your expectations, ofc. Maybe there\n>> should be a separate README for this - I haven't found anything like\n>> that for logical decoding in general, which is why I did (1)-(3).\n> \n> I read over these and I do think they answer a bunch of questions, but\n> I don't think they answer all of the questions.\n> \n> Suppose T1 creates a sequence and commits. Then T2 calls nextval().\n> Then T3 drops the sequence. According to the commit message, T2's\n> change will be \"replayed immediately after decoding\". But it's\n> essential to replay T2's change after we replay T1 and before we\n> replay T3, and the comments don't explain why that's guaranteed.\n> \n> The answer might be \"locks\". If we always replay a transaction\n> immediately when we see it's commit record then in the example above\n> we're fine, because the commit record for the transaction that creates\n> the sequence must precede the nextval() call, since the sequence won't\n> be visible until the transaction commits, and also because T1 holds a\n> lock on it at that point sufficient to hedge out nextval. And the\n> nextval record must precede the point where T3 takes an exclusive lock\n> on the sequence.\n> \n\nRight, locks + apply in commit order gives us this guarantee (I can't\nthink of a case where it wouldn't be the case).\n\n> Note, however, that this change of reasoning critically depends on us\n> never delaying application of a transaction. 
If we might reach T1's\n> commit record and say \"hey, let's hold on to this for a bit and replay\n> it after we've decoded some more,\" everything immediately breaks,\n> unless we also delay application of T2's non-transactional update in\n> such a way that it's still guaranteed to happen after T1. I wonder if\n> this kind of situation would be a problem for a future parallel-apply\n> feature. It wouldn't work, for example, to hand T1 and T3 off (in that\n> order) to a separate apply process but handle T2's \"non-transactional\"\n> message directly, because it might handle that message before the\n> application of T1 got completed.\n> \n\nDoesn't the whole logical replication critically depend on the commit\norder? If you decide to arbitrarily reorder/delay the transactions, all\nkinds of really bad things can happen. That's a generic problem, it\napplies to all kinds of objects, not just sequences - a parallel apply\nwould need to detect this sort of dependencies (e.g. INSERT + DELETE of\nthe same key), and do something about it.\n\nSimilar for sequences, where the important event is allocation of a new\nrelfilenode.\n\nIf anything, it's easier for sequences, because the relfilenode tracking\ngives us an explicit (and easy) way to detect these dependencies between\ntransactions.\n\n> This also seems to depend on every transactional operation that might\n> affect a future non-transactional operation holding a lock that would\n> conflict with that non-transactional operation. For example, if ALTER\n> SEQUENCE .. RESTART WITH didn't take a strong lock on the sequence,\n> then you could have: T1 does nextval, T2 does ALTER SEQUENCE RESTART\n> WITH, T1 does nextval again, T1 commits, T2 commits. It's unclear what\n> the semantics of that would be -- would T1's second nextval() see the\n> sequence restart, or what? 
But if the effect of T1's second nextval\n> does depend in some way on the ALTER SEQUENCE operation which precedes\n> it in the WAL stream, then we might have some trouble here, because\n> both nextvals precede the commit of T2. Fortunately, this sequence of\n> events is foreclosed by locking.\n> \n\nI don't quite follow :-(\n\nAFAIK this theory hinges on not having the right lock, but I believe\nALTER SEQUENCE does obtain the lock (at least in cases that assign a new\nrelfilenode). Which means such reordering should not be possible,\nbecause nextval() in other transactions will then wait until commit. And\nall nextval() calls in the same transaction will be treated as\ntransactional.\n\nSo I think this works OK. If something does not lock the sequence in a\nway that would prevent other xacts to do nextval() on it, it's not a\nchange that would change the relfilenode - and so it does not switch the\nsequence into a transactional mode.\n\n> But I did find one somewhat-similar case in which that's not so.\n> \n> S1: create table withseq (a bigint generated always as identity);\n> S1: begin;\n> S2: select nextval('withseq_a_seq');\n> S1: alter table withseq set unlogged;\n> S2: select nextval('withseq_a_seq');\n> \n> I think this is a bug in the code that supports owned sequences rather\n> than a problem that this patch should have to do something about. When\n> a sequence is flipped between logged and unlogged directly, we take a\n> stronger lock than we do here when it's done in this indirect way.\n\nYes, I think this is a bug in handling of owned sequences - from the\nmoment the \"ALTER TABLE ... 
SET UNLOGGED\" is executed, the two sessions\ngenerate duplicate values (until the S1 is committed, at which point the\nvalues generated in S2 get \"forgotten\").\n\nIt seems we end up updating both relfilenodes, which is clearly wrong.\n\nSeems like a bug independent of the decoding, IMO.\n\n> Also, I'm not quite sure if it would pose a problem for sequence\n> decoding anyway: it changes the relfilenode, but not the value. But\n> this is the *kind* of problem that could make the approach unsafe:\n> supposedly transactional changes being interleaved with supposedly\n> non-transactional changes, in such a way that the non-transactional\n> changes might get applied at the wrong time relative to the\n> transactional changes.\n> \n\nI'm not sure what you mean by \"changes relfilenode, not value\" but I\nsuspect it might break the sequence decoding - or at least confuse it. I\nhaven't checked what exactly happens when we change logged/unlogged for\na sequence, but I assume it does change the relfilenode, which already\nis a change of a value - we WAL-log the new sequence state, at least.\nBut it should be treated as \"transactional\" in the transaction that did\nthe ALTER TABLE, because it created the relfilenode.\n\nHowever, I'm not sure this is a valid argument against the sequence\ndecoding patch. If something does not acquire the correct lock, it's not\nsurprising something else breaks, if it relies on the lock.\n\nOf course, I understand you're trying to make a broader point - that if\nsomething like this could happen in a \"correct\" case, it'd be a problem.\n\nBut I don't think that's possible. The whole \"transactional\" thing is\ndetermined by having a new relfilenode for the sequence, and I can't\nimagine a case where we could assign a new relfilenode without a lock.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 27 Jan 2024 20:37:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Sun, Jan 28, 2024 at 1:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> Right, locks + apply in commit order gives us this guarantee (I can't\n> think of a case where it wouldn't be the case).\n\nI couldn't find any cases of inadequate locking other than the one I mentioned.\n\n> Doesn't the whole logical replication critically depend on the commit\n> order? If you decide to arbitrarily reorder/delay the transactions, all\n> kinds of really bad things can happen. That's a generic problem, it\n> applies to all kinds of objects, not just sequences - a parallel apply\n> would need to detect this sort of dependencies (e.g. INSERT + DELETE of\n> the same key), and do something about it.\n\nYes, but here I'm not just talking about the commit order. I'm talking\nabout the order of applying non-transactional operations relative to\ncommits.\n\nConsider:\n\nT1: CREATE SEQUENCE s;\nT2: BEGIN;\nT2: SELECT nextval('s');\nT3: SELECT nextval('s');\nT2: ALTER SEQUENCE s INCREMENT 2;\nT2: SELECT nextval('s');\nT2: COMMIT;\n\nThe commit order is T1 < T3 < T2, but T3 makes no transactional\nchanges, so the commit order is really just T1 < T2. But it's\ncompletely wrong to say that all we need to do is apply T1 before we\napply T2. The correct order of application is:\n\n1. T1.\n2. T2's first nextval\n3. T3's nextval\n4. T2's transactional changes (i.e. the ALTER SEQUENCE INCREMENT and\nthe subsequent nextval)\n\nIn other words, the fact that some sequence changes are\nnon-transactional creates ordering hazards that don't exist if there\nare no non-transactional changes. So in that way, sequences are\ndifferent from table modifications, where applying the transactions in\norder of commit is all we need to do. Here we need to apply the\ntransactions in order of commit and also apply the non-transactional\nchanges at the right point in the sequence. Consider the following\nalternative apply sequence:\n\n1. T1.\n2. T2's transactional changes (i.e. 
the ALTER SEQUENCE INCREMENT and\nthe subsequent nextval)\n3. T3's nextval\n4. T2's first nextval\n\nThat's still in commit order. It's also wrong.\n\nImagine that you commit this patch and someone later wants to do\nparallel logical apply. So every time they finish decoding a\ntransaction, they stick it in a queue to be applied by the next\navailable worker. But, non-transactional changes are very simple, so\nwe just directly apply those in the main process. Well, kaboom! But\nnow this can happen with the above example.\n\n1. Decode T1. Add to queue for apply.\n2. Before the (idle) apply worker has a chance to pull T1 out of the\nqueue, decode the first nextval and try to apply it.\n\nOops. We're trying to apply a modification to a sequence that hasn't\nbeen created yet. I'm not saying that this kind of hypothetical is a\nreason not to commit the patch. But it seems like we're not on the\nsame page about what the ordering requirements are here. I'm just\nmaking the argument that those non-transactional operations actually\nact like mini-transactions. They need to happen at the right time\nrelative to the real transactions. A non-transactional operation needs\nto be applied after any transactions that commit before it is logged,\nand before any transactions that commit after it's logged.\n\n> Yes, I think this is a bug in handling of owned sequences - from the\n> moment the \"ALTER TABLE ... SET UNLOGGED\" is executed, the two sessions\n> generate duplicate values (until the S1 is committed, at which point the\n> values generated in S2 get \"forgotten\").\n>\n> It seems we end up updating both relfilenodes, which is clearly wrong.\n>\n> Seems like a bug independent of the decoding, IMO.\n\nYeah.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 13 Feb 2024 22:07:11 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 2/13/24 17:37, Robert Haas wrote:\n> On Sun, Jan 28, 2024 at 1:07 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> Right, locks + apply in commit order gives us this guarantee (I can't\n>> think of a case where it wouldn't be the case).\n> \n> I couldn't find any cases of inadequate locking other than the one I mentioned.\n> \n>> Doesn't the whole logical replication critically depend on the commit\n>> order? If you decide to arbitrarily reorder/delay the transactions, all\n>> kinds of really bad things can happen. That's a generic problem, it\n>> applies to all kinds of objects, not just sequences - a parallel apply\n>> would need to detect this sort of dependencies (e.g. INSERT + DELETE of\n>> the same key), and do something about it.\n> \n> Yes, but here I'm not just talking about the commit order. I'm talking\n> about the order of applying non-transactional operations relative to\n> commits.\n> \n> Consider:\n> \n> T1: CREATE SEQUENCE s;\n> T2: BEGIN;\n> T2: SELECT nextval('s');\n> T3: SELECT nextval('s');\n> T2: ALTER SEQUENCE s INCREMENT 2;\n> T2: SELECT nextval('s');\n> T2: COMMIT;\n> \n\nIt's not clear to me if you're talking about nextval() that happens to\ngenerate WAL, or nextval() covered by WAL generated by a previous call.\n\nI'm going to assume it's the former, i.e. nextval() that generated WAL\ndescribing the *next* sequence chunk, because without WAL there's\nnothing to apply and therefore no issue with T3 ordering.\n\nThe way I think about non-transactional sequence changes is as if they\nwere tiny transactions that happen \"fully\" (including commit) at the LSN\nwhere the LSN change is logged.\n\n\n> The commit order is T1 < T3 < T2, but T3 makes no transactional\n> changes, so the commit order is really just T1 < T2. But it's\n> completely wrong to say that all we need to do is apply T1 before we\n> apply T2. The correct order of application is:\n> \n> 1. T1.\n> 2. T2's first nextval\n> 3. T3's nextval\n> 4. 
T2's transactional changes (i.e. the ALTER SEQUENCE INCREMENT and\n> the subsequent nextval)\n> \n\nIs that quite true? If T3 generated WAL (for the nextval call), it will\nbe applied at that particular LSN. AFAIK that guarantees it happens\nafter the first T2 change (which is also non-transactional) and before\nthe transactional T2 change (because that creates a new relfilenode).\n\n> In other words, the fact that some sequence changes are\n> non-transactional creates ordering hazards that don't exist if there\n> are no non-transactional changes. So in that way, sequences are\n> different from table modifications, where applying the transactions in\n> order of commit is all we need to do. Here we need to apply the\n> transactions in order of commit and also apply the non-transactional\n> changes at the right point in the sequence. Consider the following\n> alternative apply sequence:\n> \n> 1. T1.\n> 2. T2's transactional changes (i.e. the ALTER SEQUENCE INCREMENT and\n> the subsequent nextval)\n> 3. T3's nextval\n> 4. T2's first nextval\n> \n> That's still in commit order. It's also wrong.\n> \n\nYes, this would be wrong. Thankfully the apply is not allowed to reorder\nthe changes like this, because that's not what \"non-transactional\" means\nin this context.\n\nIt does not mean we can arbitrarily reorder the changes, it only means\nthe changes are applied as if they were independent transactions (but in\nthe same order as they were executed originally). Both with respect to\nthe other non-transactional changes, and to \"commits\" of other stuff.\n\n(for serial apply, at least)\n\n> Imagine that you commit this patch and someone later wants to do\n> parallel logical apply. So every time they finish decoding a\n> transaction, they stick it in a queue to be applied by the next\n> available worker. But, non-transactional changes are very simple, so\n> we just directly apply those in the main process. Well, kaboom! 
But\n> now this can happen with the above example.\n> \n> 1. Decode T1. Add to queue for apply.\n> 2. Before the (idle) apply worker has a chance to pull T1 out of the\n> queue, decode the first nextval and try to apply it.\n> \n> Oops. We're trying to apply a modification to a sequence that hasn't\n> been created yet. I'm not saying that this kind of hypothetical is a\n> reason not to commit the patch. But it seems like we're not on the\n> same page about what the ordering requirements are here. I'm just\n> making the argument that those non-transactional operations actually\n> act like mini-transactions. They need to happen at the right time\n> relative to the real transactions. A non-transactional operation needs\n> to be applied after any transactions that commit before it is logged,\n> and before any transactions that commit after it's logged.\n> \n\nHow is this issue specific to sequences? AFAIK this is a general problem\nwith transactions that depend on each other. Consider for example this:\n\nT1: INSERT INTO t (id) VALUES (1);\nT2: DELETE FROM t WHERE id = 1;\n\nIf you parallelize this in a naive way, maybe T2 gets applied before T1.\nIn which case the DELETE won't find the row yet.\n\nThere are different ways to address this. You can detect this type of\nconflict (e.g. a DELETE that doesn't find a match), drain the\napply queue and retry the transaction. Or you may compare keysets of the\ntransactions and make sure the apply waits until the conflicting one\ngets fully applied first.\n\nAFAIK for sequences it's not any different, except the key we'd have to\ncompare is the sequence itself.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 14 Feb 2024 17:51:37 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 10:21 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> The way I think about non-transactional sequence changes is as if they\n> were tiny transactions that happen \"fully\" (including commit) at the LSN\n> where the LSN change is logged.\n\n100% this.\n\n> It does not mean we can arbitrarily reorder the changes, it only means\n> the changes are applied as if they were independent transactions (but in\n> the same order as they were executed originally). Both with respect to\n> the other non-transactional changes, and to \"commits\" of other stuff.\n\nRight, this is very important and I agree completely.\n\nI'm feeling more confident about this now that I heard you say that\nstuff -- this is really the key issue I've been worried about since I\nfirst looked at this, and I wasn't sure that you were in agreement,\nbut it sounds like you are. I think we should (a) fix the locking bug\nI found (but that can be independent of this patch) and (b) make sure\nthat this patch documents the points from the quoted material above so\nthat everyone who reads the code (and maybe tries to enhance it) is\nclear on what the assumptions are.\n\n(I haven't checked whether it documents that stuff or not. I'm just\nsaying it should, because I think it's a subtlety that someone might\nmiss.)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Feb 2024 09:46:30 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On 2/15/24 05:16, Robert Haas wrote:\n> On Wed, Feb 14, 2024 at 10:21 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> The way I think about non-transactional sequence changes is as if they\n>> were tiny transactions that happen \"fully\" (including commit) at the LSN\n>> where the LSN change is logged.\n> \n> 100% this.\n> \n>> It does not mean we can arbitrarily reorder the changes, it only means\n>> the changes are applied as if they were independent transactions (but in\n>> the same order as they were executed originally). Both with respect to\n>> the other non-transactional changes, and to \"commits\" of other stuff.\n> \n> Right, this is very important and I agree completely.\n> \n> I'm feeling more confident about this now that I heard you say that\n> stuff -- this is really the key issue I've been worried about since I\n> first looked at this, and I wasn't sure that you were in agreement,\n> but it sounds like you are. I think we should (a) fix the locking bug\n> I found (but that can be independent of this patch) and (b) make sure\n> that this patch documents the points from the quoted material above so\n> that everyone who reads the code (and maybe tries to enhance it) is\n> clear on what the assumptions are.\n> \n> (I haven't checked whether it documents that stuff or not. I'm just\n> saying it should, because I think it's a subtlety that someone might\n> miss.)\n> \n\nThanks for thinking about these issues with reordering events. 
Good that we\nseem to be in agreement and that you feel more confident about this.\nI'll check if there's a good place to document this.\n\nFor me, the part that I feel most uneasy about is the decoding while the\nsnapshot is still being built (and can flip to a consistent snapshot\nbetween the relfilenode creation and sequence change, confusing the\nlogic that decides which changes are transactional).\n\nIt seems \"a bit weird\" that we keep the \"simple\" logic that may\nend up with an incorrect \"non-transactional\" result, but then happens to\nwork fine because we immediately discard the change.\n\nBut it still feels better than the alternative, which requires us to\nstart decoding stuff (relfilenode creation) before a proper consistent\nsnapshot is built, which we didn't do before - or at least not in\nthis particular way. While I don't have a practical example where it\nwould cause trouble now, I have a nagging feeling it might easily cause\ntrouble in the future by making some new features harder to implement.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 15 Feb 2024 21:26:59 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Fri, Feb 16, 2024 at 1:57 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> For me, the part that I feel most uneasy about is the decoding while the\n> snapshot is still being built (and can flip to consistent snapshot\n> between the relfilenode creation and sequence change, confusing the\n> logic that decides which changes are transactional).\n>\n> It seems \"a bit weird\" that we either keep the \"simple\" logic that may\n> end up with incorrect \"non-transactional\" result, but happens to then\n> work fine because we immediately discard the change.\n>\n> But it still feels better than the alternative, which requires us to\n> start decoding stuff (relfilenode creation) before building a proper\n> snapshot is consistent, which we didn't do before - or at least not in\n> this particular way. While I don't have a practical example where it\n> would cause trouble now, I have a nagging feeling it might easily cause\n> trouble in the future by making some new features harder to implement.\n\nI don't understand the issues here well enough to comment. Is there a\ngood write-up someplace I can read to understand the design here?\n\nIs the rule that changes are transactional if and only if the current\ntransaction has assigned a new relfilenode to the sequence?\n\nWhy does the logic get confused if the state of the snapshot changes?\n\nMy naive reaction is that it kinda sounds like you're relying on two\ndifferent mistakes cancelling each other out, and that might be a bad\nidea, because maybe there's some situation where they don't. But I\ndon't understand the issue well enough to have an educated opinion at\nthis point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 10:30:01 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Thu, Dec 21, 2023 at 6:47 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 12/19/23 13:54, Christophe Pettus wrote:\n> > Hi,\n> >\n> > I wanted to hop in here on one particular issue:\n> >\n> >> On Dec 12, 2023, at 02:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much\n> >> better solution for distributed (esp. active-active) systems. But there\n> >> are important use cases that are likely to keep using regular sequences\n> >> (online upgrades of single-node instances, existing systems, ...).\n> >\n> > +1.\n> >\n> > Right now, the lack of sequence replication is a rather large\n> > foot-gun on logical replication upgrades. Copying the sequences\n> > over during the cutover period is doable, of course, but:\n> >\n> > (a) There's no out-of-the-box tooling that does it, so everyone has\n> > to write some scripts just for that one function.\n> >\n> > (b) It's one more thing that extends the cutover window.\n> >\n>\n> I agree it's an annoying gap for this use case. But if this is the only\n> use cases, maybe a better solution would be to provide such tooling\n> instead of adding it to the logical decoding?\n>\n> It might seem a bit strange if most data is copied by replication\n> directly, while sequences need special handling, ofc.\n>\n\nOne difference between the logical replication of tables and sequences\nis that we can guarantee with synchronous_commit (and\nsynchronous_standby_names) that after failover transactions data is\nreplicated or not whereas for sequences we can't guarantee that\nbecause of their non-transactional nature. Say, there are two\ntransactions T1 and T2, it is possible that T1's entire table data and\nsequence data are committed and replicated but T2's sequence data is\nreplicated. 
So, after failover to logical subscriber in such a case if\none routes T2 again to the new node as it was not successful\npreviously then it would needlessly perform the sequence changes\nagain. I don't know how much that matters but that would probably be the\ndifference between the replication of tables and sequences.\n\nI agree with your point above that for upgrades some tool like\npg_copysequence that provides a way to copy sequence data to\nsubscribers from the publisher would address the need.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 20 Feb 2024 11:24:53 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 10:30 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Is the rule that changes are transactional if and only if the current\n> transaction has assigned a new relfilenode to the sequence?\n\nYes, thats the rule.\n\n> Why does the logic get confused if the state of the snapshot changes?\n\nThe rule doesn't get changed, but the way this identification is\nimplemented at the decoding gets confused and assumes transactional as\nnon-transactional. The identification of whether the sequence is\ntransactional or not is implemented based on what WAL we have decoded\nfrom the particular transaction and whether we decode a particular WAL\nor not depends upon the snapshot state (it's about what we decode not\nnecessarily what we sent). So if the snapshot state changed the\nmid-transaction that means we haven't decoded the WAL which created a\nnew relfilenode but we will decode the WAL which is operating on the\nsequence. So here we will assume the change is non-transaction\nwhereas it was transactional because we did not decode some of the\nchanges of transaction which we rely on for identifying whether it is\ntransactional or not.\n\n\n> My naive reaction is that it kinda sounds like you're relying on two\n> different mistakes cancelling each other out, and that might be a bad\n> idea, because maybe there's some situation where they don't. But I\n> don't understand the issue well enough to have an educated opinion at\n> this point.\n\nI would say the first one is a mistake in identifying the\ntransactional as non-transactional during the decoding and that\nmistake happens only when we decode the transaction partially. But we\nnever stream the partially decoded transactions downstream which means\neven though we have made a mistake in decoding it, we are not\nstreaming it so our mistake is not getting converted into a real\nproblem. 
But again I agree there is a temporary wrong decision and if\nwe try to do something else based on this decision then it could be an\nissue.\n\nYou might be interested in more detail [1] where I first reported this\nproblem and also [2] where we concluded why this is not creating a\nreal problem.\n\n[1] https://www.postgresql.org/message-id/CAFiTN-vAx-Y%2B19ROKOcWnGf7ix2VOTUebpzteaGw9XQyCAeK6g%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAFiTN-sYpyUBabxopJysqH3DAp4OZUCTi6m_qtgt8d32vDcWSA%40mail.gmail.com\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:32:36 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 1:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> You might be interested in more detail [1] where I first reported this\n> problem and also [2] where we concluded why this is not creating a\n> real problem.\n>\n> [1] https://www.postgresql.org/message-id/CAFiTN-vAx-Y%2B19ROKOcWnGf7ix2VOTUebpzteaGw9XQyCAeK6g%40mail.gmail.com\n> [2] https://www.postgresql.org/message-id/CAFiTN-sYpyUBabxopJysqH3DAp4OZUCTi6m_qtgt8d32vDcWSA%40mail.gmail.com\n\nThanks. Dilip and I just spent a lot of time talking this through on a\ncall. One of the key bits of logic is here:\n\n+ /* Skip the change if already processed (per the snapshot). */\n+ if (transactional &&\n+ !SnapBuildProcessChange(builder, xid, buf->origptr))\n+ return;\n+ else if (!transactional &&\n+ (SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\n+ SnapBuildXactNeedsSkip(builder, buf->origptr)))\n+ return;\n\nAs a stylistic note, I think this would be mode clear if it were\nwritten if (transactional) { if (!SnapBuildProcessChange()) return; }\nelse { if (something else) return; }.\n\nNow, on to correctness. It's possible for us to identify a\ntransactional change as non-transactional if smgr_decode() was called\nfor the relfilenode before SNAPBUILD_FULL_SNAPSHOT was reached. In\nthat case, if !SnapBuildProcessChange() would have been true, then we\nneed SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT ||\nSnapBuildXactNeedsSkip(builder, buf->origptr) to also be true.\nOtherwise, we'll process this change when we wouldn't have otherwise.\nBut Dilip made an argument to me about this which seems correct to me.\nsnapbuild.h says that SNAPBUILD_CONSISTENT is reached only when we\nfind a point where any transaction that was running at the time we\nreached SNAPBUILD_FULL_SNAPSHOT have finished. 
So if this transaction\nis one for which we incorrectly identified the sequence change as\nnon-transactional, then we cannot be in the SNAPBUILD_CONSISTENT state\nyet, so SnapBuildCurrentState(builder) != SNAPBUILD_CONSISTENT will be\ntrue and hence the whole \"or\" condition will be true and we'll return. So\nfar, so good.\n\nI think, anyway. I haven't comprehensively verified that the comment\nin snapbuild.h accurately reflects what the code actually does. But if\nit does, then presumably we shouldn't see a record for which we might\nhave mistakenly identified a change as non-transactional after\nreaching SNAPBUILD_CONSISTENT, which seems to be good enough to\nguarantee that the mistake won't matter.\n\nHowever, the logic in smgr_decode() doesn't only care about the\nsnapshot state. It also cares about the fast-forward flag:\n\n+ if (SnapBuildCurrentState(builder) < SNAPBUILD_FULL_SNAPSHOT ||\n+ ctx->fast_forward)\n+ return;\n\nLet's say fast_forward is true. Then smgr_decode() is going to skip\nrecording anything about the relfilenode, so we'll identify all\nsequence changes as non-transactional. But look at how this case is\nhandled in seq_decode():\n\n+ if (ctx->fast_forward)\n+ {\n+ /*\n+ * We need to set processing_required flag to notify the sequence\n+ * change existence to the caller. Usually, the flag is set when\n+ * either the COMMIT or ABORT records are decoded, but this must be\n+ * turned on here because the non-transactional logical message is\n+ * decoded without waiting for these records.\n+ */\n+ if (!transactional)\n+ ctx->processing_required = true;\n+\n+ return;\n+ }\n\nThis seems suspicious. Why are we testing the transactional flag here\nif it's guaranteed to be false? My guess is that the person who wrote\nthis code thought that the flag would be accurate even in this case,\nbut that doesn't seem to be true.
So this case probably needs some\nmore thought.\n\nIt's definitely not great that this logic is so complicated; it's\nreally hard to verify that all the tests match up well enough to keep\nus out of trouble.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 15:38:23 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 3:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Let's say fast_forward is true. Then smgr_decode() is going to skip\n> recording anything about the relfilenode, so we'll identify all\n> sequence changes as non-transactional. But look at how this case is\n> handled in seq_decode():\n>\n> + if (ctx->fast_forward)\n> + {\n> + /*\n> + * We need to set processing_required flag to notify the sequence\n> + * change existence to the caller. Usually, the flag is set when\n> + * either the COMMIT or ABORT records are decoded, but this must be\n> + * turned on here because the non-transactional logical message is\n> + * decoded without waiting for these records.\n> + */\n> + if (!transactional)\n> + ctx->processing_required = true;\n> +\n> + return;\n> + }\n\nIt appears that the 'processing_required' flag was introduced as part\nof supporting upgrades for logical replication slots. Its purpose is\nto determine whether a slot is fully caught up, meaning that there are\nno pending decodable changes left before it can be upgraded.\n\nSo now if some change was transactional but we have identified it as\nnon-transaction then we will mark this flag 'ctx->processing_required\n= true;' so we temporarily set this flag incorrectly, but even if the\nflag would have been correctly identified initially, it would have\nbeen set again to true in the DecodeTXNNeedSkip() function regardless\nof whether the transaction is committed or aborted. As a result, the\nflag would eventually be set to 'true', and the behavior would align\nwith the intended logic.\n\nBut I am wondering why this flag is always set to true in\nDecodeTXNNeedSkip() irrespective of the commit or abort. Because the\naborted transactions are not supposed to be replayed? 
So if my\nobservation is correct that for the aborted transaction, this\nshouldn't be set to true then we have a problem with sequence where we\nare identifying the transactional changes as non-transactional changes\nbecause now for transactional changes this should depend upon commit\nstatus.\n\nOn another thought, can there be a situation where we have identified\nthe change wrongly as non-transactional and set this flag, and the\ncommit/abort record never appeared in the WAL so was never decoded? That\ncan also lead to an incorrect decision during the upgrade.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Feb 2024 16:32:03 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "\n\nOn 2/20/24 06:54, Amit Kapila wrote:\n> On Thu, Dec 21, 2023 at 6:47 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 12/19/23 13:54, Christophe Pettus wrote:\n>>> Hi,\n>>>\n>>> I wanted to hop in here on one particular issue:\n>>>\n>>>> On Dec 12, 2023, at 02:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n>>>> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much\n>>>> better solution for distributed (esp. active-active) systems. But there\n>>>> are important use cases that are likely to keep using regular sequences\n>>>> (online upgrades of single-node instances, existing systems, ...).\n>>>\n>>> +1.\n>>>\n>>> Right now, the lack of sequence replication is a rather large\n>>> foot-gun on logical replication upgrades. Copying the sequences\n>>> over during the cutover period is doable, of course, but:\n>>>\n>>> (a) There's no out-of-the-box tooling that does it, so everyone has\n>>> to write some scripts just for that one function.\n>>>\n>>> (b) It's one more thing that extends the cutover window.\n>>>\n>>\n>> I agree it's an annoying gap for this use case. But if this is the only\n>> use cases, maybe a better solution would be to provide such tooling\n>> instead of adding it to the logical decoding?\n>>\n>> It might seem a bit strange if most data is copied by replication\n>> directly, while sequences need special handling, ofc.\n>>\n> \n> One difference between the logical replication of tables and sequences\n> is that we can guarantee with synchronous_commit (and\n> synchronous_standby_names) that after failover transactions data is\n> replicated or not whereas for sequences we can't guarantee that\n> because of their non-transactional nature. Say, there are two\n> transactions T1 and T2, it is possible that T1's entire table data and\n> sequence data are committed and replicated but T2's sequence data is\n> replicated. 
So, after failover to logical subscriber in such a case if\n> one routes T2 again to the new node as it was not successful\n> previously then it would needlessly perform the sequence changes\n> again. I don't how much that matters but that would probably be the\n> difference between the replication of tables and sequences.\n> \n\nI don't quite follow what the problem with synchronous_commit is :-(\n\nFor sequences, we log the changes ahead, i.e. even if nextval() did not\nwrite anything into WAL, it's still safe because these changes are\ncovered by the WAL generated some time ago (up to ~32 values back). And\nthat's certainly subject to synchronous_commit, right?\n\nThere certainly are issues with sequences and syncrep:\n\nhttps://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9@enterprisedb.com\n\nbut that's unrelated to logical replication.\n\nFWIW I don't think we'd re-apply sequence changes needlessly, because\nthe worker does update the origin after applying non-transactional\nchanges. So after the replication gets restarted, we'd skip what we\nalready applied, no?\n\nBut maybe there is an issue and I'm just not getting it. Could you maybe\nshare an example of T1/T2, with a replication restart and what you think\nwould happen?\n\n> I agree with your point above that for upgrades some tool like\n> pg_copysequence where we can provide a way to copy sequence data to\n> subscribers from the publisher would suffice the need.\n> \n\nPerhaps. Unfortunately it doesn't quite work for failovers, and it's yet\nanother tool users would need to use.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 20 Feb 2024 13:09:25 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 5:39 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/20/24 06:54, Amit Kapila wrote:\n> > On Thu, Dec 21, 2023 at 6:47 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> On 12/19/23 13:54, Christophe Pettus wrote:\n> >>> Hi,\n> >>>\n> >>> I wanted to hop in here on one particular issue:\n> >>>\n> >>>> On Dec 12, 2023, at 02:01, Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> >>>> - desirability of the feature: Random IDs (UUIDs etc.) are likely a much\n> >>>> better solution for distributed (esp. active-active) systems. But there\n> >>>> are important use cases that are likely to keep using regular sequences\n> >>>> (online upgrades of single-node instances, existing systems, ...).\n> >>>\n> >>> +1.\n> >>>\n> >>> Right now, the lack of sequence replication is a rather large\n> >>> foot-gun on logical replication upgrades. Copying the sequences\n> >>> over during the cutover period is doable, of course, but:\n> >>>\n> >>> (a) There's no out-of-the-box tooling that does it, so everyone has\n> >>> to write some scripts just for that one function.\n> >>>\n> >>> (b) It's one more thing that extends the cutover window.\n> >>>\n> >>\n> >> I agree it's an annoying gap for this use case. But if this is the only\n> >> use cases, maybe a better solution would be to provide such tooling\n> >> instead of adding it to the logical decoding?\n> >>\n> >> It might seem a bit strange if most data is copied by replication\n> >> directly, while sequences need special handling, ofc.\n> >>\n> >\n> > One difference between the logical replication of tables and sequences\n> > is that we can guarantee with synchronous_commit (and\n> > synchronous_standby_names) that after failover transactions data is\n> > replicated or not whereas for sequences we can't guarantee that\n> > because of their non-transactional nature. 
Say, there are two\n> > transactions T1 and T2, it is possible that T1's entire table data and\n> > sequence data are committed and replicated but T2's sequence data is\n> > replicated. So, after failover to logical subscriber in such a case if\n> > one routes T2 again to the new node as it was not successful\n> > previously then it would needlessly perform the sequence changes\n> > again. I don't how much that matters but that would probably be the\n> > difference between the replication of tables and sequences.\n> >\n>\n> I don't quite follow what the problem with synchronous_commit is :-(\n>\n> For sequences, we log the changes ahead, i.e. even if nextval() did not\n> write anything into WAL, it's still safe because these changes are\n> covered by the WAL generated some time ago (up to ~32 values back). And\n> that's certainly subject to synchronous_commit, right?\n>\n> There certainly are issues with sequences and syncrep:\n>\n> https://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9@enterprisedb.com\n>\n> but that's unrelated to logical replication.\n>\n> FWIW I don't think we'd re-apply sequence changes needlessly, because\n> the worker does update the origin after applying non-transactional\n> changes. So after the replication gets restarted, we'd skip what we\n> already applied, no?\n>\n\nIt will work for restarts but I was trying to discuss what happens in\nthe scenario after the publisher node goes down and we failover to the\nsubscriber node and make it a primary node (or a failover case). After\nthat, all unfinished transactions will be re-routed to the new\nprimary. Consider a theoretical case where we send sequence changes of\nthe yet uncommitted transactions directly from wal buffers (something\nlike 91f2cae7a4 does for physical replication) and then immediately\nthe primary or publisher node crashes. After failover to the\nsubscriber node, the application will re-route unfinished transactions\nto the new primary.
In such a situation, I think there is a chance\nthat we will update the sequence value when it would have already\nreceived/applied that update via replication. This is what I was\nsaying that there is probably a difference between tables and\nsequences, for tables such a replicated change would be rolled back.\nHaving said that, this is probably no different from what would happen\nin the case of physical replication.\n\n> But maybe there is an issue and I'm just not getting it. Could you maybe\n> share an example of T1/T2, with a replication restart and what you think\n> would happen?\n>\n> > I agree with your point above that for upgrades some tool like\n> > pg_copysequence where we can provide a way to copy sequence data to\n> > subscribers from the publisher would suffice the need.\n> >\n>\n> Perhaps. Unfortunately it doesn't quite work for failovers, and it's yet\n> another tool users would need to use.\n>\n\nBut can a logical replica be used for failover? We don't have any way to\nreplicate/sync the slots on subscribers, nor do we have a\nmechanism to replicate existing publications. I think if we want to\nachieve failover to a logical subscriber we need to replicate/sync the\nrequired logical and physical slots to the subscribers. I haven't\nthought through it completely so there would probably be more things\nto consider for allowing logical subscribers to be used as failover\ncandidates.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 10:36:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 10:21 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 2/13/24 17:37, Robert Haas wrote:\n>\n> > In other words, the fact that some sequence changes are\n> > non-transactional creates ordering hazards that don't exist if there\n> > are no non-transactional changes. So in that way, sequences are\n> > different from table modifications, where applying the transactions in\n> > order of commit is all we need to do. Here we need to apply the\n> > transactions in order of commit and also apply the non-transactional\n> > changes at the right point in the sequence. Consider the following\n> > alternative apply sequence:\n> >\n> > 1. T1.\n> > 2. T2's transactional changes (i.e. the ALTER SEQUENCE INCREMENT and\n> > the subsequent nextval)\n> > 3. T3's nextval\n> > 4. T2's first nextval\n> >\n> > That's still in commit order. It's also wrong.\n> >\n>\n> Yes, this would be wrong. Thankfully the apply is not allowed to reorder\n> the changes like this, because that's not what \"non-transactional\" means\n> in this context.\n>\n> It does not mean we can arbitrarily reorder the changes, it only means\n> the changes are applied as if they were independent transactions (but in\n> the same order as they were executed originally).\n>\n\nIn this regard, I have another scenario in mind where the apply order\ncould be different for the changes in the same transactions. For\nexample,\n\nTransaction T1\nBegin;\nInsert ..\nInsert ..\nnextval .. --consider this generates WAL\n..\nInsert ..\nnextval .. --consider this generates WAL\n\nIn this case, if the nextval operations will be applied in a different\norder (aka before Inserts) then there could be some inconsistency.\nSay, if, it doesn't follow the above order during apply then a trigger\nfired on both pub and sub for each row insert that refers to the\ncurrent sequence value to make some decision could have different\nbehavior on publisher and subscriber. 
If this is not how the patch\nwill behave then fine but otherwise, isn't this something that we\nshould be worried about?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 21 Feb 2024 12:06:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Tue, Feb 20, 2024 at 4:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Feb 20, 2024 at 3:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > Let's say fast_forward is true. Then smgr_decode() is going to skip\n> > recording anything about the relfilenode, so we'll identify all\n> > sequence changes as non-transactional. But look at how this case is\n> > handled in seq_decode():\n> >\n> > + if (ctx->fast_forward)\n> > + {\n> > + /*\n> > + * We need to set processing_required flag to notify the sequence\n> > + * change existence to the caller. Usually, the flag is set when\n> > + * either the COMMIT or ABORT records are decoded, but this must be\n> > + * turned on here because the non-transactional logical message is\n> > + * decoded without waiting for these records.\n> > + */\n> > + if (!transactional)\n> > + ctx->processing_required = true;\n> > +\n> > + return;\n> > + }\n>\n> It appears that the 'processing_required' flag was introduced as part\n> of supporting upgrades for logical replication slots. Its purpose is\n> to determine whether a slot is fully caught up, meaning that there are\n> no pending decodable changes left before it can be upgraded.\n>\n> So now if some change was transactional but we have identified it as\n> non-transaction then we will mark this flag 'ctx->processing_required\n> = true;' so we temporarily set this flag incorrectly, but even if the\n> flag would have been correctly identified initially, it would have\n> been set again to true in the DecodeTXNNeedSkip() function regardless\n> of whether the transaction is committed or aborted. As a result, the\n> flag would eventually be set to 'true', and the behavior would align\n> with the intended logic.\n>\n> But I am wondering why this flag is always set to true in\n> DecodeTXNNeedSkip() irrespective of the commit or abort. Because the\n> aborted transactions are not supposed to be replayed? 
So if my\n> observation is correct that for the aborted transaction, this\n> shouldn't be set to true then we have a problem with sequence where we\n> are identifying the transactional changes as non-transaction changes\n> because now for transactional changes this should depend upon commit\n> status.\n\nI have checked this case with Amit Kapila. So it seems in the cases\nwhere we have sent the prepared transaction or streamed in-progress\ntransaction we would need to send the abort also, and for that reason,\nwe are setting 'ctx->processing_required' as true so that if these\nWALs are not streamed we do not allow upgrade of such slots.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 13:05:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 1:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > But I am wondering why this flag is always set to true in\n> > DecodeTXNNeedSkip() irrespective of the commit or abort. Because the\n> > aborted transactions are not supposed to be replayed? So if my\n> > observation is correct that for the aborted transaction, this\n> > shouldn't be set to true then we have a problem with sequence where we\n> > are identifying the transactional changes as non-transaction changes\n> > because now for transactional changes this should depend upon commit\n> > status.\n>\n> I have checked this case with Amit Kapila. So it seems in the cases\n> where we have sent the prepared transaction or streamed in-progress\n> transaction we would need to send the abort also, and for that reason,\n> we are setting 'ctx->processing_required' as true so that if these\n> WALs are not streamed we do not allow upgrade of such slots.\n\nI don't find this explanation clear enough for me to understand.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 13:23:57 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 1:24 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Feb 21, 2024 at 1:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > But I am wondering why this flag is always set to true in\n> > > DecodeTXNNeedSkip() irrespective of the commit or abort. Because the\n> > > aborted transactions are not supposed to be replayed? So if my\n> > > observation is correct that for the aborted transaction, this\n> > > shouldn't be set to true then we have a problem with sequence where we\n> > > are identifying the transactional changes as non-transaction changes\n> > > because now for transactional changes this should depend upon commit\n> > > status.\n> >\n> > I have checked this case with Amit Kapila. So it seems in the cases\n> > where we have sent the prepared transaction or streamed in-progress\n> > transaction we would need to send the abort also, and for that reason,\n> > we are setting 'ctx->processing_required' as true so that if these\n> > WALs are not streamed we do not allow upgrade of such slots.\n>\n> I don't find this explanation clear enough for me to understand.\n\n\nExplanation about why we set 'ctx->processing_required' to true from\nDecodeCommit as well as DecodeAbort:\n--------------------------------------------------------------------------------------------------------------------------------------------------\nFor upgrading logical replication slots, it's essential to ensure\nthese slots are completely synchronized with the subscriber. To\nidentify that we process all the pending WAL in 'fast_forward' mode to\nfind whether there is any decodable WAL or not. So in short any WAL\ntype that we stream to standby in normal mode (no fast_forward mode)\nis considered decodable and so is the abort WAL. That's the reason\nwhy at the end of the transaction commit/abort we need to set this\n'ctx->processing_required' to true i.e. 
there is some decodable WAL\nleft, so we can not upgrade this slot.\n\nWhy is the below check safe?\n> + if (ctx->fast_forward)\n> + {\n> + /*\n> + * We need to set processing_required flag to notify the sequence\n> + * change existence to the caller. Usually, the flag is set when\n> + * either the COMMIT or ABORT records are decoded, but this must be\n> + * turned on here because the non-transactional logical message is\n> + * decoded without waiting for these records.\n> + */\n> + if (!transactional)\n> + ctx->processing_required = true;\n> +\n> + return;\n> + }\n\nSo the problem is that we might consider the transaction change as\nnon-transaction and mark this flag as true. But what would have\nhappened if we would have identified it correctly as transactional?\nIn such cases, we wouldn't have set this flag here but then we would\nhave set this while processing the DecodeAbort/DecodeCommit, so the\nnet effect would be the same no? You may question what if the\nAbort/Commit WAL never appears in the WAL, but this flag is\nspecifically for the upgrade case, and in that case we have to do a\nclean shutdown so may not be an issue. But in the future, if we try\nto use 'ctx->processing_required' for something else where the clean\nshutdown is not guaranteed then this flag can be set incorrectly.\n\nI am not arguing that this is a perfect design but I am just making a\npoint about why it would work.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 14:43:01 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 2:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> So the problem is that we might consider the transaction change as\n> non-transaction and mark this flag as true.\n\nBut it's not \"might\" right? It's absolutely 100% certain that we will\nconsider that transaction's changes as non-transactional ... because\nwhen we're in fast-forward mode, the table of new relfilenodes is not\nbuilt, and so whenever we check whether any transaction made a new\nrelfilenode for this sequence, the answer will be no.\n\n> But what would have\n> happened if we would have identified it correctly as transactional?\n> In such cases, we wouldn't have set this flag here but then we would\n> have set this while processing the DecodeAbort/DecodeCommit, so the\n> net effect would be the same no? You may question what if the\n> Abort/Commit WAL never appears in the WAL, but this flag is\n> specifically for the upgrade case, and in that case we have to do a\n> clean shutdown so may not be an issue. But in the future, if we try\n> to use 'ctx->processing_required' for something else where the clean\n> shutdown is not guaranteed then this flag can be set incorrectly.\n>\n> I am not arguing that this is a perfect design but I am just making a\n> point about why it would work.\n\nEven if this argument is correct (and I don't know if it is), the code\nand comments need some updating. We should not be testing a flag that\nis guaranteed false with comments that make it sound like the value of\nthe flag is trustworthy when it isn't.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 14:52:00 +0530",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "On Wed, Feb 21, 2024 at 2:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Feb 21, 2024 at 2:43 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > So the problem is that we might consider the transaction change as\n> > non-transaction and mark this flag as true.\n>\n> But it's not \"might\" right? It's absolutely 100% certain that we will\n> consider that transaction's changes as non-transactional ... because\n> when we're in fast-forward mode, the table of new relfilenodes is not\n> built, and so whenever we check whether any transaction made a new\n> relfilenode for this sequence, the answer will be no.\n>\n> > But what would have\n> > happened if we would have identified it correctly as transactional?\n> > In such cases, we wouldn't have set this flag here but then we would\n> > have set this while processing the DecodeAbort/DecodeCommit, so the\n> > net effect would be the same no? You may question what if the\n> > Abort/Commit WAL never appears in the WAL, but this flag is\n> > specifically for the upgrade case, and in that case we have to do a\n> > clean shutdown so may not be an issue. But in the future, if we try\n> > to use 'ctx->processing_required' for something else where the clean\n> > shutdown is not guaranteed then this flag can be set incorrectly.\n> >\n> > I am not arguing that this is a perfect design but I am just making a\n> > point about why it would work.\n>\n> Even if this argument is correct (and I don't know if it is), the code\n> and comments need some updating. We should not be testing a flag that\n> is guaranteed false with comments that make it sound like the value of\n> the flag is trustworthy when it isn't.\n\n+1\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 21 Feb 2024 15:25:13 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
},
{
"msg_contents": "Hi,\n\nLet me share a bit of an update regarding this patch and PG17. I have\ndiscussed this patch and how to move it forward with a couple hackers\n(both within EDB and outside), and my takeaway is that the patch is not\nquite baked yet, not enough to make it into PG17 :-(\n\nThere are two main reasons / concerns leading to this conclusion:\n\n* correctness of the decoding part\n\nThere are (were) doubts about decoding during startup, before the\nsnapshot gets consistent, when we can get \"temporarily incorrect\"\ndecisions whether a change is transactional. While the behavior is\nultimately correct (we treat all changes as non-transactional and\ndiscard it), it seems \"dirty\" and it’s unclear to me if it might cause\nmore serious issues down the line (not necessarily bugs, but perhaps\nmaking it harder to implement future changes).\n\n* handling of sequences in built-in replication\n\nPer the patch, sequences need to be added to the publication explicitly.\nBut there were suggestions we might (should) add certain sequences\nautomatically - e.g. sequences backing SERIAL/BIGSERIAL columns, etc.\nI’m not sure we really want to do that, and so far I assumed we would\nstart with the manual approach and move to automatic addition in the\nfuture. But the agreement seems to be it would be a pretty significant\n\"breaking change\", and something we probably don’t want to do.\n\n\nIf someone feels has an opinion on either of the two issues (in either\nway), I'd like to hear it.\n\n\nObviously, I'm not particularly happy about this outcome. And I'm also\nsomewhat cautious because this patch was already committed+reverted in\nPG16 cycle, and doing the same thing in PG17 is not on my wish list.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 6 Mar 2024 18:34:15 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: logical decoding and replication of sequences, take 2"
}
] |
[
{
"msg_contents": "Dear team,\n\nWe are facing issues during installation of postgresql at our environment.\n\nThis command completed with no errors.....\n\ndnf install -y\nhttps://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n\nThen we ran this command....\n\ndnf install postgresql14-server-14.5-1PGDG.rhel8.x86_64.rpm\npostgresql14-14.5-1PGDG.rhel8.x86_64.rpm\npostgresql14-libs-14.5-1PGDG.rhel8.x86_64.rpm\n\nand got the following messages\n\nUpdating Subscription Management repositories.\n\nThis system is registered to Red Hat Subscription Management, but is not\nreceiving updates. You can use subscription-manager to assign subscriptions.\n\nLast metadata expiration check: 0:02:07 ago on Thu 18 Aug 2022 03:12:32 PM\nEDT.\nError:\n Problem 1: cannot install the best candidate for the job\n - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n Problem 2: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\npostgresql14(x86-64) = 14.5-1PGDG.rhel8, but none of the providers can be\ninstalled\n - cannot install the best candidate for the job\n - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to\nuse not only best candidate packages)\n*Again tried `dnf update` and `dnf install -y postgresql14-server` , but\nstill stuck with below error:*\n\n# dnf update\nUpdating Subscription Management repositories.\n\nThis system is registered to Red Hat Subscription Management, but is not\nreceiving updates. 
You can useions.\n\nLast metadata expiration check: 0:42:29 ago on Thu 18 Aug 2022 04:42:25 PM\nEDT.\nDependencies resolved.\n=======================================================================================================\n Package Architecture\n Version\n=======================================================================================================\nUpgrading:\n pg_qualstats_13 x86_64\n 2.0.4-1.rhel8\n postgresql13 x86_64\n 13.8-1PGDG.rhel8\n postgresql13-contrib x86_64\n 13.8-1PGDG.rhel8\n postgresql13-libs x86_64\n 13.8-1PGDG.rhel8\n postgresql13-server x86_64\n 13.8-1PGDG.rhel8\n\nTransaction Summary\n=======================================================================================================\nUpgrade 5 Packages\n\nTotal download size: 8.1 M\nIs this ok [y/N]: y\nDownloading Packages:\n(1/5): pg_qualstats_13-2.0.4-1.rhel8.x86_64.rpm\n(2/5): postgresql13-contrib-13.8-1PGDG.rhel8.x86_64.rpm\n(3/5): postgresql13-13.8-1PGDG.rhel8.x86_64.rpm\n(4/5): postgresql13-libs-13.8-1PGDG.rhel8.x86_64.rpm\n(5/5): postgresql13-server-13.8-1PGDG.rhel8.x86_64.rpm\n-------------------------------------------------------------------------------------------------------\nTotal\nRunning transaction check\nTransaction check succeeded.\nRunning transaction test\nTransaction test succeeded.\nRunning transaction\n Preparing :\n Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n Upgrading : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n Upgrading : postgresql13-13.8-1PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-13.8-1PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64\n Upgrading : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64\n Upgrading : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n Running scriptlet: pg_qualstats_13-2.0.4-1.rhel8.x86_64\n Upgrading : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n 
Cleanup : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n Cleanup : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n Running scriptlet: pg_qualstats_13-2.0.3-1.rhel8.x86_64\n Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n Cleanup : postgresql13-server-13.1-3PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n Cleanup : postgresql13-13.1-3PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-13.1-3PGDG.rhel8.x86_64\n Cleanup : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n Running scriptlet: postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n Verifying : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n Verifying : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n Verifying : postgresql13-13.8-1PGDG.rhel8.x86_64\n Verifying : postgresql13-13.1-3PGDG.rhel8.x86_64\n Verifying : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n Verifying : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n Verifying : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n Verifying : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n Verifying : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n Verifying : postgresql13-server-13.1-3PGDG.rhel8.x86_64\nInstalled products updated.\n\nUpgraded:\n pg_qualstats_13-2.0.4-1.rhel8.x86_64\n postgresql13-13.8-1PGDG.rhel8.x86_64 postgre\n postgresql13-libs-13.8-1PGDG.rhel8.x86_64\npostgresql13-server-13.8-1PGDG.rhel8.x86_64\n\nComplete!\n\n\n\n\n# dnf install -y postgresql14-server\nUpdating Subscription Management repositories.\n\nThis system is registered to Red Hat Subscription Management, but is not\nreceiving updates. 
You can useions.\n\nLast metadata expiration check: 0:42:56 ago on Thu 18 Aug 2022 04:42:25 PM\nEDT.\nError:\n Problem: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\npostgresql14(x86-64) = 14.5-1PGD installed\n - cannot install the best candidate for the job\n - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to\nuse not only best candidate",
"msg_date": "Fri, 19 Aug 2022 03:39:29 +0530",
"msg_from": "kavya chandren <kavyachandren@gmail.com>",
"msg_from_op": true,
"msg_subject": "Issue in postgresql installation - Target version Postgresql 14."
},
{
"msg_contents": "\nFirst, this is not an appropriate question for hackers. Second, this is\na question for the package manager where you got the pre-built software.\n\n---------------------------------------------------------------------------\n\nOn Fri, Aug 19, 2022 at 03:39:29AM +0530, kavya chandren wrote:\n> Dear team,\n> \n> We are facing issues during installation of postgresql at our environment.\n> \n> \n> This command completed with no errors.....\n> \n> dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/\n> EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n> \n> Then we ran this command....\n> \n> dnf install postgresql14-server-14.5-1PGDG.rhel8.x86_64.rpm\n> postgresql14-14.5-1PGDG.rhel8.x86_64.rpm\n> postgresql14-libs-14.5-1PGDG.rhel8.x86_64.rpm\n> \n> and got the following messages\n> \n> Updating Subscription Management repositories.\n> \n> This system is registered to Red Hat Subscription Management, but is not\n> receiving updates. You can use subscription-manager to assign subscriptions.\n> \n> Last metadata expiration check: 0:02:07 ago on Thu 18 Aug 2022 03:12:32 PM EDT.\n> Error:\n> Problem 1: cannot install the best candidate for the job\n> - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> Problem 2: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\n> postgresql14(x86-64) = 14.5-1PGDG.rhel8, but none of the providers can be\n> installed\n> - cannot install the best candidate for the job\n> - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use\n> not only best candidate packages)\n> \n> Again tried `dnf update` and `dnf install -y postgresql14-server` , but still\n> stuck with below error:\n> \n> \n> # dnf update\n> Updating Subscription Management repositories.\n> \n> This system is registered to Red Hat Subscription Management, but is not\n> receiving updates. 
You can useions.\n> \n> Last metadata expiration check: 0:42:29 ago on Thu 18 Aug 2022 04:42:25 PM EDT.\n> Dependencies resolved.\n> ===============================================================================\n> ========================\n> Package Architecture Version\n> ===============================================================================\n> ========================\n> Upgrading:\n> pg_qualstats_13 x86_64 \n> 2.0.4-1.rhel8\n> postgresql13 x86_64 \n> 13.8-1PGDG.rhel8\n> postgresql13-contrib x86_64 \n> 13.8-1PGDG.rhel8\n> postgresql13-libs x86_64 \n> 13.8-1PGDG.rhel8\n> postgresql13-server x86_64 \n> 13.8-1PGDG.rhel8\n> \n> Transaction Summary\n> ===============================================================================\n> ========================\n> Upgrade 5 Packages\n> \n> Total download size: 8.1 M\n> Is this ok [y/N]: y\n> Downloading Packages:\n> (1/5): pg_qualstats_13-2.0.4-1.rhel8.x86_64.rpm\n> (2/5): postgresql13-contrib-13.8-1PGDG.rhel8.x86_64.rpm\n> (3/5): postgresql13-13.8-1PGDG.rhel8.x86_64.rpm\n> (4/5): postgresql13-libs-13.8-1PGDG.rhel8.x86_64.rpm\n> (5/5): postgresql13-server-13.8-1PGDG.rhel8.x86_64.rpm\n> -------------------------------------------------------------------------------------------------------\n> Total\n> Running transaction check\n> Transaction check succeeded.\n> Running transaction test\n> Transaction test succeeded.\n> Running transaction\n> Preparing :\n> Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> Upgrading : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> Upgrading : postgresql13-13.8-1PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-13.8-1PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> Upgrading : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> Upgrading : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> Running scriptlet: 
pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> Upgrading : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n> Cleanup : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n> Cleanup : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> Running scriptlet: pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> Cleanup : postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> Cleanup : postgresql13-13.1-3PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-13.1-3PGDG.rhel8.x86_64\n> Cleanup : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> Running scriptlet: postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> Verifying : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> Verifying : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> Verifying : postgresql13-13.8-1PGDG.rhel8.x86_64\n> Verifying : postgresql13-13.1-3PGDG.rhel8.x86_64\n> Verifying : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n> Verifying : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n> Verifying : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> Verifying : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> Verifying : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> Verifying : postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> Installed products updated.\n> \n> Upgraded:\n> pg_qualstats_13-2.0.4-1.rhel8.x86_64 \n> postgresql13-13.8-1PGDG.rhel8.x86_64 postgre\n> postgresql13-libs-13.8-1PGDG.rhel8.x86_64 \n> postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> \n> Complete!\n> \n> \n> \n> \n> # dnf install -y postgresql14-server\n> Updating Subscription Management repositories.\n> \n> This system is registered to Red Hat Subscription Management, but is not\n> receiving updates. 
You can useions.\n> \n> Last metadata expiration check: 0:42:56 ago on Thu 18 Aug 2022 04:42:25 PM EDT.\n> Error:\n> Problem: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\n> postgresql14(x86-64) = 14.5-1PGD installed\n> - cannot install the best candidate for the job\n> - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use\n> not only best candidate\n> \n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Fri, 19 Aug 2022 11:45:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Issue in postgresql installation - Target version Postgresql 14."
},
{
"msg_contents": "Dear Bruce Momjian,\n\nThanks for clarifying, got the issue resolved with installation of lz4\nindependently.\n\nOn Fri, Aug 19, 2022 at 9:16 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> First, this is not an appropriate question for hackers. Second, this is\n> a question for the package manager where you got the pre-built software.\n>\n> ---------------------------------------------------------------------------\n>\n> On Fri, Aug 19, 2022 at 03:39:29AM +0530, kavya chandren wrote:\n> > Dear team,\n> >\n> > We are facing issues during installation of postgresql at our\n> environment.\n> >\n> >\n> > This command completed with no errors.....\n> >\n> > dnf install -y https://download.postgresql.org/pub/repos/yum/reporpms/\n> > EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm\n> >\n> > Then we ran this command....\n> >\n> > dnf install postgresql14-server-14.5-1PGDG.rhel8.x86_64.rpm\n> > postgresql14-14.5-1PGDG.rhel8.x86_64.rpm\n> > postgresql14-libs-14.5-1PGDG.rhel8.x86_64.rpm\n> >\n> > and got the following messages\n> >\n> > Updating Subscription Management repositories.\n> >\n> > This system is registered to Red Hat Subscription Management, but is not\n> > receiving updates. 
You can use subscription-manager to assign\n> subscriptions.\n> >\n> > Last metadata expiration check: 0:02:07 ago on Thu 18 Aug 2022 03:12:32\n> PM EDT.\n> > Error:\n> > Problem 1: cannot install the best candidate for the job\n> > - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> > Problem 2: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\n> > postgresql14(x86-64) = 14.5-1PGDG.rhel8, but none of the providers can be\n> > installed\n> > - cannot install the best candidate for the job\n> > - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> > (try to add '--skip-broken' to skip uninstallable packages or '--nobest'\n> to use\n> > not only best candidate packages)\n> >\n> > Again tried `dnf update` and `dnf install -y postgresql14-server` , but\n> still\n> > stuck with below error:\n> >\n> >\n> > # dnf update\n> > Updating Subscription Management repositories.\n> >\n> > This system is registered to Red Hat Subscription Management, but is not\n> > receiving updates. 
You can useions.\n> >\n> > Last metadata expiration check: 0:42:29 ago on Thu 18 Aug 2022 04:42:25\n> PM EDT.\n> > Dependencies resolved.\n> >\n> ===============================================================================\n> > ========================\n> > Package Architecture\n> Version\n> >\n> ===============================================================================\n> > ========================\n> > Upgrading:\n> > pg_qualstats_13 x86_64\n> > 2.0.4-1.rhel8\n> > postgresql13 x86_64\n> > 13.8-1PGDG.rhel8\n> > postgresql13-contrib x86_64\n> > 13.8-1PGDG.rhel8\n> > postgresql13-libs x86_64\n> > 13.8-1PGDG.rhel8\n> > postgresql13-server x86_64\n> > 13.8-1PGDG.rhel8\n> >\n> > Transaction Summary\n> >\n> ===============================================================================\n> > ========================\n> > Upgrade 5 Packages\n> >\n> > Total download size: 8.1 M\n> > Is this ok [y/N]: y\n> > Downloading Packages:\n> > (1/5): pg_qualstats_13-2.0.4-1.rhel8.x86_64.rpm\n> > (2/5): postgresql13-contrib-13.8-1PGDG.rhel8.x86_64.rpm\n> > (3/5): postgresql13-13.8-1PGDG.rhel8.x86_64.rpm\n> > (4/5): postgresql13-libs-13.8-1PGDG.rhel8.x86_64.rpm\n> > (5/5): postgresql13-server-13.8-1PGDG.rhel8.x86_64.rpm\n> >\n> -------------------------------------------------------------------------------------------------------\n> > Total\n> > Running transaction check\n> > Transaction check succeeded.\n> > Running transaction test\n> > Transaction test succeeded.\n> > Running transaction\n> > Preparing :\n> > Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> > Upgrading : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> > Upgrading : postgresql13-13.8-1PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-13.8-1PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> > Upgrading : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> > Running scriptlet: 
postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> > Upgrading : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> > Running scriptlet: pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> > Upgrading : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n> > Cleanup : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n> > Cleanup : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> > Running scriptlet: pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> > Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> > Cleanup : postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> > Cleanup : postgresql13-13.1-3PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-13.1-3PGDG.rhel8.x86_64\n> > Cleanup : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> > Running scriptlet: postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> > Verifying : pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> > Verifying : pg_qualstats_13-2.0.3-1.rhel8.x86_64\n> > Verifying : postgresql13-13.8-1PGDG.rhel8.x86_64\n> > Verifying : postgresql13-13.1-3PGDG.rhel8.x86_64\n> > Verifying : postgresql13-contrib-13.8-1PGDG.rhel8.x86_64\n> > Verifying : postgresql13-contrib-13.1-3PGDG.rhel8.x86_64\n> > Verifying : postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> > Verifying : postgresql13-libs-13.1-3PGDG.rhel8.x86_64\n> > Verifying : postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> > Verifying : postgresql13-server-13.1-3PGDG.rhel8.x86_64\n> > Installed products updated.\n> >\n> > Upgraded:\n> > pg_qualstats_13-2.0.4-1.rhel8.x86_64\n> > postgresql13-13.8-1PGDG.rhel8.x86_64 postgre\n> > postgresql13-libs-13.8-1PGDG.rhel8.x86_64\n> > postgresql13-server-13.8-1PGDG.rhel8.x86_64\n> >\n> > Complete!\n> >\n> >\n> >\n> >\n> > # dnf install -y postgresql14-server\n> > Updating Subscription Management repositories.\n> >\n> > This system is registered to Red Hat Subscription Management, but is not\n> > receiving updates. 
You can useions.\n> >\n> > Last metadata expiration check: 0:42:56 ago on Thu 18 Aug 2022 04:42:25\n> PM EDT.\n> > Error:\n> > Problem: package postgresql14-server-14.5-1PGDG.rhel8.x86_64 requires\n> > postgresql14(x86-64) = 14.5-1PGD installed\n> > - cannot install the best candidate for the job\n> > - nothing provides lz4 needed by postgresql14-14.5-1PGDG.rhel8.x86_64\n> > (try to add '--skip-broken' to skip uninstallable packages or '--nobest'\n> to use\n> > not only best candidate\n> >\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Indecision is a decision. Inaction is an action. Mark Batterson\n>\n>",
"msg_date": "Fri, 19 Aug 2022 21:34:48 +0530",
"msg_from": "kavya chandren <kavyachandren@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Issue in postgresql installation - Target version Postgresql 14."
}
] |
[
{
"msg_contents": "Hi hackers,\n\nAttached is a patch proposal to allow the use of regular expressions for \nthe username in pg_hba.conf.\n\nUsing regular expressions for the username in the pg_hba.conf file is \nconvenient in situations where an organization has a large number of \nusers and needs an expressive way to map them.\n\nFor example, if an organization wants to allow gss connections only for \nusers having their principal, e.g. @BDTFOREST.LOCAL, they could make use \nof an entry in pg_hba.conf such as:\n\nhost all /^.*@BDTFOREST.LOCAL$ 0.0.0.0/0 gss\n\nWithout this patch, I can think of three alternatives with existing \nfunctionality, all of which have tradeoffs. These include:\n\n1) Create an entry per user: this is challenging for organizations \nmanaging large numbers of users (e.g. 1000s). This is also not dynamic, \ni.e. the HBA file would need to be updated when users are added or removed.\n\n2) Use a mapping in pg_ident.conf, for example:\n\nHere is an entry in pg_hba.conf that uses a map:\n\nhost all all 0.0.0.0/0 gss map=mygssmap\n\nand by defining this mapping in pg_ident.conf:\n\nmygssmap /^(.*)@BDTFOREST\\.LOCAL$ \\1@BDTFOREST.LOCAL\n\nThat works for filtering the username.\n\nLOG: connection authenticated: identity=\"bertrand@BDTFOREST.LOCAL\" \nmethod=gss (/pg_installed/data/pg_hba.conf:95)\n$ grep -n mygssmap /pg_installed/data/pg_hba.conf\n95:host all all 0.0.0.0/0 gss map=mygssmap\n\nHowever, the behavior is not the same for the ones that don’t match the \nmapping in pg_ident.conf: indeed the connection attempt stops here and \nthe next HBA line won’t be evaluated.\n\nFATAL: GSSAPI authentication failed for user \"bdt\"\nDETAIL: Connection matched pg_hba.conf line 95: \"host all \nall 0.0.0.0/0 gss map=mygssmap\"\n\n3) Make use of a role in pg_hba.conf, e.g. “+BDTONLY”. That would work \ntoo, and also allow the evaluation of the next HBA line for the ones \nthat are not part of the role.\n\nHowever:\n\n - That’s not as dynamic as the regular expression, as new users \nwould need to be granted the role and some users who are moving in the \ncompany may need to have the role revoked.\n - Looking at the regular expression in the HBA file makes it clear \nwhat filtering needs to be done. This is not obvious when looking at the \nrole, even if it has a meaningful name. This can generate “incorrect \nfiltering” should one user be granted the role by mistake, or make it \nmore difficult to debug why a user is not being matched to a particular \nline in the HBA file.\n\nThis is why I think username filtering with regular expressions would \nprovide its own advantages.\n\nThoughts? Looking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 19 Aug 2022 10:12:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Patch proposal: make use of regular expressions for the username in\n pg_hba.conf"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 1:13 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n> This is why I think username filtering with regular expressions would\n> provide its own advantages.\n>\n> Thoughts? Looking forward to your feedback,\n\nI think your motivation for the feature is solid. It is killing me a\nbit that this is making it easier to switch authentication methods\nbased on the role name, when I suspect what someone might really want\nis to switch authentication methods based on the ID the user is trying\nto authenticate with. But that's not your fault or problem to fix,\nbecause the startup packet doesn't currently have that information.\n(It does make me wonder whether I withdrew my PGAUTHUSER proposal [1]\na month too early. And man, do I wish that pg_ident and pg_hba were\none file.)\n\nI think you're going to have to address backwards compatibility\nconcerns. Today, I can create a role named \"/a\", and I can put that\ninto the HBA without quoting it. I'd be unamused if, after an upgrade,\nmy rule suddenly matched any role name containing an 'a'.\n\nSpeaking of partial matches, should this feature allow them? Maybe\nrules should have to match the entire username instead, and sidestep\nthe inevitable \"I forgot to anchor my regex\" problems?\n\nThanks,\n--Jacob\n\n[1] https://commitfest.postgresql.org/38/3314/\n\n\n",
"msg_date": "Thu, 8 Sep 2022 17:02:00 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Fri, Aug 19, 2022 at 1:13 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>> This is why I think username filtering with regular expressions would\n>> provide its own advantages.\n\n> I think your motivation for the feature is solid.\n\nYeah. I'm not sure that I buy the argument that this is more useful\nthan writing a role name and controlling things with GRANT ROLE, but\nit's a plausible alternative with properties that might win in some\nuse-cases. So I see little reason not to allow it.\n\nI'd actually ask why stop here? In particular, why not do the same\nwith the database-name column, especially since that does *not*\nhave the ability to use roles as a substitute for a wildcard entry?\n\n> I think you're going to have to address backwards compatibility\n> concerns. Today, I can create a role named \"/a\", and I can put that\n> into the HBA without quoting it. I'd be unamused if, after an upgrade,\n> my rule suddenly matched any role name containing an 'a'.\n\nMeh ... that concern seems overblown to me. I guess it's possible\nthat somebody has an HBA entry that looks like that, but it doesn't\nseem very plausible. Note that we made this exact same change in\npg_ident.conf years ago, and AFAIR we got zero complaints.\n\n> Speaking of partial matches, should this feature allow them? Maybe\n> rules should have to match the entire username instead, and sidestep\n> the inevitable \"I forgot to anchor my regex\" problems?\n\nI think the pg_ident.conf precedent is binding on us here. If we\nmake this one work differently, nobody's going to thank us for it,\nthey're just going to wonder \"did the left hand not know what the\nright hand already did?\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 20:46:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 9/9/22 2:46 AM, Tom Lane wrote:\n> Jacob Champion<jchampion@timescale.com> writes:\n>> On Fri, Aug 19, 2022 at 1:13 AM Drouvot, Bertrand<bdrouvot@amazon.com> wrote:\n>>> This is why I think username filtering with regular expressions would\n>>> provide its own advantages.\n>> I think your motivation for the feature is solid.\n> Yeah. I'm not sure that I buy the argument that this is more useful\n> than writing a role name and controlling things with GRANT ROLE, but\n> it's a plausible alternative with properties that might win in some\n> use-cases. So I see little reason not to allow it.\n\nThank you both for your feedback.\n\n> I'd actually ask why stop here? In particular, why not do the same\n> with the database-name column, especially since that does *not*\n> have the ability to use roles as a substitute for a wildcard entry?\n\nI think that's a fair point, I'll look at it.\n\n>> I think you're going to have to address backwards compatibility\n>> concerns. Today, I can create a role named \"/a\", and I can put that\n>> into the HBA without quoting it. I'd be unamused if, after an upgrade,\n>> my rule suddenly matched any role name containing an 'a'.\n> Meh ... that concern seems overblown to me. I guess it's possible\n> that somebody has an HBA entry that looks like that, but it doesn't\n> seem very plausible. 
Note that we made this exact same change in\n> pg_ident.conf years ago, and AFAIR we got zero complaints.\n>\nAgree that it seems unlikely but maybe we could add a new GUC to turn \nthe regex usage on the hba file on/off (and use off as the default)?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com\n\n\n\n\n\n\nHi,\n\nOn 9/9/22 2:46 AM, Tom Lane wrote:\n \n\nJacob Champion <jchampion@timescale.com> writes:\n\n\nOn Fri, Aug 19, 2022 at 1:13 AM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n\n\nThis is why I think username filtering with regular expressions would\nprovide its own advantages.\n\n\n\n\n\n\nI think your motivation for the feature is solid.\n\n\n\nYeah. I'm not sure that I buy the argument that this is more useful\nthan writing a role name and controlling things with GRANT ROLE, but\nit's a plausible alternative with properties that might win in some\nuse-cases. So I see little reason not to allow it.\n\nThank you both for your feedback.\n\n\n\nI'd actually ask why stop here? In particular, why not do the same\nwith the database-name column, especially since that does *not*\nhave the ability to use roles as a substitute for a wildcard entry?\n\nI think that's a fair point, I'll look at it.\n \n\n\nI think you're going to have to address backwards compatibility\nconcerns. Today, I can create a role named \"/a\", and I can put that\ninto the HBA without quoting it. I'd be unamused if, after an upgrade,\nmy rule suddenly matched any role name containing an 'a'.\n\n\n\nMeh ... that concern seems overblown to me. I guess it's possible\nthat somebody has an HBA entry that looks like that, but it doesn't\nseem very plausible. 
Note that we made this exact same change in\npg_ident.conf years ago, and AFAIR we got zero complaints.\n\n\n\nAgree that it seems unlikely but maybe we could add a new GUC to\n turn the regex usage on the hba file on/off (and use off as the\n default)?\n\n Regards,\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 9 Sep 2022 12:31:08 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com> writes:\n> Agree that it seems unlikely but maybe we could add a new GUC to turn \n> the regex usage on the hba file on/off (and use off as the default)?\n\nI think that will just add useless complication.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 10:19:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n> > I think you're going to have to address backwards compatibility\n> > concerns. Today, I can create a role named \"/a\", and I can put that\n> > into the HBA without quoting it. I'd be unamused if, after an upgrade,\n> > my rule suddenly matched any role name containing an 'a'.\n>\n> Meh ... that concern seems overblown to me. I guess it's possible\n> that somebody has an HBA entry that looks like that, but it doesn't\n> seem very plausible. Note that we made this exact same change in\n> pg_ident.conf years ago, and AFAIR we got zero complaints.\n\nWhat percentage of users actually use pg_ident maps? My assumption\nwould be that a change to pg_hba would affect many more people, but\nthen I don't have any proof that there are users with role names that\nlook like that to begin with. I won't pound the table with it.\n\n> > Speaking of partial matches, should this feature allow them? Maybe\n> > rules should have to match the entire username instead, and sidestep\n> > the inevitable \"I forgot to anchor my regex\" problems?\n>\n> I think the pg_ident.conf precedent is binding on us here. If we\n> make this one work differently, nobody's going to thank us for it,\n> they're just going to wonder \"did the left hand not know what the\n> right hand already did?\"\n\nHmm... yeah, I suppose. From the other direction, it'd be bad to train\nusers that unanchored regexes are safe in pg_hba only to take those\nguardrails off in pg_ident. I will tuck that away as a potential\nbehavior change, for a different thread.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 9 Sep 2022 15:05:18 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On 8/19/22 01:12, Drouvot, Bertrand wrote:\n> + wstr = palloc((strlen(tok->string + 1) + 1) * sizeof(pg_wchar)); \n> + wlen = pg_mb2wchar_with_len(tok->string + 1, \n> + wstr, strlen(tok->string + 1));\n\nThe (tok->string + 1) construction comes up often enough that I think it\nshould be put in a `regex` variable or similar. That would help my eyes\nwith the (strlen(tok->string + 1) + 1) construction, especially.\n\nI noticed that for pg_ident, we precompile the regexes per-line and\nreuse those in child processes. Whereas here we're compiling, using, and\nthen discarding the regex for each check. I think the example set by the\npg_ident code is probably the one to follow, unless you have a good\nreason not to.\n\n> +# Testing with regular expression for username \n> +reset_pg_hba($node, '/^.*md.*$', 'password'); \n> +test_role($node, 'md5_role', 'password from pgpass and regular expression for username', 0);\n> + \n\nIMO the coverage for this patch needs to be filled out. Negative test\ncases are more important than positive ones for security-related code.\n\nOther than that, and Tom's note on potentially expanding this to other\nareas, I think this is a pretty straightforward patch.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Fri, 9 Sep 2022 16:21:32 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 9/10/22 1:21 AM, Jacob Champion wrote:\n> On 8/19/22 01:12, Drouvot, Bertrand wrote:\n>> + wstr = palloc((strlen(tok->string + 1) + 1) * sizeof(pg_wchar));\n>> + wlen = pg_mb2wchar_with_len(tok->string + 1,\n>> + wstr, strlen(tok->string + 1));\n> The (tok->string + 1) construction comes up often enough that I think it\n> should be put in a `regex` variable or similar. That would help my eyes\n> with the (strlen(tok->string + 1) + 1) construction, especially.\n>\n> I noticed that for pg_ident, we precompile the regexes per-line and\n> reuse those in child processes. Whereas here we're compiling, using, and\n> then discarding the regex for each check. I think the example set by the\n> pg_ident code is probably the one to follow, unless you have a good\n> reason not to.\n\nThanks for the feedback.\n\nYeah fully agree. I'll provide a new version that follow the same logic \nas the pg_ident code.\n\n>> +# Testing with regular expression for username\n>> +reset_pg_hba($node, '/^.*md.*$', 'password');\n>> +test_role($node, 'md5_role', 'password from pgpass and regular expression for username', 0);\n>> +\n> IMO the coverage for this patch needs to be filled out. 
Negative test\n> cases are more important than positive ones for security-related code.\n\nAgree, will do.\n\n>\n> Other than that, and Tom's note on potentially expanding this to other\n> areas,\n\nI'll add regexp usage for the database column and also the for the \naddress one when non CIDR is provided (so host name(s)) (I think it also \nmakes sense specially as we don't allow multiple values for this column).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services:https://aws.amazon.com\n\n\n\n\n\n\nHi,\n\nOn 9/10/22 1:21 AM, Jacob Champion\n wrote:\n \n\n\nOn 8/19/22 01:12, Drouvot, Bertrand wrote:\n\n\n+ wstr = palloc((strlen(tok->string + 1) + 1) * sizeof(pg_wchar));\n+ wlen = pg_mb2wchar_with_len(tok->string + 1,\n+ wstr, strlen(tok->string + 1));\n\n\n\nThe (tok->string + 1) construction comes up often enough that I think it\nshould be put in a `regex` variable or similar. That would help my eyes\nwith the (strlen(tok->string + 1) + 1) construction, especially.\n\nI noticed that for pg_ident, we precompile the regexes per-line and\nreuse those in child processes. Whereas here we're compiling, using, and\nthen discarding the regex for each check. I think the example set by the\npg_ident code is probably the one to follow, unless you have a good\nreason not to.\n\n\nThanks for the feedback.\n\nYeah fully agree. I'll provide a new version that follow the same\n logic as the pg_ident code.\n\n\n\n+# Testing with regular expression for username\n+reset_pg_hba($node, '/^.*md.*$', 'password');\n+test_role($node, 'md5_role', 'password from pgpass and regular expression for username', 0);\n+\n\n\n\nIMO the coverage for this patch needs to be filled out. 
Negative test\ncases are more important than positive ones for security-related code.\n\nAgree, will do.\n\n\n\n\nOther than that, and Tom's note on potentially expanding this to other\nareas, \n\nI'll add regexp usage for the database column and also the for\n the address one when non CIDR is provided (so host name(s)) (I\n think it also makes sense specially as we don't allow multiple\n values for this column).\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 12 Sep 2022 09:55:25 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 9/12/22 9:55 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 9/10/22 1:21 AM, Jacob Champion wrote:\n>> On 8/19/22 01:12, Drouvot, Bertrand wrote:\n>>> + wstr = palloc((strlen(tok->string + 1) + 1) * sizeof(pg_wchar));\n>>> + wlen = pg_mb2wchar_with_len(tok->string + 1,\n>>> + wstr, strlen(tok->string + 1));\n>> The (tok->string + 1) construction comes up often enough that I think it\n>> should be put in a `regex` variable or similar. That would help my eyes\n>> with the (strlen(tok->string + 1) + 1) construction, especially.\n>>\n>> I noticed that for pg_ident, we precompile the regexes per-line and\n>> reuse those in child processes. Whereas here we're compiling, using, and\n>> then discarding the regex for each check. I think the example set by the\n>> pg_ident code is probably the one to follow, unless you have a good\n>> reason not to.\n> \n> Thanks for the feedback.\n> \n> Yeah fully agree. I'll provide a new version that follow the same logic \n> as the pg_ident code.\n> \n>>> +# Testing with regular expression for username\n>>> +reset_pg_hba($node, '/^.*md.*$', 'password');\n>>> +test_role($node, 'md5_role', 'password from pgpass and regular expression for username', 0);\n>>> +\n>> IMO the coverage for this patch needs to be filled out. 
Negative test\n>> cases are more important than positive ones for security-related code.\n> \n> Agree, will do.\n> \n>> Other than that, and Tom's note on potentially expanding this to other\n>> areas,\n> \n> I'll add regexp usage for the database column and also the for the \n> address one when non CIDR is provided (so host name(s)) (I think it also \n> makes sense specially as we don't allow multiple values for this column).\n> \n\n\nPlease find attached v2 addressing the comments mentioned above.\n\nv2 also provides regular expression usage for the database and the \naddress columns (when a host name is being used).\n\nRemark:\n\nThe CF bot is failing for Windows (all other tests are green) and only \nfor the new tap test related to the regular expression on the host name \n(the ones on database and role are fine).\n\nThe issue is not related to the patch. The issue is that the Windows \nCirrus test does not like when a host name is provided for a \"host\" \nentry in pg_hba.conf (while it works fine when a CIDR is provided).\n\nYou can see an example in [1] where the only change is to replace the \nCIDR by \"localhost\" in 002_scram.pl. As you can see the Cirrus tests are \nfailing on Windows only (its log file is here [2]).\n\nI'll look at this \"Windows\" related issue but would appreciate any \nguidance/help if someone has experience in this area on windows.\n\n\n\n\n[1]: https://github.com/bdrouvot/postgres/branches on branch “host_non_cidr”\n\n[2]: \nhttps://api.cirrus-ci.com/v1/artifact/task/6507279833890816/log/src/test/ssl/tmp_check/log/002_scram_primary.log\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 16 Sep 2022 18:24:07 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 06:24:07PM +0200, Drouvot, Bertrand wrote:\n> The CF bot is failing for Windows (all other tests are green) and only for\n> the new tap test related to the regular expression on the host name (the\n> ones on database and role are fine).\n> \n> The issue is not related to the patch. The issue is that the Windows Cirrus\n> test does not like when a host name is provided for a \"host\" entry in\n> pg_hba.conf (while it works fine when a CIDR is provided).\n> \n> You can see an example in [1] where the only change is to replace the CIDR\n> by \"localhost\" in 002_scram.pl. As you can see the Cirrus tests are failing\n> on Windows only (its log file is here [2]).\n> \n> I'll look at this \"Windows\" related issue but would appreciate any\n> guidance/help if someone has experience in this area on windows.\n\nI recall that being able to do a reverse lookup of a hostname on\nWindows for localhost requires a few extra setup steps as that's not\nguaranteed to be set in all environments by default, which is why we\ngo at great length to use 127.0.0.1 in the TAP test setup for example\n(see Cluster.pm). Looking at your patch, the goal is to test the\nmapping of regular expression for host names, user names and database\nnames. If the first case is not guaranteed, my guess is that it is\nfine to skip this portion of the tests on Windows.\n\nWhile reading the patch, I am a bit confused about token_regcomp() and\ntoken_regexec(). It would help the review a lot if these were\ndocumented with proper comments, even if these act roughly as wrappers\nfor pg_regexec() and pg_regcomp().\n--\nMichael",
"msg_date": "Sat, 17 Sep 2022 15:53:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 9/17/22 8:53 AM, Michael Paquier wrote:\n> On Fri, Sep 16, 2022 at 06:24:07PM +0200, Drouvot, Bertrand wrote:\n>> The CF bot is failing for Windows (all other tests are green) and only for\n>> the new tap test related to the regular expression on the host name (the\n>> ones on database and role are fine).\n>>\n>> The issue is not related to the patch. The issue is that the Windows Cirrus\n>> test does not like when a host name is provided for a \"host\" entry in\n>> pg_hba.conf (while it works fine when a CIDR is provided).\n>>\n>> You can see an example in [1] where the only change is to replace the CIDR\n>> by \"localhost\" in 002_scram.pl. As you can see the Cirrus tests are failing\n>> on Windows only (its log file is here [2]).\n>>\n>> I'll look at this \"Windows\" related issue but would appreciate any\n>> guidance/help if someone has experience in this area on windows.\n> \n> I recall that being able to do a reverse lookup of a hostname on\n> Windows for localhost requires a few extra setup steps as that's not\n> guaranteed to be set in all environments by default, which is why we\n> go at great length to use 127.0.0.1 in the TAP test setup for example\n> (see Cluster.pm). Looking at your patch, the goal is to test the\n> mapping of regular expression for host names, user names and database\n> names. If the first case is not guaranteed, my guess is that it is\n> fine to skip this portion of the tests on Windows.\n\nThanks for looking at it!\n\nThat sounds reasonable, v3 attached is skipping the regular expression \ntests for the hostname on Windows.\n\n\n> \n> While reading the patch, I am a bit confused about token_regcomp() and\n> token_regexec(). It would help the review a lot if these were\n> documented with proper comments, even if these act roughly as wrappers\n> for pg_regexec() and pg_regcomp().\n\nFully agree, comments were missing. 
They've been added in v3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 19 Sep 2022 09:36:10 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 03:05:18PM -0700, Jacob Champion wrote:\n> On Thu, Sep 8, 2022 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Jacob Champion <jchampion@timescale.com> writes:\n>> > I think you're going to have to address backwards compatibility\n>> > concerns. Today, I can create a role named \"/a\", and I can put that\n>> > into the HBA without quoting it. I'd be unamused if, after an upgrade,\n>> > my rule suddenly matched any role name containing an 'a'.\n>>\n>> Meh ... that concern seems overblown to me. I guess it's possible\n>> that somebody has an HBA entry that looks like that, but it doesn't\n>> seem very plausible. Note that we made this exact same change in\n>> pg_ident.conf years ago, and AFAIR we got zero complaints.\n> \n> What percentage of users actually use pg_ident maps? My assumption\n> would be that a change to pg_hba would affect many more people, but\n> then I don't have any proof that there are users with role names that\n> look like that to begin with. I won't pound the table with it.\n\nThis concern does not sound overblown to me. A change in pg_hba.conf\nimpacts everybody. I was just looking at this patch, and the logic\nwith usernames and databases is changed so as we would *always* treat\n*any* entries beginning with a slash as a regular expression, skipping\nthem if they don't match with an error fed to the logs and\npg_hba_file_rules. This could lead to silent security issues as \nstricter HBA policies need to be located first in pg_hba.conf and\nthese could suddenly fail to load. 
It would be much safer to me if we\nhad in place some restrictions to avoid such problems to happen,\nmeaning some extra checks in the DDL code paths for such object names\nand perhaps even something in pg_upgrade with a scan at pg_database\nand pg_authid.\n\nOn the bright side, I have been looking at some of the RFCs covering\nthe set of characters allowed in DNS names and slashes are not\nauthorized in hostnames, making this change rather safe AFAIK.\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 12:55:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n>> On Thu, Sep 8, 2022 at 5:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Meh ... that concern seems overblown to me. I guess it's possible\n>>> that somebody has an HBA entry that looks like that, but it doesn't\n>>> seem very plausible. Note that we made this exact same change in\n>>> pg_ident.conf years ago, and AFAIR we got zero complaints.\n\n> This concern does not sound overblown to me.\n\nYou have to assume that somebody (a) has a role or DB name starting\nwith slash, (b) has an explicit reference to that name in their\npg_hba.conf, (c) doesn't read the release notes, and (d) doesn't\nnotice that things are misbehaving until after some hacker manages\nto break into their installation on the strength of the misbehaving\nentry. OK, I'll grant that the probability of (c) is depressingly\nclose to unity; but each of the other steps seems quite low probability.\nAll four of them happening in one installation is something I doubt\nwill happen.\n\nOn the contrary side, if we make this work differently from the\npg_ident.conf precedent, or install weird rules to try to prevent\naccidental misinterpretations, that could also lead to security\nproblems because things don't work as someone would expect. I see\nno a-priori reason to believe that this risk is negligible compared\nto the other one.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Sep 2022 00:09:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 12:09:33AM -0400, Tom Lane wrote:\n> You have to assume that somebody (a) has a role or DB name starting\n> with slash, (b) has an explicit reference to that name in their\n> pg_hba.conf, (c) doesn't read the release notes, and (d) doesn't\n> notice that things are misbehaving until after some hacker manages\n> to break into their installation on the strength of the misbehaving\n> entry. OK, I'll grant that the probability of (c) is depressingly\n> close to unity; but each of the other steps seems quite low probability.\n> All four of them happening in one installation is something I doubt\n> will happen.\n\nIt is the kind of things that could blow up as a CVE and some bad PR\nfor the project, so I cannot get excited about enforcing this new rule\nin an authentication file (aka before a role is authenticated) while\nwe are talking about 3~4 code paths (?) that would need an extra check\nto make sure that no instances have such object names.\n\n> On the contrary side, if we make this work differently from the\n> pg_ident.conf precedent, or install weird rules to try to prevent\n> accidental misinterpretations, that could also lead to security\n> problems because things don't work as someone would expect. I see\n> no a-priori reason to believe that this risk is negligible compared\n> to the other one.\n\nI also do like a lot the idea of making things consistent across all\nthe auth configuration files for all the fields where this can be\napplied.\n--\nMichael",
"msg_date": "Tue, 20 Sep 2022 13:30:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 9/20/22 6:30 AM, Michael Paquier wrote:\n> On Tue, Sep 20, 2022 at 12:09:33AM -0400, Tom Lane wrote:\n>> You have to assume that somebody (a) has a role or DB name starting\n>> with slash, (b) has an explicit reference to that name in their\n>> pg_hba.conf, (c) doesn't read the release notes, and (d) doesn't\n>> notice that things are misbehaving until after some hacker manages\n>> to break into their installation on the strength of the misbehaving\n>> entry. OK, I'll grant that the probability of (c) is depressingly\n>> close to unity; but each of the other steps seems quite low probability.\n>> All four of them happening in one installation is something I doubt\n>> will happen.\n> \n> It is the kind of things that could blow up as a CVE and some bad PR\n> for the project, so I cannot get excited about enforcing this new rule\n> in an authentication file (aka before a role is authenticated) while\n> we are talking about 3~4 code paths (?) that would need an extra check\n> to make sure that no instances have such object names.\n\nI also have the feeling that having (a), (b) and (d) is low probability.\n\nThat said, If the user \"/john\" already exists and has a hba entry then \nthis entry will still match with the patch. Problem is that all the \nusers that contain \"john\" would also now match.\n\nBut things get worst if say /a is an existing user and hba entry as the \nentry would match any users that contains \"a\" with the patch.\n\nI assume (maybe i should not) that if objects starting with / already \nexist there is very good reason(s) behind. Then I don't think that \npreventing their creation in the DDL would help (quite the contrary for \nthe ones that really need them).\n\nIt looks to me that adding a GUC (off by default) to enable/disable the \nregexp usage in the hba could be a fair compromise. 
It won't block any \ncreation starting with a / and won't open more doors (if such objects \nexist) by default.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 13:33:09 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 9:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> You have to assume that somebody (a) has a role or DB name starting\n> with slash, (b) has an explicit reference to that name in their\n> pg_hba.conf, (c) doesn't read the release notes, and (d) doesn't\n> notice that things are misbehaving until after some hacker manages\n> to break into their installation on the strength of the misbehaving\n> entry. OK, I'll grant that the probability of (c) is depressingly\n> close to unity; but each of the other steps seems quite low probability.\n> All four of them happening in one installation is something I doubt\n> will happen.\n\nI can't argue with (a) or (b), but (d) seems decently likely to me. If\nyour normal user base consists of people who are authorized to access\nyour system, what clues would you have that your HBA is silently\nfailing open?\n\n--Jacob\n\n\n",
"msg_date": "Tue, 20 Sep 2022 10:15:14 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 01:33:09PM +0200, Drouvot, Bertrand wrote:\n> I assume (maybe I should not) that if objects starting with / already exist\n> there is very good reason(s) behind. Then I don't think that preventing\n> their creation in the DDL would help (quite the contrary for the ones that\n> really need them).\n\nI have been pondering on this point for the last few weeks, and I'd\nlike to change my opinion and side with Tom on this one as per the\nvery unlikeliness of this being a problem in the wild. I have studied\nthe places that would require restrictions but that just felt like\nadding a bit more bloat into the CREATE/ALTER ROLE paths for what's\naimed at providing a consistent experience for the user across\npg_hba.conf and pg_ident.conf.\n\n> It looks to me that adding a GUC (off by default) to enable/disable the\n> regexp usage in the hba could be a fair compromise. It won't block any\n> creation starting with a / and won't open more doors (if such objects exist)\n> by default.\n\nEnforcing a behavior change in HBA policies with a GUC does not strike\nme as a good thing in the long term. I am ready to bet that it would\njust sit around for nothing like the compatibility GUCs.\n\nAnyway, I have looked at the patch.\n\n+ List *roles_re;\n+ List *databases_re;\n+ regex_t hostname_re;\nI am surprised by the approach of using separate lists for the regular\nexpressions and the raw names. Wouldn't it be better to store\neverything in a single list but assign an entry type? In this case it \nwould be either regex or plain string. This would minimize the\nfootprint of the changes (no extra arguments *_re in the routines\nchecking for a match on the roles, databases or hosts). And it seems\nto me that this would make unnecessary the use of re_num here and\nthere. The hostname is different, of course, requiring only an extra\nfield for its type, or something like that.\n\nPerhaps the documentation would gain in clarity if there were more\nexamples, like a set of comma-separated examples (mix of regex and raw\nstrings for example, for all the field types that gain support for\nregexes)?\n\n-$node->append_conf('postgresql.conf', \"log_connections = on\\n\");\n+$node->append_conf(\n+ 'postgresql.conf', qq{\n+listen_addresses = '127.0.0.1'\n+log_connections = on\n+});\nHmm. I think that we may need to reconsider the location of the tests\nfor the regexes with the host name, as the \"safe\" regression tests\nshould not switch listen_addresses. One location where we already do\nthat is src/test/ssl/, so these could be moved there. Keeping the\ndatabase and user name parts in src/test/authentication/ is fine.\n\nSomething that stood out on a first review is the refactoring of\n001_password.pl that can be done independently of the main patch:\n- test_role() -> test_conn() to be able to pass down a database name.\n- reset_pg_hba() to control the host, db and user parts. The host\npart does not really apply after moving the hosts checks to a more\nsecure location, so I guess that this had better be extended just for\nthe user and database, keeping host=local all the time.\nI am planning to apply 0001 attached independently, reducing the\nfootprint of 0002, which is your previous patch left untouched\n(mostly!).\n--\nMichael",
"msg_date": "Wed, 5 Oct 2022 16:24:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/5/22 9:24 AM, Michael Paquier wrote:\n> Something that stood out on a first review is the refactoring of\n> 001_password.pl that can be done independently of the main patch:\n\nGood idea, thanks for the proposal.\n\n> - test_role() -> test_conn() to be able to pass down a database name.\n> - reset_pg_hba() to control the host, db and user parts. The host\n> part does not really apply after moving the hosts checks to a more\n> secure location, so I guess that this had better be extended just for\n> the user and database, keeping host=local all the time.\n> I am planning to apply 0001 attached independently, \n\n0001 looks good to me.\n\n> reducing the\n> footprint of 0002, which is your previous patch left untouched\n> (mostly!).\n\nThanks! I'll look at it and the comments you just made up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 5 Oct 2022 15:32:20 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Wed, Oct 05, 2022 at 03:32:20PM +0200, Drouvot, Bertrand wrote:\n> On 10/5/22 9:24 AM, Michael Paquier wrote:\n>> - test_role() -> test_conn() to be able to pass down a database name.\n>> - reset_pg_hba() to control the host, db and user parts. The host\n>> part does not really apply after moving the hosts checks to a more\n>> secure location, so I guess that this had better be extended just for\n>> the user and database, keeping host=local all the time.\n>> I am planning to apply 0001 attached independently,\n> \n> 0001 looks good to me.\n\nThanks. I have applied this refactoring, leaving the host part out of\nthe equation as we should rely only on local connections for this\npart of the test. The best fit I can think about for the checks on\nthe hostname patterns would be either the ssl, ldap or krb5 tests.\nSSL is more widely tested than the two others.\n\n>> reducing the\n>> footprint of 0002, which is your previous patch left untouched\n>> (mostly!).\n> \n> Thanks! I'll look at it and the comments you just made up-thread.\n\nCool, thanks. One thing that matters a lot IMO (that I forgot to\nmention previously) is to preserve the order of the items parsed from\nthe configuration files.\n\nAlso, I am wondering whether we'd better have some regression tests\nwhere a regex includes a comma and a role name itself has a comma,\nactually, just to stress more the parsing of individual items in the\nHBA file.\n--\nMichael",
"msg_date": "Thu, 6 Oct 2022 09:53:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/5/22 9:24 AM, Michael Paquier wrote:\n> On Tue, Sep 20, 2022 at 01:33:09PM +0200, Drouvot, Bertrand wrote:\n> Anyway, I have looked at the patch.\n> \n> + List *roles_re;\n> + List *databases_re;\n> + regex_t hostname_re;\n> I am surprised by the approach of using separate lists for the regular\n> expressions and the raw names. Wouldn't it be better to store\n> everything in a single list but assign an entry type? In this case it\n> would be either regex or plain string. This would minimize the\n> footprint of the changes (no extra arguments *_re in the routines\n> checking for a match on the roles, databases or hosts). And it seems\n> to me that this would make unnecessary the use of re_num here and\n> there. \n\nPlease find attached v5 addressing this. I started with a union but it \nturns out that we still need the plain string when a regex is used. This \nis not needed for the authentication per se but for fill_hba_line(). So \nI ended up creating a new struct without union in v5.\n\n> The hostname is different, of course, requiring only an extra\n> field for its type, or something like that.\n\nI'm using the same new struct as described above for the hostname.\n\n> \n> Perhaps the documentation would gain in clarity if there were more\n> examples, like a set of comma-separated examples (mix of regex and raw\n> strings for example, for all the field types that gain support for\n> regexes)?\n> \n\nRight, I added more examples in v5.\n\n> -$node->append_conf('postgresql.conf', \"log_connections = on\\n\");\n> +$node->append_conf(\n> + 'postgresql.conf', qq{\n> +listen_addresses = '127.0.0.1'\n> +log_connections = on\n> +});\n> Hmm. I think that we may need to reconsider the location of the tests\n> for the regexes with the host name, as the \"safe\" regression tests\n> should not switch listen_addresses. One location where we already do\n> that is src/test/ssl/, so these could be moved there. \n\nGood point, I moved the hostname related tests to src/test/ssl.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 10 Oct 2022 09:00:06 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/6/22 2:53 AM, Michael Paquier wrote:\n> On Wed, Oct 05, 2022 at 03:32:20PM +0200, Drouvot, Bertrand wrote:\n>> On 10/5/22 9:24 AM, Michael Paquier wrote:\n>>> - test_role() -> test_conn() to be able to pass down a database name.\n>>> - reset_pg_hba() to control the host, db and user parts. The host\n>>> part does not really apply after moving the hosts checks to a more\n>>> secure location, so I guess that this had better be extended just for\n>>> the user and database, keeping host=local all the time.\n>>> I am planning to apply 0001 attached independently,\n>>\n>> 0001 looks good to me.\n> \n> Thanks. I have applied this refactoring, leaving the host part out of\n> the equation as we should rely only on local connections for this\n> part of the test. \n\nThanks!\n\n>> Thanks! I'll look at it and the comments you just made up-thread.\n> \n> Cool, thanks. One thing that matters a lot IMO (that I forgot to\n> mention previously) is to preserve the order of the items parsed from\n> the configuration files.\n\nFully agree, all the versions that have been submitted in this thread \npreserve the ordering.\n\n> \n> Also, I am wondering whether we'd better have some regression tests\n> where a regex includes a comma and a role name itself has a comma,\n> actually, just to stress more the parsing of individual items in the\n> HBA file.\n\nGood idea, it has been added in v5 just shared up-thread.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 10 Oct 2022 09:04:16 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Mon, Oct 10, 2022 at 09:00:06AM +0200, Drouvot, Bertrand wrote:\n> \tforeach(cell, tokens)\n> \t{\n> [...]\n> +\t\ttokreg = lfirst(cell);\n> +\t\tif (!token_is_regexp(tokreg))\n> \t\t{\n> -\t\t\tif (strcmp(dbname, role) == 0)\n> +\t\t\tif (am_walsender && !am_db_walsender)\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * physical replication walsender connections can only match\n> +\t\t\t\t * replication keyword\n> +\t\t\t\t */\n> +\t\t\t\tif (token_is_keyword(tokreg->authtoken, \"replication\"))\n> +\t\t\t\t\treturn true;\n> +\t\t\t}\n> +\t\t\telse if (token_is_keyword(tokreg->authtoken, \"all\"))\n> \t\t\t\treturn true;\n\nWhen checking the list of databases in check_db(), physical WAL\nsenders (aka am_walsender && !am_db_walsender) would be able to accept\nregexps, but these should only accept \"replication\" and never a\nregexp, no? The second check on \"replication\" placed in the branch\nfor token_is_regexp() in your patch would be correctly placed, though.\nThis is kind of special in the HBA logic, coming back to 9.0 where\nphysical replication and this special role property have been\nintroduced. WAL senders have gained an actual database property later\non in 9.4 with logical decoding, keeping \"replication\" for\ncompatibility (connection strings can use replication=database to\nconnect as a non-physical WAL sender and connect to a specific\ndatabase).\n\n> +typedef struct AuthToken\n> +{\n> +\tchar\t *string;\n> +\tbool\t\tquoted;\n> +} AuthToken;\n> +\n> +/*\n> + * Distinguish the case a token has to be treated as a regular\n> + * expression or not.\n> + */\n> +typedef struct AuthTokenOrRegex\n> +{\n> +\tbool\t\tis_regex;\n> +\n> +\t/*\n> +\t * Not an union as we still need the token string for fill_hba_line().\n> +\t */\n> +\tAuthToken *authtoken;\n> +\tregex_t *regex;\n> +} AuthTokenOrRegex;\n\nHmm. With is_regex to check if a regex_t exists, both structures may\nnot be necessary. I have not put my hands on that directly, but I\nguess that I would shape things to have only AuthToken with\n(enforcing regex_t in priority if set in the list of elements to check\nfor a match):\n- the string\n- quoted\n- regex_t\nA list member should never have (regex_t != NULL && quoted), right?\nHostnames would never be quoted, either.\n\n> +# test with a comma in the regular expression\n> +reset_pg_hba($node, 'all', '\"/^.*5,.*e$\"', 'password');\n> +test_conn($node, 'user=md5,role', 'password', 'matching regexp for username',\n> +\t0);\n\nSo, we check here that the role includes \"5,\" in its name. This is\ngetting fun to parse ;)\n\n> elsif ($ENV{PG_TEST_EXTRA} !~ /\\bssl\\b/)\n> {\n> -\tplan skip_all => 'Potentially unsafe test SSL not enabled in PG_TEST_EXTRA';\n> +\tplan skip_all =>\n> +\t 'Potentially unsafe test SSL not enabled in PG_TEST_EXTRA';\n> }\n\nUnrelated noise from perltidy.\n--\nMichael",
"msg_date": "Tue, 11 Oct 2022 15:29:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/11/22 8:29 AM, Michael Paquier wrote:\n> On Mon, Oct 10, 2022 at 09:00:06AM +0200, Drouvot, Bertrand wrote:\n>> \tforeach(cell, tokens)\n>> \t{\n>> [...]\n>> +\t\ttokreg = lfirst(cell);\n>> +\t\tif (!token_is_regexp(tokreg))\n>> \t\t{\n>> -\t\t\tif (strcmp(dbname, role) == 0)\n>> +\t\t\tif (am_walsender && !am_db_walsender)\n>> +\t\t\t{\n>> +\t\t\t\t/*\n>> +\t\t\t\t * physical replication walsender connections can only match\n>> +\t\t\t\t * replication keyword\n>> +\t\t\t\t */\n>> +\t\t\t\tif (token_is_keyword(tokreg->authtoken, \"replication\"))\n>> +\t\t\t\t\treturn true;\n>> +\t\t\t}\n>> +\t\t\telse if (token_is_keyword(tokreg->authtoken, \"all\"))\n>> \t\t\t\treturn true;\n> \n> When checking the list of databases in check_db(), physical WAL\n> senders (aka am_walsender && !am_db_walsender) would be able to accept\n> regexps, but these should only accept \"replication\" and never a\n> regexp, no? \n\nOh right, good catch, thanks! Please find attached v6 fixing it.\n\n\n> This is kind of special in the HBA logic, coming back to 9.0 where\n> physical replication and this special role property have been\n> introduced. WAL senders have gained an actual database property later\n> on in 9.4 with logical decoding, keeping \"replication\" for\n> compatibility (connection strings can use replication=database to\n> connect as a non-physical WAL sender and connect to a specific\n> database).\n> \n\nThanks for the explanation!\n\n>> +typedef struct AuthToken\n>> +{\n>> +\tchar\t *string;\n>> +\tbool\t\tquoted;\n>> +} AuthToken;\n>> +\n>> +/*\n>> + * Distinguish the case a token has to be treated as a regular\n>> + * expression or not.\n>> + */\n>> +typedef struct AuthTokenOrRegex\n>> +{\n>> +\tbool\t\tis_regex;\n>> +\n>> +\t/*\n>> +\t * Not an union as we still need the token string for fill_hba_line().\n>> +\t */\n>> +\tAuthToken *authtoken;\n>> +\tregex_t *regex;\n>> +} AuthTokenOrRegex;\n> \n> Hmm. With is_regex to check if a regex_t exists, both structures may\n> not be necessary. \n\nAgree that both structs are not necessary. In v6, AuthTokenOrRegex has \nbeen removed and the regex has been moved to AuthToken. There is no \nis_regex bool anymore, as it's enough to test whether regex is NULL or not.\n\n> I have not put my hands on that directly, but I\n> guess that I would shape things to have only AuthToken with\n> (enforcing regex_t in priority if set in the list of elements to check\n> for a match):\n> - the string\n> - quoted\n> - regex_t\n> A list member should never have (regex_t != NULL && quoted), right?\n\nThe patch does allow that. For example it happens for the test where we \nadd a comma in the role name. As we don't rely on a dedicated char to \nmark the end of a reg exp (we only rely on / to mark its start) then \nallowing (regex_t != NULL && quoted) seems reasonable to me.\n\n>> +# test with a comma in the regular expression\n>> +reset_pg_hba($node, 'all', '\"/^.*5,.*e$\"', 'password');\n>> +test_conn($node, 'user=md5,role', 'password', 'matching regexp for username',\n>> +\t0);\n> \n> So, we check here that the role includes \"5,\" in its name. This is\n> getting fun to parse ;)\n>\n\nIndeed, ;-)\n\n\n>> elsif ($ENV{PG_TEST_EXTRA} !~ /\\bssl\\b/)\n>> {\n>> -\tplan skip_all => 'Potentially unsafe test SSL not enabled in PG_TEST_EXTRA';\n>> +\tplan skip_all =>\n>> +\t 'Potentially unsafe test SSL not enabled in PG_TEST_EXTRA';\n>> }\n> \n> Unrelated noise from perltidy.\n\nRight.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 12 Oct 2022 08:17:14 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 08:17:14AM +0200, Drouvot, Bertrand wrote:\n> Indeed, ;-)\n\nSo, I have spent the last two days looking at all that, studying the\nstructure of the patch and the existing HEAD code, and it looks like\na few things could be consolidated.\n\nFirst, as of HEAD, AuthToken is only used for elements in a list of\nrole and database names in hba.conf before filling in each HbaLine,\nhence we limit its usage to the initial parsing. The patch assigns an\noptional regex_t to it, then extends the use of AuthToken for single\nhostname entries in pg_hba.conf. Things going first: shouldn't we\ncombine ident_user and \"re\" together in the same structure? Even if\nwe finish by not using AuthToken to store the computed regex, it seems\nto me that we'd better use the same base structure for pg_ident.conf\nand pg_hba.conf. While looking closely at the patch, we would expand\nthe use of AuthToken outside its original context. I have also looked\nat make_auth_token(), and wondered if it could be possible to have this\nroutine compile the regexes. This approach would not stick with\npg_ident.conf though, as we validate the fields in each line when we\nput our hands on ident_user and after the base validation of a line\n(number of fields, etc.). So with all that in mind, it feels right to\nnot use AuthToken at all when building each HbaLine and each\nIdentLine, but a new, separate, structure. We could call that an\nAuthItem (string, its compiled regex) perhaps? It could have its own\nmake() routine, taking in input an AuthToken and processing\npg_regcomp(). Better ideas for this new structure would be welcome,\nand the idea is that we'd store the post-parsing state of an\nAuthToken to something that has a compiled regex. We could finish by\nusing AuthToken at the end and expand its use, but it does not feel\ncompletely right either to have a make() routine but not be able to\ncompile its regular expression when creating the AuthToken.\n\nThe logic in charge of compiling the regular expressions could be\nconsolidated more. The patch refactors the logic with\ntoken_regcomp(), uses it for the user names (ident_user in\nparse_ident_line() from pg_ident.conf), then extends it to the hostnames\n(single item) and the role/database names (list possible in these\ncases). This approach looks right to me. Once we plug in an AuthItem\nto IdentLine, token_regcomp could be changed so that it takes an\nAuthToken in input, saving directly the compiled regex_t in the input\nstructure.\n\nAt the end, the global structure of the code should, I guess, respect\nthe following rules:\n- The number of places where we check if a string is a regex should be\nminimal (aka string beginning with '/').\n- Only one code path of hba.c should call pg_regcomp() (the patch does\nthat), and only one code path should call pg_regexec() (two code paths\nof hba.c do that with the patch, as of the need to store matching\nexpression). This should minimize the areas where we call\npg_mb2wchar_with_len(), for one.\n\nAbout this last point, token_regexec() does not include\ncheck_ident_usermap() in its logic, and it seems to me that it should.\nThe difference is with the expected regmatch_t structures, so\nshouldn't token_regexec be extended with two arguments as of an array\nof regmatch_t and the number of elements in the array? This would\nsave some of the logic around pg_mb2wchar_with_len(), for\nexample. To make all that work, token_regexec() should return an int,\ncoming from pg_regexec, but no specific error strings as we don't want\nto spam the logs when checking hosts, roles and databases in\npg_hba.conf.\n\n /* Check if it has a CIDR suffix and if so isolate it */\n- cidr_slash = strchr(str, '/');\n- if (cidr_slash)\n- *cidr_slash = '\\0';\n+ if (!is_regexp)\n+ {\n+ cidr_slash = strchr(str, '/');\n+ if (cidr_slash)\n+ *cidr_slash = '\\0';\n+ }\n[...]\n /* Get the netmask */\n- if (cidr_slash)\n+ if (cidr_slash && !is_regexp)\n {\nSome of the code handling regexes for hostnames itches me a bit, like\nthis one. Perhaps it would be better to evaluate this interaction\nwith regular expressions separately. The database and role names\ndon't have this need, so their cases are much simpler to think about.\n\nThe code could be split to tackle things step-by-step:\n- One refactoring patch to introduce token_regcomp() and\ntoken_regexec(), with the introduction of a new structure that\nincludes the compiled regexes. (Feel free to counterargue about the\nuse of AuthToken for this purpose, of course!)\n- Plug in the refactored logic for the lists of role names and\ndatabase names in pg_hba.conf.\n- Handle the case of single host entries in pg_hba.conf.\n--\nMichael",
"msg_date": "Fri, 14 Oct 2022 14:30:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 02:30:25PM +0900, Michael Paquier wrote:\n> First, as of HEAD, AuthToken is only used for elements in a list of\n> role and database names in hba.conf before filling in each HbaLine,\n> hence we limit its usage to the initial parsing. The patch assigns an\n> optional regex_t to it, then extends the use of AuthToken for single\n> hostname entries in pg_hba.conf. Things going first: shouldn't we\n> combine ident_user and \"re\" together in the same structure? Even if\n> we finish by not using AuthToken to store the computed regex, it seems\n> to me that we'd better use the same base structure for pg_ident.conf\n> and pg_hba.conf. While looking closely at the patch, we would expand\n> the use of AuthToken outside its original context. I have also looked\n> at make_auth_token(), and wondered if it could be possible to have this\n> routine compile the regexes. This approach would not stick with\n> pg_ident.conf though, as we validate the fields in each line when we\n> put our hands on ident_user and after the base validation of a line\n> (number of fields, etc.). So with all that in mind, it feels right to\n> not use AuthToken at all when building each HbaLine and each\n> IdentLine, but a new, separate, structure. We could call that an\n> AuthItem (string, its compiled regex) perhaps? It could have its own\n> make() routine, taking in input an AuthToken and processing\n> pg_regcomp(). Better ideas for this new structure would be welcome,\n> and the idea is that we'd store the post-parsing state of an\n> AuthToken to something that has a compiled regex. We could finish by\n> using AuthToken at the end and expand its use, but it does not feel\n> completely right either to have a make() routine but not be able to\n> compile its regular expression when creating the AuthToken.\n\nI have sent this part too quickly. As AuthTokens are used in\ncheck_db() and check_role() when matching entries, it is more\nintuitive to store the regex_t directly in it. Changing IdentLine to\nuse an AuthToken makes the \"quoted\" part useless in this case, still it\ncould be used in Assert()s to make sure that the data is shaped as\nexpected at check-time, enforced at false when creating it in\nparse_ident_line()?\n--\nMichael",
"msg_date": "Fri, 14 Oct 2022 15:18:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/14/22 8:18 AM, Michael Paquier wrote:\n> On Fri, Oct 14, 2022 at 02:30:25PM +0900, Michael Paquier wrote:\n>> First, as of HEAD, AuthToken is only used for elements in a list of\n>> role and database names in hba.conf before filling in each HbaLine,\n>> hence we limit its usage to the initial parsing. The patch assigns an\n>> optional regex_t to it, then extends the use of AuthToken for single\n>> hostname entries in pg_hba.conf. Things going first: shouldn't we\n>> combine ident_user and \"re\" together in the same structure? Even if\n>> we finish by not using AuthToken to store the computed regex, it seems\n>> to me that we'd better use the same base structure for pg_ident.conf\n>> and pg_hba.conf. While looking closely at the patch, we would expand\n>> the use of AuthToken outside its original context. I have also looked\n>> at make_auth_token(), and wondered if it could be possible to have this\n>> routine compile the regexes. This approach would not stick with\n>> pg_ident.conf though, as we validate the fields in each line when we\n>> put our hands on ident_user and after the base validation of a line\n>> (number of fields, etc.). So with all that in mind, it feels right to\n>> not use AuthToken at all when building each HbaLine and each\n>> IdentLine, but a new, separate, structure. We could call that an\n>> AuthItem (string, its compiled regex) perhaps? It could have its own\n>> make() routine, taking in input an AuthToken and processing\n>> pg_regcomp(). Better ideas for this new structure would be welcome,\n>> and the idea is that we'd store the post-parsing state of an\n>> AuthToken to something that has a compiled regex. We could finish by\n>> using AuthToken at the end and expand its use, but it does not feel\n>> completely right either to have a make() routine but not be able to\n>> compile its regular expression when creating the AuthToken.\n> \n> I have sent this part too quickly. As AuthTokens are used in\n> check_db() and check_role() when matching entries, it is more\n> intuitive to store the regex_t directly in it. \n\nYeah, I also think this is the right place for it.\n\n> Changing IdentLine to\n> use an AuthToken makes the \"quoted\" part useless in this case, still it\n> could be used in Assert()s to make sure that the data is shaped as\n> expected at check-time, enforced at false when creating it in\n> parse_ident_line()?\n\nI agree, that makes sense. I'll work on that.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 14 Oct 2022 14:47:04 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/14/22 7:30 AM, Michael Paquier wrote:\n> On Wed, Oct 12, 2022 at 08:17:14AM +0200, Drouvot, Bertrand wrote:\n>> Indeed, ;-)\n> \n> So, I have spent the last two days looking at all that, studying the\n> structure of the patch and the existing HEAD code,\n\nThanks!\n\n> The code could be split to tackle things step-by-step:\n> - One refactoring patch to introduce token_regcomp() and\n> token_regexec(), with the introduction of a new structure that\n> includes the compiled regexes. (Feel free to counterargue about the\n> use of AuthToken for this purpose, of course!)\n> - Plug in the refactored logic for the lists of role names and\n> database names in pg_hba.conf.\n> - Handle the case of single host entries in pg_hba.conf.\n> --\n\nI agree to work step-by-step.\n\nWhile looking at it again now, I discovered that the new TAP test for \nthe regexp on the hostname in ssl/002_scram.pl is failing on some of my \ntest environments (and not all..).\n\nSo, I agree with the dedicated steps you are proposing and that the \n\"host case\" needs dedicated attention.\n\nI'm not ignoring all the remarks you've just made up-thread, I'll \naddress them and/or provide my feedback on them when I come back with \nthe step-by-step sub patches.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 14 Oct 2022 15:04:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/14/22 7:30 AM, Michael Paquier wrote:\n> On Wed, Oct 12, 2022 at 08:17:14AM +0200, Drouvot, Bertrand wrote:\n>> Indeed, ;-)\n> \n> \n> I have also looked\n> at make_auth_token(), and wondered if it could be possible to have this\n> routine compile the regexes. \n\nI think that it makes sense.\n\n> This approach would not stick with\n> pg_ident.conf though, as we validate the fields in each line when we\n> put our hands on ident_user and after the base validation of a line\n> (number of fields, etc.).\n\nI'm not sure to get the issue here with the proposed approach and \npg_ident.conf.\n\nThe new attached patch proposal is making use of make_auth_token() \n(through copy_auth_token()) in parse_ident_line(), do you see any issue?\n\n> \n> The logic in charge of compiling the regular expressions could be\n> consolidated more. The patch refactors the logic with\n> token_regcomp(), uses it for the user names (ident_user in\n> parse_ident_line() from pg_ident.conf), then extends it to the hostnames\n> (single item) and the role/database names (list possible in these\n> cases). This approach looks right to me. Once we plug in an AuthItem\n> to IdentLine, token_regcomp could be changed so that it takes an\n> AuthToken in input\n\nRight, did it that way in the attached.\n\n\n> - Only one code path of hba.c should call pg_regcomp() (the patch does\n> that), and only one code path should call pg_regexec() (two code paths\n> of hba.c do that with the patch, as of the need to store matching\n> expression). This should minimize the areas where we call\n> pg_mb2wchar_with_len(), for one.\n\nRight.\n\n> About this last point, token_regexec() does not include\n> check_ident_usermap() in its logic, and it seems to me that it should.\n> The difference is with the expected regmatch_t structures, so\n> shouldn't token_regexec be extended with two arguments as of an array\n> of regmatch_t and the number of elements in the array? \n\nYou are right, not using token_regexec() in check_ident_usermap() in the \nprevious patch versions was not right. That's fixed in the attached, \nthough the substitution (if any) is still outside of token_regexec(), do \nyou think it should be part of it? (I think that makes sense to keep it \noutside of it as we won't use the substitution logic for roles, databases \nand hostnames)\n\n> \n> The code could be split to tackle things step-by-step:\n> - One refactoring patch to introduce token_regcomp() and\n> token_regexec()\n\nAgree. Please find attached v1-0001-token_reg-functions.patch for this \nfirst step.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Oct 2022 19:56:02 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Mon, Oct 17, 2022 at 07:56:02PM +0200, Drouvot, Bertrand wrote:\n> On 10/14/22 7:30 AM, Michael Paquier wrote:\n>> This approach would not stick with\n>> pg_ident.conf though, as we validate the fields in each line when we\n>> put our hands on ident_user and after the base validation of a line\n>> (number of fields, etc.).\n> \n> I'm not sure to get the issue here with the proposed approach and\n> pg_ident.conf.\n\nMy point is about parse_ident_line(), where we need to be careful in\nthe order of the operations. The macros IDENT_MULTI_VALUE() and\nIDENT_FIELD_ABSENT() need to be applied on all the fields first, and\nthe regex computation needs to be last. Your patch had a subtile\nissue here, as users may get errors on the computed regex before the\nordering of the fields as the computation was used *before* the \"Get\nthe PG rolename token\" part of the logic.\n\n>> About this last point, token_regexec() does not include\n>> check_ident_usermap() in its logic, and it seems to me that it should.\n>> The difference is with the expected regmatch_t structures, so\n>> shouldn't token_regexec be extended with two arguments as of an array\n>> of regmatch_t and the number of elements in the array?\n> \n> You are right, not using token_regexec() in check_ident_usermap() in the\n> previous patch versions was not right. That's fixed in the attached, though\n> the substitution (if any) is still outside of token_regexec(), do you think\n> it should be part of it? 
(I think that makes sense to keep it outside of it\n> as we wont use the substitution logic for roles, databases and hostname)\n\nKeeping the substition done with the IdentLine's Authtokens outside of\nthe internal execution routine is fine by me.\n\n\nWhile putting my hands on that, I was also wondering whether we should\nhave the error string generated after compilation within the internal\nregcomp() routine, but that would require more arguments to\npg_regcomp() (as of file name, line number, **err_string), and that\nlooks more invasive than necessary. Perhaps the follow-up steps will\nprove me wrong, though :)\n\nA last thing is the introduction of a free() routine for AuthTokens,\nto minimize the number of places where we haev pg_regfree(). The gain\nis minimal, but that looks more consistent with the execution and\ncompilation paths.\n--\nMichael",
"msg_date": "Tue, 18 Oct 2022 14:51:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/18/22 7:51 AM, Michael Paquier wrote:\n> On Mon, Oct 17, 2022 at 07:56:02PM +0200, Drouvot, Bertrand wrote:\n>> On 10/14/22 7:30 AM, Michael Paquier wrote:\n>>> This approach would not stick with\n>>> pg_ident.conf though, as we validate the fields in each line when we\n>>> put our hands on ident_user and after the base validation of a line\n>>> (number of fields, etc.).\n>>\n>> I'm not sure to get the issue here with the proposed approach and\n>> pg_ident.conf.\n> \n> My point is about parse_ident_line(), where we need to be careful in\n> the order of the operations. The macros IDENT_MULTI_VALUE() and\n> IDENT_FIELD_ABSENT() need to be applied on all the fields first, and\n> the regex computation needs to be last. Your patch had a subtile\n> issue here, as users may get errors on the computed regex before the\n> ordering of the fields as the computation was used *before* the \"Get\n> the PG rolename token\" part of the logic.\n\nGotcha, thanks! I was wondering if we shouldn't add a comment about that \nand I see that you've added one in v2, thanks!\n\nBTW, what about adding a new TAP test (dedicated patch) to test the \nbehavior in case of errors during the regexes compilation in \npg_ident.conf and pg_hba.conf (not done yet)? (we could add it once this \n patch series is done).\n\n> While putting my hands on that, I was also wondering whether we should\n> have the error string generated after compilation within the internal\n> regcomp() routine, but that would require more arguments to\n> pg_regcomp() (as of file name, line number, **err_string), and that\n> looks more invasive than necessary. Perhaps the follow-up steps will\n> prove me wrong, though :)\n\nI've had the same thought (and that was what the previous global patch \nwas doing). 
I'm tempted to think that the follow-steps will prove you \nright ;-) (specially if at the end those will be the same error messages \nfor databases and roles).\n\n> \n> A last thing is the introduction of a free() routine for AuthTokens,\n> to minimize the number of places where we haev pg_regfree(). The gain\n> is minimal, but that looks more consistent with the execution and\n> compilation paths.\n\nAgree, that looks better.\n\nI had a look at your v2, did a few tests and it looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 18 Oct 2022 09:14:21 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 09:14:21AM +0200, Drouvot, Bertrand wrote:\n> BTW, what about adding a new TAP test (dedicated patch) to test the behavior\n> in case of errors during the regexes compilation in pg_ident.conf and\n> pg_hba.conf (not done yet)? (we could add it once this patch series is\n> done).\n\nPerhaps, that may become tricky when it comes to -DEXEC_BACKEND (for\ncases where no fork() implementation is available, aka Windows). But\na postmaster restart failure would generate logs that could be picked\nfor a pattern check?\n\n>> While putting my hands on that, I was also wondering whether we should\n>> have the error string generated after compilation within the internal\n>> regcomp() routine, but that would require more arguments to\n>> pg_regcomp() (as of file name, line number, **err_string), and that\n>> looks more invasive than necessary. Perhaps the follow-up steps will\n>> prove me wrong, though :)\n> \n> I've had the same thought (and that was what the previous global patch was\n> doing). I'm tempted to think that the follow-steps will prove you right ;-)\n> (specially if at the end those will be the same error messages for databases\n> and roles).\n\nAvoiding three times the same error message seems like a good thing in\nthe long run, but let's think about this part later as needed. All\nthese routines are static to hba.c so even if we finish by not\nfinishing the whole job for this development cycle we can still be\nvery flexible.\n--\nMichael",
"msg_date": "Wed, 19 Oct 2022 10:18:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/19/22 3:18 AM, Michael Paquier wrote:\n> On Tue, Oct 18, 2022 at 09:14:21AM +0200, Drouvot, Bertrand wrote:\n>> BTW, what about adding a new TAP test (dedicated patch) to test the behavior\n>> in case of errors during the regexes compilation in pg_ident.conf and\n>> pg_hba.conf (not done yet)? (we could add it once this patch series is\n>> done).\n> \n> Perhaps, that may become tricky when it comes to -DEXEC_BACKEND (for\n> cases where no fork() implementation is available, aka Windows). But\n> a postmaster restart failure would generate logs that could be picked\n> for a pattern check?\n\nRight, that's how I'd see it. I'll give it a look.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 19 Oct 2022 09:50:45 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/14/22 7:30 AM, Michael Paquier wrote:\n> On Wed, Oct 12, 2022 at 08:17:14AM +0200, Drouvot, Bertrand wrote:\n>> Indeed, ;-)\n> \n> \n> The code could be split to tackle things step-by-step:\n> - One refactoring patch to introduce token_regcomp() and\n> token_regexec(), with the introduction of a new structure that\n> includes the compiled regexes. (Feel free to counterargue about the\n> use of AuthToken for this purpose, of course!)\n> - Plug in the refactored logic for the lists of role names and\n> database names in pg_hba.conf.\n\nPlease find attached \nv1-0001-regex-handling-for-db-and-roles-in-hba.patch to implement \nregexes for databases and roles in hba.\n\nIt does also contain new regexes related TAP tests and doc updates.\n\nIt relies on the refactoring made in fc579e11c6 (but changes the \nregcomp_auth_token() parameters so that it is now responsible for \nemitting the compilation error message (if any), to avoid code \nduplication in parse_hba_line() and parse_ident_line() for roles, \ndatabases and user name mapping).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 19 Oct 2022 10:45:44 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Wed, Oct 19, 2022 at 10:45:44AM +0200, Drouvot, Bertrand wrote:\n> Please find attached v1-0001-regex-handling-for-db-and-roles-in-hba.patch to\n> implement regexes for databases and roles in hba.\n> \n> It does also contain new regexes related TAP tests and doc updates.\n\nThanks for the updated version. This is really easy to look at now.\n\n> It relies on the refactoring made in fc579e11c6 (but changes the\n> regcomp_auth_token() parameters so that it is now responsible for emitting\n> the compilation error message (if any), to avoid code duplication in\n> parse_hba_line() and parse_ident_line() for roles, databases and user name\n> mapping).\n\n@@ -652,13 +670,18 @@ check_role(const char *role, Oid roleid, List *tokens)\n[...]\n- if (!tok->quoted && tok->string[0] == '+')\n+ if (!token_has_regexp(tok))\n {\nHmm. Do we need token_has_regexp() here for all the cases? We know\nthat the string can begin with a '+', hence it is no regex. The same\napplies for the case of \"all\". The remaining case is the one where\nthe user name matches exactly the AuthToken string, which should be\nlast as we want to treat anything beginning with a '/' as a regex. It\nseems like we could do an order like that? Say:\nif (!tok->quoted && tok->string[0] == '+')\n //do\nelse if (token_is_keyword(tok, \"all\"))\n //do\nelse if (token_has_regexp(tok))\n //do regex compilation, handling failures\nelse if (token_matches(tok, role))\n //do exact match\n\nThe same approach with keywords first, regex, and exact match could be\napplied as well for the databases? Perhaps it is just mainly a matter\nof taste, and it depends on how much you want to prioritize the place\nof the regex over the rest but that could make the code easier to\nunderstand in the long-run and this is a very sensitive code area, and\nthe case of physical walsenders (in short specific process types)\nrequiringx specific conditions is also something to take into account. 
\n\n foreach(tokencell, tokens)\n {\n- parsedline->roles = lappend(parsedline->roles,\n- copy_auth_token(lfirst(tokencell)));\n+ AuthToken *tok = copy_auth_token(lfirst(tokencell));\n+\n+ /*\n+ * Compile a regex from the role token, if necessary.\n+ */\n+ if (regcomp_auth_token(tok, HbaFileName, line_num, err_msg, elevel))\n+ return NULL;\n+\n+ parsedline->roles = lappend(parsedline->roles, tok);\n }\n\nCompiling the expressions for the user and database lists one-by-one\nin parse_hba_line() as you do is correct. However there is a gotcha\nthat you are forgetting here: the memory allocations done for the\nregexp compilations are not linked to the memory context where each\nline is processed (named hbacxt in load_hba()) and need a separate\ncleanup. In the same fashion as load_ident(), it seems to me that we\nneed two extra things for this patch:\n- if !ok (see where we do MemoryContextDelete(hbacxt)), we need to go\nthrough new_parsed_lines and free for each line the AuthTokens for the\ndatabase and user lists.\n- if ok and new_parsed_lines != NIL, the same cleanup needs to\nhappen.\nMy guess is that you could do both the same way as load_ident() does,\nkeeping some symmetry between the two code paths. Unifying both into\na common routine would be sort of annoying as HbaLines uses lists\nwithin the lists of parsed lines, and IdentLine can have one at most\nin each line.\n\nI am wondering whether we should link the regexp code to not use\nmalloc(), actually.. This would require a separate analysis, though,\nand I suspect that palloc() would be very expensive for this job. \n\nFor now, I have made your last patch a bit shorter by applying the\nrefactoring of regcomp_auth_token() separately with a few tweaks to\nthe comments.\n--\nMichael",
"msg_date": "Fri, 21 Oct 2022 09:58:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/21/22 2:58 AM, Michael Paquier wrote:\n> On Wed, Oct 19, 2022 at 10:45:44AM +0200, Drouvot, Bertrand wrote:\n>> Please find attached v1-0001-regex-handling-for-db-and-roles-in-hba.patch to\n>> implement regexes for databases and roles in hba.\n>>\n>> It does also contain new regexes related TAP tests and doc updates.\n> \n> Thanks for the updated version. This is really easy to look at now.\n> \n>> It relies on the refactoring made in fc579e11c6 (but changes the\n>> regcomp_auth_token() parameters so that it is now responsible for emitting\n>> the compilation error message (if any), to avoid code duplication in\n>> parse_hba_line() and parse_ident_line() for roles, databases and user name\n>> mapping).\n> \n> @@ -652,13 +670,18 @@ check_role(const char *role, Oid roleid, List *tokens)\n> [...]\n> - if (!tok->quoted && tok->string[0] == '+')\n> + if (!token_has_regexp(tok))\n> {\n> Hmm. Do we need token_has_regexp() here for all the cases? We know\n> that the string can begin with a '+', hence it is no regex. The same\n> applies for the case of \"all\". The remaining case is the one where\n> the user name matches exactly the AuthToken string, which should be\n> last as we want to treat anything beginning with a '/' as a regex. It\n> seems like we could do an order like that? Say:\n> if (!tok->quoted && tok->string[0] == '+')\n> //do\n> else if (token_is_keyword(tok, \"all\"))\n> //do\n> else if (token_has_regexp(tok))\n> //do regex compilation, handling failures\n> else if (token_matches(tok, role))\n> //do exact match\n> \n> The same approach with keywords first, regex, and exact match could be\n> applied as well for the databases? 
Perhaps it is just mainly a matter\n> of taste, \n\nYeah, I think it is.\n\n> and it depends on how much you want to prioritize the place\n> of the regex over the rest but that could make the code easier to\n> understand in the long-run and this is a very sensitive code area, \n\nAnd agree that your proposal tastes better ;-): it is easier to \nunderstand, v2 attached has been done that way.\n\n> Compiling the expressions for the user and database lists one-by-one\n> in parse_hba_line() as you do is correct. However there is a gotcha\n> that you are forgetting here: the memory allocations done for the\n> regexp compilations are not linked to the memory context where each\n> line is processed (named hbacxt in load_hba()) and need a separate\n> cleanup. \n\nOops, right, thanks for the call out!\n\n> In the same fashion as load_ident(), it seems to me that we\n> need two extra things for this patch:\n> - if !ok (see where we do MemoryContextDelete(hbacxt)), we need to go\n> through new_parsed_lines and free for each line the AuthTokens for the\n> database and user lists.\n> - if ok and new_parsed_lines != NIL, the same cleanup needs to\n> happen.\n\nRight, but I think that should be \"parsed_hba_lines != NIL\".\n\n> My guess is that you could do both the same way as load_ident() does,\n> keeping some symmetry between the two code paths. \n\nRight. To avoid code duplication in the !ok/ok cases, the function \nfree_hba_line() has been added in v2: it goes through the list of \ndatabases and roles tokens and call free_auth_token() for each of them.\n\n> Unifying both into\n> a common routine would be sort of annoying as HbaLines uses lists\n> within the lists of parsed lines, and IdentLine can have one at most\n> in each line.\n\nI agree, and v2 is not attempting to unify them.\n\n> For now, I have made your last patch a bit shorter by applying the\n> refactoring of regcomp_auth_token() separately with a few tweaks to\n> the comments.\n\nThanks! 
v2 attached does apply on top of that.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 21 Oct 2022 14:10:37 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "On Fri, Oct 21, 2022 at 02:10:37PM +0200, Drouvot, Bertrand wrote:\n> On 10/21/22 2:58 AM, Michael Paquier wrote:\n>> The same approach with keywords first, regex, and exact match could be\n>> applied as well for the databases? Perhaps it is just mainly a matter\n>> of taste,\n> \n> Yeah, I think it is.\n\n;)\n\nStill it looks that this makes for less confusion with a minimal\nfootprint once the new additions are in place.\n\n>> In the same fashion as load_ident(), it seems to me that we\n>> need two extra things for this patch:\n>> - if !ok (see where we do MemoryContextDelete(hbacxt)), we need to go\n>> through new_parsed_lines and free for each line the AuthTokens for the\n>> database and user lists.\n>> - if ok and new_parsed_lines != NIL, the same cleanup needs to\n>> happen.\n> \n> Right, but I think that should be \"parsed_hba_lines != NIL\".\n\nFor the second case, where we need to free the past contents after a\nsuccess, yes.\n\n> Right. To avoid code duplication in the !ok/ok cases, the function\n> free_hba_line() has been added in v2: it goes through the list of databases\n> and roles tokens and call free_auth_token() for each of them.\n\nHaving a small-ish routine for that is fine.\n\nI have spent a couple of hours doing a pass over v2, playing manually\nwith regex patterns, reloads, the system views and item lists. The\nlogic was fine, but I have adjusted a few things related to the\ncomments and the documentation (particularly with the examples,\nremoving one example and updating one with a regex that has a comma,\nneeding double quotes). The CI and all my machines were green, and\nthe test coverage looked sufficient. So, applied. I'll keep an eye\non the buildfarm.\n--\nMichael",
"msg_date": "Mon, 24 Oct 2022 12:34:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nOn 10/24/22 5:34 AM, Michael Paquier wrote:\n> On Fri, Oct 21, 2022 at 02:10:37PM +0200, Drouvot, Bertrand wrote:\n>> On 10/21/22 2:58 AM, Michael Paquier wrote:\n> \n> I have spent a couple of hours doing a pass over v2, playing manually\n> with regex patterns, reloads, the system views and item lists. The\n> logic was fine, but I have adjusted a few things related to the\n> comments and the documentation (particularly with the examples,\n> removing one example and updating one with a regex that has a comma,\n> needing double quotes). The CI and all my machines were green, and\n> the test coverage looked sufficient. So, applied. \nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 24 Oct 2022 12:17:47 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Patch proposal: make use of regular expressions for the username\n in pg_hba.conf"
}
] |
[
{
"msg_contents": "Hi,\n\nAt function parallel_vacuum_process_all_indexes there is\na typo with a logical connector.\n\nI think that correct is &&, because both of the operators are\nbool types [1].\n\nAs a result, parallel vacuum workers can be incorrectly enabled.\n\nAttached a trivial fix.\n\nregards,\nRanier Vilela\n\n[1]\nhttps://wiki.sei.cmu.edu/confluence/display/c/EXP46-C.+Do+not+use+a+bitwise+operator+with+a+Boolean-like+operand",
"msg_date": "Fri, 19 Aug 2022 09:04:57 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 5:35 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Hi,\n>\n> At function parallel_vacuum_process_all_indexes there is\n> a typo with a logical connector.\n>\n> I think that correct is &&, because both of the operators are\n> bool types [1].\n>\n> As a result, parallel vacuum workers can be incorrectly enabled.\n>\n> Attached a trivial fix.\n>\n> [1] https://wiki.sei.cmu.edu/confluence/display/c/EXP46-C.+Do+not+use+a+bitwise+operator+with+a+Boolean-like+operand\n\nGood catch! Patch LGTM.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Fri, 19 Aug 2022 17:39:37 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 5:40 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 5:35 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > At function parallel_vacuum_process_all_indexes there is\n> > a typo with a logical connector.\n> >\n> > I think that correct is &&, because both of the operators are\n> > bool types [1].\n> >\n> > As a result, parallel vacuum workers can be incorrectly enabled.\n> >\n> > Attached a trivial fix.\n> >\n> > [1] https://wiki.sei.cmu.edu/confluence/display/c/EXP46-C.+Do+not+use+a+bitwise+operator+with+a+Boolean-like+operand\n>\n> Good catch! Patch LGTM.\n>\n\n+1. This looks fine to me as well. I'll take care of this early next\nweek unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 19 Aug 2022 17:43:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> At function parallel_vacuum_process_all_indexes there is\n> a typo with a logical connector.\n> I think that correct is &&, because both of the operators are\n> bool types [1].\n> As a result, parallel vacuum workers can be incorrectly enabled.\n\nSince they're bools, the C spec requires them to promote to integer\n0 or 1, therefore the & operator will yield the desired result.\nSo there's not going to be any incorrect behavior. Nonetheless,\nI agree that && would be better, because it would short-circuit\nthe evaluation of parallel_vacuum_index_is_parallel_safe() when\nthere's no need.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 19 Aug 2022 09:28:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "Em sex., 19 de ago. de 2022 às 10:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > At function parallel_vacuum_process_all_indexes there is\n> > a typo with a logical connector.\n> > I think that correct is &&, because both of the operators are\n> > bool types [1].\n> > As a result, parallel vacuum workers can be incorrectly enabled.\n>\n> Since they're bools, the C spec requires them to promote to integer\n> 0 or 1, therefore the & operator will yield the desired result.\n> So there's not going to be any incorrect behavior.\n\nIt seems that you are right.\n\n#include <stdio.h>\n\n#ifdef __cplusplus\nextern \"C\" {\n#endif\n\nint main()\n{\n bool op1 = false;\n bool op2 = true;\n bool band;\n bool cand;\n\n band = op1 & op2;\n printf(\"res=%d\\n\", band);\n\n cand = op1 && op2;\n printf(\"res=%d\\n\", cand);\n}\n\n#ifdef __cplusplus\n}\n#endif\n\nresults:\nres=0\nres=0\n\nSo, my assumption is incorrect.\n\nregards,\nRanier Vilela\n\nEm sex., 19 de ago. de 2022 às 10:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> At function parallel_vacuum_process_all_indexes there is\n> a typo with a logical connector.\n> I think that correct is &&, because both of the operators are\n> bool types [1].\n> As a result, parallel vacuum workers can be incorrectly enabled.\n\nSince they're bools, the C spec requires them to promote to integer\n0 or 1, therefore the & operator will yield the desired result.\nSo there's not going to be any incorrect behavior.It seems that you are right.#include <stdio.h>#ifdef __cplusplusextern \"C\" {#endifint main(){ bool op1 = false; bool op2 = true; bool band; bool cand; band = op1 & op2; printf(\"res=%d\\n\", band); cand = op1 && op2; printf(\"res=%d\\n\", cand);}#ifdef __cplusplus}#endif results:\nres=0 res=0 So, my assumption is incorrect.regards,Ranier Vilela",
"msg_date": "Fri, 19 Aug 2022 11:15:33 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 7:45 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sex., 19 de ago. de 2022 às 10:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > At function parallel_vacuum_process_all_indexes there is\n>> > a typo with a logical connector.\n>> > I think that correct is &&, because both of the operators are\n>> > bool types [1].\n>> > As a result, parallel vacuum workers can be incorrectly enabled.\n>>\n>> Since they're bools, the C spec requires them to promote to integer\n>> 0 or 1, therefore the & operator will yield the desired result.\n>> So there's not going to be any incorrect behavior.\n>\n>\n> So, my assumption is incorrect.\n>\n\nRight, but as Tom pointed it is still better to change this. However,\nI am not sure if we should backpatch this to PG15 as this won't lead\nto any incorrect behavior.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 20 Aug 2022 09:32:57 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Right, but as Tom pointed it is still better to change this. However,\n> I am not sure if we should backpatch this to PG15 as this won't lead\n> to any incorrect behavior.\n\nIf that code only exists in HEAD and v15 then I'd backpatch.\nIt's a very low-risk change and it might avoid merge problems\nfor future backpatches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Aug 2022 00:34:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "Em sáb., 20 de ago. de 2022 às 01:03, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Fri, Aug 19, 2022 at 7:45 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em sex., 19 de ago. de 2022 às 10:28, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >>\n> >> Ranier Vilela <ranier.vf@gmail.com> writes:\n> >> > At function parallel_vacuum_process_all_indexes there is\n> >> > a typo with a logical connector.\n> >> > I think that correct is &&, because both of the operators are\n> >> > bool types [1].\n> >> > As a result, parallel vacuum workers can be incorrectly enabled.\n> >>\n> >> Since they're bools, the C spec requires them to promote to integer\n> >> 0 or 1, therefore the & operator will yield the desired result.\n> >> So there's not going to be any incorrect behavior.\n> >\n> >\n> > So, my assumption is incorrect.\n> >\n>\n> Right, but as Tom pointed it is still better to change this.\n\nSorry, I expressed myself badly.\nAs Tom pointed out, It's not a bug, as I stated in the first post.\nBut even if it wasn't a small performance improvement, by avoiding the\nfunction call.\nThe correct thing is to use logical connectors (&& ||) with boolean\noperands.\n\n\n\n> However,\n> I am not sure if we should backpatch this to PG15 as this won't lead\n> to any incorrect behavior.\n>\n+1 for backpath to PG15, too.\nIt's certainly a safe change.\n\nregards,\nRanier Vilela\n\nEm sáb., 20 de ago. de 2022 às 01:03, Amit Kapila <amit.kapila16@gmail.com> escreveu:On Fri, Aug 19, 2022 at 7:45 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sex., 19 de ago. 
de 2022 às 10:28, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > At function parallel_vacuum_process_all_indexes there is\n>> > a typo with a logical connector.\n>> > I think that correct is &&, because both of the operators are\n>> > bool types [1].\n>> > As a result, parallel vacuum workers can be incorrectly enabled.\n>>\n>> Since they're bools, the C spec requires them to promote to integer\n>> 0 or 1, therefore the & operator will yield the desired result.\n>> So there's not going to be any incorrect behavior.\n>\n>\n> So, my assumption is incorrect.\n>\n\nRight, but as Tom pointed it is still better to change this.\nSorry, I expressed myself badly.As Tom pointed out, It's not a bug, as I stated in the first post.But even if it wasn't a small performance improvement, by avoiding the function call.The correct thing is to use logical connectors (&& ||) with boolean operands. However,\nI am not sure if we should backpatch this to PG15 as this won't lead\nto any incorrect behavior.+1 for backpath to PG15, too.It's certainly a safe change.regards,Ranier Vilela",
"msg_date": "Sat, 20 Aug 2022 11:07:59 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Right, but as Tom pointed it is still better to change this. However,\n> > I am not sure if we should backpatch this to PG15 as this won't lead\n> > to any incorrect behavior.\n>\n> If that code only exists in HEAD and v15 then I'd backpatch.\n> It's a very low-risk change and it might avoid merge problems\n> for future backpatches.\n>\n\nOkay, done that way. Thanks!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 10:11:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
},
{
"msg_contents": "Em seg., 22 de ago. de 2022 às 01:42, Amit Kapila <amit.kapila16@gmail.com>\nescreveu:\n\n> On Sat, Aug 20, 2022 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > Right, but as Tom pointed it is still better to change this. However,\n> > > I am not sure if we should backpatch this to PG15 as this won't lead\n> > > to any incorrect behavior.\n> >\n> > If that code only exists in HEAD and v15 then I'd backpatch.\n> > It's a very low-risk change and it might avoid merge problems\n> > for future backpatches.\n> >\n>\n> Okay, done that way. Thanks!\n>\nThank you.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 22 Aug 2022 08:19:22 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo with logical connector\n (src/backend/commands/vacuumparallel.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nWalSnd structure mutex is being used to protect all the variables of\nthat structure, not just 'variables shown above' [1]. A tiny patch\nattached to fix the comment.\n\nThoughts?\n\n[1]\ndiff --git a/src/include/replication/walsender_private.h\nb/src/include/replication/walsender_private.h\nindex c14888e493..9c61f92c44 100644\n--- a/src/include/replication/walsender_private.h\n+++ b/src/include/replication/walsender_private.h\n@@ -65,7 +65,7 @@ typedef struct WalSnd\n */\n int sync_standby_priority;\n\n- /* Protects shared variables shown above. */\n+ /* Protects shared variables in this structure. */\n slock_t mutex;\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Fri, 19 Aug 2022 17:40:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix a comment in WalSnd structure"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 05:40:40PM +0530, Bharath Rupireddy wrote:\n> WalSnd structure mutex is being used to protect all the variables of\n> that structure, not just 'variables shown above' [1]. A tiny patch\n> attached to fix the comment.\n\nYep, walsender.c tells the same story, aka that replyTime and latch\nare updated with the spinlock taken. I'll go update the comment, and\nyour suggestion sounds fine to me.\n--\nMichael",
"msg_date": "Mon, 22 Aug 2022 09:22:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix a comment in WalSnd structure"
}
] |
[
{
"msg_contents": "Hi,\nCurrently errdetail_busy_db() only shows the number of other sessions using\nthe database but doesn't give any detail about them.\nFor one of the customers,pg_stat_activity is showing lower number of\nconnections compared to the number revealed in the error message.\n\nLooking at CountOtherDBBackends(), it seems proc->pid is available\nwhen nbackends is incremented.\n\nI want to poll the community on whether including proc->pid's in the error\nmessage would be useful for troubleshooting.\n\nThanks",
"msg_date": "Fri, 19 Aug 2022 10:10:43 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "including pid's for `There are XX other sessions using the database`"
},
{
"msg_contents": "On Fri, Aug 19, 2022, at 2:10 PM, Zhihong Yu wrote:\n> I want to poll the community on whether including proc->pid's in the error message would be useful for troubleshooting.\nSuch message is only useful for a parameter into a pg_stat_activity query. You\ndon't need the PID list if you already have the most important information:\ndatabase name. I don't think revealing the current session PIDs from the\ndatabase you want to drop will buy you anything. It could be a long list and it\ndoes not help you to solve the issue: why wasn't that database removed?\n \nBesides that, if you know that there is a possibility that a connection is\nopen, you can always use the FORCE option. The old/other alternative is to use\na query like\n \n  SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'foo';\n \n(possibly combined with a REVOKE CONNECT or pg_hba.conf modification) before\nexecuting DROP DATABASE.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/",
"msg_date": "Sat, 20 Aug 2022 01:31:01 -0300",
"msg_from": "\"Euler Taveira\" <euler@eulerto.com>",
"msg_from_op": false,
"msg_subject": "Re: including pid's for `There are XX other sessions using the\n database`"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 9:31 PM Euler Taveira <euler@eulerto.com> wrote:\n\n> On Fri, Aug 19, 2022, at 2:10 PM, Zhihong Yu wrote:\n>\n> I want to poll the community on whether including proc->pid's in the error\n> message would be useful for troubleshooting.\n>\n> Such message is only useful for a parameter into a pg_stat_activity query.\n> You\n> don't need the PID list if you already have the most important information:\n> database name. I don't think revealing the current session PIDs from the\n> database you want to drop will buy you anything. It could be a long list\n> and it\n> does not help you to solve the issue: why wasn't that database removed?\n>\n> Besides that, if you know that there is a possibility that a connection is\n> open, you can always use the FORCE option. The old/other alternative is to\n> use\n> a query like\n>\n>     SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname =\n> 'foo';\n>\n> (possibly combined with a REVOKE CONNECT or pg_hba.conf modification)\n> before\n> executing DROP DATABASE.\n>\n>\n> --\n> Euler Taveira\n> EDB https://www.enterprisedb.com/\n>\n>\nThanks for responding.\n\nSince pg_stat_activity shows fewer number of connections compared to the\nnumber revealed in the error message,\nI am not sure the above query would terminate all connections for the\ndatabase to be dropped.",
"msg_date": "Sat, 20 Aug 2022 02:52:29 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: including pid's for `There are XX other sessions using the\n database`"
},
{
"msg_contents": "Hi,\n\nOn Sat, Aug 20, 2022 at 02:52:29AM -0700, Zhihong Yu wrote:\n> On Fri, Aug 19, 2022 at 9:31 PM Euler Taveira <euler@eulerto.com> wrote:\n>\n> >\n> Thanks for responding.\n>\n> Since pg_stat_activity shows fewer number of connections compared to the\n> number revealed in the error message,\n> I am not sure the above query would terminate all connections for the\n> database to be dropped.\n\nHow exactly are you checking pg_stat_activity?  If you query that view right\nafter a failed attempt at dropping a database, there's no guarantee to find the\nexact same connections on the target database as clients might connect or\ndisconnect.\n\nIf you prevent any further connection by e.g. tweaking the pg_hba.conf then you\nhave a guarantee that the query will terminate all conflicting connections.\nUsing the FORCE option is just a simpler way to do it, as dropdb() starts with\npreventing any new connection on the target database.\n\nOverall, I agree that adding the list of pids to the error message\ndoesn't seem useful.\n\n\n",
"msg_date": "Sun, 21 Aug 2022 21:39:21 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: including pid's for `There are XX other sessions using the\n database`"
},
{
"msg_contents": "On Sun, Aug 21, 2022 at 6:39 AM Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> Hi,\n>\n> On Sat, Aug 20, 2022 at 02:52:29AM -0700, Zhihong Yu wrote:\n> > On Fri, Aug 19, 2022 at 9:31 PM Euler Taveira <euler@eulerto.com> wrote:\n> >\n> > >\n> > Thanks for responding.\n> >\n> > Since pg_stat_activity shows fewer number of connections compared to the\n> > number revealed in the error message,\n> > I am not sure the above query would terminate all connections for the\n> > database to be dropped.\n>\n> How exactly are you checking pg_stat_activity?  If you query that view\n> right\n> after a failed attempt at dropping a database, there's no guarantee to\n> find the\n> exact same connections on the target database as client might connect or\n> disconnect.\n>\n> If you prevent any further connection by e.g. tweaking the pg_hba.conf\n> then you\n> have a guarantee that the query will terminate all conflicting connections.\n> Using the FORCE option is just a simpler way to do it, as dropdb() starts\n> with\n> preventing any new connection on the target database.\n>\n> Overall, I agree that adding the list of pid to the message error message\n> doesn't seem useful.\n>\n\nThanks for the comments, Euler and Julien.",
"msg_date": "Sun, 21 Aug 2022 07:05:52 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: including pid's for `There are XX other sessions using the\n database`"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThis is a follow-up for recent changes that optimized [sub]xip lookups in\nXidInMVCCSnapshot() on Intel hardware [0] [1]. I've attached a patch that\nuses ARM Advanced SIMD (Neon) intrinsic functions where available to speed\nup the search. The approach is nearly identical to the SSE2 version, and\nthe usual benchmark [2] shows similar improvements.\n\n writers head simd\n 8 866 836\n 16 849 833\n 32 782 822\n 64 846 833\n 128 805 821\n 256 722 739\n 512 529 674\n 768 374 608\n 1024 268 522\n\nI've tested the patch on a recent macOS (M1 Pro) and Amazon Linux\n(Graviton2), and I've confirmed that the instructions aren't used on a\nLinux/Intel machine. I did add a new configure check to see if the\nrelevant intrinsics are available, but I didn't add a runtime check like\nthere is for the CRC instructions since the compilers I used support these\nintrinsics by default. (I don't think a runtime check would work very well\nwith the inline function, anyway.) AFAICT these intrinsics are pretty\nstandard on aarch64, although IIUC the spec indicates that they are\ntechnically optional. I suspect that a simple check for \"aarch64\" would be\nsufficient, but I haven't investigated the level of compiler support yet.\n\nThoughts?\n\n[0] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b6ef167\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=37a6e5d\n[2] https://postgr.es/m/057a9a95-19d2-05f0-17e2-f46ff20e9b3e@2ndquadrant.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 19 Aug 2022 13:08:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-19 13:08:29 -0700, Nathan Bossart wrote:\n> I've tested the patch on a recent macOS (M1 Pro) and Amazon Linux\n> (Graviton2), and I've confirmed that the instructions aren't used on a\n> Linux/Intel machine.  I did add a new configure check to see if the\n> relevant intrinsics are available, but I didn't add a runtime check like\n> there is for the CRC instructions since the compilers I used support these\n> intrinsics by default.  (I don't think a runtime check would work very well\n> with the inline function, anyway.)  AFAICT these intrinsics are pretty\n> standard on aarch64, although IIUC the spec indicates that they are\n> technically optional.  I suspect that a simple check for \"aarch64\" would be\n> sufficient, but I haven't investigated the level of compiler support yet.\n\nAre you sure there's not an appropriate define for us to use here instead of a\nconfigure test? E.g.\n\necho|cc -dM -P -E -|grep -iE 'arm|aarch'\n...\n#define __AARCH64_SIMD__ 1\n...\n#define __ARM_NEON 1\n#define __ARM_NEON_FP 0xE\n#define __ARM_NEON__ 1\n..\n\nIt strikes me as non-scalable to explicitly test all the simd instructions we'd\nuse.\n\n\nThe story for the CRC checks is different because those instructions often\naren't available with the default compilation flags and aren't guaranteed to\nbe available at runtime.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Fri, 19 Aug 2022 14:26:02 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Fri, Aug 19, 2022 at 02:26:02PM -0700, Andres Freund wrote:\n> Are you sure there's not an appropriate define for us to use here instead of a\n> configure test? E.g.\n> \n> echo|cc -dM -P -E -|grep -iE 'arm|aarch'\n> ...\n> #define __AARCH64_SIMD__ 1\n> ...\n> #define __ARM_NEON 1\n> #define __ARM_NEON_FP 0xE\n> #define __ARM_NEON__ 1\n> ..\n> \n> I strikes me as non-scalable to explicitly test all the simd instructions we'd\n> use.\n\nThanks for the pointer. GCC, Clang, and the Arm compiler all seem to\ndefine __ARM_NEON, so here is a patch that uses that instead.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 19 Aug 2022 15:28:14 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 5:28 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Fri, Aug 19, 2022 at 02:26:02PM -0700, Andres Freund wrote:\n> > Are you sure there's not an appropriate define for us to use here instead of a\n> > configure test? E.g.\n> >\n> > echo|cc -dM -P -E -|grep -iE 'arm|aarch'\n> > ...\n> > #define __AARCH64_SIMD__ 1\n> > ...\n> > #define __ARM_NEON 1\n> > #define __ARM_NEON_FP 0xE\n> > #define __ARM_NEON__ 1\n> > ..\n> >\n> > I strikes me as non-scalable to explicitly test all the simd instructions we'd\n> > use.\n>\n> Thanks for the pointer. GCC, Clang, and the Arm compiler all seem to\n> define __ARM_NEON, so here is a patch that uses that instead.\n\nIs this also ever defined on 32-bit? If so, is it safe, meaning the\ncompiler will not emit these instructions without additional flags?\nI'm wondering if __aarch64__ would be clearer on that, and if we get\nwindows-on-arm support as has been proposed, could also add _M_ARM64.\n\nI also see #if defined(__aarch64__) || defined(__aarch64) in our\ncodebase already, but I'm not sure what recognizes the latter.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 22 Aug 2022 11:50:35 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 11:50:35AM +0700, John Naylor wrote:\n> On Sat, Aug 20, 2022 at 5:28 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Thanks for the pointer. GCC, Clang, and the Arm compiler all seem to\n>> define __ARM_NEON, so here is a patch that uses that instead.\n> \n> Is this also ever defined on 32-bit? If so, is it safe, meaning the\n> compiler will not emit these instructions without additional flags?\n> I'm wondering if __aarch64__ would be clearer on that, and if we get\n> windows-on-arm support as has been proposed, could also add _M_ARM64.\n\nI haven't been able to enable __ARM_NEON on 32-bit, but if it is somehow\npossible, we should probably add an __aarch64__ check since functions like\nvmaxvq_u32() do not appear to be available on 32-bit. I have been able to\ncompile for __aarch64__ without __ARM_NEON, so it might still be a good\nidea to check for __ARM_NEON. So, to be safe, perhaps we should use\nsomething like the following:\n\n\t#if (defined(__aarch64__) || defined(__aarch64)) && defined(__ARM_NEON)\n\n> I also see #if defined(__aarch64__) || defined(__aarch64) in our\n> codebase already, but I'm not sure what recognizes the latter.\n\nI'm not sure what uses the latter, either.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:15:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 4:15 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Aug 22, 2022 at 11:50:35AM +0700, John Naylor wrote:\n\n> > Is this also ever defined on 32-bit? If so, is it safe, meaning the\n> > compiler will not emit these instructions without additional flags?\n> > I'm wondering if __aarch64__ would be clearer on that, and if we get\n> > windows-on-arm support as has been proposed, could also add _M_ARM64.\n>\n> I haven't been able to enable __ARM_NEON on 32-bit, but if it is somehow\n> possible, we should probably add an __aarch64__ check since functions like\n> vmaxvq_u32() do not appear to be available on 32-bit. I have been able to\n> compile for __aarch64__ without __ARM_NEON, so it might still be a good\n> idea to check for __ARM_NEON.\n\nThe important thing is: if we compile with __aarch64__ as a target:\n- Will the compiler emit the intended instructions from the intrinsics\nwithout extra flags?\n- Can a user on ARM64 ever get a runtime fault if the machine attempts\nto execute NEON instructions? \"I have been able to compile for\n__aarch64__ without __ARM_NEON\" doesn't really answer that question --\nwhat exactly did this entail?\n\n> > I also see #if defined(__aarch64__) || defined(__aarch64) in our\n> > codebase already, but I'm not sure what recognizes the latter.\n>\n> I'm not sure what uses the latter, either.\n\nI took a quick look around at Debian code search, *BSD, Apple, and a\nfew other places, and I can't find it. Then, I looked at the\ndiscussions around commit 5c7603c318872a42e \"Add ARM64 (aarch64)\nsupport to s_lock.h\", and the proposed patch [1] only had __aarch64__\n. When it was committed, the platform was vaporware and I suppose we\nincluded \"__aarch64\" as a prophylactic measure because no other reason\nwas given. 
It doesn't seem to exist anywhere, so unless someone can\ndemonstrate otherwise, I'm going to rip it out soon.\n\n[1] https://www.postgresql.org/message-id/flat/1368448758.23422.12.camel%40t520.redhat.com\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 11:07:03 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 11:07:03AM +0700, John Naylor wrote:\n> The important thing is: if we compile with __aarch64__ as a target:\n> - Will the compiler emit the intended instructions from the intrinsics\n> without extra flags?\n\nMy testing with GCC and Clang did not require any extra flags. GCC appears\nto enable it by default for aarch64 [0]. AFAICT this is the case for Clang\nas well, but that is based on the code and my testing (I couldn't find any\ndocumentation for this).\n\n> - Can a user on ARM64 ever get a runtime fault if the machine attempts\n> to execute NEON instructions?\n\nIIUC yes, although I'm not sure how likely it is in practice.\n\n> \"I have been able to compile for\n> __aarch64__ without __ARM_NEON\" doesn't really answer that question --\n> what exactly did this entail?\n\nCompiling with something like -march=armv8-a+nosimd prevents defining\n__ARM_NEON. Interestingly, Clang still defines __ARM_NEON__ even when\n+nosimd is specified.\n\n> I took a quick look around at Debian code search, *BSD, Apple, and a\n> few other places, and I can't find it. Then, I looked at the\n> discussions around commit 5c7603c318872a42e \"Add ARM64 (aarch64)\n> support to s_lock.h\", and the proposed patch [1] only had __aarch64__\n> . When it was committed, the platform was vaporware and I suppose we\n> included \"__aarch64\" as a prophylactic measure because no other reason\n> was given. It doesn't seem to exist anywhere, so unless someone can\n> demonstrate otherwise, I'm going to rip it out soon.\n\nThis is what I found, too, so +1. I've attached a patch for this.\n\n[0] https://gcc.gnu.org/onlinedocs/gcc/AArch64-Options.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 24 Aug 2022 11:01:11 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 1:01 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Aug 24, 2022 at 11:07:03AM +0700, John Naylor wrote:\n> > The important thing is: if we compile with __aarch64__ as a target:\n> > - Will the compiler emit the intended instructions from the intrinsics\n> > without extra flags?\n>\n> My testing with GCC and Clang did not require any extra flags. GCC appears\n> to enable it by default for aarch64 [0]. AFAICT this is the case for Clang\n> as well, but that is based on the code and my testing (I couldn't find any\n> documentation for this).\n\nI guess you meant this part: \"‘simd’ Enable Advanced SIMD\ninstructions. This also enables floating-point instructions. This is\non by default for all possible values for options -march and -mcpu.\"\n\n> > - Can a user on ARM64 ever get a runtime fault if the machine attempts\n> > to execute NEON instructions?\n>\n> IIUC yes, although I'm not sure how likely it is in practice.\n\nGiven the quoted part above, it doesn't seem likely, but we should try\nto find out for sure, because a runtime fault is surely not acceptable\neven on a toy system.\n\n> > \"I have been able to compile for\n> > __aarch64__ without __ARM_NEON\" doesn't really answer that question --\n> > what exactly did this entail?\n>\n> Compiling with something like -march=armv8-a+nosimd prevents defining\n> __ARM_NEON.\n\nOkay, that's unsurprising.\n\n> Interestingly, Clang still defines __ARM_NEON__ even when\n> +nosimd is specified.\n\nPOLA violation, but if no one has complained to them, it's a good bet\nthe instructions are always available.\n\n> > I took a quick look around at Debian code search, *BSD, Apple, and a\n> > few other places, and I can't find it. Then, I looked at the\n> > discussions around commit 5c7603c318872a42e \"Add ARM64 (aarch64)\n> > support to s_lock.h\", and the proposed patch [1] only had __aarch64__\n> > . 
When it was committed, the platform was vaporware and I suppose we\n> > included \"__aarch64\" as a prophylactic measure because no other reason\n> > was given. It doesn't seem to exist anywhere, so unless someone can\n> > demonstrate otherwise, I'm going to rip it out soon.\n>\n> This is what I found, too, so +1. I've attached a patch for this.\n\nThanks, I'll push this soon. I wondered if the same reasoning applies\nto __arm__ / __arm nowadays, but a quick search does indicate that\n__arm exists (existed?), at least.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:38:34 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 10:38:34AM +0700, John Naylor wrote:\n> On Thu, Aug 25, 2022 at 1:01 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Aug 24, 2022 at 11:07:03AM +0700, John Naylor wrote:\n>> > - Can a user on ARM64 ever get a runtime fault if the machine attempts\n>> > to execute NEON instructions?\n>>\n>> IIUC yes, although I'm not sure how likely it is in practice.\n> \n> Given the quoted part above, it doesn't seem likely, but we should try\n> to find out for sure, because a runtime fault is surely not acceptable\n> even on a toy system.\n\nThe ARM literature appears to indicate that Neon support is pretty standard\non aarch64, and AFAICT it's pretty common to just assume it's available.\nAs originally suspected, I believe that simply checking for __aarch64__\nwould be sufficient, but I don't think it would be unreasonable to also\ncheck for __ARM_NEON to be safe.\n\n>> Interestingly, Clang still defines __ARM_NEON__ even when\n>> +nosimd is specified.\n> \n> POLA violation, but if no one has complained to them, it's a good bet\n> the instructions are always available.\n\nSorry, I should've been more specific. In my testing, I could include or\nomit __ARM_NEON using +[no]simd, but __ARM_NEON__ (with two underscores at\nthe end) was always there. My brief research seems to indicate this might\nbe unique to Darwin, but in the end, it looks like __ARM_NEON (without the\ntrailing underscores) is the most widely used.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 21:57:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 11:57 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Thu, Aug 25, 2022 at 10:38:34AM +0700, John Naylor wrote:\n> > On Thu, Aug 25, 2022 at 1:01 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> On Wed, Aug 24, 2022 at 11:07:03AM +0700, John Naylor wrote:\n> >> > - Can a user on ARM64 ever get a runtime fault if the machine attempts\n> >> > to execute NEON instructions?\n> >>\n> >> IIUC yes, although I'm not sure how likely it is in practice.\n> >\n> > Given the quoted part above, it doesn't seem likely, but we should try\n> > to find out for sure, because a runtime fault is surely not acceptable\n> > even on a toy system.\n>\n> The ARM literature appears to indicate that Neon support is pretty standard\n> on aarch64, and AFAICT it's pretty common to just assume it's available.\n\nThis doesn't exactly rise to the level of \"find out for sure\", so I\nwent looking myself. This is the language I found [1]:\n\n\"Both floating-point and NEON are required in all standard ARMv8\nimplementations. However, implementations targeting specialized\nmarkets may support the following combinations:\n\nNo NEON or floating-point.\nFull floating-point and SIMD support with exception trapping.\nFull floating-point and SIMD support without exception trapping.\"\n\nSince we assume floating-point, I see no reason not to assume NEON,\nbut a case could be made for documenting that we require NEON on\naarch64, in addition to exception trapping (for CRC runtime check) and\nfloating point on any Arm. Or even just say \"standard\". I don't\nbelieve anyone will want to run Postgres on specialized hardware\nlacking these features, so maybe it's a moot point.\n\n[1] https://developer.arm.com/documentation/den0024/a/AArch64-Floating-point-and-NEON\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 10:45:10 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 10:45:10AM +0700, John Naylor wrote:\n> On Thu, Aug 25, 2022 at 11:57 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n>> The ARM literature appears to indicate that Neon support is pretty standard\n>> on aarch64, and AFAICT it's pretty common to just assume it's available.\n> \n> This doesn't exactly rise to the level of \"find out for sure\", so I\n> went looking myself. This is the language I found [1]:\n> \n> \"Both floating-point and NEON are required in all standard ARMv8\n> implementations. However, implementations targeting specialized\n> markets may support the following combinations:\n> \n> No NEON or floating-point.\n> Full floating-point and SIMD support with exception trapping.\n> Full floating-point and SIMD support without exception trapping.\"\n\nSorry, I should've linked to the documentation I found. I saw similar\nlanguage in a couple of manuals, which is what led me to the conclusion\nthat Neon support is relatively standard.\n\n> Since we assume floating-point, I see no reason not to assume NEON,\n> but a case could be made for documenting that we require NEON on\n> aarch64, in addition to exception trapping (for CRC runtime check) and\n> floating point on any Arm. Or even just say \"standard\". I don't\n> believe anyone will want to run Postgres on specialized hardware\n> lacking these features, so maybe it's a moot point.\n\nI'm okay with assuming Neon support for now. It's probably easier to add\nthe __ARM_NEON check if/when someone complains than it is to justify\nremoving it once it's there.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 21:51:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "Here is a new patch set that applies on top of v9-0001 in the\njson_lex_string patch set [0] and v3 of the is_valid_ascii patch [1].\n\n[0] https://postgr.es/m/CAFBsxsFV4v802idV0-Bo%3DV7wLMHRbOZ4er0hgposhyGCikmVGA%40mail.gmail.com\n[1] https://postgr.es/m/CAFBsxsFFAZ6acUfyUALiem4DpCW%3DApXbF02zrc0G0oT9CPof0Q%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 25 Aug 2022 23:13:47 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 11:13:47PM -0700, Nathan Bossart wrote:\n> Here is a new patch set that applies on top of v9-0001 in the\n> json_lex_string patch set [0] and v3 of the is_valid_ascii patch [1].\n\nHere is a rebased patch set that applies to HEAD.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 26 Aug 2022 11:24:03 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 1:24 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Here is a rebased patch set that applies to HEAD.\n\n0001:\n\n #define USE_NO_SIMD\n typedef uint64 Vector8;\n+typedef uint64 Vector32;\n #endif\n\nI don't forsee any use of emulating vector registers with uint64 if\nthey only hold two ints. I wonder if it'd be better if all vector32\nfunctions were guarded with #ifndef NO_USE_SIMD. (I wonder if\ndeclarations without definitions cause warnings...)\n\n+ * NB: This function assumes that each lane in the given vector either has all\n+ * bits set or all bits zeroed, as it is mainly intended for use with\n+ * operations that produce such vectors (e.g., vector32_eq()). If this\n+ * assumption is not true, this function's behavior is undefined.\n+ */\n\nHmm?\n\nAlso, is_highbit_set() already has uses same intrinsic and has the\nsame intended effect, since we only care about the boolean result.\n\n0002:\n\n-#elif defined(USE_SSE2)\n+#elif defined(USE_SSE2) || defined(USE_NEON)\n\nI think we can just say #else.\n\n-#if defined(USE_SSE2)\n- __m128i sub;\n+#ifndef USE_NO_SIMD\n+ Vector8 sub;\n\n+#elif defined(USE_NEON)\n+\n+ /* use the same approach as the USE_SSE2 block above */\n+ sub = vqsubq_u8(v, vector8_broadcast(c));\n+ result = vector8_has_zero(sub);\n\nI think we should invent a helper that does saturating subtraction and\ncall that, inlining the sub var so we don't need to mess with it\nfurther.\n\nOtherwise seems fine.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 13:59:06 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "I spent a bit more time researching the portability implications of\nthis patch. I think that we should check __ARM_NEON before #including\n<arm_neon.h>; there is authoritative documentation out there telling\nyou to, eg [1], and I can see no upside at all to not checking.\nWe cannot check *only* __ARM_NEON, though. I found it to get defined\nby clang 8.0.0 in my Fedora 30 32-bit image, although that does not\nprovide all the instructions we want (I see \"undefined function\"\ncomplaints for vmaxvq_u8 etc if I try to make it use the patch).\nLooking into that installation's <arm_neon.h>, those functions are\ndefined conditionally if \"__ARM_FP & 2\", which is kind of interesting\n--- per [1], that bit indicates support for 16-bit floating point,\nwhich seems a mite unrelated.\n\nIt appears from the info at [2] that there are at least some 32-bit\nARM platforms that set that bit, implying (if the clang authors are\nwell informed) that they have the instructions we want. But we\ncould not realistically make 32-bit builds that try to use those\ninstructions without a run-time test; such a build would fail for\ntoo many people. I doubt that a run-time test is worth the trouble,\nso I concur with the idea of selecting NEON on aarch64 only and hoping\nto thereby avoid a runtime test.\n\nIn short, I think the critical part of 0002 needs to look more like\nthis:\n\n+#elif defined(__aarch64__) && defined(__ARM_NEON)\n+/*\n+ * We use the Neon instructions if the compiler provides access to them\n+ * (as indicated by __ARM_NEON) and we are on aarch64. While Neon support is\n+ * technically optional for aarch64, it appears that all available 64-bit\n+ * hardware does have it. 
Neon exists in some 32-bit hardware too, but\n+ * we could not realistically use it there without a run-time check,\n+ * which seems not worth the trouble for now.\n+ */\n+#include <arm_neon.h>\n+#define USE_NEON\n...\n\nCoding like this appears to work on both my Apple M1 and my Raspberry\nPi, with several different OSes checked on the latter.\n\n\t\t\tregards, tom lane\n\n[1] https://developer.arm.com/documentation/101754/0618/armclang-Reference/Other-Compiler-specific-Features/Predefined-macros\n[2] http://micro-os-plus.github.io/develop/predefined-macros/\n\n\n",
"msg_date": "Sat, 27 Aug 2022 17:18:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Sat, Aug 27, 2022 at 01:59:06PM +0700, John Naylor wrote:\n> I don't forsee any use of emulating vector registers with uint64 if\n> they only hold two ints. I wonder if it'd be better if all vector32\n> functions were guarded with #ifndef NO_USE_SIMD. (I wonder if\n> declarations without definitions cause warnings...)\n\nYeah. I was a bit worried about the readability of this file with so many\n#ifndefs, but after trying it out, I suppose it doesn't look _too_ bad.\n\n> + * NB: This function assumes that each lane in the given vector either has all\n> + * bits set or all bits zeroed, as it is mainly intended for use with\n> + * operations that produce such vectors (e.g., vector32_eq()). If this\n> + * assumption is not true, this function's behavior is undefined.\n> + */\n> \n> Hmm?\n\nYup. The problem is that AFAICT there's no equivalent to\n_mm_movemask_epi8() on aarch64, so you end up with something like\n\n\tvmaxvq_u8(vandq_u8(v, vector8_broadcast(0x80))) != 0\n\nBut for pg_lfind32(), we really just want to know if any lane is set, which\nonly requires a call to vmaxvq_u32(). I haven't had a chance to look too\nclosely, but my guess is that this ultimately results in an extra AND\noperation in the aarch64 path, so maybe it doesn't impact performance too\nmuch. The other option would be to open-code the intrinsic function calls\ninto pg_lfind.h. I'm trying to avoid the latter, but maybe it's the right\nthing to do for now... 
What do you think?\n\n> -#elif defined(USE_SSE2)\n> +#elif defined(USE_SSE2) || defined(USE_NEON)\n> \n> I think we can just say #else.\n\nYes.\n\n> -#if defined(USE_SSE2)\n> - __m128i sub;\n> +#ifndef USE_NO_SIMD\n> + Vector8 sub;\n> \n> +#elif defined(USE_NEON)\n> +\n> + /* use the same approach as the USE_SSE2 block above */\n> + sub = vqsubq_u8(v, vector8_broadcast(c));\n> + result = vector8_has_zero(sub);\n> \n> I think we should invent a helper that does saturating subtraction and\n> call that, inlining the sub var so we don't need to mess with it\n> further.\n\nGood idea, will do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 15:12:34 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 05:18:34PM -0400, Tom Lane wrote:\n> In short, I think the critical part of 0002 needs to look more like\n> this:\n> \n> +#elif defined(__aarch64__) && defined(__ARM_NEON)\n> +/*\n> + * We use the Neon instructions if the compiler provides access to them\n> + * (as indicated by __ARM_NEON) and we are on aarch64. While Neon support is\n> + * technically optional for aarch64, it appears that all available 64-bit\n> + * hardware does have it. Neon exists in some 32-bit hardware too, but\n> + * we could not realistically use it there without a run-time check,\n> + * which seems not worth the trouble for now.\n> + */\n> +#include <arm_neon.h>\n> +#define USE_NEON\n> ...\n\nThank you for the analysis! I'll do it this way in the next patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 15:15:02 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 10:12 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> Yup. The problem is that AFAICT there's no equivalent to\n> _mm_movemask_epi8() on aarch64, so you end up with something like\n>\n> vmaxvq_u8(vandq_u8(v, vector8_broadcast(0x80))) != 0\n>\n> But for pg_lfind32(), we really just want to know if any lane is set, which\n> only requires a call to vmaxvq_u32(). I haven't had a chance to look too\n> closely, but my guess is that this ultimately results in an extra AND\n> operation in the aarch64 path, so maybe it doesn't impact performance too\n> much. The other option would be to open-code the intrinsic function calls\n> into pg_lfind.h. I'm trying to avoid the latter, but maybe it's the right\n> thing to do for now... What do you think?\n\nAhh, this gives me a flashback to John's UTF-8 validation thread[1]\n(the beginner NEON hackery in there was just a learning exercise,\nsadly not followed up with real patches...). He had\n_mm_movemask_epi8(v) != 0 which I first translated to\nto_bool(bitwise_and(v, vmovq_n_u8(0x80))) and he pointed out that\nvmaxvq_u8(v) > 0x7F has the right effect without the and.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJjyXvS6W05kRVpH6Kng50%3DuOGxyiyjgPKm707JxQYHCg%40mail.gmail.com\n\n\n",
"msg_date": "Sun, 28 Aug 2022 10:39:09 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 10:39:09AM +1200, Thomas Munro wrote:\n> On Sun, Aug 28, 2022 at 10:12 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n>> Yup. The problem is that AFAICT there's no equivalent to\n>> _mm_movemask_epi8() on aarch64, so you end up with something like\n>>\n>> vmaxvq_u8(vandq_u8(v, vector8_broadcast(0x80))) != 0\n>>\n>> But for pg_lfind32(), we really just want to know if any lane is set, which\n>> only requires a call to vmaxvq_u32(). I haven't had a chance to look too\n>> closely, but my guess is that this ultimately results in an extra AND\n>> operation in the aarch64 path, so maybe it doesn't impact performance too\n>> much. The other option would be to open-code the intrinsic function calls\n>> into pg_lfind.h. I'm trying to avoid the latter, but maybe it's the right\n>> thing to do for now... What do you think?\n> \n> Ahh, this gives me a flashback to John's UTF-8 validation thread[1]\n> (the beginner NEON hackery in there was just a learning exercise,\n> sadly not followed up with real patches...). He had\n> _mm_movemask_epi8(v) != 0 which I first translated to\n> to_bool(bitwise_and(v, vmovq_n_u8(0x80))) and he pointed out that\n> vmaxvq_u8(v) > 0x7F has the right effect without the and.\n\nI knew there had to be an easier way! I'll give this a try. Thanks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 16:00:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "Here is a new patch set in which I've attempted to address all feedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 27 Aug 2022 20:58:39 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 10:58 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> Here is a new patch set in which I've attempted to address all feedback.\n\nLooks in pretty good shape. Some more comments:\n\n+ uint32 nelem_per_vector = sizeof(Vector32) / sizeof(uint32);\n+ uint32 nelem_per_iteration = 4 * nelem_per_vector;\n\nUsing local #defines would be my style. I don't have a reason to\nobject to this way, but adding const makes these vars more clear.\nSpeaking of const:\n\n- const __m128i tmp1 = _mm_or_si128(result1, result2);\n- const __m128i tmp2 = _mm_or_si128(result3, result4);\n- const __m128i result = _mm_or_si128(tmp1, tmp2);\n+ tmp1 = vector32_or(result1, result2);\n+ tmp2 = vector32_or(result3, result4);\n+ result = vector32_or(tmp1, tmp2);\n\nAny reason to throw away the const declarations?\n\n+static inline bool\n+vector32_is_highbit_set(const Vector32 v)\n+{\n+#ifdef USE_SSE2\n+ return (_mm_movemask_epi8(v) & 0x8888) != 0;\n+#endif\n+}\n\nI'm not sure why we need this function -- AFAICS it just adds more\nwork on x86 for zero benefit. For our present application, can we just\ncast to Vector8 (for Arm's sake) and call the 8-bit version?\n\nAside from that, I plan on rewriting some comments for commit, some of\nwhich pre-date this patch:\n\n- * operations using bitwise operations on unsigned integers.\n+ * operations using bitwise operations on unsigned integers. Note that many\n+ * of the functions in this file presently do not have non-SIMD\n+ * implementations.\n\nIt's unclear to the reader whether this is a matter of 'round-to-it's.\nI'd like to document what I asserted in this thread, that it's likely\nnot worthwhile to do anything with a uint64 representing two 32-bit\nints. 
(It *is* demonstrably worth it for handling 8 byte-values at a\ntime)\n\n * Use saturating subtraction to find bytes <= c, which will present as\n- * NUL bytes in 'sub'.\n+ * NUL bytes.\n\nI'd like to to point out that the reason to do it this way is to\nworkaround SIMD architectures frequent lack of unsigned comparison.\n\n+ * Return the result of subtracting the respective elements of the input\n+ * vectors using saturation.\n\nI wonder if we should explain briefly what saturating arithmetic is. I\nhad never encountered it outside of a SIMD programming context.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:25:50 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> I wonder if we should explain briefly what saturating arithmetic is. I\n> had never encountered it outside of a SIMD programming context.\n\n+1, it's at least worth a sentence to define the term.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 29 Aug 2022 00:28:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 11:25:50AM +0700, John Naylor wrote:\n> + uint32 nelem_per_vector = sizeof(Vector32) / sizeof(uint32);\n> + uint32 nelem_per_iteration = 4 * nelem_per_vector;\n> \n> Using local #defines would be my style. I don't have a reason to\n> object to this way, but adding const makes these vars more clear.\n\nI added const.\n\n> Speaking of const:\n> \n> - const __m128i tmp1 = _mm_or_si128(result1, result2);\n> - const __m128i tmp2 = _mm_or_si128(result3, result4);\n> - const __m128i result = _mm_or_si128(tmp1, tmp2);\n> + tmp1 = vector32_or(result1, result2);\n> + tmp2 = vector32_or(result3, result4);\n> + result = vector32_or(tmp1, tmp2);\n> \n> Any reason to throw away the const declarations?\n\nThe only reason is because I had to move the declarations to before the\nvector32_load() calls.\n\n> +static inline bool\n> +vector32_is_highbit_set(const Vector32 v)\n> +{\n> +#ifdef USE_SSE2\n> + return (_mm_movemask_epi8(v) & 0x8888) != 0;\n> +#endif\n> +}\n> \n> I'm not sure why we need this function -- AFAICS it just adds more\n> work on x86 for zero benefit. For our present application, can we just\n> cast to Vector8 (for Arm's sake) and call the 8-bit version?\n\nGood idea.\n\n> - * operations using bitwise operations on unsigned integers.\n> + * operations using bitwise operations on unsigned integers. Note that many\n> + * of the functions in this file presently do not have non-SIMD\n> + * implementations.\n> \n> It's unclear to the reader whether this is a matter of 'round-to-it's.\n> I'd like to document what I asserted in this thread, that it's likely\n> not worthwhile to do anything with a uint64 representing two 32-bit\n> ints. 
(It *is* demonstrably worth it for handling 8 byte-values at a\n> time)\n\nDone.\n\n> * Use saturating subtraction to find bytes <= c, which will present as\n> - * NUL bytes in 'sub'.\n> + * NUL bytes.\n> \n> I'd like to to point out that the reason to do it this way is to\n> workaround SIMD architectures frequent lack of unsigned comparison.\n\nDone.\n\n> + * Return the result of subtracting the respective elements of the input\n> + * vectors using saturation.\n> \n> I wonder if we should explain briefly what saturating arithmetic is. I\n> had never encountered it outside of a SIMD programming context.\n\nDone.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 28 Aug 2022 22:44:49 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 12:44 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> [v6]\n\nPushed with a couple comment adjustments, let's see what the build\nfarm thinks...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 14:51:03 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 11:25 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> +static inline bool\n> +vector32_is_highbit_set(const Vector32 v)\n> +{\n> +#ifdef USE_SSE2\n> + return (_mm_movemask_epi8(v) & 0x8888) != 0;\n> +#endif\n> +}\n>\n> I'm not sure why we need this function -- AFAICS it just adds more\n> work on x86 for zero benefit. For our present application, can we just\n> cast to Vector8 (for Arm's sake) and call the 8-bit version?\n\nIt turns out MSVC animal drongo doesn't like this cast -- on x86 they\nare the same underlying type. Will look into that as more results come\nin.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 15:19:22 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 3:19 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> It turns out MSVC animal drongo doesn't like this cast -- on x86 they\n> are the same underlying type. Will look into that as more results come\n> in.\n\nHere's the simplest fix I can think of:\n\n/*\n * Exactly like vector8_is_highbit_set except for the input type, so\nit still looks\n * at each _byte_ separately.\n *\n * XXX x86 uses the same underlying type for vectors with 8-bit,\n16-bit, and 32-bit\n * integer elements, but Arm does not, hence the need for a separate function.\n * We could instead adopt the behavior of Arm's vmaxvq_u32(), i.e. check each\n * 32-bit element, but that would require an additional mask operation on x86.\n */\nstatic inline bool\nvector32_is_highbit_set(const Vector32 v)\n{\n#if defined(USE_NEON)\n return vector8_is_highbit_set((Vector8) v);\n#else\n return vector8_is_highbit_set(v);\n#endif\n}\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 16:28:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 4:28 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> Here's the simplest fix I can think of:\n>\n> /*\n> * Exactly like vector8_is_highbit_set except for the input type, so\n> it still looks\n> * at each _byte_ separately.\n> *\n> * XXX x86 uses the same underlying type for vectors with 8-bit,\n> 16-bit, and 32-bit\n> * integer elements, but Arm does not, hence the need for a separate function.\n> * We could instead adopt the behavior of Arm's vmaxvq_u32(), i.e. check each\n> * 32-bit element, but that would require an additional mask operation on x86.\n> */\n> static inline bool\n> vector32_is_highbit_set(const Vector32 v)\n> {\n> #if defined(USE_NEON)\n> return vector8_is_highbit_set((Vector8) v);\n> #else\n> return vector8_is_highbit_set(v);\n> #endif\n> }\n\nBowerbird just reported the same error, so I went ahead and pushed a\nfix with this.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:49:46 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 05:49:46PM +0700, John Naylor wrote:\n> Bowerbird just reported the same error, so I went ahead and pushed a\n> fix with this.\n\nThanks! I've attached a follow-up patch with a couple of small\nsuggestions.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 29 Aug 2022 10:17:12 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 12:17 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> Thanks! I've attached a follow-up patch with a couple of small\n> suggestions.\n\nPushed, thanks!\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:51:31 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: use ARM intrinsics in pg_lfind32() where available"
}
] |
[
{
"msg_contents": "On Wed, Aug 17, 2022 at 09:54:34AM -0500, Justin Pryzby wrote:\n> But an unfortunate consequence of not fixing the historic issues is that it\n> precludes the possibility that anyone could be expected to notice if they\n> introduce more instances of the same problem (as in the first half of these\n> patches). Then the hole which has already been dug becomes deeper, further\n> increasing the burden of fixing the historic issues before being able to use\n> -Wshadow.\n> \n> The first half of the patches fix shadow variables newly-introduced in v15\n> (including one of my own patches), the rest are fixing the lowest hanging fruit\n> of the \"short list\" from COPT=-Wshadow=compatible-local\n> \n> I can't see that any of these are bugs, but it seems like a good goal to move\n> towards allowing use of the -Wshadow* options to help avoid future errors, as\n> well as cleanliness and readability (rather than allowing it to get harder to\n> use -Wshadow).\n\n+Alvaro\n\nYou wrote:\n\n|commit 86f575948c773b0ec5b0f27066e37dd93a7f0a96\n|Author: Alvaro Herrera <alvherre@alvh.no-ip.org>\n|Date: Fri Mar 23 10:48:22 2018 -0300\n|\n| Allow FOR EACH ROW triggers on partitioned tables\n\nWhich added:\n\n 1\t+ partition_recurse = !isInternal && stmt->row &&\n 2\t+ rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE;\n 3\t...\n 4\t+ if (partition_recurse)\n 5\t...\n 6\t+ List *idxs = NIL;\n 7\t+ List *childTbls = NIL;\n 8\t...\n 9\t+ if (OidIsValid(indexOid))\n 10\t+ {\n 11\t+ ListCell *l;\n 12\t+ List *idxs = NIL;\n 13\t+\n 14\t+ idxs = find_inheritance_children(indexOid, ShareRowExclusiveLock);\n 15\t+ foreach(l, idxs)\n 16\t+ childTbls = lappend_oid(childTbls,\n 17\t+ IndexGetRelation(lfirst_oid(l),\n 18\t+ false));\n 19\t+ }\n 20\t...\n 21\t+ for (i = 0; i < partdesc->nparts; i++)\n 22\t...\n 23\t+ if (OidIsValid(indexOid))\n 24\t...\n 25\t+ forboth(l, idxs, l2, childTbls)\n\nThe inner idxs is set at line 12, but the outer idxs being looped over at line\n25 is still 
NIL, because the variable is shadowed.\n\nThat would be a memory leak or some other bug, except that it also seems to be\ndead code ?\n\nhttps://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html#1166\n\nIs it somwhow possible to call CreateTrigger() to create a FOR EACH ROW\ntrigger, with an index, and not internally ?\n\nThe comments say that a user's CREATE TRIGGER will not have a constraint, so\nwon't have an index.\n\n * constraintOid is zero when\n * executing a user-entered CREATE TRIGGER command.\n *\n+ * indexOid, if nonzero, is the OID of an index associated with the constraint.\n+ * We do nothing with this except store it into pg_trigger.tgconstrindid.\n\nSee also: <20220817145434.GC26426@telsasoft.com>\nRe: shadow variables - pg15 edition\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 19 Aug 2022 16:18:24 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "FOR EACH ROW triggers, on partitioend tables, with indexes?"
},
{
"msg_contents": "On Sat, 20 Aug 2022 at 09:18, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Is it somwhow possible to call CreateTrigger() to create a FOR EACH ROW\n> trigger, with an index, and not internally ?\n\nI've been looking over this and I very much agree that the code looks\nvery broken. As for whether this is dead code or not, I've been\nlooking at that too...\n\nAt trigger.c:1147 we have: if (partition_recurse). partition_recurse\ncan only ever be true if isInternal == false per trigger.c:367's\n\"partition_recurse = !isInternal && stmt->row &&\". isInternal is a\nparameter to the function. Also, the code in question only triggers\nwhen the indexOid parameter is a valid oid. So it should just be a\nmatter of looking for usages of CreateTriggerFiringOn() which pass\nisInternal as false and pass a valid indexOid.\n\nThere seems to be no direct calls doing this, but we do also call this\nfunction via CreateTrigger() and I can see only 1 call to\nCreateTrigger() that passes isInternal as false, but that explicitly\npasses indexOid as InvalidOid, so this code looks very much dead to\nme.\n\nAlvaro, any objections to just ripping this out? aka, the attached.\nI've left an Assert() in there to ensure we notice if we're ever to\nstart calling CreateTriggerFiringOn() with isInternal == false with a\nvalid indexOid.\n\nDavid",
"msg_date": "Thu, 1 Sep 2022 16:19:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: FOR EACH ROW triggers, on partitioend tables, with indexes?"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 04:19:37PM +1200, David Rowley wrote:\n> On Sat, 20 Aug 2022 at 09:18, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Is it somwhow possible to call CreateTrigger() to create a FOR EACH ROW\n> > trigger, with an index, and not internally ?\n> \n> I've been looking over this and I very much agree that the code looks\n> very broken. As for whether this is dead code or not, I've been\n> looking at that too...\n> \n> At trigger.c:1147 we have: if (partition_recurse). partition_recurse\n> can only ever be true if isInternal == false per trigger.c:367's\n> \"partition_recurse = !isInternal && stmt->row &&\". isInternal is a\n> parameter to the function. Also, the code in question only triggers\n> when the indexOid parameter is a valid oid. So it should just be a\n> matter of looking for usages of CreateTriggerFiringOn() which pass\n> isInternal as false and pass a valid indexOid.\n> \n> There seems to be no direct calls doing this, but we do also call this\n> function via CreateTrigger() and I can see only 1 call to\n> CreateTrigger() that passes isInternal as false, but that explicitly\n> passes indexOid as InvalidOid, so this code looks very much dead to\n> me.\n> \n> Alvaro, any objections to just ripping this out? aka, the attached.\n\nIt's possible that extensions or 3rd party code or forks use this, no ?\nIn that case, it might be \"not dead\" ..\n\n> +\t\t * that ever changes then we'll need to quite code here to find the\n\nquite? write? quire? acquire? quine? \n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 1 Sep 2022 00:31:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: FOR EACH ROW triggers, on partitioend tables, with indexes?"
},
{
"msg_contents": "On 2022-Aug-19, Justin Pryzby wrote:\n\n> That would be a memory leak or some other bug, except that it also seems to be\n> dead code ?\n> \n> https://coverage.postgresql.org/src/backend/commands/trigger.c.gcov.html#1166\n> \n> Is it somwhow possible to call CreateTrigger() to create a FOR EACH ROW\n> trigger, with an index, and not internally ?\n\nTBH I don't remember this at all anymore.\n\nSo apparently the way to get a trigger associated with a relation\n(tgconstrrelid) is via CREATE CONSTRAINT TRIGGER, but there doesn't\nappear to be a way to have it associated with a specific *index* on that\nrelation (tgconstrindid). So you're right that it appears to be dead\ncode.\n\nIf the regression tests don't break by removing it, I agree with doing\nthat.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Thu, 1 Sep 2022 10:58:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: FOR EACH ROW triggers, on partitioend tables, with indexes?"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 20:57, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> So apparently the way to get a trigger associated with a relation\n> (tgconstrrelid) is via CREATE CONSTRAINT TRIGGER, but there doesn't\n> appear to be a way to have it associated with a specific *index* on that\n> relation (tgconstrindid). So you're right that it appears to be dead\n> code.\n>\n> If the regression tests don't break by removing it, I agree with doing\n> that.\n\nThanks for having a look here. Yeah, it was a while ago.\n\nI've pushed a patch to remove the dead code from master. I don't quite\nsee the sense in removing it in the back branches.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:53:46 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: FOR EACH ROW triggers, on partitioend tables, with indexes?"
}
] |
[
{
"msg_contents": "Hello\n\n\n\nI noticed that sslinfo extension does not have functions to return current client certificate's notbefore and notafter timestamps which are also quite important attributes in a X509 certificate. The attached patch adds 2 functions to get notbefore and notafter timestamps from the currently connected client certificate.\n\n\n\nthank you!\n\n\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca",
"msg_date": "Fri, 19 Aug 2022 16:00:41 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 20 Aug 2022, at 01:00, Cary Huang <cary.huang@highgo.ca> wrote:\n\n> I noticed that sslinfo extension does not have functions to return current client certificate's notbefore and notafter timestamps which are also quite important attributes in a X509 certificate. The attached patch adds 2 functions to get notbefore and notafter timestamps from the currently connected client certificate.\n\nOff the cuff that doesn't seem like a bad idea, but I wonder if we should add\nthem to pg_stat_ssl (or both) instead if we deem them valuable?\n\nRe the patch, it would be nice to move the logic in ssl_client_get_notafter and\nthe _notbefore counterpart to a static function since they are copies of\neachother.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Sat, 20 Aug 2022 13:02:01 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> Off the cuff that doesn't seem like a bad idea, but I wonder if we should add\n > them to pg_stat_ssl (or both) instead if we deem them valuable?\n\nI think the same information should be available to pg_stat_ssl as well. pg_stat_ssl can show the client certificate information for all connecting clients, having it to show not_before and not_after timestamps of every client are important in my opinion. The attached patch \"v2-0002-pg-stat-ssl-add-notbefore-and-notafter-timestamps.patch\" adds this support\n \n > Re the patch, it would be nice to move the logic in ssl_client_get_notafter and\n > the _notbefore counterpart to a static function since they are copies of\n > eachother.\n\nYes agreed. I have made the adjustment attached as \"v2-0001-sslinfo-add-notbefore-and-notafter-timestamps.patch\"\n\nwould this feature be suitable to be added to commitfest? What do you think?\n\nthank you\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Fri, 23 Jun 2023 13:10:22 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 23 Jun 2023, at 22:10, Cary Huang <cary.huang@highgo.ca> wrote:\n\n> would this feature be suitable to be added to commitfest? What do you think?\n\nYes, please add it to the July commitfest and feel free to set me as Reviewer,\nI intend to take a look at it.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 23 Jun 2023 22:23:04 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": " > Yes, please add it to the July commitfest and feel free to set me as Reviewer,\n > I intend to take a look at it.\n\nThank you Daniel, I have added this patch to July commitfest under security category and added you as reviewer. \n\nbest regards\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca\n\n\n\n",
"msg_date": "Fri, 23 Jun 2023 14:31:11 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 23 Jun 2023, at 22:10, Cary Huang <cary.huang@highgo.ca> wrote:\n\n>> Off the cuff that doesn't seem like a bad idea, but I wonder if we should add\n>> them to pg_stat_ssl (or both) instead if we deem them valuable?\n> \n> I think the same information should be available to pg_stat_ssl as well. pg_stat_ssl can show the client certificate information for all connecting clients, having it to show not_before and not_after timestamps of every client are important in my opinion. The attached patch \"v2-0002-pg-stat-ssl-add-notbefore-and-notafter-timestamps.patch\" adds this support\n\nThis needs to adjust the tests in src/test/ssl which now fails due to SELECT *\nreturning a row which doesn't match what the test was coded for.\n\n>> Re the patch, it would be nice to move the logic in ssl_client_get_notafter and\n>> the _notbefore counterpart to a static function since they are copies of\n>> eachother.\n> \n> Yes agreed. I have made the adjustment attached as \"v2-0001-sslinfo-add-notbefore-and-notafter-timestamps.patch\"\n\nThe new patchset isn't updating contrib/sslinfo/meson with the 1.3 update so it\nfails to build with Meson. \n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Wed, 28 Jun 2023 08:26:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> This needs to adjust the tests in src/test/ssl which now fails due to SELECT *\n > returning a row which doesn't match what the test was coded for.\n\nThank you so much for pointing out. I have adjusted the extra ssl test to account for the extra columns returned. It should not fail now. \n \n> The new patchset isn't updating contrib/sslinfo/meson with the 1.3 update so it\n > fails to build with Meson. \n\nThanks again for pointing out, I have adjusted the meson build file to include the 1.3 update\n\nPlease see attached patches for the fixes. \nThank you so much!\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Fri, 30 Jun 2023 11:12:03 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 30 Jun 2023, at 20:12, Cary Huang <cary.huang@highgo.ca> wrote:\n> \n>> This needs to adjust the tests in src/test/ssl which now fails due to SELECT *\n>> returning a row which doesn't match what the test was coded for.\n> \n> Thank you so much for pointing out. I have adjusted the extra ssl test to account for the extra columns returned. It should not fail now. \n\nThanks for the new version! It doesn't fail the ssl tests, but the Kerberos\ntest now fails. You can see the test reports from the CFBot here:\n\n\thttp://cfbot.cputube.org/cary-huang.html\n\nThis runs on submitted patches, you can also run the same CI checks in your own\nGithub clone using the supplied CI files in the postgres repo.\n\nThere are also some trivial whitespace issues shown with \"git diff --check\",\nthese can of course easily be addressed by a committer in a final-version patch\nbut when sending a new version you might as well fix those.\n\n>> The new patchset isn't updating contrib/sslinfo/meson with the 1.3 update so it\n>> fails to build with Meson. \n> \n> Thanks again for pointing out, I have adjusted the meson build file to include the 1.3 update\n\n+ asn1_notbefore = X509_getm_notBefore(cert);\n\nX509_getm_notBefore() and X509_getm_notAfter() are only available in OpenSSL\n1.1.1 and onwards, but postgres support 1.0.2 (as of today with 8e278b6576).\nX509_get_notAfter() is available in 1.0.2 but deprecated in 1.1.1 and turned\ninto an alias for X509_getm_notAfter() (same with _notBefore of course), and\nsince we set 1.0.2 as the API compatibility we should be able to use that\nwithout warnings instead.\n\n+ <function>ssl_client_get_notbefore() returns text</function>\n+ <function>ssl_client_get_notafter() returns text</function>\n\nThese functions should IMO return timestamp data types to save the user from\nhaving to convert them. 
Same with the additions to pg_stat_get_activity.\n\nYou should add tests for the new functions in src/test/ssl/t/003_sslinfo.pl.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 11:56:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> Thanks for the new version! It doesn't fail the ssl tests, but the Kerberos\n > test now fails. You can see the test reports from the CFBot here:\n\nYes, kerberos tests failed due to the addition of notbefore and notafter values. The values array within \"pg_stat_get_activity\" function related to \"pg_stat_gssapi\" were not set correctly. It is now fixed\n\n\n > This runs on submitted patches, you can also run the same CI checks in your own\n > Github clone using the supplied CI files in the postgres repo.\n\nThank you for pointing this out. I followed the CI instruction as suggested and am able to run the same CI checks to reproduce the test failures.\n\n\n> There are also some trivial whitespace issues shown with \"git diff --check\",\n> these can of course easily be addressed by a committer in a final-version patch\n> but when sending a new version you might as well fix those.\n\nYes, the white spaces issues should be addressed in the attached patches.\n\n\n> X509_getm_notBefore() and X509_getm_notAfter() are only available in OpenSSL\n> 1.1.1 and onwards, but postgres support 1.0.2 (as of today with 8e278b6576).\n> X509_get_notAfter() is available in 1.0.2 but deprecated in 1.1.1 and turned\n> into an alias for X509_getm_notAfter() (same with _notBefore of course), and\n> since we set 1.0.2 as the API compatibility we should be able to use that\n> without warnings instead.\n\nThank you so much for catching this openssl function compatibility issue. I have changed the function calls to:\n- X509_get_notBefore()\n- X509_get_notAfter()\n\nwhich are compatible in OpenSSL v1.0.2 and also v1.1.1 where they will get translated to X509_getm_notBefore() and X509_getm_notAfter() respectively\n\n\n > These functions should IMO return timestamp data types to save the user from\n > having to convert them. 
Same with the additions to pg_stat_get_activity.\n\nYes, agreed, the attached patches have the output changed to timestamp datatype instead of text.\n\n\n > You should add tests for the new functions in src/test/ssl/t/003_sslinfo.pl.\n\nYes, agreed, I added 2 additional tests in src/test/ssl/t/003_sslinfo.pl to compare the notbefore and notafter outputs from sslinfo extension and pg_stat_ssl outputs. Both should be tested equal.\n\n\nAlso added related documentation about the new not before and not after timestamps in pg_stat_ssl.\n\nthank you\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Mon, 10 Jul 2023 16:09:51 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "I had another look at this today and I think this patch is in pretty good\nshape, below are a few comments on this revision:\n\n- 'sslinfo--1.2.sql',\n+ 'sslinfo--1.2--1.3.sql',\n+ 'sslinfo--1.3.sql',\n\nThe way we typically ship extensions in contrib/ is to not create a new base\nversion .sql file for smaller changes like adding a few functions. For this\npatch we should keep --1.2.sql and instead supply a 1.2--1.3.sql with the new\nfunctions.\n\n\n+ <structfield>not_befoer</structfield> <type>text</type>\n\ns/not_befoer/not_before/\n\n\n+\terrmsg(\"failed to convert tm to timestamp\")));\n\nI think this error is too obscure for the user to act on, what we use elsewhere\nis \"timestamp out of range\" and I think thats more helpful. I do wonder if\nthere is ever a legitimate case when this can fail while still having a\nauthenticated client connection?\n\n\n+\t^\\d+,t,TLSv[\\d.]+,[\\w-]+,\\d+,/?CN=ssltestuser,$serialno,/?\\QCN=Test CA for PostgreSQL SSL regression test client certs\\E,\\Q2021-03-03 22:12:07\\E,\\Q2048-07-19 22:12:07\\E\\r?$}mx,\n\nThis test output won't actually work for testing against, it works now because\nthe dates match the current set of certificates, but the certificates can be\nregenerated with `cd src/test/ssl && make -f sslfiles.mk` and that will change\nthe not_before/not_after dates. In order to have stable test data we need to\nset fixed end/start dates and reissue all the client certs.\n\n\n+\t\"SELECT ssl_client_get_notbefore() = not_before FROM pg_stat_ssl WHERE pid = pg_backend_pid();\",\n+\tconnstr => $common_connstr);\n+is($result, 't', \"ssl_client_get_notbefore() for not_before timestamp\");\n\nWhile this works, it will fail to catch if we have the same bug in both sslinfo\nand the backend. 
With stable test data we can add the actual date in the mix\nand verify that both timestamps are equal and that they match the expected\ndate.\n\nI have addressed the issues above in a new v5 patchset which includes a new\npatch for setting stable validity on the test certificates (the notBefore time\nwas arbitrarily chosen to match the date of opening up the tree for v17 - we\njust need a date in the past). Your two patches are rolled into a single one\nwith a commit message added to get started on that part as well.\n\n--\nDaniel Gustafsson",
"msg_date": "Thu, 13 Jul 2023 18:03:21 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Hello \n\n > The way we typically ship extensions in contrib/ is to not create a new base\n > version .sql file for smaller changes like adding a few functions. For this\n > patch we should keep --1.2.sql and instead supply a 1.2--1.3.sql with the new\n > functions.\n\nThank you for pointing this out. It makes sense to me.\n\n\n > + errmsg(\"failed to convert tm to timestamp\")));\n > \n > I think this error is too obscure for the user to act on, what we use elsewhere\n > is \"timestamp out of range\" and I think thats more helpful. I do wonder if\n > there is ever a legitimate case when this can fail while still having a\n > authenticated client connection?\n\nMy bad here, you are right. \"timestamp out of range\" is a much better error message. However, in an authenticated client connection, there should not be a legitimate case where the not before and not after can fall out of range. The \"not before\" and \"not after\" timestamps in a X509 certificate are normally represented by ASN1, which has maximum timestamp of December 31, 9999. The timestamp data structure in PostgreSQL on the other hand can support year up to June 3, 5874898. Assuming the X509 certificate is generated correctly and no data corruptions happening (which is unlikely), the conversion from ASN1 to timestamp shall not result in out of range error.\n\nPerhaps calling \"tm2timestamp(&pgtm_time, 0, NULL, &ts)\" without checking the return code would be just fine. I see some other usages of tm2timstamp() in other code areas also skip checking the return code.\n\n > I have addressed the issues above in a new v5 patchset which includes a new\n > patch for setting stable validity on the test certificates (the notBefore time\n > was arbitrarily chosen to match the date of opening up the tree for v17 - we\n > just need a date in the past). 
Your two patches are rolled into a single one\n > with a commit message added to get started on that part as well.\n\nthank you so much for addressing the ssl tests to make \"not before\" and \"not after\" timestamps static in the test certificate and also adjusting 003_sslinfo.pl to expect the new static timestamps in the v5 patches. I am able to apply both and all tests are passing. I did not know this test certificate could be changed by `cd src/test/ssl && make -f sslfiles.mk`, but now I know, thanks to you :p.\n\nBest regards\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca\n\n\n\n\n",
"msg_date": "Fri, 14 Jul 2023 11:41:01 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 14 Jul 2023, at 20:41, Cary Huang <cary.huang@highgo.ca> wrote:\n\n> Perhaps calling \"tm2timestamp(&pgtm_time, 0, NULL, &ts)\" without checking the return code would be just fine. I see some other usages of tm2timstamp() in other code areas also skip checking the return code.\n\nI think we want to know about any failures, btu we can probably make it into an\nelog() instead, as it should never fail.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Fri, 14 Jul 2023 20:50:52 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Hello\n\n > > Perhaps calling \"tm2timestamp(&pgtm_time, 0, NULL, &ts)\" without checking the return code would be just fine. I see some other usages of tm2timstamp() in other code areas also skip checking the return code.\n > \n > I think we want to know about any failures, btu we can probably make it into an\n > elog() instead, as it should never fail.\n\nYes, sure. I have corrected the error message to elog(ERROR, \"timestamp out of range\") on a rare tm2timestamp() failure. Please see the attached patch based on your v5. \"v6-0001-Set-fixed-dates-for-test-certificates-validity.patch\" is exactly the same as \"v5-0001-Set-fixed-dates-for-test-certificates-validity.patch\", I just up the version to be consistent. \n\nthank you very much\n\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Mon, 17 Jul 2023 11:26:40 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 17 Jul 2023, at 20:26, Cary Huang <cary.huang@highgo.ca> wrote:\n\n>>> Perhaps calling \"tm2timestamp(&pgtm_time, 0, NULL, &ts)\" without checking the return code would be just fine. I see some other usages of tm2timstamp() in other code areas also skip checking the return code.\n>> \n>> I think we want to know about any failures, btu we can probably make it into an\n>> elog() instead, as it should never fail.\n> \n> Yes, sure. I have corrected the error message to elog(ERROR, \"timestamp out of range\") on a rare tm2timestamp() failure.\n\nI went over this again and ended up pushing it along with a catversion bump.\nDue to a mistake in my testing I didn't however catch that it was using an API\nonly present in OpenSSL 1.1.1 and higher, which caused buildfailures when using\nolder OpenSSL versions, so I ended up reverting it again (leaving certificate\nchanges in place) to keep the buildfarm green.\n\nWill look closer at an implementation which works across all supported versions\nof OpenSSL when I have more time.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 20 Jul 2023 17:24:57 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 20 Jul 2023, at 17:24, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 17 Jul 2023, at 20:26, Cary Huang <cary.huang@highgo.ca> wrote:\n> \n>>>> Perhaps calling \"tm2timestamp(&pgtm_time, 0, NULL, &ts)\" without checking the return code would be just fine. I see some other usages of tm2timstamp() in other code areas also skip checking the return code.\n>>> \n>>> I think we want to know about any failures, btu we can probably make it into an\n>>> elog() instead, as it should never fail.\n>> \n>> Yes, sure. I have corrected the error message to elog(ERROR, \"timestamp out of range\") on a rare tm2timestamp() failure.\n> \n> I went over this again and ended up pushing it along with a catversion bump.\n> Due to a mistake in my testing I didn't however catch that it was using an API\n> only present in OpenSSL 1.1.1 and higher, which caused buildfailures when using\n> older OpenSSL versions, so I ended up reverting it again (leaving certificate\n> changes in place) to keep the buildfarm green.\n> \n> Will look closer at an implementation which works across all supported versions\n> of OpenSSL when I have more time.\n\nFinally had some time, and have made an updated version of the patch.\n\nOpenSSL 1.0.2 doens't expose a function for getting the timestamp, so the patch\ninstead resorts to the older trick of getting the timestamp by inspecing the\ndiff against the UNIX epoch. When doing this, OpenSSL internally use the same\nfunction which later in 1.1.1 was exported for getting the timestamp.\n\nThe attached version passes ssl tests for me on 1.0.2 through OpenSSL Git HEAD.\n\n--\nDaniel Gustafsson",
"msg_date": "Tue, 25 Jul 2023 16:21:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Hello,\n\nOn 7/25/23 07:21, Daniel Gustafsson wrote:\n> The attached version passes ssl tests for me on 1.0.2 through OpenSSL Git HEAD.\n\nTests pass for me too, including LibreSSL 3.8.\n\n> + /* Calculate the diff from the epoch to the certificat timestamp */\n\n\"certificate\"\n\n> + <function>ssl_client_get_notbefore() returns text</function>\n> ...> + <function>ssl_client_get_notafter() returns text</function>\n\nI think this should say timestamptz rather than text? Ditto for the\npg_stat_ssl documentation.\n\nSpeaking of which: is the use of `timestamp` rather than `timestamptz`\nin pg_proc.dat intentional? Will that cause problems with comparisons?\n\n--\n\nI haven't been able to poke any holes in the ASN1_TIME_to_timestamp()\nimplementations themselves. I went down a rabbit hole trying to find out\nwhether leap seconds could cause problems for us when we switch to\n`struct tm` in the future, but it turns out OpenSSL rejects leap seconds\nin the Validity fields. That seems weird -- as far as I can tell, RFC\n5280 defers to ASN.1 which defers to ISO 8601 which appears to allow\nleap seconds -- but I don't plan to worry about it anymore. (I do idly\nwonder whether some CA, somewhere, has ever had a really Unhappy New\nYear due to that.)\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Tue, 12 Sep 2023 12:40:04 -0700",
"msg_from": "Jacob Champion <jchampion@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 12 Sep 2023, at 21:40, Jacob Champion <jchampion@timescale.com> wrote:\n> \n> Hello,\n> \n> On 7/25/23 07:21, Daniel Gustafsson wrote:\n>> The attached version passes ssl tests for me on 1.0.2 through OpenSSL Git HEAD.\n> \n> Tests pass for me too, including LibreSSL 3.8.\n\nThanks for testing!\n\n>> + /* Calculate the diff from the epoch to the certificat timestamp */\n> \n> \"certificate\"\n\nFixed.\n\n>> + <function>ssl_client_get_notbefore() returns text</function>\n>> ...> + <function>ssl_client_get_notafter() returns text</function>\n> \n> I think this should say timestamptz rather than text? Ditto for the\n> pg_stat_ssl documentation.\n> \n> Speaking of which: is the use of `timestamp` rather than `timestamptz`\n> in pg_proc.dat intentional? Will that cause problems with comparisons?\n\nIt should be timestamptz, it was a tyop on my part. Fixed.\n\n> I haven't been able to poke any holes in the ASN1_TIME_to_timestamp()\n> implementations themselves. I went down a rabbit hole trying to find out\n> whether leap seconds could cause problems for us when we switch to\n> `struct tm` in the future, but it turns out OpenSSL rejects leap seconds\n> in the Validity fields. That seems weird -- as far as I can tell, RFC\n> 5280 defers to ASN.1 which defers to ISO 8601 which appears to allow\n> leap seconds -- but I don't plan to worry about it anymore. (I do idly\n> wonder whether some CA, somewhere, has ever had a really Unhappy New\n> Year due to that.)\n\nThat's an interesting thought, maybe the CA's have adapted given the\nmarketshare of OpenSSL?\n\nThanks for reviewing, the attached v8 contains the fixes from this review along\nwith a fresh rebase and some attempts at making tests more stable in the face\nof timezones by casting to date.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 15 Sep 2023 15:34:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Mon, Mar 4, 2024 at 6:23 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> > On 12 Sep 2023, at 21:40, Jacob Champion <jchampion@timescale.com> wrote:\n\nSorry for the long delay!\n\n> >> + <function>ssl_client_get_notbefore() returns text</function>\n> >> ...> + <function>ssl_client_get_notafter() returns text</function>\n> >\n> > I think this should say timestamptz rather than text? Ditto for the\n> > pg_stat_ssl documentation.\n> >\n> > Speaking of which: is the use of `timestamp` rather than `timestamptz`\n> > in pg_proc.dat intentional? Will that cause problems with comparisons?\n>\n> It should be timestamptz, it was a tyop on my part. Fixed.\n\nLooks like sslinfo--1.2--1.3.sql is also declaring the functions as\ntimestamp rather than timestamptz, which is breaking comparisons with\nthe not_before/after columns. It might also be nice to rename\nASN1_TIME_to_timestamp().\n\nSquinting further at the server backend implementation, should that\nalso be using TimestampTz throughout, instead of Timestamp? It all\ngoes through float8_timestamptz at the end, so I guess it shouldn't\nhave a material impact, but it's a bit confusing.\n\n> Thanks for reviewing, the attached v8 contains the fixes from this review along\n> with a fresh rebase and some attempts at making tests more stable in the face\n> of timezones by casting to date.\n\nIn my -08 timezone, the date doesn't match what's recorded either\n(it's my \"tomorrow\"). I think those probably just need to be converted\nto UTC explicitly? I've attached a sample diff on top of v8 that\npasses tests on my machine.\n\n--Jacob",
"msg_date": "Tue, 5 Mar 2024 13:54:09 -0800",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Hello\n\nThank you for the review and your patch. I have tested with minimum OpenSSL version 1.0.2 support and incorporated your changes into the v9 patch as attached. \n\n > In my -08 timezone, the date doesn't match what's recorded either\n > (it's my \"tomorrow\"). I think those probably just need to be converted\n > to UTC explicitly? I've attached a sample diff on top of v8 that\n > passes tests on my machine.\n\nYes, I noticed this in the SSL test too. I am also in GTM-8, so for me the tests would fail too due to the time zone differences from GMT. It shall be okay to specifically set the outputs of pg_stat_ssl, ssl_client_get_notbefore, and ssl_client_get_notafte to be in GMT time zone. The not before and not after time stamps in a client certificate are generally expressed in GMT.\n\n\nThank you!\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Fri, 08 Mar 2024 17:16:35 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Fri, Mar 8, 2024 at 4:16 PM Cary Huang <cary.huang@highgo.ca> wrote:\n> Yes, I noticed this in the SSL test too. I am also in GTM-8, so for me the tests would fail too due to the time zone differences from GMT. It shall be okay to specifically set the outputs of pg_stat_ssl, ssl_client_get_notbefore, and ssl_client_get_notafte to be in GMT time zone. The not before and not after time stamps in a client certificate are generally expressed in GMT.\n\nHi Cary, did you have any thoughts on the timestamptz notes from my last mail?\n\n> It might also be nice to rename\n> ASN1_TIME_to_timestamp().\n>\n> Squinting further at the server backend implementation, should that\n> also be using TimestampTz throughout, instead of Timestamp? It all\n> goes through float8_timestamptz at the end, so I guess it shouldn't\n> have a material impact, but it's a bit confusing.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Mon, 18 Mar 2024 06:34:16 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Hi Jacob\n\n> Hi Cary, did you have any thoughts on the timestamptz notes from my last mail?\n> \n> > It might also be nice to rename\n> > ASN1_TIME_to_timestamp().\n> >\n> > Squinting further at the server backend implementation, should that\n> > also be using TimestampTz throughout, instead of Timestamp? It all\n> > goes through float8_timestamptz at the end, so I guess it shouldn't\n> > have a material impact, but it's a bit confusing.\n\nSorry I kind of missed this review comment from your last email. Thanks for bringing it up again though. I think it is right to change the backend references of \"timestamp\" to \"timestampTz\" for consistency reasons. I have gone ahead to make the changes.\n\nI have also reviewed the wording on the documentation and removed \"UTC\" from the descriptions. Since sslinfo extension and pg_stat_ssl both return timestampTz in whatever timezone PostgreSQL is running on, they do not always return UTC timestamps.\n\nAttached is the v10 patch with the above changes. Thanks again for the review.\n\nBest regards\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca",
"msg_date": "Mon, 18 Mar 2024 13:48:54 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Mon, Mar 18, 2024 at 1:48 PM Cary Huang <cary.huang@highgo.ca> wrote:\n> Attached is the v10 patch with the above changes. Thanks again for the review.\n\nAwesome, looks good.\n\nOn my final review pass I saw one last thing that bothered me (sorry\nfor not seeing it before). The backend version of\nASN1_TIME_to_timestamptz returns the following:\n\n> + return ((int64)days * 24 * 60 * 60) + (int64)seconds;\n\n...but I think Timestamp[Tz]s are stored as microseconds, so we're off\nby a factor of a million. This still works because later we cast to\ndouble and pass it back through float8_timestamptz, which converts it:\n\n> + if (beentry->st_sslstatus->ssl_not_before != 0)\n> + values[25] = DirectFunctionCall1(float8_timestamptz,\n> + Float8GetDatum((double) beentry->st_sslstatus->ssl_not_before));\n\nBut anyone who ends up inspecting the value of\nst_sslstatus->ssl_not_before directly is going to find an incorrect\ntimestamp. I think it'd be clearer to store microseconds to begin\nwith, and then just use TimestampTzGetDatum rather than the\nDirectFunctionCall1 we have now. (I looked for an existing\nimplementation to reuse and didn't find one. Maybe we should use the\noverflow-aware multiplication/addition routines -- i.e.\npg_mul_s64_overflow et al -- to multiply `days` and `seconds` by\nUSECS_PER_DAY/USECS_PER_SEC and combine them.)\n\nAnd I think sslinfo can remain as-is, because that way overflow is\ncaught by float8_timestamptz. WDYT?\n\n--Jacob\n\n\n",
"msg_date": "Mon, 18 Mar 2024 15:39:27 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "Thank you for your review again. \n\n > ...but I think Timestamp[Tz]s are stored as microseconds, so we're off\n > by a factor of a million. This still works because later we cast to\n > double and pass it back through float8_timestamptz, which converts it:\n\nIn my test, if I made ASN1_TIME_to_timestamptz to return in microseconds, the\nfloat8_timestamptz() function will catch a \"timestamp out of range\" exception as\nthis function treats the input as seconds.\n\n> But anyone who ends up inspecting the value of\n > st_sslstatus->ssl_not_before directly is going to find an incorrect\n > timestamp. I think it'd be clearer to store microseconds to begin\n > with, and then just use TimestampTzGetDatum rather than the\n > DirectFunctionCall1 we have now. \n\nI have also tried TimestampTzGetDatum with ASN1_TIME_to_timestamptz \noutputting in microseconds. No exception is caught, but pg_stat_ssl displays \nincorrect results. The timestamps are extra large because the extra factor of \n1 million is considered in the timestamp computation as well.\n\nThe comments for TimestampTz says:\n\n * Timestamps, as well as the h/m/s fields of intervals, are stored as\n * int64 values with units of microseconds. (Once upon a time they were\n * double values with units of seconds.)\n\nbut it seems to me that many of the timestamp related functions still consider\ntimestamp or timestampTz as \"double values with units of seconds\" though. \n\nBest regards\n\nCary Huang\n-------------\nHighGo Software Inc. (Canada)\ncary.huang@highgo.ca\nwww.highgo.ca\n\n\n\n",
"msg_date": "Tue, 19 Mar 2024 16:24:39 -0700",
"msg_from": "Cary Huang <cary.huang@highgo.ca>",
"msg_from_op": true,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 20 Mar 2024, at 00:24, Cary Huang <cary.huang@highgo.ca> wrote:\n\n> but it seems to me that many of the timestamp related functions still consider\n> timestamp or timestampTz as \"double values with units of seconds\" though. \n\nThe issue here is that postgres uses a different epoch from the unix epoch, so\nany dates calculated based on the unix epoch need to be adjusted. I've hacked\nthis up in the attached v11 using overflow-safe integer mul/add as proposed by\nJacob upthread (we really shouldn't risk overflowing an int64 here but there is\nno harm in using belts and suspenders here as a defensive measure).\n\nThe attached v11 is what I propose we go ahead with unless there are further\ncomments on this.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 20 Mar 2024 15:03:45 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 7:03 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> The issue here is that postgres use a different epoch from the unix epoch, so\n> any dates calcuated based on the unix epoch need to be adjusted.\n\nAh, thank you! I had just reproduced Cary's problem and was really confused...\n\n> I've hacked\n> this up in the attached v11 using overflow-safe integer mul/add as proposed by\n> Jacob upthread (we really shouldn't risk overflowing an int64 here but there is\n> no harm in using belts and suspenders here as a defensive measure).\n>\n> The attached v11 is what I propose we go ahead with unless there further\n> comments on this.\n\nOne last question:\n\n> + result -= ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * USECS_PER_DAY);\n> + return TimestampTzGetDatum(result);\n\nIs that final bare subtraction able to wrap around for dates far in the past?\n\n--Jacob\n\n\n",
"msg_date": "Wed, 20 Mar 2024 07:28:47 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 20 Mar 2024, at 15:28, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n\n>> + result -= ((POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * USECS_PER_DAY);\n>> + return TimestampTzGetDatum(result);\n> \n> Is that final bare subtraction able to wrap around for dates far in the past?\n\nWe are subtracting 30 years from a calculation that we know didn't overflow, so\nI guess if the certificate notBefore (the notAfter cannot be that early since\nwe wouldn't be able to connect with it) was set early enough? It didn't\nstrike me as anything above academical unless I'm thinking wrong here.\n\n--\nDaniel Gustafsson",
"msg_date": "Wed, 20 Mar 2024 15:50:50 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Wed, Mar 20, 2024 at 7:50 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> We are subtracting 30 years from a calculation that we know didnt overflow, so\n> I guess if the certificate notBefore (the notAfter cannot be that early since\n> we wouldn't be able to connect with it) was set to early enough? It didn't\n> strike me as anything above academical unless I'm thinking wrong here.\n\nYeah, it's super nitpicky. The CA would have had to sign a really\nbroken certificate somehow, anyway...\n\nI can't find anything else to note; patch LGTM.\n\nThanks,\n--Jacob\n\n\n",
"msg_date": "Wed, 20 Mar 2024 09:32:38 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "> On 20 Mar 2024, at 17:32, Jacob Champion <jacob.champion@enterprisedb.com> wrote:\n\n> I can't find anything else to note; patch LGTM.\n\nWhile staging this to commit I realized one silly thing about it warranting\nanother round here. The ASN.1 timediff code can diff against *any* timestamp,\nnot just the UNIX epoch, so we could just pass in the postgres epoch and skip\nthe final subtraction since we're already correctly adjusted. This removes the\nnon-overflow checked arithmetic with a simpler logic.\n\n--\nDaniel Gustafsson",
"msg_date": "Fri, 22 Mar 2024 14:14:57 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
},
{
"msg_contents": "On Fri, Mar 22, 2024 at 6:15 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n> While staging this to commit I realized one silly thing about it warranting\n> another round here. The ASN.1 timediff code can diff against *any* timestamp,\n> not just the UNIX epoch, so we could just pass in the postgres epoch and skip\n> the final subtraction since we're already correctly adjusted. This removes the\n> non-overflow checked arithmetic with a simpler logic.\n\nAh, that's much better! +1.\n\n--Jacob\n\n\n",
"msg_date": "Fri, 22 Mar 2024 07:14:49 -0700",
"msg_from": "Jacob Champion <jacob.champion@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: sslinfo extension - add notbefore and notafter timestamps"
}
] |
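The epoch adjustment debated in the sslinfo thread above can be sketched in isolation. This is an illustrative fragment, not PostgreSQL source: the helper name is invented here, and the epoch constants are copied in rather than pulled from src/include/datatype/timestamp.h.

```c
#include <stdint.h>

/* Julian-date values for the two epochs, as PostgreSQL defines them. */
#define UNIX_EPOCH_JDATE     2440588    /* 1970-01-01 */
#define POSTGRES_EPOCH_JDATE 2451545    /* 2000-01-01 */
#define USECS_PER_SEC        INT64_C(1000000)
#define USECS_PER_DAY        (INT64_C(86400) * USECS_PER_SEC)

/*
 * Convert seconds since the Unix epoch to a TimestampTz-style value:
 * microseconds since the Postgres epoch (2000-01-01). The two epochs
 * are 10957 days apart, hence the subtraction quoted in the thread.
 */
static int64_t
unix_secs_to_pg_usecs(int64_t unix_secs)
{
    int64_t usecs = unix_secs * USECS_PER_SEC;

    return usecs - (POSTGRES_EPOCH_JDATE - UNIX_EPOCH_JDATE) * USECS_PER_DAY;
}
```

For example, Unix time 946684800 (2000-01-01 00:00:00 UTC) maps to 0, the Postgres epoch. As the thread concludes, the committed code sidesteps even this plain arithmetic by diffing the ASN.1 time directly against the Postgres epoch, and uses pg_mul_s64_overflow/pg_add_s64_overflow so out-of-range dates fail cleanly instead of wrapping.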
[
{
"msg_contents": "Hi,\n\nThis started at https://postgr.es/m/20220817215317.poeofidf7o7dy6hy%40awork3.anarazel.de\n\nPeter made a good point about -DFRONTEND not being defined symmetrically\nbetween meson and autoconf builds, which made me look at where we define\nit. And I think we ought to clean this up independent of the meson patch.\n\n\nOn 2022-08-17 14:53:17 -0700, Andres Freund wrote:\n> On 2022-08-17 15:50:23 +0200, Peter Eisentraut wrote:\n> > - -DFRONTEND is used somewhat differently from the makefiles. For\n> > example, meson sets -DFRONTEND for pg_controldata, but the\n> > makefiles don't. Conversely, the makefiles set -DFRONTEND for\n> > ecpglib, but meson does not. This should be checked again to make\n> > sure it all matches up.\n>\n> Yes, should sync that up.\n>\n> FWIW, meson does add -DFRONTEND for ecpglib. There were a few places that did\n> add it twice, I'll push a cleanup of that in a bit.\n\nYikes, the situation in HEAD is quite the mess.\n\nSeveral .c files set FRONTEND themselves, so they can include postgres.h,\ninstead of postgres_fe.h:\n\n$ git grep '#define.*FRONTEND' upstream/master ':^src/include/postgres_fe.h'\nupstream/master:src/bin/pg_controldata/pg_controldata.c:#define FRONTEND 1\nupstream/master:src/bin/pg_resetwal/pg_resetwal.c:#define FRONTEND 1\nupstream/master:src/bin/pg_waldump/compat.c:#define FRONTEND 1\nupstream/master:src/bin/pg_waldump/pg_waldump.c:#define FRONTEND 1\nupstream/master:src/bin/pg_waldump/rmgrdesc.c:#define FRONTEND 1\n\nYet, most of them also define FRONTEND in both the make and msvc buildsystems.\n\n$ git grep -E \"(D|AddDefine\\(')FRONTEND\" upstream/master src/bin/ src/tools/msvc/\nupstream/master:src/bin/initdb/Makefile:override CPPFLAGS := -DFRONTEND -I$(libpq_srcdir) -I$(top_srcdir)/src/timezone $(CPPFLAGS)\nupstream/master:src/bin/pg_rewind/Makefile:override CPPFLAGS := -I$(libpq_srcdir) -DFRONTEND $(CPPFLAGS)\nupstream/master:src/bin/pg_waldump/Makefile:override CPPFLAGS := -DFRONTEND $(CPPFLAGS)\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgport->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgcommon->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libpgfeutils->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libpq->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $pgtypes->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libecpg->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $libecpgcompat->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $pgrewind->AddDefine('FRONTEND');\nupstream/master:src/tools/msvc/Mkvcbuild.pm: $pg_waldump->AddDefine('FRONTEND')\n\nThat's largely because they also build files from src/backend, which obviously\ncontain no #define FRONTEND.\n\n\nThe -DFRONTENDs for the various ecpg libraries don't seem necessary\nanymore. That looks to be a leftover from 7143b3e8213, before that ecpg had\ncopies of various pgport libraries.\n\nSame with libpq, also looks to be obsoleted by 7143b3e8213.\n\nI don't think we need FRONTEND in initdb - looks like that stopped being\nrequired with af1a949109d.\n\n\nUnfortunately, the remaining uses of FRONTEND are required. That's:\n- pg_controldata, via #define\n- pg_resetwal, via #define\n- pg_rewind, via -DFRONTEND, due to xlogreader.c\n- pg_waldump, via #define and -DFRONTEND, due to xlogreader.c, xlogstats.c, rmgrdesc/*desc.c\n\nI'm kind of wondering if we should add xlogreader.c, xlogstats.c, *desc to\nfe_utils, instead of building them in various places. That'd leave us only\nwith #define FRONTENDs for cases that do need to include postgres.h\nthemselves, which seems a lot more palatable.\n\n\nGreetings,\n\nAndres Freund",
"msg_date": "Sat, 20 Aug 2022 12:45:50 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-20 12:45:50 -0700, Andres Freund wrote:\n> The -DFRONTENDs for the various ecpg libraries don't seem necessary\n> anymore. That looks to be a leftover from 7143b3e8213, before that ecpg had\n> copies of various pgport libraries.\n>\n> Same with libpq, also looks to be obsoleted by 7143b3e8213.\n>\n> I don't think we need FRONTEND in initdb - looks like that stopped being\n> required with af1a949109d.\n\nI think the patches for this are fairly obvious, and survived CI without an\nissue [1], so the src/tools/msvc bits work too. So I'm planning to push them\nfairly soon.\n\n\nBut the remaining \"issues\" don't have an obvious solutions and not addressed\nby these patches:\n\n> Unfortunately, the remaining uses of FRONTEND are required. That's:\n> - pg_controldata, via #define\n> - pg_resetwal, via #define\n> - pg_rewind, via -DFRONTEND, due to xlogreader.c\n> - pg_waldump, via #define and -DFRONTEND, due to xlogreader.c, xlogstats.c, rmgrdesc/*desc.c\n>\n> I'm kind of wondering if we should add xlogreader.c, xlogstat.c, *desc to\n> fe_utils, instead of building them in various places. That'd leave us only\n> with #define FRONTENDs for cases that do need to include postgres.h\n> themselves, which seems a lot more palatable.\n\n\nGreetings,\n\nAndres Freund\n\n\n[1] https://cirrus-ci.com/build/4648937721167872\n\n\n",
"msg_date": "Mon, 22 Aug 2022 08:48:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 08:48:34 -0700, Andres Freund wrote:\n> On 2022-08-20 12:45:50 -0700, Andres Freund wrote:\n> > The -DFRONTENDs for the various ecpg libraries don't seem necessary\n> > anymore. That looks to be a leftover from 7143b3e8213, before that ecpg had\n> > copies of various pgport libraries.\n> >\n> > Same with libpq, also looks to be obsoleted by 7143b3e8213.\n> >\n> > I don't think we need FRONTEND in initdb - looks like that stopped being\n> > required with af1a949109d.\n> \n> I think the patches for this are fairly obvious, and survived CI without an\n> issue [1], so the src/tools/msvc bits work too. So I'm planning to push them\n> fairly soon.\n\nDone.\n\n- Andres\n\n\n",
"msg_date": "Mon, 22 Aug 2022 20:42:33 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 3:46 PM Andres Freund <andres@anarazel.de> wrote:\n> Unfortunately, the remaining uses of FRONTEND are required. That's:\n> - pg_controldata, via #define\n> - pg_resetwal, via #define\n> - pg_rewind, via -DFRONTEND, due to xlogreader.c\n> - pg_waldump, via #define and -DFRONTEND, due to xlogreader.c, xlogstats.c, rmgrdesc/*desc.c\n\nActually, I think we could fix these pretty easily too. See attached.\n\nThis has been annoying me for a while. I hope we can agree on a way to\nclean it up.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 23 Aug 2022 17:24:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Actually, I think we could fix these pretty easily too. See attached.\n\nHmm, do these headers still pass headerscheck/cpluspluscheck?\n\nI might quibble a bit with the exact placement of the #ifndef FRONTEND\ntests, but overall this looks pretty plausible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 17:56:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 5:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Actually, I think we could fix these pretty easily too. See attached.\n>\n> Hmm, do these headers still pass headerscheck/cpluspluscheck?\n\nI didn't check before sending the patch, but now I ran it locally, and\nI did get failures from both, but they all seem to be unrelated.\nMainly, it's sad that I don't have Python.h, but I didn't configure\nwith python, so whatever.\n\n> I might quibble a bit with the exact placement of the #ifndef FRONTEND\n> tests, but overall this looks pretty plausible.\n\nYep, that's arguable. In particular, should the redo functions also be\nprotected by #ifdef FRONTEND?\n\nI'd be more than thrilled if you wanted to adjust this to taste and\napply it, barring objections from others of course.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:58:50 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 17:24:30 -0400, Robert Haas wrote:\n> On Sat, Aug 20, 2022 at 3:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > Unfortunately, the remaining uses of FRONTEND are required. That's:\n> > - pg_controldata, via #define\n> > - pg_resetwal, via #define\n> > - pg_rewind, via -DFRONTEND, due to xlogreader.c\n> > - pg_waldump, via #define and -DFRONTEND, due to xlogreader.c, xlogstats.c, rmgrdesc/*desc.c\n>\n> Actually, I think we could fix these pretty easily too.\n\nI just meant that they're not already obsolete ;)\n\n\n> See attached.\n\nJust to make sure I understand - you're just trying to get rid of the #define\nfrontends, not the -DFRONTENDs passed in from the Makefile? Because afaics we\nstill need those, correct?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 16:24:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 7:24 PM Andres Freund <andres@anarazel.de> wrote:\n> Just to make sure I understand - you're just trying to get rid of the #define\n> frontends, not the -DFRONTENDs passed in from the Makefile? Because afaics we\n> still need those, correct?\n\nOh, yeah, this only fixes the #define ones. But maybe fixing the other\nones with a similar approach would be possible?\n\nI really don't see why we should tolerate having #define FRONTEND in\nmore than once place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 19:37:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Aug 23, 2022 at 7:24 PM Andres Freund <andres@anarazel.de> wrote:\n>> Just to make sure I understand - you're just trying to get rid of the #define\n>> frontends, not the -DFRONTENDs passed in from the Makefile? Because afaics we\n>> still need those, correct?\n\n> Oh, yeah, this only fixes the #define ones. But maybe fixing the other\n> ones with a similar approach would be possible?\n\n> I really don't see why we should tolerate having #define FRONTEND in\n> more than once place.\n\nsrc/port and src/common really need to do it like that (ie pass in\nthe -D switch) so that the identical source file can be built\nboth ways. Maybe we could get rid of -DFRONTEND in other places,\nlike pg_rewind and pg_waldump.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 23 Aug 2022 19:50:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 19:50:00 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Aug 23, 2022 at 7:24 PM Andres Freund <andres@anarazel.de> wrote:\n> >> Just to make sure I understand - you're just trying to get rid of the #define\n> >> frontends, not the -DFRONTENDs passed in from the Makefile? Because afaics we\n> >> still need those, correct?\n> \n> > Oh, yeah, this only fixes the #define ones. But maybe fixing the other\n> > ones with a similar approach would be possible?\n> \n> > I really don't see why we should tolerate having #define FRONTEND in\n> > more than once place.\n> \n> src/port and src/common really need to do it like that (ie pass in\n> the -D switch) so that the identical source file can be built\n> both ways. Maybe we could get rid of -DFRONTEND in other places,\n> like pg_rewind and pg_waldump.\n\nWe could, if we make xlogreader.c and the rmgrdesc routines built as part of\nsrc/common. I don't really see how otherwise.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 18:55:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 9:55 PM Andres Freund <andres@anarazel.de> wrote:\n> We could, if we make xlogreader.c and the rmgrdesc routines built as part of\n> src/common. I don't really see how otherwise.\n\nAfter a little bit of study, I agree.\n\nIt looks to me like -DFRONTEND can be removed from\nsrc/fe_utils/Makefile and probably also src/common/unicode/Makefile\nwithout changing anything else, because the C files in those\ndirectories seem to be frontend-only and they already include\n\"postgres_fe.h\". I think we should go ahead and do that, and also\napply the patch I posted yesterday with whatever bikeshedding seems\nappropriate.\n\nIt doesn't really seem like we have a plausible alternative to the\ncurrent system for src/common or src/port.\n\npg_rewind and pg_waldump seem to need the xlogreader code moved to\nsrc/common, as Andres proposes. I'm not volunteering to tackle that\nright now but I think it might be a good thing to do sometime.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 10:40:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-24 10:40:01 -0400, Robert Haas wrote:\n> pg_rewind and pg_waldump seem to need the xlogreader code moved to\n> src/common, as Andres proposes. I'm not volunteering to tackle that\n> right now but I think it might be a good thing to do sometime.\n\nThe easier way would be to just keep their current method of building, but do\nit as part of src/common.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Aug 2022 08:10:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistencies around defining FRONTEND"
}
] |
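The dual-compilation pattern at the root of the FRONTEND thread can be shown with a minimal, self-contained sketch (not PostgreSQL source; the names are invented for illustration). The same translation unit is compiled twice, and -DFRONTEND on the compiler command line selects the target environment, which is why src/common and src/port must pass the define from the build system rather than hard-code it:

```c
#include <string.h>

/*
 * Compiled as-is, this file behaves as "backend" code; compiled with
 * -DFRONTEND it behaves as "frontend" code, without any change to the
 * source itself.
 */
#ifdef FRONTEND
#define PG_ENVIRONMENT "frontend"
#else
#define PG_ENVIRONMENT "backend"
#endif

const char *
compiled_environment(void)
{
    return PG_ENVIRONMENT;
}

int
is_frontend_build(void)
{
    return strcmp(PG_ENVIRONMENT, "frontend") == 0;
}
```

By contrast, frontend-only files like pg_controldata.c can put `#define FRONTEND 1` at the top of the file, before including postgres.h, with no build-system help at all — which is the inconsistency the thread sets out to untangle.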
[
{
"msg_contents": "Attached is another autovacuum (and VACUUM VERBOSE) instrumentation\npatch. This one adds instrumentation about freezing to the report\nautovacuum makes to the server log. Specifically, it makes the output\nlook like this:\n\nregression=# vacuum (freeze,verbose) foo;\nINFO: aggressively vacuuming \"regression.public.foo\"\nINFO: finished vacuuming \"regression.public.foo\": index scans: 0\npages: 0 removed, 45 remain, 45 scanned (100.00% of total)\ntuples: 0 removed, 10000 remain, 0 are dead but not yet removable\nremovable cutoff: 751, which was 0 XIDs old when operation ended\nnew relfrozenxid: 751, which is 2 XIDs ahead of previous value\nXIDs processed: 45 pages from table (100.00% of total) had 10000 tuples frozen\nindex scan not needed: 0 pages from table (0.00% of total) had 0 dead\nitem identifiers removed\nI/O timings: read: 0.023 ms, write: 0.000 ms\navg read rate: 2.829 MB/s, avg write rate: 5.658 MB/s\nbuffer usage: 95 hits, 2 misses, 4 dirtied\nWAL usage: 91 records, 1 full page images, 133380 bytes\nsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nVACUUM\n\nNotice the new line about freezing, which we always output -- it's the\nline that begins with \"XIDs processed\", that appears about half way\ndown. The new line is deliberately placed after the existing \"new\nrelfrozenxid\" line and before the existing line about dead item\nidentifiers. This placement of the new instrumentation seems logical\nto me; freezing is related to relfrozenxid (obviously), but doesn't\nneed to be shoehorned into the prominent early line that reports on\ntuples removed/remain[ing].\n\nLike its neighboring \"dead item identifier\" line, this new line shows\nthe number of items/tuples affected, and the number of heap pages\naffected -- with heap pages shown both as an absolute number and as a\npercentage of rel_pages (in parentheses). The main cost associated\nwith freezing is the WAL overhead, so emphasizing pages here seems\nlike the way to go -- pages are more interesting than tuples. This\nformat also makes it relatively easy to get a sense of the *relative*\ncosts of the overhead of each distinct class/category of maintenance\nperformed.\n\n-- \nPeter Geoghegan",
"msg_date": "Sat, 20 Aug 2022 16:28:59 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Instrumented pages/tuples frozen in autovacuum's server log out (and\n VACUUM VERBOSE)"
},
{
"msg_contents": "On Sat, Aug 20, 2022 at 7:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n\n> XIDs processed: 45 pages from table (100.00% of total) had 10000 tuples\n> frozen\n>\n\nI like this addition, but I would also like to see how many pages got newly\nset to all frozen by the vacuum. Would that be a convenient thing to also\nreport here?\n\nAlso, isn't all of vacuuming about XID processing? I think \"frozen:\" would\nbe a more suitable line prefix.\n\nCheers,\n\nJeff",
"msg_date": "Wed, 31 Aug 2022 22:49:13 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Instrumented pages/tuples frozen in autovacuum's server log out\n (and VACUUM VERBOSE)"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 7:49 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> I like this addition, but I would also like to see how many pages got newly set to all frozen by the vacuum.\n\nI'd say that that's independent work. Though I'm happy to discuss it now.\n\nIt would be fairly straightforward to show something about the VM\nitself, but it's not entirely clear what aspects of the VM should be\nemphasized. Are we reporting on the state of the table, or work\nperformed by VACUUM? You said you were interested in the latter\nalready, but why prefer that over a summary of the contents of the VM\nat the end of the VACUUM? Are you concerned about the cost of setting\npages all-visible? Do you have an interest in how VACUUM manages to\nset VM pages over time? Something else?\n\nWe already call visibilitymap_count() at the end of every VACUUM,\nwhich scans the authoritative VM to produce a more-or-less consistent\nsummary of the VM at that point in time. This information is then used\nto update pg_class.relallvisible (we don't do anything with the\nall-frozen number at all). Why not show that information in\nVERBOSE/autovacuum's log message? Does it really matter *when* a page\nbecame all-visible/all-frozen/unset?\n\n> Also, isn't all of vacuuming about XID processing? I think \"frozen:\" would be a more suitable line prefix.\n\nThat also works for me. I have no strong feelings here.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 31 Aug 2022 20:24:44 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Instrumented pages/tuples frozen in autovacuum's server log out\n (and VACUUM VERBOSE)"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 7:49 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n> I think \"frozen:\" would be a more suitable line prefix.\n\nAttached revision does it that way.\n\nBarring any objections I will commit this patch within the next few days.\n\nThanks\n-- \nPeter Geoghegan",
"msg_date": "Mon, 5 Sep 2022 12:43:33 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Instrumented pages/tuples frozen in autovacuum's server log out\n (and VACUUM VERBOSE)"
},
{
"msg_contents": "On Mon, Sep 5, 2022 at 12:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Barring any objections I will commit this patch within the next few days.\n\nPushed this just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 8 Sep 2022 10:31:04 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Instrumented pages/tuples frozen in autovacuum's server log out\n (and VACUUM VERBOSE)"
}
] |
[
{
"msg_contents": "Our documentation claims that --with-uuid=bsd works on both\nFreeBSD and NetBSD: installation.sgml says\n\n <option>bsd</option> to use the UUID functions found in FreeBSD, NetBSD,\n and some other BSD-derived systems\n\nand there is comparable wording in uuid-ossp.sgml.\n\nIn the course of setting up a NetBSD buildfarm animal, I discovered\nthat this is a lie. NetBSD indeed has the same uuid_create() function\nas FreeBSD, but it produces version-4 UUIDs not version-1, which causes\nthe contrib/uuid-ossp regression tests to fail. You have to dig down\na level to the respective uuidgen(2) man pages to find documentation\nabout this, but each system appears to be conforming to its docs,\nand the old DCE standard they both refer to conveniently omits saying\nanything about what kind of UUID you get. So this isn't a bug as\nfar as either BSD is concerned.\n\nI'm not personally inclined to do anything about this; I'm certainly\nnot excited enough about it to write our own v1-UUID creation code.\nPerhaps we should just document that on NetBSD, uuid_generate_v1()\nand uuid_generate_v1mc() don't conform to spec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Aug 2022 19:39:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-20 19:39:32 -0400, Tom Lane wrote:\n> Our documentation claims that --with-uuid=bsd works on both\n> FreeBSD and NetBSD: installation.sgml says\n> \n> <option>bsd</option> to use the UUID functions found in FreeBSD, NetBSD,\n> and some other BSD-derived systems\n> \n> and there is comparable wording in uuid-ossp.sgml.\n> \n> In the course of setting up a NetBSD buildfarm animal, I discovered\n> that this is a lie.\n\nAlso recently reported as a bug: https://postgr.es/m/17358-89806e7420797025%40postgresql.org\nwith a bunch of discussion.\n\n> I'm not personally inclined to do anything about this; I'm certainly\n> not excited enough about it to write our own v1-UUID creation code.\n> Perhaps we should just document that on NetBSD, uuid_generate_v1()\n> and uuid_generate_v1mc() don't conform to spec.\n\nPerhaps we should make them error out instead? It doesn't seem helpful to\njust return something wrong...\n\nCertainly would be good to get the regression tests to pass somehow, given\nthat we don't expect this to work. Don't want to add netbsd's results as an\nalternative, because that'd maybe hide bugs. But if we errored out we could\nprobably have an alternative with the errors, without a large risk of hiding\nbugs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 20 Aug 2022 17:48:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-20 19:39:32 -0400, Tom Lane wrote:\n>> In the course of setting up a NetBSD buildfarm animal, I discovered\n>> that this is a lie.\n\n> Also recently reported as a bug: https://postgr.es/m/17358-89806e7420797025%40postgresql.org\n> with a bunch of discussion.\n\nAh, I'd totally forgotten that thread :-(. After Peter pointed\nto the existence of new UUID format proposals, I kind of lost\ninterest in doing a lot of work to implement our own not-quite-\nper-spec V1 generator.\n\n> Perhaps we should make them error out instead? It doesn't seem helpful to\n> just return something wrong...\n\nYeah, might be appropriate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 20 Aug 2022 21:37:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Hi,\n\nOn 8/21/22 04:37, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> Perhaps we should make them error out instead? It doesn't seem helpful to\n>> just return something wrong...\n> Yeah, might be appropriate.\n\nBased on these discussions, I attached a patch.\n\nThanks,\nNazir Bilal Yavuz",
"msg_date": "Fri, 26 Aug 2022 18:36:29 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Nazir Bilal Yavuz <byavuz81@gmail.com> writes:\n> Based on these discussions, I attached a patch.\n\nThis is the wrong way to go about it:\n\n+#if defined(__NetBSD__)\n+\tereport(ERROR, errmsg(\"NetBSD's uuid_create function generates \"\n+\t\t\t\t\t\t\t\"version-4 UUIDs instead of version-1\"));\n+#endif\n\nOlder versions of NetBSD generated v1, so you'd incorrectly break\nthings on those. And who knows whether they might reconsider\nin the future?\n\nI think the right fix is to call uuid_create and then actually check\nthe version field of the result. This avoids breaking what need not\nbe broken, and it'd also guard against comparable problems on other\nplatforms (so don't blame NetBSD specifically in the message, either).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:21:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Hi,\n\n\nOn 8/26/22 19:21, Tom Lane wrote:\n> Nazir Bilal Yavuz <byavuz81@gmail.com> writes:\n>> Based on these discussions, I attached a patch.\n>\n> I think the right fix is to call uuid_create and then actually check\n> the version field of the result. This avoids breaking what need not\n> be broken, and it'd also guard against comparable problems on other\n> platforms (so don't blame NetBSD specifically in the message, either).\n\n\nI updated my patch. I checked the version field in the 'uuid_generate_internal' \nfunction instead of checking it in the 'uuid_generate_v1' and \n'uuid_generate_v1mc' functions, but I have some questions:\n\n1 - Should it be checked only for the '--with-uuid=bsd' option?\n 1.1 - If it only needs to be checked for '--with-uuid=bsd', \nshould just NetBSD be checked?\n2 - Should it error out without including the current UUID version in the \nerror message? A general error message could mask it if the 'uuid_create' \nfunction starts to produce UUIDs other than version-4.\n\nRegards,\nNazir Bilal Yavuz",
"msg_date": "Fri, 9 Sep 2022 17:54:07 +0300",
"msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Nazir Bilal Yavuz <byavuz81@gmail.com> writes:\n> I updated my patch. I checked version field in 'uuid_generate_internal' \n> function instead of checking it in 'uuid_generate_v1' and \n> 'uuid_generate_v1mc' functions, but I have some questions:\n\nYeah, that seems like the right place. I tweaked the code to check\nstrbuf not str just so we aren't making unnecessary assumptions about\nthe length of what is returned. strbuf[14] is guaranteed to exist,\nstr[14] maybe not.\n\n> 1 - Should it be checked only for '--with-uuid=bsd' option?\n> 1.1 - If it is needed to be checked only for '--with-uuid=bsd', \n> should just NetBSD be checked?\n\nI don't see any reason not to check in the BSD code path --- it's\na cheap enough test. On the other hand, in the other code paths\nthere is no evidence that it's necessary, and we'd find out soon\nenough if it becomes necessary.\n\n> 2 - Should it error out without including current UUID version in the \n> error message? General error message could mask if the 'uuid_create' \n> function starts to produce UUIDs other than version-4.\n\nYeah, I thought reporting the actual version digit was worth doing,\nand made it do so.\n\nPushed with those changes and doc updates. I did not push the\nvariant expected-file. I think the entire point here is that\nwe are *not* deeming the new NetBSD implementation acceptable,\nso allowing it to pass regression tests is the wrong thing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 12:48:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-09 12:48:38 -0400, Tom Lane wrote:\n> Pushed with those changes and doc updates. I did not push the\n> variant expected-file. I think the entire point here is that\n> we are *not* deeming the new NetBSD implementation acceptable,\n> so allowing it to pass regression tests is the wrong thing.\n\nWhat do we gain from the regression test failing exactly this way, given that\nwe know it's a problem? It just makes it harder to run tests. How about we add\nit as variant file, but via the resultmap mechanism? That way we wouldn't\nsilently accept the same bug on other platforms, but can still run the test\nwithout needing to manually filter out bogus netbsd results?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Sep 2022 14:20:12 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-09-09 12:48:38 -0400, Tom Lane wrote:\n>> Pushed with those changes and doc updates. I did not push the\n>> variant expected-file. I think the entire point here is that\n>> we are *not* deeming the new NetBSD implementation acceptable,\n>> so allowing it to pass regression tests is the wrong thing.\n\n> What do we gain from the regression test failing exactly this way, given that\n> we know it's a problem?\n\nIt tells people not to use --with-uuid=bsd on those NetBSD versions.\nThey can either do without uuid-ossp, or install ossp or e2fs.\n(\"Do without\" is not much of a hardship, now that we have\ngen_random_uuid() in core.)\n\nIMV a substantial part of the point of the regression tests is to\nlet end users and/or packagers verify that they have a non-broken\ninstallation. Hiding a problem by making the tests not fail\nbasically breaks that use-case.\n\nIf we had, say, a known openssl security bug that was exposed by our\ntest cases, would you advocate dumbing down the tests to not expose\nthe bug?\n\n> It just makes it harder to run tests.\n\nHarder for who? AFAICT there is nobody but me routinely running\nfull tests on NetBSD, else we'd have found this problem much earlier.\nI've got my animals configured not to use --with-uuid (not much of\na lift considering that's the buildfarm's default). End of problem.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 09 Sep 2022 17:31:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-09 17:31:40 -0400, Tom Lane wrote:\n> Harder for who? AFAICT there is nobody but me routinely running\n> full tests on NetBSD, else we'd have found this problem much earlier.\n\nBilal's report was caused by automating testing on netbsd (and openbsd) as\nwell, as part of the meson stuff. I also occasionally run the tests in a VM.\n\nBut as you say:\n\n> I've got my animals configured not to use --with-uuid (not much of\n> a lift considering that's the buildfarm's default). End of problem.\n\nFair enough.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 9 Sep 2022 15:05:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: configure --with-uuid=bsd fails on NetBSD"
}
] |
[
{
"msg_contents": "Hi,\nIn sqlsh, I issued `\\timing on`.\n\nI don't see timing information displayed for `\\c database`.\n\nDoes someone know how I can obtain such information ?\n\nThanks\n\nHi,In sqlsh, I issued `\\timing on`.I don't see timing information displayed for `\\c database`.Does someone know how I can obtain such information ?Thanks",
"msg_date": "Sun, 21 Aug 2022 05:48:08 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "timing information for switching database"
},
{
"msg_contents": "Hi\n\n\nne 21. 8. 2022 v 14:41 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:\n\n> Hi,\n> In sqlsh, I issued `\\timing on`.\n>\n> I don't see timing information displayed for `\\c database`.\n>\n> Does someone know how I can obtain such information ?\n>\n\nyou cannot do it in psql\n\nyou can write custom application and you can measure disconnect - connect\ntime\n\nRegards\n\nPavel\n\n>\n> Thanks\n>\n\nHine 21. 8. 2022 v 14:41 odesílatel Zhihong Yu <zyu@yugabyte.com> napsal:Hi,In sqlsh, I issued `\\timing on`.I don't see timing information displayed for `\\c database`.Does someone know how I can obtain such information ?you cannot do it in psqlyou can write custom application and you can measure disconnect - connect timeRegardsPavel Thanks",
"msg_date": "Sun, 21 Aug 2022 14:54:33 +0200",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: timing information for switching database"
},
{
"msg_contents": "*\\timing* set the pset.timing flag of the global psql options, and use it\nin the client side to indicate whether to print the query time or not.\n\nThere are two places using it, *SendQuery* and *PSQLexecWatch*,\nyou may check these functions and add the *timing* logic in\nfunction *exec_command_connect* to display the time infomation,\nbut this should be only used for your own testing purpose.\n\nOn Sun, Aug 21, 2022 at 8:41 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> In sqlsh, I issued `\\timing on`.\n>\n> I don't see timing information displayed for `\\c database`.\n>\n> Does someone know how I can obtain such information ?\n>\n> Thanks\n\n\n\n-- \nRegards\nJunwang Zhao\n\n\n",
"msg_date": "Sun, 21 Aug 2022 21:46:42 +0800",
"msg_from": "Junwang Zhao <zhjwpku@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: timing information for switching database"
}
] |
[
{
"msg_contents": "Hello!\n\nHere is a fix for the bug first described in:\nhttps://www.postgresql.org/message-id/flat/adf0452f-8c6b-7def-d35e-ab516c80088e%40inbox.ru\n\nReproduction:\n1) On master with 'wal_level = logical' execute mascmd.sql attached.\n\n2) On replica substitute the correct port in repcmd.sql and execute it.\n\n3) On master execute command:\nINSERT INTO rul_rule_set VALUES ('1', 'name','1','age','true');\n\nReplica will crash with:\nFailedAssertion(\"ActivePortal && ActivePortal->status == PORTAL_ACTIVE\", File: \"pg_proc.c\", Line: 1038, PID: 42894)\nin infinite loop.\n\nAfter applying this patch replica will give the correct error message instead of assertion:\n\n2022-08-21 17:08:39.935 MSK [143171] ERROR: relation \"rul_rule_set\" does not exist at character 172\n2022-08-21 17:08:39.935 MSK [143171] QUERY:\n\t-- Last modified: 2022-08-21 17:08:39.930842+03\n\twith parameters as (\n<<--- skipped by me --- >>>\n\t )\n2022-08-21 17:08:39.935 MSK [143171] CONTEXT: SQL statement \"create or replace function public.rule_set_selector(\n<<--- skipped by me --- >>>\n\tSQL statement \"call public.rebuild_rule_set_selector()\"\n\tPL/pgSQL function public.rul_rule_set_trg() line 4 at CALL\n\tprocessing remote data for replication origin \"pg_16401\" during \"INSERT\"\n for replication target relation \"public.rul_rule_set\" in transaction 741 finished at 0/17BE180\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 21 Aug 2022 17:33:57 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "[BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nOn 21.08.2022 17:33, Anton A. Melnikov wrote:\n> Hello!\n> \n> Here is a fix for the bug first described in:\n> https://www.postgresql.org/message-id/flat/adf0452f-8c6b-7def-d35e-ab516c80088e%40inbox.ru\n> \n\nSorry, there was a wrong patch in the first letter.\nHere is a right version.\n\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 21 Aug 2022 18:32:36 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nThe patch was rebased on current master.\nAnd here is a simplified crash reproduction:\n1) On primary with 'wal_level = logical' execute:\n CREATE TABLE public.test (id int NOT NULL, val integer);\n CREATE PUBLICATION test_pub FOR TABLE test;\n\n2) On replica replace XXXX in the repcmd.sql attached with primary port and execute it:\npsql -f repcmd.sql\n\n3) On master execute command:\nINSERT INTO test VALUES ('1');\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 30 Aug 2022 10:09:04 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nAdded a TAP test for this case.\n\nOn 30.08.2022 10:09, Anton A. Melnikov wrote:\n> Hello!\n> \n> The patch was rebased on current master.\n> And here is a simplified crash reproduction:\n> 1) On primary with 'wal_level = logical' execute:\n> CREATE TABLE public.test (id int NOT NULL, val integer);\n> CREATE PUBLICATION test_pub FOR TABLE test;\n> \n> 2) On replica replace XXXX in the repcmd.sql attached with primary port and execute it:\n> psql -f repcmd.sql\n> \n> 3) On master execute command:\n> INSERT INTO test VALUES ('1');\n> \n \nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 8 Sep 2022 11:47:06 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> [ v4-0001-Fix-logical-replica-assert-on-func-error.patch ]\n\nI took a quick look at this. I think you're solving the\nproblem in the wrong place. The real issue is why are\nwe not setting up ActivePortal correctly when running\nuser-defined code in a logrep worker? There is other code\nthat expects that to be set, eg EnsurePortalSnapshotExists.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Sep 2022 13:27:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nThanks for reply!\n\nOn 24.09.2022 20:27, Tom Lane wrote:\n> I think you're solving the\n> problem in the wrong place. The real issue is why are\n> we not setting up ActivePortal correctly when running\n> user-defined code in a logrep worker?\nDuring a common query from the backend ActivePortal becomes defined\nin the PortalRun and then AfterTriggerEndQuery starts with\nnon-NULL ActivePortal after ExecutorFinish.\nWhen the logrep worker is applying messages there are neither\nPortalStart nor PortalRun calls. And AfterTriggerEndQuery starts\nwith undefined ActivePortal after finish-edata().\nMay be it's normal behavior?\n\n> There is other code\n> that expects that to be set, eg EnsurePortalSnapshotExists.\n\nWhen the logrep worker is applying message it doesn't have to use\nActivePortal in EnsurePortalSnapshotExists because ActiveSnapshot is already installed.\nIt is set at the beginning of each transaction in the begin_replication_step call.\n\nOn the other hand, function_parse_error_transpose() tries to get\nthe original query text (INSERT INTO test VALUES ('1') in our case) from\nthe ActivePortal to clarify the location of the error.\nBut in the logrep worker there is no way to restore original query text\nfrom the logrep message. There is only 'zipped' query equivalent to the original.\nSo any function_parse_error_transpose() call seems to be useless\nin the logrep worker.\n\nAnd it looks like we can simply omit match_prosrc_to_query() call there.\nThe attached patch does it.\n\nBest wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 9 Oct 2022 12:24:23 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On 2022-Sep-24, Tom Lane wrote:\n\n> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> > [ v4-0001-Fix-logical-replica-assert-on-func-error.patch ]\n> \n> I took a quick look at this. I think you're solving the\n> problem in the wrong place. The real issue is why are\n> we not setting up ActivePortal correctly when running\n> user-defined code in a logrep worker? There is other code\n> that expects that to be set, eg EnsurePortalSnapshotExists.\n\nRight ... mostly, the logical replication *does* attempt to set up a\ntransaction and active snapshot when replaying actions (c.f.\nbegin_replication_step()). Is this firing at an inappropriate time,\nperhaps?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 10 Oct 2022 12:06:45 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On Sun, Oct 09, 2022 at 12:24:23PM +0300, Anton A. Melnikov wrote:\n> On the other hand, function_parse_error_transpose() tries to get\n> the original query text (INSERT INTO test VALUES ('1') in our case) from\n> the ActivePortal to clarify the location of the error.\n> But in the logrep worker there is no way to restore original query text\n> from the logrep message. There is only 'zipped' query equivalent to the original.\n> So any function_parse_error_transpose() call seems to be useless\n> in the logrep worker.\n\nYeah, the query string is not available in this context, but it surely\nlooks wrong to me to assume that something as low-level as\nfunction_parse_error_transpose() needs to be updated for the sake of a\nlogical worker, while we have other areas that would expect a portal\nto be set up.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:24:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Yeah, the query string is not available in this context, but it surely\n> looks wrong to me to assume that something as low-level as\n> function_parse_error_transpose() needs to be updated for the sake of a\n> logical worker, while we have other areas that would expect a portal\n> to be set up.\n\nAfter thinking about this some more, I'm withdrawing my opposition to\nfixing it by making function_parse_error_transpose() cope with not\nhaving an active portal. I have a few reasons:\n\n* A Portal is intended to contain an executor state. While worker.c\ndoes fake up an EState, there's certainly no plan tree or planstate tree,\nand I doubt it'd be sane to create dummy ones. So even if we made a\nPortal, it'd be lacking a lot of the stuff one would expect to find there.\nI fear that moving the cause of this sort of problem from \"there's no\nActivePortal\" to \"there's an ActivePortal but it lacks field X\" wouldn't\nbe an improvement.\n\n* There is actually just one other consumer of ActivePortal,\nnamely EnsurePortalSnapshotExists, and that doesn't offer a lot of\nsupport for the idea that ActivePortal must always be set. It says\n\n * Nothing to do if a snapshot is set. (We take it on faith that the\n * outermost active snapshot belongs to some Portal; or if there is no\n * Portal, it's somebody else's responsibility to manage things.)\n\nand \"it's somebody else's responsibility\" summarizes the situation\nhere pretty perfectly. worker.c *does* set up an active snapshot.\n\n* The comment in function_parse_error_transpose() freely admits that\nlooking at the ActivePortal is a hack. It works, more or less, for\nthe intended case of reporting a function-body syntax error nicely\nduring CREATE FUNCTION. 
But it's capable of getting false-positive\nmatches, so maybe someday we should replace it with something more\nbulletproof.\n\n* There isn't any strong reason why function_parse_error_transpose()\nhas to succeed at finding the original query text. Its fallback\napproach of treating the syntax error position as internal to the\nfunction body text is fine, in fact it's just what we want here.\n\n\nSo I'm now good with the idea of just not failing. I don't like\nthe patch as presented though. First, the cfbot is quite rightly\ncomplaining about the \"uninitialized variable\" warning it draws.\nSecond, I don't see a good reason to tie the change to logical\nreplication in any way. Let's just change the Assert to an if(),\nas attached.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 02 Nov 2022 14:02:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On Wed, Nov 2, 2022 at 11:32 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> So I'm now good with the idea of just not failing. I don't like\n> the patch as presented though. First, the cfbot is quite rightly\n> complaining about the \"uninitialized variable\" warning it draws.\n> Second, I don't see a good reason to tie the change to logical\n> replication in any way. Let's just change the Assert to an if(),\n> as attached.\n>\n\nLGTM. I don't know if it is a good idea to omit the test case for this\nscenario. If required, we can reuse the test case from Sawada-San's\npatch in the email [1].\n\n[1] - https://www.postgresql.org/message-id/CAD21AoDKA%2BMB4M9BOnct_%3DZj5bNHbkYn6oKZ2aOQp8m%3D3x2GhQ%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 3 Nov 2022 09:41:46 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> LGTM. I don't know if it is a good idea to omit the test case for this\n> scenario. If required, we can reuse the test case from Sawada-San's\n> patch in the email [1].\n\nI don't think the cost of that test case is justified by the tiny\nprobability that it'd ever catch anything. If we were just adding a\nquery or two to an existing scenario, that could be okay; but spinning\nup and syncing a whole new primary and standby database is *expensive*\nwhen you multiply it by the number of times developers and buildfarm\nanimals are going to run the tests in the future.\n\nThere's also the little issue that I'm not sure it would actually\ndetect a problem if we had one. The case is going to fail, and\nwhat we want to know is just how messily it fails, and I think the\nTAP infrastructure isn't very sensitive to that ... especially\nif the test isn't even checking for specific error messages.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 03 Nov 2022 11:29:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nOn 02.11.2022 21:02, Tom Lane wrote:\n> So I'm now good with the idea of just not failing. I don't like\n> the patch as presented though. First, the cfbot is quite rightly\n> complaining about the \"uninitialized variable\" warning it draws.\n> Second, I don't see a good reason to tie the change to logical\n> replication in any way. Let's just change the Assert to an if(),\n> as attached.\n\nFully agreed that is the most optimal solution for that case. Thanks!\nSurely it's very rare one but there was a real segfault at production server.\nSomeone came up with the idea to modify function like public.test_selector()\nin repcmd.sql (see above) on the fly with adding to it :last_modification:\nfield from current time and some other parameters with the help of replace()\ninside the creation of the rebuild_test() procedure.\n\nOn 03.11.2022 18:29, Tom Lane wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> LGTM. I don't know if it is a good idea to omit the test case for this\n>> scenario. If required, we can reuse the test case from Sawada-San's\n>> patch in the email [1].\n> \n> I don't think the cost of that test case is justified by the tiny\n> probability that it'd ever catch anything. If we were just adding a\n> query or two to an existing scenario, that could be okay; but spinning\n> up and syncing a whole new primary and standby database is *expensive*\n> when you multiply it by the number of times developers and buildfarm\n> animals are going to run the tests in the future.\n> \n> There's also the little issue that I'm not sure it would actually\n> detect a problem if we had one. The case is going to fail, and\n> what we want to know is just how messily it fails, and I think the\n> TAP infrastructure isn't very sensitive to that ... especially\n> if the test isn't even checking for specific error messages.\nSeems it is possible to do a test without these remarks. 
The attached\ntest uses existing nodes and checks the specific error message. Additionally\ni've tried to reduce overall number of nodes previously\nused in this test in a similar way.\n\nWould be glad for comments and remarks.\n\nWith best wishes,\n\n--\nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Tue, 15 Nov 2022 04:39:53 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> On 02.11.2022 21:02, Tom Lane wrote:\n>> I don't think the cost of that test case is justified by the tiny\n>> probability that it'd ever catch anything.\n\n> Seems it is possible to do a test without these remarks. The attached\n> test uses existing nodes and checks the specific error message.\n\nMy opinion remains unchanged: this would be a very poor use of test\ncycles.\n\n> Additionally\n> i've tried to reduce overall number of nodes previously\n> used in this test in a similar way.\n\nOptimizing existing tests isn't an answer to that. We could\ninstall those optimizations without adding a new test case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 14 Nov 2022 20:59:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Thanks a lot for the fast reply!\n\nOn 03.11.2022 18:29, Tom Lane wrote:\n> If we were just adding a\n> query or two to an existing scenario, that could be okay; but spinning\n> up and syncing a whole new primary and standby database is *expensive*\n> when you multiply it by the number of times developers and buildfarm\n> animals are going to run the tests in the future.\n>\n\nOn 15.11.2022 04:59, Tom Lane wrote:\n> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n>> On 02.11.2022 21:02, Tom Lane wrote:\n>>> I don't think the cost of that test case is justified by the tiny\n>>> probability that it'd ever catch anything.\n>\n>> Seems it is possible to do a test without these remarks. The attached\n>> test uses existing nodes and checks the specific error message.\n>\n> My opinion remains unchanged: this would be a very poor use of test\n> cycles.\n\nSorry, i didn't fully understand what is required and\nadded some functions to the test that spend extra cpu time. But i found\nthat it is possible to make a test according to previous remarks by adding\nonly a few extra queries to an existent test without any additional syncing.\n\nExperimentally, i could not observe any significant difference in\nCPU usage due to this test addition.\nThe difference in the CPU usage was less than standard error.\nSee 100_bugs-CPU-time.txt attached.\n\n>> Additionally\n>> i've tried to reduce overall number of nodes previously\n>> used in this test in a similar way.\n>\n> Optimizing existing tests isn't an answer to that. We could\n> install those optimizations without adding a new test case.\n\nOh sure! Here is a separate patch for this optimization:\nhttps://www.postgresql.org/message-id/eb7aa992-c2d7-6ce7-4942-0c784231a362%40inbox.ru\nOn the contrary with previous case in that one the difference in the CPU usage\nduring 100_bugs.pl is essential; it decreases approximately by 30%.\n\n\nWith the best wishes!\n\n-- \nAnton A. 
Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 16 Nov 2022 17:52:50 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hi,\n\nOn 2022-11-16 17:52:50 +0300, Anton A. Melnikov wrote:\n> Sorry, i didn't fully understand what is required and\n> added some functions to the test that spend extra cpu time. But i found\n> that it is possible to make a test according to previous remarks by adding\n> only a few extra queries to an existent test without any additional syncing.\n> \n> Experimentally, i could not observe any significant difference in\n> CPU usage due to this test addition.\n> The difference in the CPU usage was less than standard error.\n> See 100_bugs-CPU-time.txt attached.\n> \n> > > Additionally\n> > > i've tried to reduce overall number of nodes previously\n> > > used in this test in a similar way.\n> > \n> > Optimizing existing tests isn't an answer to that. We could\n> > install those optimizations without adding a new test case.\n> \n> Oh sure! Here is a separate patch for this optimization:\n> https://www.postgresql.org/message-id/eb7aa992-c2d7-6ce7-4942-0c784231a362%40inbox.ru\n> On the contrary with previous case in that one the difference in the CPU usage\n> during 100_bugs.pl is essential; it decreases approximately by 30%.\n\nThis CF entry causes tests to fail on all platforms:\nhttps://cirrus-ci.com/build/5755408111894528\n\nE.g.\nhttps://api.cirrus-ci.com/v1/artifact/task/5298457144459264/testrun/build/testrun/subscription/100_bugs/log/regress_log_100_bugs\n\n#### Begin standard error\npsql:<stdin>:1: NOTICE: dropped replication slot \"sub1\" on publisher\n#### End standard error\ntimed out waiting for match: ERROR: relation \"error_name\" does not exist at character at /tmp/cirrus-ci-build/src/test/subscription/t/100_bugs.pl line 115.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 7 Dec 2022 10:03:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On 07.12.2022 21:03, Andres Freund wrote:\n\n> \n> This CF entry causes tests to fail on all platforms:\n> https://cirrus-ci.com/build/5755408111894528\n> \n> E.g.\n> https://api.cirrus-ci.com/v1/artifact/task/5298457144459264/testrun/build/testrun/subscription/100_bugs/log/regress_log_100_bugs\n> \n> #### Begin standard error\n> psql:<stdin>:1: NOTICE: dropped replication slot \"sub1\" on publisher\n> #### End standard error\n> timed out waiting for match: ERROR: relation \"error_name\" does not exist at character at /tmp/cirrus-ci-build/src/test/subscription/t/100_bugs.pl line 115.\n> \n> Greetings,\n> \n> Andres Freund\n\nThank you for reminding!\n\nThere was a conflict when applying v2 on current master.\nRebased v3 is attached.\n\nBest wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 11 Dec 2022 06:50:49 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On Sun, 11 Dec 2022 at 09:21, Anton A. Melnikov <aamelnikov@inbox.ru> wrote:\n>\n>\n> On 07.12.2022 21:03, Andres Freund wrote:\n>\n> >\n> > This CF entry causes tests to fail on all platforms:\n> > https://cirrus-ci.com/build/5755408111894528\n> >\n> > E.g.\n> > https://api.cirrus-ci.com/v1/artifact/task/5298457144459264/testrun/build/testrun/subscription/100_bugs/log/regress_log_100_bugs\n> >\n> > #### Begin standard error\n> > psql:<stdin>:1: NOTICE: dropped replication slot \"sub1\" on publisher\n> > #### End standard error\n> > timed out waiting for match: ERROR: relation \"error_name\" does not exist at character at /tmp/cirrus-ci-build/src/test/subscription/t/100_bugs.pl line 115.\n> >\n> > Greetings,\n> >\n> > Andres Freund\n>\n> Thank you for reminding!\n>\n> There was a conflict when applying v2 on current master.\n> Rebased v3 is attached.\n\nFew suggestions:\n1) There is a warning:\n+# This would crash on the subscriber if not fixed\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab1 VALUES (3, 4)\");\n+\n+my $result = $node_subscriber->wait_for_log(\n+ \"ERROR: relation \\\"error_name\\\" does not exist at character\"\n+);\n\n \"my\" variable $result masks earlier declaration in same scope at\nt/100_bugs.pl line 400.\n\nYou can change:\nmy $result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM sch1.t1\");\nto\n$result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM sch1.t1\");\n\n2) Now that the crash is fixed, you could change it to a better message:\n+# This would crash on the subscriber if not fixed\n+$node_publisher->safe_psql('postgres', \"INSERT INTO tab1 VALUES (3, 4)\");\n+\n+my $result = $node_subscriber->wait_for_log(\n+ \"ERROR: relation \\\"error_name\\\" does not exist at character\"\n+);\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 7 Jan 2023 17:57:18 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Thanks for your remarks.\n\nOn 07.01.2023 15:27, vignesh C wrote:\n> \n> Few suggestions:\n> 1) There is a warning:\n> +# This would crash on the subscriber if not fixed\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab1 VALUES (3, 4)\");\n> +\n> +my $result = $node_subscriber->wait_for_log(\n> + \"ERROR: relation \\\"error_name\\\" does not exist at character\"\n> +);\n> \n> \"my\" variable $result masks earlier declaration in same scope at\n> t/100_bugs.pl line 400.\n> \n> You can change:\n> my $result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM sch1.t1\");\n> to\n> $result = $node_subscriber->safe_psql('postgres', \"SELECT * FROM sch1.t1\");\n\nThe reason is that the patch fell behind the master.\nFixed in v4 together with rebasing on current master.\n\n> 2) Now that the crash is fixed, you could change it to a better message:\n> +# This would crash on the subscriber if not fixed\n> +$node_publisher->safe_psql('postgres', \"INSERT INTO tab1 VALUES (3, 4)\");\n> +\n> +my $result = $node_subscriber->wait_for_log(\n> + \"ERROR: relation \\\"error_name\\\" does not exist at character\"\n> +);\n> \n\nTried to make this comment more clear.\n\nBest wishes for the new year!\n\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Sun, 8 Jan 2023 09:02:33 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nOn 15.03.2023 21:29, Gregory Stark (as CFM) wrote:\n\n> These patches that are \"Needs Review\" and have received no comments at\n> all since before March 1st are these. If your patch is amongst this\n> list I would suggest any of:\n> \n> 1) Move it yourself to the next CF (or withdraw it)\n> 2) Post to the list with any pending questions asking for specific\n> feedback -- it's much more likely to get feedback than just a generic\n> \"here's a patch plz review\"...\n> 3) Mark it Ready for Committer and possibly post explaining the\n> resolution to any earlier questions to make it easier for a committer\n> to understand the state\n>\n\nThere were some remarks:\n1) very poor use of test cycles (by Tom Lane)\nSolved in v2 by adding few extra queries to an existent test without any additional syncing.\n2) the patch-tester fails on all platforms (by Andres Freund)\nFixed in v3.\n3) warning with \"my\" variable $result and suggestion to correct comment (by vignesh C)\nBoth fixed in v4.\n\nNow there are no any pending questions, so moved it to RFC.\n\nWith the best regards!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 16 Mar 2023 16:14:00 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> Now there are no any pending questions, so moved it to RFC.\n\nI did not think this case was worth memorializing in a test before,\nand I still do not. I'm inclined to reject this patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Apr 2023 14:49:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "Hello!\n\nOn 03.04.2023 21:49, Tom Lane wrote:\n> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n>> Now there are no any pending questions, so moved it to RFC.\n> \n> I did not think this case was worth memorializing in a test before,\n> and I still do not. I'm inclined to reject this patch.\n\nEarlier, when discussing this test, there was a suggestion like this:\n\n> If we were just adding a\n> query or two to an existing scenario, that could be okay;\n\nThe current version of the test seems to be satisfies this condition.\nThe queries added do not affect the total test time within the measurement error.\nThis case is rare, of cause, but it really took place in practice.\n\nSo either there are some more reasons why this test should not be accepted that\ni do not understand, or i misunderstood something from the above.\n\nCould you help me to figure out, please.\n\nWould be very grateful.\n\nSincerely yours,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Wed, 5 Apr 2023 17:04:39 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> On 03.04.2023 21:49, Tom Lane wrote:\n>> I did not think this case was worth memorializing in a test before,\n>> and I still do not. I'm inclined to reject this patch.\n\n> Could you help me to figure out, please.\n\nThe problem was an Assert that was speculative when it went in,\nand which we eventually found was wrong in the context of logical\nreplication. We removed the Assert. I don't think we need a test\ncase to keep us from putting back the Assert. That line of thinking\nleads to test suites that run for fourteen hours and are near useless\nbecause developers can't run them easily.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 05 Apr 2023 10:35:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
},
{
"msg_contents": "On 05.04.2023 17:35, Tom Lane wrote:\n> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n>> On 03.04.2023 21:49, Tom Lane wrote:\n>>> I did not think this case was worth memorializing in a test before,\n>>> and I still do not. I'm inclined to reject this patch.\n> \n>> Could you help me to figure out, please.\n> \n> The problem was an Assert that was speculative when it went in,\n> and which we eventually found was wrong in the context of logical\n> replication. We removed the Assert. I don't think we need a test\n> case to keep us from putting back the Assert. That line of thinking\n> leads to test suites that run for fourteen hours and are near useless\n> because developers can't run them easily.\n> \n> \t\t\tregards, tom lane\n\nOk, i understand! Thanks a lot for the clarification. A rather important point,\ni'll take it into account for the future.\nLet's do that. Revoked the patch.\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Thu, 6 Apr 2023 13:24:21 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] Logical replica crash if there was an error in a function."
}
] |
[
{
"msg_contents": "When costing a btree index scan, num_sa_scans gets computed twice, once in\nbtcostestmeate and once in genericcostestimate. But the computations are\ndifferent. It looks like the generic one includes all =ANY in any column\nin the index, while the bt one includes only =ANY which or on columns for\nwhich all the preceding index columns are tested for equality.\n\nIt looks like the generic one was there first then the bt-specific one was\nadded later to improve planning of btree indexes. But then shouldn't the\nvalue be passed down to generic, rather than recomputed (differently)?\nI've attached a patch to do that. Generic still computes the value itself\nfor other-than-btree indexes.\n\nOr, is there a reason we want a different value to be used in\ngenericcostestimate?\n\nThe context for this is that I was looking at cases where btree indexes\nwere not using all the columns they could, but rather shoving some of the\nconditions down into a Filter unnecessarily/unhelpfully. This change\ndoesn't fix that, but it does seem to be moving in the right direction.\n\nThis does cause a regression test failure due to an (apparently?)\nuninteresting plan change.\n\nCheers,\n\nJeff",
"msg_date": "Sun, 21 Aug 2022 14:45:14 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "num_sa_scans in genericcostestimate"
},
{
"msg_contents": "On Sun, Aug 21, 2022 at 2:45 PM Jeff Janes <jeff.janes@gmail.com> wrote:\n\n\n> ...\n>\n\n\n> The context for this is that I was looking at cases where btree indexes\n> were not using all the columns they could, but rather shoving some of the\n> conditions down into a Filter unnecessarily/unhelpfully. This change\n> doesn't fix that, but it does seem to be moving in the right direction.\n>\n\nAdded to commitfest.\n\n\n> This does cause a regression test failure due to an (apparently?)\n> uninteresting plan change.\n>\n\nLooking more at the regression test plan change, it points up an\ninteresting question which is only tangentially related to this patch.\n\nWith patch applied:\n\n[local] 417536 regression=# explain analyze SELECT thousand, tenthous FROM\ntenk1\n WHERE thousand < 2 AND tenthous IN (1001,3000)\n ORDER BY thousand;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=4.55..4.56 rows=1 width=8) (actual time=0.100..0.101 rows=2\nloops=1)\n Sort Key: thousand\n Sort Method: quicksort Memory: 25kB\n -> Index Only Scan using tenk1_thous_tenthous on tenk1\n (cost=0.29..4.50 rows=1 width=8) (actual time=0.044..0.048 rows=2 loops=1)\n Index Cond: ((thousand < 2) AND (tenthous = ANY\n('{1001,3000}'::integer[])))\n Heap Fetches: 0\n Planning Time: 1.040 ms\n Execution Time: 0.149 ms\n(8 rows)\n\n\n[local] 417536 regression=# set enable_sort TO off ;\n\n\n[local] 417536 regression=# explain analyze SELECT thousand, tenthous FROM\ntenk1\n WHERE thousand < 2 AND tenthous IN (1001,3000)\n ORDER BY thousand;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------\n Index Only Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..4.71\nrows=1 width=8) (actual time=0.021..0.024 rows=2 loops=1)\n Index Cond: (thousand < 2)\n Filter: (tenthous = ANY 
('{1001,3000}'::integer[]))\n Rows Removed by Filter: 18\n Heap Fetches: 0\n Planning Time: 0.156 ms\n Execution Time: 0.039 ms\n(7 rows)\n\nWhy does having the =ANY in the \"Index Cond:\" rather than the \"Filter:\"\ninhibit it from understanding that the rows will still be delivered in\norder by \"thousand\"?\n\nCheers,\n\nJeff\n\n>\n\nOn Sun, Aug 21, 2022 at 2:45 PM Jeff Janes <jeff.janes@gmail.com> wrote: ... The context for this is that I was looking at cases where btree indexes were not using all the columns they could, but rather shoving some of the conditions down into a Filter unnecessarily/unhelpfully. This change doesn't fix that, but it does seem to be moving in the right direction.Added to commitfest. This does cause a regression test failure due to an (apparently?) uninteresting plan change.Looking more at the regression test plan change, it points up an interesting question which is only tangentially related to this patch.With patch applied:[local] 417536 regression=# explain analyze SELECT thousand, tenthous FROM tenk1 WHERE thousand < 2 AND tenthous IN (1001,3000) ORDER BY thousand; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- Sort (cost=4.55..4.56 rows=1 width=8) (actual time=0.100..0.101 rows=2 loops=1) Sort Key: thousand Sort Method: quicksort Memory: 25kB -> Index Only Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..4.50 rows=1 width=8) (actual time=0.044..0.048 rows=2 loops=1) Index Cond: ((thousand < 2) AND (tenthous = ANY ('{1001,3000}'::integer[]))) Heap Fetches: 0 Planning Time: 1.040 ms Execution Time: 0.149 ms(8 rows)[local] 417536 regression=# set enable_sort TO off ;[local] 417536 regression=# explain analyze SELECT thousand, tenthous FROM tenk1 WHERE thousand < 2 AND tenthous IN (1001,3000) ORDER BY thousand; QUERY PLAN 
--------------------------------------------------------------------------------------------------------------------------------- Index Only Scan using tenk1_thous_tenthous on tenk1 (cost=0.29..4.71 rows=1 width=8) (actual time=0.021..0.024 rows=2 loops=1) Index Cond: (thousand < 2) Filter: (tenthous = ANY ('{1001,3000}'::integer[])) Rows Removed by Filter: 18 Heap Fetches: 0 Planning Time: 0.156 ms Execution Time: 0.039 ms(7 rows)Why does having the =ANY in the \"Index Cond:\" rather than the \"Filter:\" inhibit it from understanding that the rows will still be delivered in order by \"thousand\"?Cheers,Jeff",
"msg_date": "Wed, 31 Aug 2022 22:33:08 -0400",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: num_sa_scans in genericcostestimate"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> When costing a btree index scan, num_sa_scans gets computed twice, once in\n> btcostestmeate and once in genericcostestimate. But the computations are\n> different. It looks like the generic one includes all =ANY in any column\n> in the index, while the bt one includes only =ANY which or on columns for\n> which all the preceding index columns are tested for equality.\n\nI think this is correct. As per the comments in btcostestimate:\n\n * For a btree scan, only leading '=' quals plus inequality quals for the\n * immediately next attribute contribute to index selectivity (these are\n * the \"boundary quals\" that determine the starting and stopping points of\n * the index scan). Additional quals can suppress visits to the heap, so\n * it's OK to count them in indexSelectivity, but they should not count\n * for estimating numIndexTuples. So we must examine the given indexquals\n * to find out which ones count as boundary quals. ...\n\nand further down\n\n /* count number of SA scans induced by indexBoundQuals only */\n if (alength > 1)\n num_sa_scans *= alength;\n\nThis num_sa_scans value computed by btcostestimate is (or should be)\nonly used in calculations related to numIndexTuples, whereas the one\nin genericcostestimate should be used for calculations related to the\noverall number of heap tuples returned by the indexscan. Maybe there\nis someplace that is using the wrong one, but it's not a bug that they\nare different.\n\n> The context for this is that I was looking at cases where btree indexes\n> were not using all the columns they could, but rather shoving some of the\n> conditions down into a Filter unnecessarily/unhelpfully. This change\n> doesn't fix that, but it does seem to be moving in the right direction.\n\nIf it helps, it's only accidental, because this patch is surely wrong.\nWe *should* be distinguishing these numbers.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 15:17:18 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: num_sa_scans in genericcostestimate"
},
{
"msg_contents": "Jeff Janes <jeff.janes@gmail.com> writes:\n> Why does having the =ANY in the \"Index Cond:\" rather than the \"Filter:\"\n> inhibit it from understanding that the rows will still be delivered in\n> order by \"thousand\"?\n\nThey won't be. The =ANY in index conditions results in multiple\nindex scans, that is we effectively do a scan with\n\n Index Cond: (thousand < 2) AND (tenthous = 1001)\n\nand then another with\n\n Index Cond: (thousand < 2) AND (tenthous = 3000)\n\nand only by very good luck would the overall result be sorted by\n\"thousand\". On the other hand, if the ScalarArrayOp is a plain\nfilter condition, then it doesn't affect the number of index\nscans --- it's just a (rather expensive) filter condition.\n\nindxpath.c's get_index_paths() is explicitly aware of these\nconsiderations, maybe reading the comments there would help.\n\nI don't say there couldn't be a bug here, but you haven't\ndemonstrated one. I believe that get_index_paths() will\ngenerate paths both ways, with the ScalarArrayOp as a filter\ncondition and an indexqual, and it's evidently deciding the\nfirst way is cheaper.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 08 Sep 2022 15:33:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: num_sa_scans in genericcostestimate"
}
] |
[
{
"msg_contents": "Hi All,\n\nI am writing a postgres extension which writes only generic wal record, but\nthis wal is not recognized by logical replication decoder. I have a basic\nunderstanding of how logical replication(COPY command for initial sync, wal\nreplica for final sync) works, can you please tell us a way to support this?\n\n\nThanks,\nNatarajan.R\n\nHi All,I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?Thanks,Natarajan.R",
"msg_date": "Mon, 22 Aug 2022 11:59:05 +0530",
"msg_from": "Natarajan R <nataraj3098@gmail.com>",
"msg_from_op": true,
"msg_subject": "Logical replication support for generic wal record"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com> wrote:\n>\n> Hi All,\n>\n> I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?\n\n\"Generic\" resource manager doesn't have a decoding API, see [1], which\nmeans that the generic WAL records will not get decoded.\n\nCan you be more specific about the use-case? Why use only \"Generic\"\ntype WAL records? Why not use \"LogicalMessage\" type WAL records if you\nwant your WAL records to be decoded?\n\n[1] https://github.com/postgres/postgres/blob/master/src/include/access/rmgrlist.h#L48\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Mon, 22 Aug 2022 12:15:45 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication support for generic wal record"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com> wrote:\n>\n> Hi All,\n>\n> I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?\n>\n\nDid you try with a custom WAL resource manager [1][2]?\n\n[1] - https://www.postgresql.org/docs/devel/custom-rmgr.html\n[2] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:01:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication support for generic wal record"
},
{
"msg_contents": "On Mon, 22 Aug 2022 at 12:16, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com>\n> wrote:\n> >\n> > Hi All,\n> >\n> > I am writing a postgres extension which writes only generic wal record,\n> but this wal is not recognized by logical replication decoder. I have a\n> basic understanding of how logical replication(COPY command for initial\n> sync, wal replica for final sync) works, can you please tell us a way to\n> support this?\n>\n> \"Generic\" resource manager doesn't have a decoding API, see [1], which\n> means that the generic WAL records will not get decoded.\n>\n> Can you be more specific about the use-case? Why use only \"Generic\"\n> type WAL records? Why not use \"LogicalMessage\" type WAL records if you\n> want your WAL records to be decoded?\n>\n> I am writing an extension which implements postgres table access method\ninterface[1] with master-slave architecture, with the help of doc[1] i\ndecided to go with generic_wal to achieve crash_safety and also for\nstreaming replication. It seems like generic_wal couldn't help with logical\nreplication..\nBut, I don't have knowledge on \"LogicalMessage\" Resource Manager, need to\nexplore about it.\n\n\n[1] https://www.postgresql.org/docs/current/tableam.html\n\nOn Mon, 22 Aug 2022 at 12:16, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com> wrote:\n>\n> Hi All,\n>\n> I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. 
I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?\n\n\"Generic\" resource manager doesn't have a decoding API, see [1], which\nmeans that the generic WAL records will not get decoded.\n\nCan you be more specific about the use-case? Why use only \"Generic\"\ntype WAL records? Why not use \"LogicalMessage\" type WAL records if you\nwant your WAL records to be decoded?I am writing an extension which implements postgres table access method interface[1] with master-slave architecture, with the help of doc[1] i decided to go with generic_wal to achieve crash_safety and also for streaming replication. It seems like generic_wal couldn't help with logical replication..But, I don't have knowledge on \"LogicalMessage\" Resource Manager, need to explore about it.[1] https://www.postgresql.org/docs/current/tableam.html",
"msg_date": "Wed, 24 Aug 2022 17:12:28 +0530",
"msg_from": "Natarajan R <nataraj3098@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication support for generic wal record"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 5:12 PM Natarajan R <nataraj3098@gmail.com> wrote:\n>\n>\n> On Mon, 22 Aug 2022 at 12:16, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com> wrote:\n>> >\n>> > Hi All,\n>> >\n>> > I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?\n>>\n>> \"Generic\" resource manager doesn't have a decoding API, see [1], which\n>> means that the generic WAL records will not get decoded.\n>>\n>> Can you be more specific about the use-case? Why use only \"Generic\"\n>> type WAL records? Why not use \"LogicalMessage\" type WAL records if you\n>> want your WAL records to be decoded?\n>>\n> I am writing an extension which implements postgres table access method interface[1] with master-slave architecture, with the help of doc[1] i decided to go with generic_wal to achieve crash_safety and also for streaming replication. It seems like generic_wal couldn't help with logical replication..\n> But, I don't have knowledge on \"LogicalMessage\" Resource Manager, need to explore about it.\n>\n>\n> [1] https://www.postgresql.org/docs/current/tableam.html\n\nI think the 'Custom WAL Resource Managers' feature would serve the\nexact same purpose (as also pointed out by Amit upthread), you may\nwant to explore that feature [1]. Here's a sample extension using that\nfeature [2], for a different purpose though, but helps to understand\nthe usage of custom WAL rmgrs.\n\nI notice that the docs [3] don't mention the feature in the right\nplace, IMO, it can be improved to refer to custom-rmgr.sgml page,\ncc-ing Jeff Davis for his thoughts. 
This would help developers quickly\ntry the feature out and saves time.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n[2] https://github.com/BRupireddy/pg_synthesize_wal\n[2] https://www.postgresql.org/docs/devel/tableam.html\n\"For crash safety, an AM can use postgres' WAL, or a custom\nimplementation. If WAL is chosen, either Generic WAL Records can be\nused, or a new type of WAL records can be implemented. Generic WAL\nRecords are easy, but imply higher WAL volume. Implementation of a new\ntype of WAL record currently requires modifications to core code\n(specifically, src/include/access/rmgrlist.h).\"\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Wed, 24 Aug 2022 18:00:16 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Logical replication support for generic wal record"
},
{
"msg_contents": "Thanks, I'll check it out.\n\nOn Wed, 24 Aug 2022 at 18:00, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Wed, Aug 24, 2022 at 5:12 PM Natarajan R <nataraj3098@gmail.com> wrote:\n> >\n> >\n> > On Mon, 22 Aug 2022 at 12:16, Bharath Rupireddy <\n> bharath.rupireddyforpostgres@gmail.com> wrote:\n> >>\n> >> On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com>\n> wrote:\n> >> >\n> >> > Hi All,\n> >> >\n> >> > I am writing a postgres extension which writes only generic wal\n> record, but this wal is not recognized by logical replication decoder. I\n> have a basic understanding of how logical replication(COPY command for\n> initial sync, wal replica for final sync) works, can you please tell us a\n> way to support this?\n> >>\n> >> \"Generic\" resource manager doesn't have a decoding API, see [1], which\n> >> means that the generic WAL records will not get decoded.\n> >>\n> >> Can you be more specific about the use-case? Why use only \"Generic\"\n> >> type WAL records? Why not use \"LogicalMessage\" type WAL records if you\n> >> want your WAL records to be decoded?\n> >>\n> > I am writing an extension which implements postgres table access method\n> interface[1] with master-slave architecture, with the help of doc[1] i\n> decided to go with generic_wal to achieve crash_safety and also for\n> streaming replication. It seems like generic_wal couldn't help with logical\n> replication..\n> > But, I don't have knowledge on \"LogicalMessage\" Resource Manager, need\n> to explore about it.\n> >\n> >\n> > [1] https://www.postgresql.org/docs/current/tableam.html\n>\n> I think the 'Custom WAL Resource Managers' feature would serve the\n> exact same purpose (as also pointed out by Amit upthread), you may\n> want to explore that feature [1]. 
Here's a sample extension using that\n> feature [2], for a different purpose though, but helps to understand\n> the usage of custom WAL rmgrs.\n>\n> I notice that the docs [3] don't mention the feature in the right\n> place, IMO, it can be improved to refer to custom-rmgr.sgml page,\n> cc-ing Jeff Davis for his thoughts. This would help developers quickly\n> try the feature out and saves time.\n>\n> [1]\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n> [2] https://github.com/BRupireddy/pg_synthesize_wal\n> [2] https://www.postgresql.org/docs/devel/tableam.html\n> \"For crash safety, an AM can use postgres' WAL, or a custom\n> implementation. If WAL is chosen, either Generic WAL Records can be\n> used, or a new type of WAL records can be implemented. Generic WAL\n> Records are easy, but imply higher WAL volume. Implementation of a new\n> type of WAL record currently requires modifications to core code\n> (specifically, src/include/access/rmgrlist.h).\"\n>\n> --\n> Bharath Rupireddy\n> RDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n>\n\nThanks, I'll check it out. On Wed, 24 Aug 2022 at 18:00, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:On Wed, Aug 24, 2022 at 5:12 PM Natarajan R <nataraj3098@gmail.com> wrote:\n>\n>\n> On Mon, 22 Aug 2022 at 12:16, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Mon, Aug 22, 2022 at 11:59 AM Natarajan R <nataraj3098@gmail.com> wrote:\n>> >\n>> > Hi All,\n>> >\n>> > I am writing a postgres extension which writes only generic wal record, but this wal is not recognized by logical replication decoder. 
I have a basic understanding of how logical replication(COPY command for initial sync, wal replica for final sync) works, can you please tell us a way to support this?\n>>\n>> \"Generic\" resource manager doesn't have a decoding API, see [1], which\n>> means that the generic WAL records will not get decoded.\n>>\n>> Can you be more specific about the use-case? Why use only \"Generic\"\n>> type WAL records? Why not use \"LogicalMessage\" type WAL records if you\n>> want your WAL records to be decoded?\n>>\n> I am writing an extension which implements postgres table access method interface[1] with master-slave architecture, with the help of doc[1] i decided to go with generic_wal to achieve crash_safety and also for streaming replication. It seems like generic_wal couldn't help with logical replication..\n> But, I don't have knowledge on \"LogicalMessage\" Resource Manager, need to explore about it.\n>\n>\n> [1] https://www.postgresql.org/docs/current/tableam.html\n\nI think the 'Custom WAL Resource Managers' feature would serve the\nexact same purpose (as also pointed out by Amit upthread), you may\nwant to explore that feature [1]. Here's a sample extension using that\nfeature [2], for a different purpose though, but helps to understand\nthe usage of custom WAL rmgrs.\n\nI notice that the docs [3] don't mention the feature in the right\nplace, IMO, it can be improved to refer to custom-rmgr.sgml page,\ncc-ing Jeff Davis for his thoughts. This would help developers quickly\ntry the feature out and saves time.\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=5c279a6d350205cc98f91fb8e1d3e4442a6b25d1\n[2] https://github.com/BRupireddy/pg_synthesize_wal\n[2] https://www.postgresql.org/docs/devel/tableam.html\n\"For crash safety, an AM can use postgres' WAL, or a custom\nimplementation. If WAL is chosen, either Generic WAL Records can be\nused, or a new type of WAL records can be implemented. 
Generic WAL\nRecords are easy, but imply higher WAL volume. Implementation of a new\ntype of WAL record currently requires modifications to core code\n(specifically, src/include/access/rmgrlist.h).\"\n\n--\nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/",
"msg_date": "Wed, 24 Aug 2022 18:30:29 +0530",
"msg_from": "Natarajan R <nataraj3098@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Logical replication support for generic wal record"
}
] |
[
{
"msg_contents": "Hi,\n\nFound a typo in mvcc.sql\n\ntypo kill_prio_tuple -> kill_prior_tuple\n\nRegards,\nZhang Mingli",
"msg_date": "Mon, 22 Aug 2022 15:57:18 +0800",
"msg_from": "Zhang Mingli <zmlpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix typo kill_prio_tuple"
},
{
"msg_contents": "> On 22 Aug 2022, at 09:57, Zhang Mingli <zmlpostgres@gmail.com> wrote:\n\n> Found a typo in mvcc.sql\n> \n> typo kill_prio_tuple -> kill_prior_tuple\n\nCorrect, that should be kill_prior_tuple. I'll apply this in a bit.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 22 Aug 2022 10:38:39 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo kill_prio_tuple"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI noticed that we support 'ALTER TABLE ... SET COMPRESSION default'\nsyntax, but not 'SET STORAGE default' which seems to be a bit\ninconsistent. When the user changes the storage mode for a column\nthere is no convenient way to revert the change.\n\nThe proposed patch fixes this.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 22 Aug 2022 15:34:25 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] ALTER TABLE ... SET STORAGE default"
},
{
"msg_contents": "Hi hackers!\n\nThis seems a little bit confusing and thus very unfriendly for the user,\nbecause the actual meaning\nof the same 'DEFAULT' option will be different for each data type, and\nto check storage mode user\nhas to query full table (or column) description.\nI'd rather add a paragraph in documentation describing each data type\ndefault storage mode.\n\nOn Mon, Aug 22, 2022 at 3:34 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi hackers,\n>\n> I noticed that we support 'ALTER TABLE ... SET COMPRESSION default'\n> syntax, but not 'SET STORAGE default' which seems to be a bit\n> inconsistent. When the user changes the storage mode for a column\n> there is no convenient way to revert the change.\n>\n> The proposed patch fixes this.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/\n\nHi hackers!This seems a little bit confusing and thus very unfriendly for the user, because the actual meaningof the same 'DEFAULT' option will be different for each data type, and to check storage mode userhas to query full table (or column) description.I'd rather add a paragraph in documentation describing each data type default storage mode.On Mon, Aug 22, 2022 at 3:34 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi hackers,\n\nI noticed that we support 'ALTER TABLE ... SET COMPRESSION default'\nsyntax, but not 'SET STORAGE default' which seems to be a bit\ninconsistent. When the user changes the storage mode for a column\nthere is no convenient way to revert the change.\n\nThe proposed patch fixes this.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita Malakhovhttps://postgrespro.ru/",
"msg_date": "Mon, 22 Aug 2022 16:15:39 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ALTER TABLE ... SET STORAGE default"
},
{
"msg_contents": "Hi Nikita,\n\n> This seems a little bit confusing and thus very unfriendly for the user, because the actual meaning\n> of the same 'DEFAULT' option will be different for each data type, and to check storage mode user\n> has to query full table (or column) description.\n> I'd rather add a paragraph in documentation describing each data type default storage mode.\n\nI agree that \"SET STORAGE default\" syntax leaves much to be desired.\n\nPersonally I would prefer \"RESET STORAGE\" and \"RESET COMPRESSION\". But\nsince we already have \"SET COMPRESSION default\" this going to be\neither two commands that do the same thing, or a broken backward\ncompatibility. Simply removing \"SET COMPRESSION default\" will make the\nsyntax consistent too, but again, this would be a broken backward\ncompatibility. I would argue that a sub-optimal but consistent syntax\nthat does the job is better than inconsistent syntax and figuring out\nthe default storage strategy manually.\n\nBut let's see what is others people opinion.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 22 Aug 2022 16:28:15 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] ALTER TABLE ... SET STORAGE default"
},
{
"msg_contents": "Hi!\n\nAnyway, adding a paragraph with default storage mode for each standard data\ntype seems\nlike a good idea and I'd prepare a patch for it.\nThank you!\n\nOn Mon, Aug 22, 2022 at 4:28 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Nikita,\n>\n> > This seems a little bit confusing and thus very unfriendly for the user,\n> because the actual meaning\n> > of the same 'DEFAULT' option will be different for each data type, and\n> to check storage mode user\n> > has to query full table (or column) description.\n> > I'd rather add a paragraph in documentation describing each data type\n> default storage mode.\n>\n> I agree that \"SET STORAGE default\" syntax leaves much to be desired.\n>\n> Personally I would prefer \"RESET STORAGE\" and \"RESET COMPRESSION\". But\n> since we already have \"SET COMPRESSION default\" this going to be\n> either two commands that do the same thing, or a broken backward\n> compatibility. Simply removing \"SET COMPRESSION default\" will make the\n> syntax consistent too, but again, this would be a broken backward\n> compatibility. 
I would argue that a sub-optimal but consistent syntax\n> that does the job is better than inconsistent syntax and figuring out\n> the default storage strategy manually.\n>\n> But let's see what is others people opinion.\n>\n> --\n> Best regards,\n> Aleksander Alekseev\n>\n\n\n-- \nRegards,\nNikita Malakhov\nhttps://postgrespro.ru/\n\nHi!Anyway, adding a paragraph with default storage mode for each standard data type seems like a good idea and I'd prepare a patch for it.Thank you!On Mon, Aug 22, 2022 at 4:28 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Hi Nikita,\n\n> This seems a little bit confusing and thus very unfriendly for the user, because the actual meaning\n> of the same 'DEFAULT' option will be different for each data type, and to check storage mode user\n> has to query full table (or column) description.\n> I'd rather add a paragraph in documentation describing each data type default storage mode.\n\nI agree that \"SET STORAGE default\" syntax leaves much to be desired.\n\nPersonally I would prefer \"RESET STORAGE\" and \"RESET COMPRESSION\". But\nsince we already have \"SET COMPRESSION default\" this going to be\neither two commands that do the same thing, or a broken backward\ncompatibility. Simply removing \"SET COMPRESSION default\" will make the\nsyntax consistent too, but again, this would be a broken backward\ncompatibility. I would argue that a sub-optimal but consistent syntax\nthat does the job is better than inconsistent syntax and figuring out\nthe default storage strategy manually.\n\nBut let's see what is others people opinion.\n\n-- \nBest regards,\nAleksander Alekseev\n-- Regards,Nikita Malakhovhttps://postgrespro.ru/",
"msg_date": "Mon, 22 Aug 2022 22:41:33 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ALTER TABLE ... SET STORAGE default"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Hi Nikita,\n>> This seems a little bit confusing and thus very unfriendly for the user, because the actual meaning\n>> of the same 'DEFAULT' option will be different for each data type, and to check storage mode user\n>> has to query full table (or column) description.\n>> I'd rather add a paragraph in documentation describing each data type default storage mode.\n\n> I agree that \"SET STORAGE default\" syntax leaves much to be desired.\n\nFWIW, I don't buy that argument at all. If you believe that then\nyou must also think that\n\n\tINSERT INTO mytab VALUES (..., DEFAULT, ...);\n\nis a poorly-designed feature because you have to go consult the table\ndefinition to find out what will be inserted. (Well, maybe you do\nthink that, but the SQL committee won't agree with you ;-)) So I don't\nsee any problem with DEFAULT representing a data-type-specific default\nin this situation.\n\n> Personally I would prefer \"RESET STORAGE\" and \"RESET COMPRESSION\".\n\nPerhaps, but what's done is done, and I agree that STORAGE had better\nfollow the existing precedent.\n\nI've not read the patch in any detail, but I don't see a problem\nwith the design.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 05 Nov 2022 13:45:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ALTER TABLE ... SET STORAGE default"
},
{
"msg_contents": "I wrote:\n> I've not read the patch in any detail, but I don't see a problem\n> with the design.\n\nHearing no push-back on that position, I reviewed and pushed the\npatch. You'd missed that it also affects CREATE TABLE, but\notherwise it was in pretty good shape.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 10 Nov 2022 18:22:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] ALTER TABLE ... SET STORAGE default"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nThe proposed patch adds the missing tab completion for 'ALTER TABLE\n... SET COMPRESSION ...' syntax.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 22 Aug 2022 15:48:56 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "On 2022-08-22 21:48, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n> The proposed patch adds the missing tab completion for 'ALTER TABLE\n> ... SET COMPRESSION ...' syntax.\nThanks, LGTM.\n\nIn addition, why not take this opportunity to create a tab completion \nfor \"ALTER TABLE <name> OF <type_name>\" and \"ALTER TABLE <name> NOT OF\"?\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 06 Sep 2022 09:54:53 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "On Tue, Sep 06, 2022 at 09:54:53AM +0900, Shinya Kato wrote:\n> In addition, why not take this opportunity to create a tab completion for\n> \"ALTER TABLE <name> OF <type_name>\" and \"ALTER TABLE <name> NOT OF\"?\n\nRight. That looks fine to me, so applied.\n--\nMichael",
"msg_date": "Tue, 6 Sep 2022 15:41:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "Hi hackers,\n\n> Right. That looks fine to me, so applied.\n\nThanks, Michael.\n\n> In addition, why not take this opportunity to create a tab completion for\n> \"ALTER TABLE <name> OF <type_name>\" and \"ALTER TABLE <name> NOT OF\"?\n\nThanks for reviewing, Shinya. Let's fix this too. The patch is attached.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 6 Sep 2022 11:28:10 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "On 2022-09-06 17:28, Aleksander Alekseev wrote:\n\n>> In addition, why not take this opportunity to create a tab completion \n>> for\n>> \"ALTER TABLE <name> OF <type_name>\" and \"ALTER TABLE <name> NOT OF\"?\n> \n> Thanks for reviewing, Shinya. Let's fix this too. The patch is \n> attached.\n\nThanks for the new patch!\nA minor modification has been made so that the composite type is also \ncompleted after \"ALTER TABLE <name> OF\".\n\nThought?\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Tue, 06 Sep 2022 19:32:20 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "Hi Shinya,\n\n> A minor modification has been made so that the composite type is also\n> completed after \"ALTER TABLE <name> OF\".\n\nLGTM. Here is v3 created with `git format-path`. Unlike v2 it also\nincludes the commit message.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 6 Sep 2022 14:57:55 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "On 2022-09-06 20:57, Aleksander Alekseev wrote:\n> Hi Shinya,\n> \n>> A minor modification has been made so that the composite type is also\n>> completed after \"ALTER TABLE <name> OF\".\n> \n> LGTM. Here is v3 created with `git format-path`. Unlike v2 it also\n> includes the commit message.\n\nThanks! I marked it as ready for committer.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 08 Sep 2022 16:40:32 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
},
{
"msg_contents": "On Thu, Sep 08, 2022 at 04:40:32PM +0900, Shinya Kato wrote:\n> Thanks! I marked it as ready for committer.\n\nI thought that there was a gotcha in this area for composite types,\nbut on second look it looks that I was wrong. Hence, applied.\n--\nMichael",
"msg_date": "Sat, 10 Sep 2022 17:23:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Tab completion for SET COMPRESSION"
}
] |
[
{
"msg_contents": "More portability cruft cleanup: Our own definition of offsetof() was \nonly relevant for ancient systems and can surely be removed.",
"msg_date": "Mon, 22 Aug 2022 16:24:47 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Remove offsetof definition"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> More portability cruft cleanup: Our own definition of offsetof() was \n> only relevant for ancient systems and can surely be removed.\n\n+1, it's required by C99.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Aug 2022 11:13:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Remove offsetof definition"
}
] |
[
{
"msg_contents": "Hi,\n\nThe .backup files written to the archive (if archiving is on) are very\nsimilar to the backup_label that's written/returned by\npg_stop_backup()/pg_backup_stop(), they just have a few extra lines\nabout the end of backup process that are missing from backup_label.\n\nThe parser in xlogrecovery.c however barfs on them because it does not\nexpect the additional STOP WAL LOCATION on line 2.\n\nThe attached makes parsing this line optional, so that one can use those\n.backup files in place of backup_label. This is e.g. useful if the\nbackup_label got lost or the output of pg_stop_backup() was not\ncaptured.\n\n\nMichael\n\n-- \nTeam Lead PostgreSQL\nProject Manager\nTel.: +49 2166 9901-171\nMail: michael.banck@credativ.de\n\ncredativ GmbH, HRB M�nchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 M�nchengladbach\nManagement: Dr. Michael Meskes, Geoff Richardson, Peter Lilley\n\nOur handling of personal data is subject to:\nhttps://www.credativ.de/en/contact/privacy/",
"msg_date": "Mon, 22 Aug 2022 17:16:58 +0200",
"msg_from": "Michael Banck <michael.banck@credativ.de>",
"msg_from_op": true,
"msg_subject": "[PATCH] Allow usage of archive .backup files as backup_label"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 05:16:58PM +0200, Michael Banck wrote:\n> The .backup files written to the archive (if archiving is on) are very\n> similar to the backup_label that's written/returned by\n> pg_stop_backup()/pg_backup_stop(), they just have a few extra lines\n> about the end of backup process that are missing from backup_label.\n\nHistorically, there is \"STOP WAL LOCATION\" after \"START WAL LOCATION\",\nand \"STOP TIME\"/\"STOP TIMELINE\" at the end.\n\n> The parser in xlogrecovery.c however barfs on them because it does not\n> expect the additional STOP WAL LOCATION on line 2.\n\nHm, no. I don't think that I'd want to expand the use of the backup\nhistory file in the context of recovery, so as we are free to add any\nextra information into it if necessary without impacting the\ncompatibility of the recovery code. This file is primarily here for\ndebugging, so I'd rather let it be used only for this purpose.\nOpinions of others are welcome, of course.\n--\nMichael",
"msg_date": "Tue, 18 Oct 2022 10:55:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow usage of archive .backup files as backup_label"
},
{
"msg_contents": "On Tue, 2022-10-18 at 10:55 +0900, Michael Paquier wrote:\n> On Mon, Aug 22, 2022 at 05:16:58PM +0200, Michael Banck wrote:\n> > The .backup files written to the archive (if archiving is on) are very\n> > similar to the backup_label that's written/returned by\n> > pg_stop_backup()/pg_backup_stop(), they just have a few extra lines\n> > about the end of backup process that are missing from backup_label.\n> \n> Historically, there is \"STOP WAL LOCATION\" after \"START WAL LOCATION\",\n> and \"STOP TIME\"/\"STOP TIMELINE\" at the end.\n> \n> > The parser in xlogrecovery.c however barfs on them because it does not\n> > expect the additional STOP WAL LOCATION on line 2.\n> \n> Hm, no. I don't think that I'd want to expand the use of the backup\n> history file in the context of recovery, so as we are free to add any\n> extra information into it if necessary without impacting the\n> compatibility of the recovery code. This file is primarily here for\n> debugging, so I'd rather let it be used only for this purpose.\n> Opinions of others are welcome, of course.\n\nI tend to agree with you. It is easy to break PostgreSQL by manipulating\nor removing \"backup_label\", and copying a file from the WAL archive and\nrenaming it to \"backup_label\" sounds like a footgun of the first order.\nThere is nothing that prevents you from copying the wrong file.\nSuch practices should not be encouraged.\n\nAnybody who knows enough about PostgreSQL to be sure that what they are\ndoing is correct should be smart enough to know how to edit the copied file.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 18 Oct 2022 04:55:46 +0200",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow usage of archive .backup files as backup_label"
},
{
"msg_contents": "On Tue, Oct 18, 2022 at 04:55:46AM +0200, Laurenz Albe wrote:\n> I tend to agree with you. It is easy to break PostgreSQL by manipulating\n> or removing \"backup_label\", and copying a file from the WAL archive and\n> renaming it to \"backup_label\" sounds like a footgun of the first order.\n> There is nothing that prevents you from copying the wrong file.\n> Such practices should not be encouraged.\n> \n> Anybody who knows enough about PostgreSQL to be sure that what they are\n> doing is correct should be smart enough to know how to edit the copied file.\n\nA few weeks after, still the same thoughts on the matter, so please\nnote that I have marked that as rejected in the CF app. If somebody\nwants to offer more arguments for this thread, of course please feel\nfree.\n--\nMichael",
"msg_date": "Thu, 10 Nov 2022 13:56:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Allow usage of archive .backup files as backup_label"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nWhile working on the relation stats split into table and index stats \n[1], I noticed that currently pg_stat_have_stats() returns true for \ndropped indexes (or for index creation transaction rolled back).\n\nExample:\n\npostgres=# create table bdt as select a from generate_series(1,1000) a;\nSELECT 1000\npostgres=# create index bdtidx on bdt(a);\nCREATE INDEX\npostgres=# select * from bdt where a = 30;\n a\n----\n 30\n(1 row)\n\npostgres=# SELECT 'bdtidx'::regclass::oid;\n oid\n-------\n 16395\n(1 row)\n\npostgres=# select pg_stat_have_stats('relation', 5, 16395);\n pg_stat_have_stats\n--------------------\n t\n(1 row)\n\npostgres=# drop index bdtidx;\nDROP INDEX\npostgres=# select pg_stat_have_stats('relation', 5, 16395);\n pg_stat_have_stats\n--------------------\n t\n(1 row)\n\n\nPlease find attached a patch proposal to fix it.\n\nIt does contain additional calls to pgstat_create_relation() and \npgstat_drop_relation() as well as additional TAP tests.\n\n[1]: \nhttps://www.postgresql.org/message-id/5bfcf1a5-4224-9324-594b-725e704c95b1%40amazon.com\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 22 Aug 2022 18:39:07 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_have_stats() returns true for dropped indexes (or for index\n creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 18:39:07 +0200, Drouvot, Bertrand wrote:\n> While working on the relation stats split into table and index stats [1], I\n> noticed that currently pg_stat_have_stats() returns true for dropped indexes\n> (or for index creation transaction rolled back).\n\nGood catch.\n\nI guess Horiguchi-san and/or I wrongly assumed it'd be taken care of by the\npgstat_create_relation() in heap_create_with_catalog(), but index_create()\ndoesn't use that.\n\n\n> Please find attached a patch proposal to fix it.\n\nPerhaps a better fix would be to move the pgstat_create_relation() from\nheap_create_with_catalog() into heap_create()? Although I guess it's a bit\npointless to deduplicate given that you're going to split it up again...\n\n\n> It does contain additional calls to pgstat_create_relation() and\n> pgstat_drop_relation() as well as additional TAP tests.\n\nWould be good to add a test for CREATE INDEX / DROP INDEX / REINDEX\nCONCURRENTLY as well.\n\nMight be worth adding a test to stats.sql or stats.spec in the main regression\ntests. Perhaps that's best where the aforementioned things should be tested?\n\n\n> @@ -2349,6 +2354,7 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)\n> \tCatalogTupleDelete(indexRelation, &tuple->t_self);\n> \n> \tReleaseSysCache(tuple);\n> +\n> \ttable_close(indexRelation, RowExclusiveLock);\n> \n> \t/*\n\nAssume this was just an accident?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 10:36:54 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 8/22/22 7:36 PM, Andres Freund wrote:\n> On 2022-08-22 18:39:07 +0200, Drouvot, Bertrand wrote:\n>> Please find attached a patch proposal to fix it.\n> Perhaps a better fix would be to move the pgstat_create_relation() from\n> heap_create_with_catalog() into heap_create()? Although I guess it's a bit\n> pointless to deduplicate given that you're going to split it up again...\n\nThanks for looking at it!\n\nAgree it's better to move it to heap_create(): it's done in the new \nversion attached.\n\nWe'll see later on if it needs to be duplicated for the table/index \nsplit work.\n\n>> It does contain additional calls to pgstat_create_relation() and\n>> pgstat_drop_relation() as well as additional TAP tests.\n> Would be good to add a test for CREATE INDEX / DROP INDEX / REINDEX\n> CONCURRENTLY as well.\n>\n> Might be worth adding a test to stats.sql or stats.spec in the main regression\n> tests. Perhaps that's best where the aforementioned things should be tested?\n\nYeah that sounds better, I'm also adding more tests around table \ncreation while at it.\n\nI ended up adding the new tests in stats.sql.\n\n>\n>> @@ -2349,6 +2354,7 @@ index_drop(Oid indexId, bool concurrent, bool concurrent_lock_mode)\n>> CatalogTupleDelete(indexRelation, &tuple->t_self);\n>>\n>> ReleaseSysCache(tuple);\n>> +\n>> table_close(indexRelation, RowExclusiveLock);\n>>\n>> /*\n> Assume this was just an accident?\n\nOops, thanks!\n\nNew version attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 23 Aug 2022 09:58:03 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Good catch, and thanks for the patch!\n\n(The file name would correctly be v2-0001-...:)\n\nAt Tue, 23 Aug 2022 09:58:03 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Agree it's better to move it to heap_create(): it's done in the new\n> version attached.\n\n+1 (not considering stats splitting)\n\n> We'll see later on if it needs to be duplicated for the table/index\n> split work.\n\nThe code changes looks good to me.\n\n> >> It does contain additional calls to pgstat_create_relation() and\n> >> pgstat_drop_relation() as well as additional TAP tests.\n> > Would be good to add a test for CREATE INDEX / DROP INDEX / REINDEX\n> > CONCURRENTLY as well.\n> >\n> > Might be worth adding a test to stats.sql or stats.spec in the main\n> > regression\n> > tests. Perhaps that's best where the aforementioned things should be\n> > tested?\n> \n> Yeah that sounds better, I'm also adding more tests around table\n> creation while at it.\n> \n> I ended up adding the new tests in stats.sql.\n\n+-- pg_stat_have_stats returns true for table creation inserting data\n+-- pg_stat_have_stats returns true for committed index creation\n+\n\nNot sure we need this, as we check that already in the same file. (In\nother words, if we failed this, the should have failed earlier.) Maybe\nwe only need check for drop operations and reindex cases?\n\nWe have other variable-numbered stats kinds\nFUNCTION/REPLSLOT/SUBSCRIPTION. Don't we need the same for these?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 25 Aug 2022 11:47:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 8/25/22 4:47 AM, Kyotaro Horiguchi wrote:\n> Good catch, and thanks for the patch!\n>\n> (The file name would correctly be v2-0001-...:)\n>\n> At Tue, 23 Aug 2022 09:58:03 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in\n>> Agree it's better to move it to heap_create(): it's done in the new\n>> version attached.\n> +1 (not considering stats splitting)\n>\n>> We'll see later on if it needs to be duplicated for the table/index\n>> split work.\n> The code changes looks good to me.\n\nThanks for looking at it!\n\n> +-- pg_stat_have_stats returns true for table creation inserting data\n> +-- pg_stat_have_stats returns true for committed index creation\n> +\n>\n> Not sure we need this, as we check that already in the same file. (In\n> other words, if we failed this, the should have failed earlier.)\n\nThat's right.\n\n> Maybe\n> we only need check for drop operations\n\nLooking closer at it, I think we are already good for the drop case on \nthe tables (by making direct use of the pg_stat_get_* functions on the \nbefore dropped oid).\n\nSo I think we can remove all the \"table\" new checks: new patch attached \nis doing so.\n\nOn the other hand, for the index case, I think it's better to keep the \n\"committed index creation one\".\n\nIndeed, to check that the drop behaves correctly I think it's better in \n\"the same test\" to ensure we've had the desired result before the drop \n(I mean having pg_stat_have_stats() returning false after a drop does \nnot really help if we are not 100% sure it was returning true for the \nexact same index before the drop).\n\n> We have other variable-numbered stats kinds\n> FUNCTION/REPLSLOT/SUBSCRIPTION. 
Don't we need the same for these?\n\nI don't think we need more tests for the FUNCTION case (as it looks to \nme it is already covered in stat.sql by the pg_stat_get_function_calls() \ncalls on the dropped functions oids).\n\nFor SUBSCRIPTION, i think this is covered in 026_stats.pl:\n\n# Subscription stats for sub1 should be gone\nis( $node_subscriber->safe_psql(\n $db, qq(SELECT pg_stat_have_stats('subscription', 0, $sub1_oid))),\n qq(f),\n qq(Subscription stats for subscription '$sub1_name' should be \nremoved.));\n\nFor REPLSLOT, I agree that we can add one test: I added it in \ncontrib/test_decoding/sql/stats.sql. It relies on pg_stat_have_stats() \n(as relying on pg_stat_replication_slots and/or \npg_stat_get_replication_slot() would not help that much for this test \ngiven that the slot has been removed from ReplicationSlotCtl)\n\nAttaching v3-0001 (with the right \"numbering\" this time ;-) )\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 25 Aug 2022 11:44:34 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "At Thu, 25 Aug 2022 11:44:34 +0200, \"Drouvot, Bertrand\" <bdrouvot@amazon.com> wrote in \n> Looking closer at it, I think we are already good for the drop case on\n> the tables (by making direct use of the pg_stat_get_* functions on the\n> before dropped oid).\n> \n> So I think we can remove all the \"table\" new checks: new patch\n> attached is doing so.\n> \n> On the other hand, for the index case, I think it's better to keep the\n> \"committed index creation one\".\n\nI agree.\n\n> Indeed, to check that the drop behaves correctly I think it's better\n> in \"the same test\" to ensure we've had the desired result before the\n> drop (I mean having pg_stat_have_stats() returning false after a drop\n> does not really help if we are not 100% sure it was returning true for\n> the exact same index before the drop).\n\nSounds reasonable.\n\n> > We have other variable-numbered stats kinds\n> > FUNCTION/REPLSLOT/SUBSCRIPTION. Don't we need the same for these?\n> \n> I don't think we need more tests for the FUNCTION case (as it looks to\n> me it is already covered in stat.sql by the\n> pg_stat_get_function_calls() calls on the dropped functions oids).\n\nRight.\n\n> For SUBSCRIPTION, i think this is covered in 026_stats.pl:\n> \n> # Subscription stats for sub1 should be gone\n> is( $node_subscriber->safe_psql(\n> $db, qq(SELECT pg_stat_have_stats('subscription', 0,\n> $sub1_oid))),\n> qq(f),\n> qq(Subscription stats for subscription '$sub1_name' should be\n> removed.));\n> \n> For REPLSLOT, I agree that we can add one test: I added it in\n> contrib/test_decoding/sql/stats.sql. 
It relies on pg_stat_have_stats()\n> (as relying on pg_stat_replication_slots and/or\n> pg_stat_get_replication_slot() would not help that much for this test\n> given that the slot has been removed from ReplicationSlotCtl)\n\nThanks for the searching.\n\n+-- pg_stat_have_stats returns true for regression_slot_stats1\n+-- Its index is 1 in ReplicationSlotCtl->replication_slots\n+select pg_stat_have_stats('replslot', 0, 1);\n\nThis is wrong. The index is actually 0. We cannot know the id\nreliably since we don't expose it at all. We could slightly increase\nrobustness by assuming the range of the id but that is just moving the\nproblem to another place. If the test is broken by a change of\nreplslot id assignment policy, it would be easily found and fixed.\n\nSo is it fine simply fixing the comment with the correct ID?\n\nOr, contrarily we can be more sensitive to the change of ID assignment\npolicy by checking all the replication slots.\n\nselect count(n) from generate_series(0, 2) as n where pg_stat_have_stats('replslot', 0, n);\n\nThe number changes from 3 to 0 across the slots drop.. If any of the\nslots has gone out of the range, the number before the drop decreases.\n\n> Attaching v3-0001 (with the right \"numbering\" this time ;-) )\n\nYeah, Looks fine:p\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Aug 2022 16:10:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hiu,\n\nOn 2022-08-25 11:44:34 +0200, Drouvot, Bertrand wrote:\n> For REPLSLOT, I agree that we can add one test: I added it in\n> contrib/test_decoding/sql/stats.sql. It relies on pg_stat_have_stats() (as\n> relying on pg_stat_replication_slots and/or pg_stat_get_replication_slot()\n> would not help that much for this test given that the slot has been removed\n> from ReplicationSlotCtl)\n\nAs Horiguchi-san noted, we can't rely on specific indexes being used. I feel\nok with the current coverage in that area, but if we *really* feel we need to\ntest it, we'd need to count the number of indexes with slots before dropping\nthe slot and after the drop.\n\n\n> +-- pg_stat_have_stats returns true for committed index creation\n\nMaybe another test for an uncommitted index creation would be good too?\n\nCould you try running this test with debug_discard_caches = 1 - it's pretty\neasy to write tests in this area that aren't reliable timing wise.\n\n\n> +CREATE table stats_test_tab1 as select generate_series(1,10) a;\n> +CREATE index stats_test_idx1 on stats_test_tab1(a);\n> +SELECT oid AS dboid from pg_database where datname = current_database() \\gset\n\nSince you introduced this, maybe convert the other instance of this query at\nthe end of the file as well?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:26:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
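The count-before/after comparison suggested above could be sketched in SQL along these lines (hypothetical test fragment; slot names follow test_decoding's stats.sql, and the 0..2 range assumes three slots):

```sql
SELECT count(*) AS slots_with_stats
FROM generate_series(0, 2) AS n
WHERE pg_stat_have_stats('replslot', 0, n);  -- expected: 3 before the drops

SELECT pg_drop_replication_slot(slot_name)
FROM pg_replication_slots
WHERE slot_name LIKE 'regression_slot_stats%';

SELECT count(*) AS slots_with_stats
FROM generate_series(0, 2) AS n
WHERE pg_stat_have_stats('replslot', 0, n);  -- expected: 0 afterwards
```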
{
"msg_contents": "Hi,\n\nOn 8/31/22 9:10 AM, Kyotaro Horiguchi wrote:\n> Thanks for the searching.\n> +-- pg_stat_have_stats returns true for regression_slot_stats1\n> +-- Its index is 1 in ReplicationSlotCtl->replication_slots\n> +select pg_stat_have_stats('replslot', 0, 1);\n>\n> This is wrong. The index is actually 0.\n\nRight, thanks for pointing out.\n\n(gdb) p get_replslot_index(\"regression_slot_stats1\")\n$1 = 0\n(gdb) p get_replslot_index(\"regression_slot_stats2\")\n$2 = 1\n(gdb) p get_replslot_index(\"regression_slot_stats3\")\n$3 = 2\n\n> We cannot know the id\n> reliably since we don't expose it at all.\n\nRight.\n\n> We could slightly increase\n> robustness by assuming the range of the id but that is just moving the\n> problem to another place. If the test is broken by a change of\n> replslot id assignment policy, it would be easily found and fixed.\n>\n> So is it fine simply fixing the comment with the correct ID?\n>\n> Or, contrarily we can be more sensitive to the change of ID assignment\n> policy by checking all the replication slots.\n>\n> select count(n) from generate_series(0, 2) as n where pg_stat_have_stats('replslot', 0, n);\n>\n> The number changes from 3 to 0 across the slots drop.. If any of the\n> slots has gone out of the range, the number before the drop decreases.\n\nThanks for the ideas! I'm coming up with a slightly different one (also \nbased on Andres's feedback in [1]) in the upcoming v4.\n\n[1]: \nhttps://www.postgresql.org/message-id/20220831192657.jqhphpud2mxbzbom%40awork3.anarazel.de\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 07:53:58 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 8/31/22 9:26 PM, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-25 11:44:34 +0200, Drouvot, Bertrand wrote:\n>> For REPLSLOT, I agree that we can add one test: I added it in\n>> contrib/test_decoding/sql/stats.sql. It relies on pg_stat_have_stats() (as\n>> relying on pg_stat_replication_slots and/or pg_stat_get_replication_slot()\n>> would not help that much for this test given that the slot has been removed\n>> from ReplicationSlotCtl)\n> As Horiguchi-san noted, we can't rely on specific indexes being used.\n\nYeah.\n\n> I feel\n> ok with the current coverage in that area, but if we *really* feel we need to\n> test it, we'd need to count the number of indexes with slots before dropping\n> the slot and after the drop.\n\nThanks for the suggestion, I'm coming up with this proposal in v4 attached:\n\n * count the number of slots\n * ensure we have at least one for which pg_stat_have_stats() returns true\n * get the list of ids (true_ids) for which pg_stat_have_stats()\n returns true\n * drop all the slots\n * get the list of ids (false_ids) for which pg_stat_have_stats()\n returns false\n * ensure that both lists (true_ids and false_ids) are the same\n\nI don't \"really\" feel we need to test it, but I think that this \nthread was a good opportunity to try to test it.\n\nThat said, that's also fine for me if this test is not part of the patch.\n\nMaybe a better/simpler option could be to expose a function to get the \nslot id based on its name and then write a \"simple\" test with it? (If so, \nI think it would be better to start another patch/thread.)\n\n>> +-- pg_stat_have_stats returns true for committed index creation\n> Maybe another test for an uncommitted index creation would be good too?\n\nYou mean in addition to the \"-- pg_stat_have_stats returns false for \nrolled back index creation\" one?\n\n> Could you try running this test with debug_discard_caches = 1 - it's pretty\n> easy to write tests in this area that aren't reliable timing wise.\n\nThanks for the suggestion. I did and it passed without any issues.\n\n>> +CREATE table stats_test_tab1 as select generate_series(1,10) a;\n>> +CREATE index stats_test_idx1 on stats_test_tab1(a);\n>> +SELECT oid AS dboid from pg_database where datname = current_database() \\gset\n> Since you introduced this, maybe convert the other instance of this query at\n> the end of the file as well?\n\nYeah, good point. In v4, I moved the dboid recording to the top and use \nit when appropriate.\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 08:40:54 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-01 08:40:54 +0200, Drouvot, Bertrand wrote:\n> Thanks for the suggestion, I'm coming up with this proposal in v4 attached:\n\nI pushed the bugfix / related test portion to 15, master. Thanks!\n\nI left the replication stuff out - it seemed somewhat independent. Probably\nwill just push that to master, unless somebody thinks it should be in both?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Sep 2022 13:45:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 9/23/22 10:45 PM, Andres Freund wrote:\n\n> Hi,\n> \n> On 2022-09-01 08:40:54 +0200, Drouvot, Bertrand wrote:\n>> Thanks for the suggestion, I'm coming up with this proposal in v4 attached:\n> \n> I pushed the bugfix / related test portion to 15, master. Thanks!\n\nThanks!\n\n> \n> I left the replication stuff out - it seemed somewhat independent.\n\nYeah.\n\n> Probably\n> will just push that to master, unless somebody thinks it should be in both?\n\nSounds good to me, as this is not a bug and it seems unlikely to me \nthat an issue in this area will be introduced later on 15 without \nbeing introduced on master too.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 10:23:30 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
},
{
"msg_contents": "Hi,\n\nOn 9/26/22 10:23 AM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 9/23/22 10:45 PM, Andres Freund wrote:\n> \n>>\n>>\n>>\n>> Hi,\n>>\n>> On 2022-09-01 08:40:54 +0200, Drouvot, Bertrand wrote:\n>>> Thanks for the suggestion, I'm coming up with this proposal in v4 \n>>> attached:\n>>\n>> I pushed the bugfix / related test portion to 15, master. Thanks!\n> \n> Thanks!\n> \n\nForgot to say that with that being fixed, I'll come back with a patch \nproposal for the tables/indexes stats split (discovered the issue fixed \nin this current thread while working on the split patch.)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 15:28:16 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_have_stats() returns true for dropped indexes (or for\n index creation transaction rolled back)"
}
] |
[
{
"msg_contents": "Per discussion in [0], here is a patch set that allows pfree() to accept \na NULL argument, like free() does.\n\nAlso, a patch that removes the now-unnecessary null pointer checks \nbefore calling pfree(). And a few patches that do the same for some \nother functions that I found around. (The one with FreeDir() is perhaps \na bit arguable, since FreeDir() wraps closedir() which does *not* accept \nNULL arguments. Also, neither FreeFile() nor the underlying fclose() \naccept NULL.)\n\n\n[0]: https://www.postgresql.org/message-id/1074830.1655442689@sss.pgh.pa.us",
"msg_date": "Mon, 22 Aug 2022 20:16:56 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Change pfree to accept NULL argument"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Per discussion in [0], here is a patch set that allows pfree() to accept \n> a NULL argument, like free() does.\n\nSo the question is, is this actually a good thing to do?\n\nIf we were starting in a green field, I'd be fine with defining\npfree(NULL) as okay. But we're not, so there are a couple of big\nobjections:\n\n* Code developed to this standard will be unsafe to back-patch\n\n* The sheer number of places touched will create back-patching\nhazards.\n\nI'm not very convinced that the benefits of making pfree() more\nlike free() are worth those costs.\n\nWe could ameliorate the first objection if we wanted to back-patch\n0002, I guess.\n\n(FWIW, no objection to your 0001. 0004 and 0005 seem okay too;\nthey don't touch enough places to create much back-patching risk.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 22 Aug 2022 14:30:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Change pfree to accept NULL argument"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-22 14:30:22 -0400, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Per discussion in [0], here is a patch set that allows pfree() to accept\n> > a NULL argument, like free() does.\n>\n> So the question is, is this actually a good thing to do?\n>\n> If we were starting in a green field, I'd be fine with defining\n> pfree(NULL) as okay. But we're not, so there are a couple of big\n> objections:\n>\n> * Code developed to this standard will be unsafe to back-patch\n>\n> * The sheer number of places touched will create back-patching\n> hazards.\n>\n> I'm not very convinced that the benefits of making pfree() more\n> like free() are worth those costs.\n\nIt's probably also not entirely cost free due to the added branches in places\nwhere we are certain that the pointer is non-null. That could partially be\nameliorated by moving the NULL pointer check into the callers.\n\nIf we don't want to go this route it might be worth adding a\npg_attribute_nonnull() or such to pfree().\n\n\n> (FWIW, no objection to your 0001. 0004 and 0005 seem okay too;\n> they don't touch enough places to create much back-patching risk.)\n\nI like 0001, not sure I find 0004, 0005 an improvement.\n\n\nSemi-related note: I've sometimes wished for a pfreep(void **p) that'd do\nsomething like\n\nif (*p)\n{\n pfree(*p);\n *p = NULL;\n}\n\nso there's no dangling pointers after the pfree(), which often enough is\nimportant (e.g. because the code could be reached again if there's an error)\nand is also helpful when debugging. The explicit form does bulk up code\nsufficiently to be annoying.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 22 Aug 2022 11:43:58 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Change pfree to accept NULL argument"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 06:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Per discussion in [0], here is a patch set that allows pfree() to accept\n> > a NULL argument, like free() does.\n>\n> So the question is, is this actually a good thing to do?\n\nI think making pfree() accept NULL is a bad idea. The vast majority\nof cases the pointer will never be NULL, so we're effectively just\nburdening those with the additional overhead of checking for NULL.\n\nWe know from [1] that adding branching in the memory management code\ncan be costly.\n\nI'm measuring about a 2.6% slowdown from the 0002 patch using a\nfunction that I wrote [2] to hammer palloc/pfree.\n\nmaster\npostgres=# select pg_allocate_memory_test(64, 1024*1024,\n10::bigint*1024*1024*1024,'aset');\nTime: 2007.527 ms (00:02.008)\nTime: 1991.574 ms (00:01.992)\nTime: 2008.945 ms (00:02.009)\nTime: 2011.410 ms (00:02.011)\nTime: 2019.317 ms (00:02.019)\nTime: 2060.832 ms (00:02.061)\nTime: 2003.066 ms (00:02.003)\nTime: 2025.039 ms (00:02.025)\nTime: 2039.744 ms (00:02.040)\nTime: 2090.384 ms (00:02.090)\n\nmaster + pfree modifed to check for NULLs\npostgres=# select pg_allocate_memory_test(64, 1024*1024,\n10::bigint*1024*1024*1024,'aset');\nTime: 2057.625 ms (00:02.058)\nTime: 2074.699 ms (00:02.075)\nTime: 2075.629 ms (00:02.076)\nTime: 2104.581 ms (00:02.105)\nTime: 2072.620 ms (00:02.073)\nTime: 2066.916 ms (00:02.067)\nTime: 2071.962 ms (00:02.072)\nTime: 2097.520 ms (00:02.098)\nTime: 2087.421 ms (00:02.087)\nTime: 2078.695 ms (00:02.079)\n\n(~2.62% slowdown)\n\nIf the aim here is to remove a bunch of ugly if (ptr) pfree(ptr);\ncode, then why don't we just have a[n inline] function or a macro for\nthat and only use it when we need to?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvr6qFw3jLBL9d4zUpo3A2Cb6hoZsUnWD0vF1OGsd67v=w@mail.gmail.com\n[2] 
https://www.postgresql.org/message-id/attachment/136801/pg_allocate_memory_test.patch.txt\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:17:10 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change pfree to accept NULL argument"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 13:17, David Rowley <dgrowleyml@gmail.com> wrote:\n> I think making pfree() accept NULL is a bad idea.\n\nOne counter argument to that is for cases like list_free_deep().\nRight now if I'm not mistaken there's a bug (which I just noticed) in\nlist_free_private() that would trigger if you have a List of Lists and\none of the inner Lists is NIL. The code in list_free_private() just\nseems to go off and pfree() whatever is stored in the element, which I\nthink would crash if it found a NIL List. If pfree() was to handle\nNULLs at least that wouldn't have been a crash, but in reality, we\nshould probably fix that with recursion if we detect the element IsA\nList type. If we don't use recursion, then the \"free\" does not seem\nvery \"deep\". (Or maybe it's too late to make it go deeper as it might\nbreak existing code.)\n\nDavid\n\n\n",
"msg_date": "Wed, 24 Aug 2022 23:07:32 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change pfree to accept NULL argument"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 23:07, David Rowley <dgrowleyml@gmail.com> wrote:\n> One counter argument to that is for cases like list_free_deep().\n> Right now if I'm not mistaken there's a bug (which I just noticed) in\n> list_free_private() that would trigger if you have a List of Lists and\n> one of the inner Lists is NIL. The code in list_free_private() just\n> seems to go off and pfree() whatever is stored in the element, which I\n> think would crash if it found a NIL List. If pfree() was to handle\n> NULLs at least that wouldn't have been a crash, but in reality, we\n> should probably fix that with recursion if we detect the element IsA\n> List type. If we don't use recursion, then the \"free\" does not seem\n> very \"deep\". (Or maybe it's too late to make it go deeper as it might\n> break existing code.)\n\nHmm, that was a false alarm. It seems list_free_deep() can't really\nhandle freeing sublists as the list elements might be non-Node types,\nwhich of course have no node tag, so we can't check for sub-Lists.\n\nDavid\n\n\n",
"msg_date": "Thu, 25 Aug 2022 00:14:16 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Change pfree to accept NULL argument"
},
{
"msg_contents": "On 22.08.22 20:30, Tom Lane wrote:\n> I'm not very convinced that the benefits of making pfree() more\n> like free() are worth those costs.\n> \n> We could ameliorate the first objection if we wanted to back-patch\n> 0002, I guess.\n> \n> (FWIW, no objection to your 0001. 0004 and 0005 seem okay too;\n> they don't touch enough places to create much back-patching risk.)\n\nTo conclude this, I have committed those secondary patches and updated \nthe utils/mmgr/README with some information from this discussion.\n\n\n",
"msg_date": "Sun, 28 Aug 2022 10:00:34 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Change pfree to accept NULL argument"
}
] |
[
{
"msg_contents": "My colleague Dilip Kumar and I have discovered what I believe to be a\nbug in the recently-added \"overwrite contrecord\" stuff. I'm not sure\nwhether or not this bug has any serious consequences. I think that\nthere may be a scenario where it does, but I'm not sure about that.\n\nSuppose you have a primary and a standby, and the standby is promoted\nafter reading a partial WAL record. The attached script, which was\nwritten by Dilip and slightly modified by me, creates this scenario by\nsetting up an archiving-only standby, writing a record that crosses a\nsegment boundary, and then promoting the standby. If you then try to\nrun pg_waldump on the WAL on timeline 2, it goes boom:\n\n[rhaas pg_wal]$ pg_waldump 000000020000000000000004\n000000020000000000000005 2>&1 | tail -n4\nrmgr: Heap len (rec/tot): 1959/ 1959, tx: 728, lsn:\n0/04FFE7B0, prev 0/04FFDFF0, desc: INSERT off 4 flags 0x00, blkref #0:\nrel 1663/5/16384 blk 2132\nrmgr: Heap len (rec/tot): 1959/ 1959, tx: 728, lsn:\n0/04FFEF58, prev 0/04FFE7B0, desc: INSERT+INIT off 1 flags 0x00,\nblkref #0: rel 1663/5/16384 blk 2133\nrmgr: Heap len (rec/tot): 1959/ 1959, tx: 728, lsn:\n0/04FFF700, prev 0/04FFEF58, desc: INSERT off 2 flags 0x00, blkref #0:\nrel 1663/5/16384 blk 2133\npg_waldump: error: error in WAL record at 0/4FFF700: invalid record\nlength at 0/4FFFEA8: wanted 24, got 0\n\nWhat's happening here is that the last WAL segment from timeline 1,\nwhich is 000000010000000000000004, gets copied over to the new\ntimeline up to the point where the last complete record on that\ntimeline ends, namely, 0/4FFFEA8. I think that the first record on the\nnew timeline should be written starting at that LSN, but that's not\nwhat happens. 
Instead, the rest of that WAL segment remains zeroed,\nand the first WAL record on the new timeline is written at the\nbeginning of the next segment:\n\n[rhaas pg_wal]$ pg_waldump 000000020000000000000005 2>&1 | head -n4\nrmgr: XLOG len (rec/tot): 42/ 42, tx: 0, lsn:\n0/05000028, prev 0/04FFF700, desc: OVERWRITE_CONTRECORD lsn 0/4FFFEA8;\ntime 2022-08-22 13:49:22.874435 EDT\nrmgr: XLOG len (rec/tot): 114/ 114, tx: 0, lsn:\n0/05000058, prev 0/05000028, desc: CHECKPOINT_SHUTDOWN redo 0/5000058;\ntli 2; prev tli 1; fpw true; xid 0:729; oid 24576; multi 1; offset 0;\noldest xid 719 in DB 1; oldest multi 1 in DB 1; oldest/newest commit\ntimestamp xid: 0/0; oldest running xid 0; shutdown\nrmgr: XLOG len (rec/tot): 30/ 30, tx: 0, lsn:\n0/050000D0, prev 0/05000058, desc: NEXTOID 32768\nrmgr: Storage len (rec/tot): 42/ 42, tx: 0, lsn:\n0/050000F0, prev 0/050000D0, desc: CREATE base/5/24576\n\nNothing that uses xlogreader is going to be able to bridge the gap\nbetween file #4 and file #5. In this case it doesn't matter very much,\nbecause we immediately write a checkpoint record into file #5, so if\nwe crash we won't try to replay file #4 anyway. However, if anything\ndid try to look at file #4 it would get confused. Maybe that can\nhappen if this is a streaming standby, where we only write an\nend-of-recovery record upon promotion, rather than a checkpoint, or\nmaybe if there are cascading standbys someone could try to actually\nuse the 000000020000000000000004 file for something. I'm not sure. 
But\nunless I'm missing something, that file is bogus, and our only hope of\nnot having problems is that perhaps no one will ever look at it.\n\nI think that the cause of this problem is this code right here:\n\n /*\n * Actually, if WAL ended in an incomplete record, skip the parts that\n * made it through and start writing after the portion that persisted.\n * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n * we'll do as soon as we're open for writing new WAL.)\n */\n if (!XLogRecPtrIsInvalid(missingContrecPtr))\n {\n Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n EndOfLog = missingContrecPtr;\n }\n\nIt seems to me that this if-statement should also test that the TLI\nhas not changed i.e. if (newTLI != endOfRecoveryInfo->lastRecTLI &&\n!XLogRecPtrIsInvalid(missingContrecPtr)). If the TLI hasn't changed,\nthen everything the comment says is correct and I think that what the\ncode does is also correct. However, if the TLI *has* changed, then I\nthink we must not advance EndOfLog here, because the WAL that was\ncopied from the old timeline to the new timeline ends at the point in\nthe file corresponding to the value of EndOfLog just before executing\nthis code. When this code then moves EndOfLog forward to the beginning\nof the next segment, it leaves the unused portion of the previous\nsegment as all zeroes, which creates the problem described above.\n\n(Incidentally, there's also a bug in pg_waldump here: it's reporting\nthe wrong LSN as the source of the error. 0/4FFF700 is not the record\nthat's busted, as shown by the fact that it was successfully decoded\nand shown in the output. The relevant code in pg_waldump should be\nusing EndRecPtr instead of ReadRecPtr to report the error. If it did,\nit would complain about 0/4FFFEA8, which is where the problem really\nis. 
This is of the same vintage as the bug fixed by\nd9fbb8862959912c5266364059c0abeda0c93bbf, though in that case the\nissue was reporting all errors using the start LSN of the first of\nseveral records read no matter where the error actually happened,\nwhereas in this case the error is using the start LSN of the previous\nrecord instead of the current one.)\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 22 Aug 2022 14:36:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 02:36:36PM -0400, Robert Haas wrote:\n> (Incidentally, there's also a bug in pg_waldump here: it's reporting\n> the wrong LSN as the source of the error. 0/4FFF700 is not the record\n> that's busted, as shown by the fact that it was successfully decoded\n> and shown in the output. The relevant code in pg_waldump should be\n> using EndRecPtr instead of ReadRecPtr to report the error. If it did,\n> it would complain about 0/4FFFEA8, which is where the problem really\n> is. This is of the same vintage as the bug fixed by\n> d9fbb8862959912c5266364059c0abeda0c93bbf, though in that case the\n> issue was reporting all errors using the start LSN of the first of\n> several records read no matter where the error actually happened,\n> whereas in this case the error is using the start LSN of the previous\n> record instead of the current one.)\n\nThere was some previous discussion on this [0] [1].\n\n[0] https://postgr.es/m/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n[1] https://postgr.es/m/20220127.100738.1985658263632578184.horikyota.ntt%40gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 22 Aug 2022 19:38:42 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 10:38 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> There was some previous discussion on this [0] [1].\n>\n> [0] https://postgr.es/m/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n> [1] https://postgr.es/m/20220127.100738.1985658263632578184.horikyota.ntt%40gmail.com\n\nThanks. It seems like there are various doubts on those threads about\nwhether EndRecPtr is really the right thing, but I'm not able to\nunderstand what the problem is exactly. Certainly, in the common case,\nit is, and as far as I can tell from looking at the code, it's what\nwe're intended to use. So I would like to see some concrete evidence\nof it being wrong before we conclude that we need to do anything other\nthan a trivial change.\n\nBut the main issue for this thread is that we seem to be generating\ninvalid WAL. That seems like something we'd better get fixed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 09:31:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> Nothing that uses xlogreader is going to be able to bridge the gap\n> between file #4 and file #5. In this case it doesn't matter very much,\n> because we immediately write a checkpoint record into file #5, so if\n> we crash we won't try to replay file #4 anyway. However, if anything\n> did try to look at file #4 it would get confused. Maybe that can\n> happen if this is a streaming standby, where we only write an\n> end-of-recovery record upon promotion, rather than a checkpoint, or\n> maybe if there are cascading standbys someone could try to actually\n> use the 000000020000000000000004 file for something. I'm not sure. But\n> unless I'm missing something, that file is bogus, and our only hope of\n> not having problems is that perhaps no one will ever look at it.\n\nYeah, this analysis looks correct to me.\n\n> I think that the cause of this problem is this code right here:\n>\n> /*\n> * Actually, if WAL ended in an incomplete record, skip the parts that\n> * made it through and start writing after the portion that persisted.\n> * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> * we'll do as soon as we're open for writing new WAL.)\n> */\n> if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> {\n> Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> EndOfLog = missingContrecPtr;\n> }\n\nYeah, this statement as well as another statement that creates the\noverwrite contrecord. After changing these two lines the problem is\nfixed for me. Although I haven't yet thought of all the scenarios\nthat whether it is safe in all the cases. I agree that after timeline\nchanges we are pointing to the end of the last valid record we can\nstart writing the next record from that point onward. 
But I think we\nshould need to think hard that whether it will break any case for\nwhich the overwrite contrecord was actually introduced.\n\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex 7602fc8..3d38613 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -5491,7 +5491,7 @@ StartupXLOG(void)\n * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n * we'll do as soon as we're open for writing new WAL.)\n */\n- if (!XLogRecPtrIsInvalid(missingContrecPtr))\n+ if (newTLI == endOfRecoveryInfo->lastRecTLI &&\n!XLogRecPtrIsInvalid(missingContrecPtr))\n {\n Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n EndOfLog = missingContrecPtr;\n@@ -5589,7 +5589,7 @@ StartupXLOG(void)\n LocalSetXLogInsertAllowed();\n\n /* If necessary, write overwrite-contrecord before doing\nanything else */\n- if (!XLogRecPtrIsInvalid(abortedRecPtr))\n+ if (newTLI == endOfRecoveryInfo->lastRecTLI &&\n!XLogRecPtrIsInvalid(abortedRecPtr))\n {\n Assert(!XLogRecPtrIsInvalid(missingContrecPtr));\n CreateOverwriteContrecordRecord(abortedRecPtr,\nmissingContrecPtr, newTLI);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 11:09:44 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "Nice find!\n\nAt Wed, 24 Aug 2022 11:09:44 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Tue, Aug 23, 2022 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > Nothing that uses xlogreader is going to be able to bridge the gap\n> > between file #4 and file #5. In this case it doesn't matter very much,\n> > because we immediately write a checkpoint record into file #5, so if\n> > we crash we won't try to replay file #4 anyway. However, if anything\n> > did try to look at file #4 it would get confused. Maybe that can\n> > happen if this is a streaming standby, where we only write an\n> > end-of-recovery record upon promotion, rather than a checkpoint, or\n> > maybe if there are cascading standbys someone could try to actually\n> > use the 000000020000000000000004 file for something. I'm not sure. But\n> > unless I'm missing something, that file is bogus, and our only hope of\n> > not having problems is that perhaps no one will ever look at it.\n> \n> Yeah, this analysis looks correct to me.\n\n(I didn't reproduce the case but understand what is happening.)\n\nMe, too. There are two ways to deal with this, I think. One is to start\nwriting new records from abortedContRecPtr as if it did not\nexist. Another is copying the WAL file up to missingContRecPtr. Since the\nfirst segment of the new timeline doesn't need to be identical to the\nlast one of the previous timeline, I think the former way is\ncleaner. XLogInitNewTimeline or near it seems to be the place for the fix\nto me. 
Clearing abortedRecPtr and missingContrecPtr just before the\ncall to findNewestTimeLine will work?\n\n====\ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 87b243e0d4..27e01153e7 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -5396,6 +5396,13 @@ StartupXLOG(void)\n \t\t */\n \t\tXLogInitNewTimeline(EndOfLogTLI, EndOfLog, newTLI);\n \n+\t\t/*\n+\t\t * EndOfLog doesn't cover aborted contrecord even if the last record\n+\t\t * was that, then the next timeline starts writing from there. Forget\n+\t\t * about aborted and missing contrecords even if any.\n+\t\t */\n+\t\tabortedRecPtr = missingContrecPtr = InvalidXLogRecPtr;\n+\n \t\t/*\n \t\t * Remove the signal files out of the way, so that we don't\n \t\t * accidentally re-enter archive recovery mode in a subsequent crash.\n====\n\n> > I think that the cause of this problem is this code right here:\n> >\n> > /*\n> > * Actually, if WAL ended in an incomplete record, skip the parts that\n> > * made it through and start writing after the portion that persisted.\n> > * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> > * we'll do as soon as we're open for writing new WAL.)\n> > */\n> > if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> > {\n> > Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> > EndOfLog = missingContrecPtr;\n> > }\n> \n> Yeah, this statement as well as another statement that creates the\n> overwrite contrecord. After changing these two lines the problem is\n> fixed for me. Although I haven't yet thought of all the scenarios\n> that whether it is safe in all the cases. I agree that after timeline\n> changes we are pointing to the end of the last valid record we can\n> start writing the next record from that point onward. 
But I think we\n> should need to think hard that whether it will break any case for\n> which the overwrite contrecord was actually introduced.\n> \n> diff --git a/src/backend/access/transam/xlog.c\n> b/src/backend/access/transam/xlog.c\n> index 7602fc8..3d38613 100644\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -5491,7 +5491,7 @@ StartupXLOG(void)\n> * (It's critical to first write an OVERWRITE_CONTRECORD message, which\n> * we'll do as soon as we're open for writing new WAL.)\n> */\n> - if (!XLogRecPtrIsInvalid(missingContrecPtr))\n> + if (newTLI == endOfRecoveryInfo->lastRecTLI &&\n> !XLogRecPtrIsInvalid(missingContrecPtr))\n> {\n> Assert(!XLogRecPtrIsInvalid(abortedRecPtr));\n> EndOfLog = missingContrecPtr;\n> @@ -5589,7 +5589,7 @@ StartupXLOG(void)\n> LocalSetXLogInsertAllowed();\n> \n> /* If necessary, write overwrite-contrecord before doing\n> anything else */\n> - if (!XLogRecPtrIsInvalid(abortedRecPtr))\n> + if (newTLI == endOfRecoveryInfo->lastRecTLI &&\n> !XLogRecPtrIsInvalid(abortedRecPtr))\n> {\n> Assert(!XLogRecPtrIsInvalid(missingContrecPtr));\n> CreateOverwriteContrecordRecord(abortedRecPtr,\n> missingContrecPtr, newTLI);\n\nThis also seems to work, of course.\n\n# However, I haven't managed to reproduce that, yet...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 24 Aug 2022 17:40:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 4:40 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Me, too. There are two ways to deal with this, I think. One is start\n> writing new records from abortedContRecPtr as if it were not\n> exist. Another is copying WAL file up to missingContRecPtr. Since the\n> first segment of the new timeline doesn't need to be identical to the\n> last one of the previous timeline, so I think the former way is\n> cleaner.\n\nI agree, mostly because that gets us back to the way all of this\nworked before the contrecord stuff went in. This case wasn't broken\nthen, because the breakage had to do with it being unsafe to back up\nand rewrite WAL that might have already been shipped someplace, and\nthat's not an issue when we're first creating a totally new timeline.\nIt seems safer to me to go back to the way this worked before the fix\nwent in than to change over to a new system.\n\nHonestly, in a vacuum, I might prefer to get rid of this thing where\nthe WAL segment gets copied over from the old timeline to the new, and\njust always switch TLIs at segment boundaries. And while we're at it,\nI'd also like TLIs to be 64-bit random numbers instead of integers\nassigned in ascending order. But those kinds of design changes seem\nbest left for a future master-only development effort. Here, we need\nto back-patch the fix, and should try to just unbreak what's currently\nbroken.\n\n> XLogInitNewTimeline or near seems to be the place for fix\n> to me. Clearing abortedRecPtr and missingContrecPtr just before the\n> call to findNewestTimeLine will work?\n\nHmm, yeah, that seems like a good approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 24 Aug 2022 08:13:36 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\nHowever, if anything\n> did try to look at file #4 it would get confused. Maybe that can\n> happen if this is a streaming standby, where we only write an\n> end-of-recovery record upon promotion, rather than a checkpoint, or\n> maybe if there are cascading standbys someone could try to actually\n> use the 000000020000000000000004 file for something. I'm not sure. But\n> unless I'm missing something, that file is bogus, and our only hope of\n> not having problems is that perhaps no one will ever look at it.\n\nI tried in streaming mode, but it seems in the streaming mode we will\nnever create this bogus file because of this check [1]. So if the\nStandbyMode is true then we are never setting \"abortedRecPtr\" and\n\"missingContrecPtr\" which means we will never create that 0-filled gap\nin the WAL file that we are discussing in this thread.\n\nDo we need to set it? I feel we don't. Why? because on this thread\nwe are also discussing that if the timeline switches then we don't\nneed to create that 0-filled gap and that is the actual problem we are\ndiscussing here. And we know that if we are coming out of the\nStandbyMode then we will always switch the timeline so we don't create\nthat 0-filled gap. OTOH if we are coming out of the archive recovery\nthen also we will switch the timeline so in that case also we do not\nneed that. So practically we need to 0 fill that partial record only\nwhen we are doing the crash recovery is that understanding correct?\nIf so then we can simply avoid setting these variables if\nArchiveRecoveryRequested is true. So in the below check[1] instead of\n(!StandbyMode), we can just put (! ArchiveRecoveryRequested), and then\nwe don't need any other fix. Am I missing anything?\n\n[1]\nReadRecord{\n..record = XLogPrefetcherReadRecord(xlogprefetcher, &errormsg);\n if (record == NULL)\n {\n /*\n * When not in standby mode we find that WAL ends in an incomplete\n * record, keep track of that record. After recovery is done,\n * we'll write a record to indicate to downstream WAL readers that\n * that portion is to be ignored.\n */\n if (!StandbyMode &&\n !XLogRecPtrIsInvalid(xlogreader->abortedRecPtr))\n {\n abortedRecPtr = xlogreader->abortedRecPtr;\n missingContrecPtr = xlogreader->missingContrecPtr;\n }\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 18:14:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 8:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> ArchiveRecoveryRequested is true. So in the below check[1] instead of\n> (!StandbyMode), we can just put (! ArchiveRecoveryRequested), and then\n> we don't need any other fix. Am I missing anything?\n>\n> [1]\n> ReadRecord{\n> ..record = XLogPrefetcherReadRecord(xlogprefetcher, &errormsg);\n> if (record == NULL)\n> {\n> /*\n> * When not in standby mode we find that WAL ends in an incomplete\n> * record, keep track of that record. After recovery is done,\n> * we’ll write a record to indicate to downstream WAL readers that\n> * that portion is to be ignored.\n> */\n> if (!StandbyMode &&\n> !XLogRecPtrIsInvalid(xlogreader->abortedRecPtr))\n> {\n> abortedRecPtr = xlogreader->abortedRecPtr;\n> missingContrecPtr = xlogreader->missingContrecPtr;\n> }\n\nI agree. Testing StandbyMode here seems bogus. I thought initially\nthat the test should perhaps be for InArchiveRecovery rather than\nArchiveRecoveryRequested, but I see that the code which switches to a\nnew timeline cares about ArchiveRecoveryRequested, so I think that is\nthe correct thing to test here as well.\n\nConcretely, I propose the following patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 09:33:05 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On 2022-Aug-26, Robert Haas wrote:\n\n> I agree. Testing StandbyMode here seems bogus. I thought initially\n> that the test should perhaps be for InArchiveRecovery rather than\n> ArchiveRecoveryRequested, but I see that the code which switches to a\n> new timeline cares about ArchiveRecoveryRequested, so I think that is\n> the correct thing to test here as well.\n\nYeah, I think you had already established elsewhere that testing\nStandbyMode was the wrong thing to do. Testing ArchiveRecoveryRequested\nhere seems quite odd at first, but given the copying behavior, I agree\nthat it seems a correct thing to do.\n\nThere's a small typo in the comment: \"When find that\". I suppose that\nwas meant to be \"When we find that\". You end that para with \"and thus\nwe should not do this\", but that sounds like it wouldn't matter if we\ndid. Maybe \"and thus doing this would be wrong, so skip it.\" or\nsomething like that. (Perhaps be even more specific and say \"if we did\nthis, we would later create an overwrite record in the wrong place,\nbreaking everything\")\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:06:11 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 10:06 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> There's a small typo in the comment: \"When find that\". I suppose that\n> was meant to be \"When we find that\". You end that para with \"and thus\n> we should not do this\", but that sounds like it wouldn't matter if we\n> did. Maybe \"and thus doing this would be wrong, so skip it.\" or\n> something like that. (Perhaps be even more specific and say \"if we did\n> this, we would later create an overwrite record in the wrong place,\n> breaking everything\")\n\nI think that saying that someone should not do something implies\npretty clearly that it would be bad if they did. But I have no problem\nwith your more specific language, and as a general rule, it's good to\nbe specific, so let's use that.\n\nv2 attached.\n\nThanks for chiming in.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 10:23:41 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "> I agree. Testing StandbyMode here seems bogus. I thought initially\r\n> that the test should perhaps be for InArchiveRecovery rather than\r\n> ArchiveRecoveryRequested, but I see that the code which switches to a\r\n> new timeline cares about ArchiveRecoveryRequested, so I think that is\r\n> the correct thing to test here as well.\r\n\r\n> Concretely, I propose the following patch.\r\n\r\nThis patch looks similar to the change suggested in \r\nhttps://www.postgresql.org/message-id/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com\r\nto deal with panics after promoting a standby.\r\n\r\nThe difference is the patch tests !ArchiveRecoveryRequested instead\r\nof !StandbyModeRequested as proposed in the mentioned thread.\r\n\r\n\r\nThanks\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n",
"msg_date": "Fri, 26 Aug 2022 15:59:10 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 11:59 AM Imseih (AWS), Sami <simseih@amazon.com> wrote:\n> > I agree. Testing StandbyMode here seems bogus. I thought initially\n> > that the test should perhaps be for InArchiveRecovery rather than\n> > ArchiveRecoveryRequested, but I see that the code which switches to a\n> > new timeline cares about ArchiveRecoveryRequested, so I think that is\n> > the correct thing to test here as well.\n>\n> > Concretely, I propose the following patch.\n>\n> This patch looks similar to the change suggested in\n> https://www.postgresql.org/message-id/FB0DEA0B-E14E-43A0-811F-C1AE93D00FF3%40amazon.com\n> to deal with panics after promoting a standby.\n>\n> The difference is the patch tests !ArchiveRecoveryRequested instead\n> of !StandbyModeRequested as proposed in the mentioned thread.\n\nOK, I didn't realize this bug had been independently discovered and it\nlooks like I was even involved in the previous discussion. I just\ntotally forgot about it.\n\nI think, however, that your fix is wrong and this one is right.\nFundamentally, the server is either in normal running, or crash\nrecovery, or archive recovery. Standby mode is just an optional\nbehavior of archive recovery, controlling whether or not we keep\nretrying once the end of WAL is reached. But there's no reason why the\nserver should put the contrecord at a different location when recovery\nends depending on that retry behavior. The only thing that matters is\nwhether we're going to switch timelines.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:15:27 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "> I think, however, that your fix is wrong and this one is right.\r\n> Fundamentally, the server is either in normal running, or crash\r\n> recovery, or archive recovery. Standby mode is just an optional\r\n> behavior of archive recovery\r\n\r\nGood point. Thanks for clearing my understanding.\r\n\r\nThanks\r\n--\r\nSami Imseih\r\nAmazon Web Services\r\n\r\n",
"msg_date": "Fri, 26 Aug 2022 18:50:41 +0000",
"msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 7:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Aug 26, 2022 at 10:06 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > There's a small typo in the comment: \"When find that\". I suppose that\n> > was meant to be \"When we find that\". You end that para with \"and thus\n> > we should not do this\", but that sounds like it wouldn't matter if we\n> > did. Maybe \"and thus doing this would be wrong, so skip it.\" or\n> > something like that. (Perhaps be even more specific and say \"if we did\n> > this, we would later create an overwrite record in the wrong place,\n> > breaking everything\")\n>\n> I think that saying that someone should not do something implies\n> pretty clearly that it would be bad if they did. But I have no problem\n> with your more specific language, and as a general rule, it's good to\n> be specific, so let's use that.\n>\n> v2 attached.\n\nThe patch LGTM, this patch will apply on master and v15. PFA patch\nfor back branches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 28 Aug 2022 10:16:21 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "At Sun, 28 Aug 2022 10:16:21 +0530, Dilip Kumar <dilipbalaut@gmail.com> wrote in \n> On Fri, Aug 26, 2022 at 7:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > v2 attached.\n> \n> The patch LGTM, this patch will apply on master and v15. PFA patch\n> for back branches.\n\nStandbyMode is obviously wrong. On the other hand I thought that\n!ArchiveRecoveryRequested is somewhat wrong, too (as I stated in the\npointed thread). On second thought, I changed my mind that it is\nright. The cause of the confusion is\nthat I somehow thought that archive recovery continues from the\naborted-contrec record. However, that assumption is wrong. The next\nredo starts from the beginning of the aborted contrecord so we should\nforget about the old missing/aborted contrec info when archive\nrecovery is requested.\n\nIn the end, the point is that we need to set the global variables only\nwhen XLogPrefetcherReadRecord() (or XLogReadRecord()) returns NULL and\nwe return it to the caller. Is it worth to do a small refactoring\nlike the attached? If no, I'm fine with the proposed patch including\nthe added assertion.\n\n# I haven't reproduced the issue of the OP in the other thread yet, and\n# also not found how to reproduce this issue, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Mon, 29 Aug 2022 13:13:52 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "At Mon, 29 Aug 2022 13:13:52 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> we return it to the caller. Is it worth to do a small refactoring\n> like the attached? If no, I'm fine with the proposed patch including\n> the added assertion.\n\nMmm. That seems wrong. So forget about that. The proposed patch looks\nfine to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 29 Aug 2022 13:21:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 6:14 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Aug 23, 2022 at 12:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> However, if anything\n> > did try to look at file #4 it would get confused. Maybe that can\n> > happen if this is a streaming standby, where we only write an\n> > end-of-recovery record upon promotion, rather than a checkpoint, or\n> > maybe if there are cascading standbys someone could try to actually\n> > use the 000000020000000000000004 file for something. I'm not sure. But\n> > unless I'm missing something, that file is bogus, and our only hope of\n> > not having problems is that perhaps no one will ever look at it.\n\nI tried to see the problem with the cascading standby, basically the\nsetup is like below\npgprimary->pgstandby(archive only)->pgcascade(streaming + archive).\n\nThe second node has to be archive only because this 0 filled gap is\ncreated in archive only mode. With that I have noticed that the when\ncascading standby is getting that 0 filled gap it report same error\nwhat we seen with pg_waldump and that it keep waiting forever on that\nfile. I have attached a test case, but I think timing is not done\nperfectly in this test so before the cascading standby setup some of\nthe WAL file get removed by the pgstandby so I just put direct return\nin RemoveOldXlogFiles() to test this[2]. And this problem is getting\nresolved with the patch given by Robert upthread.\n\n[1]\n2022-08-25 16:21:26.413 IST [18235] LOG: invalid record length at\n0/FFFFEA8: wanted 24, got 0\n\n[2]\ndiff --git a/src/backend/access/transam/xlog.c\nb/src/backend/access/transam/xlog.c\nindex eb5115f..990a879 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -3558,6 +3558,7 @@ RemoveOldXlogFiles(XLogSegNo segno, XLogRecPtr\nlastredoptr, XLogRecPtr endptr,\n XLogSegNo endlogSegNo;\n XLogSegNo recycleSegNo;\n\n+ return;\n /* Initialize info about where to try to recycle to */\n XLByteToSeg(endptr, endlogSegNo, wal_segment_size);\n recycleSegNo = XLOGfileslop(lastredoptr);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 29 Aug 2022 15:46:58 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: standby promotion can create unreadable WAL"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 12:21 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Mmm. That seems wrong. So forget about that. The proposed patch looks\n> fine to me.\n\nThanks for thinking it over. Committed and back-patched as far as v10,\nsince that's the oldest supported release.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 12:32:30 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: standby promotion can create unreadable WAL"
}
] |
[
{
"msg_contents": ">Per discussion in [0], here is a patch set that allows pfree() to accept\n>a NULL argument, like free() does.\n\n>Also, a patch that removes the now-unnecessary null pointer checks\n>before calling pfree(). And a few patches that do the same for some\n>other functions that I found around. (The one with FreeDir() is perhaps\n>a bit arguable, since FreeDir() wraps closedir() which does *not* accept\n>NULL arguments. Also, neither FreeFile() nor the underlying fclose()\n>accept NULL.)\n\nHi Peter,\n\n+1\n\nHowever, after a quick review, I noticed some cases of PQfreemem missing.\nI took the liberty of making a v1, attached.\n\nregards,\n\nRanier Vilela",
"msg_date": "Mon, 22 Aug 2022 16:15:23 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "re: Change pfree to accept NULL argument"
}
] |
[
{
"msg_contents": "I noticed an accidental ;;\n\nPSA patch to remove the same.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia",
"msg_date": "Tue, 23 Aug 2022 10:13:35 +1000",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "fix typo - empty statement ;;"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 7:14 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I noticed an accidental ;;\n>\n> PSA patch to remove the same.\n\nPushed.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 09:34:01 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: fix typo - empty statement ;;"
}
] |
[
{
"msg_contents": "Today, I see some error messages have been added, two of which look\nsomewhat inconsistent.\n\ncommands/user.c\n@707:\n> errmsg(\"must have admin option on role \\\"%s\\\" to add members\",\n@1971:\n> errmsg(\"grantor must have ADMIN OPTION on \\\"%s\\\"\",\n\nA grep'ing told me that the latter above is the only outlier among 6\noccurrences in total of \"admin option/ADMIN OPTION\".\n\nDon't we unify them? I slightly prefer \"ADMIN OPTION\" but no problem\nwith them being in small letters. (Attached).\n\n\nIn passing, I met the following code in the same file.\n\n>\t\tif (!have_createrole_privilege() &&\n>\t\t\t!is_admin_of_role(currentUserId, roleid))\n>\t\t\tereport(ERROR,\n>\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>\t\t\t\t\t errmsg(\"must have admin option on role \\\"%s\\\"\",\n>\t\t\t\t\t\t\trolename)));\n\nThe message seems a bit short that it only mentions admin option while\nomitting CREATEROLE privilege. \"must have CREATEROLE privilege or\nadmin option on role %s\" might be better. Or we could say just\n\"insufficient privilege\" or \"permission denied\" in the main error\nmessage then provide \"CREATEROLE privilege or admin option on role %s\nis required\" in DETAILS or HINTS message. The message was added by\nc33d575899 along with the have_createrole_privilege() call so it is\nunclear to me whether it is intentional or not.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Tue, 23 Aug 2022 10:29:21 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Letter case of \"admin option\""
},
{
"msg_contents": "On 2022-Aug-23, Kyotaro Horiguchi wrote:\n\n> commands/user.c\n> @707:\n> > errmsg(\"must have admin option on role \\\"%s\\\" to add members\",\n> @1971:\n> > errmsg(\"grantor must have ADMIN OPTION on \\\"%s\\\"\",\n> \n> A grep'ing told me that the latter above is the only outlier among 6\n> occurrences in total of \"admin option/ADMIN OPTION\".\n> \n> Don't we unify them? I slightly prefer \"ADMIN OPTION\" but no problem\n> with them being in small letters. (Attached).\n\nAs a translator, it makes a huge difference to have them in upper vs.\nlower case. In the former case I would keep it untranslated, while in\nthe latter I would translate it. Given that these are keywords to use\nin a command, I think making them uppercase is the better approach.\n\nI see several other messages using \"admin option\" in lower case in\nuser.c. The Spanish translation contains one translation already and it\nis somewhat disappointing; I would prefer to have it as uppercase there\ntoo.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 23 Aug 2022 14:17:04 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "On Mon, Aug 22, 2022 at 9:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Today, I see some error messages have been added, two of which look\n> somewhat inconsistent.\n>\n> commands/user.c\n> @707:\n> > errmsg(\"must have admin option on role \\\"%s\\\" to add members\",\n> @1971:\n> > errmsg(\"grantor must have ADMIN OPTION on \\\"%s\\\"\",\n>\n> A grep'ing told me that the latter above is the only outlier among 6\n> occurrences in total of \"admin option/ADMIN OPTION\".\n>\n> Don't we unify them? I slightly prefer \"ADMIN OPTION\" but no problem\n> with them being in small letters. (Attached).\n\nFair point. There's some ambiguity in my mind about exactly how we\nwant to refer to this, which is probably why the messages ended up not\nbeing entirely consistent. I feel like it's a little weird that we\ntalk about ADMIN OPTION as if it were a thing that you can possess.\nFor example, consider EXPLAIN. If you were trying to troubleshoot a\nproblem with a query plan, you wouldn't tell them \"hey, please run\nEXPLAIN, and be sure to use the ANALYZE OPTION\". You would tell them\n\"hey, please run EXPLAIN, and be sure to use the ANALYZE option\". In\nthat case, it's clear that the thing you need to include in the\ncommand is ANALYZE -- which is an option -- not a thing called ANALYZE\nOPTION.\n\nIn the case of GRANT, that's more ambiguous, because the word OPTION\nactually appears in the syntax. But isn't that sort of accidental?\nIt's quite possible to give someone the right to administer a role\nwithout ever mentioning the OPTION keyword:\n\nrhaas=# create role bob;\nCREATE ROLE\nrhaas=# create role accounting admin bob;\nCREATE ROLE\nrhaas=# select roleid::regrole, member::regrole, grantor::regrole,\nadmin_option from pg_auth_members where roleid =\n'accounting'::regrole;\n roleid | member | grantor | admin_option\n------------+--------+---------+--------------\n accounting | bob | rhaas | t\n(1 row)\n\nYou can't change this after-the-fact with ALTER ROLE or ALTER GROUP,\nbut if we added that ability, I imagine that the syntax would probably\nnot involve the OPTION keyword. You'd probably say something like:\nALTER ROLE accounting ADD ADMIN fred, or ALTER GROUP accounting DROP\nADMIN bob.\n\nIn short, I'm wondering whether we should regard ADMIN as the name of\nthe option, but OPTION as part of the GRANT syntax, and hence\ncapitalize it \"ADMIN option\". However, if the non-English speakers on\nthis list have a strong preference for something else I'm certainly\nnot going to fight about it.\n\n> In passing, I met the following code in the same file.\n>\n> > if (!have_createrole_privilege() &&\n> > !is_admin_of_role(currentUserId, roleid))\n> > ereport(ERROR,\n> > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > errmsg(\"must have admin option on role \\\"%s\\\"\",\n> > rolename)));\n>\n> The message seems a bit short that it only mentions admin option while\n> omitting CREATEROLE privilege. \"must have CREATEROLE privilege or\n> admin option on role %s\" might be better. Or we could say just\n> \"insufficient privilege\" or \"permission denied\" in the main error\n> message then provide \"CREATEROLE privilege or admin option on role %s\n> is required\" in DETAILS or HINTS message. The message was added by\n> c33d575899 along with the have_createrole_privilege() call so it is\n> unclear to me whether it is intentional or not.\n\nYeah, I wasn't sure what to do about this. We do not mention superuser\nprivileges in every message where they theoretically apply, because it\nwould make a lot of messages longer for not much benefit. CREATEROLE\nis a similar case and I think, but am not sure, that we treat it\nsimilarly. So in my mind it is a judgement call what to do here, and\nif other people think that what I picked wasn't best, we can change\nit.\n\nFor what it's worth, I'm hoping to eventually remove the CREATEROLE\nexception here. The superuser exception will remain, of course.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 09:58:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "Thanks for the comment.\r\n\r\nAt Tue, 23 Aug 2022 09:58:47 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \r\n> On Mon, Aug 22, 2022 at 9:29 PM Kyotaro Horiguchi\r\n> <horikyota.ntt@gmail.com> wrote:\r\n> In the case of GRANT, that's more ambiguous, because the word OPTION\r\n> actually appears in the syntax. But isn't that sort of accidental?\r\n\r\nYeah I think so. My intention is to let the translators do their work\r\nmore mechanically. A capital-letter word is automatically recognized\r\nas a keyword then can be copied as-is.\r\n\r\nI would translate \"ADMIN OPTION\" to \"ADMIN OPTION\" in Japanese but\r\n\"admin option\" is translated to \"管理者オプション\" which is a bit hard\r\nfor the readers to come up with the connection to \"ADMIN OPTION\" (or\r\nADMIN <roles>). I guess this is somewhat similar to use \"You need to\r\ngive capability to administrate the role\" to suggest users to add WITH\r\nADMIN OPTION to the role.\r\n\r\nMaybe Álvaro has a similar difficulty on it.\r\n\r\n> It's quite possible to give someone the right to administer a role\r\n> without ever mentioning the OPTION keyword:\r\n\r\nMmm.. Fair point.\r\n\r\n> In short, I'm wondering whether we should regard ADMIN as the name of\r\n> the option, but OPTION as part of the GRANT syntax, and hence\r\n> capitalize it \"ADMIN option\". However, if the non-English speakers on\r\n> this list have a strong preference for something else I'm certainly\r\n> not going to fight about it.\r\n\r\n\"ADMIN option\" which is translated into \"ADMINオプション\" is fine by\r\nme. I hope Álvaro thinks the same way.\r\n\r\nWhat do you think about the attached?\r\n\r\n\r\n> > In passing, I met the following code in the same file.\r\n> >\r\n> > > if (!have_createrole_privilege() &&\r\n> > > !is_admin_of_role(currentUserId, roleid))\r\n> > > ereport(ERROR,\r\n> > > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\r\n> > > errmsg(\"must have admin option on role \\\"%s\\\"\",\r\n> > > rolename)));\r\n> >\r\n> > The message seems a bit short that it only mentions admin option while\r\n> > omitting CREATEROLE privilege. \"must have CREATEROLE privilege or\r\n> > admin option on role %s\" might be better. Or we could say just\r\n> > \"insufficient privilege\" or \"permission denied\" in the main error\r\n> > message then provide \"CREATEROLE privilege or admin option on role %s\r\n> > is required\" in DETAILS or HINTS message. The message was added by\r\n> > c33d575899 along with the have_createrole_privilege() call so it is\r\n> > unclear to me whether it is intentional or not.\r\n> \r\n> Yeah, I wasn't sure what to do about this. We do not mention superuser\r\n> privileges in every message where they theoretically apply, because it\r\n> would make a lot of messages longer for not much benefit. CREATEROLE\r\n> is a similar case and I think, but am not sure, that we treat it\r\n> similarly. So in my mind it is a judgement call what to do here, and\r\n> if other people think that what I picked wasn't best, we can change\r\n> it.\r\n> \r\n> For what it's worth, I'm hoping to eventually remove the CREATEROLE\r\n> exception here. The superuser exception will remain, of course.\r\n\r\nIf it were simply \"permission denied\", I don't think about the details\r\nthen seek for the way to allow that. But I don't mean to fight this\r\nfor now.\r\n\r\nFor the record, I would prefer the following message for this sort of\r\nfailure.\r\n\r\nERROR: permission denied\r\nDETAILS: CREATEROLE or ADMIN option is required for the role.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center",
"msg_date": "Thu, 25 Aug 2022 14:36:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "On 2022-Aug-25, Kyotaro Horiguchi wrote:\n\n> At Tue, 23 Aug 2022 09:58:47 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n\n> I would translate \"ADMIN OPTION\" to \"ADMIN OPTION\" in Japanese but\n> \"admin option\" is translated to \"管理者オプション\" which is a bit hard\n> for the readers to come up with the connection to \"ADMIN OPTION\" (or\n> ADMIN <roles>). I guess this is somewhat similar to use \"You need to\n> give capability to administrate the role\" to suggest users to add WITH\n> ADMIN OPTION to the role.\n> \n> Maybe Álvaro has a similar difficulty on it.\n\nExactly.\n\nI ran a quick poll in a Spanish community. Everyone who responded (not\nmany admittedly) agreed with this idea -- they find the message clearer\nif the keyword is mentioned explicitly in the translation.\n\n> > In short, I'm wondering whether we should regard ADMIN as the name of\n> > the option, but OPTION as part of the GRANT syntax, and hence\n> > capitalize it \"ADMIN option\". However, if the non-English speakers on\n> > this list have a strong preference for something else I'm certainly\n> > not going to fight about it.\n> \n> \"ADMIN option\" which is translated into \"ADMINオプション\" is fine by\n> me. I hope Álvaro thinks the same way.\n\nHmm, but our docs say that the option is called ADMIN OPTION, don't\nthey? And I think the standard sees it the same way. You cannot invoke\nit without the word OPTION. I understand the point of view, but I don't\nthink it is clearer done that way. It is different for example with\nINHERIT; we could say \"the INHERIT option\" making the word \"option\"\ntranslatable in that phrase. But then you don't have to add that word\nin the command.\n\n> What do you think about the attached?\n\nI prefer the <literal>ADMIN OPTION</literal> interpretation (both for\ndocs and error messages). I think it's clearer that way, given that the\nsyntax is what it is.\n\n> > > > !is_admin_of_role(currentUserId, roleid))\n> > > > ereport(ERROR,\n> > > > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> > > > errmsg(\"must have admin option on role \\\"%s\\\"\",\n> > > > rolename)));\n> > >\n> > > The message seems a bit short that it only mentions admin option while\n> > > omitting CREATEROLE privilege. \"must have CREATEROLE privilege or\n> > > admin option on role %s\" might be better. Or we could say just\n> > > \"insufficient privilege\" or \"permission denied\" in the main error\n> > > message then provide \"CREATEROLE privilege or admin option on role %s\n> > > is required\" in DETAILS or HINTS message.\n\nI'm not opposed to moving that part of detail/hint, but I would prefer\nthat it says \"the CREATEROLE privilege or ADMIN OPTION\".\n\n> --- a/doc/src/sgml/ref/alter_group.sgml\n> +++ b/doc/src/sgml/ref/alter_group.sgml\n> @@ -55,7 +55,7 @@ ALTER GROUP <replaceable class=\"parameter\">group_name</replaceable> RENAME TO <r\n> <link linkend=\"sql-revoke\"><command>REVOKE</command></link>. Note that\n> <command>GRANT</command> and <command>REVOKE</command> have additional\n> options which are not available with this command, such as the ability\n> - to grant and revoke <literal>ADMIN OPTION</literal>, and the ability to\n> + to grant and revoke <literal>ADMIN</literal> option, and the ability to\n> specify the grantor.\n> </para>\n\nI think the original reads better.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"La libertad es como el dinero; el que no la sabe emplear la pierde\" (Alvarez)\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:58:16 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 4:58 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I ran a quick poll in a Spanish community. Everyone who responded (not\n> many admittedly) agreed with this idea -- they find the message clearer\n> if the keyword is mentioned explicitly in the translation.\n\nMakes sense. I didn't really doubt that ADMIN should be capitalized, I\njust wasn't sure about OPTION.\n\n> > > In short, I'm wondering whether we should regard ADMIN as the name of\n> > > the option, but OPTION as part of the GRANT syntax, and hence\n> > > capitalize it \"ADMIN option\". However, if the non-English speakers on\n> > > this list have a strong preference for something else I'm certainly\n> > > not going to fight about it.\n> >\n> > \"ADMIN option\" which is translated into \"ADMINオプション\" is fine by\n> > me. I hope Álvaro thinks the same way.\n>\n> Hmm, but our docs say that the option is called ADMIN OPTION, don't\n> they? And I think the standard sees it the same way. You cannot invoke\n> it without the word OPTION. I understand the point of view, but I don't\n> think it is clearer done that way. It is different for example with\n> INHERIT; we could say \"the INHERIT option\" making the word \"option\"\n> translatable in that phrase. But then you don't have to add that word\n> in the command.\n\nIt's going to be a little strange of we have ADMIN OPTION and INHERIT\noption, isn't it? But we can try it.\n\nOne thing I have noticed, though, is that there are a lot of existing\nreferences to ADMIN OPTION in code comments. If we decide on anything\nelse here we're going to have quite a few things to tidy up. Not that\nthat's a big deal I guess, but it's something to think about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:32:24 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "Here's a patch changing all occurrences of \"admin option\" in error\nmessages to \"ADMIN OPTION\".\n\nTwo of these five messages also exist in previous releases; the other\nthree are new.\n\nI'm not sure if this is our final conclusion on what we want to do\nhere, so please speak up if you don't agree.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 12:33:33 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
},
{
"msg_contents": "On 2022-Aug-26, Robert Haas wrote:\n\n> Here's a patch changing all occurrences of \"admin option\" in error\n> messages to \"ADMIN OPTION\".\n> \n> Two of these five messages also exist in previous releases; the other\n> three are new.\n> \n> I'm not sure if this is our final conclusion on what we want to do\n> here, so please speak up if you don't agree.\n\nThanks -- this is my personal preference, as well as speaking on behalf\nof a few people who considered the matter from a user's point of view.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:07:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Letter case of \"admin option\""
}
] |
[
{
"msg_contents": "Hi, hackers\nI made a small patch for xml2 to improve test coverage.\nHowever, there was a problem using the functions below.\n\n- xpath_number\n- xpath_bool\n- xpath_nodeset\n- xpath_list\n\nDo you have any advice on how to use this function correctly?\nIt would also be good to add an example of using the function to the document.\n\n---\nRegards,\nDongWook Lee.",
"msg_date": "Tue, 23 Aug 2022 10:38:51 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "xml2: add test for coverage"
},
{
"msg_contents": "On 23.08.22 03:38, Dong Wook Lee wrote:\n> I made a small patch for xml2 to improve test coverage.\n> However, there was a problem using the functions below.\n> \n> - xpath_number\n> - xpath_bool\n> - xpath_nodeset\n> - xpath_list\n> \n> Do you have any advice on how to use this function correctly?\n> It would also be good to add an example of using the function to the document.\n\nI can confirm that these functions could use more tests and more \ndocumentation and examples. But given that you registered a patch in \nthe commit fest, it should be you who provides a patch to solve those \nissues. Are you still working on this, or were you just looking for \nhelp on how to solve this?\n\n\n\n",
"msg_date": "Fri, 25 Nov 2022 13:37:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: xml2: add test for coverage"
},
{
"msg_contents": "On Fri, 25 Nov 2022 at 18:08, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 23.08.22 03:38, Dong Wook Lee wrote:\n> > I made a small patch for xml2 to improve test coverage.\n> > However, there was a problem using the functions below.\n> >\n> > - xpath_number\n> > - xpath_bool\n> > - xpath_nodeset\n> > - xpath_list\n> >\n> > Do you have any advice on how to use this function correctly?\n> > It would also be good to add an example of using the function to the document.\n>\n> I can confirm that these functions could use more tests and more\n> documentation and examples. But given that you registered a patch in\n> the commit fest, it should be you who provides a patch to solve those\n> issues. Are you still working on this, or were you just looking for\n> help on how to solve this?\n\nHi DongWook Lee,\n\nAre you planning to work on this and provide an updated patch, if you\nare not planning to work on it, we can update the commitfest entry\naccordingly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 17 Jan 2023 17:06:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xml2: add test for coverage"
},
{
"msg_contents": "On Tue, 17 Jan 2023 at 17:06, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 25 Nov 2022 at 18:08, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > On 23.08.22 03:38, Dong Wook Lee wrote:\n> > > I made a small patch for xml2 to improve test coverage.\n> > > However, there was a problem using the functions below.\n> > >\n> > > - xpath_number\n> > > - xpath_bool\n> > > - xpath_nodeset\n> > > - xpath_list\n> > >\n> > > Do you have any advice on how to use this function correctly?\n> > > It would also be good to add an example of using the function to the document.\n> >\n> > I can confirm that these functions could use more tests and more\n> > documentation and examples. But given that you registered a patch in\n> > the commit fest, it should be you who provides a patch to solve those\n> > issues. Are you still working on this, or were you just looking for\n> > help on how to solve this?\n>\n> Hi DongWook Lee,\n>\n> Are you planning to work on this and provide an updated patch, if you\n> are not planning to work on it, we can update the commitfest entry\n> accordingly.\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 31 Jan 2023 23:20:02 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: xml2: add test for coverage"
}
] |
[
{
"msg_contents": "Hi Hackers,\nI wrote a test for coverage.\nUnfortunately, it seems to take quite a while to run the test.\nI want to improve these execution times, but I don't know exactly what to do.\nTherefore, I want to hear feedback from many people.\n---\nRegards,\nDong Wook Lee",
"msg_date": "Tue, 23 Aug 2022 10:50:08 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_waldump: add test for coverage"
},
{
"msg_contents": "On 23.08.22 03:50, Dong Wook Lee wrote:\n> Hi Hackers,\n> I wrote a test for coverage.\n> Unfortunately, it seems to take quite a while to run the test.\n> I want to improve these execution times, but I don't know exactly what to do.\n> Therefore, I want to hear feedback from many people.\n\nI don't find these tests to be particularly slow. How long do they take \nfor you to run?\n\nA couple of tips:\n\n- You should give each test a name. That's why each test function has a \n(usually) last argument that takes a string.\n\n- You could use command_like() to run a command and check that it exits \nsuccessfully and check its standard out. For example, instead of\n\n# test pg_waldump with -F (main)\nIPC::Run::run [ 'pg_waldump', \"$wal_dump_path\", '-F', 'main' ], '>', \n\\$stdout, '2>', \\$stderr;\nisnt($stdout, '', \"\");\n\nit is better to write\n\ncommand_like([ 'pg_waldump', \"$wal_dump_path\", '-F', 'main' ],\n qr/TODO/, 'test -F (main)');\n\n- It would be useful to test the actual output (that is, fill in the \nTODO above). I don't know what the best way to do that is -- that is \npart of designing these tests.\n\nAlso,\n\n- Your patch introduces a spurious blank line at the end of the test file.\n\n- For portability, options must be before non-option arguments. So \ninstead of\n\n[ 'pg_waldump', \"$wal_dump_path\", '-F', 'main' ]\n\nit should be\n\n[ 'pg_waldump', '-F', 'main', \"$wal_dump_path\" ]\n\n\nI think having some more test coverage for pg_waldump would be good, so \nI encourage you to continue working on this.\n\n\n",
"msg_date": "Tue, 6 Sep 2022 07:57:57 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 10:50:08 +0900, Dong Wook Lee wrote:\n> I wrote a test for coverage.\n\nUnfortunately the test doesn't seem to pass on windows, and hasn't ever done so:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3834\n\nDue to the merge of the meson patchset, you should also add 001_basic.pl to\nthe list of tests in meson.build\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:16:10 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
},
{
"msg_contents": "On 06.09.22 07:57, Peter Eisentraut wrote:\n>> I wrote a test for coverage.\n>> Unfortunately, it seems to take quite a while to run the test.\n>> I want to improve these execution times, but I don't know exactly what \n>> to do.\n>> Therefore, I want to hear feedback from many people.\n\n> I think having some more test coverage for pg_waldump would be good, so \n> I encourage you to continue working on this.\n\nI made an updated patch that incorporates many of your ideas and code, \njust made it a bit more compact, and added more tests for various \ncommand-line options. This moves the test coverage of pg_waldump from \n\"bloodbath\" to \"mixed fruit salad\", which I think is pretty good \nprogress. And now there is room for additional patches if someone wants \nto figure out, e.g., how to get more complete coverage in gindesc.c or \nwhatever.",
"msg_date": "Wed, 14 Jun 2023 09:16:50 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
},
{
"msg_contents": "On 14.06.23 09:16, Peter Eisentraut wrote:\n> On 06.09.22 07:57, Peter Eisentraut wrote:\n>>> I wrote a test for coverage.\n>>> Unfortunately, it seems to take quite a while to run the test.\n>>> I want to improve these execution times, but I don't know exactly \n>>> what to do.\n>>> Therefore, I want to hear feedback from many people.\n> \n>> I think having some more test coverage for pg_waldump would be good, \n>> so I encourage you to continue working on this.\n> \n> I made an updated patch that incorporates many of your ideas and code, \n> just made it a bit more compact, and added more tests for various \n> command-line options. This moves the test coverage of pg_waldump from \n> \"bloodbath\" to \"mixed fruit salad\", which I think is pretty good \n> progress. And now there is room for additional patches if someone wants \n> to figure out, e.g., how to get more complete coverage in gindesc.c or \n> whatever.\n\nHere is an updated patch set. I added a test case for the \"first record \nis after\" message. Also, I think this message should really go to \nstderr, since it's more of a notice or warning, so I changed it to use \npg_log_info.",
"msg_date": "Wed, 28 Jun 2023 07:48:46 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHello,\n\nI've reviewed your latest v3 patches on Ubuntu 23.04. Both patches apply correctly and all the tests run and pass as they should. Execution time was normal for me, I didn't notice any significant latency when compared to other tests. The only other feedback I can provide would be to add test coverage to some of the other options that aren't currently covered (ie. --bkp-details, --end, --follow, --path, etc.) for completeness. Other than that, this looks like a great patch.\n\nKind regards,\n\nTristen",
"msg_date": "Thu, 29 Jun 2023 19:16:52 +0000",
"msg_from": "Tristen Raab <tristen.raab@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
},
{
"msg_contents": "On 29.06.23 21:16, Tristen Raab wrote:\n> I've reviewed your latest v3 patches on Ubuntu 23.04. Both patches apply correctly and all the tests run and pass as they should. Execution time was normal for me, I didn't notice any significant latency when compared to other tests. The only other feedback I can provide would be to add test coverage to some of the other options that aren't currently covered (ie. --bkp-details, --end, --follow, --path, etc.) for completeness. Other than that, this looks like a great patch.\n\nCommitted.\n\nI added a test for the --quiet option. --end and --path are covered.\n\nThe only options not covered now are\n\n -b, --bkp-details output detailed information about backup blocks\n -f, --follow keep retrying after reaching end of WAL\n -t, --timeline=TLI timeline from which to read WAL records\n -x, --xid=XID only show records with transaction ID XID\n\n--follow is a bit tricky to test because you need to leave pg_waldump \nrunning in the background for a while, or something like that. \n--timeline and --xid can be tested but would need some work on the \nunderlying test data (such as creating more than one timeline). I don't \nknow much about --bkp-details, so I don't have a good idea how to test \nit. So I'll leave those as projects for the future.\n\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:01:33 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: pg_waldump: add test for coverage"
}
] |
[
{
"msg_contents": "Hi hackers,\nI checked the test code not to test the zstd option, then added it.\nI hope my patch will help us to ensure safety of the test.\n\n\n---\nRegards,\nDongWook Lee.",
"msg_date": "Tue, 23 Aug 2022 10:58:22 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:58:22AM +0900, Dong Wook Lee wrote:\n> I checked the test code not to test the zstd option, then added it.\n> I hope my patch will help us to ensure safety of the test.\n\nIt seems to me that checking that the contents generated are valid is\nequally necessary. We do that with zlib with gzip --test, and you\ncould use ${ZSTD} in the context of this test.\n\nWhat about lz4?\n--\nMichael",
"msg_date": "Tue, 23 Aug 2022 11:36:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 11:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> It seems to me that checking that the contents generated are valid is\n> equally necessary. We do that with zlib with gzip --test, and you\n> could use ${ZSTD} in the context of this test.\n\nThank you for the good points.\nI supplemented the test according to your suggestion.\nHowever, there was a problem.\nEven though I did export ZSTD on the Makefile , the test runner can't\nfind ZSTD when it actually tests.\n```\nmy $zstd = $ENV{ZSTD};\nskip \"program zstd is not found in your system\", 1\n if (!defined $zstd\n || $zstd eq '');\n```\nlog: regress_log_010_pg_basebackup\n```\nok 183 # skip program zstd is not found in your system.\n```\nCould you check if I missed anything?",
"msg_date": "Thu, 25 Aug 2022 16:52:04 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 3:52 AM Dong Wook Lee <sh95119@gmail.com> wrote:\n> Could you check if I missed anything?\n\nThere is already a far better test for this in\nsrc/bin/pg_verifybackup/t/009_extract.pl\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:35:20 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "Hi\n\nI was looking at the commitfest entry for this patch [1] as it's been dormant\nfor quite a while, with the intent of returning it with feedback.\n\n[1] https://commitfest.postgresql.org/40/3835/\n\n2022年8月25日(木) 16:52 Dong Wook Lee <sh95119@gmail.com>:\n>\n> On Tue, Aug 23, 2022 at 11:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > It seems to me that checking that the contents generated are valid is\n> > equally necessary. We do that with zlib with gzip --test, and you\n> > could use ${ZSTD} in the context of this test.\n>\n> Thank you for the good points.\n> I supplemented the test according to your suggestion.\n> However, there was a problem.\n> Even though I did export ZSTD on the Makefile , the test runner can't\n> find ZSTD when it actually tests.\n> ```\n> my $zstd = $ENV{ZSTD};\n> skip \"program zstd is not found in your system\", 1\n> if (!defined $zstd\n> || $zstd eq '');\n> ```\n> log: regress_log_010_pg_basebackup\n> ```\n> ok 183 # skip program zstd is not found in your system.\n> ```\n> Could you check if I missed anything?\n\nTaking a quick look at the patch itself, as-is it does actually work; maybe\n the zstd binary itself was missing or not in the normal system path?\n It might not have been installed even if the devel library was (IIRC\n this was the case on Rocky Linux).\n\nHowever the code largely duplicates the preceding gzip test,\n and as Michael mentioned there's still lz4 without coverage.\nAttached patch refactors this part of the test so it can be used\nfor multiple compression methods, similar to the test in\nsrc/bin/pg_verifybackup/t/009_extract.pl mentioned by Robert.\nThe difference to that test is that we can exercise all the\ncommand line options and directly check the generated files with\nthe respective binary.\n\nThough on reflection maybe it's overkill and the existing tests\nsuffice. Anyway leaving the patch here in the interests of pushing\nthis forward in some direction.\n\nRegards\n\nIan Barwick",
"msg_date": "Sat, 3 Dec 2022 13:29:27 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 11:29 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> Though on reflection maybe it's overkill and the existing tests\n> suffice. Anyway leaving the patch here in the interests of pushing\n> this forward in some direction.\n\nDo you think that there is a scenario where 008_untar.pl and\n009_extract.pl pass but this test fails, alerting us to a problem that\nwould otherwise have gone undetected? If so, what is that scenario?\n\nThe only thing that I can think of would be if $decompress_program\n--test were failing, but actually trying to decompress succeeded. I\nwould be inclined to dismiss that particular scenario as not important\nenough to be worth the additional CPU cycles.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 5 Dec 2022 09:59:45 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "2022年12月5日(月) 23:59 Robert Haas <robertmhaas@gmail.com>:\n>\n> On Fri, Dec 2, 2022 at 11:29 PM Ian Lawrence Barwick <barwick@gmail.com> wrote:\n> > Though on reflection maybe it's overkill and the existing tests\n> > suffice. Anyway leaving the patch here in the interests of pushing\n> > this forward in some direction.\n>\n> Do you think that there is a scenario where 008_untar.pl and\n> 009_extract.pl pass but this test fails, alerting us to a problem that\n> would otherwise have gone undetected? If so, what is that scenario?\n>\n> The only thing that I can think of would be if $decompress_program\n> --test were failing, but actually trying to decompress succeeded. I\n> would be inclined to dismiss that particular scenario as not important\n> enough to be worth the additional CPU cycles.\n\nYeah, it doesn't really add anything, so let's close this one off.\n\nThanks for the feedback\n\nIan Barwick\n\n\n",
"msg_date": "Tue, 6 Dec 2022 09:15:23 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
},
{
"msg_contents": "> The only thing that I can think of would be if $decompress_program\n> --test were failing, but actually trying to decompress succeeded. I\n> would be inclined to dismiss that particular scenario as not important\n> enough to be worth the additional CPU cycles.\n\nWhen I wrote this test, it was just to increase coverage for pg_basebackup.\nAs I checked again, it already does that in the pg_verifybackup\n008_untar.pl, 009_extract.pl test you mentioned.\nTherefore, I agree with your opinion.\n\n---\nRegards,\nDongWook Lee.\n\n\n",
"msg_date": "Tue, 6 Dec 2022 21:53:13 +0900",
"msg_from": "Dong Wook Lee <sh95119@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup: add test about zstd compress option"
}
] |
[
{
"msg_contents": "Hi,\n\n(Background: 697492434 added 3 new sort functions to remove the\nindirect function calls for the comparator function. This sped up\nsorting for various of our built-in data types.)\n\nThere was a bit of unfinished discussion around exactly how far to\ntake these specialisations for PG15. We could certainly add more.\n\nThere are various other things we could do to further speed up sorting\nfor these datatypes. One example is, we could add 3 more variations\nof these functions that can be called when there are no NULL datums to\nsort. That effectively multiplies the number of specialisations by 2,\nor adds another dimension.\n\nI have the following dimensions in mind for consideration:\n\n1. Specialisations to handle sorting of non-null datums (eliminates\nchecking for nulls in the comparison function)\n2. Specialisations to handle single column sorts (eliminates\ntiebreaker function call or any checks for existence of tiebreaker)\n3. ASC sort (No need for if (ssup->ssup_reverse) INVERT_COMPARE_RESULT(compare))\n\nIf we did all of the above then we'd end up with 3 * 2 * 2 * 2 = 24\nspecialization functions. That seems a bit excessive. So here I'd\nlike to discuss which ones we should add, if any.\n\nI've attached a very basic implementation of #1 which adds 3 new\nfunctions for sorting non-null datums. This could be made a bit more\nadvanced. For now, I just added a bool flag to track if we have any\nNULL datum1s in memtuples[]. For bounded sorts, we may remove NULLs\nfrom that array, and may end up with no nulls after having seen null.\nSo maybe a count would be better than a flag.\n\nA quick performance test with 1 million random INTs shows ~6%\nperformance improvement when there are no nulls.\n\nMaster\n$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 159.837 ms\nlatency average = 161.193 ms\nlatency average = 159.512 ms\n\nmaster + not_null_sort_specializations.patch\n$ pgbench -n -f bench.sql -T 60 postgres\nlatency average = 150.791 ms\nlatency average = 149.843 ms\nlatency average = 150.319 ms\n\nI didn't test for any regression when there are NULLs and we're unable\nto use the new specializations. I'm hoping the null tracking will be\nalmost free, but I will need to check.\n\nIt's all quite subjective to know which specializations should be\nadded. I think #1 is likely to have the biggest wins when it can be\nused as it removes the most branching in the comparator function,\nhowever the biggest gains are not the only thing to consider. We also\nneed to consider how commonly these functions will be used. I don't\nhave any information about that.\n\nDavid",
"msg_date": "Tue, 23 Aug 2022 14:17:59 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 9:18 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I have the following dimensions in mind for consideration:\n>\n> 1. Specialisations to handle sorting of non-null datums (eliminates\n> checking for nulls in the comparison function)\n> 2. Specialisations to handle single column sorts (eliminates\n> tiebreaker function call or any checks for existence of tiebreaker)\n> 3. ASC sort (No need for if (ssup->ssup_reverse) INVERT_COMPARE_RESULT(compare))\n>\n> If we did all of the above then we'd end up with 3 * 2 * 2 * 2 = 24\n> specialization functions. That seems a bit excessive. So here I'd\n> like to discuss which ones we should add, if any.\n>\n> I've attached a very basic implementation of #1 which adds 3 new\n> functions for sorting non-null datums.\n\nDid you happen to see\n\nhttps://www.postgresql.org/message-id/CAFBsxsFhq8VUSkUL5YO17cFXbCPwtbbxBu%2Bd9MFrrsssfDXm3Q%40mail.gmail.com\n\nwhere I experimented with removing all null handling? What I had in\nmind was pre-partitioning nulls and non-nulls when populating the\nSortTuple array, then calling qsort twice, once with the non-null\npartition with comparators that assume non-null, and (only if there\nare additional sort keys) once on the null partition. And the\npre-partitioning would take care of nulls first/last upfront. I\nhaven't looked into the feasibility of this yet, but the good thing\nabout the concept is that it removes null handling in the comparators\nwithout additional sort specializations.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 10:22:21 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 15:22, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Did you happen to see\n>\n> https://www.postgresql.org/message-id/CAFBsxsFhq8VUSkUL5YO17cFXbCPwtbbxBu%2Bd9MFrrsssfDXm3Q%40mail.gmail.com\n\nI missed that. It looks like a much more promising idea than what I\ncame up with. I've not looked at your code yet, but I'm interested and\nwill aim to look soon.\n\nDavid\n\n\n",
"msg_date": "Tue, 23 Aug 2022 16:24:01 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 11:24 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Tue, 23 Aug 2022 at 15:22, John Naylor <john.naylor@enterprisedb.com> wrote:\n> > Did you happen to see\n> >\n> > https://www.postgresql.org/message-id/CAFBsxsFhq8VUSkUL5YO17cFXbCPwtbbxBu%2Bd9MFrrsssfDXm3Q%40mail.gmail.com\n>\n> I missed that. It looks like a much more promising idea than what I\n> came up with. I've not looked at your code yet, but I'm interested and\n> will aim to look soon.\n\nNote that I haven't actually implemented this idea yet, just tried to\nmodel the effects by lobotomizing the current comparators. I think\nit's worth pursuing and will try to come back to it this cycle, but if\nyou or anyone else wants to try, that's fine of course.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:13:13 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 1:13 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n> On Tue, Aug 23, 2022 at 11:24 AM David Rowley <dgrowleyml@gmail.com>\nwrote:\n> >\n> > On Tue, 23 Aug 2022 at 15:22, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > > Did you happen to see\n> > >\n> > >\nhttps://www.postgresql.org/message-id/CAFBsxsFhq8VUSkUL5YO17cFXbCPwtbbxBu%2Bd9MFrrsssfDXm3Q%40mail.gmail.com\n> >\n> > I missed that. It looks like a much more promising idea than what I\n> > came up with. I've not looked at your code yet, but I'm interested and\n> > will aim to look soon.\n>\n> Note that I haven't actually implemented this idea yet, just tried to\n> model the effects by lobotomizing the current comparators. I think\n> it's worth pursuing and will try to come back to it this cycle, but if\n> you or anyone else wants to try, that's fine of course.\n\nComing back to this, I wanted to sketch out this idea in a bit more detail.\n\nHave two memtuple arrays, one for first sortkey null and one for first\nsortkey non-null:\n- Qsort the non-null array, including whatever specialization is available.\nExisting specialized comparators could ignore nulls (and their ordering)\ntaking less space in the binary.\n- Only if there is more than one sort key, qsort the null array. 
Ideally at\nsome point we would have a method of ignoring the first sortkey (this is an\nexisting opportunity that applies elsewhere as well).\n- To handle two arrays, grow_memtuples() would need some adjustment, as\nwould any callers that read the final result of an in-memory sort -- they\nwould need to retrieve the tuples starting with the appropriate array\ndepending on NULLS FIRST/LAST behavior.\n\nI believe external merges wouldn't have to do anything different, since\nwhen writing out the tapes, we read from the arrays in the right order.\n\n(One could extend this idea further and have two pools of tapes for null\nand non-null first sortkey, that are merged separately, in the right order.\nThat sounds like quite a bit more complexity than is worth, however.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Jan 2023 17:29:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
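[Editor's note: the two-memtuple-array sketch in the message above can be modeled in a few lines. This is a toy illustration, not tuplesort.c code: the struct and function names are invented, and a real SortTuple carries a tuple pointer and a Datum rather than a bare int key.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for tuplesort's SortTuple; names here are illustrative. */
typedef struct ToySortTuple
{
    int  datum1;   /* first sort key (possibly abbreviated) */
    bool isnull1;  /* is the first key NULL? */
} ToySortTuple;

/*
 * Scatter the input into a non-null array and a null array, preserving
 * input order within each side.  Each side can then be qsort'ed on its
 * own -- the non-null side with a specialization that ignores NULL
 * ordering -- and read back in NULLS FIRST/LAST order afterward.
 * Returns the count of non-null tuples; *nnulls gets the null count.
 */
static size_t
split_by_first_key_null(const ToySortTuple *in, size_t n,
                        ToySortTuple *nonnull, ToySortTuple *nulls,
                        size_t *nnulls)
{
    size_t n_nonnull = 0;

    *nnulls = 0;
    for (size_t i = 0; i < n; i++)
    {
        if (in[i].isnull1)
            nulls[(*nnulls)++] = in[i];
        else
            nonnull[n_nonnull++] = in[i];
    }
    return n_nonnull;
}
```

In this model, the grow_memtuples() adjustment mentioned above amounts to growing whichever of the two arrays the incoming tuple lands in.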
{
"msg_contents": "Hi, John!\nGenerally, I like the separation of non-null values before sorting and\nwould like to join as a reviewer when we come to a patch. I have only a\nsmall question:\n\n> - Only if there is more than one sort key, qsort the null array. Ideally at some point we would have a method of ignoring the first sortkey (this is an existing opportunity that applies elsewhere as well).\nShould we need to sort by the second sort key provided the first one\nin NULL by standard or by some part of the code relying on this? I\nsuppose NULL values in the first sort key mean attribute values are\nundefined and there is no preferred order between these tuples, even\nif their second sort keys are different.\n\nAnd maybe (unlikely IMO) we need some analog of NULLS DISTINCT/NOT\nDISTINCT in this scope?\n\nKind regards,\nPavel Borisov,\nSupabase.\n\n\n",
"msg_date": "Thu, 26 Jan 2023 15:13:24 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 6:14 PM Pavel Borisov <pashkin.elfe@gmail.com>\nwrote:\n\n> > - Only if there is more than one sort key, qsort the null array.\nIdeally at some point we would have a method of ignoring the first sortkey\n(this is an existing opportunity that applies elsewhere as well).\n\n> Should we need to sort by the second sort key provided the first one\n> in NULL by standard or by some part of the code relying on this? I\n\nI'm not sure I quite understand the question.\n\nIf there is more than one sort key, and the specialized comparison on the\nfirst key gives a definitive zero result, it falls back to comparing all\nkeys from the full tuple. (The sorttuple struct only contains the first\nsortkey, which might actually be an abbreviated key.) A possible\noptimization, relevant here and also elsewhere, is to compare only using\nkeys starting from key2. But note: if the first key is abbreviated, a zero\nresult is not definitive, and we must check the first key's full value from\nthe tuple.\n\n> suppose NULL values in the first sort key mean attribute values are\n> undefined and there is no preferred order between these tuples, even\n> if their second sort keys are different.\n\nThere is in fact a preferred order between these tuples -- the second key\nis the tie breaker in this case.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Jan 2023 19:06:55 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Thu, 26 Jan 2023 at 23:29, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Coming back to this, I wanted to sketch out this idea in a bit more detail.\n>\n> Have two memtuple arrays, one for first sortkey null and one for first sortkey non-null:\n> - Qsort the non-null array, including whatever specialization is available. Existing specialized comparators could ignore nulls (and their ordering) taking less space in the binary.\n> - Only if there is more than one sort key, qsort the null array. Ideally at some point we would have a method of ignoring the first sortkey (this is an existing opportunity that applies elsewhere as well).\n> - To handle two arrays, grow_memtuples() would need some adjustment, as would any callers that read the final result of an in-memory sort -- they would need to retrieve the tuples starting with the appropriate array depending on NULLS FIRST/LAST behavior.\n\nThanks for coming back to this. I've been thinking about this again\nrecently due to what was discovered in [1]. Basically, the patch on\nthat thread is trying to eliminate the need for the ORDER BY sort in a\nquery such as: SELECT a,b,row_number() over (order by a) from ab order\nby a,b; the idea is to just perform the full sort on a,b for the\nWindowClause to save from having to do an Incremental Sort on b for\nthe ORDER BY after evaluating the window funcs. Surprisingly (for\nme), we found a bunch of cases where the performance is better to do a\nsort on some of the keys, then do an Incremental sort on the remainder\nrather than just doing a single sort on everything.\n\nI don't really understand why this is fully yet, but one theory I have\nis that it might be down to work_mem being larger than the CPU's L3\ncache and causing more cache line misses. With the more simple sort,\nthere's less swapping of items in the array because the comparison\nfunction sees tuples as equal more often. 
I did find that the gap\nbetween the two is not as large with fewer tuples to sort.\n\nI think the slower sorts I found in [2] could also be partially caused\nby the current sort specialisation comparators re-comparing the\nleading column during a tie-break. I've not gotten around to disabling\nthe sort specialisations to see if and how much this is a factor for\nthat test.\n\nWhy I'm bringing this up here is that I wondered, in addition to what\nyou're mentioning above, if we're making some changes to allow\nmultiple in-memory arrays, would it be worth going a little further\nand allowing it so we could have N arrays which we'd try to keep under\nL3 cache size. Maybe some new GUC could be used to know what a good\nvalue is. With fast enough disks, it's often faster to use smaller\nvalues of work_mem which don't exceed L3 to keep these batches\nsmaller. Keeping them in memory would remove the need to write out\ntapes.\n\nI don't really know exactly if such a feature could easily be tagged\non to what you propose above, but I feel like what you're talking\nabout is part of the way towards that at least, maybe at the least,\nthe additional work could be kept in mind when this is written so that\nit's easier to extend in the future.\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvpAO5H_L84kn9gCJ_hihOavtmDjimKYyftjWtF69BJ=8Q@mail.gmail.com\n[2] https://postgr.es/m/CAApHDvqh%2BqOHk4sbvvy%3DQr2NjPqAAVYf82oXY0g%3DZ2hRpC2Vmg%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 01:14:57 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 7:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 26 Jan 2023 at 23:29, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > Coming back to this, I wanted to sketch out this idea in a bit more\ndetail.\n> >\n> > Have two memtuple arrays, one for first sortkey null and one for first\nsortkey non-null:\n> > - Qsort the non-null array, including whatever specialization is\navailable. Existing specialized comparators could ignore nulls (and their\nordering) taking less space in the binary.\n> > - Only if there is more than one sort key, qsort the null array.\nIdeally at some point we would have a method of ignoring the first sortkey\n(this is an existing opportunity that applies elsewhere as well).\n> > - To handle two arrays, grow_memtuples() would need some adjustment, as\nwould any callers that read the final result of an in-memory sort -- they\nwould need to retrieve the tuples starting with the appropriate array\ndepending on NULLS FIRST/LAST behavior.\n>\n> Thanks for coming back to this. I've been thinking about this again\n> recently due to what was discovered in [1].\n\nThat was indeed part of the motivation for bringing this up.\n\n> Why I'm bringing this up here is that I wondered, in addition to what\n> you're mentioning above, if we're making some changes to allow\n> multiple in-memory arrays, would it be worth going a little further\n> and allowing it so we could have N arrays which we'd try to keep under\n> L3 cache size. Maybe some new GUC could be used to know what a good\n> value is. With fast enough disks, it's often faster to use smaller\n> values of work_mem which don't exceed L3 to keep these batches\n> smaller. Keeping them in memory would remove the need to write out\n> tapes.\n\nThat's interesting. 
I don't know enough to guess how complex it would be to\nmake \"external\" merges agnostic about whether the tapes are on disk or in\nmemory.\n\nIf in-memory sorts were designed analogously to external ones,\ngrow_memtuples would never have to repalloc, it could just allocate a new\ntape, which could further shave some cycles.\n\nMy hunch in my last email was that having separate groups of tapes for each\nnull/non-null first sortkey would be complex, because it increases the\nnumber of places that have to know about nulls and their ordering. If we\nwanted to go to N arrays instead of 2, and additionally wanted separate\nnull/non-null treatment, it seems we would still need 2 sets of arrays, one\nwith N non-null-first-sortkey tapes and one with M null-first-sortkey tapes.\n\nSo using 2 arrays seems like the logical first step. I'd be curious to hear\nother possible development paths.\n\n> I think the slower sorts I found in [2] could also be partially caused\n> by the current sort specialisation comparators re-comparing the\n> leading column during a tie-break. I've not gotten around to disabling\n> the sort specialisations to see if and how much this is a factor for\n> that test.\n\nRight, that's worth addressing independently of the window function\nconsideration. I'm still swapping this area back in my head, but I believe\none issue is that state->base.onlyKey signals two things: \"one sortkey, not\nabbreviated\". We could add a separate branch for \"first key unabbreviated,\nnkeys>1\" -- I don't think we'd need to specialize, just branch -- and\ninstead of state->base.comparetup, call a set of analogous functions that\nonly handle keys 2 and above (comparetup_tail_* ? or possibly just add a\nboolean parameter compare_first). 
That would not pose a huge challenge, I\nthink, since they're already written like this:\n\n/* Compare the leading sort key */\ncompare = ApplySortComparator(...);\nif (compare != 0)\n return compare;\n\n/* Compare additional sort keys */\n...\n\nThe null/non-null separation would eliminate a bunch of branches in inlined\ncomparators, so we could afford to add another branch for number of keys.\n\nI haven't thought through either of these ideas in the gory detail, but I\ndon't yet see any big obstacles.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 27 Jan 2023 13:56:29 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
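[Editor's note: the comparetup shape quoted in the message above, and the proposed "tail" variant that skips the already-compared leading key during tie-breaks, might look like this toy model. comparetup_tail is the hypothetical name from the mail, and the int keys stand in for ApplySortComparator() calls; this is not actual tuplesort code.]

```c
#include <assert.h>

/* Toy tuple with up to three int sort keys; illustrative only. */
typedef struct ToyTuple
{
    int keys[3];
    int nkeys;
} ToyTuple;

/* Stand-in for ApplySortComparator() on one key, ascending order. */
static int
toy_apply_comparator(int a, int b)
{
    return (a > b) - (a < b);
}

/* Shape of the existing comparetup functions: leading key first. */
static int
comparetup_full(const ToyTuple *a, const ToyTuple *b)
{
    /* Compare the leading sort key */
    int compare = toy_apply_comparator(a->keys[0], b->keys[0]);

    if (compare != 0)
        return compare;

    /* Compare additional sort keys */
    for (int k = 1; k < a->nkeys; k++)
    {
        compare = toy_apply_comparator(a->keys[k], b->keys[k]);
        if (compare != 0)
            return compare;
    }
    return 0;
}

/*
 * Hypothetical comparetup_tail: for callers whose unabbreviated first
 * key already compared equal definitively, start from the second key
 * and avoid re-comparing the leading column during the tie-break.
 */
static int
comparetup_tail(const ToyTuple *a, const ToyTuple *b)
{
    for (int k = 1; k < a->nkeys; k++)
    {
        int compare = toy_apply_comparator(a->keys[k], b->keys[k]);

        if (compare != 0)
            return compare;
    }
    return 0;
}
```

Note the caveat from the mail: if the first key is abbreviated, a zero result is not definitive, so the full comparator must still recheck the first key's full value before falling through to the tail.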
{
"msg_contents": "I wrote:\n\n> On Thu, Jan 26, 2023 at 7:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > I think the slower sorts I found in [2] could also be partially caused\n> > by the current sort specialisation comparators re-comparing the\n> > leading column during a tie-break. I've not gotten around to disabling\n> > the sort specialisations to see if and how much this is a factor for\n> > that test.\n>\n> Right, that's worth addressing independently of the window function\nconsideration. I'm still swapping this area back in my head, but I believe\none issue is that state->base.onlyKey signals two things: \"one sortkey, not\nabbreviated\". We could add a separate branch for \"first key unabbreviated,\nnkeys>1\" -- I don't think we'd need to specialize, just branch -- and\ninstead of state->base.comparetup, call a set of analogous functions that\nonly handle keys 2 and above (comparetup_tail_* ? or possibly just add a\nboolean parameter compare_first). That would not pose a huge challenge, I\nthink, since they're already written like this:\n>\n> /* Compare the leading sort key */\n> compare = ApplySortComparator(...);\n> if (compare != 0)\n> return compare;\n>\n> /* Compare additional sort keys */\n> ...\n>\n> The null/non-null separation would eliminate a bunch of branches in\ninlined comparators, so we could afford to add another branch for number of\nkeys.\n\nI gave this a go, and it turns out we don't need any extra branches in the\ninlined comparators -- the new fallbacks are naturally written to account\nfor the \"!onlyKey\" case. If the first sortkey was abbreviated, call its\nfull comparator, otherwise skip to the next sortkey (if there isn't one, we\nshouldn't have gotten here). The existing comparetup functions try the\nsimple case and then call the fallback (which could be inlined for them but\nI haven't looked).\n\nTests pass, but I'm not sure yet if we need more tests. 
I don't have a\npurpose-built benchmark at the moment, but I'll see if any of my existing\ntests exercise this code path. I can also try the window function case\nunless someone beats me to it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Jan 2023 17:32:04 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
},
{
"msg_contents": "I wrote:\n> Have two memtuple arrays, one for first sortkey null and one for first\nsortkey non-null:\n\nHacking on this has gotten only as far as the \"compiles but segfaults\"\nstage, but I wanted to note an idea that occurred to me:\n\nInternal qsort doesn't need the srctape member, and removing both that and\nisnull1 would allow 16-byte \"init-tuples\" for qsort, which would save a bit\nof work_mem space, binary space for qsort specializations, and work done\nduring swaps.\n\nDuring heap sort, we already copy one entry into a stack variable to keep\nfrom clobbering it, so it's not a big step to read a member from the init\narray and form a regular sorttuple from it.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Feb 2023 18:46:50 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Considering additional sort specialisation functions for PG16"
}
] |
[
{
"msg_contents": "Hi Andres,\n\nOne of my tests hit an assertion in dshash_detach(). Once again this is\nwith BDR and I don't have a reproduction case with standalone PG. Also,\nthis probably happened because of some weirdness in systemd where it\nremoves shared memory segments underneath, resulting in ERRORs being thrown.\n\nHowever, looking at the stack trace and the code, I wonder if it's possible\nto hit the assertion even with stock postgres. In my case, the stack trace\nlooked like:\n\n```\n(gdb) bt\n#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50\n#1 0x00007fa5775b9535 in __GI_abort () at abort.c:79\n#2 0x0000556dbce828bc in ExceptionalCondition\n(conditionName=0x556dbd027c88\n\"!LWLockAnyHeldByMe(&(hash_table)->control->partitions[0].lock,\nDSHASH_NUM_PARTITIONS, sizeof(dshash_partition))\",\n errorType=0x556dbd027c44 \"FailedAssertion\", fileName=0x556dbd027c10\n\"/opt/postgres/src/postgres/src/backend/lib/dshash.c\", lineNumber=309)\n at /opt/postgres/src/postgres/src/backend/utils/error/assert.c:69\n#3 0x0000556dbcae0aae in dshash_detach (hash_table=0x556dbe0294f0) at\n/opt/postgres/src/postgres/src/backend/lib/dshash.c:309\n#4 0x0000556dbcd045bf in pgstat_detach_shmem () at\n/opt/postgres/src/postgres/src/backend/utils/activity/pgstat_shmem.c:240\n#5 0x0000556dbccfd263 in pgstat_shutdown_hook (code=0, arg=0) at\n/opt/postgres/src/postgres/src/backend/utils/activity/pgstat.c:509\n#6 0x0000556dbcca18b1 in shmem_exit (code=0) at\n/opt/postgres/src/postgres/src/backend/storage/ipc/ipc.c:239\n#7 0x0000556dbcca1769 in proc_exit_prepare (code=0) at\n/opt/postgres/src/postgres/src/backend/storage/ipc/ipc.c:194\n#8 0x0000556dbcca16ba in proc_exit (code=0) at\n/opt/postgres/src/postgres/src/backend/storage/ipc/ipc.c:107\n#9 0x0000556dbcbfcadc in AutoVacWorkerMain (argc=0, argv=0x0) at\n/opt/postgres/src/postgres/src/backend/postmaster/autovacuum.c:1590\n#10 0x0000556dbcbfc968 in StartAutoVacWorker () 
at\n/opt/postgres/src/postgres/src/backend/postmaster/autovacuum.c:1496\n#11 0x0000556dbcc0aa50 in StartAutovacuumWorker () at\n/opt/postgres/src/postgres/src/backend/postmaster/postmaster.c:5534\n#12 0x0000556dbcc0a56b in sigusr1_handler (postgres_signal_arg=10) at\n/opt/postgres/src/postgres/src/backend/postmaster/postmaster.c:5239\n#13 <signal handler called>\n#14 0x00007fa577687a27 in __GI___select (nfds=10, readfds=0x7fff6e69a370,\nwritefds=0x0, exceptfds=0x0, timeout=0x7fff6e69a3f0) at\n../sysdeps/unix/sysv/linux/select.c:41\n#15 0x0000556dbcc05e7f in ServerLoop () at\n/opt/postgres/src/postgres/src/backend/postmaster/postmaster.c:1770\n#16 0x0000556dbcc0581e in PostmasterMain (argc=5, argv=0x556dbe027490) at\n/opt/postgres/src/postgres/src/backend/postmaster/postmaster.c:1478\n#17 0x0000556dbcafcaf1 in main (argc=5, argv=0x556dbe027490) at\n/opt/postgres/src/postgres/src/backend/main/main.c:202\n```\n\nIf the autovacuum worker is not inside a transaction and throws an ERROR\nwhile holding a lock on the dshash, AFAICS it can hit proc_exit() without\nreleasing the lock (because there is no abort transaction processing)\n\nFor example, at autovaccum.c:1694 pgstat_report_autovac() can\ntheoretically deep down call `dsa_get_address()`, which calls\n`get_segment_by_index()` and that function has couple of elog(ERROR) calls.\n\nI understand that this ERROR path is probably not likely to hit during\nnormal course, but if it does like in my case, then it will result in\nassertion failure. 
I also think a similar problem may have happened in\nolder releases (not the assertion failure, but backends exiting with a\nLWLock still held), but maybe the likelihood was very small before.\n\nIf this is a problem worth addressing, I wonder if we should explicitly\nrelease all LWLocks in the long jump handler, like we do for other\nprocesses.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB: https://www.enterprisedb..com",
"msg_date": "Tue, 23 Aug 2022 12:28:48 +0530",
"msg_from": "Pavan Deolasee <pavan.deolasee@gmail.com>",
"msg_from_op": true,
"msg_subject": "Question regarding ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME in\n dshash_detach()"
}
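[Editor's note: the failure mode described above, and the proposed fix, reduce to a toy model: an ERROR thrown between lock and unlock longjmps past the release, so dshash_detach()'s assertion fires at proc_exit() unless the handler releases everything first. The names below only mimic the real LWLock/dshash APIs and are invented for illustration.]

```c
#include <assert.h>

/* Toy stand-in for the backend's count of held LWLocks. */
static int num_held_lwlocks = 0;

static void toy_lock_partition(void)   { num_held_lwlocks++; }
static void toy_unlock_partition(void) { num_held_lwlocks--; }

/*
 * Stand-in for LWLockReleaseAll(): what the mail proposes calling from
 * the worker's longjmp handler before proc_exit(), so an ERROR thrown
 * while a dshash partition lock is held cannot leak the lock into the
 * shmem-exit callbacks.
 */
static void toy_release_all_lwlocks(void) { num_held_lwlocks = 0; }

/* Mirrors ASSERT_NO_PARTITION_LOCKS_HELD_BY_ME in dshash_detach(). */
static int toy_detach_would_assert(void) { return num_held_lwlocks != 0; }
```

In the normal path, lock and unlock pair up and detach is safe; in the ERROR path, only the release-all call keeps the detach assertion from firing.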
] |
[
{
"msg_contents": "Often it is beneficial to review one's schema with a view to removing\nindexes (and sometimes tables) that are no longer required. It's very\ndifficult to understand when that is the case by looking at the number of\nscans of a relation as, for example, an index may be used infrequently but\nmay be critical in those times when it is used.\n\nThe attached patch against HEAD adds optional tracking of the last scan\ntime for relations. It updates pg_stat_*_tables with new last_seq_scan and\nlast_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to\nhelp with this.\n\nDue to the use of gettimeofday(), those values are only maintained if a new\nGUC, track_scans, is set to on. By default, it is off.\n\nI did run a 12 hour test to see what the performance impact is. pgbench was\nrun with scale factor 10000 and 75 users across 4 identical bare metal\nmachines running Rocky 8 in parallel which showed roughly a -2% average\nperformance penalty against HEAD with track_scans enabled. Machines were\nPowerEdge R7525's with 128GB RAM, dual 16C/32T AMD 7302 CPUs, with the data\ndirectory on 6 x 800GB 12Gb/s SSD SAS drives in RAID 0. Kernel time source\nis tsc.\n\n HEAD track_scans Penalty (%)\nbox1 19582.49735 19341.8881 -1.22869541\nbox2 19936.55513 19928.07479 -0.04253664659\nbox3 19631.78895 18649.64379 -5.002830696\nbox4 19810.86767 19420.67192 -1.969604525\nAverage 19740.42728 19335.06965 -2.05343896\n\nDoc and test updates included.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 23 Aug 2022 10:55:09 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Tracking last scan time"
},
{
"msg_contents": "On Tue, 23 Aug 2022 at 11:00, Dave Page <dpage@pgadmin.org> wrote:\n>\n> Often it is beneficial to review one's schema with a view to removing indexes (and sometimes tables) that are no longer required. It's very difficult to understand when that is the case by looking at the number of scans of a relation as, for example, an index may be used infrequently but may be critical in those times when it is used.\n\nI think this is easy to answer in a prometheus/datadog/etc world since\nyou can consult the history of the count to see when it was last\nincremented. (Or do effectively that continuously.)\n\nI guess that just reinforces the idea that it should be optional.\nPerhaps there's room for some sort of general feature for controlling\nvarious time series aggregates like max() and min() sum() or, uhm,\ntimeoflastchange() on whatever stats you want. That would let us\nremove a bunch of stuff from pg_stat_statements and let users turn on\njust the ones they want. And also let users enable things like time of\nlast rollback or conflict etc. But that's just something to think\nabout down the road.\n\n> The attached patch against HEAD adds optional tracking of the last scan time for relations. It updates pg_stat_*_tables with new last_seq_scan and last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to help with this.\n>\n> Due to the use of gettimeofday(), those values are only maintained if a new GUC, track_scans, is set to on. By default, it is off.\n\nBikeshedding warning -- \"track_scans\" could equally apply to almost\nany stats about scans. I think the really relevant thing here is the\ntimes, not the scans. I think the GUC should be \"track_scan_times\". Or\ncould that still be confused with scan durations? 
Maybe\n\"track_scan_timestamps\"?\n\nYou could maybe make the gettimeofday cheaper by doing it less often.\nLike, skipping the increment if the old timestamp is newer than 1s\nbefore the transaction start time (I think that's available free if\nsome other guc is enabled but I don't recall). Or isn't this cb\nnormally happening after transaction end? So xactStopTimestamp might\nbe available already?\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 23 Aug 2022 13:07:05 +0100",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Tue, 23 Aug 2022 at 13:07, Greg Stark <stark@mit.edu> wrote:\n\n> On Tue, 23 Aug 2022 at 11:00, Dave Page <dpage@pgadmin.org> wrote:\n> >\n> > Often it is beneficial to review one's schema with a view to removing\n> indexes (and sometimes tables) that are no longer required. It's very\n> difficult to understand when that is the case by looking at the number of\n> scans of a relation as, for example, an index may be used infrequently but\n> may be critical in those times when it is used.\n>\n> I think this is easy to answer in a prometheus/datadog/etc world since\n> you can consult the history of the count to see when it was last\n> incremented. (Or do effectively that continously).\n>\n\nYes. But not every PostgreSQL instance is monitored in that way.\n\n\n>\n> I guess that just reinforces the idea that it should be optional.\n> Perhaps there's room for some sort of general feature for controlling\n> various time series aggregates like max() and min() sum() or, uhm,\n> timeoflastchange() on whatever stats you want. That would let us\n> remove a bunch of stuff from pg_stat_statements and let users turn on\n> just the ones they want. And also let users enable things like time of\n> last rollback or conflict etc. But that's just something to think\n> about down the road.\n>\n\nIt's certainly an interesting idea.\n\n\n>\n> > The attached patch against HEAD adds optional tracking of the last scan\n> time for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to\n> help with this.\n> >\n> > Due to the use of gettimeofday(), those values are only maintained if a\n> new GUC, track_scans, is set to on. By default, it is off.\n>\n> Bikeshedding warning -- \"track_scans\" could equally apply to almost\n> any stats about scans. I think the really relevant thing here is the\n> times, not the scans. I think the GUC should be \"track_scan_times\". 
Or\n> could that still be confused with scan durations? Maybe\n> \"track_scan_timestamps\"?\n>\n\nThe latter seems reasonable.\n\n\n>\n> You could maybe make the gettimeofday cheaper by doing it less often.\n> Like, skipping the increment if the old timestamp is newer than 1s\n> before the transaction start time (I think that's available free if\n> some other guc is enabled but I don't recall). Or isn't this cb\n> normally happening after transaction end? So xactStopTimestamp might\n> be available already?\n>\n\nSomething like:\n\n if (pgstat_track_scan_timestamps && lstats->t_counts.t_numscans &&\n tabentry->lastscan + USECS_PER_SEC <\nGetCurrentTransactionStopTimestamp())\n tabentry->lastscan = GetCurrentTimestamp();\n\n?\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Aug 2022 14:01:15 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:\n> Often it is beneficial to review one's schema with a view to removing indexes\n> (and sometimes tables) that are no longer required. It's very difficult to\n> understand when that is the case by looking at the number of scans of a\n> relation as, for example, an index may be used infrequently but may be critical\n> in those times when it is used.\n> \n> The attached patch against HEAD adds optional tracking of the last scan time\n> for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to\n> help with this.\n\nWould it be simpler to allow the sequential and index scan columns to be\ncleared so you can look later to see if it is non-zero? Should we allow\narbitrary clearing of stat columns?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 24 Aug 2022 10:18:20 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 15:18, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:\n> > Often it is beneficial to review one's schema with a view to removing\n> indexes\n> > (and sometimes tables) that are no longer required. It's very difficult\n> to\n> > understand when that is the case by looking at the number of scans of a\n> > relation as, for example, an index may be used infrequently but may be\n> critical\n> > in those times when it is used.\n> >\n> > The attached patch against HEAD adds optional tracking of the last scan\n> time\n> > for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column\n> to\n> > help with this.\n>\n> Would it be simpler to allow the sequential and index scan columns to be\n> cleared so you can look later to see if it is non-zero? Should we allow\n> arbitrary clearing of stat columns?\n>\n\nI don't think so, because then stat values wouldn't necessarily correlate\nwith each other, and you wouldn't know when any of them were last reset\nunless we started tracking each individual reset. At least now you can see\nwhen they were all reset, and you know they were reset at the same time.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Aug 2022 16:01:21 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:\n> On Wed, 24 Aug 2022 at 15:18, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:\n> > Often it is beneficial to review one's schema with a view to removing\n> indexes\n> > (and sometimes tables) that are no longer required. It's very difficult\n> to\n> > understand when that is the case by looking at the number of scans of a\n> > relation as, for example, an index may be used infrequently but may be\n> critical\n> > in those times when it is used.\n> >\n> > The attached patch against HEAD adds optional tracking of the last scan\n> time\n> > for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column\n> to\n> > help with this.\n> \n> Would it be simpler to allow the sequential and index scan columns to be\n> cleared so you can look later to see if it is non-zero? Should we allow\n> \n> I don't think so, because then stat values wouldn't necessarily correlate with\n> each other, and you wouldn't know when any of them were last reset unless we\n> started tracking each individual reset. At least now you can see when they were\n> all reset, and you know they were reset at the same time.\n\nYeah, true. I was more asking if these two columns are in some way\nspecial or if people would want a more general solution, and if so, is\nthat something we want in core Postgres.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 24 Aug 2022 11:03:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 16:03, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:\n> > On Wed, 24 Aug 2022 at 15:18, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Tue, Aug 23, 2022 at 10:55:09AM +0100, Dave Page wrote:\n> > > Often it is beneficial to review one's schema with a view to\n> removing\n> > indexes\n> > > (and sometimes tables) that are no longer required. It's very\n> difficult\n> > to\n> > > understand when that is the case by looking at the number of scans\n> of a\n> > > relation as, for example, an index may be used infrequently but\n> may be\n> > critical\n> > > in those times when it is used.\n> > >\n> > > The attached patch against HEAD adds optional tracking of the last\n> scan\n> > time\n> > > for relations. It updates pg_stat_*_tables with new last_seq_scan\n> and\n> > > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan\n> column\n> > to\n> > > help with this.\n> >\n> > Would it be simpler to allow the sequential and index scan columns\n> to be\n> > cleared so you can look later to see if it is non-zero? Should we\n> allow\n> >\n> > I don't think so, because then stat values wouldn't necessarily\n> correlate with\n> > each other, and you wouldn't know when any of them were last reset\n> unless we\n> > started tracking each individual reset. At least now you can see when\n> they were\n> > all reset, and you know they were reset at the same time.\n>\n> Yeah, true. I was more asking if these two columns are in some way\n> special or if people would want a more general solution, and if so, is\n> that something we want in core Postgres.\n>\n\nThey're special in the sense that they're the ones you're most likely going\nto look at to see how much a relation is used I think (at least, I'd look\nat them rather than the tuple counts).\n\nThere are certainly other things for which a last usage value may be\nuseful. Functions/procedures for example, or views. 
The benefits to\nremoving unused objects of that type are far, far lower than indexes or\ntables of course.\n\nThere are other potential use cases for similar timestamps, such as object\ncreation times (and creating user), but they are more useful for auditing\nthan monitoring and optimisation.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 24 Aug 2022 16:15:47 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Thu, 25 Aug 2022 at 03:03, Bruce Momjian <bruce@momjian.us> wrote:\n>\n> On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:\n> > On Wed, 24 Aug 2022 at 15:18, Bruce Momjian <bruce@momjian.us> wrote:\n> > Would it be simpler to allow the sequential and index scan columns to be\n> > cleared so you can look later to see if it is non-zero? Should we allow\n> >\n> > I don't think so, because then stat values wouldn't necessarily correlate with\n> > each other, and you wouldn't know when any of them were last reset unless we\n> > started tracking each individual reset. At least now you can see when they were\n> > all reset, and you know they were reset at the same time.\n>\n> Yeah, true. I was more asking if these two columns are in some way\n> special or if people would want a more general solution, and if so, is\n> that something we want in core Postgres.\n\nBack when I used to do a bit of PostgreSQL DBA stuff, I had a nightly\njob set up to record the state of pg_stat_all_tables and put that into\nanother table along with the current date. I then had a view that did\nsome calculations with col - LAG(col) OVER (PARTITION BY relid ORDER\nBY date) to fetch the numerical values for each date. I didn't ever\nwant to reset the stats because it messes with autovacuum. If you zero\nout n_ins_since_vacuum more often than auto-vacuum would trigger, then\nbad things happen over time (we should really warn about that in the\ndocs).\n\nI don't have a particular opinion about the patch, I'm just pointing\nout that there are other ways. Even just writing down the numbers on a\npost-it note and coming back in a month to see if they've changed is\nenough to tell if the table or index has been used.\n\nWe do also need to consider now that stats are stored in shared memory\nthat any fields we add are in RAM.\n\nDavid\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:43:41 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Thu, 25 Aug 2022 at 01:44, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 25 Aug 2022 at 03:03, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Wed, Aug 24, 2022 at 04:01:21PM +0100, Dave Page wrote:\n> > > On Wed, 24 Aug 2022 at 15:18, Bruce Momjian <bruce@momjian.us> wrote:\n> > > Would it be simpler to allow the sequential and index scan columns\n> to be\n> > > cleared so you can look later to see if it is non-zero? Should we\n> allow\n> > >\n> > > I don't think so, because then stat values wouldn't necessarily\n> correlate with\n> > > each other, and you wouldn't know when any of them were last reset\n> unless we\n> > > started tracking each individual reset. At least now you can see when\n> they were\n> > > all reset, and you know they were reset at the same time.\n> >\n> > Yeah, true. I was more asking if these two columns are in some way\n> > special or if people would want a more general solution, and if so, is\n> > that something we want in core Postgres.\n>\n> Back when I used to do a bit of PostgreSQL DBA stuff, I had a nightly\n> job setup to record the state of pg_stat_all_tables and put that into\n> another table along with the current date. I then had a view that did\n> some calculations with col - LAG(col) OVER (PARTITION BY relid ORDER\n> BY date) to fetch the numerical values for each date. I didn't ever\n> want to reset the stats because it messes with autovacuum. If you zero\n> out n_ins_since_vacuum more often than auto-vacuum would trigger, then\n> bad things happen over time (we should really warn about that in the\n> docs).\n>\n> I don't have a particular opinion about the patch, I'm just pointing\n> out that there are other ways. 
Even just writing down the numbers on a\n> post-it note and coming back in a month to see if they've changed is\n> enough to tell if the table or index has been used.\n>\n\nThere are usually other ways to perform monitoring tasks, but there is\nsomething to be said for the convenience of having functionality built in\nand not having to rely on tools, scripts, or post-it notes :-)\n\n\n>\n> We do also need to consider now that stats are stored in shared memory\n> that any fields we add are in RAM.\n>\n\nThat is a fair point. I believe this is both minimal, and useful though.\n\nI've attached a v2 patch that incorporates Greg's suggestions.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 14:05:36 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:\n> On Thu, 25 Aug 2022 at 01:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> I don't have a particular opinion about the patch, I'm just pointing\n> out that there are other ways. Even just writing down the numbers on a\n> post-it note and coming back in a month to see if they've changed is\n> enough to tell if the table or index has been used.\n> \n> \n> There are usually other ways to perform monitoring tasks, but there is\n> something to be said for the convenience of having functionality built in and\n> not having to rely on tools, scripts, or post-it notes :-)\n\nShould we consider using something cheaper like time() so we don't need\na GUC to enable this?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:46:23 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 19:46, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:\n> > On Thu, 25 Aug 2022 at 01:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I don't have a particular opinion about the patch, I'm just pointing\n> > out that there are other ways. Even just writing down the numbers on\n> a\n> > post-it note and coming back in a month to see if they've changed is\n> > enough to tell if the table or index has been used.\n> >\n> >\n> > There are usually other ways to perform monitoring tasks, but there is\n> > something to be said for the convenience of having functionality built\n> in and\n> > not having to rely on tools, scripts, or post-it notes :-)\n>\n> Should we consider using something cheaper like time() so we don't need\n> a GUC to enable this?\n>\n\nInteresting idea, but on my mac at least, 100,000,000 gettimeofday() calls\ntakes about 2 seconds, whilst 100,000,000 time() calls takes 14(!) seconds.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 31 Aug 2022 17:02:33 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 05:02:33PM +0100, Dave Page wrote:\n> \n> \n> On Tue, 30 Aug 2022 at 19:46, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:\n> > On Thu, 25 Aug 2022 at 01:44, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I don't have a particular opinion about the patch, I'm just pointing\n> > out that there are other ways. Even just writing down the numbers on\n> a\n> > post-it note and coming back in a month to see if they've changed is\n> > enough to tell if the table or index has been used.\n> >\n> >\n> > There are usually other ways to perform monitoring tasks, but there is\n> > something to be said for the convenience of having functionality built in\n> and\n> > not having to rely on tools, scripts, or post-it notes :-)\n> \n> Should we consider using something cheaper like time() so we don't need\n> a GUC to enable this?\n> \n> \n> Interesting idea, but on my mac at least, 100,000,000 gettimeofday() calls\n> takes about 2 seconds, whilst 100,000,000 time() calls takes 14(!) seconds.\n\nWow. I was just thinking you need second-level accuracy, which must be\ncheap somewhere.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 12:13:37 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 10:55:09 +0100, Dave Page wrote:\n> Often it is beneficial to review one's schema with a view to removing\n> indexes (and sometimes tables) that are no longer required. It's very\n> difficult to understand when that is the case by looking at the number of\n> scans of a relation as, for example, an index may be used infrequently but\n> may be critical in those times when it is used.\n> \n> The attached patch against HEAD adds optional tracking of the last scan\n> time for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to\n> help with this.\n> \n> Due to the use of gettimeofday(), those values are only maintained if a new\n> GUC, track_scans, is set to on. By default, it is off.\n> \n> I did run a 12 hour test to see what the performance impact is. pgbench was\n> run with scale factor 10000 and 75 users across 4 identical bare metal\n> machines running Rocky 8 in parallel which showed roughly a -2% average\n> performance penalty against HEAD with track_scans enabled. Machines were\n> PowerEdge R7525's with 128GB RAM, dual 16C/32T AMD 7302 CPUs, with the data\n> directory on 6 x 800GB 12Gb/s SSD SAS drives in RAID 0. Kernel time source\n> is tsc.\n> \n> HEAD track_scans Penalty (%)\n> box1 19582.49735 19341.8881 -1.22869541\n> box2 19936.55513 19928.07479 -0.04253664659\n> box3 19631.78895 18649.64379 -5.002830696\n> box4 19810.86767 19420.67192 -1.969604525\n> Average 19740.42728 19335.06965 -2.05343896\n\nBased on the size of those numbers this was a r/w pgbench. If it has this\nnoticeable an impact for r/w, with a pretty low number of scans/sec, how's the\noverhead for r/o (which can have 2 orders of magnitude more scans/sec)? It\nmust be quite bad.\n\nI don't think we should accept this feature with this overhead - but I also\nthink we can do better, by accepting a bit less accuracy. 
For this to be\nuseful we don't need a perfectly accurate timestamp. The statement start time\nis probably not accurate enough, but we could just have bgwriter or such\nupdate one in shared memory every time we wake up? Or perhaps we could go to\nan even lower granularity, by putting in the current LSN or such?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 09:21:08 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 18:21, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-23 10:55:09 +0100, Dave Page wrote:\n> > Often it is beneficial to review one's schema with a view to removing\n> > indexes (and sometimes tables) that are no longer required. It's very\n> > difficult to understand when that is the case by looking at the number of\n> > scans of a relation as, for example, an index may be used infrequently but\n> > may be critical in those times when it is used.\n> >\n> > The attached patch against HEAD adds optional tracking of the last scan\n> > time for relations. It updates pg_stat_*_tables with new last_seq_scan and\n> > last_idx_scan columns, and pg_stat_*_indexes with a last_idx_scan column to\n> > help with this.\n> >\n> > Due to the use of gettimeofday(), those values are only maintained if a new\n> > GUC, track_scans, is set to on. By default, it is off.\n> >\n> > I did run a 12 hour test to see what the performance impact is. pgbench was\n> > run with scale factor 10000 and 75 users across 4 identical bare metal\n> > machines running Rocky 8 in parallel which showed roughly a -2% average\n> > performance penalty against HEAD with track_scans enabled. Machines were\n> > PowerEdge R7525's with 128GB RAM, dual 16C/32T AMD 7302 CPUs, with the data\n> > directory on 6 x 800GB 12Gb/s SSD SAS drives in RAID 0. Kernel time source\n> > is tsc.\n> >\n> > HEAD track_scans Penalty (%)\n> > box1 19582.49735 19341.8881 -1.22869541\n> > box2 19936.55513 19928.07479 -0.04253664659\n> > box3 19631.78895 18649.64379 -5.002830696\n> > box4 19810.86767 19420.67192 -1.969604525\n> > Average 19740.42728 19335.06965 -2.05343896\n>\n> Based on the size of those numbers this was a r/w pgbench. If it has this\n> noticable an impact for r/w, with a pretty low number of scans/sec, how's the\n> overhead for r/o (which can have 2 orders of magnitude more scans/sec)? It\n> must be quite bad.\n>\n> I don't think we should accept this feature with this overhead - but I also\n> think we can do better, by accepting a bit less accuracy. For this to be\n> useful we don't need a perfectly accurate timestamp. The statement start time\n> is probably not accurate enough, but we could just have bgwriter or such\n> update one in shared memory every time we wake up? Or perhaps we could go to\n> an even lower granularity, by putting in the current LSN or such?\n\nI don't think that LSN is precise enough. For example, if you're in a\n(mostly) read-only system, the system may go long times without any\nmeaningful records being written.\n\nAs for having a lower granularity and preventing the\none-syscall-per-Relation issue, can't we reuse the query_start or\nstate_change timestamps that appear in pg_stat_activity (potentially\nupdated immediately before this stat flush), or some other per-backend\ntimestamp that is already maintained and considered accurate enough\nfor this use?\nRegardless, with this patch as it is we get a new timestamp for each\nrelation processed, which I think is a waste of time (heh) even in\nVDSO-enabled systems.\n\nApart from the above, I don't have any other meaningful opinion on\nthis patch - it might be a good addition, but I don't consume stats\noften enough to make a good cost / benefit comparison.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 31 Aug 2022 19:52:49 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 07:52:49PM +0200, Matthias van de Meent wrote:\n> As for having a lower granularity and preventing the\n> one-syscall-per-Relation issue, can't we reuse the query_start or\n> state_change timestamps that appear in pg_stat_activity (potentially\n\nYeah, query start should be fine, but not transaction start time.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:11:11 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-31 19:52:49 +0200, Matthias van de Meent wrote:\n> As for having a lower granularity and preventing the\n> one-syscall-per-Relation issue, can't we reuse the query_start or\n> state_change timestamps that appear in pg_stat_activity (potentially\n> updated immediately before this stat flush), or some other per-backend\n> timestamp that is already maintained and considered accurate enough\n> for this use?\n\nThe problem is that it won't change at all for a query that runs for a week -\nand we'll report the timestamp from a week ago when it finally ends.\n\nBut given this is done when stats are flushed, which only happens after the\ntransaction ended, we can just use GetCurrentTransactionStopTimestamp() - if\nwe got to flushing the transaction stats we'll already have computed that.\n\n\n> \ttabentry->numscans += lstats->t_counts.t_numscans;\n> +\tif (pgstat_track_scans && lstats->t_counts.t_numscans)\n> +\t\ttabentry->lastscan = GetCurrentTimestamp();\n\nBesides replacing GetCurrentTimestamp() with\nGetCurrentTransactionStopTimestamp(), this should then also check if\ntabentry->lastscan is already newer than the new timestamp.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 31 Aug 2022 11:56:29 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 11:56:29AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-08-31 19:52:49 +0200, Matthias van de Meent wrote:\n> > As for having a lower granularity and preventing the\n> > one-syscall-per-Relation issue, can't we reuse the query_start or\n> > state_change timestamps that appear in pg_stat_activity (potentially\n> > updated immediately before this stat flush), or some other per-backend\n> > timestamp that is already maintained and considered accurate enough\n> > for this use?\n> \n> The problem is that it won't change at all for a query that runs for a week -\n> and we'll report the timestamp from a week ago when it finally ends.\n> \n> But given this is done when stats are flushed, which only happens after the\n> transaction ended, we can just use GetCurrentTransactionStopTimestamp() - if\n> we got to flushing the transaction stats we'll already have computed that.\n\nOh, good point --- it is safer to show a more recent time than a too-old\ntime.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 15:17:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 17:13, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Aug 31, 2022 at 05:02:33PM +0100, Dave Page wrote:\n> >\n> >\n> > On Tue, 30 Aug 2022 at 19:46, Bruce Momjian <bruce@momjian.us> wrote:\n> >\n> > On Fri, Aug 26, 2022 at 02:05:36PM +0100, Dave Page wrote:\n> > > On Thu, 25 Aug 2022 at 01:44, David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> > > I don't have a particular opinion about the patch, I'm just\n> pointing\n> > > out that there are other ways. Even just writing down the\n> numbers on\n> a\n> > > post-it note and coming back in a month to see if they've\n> changed is\n> > > enough to tell if the table or index has been used.\n> > >\n> > >\n> > > There are usually other ways to perform monitoring tasks, but\n> there is\n> > > something to be said for the convenience of having functionality\n> built in\n> > and\n> > > not having to rely on tools, scripts, or post-it notes :-)\n> >\n> > Should we consider using something cheaper like time() so we don't\n> need\n> > a GUC to enable this?\n> >\n> >\n> > Interesting idea, but on my mac at least, 100,000,000 gettimeofday()\n> calls\n> > takes about 2 seconds, whilst 100,000,000 time() calls takes 14(!)\n> seconds.\n>\n> Wow. I was just thinking you need second-level accuracy, which must be\n> cheap somewhere.\n>\n\nSecond-level accuracy would indeed be fine for this. Frankly, for my use\ncase just the date would be enough, but I can imagine people wanting\ngreater accuracy than that.\n\nAnd yes, I was very surprised by the timing results I got as well. I guess\nit's a quirk of macOS - on a Linux box I get ~4s for gettimeofday() and ~1s\nfor time().\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 1 Sep 2022 09:46:59 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Thu, Sep 1, 2022 at 09:46:59AM +0100, Dave Page wrote:\n> On Wed, 31 Aug 2022 at 17:13, Bruce Momjian <bruce@momjian.us> wrote:\n> Wow. I was just thinking you need second-level accuracy, which must be\n> cheap somewhere.\n> \n> \n> Second-level accuracy would indeed be fine for this. Frankly, for my use case\n> just the date would be enough, but I can imagine people wanting greater\n> accuracy than that. \n> \n> And yes, I was very surprised by the timing results I got as well. I guess it's\n> a quirk of macOS - on a Linux box I get ~4s for gettimeofday() and ~1s for time\n> ().\n\nI think we lose 95% of our users if we require it to be enabled so let's\nwork to find a way it can be always enabled.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Thu, 1 Sep 2022 08:03:59 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Thu, 1 Sept 2022 at 13:04, Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Sep 1, 2022 at 09:46:59AM +0100, Dave Page wrote:\n> > On Wed, 31 Aug 2022 at 17:13, Bruce Momjian <bruce@momjian.us> wrote:\n> > Wow. I was just thinking you need second-level accuracy, which must\n> be\n> > cheap somewhere.\n> >\n> >\n> > Second-level accuracy would indeed be fine for this. Frankly, for my use\n> case\n> > just the date would be enough, but I can imagine people wanting greater\n> > accuracy than that.\n> >\n> > And yes, I was very surprised by the timing results I got as well. I\n> guess it's\n> > a quirk of macOS - on a Linux box I get ~4s for gettimeofday() and ~1s\n> for time\n> > ().\n>\n> i think we lose 95% of our users if we require it to be enabled so let's\n> work to find a way it can be always enabled.\n>\n\nSo based on Andres' suggestion, something like this seems like it might\nwork:\n\nif (pgstat_track_scan_timestamps && lstats->t_counts.t_numscans)\n{\n    TimestampTz t = GetCurrentTransactionStopTimestamp();\n    if (t > tabentry->lastscan)\n        tabentry->lastscan = t;\n}\n\nIf that seems like a good option, I can run some more benchmarks (and then\nremove the GUC if it looks good).\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 1 Sep 2022 13:18:00 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 20:56, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-31 19:52:49 +0200, Matthias van de Meent wrote:\n> > As for having a lower granularity and preventing the\n> > one-syscall-per-Relation issue, can't we reuse the query_start or\n> > state_change timestamps that appear in pg_stat_activity (potentially\n> > updated immediately before this stat flush), or some other per-backend\n> > timestamp that is already maintained and considered accurate enough\n> > for this use?\n>\n> The problem is that it won't change at all for a query that runs for a week -\n> and we'll report the timestamp from a week ago when it finally ends.\n\nThis earlier proposal to reuse pg_stat_activity values is also invalid\nbecause those timestamps don't exist when you SET track_activities =\nOFF.\n\n> But given this is done when stats are flushed, which only happens after the\n> transaction ended, we can just use GetCurrentTransactionStopTimestamp() - if\n> we got to flushing the transaction stats we'll already have computed that.\n\nI'm not entirely happy with that, as that would still add function\ncall overhead, and potentially still call GetCurrentTimestamp() in\nthis somewhat hot loop.\n\nAs an alternative, we could wire the `now` variable in\npgstat_report_stat (generated from\nGetCurrentTransactionStopTimestamp() into pgstat_flush_pending_entries\nand then into flush_pending_cb (or, store this in a static variable)\nso that we can reuse that value, saving any potential function call\noverhead.\n\n> > tabentry->numscans += lstats->t_counts.t_numscans;\n> > + if (pgstat_track_scans && lstats->t_counts.t_numscans)\n> > + tabentry->lastscan = GetCurrentTimestamp();\n>\n> Besides replacing GetCurrentTimestamp() with\n> GetCurrentTransactionStopTimestamp(), this should then also check if\n> tabentry->lastscan is already newer than the new timestamp.\n\nI wonder how important that is. This value only gets set in a stats\nflush, which may skew the stat update by several seconds (up to\nPGSTAT_MAX_INTERVAL). I don't expect concurrent flushes to take so\nlong that it will set the values backwards. It is possible, but I think it is\nextremely unlikely that this is going to be important when you\nconsider that these stat flushes are not expected to run for more than\n1 second.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 1 Sep 2022 14:18:42 +0200",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-01 14:18:42 +0200, Matthias van de Meent wrote:\n> On Wed, 31 Aug 2022 at 20:56, Andres Freund <andres@anarazel.de> wrote:\n> > But given this is done when stats are flushed, which only happens after the\n> > transaction ended, we can just use GetCurrentTransactionStopTimestamp() - if\n> > we got to flushing the transaction stats we'll already have computed that.\n> \n> I'm not entirely happy with that, as that would still add function\n> call overhead, and potentially still call GetCurrentTimestamp() in\n> this somewhat hot loop.\n\nWe already used GetCurrentTransactionStopTimestamp() (as you reference below)\nbefore we get to this point, so I doubt that we'll ever call\nGetCurrentTimestamp(). And it's hard to imagine that the function call\noverhead of GetCurrentTransactionStopTimestamp() matters compared to acquiring\nlocks etc.\n\n\n> As an alternative, we could wire the `now` variable in\n> pgstat_report_stat (generated from\n> GetCurrentTransactionStopTimestamp() into pgstat_flush_pending_entries\n> and then into flush_pending_cb (or, store this in a static variable)\n> so that we can reuse that value, saving any potential function call\n> overhead.\n\nPassing it in doesn't clearly seem an improvement, but I also don't have a\nstrong opinion on it. I am strongly against the static variable approach.\n\n\n> > > tabentry->numscans += lstats->t_counts.t_numscans;\n> > > + if (pgstat_track_scans && lstats->t_counts.t_numscans)\n> > > + tabentry->lastscan = GetCurrentTimestamp();\n> >\n> > Besides replacing GetCurrentTimestamp() with\n> > GetCurrentTransactionStopTimestamp(), this should then also check if\n> > tabentry->lastscan is already newer than the new timestamp.\n> \n> I wonder how important that is. This value only gets set in a stats\n> flush, which may skew the stat update by several seconds (up to\n> PGSTAT_MAX_INTERVAL). I don't expect concurrent flushes to take so\n> long that it will set the values backwards. It is possible, but I think it is\n> extremely unlikely that this is going to be important when you\n> consider that these stat flushes are not expected to run for more than\n> 1 second.\n\nI think it'll be confusing if you have values going back and forth, even if\njust by a little. And it's cheap to defend against, so why not just do that?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Sep 2022 11:35:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Thu, 1 Sept 2022 at 19:35, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-09-01 14:18:42 +0200, Matthias van de Meent wrote:\n> > On Wed, 31 Aug 2022 at 20:56, Andres Freund <andres@anarazel.de> wrote:\n> > > But given this is done when stats are flushed, which only happens\n> after the\n> > > transaction ended, we can just use\n> GetCurrentTransactionStopTimestamp() - if\n> > > we got to flushing the transaction stats we'll already have computed\n> that.\n> >\n> > I'm not entirely happy with that, as that would still add function\n> > call overhead, and potentially still call GetCurrentTimestamp() in\n> > this somewhat hot loop.\n>\n> We already used GetCurrentTransactionStopTimestamp() (as you reference\n> below)\n> before we get to this point, so I doubt that we'll ever call\n> GetCurrentTimestamp(). And it's hard to imagine that the function call\n> overhead of GetCurrentTransactionStopTimestamp() matters compared to\n> acquiring\n> locks etc.\n\n\nVik and I looked at this a little, and found that we actually don't\ngenerally have GetCurrentTransactionStopTimestamp() at this point - a\nsimple 'select * from pg_class' will result in 9 passes of this code, none\nof which have xactStopTimestamp != 0.\n\nAfter discussing it a little, we came to the conclusion that for the stated\nuse case, xactStartTimestamp is actually accurate enough, provided that we\nonly ever update it with a newer value. It would only likely be in extreme\nedge-cases where the difference between start and end transaction time\nwould have any bearing on whether or not one might drop a table/index for\nlack of use.\n\nDoing it this way also means we no longer need the GUC to enable the\nfeature, which as Bruce notes, is likely to lose 95% of users.\n\nUpdated patch attached:\n\n- GUC removed.\n- The timestamp recorded is xactStartTimestamp.\n- Docs updated to make it clear we're recording transaction start time.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 6 Sep 2022 14:15:56 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-06 14:15:56 +0100, Dave Page wrote:\n> Vik and I looked at this a little, and found that we actually don't have\n> generally have GetCurrentTransactionStopTimestamp() at this point - a\n> simple 'select * from pg_class' will result in 9 passes of this code, none\n> of which have xactStopTimestamp != 0.\n\nHuh, pgstat_report_stat() has used GetCurrentTransactionStopTimestamp()\nfor a long time. Wonder when that was broken. Looks like it's set only when a\nxid is assigned. We should fix this.\n\n\n> After discussing it a little, we came to the conclusion that for the stated\n> use case, xactStartTimestamp is actually accurate enough, provided that we\n> only ever update it with a newer value. It would only likely be in extreme\n> edge-cases where the difference between start and end transaction time\n> would have any bearing on whether or not one might drop a table/index for\n> lack of use.\n\nI don't at all agree with this. Since we already use\nGetCurrentTransactionStopTimestamp() in this path we should fix it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Sep 2022 08:53:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "At Tue, 6 Sep 2022 08:53:25 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-09-06 14:15:56 +0100, Dave Page wrote:\n> > Vik and I looked at this a little, and found that we actually don't have\n> > generally have GetCurrentTransactionStopTimestamp() at this point - a\n> > simple 'select * from pg_class' will result in 9 passes of this code, none\n> > of which have xactStopTimestamp != 0.\n> \n> Huh, pgstat_report_stat() used GetCurrentTransactionStopTimestamp() has used\n> for a long time. Wonder when that was broken. Looks like it's set only when a\n> xid is assigned. We should fix this.\n\n/*\n *\tGetCurrentTransactionStopTimestamp\n *\n * We return current time if the transaction stop time hasn't been set\n * (which can happen if we decide we don't need to log an XLOG record).\n\nSo, that seems like intentional since 2007 (957d08c81f). It seems to\nme that the patch assumes that the only other use of the timstamp is\npgstats and it didn't let GetCurrentTransactionStopTimestamp() set the\nvariable for future use.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 07 Sep 2022 17:58:22 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Tue, 6 Sept 2022 at 16:53, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-09-06 14:15:56 +0100, Dave Page wrote:\n> > Vik and I looked at this a little, and found that we actually don't have\n> > generally have GetCurrentTransactionStopTimestamp() at this point - a\n> > simple 'select * from pg_class' will result in 9 passes of this code,\n> none\n> > of which have xactStopTimestamp != 0.\n>\n> Huh, pgstat_report_stat() used GetCurrentTransactionStopTimestamp() has\n> used\n> for a long time. Wonder when that was broken. Looks like it's set only\n> when a\n> xid is assigned. We should fix this.\n>\n>\n> > After discussing it a little, we came to the conclusion that for the\n> stated\n> > use case, xactStartTimestamp is actually accurate enough, provided that\n> we\n> > only ever update it with a newer value. It would only likely be in\n> extreme\n> > edge-cases where the difference between start and end transaction time\n> > would have any bearing on whether or not one might drop a table/index for\n> > lack of use.\n>\n> I don't at all agree with this. Since we already use\n> GetCurrentTransactionStopTimestamp() in this path we should fix it.\n>\n\nI just spent some time looking at this, and as far as I can see, we only\nset xactStopTimestamp if the transaction needs to be WAL logged (and in\nthose cases, it is set before the stats callback runs). As you note though,\nwe are already calling GetCurrentTransactionStopTimestamp() in the\nread-only case anyway, and thus already incurring the cost of\ngettimeofday().\n\nHere's a v4 patch. This reverts to using\nGetCurrentTransactionStopTimestamp() for the last_scan times, and will\nset xactStopTimestamp the first time GetCurrentTransactionStopTimestamp()\nis called, thus avoiding multiple gettimeofday() calls.\nSetCurrentTransactionStopTimestamp() is removed, as is use\nof xactStopTimestamp (except when resetting it to 0).\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 7 Sep 2022 11:03:56 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On 9/7/22 12:03, Dave Page wrote:\n> Here's a v4 patch. This reverts to using\n> GetCurrentTransactionStopTimestamp() for the last_scan times, and will\n> set xactStopTimestamp the first time GetCurrentTransactionStopTimestamp()\n> is called, thus avoiding multiple gettimeofday() calls.\n> SetCurrentTransactionStopTimestamp() is removed, as is use\n> of xactStopTimestamp (except when resetting it to 0).\n\nThis patch looks good to me and has much saner behavior than what it \nreplaces.\n\nAs a matter of process, the oid for the new function should be in the \n8000-9000 range and the catversion should be bumped by the committer.\n\nMarked as Ready for Committer. Thanks for the patch!\n-- \nVik Fearing\n\n\n",
"msg_date": "Fri, 30 Sep 2022 17:58:31 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-30 17:58:31 +0200, Vik Fearing wrote:\n> On 9/7/22 12:03, Dave Page wrote:\n> > Here's a v4 patch. This reverts to using\n> > GetCurrentTransactionStopTimestamp() for the last_scan times, and will\n> > set xactStopTimestamp the first time GetCurrentTransactionStopTimestamp()\n> > is called, thus avoiding multiple gettimeofday() calls.\n> > SetCurrentTransactionStopTimestamp() is removed, as is use\n> > of xactStopTimestamp (except when resetting it to 0).\n> \n> This patch looks good to me and has much saner behavior than what it\n> replaces.\n\nI agree. However, it seems like a significant enough behavioural change that\nI'd rather commit it as a separate patch. I agree with Vik's judgement that\nthe patch is otherwise ready. Happy to do that split myself, or you\ncan do it...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 30 Sep 2022 10:58:06 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Fri, 30 Sept 2022 at 18:58, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-09-30 17:58:31 +0200, Vik Fearing wrote:\n> > On 9/7/22 12:03, Dave Page wrote:\n> > > Here's a v4 patch. This reverts to using\n> > > GetCurrentTransactionStopTimestamp() for the last_scan times, and will\n> > > set xactStopTimestamp the first time\n> GetCurrentTransactionStopTimestamp()\n> > > is called, thus avoiding multiple gettimeofday() calls.\n> > > SetCurrentTransactionStopTimestamp() is removed, as is use\n> > > of xactStopTimestamp (except when resetting it to 0).\n> >\n> > This patch looks good to me and has much saner behavior than what it\n> > replaces.\n>\n> I agree. However, it seems like a significant enough behavioural change\n> that\n> I'd rather commit it as a separate patch. I agree with Vik's judgement\n> that\n> the patch otherwise is otherwise ready. Happy to do that split myself, or\n> you\n> can do it...\n>\n\nThanks. It's just the changes in xact.c, so it doesn't seem like it would\ncause you any more work either way, in which case, I'll leave it to you :-)\n\nFYI, the OID I chose was simply the closest single value to those used for\nthe other related functions (e.g. pg_stat_get_numscans). Seemed like a good\nway to use up one more random unused value, but I don't care if it gets\nchanged to the 8000+ range.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 3 Oct 2022 12:55:40 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Mon, Oct 03, 2022 at 12:55:40PM +0100, Dave Page wrote:\n> Thanks. It's just the changes in xact.c, so it doesn't seem like it would\n> cause you any more work either way, in which case, I'll leave it to you :-)\n\nOkay, I have just moved the patch to the next CF then, still marked as\nready for committer. Are you planning to look at that?\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 15:40:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, 12 Oct 2022 at 07:40, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Oct 03, 2022 at 12:55:40PM +0100, Dave Page wrote:\n> > Thanks. It's just the changes in xact.c, so it doesn't seem like it would\n> > cause you any more work either way, in which case, I'll leave it to you\n> :-)\n>\n> Okay, I have just moved the patch to the next CF then, still marked as\n> ready for committer. Are you planning to look at that?\n>\n\nThanks. Was the question directed at me or Andres?\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 12 Oct 2022 09:09:46 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Wed, Oct 12, 2022 at 09:09:46AM +0100, Dave Page wrote:\n> On Wed, 12 Oct 2022 at 07:40, Michael Paquier <michael@paquier.xyz> wrote:\n>> Okay, I have just moved the patch to the next CF then, still marked as\n>> ready for committer. Are you planning to look at that?\n>\n> Thanks. Was the question directed at me or Andres?\n\nApologies for the confusion. This question is addressed to Andres.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 17:15:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-12 15:40:21 +0900, Michael Paquier wrote:\n> On Mon, Oct 03, 2022 at 12:55:40PM +0100, Dave Page wrote:\n> > Thanks. It's just the changes in xact.c, so it doesn't seem like it would\n> > cause you any more work either way, in which case, I'll leave it to you :-)\n> \n> Okay, I have just moved the patch to the next CF then, still marked as\n> ready for committer. Are you planning to look at that?\n\nYep, doing so right now.\n\nI think this should have at a basic test in src/test/regress/sql/stats.sql. If\nI can write one in a few minutes I'll go for that, otherwise will reply\ndetailing difficulties.\n\n\n> + <para>\n> + The time of the last sequential scan of this table, based on the\n> + most recent transaction stop time\n> + </para></entry>\n\nRelated rows seem to say \"on this table\".\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 12 Oct 2022 12:50:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-12 12:50:31 -0700, Andres Freund wrote:\n> I think this should have at a basic test in src/test/regress/sql/stats.sql. If\n> I can write one in a few minutes I'll go for that, otherwise will reply\n> detailing difficulties.\n\nTook a bit longer (+lunch). Attached.\n\n\nIn the attached 0001, the patch to make GetCurrentTransactionStopTimestamp()\nset xactStopTimestamp, I added a few comment updates and an Assert() to ensure\nthat CurrentTransactionState->state is TRANS_(DEFAULT|COMMIT|ABORT|PREPARE). I\nam worried that otherwise we might end up with someone ending up using it in a\nplace before the end of the transaction, which'd then end up recording the\nwrong timestamp in the commit/abort record.\n\n\nFor 0002, the commit adding lastscan, I added catversion/stats version bumps\n(because I was planning to commit it already...), a commit message, and that\nminor docs change mentioned earlier.\n\n\n0003 adds the tests mentioned above. I plan to merge them with 0002, but left\nthem separate for easier review for now.\n\nTo be able to compare timestamps for > not just >= we need to make sure that\ntwo subsequent timestamps differ. The attached achieves this by sleeping for\n100ms between those points - we do that in other places already. I'd started\nout with 10ms, which I am fairly sure would suffice, but then deciced to copy\nthe existing 100ms sleeps.\n\nI verified tests pass under valgrind, debug_discard_caches and after I make\npgstat_report_stat() only flush when force is passed in.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 12 Oct 2022 15:52:36 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi\n\nOn Wed, 12 Oct 2022 at 23:52, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-12 12:50:31 -0700, Andres Freund wrote:\n> > I think this should have at a basic test in\n> src/test/regress/sql/stats.sql. If\n> > I can write one in a few minutes I'll go for that, otherwise will reply\n> > detailing difficulties.\n>\n> Took a bit longer (+lunch). Attached.\n>\n>\n> In the attached 0001, the patch to make\n> GetCurrentTransactionStopTimestamp()\n> set xactStopTimestamp, I added a few comment updates and an Assert() to\n> ensure\n> that CurrentTransactionState->state is\n> TRANS_(DEFAULT|COMMIT|ABORT|PREPARE). I\n> am worried that otherwise we might end up with someone ending up using it\n> in a\n> place before the end of the transaction, which'd then end up recording the\n> wrong timestamp in the commit/abort record.\n>\n>\n> For 0002, the commit adding lastscan, I added catversion/stats version\n> bumps\n> (because I was planning to commit it already...), a commit message, and\n> that\n> minor docs change mentioned earlier.\n>\n>\n> 0003 adds the tests mentioned above. I plan to merge them with 0002, but\n> left\n> them separate for easier review for now.\n>\n> To be able to compare timestamps for > not just >= we need to make sure\n> that\n> two subsequent timestamps differ. The attached achieves this by sleeping\n> for\n> 100ms between those points - we do that in other places already. I'd\n> started\n> out with 10ms, which I am fairly sure would suffice, but then deciced to\n> copy\n> the existing 100ms sleeps.\n>\n> I verified tests pass under valgrind, debug_discard_caches and after I make\n> pgstat_report_stat() only flush when force is passed in.\n>\n\nThanks for that. 
It looks good to me, bar one comment (repeated 3 times in\nthe sql and expected files):\n\nfetch timestamps from before the next test\n\n\"from \" should be removed.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Wed, 12 Oct 2022 at 23:52, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-10-12 12:50:31 -0700, Andres Freund wrote:\n> I think this should have at a basic test in src/test/regress/sql/stats.sql. If\n> I can write one in a few minutes I'll go for that, otherwise will reply\n> detailing difficulties.\n\nTook a bit longer (+lunch). Attached.\n\n\nIn the attached 0001, the patch to make GetCurrentTransactionStopTimestamp()\nset xactStopTimestamp, I added a few comment updates and an Assert() to ensure\nthat CurrentTransactionState->state is TRANS_(DEFAULT|COMMIT|ABORT|PREPARE). I\nam worried that otherwise we might end up with someone ending up using it in a\nplace before the end of the transaction, which'd then end up recording the\nwrong timestamp in the commit/abort record.\n\n\nFor 0002, the commit adding lastscan, I added catversion/stats version bumps\n(because I was planning to commit it already...), a commit message, and that\nminor docs change mentioned earlier.\n\n\n0003 adds the tests mentioned above. I plan to merge them with 0002, but left\nthem separate for easier review for now.\n\nTo be able to compare timestamps for > not just >= we need to make sure that\ntwo subsequent timestamps differ. The attached achieves this by sleeping for\n100ms between those points - we do that in other places already. I'd started\nout with 10ms, which I am fairly sure would suffice, but then deciced to copy\nthe existing 100ms sleeps.\n\nI verified tests pass under valgrind, debug_discard_caches and after I make\npgstat_report_stat() only flush when force is passed in.Thanks for that. 
It looks good to me, bar one comment (repeated 3 times in the sql and expected files):fetch timestamps from before the next test \"from \" should be removed.-- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 13 Oct 2022 14:38:06 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "Hi,\n\nOn 2022-10-13 14:38:06 +0100, Dave Page wrote:\n> Thanks for that. It looks good to me, bar one comment (repeated 3 times in\n> the sql and expected files):\n> \n> fetch timestamps from before the next test\n> \n> \"from \" should be removed.\n\nI was trying to say something with that from, but clearly it wasn't\nunderstandable :). Removed.\n\nWith that I pushed the changes and marked the CF entry as committed.\n\nThanks for the feature Dave and the reviews everyone.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 14 Oct 2022 11:16:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-10-13 14:38:06 +0100, Dave Page wrote:\n> > Thanks for that. It looks good to me, bar one comment (repeated 3 times\n> in\n> > the sql and expected files):\n> >\n> > fetch timestamps from before the next test\n> >\n> > \"from \" should be removed.\n>\n> I was trying to say something with that from, but clearly it wasn't\n> understandable :). Removed.\n>\n> With that I pushed the changes and marked the CF entry as committed.\n\n\nThanks!\n\n\n> --\n-- \nDave Page\nhttps://pgsnake.blogspot.com\n\nEDB Postgres\nhttps://www.enterprisedb.com\n\nOn Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2022-10-13 14:38:06 +0100, Dave Page wrote:\n> Thanks for that. It looks good to me, bar one comment (repeated 3 times in\n> the sql and expected files):\n> \n> fetch timestamps from before the next test\n> \n> \"from \" should be removed.\n\nI was trying to say something with that from, but clearly it wasn't\nunderstandable :). Removed.\n\nWith that I pushed the changes and marked the CF entry as committed.Thanks!-- -- Dave Pagehttps://pgsnake.blogspot.comEDB Postgreshttps://www.enterprisedb.com",
"msg_date": "Fri, 14 Oct 2022 19:54:46 +0100",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Fri, Oct 14, 2022 at 2:55 PM Dave Page <dpage@pgadmin.org> wrote:\n> On Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-10-13 14:38:06 +0100, Dave Page wrote:\n>> > Thanks for that. It looks good to me, bar one comment (repeated 3 times in\n>> > the sql and expected files):\n>> >\n>> > fetch timestamps from before the next test\n>> >\n>> > \"from \" should be removed.\n>>\n>> I was trying to say something with that from, but clearly it wasn't\n>> understandable :). Removed.\n>>\n>> With that I pushed the changes and marked the CF entry as committed.\n>\n>\n> Thanks!\n>\n\nHey folks,\n\nI was looking at this a bit further (great addition btw) and noticed\nthe following behavior (this is a mre of the original testing that\nuncovered this):\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n(0 rows)\n\npagila=# create table x (xx int);\nCREATE TABLE\nTime: 2.145 ms\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | 
last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16392 | public | x | 0 | [null] |\n0 | [null] | [null] | [null] | 0 | 0 |\n 0 | 0 | 0 | 0 |\n 0 | 0 | [null] | [null] | [null]\n| [null] | 0 | 0 | 0 |\n 0\n(1 row)\n\npagila=# insert into x select 1;\nINSERT 0 1\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16392 | public | x | 0 | 1999-12-31 19:00:00-05 |\n 0 | [null] | [null] | [null] | 1 |\n 0 | 0 | 0 | 1 | 0 |\n 1 | 1 | [null] | [null] |\n[null] | [null] | 0 | 0 |\n 0 | 0\n(1 row)\n\nNormally we populate \"last\" columns with a NULL value when the\ncorresponding marker is zero, which seems correct in the first query,\nbut no longer matches in the second. 
I can see an argument that this\nis a necessary exception to that rule (I'm not sure I agree with it,\nbut I see it) but even in that scenario, ISTM we should avoid\npopulating the table with a \"special value\", which generally goes\nagainst observability best practices, and I believe we've been able to\navoid it elsewhere.\n\nBeyond that, I also notice the behavior changes when adding a table\nwith a PK, though not necessarily better...\n\npagila=# drop table x;\nDROP TABLE\nTime: 2.896 ms\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n(0 rows)\n\npagila=# create table x (xx int primary key) ;\nCREATE TABLE\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan\n | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\nn_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\nn_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | 
autoanalyze_count\n-------+------------+---------+----------+-------------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16400 | public | x | 1 | 2022-10-23\n15:53:04.935192-04 | 0 | 0 | [null] |\n 0 | 0 | 0 | 0 | 0 | 0\n| 0 | 0 | 0 | [null]\n| [null] | [null] | [null] | 0 |\n 0 | 0 | 0\n(1 row)\n\npagila=# insert into x select 1;\nINSERT 0 1\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan\n | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\nn_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\nn_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+-------------------------------+--------------+----------+------------------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16400 | public | x | 1 | 2022-10-23\n15:53:04.935192-04 | 0 | 0 | 1999-12-31 19:00:00-05\n| 0 | 1 | 0 | 0 | 0 |\n 1 | 0 | 1 | 1 |\n[null] | [null] | [null] | [null] |\n 0 | 0 | 0 | 0\n(1 row)\n\nThis time the create table both populate a sequential scan and\npopulates the last_seq_scan with a real/correct value. 
However an\ninsert into the table neither advances the seq_scan nor the\nlast_seq_scan values which seems like different behavior from my\noriginal example, with the added negative that the last_idx_scan is\nnow populated with a special value :-(\n\nI think the simplest fix which should correspond to existing versions\nbehavior would be to just ensure that we replace any \"special value\"\ntimestamps with a real transaction timestamp, and then maybe note that\nthese fields may be advanced by operations which don't strictly show\nup as a sequential or index scan.\n\n\nRobert Treat\nhttps://xzilla.net\n\n\n",
"msg_date": "Sun, 23 Oct 2022 16:09:44 -0400",
"msg_from": "Robert Treat <rob@xzilla.net>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "FYI, this is not intentional, and I do plan to look into it, however I've\nbeen somewhat busy with pgconfeu, and am travelling for the rest of this\nweek as well.\n\nOn Sun, 23 Oct 2022 at 21:09, Robert Treat <rob@xzilla.net> wrote:\n\n> On Fri, Oct 14, 2022 at 2:55 PM Dave Page <dpage@pgadmin.org> wrote:\n> > On Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:\n> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:\n> >> > Thanks for that. It looks good to me, bar one comment (repeated 3\n> times in\n> >> > the sql and expected files):\n> >> >\n> >> > fetch timestamps from before the next test\n> >> >\n> >> > \"from \" should be removed.\n> >>\n> >> I was trying to say something with that from, but clearly it wasn't\n> >> understandable :). Removed.\n> >>\n> >> With that I pushed the changes and marked the CF entry as committed.\n> >\n> >\n> > Thanks!\n> >\n>\n> Hey folks,\n>\n> I was looking at this a bit further (great addition btw) and noticed\n> the following behavior (this is a mre of the original testing that\n> uncovered this):\n>\n> pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan |\n> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> (0 rows)\n>\n> pagila=# create table x (xx int);\n> CREATE TABLE\n> Time: 2.145 ms\n> 
pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan |\n> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> 16392 | public | x | 0 | [null] |\n> 0 | [null] | [null] | [null] | 0 | 0 |\n> 0 | 0 | 0 | 0 |\n> 0 | 0 | [null] | [null] | [null]\n> | [null] | 0 | 0 | 0 |\n> 0\n> (1 row)\n>\n> pagila=# insert into x select 1;\n> INSERT 0 1\n> pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan |\n> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> 16392 | public | x | 0 | 1999-12-31 19:00:00-05 |\n> 0 | [null] | [null] | [null] | 1 |\n> 0 | 0 | 0 | 1 | 0 |\n> 1 | 1 | [null] 
| [null] |\n> [null] | [null] | 0 | 0 |\n> 0 | 0\n> (1 row)\n>\n> Normally we populate \"last\" columns with a NULL value when the\n> corresponding marker is zero, which seems correct in the first query,\n> but no longer matches in the second. I can see an argument that this\n> is a necessary exception to that rule (I'm not sure I agree with it,\n> but I see it) but even in that scenario, ISTM we should avoid\n> populating the table with a \"special value\", which generally goes\n> against observability best practices, and I believe we've been able to\n> avoid it elsewhere.\n>\n> Beyond that, I also notice the behavior changes when adding a table\n> with a PK, though not necessarily better...\n>\n> pagila=# drop table x;\n> DROP TABLE\n> Time: 2.896 ms\n> pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan |\n> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> (0 rows)\n>\n> pagila=# create table x (xx int primary key) ;\n> CREATE TABLE\n>\n> pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan\n> | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\n> n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\n> n_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | 
last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+-------------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> 16400 | public | x | 1 | 2022-10-23\n> 15:53:04.935192-04 | 0 | 0 | [null] |\n> 0 | 0 | 0 | 0 | 0 | 0\n> | 0 | 0 | 0 | [null]\n> | [null] | [null] | [null] | 0 |\n> 0 | 0 | 0\n> (1 row)\n>\n> pagila=# insert into x select 1;\n> INSERT 0 1\n>\n> pagila=# select * from pg_stat_user_tables ;\n> relid | schemaname | relname | seq_scan | last_seq_scan\n> | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\n> n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\n> n_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n> autovacuum_count | analyze_count | autoanalyze_count\n>\n> -------+------------+---------+----------+-------------------------------+--------------+----------+------------------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n> 16400 | public | x | 1 | 2022-10-23\n> 15:53:04.935192-04 | 0 | 0 | 1999-12-31 19:00:00-05\n> | 0 | 1 | 0 | 0 | 0 |\n> 1 | 0 | 1 | 1 |\n> [null] | [null] | [null] | [null] |\n> 0 | 0 | 0 | 0\n> (1 row)\n>\n> This time the create table both populate a sequential scan and\n> populates the last_seq_scan with a real/correct value. 
However an\n> insert into the table neither advances the seq_scan nor the\n> last_seq_scan values which seems like different behavior from my\n> original example, with the added negative that the last_idx_scan is\n> now populated with a special value :-(\n>\n> I think the simplest fix which should correspond to existing versions\n> behavior would be to just ensure that we replace any \"special value\"\n> timestamps with a real transaction timestamp, and then maybe note that\n> these fields may be advanced by operations which don't strictly show\n> up as a sequential or index scan.\n>\n>\n> Robert Treat\n> https://xzilla.net\n>\n\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nFYI, this is not intentional, and I do plan to look into it, however I've been somewhat busy with pgconfeu, and am travelling for the rest of this week as well.On Sun, 23 Oct 2022 at 21:09, Robert Treat <rob@xzilla.net> wrote:On Fri, Oct 14, 2022 at 2:55 PM Dave Page <dpage@pgadmin.org> wrote:\n> On Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:\n>> On 2022-10-13 14:38:06 +0100, Dave Page wrote:\n>> > Thanks for that. It looks good to me, bar one comment (repeated 3 times in\n>> > the sql and expected files):\n>> >\n>> > fetch timestamps from before the next test\n>> >\n>> > \"from \" should be removed.\n>>\n>> I was trying to say something with that from, but clearly it wasn't\n>> understandable :). 
Removed.\n>>\n>> With that I pushed the changes and marked the CF entry as committed.\n>\n>\n> Thanks!\n>\n\nHey folks,\n\nI was looking at this a bit further (great addition btw) and noticed\nthe following behavior (this is a mre of the original testing that\nuncovered this):\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n(0 rows)\n\npagila=# create table x (xx int);\nCREATE TABLE\nTime: 2.145 ms\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16392 | public | x | 0 | [null] |\n0 | 
[null] | [null] | [null] | 0 | 0 |\n 0 | 0 | 0 | 0 |\n 0 | 0 | [null] | [null] | [null]\n| [null] | 0 | 0 | 0 |\n 0\n(1 row)\n\npagila=# insert into x select 1;\nINSERT 0 1\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16392 | public | x | 0 | 1999-12-31 19:00:00-05 |\n 0 | [null] | [null] | [null] | 1 |\n 0 | 0 | 0 | 1 | 0 |\n 1 | 1 | [null] | [null] |\n[null] | [null] | 0 | 0 |\n 0 | 0\n(1 row)\n\nNormally we populate \"last\" columns with a NULL value when the\ncorresponding marker is zero, which seems correct in the first query,\nbut no longer matches in the second. 
I can see an argument that this\nis a necessary exception to that rule (I'm not sure I agree with it,\nbut I see it) but even in that scenario, ISTM we should avoid\npopulating the table with a \"special value\", which generally goes\nagainst observability best practices, and I believe we've been able to\navoid it elsewhere.\n\nBeyond that, I also notice the behavior changes when adding a table\nwith a PK, though not necessarily better...\n\npagila=# drop table x;\nDROP TABLE\nTime: 2.896 ms\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan |\nseq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\nn_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\nn_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n(0 rows)\n\npagila=# create table x (xx int primary key) ;\nCREATE TABLE\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan\n | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\nn_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\nn_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | 
autoanalyze_count\n-------+------------+---------+----------+-------------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16400 | public | x | 1 | 2022-10-23\n15:53:04.935192-04 | 0 | 0 | [null] |\n 0 | 0 | 0 | 0 | 0 | 0\n| 0 | 0 | 0 | [null]\n| [null] | [null] | [null] | 0 |\n 0 | 0 | 0\n(1 row)\n\npagila=# insert into x select 1;\nINSERT 0 1\n\npagila=# select * from pg_stat_user_tables ;\n relid | schemaname | relname | seq_scan | last_seq_scan\n | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\nn_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\nn_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\nlast_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\nautovacuum_count | analyze_count | autoanalyze_count\n-------+------------+---------+----------+-------------------------------+--------------+----------+------------------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n 16400 | public | x | 1 | 2022-10-23\n15:53:04.935192-04 | 0 | 0 | 1999-12-31 19:00:00-05\n| 0 | 1 | 0 | 0 | 0 |\n 1 | 0 | 1 | 1 |\n[null] | [null] | [null] | [null] |\n 0 | 0 | 0 | 0\n(1 row)\n\nThis time the create table both populate a sequential scan and\npopulates the last_seq_scan with a real/correct value. 
However an\ninsert into the table neither advances the seq_scan nor the\nlast_seq_scan values which seems like different behavior from my\noriginal example, with the added negative that the last_idx_scan is\nnow populated with a special value :-(\n\nI think the simplest fix which should correspond to existing versions\nbehavior would be to just ensure that we replace any \"special value\"\ntimestamps with a real transaction timestamp, and then maybe note that\nthese fields may be advanced by operations which don't strictly show\nup as a sequential or index scan.\n\n\nRobert Treat\nhttps://xzilla.net\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 31 Oct 2022 11:36:51 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Mon, 31 Oct 2022 at 07:36, Dave Page <dpage@pgadmin.org> wrote:\n\n> FYI, this is not intentional, and I do plan to look into it, however I've\n> been somewhat busy with pgconfeu, and am travelling for the rest of this\n> week as well.\n>\n\nHere's a patch to fix this issue. Many thanks to Peter Eisentraut who\nfigured it out in a few minutes after I spent far too long looking down\nrabbit holes in entirely the wrong place.\n\nThanks for the bug report.\n\n\n>\n> On Sun, 23 Oct 2022 at 21:09, Robert Treat <rob@xzilla.net> wrote:\n>\n>> On Fri, Oct 14, 2022 at 2:55 PM Dave Page <dpage@pgadmin.org> wrote:\n>> > On Fri, 14 Oct 2022 at 19:16, Andres Freund <andres@anarazel.de> wrote:\n>> >> On 2022-10-13 14:38:06 +0100, Dave Page wrote:\n>> >> > Thanks for that. It looks good to me, bar one comment (repeated 3\n>> times in\n>> >> > the sql and expected files):\n>> >> >\n>> >> > fetch timestamps from before the next test\n>> >> >\n>> >> > \"from \" should be removed.\n>> >>\n>> >> I was trying to say something with that from, but clearly it wasn't\n>> >> understandable :). 
Removed.\n>> >>\n>> >> With that I pushed the changes and marked the CF entry as committed.\n>> >\n>> >\n>> > Thanks!\n>> >\n>>\n>> Hey folks,\n>>\n>> I was looking at this a bit further (great addition btw) and noticed\n>> the following behavior (this is a mre of the original testing that\n>> uncovered this):\n>>\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan |\n>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> -------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> (0 rows)\n>>\n>> pagila=# create table x (xx int);\n>> CREATE TABLE\n>> Time: 2.145 ms\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan |\n>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> 
-------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> 16392 | public | x | 0 | [null] |\n>> 0 | [null] | [null] | [null] | 0 | 0 |\n>> 0 | 0 | 0 | 0 |\n>> 0 | 0 | [null] | [null] | [null]\n>> | [null] | 0 | 0 | 0 |\n>> 0\n>> (1 row)\n>>\n>> pagila=# insert into x select 1;\n>> INSERT 0 1\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan |\n>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> -------+------------+---------+----------+------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> 16392 | public | x | 0 | 1999-12-31 19:00:00-05 |\n>> 0 | [null] | [null] | [null] | 1 |\n>> 0 | 0 | 0 | 1 | 0 |\n>> 1 | 1 | [null] | [null] |\n>> [null] | [null] | 0 | 0 |\n>> 0 | 0\n>> (1 row)\n>>\n>> Normally we populate \"last\" columns with a NULL value when the\n>> corresponding marker is zero, which seems correct in the first query,\n>> but no longer matches in the second. 
I can see an argument that this\n>> is a necessary exception to that rule (I'm not sure I agree with it,\n>> but I see it) but even in that scenario, ISTM we should avoid\n>> populating the table with a \"special value\", which generally goes\n>> against observability best practices, and I believe we've been able to\n>> avoid it elsewhere.\n>>\n>> Beyond that, I also notice the behavior changes when adding a table\n>> with a PK, though not necessarily better...\n>>\n>> pagila=# drop table x;\n>> DROP TABLE\n>> Time: 2.896 ms\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan |\n>> seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch | n_tup_ins |\n>> n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup | n_dead_tup |\n>> n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> -------+------------+---------+----------+---------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> (0 rows)\n>>\n>> pagila=# create table x (xx int primary key) ;\n>> CREATE TABLE\n>>\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan\n>> | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\n>> n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\n>> n_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> 
-------+------------+---------+----------+-------------------------------+--------------+----------+---------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> 16400 | public | x | 1 | 2022-10-23\n>> 15:53:04.935192-04 | 0 | 0 | [null] |\n>> 0 | 0 | 0 | 0 | 0 | 0\n>> | 0 | 0 | 0 | [null]\n>> | [null] | [null] | [null] | 0 |\n>> 0 | 0 | 0\n>> (1 row)\n>>\n>> pagila=# insert into x select 1;\n>> INSERT 0 1\n>>\n>> pagila=# select * from pg_stat_user_tables ;\n>> relid | schemaname | relname | seq_scan | last_seq_scan\n>> | seq_tup_read | idx_scan | last_idx_scan | idx_tup_fetch |\n>> n_tup_ins | n_tup_upd | n_tup_del | n_tup_hot_upd | n_live_tup |\n>> n_dead_tup | n_mod_since_analyze | n_ins_since_vacuum | last_vacuum |\n>> last_autovacuum | last_analyze | last_autoanalyze | vacuum_count |\n>> autovacuum_count | analyze_count | autoanalyze_count\n>>\n>> -------+------------+---------+----------+-------------------------------+--------------+----------+------------------------+---------------+-----------+-----------+-----------+---------------+------------+------------+---------------------+--------------------+-------------+-----------------+--------------+------------------+--------------+------------------+---------------+-------------------\n>> 16400 | public | x | 1 | 2022-10-23\n>> 15:53:04.935192-04 | 0 | 0 | 1999-12-31 19:00:00-05\n>> | 0 | 1 | 0 | 0 | 0 |\n>> 1 | 0 | 1 | 1 |\n>> [null] | [null] | [null] | [null] |\n>> 0 | 0 | 0 | 0\n>> (1 row)\n>>\n>> This time the create table both populate a sequential scan and\n>> populates the last_seq_scan with a real/correct value. 
However an\n>> insert into the table neither advances the seq_scan nor the\n>> last_seq_scan values which seems like different behavior from my\n>> original example, with the added negative that the last_idx_scan is\n>> now populated with a special value :-(\n>>\n>> I think the simplest fix which should correspond to existing versions\n>> behavior would be to just ensure that we replace any \"special value\"\n>> timestamps with a real transaction timestamp, and then maybe note that\n>> these fields may be advanced by operations which don't strictly show\n>> up as a sequential or index scan.\n>>\n>>\n>> Robert Treat\n>> https://xzilla.net\n>>\n>\n>\n> --\n> Dave Page\n> Blog: https://pgsnake.blogspot.com\n> Twitter: @pgsnake\n>\n> EDB: https://www.enterprisedb.com\n>\n>\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 3 Nov 2022 16:44:16 -0400",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Thu, Nov 03, 2022 at 04:44:16PM -0400, Dave Page wrote:\n> Here's a patch to fix this issue. Many thanks to Peter Eisentraut who\n> figured it out in a few minutes after I spent far too long looking down\n> rabbit holes in entirely the wrong place.\n\nFWIW, all the other areas of pgstatfuncs.c manipulate timestamptz\nfields with a style like the attached. That's a nit, still per the\nrole of consistency with the surroundings..\n\nAnyway, it seems to me that a regression test is in order before a\nscan happens just after the relation creation, and the same problem\nshows up with last_idx_scan.\n--\nMichael",
"msg_date": "Mon, 7 Nov 2022 16:54:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Mon, Nov 07, 2022 at 04:54:07PM +0900, Michael Paquier wrote:\n> FWIW, all the other areas of pgstatfuncs.c manipulate timestamptz\n> fields with a style like the attached. That's a nit, still per the\n> role of consistency with the surroundings..\n> \n> Anyway, it seems to me that a regression test is in order before a\n> scan happens just after the relation creation, and the same problem\n> shows up with last_idx_scan.\n\nHearing nothing, done this way as of d7744d5. Thanks for the report,\nRobert. And thanks for the patch, Dave.\n--\nMichael",
"msg_date": "Tue, 8 Nov 2022 13:09:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Tracking last scan time"
},
{
"msg_contents": "On Tue, 8 Nov 2022 at 04:10, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Nov 07, 2022 at 04:54:07PM +0900, Michael Paquier wrote:\n> > FWIW, all the other areas of pgstatfuncs.c manipulate timestamptz\n> > fields with a style like the attached. That's a nit, still per the\n> > role of consistency with the surroundings..\n> >\n> > Anyway, it seems to me that a regression test is in order before a\n> > scan happens just after the relation creation, and the same problem\n> > shows up with last_idx_scan.\n>\n> Hearing nothing, done this way as of d7744d5. Thanks for the report,\n> Robert. And thanks for the patch, Dave.\n>\n\nThank you!\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nOn Tue, 8 Nov 2022 at 04:10, Michael Paquier <michael@paquier.xyz> wrote:On Mon, Nov 07, 2022 at 04:54:07PM +0900, Michael Paquier wrote:\n> FWIW, all the other areas of pgstatfuncs.c manipulate timestamptz\n> fields with a style like the attached. That's a nit, still per the\n> role of consistency with the surroundings..\n> \n> Anyway, it seems to me that a regression test is in order before a\n> scan happens just after the relation creation, and the same problem\n> shows up with last_idx_scan.\n\nHearing nothing, done this way as of d7744d5. Thanks for the report,\nRobert. And thanks for the patch, Dave.Thank you! -- Dave PageBlog: https://pgsnake.blogspot.comTwitter: @pgsnakeEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 8 Nov 2022 09:25:25 +0000",
"msg_from": "Dave Page <dpage@pgadmin.org>",
"msg_from_op": true,
"msg_subject": "Re: Tracking last scan time"
}
] |
[
{
"msg_contents": "Hello,\n\nIt seems for me that there is currently a pitfall in the pg_rewind\nimplementation.\n\nImagine the following situation:\n\n\nThere is a cluster consisting of a primary with the following\nconfiguration: wal_level=‘replica’, archive_mode=‘on’ and a replica.\n\n 1. The primary that is not fast enough in archiving WAL segments (e.g.\n network issues, high CPU/Disk load...)\n 2. The primary fails\n 3. The replica is promoted\n 4. We are not lucky enough, the new and the old primary’s timelines\n diverged, we need to run pg_rewind\n 5. We are even less lucky: the old primary still has some WAL segments\n with .ready signal files that were generated before the point of divergence\n and were not archived. (e.g. 000000020004D20200000095.done,\n 000000020004D20200000096.ready, 000000020004D20200000097.ready,\n 000000020004D20200000098.ready)\n 6. The promoted primary runs for some time and recycles the old WAL\n segments.\n 7. We revive the old primary and try to rewind it\n 8. When pg_rewind finished successfully, we see that the WAL segments\n with .ready files are removed, because they were already absent on the\n promoted replica. We end up in a situation where we completely lose some\n WAL segments, even though we had a clear sign that they were not\narchived and\n more importantly, pg_rewind read these segments while collecting\n information about the data blocks.\n 9. 
The old primary fails to start because of the missing WAL segments\n (more strictly, the records between the last common checkpoint and the\n point of divergence) with the following log record: \"ERROR: requested WAL\n segment 000000020004D20200000096 has already been removed\"\n\n\nIn this situation, after pg_rewind:\narchived:\n\n000000020004D20200000095\n\n000000020004D20200000099.partial\n\n000000030004D20200000099\n\n\nthe following segments are lost:\n\n000000020004D20200000096\n\n000000020004D20200000097\n\n000000020004D20200000098\n\n\nThus, my thoughts are: why can’t pg_rewind be a little bit wiser in terms\nof creating filemap for WALs? Can it preserve the WAL segments that contain\nthose potentially lost records (> the last common checkpoint and < the\npoint of divergence) on the target? (see the patch attached)\n\n\nIf I am missing something however, please correct me or explain why it is\nnot possible to implement this straightforward solution.\n\n\nThank you,\n\nPolina Bungina",
"msg_date": "Tue, 23 Aug 2022 17:46:30 +0200",
"msg_from": "=?UTF-8?B?0J/QvtC70LjQvdCwINCR0YPQvdCz0LjQvdCw?= <bungina@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "In the first place, this is not a bug. (At least doesn't seem.)\nIf you mean to propose behavioral changes, -hackers is the place.\n\nAt Tue, 23 Aug 2022 17:46:30 +0200, Полина Бунгина <bungina@gmail.com> wrote in \n> 4. We are not lucky enough, the new and the old primary’s timelines\n> diverged, we need to run pg_rewind\n> 5. We are even less lucky: the old primary still has some WAL segments\n> with .ready signal files that were generated before the point of divergence\n> and were not archived.\n\nThat dones't harm pg_rewind at all.\n\n> 6. The promoted primary runs for some time and recycles the old WAL\n> segments.\n> 7. We revive the old primary and try to rewind it\n> 8. When pg_rewind finished successfully, we see that the WAL segments\n> with .ready files are removed, because they were already absent on the\n> promoted replica. We end up in a situation where we completely lose some\n> WAL segments, even though we had a clear sign that they were not\n> archived and\n> more importantly, pg_rewind read these segments while collecting\n> information about the data blocks.\n\nIn terms of syncing the old primary to the new primary, no data has\nbeen lost. The \"lost\" segments are anyway unusable for the new primary\nsince they no longer compatible with it. How do you intended to use\nthe WAL files for the incompatible cluster?\n\n> 9. The old primary fails to start because of the missing WAL segments\n> (more strictly, the records between the last common checkpoint and the\n> point of divergence) with the following log record: \"ERROR: requested WAL\n> segment 000000020004D20200000096 has already been removed\"\n\nThat means that the tail end of the rewound old primary has been lost\non the new primary's pg_wal. In that case, you need to somehow\ncopy-in the archived WAL files on the new primary. 
You can just do\nthat or you can set up restore_command properly.\n> Thus, my thoughts are: why can’t pg_rewind be a little bit wiser in terms\n> of creating filemap for WALs? Can it preserve the WAL segments that contain\n> those potentially lost records (> the last common checkpoint and < the\n> point of divergence) on the target? (see the patch attached)\n\nSince they are not really needed once rewind completes.\n\n> If I am missing something however, please correct me or explain why it is\n> not possible to implement this straightforward solution.\n\nMaybe you're mistaking the operation. If I understand the situation\ncorrectly, I think the following steps replays your \"issue\" and then\nresolve that.\n\n\n# killall -9 postgres\n# rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\nmkdir newarch oldarch\ninitdb -k -D oldprim\necho \"archive_mode = 'always'\">> oldprim/postgresql.conf\necho \"archive_command = 'cp %p `pwd`/oldarch/%f'\">> oldprim/postgresql.conf\npg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\npsql -p 5432 -c 'create table t(a int)'\npg_basebackup -D newprim -p 5432\necho \"primary_conninfo='host=/tmp port=5432'\">> oldprim/postgresql.conf\necho \"archive_command = 'cp %p `pwd`/newarch/%f'\">> newprim/postgresql.conf\ntouch newprim/standby.signal\npg_ctl -D newprim -o '-p 5433' -l newprim.log start\npg_ctl -D newprim promote\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5432 -c 'checkpoint'\npg_ctl -D oldprim stop\necho \"restore_command = 'cp `pwd`/oldarch/%f %p'\">> oldprim/postgresql.conf\n# pg_rewind -D oldprim --source-server='port=5433' # fails\npg_rewind -D oldprim --source-server='port=5433' -c\nfor i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5433 -c 'checkpoint'\necho \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\ntouch oldprim/standby.signal\n\npostgres -D oldprim\n\n> FATAL: 
could not receive data from WAL stream: ERROR: requested WAL segment 000000020000000000000003 has already been removed\n\n[ctrl-C]\n======\n\nNow that the old primary requires older WAL files *on the new\nprimary*. Here, define restore command to do that.\n\n=====\necho \"restore_command='cp `pwd`/newarch/%f %p'\">> oldprim/postgresql.conf\npostgres -D oldprim\n=====\n\nNow the old primary run as the standby of the new primary.\n\n> LOG: restored log file \"000000020000000000000006\" from archive\n> LOG: consistent recovery state reached at 0/30020B0\n> LOG: database system is ready to accept read-only connections\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:49:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kyotaro,\n\n\nOn Thu, 25 Aug 2022 at 09:49, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> In the first place, this is not a bug. (At least doesn't seem.)\n> If you mean to propose behavioral changes, -hackers is the place.\n>\n\nWell, maybe... We can always change it.\n\n\n> > 8. When pg_rewind finished successfully, we see that the WAL segments\n> > with .ready files are removed, because they were already absent on the\n> > promoted replica. We end up in a situation where we completely lose\n> some\n> > WAL segments, even though we had a clear sign that they were not\n> > archived and\n> > more importantly, pg_rewind read these segments while collecting\n> > information about the data blocks.\n>\n> In terms of syncing the old primary to the new primary, no data has\n> been lost. The \"lost\" segments are anyway unusable for the new primary\n> since they no longer compatible with it. How do you intended to use\n> the WAL files for the incompatible cluster?\n>\n\nThese files are required for the old primary to start as a replica.\n\n\n>\n> > 9. The old primary fails to start because of the missing WAL segments\n> > (more strictly, the records between the last common checkpoint and the\n> > point of divergence) with the following log record: \"ERROR:\n> requested WAL\n> > segment 000000020004D20200000096 has already been removed\"\n>\n> That means that the tail end of the rewound old primary has been lost\n> on the new primary's pg_wal.\n\n\nCorrect. The old primary was down for about 20m and we have\ncheckpoint_timeout = 5m, so the new primary already recycled them.\n\n\n> In that case, you need to somehow\n> copy-in the archived WAL files on the new primary. You can just do\n> that or you can set up restore_command properly.\n>\n\nThese files never made it to the archive because the server crashed. 
The\nonly place where they existed was pg_wal in the old primary.\n\n\n\n> > Thus, my thoughts are: why can’t pg_rewind be a little bit wiser in terms\n> > of creating filemap for WALs? Can it preserve the WAL segments that\n> contain\n> > those potentially lost records (> the last common checkpoint and < the\n> > point of divergence) on the target? (see the patch attached)\n>\n> Since they are not really needed once rewind completes.\n>\n\nThe pg_rewind creates the backup_label file with START WAL LOCATION and\nCHECKPOINT LOCATION that point to the last common checkpoint.\nRemoved files are between the last common checkpoint and diverged WAL\nlocation, and therefore are required for Postgres to do successful recovery.\nSince these files never made it to the archive and are also absent on the\nnew primary the old primary can't start as a replica.\nAnd I will emphasize one more time, that these files were removed by\npg_rewind despite the known fact that they are required to perform a\nrecovery.\n\n\n\n>\n> > If I am missing something however, please correct me or explain why it is\n> > not possible to implement this straightforward solution.\n>\n> Maybe you're mistaking the operation.\n\n\nWe are not (Patroni author is here).\n\n\n> If I understand the situation\n> correctly, I think the following steps replays your \"issue\" and then\n> resolve that.\n>\n>\n> # killall -9 postgres\n> # rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\n> mkdir newarch oldarch\n> initdb -k -D oldprim\n> echo \"archive_mode = 'always'\">> oldprim/postgresql.conf\n>\n\nWith archive_mode = always you can't reproduce it.\nIt is very rarely people set it to always in production due to the overhead.\n\n\n\n\n> echo \"archive_command = 'cp %p `pwd`/oldarch/%f'\">> oldprim/postgresql.conf\n> pg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\n> psql -p 5432 -c 'create table t(a int)'\n> pg_basebackup -D newprim -p 5432\n> echo \"primary_conninfo='host=/tmp port=5432'\">> 
oldprim/postgresql.conf\n> echo \"archive_command = 'cp %p `pwd`/newarch/%f'\">> newprim/postgresql.conf\n> touch newprim/standby.signal\n> pg_ctl -D newprim -o '-p 5433' -l newprim.log start\n> pg_ctl -D newprim promote\n> for i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select\n> pg_switch_wal();'; done\n> psql -p 5432 -c 'checkpoint'\n> pg_ctl -D oldprim stop\n>\n\nThe archive_mode has to be set to on and the archive_command should be\nfailing when you do pg_ctl -D oldprim stop\n\n\n> echo \"restore_command = 'cp `pwd`/oldarch/%f %p'\">> oldprim/postgresql.conf\n> # pg_rewind -D oldprim --source-server='port=5433' # fails\n> pg_rewind -D oldprim --source-server='port=5433' -c\n> for i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select\n> pg_switch_wal();'; done\n> psql -p 5433 -c 'checkpoint'\n> echo \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\n> touch oldprim/standby.signal\n>\n> postgres -D oldprim\n>\n> > FATAL: could not receive data from WAL\n> segment 000000020000000000000003 has already been removed\n>\n>\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Thu, 25 Aug 2022 10:34:40 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "(Moved to -hackers)\n\nAt Thu, 25 Aug 2022 10:34:40 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> > # killall -9 postgres\n> > # rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\n> > mkdir newarch oldarch\n> > initdb -k -D oldprim\n> > echo \"archive_mode = 'always'\">> oldprim/postgresql.conf\n> >\n> \n> With archive_mode = always you can't reproduce it.\n> It is very rarely people set it to always in production due to the overhead.\n...\n> The archive_mode has to be set to on and the archive_command should be\n> failing when you do pg_ctl -D oldprim stop\n\nAh, I see.\n\nWhat I don't still understand is why pg_rewind doesn't work for the\nold primary in that case. When archive_mode=on, the old primary has\nthe complete set of WAL files counting both pg_wal and its archive. So\nas the same to the privious repro, pg_rewind -c ought to work (but it\nuses its own archive this time). In that sense the proposed solution\nis still not needed in this case.\n\nA bit harder situation comes after the server successfully rewound; if\nthe new primary goes so far that the old primary cannot connect. Even\nin that case, you can copy-in the requried WAL files or configure\nrestore_command of the old pimary so that it finds required WAL files\nthere.\n\nAs the result the system in total doesn't lose a WAL file.\n\nSo.. 
I might still be missing something..\n\n\n###############################\n# killall -9 postgres\n# rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\nmkdir newarch oldarch\ninitdb -k -D oldprim\necho \"archive_mode = 'on'\">> oldprim/postgresql.conf\necho \"archive_command = 'cp %p `pwd`/oldarch/%f'\">> oldprim/postgresql.conf\npg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\npsql -p 5432 -c 'create table t(a int)'\npg_basebackup -D newprim -p 5432\necho \"primary_conninfo='host=/tmp port=5432'\">> newprim/postgresql.conf\necho \"archive_command = 'cp %p `pwd`/newarch/%f'\">> newprim/postgresql.conf\ntouch newprim/standby.signal\npg_ctl -D newprim -o '-p 5433' -l newprim.log start\n\n# the last common checkpoint\npsql -p 5432 -c 'checkpoint'\n\n# record approx. diverging WAL segment\nstart_wal=`psql -p 5433 -Atc 'select pg_walfile_name(pg_last_wal_replay_lsn() - (select setting from pg_settings where name = 'wal_segment_size')::int);\n`\npsql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'\npg_ctl -D newprim promote\npsql -p 5433 -c 'checkpoint'\n\n# old rprimary loses diverging WAL segment\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'; done\n\n# old primary cannot archive any more\necho \"archive_command = 'false'\">> oldprim/postgresql.conf\npg_ctl -D oldprim reload\npg_ctl -D oldprim stop\n\n# rewind the old primary, using its own archive\n# pg_rewind -D oldprim --source-server='port=5433' # should fail\necho \"restore_command = 'cp `pwd`/oldarch/%f %p'\">> oldprim/postgresql.conf\npg_rewind -D oldprim --source-server='port=5433' -c\n\n# advance WAL on the old primary; new primary loses the launching WAL seg\nfor i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5433 -c 'checkpoint'\necho \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\ntouch oldprim/standby.signal\n\npostgres -D oldprim # fails with \"WAL file has 
been removed\"\n\n# The alternative of copying-in\n# echo \"restore_command = 'cp `pwd`/newarch/%f %p'\">> oldprim/postgresql.conf\n\n# copy-in WAL files from new primary's archive to old primary\n(cd newarch;\nfor f in `ls`; do\n if [[ \"$f\" > \"$start_wal\" ]]; then echo copy $f; cp $f ../oldprim/pg_wal; fi\ndone)\n\npostgres -D oldprim\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 17:04:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kyotaro,\n\n\nOn Fri, 26 Aug 2022 at 10:04, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> > With archive_mode = always you can't reproduce it.\n> > It is very rarely people set it to always in production due to the\n> overhead.\n> ...\n> > The archive_mode has to be set to on and the archive_command should be\n> > failing when you do pg_ctl -D oldprim stop\n>\n> Ah, I see.\n>\n> What I don't still understand is why pg_rewind doesn't work for the\n> old primary in that case. When archive_mode=on, the old primary has\n> the complete set of WAL files counting both pg_wal and its archive. So\n> as the same to the privious repro, pg_rewind -c ought to work (but it\n> uses its own archive this time). In that sense the proposed solution\n> is still not needed in this case.\n>\n\nThe pg_rewind finishes successfully. But as a result it removes some files\nfrom pg_wal that are required to perform recovery because they are missing\non the new primary.\n\n\n\n>\n> A bit harder situation comes after the server successfully rewound; if\n> the new primary goes so far that the old primary cannot connect. 
Even\n> in that case, you can copy-in the requried WAL files or configure\n> restore_command of the old pimary so that it finds required WAL files\n> there.\n>\n\nYes, we can do the backup of pg_wal before running pg_rewind, but it feels\nvery ugly, because we will also have to clean this \"backup\" after a\nsuccessful recovery.\nIt would be much better if pg_rewind didn't remove WAL files between the\nlast common checkpoint and diverged LSN in the first place.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Fri, 26 Aug 2022 10:57:25 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello, Alex.\n\nAt Fri, 26 Aug 2022 10:57:25 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> On Fri, 26 Aug 2022 at 10:04, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > What I don't still understand is why pg_rewind doesn't work for the\n> > old primary in that case. When archive_mode=on, the old primary has\n> > the complete set of WAL files counting both pg_wal and its archive. So\n> > as the same to the privious repro, pg_rewind -c ought to work (but it\n> > uses its own archive this time). In that sense the proposed solution\n> > is still not needed in this case.\n> >\n> \n> The pg_rewind finishes successfully. But as a result it removes some files\n> from pg_wal that are required to perform recovery because they are missing\n> on the new primary.\n\nAFAICS pg_rewind doesn't. The -c option, on the contrary, restores all the\nsegments after the last (common) checkpoint and all of them are left\nalone after pg_rewind finishes. postgres itself removes the WAL files\nafter recovery. After-promotion cleanup and checkpoints remove the\nfiles on the previous timeline.\n\nBefore pg_rewind runs in the repro below, the old primary has the\nfollowing segments.\n\nTLI1: 2 8 9 A B C D\n\nJust after pg_rewind finishes, the old primary has the following\nsegments.\n\nTLI1: 2 3 5 6 7\nTLI2: 4 (and 00000002.history)\n\npg_rewind copied 1-2 to 1-3, 2-4 and the history file from the new\nprimary, and 1-4 to 1-7 from the archive. After rewind finished, 1-4 and 1-8 to\n1-D have been removed since the new primary didn't have them.\n\nRecovery starts from 1-3 and promotes at 0/4_000000. postgres removes\n1-5 to 1-7 by post-promotion cleanup and removes 1-2 to 1-4 by a\nrestartpoint. All of the segments are useless after the old primary\npromotes.\n\nWhen the old primary starts, it uses 1-3 and 2-4 for recovery and\nfails to fetch 2-5 from the new primary. But it is not an issue of\npg_rewind at all.\n\n> > A bit harder situation comes after the server successfully rewound; if\n> > the new primary goes so far that the old primary cannot connect. Even\n> > in that case, you can copy-in the requried WAL files or configure\n> > restore_command of the old pimary so that it finds required WAL files\n> > there.\n> >\n> \n> Yes, we can do the backup of pg_wal before running pg_rewind, but it feels\n\nSo, if I understand you correctly, the issue you are complaining about is\nnot the WAL segments on the old timeline but those on the\nnew timeline, which have nothing to do with what pg_rewind does. As in\nthe case of pg_basebackup, the missing segments need to\nbe somehow copied from the new primary since the old primary never had\nthe chance to have them before.\n\n> very ugly, because we will also have to clean this \"backup\" after a\n> successful recovery.\n\nWhat do you mean by the \"backup\" here? Concretely, what WAL segments do\nyou feel you need to remove, for example, in the repro case?
Or, could\nyou show your issue by something like the repro below?\n\n> It would be much better if pg_rewind didn't remove WAL files between the\n> last common checkpoint and diverged LSN in the first place.\n\nThus I don't follow this..\n\nregards.\n\n\n(Fixed a bug and slightly modified)\n====\n# killall -9 postgres\n# rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\nmkdir newarch oldarch\ninitdb -k -D oldprim\necho \"archive_mode = 'on'\">> oldprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/oldarch/%f'\">> oldprim/postgresql.conf\npg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\npsql -p 5432 -c 'create table t(a int)'\npg_basebackup -D newprim -p 5432\necho \"primary_conninfo='host=/tmp port=5432'\">> newprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/newarch/%f'\">> newprim/postgresql.conf\ntouch newprim/standby.signal\npg_ctl -D newprim -o '-p 5433' -l newprim.log start\n\n# the last common checkpoint\npsql -p 5432 -c 'checkpoint'\n\n# record approx. 
diverging WAL segment\nstart_wal=`psql -p 5433 -Atc \"select pg_walfile_name(pg_last_wal_replay_lsn() - (select setting from pg_settings where name = 'wal_segment_size')::int);\"`\npsql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'\npg_ctl -D newprim promote\n\n# old rprimary loses diverging WAL segment\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5432 -c 'checkpoint;'\npsql -p 5433 -c 'checkpoint;'\n\n# old primary cannot archive any more\necho \"archive_command = 'false'\">> oldprim/postgresql.conf\npg_ctl -D oldprim reload\npg_ctl -D oldprim stop\n\n# rewind the old primary, using its own archive\n# pg_rewind -D oldprim --source-server='port=5433' # should fail\necho \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/oldarch/%f %p'\">> oldprim/postgresql.conf\npg_rewind -D oldprim --source-server='port=5433' -c\n\n# advance WAL on the old primary; new primary loses the launching WAL seg\nfor i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5433 -c 'checkpoint'\necho \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\ntouch oldprim/standby.signal\n\npostgres -D oldprim # fails with \"WAL file has been removed\"\n\n# The alternative of copying-in\n# echo \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/newarch/%f %p'\">> oldprim/postgresql.conf\n\n# copy-in WAL files from new primary's archive to old primary\n(cd newarch;\nfor f in `ls`; do\n if [[ \"$f\" > \"$start_wal\" ]]; then echo copy $f; cp $f ../oldprim/pg_wal; fi\ndone)\n\npostgres -D oldprim\n====\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 14:50:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kyotaro,\n\nOn Tue, 30 Aug 2022 at 07:50, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n\n\n> So, if I understand you correctly, the issue you are complaining is\n> not about the WAL segments on the old timeline but about those on the\n> new timeline, which don't have a business with what pg_rewind does. As\n> the same with the case of pg_basebackup, the missing segments need to\n> be somehow copied from the new primary since the old primary never had\n> the chance to have them before.\n>\n\nNo, we are complaining exactly about WAL segments from the old timeline\nthat are removed by pg_rewind.\nThose segments haven't been archived by the old primary and the new primary\nalready recycled them.\n\n\n\n\n>\n> Thus I don't follow this..\n>\n\nI did a slight modification of your script that reproduces a problem.\n\n\n====\nmkdir newarch oldarch\ninitdb -k -D oldprim\necho \"archive_mode = 'on'\">> oldprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/oldarch/%f'\">>\noldprim/postgresql.conf\npg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\npsql -p 5432 -c 'create table t(a int)'\npg_basebackup -D newprim -p 5432\necho \"primary_conninfo='host=/tmp port=5432'\">> newprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/newarch/%f'\">>\nnewprim/postgresql.conf\ntouch newprim/standby.signal\npg_ctl -D newprim -o '-p 5433' -l newprim.log start\n\n# the last common checkpoint\npsql -p 5432 -c 'checkpoint'\n\n# old primary cannot archive any more\necho \"archive_command = 'false'\">> oldprim/postgresql.conf\npg_ctl -D oldprim reload\n# advance WAL on the old primary; four WAL segments will never make it to\nthe archive\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select\npg_switch_wal();'; done\n\n# record approx. 
diverging WAL segment\nstart_wal=`psql -p 5432 -Atc \"select\npg_walfile_name(pg_last_wal_replay_lsn() - (select setting from pg_settings\nwhere name = 'wal_segment_size')::int);\"`\npg_ctl -D newprim promote\n\n# old rprimary loses diverging WAL segment\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select\npg_switch_wal();'; done\npsql -p 5432 -c 'checkpoint;'\npsql -p 5433 -c 'checkpoint;'\n\npg_ctl -D oldprim stop\n\n# rewind the old primary, using its own archive\n# pg_rewind -D oldprim --source-server='port=5433' # should fail\necho \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/oldarch/%f %p'\">>\noldprim/postgresql.conf\npg_rewind -D oldprim --source-server='port=5433' -c\n\n# advance WAL on the old primary; new primary loses the launching WAL seg\nfor i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select\npg_switch_wal();'; done\npsql -p 5433 -c 'checkpoint'\necho \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\ntouch oldprim/standby.signal\n\npostgres -D oldprim # fails with \"WAL file has been removed\"\n\n# The alternative of copying-in\n# echo \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/newarch/%f\n%p'\">> oldprim/postgresql.conf\n\n# copy-in WAL files from new primary's archive to old primary\n(cd newarch;\nfor f in `ls`; do\n if [[ \"$f\" > \"$start_wal\" ]]; then echo copy $f; cp $f ../oldprim/pg_wal;\nfi\ndone)\n\npostgres -D oldprim # also fails with \"requested WAL segment XXX has\nalready been removed\"\n===\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 30 Aug 2022 08:49:27 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": ">\n>\n> I did a slight modification of your script that reproduces a problem.\n>\n>\n> ====\n>\n\nIt seems that formatting damaged the script, so I better attach it as a\nfile.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 30 Aug 2022 08:56:10 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Tue, 30 Aug 2022 14:50:26 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> IFAIS pg_rewind doesn't. -c option contrarily restores the all\n> segments after the last (common) checkpoint and all of them are left\n> alone after pg_rewind finishes. postgres itself removes the WAL files\n> after recovery. After-promotion cleanup and checkpoint revmoes the\n> files on the previous timeline.\n> \n> Before pg_rewind runs in the repro below, the old primary has the\n> following segments.\n> \n> TLI1: 2 8 9 A B C D\n> \n> Just after pg_rewind finishes, the old primary has the following\n> segments.\n> \n> TLI1: 2 3 5 6 7\n> TLI2: 4 (and 00000002.history)\n> \n> pg_rewind copied 1-2 to 1-3 and 2-4 and history file from the new\n> primary, 1-4 to 1-7 from archive. After rewind finished, 1-4,1-8 to\n> 1-D have been removed since the new primary didn't have them.\n> \n> Recovery starts from 1-3 and promotes at 0/4_000000. postgres removes\n> 1-5 to 1-7 by post-promotion cleanup and removes 1-2 to 1-4 by a\n> restartpoint. All of the segments are useless after the old primary\n> promotes.\n> \n> When the old primary starts, it uses 1-3 and 2-4 for recovery and\n> fails to fetch 2-5 from the new primary. But it is not an issue of\n> pg_rewind at all.\n\nAh. I think I understand what you are mentioning. If the new primary\ndidn't have the segments 1-3 to 1-6, pg_rewind removes them. The new\nprimary doesn't have them in pg_wal nor in its archive. The old primary has\nthem in its archive. So to get out of the situation, we need to do the\nfollowing *two* things before the old primary can start:\n\n1. copy 1-3 to 1-6 from the archive of the *old* primary\n2. copy 2-7 and later from the archive of the *new* primary\n\nSince pg_rewind has already copied these into the old primary's pg_wal, removing them just forces users to perform the task again, as you stated.\n\nOkay, I completely understand the problem and am convinced that it is\nworth changing the behavior.\n\nHowever, the proposed patch looks too complex to me. It can be done\nby just comparing the xlog file name and the last checkpoint location and\nTLI in decide_file_actions().\n\nregards.\n\n=====\n# killall -9 postgres\n# rm -r oldprim newprim oldarch newarch oldprim.log newprim.log\nmkdir newarch oldarch\ninitdb -k -D oldprim\necho \"archive_mode = 'on'\">> oldprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/oldarch/%f'\">> oldprim/postgresql.conf\npg_ctl -D oldprim -o '-p 5432' -l oldprim.log start\npsql -p 5432 -c 'create table t(a int)'\npg_basebackup -D newprim -p 5432\necho \"primary_conninfo='host=/tmp port=5432'\">> newprim/postgresql.conf\necho \"archive_command = 'echo \"archive %f\" >&2; cp %p `pwd`/newarch/%f'\">> newprim/postgresql.conf\ntouch newprim/standby.signal\npg_ctl -D newprim -o '-p 5433' -l newprim.log start\n\n# the last common checkpoint\npsql -p 5432 -c 'checkpoint'\n\n# record approx.
diverging WAL segment\nstart_wal=`psql -p 5433 -Atc \"select pg_walfile_name(pg_last_wal_replay_lsn() - (select setting from pg_settings where name = 'wal_segment_size')::int);\"`\nfor i in $(seq 1 5); do psql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5432 -c 'checkpoint'\npg_ctl -D newprim promote\n\n# old rprimary loses diverging WAL segment\nfor i in $(seq 1 4); do psql -p 5432 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5432 -c 'checkpoint;'\npsql -p 5433 -c 'checkpoint;'\n\n# old primary cannot archive any more\necho \"archive_command = 'false'\">> oldprim/postgresql.conf\npg_ctl -D oldprim reload\npg_ctl -D oldprim stop\n\n# rewind the old primary, using its own archive\n# pg_rewind -D oldprim --source-server='port=5433' # should fail\necho \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/oldarch/%f %p'\">> oldprim/postgresql.conf\npg_rewind -D oldprim --source-server='port=5433' -c\n\n# advance WAL on the old primary; new primary loses the launching WAL seg\nfor i in $(seq 1 4); do psql -p 5433 -c 'insert into t values(0); select pg_switch_wal();'; done\npsql -p 5433 -c 'checkpoint'\necho \"primary_conninfo='host=/tmp port=5433'\">> oldprim/postgresql.conf\ntouch oldprim/standby.signal\n\n#### copy the missing file of the old timeline\n## cp oldarch/00000001000000000000000[3456] oldprim/pg_wal\n## cp newarch/00000002000000000000000* oldprim/pg_wal\n\npostgres -D oldprim # fails with \"WAL file has been removed\"\n\n\n# The alternative of copying-in\n# echo \"restore_command = 'echo \"restore %f\" >&2; cp `pwd`/newarch/%f %p'\">> oldprim/postgresql.conf\n\n# copy-in WAL files from new primary's archive to old primary\n(cd newarch;\nfor f in `ls`; do\n if [[ \"$f\" > \"$start_wal\" ]]; then echo copy $f; cp $f ../oldprim/pg_wal; fi\ndone)\n\npostgres -D oldprim\n=====\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:39:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Tue, 30 Aug 2022 08:49:27 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> No, we are complaining exactly about WAL segments from the old timeline\n> that are removed by pg_rewind.\n> Those segments haven't been archived by the old primary and the new primary\n> already recycled them.\n\nYeah, sorry for my thick skull but I finally got your point.\n\nAnd as I said in a mail I sent just before, the patch looks too\ncomplex. How about just comparing the WAL file name against the last\ncommon checkpoint's tli and lsn? We can tell filemap.c about the last\ncheckpoint and decide_file_action can compare the file name with it.\n\nIt is sufficient to preserve WAL files if tli matches and the segment\nnumber of the WAL file is equal to or later than the checkpoint\nlocation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:51:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kyotaro,\n\n\nOn Tue, 30 Aug 2022 at 09:51, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n>\n> And as I said in a mail I sent just before, the patch looks too\n> complex. How about just comparing WAL file name aginst the last\n> common checkpoint's tli and lsn? We can tell filemap.c about the last\n> checkpoint and decide_file_action can compare the file name with it.\n>\n> It is sufficient to preserve WAL files if tli matches and the segment\n> number of the WAL file is equal to or later than the checkpoint\n> location.\n>\n\nWhat if the last common checkpoint was on a previous timeline?\nI.e., standby was promoted to primary, the timeline changed from 1 to 2,\nand after that the node crashed _before_ the CHECKPOINT after promote has\nfinished.\nThe next node will advance the timeline from 2 to 3.\nIn this case, the last common checkpoint will be on timeline 1, and the\ncheck becomes more complex because we will have to consider both timelines,\n1 and 2.\n\nAlso, we need to take into account the divergency LSN. Files after it are\nnot required.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 30 Aug 2022 10:03:07 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Tue, 30 Aug 2022 10:03:07 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> On Tue, 30 Aug 2022 at 09:51, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> > And as I said in a mail I sent just before, the patch looks too\n> > complex. How about just comparing WAL file name aginst the last\n> > common checkpoint's tli and lsn? We can tell filemap.c about the last\n> > checkpoint and decide_file_action can compare the file name with it.\n> >\n> > It is sufficient to preserve WAL files if tli matches and the segment\n> > number of the WAL file is equal to or later than the checkpoint\n> > location.\n> >\n> \n> What if the last common checkpoint was on a previous timeline?\n> I.e., standby was promoted to primary, the timeline changed from 1 to 2,\n> and after that the node crashed _before_ the CHECKPOINT after promote has\n> finished.\n> The next node will advance the timeline from 2 to 3.\n> In this case, the last common checkpoint will be on timeline 1, and the\n> check becomes more complex because we will have to consider both timelines,\n> 1 and 2.\n\nHmm. Doesn't it work to ignore tli then? All segments whose\nsegment number is equal to or larger than the checkpoint location are\npreserved regardless of TLI?\n\n> Also, we need to take into account the divergency LSN. Files after it are\n> not required.\n\nThey are removed at the later checkpoints. But also we can remove\nsegments that are out of the range between the last common checkpoint\nand the divergence point, ignoring TLI, if the divergence point is also\ncompared:\n\n> if (file_segno >= last_common_checkpoint_seg &&\n> file_segno <= divergence_seg)\n> <PRESERVE IT>;\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:27:30 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On Tue, 30 Aug 2022 at 10:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> Hmm. Doesn't it work to ignoring tli then? All segments that their\n> segment number is equal to or larger than the checkpoint locaiton are\n> preserved regardless of TLI?\n>\n\nIf we ignore TLI there is a chance that we may retain some unnecessary (or\njust wrong) files.\n\n\n>\n> > Also, we need to take into account the divergency LSN. Files after it are\n> > not required.\n>\n> They are removed at the later checkpoints. But also we can remove\n> segments that are out of the range between the last common checkpoint\n> and divergence point ignoring TLI.\n\n\nEverything that is newer than last_common_checkpoint_seg could be removed (but\nit already happens automatically, because these files are missing on the\nnew primary).\nWAL files that are older than last_common_checkpoint_seg could be either\nremoved or at least not copied from the new primary.\n\n\n\n> the divergence point is also\n> compared?\n>\n> > if (file_segno >= last_common_checkpoint_seg &&\n> > file_segno <= divergence_seg)\n> > <PRESERVE IT>;\n>\n\nThe current implementation relies on tracking WAL files being open while\nsearching for the last common checkpoint. It automatically starts from the\ndivergence_seg, automatically finishes at last_common_checkpoint_seg, and\nlast but not least, automatically handles timeline changes. I don't think\nthat manually written code that decides what to do from the WAL file name\n(and also takes into account TLI) could be much simpler than the current\napproach.\n\n\nActually, since we start doing some additional \"manipulations\" with files\nin pg_wal, we probably should do a symmetric action with files inside\npg_wal/archive_status.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 30 Aug 2022 11:01:58 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Tue, 30 Aug 2022 11:01:58 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> On Tue, 30 Aug 2022 at 10:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n> >\n> > Hmm. Doesn't it work to ignoring tli then? All segments that their\n> > segment number is equal to or larger than the checkpoint locaiton are\n> > preserved regardless of TLI?\n> >\n> \n> If we ignore TLI there is a chance that we may retain some unnecessary (or\n> just wrong) files.\n\nRight. I mean I don't think thats a problem and we can rely on\npostgres itself for later cleanup. Theoretically some out-of-range tli\nor segno files are left alone but they surely will be gone soon after\nthe server starts.\n\n> > > Also, we need to take into account the divergency LSN. Files after it are\n> > > not required.\n> >\n> > They are removed at the later checkpoints. But also we can remove\n> > segments that are out of the range between the last common checkpoint\n> > and divergence point ignoring TLI.\n> \n> \n> Everything that is newer last_common_checkpoint_seg could be removed (but\n> it already happens automatically, because these files are missing on the\n> new primary).\n> WAL files that are older than last_common_checkpoint_seg could be either\n> removed or at least not copied from the new primary.\n..\n> The current implementation relies on tracking WAL files being open while\n> searching for the last common checkpoint. It automatically starts from the\n> divergence_seg, automatically finishes at last_common_checkpoint_seg, and\n> last but not least, automatically handles timeline changes. I don't think\n> that manually written code that decides what to do from the WAL file name\n> (and also takes into account TLI) could be much simpler than the current\n> approach.\n\nYeah, I know. My expectation is taking the simplest way for the same\neffect. My concern was the additional hash. 
On second thought, I\nconcluded that we should that on the existing filehash.\n\nWe can just add a FILE_ACTION_NONE entry to the file hash from\nSimpleXLogPageRead. Since this happens before decide_file_action()\ncall, decide_file_action() should ignore the entries with\nFILE_ACTION_NONE. Also we need to call filehash_init() earlier.\n\n> Actually, since we start doing some additional \"manipulations\" with files\n> in pg_wal, we probably should do a symmetric action with files inside\n> pg_wal/archive_status\n\nIn that sense, pg_rewind rather should place missing\narchive_status/*.done for segments including restored ones seen while\nfinding checkpoint. This is analogous of the behavior with\npg_basebackup and pg_receivewal. Also we should add FILE_ACTION_NONE\nentries for .done files for segments read while finding checkpoint.\n\nWhat do you think about that?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:30:31 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Wed, 31 Aug 2022 14:30:31 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> What do you think about that?\n\nBy the way don't you add an CF entry for this?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 31 Aug 2022 14:36:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kayotaro,\n\n\nHere is the new version of the patch that includes the changes you\nsuggested. It is smaller now but I doubt if it is as easy to understand as\nit used to be.\n\n\nThe need of manipulations with the target’s pg_wal/archive_status directory\nis a question to discuss…\n\nAt first glance it seems to be useless for .ready files: checkpointer\nprocess will anyway recreate them if archiving is enabled on the rewound\nold primary and we will finally have them in the archive. As for the .done\nfiles, it seems reasonable to follow the pg_basebackup logic and keep .done\nfiles together with the corresponding segments (those between the last\ncommon checkpoint and the point of divergence) to protect them from being\narchived once again.\n\nBut on the other hand it seems to be not that straightforward: imaging we\nhave WAL segment X on the target along with X.done file and we decide to\npreserve them both (or we download it from archive and force .done file\ncreation), while archive_mode was set to ‘always’ and the source (promoted\nreplica) also still has WAL segment X and X.ready file. After pg_rewind we\nwill end up with both X.ready and X.done, which seems to be not a good\nsituation (but most likely not critical either).\n\n\nRegards,\n\nPolina Bungina\n\n\nOn Wed, Aug 31, 2022 at 7:30 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Tue, 30 Aug 2022 11:01:58 +0200, Alexander Kukushkin <\n> cyberdemn@gmail.com> wrote in\n> > On Tue, 30 Aug 2022 at 10:27, Kyotaro Horiguchi <horikyota.ntt@gmail.com\n> >\n> > wrote:\n> >\n> > >\n> > > Hmm. Doesn't it work to ignoring tli then? All segments that their\n> > > segment number is equal to or larger than the checkpoint locaiton are\n> > > preserved regardless of TLI?\n> > >\n> >\n> > If we ignore TLI there is a chance that we may retain some unnecessary\n> (or\n> > just wrong) files.\n>\n> Right. 
I mean I don't think thats a problem and we can rely on\n> postgres itself for later cleanup. Theoretically some out-of-range tli\n> or segno files are left alone but they surely will be gone soon after\n> the server starts.\n>\n> > > > Also, we need to take into account the divergency LSN. Files after\n> it are\n> > > > not required.\n> > >\n> > > They are removed at the later checkpoints. But also we can remove\n> > > segments that are out of the range between the last common checkpoint\n> > > and divergence point ignoring TLI.\n> >\n> >\n> > Everything that is newer last_common_checkpoint_seg could be removed (but\n> > it already happens automatically, because these files are missing on the\n> > new primary).\n> > WAL files that are older than last_common_checkpoint_seg could be either\n> > removed or at least not copied from the new primary.\n> ..\n> > The current implementation relies on tracking WAL files being open while\n> > searching for the last common checkpoint. It automatically starts from\n> the\n> > divergence_seg, automatically finishes at last_common_checkpoint_seg, and\n> > last but not least, automatically handles timeline changes. I don't think\n> > that manually written code that decides what to do from the WAL file name\n> > (and also takes into account TLI) could be much simpler than the current\n> > approach.\n>\n> Yeah, I know. My expectation is taking the simplest way for the same\n> effect. My concern was the additional hash. On second thought, I\n> concluded that we should that on the existing filehash.\n>\n> We can just add a FILE_ACTION_NONE entry to the file hash from\n> SimpleXLogPageRead. Since this happens before decide_file_action()\n> call, decide_file_action() should ignore the entries with\n> FILE_ACTION_NONE. 
Also we need to call filehash_init() earlier.\n>\n> > Actually, since we start doing some additional \"manipulations\" with files\n> > in pg_wal, we probably should do a symmetric action with files inside\n> > pg_wal/archive_status\n>\n> In that sense, pg_rewind rather should place missing\n> archive_status/*.done for segments including restored ones seen while\n> finding checkpoint. This is analogous of the behavior with\n> pg_basebackup and pg_receivewal. Also we should add FILE_ACTION_NONE\n> entries for .done files for segments read while finding checkpoint.\n>\n> What do you think about that?\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Thu, 1 Sep 2022 13:33:09 +0200",
"msg_from": "Polina Bungina <bungina@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Terribly sorry for misspelling your name and for the topposting!\n\nRegards,\nPolina Bungina\n\nTerribly sorry for misspelling your name and for the topposting!Regards,Polina Bungina",
"msg_date": "Thu, 1 Sep 2022 13:58:04 +0200",
"msg_from": "Polina Bungina <bungina@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hello Kyotaro,\n\nany further thoughts on it?\n\nRegards,\n--\nAlexander Kukushkin\n\nHello Kyotaro,any further thoughts on it?Regards,--Alexander Kukushkin",
"msg_date": "Mon, 26 Sep 2022 09:08:25 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Thu, 1 Sep 2022 13:33:09 +0200, Polina Bungina <bungina@gmail.com> wrote in \n> Here is the new version of the patch that includes the changes you\n> suggested. It is smaller now but I doubt if it is as easy to understand as\n> it used to be.\n\npg_rewind works in two steps. First it constructs file map which\ndecides the action for each file, then second, it performs file\noperations according to the file map. So, if we are going to do\nsomething on some files, that action should be record that in the file\nmap, I think.\n\nRegarding the the patch, pg_rewind starts reading segments from the\ndivergence point back to the nearest checkpoint, then moves foward\nduring rewinding. So, the fact that SimpleXLogPageRead have read a\nsegment suggests that the segment is required during the next startup.\nSo I don't think we need to move around the keepWalSeg flag. All\nfiles that are wanted while rewinding should be preserved\nunconditionally.\n\nIt's annoying that the file path for file map and open(2) have\ndifferent top directory. But sharing the same path string between the\ntwo seems rather ugly..\n\nI feel uncomfortable to directly touch the internal of file_entry_t\noutside filemap.c. I'd like to hide the internals in filemap.c, but\npg_rewind already does that..\n\n+\t\t/*\n+\t\t * Some entries (WAL segments) already have an action assigned\n+\t\t * (see SimpleXLogPageRead()).\n+\t\t */\n+\t\tif (entry->action == FILE_ACTION_NONE)\n+\t\t\tcontinue;\n \t\tentry->action = decide_file_action(entry);\n\nIt might be more reasonable to call decide_file_action() when action\nis UNDECIDED.\n\n> The need of manipulations with the target’s pg_wal/archive_status directory\n> is a question to discuss…\n>\n> At first glance it seems to be useless for .ready files: checkpointer\n> process will anyway recreate them if archiving is enabled on the rewound\n> old primary and we will finally have them in the archive. 
As for the .done\n> files, it seems reasonable to follow the pg_basebackup logic and keep .done\n> files together with the corresponding segments (those between the last\n> common checkpoint and the point of divergence) to protect them from being\n> archived once again.\n> \n> But on the other hand it seems to be not that straightforward: imaging we\n> have WAL segment X on the target along with X.done file and we decide to\n> preserve them both (or we download it from archive and force .done file\n> creation), while archive_mode was set to ‘always’ and the source (promoted\n> replica) also still has WAL segment X and X.ready file. After pg_rewind we\n> will end up with both X.ready and X.done, which seems to be not a good\n> situation (but most likely not critical either).\n\nThanks for the thought. Yes, it's not so straight-forward. And, as you\nmentioned, the worst result comes from not doing that is that some\nalready-archived segments are archived at next run, which is generally\nharmless. So I think we're ok to ignore that in this patdh then create\nother patch if we still want to do that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 27 Sep 2022 16:50:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 9:50 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> Regarding the the patch, pg_rewind starts reading segments from the\n> divergence point back to the nearest checkpoint, then moves foward\n> during rewinding. So, the fact that SimpleXLogPageRead have read a\n> segment suggests that the segment is required during the next startup.\n> So I don't think we need to move around the keepWalSeg flag. All\n> files that are wanted while rewinding should be preserved\n> unconditionally.\n>\n\nI am probably not getting this right but as far as I see SimpleXLogPageRead\nis called at most 3 times during pg_rewind run:\n1. From readOneRecord to determine the end-of-WAL on the target by reading\nthe last shutdown checkpoint record/minRecoveryPoint on it\n2. From findLastCheckpoint to find last common checkpoint (here it\nindeed reads all the segments that are required during the startup, hence\nthe keepWalSeg flag set to true)\n3. From extractPageMap to extract all the pages modified after the fork\n(here we also read all the segments that should be kept but also the ones\nfurther, until the target's end record. Doesn't seem we should\nunconditionally preserve them all).\nAm I missing something?\n\n\n\n> + /*\n> + * Some entries (WAL segments) already have an action\n> assigned\n> + * (see SimpleXLogPageRead()).\n> + */\n> + if (entry->action == FILE_ACTION_NONE)\n> + continue;\n> entry->action = decide_file_action(entry);\n\nIt might be more reasonable to call decide_file_action() when action\n> is UNDECIDED.\n>\n\nAgree, will change this part.\n\nRegards,\nPolina Bungina\n\nOn Tue, Sep 27, 2022 at 9:50 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:Regarding the the patch, pg_rewind starts reading segments from thedivergence point back to the nearest checkpoint, then moves fowardduring rewinding. 
So, the fact that SimpleXLogPageRead have read asegment suggests that the segment is required during the next startup.So I don't think we need to move around the keepWalSeg flag. Allfiles that are wanted while rewinding should be preservedunconditionally.I am probably not getting this right but as far as I see SimpleXLogPageRead is called at most 3 times during pg_rewind run:1. From readOneRecord to determine the end-of-WAL on the target by reading the last shutdown checkpoint record/minRecoveryPoint on it2. From findLastCheckpoint to find last common checkpoint (here it indeed reads all the segments that are required during the startup, hence the keepWalSeg flag set to true)3. From extractPageMap to extract all the pages modified after the fork (here we also read all the segments that should be kept but also the ones further, until the target's end record. Doesn't seem we should unconditionally preserve them all).Am I missing something? + /*+ * Some entries (WAL segments) already have an action assigned+ * (see SimpleXLogPageRead()).+ */+ if (entry->action == FILE_ACTION_NONE)+ continue; entry->action = decide_file_action(entry);It might be more reasonable to call decide_file_action() when actionis UNDECIDED. Agree, will change this part. Regards,Polina Bungina",
"msg_date": "Wed, 28 Sep 2022 10:09:05 +0200",
"msg_from": "Polina Bungina <bungina@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Wed, 28 Sep 2022 10:09:05 +0200, Polina Bungina <bungina@gmail.com> wrote in \n> On Tue, Sep 27, 2022 at 9:50 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\n> wrote:\n> \n> > Regarding the the patch, pg_rewind starts reading segments from the\n> > divergence point back to the nearest checkpoint, then moves foward\n> > during rewinding. So, the fact that SimpleXLogPageRead have read a\n> > segment suggests that the segment is required during the next startup.\n> > So I don't think we need to move around the keepWalSeg flag. All\n> > files that are wanted while rewinding should be preserved\n> > unconditionally.\n> >\n> \n> I am probably not getting this right but as far as I see SimpleXLogPageRead\n> is called at most 3 times during pg_rewind run:\n> 1. From readOneRecord to determine the end-of-WAL on the target by reading\n> the last shutdown checkpoint record/minRecoveryPoint on it\n> 2. From findLastCheckpoint to find last common checkpoint (here it\n> indeed reads all the segments that are required during the startup, hence\n> the keepWalSeg flag set to true)\n> 3. From extractPageMap to extract all the pages modified after the fork\n> (here we also read all the segments that should be kept but also the ones\n> further, until the target's end record. Doesn't seem we should\n> unconditionally preserve them all).\n> Am I missing something?\n\nNo. You're right. I have to admit that I was confused at the time X(,\nsorry for that. Those extra files are I believe harmless but of\ncourse it's preferable to avoid them. So the keepWalSeg is useful.\n\nSo the latest version become very similar to v1 in that the both have\nkeepWalSeg flag. The difference is the need of the file name hash. I\nstill think that it's better if we don't need the additional file\nhash. 
If we move out the bare code in v2 added to\nSimpleXLogPageRead(), then name it \"preserve_file(char *filepath)\",\nthe code become more easy to read.\n\n+\t\tif (private->keepWalSeg)\n+\t\t{\n+\t\t\t/* the caller told us to preserve this file for future use */\n+\t\t\tsnprintf(xlogfpath, MAXPGPATH, XLOGDIR \"/%s\", xlogfname);\n+ preserve_file(xlogfpath);\n+\t\t}\n\nInstead, I think we should add a comment here:\n\n@@ -192,6 +195,7 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,\n \n > \tprivate.tliIndex = tliIndex;\n > \tprivate.restoreCommand = restoreCommand;\n++\t/*\n++ * WAL files read during searching for the last checkpoint are required\n++ * by the next startup recovery of the target cluster.\n++ */\n >+\tprivate.keepWalSeg = true;\n\nWhat do you think about the above?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 28 Sep 2022 18:17:39 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "I agree with your suggestions, so here is the updated version of patch.\nHope I haven't missed anything.\n\nRegards,\nPolina Bungina",
"msg_date": "Thu, 29 Sep 2022 10:18:43 +0200",
"msg_from": "Polina Bungina <bungina@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2022-09-29 17:18, Polina Bungina wrote:\n> I agree with your suggestions, so here is the updated version of\n> patch. Hope I haven't missed anything.\n> \n> Regards,\n> Polina Bungina\n\nThanks for working on this!\nIt seems like we are also facing the same issue.\n\nI tested the v3 patch under our condition, old primary has succeeded to \nbecome new standby.\n\n\nBTW when I used pg_rewind-removes-wal-segments-reproduce.sh attached in \n[1], old primary also failed to become standby:\n\n FATAL: could not receive data from WAL stream: ERROR: requested WAL \nsegment 000000020000000000000007 has already been removed\n\nHowever, I think this is not a problem: just adding restore_command \nlike below fixed the situation.\n\n echo \"restore_command = '/bin/cp `pwd`/newarch/%f %p'\" >> \noldprim/postgresql.conf\n\nAttached modified reproduction script for reference.\n\n[1]https://www.postgresql.org/message-id/CAFh8B%3DnNiFZOAPsv49gffxHBPzwmZ%3D6Msd4miMis87K%3Dd9rcRA%40mail.gmail.com\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Wed, 28 Jun 2023 22:28:13 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Wed, 28 Jun 2023 22:28:13 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> \n> On 2022-09-29 17:18, Polina Bungina wrote:\n> > I agree with your suggestions, so here is the updated version of\n> > patch. Hope I haven't missed anything.\n> > Regards,\n> > Polina Bungina\n> \n> Thanks for working on this!\n> It seems like we are also facing the same issue.\n\nThanks for looking this.\n\n> I tested the v3 patch under our condition, old primary has succeeded\n> to become new standby.\n> \n> \n> BTW when I used pg_rewind-removes-wal-segments-reproduce.sh attached\n> in [1], old primary also failed to become standby:\n> \n> FATAL: could not receive data from WAL stream: ERROR: requested WAL\n> segment 000000020000000000000007 has already been removed\n> \n> However, I think this is not a problem: just adding restore_command\n> like below fixed the situation.\n> \n> echo \"restore_command = '/bin/cp `pwd`/newarch/%f %p'\" >>\n> oldprim/postgresql.conf\n\nI thought on the same line at first, but that's not the point\nhere. The problem we want ot address is that pg_rewind ultimately\nremoves certain crucial WAL files required for the new primary to\nstart, despite them being present previously. In other words, that\nrestore_command works, but it only undoes what pg_rewind wrongly did,\nresulting in unnecessary consupmtion of I/O and/or network bandwidth\nthat essentially serves no purpose.\n\npg_rewind already has a feature that determines how each file should\nbe handled, but it is currently making wrong dicisions for WAL\nfiles. The goal here is to rectify this behavior and ensure that\npg_rewind makes the right decisions.\n\n> Attached modified reproduction script for reference.\n> \n> [1]https://www.postgresql.org/message-id/CAFh8B%3DnNiFZOAPsv49gffxHBPzwmZ%3D6Msd4miMis87K%3Dd9rcRA%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 29 Jun 2023 10:25:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2023-06-29 10:25, Kyotaro Horiguchi wrote:\nThanks for the comment!\n\n> At Wed, 28 Jun 2023 22:28:13 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> \n>> On 2022-09-29 17:18, Polina Bungina wrote:\n>> > I agree with your suggestions, so here is the updated version of\n>> > patch. Hope I haven't missed anything.\n>> > Regards,\n>> > Polina Bungina\n>> \n>> Thanks for working on this!\n>> It seems like we are also facing the same issue.\n> \n> Thanks for looking this.\n> \n>> I tested the v3 patch under our condition, old primary has succeeded\n>> to become new standby.\n>> \n>> \n>> BTW when I used pg_rewind-removes-wal-segments-reproduce.sh attached\n>> in [1], old primary also failed to become standby:\n>> \n>> FATAL: could not receive data from WAL stream: ERROR: requested WAL\n>> segment 000000020000000000000007 has already been removed\n>> \n>> However, I think this is not a problem: just adding restore_command\n>> like below fixed the situation.\n>> \n>> echo \"restore_command = '/bin/cp `pwd`/newarch/%f %p'\" >>\n>> oldprim/postgresql.conf\n> \n> I thought on the same line at first, but that's not the point\n> here.\n\nYes. I don't think adding restore_command solves the problem and\nmodification to prevent deleting necessary WAL like proposed\npatch is necessary.\n\nI added restore_command since\npg_rewind-removes-wal-segments-reproduce.sh failed to catch up\neven after applying v3 patch and prevent pg_rewind from delete\nWALs(*), because some necessary WALs were archived.\n\nIt's not a problem we are discussing here, but I wanted to get\nthe script to work to the point where old primary could\nsuccessfully catch up to new primary.\n\n(*)Specifically, running the script without apply the patch,\nrecovery failed because 000000010000000000000003 which has\nalready been removed. 
This file was deleted by pg_rewind as\nwe know.\nOTHO without the restore_command, recovery failed because\n000000020000000000000007 has already been removed even after\napplying the patch.\n\n> The problem we want ot address is that pg_rewind ultimately\n> removes certain crucial WAL files required for the new primary to\n> start, despite them being present previously.\n\nI thought it's not \"new primary\", but \"old primary\".\n\n> In other words, that\n> restore_command works, but it only undoes what pg_rewind wrongly did,\n> resulting in unnecessary consupmtion of I/O and/or network bandwidth\n> that essentially serves no purpose.\n\nAs far as I tested using the script and the situation we are facing,\nafter promoting newprim necessary WAL(000000010000000000000003..) were\nnot available and just adding restore_command did not solve the problem.\n\n> pg_rewind already has a feature that determines how each file should\n> be handled, but it is currently making wrong dicisions for WAL\n> files. The goal here is to rectify this behavior and ensure that\n> pg_rewind makes the right decisions.\n\n+1\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 29 Jun 2023 18:42:37 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2022-09-29 17:18, Polina Bungina wrote:\n> I agree with your suggestions, so here is the updated version of\n> patch. Hope I haven't missed anything.\n\nThanks for the patch, I've marked this as ready-for-committer.\n\nBTW, this issue can be considered a bug, right?\nI think it would be appropriate to provide backpatch.\n\nOn 2023-06-29 18:42, torikoshia wrote:\n> On 2023-06-29 10:25, Kyotaro Horiguchi wrote:\n> Thanks for the comment!\n> \n>> At Wed, 28 Jun 2023 22:28:13 +0900, torikoshia\n>> <torikoshia@oss.nttdata.com> wrote in\n>>> \n>>> On 2022-09-29 17:18, Polina Bungina wrote:\n>>> > I agree with your suggestions, so here is the updated version of\n>>> > patch. Hope I haven't missed anything.\n>>> > Regards,\n>>> > Polina Bungina\n>>> \n>>> Thanks for working on this!\n>>> It seems like we are also facing the same issue.\n>> \n>> Thanks for looking this.\n>> \n>>> I tested the v3 patch under our condition, old primary has succeeded\n>>> to become new standby.\n>>> \n>>> \n>>> BTW when I used pg_rewind-removes-wal-segments-reproduce.sh attached\n>>> in [1], old primary also failed to become standby:\n>>> \n>>> FATAL: could not receive data from WAL stream: ERROR: requested WAL\n>>> segment 000000020000000000000007 has already been removed\n>>> \n>>> However, I think this is not a problem: just adding restore_command\n>>> like below fixed the situation.\n>>> \n>>> echo \"restore_command = '/bin/cp `pwd`/newarch/%f %p'\" >>\n>>> oldprim/postgresql.conf\n>> \n>> I thought on the same line at first, but that's not the point\n>> here.\n> \n> Yes. 
I don't think adding restore_command solves the problem and\n> modification to prevent deleting necessary WAL like proposed\n> patch is necessary.\n> \n> I added restore_command since\n> pg_rewind-removes-wal-segments-reproduce.sh failed to catch up\n> even after applying v3 patch and prevent pg_rewind from delete\n> WALs(*), because some necessary WALs were archived.\n> \n> It's not a problem we are discussing here, but I wanted to get\n> the script to work to the point where old primary could\n> successfully catch up to new primary.\n> \n> (*)Specifically, running the script without apply the patch,\n> recovery failed because 000000010000000000000003 which has\n> already been removed. This file was deleted by pg_rewind as\n> we know.\n> OTHO without the restore_command, recovery failed because\n> 000000020000000000000007 has already been removed even after\n> applying the patch.\n> \n>> The problem we want ot address is that pg_rewind ultimately\n>> removes certain crucial WAL files required for the new primary to\n>> start, despite them being present previously.\n> \n> I thought it's not \"new primary\", but \"old primary\".\n> \n>> In other words, that\n>> restore_command works, but it only undoes what pg_rewind wrongly did,\n>> resulting in unnecessary consupmtion of I/O and/or network bandwidth\n>> that essentially serves no purpose.\n> \n> As far as I tested using the script and the situation we are facing,\n> after promoting newprim necessary WAL(000000010000000000000003..) were\n> not available and just adding restore_command did not solve the \n> problem.\n> \n>> pg_rewind already has a feature that determines how each file should\n>> be handled, but it is currently making wrong dicisions for WAL\n>> files. The goal here is to rectify this behavior and ensure that\n>> pg_rewind makes the right decisions.\n> \n> +1\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 18 Aug 2023 15:40:57 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On Fri, Aug 18, 2023 at 03:40:57PM +0900, torikoshia wrote:\n> Thanks for the patch, I've marked this as ready-for-committer.\n> \n> BTW, this issue can be considered a bug, right?\n> I think it would be appropriate to provide backpatch.\n\nHmm, I agree that there is a good argument in back-patching as we have\nthe WAL files between the redo LSN and the divergence LSN, but\npg_rewind is not smart enough to keep them around. If the archives of\nthe primary were not able to catch up, the old primary is as good as\nkaput, and restore_command won't help here.\n\nI don't like much this patch. While it takes correctly advantage of\nthe backward record read logic from SimpleXLogPageRead() able to\nhandle correctly timeline jumps, it creates a hidden dependency in the\ncode between the hash table from filemap.c and the page callback.\nWouldn't it be simpler to build a list of the segment names using the\ninformation from WALOpenSegment and build this list in\nfindLastCheckpoint()? Also, I am wondering if we should be smarter\nwith any potential conflict handling between the source and the\ntarget, rather than just enforcing a FILE_ACTION_NONE for all these\nfiles. In short, could it be better to copy the WAL file from the\nsource if it exists there?\n\n+ /*\n+ * Some entries (WAL segments) already have an action assigned\n+ * (see SimpleXLogPageRead()).\n+ */\n+ if (entry->action == FILE_ACTION_UNDECIDED)\n+ entry->action = decide_file_action(entry);\n\nThis change makes me a bit uneasy, per se my previous comment with the\nadditional code dependencies.\n\nI think that this scenario deserves a test case. If one wants to\nemulate a delay in WAL archiving, it is possible to set\narchive_command to a command that we know will fail, for instance.\n--\nMichael",
"msg_date": "Tue, 22 Aug 2023 14:32:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2023-08-22 14:32, Michael Paquier wrote:\nThanks for your review!\n\n> On Fri, Aug 18, 2023 at 03:40:57PM +0900, torikoshia wrote:\n>> Thanks for the patch, I've marked this as ready-for-committer.\n>> \n>> BTW, this issue can be considered a bug, right?\n>> I think it would be appropriate to provide backpatch.\n> \n> Hmm, I agree that there is a good argument in back-patching as we have\n> the WAL files between the redo LSN and the divergence LSN, but\n> pg_rewind is not smart enough to keep them around. If the archives of\n> the primary were not able to catch up, the old primary is as good as\n> kaput, and restore_command won't help here.\n\nTrue.\nI also imagine that in the typical failover scenario where the target \ncluster was shut down soon after the divergence and pg_rewind was \nexecuted without much time, we can avoid this kind of 'requested WAL \nsegment has already removed' error by preventing pg_rewind from deleting \nnecessary WALs.\n\n\n> I don't like much this patch. While it takes correctly advantage of\n> the backward record read logic from SimpleXLogPageRead() able to\n> handle correctly timeline jumps, it creates a hidden dependency in the\n> code between the hash table from filemap.c and the page callback.\n> Wouldn't it be simpler to build a list of the segment names using the\n> information from WALOpenSegment and build this list in\n> findLastCheckpoint()? Also, I am wondering if we should be smarter\n> with any potential conflict handling between the source and the\n> target, rather than just enforcing a FILE_ACTION_NONE for all these\n> files. 
In short, could it be better to copy the WAL file from the\n> source if it exists there?\n> \n> + /*\n> + * Some entries (WAL segments) already have an action assigned\n> + * (see SimpleXLogPageRead()).\n> + */\n> + if (entry->action == FILE_ACTION_UNDECIDED)\n> + entry->action = decide_file_action(entry);\n> \n> This change makes me a bit uneasy, per se my previous comment with the\n> additional code dependencies.\n> \n> I think that this scenario deserves a test case. If one wants to\n> emulate a delay in WAL archiving, it is possible to set\n> archive_command to a command that we know will fail, for instance.\n> --\n> Michael\n\nBungina, are you going to respond to these comments?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Wed, 23 Aug 2023 18:04:18 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi,\n\n\n\nOn Tue, 22 Aug 2023 at 07:32, Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n>\n> I don't like much this patch. While it takes correctly advantage of\n> the backward record read logic from SimpleXLogPageRead() able to\n> handle correctly timeline jumps, it creates a hidden dependency in the\n> code between the hash table from filemap.c and the page callback.\n> Wouldn't it be simpler to build a list of the segment names using the\n> information from WALOpenSegment and build this list in\n> findLastCheckpoint()?\n\n\nI think the first version of the patch more or less did that. Not\nnecessarily a list, but a hash table of WAL file names that we want to\nkeep. But Kyotaro Horiguchi didn't like it and suggested creating entries\nin the filemap.c hash table instead.\nBut, I agree, doing it directly from the findLastCheckpoint() makes the\ncode easier to understand.\n\n\n\n> Also, I am wondering if we should be smarter\n> with any potential conflict handling between the source and the\n> target, rather than just enforcing a FILE_ACTION_NONE for all these\n> files. In short, could it be better to copy the WAL file from the\n> source if it exists there?\n>\n\nBefore the switchpoint these files are supposed to be the same on the old\nprimary, new primary, and also in the archive. 
Also, if there is a\nrestore_command postgres will fetch the same file from the archive even if\nit already exists in pg_wal, which effectively discards all pg_rewind\nefforts on copying WAL files.\n\n\n>\n> + /*\n> + * Some entries (WAL segments) already have an action assigned\n> + * (see SimpleXLogPageRead()).\n> + */\n> + if (entry->action == FILE_ACTION_UNDECIDED)\n> + entry->action = decide_file_action(entry);\n>\n> This change makes me a bit uneasy, per se my previous comment with the\n> additional code dependencies.\n>\n\nWe can revert to the original approach (see\nv1-0001-pg_rewind-wal-deletion.patch from the very first email) if you like.\n\n\n> I think that this scenario deserves a test case. If one wants to\n> emulate a delay in WAL archiving, it is possible to set\n> archive_command to a command that we know will fail, for instance.\n>\n\nYes, I totally agree, it is on our radar, but meanwhile please see the new\nversion, just to check if I correctly understood your idea.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Wed, 23 Aug 2023 13:44:52 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "At Wed, 23 Aug 2023 13:44:52 +0200, Alexander Kukushkin <cyberdemn@gmail.com> wrote in \n> On Tue, 22 Aug 2023 at 07:32, Michael Paquier <michael@paquier.xyz> wrote:\n> > I don't like much this patch. While it takes correctly advantage of\n> > the backward record read logic from SimpleXLogPageRead() able to\n> > handle correctly timeline jumps, it creates a hidden dependency in the\n> > code between the hash table from filemap.c and the page callback.\n> > Wouldn't it be simpler to build a list of the segment names using the\n> > information from WALOpenSegment and build this list in\n> > findLastCheckpoint()?\n> \n> I think the first version of the patch more or less did that. Not\n> necessarily a list, but a hash table of WAL file names that we want to\n> keep. But Kyotaro Horiguchi didn't like it and suggested creating entries\n> in the filemap.c hash table instead.\n> But, I agree, doing it directly from the findLastCheckpoint() makes the\n> code easier to understand.\n...\n> > + /*\n> > + * Some entries (WAL segments) already have an action assigned\n> > + * (see SimpleXLogPageRead()).\n> > + */\n> > + if (entry->action == FILE_ACTION_UNDECIDED)\n> > + entry->action = decide_file_action(entry);\n> >\n> > This change makes me a bit uneasy, per se my previous comment with the\n> > additional code dependencies.\n> >\n> \n> We can revert to the original approach (see\n> v1-0001-pg_rewind-wal-deletion.patch from the very first email) if you like.\n\nOn the other hand, that approach brings in another source that\nsuggests the way that file should be handled. I still think that\nentry->action should be the only source. However, it seems I'm in the\nminority here. So I'm not tied to that approach.\n\n> > I think that this scenario deserves a test case. 
If one wants to\n> > emulate a delay in WAL archiving, it is possible to set\n> > archive_command to a command that we know will fail, for instance.\n> >\n> \n> Yes, I totally agree, it is on our radar, but meanwhile please see the new\n> version, just to check if I correctly understood your idea.\n\nAgreed.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 24 Aug 2023 09:45:38 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2023-08-24 09:45, Kyotaro Horiguchi wrote:\n> At Wed, 23 Aug 2023 13:44:52 +0200, Alexander Kukushkin\n> <cyberdemn@gmail.com> wrote in\n>> On Tue, 22 Aug 2023 at 07:32, Michael Paquier <michael@paquier.xyz> \n>> wrote:\n>> > I don't like much this patch. While it takes correctly advantage of\n>> > the backward record read logic from SimpleXLogPageRead() able to\n>> > handle correctly timeline jumps, it creates a hidden dependency in the\n>> > code between the hash table from filemap.c and the page callback.\n>> > Wouldn't it be simpler to build a list of the segment names using the\n>> > information from WALOpenSegment and build this list in\n>> > findLastCheckpoint()?\n>> \n>> I think the first version of the patch more or less did that. Not\n>> necessarily a list, but a hash table of WAL file names that we want to\n>> keep. But Kyotaro Horiguchi didn't like it and suggested creating \n>> entries\n>> in the filemap.c hash table instead.\n>> But, I agree, doing it directly from the findLastCheckpoint() makes \n>> the\n>> code easier to understand.\n> ...\n>> > + /*\n>> > + * Some entries (WAL segments) already have an action assigned\n>> > + * (see SimpleXLogPageRead()).\n>> > + */\n>> > + if (entry->action == FILE_ACTION_UNDECIDED)\n>> > + entry->action = decide_file_action(entry);\n>> >\n>> > This change makes me a bit uneasy, per se my previous comment with the\n>> > additional code dependencies.\n>> >\n>> \n>> We can revert to the original approach (see\n>> v1-0001-pg_rewind-wal-deletion.patch from the very first email) if you \n>> like.\n> \n> On the other hand, that approach brings in another source that\n> suggests the way that file should be handled. 
I still think that\n> entry->action should be the only source.\n\n+1.\nImaging a case when we come to need decide how to treat files based on \nyet another factor, I feel that a single source of truth is better than \ncreating a list or hash for each factor.\n\n> However, it seems I'm in the\n> minority here. So I'm not tied to that approach.\n> \n>> > I think that this scenario deserves a test case. If one wants to\n>> > emulate a delay in WAL archiving, it is possible to set\n>> > archive_command to a command that we know will fail, for instance.\n>> >\n>> \n>> Yes, I totally agree, it is on our radar, but meanwhile please see the \n>> new\n>> version, just to check if I correctly understood your idea.\n\nThanks for the patch.\nI tested v4 patch using the script attached below thread and it has \nsuccessfully finished.\n\nhttps://www.postgresql.org/message-id/2e75ae22dce9a227c3d47fa6d0ed094a%40oss.nttdata.com\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Tue, 29 Aug 2023 22:15:51 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi,\n\nPlease find attached v5.\nWhat changed:\n1. Now we collect which files should be kept in a separate hash table.\n2. Decision whether to keep the file is made only when the file is actually\nmissing on the source. That is, remaining WAL files will be copied over as\nit currently is, although it could be extremely inefficient and unnecessary.\n3. Added TAP test that actually at least one file isn't removed.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Tue, 12 Sep 2023 15:29:46 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi,\n\nPlease find attached v6.\nChanges compared to v5:\n1. use \"perl -e 'exit(1)'\" instead of \"false\" as archive_command, so it\nalso works on Windows\n2. fixed the test name\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Wed, 13 Sep 2023 09:21:35 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Thanks for the patch.\n\nI tested the v6 patch using the test script attached on [1], old primary \nhas succeeded to become new standby.\n\nI have very minor questions on the regression tests mainly regarding the \nconsistency with other tests for pg_rewind:\n\n\n> +setup_cluster;\n> +create_standby;\n\nWould it be better to add parentheses?\nAlso should we add \"RewindTest::\" for these function?\n\n\n> +primary_psql(\"create table t(a int)\");\n> +primary_psql(\"insert into t values(0)\");\n> +primary_psql(\"select pg_switch_wal()\");\n..\n\nShould 'select', 'create', etc be capitalized?\n\n\n> my $false = \"$^X -e 'exit(1)'\";\nI feel it's hard to understand what does this mean.\nIsn't it better to add comments and describe this is for windows \nenvironments?\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Wed, 18 Oct 2023 15:50:39 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi,\n\nOn Wed, 18 Oct 2023 at 08:50, torikoshia <torikoshia@oss.nttdata.com> wrote:\n\n>\n> I have very minor questions on the regression tests mainly regarding the\n> consistency with other tests for pg_rewind:\n>\n\nPlease find attached a new version of the patch. It addresses all your\ncomments.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Mon, 30 Oct 2023 16:26:39 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2023-10-31 00:26, Alexander Kukushkin wrote:\n> Hi,\n> \n> On Wed, 18 Oct 2023 at 08:50, torikoshia <torikoshia@oss.nttdata.com>\n> wrote:\n> \n>> I have very minor questions on the regression tests mainly regarding\n>> the\n>> consistency with other tests for pg_rewind:\n> \n> Please find attached a new version of the patch. It addresses all your\n> comments.\n\nThanks for updating the patch!\n\n> +extern void preserve_file(char *filepath);\n\nIs this necessary?\nThis function was defined in older version patch, but no longer seems to \nexist.\n\n+# We use \"perl -e 'exit(1)'\" as a alternative to \"false\", because the \nlast one\n'a' should be 'an'?\n\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Thu, 02 Nov 2023 12:24:50 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi Torikoshia,\n\n\nOn Thu, 2 Nov 2023 at 04:24, torikoshia <torikoshia@oss.nttdata.com> wrote:\n\n>\n>\n> > +extern void preserve_file(char *filepath);\n>\n> Is this necessary?\n> This function was defined in older version patch, but no longer seems to\n> exist.\n>\n> +# We use \"perl -e 'exit(1)'\" as a alternative to \"false\", because the\n> last one\n> 'a' should be 'an'?\n>\n>\nThanks for the feedback\n\nPlease find the new version attached.\n\nRegards,\n--\nAlexander Kukushkin\n\nOn Thu, 2 Nov 2023 at 04:24, torikoshia <torikoshia@oss.nttdata.com> wrote:\n\n> On 2023-10-31 00:26, Alexander Kukushkin wrote:\n> > Hi,\n> >\n> > On Wed, 18 Oct 2023 at 08:50, torikoshia <torikoshia@oss.nttdata.com>\n> > wrote:\n> >\n> >> I have very minor questions on the regression tests mainly regarding\n> >> the\n> >> consistency with other tests for pg_rewind:\n> >\n> > Please find attached a new version of the patch. It addresses all your\n> > comments.\n>\n> Thanks for updating the patch!\n>\n> > +extern void preserve_file(char *filepath);\n>\n> Is this necessary?\n> This function was defined in older version patch, but no longer seems to\n> exist.\n>\n> +# We use \"perl -e 'exit(1)'\" as a alternative to \"false\", because the\n> last one\n> 'a' should be 'an'?\n>\n>\n> --\n> Regards,\n>\n> --\n> Atsushi Torikoshi\n> NTT DATA Group Corporation\n>\n\n\n-- \nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Mon, 6 Nov 2023 15:58:56 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "On 2023-11-06 23:58, Alexander Kukushkin wrote:\n> Hi Torikoshia,\n> \n> On Thu, 2 Nov 2023 at 04:24, torikoshia <torikoshia@oss.nttdata.com>\n> wrote:\n> \n>>> +extern void preserve_file(char *filepath);\n>> \n>> Is this necessary?\n>> This function was defined in older version patch, but no longer\n>> seems to\n>> exist.\n>> \n>> +# We use \"perl -e 'exit(1)'\" as a alternative to \"false\", because\n>> the\n>> last one\n>> 'a' should be 'an'?\n> \n> Thanks for the feedback\n> \n> Please find the new version attached.\nThanks for the update!\n\nI've set the CF entry to \"Ready for Committer\".\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA Group Corporation\n\n\n",
"msg_date": "Thu, 09 Nov 2023 15:30:56 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Ready for Committer\", but it is\ncurrently failing some CFbot tests [1]. Please have a look and post an\nupdated version..\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/3874\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 10:38:08 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi Peter,\n\nOn Mon, 22 Jan 2024 at 00:38, Peter Smith <smithpb2250@gmail.com> wrote:\n\n> 2024-01 Commitfest.\n>\n> Hi, This patch has a CF status of \"Ready for Committer\", but it is\n> currently failing some CFbot tests [1]. Please have a look and post an\n> updated version..\n>\n> ======\n> [1]\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/3874\n>\n>\n From what I can see all failures are not related to this patch:\n1. Windows build failed with\n[10:52:49.679] 126/281 postgresql:recovery / recovery/019_replslot_limit\nERROR 185.84s (exit status 255 or signal 127 SIGinvalid)\n2. FreeBSD build failed with\n[09:11:57.656] 190/285 postgresql:psql / psql/010_tab_completion ERROR\n0.46s exit status 2\n[09:11:57.656] 220/285 postgresql:authentication /\nauthentication/001_password ERROR 0.57s exit status 2\n\nIn fact, I don't even see this patch being applied for these builds and the\nintroduced TAP test being executed.\n\nRegards,\n--\nAlexander Kukushkin\n\nHi Peter,On Mon, 22 Jan 2024 at 00:38, Peter Smith <smithpb2250@gmail.com> wrote:2024-01 Commitfest.\n\nHi, This patch has a CF status of \"Ready for Committer\", but it is\ncurrently failing some CFbot tests [1]. Please have a look and post an\nupdated version..\n\n======\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/3874\nFrom what I can see all failures are not related to this patch:1. Windows build failed with [10:52:49.679] 126/281 postgresql:recovery / recovery/019_replslot_limit ERROR 185.84s (exit status 255 or signal 127 SIGinvalid)2. FreeBSD build failed with[09:11:57.656] 190/285 postgresql:psql / psql/010_tab_completion ERROR 0.46s exit status 2[09:11:57.656] 220/285 postgresql:authentication / authentication/001_password ERROR 0.57s exit status 2 In fact, I don't even see this patch being applied for these builds and the introduced TAP test being executed.Regards,--Alexander Kukushkin",
"msg_date": "Tue, 23 Jan 2024 09:23:29 +0100",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi,\n\nI'm reviewing patches in Commitfest 2024-07 from top to bottom:\nhttps://commitfest.postgresql.org/48/\n\nThis is the 1st patch:\nhttps://commitfest.postgresql.org/48/3874/\n\nThe latest patch can't be applied on master:\nhttps://www.postgresql.org/message-id/CAFh8B=nNJtm9ke4_1mhpwGz2PV9yoyF6hMnYh5XACt0AA4VG-A@mail.gmail.com\n\nI've rebased on master. See the attached patch.\nHere are changes for it:\n\n* Resolve conflict\n* Update copyright year to 2024 from 2023\n* Add an added test to meson.build\n* Run pgindent\n\nHere are my review comments:\n\n@@ -217,6 +221,26 @@ findLastCheckpoint(const char *datadir, XLogRecPtr forkptr, int tliIndex,\n+\t\t\tchar\t\txlogfname[MAXFNAMELEN];\n+\n+\t\t\ttli = xlogreader->seg.ws_tli;\n+\t\t\tsegno = xlogreader->seg.ws_segno;\n+\n+\t\t\tsnprintf(xlogfname, MAXPGPATH, XLOGDIR \"/\");\n+\t\t\tXLogFileName(xlogfname + strlen(xlogfname),\n+\t\t\t\t\t\t xlogreader->seg.ws_tli,\n+\t\t\t\t\t\t xlogreader->seg.ws_segno, WalSegSz);\n+\n+\t\t\t/*\n+\t\t\t * Make sure pg_rewind doesn't remove this file, because it is\n+\t\t\t * required for postgres to start after rewind.\n+\t\t\t */\n+\t\t\tinsert_keepwalhash_entry(xlogfname);\n\nMAXFNAMELEN is 64 and MAXPGPATH is 1024. strlen(XLOGDIR \"/\")\nis 7 because XLOGDIR is \"pg_wal\". 
So xlogfname has enough\nsize but snprintf(xlogfname, MAXPGPATH) is wrong usage.\n(And XLogFileName() uses snprintf(xlogfname, MAXFNAMELEN)\ninternally.)\n\nHow about using one more buffer?\n\n----\nchar\t\txlogpath[MAXPGPATH];\nchar\t\txlogfname[MAXFNAMELEN];\n\ntli = xlogreader->seg.ws_tli;\nsegno = xlogreader->seg.ws_segno;\n\nXLogFileName(xlogfname,\n\t\t\t xlogreader->seg.ws_tli,\n\t\t\t xlogreader->seg.ws_segno, WalSegSz);\nsnprintf(xlogpath, MAXPGPATH, \"%s/%s\", XLOGDIR, xlogfname);\n\n/*\n * Make sure pg_rewind doesn't remove this file, because it is\n * required for postgres to start after rewind.\n */\ninsert_keepwalhash_entry(xlogpath);\n----\n\n\nThanks,\n-- \nkou\n\nIn <CAFh8B=mDDZEsK0jDMfvP3MmxkWaeTCxW4yN42OH33JY6sQWS5Q@mail.gmail.com>\n \"Re: pg_rewind WAL segments deletion pitfall\" on Tue, 23 Jan 2024 09:23:29 +0100,\n Alexander Kukushkin <cyberdemn@gmail.com> wrote:\n\n> Hi Peter,\n> \n> On Mon, 22 Jan 2024 at 00:38, Peter Smith <smithpb2250@gmail.com> wrote:\n> \n>> 2024-01 Commitfest.\n>>\n>> Hi, This patch has a CF status of \"Ready for Committer\", but it is\n>> currently failing some CFbot tests [1]. Please have a look and post an\n>> updated version..\n>>\n>> ======\n>> [1]\n>> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/46/3874\n>>\n>>\n> From what I can see all failures are not related to this patch:\n> 1. Windows build failed with\n> [10:52:49.679] 126/281 postgresql:recovery / recovery/019_replslot_limit\n> ERROR 185.84s (exit status 255 or signal 127 SIGinvalid)\n> 2. FreeBSD build failed with\n> [09:11:57.656] 190/285 postgresql:psql / psql/010_tab_completion ERROR\n> 0.46s exit status 2\n> [09:11:57.656] 220/285 postgresql:authentication /\n> authentication/001_password ERROR 0.57s exit status 2\n> \n> In fact, I don't even see this patch being applied for these builds and the\n> introduced TAP test being executed.\n> \n> Regards,\n> --\n> Alexander Kukushkin",
"msg_date": "Fri, 12 Jul 2024 16:24:06 +0900 (JST)",
"msg_from": "Sutou Kouhei <kou@clear-code.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
},
{
"msg_contents": "Hi Sutou,\n\nThank you for picking it up!\n\nOn Fri, 12 Jul 2024 at 09:24, Sutou Kouhei <kou@clear-code.com> wrote:\n\nHere are my review comments:\n>\n> @@ -217,6 +221,26 @@ findLastCheckpoint(const char *datadir, XLogRecPtr\n> forkptr, int tliIndex,\n> + char xlogfname[MAXFNAMELEN];\n> +\n> + tli = xlogreader->seg.ws_tli;\n> + segno = xlogreader->seg.ws_segno;\n> +\n> + snprintf(xlogfname, MAXPGPATH, XLOGDIR \"/\");\n> + XLogFileName(xlogfname + strlen(xlogfname),\n> + xlogreader->seg.ws_tli,\n> + xlogreader->seg.ws_segno,\n> WalSegSz);\n> +\n> + /*\n> + * Make sure pg_rewind doesn't remove this file,\n> because it is\n> + * required for postgres to start after rewind.\n> + */\n> + insert_keepwalhash_entry(xlogfname);\n>\n> MAXFNAMELEN is 64 and MAXPGPATH is 1024. strlen(XLOGDIR \"/\")\n> is 7 because XLOGDIR is \"pg_wal\". So xlogfname has enough\n> size but snprintf(xlogfname, MAXPGPATH) is wrong usage.\n> (And XLogFileName() uses snprintf(xlogfname, MAXFNAMELEN)\n> internally.)\n>\n\nNice catch!\n\nI don't think we need another buffer here, just need to use MAXFNAMELEN,\nbecause strlen(\"pg_wal/$wal_filename\") + 1 = 32 perfectly fits into 64\nbytes.\n\nThe new version is attached.\n\nRegards,\n--\nAlexander Kukushkin",
"msg_date": "Fri, 12 Jul 2024 11:00:24 +0200",
"msg_from": "Alexander Kukushkin <cyberdemn@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_rewind WAL segments deletion pitfall"
}
] |
[
{
"msg_contents": "Hi,\nI was looking at the following code in DetachPartitionFinalize():\n\n /* If there's a constraint associated with the index, detach it too\n*/\n constrOid =\nget_relation_idx_constraint_oid(RelationGetRelid(partRel),\n idxid);\n\nAs mentioned in email thread `parenting a PK constraint to a self-FK one`,\nthere may be multiple matching constraints, I think we should\ncall ConstraintSetParentConstraint() for each of them.\n\nThis means adding a helper method similar to\nget_relation_idx_constraint_oid() which finds constraint and calls\nConstraintSetParentConstraint().\n\nI am preparing a patch.\nPlease let me know if my proposal makes sense.\n\nThanks\n\nHi,I was looking at the following code in DetachPartitionFinalize(): /* If there's a constraint associated with the index, detach it too */ constrOid = get_relation_idx_constraint_oid(RelationGetRelid(partRel), idxid);As mentioned in email thread `parenting a PK constraint to a self-FK one`, there may be multiple matching constraints, I think we should call ConstraintSetParentConstraint() for each of them.This means adding a helper method similar to get_relation_idx_constraint_oid() which finds constraint and calls ConstraintSetParentConstraint().I am preparing a patch.Please let me know if my proposal makes sense.Thanks",
"msg_date": "Tue, 23 Aug 2022 10:10:51 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "handling multiple matching constraints in DetachPartitionFinalize()"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:10 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I was looking at the following code in DetachPartitionFinalize():\n>\n> /* If there's a constraint associated with the index, detach it\n> too */\n> constrOid =\n> get_relation_idx_constraint_oid(RelationGetRelid(partRel),\n> idxid);\n>\n> As mentioned in email thread `parenting a PK constraint to a self-FK one`,\n> there may be multiple matching constraints, I think we should\n> call ConstraintSetParentConstraint() for each of them.\n>\n> This means adding a helper method similar to\n> get_relation_idx_constraint_oid() which finds constraint and calls\n> ConstraintSetParentConstraint().\n>\n> I am preparing a patch.\n> Please let me know if my proposal makes sense.\n>\n> Thanks\n>\n\nThis is what I came up with.",
"msg_date": "Tue, 23 Aug 2022 10:27:53 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: handling multiple matching constraints in\n DetachPartitionFinalize()"
},
{
"msg_contents": "On 2022-Aug-23, Zhihong Yu wrote:\n\n> This is what I came up with.\n\nI suggest you provide a set of SQL commands that provoke some wrong\nbehavior with the original code, and show that they generate good\nbehavior after the patch. Otherwise, it's hard to evaluate the\nusefulness of this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n\n\n",
"msg_date": "Tue, 23 Aug 2022 19:53:22 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: handling multiple matching constraints in\n DetachPartitionFinalize()"
},
{
"msg_contents": "On 2022-Aug-23, Zhihong Yu wrote:\n\n> Toggling enable_seqscan on / off using the example from `parenting a PK\n> constraint to a self-FK one` thread, it can be shown that different\n> constraint Id would be detached which is incorrect.\n> However, I am not sure whether toggling enable_seqscan mid-test is\n> legitimate.\n\nWell, let's see it in action.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 23 Aug 2022 20:10:03 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: handling multiple matching constraints in\n DetachPartitionFinalize()"
},
{
"msg_contents": "On Tue, Aug 23, 2022 at 10:53 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Aug-23, Zhihong Yu wrote:\n>\n> > This is what I came up with.\n>\n> I suggest you provide a set of SQL commands that provoke some wrong\n> behavior with the original code, and show that they generate good\n> behavior after the patch. Otherwise, it's hard to evaluate the\n> usefulness of this.\n>\n> --\n> Álvaro Herrera Breisgau, Deutschland —\n> https://www.EnterpriseDB.com/\n> \"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"\n>\n\nToggling enable_seqscan on / off using the example from `parenting a PK\nconstraint to a self-FK one` thread, it can be shown that different\nconstraint Id would be detached which is incorrect.\nHowever, I am not sure whether toggling enable_seqscan mid-test is\nlegitimate.\n\nCheers\n\nOn Tue, Aug 23, 2022 at 10:53 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2022-Aug-23, Zhihong Yu wrote:\n\n> This is what I came up with.\n\nI suggest you provide a set of SQL commands that provoke some wrong\nbehavior with the original code, and show that they generate good\nbehavior after the patch. Otherwise, it's hard to evaluate the\nusefulness of this.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"Toggling enable_seqscan on / off using the example from `parenting a PK constraint to a self-FK one` thread, it can be shown that different constraint Id would be detached which is incorrect.However, I am not sure whether toggling enable_seqscan mid-test is legitimate.Cheers",
"msg_date": "Tue, 23 Aug 2022 11:11:20 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: handling multiple matching constraints in\n DetachPartitionFinalize()"
}
] |
[
{
"msg_contents": "Hi,\n\nI occasionally\n\nRunning the ecpg regression tests interactively (to try to find a different\nissue), triggered a crash on windows due to an uninitialized variable (after\npressing \"ignore\" in that stupid gui window that we've only disabled for the\nbackend).\n\n\"The variable 'replace_val' is being used without being initialized.\"\n\nChild-SP RetAddr Call Site\n000000b3`3bcfe140 00007ff9`03f9cd74 libpgtypes!failwithmessage(\n void * retaddr = 0x00007ff9`03f96133,\n int crttype = 0n1,\n int errnum = 0n3,\n char * msg = 0x000000b3`3bcff050 \"The variable 'replace_val' is being used without being initialized.\")+0x234 [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\rtc\\error.cpp @ 213]\n000000b3`3bcff030 00007ff9`03f96133 libpgtypes!_RTC_UninitUse(\n char * varname = 0x00007ff9`03fa8a90 \"replace_val\")+0xa4 [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\rtc\\error.cpp @ 362]\n000000b3`3bcff470 00007ff9`03f94acd libpgtypes!dttofmtasc_replace(\n int64 * ts = 0x000000b3`3bcff778,\n long dDate = 0n0,\n int dow = 0n6,\n struct tm * tm = 0x000000b3`3bcff598,\n char * output = 0x0000026e`9c223de0 \"abc-00:00:00\",\n int * pstr_len = 0x000000b3`3bcff620,\n char * fmtstr = 0x00007ff7`b01ae5c0 \"abc-%X-def-%x-ghi%%\")+0xe53 [C:\\dev\\postgres-meson\\src\\interfaces\\ecpg\\pgtypeslib\\timestamp.c @ 759]\n*** WARNING: Unable to verify checksum for C:\\dev\\postgres-meson\\build-msbuild\\src\\interfaces\\ecpg\\test\\pgtypeslib\\dt_test.exe\n000000b3`3bcff550 00007ff7`b01a23c9 libpgtypes!PGTYPEStimestamp_fmt_asc(\n int64 * ts = 0x000000b3`3bcff778,\n char * output = 0x0000026e`9c223de0 \"abc-00:00:00\",\n int str_len = 0n19,\n char * fmtstr = 0x00007ff7`b01ae5c0 \"abc-%X-def-%x-ghi%%\")+0xed [C:\\dev\\postgres-meson\\src\\interfaces\\ecpg\\pgtypeslib\\timestamp.c @ 794]\n000000b3`3bcff610 00007ff7`b01a4499 dt_test!main(void)+0xe59 [C:\\dev\\postgres-meson\\src\\interfaces\\ecpg\\test\\pgtypeslib\\dt_test.pgc @ 
200]\n000000b3`3bcff860 00007ff7`b01a433e dt_test!invoke_main(void)+0x39 [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 79]\n000000b3`3bcff8b0 00007ff7`b01a41fe dt_test!__scrt_common_main_seh(void)+0x12e [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 288]\n000000b3`3bcff920 00007ff7`b01a452e dt_test!__scrt_common_main(void)+0xe [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_common.inl @ 331]\n000000b3`3bcff950 00007ff9`1d987034 dt_test!mainCRTStartup(\n void * __formal = 0x000000b3`3bbe8000)+0xe [d:\\a01\\_work\\12\\s\\src\\vctools\\crt\\vcstartup\\src\\startup\\exe_main.cpp @ 17]\n000000b3`3bcff980 00007ff9`1f842651 KERNEL32!BaseThreadInitThunk+0x14\n000000b3`3bcff9b0 00000000`00000000 ntdll!RtlUserThreadStart+0x21\n\nI haven't analyzed this further.\n\n\nCI also shows ecpg itself occasionally crashing, but I haven't managed to\ncatch it in the act.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 20:36:55 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "ecpg assertion on windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-23 20:36:55 -0700, Andres Freund wrote:\n> Running the ecpg regression tests interactively (to try to find a different\n> issue), triggered a crash on windows due to an uninitialized variable (after\n> pressing \"ignore\" in that stupid gui window that we've only disabled for the\n> backend).\n> \n> \"The variable 'replace_val' is being used without being initialized.\"\n\nLooks to me like that's justified. The paths in dttofmtasc_replace using\nPGTYPES_TYPE_NOTHING don't set replace_val, but call pgtypes_fmt_replace() -\nwith replace_val passed by value. If that's the first replacement, an\nunitialized variable is passed...\n\nSeems either the caller should skip calling pgtypes_fmt_replace() in the\nNOTHING case, or replace_val should be zero initialized?\n\n- Andres\n\n\n",
"msg_date": "Tue, 23 Aug 2022 21:01:18 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ecpg assertion on windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-23 20:36:55 -0700, Andres Freund wrote:\n>> Running the ecpg regression tests interactively (to try to find a different\n>> issue), triggered a crash on windows due to an uninitialized variable (after\n>> pressing \"ignore\" in that stupid gui window that we've only disabled for the\n>> backend).\n>> \"The variable 'replace_val' is being used without being initialized.\"\n\n> Looks to me like that's justified.\n\nHmm ... that message sounded like it is a run-time detection not from\nstatic analysis. But if the regression tests are triggering use of\nuninitialized values, how could we have failed to detect that?\nEither valgrind or unstable behavior should have found this ages ago.\n\nSeeing that replace_val is a union of differently-sized types,\nI was wondering if this message is a false positive based on\nstruct assignment transferring a few uninitialized bytes, or\nsomething like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 00:18:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg assertion on windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-24 00:18:27 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-23 20:36:55 -0700, Andres Freund wrote:\n> >> Running the ecpg regression tests interactively (to try to find a different\n> >> issue), triggered a crash on windows due to an uninitialized variable (after\n> >> pressing \"ignore\" in that stupid gui window that we've only disabled for the\n> >> backend).\n> >> \"The variable 'replace_val' is being used without being initialized.\"\n> \n> > Looks to me like that's justified.\n> \n> Hmm ... that message sounded like it is a run-time detection not from\n> static analysis.\n\nYes, it's a runtime error.\n\n\n> But if the regression tests are triggering use of uninitialized values, how\n> could we have failed to detect that? Either valgrind or unstable behavior\n> should have found this ages ago.\n\nI think it's just different criteria for when to report issues. Valgrind\nreports uninitialized memory only when there's a conditional branch depending\non it or such. Whereas this seems to trigger when passing an uninitialized\nvalue to a function by value, even if it's then not relied upon.\n\nI don't think we regularly test all client tests with valgrind, btw. Skink\nonly runs the server under valgrind at least.\n\n\n> Seeing that replace_val is a union of differently-sized types,\n> I was wondering if this message is a false positive based on\n> struct assignment transferring a few uninitialized bytes, or\n> something like that.\n\nI think it's genuinely uninitialized - if you track what happens if the first\nparameter is e.g. %X: It'll not initialize replace_val, but then call\npgtypes_fmt_replace(). So an uninit value is passed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 23 Aug 2022 21:26:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ecpg assertion on windows"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-08-24 00:18:27 -0400, Tom Lane wrote:\n>> But if the regression tests are triggering use of uninitialized values, how\n>> could we have failed to detect that? Either valgrind or unstable behavior\n>> should have found this ages ago.\n\n> I think it's just different criteria for when to report issues. Valgrind\n> reports uninitialized memory only when there's a conditional branch depending\n> on it or such. Whereas this seems to trigger when passing an uninitialized\n> value to a function by value, even if it's then not relied upon.\n\nIf the value is not actually relied on, then it's a false positive.\n\nI don't say we shouldn't fix it, because we routinely jump through\nhoops to silence various sorts of functionally-harmless warnings.\nBut let's be clear about whether there's a real bug here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 00:32:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg assertion on windows"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-24 00:32:53 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-08-24 00:18:27 -0400, Tom Lane wrote:\n> >> But if the regression tests are triggering use of uninitialized values, how\n> >> could we have failed to detect that? Either valgrind or unstable behavior\n> >> should have found this ages ago.\n> \n> > I think it's just different criteria for when to report issues. Valgrind\n> > reports uninitialized memory only when there's a conditional branch depending\n> > on it or such. Whereas this seems to trigger when passing an uninitialized\n> > value to a function by value, even if it's then not relied upon.\n> \n> If the value is not actually relied on, then it's a false positive.\n\nMy understanding is that formally speaking passing an undefined value by value\nto a function is \"relying on it\" and undefined behaviour. Hard to believe\nit'll cause any compiler go haywire and eat the computer, but ...\n\n\n> I don't say we shouldn't fix it, because we routinely jump through\n> hoops to silence various sorts of functionally-harmless warnings.\n> But let's be clear about whether there's a real bug here.\n\nYea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 24 Aug 2022 08:16:41 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: ecpg assertion on windows"
}
] |
[
{
"msg_contents": "Hi.\n\nIt's possible to extend deparsing in postgres_fdw, so that we can push \ndown semi-joins, which doesn't refer to inner reltarget. This allows\nus to push down joins in queries like\n\nSELECT * FROM ft1 t1 WHERE t1.c1 < 10 AND t1.c3 IN (SELECT c3 FROM ft2 \nt2 WHERE date(c5) = '1970-01-17'::date);\n\n\nEXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 < 10 AND \nt1.c3 IN (SELECT c3 FROM ft2 t2 WHERE date(c5) = '1970-01-17'::date);\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nForeign Scan\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6, \nr1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 10)) AND (EXISTS \n(SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((date(r3.c5) = \n'1970-01-17'::date)) AND ((r1.c3 = r3.c3))))\n\nDeparsing semi-joins leads to generating (text) conditions like 'EXISTS \n(SELECT NULL FROM inner_rel WHERE join_conds) . Such conditions are \ngenerated in deparseFromExprForRel() and distributed to nearest WHERE, \nwhere they are added to the list of and clauses.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Wed, 24 Aug 2022 10:25:46 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi Alexander,\nThanks for working on this. It's great to see FDW join pushdown scope\nbeing expanded to more complex cases.\n\nI am still figuring out the implementation. It's been a while I have\nlooked at join push down code.\n\nBut following change strikes me odd\n -- subquery using immutable function (can be sent to remote)\n PREPARE st3(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3\nIN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c5) =\n'1970-01-17'::date) ORDER BY c1;\n EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st3(10, 20);\n- QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n- Sort\n+\n\nQUERY PLAN\n+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n+ Foreign Scan\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Sort Key: t1.c1\n- -> Nested Loop Semi Join\n- Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Join Filter: (t1.c3 = t2.c3)\n- -> Foreign Scan on public.ft1 t1\n- Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8\nFROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n- -> Materialize\n- Output: t2.c3\n- -> Foreign Scan on public.ft2 t2\n- Output: t2.c3\n- Remote SQL: SELECT c3 FROM \"S 1\".\"T 1\" WHERE\n((\"C 1\" > 10)) AND ((date(c5) = '1970-01-17'::date))\n-(14 rows)\n+ Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n+ Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\nr1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND (EXISTS\n(SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r1.c3 = r3.c3)))) ORDER 
BY\nr1.\"C 1\" ASC NULLS LAST\n+(4 rows)\n\ndate_in | s | 1 | [0:0]={cstring}\ndate_in which will be used to cast a test to date is not immutable. So\nthe query should't be pushed down. May not be a problem with your\npatch. Can you please check?\n\nOn Wed, Aug 24, 2022 at 12:55 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n>\n> Hi.\n>\n> It's possible to extend deparsing in postgres_fdw, so that we can push\n> down semi-joins, which doesn't refer to inner reltarget. This allows\n> us to push down joins in queries like\n>\n> SELECT * FROM ft1 t1 WHERE t1.c1 < 10 AND t1.c3 IN (SELECT c3 FROM ft2\n> t2 WHERE date(c5) = '1970-01-17'::date);\n>\n>\n> EXPLAIN (VERBOSE, COSTS OFF) SELECT * FROM ft1 t1 WHERE t1.c1 < 10 AND\n> t1.c3 IN (SELECT c3 FROM ft2 t2 WHERE date(c5) = '1970-01-17'::date);\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Foreign Scan\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n> Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\n> r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 10)) AND (EXISTS\n> (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((date(r3.c5) =\n> '1970-01-17'::date)) AND ((r1.c3 = r3.c3))))\n>\n\nThanks for working on this. It's great to see FDW join pushdown scope\nbeing expanded to more complex cases.\n\nI am still figuring out the implementation. 
It's been a while I have\nlooked at join push down code.\n\nBut following change strikes me odd\n -- subquery using immutable function (can be sent to remote)\n PREPARE st3(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3\nIN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c5) =\n'1970-01-17'::date) ORDER BY c1;\n EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st3(10, 20);\n- QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n- Sort\n+\n\nQUERY PLAN\n+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n+ Foreign Scan\n Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Sort Key: t1.c1\n- -> Nested Loop Semi Join\n- Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Join Filter: (t1.c3 = t2.c3)\n- -> Foreign Scan on public.ft1 t1\n- Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n- Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8\nFROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n- -> Materialize\n- Output: t2.c3\n- -> Foreign Scan on public.ft2 t2\n- Output: t2.c3\n- Remote SQL: SELECT c3 FROM \"S 1\".\"T 1\" WHERE\n((\"C 1\" > 10)) AND ((date(c5) = '1970-01-17'::date))\n-(14 rows)\n+ Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n+ Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\nr1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND (EXISTS\n(SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n((date(r3.c5) = '1970-01-17'::date)) AND ((r1.c3 = r3.c3)))) ORDER BY\nr1.\"C 1\" ASC NULLS LAST\n+(4 rows)\n\ndate_in | s | 1 | [0:0]={cstring}\ndate_in which will be used to cast a test to date is not immutable. So\nthe query should't be pushed down. 
May not be a problem with your\npatch. Can you please check?\n\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Mon, 29 Aug 2022 19:42:19 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Ashutosh Bapat писал 2022-08-29 17:12:\n> Hi Alexander,\n> Thanks for working on this. It's great to see FDW join pushdown scope\n> being expanded to more complex cases.\n> \n> I am still figuring out the implementation. It's been a while I have\n> looked at join push down code.\n> \n> But following change strikes me odd\n> -- subquery using immutable function (can be sent to remote)\n> PREPARE st3(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3\n> IN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c5) =\n> '1970-01-17'::date) ORDER BY c1;\n> EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st3(10, 20);\n> - QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------\n> - Sort\n> +\n> \n> QUERY PLAN\n> +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> + Foreign Scan\n> Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> - Sort Key: t1.c1\n> - -> Nested Loop Semi Join\n> - Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, \n> t1.c8\n> - Join Filter: (t1.c3 = t2.c3)\n> - -> Foreign Scan on public.ft1 t1\n> - Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, \n> t1.c7, t1.c8\n> - Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8\n> FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n> - -> Materialize\n> - Output: t2.c3\n> - -> Foreign Scan on public.ft2 t2\n> - Output: t2.c3\n> - Remote SQL: SELECT c3 FROM \"S 1\".\"T 1\" WHERE\n> ((\"C 1\" > 10)) AND ((date(c5) = '1970-01-17'::date))\n> -(14 rows)\n> + Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n> + Remote SQL: SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\n> r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND (EXISTS\n> (SELECT NULL FROM 
\"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n> ((date(r3.c5) = '1970-01-17'::date)) AND ((r1.c3 = r3.c3)))) ORDER BY\n> r1.\"C 1\" ASC NULLS LAST\n> +(4 rows)\n> \n> date_in | s | 1 | [0:0]={cstring}\n> date_in which will be used to cast a test to date is not immutable. So\n> the query should't be pushed down. May not be a problem with your\n> patch. Can you please check?\n\nHi.\n\nIt is not related to my change and works as expected. As I see, we have \nexpression FuncExprdate(oid = 2029, args=Var ) = Const(type date)\n(date(r3.c5) = '1970-01-17'::date).\nFunction is\n\n# select proname, provolatile from pg_proc where oid=2029;\n proname | provolatile\n---------+-------------\n date | i\n\nSo it's shippable.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:58:39 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "2022年8月30日(火) 15:58 Alexander Pyhalov <a.pyhalov@postgrespro.ru>:\n>\n> Ashutosh Bapat писал 2022-08-29 17:12:\n> > Hi Alexander,\n> > Thanks for working on this. It's great to see FDW join pushdown scope\n> > being expanded to more complex cases.\n> >\n> > I am still figuring out the implementation. It's been a while I have\n> > looked at join push down code.\n> >\n> > But following change strikes me odd\n> > -- subquery using immutable function (can be sent to remote)\n> > PREPARE st3(int) AS SELECT * FROM ft1 t1 WHERE t1.c1 < $2 AND t1.c3\n> > IN (SELECT c3 FROM ft2 t2 WHERE c1 > $1 AND date(c5) =\n> > '1970-01-17'::date) ORDER BY c1;\n> > EXPLAIN (VERBOSE, COSTS OFF) EXECUTE st3(10, 20);\n> > - QUERY PLAN\n> > ------------------------------------------------------------------------------------------------------------------------\n> > - Sort\n> > +\n> >\n> > QUERY PLAN\n> > +---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > + Foreign Scan\n> > Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7, t1.c8\n> > - Sort Key: t1.c1\n> > - -> Nested Loop Semi Join\n> > - Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6, t1.c7,\n> > t1.c8\n> > - Join Filter: (t1.c3 = t2.c3)\n> > - -> Foreign Scan on public.ft1 t1\n> > - Output: t1.c1, t1.c2, t1.c3, t1.c4, t1.c5, t1.c6,\n> > t1.c7, t1.c8\n> > - Remote SQL: SELECT \"C 1\", c2, c3, c4, c5, c6, c7, c8\n> > FROM \"S 1\".\"T 1\" WHERE ((\"C 1\" < 20))\n> > - -> Materialize\n> > - Output: t2.c3\n> > - -> Foreign Scan on public.ft2 t2\n> > - Output: t2.c3\n> > - Remote SQL: SELECT c3 FROM \"S 1\".\"T 1\" WHERE\n> > ((\"C 1\" > 10)) AND ((date(c5) = '1970-01-17'::date))\n> > -(14 rows)\n> > + Relations: (public.ft1 t1) SEMI JOIN (public.ft2 t2)\n> > + Remote SQL: 
SELECT r1.\"C 1\", r1.c2, r1.c3, r1.c4, r1.c5, r1.c6,\n> > r1.c7, r1.c8 FROM \"S 1\".\"T 1\" r1 WHERE ((r1.\"C 1\" < 20)) AND (EXISTS\n> > (SELECT NULL FROM \"S 1\".\"T 1\" r3 WHERE ((r3.\"C 1\" > 10)) AND\n> > ((date(r3.c5) = '1970-01-17'::date)) AND ((r1.c3 = r3.c3)))) ORDER BY\n> > r1.\"C 1\" ASC NULLS LAST\n> > +(4 rows)\n> >\n> > date_in | s | 1 | [0:0]={cstring}\n> > date_in which will be used to cast a test to date is not immutable. So\n> > the query should't be pushed down. May not be a problem with your\n> > patch. Can you please check?\n>\n> Hi.\n>\n> It is not related to my change and works as expected. As I see, we have\n> expression FuncExprdate(oid = 2029, args=Var ) = Const(type date)\n> (date(r3.c5) = '1970-01-17'::date).\n> Function is\n>\n> # select proname, provolatile from pg_proc where oid=2029;\n> proname | provolatile\n> ---------+-------------\n> date | i\n>\n> So it's shippable.\n\nThis entry was marked as \"Needs review\" in the CommitFest app but cfbot\nreports the patch no longer applies.\n\nWe've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time update the patch.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can move the patch entry forward by visiting\n\n https://commitfest.postgresql.org/40/3838/\n\nand changing the status to \"Needs review\".\n\n\nThanks\n\nIan Barwick\n\n\n",
"msg_date": "Fri, 4 Nov 2022 08:21:51 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Ian Lawrence Barwick писал 2022-11-04 02:21:\n> \n> This entry was marked as \"Needs review\" in the CommitFest app but cfbot\n> reports the patch no longer applies.\n> \n> We've marked it as \"Waiting on Author\". As CommitFest 2022-11 is\n> currently underway, this would be an excellent time update the patch.\n> \n> Once you think the patchset is ready for review again, you (or any\n> interested party) can move the patch entry forward by visiting\n> \n> https://commitfest.postgresql.org/40/3838/\n> \n> and changing the status to \"Needs review\".\n> \n\nHi. I've rebased the patch.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 07 Nov 2022 10:52:43 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for work on this useful patch.\r\nI'm starting to review v2 patch.\r\nI have cheked we can apply v2 patch to commit ec386948948c1708c0c28c48ef08b9c4dd9d47cc\r\n(Date:Thu Dec 1 12:56:21 2022 +0100).\r\nI briefly looked at this whole thing and did step execute this\r\nby running simple queries such as the followings.\r\n\r\nquery1) select * from f_t1 a1 where a1.c1 in (select c1 from f_t2);\r\nquery2) select * from f_t1 a1 join f_t3 a2 on a1.c1 = a2.c1 where a1.c1 in (select c1 from f_t3) ;\r\nquery3) update f_t2 set c1 = 1 from f_t1 a1 where a1.c2 = f_t2.c2 and exists (select null from f_t2 where c1 = a1.c1);\r\n\r\nAlthough I haven't seen all of v2 patch, for now I have the following questions.\r\n\r\nquestion1) \r\n > + if (jointype == JOIN_SEMI && bms_is_member(var->varno, innerrel->relids) && !bms_is_member(var->varno, outerrel->relids))\r\n It takes time for me to find in what case this condition is true.\r\n There is cases in which this condition is true for semi-join of two baserels \r\n when running query which joins more than two relations such as query2 and query3.\r\n Running queries such as query2, you maybe want to pushdown of only semi-join path of \r\n joinrel(outerrel) defined by (f_t1 a1 join f_t3 a2 on a1.c1 = a2.c1) and baserel(innerrel) f_t3 \r\n because of safety deparse. So you add this condition.\r\n Becouase of this limitation, your patch can't push down subquery expression \r\n \"exists (select null from f_t2 where c1 = a1.c1)\" in query3.\r\n I think, it is one of difficulty points for semi-join pushdown.\r\n This is my understanding of the intent of this condition and the restrictions imposed by this condition.\r\n Is my understanding right?\r\n I think if there are comments for the intent of this condition and the restrictions imposed by this condition \r\n then they help PostgreSQL developper. 
What do you think?\r\n\r\nquestion2) In foreign_join_ok\r\n > * Constructing queries representing ANTI joins is hard, hence\r\n Is this true? Is it hard to expand your approach to ANTI join pushdown?\r\n\r\nquestion3) You use variables whose names are \"addl_condXXX\" in the following code.\r\n > appendStringInfo(addl_conds, \"EXISTS (SELECT NULL FROM %s\", join_sql_i.data);\r\n Does this naming mean an additional literal?\r\n Is there a more comprehensible naming, such as \"subquery_exprXXX\"?\r\n\r\nquestion4) Although a really minor detail, there are expressions with a space such as\r\n \"ft4.c2 = ft2.c2\" and ones with no space such as \"c1=ftupper.c1\".\r\n Is there a reason for this difference? If not, should we use the same spacing policy?\r\n\r\nLater, I'm going to look at the parts of your patch which are used when running more complex queries.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n",
"msg_date": "Sat, 3 Dec 2022 03:02:02 +0000",
"msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Yuki.\n\nThanks for looking at this patch.\n\nFujii.Yuki@df.MitsubishiElectric.co.jp писал 2022-12-03 06:02:\n\n> question1)\n> > + if (jointype == JOIN_SEMI && bms_is_member(var->varno,\n> innerrel->relids) && !bms_is_member(var->varno, outerrel->relids))\n> It takes time for me to find in what case this condition is true.\n> There is cases in which this condition is true for semi-join of two \n> baserels\n> when running query which joins more than two relations such as\n> query2 and query3.\n> Running queries such as query2, you maybe want to pushdown of only\n> semi-join path of\n> joinrel(outerrel) defined by (f_t1 a1 join f_t3 a2 on a1.c1 = a2.c1)\n> and baserel(innerrel) f_t3\n> because of safety deparse. So you add this condition.\n> Becouase of this limitation, your patch can't push down subquery \n> expression\n> \"exists (select null from f_t2 where c1 = a1.c1)\" in query3.\n> I think, it is one of difficulty points for semi-join pushdown.\n> This is my understanding of the intent of this condition and the\n> restrictions imposed by this condition.\n> Is my understanding right?\n\nIIRC, planner can create semi-join, which targetlist references Vars \nfrom inner join relation. However, it's deparsed as exists and so we \ncan't reference it from SQL. 
So, there's this check - if Var is \nreferenced in semi-join target list, it can't be pushed down.\nYou can see this if comment out this check.\n\nEXPLAIN (verbose, costs off)\n SELECT ft2.*, ft4.* FROM ft2 INNER JOIN\n (SELECT * FROM ft4 WHERE EXISTS (\n SELECT 1 FROM ft2 WHERE ft2.c2=ft4.c2)) ft4\n ON ft2.c2 = ft4.c1\n INNER JOIN\n (SELECT * FROM ft2 WHERE EXISTS (\n SELECT 1 FROM ft4 WHERE ft2.c2=ft4.c2)) ft21\n ON ft2.c2 = ft21.c2\n WHERE ft2.c1 > 900\n ORDER BY ft2.c1 LIMIT 10;\n\nwill fail with\nEXPLAIN SELECT r8.c2, r9.c2 FROM \"S 1\".\"T 1\" r8 WHERE (EXISTS (SELECT \nNULL FROM \"S 1\".\"T 3\" r9 WHERE ((r8.c2 = r9.c2))))\n\nHere you can see that\nSELECT * FROM ft2 WHERE EXISTS (\n SELECT 1 FROM ft4 WHERE ft2.c2=ft4.c2)\n\nwas transformed to\nSELECT r8.c2, r9.c2 FROM \"S 1\".\"T 1\" r8 WHERE (EXISTS (SELECT NULL FROM \n\"S 1\".\"T 3\" r9 WHERE ((r8.c2 = r9.c2))))\n\nwhere our exists subquery is referenced from tlist. It's fine for plan \n(relations, participating in semi-join, can be referenced in tlist),\nbut is not going to work with EXISTS subquery.\nBTW, there's a comment in joinrel_target_ok(). It tells exactly that -\n\n5535 if (jointype == JOIN_SEMI && bms_is_member(var->varno, \ninnerrel->relids) && !bms_is_member(var->varno, outerrel->relids))\n5536 {\n5537 /* We deparse semi-join as exists() subquery, and \nso can't deparse references to inner rel in join target list. */\n5538 ok = false;\n5539 break;\n5540 }\n\nExpanded comment.\n\n> question2) In foreign_join_ok\n> > * Constructing queries representing ANTI joins is hard, hence\n> Is this true? 
Is it hard to expand your approach to ANTI join \n> pushdown?\n\nI haven't tried, so don't know.\n\n> question3) You use variables whose name is \"addl_condXXX\" in the \n> following code.\n> > appendStringInfo(addl_conds, \"EXISTS (SELECT NULL FROM %s\",\n> join_sql_i.data);\n> Does this naming mean additional literal?\n> Is there more complehensive naming, such as \"subquery_exprXXX\"?\n\nThe naming means additional conditions (for WHERE clause, by analogy \nwith ignore_conds and remote_conds). Not sure if subquery_expr sounds \nbetter, but if you come with better idea, I'm fine with renaming them.\n\n> question4) Although really detail, there is expression making space \n> such as\n> \"ft4.c2 = ft2.c2\" and one making no space such as \"c1=ftupper.c1\".\n> Is there reason for this difference? If not, need we use same policy\n> for making space?\n> \n\nFixed.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 06 Dec 2022 12:28:43 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi Mr.Pyhalov.\r\n\r\nThank you for fixing it and giving more explanation.\r\n\r\n> IIRC, planner can create semi-join, which targetlist references Vars \r\n> from inner join relation. However, it's deparsed as exists and so we \r\n> can't reference it from SQL. So, there's this check - if Var is \r\n> referenced in semi-join target list, it can't be pushed down.\r\n> You can see this if comment out this check.\r\n> \r\n> EXPLAIN (verbose, costs off)\r\n> SELECT ft2.*, ft4.* FROM ft2 INNER JOIN\r\n> (SELECT * FROM ft4 WHERE EXISTS (\r\n> SELECT 1 FROM ft2 WHERE ft2.c2=ft4.c2)) ft4\r\n> ON ft2.c2 = ft4.c1\r\n> INNER JOIN\r\n> (SELECT * FROM ft2 WHERE EXISTS (\r\n> SELECT 1 FROM ft4 WHERE ft2.c2=ft4.c2)) ft21\r\n> ON ft2.c2 = ft21.c2\r\n> WHERE ft2.c1 > 900\r\n> ORDER BY ft2.c1 LIMIT 10;\r\n> \r\n> will fail with\r\n> EXPLAIN SELECT r8.c2, r9.c2 FROM \"S 1\".\"T 1\" r8 WHERE (EXISTS (SELECT \r\n> NULL FROM \"S 1\".\"T 3\" r9 WHERE ((r8.c2 = r9.c2))))\r\n> \r\n> Here you can see that\r\n> SELECT * FROM ft2 WHERE EXISTS (\r\n> SELECT 1 FROM ft4 WHERE ft2.c2=ft4.c2)\r\n> \r\n> was transformed to\r\n> SELECT r8.c2, r9.c2 FROM \"S 1\".\"T 1\" r8 WHERE (EXISTS (SELECT NULL \r\n> FROM \"S 1\".\"T 3\" r9 WHERE ((r8.c2 = r9.c2))))\r\n> \r\n> where our exists subquery is referenced from tlist. It's fine for plan \r\n> (relations, participating in semi-join, can be referenced in tlist), \r\n> but is not going to work with EXISTS subquery.\r\n> BTW, there's a comment in joinrel_target_ok(). It tells exactly that -\r\n> \r\n> 5535 if (jointype == JOIN_SEMI &&\r\n> bms_is_member(var->varno,\r\n> innerrel->relids) && !bms_is_member(var->varno, outerrel->relids))\r\n> 5536 {\r\n> 5537 /* We deparse semi-join as exists() subquery, and\r\n> so can't deparse references to inner rel in join target list. */\r\n> 5538 ok = false;\r\n> 5539 break;\r\n> 5540 }\r\n> \r\n> Expanded comment.\r\nThank you for expanding your comment and giving examples. 
\r\nThanks to the above examples, I understood in what case planner wolud create semi-join, \r\nwhich targetlist references Vars from inner join relation.\r\n\r\n> > question2) In foreign_join_ok\r\n> > > * Constructing queries representing ANTI joins is hard, hence\r\n> > Is this true? Is it hard to expand your approach to ANTI join \r\n> > pushdown?\r\n> \r\n> I haven't tried, so don't know.\r\nI understand the situation.\r\n\r\n> The naming means additional conditions (for WHERE clause, by analogy \r\n> with ignore_conds and remote_conds). Not sure if subquery_expr sounds \r\n> better, but if you come with better idea, I'm fine with renaming them.\r\nSure.\r\n\r\n> > question4) Although really detail, there is expression making space \r\n> > such as\r\n> > \"ft4.c2 = ft2.c2\" and one making no space such as \"c1=ftupper.c1\".\r\n> > Is there reason for this difference? If not, need we use same \r\n> > policy for making space?\r\nThank you.\r\n\r\nLater, I'm going to look at other part of your patch.\r\n\r\nSincerely yours,\r\nYuuki Fujii\r\n\r\n--\r\nYuuki Fujii\r\nInformation Technology R&D Center Mitsubishi Electric Corporation\r\n",
"msg_date": "Tue, 6 Dec 2022 10:25:24 +0000",
"msg_from": "\"Fujii.Yuki@df.MitsubishiElectric.co.jp\"\n\t<Fujii.Yuki@df.MitsubishiElectric.co.jp>",
"msg_from_op": false,
"msg_subject": "RE: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi.\n\nI took a quick look at the patch. It needs a rebase, although it applies\nfine using patch.\n\nA couple minor comments:\n\n1) addl_conds seems a bit hard to understand, I'd use either the full\nwording (additional_conds) or maybe extra_conds\n\n2) some of the lines got quite long, and need a wrap\n\n3) unknown_subquery_rels name is a bit misleading - AFAIK it's the rels\nthat can't be referenced from upper rels (per what the .h says). So they\nare known, but hidden. Is there a better name?\n\n4) joinrel_target_ok() needs a better comment, explaining *when* the\nreltarget is safe for pushdown. The conditions are on the same row, but\nthe project style is to break after '&&'.\n\nAlso, I'd write\n\n if (!IsA(var, Var))\n continue;\n\nwhich saves one level of nesting. IMHO that makes it more readable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Jan 2023 18:49:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi.\n\nTomas Vondra wrote on 2023-01-19 20:49:\n> I took a quick look at the patch. It needs a rebase, although it \n> applies\n> fine using patch.\n> \n> A couple minor comments:\n> \n> 1) addl_conds seems a bit hard to understand, I'd use either the full\n> wording (additional_conds) or maybe extra_conds\n\nRenamed to additional_conds.\n\n> \n> 2) some of the lines got quite long, and need a wrap\nSplit some of them. Not sure if it's enough.\n\n> \n> 3) unknown_subquery_rels name is a bit misleading - AFAIK it's the rels\n> that can't be referenced from upper rels (per what the .h says). So \n> they\n> are known, but hidden. Is there a better name?\n\nRenamed to hidden_subquery_rels. These are rels that can't be referred \nto from upper join levels.\n\n> \n> 4) joinrel_target_ok() needs a better comment, explaining *when* the\n> reltarget is safe for pushdown. The conditions are on the same row, but\n> the project style is to break after '&&'.\n\nAdded a comment. It seems to be a rephrasing of the lower comment in \njoinrel_target_ok().\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Fri, 20 Jan 2023 12:00:04 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Alexander!\n\nThank you for working on this. I believe this is a very interesting patch,\nwhich significantly improves our FDW-based distributed facilities. This is\nwhy I decided to review this.\n\nOn Fri, Jan 20, 2023 at 11:00 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n> Tomas Vondra писал 2023-01-19 20:49:\n> > I took a quick look at the patch. It needs a rebase, although it\n> > applies\n> > fine using patch.\n> >\n> > A couple minor comments:\n> >\n> > 1) addl_conds seems a bit hard to understand, I'd use either the full\n> > wording (additional_conds) or maybe extra_conds\n>\n> Renamed to additional_conds.\n>\n> >\n> > 2) some of the lines got quite long, and need a wrap\n> Splitted some of them. Not sure if it's enough.\n>\n> >\n> > 3) unknown_subquery_rels name is a bit misleading - AFAIK it's the rels\n> > that can't be referenced from upper rels (per what the .h says). So\n> > they\n> > are known, but hidden. Is there a better name?\n>\n> Renamed to hidden_subquery_rels. These are rels, which can't be referred\n> to from upper join levels.\n>\n> >\n> > 4) joinrel_target_ok() needs a better comment, explaining *when* the\n> > reltarget is safe for pushdown. The conditions are on the same row, but\n> > the project style is to break after '&&'.\n>\n> Added comment. It seems to be a rephrasing of lower comment in\n> joinrel_target_ok().\n\n+ /*\n+ * We can't push down join if its reltarget is not safe\n+ */\n+ if (!joinrel_target_ok(root, joinrel, jointype, outerrel, innerrel))\n return false;\n\nAs I understand, the joinrel_target_ok() function does meaningful checks\nonly for semi-joins and always returns false for all other kinds of joins. 
I think we\nshould call this only for semi join and name the function accordingly.\n\n+ fpinfo->unknown_subquery_rels =\nbms_union(fpinfo_o->unknown_subquery_rels,\n+\nfpinfo_i->unknown_subquery_rels);\n\nShould the comment before this code block be revised?\n\n+ case JOIN_SEMI:\n+ fpinfo->joinclauses = list_concat(fpinfo->joinclauses,\n+ fpinfo_i->remote_conds);\n+ fpinfo->joinclauses = list_concat(fpinfo->joinclauses,\n+ fpinfo->remote_conds);\n+ fpinfo->remote_conds = list_copy(fpinfo_o->remote_conds);\n+ fpinfo->unknown_subquery_rels =\nbms_union(fpinfo->unknown_subquery_rels,\n+ innerrel->relids);\n+ break;\n\nI think that comment before switch() should be definitely revised.\n\n+ Relids hidden_subquery_rels; /* relids, which can't be referred to\n+ * from upper relations */\n\nCould this definition contain the positive part? Can't be referred to from\nupper relations, but used internally for semi joins (or something like\nthat)?\n\nAlso, I think the machinery around the append_conds could be somewhat\nsimpler if we turn them into a list (list of strings). I think that should\nmake code clearer and also save us some memory allocations.\n\nIn [1] you've referenced the cases, when your patch can't push down\nsemi-joins. It doesn't seem impossible to handle these cases, but that\nwould make the patch much more complicated. I'm OK to continue with a\nsimpler patch to handle the majority of cases. Could you please add the\ncases, which can't be pushed down with the current patch, to the test suite?\n\nLinks\n1.\nhttps://www.postgresql.org/message-id/816fa8b1bc2da09a87484d1ef239a332%40postgrespro.ru\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 30 Oct 2023 18:05:09 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Alexander Korotkov wrote on 2023-10-30 19:05:\n> Hi, Alexander!\n> \n> Thank you for working on this. I believe this is a very interesting\n> patch, which significantly improves our FDW-based distributed\n> facilities. This is why I decided to review this.\n> \n\nHi. Thanks for reviewing.\n\n> + /*\n> + * We can't push down join if its reltarget is not safe\n> + */\n> + if (!joinrel_target_ok(root, joinrel, jointype, outerrel,\n> innerrel))\n> return false;\n> \n> As I get joinrel_target_ok() function do meaningful checks only for\n> semi join and always return false for all other kinds of joins. I\n> think we should call this only for semi join and name the function\n> accordingly.\n\nDone.\n\n> \n> + fpinfo->unknown_subquery_rels =\n> bms_union(fpinfo_o->unknown_subquery_rels,\n> +\n> fpinfo_i->unknown_subquery_rels);\n> \n> Should the comment before this code block be revised?\n\nUpdated the comment.\n\n> \n> + case JOIN_SEMI:\n> + fpinfo->joinclauses = list_concat(fpinfo->joinclauses,\n> + fpinfo_i->remote_conds);\n> + fpinfo->joinclauses = list_concat(fpinfo->joinclauses,\n> + fpinfo->remote_conds);\n> + fpinfo->remote_conds = list_copy(fpinfo_o->remote_conds);\n> + fpinfo->unknown_subquery_rels =\n> bms_union(fpinfo->unknown_subquery_rels,\n> + innerrel->relids);\n> + break;\n> \n> I think that comment before switch() should be definitely revised.\n> \n> + Relids hidden_subquery_rels; /* relids, which can't be referred to\n> + * from upper relations */\n> \n> Could this definition contain the positive part? Can't be referred to\n> from upper relations, but used internally for semi joins (or something\n> like that)?\n\nMade the comment a bit more verbose.\n\n> \n> Also, I think the machinery around the append_conds could be somewhat\n> simpler if we turn them into a list (list of strings). I think that\n> should make code clearer and also save us some memory allocations.\n> \n\nI've tried to rewrite it as managing lists, only 
to find out that these are \nnot lists.\nI mean, in deparseFromExprForRel() we replace the lists from both sides with \na single condition.\nThis allows us to preserve the hierarchy of conditions. We should merge these \nconditions\nat the end of the IS_JOIN_REL(foreignrel) branch, or we'll push them too \nhigh. And if we\ndeparse them in this place as StringInfo, I see no benefit in converting \nthem to lists.\n\n\n> In [1] you've referenced the cases, when your patch can't push down\n> semi-joins. It doesn't seem impossible to handle these cases, but\n> that would make the patch much more complicated. I'm OK to continue\n> with a simpler patch to handle the majority of cases. Could you\n> please add the cases, which can't be pushed down with the current\n> patch, to the test suite?\n> \n\nThere are several cases when we can't push down a semi-join with the current \npatch.\n\n1) When the target list has attributes from the inner relation which are \nequivalent to some attributes of the outer\nrelation, we fail to notice this.\n\n2) When we examine A join B and decide that we can't push it down, this \ndecision is final - we state it in fdw_private of the joinrel,\nand so if we consider joining these relations in another order, we don't \nreconsider.\nThis means that if we later examine B join A, we don't try to push it down. \nAs a semi-join can be executed as JOIN_UNIQUE_INNER or JOIN_UNIQUE_OUTER,\nthis can be a problem - we look at some of these paths and remember that \nwe can't push down such a join.\n\n\n\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Tue, 31 Oct 2023 14:07:56 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Tue, Oct 31, 2023 at 1:07 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> There are several cases when we can't push down semi-join in current\n> patch.\n>\n> 1) When target list has attributes from inner relation, which are\n> equivalent to some attributes of outer\n> relation, we fail to notice this.\n>\n> 2) When we examine A join B and decide that we can't push it down, this\n> decision is final - we state it in fdw_private of joinrel,\n> and so if we consider joining these relations in another order, we don't\n> reconsider.\n> This means that if later examine B join A, we don't try to push it down.\n> As semi-join can be executed as JOIN_UNIQUE_INNER or JOIN_UNIQUE_OUTER,\n> this can be a problem - we look at some of these paths and remember that\n> we can't push down such join.\n\nThank you for the revision.\n\nI've revised the patch myself. I've replaced the StringInfo holding\nadditional conds with a list of strings, as I proposed before. I think\nthe code became much clearer. Also, it gets rid of some unnecessary\nallocations.\n\nI think the code itself is not in bad shape. But the patch lacks a\nhigh-level description of semi-join processing as well as comments on\neach manipulation of additional conds. Could you please add this?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 27 Nov 2023 02:49:25 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Alexander Korotkov wrote on 2023-11-27 03:49:\n\n> Thank you for the revision.\n> \n> I've revised the patch myself. I've replaced StringInfo with\n> additional conds into a list of strings as I proposed before. I think\n> the code became much clearer. Also, it gets rid of some unnecessary\n> allocations.\n> \n> I think the code itself is not in bad shape. But patch lacks some\n> high-level description of semi-joins processing as well as comments on\n> each manipulation with additional conds. Could you please add this?\n> \n\nHi. The updated patch looks better. It seems I had failed to fix the logic in \ndeparseFromExprForRel() when I tried to convert StringInfos to Lists.\n\nI've added some comments. The most complete description of how SEMI-JOIN \nis processed is located in deparseFromExprForRel(). Unfortunately,\nthere seems to be no single place describing the current JOIN deparsing \nlogic.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional",
"msg_date": "Mon, 27 Nov 2023 18:11:56 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Alexander!\n\nOn Mon, Nov 27, 2023 at 5:11 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n> Alexander Korotkov писал(а) 2023-11-27 03:49:\n>\n> > Thank you for the revision.\n> >\n> > I've revised the patch myself. I've replaced StringInfo with\n> > additional conds into a list of strings as I proposed before. I think\n> > the code became much clearer. Also, it gets rid of some unnecessary\n> > allocations.\n> >\n> > I think the code itself is not in bad shape. But patch lacks some\n> > high-level description of semi-joins processing as well as comments on\n> > each manipulation with additional conds. Could you please add this?\n> >\n>\n> Hi. The updated patch looks better. It seems I've failed to fix logic in\n> deparseFromExprForRel() when tried to convert StringInfos to Lists.\n>\n> I've added some comments. The most complete description of how SEMI-JOIN\n> is processed, is located in deparseFromExprForRel(). Unfortunately,\n> there seems to be no single place, describing current JOIN deparsing\n> logic.\n\nLooks good to me. I've made some grammar and formatting adjustments.\nAlso, I've written the commit message.\n\nNow, I think this looks good. I'm going to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Sun, 3 Dec 2023 22:52:30 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Alexander Korotkov wrote on 2023-12-03 23:52:\n> Hi, Alexander!\n> \n> On Mon, Nov 27, 2023 at 5:11 PM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n>> Alexander Korotkov wrote on 2023-11-27 03:49:\n>> \n>> > Thank you for the revision.\n>> >\n>> > I've revised the patch myself. I've replaced StringInfo with\n>> > additional conds into a list of strings as I proposed before. I think\n>> > the code became much clearer. Also, it gets rid of some unnecessary\n>> > allocations.\n>> >\n>> > I think the code itself is not in bad shape. But patch lacks some\n>> > high-level description of semi-joins processing as well as comments on\n>> > each manipulation with additional conds. Could you please add this?\n>> >\n>> \n>> Hi. The updated patch looks better. It seems I've failed to fix logic \n>> in\n>> deparseFromExprForRel() when tried to convert StringInfos to Lists.\n>> \n>> I've added some comments. The most complete description of how \n>> SEMI-JOIN\n>> is processed, is located in deparseFromExprForRel(). Unfortunately,\n>> there seems to be no single place, describing current JOIN deparsing\n>> logic.\n> \n> Looks good to me. I've made some grammar and formatting adjustments.\n> Also, I've written the commit message.\n> \n> Now, I think this looks good. I'm going to push this if no objections.\n> \n> ------\n> Regards,\n> Alexander Korotkov\n\nHi. No objections from my side.\n\nPerhaps some rephrasing is needed in the comment in semijoin_target_ok():\n\n\"The planner can create semi-joins, which refer to inner rel\nvars in its target list.\"\n\nPerhaps change \"semi-joins, which refer\" to \"a semi-join, which refers \n...\",\nas later we speak about \"its\" target list.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n",
"msg_date": "Tue, 05 Dec 2023 13:29:47 +0300",
"msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hello,\n\nWhile playing with this feature I found the following.\n\nTwo foreign tables:\npostgres@demo_postgres_fdw(17.0)=# \\det aircrafts|seats\n List of foreign tables\n Schema | Table | Server\n--------+-----------+-------------\n public | aircrafts | demo_server\n public | seats | demo_server\n(2 rows)\n\n\nThis query uses optimization:\n\npostgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\nFROM aircrafts a\nWHERE a.aircraft_code = '320' AND EXISTS (\n SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n);\n QUERY PLAN >\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------->\n Foreign Scan\n Output: a.aircraft_code, a.model, a.range\n Relations: (public.aircrafts a) SEMI JOIN (public.seats s)\n Remote SQL: SELECT r1.aircraft_code, r1.model, r1.range FROM bookings.aircrafts r1 WHERE ((r1.aircraft_code = '320')) AND EXISTS (SELECT NULL FROM bookings.seats r2 WHERE ((r2.aircraft_code =>\n(4 rows)\n\n\nBut optimization not used for NOT EXISTS:\n\npostgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\nFROM aircrafts a\nWHERE a.aircraft_code = '320' AND NOT EXISTS (\n SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Nested Loop Anti Join\n Output: a.aircraft_code, a.model, a.range\n -> Foreign Scan on public.aircrafts a\n Output: a.aircraft_code, a.model, a.range\n Remote SQL: SELECT aircraft_code, model, range FROM bookings.aircrafts WHERE ((aircraft_code = '320'))\n -> Materialize\n Output: s.aircraft_code\n -> Foreign Scan on public.seats s\n Output: s.aircraft_code\n Remote SQL: SELECT aircraft_code FROM bookings.seats WHERE ((aircraft_code = '320'))\n(10 rows)\n\nAlso, optimization not used after deleting first 
condition (a.aircraft_code = '320'):\n\npostgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\nFROM aircrafts a\nWHERE EXISTS (\n SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Hash Join\n Output: a.aircraft_code, a.model, a.range\n Inner Unique: true\n Hash Cond: (a.aircraft_code = s.aircraft_code)\n -> Foreign Scan on public.aircrafts a\n Output: a.aircraft_code, a.model, a.range\n Remote SQL: SELECT aircraft_code, model, range FROM bookings.aircrafts\n -> Hash\n Output: s.aircraft_code\n -> HashAggregate\n Output: s.aircraft_code\n Group Key: s.aircraft_code\n -> Foreign Scan on public.seats s\n Output: s.aircraft_code\n Remote SQL: SELECT aircraft_code FROM bookings.seats\n(15 rows)\n\n\nBut the worst thing is that replacing AND with OR causes breaking session and server restart:\n\npostgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\nFROM aircrafts a\nWHERE a.aircraft_code = '320' OR EXISTS (\n SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n);\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\nThe connection to the server was lost. 
Attempting reset: Failed.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Fri, 9 Feb 2024 23:08:11 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "On Fri, Feb 9, 2024 at 10:08 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n> While playing with this feature I found the following.\n>\n> Two foreign tables:\n> postgres@demo_postgres_fdw(17.0)=# \\det aircrafts|seats\n> List of foreign tables\n> Schema | Table | Server\n> --------+-----------+-------------\n> public | aircrafts | demo_server\n> public | seats | demo_server\n> (2 rows)\n>\n>\n> This query uses optimization:\n>\n> postgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\n> FROM aircrafts a\n> WHERE a.aircraft_code = '320' AND EXISTS (\n> SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n> );\n> QUERY PLAN >\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------->\n> Foreign Scan\n> Output: a.aircraft_code, a.model, a.range\n> Relations: (public.aircrafts a) SEMI JOIN (public.seats s)\n> Remote SQL: SELECT r1.aircraft_code, r1.model, r1.range FROM bookings.aircrafts r1 WHERE ((r1.aircraft_code = '320')) AND EXISTS (SELECT NULL FROM bookings.seats r2 WHERE ((r2.aircraft_code =>\n> (4 rows)\n>\n>\n> But optimization not used for NOT EXISTS:\n>\n> postgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\n> FROM aircrafts a\n> WHERE a.aircraft_code = '320' AND NOT EXISTS (\n> SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n> );\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n> Nested Loop Anti Join\n> Output: a.aircraft_code, a.model, a.range\n> -> Foreign Scan on public.aircrafts a\n> Output: a.aircraft_code, a.model, a.range\n> Remote SQL: SELECT aircraft_code, model, range FROM bookings.aircrafts WHERE ((aircraft_code = '320'))\n> -> Materialize\n> Output: s.aircraft_code\n> -> Foreign Scan on public.seats s\n> Output: s.aircraft_code\n> Remote SQL: 
SELECT aircraft_code FROM bookings.seats WHERE ((aircraft_code = '320'))\n> (10 rows)\n>\n> Also, optimization not used after deleting first condition (a.aircraft_code = '320'):\n>\n> postgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\n> FROM aircrafts a\n> WHERE EXISTS (\n> SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n> );\n> QUERY PLAN\n> --------------------------------------------------------------------------------\n> Hash Join\n> Output: a.aircraft_code, a.model, a.range\n> Inner Unique: true\n> Hash Cond: (a.aircraft_code = s.aircraft_code)\n> -> Foreign Scan on public.aircrafts a\n> Output: a.aircraft_code, a.model, a.range\n> Remote SQL: SELECT aircraft_code, model, range FROM bookings.aircrafts\n> -> Hash\n> Output: s.aircraft_code\n> -> HashAggregate\n> Output: s.aircraft_code\n> Group Key: s.aircraft_code\n> -> Foreign Scan on public.seats s\n> Output: s.aircraft_code\n> Remote SQL: SELECT aircraft_code FROM bookings.seats\n> (15 rows)\n>\n>\n> But the worst thing is that replacing AND with OR causes breaking session and server restart:\n>\n> postgres@demo_postgres_fdw(17.0)=# EXPLAIN (costs off, verbose) SELECT *\n> FROM aircrafts a\n> WHERE a.aircraft_code = '320' OR EXISTS (\n> SELECT * FROM seats s WHERE s.aircraft_code = a.aircraft_code\n> );\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. Attempting reset: Failed.\n> The connection to the server was lost. Attempting reset: Failed.\n\nThank you, Pavel. I'm looking into this.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 9 Feb 2024 22:27:27 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Pavel!\n\n\nOn Fri, Feb 9, 2024 at 10:08 PM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n> But optimization not used for NOT EXISTS:\n\nRight, anti-joins are not supported yet.\n\n> Also, optimization not used after deleting first condition (a.aircraft_code = '320'):\n\nThis is a costing issue. The optimization works for me when I set\n\"use_remote_estimate = true\" for the server.\n\n> But the worst thing is that replacing AND with OR causes breaking session and server restart:\n\nI haven't managed to reproduce this yet. Could you give more details:\nmachine, OS, compile options, backtrace?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Mon, 12 Feb 2024 04:27:14 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
},
{
"msg_contents": "Hi, Alexander!\n\nOn 12.02.2024 05:27, Alexander Korotkov wrote:\n>> But the worst thing is that replacing AND with OR causes breaking session and server restart:\n> I haven't managed to reproduce this yet. Could you give more details:\n> machine, OS, compile options, backtrace?\n\nWe already had an off-list conversation with Alexander Pyhalov.\n\nYesterday, after rebuilding the server, I couldn't reproduce the error.\nI have good reason to believe that the problem was on my side.\nOn Friday, I tested another patch and built the server several times.\nMost likely, I just made a mistake during the server build.\n\nSorry for the noise.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com",
"msg_date": "Mon, 12 Feb 2024 11:50:34 +0300",
"msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Add semi-join pushdown to postgres_fdw"
}
] |
[
{
"msg_contents": "Hello, I recently got a server crash (bug #17583 [1]) caused by a stack overflow. \n \nTom Lane and Richard Guo, in a discussion of this bug, suggested that there could be more such places. \nTherefore, Alexander Lakhin and I decided to deal with this issue and Alexander developed a methodology. We processed src/backend/*/*.c with \"clang -emit-llvm ... | opt -analyze -print-callgraph\" to find all the functions that call themselves directly. I checked each of them for features that protect against stack overflows.\nWe analyzed 4 directories: regex, tsearch, snowball and adt.\nFirstly, we decided to test the regex directory functions and found 6 of them that lack the check_stack_depth() call.\n \nzaptreesubs\nmarkst\nnext\nnfatree\nnumst\nrepeat\n \nWe have tried to exploit the recursion in the function zaptreesubs():\nselect regexp_matches('a' || repeat(' a', 11000), '(.)(' || repeat(' \\1', 11000) || ')?');\n \nERROR: invalid regular expression: regular expression is too complex\n \nrepeat():\nselect regexp_match('abc01234xyz',repeat('a{0,2}',100001));\n \nERROR: invalid regular expression: regular expression is too complex\n \nnumst():\nselect regexp_match('abc01234xyz',repeat('(.)\\1e',100001));\n \nERROR: invalid regular expression: regular expression is too complex\n \nmarkst():\nmarkst is called in the code after v->tree = parse(...);\nit is necessary that the tree be successfully parsed, but with a nesting level of about 100,000 this will not work - stack protection will work during parsing and v->ntree = numst(...); is also there.\n \nnext():\nwe were able to crash the server with the following query:\n(printf \"SELECT regexp_match('abc', 'a\"; for ((i=1;i<1000000;i++)); do printf \"(?#)\"; done; printf \"b')\" ) | psql\n \nSecondly, we have tried to exploit the recursion in the adt directory functions and Alexander was able to crash the server with the following query:\n \nregex_selectivity_sub(): \nSELECT * FROM pg_proc WHERE proname ~ ('(a' || 
repeat('|', 200000) || 'b)');\n \nAnd this query:\n \n(n=100000;\nprintf \"SELECT polygon '((0,0),(0,1000000))' <@ polygon '((-200000,1000000),\";\nfor ((i=1;i<$n;i++)); do printf \"(100000,$(( 300000 + $i))),(-100000,$((800000 + $i))),\"; done;\nprintf \"(200000,900000),(200000,0))';\"\n) | psql\n \nThirdly, in the snowball catalog, Alexander has tried to exploit the recursion in the r_stem_suffix_chain_before_ki function and crashed a server using this query:\n \nr_stem_suffix_chain_before_ki():\nSELECT ts_lexize('turkish_stem', repeat('lerdeki', 1000000));\n \nThe last one is the tsearch catalog. We have found 4 functions that didn't have the check_stack_depth() call: \n \nSplitToVariants\nmkANode\nmkSPNode\nLexizeExec\n \nWe have tried to exploit the recursion in the SplitToVariants function and Alexander crashed a server using this:\n \nSplitToVariants():\nCREATE TEXT SEARCH DICTIONARY ispell (Template=ispell, DictFile=ispell_sample,AffFile=ispell_sample);\nSELECT ts_lexize('ispell', repeat('bally', 10000));\n \nAfter trying to exploit the recursion in the LexizeExec function Alexander made this conclusion: \n \nLexizeExec has two branches \"ld->curDictId == InvalidOid\" (usual mode) and \"ld->curDictId != InvalidOid\" (multiword mode) - we start with the first one, then make recursive call to switch to the multiword mode, but then we return to the usual mode again.\n \nmkANode and mkSPNode deal with the dictionary structs, not with user-supplied data, so we believe these functions are not vulnerable.\n \n[1] https://www.postgresql.org/message-id/flat/CAMbWs499ytQiH4mLMhRxRWP-iEUz3-DSinpAD-cUCtVo_23Wtg%40mail.gmail.com#03ad703cf4bc8d28ccba69913e1e8106",
"msg_date": "Wed, 24 Aug 2022 12:51:12 +0300",
"msg_from": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <kyzevan23@mail.ru>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?U3RhY2sgb3ZlcmZsb3cgaXNzdWU=?="
},
{
"msg_contents": "Hi,\nCan we have a parameter to control the recursion depth in these cases to\navoid crashes?\nJust a thought.\n\nThanks,\nMahendrakar.\n\nOn Wed, 24 Aug, 2022, 3:21 pm Егор Чиндяскин, <kyzevan23@mail.ru> wrote:\n\n> Hello, I recently got a server crash (bug #17583 [1]) caused by a stack\n> overflow.\n>\n> Tom Lane and Richard Guo, in a discussion of this bug, suggested that\n> there could be more such places.\n> Therefore, Alexander Lakhin and I decided to deal with this issue and\n> Alexander developed a methodology. We processed src/backend/*/*.c with\n> \"clang -emit-llvm ... | opt -analyze -print-calgraph\" to find all the\n> functions that call themselves directly. I checked each of them for\n> features that protect against stack overflows.\n> We analyzed 4 catalogs: regex, tsearch, snowball and adt.\n> Firstly, we decided to test the regex catalog functions and found 6 of\n> them that lack the check_stach_depth() call.\n>\n> zaptreesubs\n> markst\n> next\n> nfatree\n> numst\n> repeat\n>\n> We have tried to exploit the recursion in the function zaptreesubs():\n> select regexp_matches('a' || repeat(' a', 11000), '(.)(' || repeat(' \\1',\n> 11000) || ')?');\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> repeat():\n> select regexp_match('abc01234xyz',repeat('a{0,2}',100001));\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> numst():\n> select regexp_match('abc01234xyz',repeat('(.)\\1e',100001));\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> markst():\n> markst is called in the code after v->tree = parse(...);\n> it is necessary that the tree be successfully parsed, but with a nesting\n> level of about 100,000 this will not work - stack protection will work\n> during parsing and v->ntree = numst(...); is also there.\n>\n> next():\n> we were able to crash the server with the following query:\n> (printf \"SELECT regexp_match('abc', 'a\"; for 
((i=1;i<1000000;i++)); do\n> printf \"(?#)\"; done; printf \"b')\" ) | psql\n>\n> Secondly, we have tried to exploit the recursion in the adt catalog\n> functions and Alexander was able to crash the server with the following\n> query:\n>\n> regex_selectivity_sub():\n> SELECT * FROM pg_proc WHERE proname ~ ('(a' || repeat('|', 200000) ||\n> 'b)');\n>\n> And this query:\n>\n> (n=100000;\n> printf \"SELECT polygon '((0,0),(0,1000000))' <@ polygon\n> '((-200000,1000000),\";\n> for ((i=1;i<$n;i++)); do printf \"(100000,$(( 300000 +\n> $i))),(-100000,$((800000 + $i))),\"; done;\n> printf \"(200000,900000),(200000,0))';\"\n> ) | psql\n>\n> Thirdly, the snowball catalog, Alexander has tried to exploit the\n> recursion in the r_stem_suffix_chain_before_ki function and crashed a\n> server using this query:\n>\n> r_stem_suffix_chain_before_ki():\n> SELECT ts_lexize('turkish_stem', repeat('lerdeki', 1000000));\n>\n> The last one is the tsearch catalog. We have found 4 functions that didn't\n> have check_stach_depth() function:\n>\n> SplitToVariants\n> mkANode\n> mkSPNode\n> LexizeExec\n>\n> We have tried to exploit the recursion in the SplitToVariants function and\n> Alexander crashed a server using this:\n>\n> SplitToVariants():\n> CREATE TEXT SEARCH DICTIONARY ispell (Template=ispell,\n> DictFile=ispell_sample,AffFile=ispell_sample);\n> SELECT ts_lexize('ispell', repeat('bally', 10000));\n>\n> After trying to exploit the recursion in the LexizeExec function Alexander\n> made this conlusion:\n>\n> LexizeExec has two branches \"ld->curDictId == InvalidOid\" (usual mode) and\n> \"ld->curDictId != InvalidOid\" (multiword mode) - we start with the first\n> one, then make recursive call to switch to the multiword mode, but then we\n> return to the usual mode again.\n>\n> mkANode and mkSPNode deal with the dictionary structs, not with\n> user-supplied data, so we believe these functions are not vulnerable.\n>\n> [1]\n> 
https://www.postgresql.org/message-id/flat/CAMbWs499ytQiH4mLMhRxRWP-iEUz3-DSinpAD-cUCtVo_23Wtg%40mail.gmail.com#03ad703cf4bc8d28ccba69913e1e8106\n>",
"msg_date": "Wed, 24 Aug 2022 15:37:01 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On 2022-Aug-24, mahendrakar s wrote:\n\n> Hi,\n> Can we have a parameter to control the recursion depth in these cases to\n> avoid crashes?\n\nWe already have one (max_stack_depth). The problem is lack of calling\nthe control function in a few places.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 24 Aug 2022 12:49:53 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Thanks.\n\nOn Wed, 24 Aug, 2022, 4:19 pm Alvaro Herrera, <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Aug-24, mahendrakar s wrote:\n>\n> > Hi,\n> > Can we have a parameter to control the recursion depth in these cases to\n> > avoid crashes?\n>\n> We already have one (max_stack_depth). The problem is lack of calling\n> the control function in a few places.\n>\n> --\n> Álvaro Herrera 48°01'N 7°57'E —\n> https://www.EnterpriseDB.com/\n>\n\nThanks. On Wed, 24 Aug, 2022, 4:19 pm Alvaro Herrera, <alvherre@alvh.no-ip.org> wrote:On 2022-Aug-24, mahendrakar s wrote:\n\n> Hi,\n> Can we have a parameter to control the recursion depth in these cases to\n> avoid crashes?\n\nWe already have one (max_stack_depth). The problem is lack of calling\nthe control function in a few places.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/",
"msg_date": "Wed, 24 Aug 2022 16:29:07 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 6:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Aug-24, mahendrakar s wrote:\n>\n> > Hi,\n> > Can we have a parameter to control the recursion depth in these cases to\n> > avoid crashes?\n>\n> We already have one (max_stack_depth). The problem is lack of calling\n> the control function in a few places.\n\n\nThanks Egor and Alexander for the work! I think we can just add\ncheck_stack_depth checks in these cases.\n\nThanks\nRichard\n\nOn Wed, Aug 24, 2022 at 6:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2022-Aug-24, mahendrakar s wrote:\n\n> Hi,\n> Can we have a parameter to control the recursion depth in these cases to\n> avoid crashes?\n\nWe already have one (max_stack_depth). The problem is lack of calling\nthe control function in a few places. Thanks Egor and Alexander for the work! I think we can just addcheck_stack_depth checks in these cases.ThanksRichard",
"msg_date": "Wed, 24 Aug 2022 19:12:34 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 7:12 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Wed, Aug 24, 2022 at 6:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n>> On 2022-Aug-24, mahendrakar s wrote:\n>>\n>> > Hi,\n>> > Can we have a parameter to control the recursion depth in these cases to\n>> > avoid crashes?\n>>\n>> We already have one (max_stack_depth). The problem is lack of calling\n>> the control function in a few places.\n>\n>\n> Thanks Egor and Alexander for the work! I think we can just add\n> check_stack_depth checks in these cases.\n>\n\nAttached adds the checks in these places. But I'm not sure about the\nsnowball case. Can we edit src/backend/snowball/libstemmer/*.c directly?\n\nThanks\nRichard",
"msg_date": "Wed, 24 Aug 2022 19:54:36 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi Richard,\n\nPatch is looking good to me. Would request others to take a look at it as\nwell.\n\nThanks,\nMahendrakar.\n\nOn Wed, 24 Aug 2022 at 17:24, Richard Guo <guofenglinux@gmail.com> wrote:\n\n>\n> On Wed, Aug 24, 2022 at 7:12 PM Richard Guo <guofenglinux@gmail.com>\n> wrote:\n>\n>>\n>> On Wed, Aug 24, 2022 at 6:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\n>> wrote:\n>>\n>>> On 2022-Aug-24, mahendrakar s wrote:\n>>>\n>>> > Hi,\n>>> > Can we have a parameter to control the recursion depth in these cases\n>>> to\n>>> > avoid crashes?\n>>>\n>>> We already have one (max_stack_depth). The problem is lack of calling\n>>> the control function in a few places.\n>>\n>>\n>> Thanks Egor and Alexander for the work! I think we can just add\n>> check_stack_depth checks in these cases.\n>>\n>\n> Attached adds the checks in these places. But I'm not sure about the\n> snowball case. Can we edit src/backend/snowball/libstemmer/*.c directly?\n>\n> Thanks\n> Richard\n>\n\nHi Richard,Patch is looking good to me. Would request others to take a look at it as well.Thanks,Mahendrakar.On Wed, 24 Aug 2022 at 17:24, Richard Guo <guofenglinux@gmail.com> wrote:On Wed, Aug 24, 2022 at 7:12 PM Richard Guo <guofenglinux@gmail.com> wrote:On Wed, Aug 24, 2022 at 6:49 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:On 2022-Aug-24, mahendrakar s wrote:\n\n> Hi,\n> Can we have a parameter to control the recursion depth in these cases to\n> avoid crashes?\n\nWe already have one (max_stack_depth). The problem is lack of calling\nthe control function in a few places. Thanks Egor and Alexander for the work! I think we can just addcheck_stack_depth checks in these cases. Attached adds the checks in these places. But I'm not sure about thesnowball case. Can we edit src/backend/snowball/libstemmer/*.c directly?ThanksRichard",
"msg_date": "Wed, 24 Aug 2022 18:29:02 +0530",
"msg_from": "mahendrakar s <mahendrakarforpg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <kyzevan23@mail.ru> writes:\n> Therefore, Alexander Lakhin and I decided to deal with this issue and Alexander developed a methodology. We processed src/backend/*/*.c with \"clang -emit-llvm ... | opt -analyze -print-calgraph\" to find all the functions that call themselves directly. I checked each of them for features that protect against stack overflows.\n> We analyzed 4 catalogs: regex, tsearch, snowball and adt.\n> Firstly, we decided to test the regex catalog functions and found 6 of them that lack the check_stach_depth() call.\n\nNice work! I wonder if you can make the regex crashes reachable by\nreducing the value of max_stack_depth enough that it's hit before\nreaching the \"regular expression is too complex\" limit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 09:58:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: =?UTF-8?B?U3RhY2sgb3ZlcmZsb3cgaXNzdWU=?="
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> Attached adds the checks in these places. But I'm not sure about the\n> snowball case. Can we edit src/backend/snowball/libstemmer/*.c directly?\n\nNo, that file is generated code, as it says right at the top.\n\nI think most likely we should report this to Snowball upstream\nand see what they think is an appropriate fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 10:03:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <kyzevan23@mail.ru> writes:\n> Firstly, we decided to test the regex catalog functions and found 6 of them that lack the check_stach_depth() call.\n\n> zaptreesubs\n> markst\n> next\n> nfatree\n> numst\n> repeat\n\nI took a closer look at these. I think the markst, numst, and nfatree\ncases are non-issues. They are recursing on a subre tree that was just\nbuilt by parse(), so parse() must have successfully recursed the same\nnumber of levels. parse() surely has a larger stack frame, and it\ndoes have a stack overflow guard (in subre()), so it would have failed\ncleanly before making a data structure that could be hazardous here.\nAlso, having markst error out would be problematic for the reasons\ndiscussed in its comment, so I'm disinclined to try to add checks\nthat have no use.\n\nBTW, I wonder why your test didn't notice freesubre()? But the\nsame analysis applies there, as does the concern that we can't\njust error out.\n\nLikewise, zaptreesubs() can't recurse more levels than cdissect() did,\nand that has a stack check, so I'm not very excited about adding\nanother one there.\n\nI believe that repeat() is a non-issue because (a) the number of\nrecursion levels in it is limited by DUPMAX, which is generally going\nto be 255, or at least not enormous, and (b) it will recurse at most\nonce before calling dupnfa(), which contains stack checks.\n\nI think next() is a legit issue, although your example doesn't crash\nfor me. I suppose that's because my compiler turned the tail recursion\ninto a loop, and I suggest that we fix it by doing that manually.\n(Richard's proposed fix is wrong anyway: we can't just throw elog(ERROR)\nin the regex code without creating memory leaks.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 12:23:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: =?UTF-8?B?U3RhY2sgb3ZlcmZsb3cgaXNzdWU=?="
},
{
"msg_contents": "I wrote:\n> I think most likely we should report this to Snowball upstream\n> and see what they think is an appropriate fix.\n\nDone at [1], and I pushed the other fixes. Thanks again for the report!\n\n\t\t\tregards, tom lane\n\n[1] https://lists.tartarus.org/pipermail/snowball-discuss/2022-August/001734.html\n\n\n",
"msg_date": "Wed, 24 Aug 2022 13:30:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "I wrote:\n>> I think most likely we should report this to Snowball upstream\n>> and see what they think is an appropriate fix.\n\n> Done at [1], and I pushed the other fixes. Thanks again for the report!\n\nThe upstream recommendation, which seems pretty sane to me, is to\nsimply reject any string exceeding some threshold length as not\npossibly being a word. Apparently it's common to use thresholds\nas small as 64 bytes, but in the attached I used 1000 bytes.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 30 Aug 2022 11:02:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "I wrote:\n> The upstream recommendation, which seems pretty sane to me, is to\n> simply reject any string exceeding some threshold length as not\n> possibly being a word. Apparently it's common to use thresholds\n> as small as 64 bytes, but in the attached I used 1000 bytes.\n\nOn further thought: that coding treats anything longer than 1000\nbytes as a stopword, but maybe we should just accept it unmodified.\nThe manual says \"A Snowball dictionary recognizes everything, whether\nor not it is able to simplify the word\". While \"recognizes\" formally\nincludes the case of \"recognizes as a stopword\", people might find\nthis behavior surprising. We could alternatively do it as attached,\nwhich accepts overlength words but does nothing to them except\ncase-fold. This is closer to the pre-patch behavior, but gives up\nthe opportunity to avoid useless downstream processing of long words.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 30 Aug 2022 18:57:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 6:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > The upstream recommendation, which seems pretty sane to me, is to\n> > simply reject any string exceeding some threshold length as not\n> > possibly being a word. Apparently it's common to use thresholds\n> > as small as 64 bytes, but in the attached I used 1000 bytes.\n>\n> On further thought: that coding treats anything longer than 1000\n> bytes as a stopword, but maybe we should just accept it unmodified.\n> The manual says \"A Snowball dictionary recognizes everything, whether\n> or not it is able to simplify the word\". While \"recognizes\" formally\n> includes the case of \"recognizes as a stopword\", people might find\n> this behavior surprising. We could alternatively do it as attached,\n> which accepts overlength words but does nothing to them except\n> case-fold. This is closer to the pre-patch behavior, but gives up\n> the opportunity to avoid useless downstream processing of long words.\n\n\nThis patch looks good to me. It avoids overly-long words (> 1000 bytes)\ngoing through the stemmer so the stack overflow issue in Turkish stemmer\nshould not exist any more.\n\nThanks\nRichard\n\nOn Wed, Aug 31, 2022 at 6:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> The upstream recommendation, which seems pretty sane to me, is to\n> simply reject any string exceeding some threshold length as not\n> possibly being a word. Apparently it's common to use thresholds\n> as small as 64 bytes, but in the attached I used 1000 bytes.\n\nOn further thought: that coding treats anything longer than 1000\nbytes as a stopword, but maybe we should just accept it unmodified.\nThe manual says \"A Snowball dictionary recognizes everything, whether\nor not it is able to simplify the word\". While \"recognizes\" formally\nincludes the case of \"recognizes as a stopword\", people might find\nthis behavior surprising. 
We could alternatively do it as attached,\nwhich accepts overlength words but does nothing to them except\ncase-fold. This is closer to the pre-patch behavior, but gives up\nthe opportunity to avoid useless downstream processing of long words. This patch looks good to me. It avoids overly-long words (> 1000 bytes)going through the stemmer so the stack overflow issue in Turkish stemmershould not exist any more.ThanksRichard",
"msg_date": "Wed, 31 Aug 2022 10:38:23 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "24.08.2022 20:58, Tom Lane writes:\n> Nice work! I wonder if you can make the regex crashes reachable by\n> reducing the value of max_stack_depth enough that it's hit before\n> reaching the \"regular expression is too complex\" limit.\n>\n> \t\t\tregards, tom lane\nHello everyone! It's been a while since me and Alexander Lakhin have \npublished a list of functions that have a stack overflow illness. We are \nback to tell you more about such places.\nDuring our analyze we made a conclusion that some functions can be \ncrashed without changing any of the parameters and some can be crashed \nonly if we change some stuff.\n\nThe first function crashes without any changes:\n\n# CheckAttributeType\n\n(n=60000; printf \"create domain dint as int; create domain dinta0 as \ndint[];\"; for ((i=1;i<=$n;i++)); do printf \"create domain dinta$i as \ndinta$(( $i - 1 ))[]; \"; done; ) | psql\npsql -c \"create table t(f1 dinta60000[]);\"\n\nSome of the others crash if we change \"max_locks_per_transaction\" \nparameter:\n\n# findDependentObjects\n\nmax_locks_per_transaction = 200\n\n(n=10000; printf \"create table t (i int); create view v0 as select * \nfrom t;\"; for ((i=1;i<$n;i++)); do printf \"create view v$i as select * \nfrom v$(( $i - 1 )); \"; done; ) | psql\npsql -c \"drop table t\"\n\n# ATExecDropColumn\n\nmax_locks_per_transaction = 300\n\n(n=50000; printf \"create table t0 (a int, b int); \"; for \n((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1 \n))); \"; done; printf \"alter table t0 drop b;\" ) | psql\n\n# ATExecDropConstraint\n\nmax_locks_per_transaction = 300\n\n(n=50000; printf \"create table t0 (a int, b int, constraint bc check (b \n > 0));\"; for ((i=1;i<=$n;i++)); do printf \"create table t$i() \ninherits(t$(( $i - 1 ))); \"; done; printf \"alter table t0 drop \nconstraint bc;\" ) | psql\n\n# ATExecAddColumn\n\nmax_locks_per_transaction = 200\n\n(n=50000; printf \"create table t0 (a int, b int);\"; for 
\n((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1 \n))); \"; done; printf \"alter table t0 add column c int;\" ) | psql\n\n# ATExecAlterConstrRecurse\n\nmax_locks_per_transaction = 300\n\n(n=50000;\nprintf \"create table t(a int primary key); create table pt (a int \nprimary key, foreign key(a) references t) partition by range (a);\";\nprintf \"create table pt0 partition of pt for values from (0) to (100000) \npartition by range (a);\";\nfor ((i=1;i<=$n;i++)); do printf \"create table pt$i partition of pt$(( \n$i - 1 )) for values from ($i) to (100000) partition by range (a); \"; done;\nprintf \"alter table pt alter constraint pt_a_fkey deferrable initially \ndeferred;\"\n) | psql\n\nThis is where the fun begins. According to Tom Lane, a decrease in \nmax_stack_depth could lead to new crashes, but it turned out that \nAlexander was able to find new crashes precisely due to the increase in \nthis parameter. Also, we had ulimit -s set to 8MB as the default value.\n\n# eval_const_expressions_mutator\n\nmax_stack_depth = '7000kB'\n\n(n=10000; printf \"select 'a' \"; for ((i=1;i<$n;i++)); do printf \" \ncollate \\\"C\\\" \"; done; ) | psql\n\nIf you didn’t have a crash, like me, when Alexander shared his find, \nthen probably you configured your cluster with an optimization flag -Og. \nIn the process of trying to break this function, we came to the \nconclusion that the maximum stack depth depends on the optimization flag \n(-O0/-Og). As it turned out, when optimizing, the function frame on the \nstack becomes smaller and because of this, the limit is reached more \nslowly, therefore, the system can withstand more torment. Therefore, \nthis query will fail if you have a cluster configured with the -O0 \noptimization flag.\n\nThe crash of the next function not only depends on the optimization \nflag, but also on a number of other things. While researching, we \nnoticed that postgres enforces a distance ~400kB from max_stack_depth to \nulimit -s. 
We thought we could hit the max_stack_depth limit and then \nhit the OS limit as well. Therefore, Alexander wrote a recursive SQL \nfunction, that eats up a stack within max_stack_depth, including a query \nthat eats up the remaining ~400kB. And this causes a crash.\n\n# executeBoolItem\n\nmax_stack_depth = '7600kB'\n\ncreate function infinite_recurse(i int) returns int as $$\nbegin\n raise notice 'Level %', i;\n begin\n perform jsonb_path_query('{\"a\":[1]}'::jsonb, ('$.a[*] ? (' || \nrepeat('!(', 4800) || '@ == @' || repeat(')', 4800) || ')')::jsonpath);\n exception\n when others then raise notice 'jsonb_path_query error at level %, \n%', i, sqlerrm;\n end;\n begin\n select infinite_recurse(i + 1) into i;\n exception\n when others then raise notice 'Max stack depth reached at level %, \n%', i, sqlerrm;\n end;\n return i;\nend;\n$$ language plpgsql;\n\nselect infinite_recurse(1);\n\nTo sum it all up, we have not yet decided on a general approach to such \nfunctions. Some functions are definitely subject to stack overflow. Some \nare definitely not. This can be seen from the code where the recurse \nflag is passed, or a function that checks the stack is called before a \nrecursive call. Some require special conditions - for example, you need \nto parse the query and build a plan, and at that stage the stack is \neaten faster (and checked) than by the function that we are interested in.\n\nWe keep researching and hope to come up with a good solution sooner or \nlater.\n\n\n",
"msg_date": "Wed, 26 Oct 2022 21:47:08 +0700",
"msg_from": "Egor Chindyaskin <kyzevan23@mail.ru>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": ">Среда, 26 октября 2022, 21:47 +07:00 от Egor Chindyaskin <kyzevan23@mail.ru>:\n> \n>24.08.2022 20:58, Tom Lane writes:\n>> Nice work! I wonder if you can make the regex crashes reachable by\n>> reducing the value of max_stack_depth enough that it's hit before\n>> reaching the \"regular expression is too complex\" limit.\n>>\n>> regards, tom lane Hello everyone! It's been a while since me and Alexander Lakhin have\n>published a list of functions that have a stack overflow illness. We are\n>back to tell you more about such places.\n>During our analyze we made a conclusion that some functions can be\n>crashed without changing any of the parameters and some can be crashed\n>only if we change some stuff.\n>\n>The first function crashes without any changes:\n>\n># CheckAttributeType\n>\n>(n=60000; printf \"create domain dint as int; create domain dinta0 as\n>dint[];\"; for ((i=1;i<=$n;i++)); do printf \"create domain dinta$i as\n>dinta$(( $i - 1 ))[]; \"; done; ) | psql\n>psql -c \"create table t(f1 dinta60000[]);\"\n>\n>Some of the others crash if we change \"max_locks_per_transaction\"\n>parameter:\n>\n># findDependentObjects\n>\n>max_locks_per_transaction = 200\n>\n>(n=10000; printf \"create table t (i int); create view v0 as select *\n>from t;\"; for ((i=1;i<$n;i++)); do printf \"create view v$i as select *\n>from v$(( $i - 1 )); \"; done; ) | psql\n>psql -c \"drop table t\"\n>\n># ATExecDropColumn\n>\n>max_locks_per_transaction = 300\n>\n>(n=50000; printf \"create table t0 (a int, b int); \"; for\n>((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1\n>))); \"; done; printf \"alter table t0 drop b;\" ) | psql\n>\n># ATExecDropConstraint\n>\n>max_locks_per_transaction = 300\n>\n>(n=50000; printf \"create table t0 (a int, b int, constraint bc check (b\n> > 0));\"; for ((i=1;i<=$n;i++)); do printf \"create table t$i()\n>inherits(t$(( $i - 1 ))); \"; done; printf \"alter table t0 drop\n>constraint bc;\" ) | psql\n>\n># 
ATExecAddColumn\n>\n>max_locks_per_transaction = 200\n>\n>(n=50000; printf \"create table t0 (a int, b int);\"; for\n>((i=1;i<=$n;i++)); do printf \"create table t$i() inherits(t$(( $i - 1\n>))); \"; done; printf \"alter table t0 add column c int;\" ) | psql\n>\n># ATExecAlterConstrRecurse\n>\n>max_locks_per_transaction = 300\n>\n>(n=50000;\n>printf \"create table t(a int primary key); create table pt (a int\n>primary key, foreign key(a) references t) partition by range (a);\";\n>printf \"create table pt0 partition of pt for values from (0) to (100000)\n>partition by range (a);\";\n>for ((i=1;i<=$n;i++)); do printf \"create table pt$i partition of pt$((\n>$i - 1 )) for values from ($i) to (100000) partition by range (a); \"; done;\n>printf \"alter table pt alter constraint pt_a_fkey deferrable initially\n>deferred;\"\n>) | psql\n>\n>This is where the fun begins. According to Tom Lane, a decrease in\n>max_stack_depth could lead to new crashes, but it turned out that\n>Alexander was able to find new crashes precisely due to the increase in\n>this parameter. Also, we had ulimit -s set to 8MB as the default value.\n>\n># eval_const_expressions_mutator\n>\n>max_stack_depth = '7000kB'\n>\n>(n=10000; printf \"select 'a' \"; for ((i=1;i<$n;i++)); do printf \"\n>collate \\\"C\\\" \"; done; ) | psql\n>\n>If you didn’t have a crash, like me, when Alexander shared his find,\n>then probably you configured your cluster with an optimization flag -Og.\n>In the process of trying to break this function, we came to the\n>conclusion that the maximum stack depth depends on the optimization flag\n>(-O0/-Og). As it turned out, when optimizing, the function frame on the\n>stack becomes smaller and because of this, the limit is reached more\n>slowly, therefore, the system can withstand more torment. 
Therefore,\n>this query will fail if you have a cluster configured with the -O0\n>optimization flag.\n>\n>The crash of the next function not only depends on the optimization\n>flag, but also on a number of other things. While researching, we\n>noticed that postgres enforces a distance ~400kB from max_stack_depth to\n>ulimit -s. We thought we could hit the max_stack_depth limit and then\n>hit the OS limit as well. Therefore, Alexander wrote a recursive SQL\n>function, that eats up a stack within max_stack_depth, including a query\n>that eats up the remaining ~400kB. And this causes a crash.\n>\n># executeBoolItem\n>\n>max_stack_depth = '7600kB'\n>\n>create function infinite_recurse(i int) returns int as $$\n>begin\n> raise notice 'Level %', i;\n> begin\n> perform jsonb_path_query('{\"a\":[1]}'::jsonb, ('$.a[*] ? (' ||\n>repeat('!(', 4800) || '@ == @' || repeat(')', 4800) || ')')::jsonpath);\n> exception\n> when others then raise notice 'jsonb_path_query error at level %,\n>%', i, sqlerrm;\n> end;\n> begin\n> select infinite_recurse(i + 1) into i;\n> exception\n> when others then raise notice 'Max stack depth reached at level %,\n>%', i, sqlerrm;\n> end;\n> return i;\n>end;\n>$$ language plpgsql;\n>\n>select infinite_recurse(1);\n>\n>To sum it all up, we have not yet decided on a general approach to such\n>functions. Some functions are definitely subject to stack overflow. Some\n>are definitely not. This can be seen from the code where the recurse\n>flag is passed, or a function that checks the stack is called before a\n>recursive call. 
Some require special conditions - for example, you need\n>to parse the query and build a plan, and at that stage the stack is\n>eaten faster (and checked) than by the function that we are interested in.\n>\n>We keep researching and hope to come up with a good solution sooner or\n>later.\nHello, in continuation of the topic of the stack overflow problem, Alexander Lakhin was able to find a few more similar places.\n \nAn important point for the first function is that the server must be built with asserts enabled, otherwise the crash will not happen.\nAlso, the server crash occurs only after 2-3 hours.\n \n#MemoryContextCheck\n(n=1000000; printf \"begin;\"; for ((i=1;i<=$n;i++)); do printf \"savepoint s$i;\"; done; printf \"release s1;\" ) | psql >/dev/null\n \nOther functions could be crashed without asserts enabled.\n \n#CommitTransactionCommand\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT s$i;\"; done; printf \"ERROR; COMMIT;\") | psql >/dev/null\n \n#MemoryContextStatsInternal\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT s$i;\"; done; printf \"SELECT pg_log_backend_memory_contexts(pg_backend_pid())\") | psql >/dev/null\n \n#ShowTransactionStateRec\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT s$i;\"; done; printf \"SET log_min_messages = 'DEBUG5'; SAVEPOINT sp;\") | psql >/dev/null\n \nThe next two functions call each other; the following way was found to overflow the stack (with a modified server configuration):\n \n#MemoryContextDeleteChildren with MemoryContextDelete\n \nmax_connections = 1000\nmax_stack_depth = '7600kB'\n \ncreate table idxpart (a int) partition by range (a);\n \nselect 'create index on idxpart (a)' from generate_series(1, 40000);\n\\gexec\n \ncreate function 
infinite_recurse(level int) returns int as $$\ndeclare l int;\nbegin\n begin\n select infinite_recurse(level + 1) into level;\n exception\n when others then raise notice 'Max stack depth reached at level %, %', level, sqlerrm;\n \n create table idxpart1 partition of idxpart for values from (1) to (2) partition by range (a); \n \n end;\n return level;\nend;\n$$ language plpgsql;\n \nselect infinite_recurse(1);\n \nFinally, there are yet two recursive functions in mcxt.c:\n \n#MemoryContextResetChildren - could be vulnerable but not used at all after eaa5808e.\n \n#MemoryContextMemAllocated - at present called only with local contexts.",
"msg_date": "Tue, 03 Jan 2023 18:40:57 +0300",
"msg_from": "=?UTF-8?B?0JXQs9C+0YAg0KfQuNC90LTRj9GB0LrQuNC9?= <kyzevan23@mail.ru>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?UmU6IFN0YWNrIG92ZXJmbG93IGlzc3Vl?="
},
{
"msg_contents": "Great work. Max Stack depth is memory dependent? Processor dependent?\n\nЕгор Чиндяскин <kyzevan23@mail.ru> schrieb am Mi., 24. Aug. 2022, 11:51:\n\n> Hello, I recently got a server crash (bug #17583 [1]) caused by a stack\n> overflow.\n>\n> Tom Lane and Richard Guo, in a discussion of this bug, suggested that\n> there could be more such places.\n> Therefore, Alexander Lakhin and I decided to deal with this issue and\n> Alexander developed a methodology. We processed src/backend/*/*.c with\n> \"clang -emit-llvm ... | opt -analyze -print-calgraph\" to find all the\n> functions that call themselves directly. I checked each of them for\n> features that protect against stack overflows.\n> We analyzed 4 catalogs: regex, tsearch, snowball and adt.\n> Firstly, we decided to test the regex catalog functions and found 6 of\n> them that lack the check_stach_depth() call.\n>\n> zaptreesubs\n> markst\n> next\n> nfatree\n> numst\n> repeat\n>\n> We have tried to exploit the recursion in the function zaptreesubs():\n> select regexp_matches('a' || repeat(' a', 11000), '(.)(' || repeat(' \\1',\n> 11000) || ')?');\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> repeat():\n> select regexp_match('abc01234xyz',repeat('a{0,2}',100001));\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> numst():\n> select regexp_match('abc01234xyz',repeat('(.)\\1e',100001));\n>\n> ERROR: invalid regular expression: regular expression is too complex\n>\n> markst():\n> markst is called in the code after v->tree = parse(...);\n> it is necessary that the tree be successfully parsed, but with a nesting\n> level of about 100,000 this will not work - stack protection will work\n> during parsing and v->ntree = numst(...); is also there.\n>\n> next():\n> we were able to crash the server with the following query:\n> (printf \"SELECT regexp_match('abc', 'a\"; for ((i=1;i<1000000;i++)); do\n> printf \"(?#)\"; done; printf \"b')\" ) | 
psql\n>\n> Secondly, we have tried to exploit the recursion in the adt catalog\n> functions and Alexander was able to crash the server with the following\n> query:\n>\n> regex_selectivity_sub():\n> SELECT * FROM pg_proc WHERE proname ~ ('(a' || repeat('|', 200000) ||\n> 'b)');\n>\n> And this query:\n>\n> (n=100000;\n> printf \"SELECT polygon '((0,0),(0,1000000))' <@ polygon\n> '((-200000,1000000),\";\n> for ((i=1;i<$n;i++)); do printf \"(100000,$(( 300000 +\n> $i))),(-100000,$((800000 + $i))),\"; done;\n> printf \"(200000,900000),(200000,0))';\"\n> ) | psql\n>\n> Thirdly, the snowball catalog, Alexander has tried to exploit the\n> recursion in the r_stem_suffix_chain_before_ki function and crashed a\n> server using this query:\n>\n> r_stem_suffix_chain_before_ki():\n> SELECT ts_lexize('turkish_stem', repeat('lerdeki', 1000000));\n>\n> The last one is the tsearch catalog. We have found 4 functions that didn't\n> have check_stach_depth() function:\n>\n> SplitToVariants\n> mkANode\n> mkSPNode\n> LexizeExec\n>\n> We have tried to exploit the recursion in the SplitToVariants function and\n> Alexander crashed a server using this:\n>\n> SplitToVariants():\n> CREATE TEXT SEARCH DICTIONARY ispell (Template=ispell,\n> DictFile=ispell_sample,AffFile=ispell_sample);\n> SELECT ts_lexize('ispell', repeat('bally', 10000));\n>\n> After trying to exploit the recursion in the LexizeExec function Alexander\n> made this conlusion:\n>\n> LexizeExec has two branches \"ld->curDictId == InvalidOid\" (usual mode) and\n> \"ld->curDictId != InvalidOid\" (multiword mode) - we start with the first\n> one, then make recursive call to switch to the multiword mode, but then we\n> return to the usual mode again.\n>\n> mkANode and mkSPNode deal with the dictionary structs, not with\n> user-supplied data, so we believe these functions are not vulnerable.\n>\n> [1]\n> 
https://www.postgresql.org/message-id/flat/CAMbWs499ytQiH4mLMhRxRWP-iEUz3-DSinpAD-cUCtVo_23Wtg%40mail.gmail.com#03ad703cf4bc8d28ccba69913e1e8106\n>",
"msg_date": "Tue, 3 Jan 2023 16:45:16 +0100",
"msg_from": "Sascha Kuhl <yogidabanli@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hello! In continuation of the topic, I, under the leadership of \nAlexander Lakhin, prepared patches that fix these problems.\nWe decided that these checks would be enough and put them in the places \nwe saw fit.",
"msg_date": "Thu, 19 Jan 2023 16:18:42 +0700",
"msg_from": "Egor Chindyaskin <kyzevan23@mail.ru>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
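The guard those patches add is PostgreSQL's standard stack-depth-check idiom: call a check at the top of each recursive function and error out cleanly before the OS stack is exhausted. The sketch below is a self-contained simulation of that idiom, not server code: a plain depth counter and longjmp stand in for the real stack-pointer comparison in check_stack_depth() and for ereport(ERROR).

```c
/* Self-contained simulation of the check_stack_depth() idiom.
 * MAX_DEPTH, check_depth(), count_nodes() and safe_count() are all
 * invented for this example. */
#include <setjmp.h>

#define MAX_DEPTH 1000

static jmp_buf depth_error;

/* Stands in for check_stack_depth(): bail out before overflowing. */
static void check_depth(int depth)
{
    if (depth > MAX_DEPTH)
        longjmp(depth_error, 1);    /* stands in for ereport(ERROR, ...) */
}

/* A recursive walker guarded the way the patched functions are. */
static long count_nodes(int depth, long remaining)
{
    check_depth(depth);             /* fail cleanly instead of crashing */
    if (remaining == 0)
        return 0;
    return 1 + count_nodes(depth + 1, remaining - 1);
}

/* Returns -1 if the depth limit was hit, else the node count. */
long safe_count(long remaining)
{
    if (setjmp(depth_error) != 0)
        return -1;
    return count_nodes(0, remaining);
}
```

With the guard in place, pathological inputs produce a recoverable error return instead of a SIGSEGV, which is the behavior change the patches aim for.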
{
"msg_contents": "\n03.01.2023 22:45, Sascha Kuhl writes:\n> Great work. Max Stack depth is memory dependent? Processor dependent?\nHello! These situations are not specific to the x86_64 architecture, but \nalso manifest themselves, for example, on aarch64 architecture.\nFor example this query, ran on aarch64, (n=1000000;printf \"begin;\"; for \n((i=1;i<=$n;i++)); do printf \"savepoint s$i;\"; done; printf \"release \ns1;\" ) | psql > /dev/null\ncrashed the server on the savepoint174617 with the following stacktrace:\n\nCore was generated by `postgres: test test [local] \nSAVEPOINT '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 AllocSetCheck (context=<error reading variable: Cannot access memory \nat address 0xffffe2397fe8>) at aset.c:1409\n1409 {\n(gdb) bt\n#0 AllocSetCheck (context=<error reading variable: Cannot access memory \nat address 0xffffe2397fe8>) at aset.c:1409\n#1 0x0000aaaad78c38c4 in MemoryContextCheck (context=0xaaab39ee16a0) at \nmcxt.c:740\n#2 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39edf690) at \nmcxt.c:742\n#3 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39edd680) at \nmcxt.c:742\n#4 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39edb670) at \nmcxt.c:742\n#5 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ed9660) at \nmcxt.c:742\n#6 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ed7650) at \nmcxt.c:742\n#7 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ed5640) at \nmcxt.c:742\n#8 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ed3630) at \nmcxt.c:742\n#9 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ed1620) at \nmcxt.c:742\n#10 0x0000aaaad78c38dc in MemoryContextCheck (context=0xaaab39ecf610) at \nmcxt.c:742\n...\n#174617 0x0000aaaad78c38dc in MemoryContextCheck \n(context=0xaaaae47994b0) at mcxt.c:742\n#174618 0x0000aaaad78c38dc in MemoryContextCheck \n(context=0xaaaae476dcd0) at mcxt.c:742\n#174619 0x0000aaaad78c38dc in 
MemoryContextCheck \n(context=0xaaaae46ead50) at mcxt.c:742\n#174620 0x0000aaaad76c7e24 in finish_xact_command () at postgres.c:2739\n#174621 0x0000aaaad76c55b8 in exec_simple_query \n(query_string=0xaaaae46f0540 \"savepoint s174617;\") at postgres.c:1238\n#174622 0x0000aaaad76ca7a4 in PostgresMain (argc=1, argv=0xffffe2b96898, \ndbname=0xaaaae471c098 \"test\", username=0xaaaae471c078 \"test\") at \npostgres.c:4508\n#174623 0x0000aaaad75e263c in BackendRun (port=0xaaaae4711470) at \npostmaster.c:4530\n#174624 0x0000aaaad75e1f70 in BackendStartup (port=0xaaaae4711470) at \npostmaster.c:4252\n#174625 0x0000aaaad75dd4c0 in ServerLoop () at postmaster.c:1745\n#174626 0x0000aaaad75dcd3c in PostmasterMain (argc=3, \nargv=0xaaaae46eacb0) at postmaster.c:1417\n#174627 0x0000aaaad74d462c in main (argc=3, argv=0xaaaae46eacb0) at \nmain.c:209\n\n\n",
"msg_date": "Fri, 20 Jan 2023 12:50:20 +0700",
"msg_from": "Egor Chindyaskin <kyzevan23@mail.ru>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hello! In continuation of the topic I would like to suggest solution. \nThis patch adds several checks to the vulnerable functions above.",
"msg_date": "Wed, 21 Jun 2023 16:45:00 +0300",
"msg_from": "Egor Chindyaskin <kyzevan23@mail.ru>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On 21/06/2023 16:45, Egor Chindyaskin wrote:\n> Hello! In continuation of the topic I would like to suggest solution.\n> This patch adds several checks to the vulnerable functions above.\n\nI looked at this last patch. The depth checks are clearly better than \nsegfaulting, but I think we can also avoid the recursions and having to \nerror out. That seems nice especially for MemoryContextDelete(), which \nis called at transaction cleanup.\n\n1. CommitTransactionCommand\n\nThis is just tail recursion. The compiler will almost certainly optimize \nit away unless you're using -O0. We can easily turn it into iteration \nourselves to avoid that hazard, per attached \n0001-Turn-tail-recursion-into-iteration-in-CommitTransact.patch.\n\n2. ShowTransactionStateRec\n\nSince this is just a debugging aid, I think we can just stop recursing \nif we're about to run out of stack space. Seems nicer than erroring out, \nalthough it can still error if you run out of memory. See \n0002-Avoid-stack-overflow-in-ShowTransactionStateRec.patch.\n\n3. All the MemoryContext functions\n\nI'm reluctant to add stack checks to these, because they are called in \nplaces like cleaning up after transaction abort. MemoryContextDelete() \nin particular. If you ereport an error, it's not clear that you can \nrecover cleanly; you'll leak memory if nothing else.\n\nFortunately MemoryContext contains pointers to parent and siblings, so \nwe can traverse a tree of MemoryContexts iteratively, without using stack.\n\nMemoryContextStats() is a bit tricky, but we can put a limit on how much \nit recurses, and just print a summary line if the limit is reached. \nThat's what we already do if a memory context has a lot of children. \n(Actually, if we didn't try keep track of the # of children at each \nlevel, to trigger the summarization, we could traverse the tree without \nusing stack. But a limit seems useful.)\n\nWhat do you think?\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 24 Nov 2023 17:14:24 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
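The tail-recursion-to-iteration rewrite Heikki describes for CommitTransactionCommand can be sketched with a toy chain of subtransaction states. The Subxact type and both function names here are invented for the example; the real change is in src/backend/access/transam/xact.c.

```c
/* Sketch of turning tail recursion into iteration. */
#include <stddef.h>

typedef struct Subxact
{
    struct Subxact *parent;     /* enclosing subtransaction, NULL at top */
    int released;
} Subxact;

/* Tail-recursive form: the compiler usually turns this into a loop,
 * but not at -O0, so deep savepoint nesting can overflow the stack. */
void release_recursive(Subxact *s)
{
    if (s == NULL)
        return;
    s->released = 1;
    release_recursive(s->parent);   /* tail call */
}

/* Equivalent iterative form: bounded stack use at any optimization level. */
void release_iterative(Subxact *s)
{
    while (s != NULL)
    {
        s->released = 1;
        s = s->parent;
    }
}
```

Both forms visit the same chain; the iterative one simply makes the constant stack usage explicit instead of relying on the optimizer.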
{
"msg_contents": "On 24/11/2023 21:14, Heikki Linnakangas wrote:\n> What do you think?\nHello! Thank you for researching the problem! I'm more of a tester than \na developer, so I was able to check the patches from that side.\nI've configured the server with CFLAGS=\" -O0\" and cassert enabled and \nchecked the following queries:\n\n#CommitTransactionCommand\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT \ns$i;\"; done; printf \"ERROR; COMMIT;\") | psql >/dev/null\n\n#ShowTransactionStateRec\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT \ns$i;\"; done; printf \"SET log_min_messages = 'DEBUG5'; SAVEPOINT sp;\") | \npsql >/dev/null\n\n#MemoryContextCheck\n(n=1000000; printf \"begin;\"; for ((i=1;i<=$n;i++)); do printf \"savepoint \ns$i;\"; done; printf \"release s1;\" ) | psql >/dev/null\n\n#MemoryContextStatsInternal\n(n=1000000; printf \"BEGIN;\"; for ((i=1;i<=$n;i++)); do printf \"SAVEPOINT \ns$i;\"; done; printf \"SELECT \npg_log_backend_memory_contexts(pg_backend_pid())\") | psql >/dev/null\n\nOn my system, every of that queries led to a server crash at a number of \nsavepoints in the range from 174,400 to 174,700.\nWith your patches applied, the savepoint counter goes well beyond these \nvalues, I settled on an amount of approximately 300,000 savepoints.\nYour patches look good to me.\n\nBest regards,\nEgor Chindyaskin\nPostgres Professional: http://postgrespro.com/\n\n\n",
"msg_date": "Thu, 21 Dec 2023 15:45:47 +0700",
"msg_from": "Egor Chindyaskin <kyzevan23@mail.ru>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
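The recursion-free MemoryContext traversal Heikki proposes works because each context holds links to its parent and siblings, so a depth-first walk needs no call stack and no auxiliary stack at all. Below is a self-contained sketch over a toy node type; the field names loosely mimic MemoryContextData (firstchild, nextchild, parent), but the type and function are invented for the example.

```c
/* Iterative depth-first walk of a tree linked via parent/child/sibling
 * pointers, using O(1) extra space. */
#include <stddef.h>

typedef struct Node
{
    struct Node *parent;
    struct Node *firstchild;
    struct Node *nextchild;     /* next sibling */
    int visited;
} Node;

/* Visits every node under root (inclusive); returns the node count. */
long visit_all(Node *root)
{
    long count = 0;
    Node *cur = root;

    while (cur != NULL)
    {
        cur->visited = 1;
        count++;

        if (cur->firstchild != NULL)
        {
            cur = cur->firstchild;          /* descend */
            continue;
        }
        /* No children: climb until a sibling exists or we are back at root. */
        while (cur != root && cur->nextchild == NULL)
            cur = cur->parent;
        if (cur == root)
            break;
        cur = cur->nextchild;               /* move sideways */
    }
    return count;
}
```

Because the traversal state lives entirely in the tree's own links, the depth of the context tree no longer matters, which is why this shape is attractive for cleanup paths like MemoryContextDelete() that must not fail.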
{
"msg_contents": "On Fri, Nov 24, 2023 at 10:47 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> What do you think?\n\nAt least for 0001 and 0002, I think we should just add the stack depth checks.\n\nWith regard to 0001, CommitTransactionCommand() and friends are hard\nenough to understand as it is; they need \"goto\" like I need an extra\nhole in my head.\n\nWith regard to 0002, this function isn't sufficiently important to\njustify adding special-case code for an extremely rare event. We\nshould just handle it the way we do in general.\n\nI agree that in the memory-context case it might be worth expending\nsome more code to be more clever. But I probably wouldn't do that for\nMemoryContextStats(); check_stack_depth() seems fine for that one.\n\nIn general, I think we should try to keep the number of places that\nhandle stack overflow in \"special\" ways as small as possible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:23:25 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi,\n\nOn 2024-01-05 12:23:25 -0500, Robert Haas wrote:\n> I agree that in the memory-context case it might be worth expending\n> some more code to be more clever. But I probably wouldn't do that for\n> MemoryContextStats(); check_stack_depth() seems fine for that one.\n\nWe run MemoryContextStats() when we fail to allocate memory, including during\nabort processing after a previous error. So I think it qualifies for being\nsomewhat special. Thus I suspect check_stack_depth() wouldn't be a good idea -\nbut we could make the stack_is_too_deep() path simpler and just return in the\nexisting MemoryContextStatsInternal() when that's the case.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 5 Jan 2024 12:16:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Fri, Jan 5, 2024 at 3:16 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2024-01-05 12:23:25 -0500, Robert Haas wrote:\n> > I agree that in the memory-context case it might be worth expending\n> > some more code to be more clever. But I probably wouldn't do that for\n> > MemoryContextStats(); check_stack_depth() seems fine for that one.\n>\n> We run MemoryContextStats() when we fail to allocate memory, including during\n> abort processing after a previous error. So I think it qualifies for being\n> somewhat special.\n\nOK.\n\n> Thus I suspect check_stack_depth() wouldn't be a good idea -\n> but we could make the stack_is_too_deep() path simpler and just return in the\n> existing MemoryContextStatsInternal() when that's the case.\n\nSince this kind of code will be exercised so rarely, it's highly\nvulnerable to bugs, so I favor keeping it as simple as we can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 5 Jan 2024 15:19:18 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On 05/01/2024 19:23, Robert Haas wrote:\n> On Fri, Nov 24, 2023 at 10:47 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> What do you think?\n> \n> At least for 0001 and 0002, I think we should just add the stack depth checks.\n> \n> With regard to 0001, CommitTransactionCommand() and friends are hard\n> enough to understand as it is; they need \"goto\" like I need an extra\n> hole in my head.\n> \n> With regard to 0002, this function isn't sufficiently important to\n> justify adding special-case code for an extremely rare event. We\n> should just handle it the way we do in general.\n> \n> I agree that in the memory-context case it might be worth expending\n> some more code to be more clever. But I probably wouldn't do that for\n> MemoryContextStats(); check_stack_depth() seems fine for that one.\n> \n> In general, I think we should try to keep the number of places that\n> handle stack overflow in \"special\" ways as small as possible.\n\nThe problem with CommitTransactionCommand (or rather \nAbortCurrentTransaction() which has the same problem)\nand ShowTransactionStateRec is that they get called in a state where \naborting can lead to a panic. If you add a \"check_stack_depth()\" to them \nand try to reproducer scripts that Egor posted, you still get a panic.\n\nI'm not sure if MemoryContextStats() could safely elog(ERROR). But at \nleast it would mask the \"out of memory\" that caused the stats to be \nprinted in the first place.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Wed, 10 Jan 2024 23:25:42 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Jan 10, 2024 at 4:25 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> The problem with CommitTransactionCommand (or rather\n> AbortCurrentTransaction() which has the same problem)\n> and ShowTransactionStateRec is that they get called in a state where\n> aborting can lead to a panic. If you add a \"check_stack_depth()\" to them\n> and try to reproducer scripts that Egor posted, you still get a panic.\n\nHmm, that's unfortunate. I'm not sure what to do about that. But I'd\nstill suggest looking for a goto-free approach.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 11 Jan 2024 12:37:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On 11/01/2024 19:37, Robert Haas wrote:\n> On Wed, Jan 10, 2024 at 4:25 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> The problem with CommitTransactionCommand (or rather\n>> AbortCurrentTransaction() which has the same problem)\n>> and ShowTransactionStateRec is that they get called in a state where\n>> aborting can lead to a panic. If you add a \"check_stack_depth()\" to them\n>> and try to reproducer scripts that Egor posted, you still get a panic.\n> \n> Hmm, that's unfortunate. I'm not sure what to do about that. But I'd\n> still suggest looking for a goto-free approach.\n\nHere's one goto-free attempt. It adds a local loop to where the \nrecursion was, so that if you have a chain of subtransactions that need \nto be aborted in CommitTransactionCommand, they are aborted iteratively. \nThe TBLOCK_SUBCOMMIT case already had such a loop.\n\nI added a couple of comments in the patch marked with \"REVIEWER NOTE\", \nto explain why I changed some things. They are to be removed before \ncommitting.\n\nI'm not sure if this is better than a goto. In fact, even if we commit \nthis, I think I'd still prefer to replace the remaining recursive calls \nwith a goto. Recursion feels a weird to me, when we're unwinding the \nstates from the stack as we go.\n\nOf course we could use a \"for (;;) { ... continue }\" construct around \nthe whole function, instead of a goto, but I don't think that's better \nthan a goto in this case.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)",
"msg_date": "Fri, 12 Jan 2024 17:12:14 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Fri, Jan 12, 2024 at 10:12 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Here's one goto-free attempt. It adds a local loop to where the\n> recursion was, so that if you have a chain of subtransactions that need\n> to be aborted in CommitTransactionCommand, they are aborted iteratively.\n> The TBLOCK_SUBCOMMIT case already had such a loop.\n>\n> I added a couple of comments in the patch marked with \"REVIEWER NOTE\",\n> to explain why I changed some things. They are to be removed before\n> committing.\n>\n> I'm not sure if this is better than a goto. In fact, even if we commit\n> this, I think I'd still prefer to replace the remaining recursive calls\n> with a goto. Recursion feels a weird to me, when we're unwinding the\n> states from the stack as we go.\n\nI'm not able to quickly verify whether this version is correct, but I\ndo think the code looks nicer this way.\n\nI understand that's a question of opinion rather than fact, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 12 Jan 2024 16:00:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi!\n\nOn Fri, Jan 12, 2024 at 11:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Jan 12, 2024 at 10:12 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > Here's one goto-free attempt. It adds a local loop to where the\n> > recursion was, so that if you have a chain of subtransactions that need\n> > to be aborted in CommitTransactionCommand, they are aborted iteratively.\n> > The TBLOCK_SUBCOMMIT case already had such a loop.\n> >\n> > I added a couple of comments in the patch marked with \"REVIEWER NOTE\",\n> > to explain why I changed some things. They are to be removed before\n> > committing.\n> >\n> > I'm not sure if this is better than a goto. In fact, even if we commit\n> > this, I think I'd still prefer to replace the remaining recursive calls\n> > with a goto. Recursion feels a weird to me, when we're unwinding the\n> > states from the stack as we go.\n>\n> I'm not able to quickly verify whether this version is correct, but I\n> do think the code looks nicer this way.\n>\n> I understand that's a question of opinion rather than fact, though.\n\nI'd like to revive this thread. The attached 0001 patch represents my\nattempt to remove recursion in\nCommitTransactionCommand()/AbortCurrentTransaction() by adding a\nwrapper function. This method doesn't use goto, doesn't require much\ncode changes and subjectively provides good readability.\n\nRegarding ShowTransactionStateRec() and memory context function, as I\nget from this thread they are called in states where abortion can lead\nto a panic. So, it's preferable to change them into loops too rather\nthan just adding check_stack_depth(). The 0002 and 0003 patches by\nHeikki posted in [1] look good to me. Can we accept them?\n\nAlso there are a number of recursive functions, which seem to be not\nused in critical states where abortion can lead to a panic. I've\nextracted them from [2] into an attached 0002 patch. I'd like to push\nit if there is no objection.\n\nLinks.\n1. 
https://www.postgresql.org/message-id/6b48c746-9704-46dc-b9be-01fe4137c824%40iki.fi\n2. https://www.postgresql.org/message-id/4530546a-3216-eaa9-4c92-92d33290a211%40mail.ru\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 14 Feb 2024 14:00:06 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Feb 14, 2024 at 2:00 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Fri, Jan 12, 2024 at 11:00 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Fri, Jan 12, 2024 at 10:12 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > > Here's one goto-free attempt. It adds a local loop to where the\n> > > recursion was, so that if you have a chain of subtransactions that need\n> > > to be aborted in CommitTransactionCommand, they are aborted iteratively.\n> > > The TBLOCK_SUBCOMMIT case already had such a loop.\n> > >\n> > > I added a couple of comments in the patch marked with \"REVIEWER NOTE\",\n> > > to explain why I changed some things. They are to be removed before\n> > > committing.\n> > >\n> > > I'm not sure if this is better than a goto. In fact, even if we commit\n> > > this, I think I'd still prefer to replace the remaining recursive calls\n> > > with a goto. Recursion feels a weird to me, when we're unwinding the\n> > > states from the stack as we go.\n> >\n> > I'm not able to quickly verify whether this version is correct, but I\n> > do think the code looks nicer this way.\n> >\n> > I understand that's a question of opinion rather than fact, though.\n>\n> I'd like to revive this thread. The attached 0001 patch represents my\n> attempt to remove recursion in\n> CommitTransactionCommand()/AbortCurrentTransaction() by adding a\n> wrapper function. This method doesn't use goto, doesn't require much\n> code changes and subjectively provides good readability.\n>\n> Regarding ShowTransactionStateRec() and memory context function, as I\n> get from this thread they are called in states where abortion can lead\n> to a panic. So, it's preferable to change them into loops too rather\n> than just adding check_stack_depth(). The 0002 and 0003 patches by\n> Heikki posted in [1] look good to me. 
Can we accept them?\n>\n> Also there are a number of recursive functions, which seem to be not\n> used in critical states where abortion can lead to a panic. I've\n> extracted them from [2] into an attached 0002 patch. I'd like to push\n> it if there is no objection.\n\nThe revised set of remaining patches is attached.\n\n0001 Turn tail recursion into iteration in CommitTransactionCommand()\nI did minor revision of comments and code blocks order to improve the\nreadability.\n\n0002 Avoid stack overflow in ShowTransactionStateRec()\nI didn't notice any issues, leave this piece as is.\n\n0003 Avoid recursion in MemoryContext functions\nI've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\nwhich I think is a bit more intuitive. Also I fixed\nMemoryContextMemConsumed(), which was still trying to use the removed\nargument \"print\" of MemoryContextStatsInternal() function.\n\nGenerally, I think this patchset fixes important stack overflow holes.\nIt is quite straightforward, clear and the code has a good shape. I'm\ngoing to push this if no objections.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 6 Mar 2024 14:17:23 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The revised set of remaining patches is attached.\n> ...\n> 0003 Avoid recursion in MemoryContext functions\n> I've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\n> which I think is a bit more intuitive. Also I fixed\n> MemoryContextMemConsumed(), which was still trying to use the removed\n> argument \"print\" of MemoryContextStatsInternal() function.\n\nThis patch still doesn't compile for me --- MemoryContextMemConsumed\ngot modified some more by commit 743112a2e, and needs minor fixes.\n\nI initially didn't like the definition of MemoryContextTraverseNext\nbecause it requires two copies of the \"process node\" logic. However,\nthat seems fine for most of the callers, and even where we are\nduplicating logic it's just a line or so, so I guess it's ok.\nHowever, MemoryContextTraverseNext seems undercommented to me, plus\nthe claim that it traverses in depth-first order is just wrong.\n\nI found some bugs in MemoryContextStatsInternal too: the old\nlogic assumed that ichild exceeding max_children was the only\nway to get into the summarization logic, but now ichild minus\nmax_children could very well be negative. Fortunately we can\njust reset ichild to zero and not worry about having any\nconnection between the first loop and the second.\n\nHere's a v5 of 0003 with those issues and some more-cosmetic ones\ncleaned up. I didn't look at 0001 or 0002.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 06 Mar 2024 17:52:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 12:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > The revised set of remaining patches is attached.\n> > ...\n> > 0003 Avoid recursion in MemoryContext functions\n> > I've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\n> > which I think is a bit more intuitive. Also I fixed\n> > MemoryContextMemConsumed(), which was still trying to use the removed\n> > argument \"print\" of MemoryContextStatsInternal() function.\n>\n> This patch still doesn't compile for me --- MemoryContextMemConsumed\n> got modified some more by commit 743112a2e, and needs minor fixes.\n>\n> I initially didn't like the definition of MemoryContextTraverseNext\n> because it requires two copies of the \"process node\" logic. However,\n> that seems fine for most of the callers, and even where we are\n> duplicating logic it's just a line or so, so I guess it's ok.\n> However, MemoryContextTraverseNext seems undercommented to me, plus\n> the claim that it traverses in depth-first order is just wrong.\n>\n> I found some bugs in MemoryContextStatsInternal too: the old\n> logic assumed that ichild exceeding max_children was the only\n> way to get into the summarization logic, but now ichild minus\n> max_children could very well be negative. Fortunately we can\n> just reset ichild to zero and not worry about having any\n> connection between the first loop and the second.\n>\n> Here's a v5 of 0003 with those issues and some more-cosmetic ones\n> cleaned up. I didn't look at 0001 or 0002.\n>\n\nTom, thank you for your revision of this patch!\n\nSorry for tediousness, but isn't pre-order a variation of depth-first order\n[1]?\n\nLinks.\n1. 
https://en.wikipedia.org/wiki/Tree_traversal#Depth-first_search\n\n------\nRegards,\nAlexander Korotkov\n\nOn Thu, Mar 7, 2024 at 12:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Alexander Korotkov <aekorotkov@gmail.com> writes:\n> The revised set of remaining patches is attached.\n> ...\n> 0003 Avoid recursion in MemoryContext functions\n> I've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\n> which I think is a bit more intuitive. Also I fixed\n> MemoryContextMemConsumed(), which was still trying to use the removed\n> argument \"print\" of MemoryContextStatsInternal() function.\n\nThis patch still doesn't compile for me --- MemoryContextMemConsumed\ngot modified some more by commit 743112a2e, and needs minor fixes.\n\nI initially didn't like the definition of MemoryContextTraverseNext\nbecause it requires two copies of the \"process node\" logic. However,\nthat seems fine for most of the callers, and even where we are\nduplicating logic it's just a line or so, so I guess it's ok.\nHowever, MemoryContextTraverseNext seems undercommented to me, plus\nthe claim that it traverses in depth-first order is just wrong.\n\nI found some bugs in MemoryContextStatsInternal too: the old\nlogic assumed that ichild exceeding max_children was the only\nway to get into the summarization logic, but now ichild minus\nmax_children could very well be negative. Fortunately we can\njust reset ichild to zero and not worry about having any\nconnection between the first loop and the second.\n\nHere's a v5 of 0003 with those issues and some more-cosmetic ones\ncleaned up. I didn't look at 0001 or 0002.Tom, thank you for your revision of this patch!Sorry for tediousness, but isn't pre-order a variation of depth-first order [1]?Links.1. https://en.wikipedia.org/wiki/Tree_traversal#Depth-first_search------Regards,Alexander Korotkov",
"msg_date": "Thu, 7 Mar 2024 01:24:33 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> Sorry for tediousness, but isn't pre-order a variation of depth-first order\n> [1]?\n\nTo me, depth-first implies visiting children before parents.\nDo I have the terminology wrong?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 06 Mar 2024 18:49:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi, Egor!\n\nOn Thu, Mar 7, 2024 at 9:53 AM Egor Chindyaskin <kyzevan23@mail.ru> wrote:\n>\n> > 6 march 2024 г., at 19:17, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> >\n> > The revised set of remaining patches is attached.\n> >\n> > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > I did minor revision of comments and code blocks order to improve the\n> > readability.\n> >\n> > 0002 Avoid stack overflow in ShowTransactionStateRec()\n> > I didn't notice any issues, leave this piece as is.\n> >\n> > 0003 Avoid recursion in MemoryContext functions\n> > I've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\n> > which I think is a bit more intuitive. Also I fixed\n> > MemoryContextMemConsumed(), which was still trying to use the removed\n> > argument \"print\" of MemoryContextStatsInternal() function.\n> >\n> > Generally, I think this patchset fixes important stack overflow holes.\n> > It is quite straightforward, clear and the code has a good shape. I'm\n> > going to push this if no objections.\n>\n> I have tested the scripts from message [1]. After applying these patches and Tom Lane’s patch from message [2], all of the above mentioned functions no longer caused the server to crash. I also tried increasing the values in the presented scripts, which also did not lead to server crashes. Thank you!\n> Also, I would like to clarify something. Will fixes from message [3] and others be backported to all other branches, not just the master branch? As far as I remember, Tom Lane made corrections to all branches. For example [4].\n>\n> Links:\n> 1. https://www.postgresql.org/message-id/343ff14f-3060-4f88-9cc6-efdb390185df%40mail.ru\n> 2. https://www.postgresql.org/message-id/386032.1709765547%40sss.pgh.pa.us\n> 3. https://www.postgresql.org/message-id/CAPpHfduZqAjF%2B7rDRP-RGNHjOXy7nvFROQ0MGS436f8FPY5DpQ%40mail.gmail.com\n> 4. 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e07ebd4b\n\nThank you for your feedback!\n\nInitially I didn't intend to backpatch any of these. But on second\nthought with the references you provided, I think we should backpatch\nsimple check_stack_depth() checks from d57b7cc333 to all supported\nbranches, but apply refactoring of memory contextes and transaction\ncommit/abort just to master. Opinions?\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Thu, 7 Mar 2024 11:07:34 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 1:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > Sorry for tediousness, but isn't pre-order a variation of depth-first order\n> > [1]?\n>\n> To me, depth-first implies visiting children before parents.\n> Do I have the terminology wrong?\n\nAccording to Wikipedia, depth-first is a general term describing the\ntree traversal algorithm, which goes as deep as possible in one branch\nbefore visiting other branches. The order of between parents and\nchildren, and between siblings specifies the variation of depth-first\nsearch, and pre-order is one of them. But \"pre-order\" is the most\naccurate term for MemoryContextTraverseNext() anyway.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Fri, 8 Mar 2024 12:56:57 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Thu, Mar 7, 2024 at 11:07 AM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Thu, Mar 7, 2024 at 9:53 AM Egor Chindyaskin <kyzevan23@mail.ru> wrote:\n> >\n> > > 6 march 2024 г., at 19:17, Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > >\n> > > The revised set of remaining patches is attached.\n> > >\n> > > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > > I did minor revision of comments and code blocks order to improve the\n> > > readability.\n> > >\n> > > 0002 Avoid stack overflow in ShowTransactionStateRec()\n> > > I didn't notice any issues, leave this piece as is.\n> > >\n> > > 0003 Avoid recursion in MemoryContext functions\n> > > I've renamed MemoryContextTraverse() => MemoryContextTraverseNext(),\n> > > which I think is a bit more intuitive. Also I fixed\n> > > MemoryContextMemConsumed(), which was still trying to use the removed\n> > > argument \"print\" of MemoryContextStatsInternal() function.\n> > >\n> > > Generally, I think this patchset fixes important stack overflow holes.\n> > > It is quite straightforward, clear and the code has a good shape. I'm\n> > > going to push this if no objections.\n> >\n> > I have tested the scripts from message [1]. After applying these patches and Tom Lane’s patch from message [2], all of the above mentioned functions no longer caused the server to crash. I also tried increasing the values in the presented scripts, which also did not lead to server crashes. Thank you!\n> > Also, I would like to clarify something. Will fixes from message [3] and others be backported to all other branches, not just the master branch? As far as I remember, Tom Lane made corrections to all branches. For example [4].\n> >\n> > Links:\n> > 1. https://www.postgresql.org/message-id/343ff14f-3060-4f88-9cc6-efdb390185df%40mail.ru\n> > 2. https://www.postgresql.org/message-id/386032.1709765547%40sss.pgh.pa.us\n> > 3. 
https://www.postgresql.org/message-id/CAPpHfduZqAjF%2B7rDRP-RGNHjOXy7nvFROQ0MGS436f8FPY5DpQ%40mail.gmail.com\n> > 4. https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=e07ebd4b\n>\n> Thank you for your feedback!\n>\n> Initially I didn't intend to backpatch any of these. But on second\n> thought with the references you provided, I think we should backpatch\n> simple check_stack_depth() checks from d57b7cc333 to all supported\n> branches, but apply refactoring of memory contextes and transaction\n> commit/abort just to master. Opinions?\n\nI've just backpatched check_stack_depth() checks to all supported branches.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Mon, 11 Mar 2024 04:24:57 +0200",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi,\n\nOn 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> I did minor revision of comments and code blocks order to improve the\n> readability.\n\nAfter sending\nhttps://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\nI looked some more at important areas where changes didn't have code\ncoverage. One thing I noticed was that the \"non-internal\" part of\nAbortCurrentTransaction() is uncovered:\nhttps://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n\nWhich made me try to understand fefd9a3fed2. I'm a bit confused about why\nsome parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\nand others are in the *Internal functions.\n\nI understand that fefd9a3fed2 needed to remove the recursion in\nCommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\nwhy that means having some code in in the non-internal and some in the\ninternal functions? Wouldn't it be easier to just have all the state handling\ncode in the Internal() function and just break after the\nCleanupSubTransaction() calls?\n\n\nThat's of course largely unrelated to the coverage aspects. I just got\ncurious.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 15 Apr 2024 15:48:34 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 1:48 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > I did minor revision of comments and code blocks order to improve the\n> > readability.\n>\n> After sending\n> https://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n> I looked some more at important areas where changes didn't have code\n> coverage. One thing I noticed was that the \"non-internal\" part of\n> AbortCurrentTransaction() is uncovered:\n> https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n>\n> Which made me try to understand fefd9a3fed2. I'm a bit confused about why\n> some parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\n> and others are in the *Internal functions.\n>\n> I understand that fefd9a3fed2 needed to remove the recursion in\n> CommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\n> why that means having some code in in the non-internal and some in the\n> internal functions? Wouldn't it be easier to just have all the state handling\n> code in the Internal() function and just break after the\n> CleanupSubTransaction() calls?\n\nI'm not sure I correctly get what you mean. Do you think the attached\npatch matches the direction you're pointing? The patch itself is not\nfinal, it requires cleanup and comments revision, just to check the\ndirection.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Tue, 16 Apr 2024 15:45:42 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-16 15:45:42 +0300, Alexander Korotkov wrote:\n> On Tue, Apr 16, 2024 at 1:48 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> > > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > > I did minor revision of comments and code blocks order to improve the\n> > > readability.\n> >\n> > After sending\n> > https://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n> > I looked some more at important areas where changes didn't have code\n> > coverage. One thing I noticed was that the \"non-internal\" part of\n> > AbortCurrentTransaction() is uncovered:\n> > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n> >\n> > Which made me try to understand fefd9a3fed2. I'm a bit confused about why\n> > some parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\n> > and others are in the *Internal functions.\n> >\n> > I understand that fefd9a3fed2 needed to remove the recursion in\n> > CommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\n> > why that means having some code in in the non-internal and some in the\n> > internal functions? Wouldn't it be easier to just have all the state handling\n> > code in the Internal() function and just break after the\n> > CleanupSubTransaction() calls?\n> \n> I'm not sure I correctly get what you mean. Do you think the attached\n> patch matches the direction you're pointing? The patch itself is not\n> final, it requires cleanup and comments revision, just to check the\n> direction.\n\nSomething like that, yea. The split does seem less confusing that way to me,\nbut also not 100% certain.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 16 Apr 2024 08:35:01 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2024-04-16 15:45:42 +0300, Alexander Korotkov wrote:\n> > On Tue, Apr 16, 2024 at 1:48 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> > > > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > > > I did minor revision of comments and code blocks order to improve the\n> > > > readability.\n> > >\n> > > After sending\n> > > https://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n> > > I looked some more at important areas where changes didn't have code\n> > > coverage. One thing I noticed was that the \"non-internal\" part of\n> > > AbortCurrentTransaction() is uncovered:\n> > > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n> > >\n> > > Which made me try to understand fefd9a3fed2. I'm a bit confused about why\n> > > some parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\n> > > and others are in the *Internal functions.\n> > >\n> > > I understand that fefd9a3fed2 needed to remove the recursion in\n> > > CommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\n> > > why that means having some code in in the non-internal and some in the\n> > > internal functions? Wouldn't it be easier to just have all the state handling\n> > > code in the Internal() function and just break after the\n> > > CleanupSubTransaction() calls?\n> >\n> > I'm not sure I correctly get what you mean. Do you think the attached\n> > patch matches the direction you're pointing? The patch itself is not\n> > final, it requires cleanup and comments revision, just to check the\n> > direction.\n>\n> Something like that, yea. The split does seem less confusing that way to me,\n> but also not 100% certain.\n\nThank you for your feedback. 
I'm going to go ahead and polish this patch.\n\n------\nRegards,\nAlexander Korotkov\n\n\n",
"msg_date": "Tue, 16 Apr 2024 19:42:51 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Tue, Apr 16, 2024 at 7:42 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n>\n> On Tue, Apr 16, 2024 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2024-04-16 15:45:42 +0300, Alexander Korotkov wrote:\n> > > On Tue, Apr 16, 2024 at 1:48 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > On 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> > > > > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > > > > I did minor revision of comments and code blocks order to improve the\n> > > > > readability.\n> > > >\n> > > > After sending\n> > > > https://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n> > > > I looked some more at important areas where changes didn't have code\n> > > > coverage. One thing I noticed was that the \"non-internal\" part of\n> > > > AbortCurrentTransaction() is uncovered:\n> > > > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n> > > >\n> > > > Which made me try to understand fefd9a3fed2. I'm a bit confused about why\n> > > > some parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\n> > > > and others are in the *Internal functions.\n> > > >\n> > > > I understand that fefd9a3fed2 needed to remove the recursion in\n> > > > CommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\n> > > > why that means having some code in in the non-internal and some in the\n> > > > internal functions? Wouldn't it be easier to just have all the state handling\n> > > > code in the Internal() function and just break after the\n> > > > CleanupSubTransaction() calls?\n> > >\n> > > I'm not sure I correctly get what you mean. Do you think the attached\n> > > patch matches the direction you're pointing? The patch itself is not\n> > > final, it requires cleanup and comments revision, just to check the\n> > > direction.\n> >\n> > Something like that, yea. 
The split does seem less confusing that way to me,\n> > but also not 100% certain.\n>\n> Thank you for your feedback. I'm going to go ahead and polish this patch.\n\nI've invested more time into polishing this. I intend to push\nit. Could you please take a look first?\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 17 Apr 2024 14:37:24 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "On Wed, Apr 17, 2024 at 2:37 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> On Tue, Apr 16, 2024 at 7:42 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > On Tue, Apr 16, 2024 at 6:35 PM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2024-04-16 15:45:42 +0300, Alexander Korotkov wrote:\n> > > > On Tue, Apr 16, 2024 at 1:48 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > On 2024-03-06 14:17:23 +0200, Alexander Korotkov wrote:\n> > > > > > 0001 Turn tail recursion into iteration in CommitTransactionCommand()\n> > > > > > I did minor revision of comments and code blocks order to improve the\n> > > > > > readability.\n> > > > >\n> > > > > After sending\n> > > > > https://www.postgresql.org/message-id/20240414223305.m3i5eju6zylabvln%40awork3.anarazel.de\n> > > > > I looked some more at important areas where changes didn't have code\n> > > > > coverage. One thing I noticed was that the \"non-internal\" part of\n> > > > > AbortCurrentTransaction() is uncovered:\n> > > > > https://anarazel.de/postgres/cov/16-vs-HEAD-2024-04-14/src/backend/access/transam/xact.c.gcov.html#L3403\n> > > > >\n> > > > > Which made me try to understand fefd9a3fed2. I'm a bit confused about why\n> > > > > some parts are handled in CommitCurrentTransaction()/AbortCurrentTransaction()\n> > > > > and others are in the *Internal functions.\n> > > > >\n> > > > > I understand that fefd9a3fed2 needed to remove the recursion in\n> > > > > CommitTransactionCommand()/AbortCurrentTransaction(). But I don't understand\n> > > > > why that means having some code in in the non-internal and some in the\n> > > > > internal functions? Wouldn't it be easier to just have all the state handling\n> > > > > code in the Internal() function and just break after the\n> > > > > CleanupSubTransaction() calls?\n> > > >\n> > > > I'm not sure I correctly get what you mean. Do you think the attached\n> > > > patch matches the direction you're pointing? 
The patch itself is not\n> > > > final, it requires cleanup and comments revision, just to check the\n> > > > direction.\n> > >\n> > > Something like that, yea. The split does seem less confusing that way to me,\n> > > but also not 100% certain.\n> >\n> > Thank you for your feedback. I'm going to go ahead and polish this patch.\n>\n> I've invested more time into polishing this. I'm intended to push\n> this. Could you, please, take a look before?\n\nJust after sending this I spotted a typo s/untill/until/. The updated\nversion is attached.\n\n------\nRegards,\nAlexander Korotkov",
"msg_date": "Wed, 17 Apr 2024 14:39:14 +0300",
"msg_from": "Alexander Korotkov <aekorotkov@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
},
{
"msg_contents": "Hi,\n\nOn 2024-04-17 14:39:14 +0300, Alexander Korotkov wrote:\n> On Wed, Apr 17, 2024 at 2:37 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > I've invested more time into polishing this. I'm intended to push\n> > this. Could you, please, take a look before?\n> \n> Just after sending this I spotted a typo s/untill/until/. The updated\n> version is attached.\n\nNice, I see you moved the code back to \"where it was\", the diff to 16 got\nsmaller this way.\n\n\n> +\t/*\n> +\t * Repeatedly call CommitTransactionCommandInternal() until all the work\n> +\t * is done.\n> +\t */\n> +\twhile (!CommitTransactionCommandInternal());\n\nPersonally I'd use\n{\n}\ninstead of just ; here. The above scans weirdly for me. But it's also not\nimportant.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 17 Apr 2024 10:35:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Stack overflow issue"
}
] |
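The fix discussed in the thread above — fefd9a3fed2's replacement of tail recursion in CommitTransactionCommand()/AbortCurrentTransaction() with a loop, `while (!CommitTransactionCommandInternal());` — follows a general pattern: turn "do one unit of work, then recurse" into "loop until the worker reports it is done". The sketch below models only that pattern; every name in it is an illustrative stand-in, not PostgreSQL's actual code.

```python
# Sketch of the tail-recursion-to-iteration rewrite discussed above.
# A recursive "finish one level, then recurse into the parent" routine
# is rewritten as a loop that repeats until the worker signals that all
# the work is done.  All names here are hypothetical.

def make_stack(depth):
    # A chain of nested "subtransactions", innermost first.
    return list(range(depth))

def commit_recursive(stack):
    # Original shape: tail recursion, one native stack frame per level.
    # Not called on deep chains here -- it would overflow the stack.
    if not stack:
        return
    stack.pop()
    commit_recursive(stack)

def commit_internal(stack):
    # One unit of work; returns True when everything is done, mirroring
    # the contract of CommitTransactionCommandInternal().
    if not stack:
        return True
    stack.pop()
    return False

def commit_iterative(stack):
    # Rewritten shape: while (!CommitTransactionCommandInternal());
    while not commit_internal(stack):
        pass

deep = make_stack(200_000)   # far beyond any recursion limit
commit_iterative(deep)       # handled with constant native stack
assert deep == []
```

The payoff is that unwinding a chain of 200,000 nested levels needs constant native stack, where the tail-recursive form would overflow.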
[
{
"msg_contents": "Hi hackers,\n\nI've found a duplicate \"a a\" in func.sgml and fixed it.\nPatch is attached.\n\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Wed, 24 Aug 2022 19:44:04 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Fix typo in func.sgml"
},
{
"msg_contents": "On Wed, 24 Aug 2022 at 22:44, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote:\n> I've found a duplicate \"a a\" in func.sgml and fixed it.\n> Patch is attached.\n\nThanks. Pushed.\n\nDavid\n\n\n",
"msg_date": "Wed, 24 Aug 2022 23:47:17 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix typo in func.sgml"
},
{
"msg_contents": "On 2022-08-24 20:47, David Rowley wrote:\n> On Wed, 24 Aug 2022 at 22:44, Shinya Kato \n> <Shinya11.Kato@oss.nttdata.com> wrote:\n>> I've found a duplicate \"a a\" in func.sgml and fixed it.\n>> Patch is attached.\n> \n> Thanks. Pushed.\n> \n> David\nThanks for pushing!\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Wed, 24 Aug 2022 20:50:08 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix typo in func.sgml"
}
] |
[
{
"msg_contents": "I'm trying to build Postgres using the Nix language and the Nix package\nmanager on macOS (see [1]). After some work I was able to build, and even\nrun Postgres. But `make check` failed with the error\n\npg_regress: could not exec \"sh\": No such file or directory\n\nThe reason is that pg_regress uses execl() function to execute the shell\nrecorded in its shellprog variable, whose value is provided using the SHELL\nvariable via the makefile. Specifically, this error is because execl()\nfunction expects the full path of the executable as the first parameter,\nand it does _not_ perform a lookup in $PATH.\n\nUsing execl() in pg_regress has worked in the past, because the default\nvalue of SHELL used by GNUmake (and I presume other make implementations)\nis /bin/sh, and /bin/sh expected to be present on any Unix-like system.\n\nBut in Nixpkgs (the default and the central package repository of Nix/NixOS\ndistro), they have chosen to patch GNU make, and turn the default value of\nSHELL from '/bin/sh' to just 'sh'; see their patch [2]. They did this\nbecause Nixpkgs consider any files outside the Nix Store (the path\n/nix/store/, by default) to be \"impure\". They want the packagers to use\n$PATH (consisting solely of paths that begin with /nix/store/...), to\nlookup their binaries and other files.\n\nSo when pg_regress tries to run a program (the postmaster, in this case),\nthe execl() function complains that it could not find 'sh', since there's\nno file ./sh in the directory where pg_regress is being run.\n\nPlease see attached the one-letter patch that fixes this problem. I have\nchosen to replace the execl() call with execlp(), which performs a lookup\nin $PATH, and finds the 'sh' to use for running the postmaster. 
This patch\ndoes _not_ cause 'make check' or any other failures when Postgres is built\nwith non-Nix build tools available on macOS.\n\nThere is one other use of execl(), in pg_ctl.c, but that is safe from the\nbehaviour introduced by Nixpkgs, since that call site uses the absolute\npath /bin/sh, and hence there's no ambiguity in where to look for the\nexecutable.\n\nThere are no previous uses of execlp() in Postgres, which made me rethink\nthis patch. But I think it's safe to use execlp() since it's part of POSIX,\nand it's also supported by Windows (see [3]; they say the function name is\n\"deprecated\" but function is \"supported\" in the same paragraph!!).\n\nThere's one mention of execl in src/pl/plperl/ppport.h, and since it's a\ngenerated file, I believe now execlp also needs to be added to that list.\nBut I'm not sure how to generate that file, or if it really needs to be\ngenerated and included in this patch; maybe the file is re-generated during\na release process. Need advice on that.\n\nGNU make's docs clearly explain (see [4]) the special handling of variable\nSHELL, and it seems impossible to pass this variable from an env variable\ndown to the GNUmakefile of interest. The only alternative I see for us\nbeing able to pass a custom value via SHELL, is to detect and declare the\nSHELL variable in one of our higher-level files; and I don't think that'd\nbe a good idea.\n\nWe could propose to Nixpkgs community that they stop patching make, and\nleave the default SHELL value alone. But I see very low likelihood of them\naccepting our arguments, or changing their ways.\n\nIt took many days of debugging, troubleshooting etc, to get to this\nroot-cause. I first tried to coax autoconf, make, etc. to pass my custom\nSHELL through to pg_regress' makefile. Changing CONFIG_SHELL, or SHELL does\nnot have any impact. 
Then I had to read the Nixpkgs code, and work out the\narchaic ways the packages are defined, and after much code wrangling I was\nable to find out that _they_ changed the default value of SHELL by patching\nthe make sources.\n\nThe Nix language is not so bad, but the way it's used to write code in the\nNix community leaves a lot to be desired; ad-hoc environment variable\nnaming, polluting the built environment with all kinds of variables, almost\nnon-existent comments, no standards on indentation, etc. These reasons made\nme decide to use the plain Nix language as much as possible, and not rely\non Nixpkgs, whenever I can avoid it.\n\nThe Nixpkgs and NixOS distro includes all the supported versions of\nPostgres, so one would assume they would've also encountered, and solved,\nthis problem. But they didn't. My best guess as to why, is, I believe they\nnever bothered to run `make check` on their built binaries.\n\n[1]: https://github.com/DrPostgres/HaathiFarm/blob/master/default.nix\n[2]:\nhttps://github.com/NixOS/nixpkgs/blob/release-22.05/pkgs/development/tools/build-managers/gnumake/0001-No-impure-bin-sh.patch\n[3]:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/execlp?view=msvc-170\n[4]:\nhttps://www.gnu.org/software/make/manual/html_node/Choosing-the-Shell.html\n\nBest regards,\nGurjeet\nhttp://Gurje.et",
"msg_date": "Wed, 24 Aug 2022 14:59:12 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": true,
"msg_subject": "pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "Gurjeet Singh <gurjeet@singh.im> writes:\n> Please see attached the one-letter patch that fixes this problem. I have\n> chosen to replace the execl() call with execlp(), which performs a lookup\n> in $PATH, and finds the 'sh' to use for running the postmaster.\n\nI can't say that I think this is a great fix. It creates security\nquestions that did not exist before, even without the point you\nmake about Windows considering execlp deprecated.\n\nGiven the lack of complaints about how pg_ctl works, I'd be inclined\nto follow its lead and just hard-wire \"/bin/sh\", removing the whole\nSHELLPROG/shellprog dance. I have not heard of anyone using the\ntheoretical ability to compile pg_regress with some other value.\n\n> The Nixpkgs and NixOS distro includes all the supported versions of\n> Postgres, so one would assume they would've also encountered, and solved,\n> this problem. But they didn't. My best guess as to why, is, I believe they\n> never bothered to run `make check` on their built binaries.\n\nTBH, it's not clear to me that that project is competent enough to\nbe something we should take into account. But in any case, I'd\nrather see us using fewer ways to do this, not more.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 22:14:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "I wrote:\n> Given the lack of complaints about how pg_ctl works, I'd be inclined\n> to follow its lead and just hard-wire \"/bin/sh\", removing the whole\n> SHELLPROG/shellprog dance. I have not heard of anyone using the\n> theoretical ability to compile pg_regress with some other value.\n\ngit blame blames that whole mechanism on me: 60cfe25e68d. It looks\nlike the reason I did it like that is that I was replacing use of\nsystem(3) with execl(), and system(3) is defined thus by POSIX:\n\n\texecl(<shell path>, \"sh\", \"-c\", command, (char *)0);\n\n\twhere <shell path> is an unspecified pathname for the sh utility.\n\nUsing SHELL for the \"unspecified path\" is already a bit of a leap\nof faith, since users are allowed to make that point at a non-Bourne\nshell. I don't see any strong reason to preserve that behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 24 Aug 2022 22:30:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "On Wed, Aug 24, 2022 at 10:31 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> git blame blames that whole mechanism on me: 60cfe25e68d. It looks\n> like the reason I did it like that is that I was replacing use of\n> system(3) with execl(), and system(3) is defined thus by POSIX:\n>\n> execl(<shell path>, \"sh\", \"-c\", command, (char *)0);\n>\n> where <shell path> is an unspecified pathname for the sh utility.\n>\n> Using SHELL for the \"unspecified path\" is already a bit of a leap\n> of faith, since users are allowed to make that point at a non-Bourne\n> shell. I don't see any strong reason to preserve that behavior.\n\nIt seems weird that you use any arbitrary shell to run 'sh', but I\nguess the point is that your shell command, whatever it is, is\nsupposed to be a full pathname, and then it can do pathname resolution\nto figure out where you 'sh' executable is. So that makes me think\nthat the problem Gurjeet is reporting is an issue with Nix rather than\nan issue with PostgreSQL.\n\nBut what we've got is:\n\n[rhaas pgsql]$ git grep execl\\(\nsrc/bin/pg_ctl/pg_ctl.c: (void) execl(\"/bin/sh\", \"/bin/sh\", \"-c\", cmd,\n(char *) NULL);\nsrc/test/regress/pg_regress.c: execl(shellprog, shellprog, \"-c\",\ncmdline2, (char *) NULL);\n\nAnd surely that's stupid. The whole point here has to be that if you\nwant to run something called 'sh' but don't know where it is, you need\nto execute a shell at a known pathname to figure it out for you.\n\nWe could do as you propose and I don't think we would be worse off\nthan we are today. But I'm confused why the correct formulation\nwouldn't be exactly what POSIX specifies, namely execl(shellprog,\n\"sh\", \"-c\", ...). That way, if somebody has a system where they do set\n$SHELL properly but do not have /bin/sh, things would still work.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 09:50:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> But what we've got is:\n\n> [rhaas pgsql]$ git grep execl\\(\n> src/bin/pg_ctl/pg_ctl.c: (void) execl(\"/bin/sh\", \"/bin/sh\", \"-c\", cmd,\n> (char *) NULL);\n> src/test/regress/pg_regress.c: execl(shellprog, shellprog, \"-c\",\n> cmdline2, (char *) NULL);\n\nRight. I wouldn't really feel a need to change anything, except\nthat we have this weird inconsistency between the way pg_ctl does\nit and the way pg_regress does it. I think we should settle on\njust one way.\n\n> We could do as you propose and I don't think we would be worse off\n> than we are today. But I'm confused why the correct formulation\n> wouldn't be exactly what POSIX specifies, namely execl(shellprog,\n> \"sh\", \"-c\", ...). That way, if somebody has a system where they do set\n> $SHELL properly but do not have /bin/sh, things would still work.\n\nMy point is that that *isn't* what POSIX specifies. They say in so\nmany words that the path actually used by system(3) is unspecified.\nThey do NOT say that it's the value of $SHELL, and given that you're\nallowed to set $SHELL to a non-POSIX-compatible shell, using that\nis really wrong. We've gotten away with it so far because we\nresolve $SHELL at build time not run time, but it's still shaky.\n\nInterestingly, if you look at concrete man pages, you tend to find\nsomething else. Linux says\n\n The system() library function uses fork(2) to create a child process\n that executes the shell command specified in command using execl(3) as\n follows:\n execl(\"/bin/sh\", \"sh\", \"-c\", command, (char *) 0);\n\nMy BSD machines say \"the command is handed to sh(1)\", without committing\nto just how that's found ... but guess what, \"which sh\" finds /bin/sh.\n\nIn any case, I can't find any system(3) that relies on $SHELL,\nso my translation wasn't correct according to either the letter\nof POSIX or common practice. 
It's supposed to be more or less\na hard-wired path, they just don't want to commit to which path.\n\nMoreover, leaving aside the question of whether pg_regress'\ncurrent behavior is actually bug-compatible with system(3),\nwhat is the argument that it needs to be? We have at this\npoint sufficient experience with pg_ctl's use of /bin/sh\nto be pretty confident that that works everywhere. So let's\nstandardize on the simpler way, not the more complex way.\n\n(It looks like pg_ctl has used /bin/sh since 6bcce25801c3f\nof Oct 2015.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:13:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
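Tom's reading of POSIX above — that system(3) behaves as a fork followed by `execl(<shell path>, "sh", "-c", command, (char *)0)` in the child, with the shell path unspecified but in practice hard-wired — can be mirrored outside C. A minimal sketch, assuming a Unix-like system where /bin/sh exists (the same fixed path pg_ctl uses); Python stands in for the C calls purely for illustration:

```python
import os

# POSIX describes system(3) as roughly: fork, then in the child run
#   execl(<shell path>, "sh", "-c", command, (char *)0);
# Here the shell path is hard-wired to /bin/sh, as pg_ctl does.

def my_system(command):
    pid = os.fork()
    if pid == 0:
        # Child: argv[0] is "sh"; the executable is the fixed path,
        # so no $PATH lookup is involved at all.
        os.execl("/bin/sh", "sh", "-c", command)
        os._exit(127)  # reached only if execl() itself failed
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

assert my_system("true") == 0
assert my_system("exit 3") == 3
```

Note that nothing here consults $SHELL, matching the observation that no system(3) implementation appears to rely on it.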
{
"msg_contents": "On Thu, Aug 25, 2022 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My point is that that *isn't* what POSIX specifies. They say in so\n> many words that the path actually used by system(3) is unspecified.\n> They do NOT say that it's the value of $SHELL, and given that you're\n> allowed to set $SHELL to a non-POSIX-compatible shell, using that\n> is really wrong. We've gotten away with it so far because we\n> resolve $SHELL at build time not run time, but it's still shaky.\n>\n> Interestingly, if you look at concrete man pages, you tend to find\n> something else. Linux says\n>\n> The system() library function uses fork(2) to create a child process\n> that executes the shell command specified in command using execl(3) as\n> follows:\n> execl(\"/bin/sh\", \"sh\", \"-c\", command, (char *) 0);\n>\n> My BSD machines say \"the command is handed to sh(1)\", without committing\n> to just how that's found ... but guess what, \"which sh\" finds /bin/sh.\n>\n> In any case, I can't find any system(3) that relies on $SHELL,\n> so my translation wasn't correct according to either the letter\n> of POSIX or common practice. It's supposed to be more or less\n> a hard-wired path, they just don't want to commit to which path.\n>\n> Moreover, leaving aside the question of whether pg_regress'\n> current behavior is actually bug-compatible with system(3),\n> what is the argument that it needs to be? We have at this\n> point sufficient experience with pg_ctl's use of /bin/sh\n> to be pretty confident that that works everywhere. So let's\n> standardize on the simpler way, not the more complex way.\n\nI mean, I can see you're on the warpath here and I don't care enough\nto fight about it very much, but as a matter of theory, I believe that\nhard-coded pathnames suck. Giving the user a way to survive if /bin/sh\ndoesn't exist on their system or isn't the path they want to use seems\nfundamentally sensible to me. 
Now if system() doesn't do that anyhow,\nwell then there is no such mechanism in such cases, and so the benefit\nof providing one in the tiny number of other cases that we have may\nnot be there. But if you're trying to convince me that hard-coded\npaths are as a theoretical matter brilliant, I'm not buying it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:25:22 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I mean, I can see you're on the warpath here and I don't care enough\n> to fight about it very much, but as a matter of theory, I believe that\n> hard-coded pathnames suck. Giving the user a way to survive if /bin/sh\n> doesn't exist on their system or isn't the path they want to use seems\n> fundamentally sensible to me. Now if system() doesn't do that anyhow,\n> well then there is no such mechanism in such cases, and so the benefit\n> of providing one in the tiny number of other cases that we have may\n> not be there. But if you're trying to convince me that hard-coded\n> paths are as a theoretical matter brilliant, I'm not buying it.\n\nIf we were executing a program that the user needs to have some control\nover, sure, but what we have here is an implementation detail that I\ndoubt anyone cares about. The fact that we're using a shell at all is\nonly because nobody has cared to manually implement I/O redirection logic\nin these places; otherwise we'd be execl()'ing the server or psql directly.\nMaybe the best answer would be to do that, and get out of the business\nof knowing where the shell is?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 10:48:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If we were executing a program that the user needs to have some control\n> over, sure, but what we have here is an implementation detail that I\n> doubt anyone cares about. The fact that we're using a shell at all is\n> only because nobody has cared to manually implement I/O redirection logic\n> in these places; otherwise we'd be execl()'ing the server or psql directly.\n> Maybe the best answer would be to do that, and get out of the business\n> of knowing where the shell is?\n\nWell that also would not be crazy.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 11:04:35 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Aug 25, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> If we were executing a program that the user needs to have some control\n>> over, sure, but what we have here is an implementation detail that I\n>> doubt anyone cares about. The fact that we're using a shell at all is\n>> only because nobody has cared to manually implement I/O redirection logic\n>> in these places; otherwise we'd be execl()'ing the server or psql directly.\n>> Maybe the best answer would be to do that, and get out of the business\n>> of knowing where the shell is?\n\n> Well that also would not be crazy.\n\nI experimented with this, and it seems like it might not be as awful as\nwe've always assumed it would be. Attached is an incomplete POC that\nconverts pg_regress proper to doing things this way. (isolationtester\nand pg_regress_ecpg are outright broken by this patch, because they rely\non pg_regress' spawn_process and I didn't fix them yet. But you can run\nthe core regression tests to see it works.)\n\nThe Windows side of this is completely untested and may be broken; also,\nperhaps Windows has something more nearly equivalent to execvp() that we\ncould use instead of reconstructing a command line? It's annoying that\nthe patch removes all shell-quoting hazards on the Unix side but they\nare still there on the Windows side.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 25 Aug 2022 16:04:39 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 04:04:39PM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Aug 25, 2022 at 10:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> If we were executing a program that the user needs to have some control\n> >> over, sure, but what we have here is an implementation detail that I\n> >> doubt anyone cares about. The fact that we're using a shell at all is\n> >> only because nobody has cared to manually implement I/O redirection logic\n> >> in these places; otherwise we'd be execl()'ing the server or psql directly.\n> >> Maybe the best answer would be to do that, and get out of the business\n> >> of knowing where the shell is?\n\n> The Windows side of this is completely untested and may be broken; also,\n> perhaps Windows has something more nearly equivalent to execvp() that we\n> could use instead of reconstructing a command line? It's annoying that\n\nWindows has nothing like execvp(), unfortunately.\n\n> the patch removes all shell-quoting hazards on the Unix side but they\n> are still there on the Windows side.\n\nIt's feasible to take cmd.exe out of the loop. One could then eliminate\ncmd.exe quoting (the \"^\" characters). Can't avoid the rest of the quoting\n(https://docs.microsoft.com/en-us/cpp/cpp/main-function-command-line-args#parsing-c-command-line-arguments).\nBypassing cmd.exe would also make it easy to remove the ban on newlines and\ncarriage returns in arguments.\n\n\n",
"msg_date": "Sat, 1 Oct 2022 19:59:55 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_regress: lookup shellprog in $PATH"
}
] |
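The distinction this thread turns on — execl() treats its first argument as a literal path, while execlp() searches $PATH — can be probed without replacing the current process, since the "p" lookup is just a scan over $PATH entries. A sketch, assuming a Unix-like environment with sh on $PATH and /bin/sh present:

```python
import os
import shutil
import subprocess

# shutil.which() performs the same $PATH search execlp() would; this
# is what a bare "sh" needs in order to resolve.  execl("sh", ...),
# by contrast, looks for ./sh literally -- the Nixpkgs failure mode
# described in the opening message.
found = shutil.which("sh")
assert found is not None and os.path.isabs(found)

# The resolution the thread converges on: hard-wire /bin/sh, as
# pg_ctl already does.
assert os.path.exists("/bin/sh")

# Running a command through the shell, system(3)-style:
result = subprocess.run(["/bin/sh", "-c", "echo ok"],
                        capture_output=True, text=True)
assert result.returncode == 0
assert result.stdout.strip() == "ok"
```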
[
{
    "msg_contents": "Hi,\nI was looking at 0004-COPY_IGNORE_ERRORS.patch\n\n+ * Ignore constraints if IGNORE_ERRORS is enabled\n+ */\n+static void\n+safeExecConstraints(CopyFromState cstate, ResultRelInfo *resultRelInfo,\nTupleTableSlot *myslot, EState *estate)\n\nI think the existing ExecConstraints() can be expanded by\nchecking cstate->opts.ignore_errors so that it can selectively\nignore Constraint Violations.\n\nThis way you don't need safeExecConstraints().\n\nCheers",
"msg_date": "Wed, 24 Aug 2022 15:47:54 -0700",
"msg_from": "Zhihong Yu <zyu@yugabyte.com>",
"msg_from_op": true,
"msg_subject": "Re: POC PATCH: copy from ... exceptions to: (was Re: VLDB Features)"
}
] |
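The suggestion in the review above — fold the ignore_errors flag into the one existing constraint-checking routine instead of keeping a parallel safeExecConstraints() — amounts to letting the checker skip a failing row rather than abort the whole COPY. The sketch below models only that control flow; every name is hypothetical and none of this is PostgreSQL's actual COPY code.

```python
# Hypothetical model of ExecConstraints() taking an ignore_errors
# flag, per the review suggestion.  Rows are plain dicts; the
# "constraint" is a stand-in NOT NULL check on field1.

class ConstraintViolation(Exception):
    pass

def exec_constraints(row, ignore_errors=False):
    # Returns True if the row passes; with ignore_errors the row is
    # reported as skippable instead of raising.
    ok = row.get("field1") is not None
    if ok:
        return True
    if ignore_errors:
        return False          # skip this row, keep copying
    raise ConstraintViolation('null value in column "field1"')

def copy_from(rows, ignore_errors=False):
    loaded, skipped = [], 0
    for row in rows:
        if exec_constraints(row, ignore_errors):
            loaded.append(row)
        else:
            skipped += 1
    return loaded, skipped

rows = [{"field1": "a"}, {"field1": None}, {"field1": "b"}]
loaded, skipped = copy_from(rows, ignore_errors=True)
assert [r["field1"] for r in loaded] == ["a", "b"]
assert skipped == 1
```

With the flag off, the first bad row raises, matching COPY's default abort-on-error behavior.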
[
{
    "msg_contents": "\nHi hackers,\n\nWe can specify the compression method (for example, lz4 or zstd), but it \nis hard to know how effective compression is with each method. There is \nalready a way to gauge the compression effect using pg_waldump; however, \nhaving these statistics in a view makes them more accessible. I am \nproposing to add statistics that keep track of the compression effect to \nthe pg_stat_wal view.\n\nThe design I have in mind is below:\n\ncompression_saved | compression_times\n------------------+-------------------\n 38741 | 6\n\n\nThe idea is to accumulate the space saved by each compression \n(size before compression - size after compression) and to count how many \ntimes compression has happened, so that one can tell how much space is \nsaved on average.\n\nWhat do you think?\n\nRegards,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:04:50 +0900",
"msg_from": "Ken Kato <katouknl@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "pg_stat_wal: tracking the compression effect"
},
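The two counters proposed above are sufficient to derive the average per-compression saving the message describes. A sketch of the bookkeeping (column names follow the proposal; the accumulation logic and the numbers are illustrative, not PostgreSQL code):

```python
# Accumulate the proposed pg_stat_wal counters -- total bytes saved by
# compression and how many times compression ran -- then derive the
# average saving per compressed image.  Sizes below are made up.

class WalCompressionStats:
    def __init__(self):
        self.compression_saved = 0   # sum of (raw size - compressed size)
        self.compression_times = 0

    def record(self, raw_len, compressed_len):
        self.compression_saved += raw_len - compressed_len
        self.compression_times += 1

    def avg_saved(self):
        if self.compression_times == 0:
            return 0
        return self.compression_saved // self.compression_times

stats = WalCompressionStats()
for raw, comp in [(8192, 1500), (8192, 6000), (8192, 2200)]:
    stats.record(raw, comp)

assert stats.compression_times == 3
assert stats.compression_saved == 6692 + 2192 + 5992
assert stats.avg_saved() == stats.compression_saved // 3
```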
{
    "msg_contents": "At Thu, 25 Aug 2022 16:04:50 +0900, Ken Kato <katouknl@oss.nttdata.com> wrote in \n> Accumulating the values, which indicates how much space is saved by\n> each compression (size before compression - size after compression),\n> and keep track of how many times compression has happened. So that one\n> can know how much space is saved on average.\n\nHonestly, I don't think it's all that useful.\nHow about adding them to pg_waldump and pg_walinspect instead?\n\n# It further widens the output of pg_waldump, though..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 11:55:27 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_wal: tracking the compression effect"
},
{
"msg_contents": "At Fri, 26 Aug 2022 11:55:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 25 Aug 2022 16:04:50 +0900, Ken Kato <katouknl@oss.nttdata.com> wrote in \n> > Accumulating the values, which indicates how much space is saved by\n> > each compression (size before compression - size after compression),\n> > and keep track of how many times compression has happened. So that one\n> > can know how much space is saved on average.\n> \n> Honestly, I don't think its useful much.\n> How about adding them to pg_waldump and pg_walinspect instead?\n> \n> # It further widens the output of pg_waldump, though..\n\nSorry, that was apparently too short.\n\nI know you already see that in per-record output of pg_waldump, but\nmaybe we need the summary of saved bytes in \"pg_waldump -b -z\" output\nand the corresponding output of pg_walinspect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:09:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_wal: tracking the compression effect"
},
{
"msg_contents": "\n\n> On 25 Aug 2022, at 12:04, Ken Kato <katouknl@oss.nttdata.com> wrote:\n> \n> What do you think?\n\nI think users will need to choose between Lz4 and Zstd. So they need to know tradeoff - compression ratio vs cpu time spend per page(or any other segment).\n\nI know that Zstd must be kind of \"better\", but doubt it have enough runway on 1 block to show off. If only we could persist compression context between many pages...\nCompression ratio may be different on different workloads, so system view or something similar could be of use.\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 26 Aug 2022 10:11:00 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_wal: tracking the compression effect"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 8:39 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 26 Aug 2022 11:55:27 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Thu, 25 Aug 2022 16:04:50 +0900, Ken Kato <katouknl@oss.nttdata.com> wrote in\n> > > Accumulating the values, which indicates how much space is saved by\n> > > each compression (size before compression - size after compression),\n> > > and keep track of how many times compression has happened. So that one\n> > > can know how much space is saved on average.\n> >\n> > Honestly, I don't think its useful much.\n> > How about adding them to pg_waldump and pg_walinspect instead?\n> >\n> > # It further widens the output of pg_waldump, though..\n>\n> Sorry, that was apparently too short.\n>\n> I know you already see that in per-record output of pg_waldump, but\n> maybe we need the summary of saved bytes in \"pg_waldump -b -z\" output\n> and the corresponding output of pg_walinspect.\n\n+1 for adding compression stats such as type and saved bytes to\npg_waldump and pg_walinspect given that the WAL records already have\nthe saved bytes info. Collecting them in the server via pg_stat_wal\nwill require some extra effort, for instance, every WAL record insert\nrequires that code to be executed. When users want to analyze the\ncompression efforts they can either use pg_walinspect or pg_waldump\nand change the compression type if required.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n",
"msg_date": "Sat, 27 Aug 2022 13:18:21 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_wal: tracking the compression effect"
},
{
"msg_contents": "On 2022-08-27 16:48, Bharath Rupireddy wrote:\n> On Fri, Aug 26, 2022 at 8:39 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> \n>> At Fri, 26 Aug 2022 11:55:27 +0900 (JST), Kyotaro Horiguchi \n>> <horikyota.ntt@gmail.com> wrote in\n>> > At Thu, 25 Aug 2022 16:04:50 +0900, Ken Kato <katouknl@oss.nttdata.com> wrote in\n>> > > Accumulating the values, which indicates how much space is saved by\n>> > > each compression (size before compression - size after compression),\n>> > > and keep track of how many times compression has happened. So that one\n>> > > can know how much space is saved on average.\n>> >\n>> > Honestly, I don't think its useful much.\n>> > How about adding them to pg_waldump and pg_walinspect instead?\n>> >\n>> > # It further widens the output of pg_waldump, though..\n>> \n>> Sorry, that was apparently too short.\n>> \n>> I know you already see that in per-record output of pg_waldump, but\n>> maybe we need the summary of saved bytes in \"pg_waldump -b -z\" output\n>> and the corresponding output of pg_walinspect.\n> \n> +1 for adding compression stats such as type and saved bytes to\n> pg_waldump and pg_walinspect given that the WAL records already have\n> the saved bytes info. Collecting them in the server via pg_stat_wal\n> will require some extra effort, for instance, every WAL record insert\n> requires that code to be executed. When users want to analyze the\n> compression efforts they can either use pg_walinspect or pg_waldump\n> and change the compression type if required.\n\nThank you for all the comments!\n\nI will go with adding the compression stats in pg_waldump and \npg_walinspect.\n\nRegards,\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:19:28 +0900",
"msg_from": "Ken Kato <katouknl@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_stat_wal: tracking the compression effect"
}
] |
[
{
"msg_contents": "Hello,\n\nInspired by the recent discussions[1][2] around sort improvements, I took a look around the code and noticed the use of a somewhat naive version of insertion sort within the broader quicksort code.\n\nThe current implementation (see sort_template.h) is practically the textbook version of insertion sort:\n\nfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP; pm += ST_POINTER_STEP)\n for (pl = pm; pl > a && DO_COMPARE(pl - ST_POINTER_STEP, pl) > 0; pl -= ST_POINTER_STEP)\n DO_SWAP(pl, pl - ST_POINTER_STEP);\n\nI propose to rather use the slightly more efficient variant of insertion sort where only a single assignment instead of a fully-fledged swap is performed in the inner loop:\n\nfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP; pm += ST_POINTER_STEP) {\n DO_COPY(pm_temp, pm); /* pm_temp <- copy of pm */\n \n pl = pm - ST_POINTER_STEP;\n \n for (; pl >= a && DO_COMPARE(pl, pm_temp) > 0; pl -= ST_POINTER_STEP)\n DO_ASSIGN(pl + ST_POINTER_STEP, pl); /* pl + 1 <- pl */\n \n DO_COPY(pl + ST_POINTER_STEP, pm_temp); /* pl + 1 <- copy of pm_temp */\n}\n\nDO_ASSIGN and DO_COPY macros would have to be declared analogue to DO_SWAP via the template.\n\nThere is obviously a trade-off involved here as O(1) extra memory is required to hold the temporary variable and DO_COPY might be expensive if the sort element is large. In case of single datum sort with trivial data types this would not be a big issue. For large tuples on the other hand it could mean a significant overhead that might not be made up for by the improved inner loop. One might want to limit this algorithm to a certain maximum tuple size and use the original insertion sort version for larger elements (this could be decided at compile-time via sort_template.h).\n\nAnyways, there might be some low hanging fruit here. If it turns out to be significantly faster one might even consider increasing the insertion sort threshold from < 7 to < 10 sort elements. 
But that is a whole other discussion for another day.\n\nHas anyone tested such an approach before? Please let me know your thoughts.\n\nCheers,\n\nBenjamin\n\n[1] https://www.postgresql.org/message-id/flat/CAFBsxsHanJTsX9DNJppXJxwg3bU%2BYQ6pnmSfPM0uvYUaFdwZdQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/flat/CAApHDvoTTtoQYfp3d0kTPF6y1pjexgLwquzKmjzvjC9NCw4RGw%40mail.gmail.com\n\n-- \n\nBenjamin Coutu\nhttp://www.zeyos.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:55:46 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Insertion Sort Improvements"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 5:55 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n>\n> Hello,\n>\n> Inspired by the recent discussions[1][2] around sort improvements, I took a look around the code and noticed the use of a somewhat naive version of insertion sort within the broader quicksort code.\n>\n> The current implementation (see sort_template.h) is practically the textbook version of insertion sort:\n\nRight. I think it's worth looking into. When I tested dual-pivot\nquicksort, a threshold of 18 seemed best for my inputs, so making\ninsertion sort more efficient could tilt the balance more in favor of\ndual-pivot. (It already seems slightly ahead, but as I mentioned in\nthe thread you linked, removing branches for null handling would make\nit more compelling).\n\n> DO_ASSIGN and DO_COPY macros would have to be declared analogue to DO_SWAP via the template.\n\nI don't think you need these macros, since this optimization is only\nconvenient if you know the type at compile time. See the attached,\nwhich I had laying around when I was looking at PDQuicksort. I believe\nit's similar to what you have in mind. (Ignore the part about\n\"unguarded\", it's irrelevant out of context). Notice that it avoids\nunnecessary copies, but has two calls to DO_COMPARE, which is *not*\nsmall for tuples.\n\n> Anyways, there might be some low hanging fruit here. If it turns out to be significantly faster one might even consider increasing the insertion sort threshold from < 7 to < 10 sort elements. But that is a whole other discussion for another day.\n\nI think it belongs around 10 now anyway. I also don't think you'll see\nmuch benefit with your proposal at a threshold of 7 -- I suspect it'd\nbe more enlightening to test a range of thresholds with and without\nthe patch, to see how the inflection point shifts. That worked pretty\nwell when testing dual-pivot.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 18:26:27 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "> convenient if you know the type at compile time. See the attached,\n> which I had laying around when I was looking at PDQuicksort. I believe\n> it's similar to what you have in mind.\n\nThat looks very promising.\nI also love your recent proposal of partitioning into null and non-null. I suspect that to be a clear winner.\n\n> I think it belongs around 10 now anyway.\n\nYeah, I think that change is overdue given modern hardware characteristics (even with the current implementation).\n\nAnother idea could be to run a binary insertion sort and use a much higher threshold. This could significantly cut down on comparisons (especially with presorted runs, which are quite common in real workloads).\n\nIf full binary search turned out to be an issue regarding cache locality, we could do it in smaller chunks, e.g. do a micro binary search between the current position (I) and position minus chunk size (say 6-12 or so, whatever fits 1 or 2 cachelines) whenever A[I] < A[I-1] and if we don't find the position within that chunk we continue with the previous chunk, thereby preserving cache locality.\n\nWith less comparisons we should start keeping track of swaps and use that as an efficient way to determine presortedness. We could change the insertion sort threshold to a swap threshold and do insertion sort until we hit the swap threshold. I suspect that would make the current presorted check obsolete (as binary insertion sort without or even with a few swaps should be faster than the current presorted-check).\n\nCheers, Ben\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:06:16 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 9:06 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n>\n> Another idea could be to run a binary insertion sort and use a much higher threshold. This could significantly cut down on comparisons (especially with presorted runs, which are quite common in real workloads).\n\nComparisons that must go to the full tuple are expensive enough that\nthis idea might have merit in some cases, but that would be a research\nproject.\n\n> If full binary search turned out to be an issue regarding cache locality, we could do it in smaller chunks,\n\nThe main issue with binary search is poor branch prediction. Also, if\nlarge chunks are bad for cache locality, isn't that a strike against\nusing a \"much higher threshold\"?\n\n> With less comparisons we should start keeping track of swaps and use that as an efficient way to determine presortedness. We could change the insertion sort threshold to a swap threshold and do insertion sort until we hit the swap threshold. I suspect that would make the current presorted check obsolete (as binary insertion sort without or even with a few swaps should be faster than the current presorted-check).\n\nThe thread you linked to discusses partial insertion sort as a\nreplacement for the pre-sorted check, along with benchmark results and\ngraphs IIRC. I think it's possibly worth doing, but needs more\ninvestigation to make sure the (few) regressions I saw either: 1. were\njust noise or 2. can be ameliorated. As I said in the dual pivot\nthread, this would be great for dual pivot since we could reuse\npartial insertion sort for choosing the pivots, reducing binary space.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 29 Aug 2022 12:18:05 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "Getting back to improvements for small sort runs, it might make sense to consider using in-register based sorting via sorting networks for very small runs.\n\nThis talk is specific to database sorting and illustrates how such an approach can be vectorized: https://youtu.be/HeFwVNHsDzc?list=PLSE8ODhjZXjasmrEd2_Yi1deeE360zv5O&t=1090\n\nIt looks like some of the commercial DBMSs do this very successfully. They use 4 512bit registers (AVX-512) in this example, which could technically store 4 * 4 sort-elements (given that the sorting key is 64 bit and the tuple pointer is 64 bit). I wonder whether this could also be done with just SSE (instead of AVX), which the project now readily supports thanks to your recent efforts?\n\n\n",
"msg_date": "Tue, 27 Sep 2022 11:39:31 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 4:39 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n>\n> Getting back to improvements for small sort runs, it might make sense to\nconsider using in-register based sorting via sorting networks for very\nsmall runs.\n\n> It looks like some of the commercial DBMSs do this very successfully.\n\n\"Looks like\"?\n\n> They use 4 512bit registers (AVX-512) in this example, which could\ntechnically store 4 * 4 sort-elements (given that the sorting key is 64 bit\nand the tuple pointer is 64 bit). I wonder whether this could also be done\nwith just SSE (instead of AVX), which the project now readily supports\nthanks to your recent efforts?\n\nSortTuples are currently 24 bytes, and supported vector registers are 16\nbytes, so not sure how you think that would work.\n\nMore broadly, the more invasive a change is, the more risk is involved, and\nthe more effort to test and evaluate. If you're serious about trying to\nimprove insertion sort performance, the simple idea we discussed at the\nstart of the thread is a much more modest step that has a good chance of\njustifying the time put into it. That is not to say it's easy, however,\nbecause testing is a non-trivial amount of work.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Sep 27, 2022 at 4:39 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:>> Getting back to improvements for small sort runs, it might make sense to consider using in-register based sorting via sorting networks for very small runs.> It looks like some of the commercial DBMSs do this very successfully. \"Looks like\"?> They use 4 512bit registers (AVX-512) in this example, which could technically store 4 * 4 sort-elements (given that the sorting key is 64 bit and the tuple pointer is 64 bit). 
I wonder whether this could also be done with just SSE (instead of AVX), which the project now readily supports thanks to your recent efforts?SortTuples are currently 24 bytes, and supported vector registers are 16 bytes, so not sure how you think that would work.More broadly, the more invasive a change is, the more risk is involved, and the more effort to test and evaluate. If you're serious about trying to improve insertion sort performance, the simple idea we discussed at the start of the thread is a much more modest step that has a good chance of justifying the time put into it. That is not to say it's easy, however, because testing is a non-trivial amount of work.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 28 Sep 2022 10:31:33 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "> \"Looks like\"?\n\nI cannot find the reference, but I've read a while back that a well-known company from Redwood uses it for their in-memory columnar storage. That might have just been a rumor or might have been research only - not sure. It does not really matter anyways.\n\n> SortTuples are currently 24 bytes, and supported vector registers are 16 bytes, so not sure how you think that would work.\n\nThe thought was to logically group multiple sort tuples together and then create a vectorized version of that group with just the primitive type sort key as well as a small-sized index/offset into that sort group to later swap the corresponding sort tuple referenced by that index/offset. The sorting network would allow us to do branch-less register based sorting for a particular sort run. I guess this idea is moot, given ...\n\n> More broadly, the more invasive a change is, the more risk is involved, and the more effort to test and evaluate. If you're serious about trying to improve insertion sort performance, the simple idea we discussed at the start of the thread is a much more modest step that has a good chance of justifying the time put into it. That is not to say it's easy, however, because testing is a non-trivial amount of work.\n\nI absolutely agree. Let's concentrate on improving things incrementally.\nPlease excuse me wondering given that you have contributed some of the recent vectorization stuff and seeing that you have obviously experimented a lot with the sort code, that you might already have tried something along those lines or researched the subject - it is definitely a very interesting topic.\n\nCheers, Ben\n\n\n",
"msg_date": "Wed, 28 Sep 2022 09:04:31 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "Greetings,\n\nI would like to revisit the discussion and concur with John's perspective that incremental progress through small, successive modifications is the appropriate approach to move forward. Therefore, I would like to propose two distinct ideas aimed at enhancing the functionality of insertion sort:\n\n1. Implementation of a more efficient variant (as described in the introductory part of this thread):\n\n------------ OLD ------------\n\nfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP;\n\t pm += ST_POINTER_STEP)\n\tfor (pl = pm; pl > a && DO_COMPARE(pl - ST_POINTER_STEP, pl) > 0;\n\t\t pl -= ST_POINTER_STEP)\n\t\tDO_SWAP(pl, pl - ST_POINTER_STEP);\n\n------------ NEW ------------\n\nfor (\n\tpm = a + ST_POINTER_STEP;\n\tpm < a + n * ST_POINTER_STEP;\n\tpm += ST_POINTER_STEP\n) {\n\tST_POINTER_TYPE tmp = *pm;\n\t \n\tfor (\n\t\tpl = pm - ST_POINTER_STEP;\n\t\tpl >= a && DO_COMPARE(pl, &tmp) > 0;\n\t\tpl -= ST_POINTER_STEP\n\t)\n\t\t*(pl + ST_POINTER_STEP) = *pl;\n\t\t\n\t*(pl + ST_POINTER_STEP) = tmp;\n}\n\n------------\n\n2. It appears that there is a consensus against disregarding the presorted check, despite its questionable value. In light of this, an alternative suggestion is to integrate the presorted check into the insertion sort path by utilizing an unbounded insertion sort. Only after a maximum number of swaps have occurred should we abandon the sorting process. If insertion sort is executed on the entire array without any swaps, we can simply return. 
If not, and we continue with quicksort after the swap limit has been reached, we at least have left the array in a more sorted state, which may likely be advantageous for subsequent iterations.\n\n------------ OLD ------------\n\nif (n < 7)\n{\n\tfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP;\n\t\t pm += ST_POINTER_STEP)\n\t\tfor (pl = pm; pl > a && DO_COMPARE(pl - ST_POINTER_STEP, pl) > 0;\n\t\t\t pl -= ST_POINTER_STEP)\n\t\t\tDO_SWAP(pl, pl - ST_POINTER_STEP);\n\treturn;\n}\npresorted = 1;\nfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP;\n\t pm += ST_POINTER_STEP)\n{\n\tDO_CHECK_FOR_INTERRUPTS();\n\tif (DO_COMPARE(pm - ST_POINTER_STEP, pm) > 0)\n\t{\n\t\tpresorted = 0;\n\t\tbreak;\n\t}\n}\nif (presorted)\n\treturn;\n\n------------ NEW ------------\n\n#define LIMIT_SWAPS 30 /* to be determined empirically */\n\nint swaps = 0;\n\t\nfor (pm = a + ST_POINTER_STEP; pm < a + n * ST_POINTER_STEP;\n\t pm += ST_POINTER_STEP)\n\tfor (pl = pm; pl > a && DO_COMPARE(pl - ST_POINTER_STEP, pl) > 0;\n\t\t pl -= ST_POINTER_STEP) {\n\t\tDO_SWAP(pl, pl - ST_POINTER_STEP);\n\t\t\n\t\tif (++swaps == LIMIT_SWAPS)\n\t\t\tgoto quicksort;\n\t}\n\t\nif (swaps == 0)\n\treturn;\n\t\nquicksort:\n\n------------\n\nNaturally, both modifications (with point 2 being highly speculative) are currently independent of each other, and it is also crucial to benchmark the combined variant as well as different values for LIMIT_SWAPS.\nI would greatly appreciate assistance in benchmarking these proposed changes. Your collaboration in this matter would be invaluable.\n\nCheers, Ben\n\n\n",
"msg_date": "Tue, 23 May 2023 11:10:45 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "On Tue, May 23, 2023 at 4:10 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:\n>\n> Greetings,\n>\n> I would like to revisit the discussion and concur with John's perspective\nthat incremental progress through small, successive modifications is the\nappropriate approach to move forward. Therefore, I would like to propose\ntwo distinct ideas aimed at enhancing the functionality of insertion sort:\n>\n> 1. Implementation of a more efficient variant (as described in the\nintroductory part of this thread):\n\nThat's worth trying out. It might also then be worth trying to push both\nunordered values -- the big one up / the small one down. I've seen other\nimplementations do that, but don't remember where, or what it's called.\n\n> 2. It appears that there is a consensus against disregarding the\npresorted check, despite its questionable value. In light of this, an\nalternative suggestion is to integrate the presorted check into the\ninsertion sort path by utilizing an unbounded insertion sort.\n\n\"Unbounded\" means no bounds check on the array. That's not possible in its\ncurrent form, so I think you misunderstood something.\n\n> Only after a maximum number of swaps have occurred should we abandon the\nsorting process.\n\nI only remember implementations tracking loop iterations, not swaps. You'd\nneed evidence that this is better.\n\n> If insertion sort is executed on the entire array without any swaps, we\ncan simply return. If not, and we continue with quicksort after the swap\nlimit has been reached, we at least have left the array in a more sorted\nstate, which may likely be advantageous for subsequent iterations.\n\nAn important part not mentioned yet: This might only be worth doing if the\nprevious recursion level performed no swaps during partitioning and the\ncurrent pivot candidates are ordered. That's a bit of work and might not be\nconvenient now -- it'd be trivial with dual-pivot, but I've not looked at\nthat in a while. 
(Fittingly, dual-pivot requires a higher insertion sort\nthreshold so it goes both ways.)\n\n> I would greatly appreciate assistance in benchmarking these proposed\nchanges. Your collaboration in this matter would be invaluable.\n\nI advise looking in the archives for scripts from previous benchmarks. No\nneed to reinvent the wheel -- it's enough work as it is. A key thing here\nfor #1 is that improving insertion sort requires increasing the threshold\nto show the true improvement. It's currently 7, but should really be\nsomething like 10. A script that repeats tests for, say, 7 through 18\nshould show a concave-up shape in the times. The bottom of the bowl should\nshift to higher values, and that minimum is what should be compared.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, May 23, 2023 at 4:10 PM Benjamin Coutu <ben.coutu@zeyos.com> wrote:>> Greetings,>> I would like to revisit the discussion and concur with John's perspective that incremental progress through small, successive modifications is the appropriate approach to move forward. Therefore, I would like to propose two distinct ideas aimed at enhancing the functionality of insertion sort:>> 1. Implementation of a more efficient variant (as described in the introductory part of this thread):That's worth trying out. It might also then be worth trying to push both unordered values -- the big one up / the small one down. I've seen other implementations do that, but don't remember where, or what it's called.> 2. It appears that there is a consensus against disregarding the presorted check, despite its questionable value. In light of this, an alternative suggestion is to integrate the presorted check into the insertion sort path by utilizing an unbounded insertion sort.\"Unbounded\" means no bounds check on the array. That's not possible in its current form, so I think you misunderstood something.> Only after a maximum number of swaps have occurred should we abandon the sorting process. 
I only remember implementations tracking loop iterations, not swaps. You'd need evidence that this is better.> If insertion sort is executed on the entire array without any swaps, we can simply return. If not, and we continue with quicksort after the swap limit has been reached, we at least have left the array in a more sorted state, which may likely be advantageous for subsequent iterations.An important part not mentioned yet: This might only be worth doing if the previous recursion level performed no swaps during partitioning and the current pivot candidates are ordered. That's a bit of work and might not be convenient now -- it'd be trivial with dual-pivot, but I've not looked at that in a while. (Fittingly, dual-pivot requires a higher insertion sort threshold so it goes both ways.)> I would greatly appreciate assistance in benchmarking these proposed changes. Your collaboration in this matter would be invaluable.I advise looking in the archives for scripts from previous benchmarks. No need to reinvent the wheel -- it's enough work as it is. A key thing here for #1 is that improving insertion sort requires increasing the threshold to show the true improvement. It's currently 7, but should really be something like 10. A script that repeats tests for, say, 7 through 18 should show a concave-up shape in the times. The bottom of the bowl should shift to higher values, and that minimum is what should be compared.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 24 May 2023 09:10:00 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Insertion Sort Improvements"
},
{
"msg_contents": "> That's worth trying out. It might also then be worth trying to push both unordered values -- the big one up / the small one down. I've seen other implementations do that, but don't remember where, or what it's called.\n\nIt is important that we do not do 2 compares two avoid one copy (assignment to temporary) as you did in your patch earlier in this thread, cause compares are usually pretty costly (also two compares are then inlined, bloating the binary).\nAssigning a sort tuple to a temporary translates to pretty simple assembly code, so my suggested modification should outperform. It cuts down the cost of the inner loop by ca. 40% comparing the assembly. And it avoids having to re-read memory on each comparison, as the temporary can be held in registers.\n\n> \"Unbounded\" means no bounds check on the array. That's not possible in its current form, so I think you misunderstood something.\n\nSorry for the confusion. I didn't mean unbounded in the \"array bound checking\" sense, but in the \"unrestricted number of loops\" sense.\n\n> I only remember implementations tracking loop iterations, not swaps. You'd need evidence that this is better.\n\nWell, the idea was to include the presorted check somehow. Stopping after a certain number of iterations is surely more safe than stopping after a number of swaps, but we would then implicitly also stop our presort check. We could change that though: Count loop iterations and on bailout continue with a pure presort check, but from the last position of the insertion sort -- not all over again -- by comparing against the maximum value recorded during the insertion sort. Thoughts?\n\n> An important part not mentioned yet: This might only be worth doing if the previous recursion level performed no swaps during partitioning and the current pivot candidates are ordered.\n\nAgreed.\n\n> It's currently 7, but should really be something like 10. 
A script that repeats tests for, say, 7 through 18 should show a concave-up shape in the times. The bottom of the bowl should shift to higher values, and that minimum is what should be compared.\n\nYeah, as alluded to before, it should be closer to 10 nowadays.\nIn any case it should be changed at least from 7 to 8, cause then we could at least discard the additional check for n > 7 in the quicksort code path (see /src/include/lib/sort_template.h#L322). Currently we check n < 7 and a few lines down we check for n > 7, if we check n < 8 for insertion sort then the second check becomes obsolete.\n\nBenjamin Coutu\nhttp://www.zeyos.com\n\n\n",
"msg_date": "Wed, 24 May 2023 07:59:45 +0200",
"msg_from": "Benjamin Coutu <ben.coutu@zeyos.com>",
"msg_from_op": true,
"msg_subject": "Re: Insertion Sort Improvements"
}
] |
[
{
"msg_contents": "The postgres_fdw tests contain this (as amended by patch 0001):\n\nALTER SERVER loopback_nopw OPTIONS (ADD password 'dummypw');\nERROR: invalid option \"password\"\nHINT: Valid options in this context are: service, passfile, \nchannel_binding, connect_timeout, dbname, host, hostaddr, port, options, \napplication_name, keepalives, keepalives_idle, keepalives_interval, \nkeepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert, \nsslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer, \nssl_min_protocol_version, ssl_max_protocol_version, gssencmode, \nkrbsrvname, gsslib, target_session_attrs, use_remote_estimate, \nfdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable, \nfetch_size, batch_size, async_capable, parallel_commit, keep_connections\n\nThis annoys developers who are working on libpq connection options, \nbecause any option added, removed, or changed causes this test to need \nto be updated.\n\nIt's also questionable how useful this hint is in its current form, \nconsidering how long it is and that the options are in an \nimplementation-dependent order.\n\nPossible changes:\n\n- Hide the hint from this particular test (done in the attached patches).\n\n- Keep the hint, but maybe make it sorted?\n\n- Remove all the hints like this from foreign data commands.\n\n- Don't show the hint when there are more than N valid options.\n\n- Do some kind of \"did you mean\" like we have for column names.\n\nThoughts?",
"msg_date": "Thu, 25 Aug 2022 15:42:40 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "postgres_fdw hint messages"
},
{
    "msg_contents": "On Thu, Aug 25, 2022 at 6:42 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> The postgres_fdw tests contain this (as amended by patch 0001):\n>\n> ALTER SERVER loopback_nopw OPTIONS (ADD password 'dummypw');\n> ERROR: invalid option \"password\"\n> HINT: Valid options in this context are: service, passfile,\n> channel_binding, connect_timeout, dbname, host, hostaddr, port, options,\n> application_name, keepalives, keepalives_idle, keepalives_interval,\n> keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert,\n> sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer,\n> ssl_min_protocol_version, ssl_max_protocol_version, gssencmode,\n> krbsrvname, gsslib, target_session_attrs, use_remote_estimate,\n> fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable,\n> fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n>\n> This annoys developers who are working on libpq connection options,\n> because any option added, removed, or changed causes this test to need\n> to be updated.\n>\n> It's also questionable how useful this hint is in its current form,\n> considering how long it is and that the options are in an\n> implementation-dependent order.\n\nThanks, Peter, for looking at that; this HINT message keeps growing over time.\n\nIn my opinion, we should hide the complete message in case of an invalid\noption, but try to show dependent options; for example, if someone\nspecifies \"sslcrl\" and that option requires some more options, then show\na HINT listing those options.\n\n> Possible changes:\n>\n> - Hide the hint from this particular test (done in the attached patches).\n>\n> - Keep the hint, but maybe make it sorted?\n>\n> - Remove all the hints like this from foreign data commands.\n>\n> - Don't show the hint when there are more than N valid options.\n>\n> - Do some kind of \"did you mean\" like we have for column names.\n>\n> Thoughts?\n\n-- \nIbrar Ahmed",
"msg_date": "Fri, 26 Aug 2022 00:22:37 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 9:42 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> The postgres_fdw tests contain this (as amended by patch 0001):\n>\n> ALTER SERVER loopback_nopw OPTIONS (ADD password 'dummypw');\n> ERROR: invalid option \"password\"\n> HINT: Valid options in this context are: service, passfile,\n> channel_binding, connect_timeout, dbname, host, hostaddr, port, options,\n> application_name, keepalives, keepalives_idle, keepalives_interval,\n> keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert,\n> sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer,\n> ssl_min_protocol_version, ssl_max_protocol_version, gssencmode,\n> krbsrvname, gsslib, target_session_attrs, use_remote_estimate,\n> fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable,\n> fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n>\n> This annoys developers who are working on libpq connection options,\n> because any option added, removed, or changed causes this test to need\n> to be updated.\n>\n> It's also questionable how useful this hint is in its current form,\n> considering how long it is and that the options are in an\n> implementation-dependent order.\n\nI think the place to list the legal options is in the documentation,\nnot the HINT.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:26:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Aug 25, 2022 at 9:42 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> HINT: Valid options in this context are: service, passfile,\n>> channel_binding, connect_timeout, dbname, host, hostaddr, port, options,\n>> application_name, keepalives, keepalives_idle, keepalives_interval,\n>> keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert,\n>> sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer,\n>> ssl_min_protocol_version, ssl_max_protocol_version, gssencmode,\n>> krbsrvname, gsslib, target_session_attrs, use_remote_estimate,\n>> fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable,\n>> fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n>> \n>> This annoys developers who are working on libpq connection options,\n>> because any option added, removed, or changed causes this test to need\n>> to be updated.\n>> \n>> It's also questionable how useful this hint is in its current form,\n>> considering how long it is and that the options are in an\n>> implementation-dependent order.\n\n> I think the place to list the legal options is in the documentation,\n> not the HINT.\n\nI think listing them in a hint is reasonable as long as the hint doesn't\nget longer than a line or two. This one is entirely out of hand, so\nI agree with just dropping it.\n\nNote that there is essentially identical code in dblink, file_fdw,\nand backend/foreign/foreign.c. Do we want to nuke them all? Or\nmaybe make a policy decision to suppress such HINTs when there are\nmore than ~10 matches? (The latter policy would likely eventually\nend by always suppressing everything...)\n\nPeter also mentioned the possibility of \"did you mean\" with a closest\nmatch offered. 
That seems like a reasonable idea if someone\nis motivated to create the code, which I'm not.\n\nI vote for just dropping all these hints for now, while leaving the\ndoor open for anyone who wants to write closest-match-offering code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 12:35:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On 25.08.22 15:42, Peter Eisentraut wrote:\n> It's also questionable how useful this hint is in its current form, \n> considering how long it is and that the options are in an \n> implementation-dependent order.\n> \n> Possible changes:\n\n> - Remove all the hints like this from foreign data commands.\n\nIt appears that there was a strong preference toward this solution, so \nthat's what I implemented in the updated patch set.\n\n(Considering the hints that are removed in the tests cases, I don't \nthink this loses any value. What might be useful in practice instead is \nsomething like \"the option you specified on this foreign server should \nactually be specified on a user mapping or foreign table\", but this \nwould take a fair amount of code to cover a reasonable set of cases, so \nI'll leave this as a future exercise.)",
"msg_date": "Tue, 30 Aug 2022 09:20:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 12:35:38PM -0400, Tom Lane wrote:\n> Peter also mentioned the possibility of \"did you mean\" with a closest\n> match offered. That seems like a reasonable idea if someone\n> is motivated to create the code, which I'm not.\n> \n> I vote for just dropping all these hints for now, while leaving the\n> door open for anyone who wants to write closest-match-offering code.\n\nHere is a quickly-hacked-together proof-of-concept for using Levenshtein\ndistances to determine which option to include in the hint. Would\nsomething like this suffice? If so, I will work on polishing it up a bit.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 1 Sep 2022 15:31:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Aug 26, 2022 at 12:35:38PM -0400, Tom Lane wrote:\n>> I vote for just dropping all these hints for now, while leaving the\n>> door open for anyone who wants to write closest-match-offering code.\n\n> Here is a quickly-hacked-together proof-of-concept for using Levenshtein\n> distances to determine which option to include in the hint. Would\n> something like this suffice? If so, I will work on polishing it up a bit.\n\nSeems reasonable to me, but\n\n(1) there probably needs to be some threshold of closeness, so we don't\noffer \"foobar\" when the user wrote \"huh\"\n\n(2) there are several places doing this now, and there will no doubt\nbe more later, so we need to try to abstract the logic so it can be\nshared.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 01 Sep 2022 19:08:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 07:08:49PM -0400, Tom Lane wrote:\n> (1) there probably needs to be some threshold of closeness, so we don't\n> offer \"foobar\" when the user wrote \"huh\"\n\nAgreed.\n\n> (2) there are several places doing this now, and there will no doubt\n> be more later, so we need to try to abstract the logic so it can be\n> shared.\n\nWill do.\n\nI'm also considering checking whether the user-provided string is longer\nthan MAX_LEVENSHTEIN_STRLEN so that we can avoid the ERROR from\nvarstr_levenshtein(). Or perhaps varstr_levenshtein() could indicate that\nthe string is too long without ERROR-ing (e.g., by returning -1). If the\nuser-provided string is too long, we'd just omit the hint.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 1 Sep 2022 16:31:20 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Thu, Sep 01, 2022 at 04:31:20PM -0700, Nathan Bossart wrote:\n> On Thu, Sep 01, 2022 at 07:08:49PM -0400, Tom Lane wrote:\n>> (1) there probably needs to be some threshold of closeness, so we don't\n>> offer \"foobar\" when the user wrote \"huh\"\n> \n> Agreed.\n> \n>> (2) there are several places doing this now, and there will no doubt\n>> be more later, so we need to try to abstract the logic so it can be\n>> shared.\n> \n> Will do.\n\nHere is a new patch. Two notes:\n\n* I considered whether to try to unite this new functionality with the\nexisting stuff in parse_relation.c, but the existing code looked a bit too\nspecialized.\n\n* I chose a Levenshtein distance of 5 as the threshold of closeness for the\nhint messages. This felt lenient, but it should hopefully still filter out\nsome of the more ridiculous suggestions. However, it's still little more\nthan a wild guess, so if folks think the threshold needs to be higher or\nlower, I'd readily change it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 2 Sep 2022 14:26:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 02:26:09PM -0700, Nathan Bossart wrote:\n> Here is a new patch. Two notes:\n> \n> * I considered whether to try to unite this new functionality with the\n> existing stuff in parse_relation.c, but the existing code looked a bit too\n> specialized.\n> \n> * I chose a Levenshtein distance of 5 as the threshold of closeness for the\n> hint messages. This felt lenient, but it should hopefully still filter out\n> some of the more ridiculous suggestions. However, it's still little more\n> than a wild guess, so if folks think the threshold needs to be higher or\n> lower, I'd readily change it.\n\nHmm. FWIW I would tend toward simplifying all this code and just drop\nall the hints rather than increasing the dependency to more\nlevenshtein calculations in those error code paths, which is what\nPeter E has posted.\n--\nMichael",
"msg_date": "Sat, 3 Sep 2022 10:03:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Sep 02, 2022 at 02:26:09PM -0700, Nathan Bossart wrote:\n>> * I chose a Levenshtein distance of 5 as the threshold of closeness for the\n>> hint messages. This felt lenient, but it should hopefully still filter out\n>> some of the more ridiculous suggestions. However, it's still little more\n>> than a wild guess, so if folks think the threshold needs to be higher or\n>> lower, I'd readily change it.\n\n> Hmm. FWIW I would tend toward simplifying all this code and just drop\n> all the hints rather than increasing the dependency to more\n> levenshtein calculations in those error code paths, which is what\n> Peter E has posted.\n\nPersonally I'm not a huge fan of this style of hint either. However,\npeople seem to like the ones for misspelled column names, so I'm\nbetting there will be a majority in favor of this one too.\n\nI think the distance limit of 5 is too loose though. I see that\nit accommodates examples like \"passfile\" for \"password\", which\nseems great at first glance; but it also allows fundamentally\nsilly suggestions like \"user\" for \"server\" or \"host\" for \"foo\".\nWe'd need something smarter than Levenshtein if we want to offer\n\"passfile\" for \"password\" without looking stupid on a whole lot\nof other cases --- those words seem close, but they are close\nsemantically not textually.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 02 Sep 2022 22:06:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Fri, Sep 02, 2022 at 10:06:54PM -0400, Tom Lane wrote:\n> I think the distance limit of 5 is too loose though. I see that\n> it accommodates examples like \"passfile\" for \"password\", which\n> seems great at first glance; but it also allows fundamentally\n> silly suggestions like \"user\" for \"server\" or \"host\" for \"foo\".\n> We'd need something smarter than Levenshtein if we want to offer\n> \"passfile\" for \"password\" without looking stupid on a whole lot\n> of other cases --- those words seem close, but they are close\n> semantically not textually.\n\nYeah, it's really only useful for simple misspellings, but IMO even that is\nrather handy.\n\nI noticed that the parse_relation.c stuff excludes matches where more than\nhalf the characters are different, so I added that here and lowered the\ndistance limit to 4. This seems to prevent the silly suggestions (e.g.,\n\"host\" for \"foo\") while retaining the more believable ones (e.g.,\n\"passfile\" for \"password\"), at least for the small set of examples covered\nin the tests.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 2 Sep 2022 21:30:58 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On 03.09.22 06:30, Nathan Bossart wrote:\n> On Fri, Sep 02, 2022 at 10:06:54PM -0400, Tom Lane wrote:\n>> I think the distance limit of 5 is too loose though. I see that\n>> it accommodates examples like \"passfile\" for \"password\", which\n>> seems great at first glance; but it also allows fundamentally\n>> silly suggestions like \"user\" for \"server\" or \"host\" for \"foo\".\n>> We'd need something smarter than Levenshtein if we want to offer\n>> \"passfile\" for \"password\" without looking stupid on a whole lot\n>> of other cases --- those words seem close, but they are close\n>> semantically not textually.\n> \n> Yeah, it's really only useful for simple misspellings, but IMO even that is\n> rather handy.\n> \n> I noticed that the parse_relation.c stuff excludes matches where more than\n> half the characters are different, so I added that here and lowered the\n> distance limit to 4. This seems to prevent the silly suggestions (e.g.,\n> \"host\" for \"foo\") while retaining the more believable ones (e.g.,\n> \"passfile\" for \"password\"), at least for the small set of examples covered\n> in the tests.\n\nI think this code is compact enough and the hints it produces are \nreasonable, so I think we could go with it.\n\nI notice that for column misspellings, the hint is phrased \"Perhaps you \nmeant X.\" whereas here we have \"Did you mean X?\". Let's make that uniform.\n\n\n\n",
"msg_date": "Tue, 13 Sep 2022 08:32:43 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Tue, Sep 13, 2022 at 08:32:43AM +0200, Peter Eisentraut wrote:\n> I notice that for column misspellings, the hint is phrased \"Perhaps you\n> meant X.\" whereas here we have \"Did you mean X?\". Let's make that uniform.\n\nGood point. I attached a new version of the patch that uses the column\nphrasing. I wasn't sure whether \"reference\" was the right word to use in\nthis context, but I used it for now for consistency with the column hints.\nI think \"specify\" or \"use\" would work as well.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 13 Sep 2022 12:02:55 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On 13.09.22 21:02, Nathan Bossart wrote:\n> On Tue, Sep 13, 2022 at 08:32:43AM +0200, Peter Eisentraut wrote:\n>> I notice that for column misspellings, the hint is phrased \"Perhaps you\n>> meant X.\" whereas here we have \"Did you mean X?\". Let's make that uniform.\n> \n> Good point. I attached a new version of the patch that uses the column\n> phrasing. I wasn't sure whether \"reference\" was the right word to use in\n> this context, but I used it for now for consistency with the column hints.\n> I think \"specify\" or \"use\" would work as well.\n\nI don't think we need a verb there at all. I committed it without a verb.\n\n\n\n",
"msg_date": "Fri, 16 Sep 2022 15:54:53 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw hint messages"
},
{
"msg_contents": "On Fri, Sep 16, 2022 at 03:54:53PM +0200, Peter Eisentraut wrote:\n> I don't think we need a verb there at all. I committed it without a verb.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Sep 2022 08:55:40 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw hint messages"
}
] |
[
{
"msg_contents": "libpq now contains a mix of error message strings that end with newlines \nand don't end with newlines, due to some newer code paths with new ways \nof passing errors around. This has now gotten me confused a few too \nmany times both during development and translation. So I looked into \nwhether we can unify this, similar to how we have done elsewhere (e.g., \npg_upgrade). I came up with the attached patch. It's not complete, but \nit shows the idea and it looks like a nice simplification to me. \nThoughts on this approach?",
"msg_date": "Thu, 25 Aug 2022 16:34:26 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "libpq error message refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-25 16:34:26 +0200, Peter Eisentraut wrote:\n> libpq now contains a mix of error message strings that end with newlines and\n> don't end with newlines, due to some newer code paths with new ways of\n> passing errors around. This has now gotten me confused a few too many times\n> both during development and translation. So I looked into whether we can\n> unify this, similar to how we have done elsewhere (e.g., pg_upgrade). I\n> came up with the attached patch. It's not complete, but it shows the idea\n> and it looks like a nice simplification to me. Thoughts on this approach?\n\nThis patch has been failing for a while:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3854\n\nInterestingly, previously the error only happened when targetting windows, but\nmeson also shows it on freebsd.\n\nIt's not the cause of this failure, I think, but doesn't appendPQExpBufferVA\nneed to be added to exports.txt?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 08:42:25 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 22.09.22 17:42, Andres Freund wrote:\n> This patch has been failing for a while:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3854\n> \n> Interestingly, previously the error only happened when targetting windows, but\n> meson also shows it on freebsd.\n> \n> It's not the cause of this failure, I think, but doesn't appendPQExpBufferVA\n> need to be added to exports.txt?\n\nI don't want to make that function available to users of libpq, just use \nit inside libpq across .c files. Is there no visibility level for that? \n Is that also the problem in the freebsd build?\n\n\n\n",
"msg_date": "Thu, 22 Sep 2022 22:00:00 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq error message refactoring"
},
{
    "msg_contents": "Hi,\n\nOn 2022-09-22 22:00:00 -0400, Peter Eisentraut wrote:\n> On 22.09.22 17:42, Andres Freund wrote:\n> > This patch has been failing for a while:\n> > https://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest/39/3854\n> > \n> > Interestingly, previously the error only happened when targeting windows, but\n> > meson also shows it on freebsd.\n> > \n> > It's not the cause of this failure, I think, but doesn't appendPQExpBufferVA\n> > need to be added to exports.txt?\n> \n> I don't want to make that function available to users of libpq, just use it\n> inside libpq across .c files. Is there no visibility level for that? Is\n> that also the problem in the freebsd build?\n\nI suspect the appendPQExpBufferVA is orthogonal - most (all?) of the other\nfunctions in pqexpbuffer.h are visible, so it feels weird/confusing to not\nmake appendPQExpBufferVA() available. I just noticed it when trying to\nunderstand the linker failure - which I still don't...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 19:27:27 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 22.09.22 17:42, Andres Freund wrote:\n>> It's not the cause of this failure, I think, but doesn't appendPQExpBufferVA\n>> need to be added to exports.txt?\n\n> I don't want to make that function available to users of libpq, just use \n> it inside libpq across .c files. Is there no visibility level for that? \n\nShould \"just work\", I should think. I agree with Andres that that's\nnot the cause of the build failure. I wonder if somehow the failing\nlinks are picking up the wrong libpq.a.\n\nSeparately from that: is it really okay to delegate use of a va_list\nargument like that? The other call paths of\nappendPQExpBufferVA[_internal] write va_start/va_end directly around it,\nnot somewhere else in the call chain. I'm too tired to language-lawyer\nout what happens when you do it like this, but I'm suspecting that it's\nnot well-defined portable behavior.\n\nI think what you probably need to do is export appendPQExpBufferVA\nas-is and require libpq_append_error to provide the error loop.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 22:37:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I suspect the appendPQExpBufferVA is orthogonal - most (all?) of the other\n> functions in pqexpbuffer.h are visible, so it feels weird/confusing to not\n> make appendPQExpBufferVA() available.\n\nI thought the same to start with, but if I'm right in my nearby reply\nthat we'll have to make callers loop around appendPQExpBufferVA,\nthen it seems like a good idea to keep it closely held.\n\nMore than zero commentary about that would be a good thing, too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 22:40:47 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "Hi,\n\nOn 2022-09-22 19:27:27 -0700, Andres Freund wrote:\n> I just noticed it when trying to understand the linker failure - which I\n> still don't...\n\nHeh, figured it out. It's inside #ifdef ENABLE_NLS. So it fails on all\nplatforms without NLS enabled.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 22 Sep 2022 19:45:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Heh, figured it out. It's inside #ifdef ENABLE_NLS. So it fails on all\n> platforms without NLS enabled.\n\nArgh, how simple!\n\nThe question about va_start/va_end placement still stands, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Sep 2022 22:48:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 23.09.22 04:45, Andres Freund wrote:\n> On 2022-09-22 19:27:27 -0700, Andres Freund wrote:\n>> I just noticed it when trying to understand the linker failure - which I\n>> still don't...\n> \n> Heh, figured it out. It's inside #ifdef ENABLE_NLS. So it fails on all\n> platforms without NLS enabled.\n\nHah!\n\nHere is an updated patch to get the CI clean. I'll look into the other \ndiscussed issues later.",
"msg_date": "Fri, 23 Sep 2022 16:31:03 -0400",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 25.08.22 16:34, Peter Eisentraut wrote:\n> libpq now contains a mix of error message strings that end with newlines \n> and don't end with newlines, due to some newer code paths with new ways \n> of passing errors around. This has now gotten me confused a few too \n> many times both during development and translation. So I looked into \n> whether we can unify this, similar to how we have done elsewhere (e.g., \n> pg_upgrade). I came up with the attached patch. It's not complete, but \n> it shows the idea and it looks like a nice simplification to me. \n\nI have completed this patch, taking into account the fixes discussed in \nthis thread.\n\nI have split the patch in two, for review: The first is just the new \nAPIs, the second are the changes that apply the API everywhere.",
"msg_date": "Wed, 12 Oct 2022 09:35:49 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 23.09.22 04:37, Tom Lane wrote:\n> Separately from that: is it really okay to delegate use of a va_list\n> argument like that? The other call paths of\n> appendPQExpBufferVA[_internal] write va_start/va_end directly around it,\n> not somewhere else in the call chain. I'm too tired to language-lawyer\n> out what happens when you do it like this, but I'm suspecting that it's\n> not well-defined portable behavior.\n> \n> I think what you probably need to do is export appendPQExpBufferVA\n> as-is and require libpq_append_error to provide the error loop.\n\nThere was actually a live problem here, maybe not the exact one you had \nin mind: When you actually need the \"need more space\" loop, you must do \nva_end() and va_start() before calling down again. Otherwise, the next \nva_arg() gets garbage.\n\nIt so happens that the error message\n\n\"private key file \\\"%s\\\" has group or world access; file must have \npermissions u=rw (0600) or less if owned by the current user, or \npermissions u=rw,g=r (0640) or less if owned by root\"\n\ntogether with an in-tree test location for the file in question just \nbarely exceeds INITIAL_EXPBUFFER_SIZE (256), and so my previous patch \nwould fail the \"ssl\" test suite. Good test coverage. :)\n\nAnyway, I have updated my patch with your suggestion, which should fix \nthese kinds of issues.\n\n\n\n",
"msg_date": "Wed, 12 Oct 2022 09:45:01 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq error message refactoring"
},
{
    "msg_contents": "Hello\n\nI gave this series a quick look. Overall it seems a good idea, since\nthe issue of newlines-or-not is quite bothersome for the libpq\ntranslations.\n\n> +/*\n> + * Append a formatted string to the given buffer, after translation. A\n> + * newline is automatically appended; the format should not end with a\n> + * newline.\n> + */\n\nI find the \"after translation\" bit unclear -- does it mean that the\ncaller should have already translated, or is it the other way around? I\nwould say \"Append a formatted string to the given buffer, after\ntranslating it\", which (to me) conveys more clearly that translation\noccurs here.\n\n\n> +\t/* Loop in case we have to retry after enlarging the buffer. */\n> +\tdo\n> +\t{\n> +\t\terrno = save_errno;\n> +\t\tva_start(args, fmt);\n> +\t\tdone = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);\n\nI wonder if it makes sense to do libpq_gettext() just once, instead of\nredoing it on each iteration.\n\n> +void\n> +libpq_append_conn_error(PGconn *conn, const char *fmt, ...)\n\nThese two routines are essentially identical. While we could argue\nabout sharing an underlying implementation, I think it's okay the way\nyou have it, because the overhead of sharing it would make that\npointless, given how short they are.\n\n\n> +extern void libpq_append_error(PQExpBuffer errorMessage, const char *fmt, ...) pg_attribute_printf(2, 3);\n> +extern void libpq_append_conn_error(PGconn *conn, const char *fmt, ...) 
pg_attribute_printf(2, 3);\n\npg_attribute_printf marker present -- check.\n\n> -GETTEXT_TRIGGERS = libpq_gettext pqInternalNotice:2\n> -GETTEXT_FLAGS = libpq_gettext:1:pass-c-format pqInternalNotice:2:c-format\n> +GETTEXT_TRIGGERS = libpq_append_conn_error:2 \\\n> + libpq_append_error:2 \\\n> + libpq_gettext pqInternalNotice:2\n> +GETTEXT_FLAGS = libpq_append_conn_error:2:c-format \\\n> + libpq_append_error:2:c-format \\\n> + libpq_gettext:1:pass-c-format pqInternalNotice:2:c-format\n\nLooks good.\n\n> --- a/src/interfaces/libpq/pqexpbuffer.h\n> +++ b/src/interfaces/libpq/pqexpbuffer.h\n\n> +/*------------------------\n> + * appendPQExpBufferVA\n> + * Shared guts of printfPQExpBuffer/appendPQExpBuffer.\n> + * Attempt to format data and append it to str. Returns true if done\n> + * (either successful or hard failure), false if need to retry.\n> + *\n> + * Caution: callers must be sure to preserve their entry-time errno\n> + * when looping, in case the fmt contains \"%m\".\n> + */\n> +extern bool appendPQExpBufferVA(PQExpBuffer str, const char *fmt, va_list args) pg_attribute_printf(2, 0);\n\nAs an API user, I don't care that this is shared guts for something\nelse, I just care about what it does. I think deleting that line is a\nsufficient fix.\n\n> -\t\t\tappendPQExpBufferStr(&conn->errorMessage,\n> -\t\t\t\t\t\t\t\t libpq_gettext(\"malformed SCRAM message (empty message)\\n\"));\n> +\t\t\tlibpq_append_conn_error(conn, \"malformed SCRAM message (empty message)\");\n\nOverall, this type of change looks positive. 
I didn't review all these\nchanges too closely other than the first couple of dozens, as there are\nway too many; I suppose you did these with some Emacs macros or something?\n\n> @@ -420,7 +418,8 @@ pqsecure_raw_write(PGconn *conn, const void *ptr, size_t len)\n> \t\t\t\tsnprintf(msgbuf, sizeof(msgbuf),\n> \t\t\t\t\t\t libpq_gettext(\"server closed the connection unexpectedly\\n\"\n> \t\t\t\t\t\t\t\t\t \"\\tThis probably means the server terminated abnormally\\n\"\n> -\t\t\t\t\t\t\t\t\t \"\\tbefore or while processing the request.\\n\"));\n> +\t\t\t\t\t\t\t\t\t \"\\tbefore or while processing the request.\"));\n> +\t\t\t\tstrlcat(msgbuf, \"\\n\", sizeof(msgbuf));\n> \t\t\t\tconn->write_err_msg = strdup(msgbuf);\n> \t\t\t\t/* Now claim the write succeeded */\n> \t\t\t\tn = len;\n\nIn these two places, we're writing the error message manually to a\nseparate variable, so the extra \\n is necessary. It looks a bit odd to\ndo it with strlcat() after the fact, but AFAICT it's necessary as it\nkeeps the \\n out of the translation catalog, which is good. This is\nnonobvious, so perhaps add a comment about it.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Before you were born your parents weren't as boring as they are now. They\ngot that way paying your bills, cleaning up your room and listening to you\ntell them how idealistic you are.\" -- Charles J. Sykes' advice to teenagers\n\n\n",
"msg_date": "Wed, 9 Nov 2022 13:29:31 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 09.11.22 13:29, Alvaro Herrera wrote:\n>> +/*\n>> + * Append a formatted string to the given buffer, after translation. A\n>> + * newline is automatically appended; the format should not end with a\n>> + * newline.\n>> + */\n> \n> I find the \"after translation\" bit unclear -- does it mean that the\n> caller should have already translated, or is it the other way around? I\n> would say \"Append a formatted string to the given buffer, after\n> translating it\", which (to me) conveys more clearly that translation\n> occurs here.\n\nok\n\n>> +\t/* Loop in case we have to retry after enlarging the buffer. */\n>> +\tdo\n>> +\t{\n>> +\t\terrno = save_errno;\n>> +\t\tva_start(args, fmt);\n>> +\t\tdone = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);\n> \n> I wonder if it makes sense to do libpq_gettext() just once, instead of\n> redoing it on each iteration.\n\nI wonder whether that would expose us to potential compiler warnings \nabout the format string not being constant. As long as the compiler can \ntrace that the string comes from gettext, it knows what's going on.\n\nAlso, most error strings in practice don't need the loop, so maybe it's \nnot a big issue.\n\n>> +/*------------------------\n>> + * appendPQExpBufferVA\n>> + * Shared guts of printfPQExpBuffer/appendPQExpBuffer.\n>> + * Attempt to format data and append it to str. Returns true if done\n>> + * (either successful or hard failure), false if need to retry.\n>> + *\n>> + * Caution: callers must be sure to preserve their entry-time errno\n>> + * when looping, in case the fmt contains \"%m\".\n>> + */\n>> +extern bool appendPQExpBufferVA(PQExpBuffer str, const char *fmt, va_list args) pg_attribute_printf(2, 0);\n> \n> As an API user, I don't care that this is shared guts for something\n> else, I just care about what it does. 
I think deleting that line is a\n> sufficient fix.\n\nok\n\n>> @@ -420,7 +418,8 @@ pqsecure_raw_write(PGconn *conn, const void *ptr, size_t len)\n>> \t\t\t\tsnprintf(msgbuf, sizeof(msgbuf),\n>> \t\t\t\t\t\t libpq_gettext(\"server closed the connection unexpectedly\\n\"\n>> \t\t\t\t\t\t\t\t\t \"\\tThis probably means the server terminated abnormally\\n\"\n>> -\t\t\t\t\t\t\t\t\t \"\\tbefore or while processing the request.\\n\"));\n>> +\t\t\t\t\t\t\t\t\t \"\\tbefore or while processing the request.\"));\n>> +\t\t\t\tstrlcat(msgbuf, \"\\n\", sizeof(msgbuf));\n>> \t\t\t\tconn->write_err_msg = strdup(msgbuf);\n>> \t\t\t\t/* Now claim the write succeeded */\n>> \t\t\t\tn = len;\n> \n> In these two places, we're writing the error message manually to a\n> separate variable, so the extra \\n is necessary. It looks a bit odd to\n> do it with strlcat() after the fact, but AFAICT it's necessary as it\n> keeps the \\n out of the translation catalog, which is good. This is\n> nonobvious, so perhaps add a comment about it.\n\nok\n\n\n\n",
"msg_date": "Sun, 13 Nov 2022 12:13:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: libpq error message refactoring"
},
{
"msg_contents": "On 2022-Nov-13, Peter Eisentraut wrote:\n\n> On 09.11.22 13:29, Alvaro Herrera wrote:\n\n> > > +\t/* Loop in case we have to retry after enlarging the buffer. */\n> > > +\tdo\n> > > +\t{\n> > > +\t\terrno = save_errno;\n> > > +\t\tva_start(args, fmt);\n> > > +\t\tdone = appendPQExpBufferVA(errorMessage, libpq_gettext(fmt), args);\n> > \n> > I wonder if it makes sense to do libpq_gettext() just once, instead of\n> > redoing it on each iteration.\n> \n> I wonder whether that would expose us to potential compiler warnings about\n> the format string not being constant. As long as the compiler can trace\n> that the string comes from gettext, it knows what's going on.\n> \n> Also, most error strings in practice don't need the loop, so maybe it's not\n> a big issue.\n\nTrue.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Mon, 14 Nov 2022 11:11:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: libpq error message refactoring"
}
] |
[
{
"msg_contents": "Without this patch concurrent ALTER/DROP SUBSCRIPTION statements for \nthe same subscription could result in one of these statements returning the\nfollowing error:\n\nERROR: XX000: tuple concurrently updated\n\nThis patch fixes that by re-fetching the tuple after acquiring the lock on the\nsubscription. The included isolation test fails most of its permutations\nwithout this patch, with the error shown above.\n\nThe loop to re-fetch the tuple is heavily based on the code from dbcommands.c",
"msg_date": "Thu, 25 Aug 2022 14:47:26 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 8:17 PM Jelte Fennema\n<Jelte.Fennema@microsoft.com> wrote:\n>\n> Without this patch concurrent ALTER/DROP SUBSCRIPTION statements for\n> the same subscription could result in one of these statements returning the\n> following error:\n>\n> ERROR: XX000: tuple concurrently updated\n>\n> This patch fixes that by re-fetching the tuple after acquiring the lock on the\n> subscription. The included isolation test fails most of its permutations\n> without this patch, with the error shown above.\n>\n\nWon't the same thing can happen for similar publication commands? Why\nis this unique to the subscription and not other Alter/Drop commands?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 26 Aug 2022 07:41:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "> Won't the same thing can happen for similar publication commands? Why\n> is this unique to the subscription and not other Alter/Drop commands?\n\nI indeed don't think this problem is unique to subscriptions, but it seems \nbetter to at least have this problem in a few places less (not making perfect\nthe enemy of good).\n\nIf someone has a more generic way of solving this for other commands too, \nthen that sounds great, but if not then slowly chipping away at these cases \nseems better than keeping the status quo.\n\nAttached is a new patch where ALTER SUBSCRIPTION ... OWNER TO ... can \nnow also be executed concurrently with the other subscription commands.",
"msg_date": "Fri, 26 Aug 2022 08:05:02 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch applies with few \"Hunk succeeded, offset -3 lines\" warnings. Tested against master '7d5852ca'.\r\n\r\n+ if (!HeapTupleIsValid(tup))\r\n+ {\r\n+ if (!missing_ok)\r\n+ ereport(ERROR,\r\n+ (errcode(ERRCODE_UNDEFINED_OBJECT),\r\n+ errmsg(\"subscription \\\"%s\\\" does not exist\",\r\n+ subname)));\r\n+ else\r\n+ ereport(NOTICE,\r\n+ (errmsg(\"subscription \\\"%s\\\" does not exist, skipping\",\r\n+ subname)));\r\n+\r\n+ return InvalidOid;\r\n+ }\r\n+\r\n\r\nI think 'tup' should be released before returning, or break out of loop instead to release it.\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Wed, 31 Aug 2022 10:29:58 +0000",
"msg_from": "Asif Rehman <asifr.rehman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "On 2022-Aug-26, Jelte Fennema wrote:\n\n> I indeed don't think this problem is unique to subscriptions, but it seems \n> better to at least have this problem in a few places less (not making perfect\n> the enemy of good).\n> \n> If someone has a more generic way of solving this for other commands too, \n> then that sounds great, but if not then slowly chipping away at these cases \n> seems better than keeping the status quo.\n> \n> Attached is a new patch where ALTER SUBSCRIPTION ... OWNER TO ... can \n> now also be executed concurrently with the other subscription commands.\n\nWould it work to use get_object_address() instead? That would save\nhaving to write a lookup-and-lock function with a retry loop for each\nobject type.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 9 Sep 2022 18:37:07 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "On Fri, Sep 09, 2022 at 06:37:07PM +0200, Alvaro Herrera wrote:\n> Would it work to use get_object_address() instead? That would save\n> having to write a lookup-and-lock function with a retry loop for each\n> object type.\n\nJeite, this thread is waiting for your input. This is a bug fix, so I\nhave moved this patch to the next CF for now to keep track of it.\n--\nMichael",
"msg_date": "Wed, 12 Oct 2022 14:17:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "On Wed, 12 Oct 2022 at 10:48, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Sep 09, 2022 at 06:37:07PM +0200, Alvaro Herrera wrote:\n> > Would it work to use get_object_address() instead? That would save\n> > having to write a lookup-and-lock function with a retry loop for each\n> > object type.\n>\n> Jeite, this thread is waiting for your input. This is a bug fix, so I\n> have moved this patch to the next CF for now to keep track of it.\n\nJeite, please post an updated version with the fixes. As CommitFest\n2023-01 is currently underway, this would be an excellent time to\nupdate the patch.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Mon, 16 Jan 2023 20:09:41 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "On Mon, 16 Jan 2023 at 09:47, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Jeite, please post an updated version with the fixes. As CommitFest\n> 2023-01 is currently underway, this would be an excellent time to\n> update the patch.\n\nHm. This patch is still waiting on updates. But it's a bug fix and it\nwould be good to get this in. Is someone else interested in finishing\nthis if Jeite isn't available?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 13:40:17 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> Hm. This patch is still waiting on updates. But it's a bug fix and it\n> would be good to get this in. Is someone else interested in finishing\n> this if Jeite isn't available?\n\nI think the patch as-submitted is pretty uninteresting, mainly because the\ndesign of adding bespoke lock code for subscription objects doesn't scale.\nI'm not excited about doing this just for subscriptions without at least a\nclear plan for working on other object types.\n\nAlvaro suggested switching it to use get_object_address() instead, which'd\nbe better; but before going in that direction we might have to do more\nwork on get_object_address's error reporting (note the para in its header\ncomments saying it's pretty weak on that).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 20 Mar 2023 14:00:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
},
{
"msg_contents": "> \"Gregory Stark (as CFM)\" <stark.cfm@gmail.com> writes:\n> > Hm. This patch is still waiting on updates. But it's a bug fix and it\n> > would be good to get this in. Is someone else interested in finishing\n> > this if Jeite isn't available?\n>\n> I think the patch as-submitted is pretty uninteresting, mainly because the\n> design of adding bespoke lock code for subscription objects doesn't scale.\n> I'm not excited about doing this just for subscriptions without at least a\n> clear plan for working on other object types.\n>\n> Alvaro suggested switching it to use get_object_address() instead, which'd\n> be better; but before going in that direction we might have to do more\n> work on get_object_address's error reporting (note the para in its header\n> comments saying it's pretty weak on that).\n\nSorry for not responding earlier in this thread. I'll be honest in\nsaying this was a small annoyance to me, so I ignored theresonses more\nthan I should have. It caused some test flakiness in the Citus test\nsuite, and it seemed that fixing the underlying issue in Postgres was\nmost appropriate. I addressed this in Citus its test suite by\ndisabling the relevant test (which was an edge case test anyway). So\nmy immidiate problem was fixed, and I stopped caring about this patch\nvery much. Definitely not enough to address this for all other DDLs\nwith the same issue.\n\nAll in all I'm having a hard time feeling motivated to work on a patch\nthat I don't care much about. Especially since I have two other\npatches open for a few commit fests that I actually care about, but\nthose patches have received (imho) very little input. Which makes it\nhard to justify to myself to spend time on this patch, given the\nknowledge that if I would spend time on it, it might take away the\nprecious reviewer time from the patches I do care about.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 22:54:30 +0100",
"msg_from": "Jelte Fennema <postgres@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix alter subscription concurrency errors"
}
] |
[
{
"msg_contents": "Hi,\n\nWe've had some previous discussions about when to use\nhas_privs_of_role and when to use is_member_of_role, and\nhas_privs_of_role has mostly won the fight. That means that, if role\n\"robert\" is set to NOINHERIT and you \"GRANT stuff TO robert\", for the\nmost part \"robert\" will not actually be able to do things that \"stuff\"\ncould do. Now, robert will be able TO \"SET ROLE stuff\" and then do all\nand only those things that \"stuff\" can do, but he won't be able to do\nthose things as \"robert\". For example:\n\nrhaas=# set role robert;\nSET\nrhaas=> select * from stuff_table;\nERROR: permission denied for table stuff_table\n\nSo far, so good. But it's clearly not the case that \"GRANT stuff TO\nrobert\" has conferred no privileges at all on robert. At the very\nleast, it's enabled him to \"SET ROLE stuff\", but what else? I decided\nto go through the code and make a list of the things that robert can\nnow do that he couldn't do before. Here it is:\n\n1. robert can create new objects of various types owned by stuff:\n\nrhaas=> create schema stuff_by_robert authorization stuff;\nCREATE SCHEMA\nrhaas=> create schema unrelated_by_robert authorization unrelated;\nERROR: must be member of role \"unrelated\"\n\n2. robert can change the owner of objects he owns to instead be owned by stuff:\n\nrhaas=> alter table robert_table owner to unrelated;\nERROR: must be member of role \"unrelated\"\nrhaas=> alter table robert_table owner to stuff;\nALTER TABLE\n\n3. robert can change the default privileges for stuff:\n\nrhaas=> alter default privileges for role unrelated grant select on\ntables to public;\nERROR: must be member of role \"unrelated\"\nrhaas=> alter default privileges for role stuff grant select on tables\nto public;\nALTER DEFAULT PRIVILEGES\n\n4. robert can execute \"SET ROLE stuff\".\n\nThat's it. 
There are two other behaviors that change -- the return\nvalue of pg_has_role('robert', 'stuff', 'MEMBER') and pg_hba.conf\nmatching to groups -- but those aren't things that robert gains the\nability to do. The above is an exhaustive list of the things robert\ngains the ability to do.\n\nI argue that #3 is a clear bug. robert can't select from stuff's\ntables or change privileges on stuff's objects, so why can he change\nstuff's default privileges? is_member_of_role() has a note that it is\nnot to be used for privilege checking, and this seems like it's pretty\nclearly a privilege check.\n\nOn the flip side, #4 is pretty clearly correct. Presumably, allowing\nthat to happen was the whole point of executing \"GRANT stuff TO\nrobert\" in the first place.\n\nThe other two are less clear, in my opinion. We don't want users to\nend up owning objects that they didn't intend to own; in particular,\nif any user could make a security-definer function and then gift it to\nthe superuser, it would be a disaster. So, arguably, the ability to\nmake some other role the owner of an object represents a privilege\nthat your role holds with respect to their role. Under that theory,\nthe is_member_of_role() checks that are performed in cases #1 and #2\nare privilege checks, and we ought to be using has_privs_of_role()\ninstead, so that a non-inherited role grant doesn't confer those\nprivileges. But I don't find this very clear cut, because except when\nthe object you're gifting is a Trojan horse, giving stuff away helps\nthe recipient, not the donor.\n\nAlso, from a practical point of view, changing the owner of an object\nis different from other things that robert might want to do. If robert\nwants to create a table as user stuff or read some data from tables\nuser stuff can access or change privileges on objects that role stuff\nowns, he can just execute \"SET ROLE stuff\" and then do any of that\nstuff. But he can't give away his own objects by assuming stuff's\nprivileges. 
Either he can do it as himself, or he can't do it at all.\nIt wouldn't be crazy IMHO to decide that a non-inherited grant isn't\nsufficient to donate objects to the granted role, and thus an\ninherited grant is required in such cases. However, the current system\ndoesn't seem insane either, and in fact might be convenient in some\nsituations.\n\nIn short, my proposal is to change the ALTER DEFAULT PRIVILEGES code\nso that you have to have the privileges of the target role, not just\nmembership in the target role, and leave everything else unchanged.\n\nThoughts?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 12:12:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On 8/25/22 12:12, Robert Haas wrote:\n> So far, so good. But it's clearly not the case that \"GRANT stuff TO\n> robert\" has conferred no privileges at all on robert. At the very\n> least, it's enabled him to \"SET ROLE stuff\", but what else? I decided\n> to go through the code and make a list of the things that robert can\n> now do that he couldn't do before. Here it is:\n> \n> 1. robert can create new objects of various types owned by stuff:\n\n> 2. robert can change the owner of objects he owns to instead be owned by stuff:\n\n> 3. robert can change the default privileges for stuff:\n\n> 4. robert can execute \"SET ROLE stuff\".\n\nNice analysis, and surprising (to me)\n\n> I argue that #3 is a clear bug. robert can't select from stuff's\n> tables or change privileges on stuff's objects, so why can he change\n> stuff's default privileges? is_member_of_role() has a note that it is\n> not to be used for privilege checking, and this seems like it's pretty\n> clearly a privilege check.\n\n\n+1 this feels very wrong to me\n\n\n> On the flip side, #4 is pretty clearly correct. Presumably, allowing\n> that to happen was the whole point of executing \"GRANT stuff TO\n> robert\" in the first place.\n\nExactly\n\n> The other two are less clear, in my opinion. We don't want users to\n> end up owning objects that they didn't intend to own; in particular,\n> if any user could make a security-definer function and then gift it to\n> the superuser, it would be a disaster. So, arguably, the ability to\n> make some other role the owner of an object represents a privilege\n> that your role holds with respect to their role. Under that theory,\n> the is_member_of_role() checks that are performed in cases #1 and #2\n> are privilege checks, and we ought to be using has_privis_of_role()\n> instead, so that a non-inherited role grant doesn't confer those\n> privileges. 
But I don't find this very clear cut, because except when\n> the object you're gifting is a Trojan horse, giving stuff away helps\n> the recipient, not the donor.\n> \n> Also, from a practical point of view, changing the owner of an object\n> is different from other things that robert might want to do. If robert\n> wants to create a table as user stuff or read some data from tables\n> user stuff can access or change privileges on objects that role stuff\n> owns, he can just execute \"SET ROLE stuff\" and then do any of that\n> stuff. But he can't give away his own objects by assuming stuff's\n> privileges. Either he can do it as himself, or he can't do it at all.\n> It wouldn't be crazy IMHO to decide that a non-inherited grant isn't\n> sufficient to donate objects to the granted role, and thus an\n> inherited grant is required in such cases. However, the current system\n> doesn't seem insane either, and in fact might be convenient in some\n> situations.\n> \n> In short, my proposal is to change the ALTER DEFAULT PRIVILEGES code\n> so that you have to have the privileges of the target role, not jut\n> membership in the target role, and leave everything else unchanged.\n> \n> Thoughts?\n\nI'm not sure about these last two. Does it matter that object creation \nis being logged, maybe for auditing purposes, under a different user \nthan the owner of the object?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 15:03:27 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 3:03 PM Joe Conway <mail@joeconway.com> wrote:\n> Nice analysis, and surprising (to me)\n\nThanks.\n\n> > I argue that #3 is a clear bug. robert can't select from stuff's\n> > tables or change privileges on stuff's objects, so why can he change\n> > stuff's default privileges? is_member_of_role() has a note that it is\n> > not to be used for privilege checking, and this seems like it's pretty\n> > clearly a privilege check.\n>\n> +1 this feels very wrong to me\n\nCool. I'll prepare a patch for that, unless someone else beats me to it.\n\nI really hate back-patching this kind of change but it's possible that\nit's the right thing to do. There's no real security exposure because\nthe member could always SET ROLE and then do the exact same thing, so\nback-patching feels to me like it has a significantly higher chance of\nturning happy users into unhappy ones than the reverse. On the other\nhand, it's pretty hard to defend the current behavior once you stop to\nthink about it, so perhaps it should be back-patched on those grounds.\nOn the third hand, the fact that this has gone undiscovered for a\ndecade makes you wonder whether we've really had clear enough ideas\nabout this to justify calling it a bug rather than, say, an elevation\nof our thinking on this topic.\n\n> I'm not sure about these last two. Does it matter that object creation\n> is being logged, maybe for auditing purposes, under a different user\n> than the owner of the object?\n\nI'd be inclined to say that it doesn't matter, because the grant could\nhave just as well been inheritable, or the action could have been\nperformed by a superuser. Also, as a rule of thumb, I don't think we\nshould choose to prohibit things on the grounds that some auditing\nregime might not be able to understand what happened. If that's an\nissue, we should address it by making the logging better, or including\nbetter logging hooks, or what have you. 
I think that the focus should\nbe on the permissions model: what is the \"right thing\" from a security\nstandpoint?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:19:21 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I really hate back-patching this kind of change but it's possible that\n> it's the right thing to do. There's no real security exposure because\n> the member could always SET ROLE and then do the exact same thing, so\n> back-patching feels to me like it has a significantly higher chance of\n> turning happy users into unhappy ones than the reverse. On the other\n> hand, it's pretty hard to defend the current behavior once you stop to\n> think about it, so perhaps it should be back-patched on those grounds.\n> On the third hand, the fact that this has gone undiscovered for a\n> decade makes you wonder whether we've really had clear enough ideas\n> about this to justify calling it a bug rather than, say, an elevation\n> of our thinking on this topic.\n\nYeah, I'd lean against back-patching. This is the sort of behavioral\nchange that users tend not to like finding in minor releases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:41:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 4:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I'd lean against back-patching. This is the sort of behavioral\n> change that users tend not to like finding in minor releases.\n\nHere's a small patch. Despite the small size of the patch, there are a\ncouple of debatable points here:\n\n1. Should we have a code comment? I feel it isn't necessary, because\nthere's a comment just a few lines earlier saying \"Look up the role\nOIDs and do permissions checks\" and that seems like sufficient\njustification for what follows.\n\n2. What about the error message? Personally, I'm not very excited\nabout \"permission denied to whatever\" as a way to phrase an error\nmessage. It doesn't sound like particularly good grammar to me. But\nit's the phrasing we use elsewhere, so I guess we should do the same\nhere.\n\n3. Should we have a test case? We are extremely thin on test cases for\nNOINHERIT behavior, it seems, and testing this one thing when we don't\ntest anything else seems relatively useless. Also, privileges.sql is a\ngiant mess. It's a 1700+ line test file that tests many fairly\nunrelated things. I am inclined to think that this file badly needs to\nbe split up into a bunch of smaller files, because it's practically\nunmaintainable as is. For instance, the stuff at the top of the file\nis testing a bunch of things about role privileges, but then check\nsome stuff about leakproof functions before coming back to test stuff\nabout groups, which logically probably belongs with the role\nprivileges stuff. 
Perhaps a reasonable starting split would be\nsomething like:\n\n- Privileges on roles.\n- Privileges on relations.\n- Privileges on other kinds of objects.\n- Default privileges.\n- Security barriers and leakproof functions.\n- Security-restricted operations.\n\nSome of those files might be fairly small initially, but they might be\nget bigger later, especially because it'd be a heck of a lot easier to\nadd new test cases if you didn't have to worry that some change you\nmake is going to break a test 1000 lines down in the file.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 10:11:38 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Jeff Davis's comment in\nhttp://postgr.es/m/4f8d536a9221bccc5a33bb784dace0ef2310ec4a.camel@j-davis.com\nreminds me that I need to update this thread based on the patch posted\nover there. That patch allows you to grant membership in one role to\nanother while withholding the ability to SET ROLE to the target role.\nAnd it's already possible to grant membership in one role to another\nwhile not allowing for inheritance of privileges. And I think that\nsheds new light on the two debatable points from my original email:\n\nOn Thu, Aug 25, 2022 at 12:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 1. robert can create new objects of various types owned by stuff:\n>\n> rhaas=> create schema stuff_by_robert authorization stuff;\n> CREATE SCHEMA\n> rhaas=> create schema unrelated_by_robert authorization unrelated;\n> ERROR: must be member of role \"unrelated\"\n>\n> 2. robert can change the owner of objects he owns to instead be owned by stuff:\n>\n> rhaas=> alter table robert_table owner to unrelated;\n> ERROR: must be member of role \"unrelated\"\n> rhaas=> alter table robert_table owner to stuff;\n> ALTER TABLE\n\nIt now seems to me that we should disallow these, because if we adopt\nthe patch from that other thread, and then you GRANT\npg_read_all_settings TO alice WITH INHERIT false, SET false, you might\nreasonably expect that alice is not going to be able to clutter the\nsystem with a bunch of objects owned by pg_read_all_settings, but\nbecause of (1) and (2), alice can do exactly that.\n\nTo be more precise, I propose that in order for alice to create\nobjects owned by bob or to change one of her objects to be owned by\nbob, she must not only be a member of role bob, but also inherit bob's\nprivileges. 
If she has the ability to SET ROLE bob but does not\ninherit his privileges, she can create new objects owned by bob only\nif she first does SET ROLE bob, and she cannot reassign ownership of\nher objects to bob at all.\n\nMeanwhile, the patch that I posted previously to fix point (3) from\nthe original email, that ALTER DEFAULT PRIVILEGES is allowed for no\ngood reason, still seems like a good idea. Any reviews appreciated.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:08:09 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> Jeff Davis's comment in\n> http://postgr.es/m/4f8d536a9221bccc5a33bb784dace0ef2310ec4a.camel@j-davis.com\n> reminds me that I need to update this thread based on the patch posted\n> over there. That patch allows you to grant membership in one role to\n> another while withholding the ability to SET ROLE to the target role.\n> And it's already possible to grant membership in one role to another\n> while not allowing for inheritance of privileges. And I think that\n> sheds new light on the two debatable points from my original email:\n> \n> On Thu, Aug 25, 2022 at 12:12 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > 1. robert can create new objects of various types owned by stuff:\n> >\n> > rhaas=> create schema stuff_by_robert authorization stuff;\n> > CREATE SCHEMA\n> > rhaas=> create schema unrelated_by_robert authorization unrelated;\n> > ERROR: must be member of role \"unrelated\"\n> >\n> > 2. robert can change the owner of objects he owns to instead be owned by stuff:\n> >\n> > rhaas=> alter table robert_table owner to unrelated;\n> > ERROR: must be member of role \"unrelated\"\n> > rhaas=> alter table robert_table owner to stuff;\n> > ALTER TABLE\n> \n> It now seems to me that we should disallow these, because if we adopt\n> the patch from that other thread, and then you GRANT\n> pg_read_all_settings TO alice WITH INHERIT false, SET false, you might\n> reasonably expect that alice is not going to be able to clutter the\n> system with a bunch of objects owned by pg_read_all_settings, but\n> because of (1) and (2), alice can do exactly that.\n\nErr, that shouldn't be allowed and if it is then that's my fault for not\nimplementing something to avoid having that happen. 
imv, predefined\nroles shouldn't be able to end up with objects they own except in cases\nwhere we declare that a predefined role owns X.\n\nI do think that the above two are correct and am fairly confident that\nthey were intentional as implemented as, otherwise, as noted in your\noriginal message, you can't actually change the ownership of the\nexisting object/table and instead end up having to copy the whole thing,\nwhich seems quite inefficient. In other words, the same result could be\naccomplished but in a much less efficient way and therefore it makes\nsense to provide a way for it to be done that is efficient.\n\n> To be more precise, I propose that in order for alice to create\n> objects owned by bob or to change one of her objects to be owned by\n> bob, she must not only be a member of role bob, but also inherit bob's\n> privileges. If she has the ability to SET ROLE bob but not does not\n> inherit his privileges, she can create new objects owned by bob only\n> if she first does SET ROLE bob, and she cannot reassign ownership of\n> her objects to bob at all.\n\n... which means that to get a table owned by bob which is currently\nowned by alice, alice has to:\n\nset role bob;\ncreate table;\ngrant insert on table to alice;\nreset role;\ninsert into table select * from table;\n\nThat's pretty sucky and is the case that had been contemplated at the\ntime that was written to allow it (at least, if memory serves). iirc,\nthat's also why we check the *bob* has CREATE rights in the place where\nthis is happening (as otherwise the above process wouldn't work either).\n\n> Meanwhile, the patch that I posted previously to fix point (3) from\n> the original email, that ALTER DEFAULT PRIVILEGES is allowed for no\n> good reason, still seems like a good idea. Any reviews appreciated.\n\nHaven't looked at the patch, +1 on the general change though, that looks\nlike incorrect usage of is_member_of_role to me.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 7 Sep 2022 17:51:28 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 5:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > To be more precise, I propose that in order for alice to create\n> > objects owned by bob or to change one of her objects to be owned by\n> > bob, she must not only be a member of role bob, but also inherit bob's\n> > privileges. If she has the ability to SET ROLE bob but not does not\n> > inherit his privileges, she can create new objects owned by bob only\n> > if she first does SET ROLE bob, and she cannot reassign ownership of\n> > her objects to bob at all.\n>\n> ... which means that to get a table owned by bob which is currently\n> owned by alice, alice has to:\n>\n> set role bob;\n> create table;\n> grant insert on table to alice;\n> reset role;\n> insert into table select * from table;\n>\n> That's pretty sucky and is the case that had been contemplated at the\n> time that was written to allow it (at least, if memory serves). iirc,\n> that's also why we check the *bob* has CREATE rights in the place where\n> this is happening (as otherwise the above process wouldn't work either).\n\nSure. I think it comes down to what you think that the system\nadministrator intended to block by not allowing alice to inherit bob's\npermissions. In existing releases, there's no facility to prevent\nalice from doing SET ROLE bob, so the system administrator can't have\nintended this as a security measure. But the system administrator\nmight have intended that alice shouldn't do anything that relies on\nbob's permissions by accident, else she should have SET ROLE. And in\nthat case the intention is defeated by allowing the operation. Now,\nyou may well have in mind some other intention that the system\nadministrator could have had where allowing alice to perform this\noperation without needing to inherit bob's permissions is sensible;\nI'm not trying to say there is no such case. 
I don't know what it is,\nthough.\n\nMy first reaction was in the same ballpark as yours: what's the big\ndeal? But as I think about it more, I struggle to reconcile that\ninstinct with any specific use case.\n\nFairly obviously, my thinking here is biased by having written the\npatch to allow restricting SET ROLE. If alice can neither inherit\nbob's privileges nor SET ROLE bob, she had better not be able to\ncreate objects owned by bob, because otherwise she can make a table,\nadd an expression index that calls a user-defined function, do stuff\nuntil it needs to be autovacuumed, and then give it to bob, and boom,\nexploit. But that doesn't mean that the is_member_of_role() tests here\nhave to be changed to has_privs_of_role(). They could be changed to\nhas_privs_of_role() || member_can_set_role(). And if the consensus is\nto do it that way, I'm OK with that.\n\nI'm just a little unconvinced that it's actually the best route. I\nthink that logic of the form \"well Alice could just SET ROLE and do it\nanyway\" is weak -- and not only because of the patch to allow\nrestricting SET ROLE, but because AFAICT there is no point to the\nINHERIT option in the first place unless it is to force you to issue\nSET ROLE. That is literally the only thing it does. If we're going to\nhave weird exceptions where you don't have to SET ROLE after all, why\neven have INHERIT in the first place?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 09:21:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n> Fairly obviously, my thinking here is biased by having written the\n> patch to allow restricting SET ROLE. If alice can neither inherit\n> bob's privileges nor SET ROLE bob, she had better not be able to\n> create objects owned by bob, because otherwise she can make a table,\n> add an expression index that calls a user-defined function, do stuff\n> until it needs to be autovacuumed, and then give it to bob, and boom,\n> exploit. But that doesn't mean that the is_member_of_role() tests here\n> have to be changed to has_privs_of_role(). They could be changed to\n> has_privs_of_role() || member_can_set_role(). And if the consensus is\n> to do it that way, I'm OK with that.\n> \n> I'm just a little unconvinced that it's actually the best route. I\n> think that logic of the form \"well Alice could just SET ROLE and do it\n> anyway\" is weak -- and not only because of the patch to allow\n> restricting SET ROLE, but because AFAICT there is no point to the\n> INHERIT option in the first place unless it is to force you to issue\n> SET ROLE. That is literally the only thing it does. If we're going to\n> have weird exceptions where you don't have to SET ROLE after all, why\n> even have INHERIT in the first place?\n\nI think to change the owner of an object from role A to role B, you just \nneed a different \"privilege\" on that role B to \"use\" the role that way, \nwhich is distinct from INHERIT or SET ROLE \"privileges\".\n\nWhen you are allowed to INHERIT a role, you are allowed to use the \nGRANTs that have been given to this role. When you are allowed to SET \nROLE, then you are allowed to switch into this role. 
You could think of \nanother \"privilege\", USAGE on a role, which would allow you to \"use\" \nthis role as a target in a statement to change the owner of an object.\n\nTo change the owner for an object from role A to role B, you need:\n- the privilege to ALTER the object, which is implied when you are A\n- the privilege to \"use\" role B as a target\n\nSo basically the privilege to use role B as the new owner, is a \nprivilege you have **on** the role object B, while the privilege to \nchange the owner of an object is something you have **through** your \nmembership in role A.\n\nUp to v15, there were no separate privileges for this. You were either a \nmember of a role or you were not. Now with INHERIT and maybe SET ROLE \nprivileges/grant options, we can do two things:\n- Keep the ability to use a role as a target in those statements as the \nmost basic privilege on a role, that is implied by membership in that \nrole and can't be taken away (currently the case), or\n- invent a new privilege or grant option to allow changing that.\n\nBut mixing this with either INHERIT or SET ROLE doesn't make sense, imho.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Thu, 8 Sep 2022 17:45:07 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 11:45 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> I think to change the owner of an object from role A to role B, you just\n> need a different \"privilege\" on that role B to \"use\" the role that way,\n> which is distinct from INHERIT or SET ROLE \"privileges\".\n\nIt's not distinct, though, because if you can transfer ownership of a\ntable to another user, you can use that ability to gain the privileges\nof that user.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 8 Sep 2022 12:02:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n>> I think to change the owner of an object from role A to role B, you just\n>> need a different \"privilege\" on that role B to \"use\" the role that way,\n>> which is distinct from INHERIT or SET ROLE \"privileges\".\n> \n> It's not distinct, though, because if you can transfer ownership of a\n> table to another user, you can use that ability to gain the privileges\n> of that user.\n\nRight, but the inverse is not neccessarily true, so you could have SET \nROLE privileges, but not \"USAGE\" - and then couldn't change the owner of \nan object to this role.\n\nUSAGE is not a good term, because it implies \"least amount of \nprivileges\", but in this case it's quite the opposite.\n\nIn any case, adding a grant option for SET ROLE, while keeping the \nrequired privileges for a transfer of ownership at the minimum \n(membership only), doesn't really make sense. I guess both threads \nshould be discussed together?\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Thu, 8 Sep 2022 18:30:19 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n> Fairly obviously, my thinking here is biased by having written the\n> patch to allow restricting SET ROLE. If alice can neither inherit\n> bob's privileges nor SET ROLE bob, she had better not be able to\n> create objects owned by bob, because otherwise she can make a table,\n> add an expression index that calls a user-defined function, do stuff\n> until it needs to be autovacuumed, and then give it to bob, and boom,\n> exploit. But that doesn't mean that the is_member_of_role() tests here\n> have to be changed to has_privs_of_role(). They could be changed to\n> has_privs_of_role() || member_can_set_role(). And if the consensus is\n> to do it that way, I'm OK with that.\n\nA different line of thought (compared to the \"USAGE\" privilege I \ndiscussed earlier), would be:\nTo transfer ownership of an object, you need two sets of privileges:\n- You need to have the privilege to initiate a request to transfer \nownership.\n- You need to have the privilege to accept a request to transfer ownership.\n\nLet's imagine there'd be such a request created temporarily, then when I \nstart the process of changing ownership, I would have to change to the \nother role and then accept that request.\n\nIn theory, I could also inherit that privilege, but that's not how the \nsystem works today. By using is_member_of_role, the decision was already \nmade that this should not depend on inheritance. What is left, is the \nability to do it via SET ROLE only.\n\nSo it should not be has_privs_of_role() nor has_privs_of_role() || \nmember_can_set_role(), as you suggested above, but rather just \nmember_can_set_role() only. Of course, only in the context of the SET \nROLE patch.\n\nBasically, with that patch is_member_of_role() has to become \nmember_can_set_role().\n\n> I'm just a little unconvinced that it's actually the best route. 
I\n> think that logic of the form \"well Alice could just SET ROLE and do it\n> anyway\" is weak -- and not only because of the patch to allow\n> restricting SET ROLE, but because AFAICT there is no point to the\n> INHERIT option in the first place unless it is to force you to issue\n> SET ROLE. That is literally the only thing it does. If we're going to\n> have weird exceptions where you don't have to SET ROLE after all, why\n> even have INHERIT in the first place?\n\nAs stated above, I don't think this is about INHERIT. INHERIT works fine \nboth without the SET ROLE patch (and keeping is_member_of_role) and with \nthe SET ROLE patch (and changing to member_can_set_role).\n\nThe exception is made, because there is no formal two-step process for \nrequesting and accepting a transfer of ownership. Or alternatively: \nThere is no exception, it's just that during the command to transfer \nownership, the current role has to be changed temporarily to the \naccepting role. And that's the same as checking is_member_of_role or \nmember_can_set_role, respectively.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Thu, 8 Sep 2022 19:06:24 +0200",
"msg_from": "walther@technowledgy.de",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 10:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Here's a small patch. Despite the small size of the patch, there are a\n> couple of debatable points here:\n\nNobody's commented on this patch specifically, but it seemed like we\nhad consensus that ALTER DEFAULT PRIVILEGES was doing The Wrong Thing,\nso I've pushed the patch I posted previously for that issue.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 14:42:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 1:06 PM <walther@technowledgy.de> wrote:\n> A different line of thought (compared to the \"USAGE\" privilege I\n> discussed earlier), would be:\n> To transfer ownership of an object, you need two sets of privileges:\n> - You need to have the privilege to initiate a request to transfer\n> ownership.\n> - You need to have the privilege to accept a request to transfer ownership.\n>\n> Let's imagine there'd be such a request created temporarily, then when I\n> start the process of changing ownership, I would have to change to the\n> other role and then accept that request.\n>\n> In theory, I could also inherit that privilege, but that's not how the\n> system works today. By using is_member_of_role, the decision was already\n> made that this should not depend on inheritance. What is left, is the\n> ability to do it via SET ROLE only.\n\nI do not accept the argument that we've already made the decision that\nthis should not depend on inheritance. It's pretty clear that we\nhaven't thought carefully enough about which checks should depend only\non membership, and which ones should depend on inheritance. The patch\nI committed just now to fix ALTER DEFAULT PRIVILEGES is one clear\nexample of where we've gotten that wrong. We also changed the way\npredefined roles worked with inheritance not too long ago, so that\nthey started using has_privs_of_role() rather than\nis_member_of_role(). Our past thinking on this topic has been fuzzy\nenough that we can't really conclude that because something uses\nis_member_of_role() now that's what it should continue to do in the\nfuture. We are working to get from a messy situation where the rules\naren't consistent or understandable to one where they are, and that\nmay mean changing some things.\n\n> So it should not be has_privs_of_role() nor has_privs_of_role() ||\n> member_can_set_role(), as you suggested above, but rather just\n> member_can_set_role() only. 
Of course, only in the context of the SET\n> ROLE patch.\n\nNow, having said that, this choice of behavior might have some\nadvantages. It would mean that you could GRANT pg_read_all_settings TO\nsomeone WITH INHERIT TRUE, SET FALSE and that user would be able to\nread all settings but would not be able to create objects owned by\npg_read_all_settings. It would also be upward-compatible with the\nexisting behavior, which is nice.\n\nWell, maybe. Suppose that role A has been granted pg_read_all_settings\nWITH INHERIT TRUE, SET TRUE and role B has been granted\npg_read_all_settings WITH INHERIT TRUE, SET FALSE. A can create a\ntable owned by pg_read_all_settings. If A does that, then B can now\ncreate a trigger on that table and usurp the privileges of\npg_read_all_settings, after which B can now create any number of\nobjects owned by pg_read_all_settings. If A does not do that, though,\nI think that with the proposed rule, B would have no way to create\nobjects owned by A. This is a bit unsatisfying. It seems like B should\neither have the right to usurp pg_read_all_settings's privileges or\nnot, rather than maybe having that right depending on what some other\nuser chooses to do.\n\nBut maybe it's OK. It's hard to come up with perfect solutions here.\nOne could take the view that the issue here is that\npg_read_all_settings shouldn't have the right to create objects in the\nfirst place, and that this INHERIT vs. SET ROLE distinction is just a\ndistraction. However, that would require accepting the idea that it's\npossible for a role to lack privileges granted to PUBLIC, which also\nsounds pretty unsatisfying. On the whole, I'm inclined to think it's\nreasonable to suppose that if you want to grant a role to someone\nwithout letting them create objects owned by that role, it should be a\nrole that doesn't own any existing objects either. 
Essentially, that's\nlegislating that predefined roles should be minimally privileged: they\nshould hold the ability to do whatever it is that they are there to do\n(like read all settings) but not have any other privileges (like the\nability to do stuff to objects they own).\n\nBut maybe there's a better answer. Ideas/opinions welcome.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Sep 2022 15:32:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n> Well, maybe. Suppose that role A has been granted pg_read_all_settings\n> WITH INHERIT TRUE, SET TRUE and role B has been granted\n> pg_read_all_settings WITH INHERIT TRUE, SET FALSE. A can create a\n> table owned by pg_read_all_settings. If A does that, then B can now\n> create a trigger on that table and usurp the privileges of\n> pg_read_all_settings, after which B can now create any number of\n> objects owned by pg_read_all_settings.\n\nI'm not seeing how this is possible. A trigger function would run with \nthe invoking user's privileges by default, right? So B would have to \ncreate a trigger with a SECURITY DEFINER function, which is owned by \npg_read_all_settings to actually usurp the privileges of that role. But \ncreating objects with that owner is exactly the thing B can't do.\n\nWhat am I missing?\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Sun, 25 Sep 2022 11:08:10 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Sun, Sep 25, 2022 at 5:08 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> Robert Haas:\n> > Well, maybe. Suppose that role A has been granted pg_read_all_settings\n> > WITH INHERIT TRUE, SET TRUE and role B has been granted\n> > pg_read_all_settings WITH INHERIT TRUE, SET FALSE. A can create a\n> > table owned by pg_read_all_settings. If A does that, then B can now\n> > create a trigger on that table and usurp the privileges of\n> > pg_read_all_settings, after which B can now create any number of\n> > objects owned by pg_read_all_settings.\n>\n> I'm not seeing how this is possible. A trigger function would run with\n> the invoking user's privileges by default, right? So B would have to\n> create a trigger with a SECURITY DEFINER function, which is owned by\n> pg_read_all_settings to actually usurp the privileges of that role. But\n> creating objects with that owner is exactly the thing B can't do.\n\nYeah, my statement before wasn't correct. It appears that alice can't\njust usurp the privileges of pg_read_all_settings trivially, but she\ncan create a trigger on any preexisting table owned by\npg_read_all_settings and then anyone who performs an operation that\ncauses that trigger to fire is at risk:\n\nrhaas=# create role alice;\nCREATE ROLE\nrhaas=# create table foo (a int, b text);\nCREATE TABLE\nrhaas=# alter table foo owner to pg_read_all_settings;\nALTER TABLE\nrhaas=# grant pg_read_all_settings to alice;\nGRANT ROLE\nrhaas=# grant create on schema public to alice;\nGRANT\nrhaas=# set session authorization alice;\nSET\nrhaas=> create or replace function alice_function () returns trigger\nas $$begin raise notice 'this trigger is running as %', current_user;\nreturn null; end$$ language plpgsql;\nCREATE FUNCTION\nrhaas=> create trigger t1 before insert or update or delete on foo for\neach row execute function alice_function();\nCREATE TRIGGER\nrhaas=> begin;\nBEGIN\nrhaas=*> insert into foo values (1, 'stuff');\nNOTICE: this trigger 
is running as alice\nINSERT 0 0\nrhaas=*> rollback;\nROLLBACK\nrhaas=> reset session authorization;\nRESET\nrhaas=# begin;\nBEGIN\nrhaas=*# insert into foo values (1, 'stuff');\nNOTICE: this trigger is running as rhaas\nINSERT 0 0\nrhaas=*# rollback;\nROLLBACK\n\nThis shows that if rhaas (or whoever) performs DML on a table owned by\npg_read_all_settings, he might trigger arbitrary code written by alice\nto run under his own user ID. Now, that hazard would exist anyway for\ntables owned by alice, but now it also exists for any tables owned by\npg_read_all_settings. I'm not really sure how significant that is. If\nyou can create triggers as some other user and that user ever does\nstuff as themselves, you can probably steal their privileges, because\nthey will probably eventually do DML on one of their own tables and\nthereby execute your Trojan trigger. However, in the particular case\nof pg_read_all_settings, the intent is probably that nobody would ever\nrun as that user, and there is probably also no reason to create\ntables or other objects owned by that user. So maybe we really can say\nthat just blocking SET ROLE is enough.\n\nI'm slightly skeptical of that conclusion because the whole thing just\nfeels a bit flimsy. Like, the whole idea that you can compromise your\naccount by inserting a row into somebody else's table feels a little\nnuts to me. Triggers and row-level security policies make it easy to\ndo things that look safe and are actually very dangerous. I think\nanyone would reasonably expect that calling a function owned by some\nother user might be risky, because who knows what that function might\ndo, but it seems less obvious that accessing a table could execute\narbitrary code, yet it can. And it is even less obvious that creating\na table owned by one role might give some other role who inherits that\nuser's privileges to booby-trap that table in a way that might fool a\nthird user into doing something unsafe. 
But I have no idea what we\ncould reasonably do to improve the situation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 11:27:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n> This shows that if rhaas (or whoever) performs DML on a table owned by\n> pg_read_all_settings, he might trigger arbitrary code written by alice\n> to run under his own user ID. Now, that hazard would exist anyway for\n> tables owned by alice, but now it also exists for any tables owned by\n> pg_read_all_settings.\n\nThis hazard exists for all tables that alice has been granted the \nTRIGGER privilege on. While we prevent alice from creating tables that \nare owned by pg_read_all_settings, we do not prevent inheriting the \nTRIGGER privilege.\n\n> I'm slightly skeptical of that conclusion because the whole thing just\n> feels a bit flimsy. Like, the whole idea that you can compromise your\n> account by inserting a row into somebody else's table feels a little\n> nuts to me. Triggers and row-level security policies make it easy to\n> do things that look safe and are actually very dangerous. I think\n> anyone would reasonably expect that calling a function owned by some\n> other user might be risky, because who knows what that function might\n> do, but it seems less obvious that accessing a table could execute\n> arbitrary code, yet it can. And it is even less obvious that creating\n> a table owned by one role might give some other role who inherits that\n> user's privileges to booby-trap that table in a way that might fool a\n> third user into doing something unsafe. But I have no idea what we\n> could reasonably do to improve the situation.\n\nRight. 
This will always be the case when giving out the TRIGGER \nprivilege on one of your tables to somebody else.\n\nThere is two kind of TRIGGER privileges: An explicitly GRANTed privilege \nand an implicit privilege, that is given to the table owner.\n\nI think, when WITH INHERIT TRUE, SET FALSE is set, we should:\n- Inherit all explicitly granted privileges\n- Not inherit any DDL privileges implicitly given through ownership: \nCREATE, REFERENCES, TRIGGER.\n- Inherit all other privileges implicitly given through ownership (DML + \nothers)\n\nThose implicit DDL privileges should be considered part of WITH SET \nTRUE. When you can't do SET ROLE x, then you can't act as the owner of \nany object owned by x.\n\nOr to put it the other way around: Only allow implicit ownership \nprivileges to be executed when the CURRENT_USER is the owner. But \nprovide a shortcut, when you have the WITH SET TRUE option on a role, so \nthat you don't need to do SET ROLE + CREATE TRIGGER, but can just do \nCREATE TRIGGER instead. This is similar to the mental model of \n\"requesting and accepting a transfer of ownership\" with an implicit SET \nROLE built-in, that I used before.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Mon, 26 Sep 2022 18:16:46 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 12:16 PM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> I think, when WITH INHERIT TRUE, SET FALSE is set, we should:\n> - Inherit all explicitly granted privileges\n> - Not inherit any DDL privileges implicitly given through ownership:\n> CREATE, REFERENCES, TRIGGER.\n> - Inherit all other privileges implicitly given through ownership (DML +\n> others)\n\nI don't think we're going to be very happy if we redefine inheriting\nthe privileges of another role to mean inheriting only some of them.\nThat seems pretty counterintuitive to me. I also think that this\nparticular definition is pretty fuzzy.\n\nYour previous proposal was to make the SET attribute of a GRANT\ncontrol not only the ability to SET ROLE to the target role but also\nthe ability to create objects owned by that role and/or transfer\nobjects to that role. I think some people might find that behavior a\nlittle bit surprising - certainly, it goes beyond what the name SET\nimplies - but it is at least simple enough to explain in one sentence,\nand the consequences don't seem too difficult to reason about.\n\nHere, though, it doesn't really seem simple enough to explain in one\nsentence, nor does it seem easy to reason about. I'm not sure that\nthere's any firm distinction between DML privileges and DDL\nprivileges. I'm not sure that the privileges that you mention are all\nand only those that are security-relevant, nor that it would\nnecessarily remain true in the future even if it's true today.\n\nThere are many operations which are permitted or declined just using\nan owner-check. One example is commenting on an object. That sure\nsounds like it would fit within your proposed \"DDL privileges\nimplicitly given through ownership\" category, but it doesn't really\npresent any security hazard, so I do not think there is a good reason\nto restrict that from a user who has INHERIT TRUE, SET FALSE. Another\nis renaming an object, which is a little more murky. 
You can't\ndirectly usurp someone's privileges by renaming an object that they\nown, but you could potentially rename an object out of the way and\nreplace it with one that you own and thus induce a user to do\nsomething dangerous. I don't really want to make even small exceptions\nto the idea that inheriting a role's privileges means inheriting all\nof them, and I especially don't want to make large exceptions, or\nexceptions that involve judgement calls about the relative degree of\nrisk of each possible operation.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 13:15:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n> I don't think we're going to be very happy if we redefine inheriting\n> the privileges of another role to mean inheriting only some of them.\n> That seems pretty counterintuitive to me. I also think that this\n> particular definition is pretty fuzzy.\n\nScratch my previous suggestion. A new, less fuzzy definition would be: \nOwnership is not a privilege itself and as such not inheritable.\n\nWhen role A is granted to role B, two things happen:\n1. Role B now has the right to use the GRANTed privileges of role A.\n2. Role B now has the right to become role A via SET ROLE.\n\nWITH SET controls whether point 2 is the case or not.\n\nWITH INHERIT controls whether role B actually executes their right to \nuse those privileges (\"inheritance\") **and** whether the set role is \ndone implicitly for anything that requires ownership, but of course only \nWITH SET TRUE.\n\nThis is the same way that the role attributes INHERIT / NOINHERIT behave.\n\n> Your previous proposal was to make the SET attribute of a GRANT\n> control not only the ability to SET ROLE to the target role but also\n> the ability to create objects owned by that role and/or transfer\n> objects to that role. I think some people might find that behavior a\n> little bit surprising - certainly, it goes beyond what the name SET\n> implies - but it is at least simple enough to explain in one sentence,\n> and the consequences don't seem too difficult to reason about.\n\nThis would be included in the above.\n\n> Here, though, it doesn't really seem simple enough to explain in one\n> sentence, nor does it seem easy to reason about.\n\nI think the \"ownership is not inheritable\" idea is easy to explain.\n\n> There are many operations which are permitted or declined just using\n> an owner-check. One example is commenting on an object. That sure\n> sounds like it would fit within your proposed \"DDL privileges\n> implicitly given through ownership\" category, but it doesn't really\n> present any security hazard, so I do not think there is a good reason\n> to restrict that from a user who has INHERIT TRUE, SET FALSE. Another\n> is renaming an object, which is a little more murky. You can't\n> directly usurp someone's privileges by renaming an object that they\n> own, but you could potentially rename an object out of the way and\n> replace it with one that you own and thus induce a user to do\n> something dangerous. I don't really want to make even small exceptions\n> to the idea that inheriting a role's privileges means inheriting all\n> of them, and I especially don't want to make large exceptions, or\n> exceptions that involve judgement calls about the relative degree of\n> risk of each possible operation.\n\nI would not make this about security-risks only. We didn't distinguish \nbetween privileges and ownership that much before, because we didn't \nhave WITH INHERIT or WITH SET. Now that we have both, we could do so.\n\nThe ideas of \"inherited GRANTs\" and \"a shortcut to avoid SET ROLE to do \nowner-things\" should be better to explain.\n\nNo judgement required.\n\nAll of this is to find a way to make WITH INHERIT TRUE, SET FALSE a \n\"real\", risk-free thing - and not just some syntactic sugar. And if that \ncomes with the inability to COMMENT ON TABLE \nowned_by_pg_read_all_settings... fine. No need for that at all.\n\nHowever, it would come with the inability to do SELECT * FROM \nowned_by_pg_read_all_settings, **unless** explicitly GRANTed to the \nowner, too. This might feel strange at first, but should not be a \nproblem either. WITH INHERIT TRUE, SET FALSE is designed for built-in \nroles or other container roles that group a set of privileges. Those \nroles should not have objects they own anyway. And if they still do, \ndenying access to those objects unless explicitly granted is the safe way.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Mon, 26 Sep 2022 21:16:50 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Sep 8, 2022 at 1:06 PM <walther@technowledgy.de> wrote:\n> > In theory, I could also inherit that privilege, but that's not how the\n> > system works today. By using is_member_of_role, the decision was already\n> > made that this should not depend on inheritance. What is left, is the\n> > ability to do it via SET ROLE only.\n> \n> I do not accept the argument that we've already made the decision that\n> this should not depend on inheritance. It's pretty clear that we\n> haven't thought carefully enough about which checks should depend only\n> on membership, and which ones should depend on inheritance. The patch\n> I committed just now to fix ALTER DEFAULT PRIVILEGES is one clear\n> example of where we've gotten that wrong. We also changed the way\n> predefined roles worked with inheritance not too long ago, so that\n> they started using has_privs_of_role() rather than\n> is_member_of_role(). Our past thinking on this topic has been fuzzy\n> enough that we can't really conclude that because something uses\n> is_member_of_role() now that's what it should continue to do in the\n> future. We are working to get from a messy situation where the rules\n> aren't consistent or understandable to one where they are, and that\n> may mean changing some things.\n\nAgreed that we haven't been good about the distinction between these,\nbut that the recent work by Joshua and yourself has been moving us in\nthe right direction.\n\n> One could take the view that the issue here is that\n> pg_read_all_settings shouldn't have the right to create objects in the\n> first place, and that this INHERIT vs. SET ROLE distinction is just a\n> distraction. However, that would require accepting the idea that it's\n> possible for a role to lack privileges granted to PUBLIC, which also\n> sounds pretty unsatisfying. On the whole, I'm inclined to think it's\n> reasonable to suppose that if you want to grant a role to someone\n> without letting them create objects owned by that role, it should be a\n> role that doesn't own any existing objects either. Essentially, that's\n> legislating that predefined roles should be minimally privileged: they\n> should hold the ability to do whatever it is that they are there to do\n> (like read all settings) but not have any other privileges (like the\n> ability to do stuff to objects they own).\n\nPredefined roles are special in that they should GRANT just the\nprivileges that the role is described to GRANT and that users really\nshouldn't be able to SET ROLE to them nor should they be allowed to own\nobjects, or at least that's my general feeling on them.\n\nIf an administrator doesn't wish for a user to have the privileges\nprovided by the predefined role by default, they should be able to set\nthat up by creating another role who has that privilege which the user\nis able to SET ROLE to. That is:\n\nCREATE ROLE admin WITH INHERIT FALSE;\nGRANT pg_read_all_settings TO admin;\nGRANT admin TO alice;\n\nWould allow 'alice' to log in without the privileges associated with\npg_read_all_settings but 'alice' is able to SET ROLE admin; and gain\nthose privileges. It wasn't intended that 'alice' be able to SET ROLE\nto pg_read_all_settings itself though.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> Yeah, my statement before wasn't correct. It appears that alice can't\n> just usurp the privileges of pg_read_all_settings trivially, but she\n> can create a trigger on any preexisting table owned by\n> pg_read_all_settings and then anyone who performs an operation that\n> causes that trigger to fire is at risk:\n\nTriggers aren't the only thing to be worried about in this area-\nfunctions defined inside of views are also run with the privileges of\nthe user running the SELECT and not as the owner of the view. The same\nis true of running SELECT against tables with RLS too, of course.\nGenerally speaking, it's always been very risky to access the objects of\nusers who you don't trust in any way and we don't currently provide any\nparticularly easy way to make that kind of access safe. RLS at least\nprovides an escape by allowing a user to turn it off, but the same isn't\navailable for setting a search_path and then running queries or\naccessing views or running DML against tables with triggers.\n\n> I'm slightly skeptical of that conclusion because the whole thing just\n> feels a bit flimsy. Like, the whole idea that you can compromise your\n> account by inserting a row into somebody else's table feels a little\n> nuts to me. Triggers and row-level security policies make it easy to\n> do things that look safe and are actually very dangerous. I think\n> anyone would reasonably expect that calling a function owned by some\n> other user might be risky, because who knows what that function might\n> do, but it seems less obvious that accessing a table could execute\n> arbitrary code, yet it can. And it is even less obvious that creating\n> a table owned by one role might give some other role who inherits that\n> user's privileges to booby-trap that table in a way that might fool a\n> third user into doing something unsafe. But I have no idea what we\n> could reasonably do to improve the situation.\n\nJust to reiterate- this is not only about DML/triggers or RLS but also\nincludes SELECT statements against views and setting of the search_path\nto that owned by someone trying to compromise your account (though the\nlatter does require a bit more than just the SET itself).\n\nOne approach to dealing with this would be to have a mechanism to define\nexactly what code you feel comfortable running and set that to be only\nthe bootstrap superuser (or perhaps we'd have this as a superuser-set\nGUC list) and the current role and then fail any queries that end up\ncalling code owned by any other role.\n\n* Wolfgang Walther (walther@technowledgy.de) wrote:\n> Robert Haas:\n> > This shows that if rhaas (or whoever) performs DML on a table owned by\n> > pg_read_all_settings, he might trigger arbitrary code written by alice\n> > to run under his own user ID. Now, that hazard would exist anyway for\n> > tables owned by alice, but now it also exists for any tables owned by\n> > pg_read_all_settings.\n> \n> This hazard exists for all tables that alice has been granted the TRIGGER\n> privilege on. While we prevent alice from creating tables that are owned by\n> pg_read_all_settings, we do not prevent inheriting the TRIGGER privilege.\n\nThe issue here is that we don't prevent alice from issuing a 'SET' to\npg_read_all_settings nor do we prevent predefined roles from creating\nobjects. I'd be inclined to change the system to explicitly prevent\nboth of those things from being allowed- for the special case of\npredefined roles and not as some general role capability. Maybe there's\nan argument that we should allow administrators to create roles that no\nuser is allowed to SET to or which aren't allowed to create/own objects\nbut I'm not sure that there's a strong use-case for that. I do think\nit's useful to allow administrators to create roles that *some* users\nare allowed to have the privileges of but aren't allowed to SET to, but\nthe whole point of predefined roles is to use the role GRANT system as a\nmore flexible way to give out certain distinct privileges to certain\nroles and that's it.\n\n> > I'm slightly skeptical of that conclusion because the whole thing just\n> > feels a bit flimsy. Like, the whole idea that you can compromise your\n> > account by inserting a row into somebody else's table feels a little\n> > nuts to me. Triggers and row-level security policies make it easy to\n> > do things that look safe and are actually very dangerous. I think\n> > anyone would reasonably expect that calling a function owned by some\n> > other user might be risky, because who knows what that function might\n> > do, but it seems less obvious that accessing a table could execute\n> > arbitrary code, yet it can. And it is even less obvious that creating\n> > a table owned by one role might give some other role who inherits that\n> > user's privileges to booby-trap that table in a way that might fool a\n> > third user into doing something unsafe. But I have no idea what we\n> > could reasonably do to improve the situation.\n> \n> Right. This will always be the case when giving out the TRIGGER privilege on\n> one of your tables to somebody else.\n\nGiving out of the TRIGGER privilege is really just an ill-conceived idea\nthat we should acknowledge only exists because it's part of the\nstandard.\n\n> There is two kind of TRIGGER privileges: An explicitly GRANTed privilege and\n> an implicit privilege, that is given to the table owner.\n\nThat's really only half-right: TRIGGER is just one of the privileges\nthat the owner has if there's no ACL on the table. If the ACL is\ndefined and the owner's entry has the TRIGGER privilege removed then\nthey'll lose the ability to create triggers on that table. Of course,\nthe owner can simply GRANT that ability back to themselves if they wish\nbut it's a useful distinction to be aware of.\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Mon, Sep 26, 2022 at 12:16 PM Wolfgang Walther\n> <walther@technowledgy.de> wrote:\n> > I think, when WITH INHERIT TRUE, SET FALSE is set, we should:\n> > - Inherit all explicitly granted privileges\n> > - Not inherit any DDL privileges implicitly given through ownership:\n> > CREATE, REFERENCES, TRIGGER.\n> > - Inherit all other privileges implicitly given through ownership (DML +\n> > others)\n> \n> I don't think we're going to be very happy if we redefine inheriting\n> the privileges of another role to mean inheriting only some of them.\n> That seems pretty counterintuitive to me. I also think that this\n> particular definition is pretty fuzzy.\n\nI agree with Robert on this part. Inheriting the privileges of another\nrole should generally mean exactly that and not some awkward subset of\nthe privileges.\n\n> Your previous proposal was to make the SET attribute of a GRANT\n> control not only the ability to SET ROLE to the target role but also\n> the ability to create objects owned by that role and/or transfer\n> objects to that role. I think some people might find that behavior a\n> little bit surprising - certainly, it goes beyond what the name SET\n> implies - but it is at least simple enough to explain in one sentence,\n> and the consequences don't seem too difficult to reason about.\n\nI still feel it's useful to allow users to transfer objects to roles that\nthey can SET to even if they don't inherit the privileges of that role.\nI don't feel that should be allowed for predefined roles, however.\n\nOne thing that I don't want to miss mentioning is that I'm not against\nthe idea of predefined roles having ownership of some objects- but\nshould that happen (tho I tend to doubt it will, because we usually use\nthe bootstrap superuser for objects that admins can use but shouldn't be\nmucking around with and changing), those objects shouldn't be ones that\nare able to be messed with by anyone except a superuser running around\nwith allow_system_table_mods or such. \n\nThanks,\n\nStephen",
"msg_date": "Mon, 26 Sep 2022 15:40:08 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Greetings,\n\n* Wolfgang Walther (walther@technowledgy.de) wrote:\n> Robert Haas:\n> > I don't think we're going to be very happy if we redefine inheriting\n> > the privileges of another role to mean inheriting only some of them.\n> > That seems pretty counterintuitive to me. I also think that this\n> > particular definition is pretty fuzzy.\n> \n> Scratch my previous suggestion. A new, less fuzyy definition would be:\n> Ownership is not a privilege itself and as such not inheritable.\n\nOne of the reasons the role system was brought into being was explicitly\nto allow other roles to have ownership-level rights on objects that they\ndidn't directly own.\n\nI don't see us changing that.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 26 Sep 2022 15:41:11 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Mon, Sep 26, 2022 at 3:16 PM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> Robert Haas:\n> > I don't think we're going to be very happy if we redefine inheriting\n> > the privileges of another role to mean inheriting only some of them.\n> > That seems pretty counterintuitive to me. I also think that this\n> > particular definition is pretty fuzzy.\n>\n> Scratch my previous suggestion. A new, less fuzzy definition would be:\n> Ownership is not a privilege itself and as such not inheritable.\n>\n> When role A is granted to role B, two things happen:\n> 1. Role B now has the right to use the GRANTed privileges of role A.\n> 2. Role B now has the right to become role A via SET ROLE.\n>\n> WITH SET controls whether point 2 is the case or not.\n>\n> WITH INHERIT controls whether role B actually executes their right to\n> use those privileges (\"inheritance\") **and** whether the set role is\n> done implicitly for anything that requires ownership, but of course only\n> WITH SET TRUE.\n\nIf I'm understanding correctly, this would amount to a major\nredefinition of what it means to inherit privileges, and I think the\nchances of such a change being accepted are approximately zero.\nInheriting privileges needs to keep meaning what it means now, namely,\nyou inherit all the rights of the granted role.\n\n> > Here, though, it doesn't really seem simple enough to explain in one\n> > sentence, nor does it seem easy to reason about.\n>\n> I think the \"ownership is not inheritable\" idea is easy to explain.\n\nI don't. And even if I did think it were easy to explain, I don't\nthink it would be a good idea. One of my first patches to PostgreSQL\nadded a grantable TRUNCATE privilege to tables. I think that, under\nyour proposed definitions, the addition of this privilege would have\nhad the result that a role grant would cease to allow the recipient to\ntruncate tables owned by the granted role. There is currently a\nproposal on the table to make VACUUM and ANALYZE grantable permissions\non tables, which would have the same issue. I think that if I made it\nso that adding such privileges resulted in role inheritance not\nworking for those operations any more, people would come after me with\npitchforks. And I wouldn't blame them: that sounds terrible.\n\nI think the only thing we should be discussing here is how to tighten\nup the tests for operations in categories (1) and (2) in my original\nemail. The options so far proposed are: (a) do nothing, which makes\nthe proposed SET option on grants a lot less useful; (b) restrict\nthose operations by has_privs_of_role(), basically making them\ndependent on the INHERIT option, (c) restrict them by\nhas_privs_of_role() || member_can_set_role(), requiring either the\nINHERIT option or the SET option, or (d) restrict them by\nmember_can_set_role() only, i.e. making them depend on the SET option\nalone. A broader reworking of what the INHERIT option means is not on\nthe table: I don't want to write a patch for it, I don't think it's a\ngood idea, and I don't think the community would accept it even if I\ndid want to write a patch for it and even if I did think it was a good\nidea.\n\nI would like to hear more opinions on that topic. I understand your\nvote from among those four options to be (d). I do not know what\nanyone else thinks.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:24:06 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "Robert Haas:\n>> Scratch my previous suggestion. A new, less fuzyy definition would be:\n>> Ownership is not a privilege itself and as such not inheritable.\n>> [...]\n> If I'm understanding correctly, this would amount to a major\n> redefinition of what it means to inherit privileges, and I think the\n> chances of such a change being accepted are approximately zero.\n> Inheriting privileges needs to keep meaning what it means now, namely,\n> you inherit all the rights of the granted role.\n\nNo. Inheriting stays the same, it's just WITH SET that's different from \nwhat it is \"now\".\n\n> I don't. And even if I did think it were easy to explain, I don't\n> think it would be a good idea. One of my first patches to PostgreSQL\n> added a grantable TRUNCATE privilege to tables. I think that, under\n> your proposed definitions, the addition of this privilege would have\n> had the result that a role grant would cease to allow the recipient to\n> truncate tables owned by the granted role. There is currently a\n> proposal on the table to make VACUUM and ANALYZE grantable permissions\n> on tables, which would have the same issue. I think that if I made it\n> so that adding such privileges resulted in role inheritance not\n> working for those operations any more, people would come after me with\n> pitchforks. And I wouldn't blame them: that sounds terrible.\n\nNo, there is a misunderstanding. In my proposal, when you do WITH SET \nTRUE everything stays exactly the same as it is right now.\n\nI'm just saying WITH SET FALSE should take away more of the things you \ncan do (all the ownership things) to a point where it's safe to GRANT .. \nWITH INHERIT TRUE, SET FALSE and still be useful for pre-defined or \nprivilege-container roles.\n\nCould be discussed in the WITH SET thread, but it's a natural extension \nof the categories (1) and (2) in your original email. It's all about \nownership.\n\nBest\n\nWolfgang\n\n\n",
"msg_date": "Tue, 27 Sep 2022 08:05:23 +0200",
"msg_from": "Wolfgang Walther <walther@technowledgy.de>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 2:05 AM Wolfgang Walther\n<walther@technowledgy.de> wrote:\n> I'm just saying WITH SET FALSE should take away more of the things you\n> can do (all the ownership things) to a point where it's safe to GRANT ..\n> WITH INHERIT TRUE, SET FALSE and still be useful for pre-defined or\n> privilege-container roles.\n\nI don't see that as viable, either. It's too murky what you'd have to\ntake away to make it safe, and it sounds like stuff that naturally\nfalls under INHERIT rather than SET.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Sep 2022 07:55:14 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Mon, 2022-09-19 at 15:32 -0400, Robert Haas wrote:\n> One could take the view that the issue here is that\n> pg_read_all_settings shouldn't have the right to create objects in\n> the\n> first place, and that this INHERIT vs. SET ROLE distinction is just a\n> distraction. However, that would require accepting the idea that it's\n> possible for a role to lack privileges granted to PUBLIC, which also\n> sounds pretty unsatisfying. On the whole, I'm inclined to think it's\n> reasonable to suppose that if you want to grant a role to someone\n> without letting them create objects owned by that role, it should be\n> a\n> role that doesn't own any existing objects either. Essentially,\n> that's\n> legislating that predefined roles should be minimally privileged:\n> they\n> should hold the ability to do whatever it is that they are there to\n> do\n> (like read all settings) but not have any other privileges (like the\n> ability to do stuff to objects they own).\n\nI like this approach -- the idea that you can create a role that can't\nown anything, can't create anything, and to which nobody else can \"SET\nROLE\".\n\nCreating a \"virtual\" role like that feels much more declarative and\neasy to document: \"this isn't a real user, it's just a collection of\ninheritable privileges\". Even superusers couldn't \"SET ROLE\npg_read_all_settings\" or \"OWNER TO pg_signal_backend\".\n\nI wouldn't call it \"minimally privileged\" (which feels wrong because it\nwouldn't even have privileges on PUBLIC, as you say); I'd just say that\nit's a type of role where those things just don't make sense.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 20 Oct 2022 12:57:43 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
},
{
"msg_contents": "On Mon, 2022-09-26 at 15:40 -0400, Stephen Frost wrote:\n> Predefined roles are special in that they should GRANT just the\n> privileges that the role is described to GRANT and that users really\n> shouldn't be able to SET ROLE to them nor should they be allowed to\n> own\n> objects, or at least that's my general feeling on them.\n\nWhat about granting privileges to others? I don't think that makes\nsense for a predefined role, either, because then they'd own a bunch of\ngrants, which is as awkward as owning objects.\n\n> If an administrator doesn't wish for a user to have the privileges\n> provided by the predefined role by default, they should be able to\n> set\n> that up by creating another role who has that privilege which the\n> user\n> is able to SET ROLE to.\n\nAnd that other role could be used for grants, if needed, too.\n\nBut I don't think we need to special-case predefined roles though. I\nthink a lot of administrators would like to declare some roles that are\njust a collection of inheritable privileges.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 20 Oct 2022 13:05:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: has_privs_of_role vs. is_member_of_role, redux"
}
] |
[
{
"msg_contents": ">The postgres_fdw tests contain this (as amended by patch 0001):\n\n>ALTER SERVER loopback_nopw OPTIONS (ADD password 'dummypw');\n>ERROR: invalid option \"password\"\n>HINT: Valid options in this context are: service, passfile,\n>channel_binding, connect_timeout, dbname, host, hostaddr, port, options,\n>application_name, keepalives, keepalives_idle, keepalives_interval,\n>keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert,\n>sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer,\n>ssl_min_protocol_version, ssl_max_protocol_version, gssencmode,\n>krbsrvname, gsslib, target_session_attrs, use_remote_estimate,\n>fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable,\n>fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n\n>This annoys developers who are working on libpq connection options,\n>because any option added, removed, or changed causes this test to need\n>to be updated.\n\n>It's also questionable how useful this hint is in its current form,\n>considering how long it is and that the options are in an\n>implementation-dependent order.\n\n>Possible changes:\n\n>- Hide the hint from this particular test (done in the attached patches).\n+1\nI vote for this option.\nLess work for future developers changes, I think worth the effort.\n\nAnyway, in alphabetical order, it's a lot easier for users to read.\n\nPatch attached.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 25 Aug 2022 14:31:41 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "re: postgres_fdw hint messages"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 2:31 PM Ranier Vilela <ranier.vf@gmail.com>\nwrote:\n\n> >The postgres_fdw tests contain this (as amended by patch 0001):\n>\n> >ALTER SERVER loopback_nopw OPTIONS (ADD password 'dummypw');\n> >ERROR: invalid option \"password\"\n> >HINT: Valid options in this context are: service, passfile,\n> >channel_binding, connect_timeout, dbname, host, hostaddr, port, options,\n> >application_name, keepalives, keepalives_idle, keepalives_interval,\n> >keepalives_count, tcp_user_timeout, sslmode, sslcompression, sslcert,\n> >sslkey, sslrootcert, sslcrl, sslcrldir, sslsni, requirepeer,\n> >ssl_min_protocol_version, ssl_max_protocol_version, gssencmode,\n> >krbsrvname, gsslib, target_session_attrs, use_remote_estimate,\n> >fdw_startup_cost, fdw_tuple_cost, extensions, updatable, truncatable,\n> >fetch_size, batch_size, async_capable, parallel_commit, keep_connections\n>\n> >This annoys developers who are working on libpq connection options,\n> >because any option added, removed, or changed causes this test to need\n> >to be updated.\n>\n> >It's also questionable how useful this hint is in its current form,\n> >considering how long it is and that the options are in an\n> >implementation-dependent order.\n>\n> >Possible changes:\n>\n> >- Hide the hint from this particular test (done in the attached patches).\n> +1\n> I vote for this option.\n> Less work for future developers changes, I think worth the effort.\n>\n> Anyway, in alphabetical order, it's a lot easier for users to read.\n>\n> Patch attached.\n>\nLittle tweak in the comments.\n\nregards,\nRanier Vilela\n\n>\n>",
"msg_date": "Thu, 25 Aug 2022 14:49:20 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw hint messages"
}
] |
[
{
"msg_contents": "I realized $SUBJECT while wondering why my new buildfarm animal chickadee\n(NetBSD on gaur's old hardware) fails the plpython tests on v13 and\nearlier. After a bit of investigation I realized it *should* be failing,\nbecause neither NetBSD nor Python have done anything about the problem\ndocumented in [1]. The reason it fails to fail in current branches is\nthat we're now pulling -lpthread into the backend, which AFAICT is an\nunintentional side-effect of sloppy autoconfmanship in commits\nde91c3b97 / 44bf3d508. We wanted pthread_barrier_wait() for pgbench,\nnot the backend, but as-committed we'll add -lpthread to LIBS if it\nprovides pthread_barrier_wait.\n\nNow maybe someday we'll be brave enough to make the backend multithreaded,\nbut today is not that day, and in the meantime this seems like a rather\ndangerous situation. There has certainly been exactly zero analysis\nof whether it's safe.\n\n... On the third hand, poking at backends with ldd shows that at\nleast on Linux, we've been linking the backend with -lpthread for\nquite some time, back to 9.4 or so. The new-in-v14 behavior is that\nit's getting in there on BSD-ish platforms as well.\n\nShould we try to pull that back out, or just cross our fingers and\nhope there's no real problem?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/25662.1560896200%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 25 Aug 2022 13:41:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "V14 and later build the backend with -lpthread"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I realized $SUBJECT while wondering why my new buildfarm animal chickadee\n> (NetBSD on gaur's old hardware) fails the plpython tests on v13 and\n> earlier. After a bit of investigation I realized it *should* be failing,\n> because neither NetBSD nor Python have done anything about the problem\n> documented in [1]. The reason it fails to fail in current branches is\n> that we're now pulling -lpthread into the backend, which AFAICT is an\n> unintentional side-effect of sloppy autoconfmanship in commits\n> de91c3b97 / 44bf3d508. We wanted pthread_barrier_wait() for pgbench,\n> not the backend, but as-committed we'll add -lpthread to LIBS if it\n> provides pthread_barrier_wait.\n>\n> Now maybe someday we'll be brave enough to make the backend multithreaded,\n> but today is not that day, and in the meantime this seems like a rather\n> dangerous situation. There has certainly been exactly zero analysis\n> of whether it's safe.\n>\n> ... On the third hand, poking at backends with ldd shows that at\n> least on Linux, we've been linking the backend with -lpthread for\n> quite some time, back to 9.4 or so. The new-in-v14 behavior is that\n> it's getting in there on BSD-ish platforms as well.\n>\n> Should we try to pull that back out, or just cross our fingers and\n> hope there's no real problem?\n\nAbsent some evidence of a real problem, I vote for crossing our\nfingers. It would certainly be a very bad idea to start using pthreads\nwilly-nilly in the back end, but the mere presence of the library\ndoesn't seem like a particularly severe issue. I might feel\ndifferently if no such version had been released yet, but it's hard to\nfeel like the sky is falling if it's been like this on Linux since\n9.4.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:40:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 8:40 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Aug 25, 2022 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I realized $SUBJECT while wondering why my new buildfarm animal chickadee\n> > (NetBSD on gaur's old hardware) fails the plpython tests on v13 and\n> > earlier. After a bit of investigation I realized it *should* be failing,\n> > because neither NetBSD nor Python have done anything about the problem\n> > documented in [1]. The reason it fails to fail in current branches is\n> > that we're now pulling -lpthread into the backend, which AFAICT is an\n> > unintentional side-effect of sloppy autoconfmanship in commits\n> > de91c3b97 / 44bf3d508. We wanted pthread_barrier_wait() for pgbench,\n> > not the backend, but as-committed we'll add -lpthread to LIBS if it\n> > provides pthread_barrier_wait.\n> >\n> > Now maybe someday we'll be brave enough to make the backend multithreaded,\n> > but today is not that day, and in the meantime this seems like a rather\n> > dangerous situation. There has certainly been exactly zero analysis\n> > of whether it's safe.\n> >\n> > ... On the third hand, poking at backends with ldd shows that at\n> > least on Linux, we've been linking the backend with -lpthread for\n> > quite some time, back to 9.4 or so. The new-in-v14 behavior is that\n> > it's getting in there on BSD-ish platforms as well.\n> >\n> > Should we try to pull that back out, or just cross our fingers and\n> > hope there's no real problem?\n>\n> Absent some evidence of a real problem, I vote for crossing our\n> fingers. It would certainly be a very bad idea to start using pthreads\n> willy-nilly in the back end, but the mere presence of the library\n> doesn't seem like a particularly severe issue. I might feel\n> differently if no such version had been released yet, but it's hard to\n> feel like the sky is falling if it's been like this on Linux since\n> 9.4.\n\nI suspect we will end up linked against the threading library anyway\nin real-world builds via --with-XXX (I see that --with-icu has that\neffect on my FreeBSD system, but I know that details about threading\nare quite different in NetBSD). I may lack imagination but I'm\nstruggling to see how it could break anything.\n\nHow should I have done that, by the way? Is the attached the right trick?",
"msg_date": "Fri, 26 Aug 2022 08:51:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Aug 25, 2022 at 1:41 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... On the third hand, poking at backends with ldd shows that at\n>> least on Linux, we've been linking the backend with -lpthread for\n>> quite some time, back to 9.4 or so. The new-in-v14 behavior is that\n>> it's getting in there on BSD-ish platforms as well.\n\n[ further study shows that it's been pulled in on Linux to get sem_init() ]\n\n>> Should we try to pull that back out, or just cross our fingers and\n>> hope there's no real problem?\n\n> Absent some evidence of a real problem, I vote for crossing our\n> fingers. It would certainly be a very bad idea to start using pthreads\n> willy-nilly in the back end, but the mere presence of the library\n> doesn't seem like a particularly severe issue. I might feel\n> differently if no such version had been released yet, but it's hard to\n> feel like the sky is falling if it's been like this on Linux since\n> 9.4.\n\nWell, -lpthread on other platforms might have more or different\nside-effects than it does on Linux, so I'm not particularly comforted\nby that argument. I concede though that the lack of complaints about\nv14 is comforting. I'm prepared to do nothing for now; I just wanted\nto raise visibility of this point so that if we do come across any\nweird pre-vs-post-v14 issues, we think of this as a possible cause.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:56:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I suspect we will end up linked against the threading library anyway\n> in real-world builds via --with-XXX (I see that --with-icu has that\n> effect on my FreeBSD system, but I know that details about threading\n> are quite different in NetBSD). I may lack imagination but I'm\n> struggling to see how it could break anything.\n\nYeah, there are plenty of situations where you end up with thread\nsupport present somehow. So it may be a lost cause. I was mostly\nconcerned about this because it seemed like an unintentional change.\n\n(I'm also still struggling to explain why mamba, with the *exact*\nsame NetBSD code on a different hardware platform, isn't showing\nthe same failures as chickadee. More news if I figure that out.)\n\n> How should I have done that, by the way? Is the attached the right trick?\n\nI think that'd do for preventing side-effects on LIBS, but I'm not\nsure if we'd have to back-fill something in pgbench's link options.\nAnyway, as I said to Robert, I'm content to watch and wait for now.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:04:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
},
{
"msg_contents": "I wrote:\n> (I'm also still struggling to explain why mamba, with the *exact*\n> same NetBSD code on a different hardware platform, isn't showing\n> the same failures as chickadee. More news if I figure that out.)\n\nHah: I left --with-libxml out of chickadee's configuration, because\nlibxml2 seemed to have some problems on that platform, and that is\nwhat is pulling in libpthread on mamba:\n\n$ ldd /usr/pkg/lib/libxml2.so\n/usr/pkg/lib/libxml2.so:\n -lz.1 => /usr/lib/libz.so.1\n -lc.12 => /usr/lib/libc.so.12\n -llzma.2 => /usr/lib/liblzma.so.2\n -lpthread.1 => /lib/libpthread.so.1\n -lm.0 => /usr/lib/libm.so.0\n -lgcc_s.1 => /lib/libgcc_s.so.1\n\nReinforces your point about real-world builds, I suppose.\n\nFor the moment I'll just disable testing plpython pre-v14 on\nchickadee.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:33:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-25 17:04:37 -0400, Tom Lane wrote:\n> (I'm also still struggling to explain why mamba, with the *exact*\n> same NetBSD code on a different hardware platform, isn't showing\n> the same failures as chickadee. More news if I figure that out.)\n\nI'd guess it's because of the different dependencies that are enabled. On my\nnetbsd VM libxml2 pulls in -lpthread, for example. We add xml2's dependencies\nto LIBS, so if that's enabled, we end up indirectly pulling libxml2 in as\nwell.\n\n\n> > How should I have done that, by the way? Is the attached the right trick?\n>\n> I think that'd do for preventing side-effects on LIBS, but I'm not\n> sure if we'd have to back-fill something in pgbench's link options.\n> Anyway, as I said to Robert, I'm content to watch and wait for now.\n\nGiven that linking in pthreads support fixes things, that seems the right\ncourse... I wonder if we shouldn't even be more explicit about it and just add\nit - who knows what extension libraries pull in. It'd not be good if we end up\nwith non-reentrant versions of functions just because initially the backend\nisn't threaded etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 25 Aug 2022 14:34:49 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: V14 and later build the backend with -lpthread"
}
] |
[
{
"msg_contents": "Hi,\r\n\r\nWe will be releasing a PostgreSQL 15 Beta 4 on September 8, 2022.\r\n\r\nPlease have open items[1] completed and committed no later than \r\nSeptember 5, 2022 0:00 AoE[2].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\r\n[2] https://en.wikipedia.org/wiki/Anywhere_on_Earth",
"msg_date": "Thu, 25 Aug 2022 17:14:45 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 15 Beta 4"
}
] |
[
{
"msg_contents": "Attached patch series is a completely overhauled version of earlier\nwork on freezing. Related work from the Postgres 15 cycle became\ncommits 0b018fab, f3c15cbe, and 44fa8488.\n\nRecap\n=====\n\nThe main high level goal of this work is to avoid painful, disruptive\nantiwraparound autovacuums (and other aggressive VACUUMs) that do way\ntoo much \"catch up\" freezing, all at once, causing significant\ndisruption to production workloads. The patches teach VACUUM to care\nabout how far behind it is on freezing for each table -- the number of\nunfrozen all-visible pages that have accumulated so far is directly\nand explicitly kept under control over time. Unfrozen pages can be\nseen as debt. There isn't necessarily anything wrong with getting into\ndebt (getting into debt to a small degree is all but inevitable), but\ndebt can be dangerous when it isn't managed carefully. Accumulating\nlarge amounts of debt doesn't always end badly, but it does seem to\nreliably create the *risk* that things will end badly.\n\nRight now, a standard append-only table could easily do *all* freezing\nin aggressive/antiwraparound VACUUM, without any earlier\nnon-aggressive VACUUM operations triggered by\nautovacuum_vacuum_insert_threshold doing any freezing at all (unless\nthe user goes out of their way to tune vacuum_freeze_min_age). There\nis currently no natural limit on the number of unfrozen all-visible\npages that can accumulate -- unless you count age(relfrozenxid), the\ntriggering condition for antiwraparound autovacuum. But relfrozenxid\nage predicts almost nothing about how much freezing is required (or\nwill be required later on). The overall result is that it often takes\nfar too long for freezing to finally happen, even when the table\nreceives plenty of autovacuums (they all could freeze something, but\nin practice just don't freeze anything). 
It's very hard to avoid that\nthrough tuning, because what we really care about is something pretty\nclosely related to (if not exactly) the number of unfrozen heap pages\nin the system. XID age is fundamentally \"the wrong unit\" here -- the\nphysical cost of freezing is the most important thing, by far.\n\nIn short, the goal of the patch series/project is to make autovacuum\nscheduling much more predictable over time. Especially with very large\nappend-only tables. The patches improve the performance stability of\nVACUUM by managing costs holistically, over time. What happens in one\nsingle VACUUM operation is much less important than the behavior of\nsuccessive VACUUM operations over time.\n\nWhat's new: freezing/skipping strategies\n========================================\n\nThis newly overhauled version introduces the concept of\nper-VACUUM-operation strategies, which we decide on once per VACUUM,\nat the very start. There are 2 choices to be made at this point (right\nafter we acquire OldestXmin and similar cutoffs):\n\n1) Do we scan all-visible pages, or do we skip instead? (Added by\nsecond patch, involves a trade-off between eagerness and laziness.)\n2) How should we freeze -- eagerly or lazily? (Added by third patch)\n\nThe strategy-based approach can be thought of as something that blurs\nthe distinction between aggressive and non-aggressive VACUUM, giving\nVACUUM more freedom to do either more or less work, based on known\ncosts and benefits. This doesn't completely supersede\naggressive/antiwraparound VACUUMs, but should make them much rarer\nwith larger tables, where controlling freeze debt actually matters.\nThere is a need to keep laziness and eagerness in balance here. We try\nto get the benefit of lazy behaviors/strategies, but will still course\ncorrect when it doesn't work out.\n\nA new GUC/reloption called vacuum_freeze_strategy_threshold is added\nto control freezing strategy (also influences our choice of skipping\nstrategy). 
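To illustrate the shape of that per-table decision (a rough sketch only -- the real logic is C code in the patch; the function name and units here are made up for clarity, and the 4GB default matches the description that follows):

```python
# Illustrative sketch of per-table freezing strategy selection, as
# described above.  Not the patch's actual code: the names here are
# invented, and the real implementation lives in the C backend.

GB = 1024 ** 3

def choose_freeze_strategy(rel_size_bytes, freeze_strategy_threshold=4 * GB):
    # Tables below the threshold keep today's lazy freezing behavior;
    # larger tables freeze eagerly, to keep freeze debt under control.
    return "lazy" if rel_size_bytes < freeze_strategy_threshold else "eager"
```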
It defaults to 4GB, so tables smaller than that cutoff\n(which are usually the majority of all tables) will continue to freeze\nin much the same way as today by default. Our current lazy approach to\nfreezing makes sense there, and should be preserved for its own sake.\n\nCompatibility\n=============\n\nStructuring the new freezing behavior as an explicit user-configurable\nstrategy is also useful as a bridge between the old and new freezing\nbehaviors. It makes it fairly easy to get the old/current behavior\nwhere that's preferred -- which, I must admit, is something that\nwasn't well thought through last time around. The\nvacuum_freeze_strategy_threshold GUC is effectively (though not\nexplicitly) a compatibility option. Users that want something close to\nthe old/current behavior can use the GUC or reloption to more or less\nopt-out of the new freezing behavior, and can do so selectively. The\nGUC should be easy for users to understand, too -- it's just a table\nsize cutoff.\n\nSkipping pages using a snapshot of the visibility map\n=====================================================\n\nWe now take a copy of the visibility map at the point that VACUUM\nbegins, and work off of that when skipping, instead of working off of\nthe mutable/authoritative VM -- this is a visibility map snapshot.\nThis new infrastructure helps us to decide on a skipping strategy.\nEvery non-aggressive VACUUM operation now has a choice to make: Which\nskipping strategy should it use? (This was introduced as\nitem/question #1 a moment ago.)\n\nThe decision on skipping strategy is a decision about our priorities\nfor this table, at this time: Is it more important to advance\nrelfrozenxid early (be eager), or to skip all-visible pages instead\n(be lazy)? If it's the former, then we must scan every single page\nthat isn't all-frozen according to the VM snapshot (including every\nall-visible page). If it's the latter, we'll scan exactly 0\nall-visible pages. 
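The scanned_pages arithmetic for the two skipping strategies can be sketched like this (illustrative pseudocode, not the patch itself; the page counts would come from the VM snapshot):

```python
# Sketch of how many pages each skipping strategy scans, per the rules
# above.  all_frozen_pages / all_visible_pages are page counts taken
# from the VM snapshot; "all_visible" here means all-visible but NOT
# all-frozen.  Names are illustrative only.

def scanned_pages(rel_pages, all_frozen_pages, all_visible_pages, strategy):
    if strategy == "eager":
        # Scan every page that isn't all-frozen, including every
        # all-visible page, so relfrozenxid can be advanced.
        return rel_pages - all_frozen_pages
    # Lazy: skip all-frozen pages and every all-visible page too
    # (scan exactly 0 of them).
    return rel_pages - all_frozen_pages - all_visible_pages
```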
Either way, once a decision has been made, we don't\nleave much to chance -- we commit. ISTM that this is the only approach\nthat really makes sense. Fundamentally, we advance relfrozenxid a\ntable at a time, and at most once per VACUUM operation. And for larger\ntables it's just impossible as a practical matter to have frequent\nVACUUM operations. We ought to be *somewhat* biased in the direction\nof advancing relfrozenxid by *some* amount during each VACUUM, even\nwhen relfrozenxid isn't all that old right now.\n\nA strategy (whether for skipping or for freezing) is a big, up-front\ndecision -- and there are certain kinds of risks that naturally\naccompany that approach. The information driving the decision had\nbetter be fairly reliable! By using a VM snapshot, we can choose our\nskipping strategy based on precise information about how many *extra*\npages we will have to scan if we go with eager scanning/relfrozenxid\nadvancement. Concurrent activity cannot change what we scan and what\nwe skip, either -- everything is locked in from the start. That seems\nimportant to me. It justifies trying to advance relfrozenxid early,\njust because the added cost of scanning any all-visible pages happens\nto be low.\n\nThis is quite a big shift for VACUUM, at least in some ways. The patch\nadds a DETAIL to the \"starting vacuuming\" INFO message shown by VACUUM\nVERBOSE. 
The VERBOSE output is already supposed to work as a\nrudimentary progress indicator (at least when it is run at the\ndatabase level), so it now shows the final scanned_pages up-front,\nbefore the physical scan of the heap even begins:\n\nregression=# vacuum verbose tenk1;\nINFO: vacuuming \"regression.public.tenk1\"\nDETAIL: total table size is 486 pages, 3 pages (0.62% of total) must be scanned\nINFO: finished vacuuming \"regression.public.tenk1\": index scans: 0\npages: 0 removed, 486 remain, 3 scanned (0.62% of total)\n*** SNIP ***\nsystem usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\nVACUUM\n\nI included this VERBOSE tweak in the second patch because it became\nnatural with VM snapshots, and not because it felt particularly\ncompelling -- scanned_pages just works like this now (an assertion\nverifies that our initial scanned_pages is always an exact match to\nwhat happened during the physical scan, in fact).\n\nThere are many things that VM snapshots might also enable that aren't\nparticularly related to freeze debt. VM snapshotting has the potential\nto enable more flexible behavior by VACUUM. I'm thinking of things\nlike suspend-and-resume for VACUUM/autovacuum, or even autovacuum\nscheduling that coordinates autovacuum workers before and during\nprocessing by vacuumlazy.c. Locking-in scanned_pages up-front avoids\nthe main downside that comes with throttling VACUUM right now: the\nfact that simply taking our time during VACUUM will tend to increase\nthe number of concurrently modified pages that we end up scanning.\nThese pages are bound to mostly just contain \"recently dead\" tuples\nthat the ongoing VACUUM can't do much about anyway -- we could dirty a\nlot more heap pages as a result, for little to no benefit.\n\nNew patch to avoid allocating MultiXacts\n========================================\n\nThe fourth and final patch is also new. 
It corrects an undesirable\nconsequence of the work done by the earlier patches: it makes VACUUM\navoid allocating new MultiXactIds (unless it's fundamentally\nimpossible, like in a VACUUM FREEZE). With just the first 3 patches\napplied, VACUUM will naively process xmax using a cutoff XID that\ncomes from OldestXmin (and not FreezeLimit, which is how it works on\nHEAD). But with the fourth patch in place VACUUM applies an XID cutoff\nof either OldestXmin or FreezeLimit selectively, based on the costs\nand benefits for any given xmax.\n\nJust like in lazy_scan_noprune, the low level xmax-freezing code can\npick and choose as it goes, within certain reasonable constraints. We\nmust accept an older final relfrozenxid/relminmxid value for the rel's\nauthoritative pg_class tuple as a consequence of avoiding xmax\nprocessing, of course, but that shouldn't matter at all (it's\ndefinitely better than the alternative).\n\nReducing the WAL space overhead of freezing\n===========================================\n\nNot included in this new v1 are other patches that control the\noverhead of added freezing -- my focus since joining AWS has been to\nget these more strategic patches in shape, and telling the right story\nabout what I'm trying to do here. I'm going to say a little on the\npatches that I have in the pipeline here, though. Getting the\nlow-level/mechanical overhead of freezing under control will probably\nrequire a few complementary techniques, not just high-level strategies\n(though the strategy stuff is the most important piece).\n\nThe really interesting omitted-in-v1 patch adds deduplication of\nxl_heap_freeze_page WAL records. This reduces the space overhead of\nWAL records used to freeze by ~5x in most cases. 
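The arithmetic behind that estimate is simple (a back-of-envelope sketch only, assuming the 12-byte-per-tuple freeze plans and 2-byte offset numbers described below, and one shared plan per page):

```python
# Back-of-envelope estimate of xl_heap_freeze_page space per heap page,
# before and after deduplicating freeze plans.  Assumes every frozen
# tuple on the page shares a single 12-byte freeze plan and costs one
# 2-byte OffsetNumber entry after deduplication.  Illustrative only.

FREEZE_PLAN_BYTES = 12   # per-tuple freeze plan, stored naively
OFFSET_NUMBER_BYTES = 2  # per-tuple cost once plans are deduplicated

def naive_bytes(ntuples_frozen):
    return ntuples_frozen * FREEZE_PLAN_BYTES

def dedup_bytes(ntuples_frozen, ndistinct_plans=1):
    return (ndistinct_plans * FREEZE_PLAN_BYTES
            + ntuples_frozen * OFFSET_NUMBER_BYTES)
```

With, say, 50 frozen tuples per page and one shared plan, that's 600 bytes versus 112 -- right around the ~5x figure.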
It works in the\nobvious way: we just store the 12 byte freeze plans that appear in\neach xl_heap_freeze_page record only once, and then store an array of\nitem offset numbers for each entry (rather than naively storing a full\n12 bytes per tuple frozen per page-level WAL record). This means that\nwe only need an \"extra\" ~2 bytes of WAL space per \"extra\" tuple frozen\n(2 bytes for an OffsetNumber) once we decide to freeze something on\nthe same page. The *marginal* cost can be much lower than it is today,\nwhich makes page-based batching of freezing much more compelling IMV.\n\nThoughts?\n--\nPeter Geoghegan",
"msg_date": "Thu, 25 Aug 2022 14:21:12 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On 8/25/22 2:21 PM, Peter Geoghegan wrote:\n> \n> New patch to avoid allocating MultiXacts\n> ========================================\n> \n> The fourth and final patch is also new. It corrects an undesirable\n> consequence of the work done by the earlier patches: it makes VACUUM\n> avoid allocating new MultiXactIds (unless it's fundamentally\n> impossible, like in a VACUUM FREEZE). With just the first 3 patches\n> applied, VACUUM will naively process xmax using a cutoff XID that\n> comes from OldestXmin (and not FreezeLimit, which is how it works on\n> HEAD). But with the fourth patch in place VACUUM applies an XID cutoff\n> of either OldestXmin or FreezeLimit selectively, based on the costs\n> and benefits for any given xmax.\n> \n> Just like in lazy_scan_noprune, the low level xmax-freezing code can\n> pick and choose as it goes, within certain reasonable constraints. We\n> must accept an older final relfrozenxid/relminmxid value for the rel's\n> authoritative pg_class tuple as a consequence of avoiding xmax\n> processing, of course, but that shouldn't matter at all (it's\n> definitely better than the alternative).\n\nWe should be careful here. IIUC, the current autovac behavior helps\nbound the \"spread\" or range of active multixact IDs in the system, which\ndirectly determines the number of distinct pages that contain those\nmultixacts. If the proposed change herein causes the spread/range of\nMXIDs to significantly increase, then it will increase the number of\nblocks and increase the probability of thrashing on the SLRUs for these\ndata structures. There may be another separate thread or two about\nissues with SLRUs already?\n\n-Jeremy\n\n\nPS. see also\nhttps://www.postgresql.org/message-id/247e3ce4-ae81-d6ad-f54d-7d3e0409a950@ardentperf.com\n\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n\n",
"msg_date": "Thu, 25 Aug 2022 15:35:17 -0700",
"msg_from": "Jeremy Schneider <schnjere@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 3:35 PM Jeremy Schneider <schnjere@amazon.com> wrote:\n> We should be careful here. IIUC, the current autovac behavior helps\n> bound the \"spread\" or range of active multixact IDs in the system, which\n> directly determines the number of distinct pages that contain those\n> multixacts. If the proposed change herein causes the spread/range of\n> MXIDs to significantly increase, then it will increase the number of\n> blocks and increase the probability of thrashing on the SLRUs for these\n> data structures.\n\nAs a general rule VACUUM will tend to do more eager freezing with the\npatch set compared to HEAD, though it should never do less eager\nfreezing. Not even in corner cases -- never.\n\nWith the patch, VACUUM pretty much uses the most aggressive possible\nXID-wise/MXID-wise cutoffs in almost all cases (though only when we\nactually decide to freeze a page at all, which is now a separate\nquestion). The fourth patch in the patch series introduces a very\nlimited exception, where we use the same cutoffs that we'll always use\non HEAD (FreezeLimit + MultiXactCutoff) instead of the aggressive\nvariants (OldestXmin and OldestMxact). This isn't just *any* xmax\ncontaining a MultiXact: it's a Multi that contains *some* XIDs that\n*need* to go away during the ongoing VACUUM, and others that *cannot*\ngo away. Oh, and there usually has to be a need to keep two or more\nXIDs for this to happen -- if there is only one XID then we can\nusually swap xmax with that XID without any fuss.\n\n> PS. see also\n> https://www.postgresql.org/message-id/247e3ce4-ae81-d6ad-f54d-7d3e0409a950@ardentperf.com\n\nI think that the problem you describe here is very real, though I\nsuspect that it needs to be addressed by making opportunistic cleanup\nof Multis happen more reliably. Running VACUUM more often just isn't\npractical once a table reaches a certain size. 
In general, any kind of\nprocessing that is time sensitive probably shouldn't be happening\nsolely during VACUUM -- it's just too risky. VACUUM might take a\nrelatively long time to get to the affected page. It might not even be\nthat long in wall clock time or whatever -- just too long to reliably\navoid the problem.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Aug 2022 16:23:09 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 4:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> As a general rule VACUUM will tend to do more eager freezing with the\n> patch set compared to HEAD, though it should never do less eager\n> freezing. Not even in corner cases -- never.\n\nCome to think of it, I don't think that that's quite true. Though the\nfourth patch isn't particularly related to the problem.\n\nIt *is* true that VACUUM will do at least as much freezing of XID\nbased tuple header fields as before. That just leaves MXIDs. It's even\ntrue that we will do just as much freezing as before if you go pure on\nMultiXact-age. But I'm the one that likes to point out that age is\naltogether the wrong approach for stuff like this -- so that won't cut\nit.\n\nMore concretely, I think that the patch series will fail to do certain\ninexpensive eager processing of tuple xmax that happens today,\nregardless of what the user has set vacuum_freeze_min_age or\nvacuum_multixact_freeze_min_age to. Although we currently only care\nabout XID age when processing simple XIDs, we already sometimes make\ntrade-offs similar to the trade-off I propose to make in the fourth\npatch for Multis.\n\nIn other words, on HEAD, we promise to process any MXID >=\nMultiXactCutoff inside FreezeMultiXactId(). But we also manage to do\n\"eager processing of xmax\" when it's cheap and easy to do so, without\ncaring about MultiXactCutoff at all -- this is likely the common case,\neven. This preexisting eager processing of Multis is likely important\nin many applications.\n\nThe problem that I think I've created is that page-level freezing as\nimplemented in lazy_scan_prune by the third patch doesn't know that it\nmight be a good idea to execute a subset of freeze plans, in order to\nremove a multi from a page right away. 
It mostly has the right idea by\nholding off on freezing until it looks like a good idea at the level\nof the whole page, but I think that this is a plausible exception.\nJust because we're much more sensitive to leaving behind an Multi, and\nright now the only code path that can remove a Multi that isn't needed\nanymore is FreezeMultiXactId().\n\nIf xmax was an updater that aborted instead of a multi then we could\nrely on hint bits being set by pruning to avoid clog lookups.\nTechnically nobody has violated their contract here, I think, but it\nstill seems like it could easily be unacceptable.\n\nI need to come up with my own microbenchmark suite for Multis -- that\nwas on my TODO list already. I still believe that the fourth patch\naddresses Andres' complaint about allocating new Multis during VACUUM.\nBut it seems like I need to think about the nuances of Multis some\nmore. In particular, what the performance impact might be of making a\ndecision on freezing at the page level, in light of the special\nperformance considerations for Multis.\n\nMaybe it needs to be more granular than that, more often. Or maybe we\ncan comprehensively solve the problem in some other way entirely.\nMaybe pruning should do this instead, in general. Something like that\nmight put this right, and be independently useful.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 25 Aug 2022 17:14:51 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, 2022-08-25 at 14:21 -0700, Peter Geoghegan wrote:\n> The main high level goal of this work is to avoid painful, disruptive\n> antiwraparound autovacuums (and other aggressive VACUUMs) that do way\n> too much \"catch up\" freezing, all at once, causing significant\n> disruption to production workloads.\n\nSounds like a good goal, and loosely follows the precedent of\ncheckpoint targets and vacuum cost delays.\n\n> A new GUC/reloption called vacuum_freeze_strategy_threshold is added\n> to control freezing strategy (also influences our choice of skipping\n> strategy). It defaults to 4GB, so tables smaller than that cutoff\n> (which are usually the majority of all tables) will continue to\n> freeze\n> in much the same way as today by default. Our current lazy approach\n> to\n> freezing makes sense there, and should be preserved for its own sake.\n\nWhy is the threshold per-table? Imagine someone who has a bunch of 4GB\npartitions that add up to a huge amount of deferred freezing work.\n\nThe initial problem you described is a system-level problem, so it\nseems we should track the overall debt in the system in order to keep\nup.\n\n> for this table, at this time: Is it more important to advance\n> relfrozenxid early (be eager), or to skip all-visible pages instead\n> (be lazy)? If it's the former, then we must scan every single page\n> that isn't all-frozen according to the VM snapshot (including every\n> all-visible page).\n\nThis feels too absolute, to me. If the goal is to freeze more\nincrementally, well in advance of wraparound limits, then why can't we\njust freeze 1000 out of 10000 freezable pages on this run, and then\nleave the rest for a later run?\n\n> Thoughts?\n\nWhat if we thought about this more like a \"background freezer\". 
It\nwould keep track of the total number of unfrozen pages in the system,\nand freeze them at some kind of controlled/adaptive rate.\n\nRegular autovacuum's job would be to keep advancing relfrozenxid for\nall tables and to do other cleanup, and the background freezer's job\nwould be to keep the absolute number of unfrozen pages under some\nlimit. Conceptually those two jobs seem different to me.\n\nAlso, regarding patch v1-0001-Add-page-level-freezing, do you think\nthat narrows the conceptual gap between an all-visible page and an all-\nfrozen page?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 29 Aug 2022 11:47:16 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 11:47 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> Sounds like a good goal, and loosely follows the precedent of\n> checkpoint targets and vacuum cost delays.\n\nRight.\n\n> Why is the threshold per-table? Imagine someone who has a bunch of 4GB\n> partitions that add up to a huge amount of deferred freezing work.\n\nI think it's possible that our cost model will eventually become very\nsophisticated, and weigh all kinds of different factors, and work as\none component of a new framework that dynamically schedules autovacuum\nworkers. My main goal in posting this v1 was validating the *general\nidea* of strategies with cost models, and the related question of how\nwe might use VM snapshots for that. After all, even the basic concept\nis totally novel.\n\n> The initial problem you described is a system-level problem, so it\n> seems we should track the overall debt in the system in order to keep\n> up.\n\nI agree that the problem is fundamentally a system-level problem. One\nreason why vacuum_freeze_strategy_threshold works at the table level\nright now is to get the ball rolling. In any case the specifics of how\nwe trigger each strategy are far from settled. That's not the only\nreason why we think about things at the table level in the patch set,\nthough.\n\nThere *are* some fundamental reasons why we need to care about\nindividual tables, rather than caring about unfrozen pages at the\nsystem level *exclusively*. This is something that\nvacuum_freeze_strategy_threshold kind of gets right already, despite\nits limitations. There are 2 aspects of the design that seemingly have\nto work at the whole table level:\n\n1. Concentration matters when it comes to wraparound risk.\n\nFundamentally, each VACUUM still targets exactly one heap rel, and\nadvances relfrozenxid at most once per VACUUM operation. 
While the\ntotal number of \"unfrozen heap pages\" across the whole database is the\nsingle most important metric, it's not *everything*.\n\nAs a general rule, there is much less risk in having a certain fixed\nnumber of unfrozen heap pages spread fairly evenly among several\nlarger tables, compared to the case where the same number of unfrozen\npages are all concentrated in one particular table -- right now it'll\noften be one particular table that is far larger than any other table.\nRight now the pain is generally felt with large tables only.\n\n2. We need to think about things at the table level to manage costs\n*over time* holistically. (Closely related to #1.)\n\nThe ebb and flow of VACUUM for one particular table is a big part of\nthe picture here -- and will be significantly affected by table size.\nWe can probably always afford to risk falling behind on\nfreezing/relfrozenxid (i.e. we should prefer laziness) if we know that\nwe'll almost certainly be able to catch up later when things don't\nquite work out. That makes small tables much less trouble, even when\nthere are many more of them (at least up to a point).\n\nAs you know, my high level goal is to avoid ever having to make huge\nballoon payments to catch up on freezing, which is a much bigger risk\nwith a large table -- this problem is mostly a per-table problem (both\nnow and in the future).\n\nA large table will naturally require fewer, larger VACUUM operations\nthan a small table, no matter what approach is taken with the strategy\nstuff. We therefore have fewer VACUUM operations in a given\nweek/month/year/whatever to spread out the burden -- there will\nnaturally be fewer opportunities. We want to create the impression\nthat each autovacuum does approximately the same amount of work (or at\nleast the same per new heap page for large append-only tables).\n\nIt also becomes much more important to only dirty each heap page\nduring vacuuming ~once with larger tables. 
With a smaller table, there\nis a much higher chance that the pages we modify will already be dirty\nfrom user queries.\n\n> > for this table, at this time: Is it more important to advance\n> > relfrozenxid early (be eager), or to skip all-visible pages instead\n> > (be lazy)? If it's the former, then we must scan every single page\n> > that isn't all-frozen according to the VM snapshot (including every\n> > all-visible page).\n>\n> This feels too absolute, to me. If the goal is to freeze more\n> incrementally, well in advance of wraparound limits, then why can't we\n> just freeze 1000 out of 10000 freezable pages on this run, and then\n> leave the rest for a later run?\n\nMy remarks here applied only to the question of relfrozenxid\nadvancement -- not to freezing. Skipping strategy (relfrozenxid\nadvancement) is a distinct though related concept to freezing\nstrategy. So I was making a very narrow statement about\ninvariants/basic correctness rules -- I wasn't arguing against\nalternative approaches to freezing beyond the 2 freezing strategies\n(not to be confused with skipping strategies) that appear in v1.\nThat's all I meant -- there is definitely no point in scanning only a\nsubset of the table's all-visible pages, as far as relfrozenxid\nadvancement is concerned (and skipping strategy is fundamentally a\nchoice about relfrozenxid advancement vs work avoidance, eagerness vs\nlaziness).\n\nMaybe you're right that there is room for additional freezing\nstrategies, besides the two added by v1-0003-*patch. Definitely seems\npossible. The freezing strategy concept should be usable as a\nframework for adding additional strategies, including (just for\nexample) a strategy that decides ahead of time to freeze only so many\npages, though not others (without regard for the fact that the pages\nthat we are freezing may not be very different to those we won't be\nfreezing in the current VACUUM).\n\nI'm definitely open to that. 
It's just a matter of characterizing what\nset of workload characteristics this third strategy would solve, how\nusers might opt in or opt out, etc. Both the eager and the lazy\nfreezing strategies are based on some notion of what's important for\nthe table, based on its known characteristics, and based on what seems\nlikely to happen to the table in the future (the next VACUUM, at least).\nI'm not completely sure how many strategies we'll end up needing.\nThough it seems like the eager/lazy trade-off is a really important\npart of how these strategies will need to work, in general.\n\n(Thinks some more) I guess that such an alternative freezing strategy\nwould probably have to affect the skipping strategy too. It's tricky\nto tease apart because it breaks the idea that skipping strategy and\nfreezing strategy are basically distinct questions. That is a factor\nthat makes it a bit more complicated to discuss. In any case, as I\nsaid, I have an open mind about alternative freezing strategies beyond\nthe 2 basic lazy/eager freezing strategies from the patch.\n\n> What if we thought about this more like a \"background freezer\". It\n> would keep track of the total number of unfrozen pages in the system,\n> and freeze them at some kind of controlled/adaptive rate.\n\nI like the idea of storing metadata in shared memory. And scheduling\nand deprioritizing running autovacuums. Being able to slow down or\neven totally halt a given autovacuum worker without much consequence\nis enabled by the VM snapshot concept.\n\nThat said, this seems like future work to me. Worth discussing, but\ntrying to keep out of scope in the first version of this that is\ncommitted.\n\n> Regular autovacuum's job would be to keep advancing relfrozenxid for\n> all tables and to do other cleanup, and the background freezer's job\n> would be to keep the absolute number of unfrozen pages under some\n> limit. 
Conceptually those two jobs seem different to me.\n\nThe problem with making it such a sharp distinction is that it can be\nvery useful to manage costs by making it the job of VACUUM to do both\n-- we can avoid dirtying the same page multiple times.\n\nI think that we can accomplish the same thing by giving VACUUM more\nfreedom to do either more or less work, based on the observed\ncharacteristics of the table, and some sense of how costs will tend to\nwork over time, across multiple distinct VACUUM operations. In\npractice that might end up looking very similar to what you describe.\n\nIt seems undesirable for VACUUM to ever be too sure of itself -- the\ninformation that triggers autovacuum may not be particularly reliable,\nwhich can be solved to some degree by making as many decisions as\npossible at runtime, dynamically, based on the most authoritative and\nrecent information. Delaying committing to one particular course of\naction isn't always possible, but when it is possible (and not too\nexpensive) we should do it that way on general principle.\n\n> Also, regarding patch v1-0001-Add-page-level-freezing, do you think\n> that narrows the conceptual gap between an all-visible page and an all-\n> frozen page?\n\nYes, definitely. However, I don't think that we can just get rid of\nthe distinction completely -- though I did think about it for a while.\nFor one thing we need to be able to handle cases like the case where\nheap_lock_tuple() modifies an all-frozen page, and makes it\nall-visible without making it completely unskippable to every VACUUM\noperation.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 29 Aug 2022 13:27:42 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, 2022-08-25 at 14:21 -0700, Peter Geoghegan wrote:\n> Attached patch series is a completely overhauled version of earlier\n> work on freezing. Related work from the Postgres 15 cycle became\n> commits 0b018fab, f3c15cbe, and 44fa8488.\n> \n> Recap\n> =====\n> \n> The main high level goal of this work is to avoid painful, disruptive\n> antiwraparound autovacuums (and other aggressive VACUUMs) that do way\n> too much \"catch up\" freezing, all at once\n\nI agree with the motivation: that keeping around a lot of deferred work\n(unfrozen pages) is risky, and that administrators would want a way to\ncontrol that risk.\n\nThe solution involves more changes to the philosophy and mechanics of\nvacuum than I would expect, though. For instance, VM snapshotting,\npage-level-freezing, and a cost model all might make sense, but I don't\nsee why they are critical for solving the problem above. I think I'm\nstill missing something. My mental model is closer to the bgwriter and\ncheckpoint_completion_target.\n\nAllow me to make a naive counter-proposal (not a real proposal, just so\nI can better understand the contrast with your proposal):\n\n * introduce a reloption unfrozen_pages_target (default -1, meaning\ninfinity, which is the current behavior)\n * introduce two fields to LVRelState: n_pages_frozen and\ndelay_skip_count, both initialized to zero\n * when marking a page frozen: n_pages_frozen++\n * when vacuum begins:\n if (unfrozen_pages_target >= 0 &&\n current_unfrozen_page_count > unfrozen_pages_target)\n {\n vacrel->delay_skip_count = current_unfrozen_page_count -\n unfrozen_pages_target;\n /* ?also use more aggressive freezing thresholds? */\n }\n * in lazy_scan_skip(), have a final check:\n if (vacrel->n_pages_frozen < vacrel->delay_skip_count)\n {\n break;\n }\n \nI know there would still be some problem cases, but to me it seems like\nwe solve 80% of the problem in a couple dozen lines of code.\n\na. 
Can you clarify some of the problem cases, and why it's worth\nspending more code to fix them?\n\nb. How much of your effort is groundwork for related future\nimprovements? If it's a substantial part, can you explain in that\nlarger context?\n\nc. Can some of your patches be separated into independent discussions?\nFor instance, patch 1 has been discussed in other threads and seems\nindependently useful, and I don't see the current work as dependent on\nit. Patch 4 also seems largely independent.\n\nd. Can you help give me a sense of scale of the problems solved by\nvisibilitymap snapshots and the cost model? Do those need to be in v1?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 11:11:41 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 11:11 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> The solution involves more changes to the philosophy and mechanics of\n> vacuum than I would expect, though. For instance, VM snapshotting,\n> page-level-freezing, and a cost model all might make sense, but I don't\n> see why they are critical for solving the problem above.\n\nI certainly wouldn't say that they're critical. I tend to doubt that I\ncan be perfectly crisp about what the exact relationship is between\neach component in isolation and how it contributes towards addressing\nthe problems we're concerned with.\n\n> I think I'm\n> still missing something. My mental model is closer to the bgwriter and\n> checkpoint_completion_target.\n\nThat's not a bad starting point. The main thing that that mental model\nis missing is how the timeframes work with VACUUM, and the fact that\nthere are multiple timeframes involved (maybe the system's vacuuming\nwork could be seen as having one timeframe at the highest level, but\nit's more of a fractal picture overall). Checkpoints just don't take\nthat long, and checkpoint duration has a fairly low variance (barring\npathological performance problems).\n\nYou only have so many buffers that you can dirty, too -- it's a\nself-limiting process. This is even true when (for whatever reason)\nthe checkpoint_completion_target logic just doesn't do what it's\nsupposed to do. There is more or less a natural floor on how bad\nthings can get, so you don't have to invent a synthetic floor at all.\nLSM-based DB systems like the MyRocks storage engine for MySQL don't\nuse checkpoints at all -- the closest analog is compaction, which is\ncloser to a hybrid of VACUUM and checkpointing than anything else.\n\nThe LSM compaction model necessitates adding artificial throttling to\nkeep the system stable over time [1]. There is a disconnect between\nthe initial ingest of data, and the compaction process. 
And so\ntop-down modelling of costs and benefits with compaction is more\nnatural with an LSM [2] -- and not a million miles from the strategy\nstuff I'm proposing.\n\n> Allow me to make a naive counter-proposal (not a real proposal, just so\n> I can better understand the contrast with your proposal):\n\n> I know there would still be some problem cases, but to me it seems like\n> we solve 80% of the problem in a couple dozen lines of code.\n\nIt's not that this statement is wrong, exactly. It's that I believe\nthat it is all but mandatory for me to ameliorate the downside that\ngoes with more eager freezing, for example by not doing it at all when\nit doesn't seem to make sense. I want to solve the big problem of\nfreeze debt, without creating any new problems. And if I should also\nmake things in adjacent areas better too, so much the better.\n\nWhy stop at a couple of dozens of lines of code? Why not just change\nthe default of vacuum_freeze_min_age and\nvacuum_multixact_freeze_min_age to 0?\n\n> a. Can you clarify some of the problem cases, and why it's worth\n> spending more code to fix them?\n\nFor one thing if we're going to do a lot of extra freezing, we really\nwant to \"get credit\" for it afterwards, by updating relfrozenxid to\nreflect the new oldest extant XID, and so avoid getting an\nantiwraparound VACUUM early, in the near future.\n\nThat isn't strictly true, of course. But I think that we at least\nought to have a strong bias in the direction of updating relfrozenxid,\nhaving decided to do significantly more freezing in some particular\nVACUUM operation.\n\n> b. How much of your effort is groundwork for related future\n> improvements? If it's a substantial part, can you explain in that\n> larger context?\n\nHard to say. It's true that the idea of VM snapshots is quite general,\nand could have been introduced in a number of different ways. But I\ndon't think that that should count against it. 
It's also not something\nthat seems contrived or artificial -- it's at least as good of a\nreason to add VM snapshots as any other I can think of.\n\nDoes it really matter if this project is the freeze debt project, or\nthe VM snapshot project? Do we even need to decide which one it is\nright now?\n\n> c. Can some of your patches be separated into independent discussions?\n> For instance, patch 1 has been discussed in other threads and seems\n> independently useful, and I don't see the current work as dependent on\n> it.\n\nI simply don't know if I can usefully split it up just yet.\n\n> Patch 4 also seems largely independent.\n\nPatch 4 directly compensates for a problem created by the earlier\npatches. The patch series as a whole isn't supposed to ameliorate the\nproblem of MultiXacts being allocated in VACUUM. It only needs to\navoid making the situation any worse than it is today IMV (I suspect\nthat the real fix is to make the VACUUM FREEZE command not tune\nvacuum_freeze_min_age).\n\n> d. Can you help give me a sense of scale of the problems solved by\n> visibilitymap snapshots and the cost model? Do those need to be in v1?\n\nI'm not sure. I think that having certainty that we'll be able to scan\nonly so many pages up-front is very broadly useful, though. Plus it\nremoves the SKIP_PAGES_THRESHOLD stuff, which was intended to enable\nrelfrozenxid advancement in non-aggressive VACUUMs, but does so in a\nway that results in scanning many more pages needlessly. See commit\nbf136cf6, which added the SKIP_PAGES_THRESHOLD stuff back in 2009,\nshortly after the visibility map first appeared.\n\nSince relfrozenxid advancement fundamentally works at the table level,\nit seems natural to make it a top-down, VACUUM-level thing -- even\nwithin non-aggressive VACUUMs (I guess it already meets that\ndescription in aggressive VACUUMs). 
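To make the SKIP_PAGES_THRESHOLD point concrete, here is a deliberately simplified model of the skipping rule (illustrative Python, not the real C; vacuumlazy.c tracks next_unskippable_block and other details omitted here):

```python
SKIP_PAGES_THRESHOLD = 32  # the value vacuumlazy.c has used since 2009

def pages_scanned(all_visible, threshold=SKIP_PAGES_THRESHOLD):
    # A run of all-visible pages is only skipped when the run is at
    # least 'threshold' pages long; shorter runs get scanned anyway.
    scanned, i, n = 0, 0, len(all_visible)
    while i < n:
        if all_visible[i]:
            run = i
            while run < n and all_visible[run]:
                run += 1
            if run - i < threshold:
                scanned += run - i  # run too short: scanned needlessly
            i = run
        else:
            scanned += 1
            i += 1
    return scanned

# 800 all-visible pages, but in runs of 8: none of them can be skipped.
vm = ([True] * 8 + [False]) * 100
print(pages_scanned(vm))               # every page gets scanned
print(pages_scanned(vm, threshold=1))  # skipping all all-visible pages
```

With the threshold rule, fragmented visibility maps force VACUUM to scan many all-visible pages it learns nothing from, which is the waste the VM snapshot approach is meant to remove.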
And since we really want to\nadvance relfrozenxid when we do extra freezing (for the reasons I just\nwent into), it seems natural to me to view it as one problem. I accept\nthat it's not clear cut, though.\n\n[1] https://docs.google.com/presentation/d/1WgP-SlKay5AnSoVDSvOIzmu7edMmtYhdywoa0oAR4JQ/edit?usp=sharing\n[2] https://disc-projects.bu.edu/compactionary/research.html\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Aug 2022 13:45:19 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 1:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > d. Can you help give me a sense of scale of the problems solved by\n> > visibilitymap snapshots and the cost model? Do those need to be in v1?\n>\n> I'm not sure. I think that having certainty that we'll be able to scan\n> only so many pages up-front is very broadly useful, though. Plus it\n> removes the SKIP_PAGES_THRESHOLD stuff, which was intended to enable\n> relfrozenxid advancement in non-aggressive VACUUMs, but does so in a\n> way that results in scanning many more pages needlessly. See commit\n> bf136cf6, which added the SKIP_PAGES_THRESHOLD stuff back in 2009,\n> shortly after the visibility map first appeared.\n\nHere is a better example:\n\nRight now the second patch adds both VM snapshots and the skipping\nstrategy stuff. The VM snapshot is used in the second patch, as a\nsource of reliable information about how we need to process the table,\nin terms of the total number of scanned_pages -- which drives our\nchoice of strategy. Importantly, we can assess the question of which\nskipping strategy to take (in non-aggressive VACUUM) based on 100%\naccurate information about how many *extra* pages we'll have to scan\nin the event of being eager (i.e. in the event that we prioritize\nearly relfrozenxid advancement over skipping some pages). Importantly,\nthat cannot change later on, since VM snapshots are immutable --\neverything is locked in. That already seems quite valuable to me.\n\nThis general concept could be pushed a lot further without great\ndifficulty. Since VM snapshots are immutable, it should be relatively\neasy to have the implementation make its final decision on skipping\nonly *after* lazy_scan_heap() returns. 
We could allow VACUUM to\n\"change its mind about skipping\" in cases where it initially thought\nthat skipping was the best strategy, only to discover much later on\nthat that was the wrong choice after all.\n\nA huge amount of new, reliable information will come to light from\nscanning the heap rel. In particular, the current value of\nvacrel->NewRelfrozenXid seems like it would be particularly\ninteresting when the time came to consider if a second scan made sense\n-- if NewRelfrozenXid is a recent-ish value already, then that argues\nfor finishing off the all-visible pages in a second heap pass, with\nthe aim of setting relfrozenxid to a similarly recent value when it\nhappens to be cheap to do so.\n\nThe actual process of scanning precisely those all-visible pages that\nwere skipped the first time around during a second call to\nlazy_scan_heap() can be implemented in the obvious way: by teaching\nthe VM snapshot infrastructure/lazy_scan_skip() to treat pages that\nwere skipped the first time around to get scanned during the second\npass over the heap instead. Also, those pages that were scanned the\nfirst time around can/must be skipped on our second pass (excluding\nall-frozen pages, which won't be scanned in either heap pass).\n\nI've used the term \"second heap pass\" here, but that term is slightly\nmisleading. The final outcome of this whole process is that every heap\npage that the vmsnap says VACUUM will need to scan in order for it to\nbe able to safely advance relfrozenxid will be scanned, precisely\nonce. The overall order that the heap pages are scanned in will of\ncourse differ from the simple case, but I don't think that it makes\nvery much difference. In reality there will have only been one heap\npass, consisting of two distinct phases. No individual heap page will\never be considered for pruning/freezing more than once, no matter\nwhat. This is just a case of *reordering* work. 
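The reordering can be modelled in a few lines (an illustrative sketch only; block numbers and state names are made up):

```python
# Toy model of the two-phase idea: the VM snapshot is taken once and
# never changes, so work can be reordered without scanning twice.
FROZEN, ALL_VISIBLE, PLAIN = 'frozen', 'all-visible', 'plain'

vmsnap = {1: FROZEN, 2: ALL_VISIBLE, 3: PLAIN, 4: ALL_VISIBLE, 5: PLAIN}

# Phase one: be lazy, skip the all-visible pages.
phase_one = [blk for blk, st in vmsnap.items() if st == PLAIN]

# Later, VACUUM changes its mind and wants relfrozenxid advancement:
# phase two scans exactly the pages skipped earlier, nothing else.
phase_two = [blk for blk, st in vmsnap.items() if st == ALL_VISIBLE]

needed = {blk for blk, st in vmsnap.items() if st != FROZEN}
scanned = phase_one + phase_two
assert set(scanned) == needed and len(scanned) == len(needed)
print(sorted(scanned))  # [2, 3, 4, 5] -- each page exactly once
```

The all-frozen page is never visited in either phase, and every page the snapshot says matters is visited exactly once, just in a different order.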
Immutability makes\nreordering work easy in general.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Aug 2022 18:50:49 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 2022-08-30 at 18:50 -0700, Peter Geoghegan wrote:\n> Since VM snapshots are immutable, it should be relatively\n> easy to have the implementation make its final decision on skipping\n> only *after* lazy_scan_heap() returns.\n\nI like this idea.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 21:37:20 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 9:37 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Tue, 2022-08-30 at 18:50 -0700, Peter Geoghegan wrote:\n> > Since VM snapshots are immutable, it should be relatively\n> > easy to have the implementation make its final decision on skipping\n> > only *after* lazy_scan_heap() returns.\n>\n> I like this idea.\n\nI was hoping that you would. I imagine that this idea (with minor\nvariations) could enable an approach that's much closer to what you\nwere thinking of: one that mostly focuses on controlling the number of\nunfrozen pages, and not so much on advancing relfrozenxid early, just\nbecause we can and we might not get another chance for a long time. In\nother words your idea of a design that can freeze more during a\nnon-aggressive VACUUM, while still potentially skipping all-visible\npages.\n\nI said earlier on that we ought to at least have a strong bias in the\ndirection of advancing relfrozenxid in larger tables, especially when\nwe decide to freeze whole pages more eagerly -- we only get one chance\nto advance relfrozenxid per VACUUM, and those opportunities will\nnaturally be few and far between. We cannot really justify all this\nextra freezing if it doesn't prevent antiwraparound autovacuums. That\nwas more or less my objection to going in that direction.\n\nBut if we can more or less double the number of opportunities to at\nleast ask the question \"is now a good time to advance relfrozenxid?\"\nwithout really paying much for keeping this option open (and I think\nthat we can), my concern about relfrozenxid advancement becomes far\nless important. Just being able to ask that question is significantly\nless rare and precious. Plus we'll probably be able to make\nsignificantly better decisions about relfrozenxid overall with the\n\"second phase because I changed my mind about skipping\" concept in\nplace.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 30 Aug 2022 22:12:57 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 2022-08-30 at 13:45 -0700, Peter Geoghegan wrote:\n> It's that I believe\n> that it is all but mandatory for me to ameliorate the downside that\n> goes with more eager freezing, for example by not doing it at all\n> when\n> it doesn't seem to make sense. I want to solve the big problem of\n> freeze debt, without creating any new problems. And if I should also\n> make things in adjacent areas better too, so much the better.\n\nThat clarifies your point. It's still a challenge for me to reason\nabout which of these potential new problems really need to be solved in\nv1, though.\n\n> Why stop at a couple of dozens of lines of code? Why not just change\n> the default of vacuum_freeze_min_age and\n> vacuum_multixact_freeze_min_age to 0?\n\nI don't think that would actually solve the unbounded buildup of\nunfrozen pages. It would still be possible for pages to be marked all\nvisible before being frozen, and then end up being skipped until an\naggressive vacuum is forced, right?\n\nOr did you mean vacuum_freeze_table_age?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 23:28:46 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 11:28 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> That clarifies your point. It's still a challenge for me to reason\n> about which of these potential new problems really need to be solved in\n> v1, though.\n\nI don't claim to understand it that well myself -- not just yet.\nI feel like I have the right general idea, but the specifics\naren't all there (which is very often the case for me at this\npoint in the cycle). That seems like a good basis for further\ndiscussion.\n\nIt's going to be quite a few months before some version of this\npatchset is committed, at the very earliest. Obviously these are\nquestions that need answers, but the process of getting to those\nanswers is a significant part of the work itself IMV.\n\n> > Why stop at a couple of dozens of lines of code? Why not just change\n> > the default of vacuum_freeze_min_age and\n> > vacuum_multixact_freeze_min_age to 0?\n>\n> I don't think that would actually solve the unbounded buildup of\n> unfrozen pages. It would still be possible for pages to be marked all\n> visible before being frozen, and then end up being skipped until an\n> aggressive vacuum is forced, right?\n\nWith the 15 work in place, and with the insert-driven autovacuum\nbehavior from 13, it is likely that this would be enough to avoid all\nantiwraparound vacuums in an append-only table. There is still one\ncase where we can throw away the opportunity to advance relfrozenxid\nduring non-aggressive VACUUMs for no good reason -- I didn't fix them\nall just yet. 
But the remaining case (which is in lazy_scan_skip()) is\nvery narrow.\n\nWith vacuum_freeze_min_age = 0 and vacuum_multixact_freeze_min_age =\n0, any page that is eligible to be set all-visible is also eligible to\nhave its tuples frozen and be set all-frozen instead, immediately.\nWhen it isn't then we'll scan it in the next VACUUM anyway.\n\nActually I'm also ignoring some subtleties with Multis that could make\nthis not quite happen, but again, that's only a super obscure corner case.\nThe idea that just setting vacuum_freeze_min_age = 0 and\nvacuum_multixact_freeze_min_age = 0 will be enough is definitely true\nin spirit. You don't need to touch vacuum_freeze_table_age (if you did\nthen you'd get aggressive VACUUMs, and one goal here is to avoid\nthose whenever possible -- especially aggressive antiwraparound\nautovacuums).\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 31 Aug 2022 00:03:02 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Aug 25, 2022 at 2:21 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch series is a completely overhauled version of earlier\n> work on freezing. Related work from the Postgres 15 cycle became\n> commits 0b018fab, f3c15cbe, and 44fa8488.\n\nAttached is v2.\n\nThis is just to keep CFTester happy, since v1 now has conflicts when\napplied against HEAD. There are no notable changes in this v2 compared\nto v1.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 31 Aug 2022 15:05:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Aug 31, 2022 at 12:03 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually I'm also ignoring some subtleties with Multis that could make\n> this not quite happen, but again, that's only a super obscure corner case.\n> The idea that just setting vacuum_freeze_min_age = 0 and\n> vacuum_multixact_freeze_min_age = 0 will be enough is definitely true\n> in spirit. You don't need to touch vacuum_freeze_table_age (if you did\n> then you'd get aggressive VACUUMs, and one goal here is to avoid\n> those whenever possible -- especially aggressive antiwraparound\n> autovacuums).\n\nAttached is v3. There is a new patch included here -- v3-0004-*patch,\nor \"Unify aggressive VACUUM with antiwraparound VACUUM\". No other\nnotable changes.\n\nI decided to work on this now because it seems like it might give a\nmore complete picture of the high level direction that I'm pushing\ntowards. Perhaps this will make it easier to review the patch series\nas a whole, even. The new patch unifies the concept of antiwraparound\nVACUUM with the concept of aggressive VACUUM. Now there is only\nantiwraparound and regular VACUUM (uh, barring VACUUM FULL). And now\nantiwraparound VACUUMs are not limited to antiwraparound autovacuums\n-- a manual VACUUM can also be antiwraparound (that's just the new\nname for \"aggressive\").\n\nWe will typically only get antiwraparound vacuuming in a regular\nVACUUM when the user goes out of their way to get that behavior.\nVACUUM FREEZE is the best example. For the most part the\nskipping/freezing strategy stuff has a good sense of what matters\nalready, and shouldn't need to be guided very often.\n\nThe patch relegates vacuum_freeze_table_age to a compatibility option,\nmaking its default -1, meaning \"just use autovacuum_freeze_max_age\". I\nalways thought that having two table age based GUCs was confusing.\nThere was a period between 2006 and 2009 when we had\nautovacuum_freeze_max_age, but did not yet have\nvacuum_freeze_table_age. 
This change can almost be thought of as a\nreturn to the simpler user interface that existed at that time. Of\ncourse we must not resurrect the problems that vacuum_freeze_table_age\nwas intended to address (see originating commit 65878185) by mistake.\nWe need an improved version of the same basic concept, too.\n\nThe patch more or less replaces the table-age-aggressive-escalation\nconcept (previously implemented using vacuum_freeze_table_age) with\nnew logic that makes vacuumlazy.c's choice of skipping strategy *also*\ndepend upon table age -- it is now one more factor to be considered.\nBoth costs and benefits are weighed here. We now give just a little\nweight to table age at a relatively early stage (XID-age-wise early),\nand escalate from there. As the table's relfrozenxid gets older and\nolder, we give less and less weight to putting off the cost of\nfreezing. This general approach is possible because the false\ndichotomy that is \"aggressive vs non-aggressive\" has mostly been\neliminated. This makes things less confusing for users and hackers.\n\nThe details of the skipping-strategy-choice algorithm are still\nunsettled in v3 (no real change there). ISTM that the important thing\nis still the high level concepts. Jeff was slightly puzzled by the\nemphasis placed on the cost model/strategy stuff, at least at one\npoint. Hopefully my intent will be made clearer by the ideas featured\nin the new patch. The skipping strategy decision making process isn't\nparticularly complicated, but it now looks more like an optimization\nproblem of some kind or other.\n\nIt might make sense to go further in the same direction by making\n\"regular vs aggressive/antiwraparound\" into a *strict* continuum. In\nother words, it might make sense to get rid of the two remaining cases\nwhere VACUUM conditions its behavior on whether this VACUUM operation\nis antiwraparound/aggressive or not. 
I'm referring to the cleanup lock\nskipping behavior around lazy_scan_noprune(), as well as the\nPROC_VACUUM_FOR_WRAPAROUND no-auto-cancellation behavior enforced in\nautovacuum workers. We will still need to keep roughly the same two\nbehaviors, but the timelines can be totally different. We must be\nreasonably sure that the cure won't be worse than the disease -- I'm\naware of quite a few individual cases that felt that way [1].\nAggressive interventions can make sense, but they need to be\nproportionate to the problem that's right in front of us. \"Kicking the\ncan down the road\" is often the safest and most responsible approach\n-- it all depends on the details.\n\n[1] https://www.tritondatacenter.com/blog/manta-postmortem-7-27-2015\nis the most high profile example, but I have personally been called in\nto deal with similar problems in the past\n--\nPeter Geoghegan",
"msg_date": "Thu, 8 Sep 2022 13:23:52 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 1:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v3. There is a new patch included here -- v3-0004-*patch,\n> or \"Unify aggressive VACUUM with antiwraparound VACUUM\". No other\n> notable changes.\n>\n> I decided to work on this now because it seems like it might give a\n> more complete picture of the high level direction that I'm pushing\n> towards. Perhaps this will make it easier to review the patch series\n> as a whole, even.\n\nThis needed to be rebased over the guc.c work recently pushed to HEAD.\n\nAttached is v4. This isn't just to fix bitrot, though; I'm also\nincluding one new patch -- v4-0006-*.patch. This small patch teaches\nVACUUM to size dead_items while capping the allocation at the space\nrequired for \"scanned_pages * MaxHeapTuplesPerPage\" item pointers. In\nother words, we now use scanned_pages instead of rel_pages to cap the\nsize of dead_items, potentially saving quite a lot of memory. There is\nno possible downside to this approach, because we already know exactly\nhow many pages will be scanned from the VM snapshot -- there is zero\nadded risk of a second pass over the indexes.\n\nThis is still only scratching the surface of what is possible with\ndead_items. The visibility map snapshot concept can enable a far more\nsophisticated approach to resource management in vacuumlazy.c. It\ncould help us to replace a simple array of item pointers (the current\ndead_items array) with a faster and more space-efficient data\nstructure. Masahiko Sawada has done a lot of work on this recently, so\nthis may interest him.\n\nWe don't just have up-front knowledge of the total number of\nscanned_pages with VM snapshots -- we also have up-front knowledge of\nwhich specific pages will be scanned. So we have reliable information\nabout the final distribution of dead_items (which specific heap blocks\nmight have dead_items) right from the start. 
While this extra\ninformation/context is not a totally complete picture, it still seems\nlike it could be very useful as a way of driving how some new\ndead_items data structure compresses TIDs. That will depend on the\ndistribution of TIDs -- the final \"heap TID key space\".\n\nVM snapshots could also make it practical for the new data structure\nto spill to disk to avoid multiple index scans/passed by VACUUM.\nPerhaps this will result in behavior that's similar to how hash joins\nspill to disk -- having 90% of the memory required to do everything\nin-memory *usually* has similar performance characteristics to just\ndoing everything in memory. Most individual TID lookups from\nambulkdelete() will find that the TID *doesn't* need to be deleted --\na little like a hash join with low join selectivity (the common case\nfor hash joins). It's not like a merge join + sort, where we must\neither spill everything or nothing (a merge join can be better than a\nhash join with high join selectivity).\n\n-- \nPeter Geoghegan",
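The scanned_pages-based cap described in this message can be illustrated with a toy model. This is an illustration only, not the vacuumlazy.c code; the per-page tuple limit and the 6-byte TID size are assumed values for the default 8kB block size:

```python
# Toy model of the dead_items sizing change described above: with a VM
# snapshot, scanned_pages is known before the heap scan starts, so the
# TID array can be capped by scanned_pages instead of rel_pages.
MAX_HEAP_TUPLES_PER_PAGE = 291   # assumed value for the default 8kB block
SIZEOF_ITEM_POINTER = 6          # bytes per heap TID (block + offset)

def dead_items_max(mem_limit_bytes, rel_pages, scanned_pages,
                   have_vm_snapshot):
    """How many dead TIDs to budget space for in one VACUUM pass."""
    pages = scanned_pages if have_vm_snapshot else rel_pages
    cap_by_pages = pages * MAX_HEAP_TUPLES_PER_PAGE
    cap_by_memory = mem_limit_bytes // SIZEOF_ITEM_POINTER
    return min(cap_by_pages, cap_by_memory)

# A 1M-page table where the VM snapshot says only 10,000 pages need scanning:
before = dead_items_max(1 << 30, 1_000_000, 10_000, have_vm_snapshot=False)
after = dead_items_max(1 << 30, 1_000_000, 10_000, have_vm_snapshot=True)
```

In this hypothetical case the snapshot-based cap is dozens of times smaller, with no risk of a second index pass because scanned_pages is exact.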
"msg_date": "Tue, 13 Sep 2022 10:53:06 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 12:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> This is still only scratching the surface of what is possible with\n> dead_items. The visibility map snapshot concept can enable a far more\n> sophisticated approach to resource management in vacuumlazy.c. It\n> could help us to replace a simple array of item pointers (the current\n> dead_items array) with a faster and more space-efficient data\n> structure. Masahiko Sawada has done a lot of work on this recently, so\n> this may interest him.\n\nI don't quite see how it helps \"enable\" that. It'd be more logical to\nme to say the VM snapshot *requires* you to think harder about\nresource management, since a palloc'd snapshot should surely be\ncounted as part of the configured memory cap that admins control.\n(Commonly, it'll be less than a few dozen MB, so I'll leave that\naside.) Since Masahiko hasn't (to my knowlege) gone as far as\nintegrating his ideas into vacuum, I'm not sure if the current state\nof affairs has some snag that a snapshot will ease, but if there is,\nyou haven't described what it is.\n\nI do remember your foreshadowing in the radix tree thread a while\nback, and I do think it's an intriguing idea to combine pages-to-scan\nand dead TIDs in the same data structure. The devil is in the details,\nof course. It's worth looking into.\n\n> VM snapshots could also make it practical for the new data structure\n> to spill to disk to avoid multiple index scans/passed by VACUUM.\n\nI'm not sure spilling to disk is solving the right problem (as opposed\nto the hash join case, or to the proposed conveyor belt system which\nhas a broader aim). I've found several times that a customer will ask\nif raising maintenance work mem from 1GB to 10GB will make vacuum\nfaster. 
Looking at the count of index scans, it's pretty much always\n\"1\", so even if the current approach could scale above 1GB, \"no\" it\nwouldn't help to raise that limit.\n\nYour mileage may vary, of course.\n\nContinuing my customer example, searching the dead TID list faster\n*will* make vacuum faster. The proposed tree structure is more memory\nefficient, and IIUC could scale beyond 1GB automatically since each\nnode is a separate allocation, so the answer will be \"yes\" in the rare\ncase the current setting is in fact causing multiple index scans.\nFurthermore, it doesn't have to anticipate the maximum size, so there\nis no up front calculation assuming max-tuples-per-page, so it\nautomatically uses less memory for less demanding tables.\n\n(But +1 for changing that calculation for as long as we do have the\nsingle array.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Sep 2022 17:18:08 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 3:18 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Wed, Sep 14, 2022 at 12:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > This is still only scratching the surface of what is possible with\n> > dead_items. The visibility map snapshot concept can enable a far more\n> > sophisticated approach to resource management in vacuumlazy.c.\n\n> I don't quite see how it helps \"enable\" that.\n\nI have already written a simple throwaway patch that can use the\ncurrent VM snapshot data structure (which is just a local copy of the\nVM's pages) to do a cheap precheck ahead of actually doing a binary\nsearch in dead_items -- if a TID's heap page is all-visible or\nall-frozen (depending on the type of VACUUM) then we're 100%\nguaranteed to not visit it, and so it's 100% guaranteed to not have\nany dead_items (actually it could have LP_DEAD items by the time the\nindex scan happens, but they won't be in our dead_items array in any\ncase). Since we're working off of an immutable source, this\noptimization is simple to implement already. Very simple.\n\nI haven't even bothered to benchmark this throwaway patch (I literally\nwrote it in 5 minutes to show Masahiko what I meant). I can't see why\neven that throwaway prototype wouldn't significantly improve\nperformance, though. After all, the VM snapshot data structure is far\ndenser than dead_items, and the largest tables often have most heap\npages skipped via the VM.\n\nI'm not really interested in pursuing this simple approach because it\nconflicts with Masahiko's work on the data structure, and there are\nother good reasons to expect that to help. 
Plus I'm already very busy\nwith what I have here.\n\n> It'd be more logical to\n> me to say the VM snapshot *requires* you to think harder about\n> resource management, since a palloc'd snapshot should surely be\n> counted as part of the configured memory cap that admins control.\n\nThat's clearly true -- it creates a new problem for resource\nmanagement that will need to be solved. But that doesn't mean that it\ncan't ultimately make resource management better and easier.\n\nRemember, we don't randomly visit some skippable pages for no good\nreason in the patch, since the SKIP_PAGES_THRESHOLD stuff is\ncompletely gone. The VM snapshot isn't just a data structure that\nvacuumlazy.c uses as it sees fit -- it's actually more like a set of\ninstructions on which pages to scan, that vacuumlazy.c *must* follow.\nThere is no way that vacuumlazy.c can accidentally pick up a few extra\ndead_items here and there due to concurrent activity that unsets VM\npages. We don't need to leave that to chance -- it is locked in from\nthe start.\n\n> I do remember your foreshadowing in the radix tree thread a while\n> back, and I do think it's an intriguing idea to combine pages-to-scan\n> and dead TIDs in the same data structure. The devil is in the details,\n> of course. It's worth looking into.\n\nOf course.\n\n> Looking at the count of index scans, it's pretty much always\n> \"1\", so even if the current approach could scale above 1GB, \"no\" it\n> wouldn't help to raise that limit.\n\nI agree that multiple index scans are rare. But I also think that\nthey're disproportionately involved in really problematic cases for\nVACUUM. 
That said, I agree that simply making lookups to dead_items as\nfast as possible is the single most important way to improve VACUUM by\nimproving dead_items.\n\n> Furthermore, it doesn't have to anticipate the maximum size, so there\n> is no up front calculation assuming max-tuples-per-page, so it\n> automatically uses less memory for less demanding tables.\n\nThe final number of TIDs doesn't seem like the most interesting\ninformation that VM snapshots could provide us when it comes to\nbuilding the dead_items TID data structure -- the *distribution* of\nTIDs across heap pages seems much more interesting. The \"shape\" can be\nknown ahead of time, at least to some degree. It can help with\ncompression, which will reduce cache misses.\n\nAndres made remarks about memory usage with sparse dead TID patterns\nat this point on the \"Improve dead tuple storage for lazy vacuum\"\nthread:\n\nhttps://postgr.es/m/20210710025543.37sizjvgybemkdus@alap3.anarazel.de\n\nI haven't studied the radix tree stuff in great detail, so I am\nuncertain of how much the VM snapshot concept could help, and where\nexactly it would help. I'm just saying that it seems promising,\nespecially as a way of addressing concerns like this.\n\n--\nPeter Geoghegan\n\n\n",
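A minimal sketch of the precheck idea Peter describes above — not the throwaway patch itself — assuming the VM snapshot is modeled as a set of skipped block numbers:

```python
from bisect import bisect_left

# An all-visible/all-frozen page in the VM snapshot was never scanned,
# so it cannot have contributed entries to dead_items: answer "not dead"
# without touching the (much larger, colder) sorted TID array.
def make_tid_lookup(dead_items, skipped_pages):
    """dead_items: sorted list of (block, offset) pairs; skipped_pages:
    set of block numbers the VM snapshot marked all-visible/all-frozen."""
    def is_dead(tid):
        block, _offset = tid
        if block in skipped_pages:          # cheap, dense precheck
            return False
        idx = bisect_left(dead_items, tid)  # usual binary-search fallback
        return idx < len(dead_items) and dead_items[idx] == tid
    return is_dead

lookup = make_tid_lookup([(5, 1), (5, 7), (9, 2)], skipped_pages={2, 3, 7})
```

The win comes from the precheck structure being far denser than the TID array, so in a mostly all-visible table most ambulkdelete() probes never reach the binary search.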
"msg_date": "Wed, 14 Sep 2022 09:33:17 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Sep 14, 2022 at 11:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Sep 14, 2022 at 3:18 AM John Naylor\n\n> > Furthermore, it doesn't have to anticipate the maximum size, so there\n> > is no up front calculation assuming max-tuples-per-page, so it\n> > automatically uses less memory for less demanding tables.\n>\n> The final number of TIDs doesn't seem like the most interesting\n> information that VM snapshots could provide us when it comes to\n> building the dead_items TID data structure -- the *distribution* of\n> TIDs across heap pages seems much more interesting. The \"shape\" can be\n> known ahead of time, at least to some degree. It can help with\n> compression, which will reduce cache misses.\n\nMy point here was simply that spilling to disk is an admission of\nfailure to utilize memory efficiently and thus shouldn't be a selling\npoint of VM snapshots. Other selling points could still be valid.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 15 Sep 2022 14:09:44 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Sep 15, 2022 at 12:09 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> On Wed, Sep 14, 2022 at 11:33 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The final number of TIDs doesn't seem like the most interesting\n> > information that VM snapshots could provide us when it comes to\n> > building the dead_items TID data structure -- the *distribution* of\n> > TIDs across heap pages seems much more interesting. The \"shape\" can be\n> > known ahead of time, at least to some degree. It can help with\n> > compression, which will reduce cache misses.\n>\n> My point here was simply that spilling to disk is an admission of\n> failure to utilize memory efficiently and thus shouldn't be a selling\n> point of VM snapshots. Other selling points could still be valid.\n\nI was trying to explain the goals of this work in a way that was as\naccessible as possible. It's not easy to get the high level ideas\nacross, let alone all of the details.\n\nIt's true that I have largely ignored the question of how VM snapshots\nwill need to spill up until now. There are several reasons for this,\nmost of which you could probably guess. FWIW it wouldn't be at all\ndifficult to add *some* reasonable spilling behavior very soon; the\nunderlying access patterns are highly sequential and predictable, in\nthe obvious way.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Sep 2022 10:59:55 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, 2022-09-08 at 13:23 -0700, Peter Geoghegan wrote:\n> The new patch unifies the concept of antiwraparound\n> VACUUM with the concept of aggressive VACUUM. Now there is only\n> antiwraparound and regular VACUUM (uh, barring VACUUM FULL). And now\n> antiwraparound VACUUMs are not limited to antiwraparound autovacuums\n> -- a manual VACUUM can also be antiwraparound (that's just the new\n> name for \"aggressive\").\n\nI like this general approach. The existing GUCs have evolved in a\nconfusing way.\n\n> For the most part the\n> skipping/freezing strategy stuff has a good sense of what matters\n> already, and shouldn't need to be guided very often.\n\nI'd like to know more clearly where manual VACUUM fits in here. Will it\nuser a more aggressive strategy than an autovacuum, and how so?\n\n> The patch relegates vacuum_freeze_table_age to a compatibility\n> option,\n> making its default -1, meaning \"just use autovacuum_freeze_max_age\".\n\nThe purpose of vacuum_freeze_table_age seems to be that, if you\nregularly issue VACUUM commands, it will prevent a surprise\nantiwraparound vacuum. Is that still the case?\n\nMaybe it would make more sense to have vacuum_freeze_table_age be a\nfraction of autovacuum_freeze_max_age, and be treated as a maximum so\nthat other intelligence might kick in and freeze sooner?\n\n> This makes things less confusing for users and hackers.\n\nIt may take an adjustment period ;-)\n\n> The details of the skipping-strategy-choice algorithm are still\n> unsettled in v3 (no real change there). ISTM that the important thing\n> is still the high level concepts. Jeff was slightly puzzled by the\n> emphasis placed on the cost model/strategy stuff, at least at one\n> point. Hopefully my intent will be made clearer by the ideas featured\n> in the new patch.\n\nYes, it's clearing things up, but it's still a complex problem.\nThere's:\n\n a. xid age vs the actual amount of deferred work to be done\n b. 
advancing relfrozenxid vs skipping all-visible pages\n c. difficulty in controlling reasonable behavior (e.g.\n vacuum_freeze_min_age often being ignored, freezing\n individual tuples rather than pages)\n\nYour first email described the motivation in terms of (a), but the\npatches seem more focused on (b) and (c).\n\n> The skipping strategy decision making process isn't\n> particularly complicated, but it now looks more like an optimization\n> problem of some kind or other.\n\nThere's another important point here, which is that it gives an\nopportunity to decide to freeze some all-visible pages in a given round\njust to reduce the deferred work, without worrying about advancing\nrelfrozenxid.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 03 Oct 2022 17:41:56 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 5:41 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I like this general approach. The existing GUCs have evolved in a\n> confusing way.\n\nThanks for taking a look!\n\n> > For the most part the\n> > skipping/freezing strategy stuff has a good sense of what matters\n> > already, and shouldn't need to be guided very often.\n>\n> I'd like to know more clearly where manual VACUUM fits in here. Will it\n> user a more aggressive strategy than an autovacuum, and how so?\n\nThere is no change whatsoever in the relationship between manually\nissued VACUUMs and autovacuums. We interpret autovacuum_freeze_max_age\nin almost the same way as HEAD. The only detail that's changed is that\nwe almost always interpret \"freeze_table_age\" as \"just use\nautovacuum_freeze_max_age\" in the patch, rather than as\n\"vacuum_freeze_table_age, though never more than 95% of\nautovacuum_freeze_max_age\", as on HEAD.\n\nMaybe this would be less confusing if I went just a bit further, and\ntotally got rid of the concept that vacuumlazy.c calls aggressive\nVACUUM on HEAD -- then there really would be exactly one kind of\nVACUUM, just like before the visibility map was first introduced back\nin 2009. This would relegate antiwraparound-ness to just another\ncondition that autovacuum.c used to launch VACUUMs.\n\nGiving VACUUM the freedom to choose where and how to freeze and\nadvance relfrozenxid based on both costs and benefits is key here.\nAnything that needlessly imposes a rigid rule on vacuumlazy.c\nundermines that -- it ties VACUUM's hands. The user can still\ninfluence many of the details using high-level GUCs that work at the\ntable level, rather than GUCs that can only work at the level of\nindividual VACUUM operations (that leaves too much to chance). 
Users\nshouldn't want or need to micromanage VACUUM.\n\n> > The patch relegates vacuum_freeze_table_age to a compatibility\n> > option,\n> > making its default -1, meaning \"just use autovacuum_freeze_max_age\".\n>\n> The purpose of vacuum_freeze_table_age seems to be that, if you\n> regularly issue VACUUM commands, it will prevent a surprise\n> antiwraparound vacuum. Is that still the case?\n\nThe user really shouldn't need to do anything with\nvacuum_freeze_table_age at all now. It's mostly just a way for the\nuser to optionally insist on advancing relfrozenxid via a\nantiwraparound/aggressive VACUUM -- like in a manual VACUUM FREEZE.\nEven VACUUM FREEZE shouldn't be necessary very often.\n\n> Maybe it would make more sense to have vacuum_freeze_table_age be a\n> fraction of autovacuum_freeze_max_age, and be treated as a maximum so\n> that other intelligence might kick in and freeze sooner?\n\nThat's kind of how the newly improved skipping strategy stuff works.\nIt gives some weight to table age as one additional factor (based on\nhow close the table's age is to autovacuum_freeze_max_age or its Multi\nequivalent).\n\nIf table age is (say) 60% of autovacuum_freeze_max_age, then VACUUM\nshould be \"60% as aggressive\" as a conventional\naggressive/antiwraparound autovacuum would be. What that actually\nmeans is that the VACUUM will tend to prefer advancing relfrozenxid\nthe closer we get to the cutoff, gradually giving less and less\nconsideration to putting off work as we get closer and closer. When we\nget to 100% then we'll definitely advance relfrozenxid (via a\nconventional aggressive/antiwraparound VACUUM).\n\nThe precise details are unsettled, but I'm pretty sure that the\ngeneral idea is sound. Basically we're replacing\nvacuum_freeze_table_age with a dynamic, flexible version of the same\nbasic idea. 
Now we don't just care about the need to advance\nrelfrozenxid (benefits), though; we also care about costs.\n\n> > This makes things less confusing for users and hackers.\n>\n> It may take an adjustment period ;-)\n\nPerhaps this is more of an aspiration at this point. :-)\n\n> Yes, it's clearing things up, but it's still a complex problem.\n> There's:\n>\n> a. xid age vs the actual amount of deferred work to be done\n> b. advancing relfrozenxid vs skipping all-visible pages\n> c. difficulty in controlling reasonable behavior (e.g.\n> vacuum_freeze_min_age often being ignored, freezing\n> individual tuples rather than pages)\n>\n> Your first email described the motivation in terms of (a), but the\n> patches seem more focused on (b) and (c).\n\nI think that all 3 areas are deeply and hopelessly intertwined.\n\nFor example, vacuum_freeze_min_age is effectively ignored in many\nimportant cases right now precisely because we senselessly skip\nall-visible pages with unfrozen tuples, no matter what -- the problem\nactually comes from the visibility map, which vacuum_freeze_min_age\npredates by quite a few years. So how can you possibly address the\nvacuum_freeze_min_age issues without also significantly revising VM\nskipping behavior? They're practically the same problem!\n\nAnd once you've fixed vacuum_freeze_min_age (and skipping), how can\nyou then pass up the opportunity to advance relfrozenxid early when\ndoing so will require only a little extra work? I'm going to regress\nsome cases if I simply ignore the relfrozenxid factor. Finally, the\ndebt issue is itself a consequence of the other problems.\n\nPerhaps this is an example of the inventor's paradox, where the more\nambitious plan may actually be easier and more likely to succeed than\na more limited plan that just focuses on one immediate problem. All of\nthese problems seem to be a result of adding accretion after accretion\nover the years. A high-level rethink is well overdue. 
We need to\nreturn to basics.\n\n> > The skipping strategy decision making process isn't\n> > particularly complicated, but it now looks more like an optimization\n> > problem of some kind or other.\n>\n> There's another important point here, which is that it gives an\n> opportunity to decide to freeze some all-visible pages in a given round\n> just to reduce the deferred work, without worrying about advancing\n> relfrozenxid.\n\nTrue. Though I think that a strong bias in the direction of advancing\nrelfrozenxid by some amount (not necessarily by very many XIDs) still\nmakes sense, especially when we're already freezing aggressively.\n\n-- \nPeter Geoghegan\n\n\n",
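One plausible shape for the age-driven ramp sketched in this message — purely illustrative, since the message itself says the precise details are unsettled (the 5% base and the 50% knee echo numbers quoted elsewhere in the thread):

```python
def extra_scan_budget(table_age, freeze_max_age, base=0.05):
    """Fraction of rel_pages worth of *extra* scanned pages this VACUUM
    will accept in order to advance relfrozenxid.  A guess at the
    unsettled formula: flat until table age reaches 50% of the max,
    then a linear ramp, and unconditional at 100% (antiwraparound)."""
    frac = table_age / freeze_max_age
    if frac >= 1.0:
        return float("inf")     # must advance relfrozenxid now
    if frac <= 0.5:
        return base             # young table: advance only when nearly free
    # e.g. at 60% of max age the budget has grown well beyond the 5% base
    return base + (frac - 0.5) / 0.5 * (1.0 - base)
```

The point is the blend: the benefit side (table age) gradually overrides the cost side (extra pages to scan), rather than flipping from "never" to "always" at a single vacuum_freeze_table_age cliff.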
"msg_date": "Mon, 3 Oct 2022 20:11:20 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, 2022-10-03 at 20:11 -0700, Peter Geoghegan wrote:\n> True. Though I think that a strong bias in the direction of advancing\n> relfrozenxid by some amount (not necessarily by very many XIDs) still\n> makes sense, especially when we're already freezing aggressively.\n\nTake the case where you load a lot of data in one transaction. After\nthe loading transaction finishes, those new pages will soon be marked\nall-visible.\n\nIn the future, vacuum runs will have to decide what to do. If a vacuum\ndecides to do an aggressive scan to freeze all of those pages, it may\nbe at some unfortunate time and disrupt the workload. But if it skips\nthem all, then it's just deferring the work until it runs up against\nautovacuum_freeze_max_age, which might also be at an unfortunate time.\n\nSo how does your patch series handle this case? I assume there's some\nmechanism to freeze a moderate number of pages without worrying about\nadvancing relfrozenxid?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Mon, 03 Oct 2022 22:13:24 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Oct 3, 2022 at 10:13 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Take the case where you load a lot of data in one transaction. After\n> the loading transaction finishes, those new pages will soon be marked\n> all-visible.\n>\n> In the future, vacuum runs will have to decide what to do. If a vacuum\n> decides to do an aggressive scan to freeze all of those pages, it may\n> be at some unfortunate time and disrupt the workload. But if it skips\n> them all, then it's just deferring the work until it runs up against\n> autovacuum_freeze_max_age, which might also be at an unfortunate time.\n\nPredicting the future accurately is intrinsically hard. We're already\ndoing that today by freezing lazily. I think that we can come up with\na better overall strategy, but there is always a risk that we'll come\nout worse off in some individual cases. I think it's worth it if it\navoids ever really flying off the rails.\n\n> So how does your patch series handle this case? I assume there's some\n> mechanism to freeze a moderate number of pages without worrying about\n> advancing relfrozenxid?\n\nIt mostly depends on whether or not the table exceeds the new\nvacuum_freeze_strategy_threshold GUC in size at the time of the\nVACUUM. This is 4GB by default, at least right now.\n\nThe case where the table size doesn't exceed that threshold yet will\nsee each VACUUM advance relfrozenxid when it happens to be very cheap\nto do so, in terms of the amount of extra scanned_pages. If the number\nof extra scanned_pages is less than 5% of the total table size\n(current rel_pages), then we'll advance relfrozenxid early by making\nsure to scan any all-visible pages.\n\nActually, this scanned_pages threshold starts at 5%. It is usually 5%,\nbut it will eventually start to grow (i.e. make VACUUM freeze eagerly\nmore often) once table age exceeds 50% of autovacuum_freeze_max_age at\nthe start of the VACUUM. 
So the skipping strategy threshold is more or\nless a blend of physical units (heap pages) and logical units (XID\nage).\n\nThen there is the case where it's already a larger table at the point\na given VACUUM begins -- a table that ends up exceeding the same table\nsize threshold, vacuum_freeze_strategy_threshold. When that happens\nwe'll freeze all pages that are going to be marked all-visible as a\nmatter of policy (i.e. use eager freezing strategy), so that the same\npages can be marked all-frozen instead. We won't freeze pages that\naren't full of all-visible tuples (except for LP_DEAD items), unless\nthey have XIDs that are so old that vacuum_freeze_min_age triggers\nfreezing.\n\nOnce a table becomes larger than vacuum_freeze_strategy_threshold,\nVACUUM stops marking pages all-visible in the first place,\nconsistently marking them all-frozen instead. So naturally there just\ncannot be any all-visible pages after the first eager freezing VACUUM\n(actually there are some obscure edge cases that can result in the odd\nall-visible page here or there, but this should be extremely rare, and\nhave only negligible impact).\n\nBigger tables always have pages frozen eagerly, and in practice always\nadvance relfrozenxid early. In other words, eager freezing strategy\nimplies eager freezing strategy -- though not the other way around.\nAgain, these details that may change in the future. My focus is\nvalidating the high level concepts.\n\nSo we avoid big spikes, and try to do the work when it's cheapest.\n\n-- \nPeter Geoghegan\n\n\n",
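The size-based strategy choice described in this message might be sketched like this (the 8kB block size is an assumption, and the 4GB default is the value quoted above; per the message, the full decision also weighs table age):

```python
BLCKSZ = 8192                        # assumed default block size

def choose_freeze_strategy(rel_pages,
                           vacuum_freeze_strategy_threshold=4 << 30):
    """Per-VACUUM freezing strategy as described above: tables larger
    than the threshold freeze pages eagerly -- every page that would be
    marked all-visible is frozen and marked all-frozen instead -- while
    smaller tables keep freezing lazily."""
    if rel_pages * BLCKSZ > vacuum_freeze_strategy_threshold:
        return "eager"
    return "lazy"
```

With the 4GB default, the crossover falls at 524,288 pages; everything below that keeps today's lazy behavior plus the cheap opportunistic relfrozenxid advancement.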
"msg_date": "Mon, 3 Oct 2022 22:45:37 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, 2022-10-03 at 22:45 -0700, Peter Geoghegan wrote:\n> Once a table becomes larger than vacuum_freeze_strategy_threshold,\n> VACUUM stops marking pages all-visible in the first place,\n> consistently marking them all-frozen instead.\n\nWhat are the trade-offs here? Why does it depend on table size?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 04 Oct 2022 10:39:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Oct 4, 2022 at 10:39 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2022-10-03 at 22:45 -0700, Peter Geoghegan wrote:\n> > Once a table becomes larger than vacuum_freeze_strategy_threshold,\n> > VACUUM stops marking pages all-visible in the first place,\n> > consistently marking them all-frozen instead.\n>\n> What are the trade-offs here? Why does it depend on table size?\n\nThat's a great question. The table-level threshold\nvacuum_freeze_strategy_threshold more or less buckets every table into\none of two categories: small tables and big tables. Perhaps this seems\nsimplistic to you. That would be an understandable reaction, given the\ncentral importance of this threshold. The current default of 4GB could\nhave easily been 8GB or perhaps even 16GB instead.\n\nIt's not so much size as the rate of growth over time that matters. We\nreally want to do eager freezing on \"growth tables\", particularly\nappend-only tables. On the other hand we don't want to do useless\nfreezing on small, frequently updated tables, like pgbench_tellers or\npgbench_branches -- those tables may well require zero freezing, and\nyet each VACUUM will advance relfrozenxid to a very recent value\nconsistently (even on Postgres 15). But \"growth\" is hard to capture,\nbecause in general we have to infer things about the future from the\npast, which is difficult and messy.\n\nSince it's hard to capture \"growth table vs fixed size table\"\ndirectly, we use table size as a proxy. It's far from perfect, but I\nthink that it will work quite well in practice because most individual\ntables simply never get very large. It's very common for a relatively\nsmall number of tables to consistently grow, without bound (perhaps\nnot strictly append-only tables, but tables where nothing is ever\ndeleted and inserts keep happening). 
So a simplistic threshold\n(combined with dynamic per-page decisions about freezing) should be\nenough to avoid most of the downside of eager freezing. In particular,\nwe will still freeze lazily in tables where it's obviously very\nunlikely to be worth it.\n\nIn general I think that being correct on average is overrated. It's\nmore important to always avoid being dramatically wrong -- especially\nif there is no way to course correct in the next VACUUM. Although I\nthink that we have a decent chance of coming out ahead by every\navailable metric, that isn't really the goal. Why should performance\nstability not have some cost, at least in some cases? I want to keep\nthe cost as low as possible (often \"negative cost\" relative to\nPostgres 15), but overall I am consciously making a trade-off. There\nare downsides.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Oct 2022 11:09:31 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 2022-10-04 at 11:09 -0700, Peter Geoghegan wrote:\n> So a simplistic threshold\n> (combined with dynamic per-page decisions about freezing) should be\n> enough to avoid most of the downside of eager freezing.\n\n...\n\n> I want to keep\n> the cost as low as possible (often \"negative cost\" relative to\n> Postgres 15), but overall I am consciously making a trade-off. There\n> are downsides.\n\nI am fine with that, but I'd like us all to understand what the\ndownsides are.\n\nIf I understand correctly:\n\n1. Eager freezing (meaning to freeze at the same time as setting all-\nvisible) causes a modest amount of WAL traffic, hopefully before the\nnext checkpoint so we can avoid FPIs. Lazy freezing (meaning set all-\nvisible but don't freeze) defers the work, and it might never need to\nbe done; but if it does, it can cause spikes at unfortunate times and\nis more likely to generate more FPIs.\n\n2. You're trying to mitigate the downsides of eager freezing by:\n a. when freezing a tuple, eagerly freeze other tuples on that page\n b. optimize WAL freeze records\n\n3. You're trying to capture the trade-off in #1 by using the table size\nas a proxy. Deferred work is only really a problem for big tables, so\nthat's where you use eager freezing. But maybe we can just always use\neager freezing?:\n a. You're mitigating the WAL work for freezing.\n b. A lot of people run with checksums on, meaning that setting the\nall-visible bit requires WAL work anyway, and often FPIs.\n c. All-visible is conceptually similar to freezing, but less\nimportant, and it feels more and more like the design concept of all-\nvisible isn't carrying its weight.\n d. (tangent) I had an old patch[1] that actually removed\nPD_ALL_VISIBLE (the page bit, not the VM bit), which was rejected, but\nperhaps its time has come?\n\nRegards,\n\tJeff Davis\n\n\n[1]\nhttps://www.postgresql.org/message-id/1353551097.11440.128.camel%40sussancws0025\n\n\n\n",
"msg_date": "Tue, 04 Oct 2022 19:59:49 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Oct 4, 2022 at 7:59 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I am fine with that, but I'd like us all to understand what the\n> downsides are.\n\nAlthough I'm sure that there must be one case that loses measurably,\nit's not particularly obvious where to start looking for one. I mean\nit's easy to imagine individual pages that we lose on, but a practical\ntest case where most of the pages are like that reliably is harder to\nimagine.\n\n> If I understand correctly:\n>\n> 1. Eager freezing (meaning to freeze at the same time as setting all-\n> visible) causes a modest amount of WAL traffic, hopefully before the\n> next checkpoint so we can avoid FPIs. Lazy freezing (meaning set all-\n> visible but don't freeze) defers the work, and it might never need to\n> be done; but if it does, it can cause spikes at unfortunate times and\n> is more likely to generate more FPIs.\n\nLazy freezing means to freeze every eligible tuple (every XID <\nOldestXmin) when one or more XIDs are before FreezeLimit. Eager\nfreezing means freezing every eligible tuple when the page is about to\nbe set all-visible, or whenever lazy freezing would trigger freezing.\n\nEager freezing tends to avoid big spikes in larger tables, which is\nvery important. It can sometimes be cheaper and better in every way\nthan lazy freezing. Though lazy freezing sometimes retains an\nadvantage by avoiding freezing that is never going to be needed\naltogether, typically only in small tables.\n\nLazy freezing is fairly similar to what we do on HEAD now -- though\nit's not identical. It's still \"page level freezing\". It has lazy\ncriteria for triggering page freezing.\n\n> 2. You're trying to mitigate the downsides of eager freezing by:\n> a. when freezing a tuple, eagerly freeze other tuples on that page\n> b. optimize WAL freeze records\n\nSort of.\n\nBoth of these techniques apply to eager freezing too, in fact. 
It's\njust that eager freezing is likely to do the bulk of all freezing that\nactually goes ahead. It'll disproportionately be helped by these\ntechniques because it'll do most actual freezing that goes ahead (even\nwhen most VACUUM operations use the lazy freezing strategy, which is\nprobably the common case -- just because lazy freezing freezes\nlazily).\n\n> 3. You're trying to capture the trade-off in #1 by using the table size\n> as a proxy. Deferred work is only really a problem for big tables, so\n> that's where you use eager freezing.\n\nRight.\n\n> But maybe we can just always use\n> eager freezing?:\n\nThat doesn't seem like a bad idea, though it might be tricky to put\ninto practice. It might be possible to totally unite the concept of\nall-visible and all-frozen pages in the scope of this work. But there\nare surprisingly many tricky details involved. I'm not surprised that\nyou're suggesting this -- it basically makes sense to me. It's just\nthe practicalities that I worry about here.\n\n> a. You're mitigating the WAL work for freezing.\n\nI don't see why this would be true. Lazy vs Eager are exactly the same\nfor a given page at the point that freezing is triggered. We'll freeze\nall eligible tuples (often though not always every tuple), or none at\nall.\n\nLazy vs Eager describe the policy for deciding to freeze a page, but\ndo not affect the actual execution steps taken once we decide to\nfreeze.\n\n> b. A lot of people run with checksums on, meaning that setting the\n> all-visible bit requires WAL work anyway, and often FPIs.\n\nThe idea of rolling the WAL records into one does seem appealing, but\nwe'd still need the original WAL record to set a page all-visible in\nVACUUM's second heap pass (only setting a page all-visible in the\nfirst heap pass could be optimized by making the FREEZE_PAGE WAL\nrecord mark the page all-visible too). 
Or maybe we'd roll that into\nthe VACUUM WAL record at the same time.\n\nIn any case the second heap pass would have to have a totally\ndifferent WAL logging strategy to the first heap pass. Not\ninsurmountable, but not exactly an easy thing to do in passing either.\n\n> c. All-visible is conceptually similar to freezing, but less\n> important, and it feels more and more like the design concept of all-\n> visible isn't carrying its weight.\n\nWell, not quite -- at least not on the VM side itself.\n\nThere are cases where heap_lock_tuple() will update a tuple's xmax,\nreplacing it with a new Multi. This will necessitate clearing the\npage's all-frozen bit in the VM -- but the all-visible bit will stay\nset. This is why it's possible for small numbers of all-visible pages\nto appear even in large tables that have been eagerly frozen.\n\n> d. (tangent) I had an old patch[1] that actually removed\n> PD_ALL_VISIBLE (the page bit, not the VM bit), which was rejected, but\n> perhaps its time has come?\n\nI remember that pgCon developer meeting well. :-)\n\nIf anything your original argument for getting rid of PD_ALL_VISIBLE\nis weakened by the proposal to merge together the WAL records for\nfreezing and for setting a heap page all visible. You'd know for sure\nthat the page will be dirtied when such a WAL record needed to be\nwritten, so there is actually no reason to care about dirtying the\npage. No?\n\nI'm in favor of reducing the number of WAL records required in common\ncases if at all possible -- purely because the generic WAL record\noverhead of having an extra WAL record does probably add to the WAL\noverhead for work performed in lazy_scan_prune(). But it seems like\nseparate work to me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 4 Oct 2022 21:00:46 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Sep 8, 2022 at 1:23 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It might make sense to go further in the same direction by making\n> \"regular vs aggressive/antiwraparound\" into a *strict* continuum. In\n> other words, it might make sense to get rid of the two remaining cases\n> where VACUUM conditions its behavior on whether this VACUUM operation\n> is antiwraparound/aggressive or not.\n\nI decided to go ahead with this in the attached revision, v5. This\nrevision totally gets rid of the general concept of discrete\naggressive/non-aggressive modes for each VACUUM operation (see\n\"v5-0004-Make-VACUUM-s-aggressive-behaviors-continuous.patch\" and its\ncommit message). My new approach turned out to be simpler than the\nprevious half measures that I described as \"unifying aggressive and\nantiwraparound\" (which itself first appeared in v3).\n\nI now wish that I had all of these pieces in place for v1, since this\nwas the direction I was thinking of all along -- that might have made\nlife easier for reviewers like Jeff. What we have in v5 is what I had\nin mind all along, which turns out to have only a little extra code\nanyway. It might have been less confusing if I'd started this thread\nwith something like v5 -- the story I need to tell would have been\nsimpler that way. This is pretty much the end point I had in mind.\n\nNote that we still retain what were previously \"aggressive only\"\nbehaviors. We only remove \"aggressive\" as a distinct mode of operation\nthat exclusively applies the aggressive behaviors. We're now selective\nin how we apply each of the behaviors, based on the needs of the\ntable. We want to behave in a way that's proportionate to the problem\nat hand, which is made easy by not tying anything to a discrete mode\nof operation. 
It's a false dichotomy; why should we ever have only one\nreason for running VACUUM, that's determined up front?\n\nThere are still antiwraparound autovacuums in v5, but that is really\njust another way that autovacuum can launch an autovacuum worker (much\nlike it was before the introduction of the visibility map in 8.4) --\nboth conceptually, and in terms of how the code works in vacuumlazy.c.\nIn practice an antiwraparound autovacuum is guaranteed to advance\nrelfrozenxid in roughly the same way as on HEAD (otherwise what's the\npoint?), but that doesn't make the VACUUM operation itself special in\nany way. Besides, antiwraparound autovacuums will naturally be rare,\nbecause there are many more opportunities for a VACUUM to advance\nrelfrozenxid \"early\" now (only \"early\" relative to how it would work\non early Postgres versions). It's already clear that having\nantiwraparound autovacuums and aggressive mode VACUUMs as two separate\nconcepts that are closely associated has some problems [1]. Formally\nmaking antiwraparound autovacuums just another way to launch a VACUUM\nvia autovacuum seems quite useful to me.\n\nFor the most part users are expected to just take relfrozenxid\nadvancement for granted now. They should mostly be able to assume that\nVACUUM will do whatever is required to keep it sufficiently current\nover time. They can influence VACUUM's behavior, but that mostly works\nat the level of the table (not the level of any individual VACUUM\noperation). The freezing and skipping strategy stuff should do what is\nnecessary to keep up in the long run. We don't want to put too much\nemphasis on relfrozenxid in the short run, because it isn't a reliable\nproxy for how we've kept up with the physical work of freezing --\nthat's what really matters. It should be okay to \"fall behind on table\nage\" in the short run, provided we don't fall behind on the physical\nwork of freezing. 
Those two things shouldn't be conflated.\n\nWe now use a separate pair of XID/MXID-based cutoffs to determine\nwhether or not we're willing to wait for a cleanup lock the hard way\n(which can happen in any VACUUM, since of course there is no longer\nany special VACUUM with special behaviors). The new pair of cutoffs\nreplace the use of FreezeLimit/MultiXactCutoff by lazy_scan_noprune\n(those are now only used to decide on what to freeze inside\nlazy_scan_prune). Same concept, but with a different, independent\ntimeline. This was necessary just to get an existing isolation test\n(vacuum-no-cleanup-lock) to continue to work. But it just makes sense\nto have a different timeline for a completely different behavior. And\nit'll be more robust.\n\nIt's a really bad idea for VACUUM to try to wait indefinitely long for\na cleanup lock, since that's totally outside of its control. It\ntypically won't take very long at all for VACUUM to acquire a cleanup\nlock, of course, but that is beside the point -- who really cares\nwhat's true on average, for something like this? Sometimes it'll take\nhours to acquire a cleanup lock, and there is no telling when that\nmight happen! And so pausing VACUUM/freezing of all other pages just\nto freeze one page makes little sense. Waiting for a cleanup lock\nbefore we really need to is just an overreaction, which risks making\nthe situation worse. The cure must not be worse than the disease.\n\nThis revision also resolves problems with freezing MultiXactIds too\nlazily [2]. We now always trigger page level freezing in the event of\nencountering a Multi. This is more consistent with the behavior on\nHEAD, where we can easily process a Multi well before the cutoff\nrepresented by vacuum_multixact_freeze_min_age (e.g., we notice that a\nMulti has no members still running, making it safe to remove before\nthe cutoff is reached).\n\nAlso attaching a prebuilt copy of the \"routine vacuuming\" docs as of\nv5. 
This is intended to be a convenience for reviewers, or anybody\nwith a general interest in the patch series. The docs certainly still\nneed work, but I feel that I'm making progress on that side of things\n(especially in this latest revision). Making life easier for DBAs is\nthe single most important goal of this work, so the user docs are of\ncentral importance. The current \"Routine Vacuuming\" docs have lots of\nproblems, but to some extent the problems are with the concepts\nthemselves.\n\n[1] https://postgr.es/m/CAH2-Wz=DJAokY_GhKJchgpa8k9t_H_OVOvfPEn97jGNr9W=deg@mail.gmail.com\n[2] https://postgr.es/m/CAH2-Wz=+B5f1izRDPYKw+sUgOr6=AkWXp2NikU5cub0ftbRQhA@mail.gmail.com\n--\nPeter Geoghegan",
"msg_date": "Mon, 17 Oct 2022 16:52:11 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Note that this fails under -fsanitize=align\n\nSubject: [PATCH v5 2/6] Teach VACUUM to use visibility map snapshot.\n\nperforming post-bootstrap initialization ...\n../src/backend/access/heap/visibilitymap.c:482:38: runtime error: load of misaligned address 0x5559e1352424 for type 'uint64', which requires 8 byte alignment\n\n> *all_visible += pg_popcount64(umap[i] & VISIBLE_MASK64);\n\n\n",
"msg_date": "Thu, 10 Nov 2022 21:44:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 7:44 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> performing post-bootstrap initialization ...\n> ../src/backend/access/heap/visibilitymap.c:482:38: runtime error: load of misaligned address 0x5559e1352424 for type 'uint64', which requires 8 byte alignment\n\nThis issue is fixed in the attached revision, v6. I now avoid breaking\nalignment-picky platforms in visibilitymap.c by using PGAlignedBlock\nin the vm snapshot struct (this replaces the raw char buffer used in\nearlier revisions).\n\nPosting v6 will also keep CFTester happy. v5 no longer applies cleanly\ndue to conflicts caused by today's \"Deduplicate freeze plans in freeze\nWAL records\" commit.\n\nNo other changes in v6 that are worth noting here.\n\nThanks\n--\nPeter Geoghegan",
"msg_date": "Tue, 15 Nov 2022 19:02:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 19:02:12 -0800, Peter Geoghegan wrote:\n> From 352867c5027fae6194ab1c6480cd326963e201b1 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Sun, 12 Jun 2022 15:46:08 -0700\n> Subject: [PATCH v6 1/6] Add page-level freezing to VACUUM.\n> \n> Teach VACUUM to decide on whether or not to trigger freezing at the\n> level of whole heap pages, not individual tuple fields. OldestXmin is\n> now treated as the cutoff for freezing eligibility in all cases, while\n> FreezeLimit is used to trigger freezing at the level of each page (we\n> now freeze all eligible XIDs on a page when freezing is triggered for\n> the page).\n> \n> This approach decouples the question of _how_ VACUUM could/will freeze a\n> given heap page (which of its XIDs are eligible to be frozen) from the\n> question of whether it actually makes sense to do so right now.\n> \n> Just adding page-level freezing does not change all that much on its\n> own: VACUUM will still typically freeze very lazily, since we're only\n> forcing freezing of all of a page's eligible tuples when we decide to\n> freeze at least one (on the basis of XID age and FreezeLimit). 
For now\n> VACUUM still freezes everything almost as lazily as it always has.\n> Later work will teach VACUUM to apply an alternative eager freezing\n> strategy that triggers page-level freezing earlier, based on additional\n> criteria.\n> ---\n> src/include/access/heapam.h | 42 +++++-\n> src/backend/access/heap/heapam.c | 199 +++++++++++++++++----------\n> src/backend/access/heap/vacuumlazy.c | 95 ++++++++-----\n> 3 files changed, 222 insertions(+), 114 deletions(-)\n> \n> diff --git a/src/include/access/heapam.h b/src/include/access/heapam.h\n> index ebe723abb..ea709bf1b 100644\n> --- a/src/include/access/heapam.h\n> +++ b/src/include/access/heapam.h\n> @@ -112,6 +112,38 @@ typedef struct HeapTupleFreeze\n> \tOffsetNumber offset;\n> } HeapTupleFreeze;\n> \n> +/*\n> + * State used by VACUUM to track what the oldest extant XID/MXID will become\n> + * when determing whether and how to freeze a page's heap tuples via calls to\n> + * heap_prepare_freeze_tuple.\n\nPerhaps this could say something like \"what the oldest extant XID/MXID\ncurrently is and what it would be if we decide to freeze the page\" or such?\n\n\n> + * The relfrozenxid_out and relminmxid_out fields are the current target\n> + * relfrozenxid and relminmxid for VACUUM caller's heap rel. 
Any and all\n\n\"VACUUM caller's heap rel.\" could stand to be rephrased.\n\n\n> + * unfrozen XIDs or MXIDs that remain in caller's rel after VACUUM finishes\n> + * _must_ have values >= the final relfrozenxid/relminmxid values in pg_class.\n> + * This includes XIDs that remain as MultiXact members from any tuple's xmax.\n> + * Each heap_prepare_freeze_tuple call pushes back relfrozenxid_out and/or\n> + * relminmxid_out as needed to avoid unsafe values in rel's authoritative\n> + * pg_class tuple.\n> + *\n> + * Alternative \"no freeze\" variants of relfrozenxid_nofreeze_out and\n> + * relminmxid_nofreeze_out must also be maintained for !freeze pages.\n> + */\n\nrelfrozenxid_nofreeze_out isn't really a \"no freeze variant\" :)\n\nI think it might be better to just always maintain the nofreeze state.\n\n\n> +typedef struct HeapPageFreeze\n> +{\n> +\t/* Is heap_prepare_freeze_tuple caller required to freeze page? */\n> +\tbool\t\tfreeze;\n\ns/freeze/freeze_required/?\n\n\n> +\t/* Values used when page is to be frozen based on freeze plans */\n> +\tTransactionId relfrozenxid_out;\n> +\tMultiXactId relminmxid_out;\n> +\n> +\t/* Used by caller for '!freeze' pages */\n> +\tTransactionId relfrozenxid_nofreeze_out;\n> +\tMultiXactId relminmxid_nofreeze_out;\n> +\n> +} HeapPageFreeze;\n> +\n\nGiven the number of parameters to heap_prepare_freeze_tuple, why don't we pass\nin more of them in via HeapPageFreeze?\n\n\n> /* ----------------\n> *\t\tfunction prototypes for heap access method\n> *\n> @@ -180,17 +212,17 @@ extern void heap_inplace_update(Relation relation, HeapTuple tuple);\n> extern bool heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> \t\t\t\t\t\t\t\t\t TransactionId relfrozenxid, TransactionId relminmxid,\n> \t\t\t\t\t\t\t\t\t TransactionId cutoff_xid, TransactionId cutoff_multi,\n> +\t\t\t\t\t\t\t\t\t TransactionId limit_xid, MultiXactId limit_multi,\n> \t\t\t\t\t\t\t\t\t HeapTupleFreeze *frz, bool *totally_frozen,\n> -\t\t\t\t\t\t\t\t\t TransactionId 
*relfrozenxid_out,\n> -\t\t\t\t\t\t\t\t\t MultiXactId *relminmxid_out);\n> +\t\t\t\t\t\t\t\t\t HeapPageFreeze *xtrack);\n\nWhat does 'xtrack' stand for? Xid Tracking?\n\n\n> * VACUUM caller must assemble HeapFreezeTuple entries for every tuple that we\n> * returned true for when called. A later heap_freeze_execute_prepared call\n> - * will execute freezing for caller's page as a whole.\n> + * will execute freezing for caller's page as a whole. Caller should also\n> + * initialize xtrack fields for page as a whole before calling here with first\n> + * tuple for the page. See page_frozenxid_tracker comments.\n\ns/should/need to/?\n\npage_frozenxid_tracker appears to be a dangling pointer.\n\n\n> +\t * VACUUM calls limit_xid \"FreezeLimit\", and cutoff_xid \"OldestXmin\".\n> +\t * (limit_multi is \"MultiXactCutoff\", and cutoff_multi \"OldestMxact\".)\n\nHm. Perhaps we should just rename them if it requires this kind of\nexplanation? They're really not good names.\n\n\n\n> @@ -6524,8 +6524,8 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> \t\telse\n> \t\t{\n> \t\t\t/* xmin to remain unfrozen. Could push back relfrozenxid_out. 
*/\n> -\t\t\tif (TransactionIdPrecedes(xid, *relfrozenxid_out))\n> -\t\t\t\t*relfrozenxid_out = xid;\n> +\t\t\tif (TransactionIdPrecedes(xid, xtrack->relfrozenxid_out))\n> +\t\t\t\txtrack->relfrozenxid_out = xid;\n> \t\t}\n> \t}\n\nCould use TransactionIdOlder().\n\n\n> @@ -6563,8 +6564,11 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> \t\t\t */\n> \t\t\tAssert(!freeze_xmax);\n> \t\t\tAssert(TransactionIdIsValid(newxmax));\n> -\t\t\tif (TransactionIdPrecedes(newxmax, *relfrozenxid_out))\n> -\t\t\t\t*relfrozenxid_out = newxmax;\n> +\t\t\tAssert(heap_tuple_would_freeze(tuple, limit_xid, limit_multi,\n> +\t\t\t\t\t\t\t\t\t\t &xtrack->relfrozenxid_nofreeze_out,\n> +\t\t\t\t\t\t\t\t\t\t &xtrack->relminmxid_nofreeze_out));\n> +\t\t\tif (TransactionIdPrecedes(newxmax, xtrack->relfrozenxid_out))\n> +\t\t\t\txtrack->relfrozenxid_out = newxmax;\n\nPerhaps the Assert(heap_tuple_would_freeze()) bit could be handled once at the\nend of the routine, for all paths?\n\n\n> @@ -6731,18 +6751,36 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> \t\t\telse\n> \t\t\t\tfrz->frzflags |= XLH_FREEZE_XVAC;\n> \n> -\t\t\t/*\n> -\t\t\t * Might as well fix the hint bits too; usually XMIN_COMMITTED\n> -\t\t\t * will already be set here, but there's a small chance not.\n> -\t\t\t */\n> +\t\t\t/* Set XMIN_COMMITTED defensively */\n> \t\t\tAssert(!(tuple->t_infomask & HEAP_XMIN_INVALID));\n> \t\t\tfrz->t_infomask |= HEAP_XMIN_COMMITTED;\n> +\n> +\t\t\t/*\n> +\t\t\t * Force freezing any page with an xvac to keep things simple.\n> +\t\t\t * This allows totally_frozen tracking to ignore xvac.\n> +\t\t\t */\n> \t\t\tchanged = true;\n> +\t\t\txtrack->freeze = true;\n> \t\t}\n> \t}\n\nOh - I totally didn't realize that ->freeze is an out parameter. 
Seems a bit\nodd to have the other fields suffixed with _out but not this one?\n\n\n\n> @@ -6786,13 +6824,13 @@ heap_execute_freeze_tuple(HeapTupleHeader tuple, HeapTupleFreeze *frz)\n> */\n> void\n> heap_freeze_execute_prepared(Relation rel, Buffer buffer,\n> -\t\t\t\t\t\t\t TransactionId FreezeLimit,\n> +\t\t\t\t\t\t\t TransactionId OldestXmin,\n> \t\t\t\t\t\t\t HeapTupleFreeze *tuples, int ntuples)\n> {\n> \tPage\t\tpage = BufferGetPage(buffer);\n> \n> \tAssert(ntuples > 0);\n> -\tAssert(TransactionIdIsValid(FreezeLimit));\n> +\tAssert(TransactionIdIsValid(OldestXmin));\n> \n> \tSTART_CRIT_SECTION();\n> \n> @@ -6822,11 +6860,10 @@ heap_freeze_execute_prepared(Relation rel, Buffer buffer,\n> \n> \t\t/*\n> \t\t * latestRemovedXid describes the latest processed XID, whereas\n> -\t\t * FreezeLimit is (approximately) the first XID not frozen by VACUUM.\n> -\t\t * Back up caller's FreezeLimit to avoid false conflicts when\n> -\t\t * FreezeLimit is precisely equal to VACUUM's OldestXmin cutoff.\n> +\t\t * OldestXmin is the first XID not frozen by VACUUM. Back up caller's\n> +\t\t * OldestXmin to avoid false conflicts.\n> \t\t */\n> -\t\tlatestRemovedXid = FreezeLimit;\n> +\t\tlatestRemovedXid = OldestXmin;\n> \t\tTransactionIdRetreat(latestRemovedXid);\n> \n> \t\txlrec.latestRemovedXid = latestRemovedXid;\n\nWon't using OldestXmin instead of FreezeLimit potentially cause additional\nconflicts?
Is there any reason to not compute an accurate value?\n\n\n> @@ -1634,27 +1639,23 @@ retry:\n> \t\t\tcontinue;\n> \t\t}\n> \n> -\t\t/*\n> -\t\t * LP_DEAD items are processed outside of the loop.\n> -\t\t *\n> -\t\t * Note that we deliberately don't set hastup=true in the case of an\n> -\t\t * LP_DEAD item here, which is not how count_nondeletable_pages() does\n> -\t\t * it -- it only considers pages empty/truncatable when they have no\n> -\t\t * items at all (except LP_UNUSED items).\n> -\t\t *\n> -\t\t * Our assumption is that any LP_DEAD items we encounter here will\n> -\t\t * become LP_UNUSED inside lazy_vacuum_heap_page() before we actually\n> -\t\t * call count_nondeletable_pages(). In any case our opinion of\n> -\t\t * whether or not a page 'hastup' (which is how our caller sets its\n> -\t\t * vacrel->nonempty_pages value) is inherently race-prone. It must be\n> -\t\t * treated as advisory/unreliable, so we might as well be slightly\n> -\t\t * optimistic.\n> -\t\t */\n> \t\tif (ItemIdIsDead(itemid))\n> \t\t{\n> +\t\t\t/*\n> +\t\t\t * Delay unsetting all_visible until after we have decided on\n> +\t\t\t * whether this page should be frozen. We need to test \"is this\n> +\t\t\t * page all_visible, assuming any LP_DEAD items are set LP_UNUSED\n> +\t\t\t * in final heap pass?\" to reach a decision. all_visible will be\n> +\t\t\t * unset before we return, as required by lazy_scan_heap caller.\n> +\t\t\t *\n> +\t\t\t * Deliberately don't set hastup for LP_DEAD items. We make the\n> +\t\t\t * soft assumption that any LP_DEAD items encountered here will\n> +\t\t\t * become LP_UNUSED later on, before count_nondeletable_pages is\n> +\t\t\t * reached. 
Whether the page 'hastup' is inherently race-prone.\n> +\t\t\t * It must be treated as unreliable by caller anyway, so we might\n> +\t\t\t * as well be slightly optimistic about it.\n> +\t\t\t */\n> \t\t\tdeadoffsets[lpdead_items++] = offnum;\n> -\t\t\tprunestate->all_visible = false;\n> -\t\t\tprunestate->has_lpdead_items = true;\n> \t\t\tcontinue;\n> \t\t}\n\nWhat does this have to do with the rest of the commit? And why are we doing\nthis?\n\n\n> @@ -1782,11 +1783,13 @@ retry:\n> \t\tif (heap_prepare_freeze_tuple(tuple.t_data,\n> \t\t\t\t\t\t\t\t\t vacrel->relfrozenxid,\n> \t\t\t\t\t\t\t\t\t vacrel->relminmxid,\n> +\t\t\t\t\t\t\t\t\t vacrel->OldestXmin,\n> +\t\t\t\t\t\t\t\t\t vacrel->OldestMxact,\n> \t\t\t\t\t\t\t\t\t vacrel->FreezeLimit,\n> \t\t\t\t\t\t\t\t\t vacrel->MultiXactCutoff,\n> \t\t\t\t\t\t\t\t\t &frozen[tuples_frozen],\n> \t\t\t\t\t\t\t\t\t &tuple_totally_frozen,\n> -\t\t\t\t\t\t\t\t\t &NewRelfrozenXid, &NewRelminMxid))\n> +\t\t\t\t\t\t\t\t\t &xtrack))\n> \t\t{\n> \t\t\t/* Save prepared freeze plan for later */\n> \t\t\tfrozen[tuples_frozen++].offset = offnum;\n> @@ -1807,9 +1810,33 @@ retry:\n> \t * that will need to be vacuumed in indexes later, or a LP_NORMAL tuple\n> \t * that remains and needs to be considered for freezing now (LP_UNUSED and\n> \t * LP_REDIRECT items also remain, but are of no further interest to us).\n> +\t *\n> +\t * Freeze the page when heap_prepare_freeze_tuple indicates that at least\n> +\t * one XID/MXID from before FreezeLimit/MultiXactCutoff is present.\n> \t */\n> -\tvacrel->NewRelfrozenXid = NewRelfrozenXid;\n> -\tvacrel->NewRelminMxid = NewRelminMxid;\n> +\tif (xtrack.freeze || tuples_frozen == 0)\n> +\t{\n> +\t\t/*\n> +\t\t * We're freezing the page. 
> +\t\t * Our final NewRelfrozenXid doesn't need to\n> +\t\t * be affected by the XIDs that are just about to be frozen anyway.\n\nSeems quite confusing to enter a block described as \"We're freezing the\npage.\" when we're not freezing anything (tuples_frozen == 0).\n\n\n> +\t\t * Note: although we're freezing all eligible tuples on this page, we\n> +\t\t * might not need to freeze anything (might be zero eligible tuples).\n> +\t\t */\n> +\t\tvacrel->NewRelfrozenXid = xtrack.relfrozenxid_out;\n> +\t\tvacrel->NewRelminMxid = xtrack.relminmxid_out;\n> +\t\tfreeze_all_eligible = true;\n\nI don't really get what freeze_all_eligible is trying to do.\n\n\n> #ifdef USE_ASSERT_CHECKING\n> \t/* Note that all_frozen value does not matter when !all_visible */\n> -\tif (prunestate->all_visible)\n> +\tif (prunestate->all_visible && lpdead_items == 0)\n> \t{\n> \t\tTransactionId cutoff;\n> \t\tbool\t\tall_frozen;\n> @@ -1849,8 +1876,7 @@ retry:\n> \t\tif (!heap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen))\n> \t\t\tAssert(false);\n\nNot related to this change, but why isn't this just\nAssert(heap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen))?\n\n\n\n> From 8f3b6237affda15101ffb0b88787bfd6bb92e32f Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 18 Jul 2022 14:35:44 -0700\n> Subject: [PATCH v6 2/6] Teach VACUUM to use visibility map snapshot.\n> \n> Acquire an in-memory immutable \"snapshot\" of the target rel's visibility\n> map at the start of each VACUUM, and use the snapshot to determine when\n> and how VACUUM will skip pages.\n\nThis should include a description of the memory usage effects.\n\n\n> This has significant advantages over the previous approach of using the\n> authoritative VM fork to decide on which pages to skip.
> The number of\n> heap pages processed will no longer increase when some other backend\n> concurrently modifies a skippable page, since VACUUM will continue to\n> see the page as skippable (which is correct because the page really is\n> still skippable \"relative to VACUUM's OldestXmin cutoff\").\n\nWhy is it an advantage for the number of pages to not increase?\n\n\n> It also\n> gives VACUUM reliable information about how many pages will be scanned,\n> before its physical heap scan even begins. That makes it easier to\n> model the costs that VACUUM incurs using a top-down, up-front approach.\n> \n> Non-aggressive VACUUMs now make an up-front choice about VM skipping\n> strategy: they decide whether to prioritize early advancement of\n> relfrozenxid (eager behavior) over avoiding work by skipping all-visible\n> pages (lazy behavior). Nothing about the details of how lazy_scan_prune\n> freezes changes just yet, though a later commit will add the concept of\n> freezing strategies.\n> \n> Non-aggressive VACUUMs now explicitly commit to (or decide against)\n> early relfrozenxid advancement up-front.\n\nWhy?\n\n\n> VACUUM will now either scan\n> every all-visible page, or none at all. This replaces lazy_scan_skip's\n> SKIP_PAGES_THRESHOLD behavior, which was intended to enable early\n> relfrozenxid advancement (see commit bf136cf6), but left many of the\n> details to chance.\n\nThe main goal according to bf136cf6 was to avoid defeating OS readahead, so I\nthink it should be mentioned here.\n\nTo me this is something that ought to be changed separately from the rest of\nthis commit.\n\n\n> TODO: We don't spill VM snapshots to disk just yet (resource management\n> aspects of VM snapshots still need work). For now a VM snapshot is just\n> a copy of the VM pages stored in local buffers allocated by palloc().\n\nHEAPBLOCKS_PER_PAGE is 32672 with the defaults. The maximum relation size is\n2**32 - 1 blocks. So the max VM size is 131458 pages, a bit more than 1GB.
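Sanity-checking that arithmetic with a quick throwaway script (assumes the default 8kB BLCKSZ and a 24 byte page header, i.e. MAPSIZE = 8168 usable bytes per VM page; the constants are restated by hand here, not taken from the source):

```python
BLCKSZ = 8192
MAPSIZE = BLCKSZ - 24             # usable bytes per visibility map page
BITS_PER_HEAPBLOCK = 2            # one all-visible bit + one all-frozen bit

heapblocks_per_page = MAPSIZE * 8 // BITS_PER_HEAPBLOCK
assert heapblocks_per_page == 32672

max_heap_blocks = 2**32 - 1       # relation size caps out just short of 2**32
max_vm_pages = -(-max_heap_blocks // heapblocks_per_page)   # ceiling division
assert max_vm_pages == 131458

print(max_vm_pages * BLCKSZ / 1024**3)   # ~1.003 GiB held in local buffers
```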
Is\nthat correct?\n\nFor large relations that are already nearly all-frozen this does add a\nnoticable amount of overhead, whether spilled to disk or not. Of course\nthey're also not going to be vacuumed super often, but ...\n\nPerhaps worth turning the VM into a range based description for the snapshot,\ngiven it's a readonly datastructure in local memory? And we don't necessarily\nneed the all-frozen and all-visible in memory, one should suffice? We don't\neven need random access, so it could easily be allocated incrementally, rather\nthan one large allocation.\n\nHard to imagine anybody having a multi-TB table without \"runs\" of\nall-visible/all-frozen. I don't think it'd be worth worrying about patterns\nthat'd be inefficient in a range representation.\n\n\n\n> +\t/*\n> +\t * VACUUM must scan all pages that might have XIDs < OldestXmin in tuple\n> +\t * headers to be able to safely advance relfrozenxid later on. There is\n> +\t * no good reason to scan any additional pages. (Actually we might opt to\n> +\t * skip all-visible pages. Either way we won't scan pages for no reason.)\n> +\t *\n> +\t * Now that OldestXmin and rel_pages are acquired, acquire an immutable\n> +\t * snapshot of the visibility map as well. lazy_scan_skip works off of\n> +\t * the vmsnap, not the authoritative VM, which can continue to change.\n> +\t * Pages that lazy_scan_heap will scan are fixed and known in advance.\n\nHm. It's a bit sad to compute the snapshot after determining OldestXmin.\n\nWe probably should refresh OldestXmin periodically. That won't allow us to get\na more aggressive relfrozenxid, but it'd allow to remove more gunk.\n\n\n> +\t *\n> +\t * The exact number of pages that lazy_scan_heap will scan also depends on\n> +\t * our choice of skipping strategy. 
VACUUM can either choose to skip any\n> +\t * all-visible pages lazily, or choose to scan those same pages instead.\n\nWhat does it mean to \"skip lazily\"?\n\n\n\n\n\n> +\t\t/*\n> +\t\t * Visibility map page copied to local buffer for caller's snapshot.\n> +\t\t * Caller requires an exact count of all-visible and all-frozen blocks\n> +\t\t * in the heap relation. Handle that now.\n\nThis part of the comment seems like it actually belongs further down?\n\n\n> +\t\t * Must \"truncate\" our local copy of the VM to avoid incorrectly\n> +\t\t * counting heap pages >= rel_pages as all-visible/all-frozen. Handle\n> +\t\t * this by clearing irrelevant bits on the last VM page copied.\n> +\t\t */\n\nHm - why would those bits already be set?\n\n\n> +\t\tmap = PageGetContents(localvmpage);\n> +\t\tif (mapBlock == mapBlockLast)\n> +\t\t{\n> +\t\t\t/* byte and bit for first heap page not to be scanned by VACUUM */\n> +\t\t\tuint32\t\ttruncByte = HEAPBLK_TO_MAPBYTE(rel_pages);\n> +\t\t\tuint8\t\ttruncOffset = HEAPBLK_TO_OFFSET(rel_pages);\n> +\n> +\t\t\tif (truncByte != 0 || truncOffset != 0)\n> +\t\t\t{\n> +\t\t\t\t/* Clear any bits set for heap pages >= rel_pages */\n> +\t\t\t\tMemSet(&map[truncByte + 1], 0, MAPSIZE - (truncByte + 1));\n> +\t\t\t\tmap[truncByte] &= (1 << truncOffset) - 1;\n> +\t\t\t}\n> +\n> +\t\t\t/* Now it's safe to tally bits from this final VM page below */\n> +\t\t}\n> +\n> +\t\t/* Tally the all-visible and all-frozen counts from this page */\n> +\t\tumap = (uint64 *) map;\n> +\t\tfor (int i = 0; i < MAPSIZE / sizeof(uint64); i++)\n> +\t\t{\n> +\t\t\t*all_visible += pg_popcount64(umap[i] & VISIBLE_MASK64);\n> +\t\t\t*all_frozen += pg_popcount64(umap[i] & FROZEN_MASK64);\n> +\t\t}\n> +\t}\n> +\n> +\treturn vmsnap;\n> +}\n\n\n\n> From 4f5969932451869f0f28295933c28de49a22fdf2 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 18 Jul 2022 15:13:27 -0700\n> Subject: [PATCH v6 3/6] Add eager freezing strategy to VACUUM.\n> \n> Avoid 
> large build-ups of all-visible pages by making non-aggressive\n> VACUUMs freeze pages proactively for VACUUMs/tables where eager\n> vacuuming is deemed appropriate. Use of the eager strategy (an\n> alternative to the classic lazy freezing strategy) is controlled by a\n> new GUC, vacuum_freeze_strategy_threshold (and an associated\n> autovacuum_* reloption). Tables whose rel_pages are >= the cutoff will\n> have VACUUM use the eager freezing strategy.\n\nWhat's the logic behind a hard threshold? Suddenly freezing everything on a\nhuge relation seems problematic. I realize that never getting all that far\nbehind is part of the theory, but I don't think that's always going to work.\n\nWouldn't a better strategy be to freeze a percentage of the relation on every\nnon-aggressive vacuum? That way the amount of work for an eventual aggressive\nvacuum will shrink, without causing individual vacuums to take extremely long.\n\n\n> When the eager strategy is in use, lazy_scan_prune will trigger freezing\n> a page's tuples at the point that it notices that it will at least\n> become all-visible -- it can be made all-frozen instead. We still\n> respect FreezeLimit, though: the presence of any XID < FreezeLimit also\n> triggers page-level freezing (just as it would with the lazy strategy).\n\nThe other thing that I think would be good to use is a) whether the page is\nalready in s_b, and b) whether the page already is dirty. The cost of freezing\nshrinks significantly if it doesn't cause an additional read + write.
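Roughly the classification I have in mind -- pure pseudocode, none of these predicates exist in this form in the patch, and real code would ask the buffer manager rather than take booleans:

```python
def freeze_cost_class(in_shared_buffers, already_dirty):
    # Hypothetical cost classes for freezing one heap page. A resident,
    # already-dirty page piggybacks on a write that will happen anyway;
    # a clean, non-resident page costs an extra read plus an extra write.
    if in_shared_buffers and already_dirty:
        return "cheap"       # little beyond the freeze WAL record itself
    if in_shared_buffers:
        return "medium"      # newly dirties a resident page: extra write
    return "expensive"       # extra read + extra write

print(freeze_cost_class(True, True))     # cheap
print(freeze_cost_class(False, False))   # expensive
```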
And that\nadditional IO is IMO one of the major concerns with freezing much more\naggressively in OLTPish workloads where a lot of the rows won't ever get old\nenough to need freezing.\n\n\n\n\n> From f2066c8ca5ba1b6f31257a36bb3dd065ecb1e3d4 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Mon, 5 Sep 2022 17:46:34 -0700\n> Subject: [PATCH v6 4/6] Make VACUUM's aggressive behaviors continuous.\n> \n> The concept of aggressive/scan_all VACUUM dates back to the introduction\n> of the visibility map in Postgres 8.4. Before then, every lazy VACUUM\n> was \"equally aggressive\": each operation froze whatever tuples before\n> the age-wise cutoff needed to be frozen. And each table's relfrozenxid\n> was updated at the end. In short, the previous behavior was much less\n> efficient, but did at least have one thing going for it: it was much\n> easier to understand at a high level.\n>\n> VACUUM no longer applies a separate mode of operation (aggressive mode).\n> There are still antiwraparound autovacuums, but they're now little more\n> than another way that autovacuum.c can launch an autovacuum worker to\n> run VACUUM.\n\nThe most significant aspect of anti-wrap autovacuums right now is that they\ndon't auto-cancel. Is that still used? If so, what's the threshold?\n\nIME one of the most common reasons for autovac not keeping up is that the\napplication occasionally acquires conflicting locks on one of the big\ntables. Before reaching anti-wrap age all autovacuums on that table get\ncancelled before it gets to update relfrozenxid. 
Once in that situation\nautovac really focusses only on that relation...\n\n\n> Now every VACUUM might need to wait for a cleanup lock, though few will.\n> It can only happen when required to advance relfrozenxid to no less than\n> half way between the existing relfrozenxid and nextXID.\n\nWhere's that \"halfway\" bit coming from?\n\nIsn't \"half way between the relfrozenxid and nextXID\" a problem for instances\nwith longrunning transactions? Wouldn't this mean that wait for every page if\nrelfrozenxid can't be advanced much because of a longrunning query or such?\n\n\n\n> From 51a863190f70c8baa6d04e3ffd06473843f3326d Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Sun, 31 Jul 2022 13:53:19 -0700\n> Subject: [PATCH v6 5/6] Avoid allocating MultiXacts during VACUUM.\n> \n> Pass down vacuumlazy.c's OldestXmin cutoff to FreezeMultiXactId(), and\n> teach it the difference between OldestXmin and FreezeLimit. Use this\n> high-level context to intelligently avoid allocating new MultiXactIds\n> during VACUUM operations. We should always prefer to avoid allocating\n> new MultiXacts during VACUUM on general principle. VACUUM is the only\n> mechanism that can claw back MultixactId space, so allowing VACUUM to\n> consume MultiXactId space (for any reason) adds to the risk that the\n> system will trigger the multiStopLimit wraparound protection mechanism.\n\nStrictly speaking that's not quite true, you can also drop/truncate tables ;)\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 21:20:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 9:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > Subject: [PATCH v6 1/6] Add page-level freezing to VACUUM.\n\nAttached is v7, which incorporates much of your feedback. Thanks for the review!\n\n> > +/*\n> > + * State used by VACUUM to track what the oldest extant XID/MXID will become\n> > + * when determing whether and how to freeze a page's heap tuples via calls to\n> > + * heap_prepare_freeze_tuple.\n>\n> Perhaps this could say something like \"what the oldest extant XID/MXID\n> currently is and what it would be if we decide to freeze the page\" or such?\n\nFixed.\n\n> > + * The relfrozenxid_out and relminmxid_out fields are the current target\n> > + * relfrozenxid and relminmxid for VACUUM caller's heap rel. Any and all\n>\n> \"VACUUM caller's heap rel.\" could stand to be rephrased.\n\nFixed.\n\n> > + * unfrozen XIDs or MXIDs that remain in caller's rel after VACUUM finishes\n> > + * _must_ have values >= the final relfrozenxid/relminmxid values in pg_class.\n> > + * This includes XIDs that remain as MultiXact members from any tuple's xmax.\n> > + * Each heap_prepare_freeze_tuple call pushes back relfrozenxid_out and/or\n> > + * relminmxid_out as needed to avoid unsafe values in rel's authoritative\n> > + * pg_class tuple.\n> > + *\n> > + * Alternative \"no freeze\" variants of relfrozenxid_nofreeze_out and\n> > + * relminmxid_nofreeze_out must also be maintained for !freeze pages.\n> > + */\n>\n> relfrozenxid_nofreeze_out isn't really a \"no freeze variant\" :)\n\nWhy not? I think that that's exactly what it is. We maintain these\nalternative \"oldest extant XID\" values so that vacuumlazy.c's\nlazy_scan_prune function can \"opt out\" of freezing. This is exactly\nthe same as what we do in lazy_scan_noprune, both conceptually and at\nthe implementation level.\n\n> I think it might be better to just always maintain the nofreeze state.\n\nNot sure. 
Even if there is very little to gain in cycles by not\nmaintaining the \"nofreeze\" cutoffs needlessly, it's still a pure waste\nof cycles that can easily be avoided. So it just feels natural to not\nwaste those cycles -- it may even make the design clearer.\n\n> > +typedef struct HeapPageFreeze\n> > +{\n> > + /* Is heap_prepare_freeze_tuple caller required to freeze page? */\n> > + bool freeze;\n>\n> s/freeze/freeze_required/?\n\nFixed.\n\n> Given the number of parameters to heap_prepare_freeze_tuple, why don't we pass\n> in more of them in via HeapPageFreeze?\n\nHeapPageFreeze is supposed to be mutable state used for one single\npage, though. Seems like we should use a separate immutable struct for\nthis instead.\n\nI've already prototyped a dedicated immutable \"cutoffs\" struct, which\nis instantiated exactly once per VACUUM. Seems like a good approach to\nme. The immutable state can be shared by heapam.c's\nheap_prepare_freeze_tuple(), vacuumlazy.c, and even\nvacuum_set_xid_limits() -- so everybody can work off of the same\nstruct directly. Will try to get that into shape for the next\nrevision.\n\n> What does 'xtrack' stand for? Xid Tracking?\n\nYes.\n\n> > * VACUUM caller must assemble HeapFreezeTuple entries for every tuple that we\n> > * returned true for when called. A later heap_freeze_execute_prepared call\n> > - * will execute freezing for caller's page as a whole.\n> > + * will execute freezing for caller's page as a whole. Caller should also\n> > + * initialize xtrack fields for page as a whole before calling here with first\n> > + * tuple for the page. 
See page_frozenxid_tracker comments.\n>\n> s/should/need to/?\n\nChanged it to \"must\".\n\n> page_frozenxid_tracker appears to be a dangling pointer.\n\nI think that you mean that the code comments reference an obsolete\ntype name -- fixed.\n\n> > + * VACUUM calls limit_xid \"FreezeLimit\", and cutoff_xid \"OldestXmin\".\n> > + * (limit_multi is \"MultiXactCutoff\", and cutoff_multi \"OldestMxact\".)\n>\n> Hm. Perhaps we should just rename them if it requires this kind of\n> explanation? They're really not good names.\n\nAgreed -- this can be taken care of as part of using a new VACUUM\noperation level struct that is passed as immutable state, which I went\ninto a moment ago. That centralizes the definitions, which makes it\nfar easier to understand which cutoff is which. For now I've kept the\nnames as they were.\n\n> Could use TransactionIdOlder().\n\nI suppose, but the way I've done it feels a bit more natural to me,\nand appears more often elsewhere. Not sure.\n\n> > @@ -6563,8 +6564,11 @@ heap_prepare_freeze_tuple(HeapTupleHeader tuple,\n> > */\n> > Assert(!freeze_xmax);\n> > Assert(TransactionIdIsValid(newxmax));\n> > - if (TransactionIdPrecedes(newxmax, *relfrozenxid_out))\n> > - *relfrozenxid_out = newxmax;\n> > + Assert(heap_tuple_would_freeze(tuple, limit_xid, limit_multi,\n> > + &xtrack->relfrozenxid_nofreeze_out,\n> > + &xtrack->relminmxid_nofreeze_out));\n> > + if (TransactionIdPrecedes(newxmax, xtrack->relfrozenxid_out))\n> > + xtrack->relfrozenxid_out = newxmax;\n>\n> Perhaps the Assert(heap_tuple_would_freeze()) bit could be handled once at the\n> end of the routine, for all paths?\n\nThe problem with that is that we cannot Assert() when we're removing a\nMulti via FRM_INVALIDATE_XMAX processing in certain cases (I tried it\nthis way myself, and the assertion fails there). 
This can happen when\nthe call to FreezeMultiXactId() for the xmax determined that we should\ndo FRM_INVALIDATE_XMAX processing for the xmax due to the Multi being\n\"isLockOnly\" and preceding \"OldestVisibleMXactId[MyBackendId])\". Which\nis relatively common.\n\nI fixed this by moving the assert further down, while still only\nchecking the FRM_RETURN_IS_XID and FRM_RETURN_IS_MULTI cases.\n\n> Oh - I totally didn't realize that ->freeze is an out parameter. Seems a bit\n> odd to have the other fields suffixed with _out but not this one?\n\nFixed this by not having an \"_out\" suffix for any of these mutable\nfields from HeapPageFreeze. Now everything is consistent. (The \"_out\"\nconvention is totally redundant, now that we have the HeapPageFreeze\nstruct, which makes it obvious that it is all mutable state.)\n\n> Won't using OldestXmin instead of FreezeLimit potentially cause additional\n> conflicts? Is there any reason to not compute an accurate value?\n\nThis is a concern that I share. I was hoping that I'd be able to get\naway with using OldestXmin just for this, because it's simpler that\nway. But I had my doubts about it already.\n\nI wonder why it's correct to use FreezeLimit for this on HEAD, though.\nWhat about those FRM_INVALIDATE_XMAX cases that I just mentioned we\ncouldn't Assert() on? That case effectively removes XIDs that might be\nwell after FreezeLimit. Granted it might be safe in practice, but it's\nfar from obvious why it is safe.\n\nPerhaps we can fix this in a not-too-invasive way by reusing\nLVPagePruneState.visibility_cutoff_xid for FREEZE_PAGE conflicts (not\njust VISIBLE conflicts) in cases where that was possible (while still\nusing OldestXmin as a fallback in much rarer cases). 
In practice we're\nonly triggering freezing eagerly because the page is already expected\nto be set all-visible (the whole point is that we'd prefer if it was\nset all-frozen instead of all-visible).\n\n(I've not done this in v7, but it's on my TODO list.)\n\nNote that the patch already maintains\nLVPagePruneState.visibility_cutoff_xid when there are some LP_DEAD\nitems on the page, because we temporarily ignore those LP_DEAD items\nwhen considering the eager freezing stuff......\n\n> > if (ItemIdIsDead(itemid))\n> > {\n\n> > deadoffsets[lpdead_items++] = offnum;\n> > - prunestate->all_visible = false;\n> > - prunestate->has_lpdead_items = true;\n> > continue;\n> > }\n>\n> What does this have to do with the rest of the commit? And why are we doing\n> this?\n\n....which is what you're asking about here.\n\nThe eager freezing strategy triggers page-level freezing for any page\nthat is about to become all-visible, so that it can be set all-frozen\ninstead. But that's not entirely straightforward when there happens to\nbe some LP_DEAD items on the heap page. There are really two ways that\na page can become all-visible during VACUUM, and we want to account\nfor that here. With eager freezing we want to make the pages become\nall-frozen instead of just all-visible, regardless of which heap pass\n(first pass or second pass) the page is set to become all-visible (and\nmaybe even all-frozen).\n\nThe comments that you mention were moved around a bit in passing.\n\nNote that we still set prunestate->all_visible to false inside\nlazy_scan_prune when we see remaining LP_DEAD stub items. We just do\nit later on, after we've decided on freezing stuff. 
(Obviously it\nwouldn't be okay to return to lazy_scan_heap without unsetting\nprunestate->all_visible if there are LP_DEAD items.)\n\n> Seems quite confusing to enter a block with described as \"We're freezing the\n> page.\" when we're not freezing anything (tuples_frozen == 0).\n\n> I don't really get what freeze_all_eligible is trying to do.\n\nfreeze_all_eligible (and the \"tuples_frozen == 0\" behavior) are both\nthere because we can mark a page as all-frozen in the VM without\nfreezing any of its tuples first. When that happens, we must make sure\nthat \"prunestate->all_frozen\" is set to true, so that we'll actually\nset the all-frozen bit. At the same time, we need to be careful about\nthe case where we *could* set the page all-frozen if we decided to\nfreeze all eligible tuples -- we need to handle the case where we\nchoose against freezing (and so can't set the all-frozen bit in the\nVM, and so must actually set \"prunestate->all_frozen\" to false).\n\nThis is all kinda tricky because we're simultaneously dealing with the\nactual state of the page, and the anticipated state of the page in the\nnear future. Closely related concepts, but distinct in important ways.\n\n> > #ifdef USE_ASSERT_CHECKING\n> > /* Note that all_frozen value does not matter when !all_visible */\n> > - if (prunestate->all_visible)\n> > + if (prunestate->all_visible && lpdead_items == 0)\n> > {\n> > TransactionId cutoff;\n> > bool all_frozen;\n> > @@ -1849,8 +1876,7 @@ retry:\n> > if (!heap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen))\n> > Assert(false);\n>\n> Not related to this change, but why isn't this just\n> Assert(heap_page_is_all_visible(vacrel, buf, &cutoff, &all_frozen))?\n\nIt's just a matter of personal preference. I prefer to have a clear\nblock of related code that contains multiple related assertions. You\nwould probably have declared PG_USED_FOR_ASSERTS_ONLY variables at the\ntop of lazy_scan_prune instead. 
FWIW if you did it the other way the\nassertion would actually have to include a \"!prunestate->all_visible\"\ntest that short circuits the heap_page_is_all_visible() call from the\nAssert().\n\n> > Subject: [PATCH v6 2/6] Teach VACUUM to use visibility map snapshot.\n\n> This should include a description of the memory usage effects.\n\nThe visibilitymap.c side of this is the least worked out part of the\npatch series, by far. I have deliberately put off work on the data\nstructure itself, preferring to focus on the vacuumlazy.c side of\nthings for the time being. But I still agree -- fixed by acknowledging\nthat that particular aspect of resource management is unresolved.\n\nI did have an open TODO before in the commit message, which is now\nimproved based on your feedback: it now fully owns the fact that we\nreally ignore the impact on memory usage right now. Just because that\npart is very WIP (much more so than every other part).\n\n> > This has significant advantages over the previous approach of using the\n> > authoritative VM fork to decide on which pages to skip. The number of\n> > heap pages processed will no longer increase when some other backend\n> > concurrently modifies a skippable page, since VACUUM will continue to\n> > see the page as skippable (which is correct because the page really is\n> > still skippable \"relative to VACUUM's OldestXmin cutoff\").\n>\n> Why is it an advantage for the number of pages to not increase?\n\nThe commit message goes into that immediately after the last line that\nyou quoted. :-)\n\nHaving an immutable structure will help us, both in the short term,\nfor this particular project, and the long term, for other VACUUM\nenhancements.\n\nWe need to have something that drives the cost model in vacuumlazy.c\nfor the skipping strategy stuff -- we need to have advanced\ninformation about costs that drive the decision making process. 
Thanks\nto VM snapshots, the cost model is able to reason about the cost of\nrelfrozenxid advancement precisely, in terms of \"extra\" scanned_pages\nimplied by advancing relfrozenxid during this VACUUM. That level of\nprecision is pretty nice IMV. It's not strictly necessary, but it's\nnice to be able to make a precise, accurate comparison between each of\nthe two skipping strategies.\n\nDid you happen to look at the 6th and final patch? It's trivial, but\ncan have a big impact. It sizes dead_items while capping its size\nbased on scanned_pages, not based on rel_pages. That's obviously\nguaranteed to be correct. Note also that the 2nd patch teaches VACUUM\nVERBOSE to report the final number of scanned_pages right at the\nstart, before scanning anything -- so it's a useful basis for much\nbetter progress reporting in pg_stat_progress_vacuum. Stuff like that\nalso becomes very easy with VM snapshots.\n\nThen there is the more ambitious stuff, that's not in scope for this\nproject. Example: Perhaps Sawada san will be able to take the concept\nof visibility map snapshots, and combine it with his Radix tree design\n-- which could presumably benefit from advanced knowledge of which\npages can be scanned. This is information that is reliable, by\ndefinition. In fact I think that it would make a lot of sense for this\nvisibility map snapshot data structure to be exactly the same\nstructure used to store dead_items. They really are kind of the same\nthing. The design can reason precisely about which heap pages can ever\nend up having any LP_DEAD items. (It's already trivial to use the VM\nsnapshot infrastructure as a precheck cache for dead_items lookups.)\n\n> > Non-aggressive VACUUMs now explicitly commit to (or decide against)\n> > early relfrozenxid advancement up-front.\n>\n> Why?\n\nWe can advance relfrozenxid because it's cheap to, or because it's\nurgent (according to autovacuum_freeze_max_age). 
This is kind of true\non HEAD already due to the autovacuum_freeze_max_age \"escalate to\naggressive\" thing -- but we can do much better than that. Why not\ndecide to advance relfrozenxid when (say) it's only *starting* to get\nurgent when it happens to be relatively cheap (though not dirt cheap)?\nWe make relfrozenxid advancement a deliberate decision that weighs\n*all* available information, and has a sense of the needs of the table\nover time.\n\nThe user experience is important here. Going back to a model where\nthere is really just one kind of lazy VACUUM makes a lot of sense. We\nshould have much more approximate guarantees about relfrozenxid\nadvancement, since that's what gives us the flexibility to find a\ncheaper (or more stable) way of keeping up over time. It matters that\nwe keep up over time, but it doesn't matter if we fall behind on\nrelfrozenxid advancement -- at least not if we don't also fall behind\non the work of freezing physical heap pages.\n\n> > VACUUM will now either scan\n> > every all-visible page, or none at all. This replaces lazy_scan_skip's\n> > SKIP_PAGES_THRESHOLD behavior, which was intended to enable early\n> > relfrozenxid advancement (see commit bf136cf6), but left many of the\n> > details to chance.\n>\n> The main goal according to bf136cf6 was to avoid defeating OS readahead, so I\n> think it should be mentioned here.\n\nAgreed. Fixed.\n\n> To me this is something that ought to be changed separately from the rest of\n> this commit.\n\nMaybe, but I'd say it depends on the final approach taken -- the\nvisibilitymap.c aspects of the patch are the least settled. I am\nseriously considering adding prefetching to the vm snapshot structure,\nwhich would make it very much a direct replacement for\nSKIP_PAGES_THRESHOLD.\n\nSeparately, I'm curious about what you think of VM snapshots from an\naio point of view. 
Seems like it would be ideal for prefetching for\naio?\n\n> > TODO: We don't spill VM snapshots to disk just yet (resource management\n> > aspects of VM snapshots still need work). For now a VM snapshot is just\n> > a copy of the VM pages stored in local buffers allocated by palloc().\n>\n> HEAPBLOCKS_PER_PAGE is 32672 with the defaults. The maximum relation size is\n> 2**32 - 1 blocks. So the max FSM size is 131458 pages, a bit more than 1GB. Is\n> that correct?\n\nI think that you meant \"max VM size\". That sounds correct to me.\n\n> For large relations that are already nearly all-frozen this does add a\n> noticable amount of overhead, whether spilled to disk or not. Of course\n> they're also not going to be vacuumed super often, but ...\n\nI wouldn't be surprised if the patch didn't work with relations that\napproach 32 TiB in size. As I said, the visibilitymap.c data structure\nis the least worked out piece of the project.\n\n> Perhaps worth turning the VM into a range based description for the snapshot,\n> given it's a readonly datastructure in local memory? And we don't necessarily\n> need the all-frozen and all-visible in memory, one should suffice? We don't\n> even need random access, so it could easily be allocated incrementally, rather\n> than one large allocation.\n\nDefinitely think that we should do simple run-length encoding, stuff\nlike that. Just as long as it allows vacuumlazy.c to work off of a\ntrue snapshot, with scanned_pages known right from the start. The\nconsumer side of things has been my focus so far.\n\n> Hm. It's a bit sad to compute the snapshot after determining OldestXmin.\n>\n> We probably should refresh OldestXmin periodically. That won't allow us to get\n> a more aggressive relfrozenxid, but it'd allow to remove more gunk.\n\nThat may well be a good idea, but I think that it's also a good idea\nto just not scan heap pages that we know won't have XIDs < OldestXmin\n(OldestXmin at the start of the VACUUM). 
That visibly makes the\nproblem of \"recently dead\" tuples that cannot be cleaned up a lot\nbetter, without requiring that we do anything with OldestXmin.\n\nI also think that there is something to be said for not updating the\nFSM for pages that were all-visible at the beginning of the VACUUM\noperation. VACUUM is currently quite happy to update the FSM with its\nown confused idea about how much free space there really is on heap\npages with recently dead (dead but not yet removable) tuples. That's\nreally bad, but really subtle.\n\n> What does it mean to \"skip lazily\"?\n\nSkipping even all-visible pages, prioritizing avoiding work over\nadvancing relfrozenxid. This is a cost-based decision. As I mentioned\na moment ago, that's one immediate use of VM snapshots (it gives us\nprecise information to base our decision on, that simply *cannot*\nbecome invalid later on).\n\n> > + /*\n> > + * Visibility map page copied to local buffer for caller's snapshot.\n> > + * Caller requires an exact count of all-visible and all-frozen blocks\n> > + * in the heap relation. Handle that now.\n>\n> This part of the comment seems like it actually belongs further down?\n\nNo, it just looks a bit like that because of the \"truncate in-memory\nVM\" code stanza. It's actually the right order.\n\n> > + * Must \"truncate\" our local copy of the VM to avoid incorrectly\n> > + * counting heap pages >= rel_pages as all-visible/all-frozen. Handle\n> > + * this by clearing irrelevant bits on the last VM page copied.\n> > + */\n>\n> Hm - why would those bits already be set?\n\nNo real reason, we \"truncate\" like this defensively. This will\nprobably look quite different before too long.\n\n> > Subject: [PATCH v6 3/6] Add eager freezing strategy to VACUUM.\n\n> What's the logic behind a hard threshold? Suddenly freezing everything on a\n> huge relation seems problematic. 
I realize that never getting all that far\n> behind is part of the theory, but I don't think that's always going to work.\n\nIt's a vast improvement on what we do currently, especially in\nappend-only tables.\n\nThere is simply no limit on how many physical heap pages will have to\nbe frozen when there is an aggressive mode VACUUM. It could be\nterabytes, since table age predicts precisely nothing about costs.\nWith the patch we have a useful limit for the first time, that uses\nphysical units (the only kind of units that make any sense).\n\nAdmittedly we should really have special instrumentation that reports\nwhen VACUUM must do \"catch up freezing\" when the\nvacuum_freeze_strategy_threshold threshold is first crossed, to help\nusers to make better choices in this area. And maybe\nvacuum_freeze_strategy_threshold should be lower by default, so it's\nnot as noticeable. (The GUC partly exists as a compatibility option, a\nbridge to the old lazy behavior.)\n\nFreezing just became approximately 5x cheaper with the freeze plan\ndeduplication work (commit 9e540599). To say nothing about how\nvacuuming indexes became a lot cheaper in recent releases. So to some\nextent we can afford to be more proactive here. There are some very\nnonlinear cost profiles involved here due to write amplification\neffects. So having a strong bias against write amplification seems\ntotally reasonable to me -- we can potentially \"get it wrong\" and\nstill come out ahead, because we at least had the right idea about\ncosts.\n\nI don't deny that there are clear downsides, though. I am convinced\nthat it's worth it -- performance stability is what users actually\ncomplain about in almost all cases. Why should performance stability\nbe 100% free?\n\n> Wouldn't a better strategy be to freeze a percentage of the relation on every\n> non-aggressive vacuum? 
That way the amount of work for an eventual aggressive\n> vacuum will shrink, without causing individual vacuums to take extremely long.\n\nI think that it's better to avoid aggressive mode altogether. By\ncommitting to advancing relfrozenxid by *some* amount in ~all VACUUMs\nagainst larger tables, we can notice when we don't actually need to do\nvery much freezing to keep relfrozenxid current, due to workload\ncharacteristics. It depends on workload, of course. But if we don't\ntry to do this we'll never notice that it's possible to do it.\n\nWhy should we necessarily need to freeze very much, after a while? Why\nshouldn't most newly frozen pages stay frozen ~forever after a little\nwhile?\n\n> The other thing that I think would be good to use is a) whether the page is\n> already in s_b, and b) whether the page already is dirty. The cost of freezing\n> shrinks significantly if it doesn't cause an additional read + write. And that\n> additional IO is IMO one of the major concerns with freezing much more\n> aggressively in OLTPish workloads where a lot of the rows won't ever get old\n> enough to need freezing.\n\nMaybe, but I think that systematic effects are more important. We\nfreeze eagerly during this VACUUM in part because it makes\nrelfrozenxid advancement possible in the next VACUUM.\n\nNote that eager freezing doesn't freeze the page unless it's already\ngoing to set it all-visible. That's another way in which we ameliorate\nthe problem of freezing when it makes little sense to -- even with\neager freezing strategy, we *don't* freeze heap pages where it\nobviously makes little sense to. Which makes a huge difference on its\nown.\n\nThere is good reason to believe that most individual heap pages are\nvery cold data, even in OLTP apps. 
To a large degree Postgres is\nsuccessful because it is good at inexpensively storing data that will\npossibly never be accessed:\n\nhttps://www.microsoft.com/en-us/research/video/cost-performance-in-modern-data-stores-how-data-cashing-systems-succeed/\n\nSpeaking of OLTP apps:\n\nin many cases VACUUM will prune just to remove one or two heap-only\ntuples, maybe even generating an FPI in the process. But the removed\ntuple wasn't actually doing any harm -- an opportunistic prune could\nhave done the same thing later on, once we'd built up some more\ngarbage tuples. So the only reason to prune is to freeze the page. And\nyet right now we don't recognize this and freeze the page to get\n*some* benefit out of the arguably needless prune. This is quite\ncommon, in fact.\n\n> The most significant aspect of anti-wrap autovacuums right now is that they\n> don't auto-cancel. Is that still used? If so, what's the threshold?\n\nThis patch set doesn't change anything about antiwraparound\nautovacuums -- though it does completely eliminate aggressive mode (so\nit's a little like Postgres 8.4).\n\nThere is a separate thread discussing the antiwraparound side of this, actually:\n\nhttps://postgr.es/m/CAH2-Wz=S-R_2rO49Hm94Nuvhu9_twRGbTm6uwDRmRu-Sqn_t3w@mail.gmail.com\n\nI think that I will need to invent a new type of autovacuum that's\nsimilar to antiwraparound autovacuum, but doesn't have the\ncancellation behavior -- that is more or less prerequisite to\ncommitting this patch series. We can accept some risk of relfrozenxid\nfalling behind if that doesn't create any real risk of antiwraparound\nautovacuums.\n\nWe can retain antiwraparound autovacuum, which should kick in only\nwhen the new kind of autovacuum has failed to advance relfrozenxid,\nhaving had the opportunity. Maybe antiwraparound autovacuum should be\ntriggered when age(relfrozenxid) is twice the value of\nautovacuum_freeze_max_age. 
The new kind of autovacuum would trigger at\nthe same table age that triggers antiwraparound autovacuum with the\ncurrent design.\n\nSo antiwraparound autovacuum would work in the same way, but would be\nmuch less common -- even for totally static tables. We'd at least be\nsure that the auto cancellation behavior was *proportionate* to the\nproblem at hand, because we'll always have tried and ultimately failed\nto advance relfrozenxid without activating the auto cancellation\nbehavior. We wouldn't trigger a very disruptive behavior routinely,\nwithout any very good reason.\n\n> > Now every VACUUM might need to wait for a cleanup lock, though few will.\n> > It can only happen when required to advance relfrozenxid to no less than\n> > half way between the existing relfrozenxid and nextXID.\n>\n> Where's that \"halfway\" bit coming from?\n\nWe don't use FreezeLimit within lazy_scan_noprune in the patch that\ngets rid of aggressive mode VACUUM. We use something called minXid in\nits place. So a different timeline to freezing (even for tables where\nwe always use the lazy freezing strategy).\n\nThe new minXid cutoff (used by lazy_scan_noprune) comes from this\npoint in vacuum_set_xid_limits():\n\n+ *minXid = nextXID - (freeze_table_age / 2);\n+ if (!TransactionIdIsNormal(*minXid))\n+ *minXid = FirstNormalTransactionId;\n\nSo that's what I meant by \"half way\".\n\n(Note that minXid is guaranteed to be <= FreezeLimit, which is itself\nguaranteed to be <= OldestXmin, no matter what.)\n\n> Isn't \"half way between the relfrozenxid and nextXID\" a problem for instances\n> with longrunning transactions?\n\nShould we do less relfrozenxid advancement because there is a long\nrunning transaction, though? It's obviously seriously bad when things\nare blocked by a long running transaction, but I don't see the\nconnection between that and how we wait for cleanup locks. 
Waiting for\ncleanup locks is always really, really bad, and can be avoided in\nalmost all cases.\n\nI suspect that I still haven't been aggressive enough in how minXid is\nset, BTW -- we should be avoiding waiting for a cleanup lock like the\nplague. So \"half way\" isn't enough. Maybe we should have a LOG message\nin cases where it actually proves necessary to wait, because it's just\nasking for trouble (at least when we're running in autovacuum).\n\n> Wouldn't this mean that wait for every page if\n> relfrozenxid can't be advanced much because of a longrunning query or such?\n\nOld XIDs always start out as young XIDs. Which we're now quite willing\nto freeze when conditions look good.\n\nPage level freezing always freezes all eligible XIDs on the page when\ntriggered, no matter what the details may be. This means that the\noldest XID on a heap page is more or less always an XID that's after\nwhatever OldestXmin was for the last VACUUM that ran and froze the\npage, whenever that happened, and regardless of the mix of XID ages\nwas on the page at that time.\n\nAs a consequence, lone XIDs that are far older than other XIDs on the\nsame page become much rarer than what you'd see with the current\ndesign -- they have to \"survive\" multiple VACUUMs, not just one\nVACUUM. The best predictor of XID age becomes the time that VACUUM\nlast froze the page as a whole -- so workload characteristics and\nnatural variations are much much less likely to lead to problems from\nwaiting for cleanup locks. (Of course it also helps that we'll try\nreally hard to do that, and almost always prefer lazy_scan_noprune\nprocessing.)\n\nThere is some sense in which we're trying to create a virtuous cycle\nhere. 
If we are always in a position to advance relfrozenxid by *some*\namount each VACUUM, however small, then we will have many individual\nopportunities (spaced out over multiple VACUUM operations) to freeze\ntuples on any heap tuples that (for whatever reason) are harder to get\na cleanup lock on, and then catch up on relfrozenxid by a huge amount\nwhenever we \"get lucky\". We have to \"keep an open mind\" to ever have\nany chance of \"getting lucky\" in this sense, though.\n\n> > VACUUM is the only\n> > mechanism that can claw back MultixactId space, so allowing VACUUM to\n> > consume MultiXactId space (for any reason) adds to the risk that the\n> > system will trigger the multiStopLimit wraparound protection mechanism.\n>\n> Strictly speaking that's not quite true, you can also drop/truncate tables ;)\n\nFixed.\n\n--\nPeter Geoghegan",
"msg_date": "Fri, 18 Nov 2022 17:06:57 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Nov 18, 2022 at 5:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I've already prototyped a dedicated immutable \"cutoffs\" struct, which\n> is instantiated exactly once per VACUUM. Seems like a good approach to\n> me. The immutable state can be shared by heapam.c's\n> heap_prepare_freeze_tuple(), vacuumlazy.c, and even\n> vacuum_set_xid_limits() -- so everybody can work off of the same\n> struct directly. Will try to get that into shape for the next\n> revision.\n\nAttached is v8.\n\nNotable improvements over v7:\n\n* As anticipated on November 18th, this revision adds a new refactoring\ncommit/patch, which adds a struct that contains fields like\nFreezeLimit and OldestXmin, which is used by vacuumlazy.c to pass the\ninformation to heap_prepare_freeze_tuple().\n\nThis refactoring makes everything easier to understand -- it's a\nsignificant structural improvement.\n\n* The changes intended to avoid allocating a new Multi during VACUUM\nno longer appear in their own commit. That was squashed/combined with\nthe earlier page-level freezing commit.\n\nThis is another structural improvement.\n\nThe FreezeMultiXactId() changes were never really an optimization, and\nI shouldn't have explained them that way. They are only needed to\navoid MultiXactId related regressions that page-level freezing would\notherwise cause. Doing these changes in the page-level freezing patch\nmakes that far clearer.\n\n* Fixes an issue with snapshotConflictHorizon values for FREEZE_PAGE\nrecords, where earlier revisions could have more false recovery\nconflicts relative to the behavior on HEAD.\n\nIn other words, v8 addresses a concern that you (Andres) had in your\nreview of v6, here:\n\n> > Won't using OldestXmin instead of FreezeLimit potentially cause additional\n> > conflicts? 
Is there any reason to not compute an accurate value?\n\nAs anticipated, it is possible to generate valid FREEZE_PAGE\nsnapshotConflictHorizon using LVPagePruneState.visibility_cutoff_xid\nin almost all cases -- so we should avoid almost all false recovery\nconflicts. Granted, my approach here only works when the page will\nbecome eligible to mark all-frozen (otherwise we can't trust\nLVPagePruneState.visibility_cutoff_xid and have to fall back on\nOldestXmin), but that's not really a problem in practice. Since in\npractice page-level freezing is supposed to find a way to freeze pages\nas a group, or not at all (so falling back on OldestXmin should be\nvery rare).\n\nI could be more precise about generating a FREEZE_PAGE\nsnapshotConflictHorizon than this, but that didn't seem worth the\nadded complexity (I'd prefer to be able to ignore MultiXacts/xmax for\nthis stuff). I'm pretty sure that the new v8 approach is more than\ngood enough. It's actually an improvement on HEAD, where\nsnapshotConflictHorizon is derived from FreezeLimit, an approach with\nthe same basic problem as deriving snapshotConflictHorizon from\nOldestXmin. Namely: using FreezeLimit is a poor proxy for what we\nreally want to use, which is a cutoff that comes from the specific\nlatest XID in some specific tuple header on the page we're freezing.\n\nThere are no remaining blockers to commit for the first two patches\nfrom v8 (the two patches that add page-level freezing). I think that\nI'll be able to commit page-level freezing in a matter of weeks, in\nfact. All specific outstanding concerns about page-level freezing have\nbeen addressed.\n\nI believe that page-level freezing is uncontroversial. Unlike later\npatches in the series, it changes nothing user-facing about VACUUM --\nnothing very high level. Having the freeze plan deduplication work\nadded by commit 9e540599 helps here. 
The focus is WAL overhead over\ntime, and page level freezing can almost be understood as a mechanical\nimprovement to freezing that keeps costs over time down.\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 23 Nov 2022 15:06:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-23 15:06:52 -0800, Peter Geoghegan wrote:\n> Attached is v8.\n\nThe docs don't build:\nhttps://cirrus-ci.com/task/5456939761532928\n[20:00:58.203] postgres.sgml:52: element link: validity error : IDREF attribute linkend references an unknown ID \"vacuum-for-wraparound\"\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 6 Dec 2022 10:42:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 10:42 AM Andres Freund <andres@anarazel.de> wrote:\n> The docs don't build:\n> https://cirrus-ci.com/task/5456939761532928\n> [20:00:58.203] postgres.sgml:52: element link: validity error : IDREF attribute linkend references an unknown ID \"vacuum-for-wraparound\"\n\nThanks for pointing this out. FWIW it is a result of Bruce's recent\naddition of the transaction processing chapter to the docs.\n\nMy intention is to post v9 later in the week, which will fix the doc\nbuild, and a lot more besides that. If you are planning on doing\nanother round of review, I'd suggest that you hold off until then. v9\nwill have structural improvements that will likely make it easier to\nunderstand all the steps leading up to removing aggressive mode\ncompletely. It'll be easier to relate each local step/patch to the\nbigger picture for VACUUM.\n\nv9 will also address some of the concerns you raised in your review\nthat weren't covered by v8, especially about the VM snapshotting\ninfrastructure. But also your concerns about the transition from lazy\nstrategies to eager strategies. The \"catch up freezing\" performed by\nthe first VACUUM operation run against a table that just exceeded the\nGUC-controlled table size threshold will have far more limited impact,\nbecause the burden of freezing will be spread out across multiple\nVACUUM operations. The big idea behind the patch series is to relieve\nusers from having to think about a special type of VACUUM that has to\ndo much more freezing than other VACUUMs that ran against the same\ntable in the recent past, of course, so it is important to avoid\naccidentally allowing any behavior that looks kind of like the ghost\nof aggressive VACUUM.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 6 Dec 2022 13:45:09 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 6, 2022 at 1:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> v9 will also address some of the concerns you raised in your review\n> that weren't covered by v8, especially about the VM snapshotting\n> infrastructure. But also your concerns about the transition from lazy\n> strategies to eager strategies.\n\nAttached is v9. Highlights:\n\n* VM snapshot infrastructure now spills using temp files when required\n(only in larger tables).\n\nv9 is the first version that has a credible approach to resource\nmanagement, which was something I put off until recently. We only use\na fixed amount of memory now, which should be acceptable from the\nviewpoint of VACUUM resource management. The temp files use the\nBufFile infrastructure in a relatively straightforward way.\n\n* VM snapshot infrastructure now uses explicit prefetching.\n\nOur approach is straightforward, and perhaps even obvious: we prefetch\nat the point that VACUUM requests the next block in line. There is a\nconfigurable prefetch distance, controlled by\nmaintenance_io_concurrency. We \"stage\" a couple of thousand\nBlockNumbers in VACUUM's vmsnap by bulk-reading from the vmsnap's\nlocal copy of the visibility map -- these staged blocks are returned\nto VACUUM to scan, with interlaced prefetching of later blocks from\nthe same local BlockNumber array.\n\nThe addition of prefetching ought to be enough to avoid regressions\nthat might otherwise result from the removal of SKIP_PAGES_THRESHOLD\nfrom vacuumlazy.c (see commit bf136cf6 from around the time the\nvisibility map first went in for the full context). While I definitely\nneed to do more performance validation work around prefetching\n(especially on high latency network-attached storage), I imagine that\nit won't be too hard to get into shape for commit. 
It's certainly not\ncommittable yet, but it's vastly better than v8.\n\nThe visibility map snapshot interface (presented by visibilitymap.h)\nalso changed in v9, mostly to support prefetching. We now have an\niterator style interface (so vacuumlazy.c cannot request random\naccess). This iterator interface is implemented by visibilitymap.c\nusing logic similar to the current lazy_scan_skip() logic from\nvacuumlazy.c (which is gone).\n\nAll told, visibilitymap.c knows quite a bit more than it used to about\nhigh level requirements from vacuumlazy.c. For example it has explicit\nawareness of VM skipping strategies.\n\n* Page-level freezing commit now freezes a page whenever VACUUM\ndetects that pruning ran and generated an FPI.\n\nFollowing a suggestion by Andres, page-level freezing is now always\ntriggered when pruning needs an FPI. Note that this optimization gets\napplied regardless of freezing strategy (unless you turn off\nfull_page_writes, I suppose).\n\nThis optimization is added by the second patch\n(v9-0002-Add-page-level-freezing-to-VACUUM.patch).\n\n* Fixed the doc build.\n\n* Much improved criteria for deciding on freezing and vmsnap skipping\nstrategies in vacuumlazy.c lazy_scan_strategy function -- improved\n\"cost model\".\n\nVACUUM should now give users a far smoother \"transition\" from lazy\nprocessing to eager processing. A table that starts out small (smaller\nthan vacuum_freeze_strategy_threshold), but gradually grows, and\neventually becomes fairly large (perhaps to a multiple of\nvacuum_freeze_strategy_threshold in size) will now experience a far\nmore gradual transition, with catch-up freezing spread out across multiple\nVACUUM operations. We avoid big jumps in the overhead of freezing,\nwhere one particular VACUUM operation does all required \"catch-up\nfreezing\" in one go.\n\nMy approach is to \"stagger\" the timeline for switching freezing\nstrategy and vmsnap skipping strategy. 
We now change over from lazy to\neager freezing strategy when the table size threshold (controlled by\nvacuum_freeze_strategy_threshold) is first crossed, just like in v8.\nBut unlike v8, v9 will switch over to eager skipping in some later\nVACUUM operation (barring edge cases). This is implemented in a fairly\nsimple way: we now apply a \"separate\" threshold that is based on\nvacuum_freeze_strategy_threshold: a threshold that's *twice* the\ncurrent value of the vacuum_freeze_strategy_threshold GUC/reloption\nthreshold.\n\nMy approach of \"staggering\" multiple distinct behaviors to avoid\nhaving them all kick in during the same VACUUM operation isn't new to\nv9. The behavior around waiting for cleanup locks (added by\nv9-0005-Finish-removing-aggressive-mode-VACUUM.patch) is another\nexample of the same general idea.\n\nIn general I think that VACUUM shouldn't switch to more aggressive\nbehaviors all at the same time, in the same VACUUM. Each distinct\naggressive behavior has totally different properties, so there is no\nreason why VACUUM should start to apply each and every one of them at\nthe same time. Some \"aggressive\" behaviors have the potential to make\nthings quite a lot worse, in fact. The cure must not be worse than the\ndisease.\n\n* Related to the previous item (about the \"cost model\" that chooses a\nstrategy), we now have a much more sophisticated approach when it\ncomes to when and how we decide to advance relfrozenxid in smaller\ntables (tables whose size is < vacuum_freeze_strategy_threshold). This\nimproves things for tables that start out small, and stay small.\nTables where we're unlikely to want to advance relfrozenxid in every\nsingle VACUUM (better to be lazy with such a table), but still want to\nbe clever about advancing relfrozenxid \"opportunistically\".\n\nThe way that VACUUM weighs both table age and the added cost of\nrelfrozenxid advancement is more sophisticated in v9. 
The goal is to\nmake it more likely that VACUUM will stumble upon opportunities to\nadvance relfrozenxid when it happens to be cheap, which can happen for\nmany reasons, all of which have a great deal to do with workload\ncharacteristics.\n\nAs in v8, v9 makes VACUUM willing to advance relfrozenxid without\nconcern for table age, whenever it notices that the cost of doing so\nhappens to be very cheap (in practice this means that the number of\n\"extra\" heap pages scanned is < 5% of rel_pages). However, in v9 we\nnow go further by scaling this threshold through interpolation, based\non table age.\n\nWe have the same \"5% of rel_pages\" threshold when table age is less\nthan half way towards the point that autovacuum.c will launch an\nantiwraparound autovacuum -- when we still have only minimal concern\nabout table age. But the rel_pages-wise threshold starts to grow once\ntable age gets past that \"half way towards antiwrap AV\" point. We\ninterpolate the rel_pages-wise threshold using a new approach in v9.\n\nAt first the rel_pages-wise threshold grows quite slowly (relative to\nthe rate at which table age approaches the point of forcing an\nantiwraparound AV). For example, when we're 60% of the way towards\nneeding an antiwraparound AV, and VACUUM runs, we'll eagerly advance\nrelfrozenxid provided that the \"extra\" cost of doing so happens to be\nless than ~22% of rel_pages. It \"accelerates\" from there (assuming\nfixed rel_pages).\n\nVACUUM will now tend to take advantage of individual table\ncharacteristics that make it relatively cheap to advance relfrozenxid.\nBear in mind that these characteristics are not fixed for the same\ntable. The \"extra\" cost of advancing relfrozenxid during this VACUUM\n(whether measured in absolute terms, or as a proportion of the net\namount of work just to do simple vacuuming) just isn't predictable\nwith real workloads. 
Especially not with the FPI opportunistic\nfreezing stuff from the second patch (the \"freeze when heap pruning\ngets an FPI\" thing) in place. We should expect significant \"natural\nvariation\" among tables, and within the same table over time -- this\nis a good thing.\n\nFor example, imagine a table that experiences a bunch of random\ndeletes, which leads to a VACUUM that must visit most heap pages (say\n85% of rel_pages). Let's suppose that those deletes are a once-off\nthing. The added cost of advancing relfrozenxid in the next VACUUM\nstill isn't trivial (assuming the remaining 15% of pages are\nall-visible). But it is probably still worth doing if table age is at\nleast starting to become a concern. It might actually be a lot cheaper\nto advance relfrozenxid early.\n\n* Numerous structural improvements, lots of code polishing.\n\nThe patches have been reordered in a way that should make review a bit\neasier. Now the commit messages are written in a way that clearly\nanticipates the removal of aggressive mode VACUUM, which the last\npatch actually finishes. Most of the earlier commits are presented as\npreparation for completely removing aggressive mode VACUUM.\n\nThe first patch (which refactors how VACUUM passes around cutoffs like\nFreezeLimit and OldestXmin by using a dedicated struct) is much\nimproved. heap_prepare_freeze_tuple() now takes a more explicit\napproach to tracking what needs to happen for the tuple's freeze plan.\nThis allowed me to pepper it with defensive assertions. It's also a\nlot clearer IMV. For example, we now have separate freeze_xmax and\nreplace_xmax tracker variables.\n\nThe second patch in the series (the page-level freezing patch) is also\nmuch improved. 
I'm much happier with the way that\nheap_prepare_freeze_tuple() now explicitly delegates control of\npage-level freezing to FreezeMultiXactId() in v9, for example.\n\nNote that I squashed the patch that taught VACUUM to size dead_items\nusing scanned_pages into the main visibility map patch\n(v9-0004-Add-eager-and-lazy-VM-strategies-to-VACUUM.patch). That's why\nthere are only 5 patches (down from 6) in v9.\n\n--\nPeter Geoghegan",
"msg_date": "Sat, 10 Dec 2022 18:11:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sat, 2022-12-10 at 18:11 -0800, Peter Geoghegan wrote:\n> On Tue, Dec 6, 2022 at 1:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > v9 will also address some of the concerns you raised in your review\n> > that weren't covered by v8, especially about the VM snapshotting\n> > infrastructure. But also your concerns about the transition from\n> > lazy\n> > strategies to eager strategies.\n> \n> Attached is v9. Highlights:\n\nComments:\n\n* The documentation shouldn't have a heading like \"Managing the 32-bit\nTransaction ID address space\". We already have a concept of \"age\"\ndocumented, and I think that's all that's needed in the relevant\nsection. Freezing is driven by a need to keep the age of the oldest\ntransaction ID in a table to less than ~2B; and also the need to\ntruncate the clog (and reduce lookups of really old xids). It's fine to\ngive a brief explanation about why we can't track very old xids, but\nit's more of an internal detail and not the main point.\n\n* I'm still having a hard time with vacuum_freeze_strategy_threshold.\nPart of it is the name, which doesn't seem to convey the meaning. But\nthe heuristic also seems off to me. What if you have lots of partitions\nin an append-only range-partitioned table? That would tend to use the\nlazy freezing strategy (because each partition is small), but that's\nnot what you want. I understand heuristics aren't perfect, but it feels\nlike we could do something better. Also, another purpose of this seems\nto be to achieve v15 behavior (if v16 behavior causes a problem for\nsome workload), which seems like a good idea, but perhaps we should\nhave a more direct setting for that?\n\n* The comment above lazy_scan_strategy() is phrased in terms of the\n\"traditional approach\". It would be more clear if you described the\ncurrent strategies and how they're chosen. 
The pre-16 behavior was as\nlazy as possible, so that's easy enough to describe without referring\nto history.\n\n* \"eager skipping behavior\" seems like a weird phrasing because it's\nnot immediately clear if that means \"skip more pages\" (eager to skip\npages and lazy to process them) or \"skip fewer pages\" (lazy to skip the\npages and eager to process the pages).\n\n* The skipping behavior for all-visible pages is binary: skip them\nall, or skip none. That makes sense in the context of relfrozenxid\nadvancement. But how does that avoid IO spikes? It would seem perfectly\nreasonable to me, if relfrozenxid advancement is not a pressing\nproblem, to process some fraction of the all-visible pages (or perhaps\nprocess enough of them to freeze some fraction). That would ensure that\neach VACUUM makes a payment on the deferred costs of freezing. I think\nthis has already been discussed but it keeps reappearing in my mind, so\nmaybe we can settle this with a comment (and/or docs)?\n\n* I'm wondering whether vacuum_freeze_min_age makes sense anymore. It\ndoesn't take effect unless the page is not skipped, which is confusing\nfrom a usability standpoint, and we have better heuristics to decide if\nthe whole page should be frozen or not anyway (i.e. if an FPI was\nalready taken then freezing is cheaper).\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 12 Dec 2022 15:47:16 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Dec 12, 2022 at 3:47 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Freezing is driven by a need to keep the age of the oldest\n> transaction ID in a table to less than ~2B; and also the need to\n> truncate the clog (and reduce lookups of really old xids). It's fine to\n> give a brief explanation about why we can't track very old xids, but\n> it's more of an internal detail and not the main point.\n\nI agree that that's the conventional definition. What I am proposing\nis that we revise that definition a little. We should start the\ndiscussion of freezing in the user level docs by pointing out that\nfreezing also plays a role at the level of individual pages. An\nall-frozen page is self-contained, now and forever (or until it gets\ndirtied again, at least). Even on a standby we will reliably avoid\nhaving to do clog lookups for a page that happens to have all of its\ntuples frozen.\n\nI don't want to push back too much here. I just don't think that it\nmakes terribly much sense for the docs to start the conversation about\nfreezing by talking about the worst consequences of not freezing for\nan extended period of time. That's relevant, and it's probably going\nto end up as the aspect of freezing that we spend most time on, but it\nstill doesn't seem like a useful starting point to me.\n\nTo me this seems related to the fallacy that relfrozenxid age is any\nkind of indicator about how far behind we are on freezing. I think\nthat there is value in talking about freezing as a maintenance task\nfor physical heap pages, and only then talking about relfrozenxid and\nthe circular XID space. 
The 64-bit XID patch doesn't get rid of\nfreezing at all, because it is still needed to break the dependency of\ntuples stored in heap pages on the pg_xact, and other SLRUs -- which\nsuggests that you can talk about freezing and advancing relfrozenxid\nas different (though still closely related) concepts.\n\n> * I'm still having a hard time with vacuum_freeze_strategy_threshold.\n> Part of it is the name, which doesn't seem to convey the meaning.\n\nI chose the name long ago, and never gave it terribly much thought.\nI'm happy to go with whatever name you prefer.\n\n> But the heuristic also seems off to me. What if you have lots of partitions\n> in an append-only range-partitioned table? That would tend to use the\n> lazy freezing strategy (because each partition is small), but that's\n> not what you want. I understand heuristics aren't perfect, but it feels\n> like we could do something better.\n\nIt is at least vastly superior to vacuum_freeze_min_age in cases like\nthis. Not that that's hard -- vacuum_freeze_min_age just doesn't ever\ntrigger freezing in any autovacuum given a table like pgbench_history\n(barring during aggressive mode), due to how it interacts with the\nvisibility map. So we're practically guaranteed to do literally all\nfreezing for an append-only table in an aggressive mode VACUUM.\n\nWorst of all, that happens on a timeline that has nothing to do with\nthe physical characteristics of the table itself (like the number of\nunfrozen heap pages or something). In fact, it doesn't even have\nanything to do with how many distinct XIDs modified that particular\ntable -- XID age works at the system level.\n\nBy working at the heap rel level (which means the partition level if\nit's a partitioned table), and by being based on physical units (table\nsize), vacuum_freeze_strategy_threshold at least manages to limit the\naccumulation of unfrozen heap pages in each individual relation. This\nis the fundamental unit at which VACUUM operates. 
So even if you get\nvery unlucky and accumulate many unfrozen heap pages that happen to be\ndistributed across many different tables, you can at least vacuum each\ntable independently, and in parallel. The really big problems all seem\nto involve concentration of unfrozen pages in one particular table\n(usually the events table, the largest table in the system by a couple\nof orders of magnitude).\n\nThat said, I agree that the system-level picture of debt (the system\nlevel view of the number of unfrozen heap pages) is relevant, and that\nit isn't directly considered by the patch. I think that that can be\ntreated as work for a future release. In fact, I think that there is a\ngreat deal that we could teach autovacuum.c about the system level\nview of things -- this is only one.\n\n> Also, another purpose of this seems\n> to be to achieve v15 behavior (if v16 behavior causes a problem for\n> some workload), which seems like a good idea, but perhaps we should\n> have a more direct setting for that?\n\nWhy, though? I think that it happens to make sense to do both with one\nsetting. Not because it's better to have 2 settings than 1 (though it\nis) -- just because it makes sense here, given these specifics.\n\n> * The comment above lazy_scan_strategy() is phrased in terms of the\n> \"traditional approach\". It would be more clear if you described the\n> current strategies and how they're chosen. The pre-16 behavior was as\n> lazy as possible, so that's easy enough to describe without referring\n> to history.\n\nAgreed. Will fix.\n\n> * \"eager skipping behavior\" seems like a weird phrasing because it's\n> not immediately clear if that means \"skip more pages\" (eager to skip\n> pages and lazy to process them) or \"skip fewer pages\" (lazy to skip the\n> pages and eager to process the pages).\n\nI agree that that's a problem. 
I'll try to come up with a terminology\nthat doesn't have this problem ahead of the next version.\n\n> * The skipping behavior is for all-visible pages is binary: skip them\n> all, or skip none. That makes sense in the context of relfrozenxid\n> advancement. But how does that avoid IO spikes? It would seem perfectly\n> reasonable to me, if relfrozenxid advancement is not a pressing\n> problem, to process some fraction of the all-visible pages (or perhaps\n> process enough of them to freeze some fraction).\n\nThat's something that v9 will do, unlike earlier versions. So I agree.\n\nIn particular, we'll now start freezing eagerly before we switch over\nto preferring to advance relfrozenxid for the same table. As I said in\nmy summary of v9 the other day, we \"stagger\" the point at which these\ntwo behaviors are first applied, with the goal of smoothing the\ntransition. We try to disguise the fact that there are still two\ndifferent sets of behavior. We try to get the best of both worlds\n(eager and lazy behaviors), without the user ever really noticing.\n\nDon't forget that eager behavior with the visibility map is expected\nto directly lead to freezing more pages (not a guarantee, but quite\nlikely). So while skipping strategy and freezing strategy are two\nindependent things, they're independent in name only, mechanically.\nThey are not independent things in any practical sense. (The\nunderlying reason why that is true is of course the same reason why\nvacuum_freeze_min_age only really works as designed in aggressive mode\nVACUUMs.)\n\n> each VACUUM makes a payment on the deferred costs of freezing. 
I think\n> this has already been discussed but it keeps reappearing in my mind, so\n> maybe we can settle this with a comment (and/or docs)?\n\nThat said, I believe that we should always advance relfrozenxid in\ntables that are already moderately sized -- a table that is already\nbig enough to be some small multiple of\nvacuum_freeze_strategy_threshold should always take an eager approach\nto advancing relfrozenxid. That is, I don't think that it makes sense\nto pay the cost of freezing down incrementally given a moderately\nlarge table.\n\nLarge tables and small tables are qualitatively different things, at\nleast from a VACUUM point of view. To some degree we can afford to be\nwrong about small tables, because that won't cause us any serious\npain. This isn't really true with larger tables -- a VACUUM of a large\ntable is \"too big to fail\". Our working assumption for tables that are\nstill growing now, in the ongoing VACUUM, is that they will continue\nto grow.\n\nThere is often one very large table, and by the time the next VACUUM\ncomes around, the table may have accumulated more unfrozen pages than\nthe entire rest of the database combined (I mean all of the rest of\nthe database, frozen and unfrozen pages alike). This may even be\ncommon:\n\nhttps://brandur.org/fragments/events\n\n> * I'm wondering whether vacuum_freeze_min_age makes sense anymore. It\n> doesn't take effect unless the page is not skipped, which is confusing\n> from a usability standpoint, and we have better heuristics to decide if\n> the whole page should be frozen or not anyway (i.e. if an FPI was\n> already taken then freezing is cheaper).\n\nI think that vacuum_freeze_min_age still has a role to play. The only\nthing that can trigger freezing during a VACUUM that opts to use a\nlazy strategy VACUUM is the FPI-from-pruning trigger mechanism (new to\nv9), plus vacuum_freeze_min_age/FreezeLimit. So you cannot really have\na lazy strategy without vacuum_freeze_min_age. 
The original\nvacuum_freeze_min_age design did make sense, at least\npre-visibility-map, because sometimes being lazy about freezing is the\nbest strategy. Especially with small, frequently updated tables like\nmost of the pgbench tables.\n\nThere is nothing inherently wrong with deciding to freeze (or even to\nwait for a cleanup lock) on the basis of a given XID's age. My problem\nisn't with that behavior in general. It's with the fact that we use it\neven when it's clearly inappropriate -- wildly inappropriate. We have\nplenty of information that strongly hints at whether or not laziness\nis a good idea. It's a good idea whenever laziness has a decent chance\nof avoiding completely unnecessary work altogether, provided we can\nafford to be wrong about that without having to pay too high a cost\nlater on, when we have to course correct. What this mostly boils down\nto is this: lazy freezing is generally a good idea in small tables\nonly.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 12 Dec 2022 16:59:58 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 8:00 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Dec 12, 2022 at 3:47 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > But the heuristic also seems off to me. What if you have lots of\n> > partitions in an append-only range-partitioned table? That would tend to use the\n> > lazy freezing strategy (because each partition is small), but that's\n> > not what you want. I understand heuristics aren't perfect, but it feels\n> > like we could do something better.\n>\n> It is at least vastly superior to vacuum_freeze_min_age in cases like\n> this. Not that that's hard -- vacuum_freeze_min_age just doesn't ever\n> trigger freezing in any autovacuum given a table like pgbench_history\n> (barring during aggressive mode), due to how it interacts with the\n> visibility map. So we're practically guaranteed to do literally all\n> freezing for an append-only table in an aggressive mode VACUUM.\n>\n> Worst of all, that happens on a timeline that has nothing to do with\n> the physical characteristics of the table itself (like the number of\n> unfrozen heap pages or something).\n\nIf the number of unfrozen heap pages is the thing we care about, perhaps\nthat, and not the total size of the table, should be the parameter that\ndrives freezing strategy?\n\n> That said, I agree that the system-level picture of debt (the system\n> level view of the number of unfrozen heap pages) is relevant, and that\n> it isn't directly considered by the patch. I think that that can be\n> treated as work for a future release. 
In fact, I think that there is a\n> great deal that we could teach autovacuum.c about the system level\n> view of things -- this is only one.\n\nIt seems an easier path to considering system-level debt (as measured by\nunfrozen heap pages) would be to start with considering table-level debt\nmeasured the same way.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 13 Dec 2022 15:29:18 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 12:29 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> If the number of unfrozen heap pages is the thing we care about, perhaps that, and not the total size of the table, should be the parameter that drives freezing strategy?\n\nThat's not the only thing we care about, though. And to the extent we\ncare about it, we mostly care about the consequences of either\nfreezing or not freezing eagerly. Concentration of unfrozen pages in\none particular table is a lot more of a concern than the same number\nof heap pages being spread out across multiple tables. Those tables\ncan all be independently vacuumed, and come with their own\nrelfrozenxid, that can be advanced independently, and are very likely\nto be frozen as part of a vacuum that needed to happen anyway.\n\nPages become frozen pages because VACUUM freezes those pages. Same\nwith all-visible pages, which could in principle have been made\nall-frozen instead, had VACUUM opted to do it that way back when it\nprocessed the page. So VACUUM is not a passive, neutral observer here.\nWhat happens over time and across multiple VACUUM operations is very\nrelevant. VACUUM needs to pick up where it left off last time, at\nleast with larger tables, where the time between VACUUMs is naturally\nvery high, and where each individual VACUUM has to process a huge\nnumber of individual pages. It's not really practical to take a \"wait\nand see\" approach with big tables.\n\nAt the very least, a given VACUUM operation has to choose its freezing\nstrategy based on how it expects the table will look when it's done\nvacuuming the table, and how that will impact the next VACUUM against\nthe same table. Without that, then vacuuming an append-only table will\nfall into a pattern of setting pages all-visible in one vacuum, and\nthen freezing those same pages all-frozen in the very next vacuum\nbecause there are too many. 
Which makes little sense; we're far better\noff freezing the pages at the earliest opportunity instead.\n\nWe're going to have to write a WAL record for the visibility map\nanyway, so doing everything at the same time has a lot to recommend\nit. Even if it turns out to be quite wrong, we may still come out\nahead in terms of absolute volume of WAL written, and especially in\nterms of performance stability. To a limited extent we need to reason\nabout what will happen in the near future. But we also need to reason\nabout which kinds of mispredictions we cannot afford to make, and\nwhich kinds are okay. Some mistakes hurt a lot more than others.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Dec 2022 09:16:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 9:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> That's not the only thing we care about, though. And to the extent we\n> care about it, we mostly care about the consequences of either\n> freezing or not freezing eagerly. Concentration of unfrozen pages in\n> one particular table is a lot more of a concern than the same number\n> of heap pages being spread out across multiple tables. Those tables\n> can all be independently vacuumed, and come with their own\n> relfrozenxid, that can be advanced independently, and are very likely\n> to be frozen as part of a vacuum that needed to happen anyway.\n\nAt the suggestion of Jeff, I wrote a Wiki page that shows motivating\nexamples for the patch series:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples\n\nThese are all cases where VACUUM currently doesn't do the right thing\naround freezing, in a way that is greatly ameliorated by the patch.\nPerhaps this will help other hackers to understand the motivation\nbehind some of these mechanisms. There are plenty of details that only\nmake sense in the context of a certain kind of table, with certain\nperformance characteristics that the design is sensitive to, and seeks\nto take advantage of in one way or another.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 13 Dec 2022 15:07:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, 14 Dec 2022 at 00:07, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Dec 13, 2022 at 9:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > That's not the only thing we care about, though. And to the extent we\n> > care about it, we mostly care about the consequences of either\n> > freezing or not freezing eagerly. Concentration of unfrozen pages in\n> > one particular table is a lot more of a concern than the same number\n> > of heap pages being spread out across multiple tables. Those tables\n> > can all be independently vacuumed, and come with their own\n> > relfrozenxid, that can be advanced independently, and are very likely\n> > to be frozen as part of a vacuum that needed to happen anyway.\n>\n> At the suggestion of Jeff, I wrote a Wiki page that shows motivating\n> examples for the patch series:\n>\n> https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples\n>\n> These are all cases where VACUUM currently doesn't do the right thing\n> around freezing, in a way that is greatly ameliorated by the patch.\n> Perhaps this will help other hackers to understand the motivation\n> behind some of these mechanisms. 
There are plenty of details that only\n> make sense in the context of a certain kind of table, with certain\n> performance characteristics that the design is sensitive to, and seeks\n> to take advantage of in one way or another.\n\nIn this mentioned wiki page, section \"Simple append-only\", the\nfollowing is written:\n\n> Our \"transition from lazy to eager strategies\" concludes with an autovacuum that actually advanced relfrozenxid eagerly:\n>> automatic vacuum of table \"regression.public.pgbench_history\": index scans: 0\n>> pages: 0 removed, 1078444 remain, 561143 scanned (52.03% of total)\n>> [...]\n>> frozen: 560841 pages from table (52.00% of total) had 88051825 tuples frozen\n>> [...]\n>> WAL usage: 1121683 records, 557662 full page images, 4632208091 bytes\n\nI think that this 'transition from lazy to eager' could benefit from a\nlimit on how many all_visible blocks each autovacuum iteration can\nfreeze. This first run of (auto)vacuum after the 8GB threshold seems\nto appear as a significant IO event (both in WAL and relation\nread/write traffic) with 50% of the table updated and WAL-logged. I\nthink this should be limited to some degree, such as only freeze\nall_visible blocks up to 10% of the table's blocks in eager vacuum, so\nthat the load is spread across a larger time frame and more VACUUM\nruns.\n\n\nKind regards,\n\nMatthias van de Meent.\n\n\n",
"msg_date": "Thu, 15 Dec 2022 15:50:19 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 6:50 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> This first run of (auto)vacuum after the 8GB threshold seems\n> to appear as a significant IO event (both in WAL and relation\n> read/write traffic) with 50% of the table updated and WAL-logged. I\n> think this should be limited to some degree, such as only freeze\n> all_visible blocks up to 10% of the table's blocks in eager vacuum, so\n> that the load is spread across a larger time frame and more VACUUM\n> runs.\n\nI agree that the burden of catch-up freezing is excessive here (in\nfact I already wrote something to that effect on the wiki page). The\nlikely solution can be simple enough.\n\nIn v9 of the patch, we switch over to eager freezing when table size\ncrosses 4GB (since that is the value of the\nvacuum_freeze_strategy_threshold GUC). The catch up freezing that you\ndraw attention to here occurs when table size exceeds 8GB, which is a\nseparate physical table size threshold that forces eager relfrozenxid\nadvancement. The second threshold is hard-coded to 2x the first one.\n\nI think that this issue can be addressed by making the second\nthreshold 4x or even 8x vacuum_freeze_strategy_threshold, not just 2x.\nThat would mean that we'd have to freeze just as many pages whenever\nwe did the catch-up freezing -- so no change in the added *absolute*\ncost of freezing. But, the *relative* cost would be much lower, simply\nbecause catch-up freezing would take place when the table was much\nlarger. So it would be a lot less noticeable.\n\nNote that we might never reach the second table size threshold before\nwe must advance relfrozenxid, in any case. The catch-up freezing might\nactually take place because table age created pressure to advance\nrelfrozenxid. It's useful to have a purely physical/table-size\nthreshold like this, especially in bulk loading scenarios. But it's\nnot like table age doesn't have any influence at all, anymore. 
The\ncost model weighs physical units/costs as well as table age, and in\ngeneral the most likely trigger for advancing relfrozenxid is usually\nsome combination of the two, not any one factor on its own [1].\n\n[1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Opportunistically_advancing_relfrozenxid_with_bursty.2C_real-world_workloads\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:53:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "The patches (003 and 005) are missing a word\nshould use to decide whether to its eager freezing strategy.\n\nOn the wiki, missing a word:\nbuilds on related added\n\n\n",
"msg_date": "Thu, 15 Dec 2022 13:11:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 11:11 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The patches (003 and 005) are missing a word\n> should use to decide whether to its eager freezing strategy.\n\nI mangled this during rebasing for v9, which reordered the commits.\nWill be fixed in v10.\n\n> On the wiki, missing a word:\n> builds on related added\n\nFixed.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 15 Dec 2022 20:13:07 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 6:07 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> At the suggestion of Jeff, I wrote a Wiki page that shows motivating\n> examples for the patch series:\n>\n>\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples\n>\n> These are all cases where VACUUM currently doesn't do the right thing\n> around freezing, in a way that is greatly ameliorated by the patch.\n> Perhaps this will help other hackers to understand the motivation\n> behind some of these mechanisms. There are plenty of details that only\n> make sense in the context of a certain kind of table, with certain\n> performance characteristics that the design is sensitive to, and seeks\n> to take advantage of in one way or another.\n\nThanks for this. This is the kind of concrete, data-based evidence that I\nfind much more convincing, or at least easy to reason about. I'd actually\nrecommend in the future to open discussion with this kind of analysis --\neven before coding, it's possible to indicate what a design is *intended*\nto achieve. And reviewers can likewise bring up cases of their own in a\nconcrete fashion.\n\nOn Wed, Dec 14, 2022 at 12:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> At the very least, a given VACUUM operation has to choose its freezing\n> strategy based on how it expects the table will look when it's done\n> vacuuming the table, and how that will impact the next VACUUM against\n> the same table. Without that, then vacuuming an append-only table will\n> fall into a pattern of setting pages all-visible in one vacuum, and\n> then freezing those same pages all-frozen in the very next vacuum\n> because there are too many. Which makes little sense; we're far better\n> off freezing the pages at the earliest opportunity instead.\n\nThat makes sense, but I wonder if we can actually be more specific: One\nmotivating example mentioned is the append-only table. 
If we detected that\ncase, which I assume we can because autovacuum_vacuum_insert_* GUCs exist,\nwe could use that information as one way to drive eager freezing\nindependently of size. At least in theory -- it's very possible size will\nbe a necessary part of the decision, but it's less clear that it's as\nuseful as a user-tunable knob.\n\nIf we then ignored the append-only case when evaluating a freezing policy,\nmaybe other ideas will fall out. I don't have a well-thought out idea about\npolicy or knobs, but it's worth thinking about.\n\nAside from that, I've only given the patches a brief reading. Having seen\nthe VM snapshot in practice (under \"Scanned pages, visibility map snapshot\"\nin the wiki page), it's neat to see fewer pages being scanned. Prefetching\nnot only seems superior to SKIP_PAGES_THRESHOLD, but anticipates\nasynchronous IO. Keeping only one VM snapshot page in memory makes perfect\nsense.\n\nI do have a cosmetic, but broad-reaching, nitpick about terms regarding\n\"skipping strategy\". That's phrased as a kind of negative -- what we're\n*not* doing. Many times I had to pause and compute in my head what we're\n*doing*, i.e. the \"scanning strategy\".
For example, I wonder if the VM\nstrategies would be easier to read as:\n\nVMSNAP_SKIP_ALL_VISIBLE -> VMSNAP_SCAN_LAZY\nVMSNAP_SKIP_ALL_FROZEN -> VMSNAP_SCAN_EAGER\nVMSNAP_SKIP_NONE -> VMSNAP_SCAN_ALL\n\nNotice here they're listed in order of increasing eagerness.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 16 Dec 2022 14:48:17 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi!\n\nI've found this discussion very interesting, in view of vacuuming\nTOAST tables is always a problem because these tables tend to\nbloat very quickly with dead data - just to remind, all TOAST-able\ncolumns of the relation use the same TOAST table which is one\nfor the relation, and TOASTed data are not updated - there are\nonly insert and delete operations.\n\nHave you tested it with large and constantly used TOAST tables?\nHow would it work with the current TOAST implementation?\n\nWe propose a different approach to the TOAST mechanics [1],\nand a new vacuum would be very promising.\n\nThank you!\n\n[1] https://commitfest.postgresql.org/41/3490/\n\nOn Fri, Dec 16, 2022 at 10:48 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> On Wed, Dec 14, 2022 at 6:07 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > At the suggestion of Jeff, I wrote a Wiki page that shows motivating\n> > examples for the patch series:\n> >\n> >\n> https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples\n> >\n> > These are all cases where VACUUM currently doesn't do the right thing\n> > around freezing, in a way that is greatly ameliorated by the patch.\n> > Perhaps this will help other hackers to understand the motivation\n> > behind some of these mechanisms. There are plenty of details that only\n> > make sense in the context of a certain kind of table, with certain\n> > performance characteristics that the design is sensitive to, and seeks\n> > to take advantage of in one way or another.\n>\n> Thanks for this. This is the kind of concrete, data-based evidence that I\n> find much more convincing, or at least easy to reason about. I'd actually\n> recommend in the future to open discussion with this kind of analysis --\n> even before coding, it's possible to indicate what a design is *intended*\n> to achieve. 
And reviewers can likewise bring up cases of their own in a\n> concrete fashion.\n>\n> On Wed, Dec 14, 2022 at 12:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > At the very least, a given VACUUM operation has to choose its freezing\n> > strategy based on how it expects the table will look when it's done\n> > vacuuming the table, and how that will impact the next VACUUM against\n> > the same table. Without that, then vacuuming an append-only table will\n> > fall into a pattern of setting pages all-visible in one vacuum, and\n> > then freezing those same pages all-frozen in the very next vacuum\n> > because there are too many. Which makes little sense; we're far better\n> > off freezing the pages at the earliest opportunity instead.\n>\n> That makes sense, but I wonder if we can actually be more specific: One\n> motivating example mentioned is the append-only table. If we detected that\n> case, which I assume we can because autovacuum_vacuum_insert_* GUCs exist,\n> we could use that information as one way to drive eager freezing\n> independently of size. At least in theory -- it's very possible size will\n> be a necessary part of the decision, but it's less clear that it's as\n> useful as a user-tunable knob.\n>\n> If we then ignored the append-only case when evaluating a freezing policy,\n> maybe other ideas will fall out. I don't have a well-thought out idea about\n> policy or knobs, but it's worth thinking about.\n>\n> Aside from that, I've only given the patches a brief reading. Having seen\n> the VM snapshot in practice (under \"Scanned pages, visibility map snapshot\"\n> in the wiki page), it's neat to see fewer pages being scanned. Prefetching\n> not only seems superior to SKIP_PAGES_THRESHOLD, but anticipates\n> asynchronous IO. Keeping only one VM snapshot page in memory makes perfect\n> sense.\n>\n> I do have a cosmetic, but broad-reaching, nitpick about terms regarding\n> \"skipping strategy\".
That's phrased as a kind of negative -- what we're\n> *not* doing. Many times I had to pause and compute in my head what we're\n> *doing*, i.e. the \"scanning strategy\". For example, I wonder if the VM\n> strategies would be easier to read as:\n>\n> VMSNAP_SKIP_ALL_VISIBLE -> VMSNAP_SCAN_LAZY\n> VMSNAP_SKIP_ALL_FROZEN -> VMSNAP_SCAN_EAGER\n> VMSNAP_SKIP_NONE -> VMSNAP_SCAN_ALL\n>\n> Notice here they're listed in order of increasing eagerness.\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Fri, 16 Dec 2022 10:59:39 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 11:48 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Thanks for this. This is the kind of concrete, data-based evidence that I find much more convincing, or at least easy to reason about.\n\nI'm glad to hear that it helped. It's always difficult to judge where\nother people are coming from, especially when it's not clear how much\ncontext is shared. Face time would have helped here, too.\n\n> One motivating example mentioned is the append-only table. If we detected that case, which I assume we can because autovacuum_vacuum_insert_* GUCs exist, we could use that information as one way to drive eager freezing independently of size. At least in theory -- it's very possible size will be a necessary part of the decision, but it's less clear that it's as useful as a user-tunable knob.\n\nI am not strongly opposed to that idea, though I have my doubts about\nit. I have thought about it already, and it wouldn't be hard to get\nthe information to vacuumlazy.c (I plan on doing it as part of related\nwork on antiwraparound autovacuum, in fact [1]). I'm skeptical of the\ngeneral idea that autovacuum.c has enough reliable information to give\ndetailed recommendations as to how vacuumlazy.c should process the\ntable.\n\nI have pointed out several major flaws with the autovacuum.c dead\ntuple accounting in the past [2][3], but I also think that there are\nsignificant problems with the tuples inserted accounting. Basically, I\nthink that there are effects which are arguably an example of the\ninspection paradox [4]. Insert-based autovacuums occur on a timeline\ndetermined by the \"inserted since last autovacuum\" statistics. These\nstatistics are (in part) maintained by autovacuum/VACUUM itself. Which\nhas no specific understanding of how it might end up chasing its own\ntail.\n\nLet me be more concrete about what I mean about autovacuum chasing its\nown tail. 
The autovacuum_vacuum_insert_threshold mechanism works by\ntriggering an autovacuum whenever the number of tuples inserted since\nthe last autovacuum/VACUUM reaches a certain threshold -- usually some\nfixed proportion of pg_class.reltuples. But the\ntuples-inserted-since-last-VACUUM counter gets reset at the end of\nVACUUM, not at the start. Whereas VACUUM itself processes only the\nsubset of pages that needed to be vacuumed at the start of the VACUUM.\nThere is no attempt to compensate for that disparity. This *isn't*\nreally a measure of \"unvacuumed tuples\" (you'd need to compensate to\nget that).\n\nThis \"at the start vs at the end\" difference won't matter at all with\nsmaller tables. And even in larger tables we might hope that the\neffect would kind of average out. But what about cases where one\nparticular VACUUM operation takes an unusually long time, out of a\nsequence of successive VACUUMs that run against the same table? For\nexample, the sequence that you see on the Wiki page, when Postgres\nHEAD autovacuum does an aggressive VACUUM on one occasion, which takes\ndramatically longer [5].\n\nNotice that the sequence in [5] shows that the patch does one more\nautovacuum operation in total, compared to HEAD/master. That's a lot\nmore -- we're talking about VACUUMs that each take 40+ minutes. That\ncan be explained by the fact that VACUUM (quite naturally) resets the\n\"tuples inserted since last VACUUM\" at the end of that unusually long\nrunning aggressive autovacuum -- just like any other VACUUM would.\nThat seems very weird to me. If (say) we happened to have a much\nhigher vacuum_freeze_table_age setting, then we wouldn't have had an\naggressive VACUUM until much later on (or never, because the benchmark\nwould just end).
And the VACUUM that was aggressive would have been a\nregular VACUUM instead, and would therefore have completed far sooner,\nand would therefore have had a *totally* different cadence, compared\nto what we actually saw -- it becomes distorted in a way that outlasts\nthe aggressive VACUUM.\n\nWith a far higher vacuum_freeze_table_age, we might have even managed\nto do two regular autovacuums in the same period that it took a single\naggressive VACUUM to run in (that's not too far from what actually\nhappened with the patch). The *second* regular autovacuum would then\nend up resetting the \"inserted since last VACUUM\" counter to 0 at the\nsame time as the long running aggressive VACUUM actually did so (same\nwall clock time, same time since the start of the benchmark). Notice\nthat we'll have done much less useful work (on cleaning up bloat and\nsetting newer pages all-visible) with the \"one long aggressive mode\nVACUUM\" setup/scenario -- we'll be way behind -- but the statistics\nwill nevertheless look about the same as they do in the \"two fast\nautovacuums instead of one slow autovacuum\" counterfactual scenario.\n\nIn short, autovacuum.c fails to appreciate that a lot of stuff about\nthe table changes when VACUUM runs. Time hasn't stood still -- the\ntable was modified and extended throughout. So autovacuum.c hasn't\ncompensated for how VACUUM actually performed, and, in effect, forgets\nhow far it has fallen behind. It should be eager to start the next\nautovacuum very quickly, having fallen behind, but it isn't eager.\nThis is all the more reason to get rid of aggressive mode, but that's\nnot my point -- my point is that the statistics driving things seem\nquite dubious, in all sorts of ways.\n\n> Aside from that, I've only given the patches a brief reading.\n\nThanks for taking a look.\n\n> Having seen the VM snapshot in practice (under \"Scanned pages, visibility map snapshot\" in the wiki page), it's neat to see fewer pages being scanned. 
Prefetching not only seems superior to SKIP_PAGES_THRESHOLD, but anticipates asynchronous IO.\n\nAll of that is true, but more than anything else the VM snapshot\nconcept appeals to me because it seems to make VACUUMs of large tables\nmore similar to VACUUMs of small tables. Particularly when one\nindividual VACUUM happens to take an unusually long amount of time,\nfor whatever reason (best example right now is aggressive mode, but\nthere are other ways in which VACUUM can take far longer than\nexpected). That approach seems much more logical. I also think that\nit'll make it easier to teach VACUUM to \"pick up where the last VACUUM\nleft off\" in the future.\n\nI understand why you haven't seriously investigated using the same\ninformation for the Radix tree dead_items project. I certainly don't\nobject. But I still think that having one integrated data structure\n(VM snapshots + dead_items) is worth exploring in the future. It's\nsomething that I think is quite promising.\n\n> I do have a cosmetic, but broad-reaching, nitpick about terms regarding \"skipping strategy\". That's phrased as a kind of negative -- what we're *not* doing. Many times I had to pause and compute in my head what we're *doing*, i.e. the \"scanning strategy\". For example, I wonder if the VM strategies would be easier to read as:\n>\n> VMSNAP_SKIP_ALL_VISIBLE -> VMSNAP_SCAN_LAZY\n> VMSNAP_SKIP_ALL_FROZEN -> VMSNAP_SCAN_EAGER\n> VMSNAP_SKIP_NONE -> VMSNAP_SCAN_ALL\n>\n> Notice here they're listed in order of increasing eagerness.\n\nI agree that the terminology around skipping strategies is confusing,\nand plan to address that in the next version. 
I'll consider using this\nscheme for v10.\n\n[1] https://commitfest.postgresql.org/41/4027/\n[2] https://postgr.es/m/CAH2-Wz=MGFwJEpEjVzXwEjY5yx=UuNPzA6Bt4DSMasrGLUq9YA@mail.gmail.com\n[3] https://postgr.es/m/CAH2-WznrZC-oHkB+QZQS65o+8_Jtj6RXadjh+8EBqjrD1f8FQQ@mail.gmail.com\n[4] https://towardsdatascience.com/the-inspection-paradox-is-everywhere-2ef1c2e9d709\n[5] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Scanned_pages.2C_visibility_map_snapshot\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 16 Dec 2022 13:53:56 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 11:59 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n> I've found this discussion very interesting, in view of vacuuming\n> TOAST tables is always a problem because these tables tend to\n> bloat very quickly with dead data - just to remind, all TOAST-able\n> columns of the relation use the same TOAST table which is one\n> for the relation, and TOASTed data are not updated - there are\n> only insert and delete operations.\n\nI don't think that it would be any different to any other table that\nhappened to have lots of inserts and deletes, such as the table\ndescribed here:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Mixed_inserts_and_deletes\n\nIn the real world, a table like this would probably consist of some\ncompletely static data, combined with other data that is constantly\ndeleted and re-inserted -- probably only a small fraction of the table\nat any one time. I would expect such a table to work quite well,\nbecause the static pages would all become frozen (at least after a\nwhile), leaving behind only the tuples that are deleted quickly, most\nof the time. VACUUM would have a decent chance of noticing that it\nwill be cheap to advance relfrozenxid in earlier VACUUM operations, as\nbloat is cleaned up -- even a VACUUM that happens long before the\npoint that autovacuum.c will launch an antiwraparound autovacuum has a\ndecent chance of it. That's not a new idea, really; the\npgbench_branches example from the Wiki page looks like that already,\nand even works on Postgres 15.\n\nHere is the part that's new: the pressure to advance relfrozenxid\ngrows gradually, as table age grows. If table age is still very young,\nthen we'll only do it if the number of \"extra\" scanned pages is < 5%\nof rel_pages -- only when the added cost is very low (again, like the\npgbench_branches example, mostly). 
Once table age gets about halfway\ntowards the point that antiwraparound autovacuuming is required,\nVACUUM then starts caring less about costs. It gradually worries less\nabout the costs, and more about the need to advance it. Ideally it\nwill happen before antiwraparound autovacuum is actually required.\n\nI'm not sure how much this would help with bloat. I suspect that it\ncould make a big difference with the right workload. If you always\nneed frequent autovacuums, just to deal with bloat, then there is\nnever a good time to run an aggressive antiwraparound autovacuum. An\naggressive AV will probably end up taking much longer than the typical\nautovacuum that deals with bloat. While the aggressive AV will remove\nas much bloat as any other AV, in theory, that might not help much. If\nthe aggressive AV takes as long as (say) 5 regular autovacuums would\nhave taken, and if you really needed those 5 separate autovacuums to\nrun, just to deal with the bloat, then that's a real problem. The\naggressive AV effectively causes bloat with such a workload.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 16 Dec 2022 18:44:11 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 10:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I agree that the burden of catch-up freezing is excessive here (in\n> fact I already wrote something to that effect on the wiki page). The\n> likely solution can be simple enough.\n\nAttached is v10, which fixes this issue, but using a different\napproach to the one I sketched here.\n\nThis revision also changes the terminology around VM skipping: we now\ncall the strategies there \"scanning strategies\", per feedback from\nJeff and John. This does seem a lot clearer.\n\nAlso cleaned up the docs a little bit, which were messed up by a\nrebasing issue in v9.\n\nI ended up fixing the aforementioned \"too much catch-up freezing\"\nissue by just getting rid of the whole concept of a second table-size\nthreshold that forces the eager scanning strategy. I now believe that\nit's fine to just rely on the generic logic that determines scanning\nstrategy based on a combination of table age and the added cost of\neager scanning. It'll work in a way that doesn't result in too much of\na freezing spike during any one VACUUM operation, without waiting\nuntil an antiwraparound autovacuum to advance relfrozenxid (it'll\nhappen far earlier than that, though still quite a lot later than what\nyou'd see with v9, so as to avoid that big spike in freezing that was\npossible in pgbench_history-like tables [1]).\n\nThis means that vacuum_freeze_strategy_threshold is now strictly\nconcerned with freezing. A table that is always frozen eagerly will\ninevitably fall into a pattern of advancing relfrozenxid in every\nVACUUM operation, but that isn't something that needs to be documented\nor anything. We don't need to introduce a special case here.\n\nThe other notable change for v10 is in the final patch, which removes\naggressive mode altogether. 
v10 now makes lazy_scan_noprune less\nwilling to give up on setting relfrozenxid to a relatively recent XID.\nNow lazy_scan_noprune is willing to wait a short while for a cleanup\nlock on a heap page (a few tens of milliseconds) when doing so might\nbe all it takes to preserve VACUUM's ability to advance relfrozenxid\nall the way up to FreezeLimit, which is the traditional guarantee made\nby aggressive mode VACUUM.\n\nThis makes lazy_scan_noprune \"under promise and over deliver\". It now\nonly promises to advance relfrozenxid up to MinXid in the very worst\ncase -- even if that means waiting indefinitely long for a cleanup\nlock. That's not a very strong promise, because advancing relfrozenxid\nup to MinXid is only barely adequate. At the same time,\nlazy_scan_noprune is willing to go to extra trouble to\nget a recent enough FreezeLimit -- it'll wait for a few 10s of milliseconds.\nIt's just not willing to wait indefinitely. This seems likely to give us the\nbest of both worlds.\n\nThis was based in part on something that Andres said about cleanup\nlocks a while back. He had a concern about cases where even MinXid was\nbefore OldestXmin. To some degree that's addressed here, because I've\nalso changed the way that MinXid is determined, so that it'll be a\nmuch earlier value. That doesn't have much downside now, because of the\nway that lazy_scan_noprune is now \"aggressive-ish\" when that happens to\nmake sense.\n\nNot being able to get a cleanup lock on our first attempt is relatively\nrare, and when it happens it's often something completely benign. For\nexample, it might just be that the checkpointer was writing out the\nsame page at the time, which signifies nothing about it really being\nhard to get a cleanup lock -- the checkpointer will have dropped its\nconflicting buffer pin almost immediately. 
It would be a shame to\naccept a significantly older final relfrozenxid during an infrequent,\nlong running antiwraparound autovacuum of larger tables when that\nhappens -- we should be willing to wait 30 milliseconds (just not 30\nminutes, or 30 days).\n\nNone of this even comes up for pages whose XIDs are >= FreezeLimit,\nwhich is actually most pages with the patch, even in larger tables.\nIt's relatively rare for VACUUM to need to process any heap page in\nlazy_scan_noprune, but it'll be much rarer still for it to have to do\na \"short wait\" like this. So \"short waits\" have a very small downside,\nand (at least occasionally) a huge upside.\n\nBy inventing a third alternative behavior (to go along with processing\npages via standard lazy_scan_noprune skipping and processing pages in\nlazy_scan_prune), VACUUM has the flexibility to respond in a way\nthat's proportionate to the problem at hand, in one particular heap\npage. The new behavior has zero chance of mattering in most individual\ntables/workloads, but it's good to have every possible eventuality\ncovered. I really hate the idea of getting a significantly worse\noutcome just because of something that happened in one single heap\npage, because the wind changed directions at the wrong time.\n\n[1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch\n\n--\nPeter Geoghegan",
"msg_date": "Sun, 18 Dec 2022 14:20:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
    "msg_contents": "On Sun, 2022-12-18 at 14:20 -0800, Peter Geoghegan wrote:\n> Attached is v10, which fixes this issue, but using a different\n> approach to the one I sketched here.\n\nIn 0001, it's a fairly straightforward rearrangement and looks like an\nimprovement to me. I have a few complaints, but they are about pre-\nexisting code that you moved around, and I like that you didn't\neditorialize too much while just moving code around. +1 from me.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 01:01:49 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
    "msg_contents": "Hi!\n\nI'll try to apply this patch onto my branch with Pluggable TOAST to test\nthese mechanics with new TOAST. Would reply on the result. It could\nbe difficult though, because both have a lot of changes that affect\nthe same code.\n\n>I'm not sure how much this would help with bloat. I suspect that it\n>could make a big difference with the right workload. If you always\n>need frequent autovacuums, just to deal with bloat, then there is\n>never a good time to run an aggressive antiwraparound autovacuum. An\n>aggressive AV will probably end up taking much longer than the typical\n>autovacuum that deals with bloat. While the aggressive AV will remove\n>as much bloat as any other AV, in theory, that might not help much. If\n>the aggressive AV takes as long as (say) 5 regular autovacuums would\n>have taken, and if you really needed those 5 separate autovacuums to\n>run, just to deal with the bloat, then that's a real problem. The\n>aggressive AV effectively causes bloat with such a workload.\n\n\n\nOn Tue, Dec 20, 2022 at 12:01 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Sun, 2022-12-18 at 14:20 -0800, Peter Geoghegan wrote:\n> > Attached is v10, which fixes this issue, but using a different\n> > approach to the one I sketched here.\n>\n> In 0001, it's fairly straightforward rearrangement and looks like an\n> improvement to me. I have a few complaints, but they are about pre-\n> existing code that you moved around, and I like that you didn't\n> editorialize too much while just moving code around. +1 from me.\n>\n>\n> --\n> Jeff Davis\n> PostgreSQL Contributor Team - AWS\n>\n>\n>\n>\n>\n\n-- \nRegards,\n\n--\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 20 Dec 2022 21:04:14 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sun, 2022-12-18 at 14:20 -0800, Peter Geoghegan wrote:\n> On Thu, Dec 15, 2022 at 10:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I agree that the burden of catch-up freezing is excessive here (in\n> > fact I already wrote something to that effect on the wiki page).\n> > The\n> > likely solution can be simple enough.\n> \n> Attached is v10, which fixes this issue, but using a different\n> approach to the one I sketched here.\n\nComments on 0002:\n\nCan you explain the following portion of the diff:\n\n\n - else if (MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff))\n + else if (MultiXactIdPrecedes(multi, cutoffs->OldestMxact))\n\n ...\n\n + /* Can't violate the MultiXactCutoff invariant, either */\n + if (!need_replace)\n + need_replace = MultiXactIdPrecedes(\n + multi, cutoffs->MultiXactCutoff);\n\nRegarding correctness, it seems like the basic structure and invariants\nare the same, and it builds on the changes already in 9e5405993c. Patch\n0002 seems *mostly* about making choices within the existing framework.\nThat gives me more confidence.\n\nThat being said, it does push harder against the limits on both sides.\nIf I understand correctly, that means pages with wider distributions of\nxids are going to persist longer, which could expose pre-existing bugs\nin new and interesting ways.\n\nNext, the 'freeze_required' field suggests that it's more involved in\nthe control flow that causes freezing than it actually is. All it does\nis communicate how the trackers need to be adjusted. The return value\nof heap_prepare_freeze_tuple() (and underneath, the flags set by\nFreezeMultiXactId()) are what actually control what happens. It would\nbe nice to make this more clear somehow.\n\nThe comment:\n\n /* \n * If we freeze xmax, make absolutely sure that it's not an XID that\n * is important. (Note, a lock-only xmax can be removed independent\n * of committedness, since a committed lock holder has released the \n * lock). 
\n */\n\ncaused me to go down a rabbit hole looking for edge cases where we\nmight want to freeze an xmax but not an xmin; e.g. tup.xmax <\nOldestXmin < tup.xmin or the related case where tup.xmax < RecentXmin <\ntup.xmin. I didn't find a problem, so that's good news.\n\nI also tried some pgbench activity along with concurrent vacuums (and\nvacuum freezes) along with periodic verify_heapam(). No problems there.\n \nDid you already describe the testing you've done for 0001+0002\nspecifically? It's not radically new logic, but it would be good to try\nto catch minor state-handling errors.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 17:44:27 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
    "msg_contents": "On Tue, Dec 20, 2022 at 5:44 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Comments on 0002:\n>\n> Can you explain the following portion of the diff:\n>\n>\n> - else if (MultiXactIdPrecedes(multi, cutoffs->MultiXactCutoff))\n> + else if (MultiXactIdPrecedes(multi, cutoffs->OldestMxact))\n>\n> ...\n>\n> + /* Can't violate the MultiXactCutoff invariant, either */\n> + if (!need_replace)\n> + need_replace = MultiXactIdPrecedes(\n> + multi, cutoffs->MultiXactCutoff);\n\nDon't forget the historic context: before Postgres 15's commit\n0b018fab, VACUUM's final relfrozenxid always came from FreezeLimit.\nAlmost all of this code predates that work. So the general idea that\nyou can make a \"should I freeze or should I ratchet back my\nrelfrozenxid tracker instead?\" trade-off at the level of individual\ntuples and pages is still a very new one. Right now it's only applied\nwithin lazy_scan_noprune(), but 0002 leverages the same principles\nhere.\n\nBefore now, these heapam.c freezing routines had cutoffs called\ncutoff_xid and cutoff_multi. These had values that actually came from\nvacuumlazy.c's FreezeLimit and MultiXactCutoff cutoffs (which was\nrather unclear). But cutoff_xid and cutoff_multi were *also* used as\ninexact proxies for OldestXmin and OldestMxact (also kind of unclear,\nbut true). For example, there are some sanity checks in heapam.c that\nkind of pretend that cutoff_xid is OldestXmin, even though it usually\nisn't the same value (it can be, but only during VACUUM FREEZE, or\nwhen the min freeze age is 0 in some other way).\n\nSo 0002 teaches the same heapam.c code about everything -- about all\nof the different cutoffs, and about the true requirements of VACUUM\naround relfrozenxid advancement. In fact, 0002 makes vacuumlazy.c cede\na lot of control of \"XID stuff\" to the same heapam.c code, freeing it\nup to think about freezing as something that works at the level of\nphysical pages. 
This is key to allowing vacuumlazy.c to reason about\nfreezing at the level of the whole table. It thinks about physical\nblocks, leaving logical XIDs up to heapam.c code.\n\nThis business that you asked about in FreezeMultiXactId() is needed so\nthat we can allow vacuumlazy.c to \"think in terms of physical pages\",\nwhile at the same time avoiding allocating new Multis in VACUUM --\nwhich requires \"thinking about individual xmax fields\" instead -- a\nsomewhat conflicting goal. We're really trying to have it both ways\n(we get page-level freezing, with a little tuple level freezing on the\nside, sufficient to avoid allocating new Multis during VACUUMs in\nroughly the same way as we do right now).\n\nIn most cases \"freezing a page\" removes all XIDs < OldestXmin, and all\nMXIDs < OldestMxact. It doesn't quite work that way in certain rare\ncases involving MultiXacts, though. It is convenient to define \"freeze\nthe page\" in a way that gives heapam.c's FreezeMultiXactId() the\nleeway to put off the work of processing an individual tuple's xmax,\nwhenever it happens to be a MultiXactId that would require an\nexpensive second pass to process aggressively (allocating a new Multi\nduring VACUUM is especially worth avoiding here).\n\nOur definition of \"freeze the page\" is a bit creative, at least if\nyou're used to thinking about it in terms of strict XID-wise cutoffs\nlike OldestXmin/FreezeLimit. But even if you do think of it in terms\nof XIDs, the practical difference is extremely small in practice.\n\nFreezeMultiXactId() effectively makes a decision on how to proceed\nwith processing at the level of each individual xmax field. Its no-op\nmulti processing \"freezes\" an xmax in the event of a costly-to-process\nxmax on a page when (for whatever reason) page-level freezing is\ntriggered. If, on the other hand, page-level freezing isn't triggered\nfor the page, then page-level no-op processing takes care of the multi\nfor us instead. 
Either way, the remaining Multi will ratchet back\nVACUUM's relfrozenxid and/or relminmxid trackers as required, and we\nwon't need an expensive second pass over the multi (unless we really\nhave no choice, for example during a VACUUM FREEZE, where\nOldestXmin==FreezeLimit).\n\n> Regarding correctness, it seems like the basic structure and invariants\n> are the same, and it builds on the changes already in 9e5405993c. Patch\n> 0002 seems *mostly* about making choices within the existing framework.\n> That gives me more confidence.\n\nYou're right that it's the same basic invariants as before, of course.\nTurns out that those invariants can be pushed quite far.\n\nThough note that I kind of invented a new invariant (not really, sort\nof). Well, it's a postcondition, which is a sort of invariant: any\nscanned heap page that can be cleanup locked must never have any\nremaining XIDs < FreezeLimit, nor can any MXIDs < MultiXactCutoff\nremain. But a cleanup-locked page does *not* need to get rid of all\nXIDs < OldestXmin, nor MXIDs < OldestMxact.\n\nThis flexibility is mostly useful because it allows lazy_scan_prune to\njust decide to not freeze. But, to a much lesser degree, it's useful\nbecause of the edge case with multis -- in general we might just need\nthe same leeway when lazy_scan_prune \"freezes the page\".\n\n> That being said, it does push harder against the limits on both sides.\n> If I understand correctly, that means pages with wider distributions of\n> xids are going to persist longer, which could expose pre-existing bugs\n> in new and interesting ways.\n\nI don't think it's fundamentally different to what we're already doing\nin lazy_scan_noprune. It's just more complicated, because you have to\ntease apart slightly different definitions of freezing to understand\ncode around FreezeMultiXactId(). 
This is more or less needed to\nprovide maximum flexibility, where we delay decisions about what to do\nuntil the very last moment.\n\n> Next, the 'freeze_required' field suggests that it's more involved in\n> the control flow that causes freezing than it actually is. All it does\n> is communicate how the trackers need to be adjusted. The return value\n> of heap_prepare_freeze_tuple() (and underneath, the flags set by\n> FreezeMultiXactId()) are what actually control what happens. It would\n> be nice to make this more clear somehow.\n\nI'm not sure what you mean. Page-level freezing *doesn't* have to go\nahead when freeze_required is not ever set to true for any tuple on\nthe page (which is most of the time, in practice). lazy_scan_prune\ngets to make a choice about freezing the page, when the choice is\navailable.\n\nNote also that the FRM_NOOP case happens when a call to\nFreezeMultiXactId() takes place that won't leave behind a freeze plan\nfor the tuple (unless its xmin happens to necessitate a freeze plan\nfor the same tuple). And yet, it will do useful work, needed iff the\n\"freeze the page\" path is ultimately taken by lazy_scan_prune --\nFreezeMultiXactId() itself will ratchet back\nFreezePageRelfrozenXid/NewRelfrozenXid as needed to make everything\nsafe.\n\n> The comment:\n>\n> /*\n> * If we freeze xmax, make absolutely sure that it's not an XID that\n> * is important. (Note, a lock-only xmax can be removed independent\n> * of committedness, since a committed lock holder has released the\n> * lock).\n> */\n>\n> caused me to go down a rabbit hole looking for edge cases where we\n> might want to freeze an xmax but not an xmin; e.g. tup.xmax <\n> OldestXmin < tup.xmin or the related case where tup.xmax < RecentXmin <\n> tup.xmin. 
I didn't find a problem, so that's good news.\n\nThis is an example of what I meant about the heapam.c code using a\ncutoff that actually comes from FreezeLimit, when it would be more\nsensible to use OldestXmin instead.\n\n> I also tried some pgbench activity along with concurrent vacuums (and\n> vacuum freezes) along with periodic verify_heapam(). No problems there.\n>\n> Did you already describe the testing you've done for 0001+0002\n> specfiically? It's not radically new logic, but it would be good to try\n> to catch minor state-handling errors.\n\nLots of stuff with contrib/amcheck, which, as you must already know,\nwill notice when an XID/MXID is contained in a table whose\nrelfrozenxid and/or relminmxid indicates that it shouldn't be there.\n(Though VACUUM itself does the same thing, albeit not as effectively.)\n\nObviously the invariants haven't changed here. In many ways it's a\nvery small set of changes. But in one or two ways it's a significant\nshift. It depends on how you think about it.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 20 Dec 2022 19:15:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 7:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Dec 20, 2022 at 5:44 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > Next, the 'freeze_required' field suggests that it's more involved in\n> > the control flow that causes freezing than it actually is. All it does\n> > is communicate how the trackers need to be adjusted. The return value\n> > of heap_prepare_freeze_tuple() (and underneath, the flags set by\n> > FreezeMultiXactId()) are what actually control what happens. It would\n> > be nice to make this more clear somehow.\n>\n> I'm not sure what you mean. Page-level freezing *doesn't* have to go\n> ahead when freeze_required is not ever set to true for any tuple on\n> the page (which is most of the time, in practice). lazy_scan_prune\n> gets to make a choice about freezing the page, when the choice is\n> available.\n\nOh wait, I think I see the point of confusion now.\n\nWhen freeze_required is set to true, that means that lazy_scan_prune\nliterally has no choice -- it simply must freeze the page as\ninstructed by heap_prepare_freeze_tuple/FreezeMultiXactId. It's not\njust a strong suggestion -- it's crucial that lazy_scan_prune freezes\nthe page as instructed.\n\nThe \"no freeze\" trackers (HeapPageFreeze.NoFreezePageRelfrozenXid and\nHeapPageFreeze.NoFreezePageRelminMxid) won't have been maintained\nproperly when freeze_required was set, so lazy_scan_prune can't expect\nto use them -- doing so would lead to VACUUM setting incorrect values\nin pg_class later on.\n\nAvoiding the work of maintaining those \"no freeze\" trackers isn't just\na nice-to-have microoptimization -- it is sometimes very important. We\nkind of rely on this to be able to avoid getting too many MultiXact\nmember SLRU buffer misses inside FreezeMultiXactId. 
There is a comment\nabove FreezeMultiXactId that advises its caller that it had better not\ncall heap_tuple_should_freeze when freeze_required is set to true,\nbecause that could easily lead to multixact member SLRU buffer misses\n-- misses that FreezeMultiXactId set out to avoid itself.\n\nIt could actually be cheaper to freeze than to not freeze, in the case\nof a Multi -- member space misses can sometimes be really expensive.\nAnd so FreezeMultiXactId sometimes freezes a Multi even though it's\nnot strictly required to do so.\n\nNote also that this isn't a new behavior -- it's actually an old one,\nfor the most part. It kinda doesn't look that way, because we haven't\npassed down separate FreezeLimit/OldestXmin cutoffs (and separate\nOldestMxact/MultiXactCutoff cutoffs) until now. But we often don't\nneed that granular information to be able to process Multis before the\nmulti value is < MultiXactCutoff.\n\nIf you look at how FreezeMultiXactId works, in detail, you'll see that\neven on Postgres HEAD it can (say) set a tuple's xmax to\nInvalidTransactionId long before the multi value is < MultiXactCutoff.\nIt just needs to detect that the multi is not still running, and\nnotice that it's HEAP_XMAX_IS_LOCKED_ONLY(). Stuff like that happens\nquite a bit. So for the most part \"eager processing of Multis as a\nspecial case\" is an old behavior, that has only been enhanced a little\nbit (the really important, new change in FreezeMultiXactId is how the\nFRM_NOOP case works with FreezeLimit, even though OldestXmin is used\nnearby -- this is extra confusing because 0002 doesn't change how we\nuse FreezeLimit -- it actually changes every other use of FreezeLimit\nnearby, making it OldestXmin).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 20 Dec 2022 21:26:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 2022-12-20 at 21:26 -0800, Peter Geoghegan wrote:\n> When freeze_required is set to true, that means that lazy_scan_prune\n> literally has no choice -- it simply must freeze the page as\n> instructed by heap_prepare_freeze_tuple/FreezeMultiXactId. It's not\n> just a strong suggestion -- it's crucial that lazy_scan_prune freezes\n> the page as instructed.\n\nThe confusing thing to me is perhaps just the name -- to me,\n\"freeze_required\" suggests that if it were set to true, it would cause\nfreezing to happen. But as far as I can tell, it does not cause\nfreezing to happen, it causes some other things to happen that are\nnecessary when freezing happens (updating and using the right\ntrackers).\n\nA minor point, no need to take action here. Perhaps rename the\nvariable.\n\nI think 0001+0002 are about ready.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 21 Dec 2022 16:30:28 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 4:30 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The confusing thing to me is perhaps just the name -- to me,\n> \"freeze_required\" suggests that if it were set to true, it would cause\n> freezing to happen. But as far as I can tell, it does not cause\n> freezing to happen, it causes some other things to happen that are\n> necessary when freezing happens (updating and using the right\n> trackers).\n\nfreeze_required is about what's required, which tells us nothing about\nwhat will happen when it's not required (could go either way,\ndepending on how lazy_scan_prune feels about it).\n\nSetting freeze_required=true implies that heap_prepare_freeze_tuple\nhas stopped doing maintenance of the \"no freeze\" trackers. When it\nsets freeze_required=true, it really *does* force freezing to happen,\nin every practical sense. This happens because lazy_scan_prune does\nwhat it's told to do when it's told that freezing is required. Because\nof course it does, why wouldn't it?\n\nSo...I still don't get what you mean. Why would lazy_scan_prune ever\nbreak its contract with heap_prepare_freeze_tuple? And in what sense\nwould you say that heap_prepare_freeze_tuple's setting\nfreeze_required=true doesn't quite amount to \"forcing freezing\"? Are\nyou worried about the possibility that lazy_scan_prune will decide to\nrebel at some point, and fail to honor its contract with\nheap_prepare_freeze_tuple? :-)\n\n> A minor point, no need to take action here. Perhaps rename the\n> variable.\n\nAndres was the one that suggested this name, actually. I initially\njust called it \"freeze\", but I think that Andres had it right.\n\n> I think 0001+0002 are about ready.\n\nGreat. I plan on committing 0001 in the next few days. Committing 0002\nmight take a bit longer.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 21 Dec 2022 16:53:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 4:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Great. I plan on committing 0001 in the next few days. Committing 0002\n> might take a bit longer.\n\nI pushed the VACUUM cutoffs patch (previously 0001) this morning -\nthanks for your help with that one.\n\nAttached is v11, which is mostly just to fix the bitrot caused by\ntoday's commits. Though I did adjust some of the commit messages a\nbit. There is also one minor functional change in v11: we now always\nuse eager freezing strategy in unlogged and temp tables, since it's\nvirtually guaranteed to be a win there.\n\nWith an unlogged or temp table, most of the cost of freezing is just\nthe cycles spent preparing to freeze, since, of course, there isn't\nany WAL overhead to have to worry about (which is the dominant concern\nwith freezing costs, in general). Deciding *not* to freeze pages that\nwe can freeze and make all-frozen in the VM from unlogged/temp tables\nseems like a case of wasting the cycles spent preparing freeze plans.\nWhy not just do the tiny additional work of executing the freeze plans\nat that point?\n\nIt's not like eager freezing strategy comes with an added risk that\nVACUUM will allocate new multis that it wouldn't otherwise have to\nallocate. Nor does it change cleanup-lock-wait behavior. Clearly this\noptimization isn't equivalent to interpreting vacuum_freeze_min_age as\n0 in unlogged/temp tables. The whole design of freezing strategies is\nsupposed to abstract away details like that, freeing up high level\ncode like lazy_scan_strategy to think about freezing at the level of\nthe whole table -- the cost model stuff really benefits from being\nable to measure debt at the table level, measuring things in terms of\nunits like total all-frozen pages, rel_pages, etc.\n\n--\nPeter Geoghegan",
"msg_date": "Thu, 22 Dec 2022 11:39:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 11:39 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Dec 21, 2022 at 4:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Great. I plan on committing 0001 in the next few days. Committing 0002\n> > might take a bit longer.\n>\n> I pushed the VACUUM cutoffs patch (previously 0001) this morning -\n> thanks for your help with that one.\n\nAttached is v12. I think that the page-level freezing patch is now\ncommitable, and plan on committing it in the next 2-4 days barring any\nobjections.\n\nNotable changes in v12:\n\n* Simplified some of the logic in FreezeMultiXactId(), which now\ndoesn't have any needless handling of NewRelfrozenXid style cutoffs\nexcept in the one case that still needs it (its no-op processing\ncase).\n\nWe don't need most of the handling on HEAD anymore because every\npossible approach to processing a Multi other than FRM_NOOP will\nreliably leave behind a new xmax that is either InvalidTransactionId,\nor an XID/MXID >= OldestXmin/OldestMxact. Such values cannot possibly\nneed to be tracked by the NewRelfrozenXid trackers, since the trackers\nare initialized using OldestXmin/OldestMxact to begin with.\n\n* v12 merges together the code for the \"freeze the page\"\nlazy_scan_prune path with the block that actually calls\nheap_freeze_execute_prepared().\n\nThis should make it clear that pagefrz.freeze_required really does\nmean that freezing is required. Hopefully that addresses Jeff's recent\nconcern. It's certainly an improvement, in any case.\n\n* On a related note, comments around the same point in lazy_scan_prune\nas well as comments above the HeapPageFreeze struct now explain a\nconcept I decided to call \"nominal freezing\". 
This is the case where\nwe \"freeze a page\" without having any freeze plans to execute.\n\n\"nominal freezing\" is the new name for a concept I invented many\nmonths ago, which helps to resolve subtle problems with the way that\nheap_prepare_freeze_tuple is tasked with doing two different things\nfor its lazy_scan_prune caller: 1. telling lazy_scan_prune how it\nwould freeze each tuple (were it to freeze the page), and 2. helping\nlazy_scan_prune to determine if the page should become all-frozen in\nthe VM. The latter is always conditioned on page-level freezing\nactually going ahead, since everything else in\nheap_prepare_freeze_tuple has to work that way.\n\nWe always freeze a page with zero freeze plans (or \"nominally freeze\"\nthe page) in lazy_scan_prune (which is nothing new in itself). We\nthereby avoid breaking heap_prepare_freeze_tuple's working assumption\nthat all it needs to focus on what the page will look like after\nfreezing executes, while also avoiding senselessly throwing away the\nability to set a page all-frozen in the VM in lazy_scan_prune when\nit'll cost us nothing extra. That is, by always freezing in the event\nof zero freeze plans, we won't senselessly miss out on setting a page\nall-frozen in cases where we don't actually have to execute any freeze\nplans to make that safe, while the \"freeze the page path versus don't\nfreeze the page path\" dichotomy still works as a high level conceptual\nabstraction.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 26 Dec 2022 12:53:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Dear Peter, Jeff,\r\n\r\nWhile reviewing other patches, I found that cfbot raised ERROR during the VACUUM FREEZE [1] on FreeBSD instance.\r\nIt seemed that same error has been occurred in other threads.\r\n\r\n```\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] LOG: server process (PID 37171) was terminated by signal 6: Abort trap\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] DETAIL: Failed process was running: VACUUM FREEZE tab_freeze;\r\n2022-12-23 08:50:20.175 UTC [34653][postmaster] LOG: terminating any other active server processes\r\n```\r\n\r\nI guessed that this assertion failure seemed to be caused by the commit 4ce3af[2],\r\nbecause the Assert() seemed to be added by the commit.\r\n\r\n```\r\n[08:51:31.189] #3 0x00000000009b88d7 in ExceptionalCondition (conditionName=<optimized out>, fileName=0x2fd9df \"../src/backend/access/heap/heapam.c\", lineNumber=lineNumber@entry=6618) at ../src/backend/utils/error/assert.c:66\r\n[08:51:31.189] No locals.\r\n[08:51:31.189] #4 0x0000000000564205 in heap_prepare_freeze_tuple (tuple=0x8070f0bb0, cutoffs=cutoffs@entry=0x80222e768, frz=0x7fffffffb2d0, totally_frozen=totally_frozen@entry=0x7fffffffc478, relfrozenxid_out=<optimized out>, relfrozenxid_out@entry=0x7fffffffc4a8, relminmxid_out=<optimized out>, relminmxid_out@entry=0x7fffffffc474) at ../src/backend/access/heap/heapam.c:6618\r\n```\r\n\r\nSorry for noise if you have already known or it is not related with this thread.\r\n\r\n[1]: https://cirrus-ci.com/task/4580705867399168\r\n[2]: https://github.com/postgres/postgres/commit/4ce3afb82ecfbf64d4f6247e725004e1da30f47c\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 27 Dec 2022 06:57:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 10:57 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n> I guessed that this assertion failure seemed to be caused by the commit 4ce3af[2],\n> because the Assert() seemed to be added by the commit.\n\nI agree that the problem is with this assertion, which is on the\nmaster branch (not in recent versions of the patch series itself)\nfollowing commit 4ce3af:\n\nelse\n{\n /*\n * Freeze plan for tuple \"freezes xmax\" in the strictest sense:\n * it'll leave nothing in xmax (neither an Xid nor a MultiXactId).\n */\n ....\n Assert(MultiXactIdPrecedes(xid, cutoffs->OldestMxact));\n ...\n}\n\nThe problem is that FRM_INVALIDATE_XMAX multi processing can occur\nboth in Multis from before OldestMxact and Multis >= OldestMxact. The\nlatter case (the >= case) is far less common, but still quite\npossible. Not sure how I missed that.\n\nAnyway, this assertion is wrong, and simply needs to be removed.\nThanks for the report\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 26 Dec 2022 23:30:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Dear Peter,\r\n\r\n> Anyway, this assertion is wrong, and simply needs to be removed.\r\n> Thanks for the report\r\n\r\nThanks for modifying for quickly! I found your commit in the remote repository.\r\nI will watch and report again if there are another issue.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Tue, 27 Dec 2022 07:47:52 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 12:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v12. I think that the page-level freezing patch is now\n> commitable, and plan on committing it in the next 2-4 days barring any\n> objections.\n\nI've pushed the page-level freezing patch, so now I need to produce a\nnew revision, just to keep CFTester happy.\n\nAttached is v13. No notable changes since v12.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 28 Dec 2022 09:34:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, 2022-12-26 at 12:53 -0800, Peter Geoghegan wrote:\n> * v12 merges together the code for the \"freeze the page\"\n> lazy_scan_prune path with the block that actually calls\n> heap_freeze_execute_prepared().\n> \n> This should make it clear that pagefrz.freeze_required really does\n> mean that freezing is required. Hopefully that addresses Jeff's\n> recent\n> concern. It's certainly an improvement, in any case.\n\nBetter, thank you.\n\n> * On a related note, comments around the same point in\n> lazy_scan_prune\n> as well as comments above the HeapPageFreeze struct now explain a\n> concept I decided to call \"nominal freezing\". This is the case where\n> we \"freeze a page\" without having any freeze plans to execute.\n> \n> \"nominal freezing\" is the new name for a concept I invented many\n> months ago, which helps to resolve subtle problems with the way that\n> heap_prepare_freeze_tuple is tasked with doing two different things\n> for its lazy_scan_prune caller: 1. telling lazy_scan_prune how it\n> would freeze each tuple (were it to freeze the page), and 2. helping\n> lazy_scan_prune to determine if the page should become all-frozen in\n> the VM. The latter is always conditioned on page-level freezing\n> actually going ahead, since everything else in\n> heap_prepare_freeze_tuple has to work that way.\n> \n> We always freeze a page with zero freeze plans (or \"nominally freeze\"\n> the page) in lazy_scan_prune (which is nothing new in itself). We\n> thereby avoid breaking heap_prepare_freeze_tuple's working assumption\n> that all it needs to focus on what the page will look like after\n> freezing executes, while also avoiding senselessly throwing away the\n> ability to set a page all-frozen in the VM in lazy_scan_prune when\n> it'll cost us nothing extra. 
That is, by always freezing in the event\n> of zero freeze plans, we won't senselessly miss out on setting a page\n> all-frozen in cases where we don't actually have to execute any\n> freeze\n> plans to make that safe, while the \"freeze the page path versus don't\n> freeze the page path\" dichotomy still works as a high level\n> conceptual\n> abstraction.\n\nI always understood \"freezing\" to mean that a concrete action was\ntaken, and associated WAL generated.\n\n\"Nominal freezing\" is happening when there are no freeze plans at all.\nI get that it's to manage control flow so that the right thing happens\nlater. But I think it should be defined in terms of what state the page\nis in so that we know that following a given path is valid. Defining\n\"nominal freezing\" as a case where there are no freeze plans is just\nconfusing to me.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 30 Dec 2022 12:43:04 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 12:43 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I always understood \"freezing\" to mean that a concrete action was\n> taken, and associated WAL generated.\n\n\"When I use a word… it means just what I choose it to mean -- neither\nmore nor less\".\n\nI have also always understood freezing that way too. In fact, I still\ndo understand it that way -- I don't think that it has been undermined\nby any of this. I've just invented this esoteric concept of nominal\nfreezing that can be ignored approximately all the time, to solve one\nnarrow problem that needed to be solved, that isn't that interesting\nanywhere else.\n\n> \"Nominal freezing\" is happening when there are no freeze plans at all.\n> I get that it's to manage control flow so that the right thing happens\n> later. But I think it should be defined in terms of what state the page\n> is in so that we know that following a given path is valid. Defining\n> \"nominal freezing\" as a case where there are no freeze plans is just\n> confusing to me.\n\nWhat would you prefer? The state that the page is in is not something\nthat I want to draw much attention to, because it's confusing in a way\nthat mostly isn't worth talking about. When we do nominal freezing, we\ndon't necessarily go on to set the page all-frozen. In fact, it's not\nparticularly likely that that will end up happening!\n\nBear in mind that the exact definition of \"freeze the page\" is\nsomewhat creative, even without bringing nominal freezing into it. It\njust has to be in order to support the requirements we have for\nMultiXacts (in particular for FRM_NOOP processing). The new concepts\ndon't quite map directly on to the old ones. At the same time, it\nreally is very often the case that \"freezing the page\" will perform\nmaximally aggressive freezing, in the sense that it does precisely\nwhat a VACUUM FREEZE would do given the same page (in any Postgres\nversion).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Dec 2022 13:12:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 1:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > \"Nominal freezing\" is happening when there are no freeze plans at all.\n> > I get that it's to manage control flow so that the right thing happens\n> > later. But I think it should be defined in terms of what state the page\n> > is in so that we know that following a given path is valid. Defining\n> > \"nominal freezing\" as a case where there are no freeze plans is just\n> > confusing to me.\n>\n> What would you prefer? The state that the page is in is not something\n> that I want to draw much attention to, because it's confusing in a way\n> that mostly isn't worth talking about.\n\nI probably should have addressed what you said more directly. Here goes:\n\nFollowing the path of freezing a page is *always* valid, by\ndefinition. Including when there are zero freeze plans to execute, or\neven zero tuples to examine in the first place -- we'll at least be\nable to perform nominal freezing, no matter what. OTOH, following the\n\"no freeze\" path is permissible whenever the freeze_required flag\nhasn't been set during any call to heap_prepare_freeze_tuple(). It is\nnever actually mandatory for lazy_scan_prune() to *not* freeze.\n\nIt's a bit like how a simple point can be understood as a degenerate\ncircle of radius 0. It's an abstract definition, which is just a tool\nfor describing things precisely -- hopefully a useful tool. I welcome\nthe opportunity to be able to describe things in a way that is clearer\nor more useful, in whatever way. But it's not like I haven't already\nput in significant effort to this exact question of what \"freezing the\npage\" really means to lazy_scan_prune(). Naming things is hard.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Dec 2022 16:58:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, 2022-12-30 at 16:58 -0800, Peter Geoghegan wrote:\n> Following the path of freezing a page is *always* valid, by\n> definition. Including when there are zero freeze plans to execute, or\n> even zero tuples to examine in the first place -- we'll at least be\n> able to perform nominal freezing, no matter what.\n\nThis is a much clearer description, in my opinion. Do you think this is\nalready reflected in the comments (and I missed it)?\n\nPerhaps the comment in the \"if (tuples_frozen == 0)\" branch could be\nsomething more like:\n\n\"We have no freeze plans to execute, so there's no cost to following\nthe freeze path. This is important in the case where the page is\nentirely frozen already, so that the page will be marked as such in the\nVM.\"\n\nI'm not even sure we really want a new concept of \"nominal freezing\". I\nthink you are right to just call it a degenerate case where it can be\ninterpreted as either freezing zero things or not freezing; and the\nformer is convenient for us because we want to follow that code path.\nThat would be another good way of writing the comment, in my opinion.\n\nOf course, I'm sure there are some nuances that I'm still missing.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 11:46:15 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 11:46 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Fri, 2022-12-30 at 16:58 -0800, Peter Geoghegan wrote:\n> > Following the path of freezing a page is *always* valid, by\n> > definition. Including when there are zero freeze plans to execute, or\n> > even zero tuples to examine in the first place -- we'll at least be\n> > able to perform nominal freezing, no matter what.\n>\n> This is a much clearer description, in my opinion. Do you think this is\n> already reflected in the comments (and I missed it)?\n\nI am arguably the person least qualified to answer this question. :-)\n\n> Perhaps the comment in the \"if (tuples_frozen == 0)\" branch could be\n> something more like:\n>\n> \"We have no freeze plans to execute, so there's no cost to following\n> the freeze path. This is important in the case where the page is\n> entirely frozen already, so that the page will be marked as such in the\n> VM.\"\n\nI'm happy to use your wording instead -- I'll come up with a patch for that.\n\nIn my mind it's just a restatement of what's there already. I assume\nthat you're right about it being clearer this way.\n\n> Of course, I'm sure there are some nuances that I'm still missing.\n\nI don't think that there is, actually. I now believe that you totally\nunderstand the mechanics involved here. I'm glad that I was able to\nascertain that that's all it was. It's worth going to the trouble of\ngetting something like this exactly right.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 31 Dec 2022 12:45:26 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 12:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Dec 31, 2022 at 11:46 AM Jeff Davis <pgsql@j-davis.com> wrote:\n> > \"We have no freeze plans to execute, so there's no cost to following\n> > the freeze path. This is important in the case where the page is\n> > entirely frozen already, so that the page will be marked as such in the\n> > VM.\"\n>\n> I'm happy to use your wording instead -- I'll come up with a patch for that.\n\nWhat do you think of the wording adjustments in the attached patch?\nIt's based on your suggested wording.\n\n--\nPeter Geoghegan",
"msg_date": "Mon, 2 Jan 2023 11:45:11 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, 2023-01-02 at 11:45 -0800, Peter Geoghegan wrote:\n> What do you think of the wording adjustments in the attached patch?\n> It's based on your suggested wording.\n\nGreat, thank you.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 02 Jan 2023 18:26:13 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 6:26 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Mon, 2023-01-02 at 11:45 -0800, Peter Geoghegan wrote:\n> > What do you think of the wording adjustments in the attached patch?\n> > It's based on your suggested wording.\n>\n> Great, thank you.\n\nPushed that today.\n\nAttached is v14.\n\nv14 simplifies the handling of setting the visibility map at the end\nof the blkno-wise loop in lazy_scan_heap(). And,\nvisibilitymap_snap_next() doesn't tell its caller (lazy_scan_heap)\nanything about the visibility status of each returned block -- we no\nlonger need a all_visible_according_to_vm local variable to help with\nsetting the visibility map.\n\nThis new approach to setting the VM is related to hardening that I\nplan on adding, which makes the visibility map robust against certain\nrace conditions that can lead to setting a page all-frozen but not\nall-visible. I go into that here:\n\nhttps://postgr.es/m/CAH2-WznuNGSzF8v6OsgjaC5aYsb3cZ6HW6MLm30X0d65cmSH6A@mail.gmail.com\n\n(It's the second patch -- the first patch already became yesterday's\ncommit 6daeeb1f.)\n\nIn general I don't think that we should be using\nall_visible_according_to_vm for anything, especially not anything\ncritical -- it is just information about how the page used to be in\nthe past, after all. This will be more of a problem with visibility\nmap snapshots, since all_visible_according_to_vm could be information\nthat is hours old by the time it's actually used by lazy_scan_heap().\nBut it is an existing issue.\n\nBTW, it would be helpful if I could get a +1 to the visibility map\npatch posted on that other thread. It's practically a bug fix -- the\nVM shouldn't be able to show contradictory information about any given\nheap page (i.e. \"page is all-frozen but not all-visible\"), no matter\nwhat. Just on general principle.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 3 Jan 2023 12:30:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 21:30, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> Attached is v14.\n\nSome reviews (untested; only code review so far) on these versions of\nthe patches:\n\n> [PATCH v14 1/3] Add eager and lazy freezing strategies to VACUUM.\n\n> + /*\n> + * Threshold cutoff point (expressed in # of physical heap rel blocks in\n> + * rel's main fork) that triggers VACUUM's eager freezing strategy\n> + */\n\nI don't think the mention of 'cutoff point' is necessary when it has\n'Threshold'.\n\n> + int freeze_strategy_threshold; /* threshold to use eager\n> [...]\n> + BlockNumber freeze_strategy_threshold;\n\nIs there a way to disable the 'eager' freezing strategy? `int` cannot\nhold the maximum BlockNumber...\n\n> + lazy_scan_strategy(vacrel);\n> if (verbose)\n\nI'm slightly suprised you didn't update the message for verbose vacuum\nto indicate whether we used the eager strategy: there are several GUCs\nfor tuning this behaviour, so you'd expect to want direct confirmation\nthat the configuration is effective.\n(looks at further patches) I see that the message for verbose vacuum\nsees significant changes in patch 2 instead.\n\n---\n\n> [PATCH v14 2/3] Add eager and lazy VM strategies to VACUUM.\n\nGeneral comments:\n\nI don't see anything regarding scan synchronization in the vmsnap scan\nsystem. I understand that VACUUM is a load that is significantly\ndifferent from normal SEQSCANs, but are there good reasons to _not_\nsynchronize the start of VACUUM?\n\nRight now, we don't use syncscan to determine a startpoint. I can't\nfind the reason why in the available documentation: [0] discusses the\nissue but without clearly describing an issue why it wouldn't be\ninteresting from a 'nothing lost' perspective.\n\nIn addition, I noticed that progress reporting of blocks scanned\n(\"heap_blocks_scanned\", duh) includes skipped pages. 
Now that we have\na solid grasp of how many blocks we're planning to scan, we can update\nthe reported stats to how many blocks we're planning to scan (and have\nscanned), increasing the user value of that progress view.\n\n[0] https://www.postgresql.org/message-id/flat/19398.1212328662%40sss.pgh.pa.us#17b2feb0fde6a41779290632d8c70ef1\n\n> + double tableagefrac;\n\nI think this can use some extra info on the field itself, that it is\nthe fraction of how \"old\" the relfrozenxid and relminmxid fields are,\nas a fraction between 0 (latest values; nextXID and nextMXID), and 1\n(values that are old by at least freeze_table_age and\nmultixact_freeze_table_age (multi)transaction ids, respectively).\n\n\n> -#define VACOPT_DISABLE_PAGE_SKIPPING 0x80 /* don't skip any pages */\n> +#define VACOPT_DISABLE_PAGE_SKIPPING 0x80 /* don't skip using VM */\n\nI'm not super happy with this change. I don't think we should touch\nthe VM using snapshots _at all_ when disable_page_skipping is set:\n\n> + * Decide vmsnap scanning strategy.\n> *\n> - * This test also enables more frequent relfrozenxid advancement during\n> - * non-aggressive VACUUMs. If the range has any all-visible pages then\n> - * skipping makes updating relfrozenxid unsafe, which is a real downside.\n> + * First acquire a visibility map snapshot, which determines the number of\n> + * pages that each vmsnap scanning strategy is required to scan for us in\n> + * passing.\n\nI think we should not take disk-backed vm snapshots when\nforce_scan_all is set. We need VACUUM to be able to run on very\nresource-constrained environments, and this does not do that - it adds\na disk space requirement for the VM snapshot for all but the smallest\nrelation sizes, which is bad when you realize that we need VACUUM when\nwe want to clean up things like CLOG.\n\nAdditionally, it took me several reads of the code and comments to\nunderstand what the decision-making process for lazy vs eager is, and\nwhy. 
The comments are interspersed with the code, with no single place\nthat describes it from a bird's eyes' view. I think something like the\nfollowing would be appreciated by other readers of the code:\n\n+ We determine whether we choose the eager or lazy scanning strategy\nbased on how many extra pages the eager strategy would take over the\nlazy strategy, and how \"old\" the table is (as determined in\ntableagefrac):\n+ When a table is still \"young\" (tableagefrac <\nTABLEAGEFRAC_MIDPOINT), the eager strategy is accepted if we need to\nscan 5% (MAX_PAGES_YOUNG_TABLEAGE) more of the table.\n+ As the table gets \"older\" (tableagefrac between MIDPOINT and\nHIGHPOINT), the threshold for eager scanning is relaxed linearly from\nthis 5% to 70% (MAX_PAGES_OLD_TABLEAGE) of the table being scanned\nextra (over what would be scanned by the lazy strategy).\n+ Once the tableagefrac passes HIGHPOINT, we stop considering the lazy\nstrategy, and always eagerly scan the table.\n\n> @@ -1885,6 +1902,30 @@ retry:\n> tuples_frozen = 0; /* avoid miscounts in instrumentation */\n> }\n>\n> /*\n> + * There should never be dead or deleted tuples when PD_ALL_VISIBLE is\n> + * set. Check that here in passing.\n> + *\n> [...]\n\nI'm not sure this patch is the appropriate place for this added check.\nI don't disagree with the change, I just think that it's unrelated to\nthe rest of the patch. Same with some of the changes in\nlazy_scan_heap.\n\n> +vm_snap_stage_blocks\n\nDoesn't this waste a lot of cycles on skipping frozen blocks if most\nof the relation is frozen? I'd expected something more like a byte- or\nword-wise processing of skippable blocks, as opposed to this per-block\nloop. 
I don't think it's strictly necessary to patch, but I think it\nwould be a very useful addition for those with larger tables.\n\n> + XIDFrac = (double) (nextXID - cutoffs->relfrozenxid) /\n> + ((double) freeze_table_age + 0.5);\n\nI don't quite understand what this `+ 0.5` is used for, could you explain?\n\n> + [...] Freezing and advancing\n> + <structname>pg_class</structname>.<structfield>relfrozenxid</structfield>\n> + now take place more proactively, in every\n> + <command>VACUUM</command> operation.\n\nThis claim that it happens more proactively in \"every\" VACUUM\noperation is false, so I think the removal of \"every\" would be better.\n\n---\n\n> [PATCH v14 3/3] Finish removing aggressive mode VACUUM.\n\nI've not completed a review for this patch - I'll continue on that\ntomorrow - but here's a first look:\n\nI don't quite enjoy the refactoring+rewriting of the docs section;\nit's difficult to determine what changed when so many things changed\nline lengths and were moved around. Tomorrow I'll take a closer look,\nbut a separation of changes vs moved would be useful for review.\n\n> + /*\n> + * Earliest permissible NewRelfrozenXid/NewRelminMxid values that can be\n> + * set in pg_class at the end of VACUUM.\n> + */\n> + TransactionId MinXid;\n> + MultiXactId MinMulti;\n\nI don't quite like this wording, but I'm not sure what would be better.\n\n> + cutoffs->MinXid = nextXID - (freeze_table_age * 0.95);\n> [...]\n> + cutoffs->MinMulti = nextMXID - (multixact_freeze_table_age * 0.95);\n\nWhy are these values adjusted down (up?) by 5%? If I configure this\nGUC, I'd expect this to be used effectively verbatim; not adjusted by\nan arbitrary factor.\n\n---\n\nThat's it for now; thanks for working on this,\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 5 Jan 2023 02:21:37 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 02:21, I wrote:\n>\n> On Tue, 3 Jan 2023 at 21:30, Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > Attached is v14.\n> > [PATCH v14 3/3] Finish removing aggressive mode VACUUM.\n>\n> I've not completed a review for this patch - I'll continue on that\n> tomorrow:\n\nThis is that.\n\n> @@ -2152,10 +2109,98 @@ lazy_scan_noprune(LVRelState *vacrel,\n> [...]\n> + /* wait 10ms, then 20ms, then 30ms, then give up */\n> [...]\n> + pg_usleep(1000L * 10L * i);\n\nCould this use something like autovacuum_cost_delay? I don't quite\nlike the use of arbitrary hardcoded millisecond delays - it can slow a\nsystem down by a significant fraction, especially on high-contention\nsystems, and this potential of 60ms delay per scanned page can limit\nthe throughput of this new vacuum strategy to < 17 pages/second\n(<136kB/sec) for highly contended sections, which is not great.\n\nIt is also not unlikely that in the time it was waiting, the page\ncontents were updated significantly (concurrent prune, DELETEs\ncommitted), which could result in improved bounds. I think we should\nredo the dead items check if we waited, but failed to get a lock - any\ntuples removed now reduce work we'll have to do later.\n\n> +++ b/doc/src/sgml/ref/vacuum.sgml\n> [...] Pages where\n> + all tuples are known to be frozen are always skipped.\n\n\"...are always skipped, unless the >DISABLE_PAGE_SKIPPING< option is used.\"\n\n> +++ b/doc/src/sgml/maintenance.sgml\n\nThere are a lot of details being lost from the previous version of\nthat document. 
Some of the details are obsolete (mentions of\naggressive VACUUM and freezing behavior), but others are not\n(FrozenTransactionId in rows from a pre-9.4 system, the need for\nvacuum for prevention of issues surrounding XID wraparound).\n\nI also am not sure this is the best place to store most of these\nmentions, but I can't find a different place where these details on\ncertain interesting parts of the system are documented, and plain\nremoval of the information does not sit right with me.\n\nSpecifically, I don't like the removal of the following information\nfrom our documentation:\n\n- Size of pg_xact and pg_commit_ts data in relation to autovacuum_freeze_max_age\n Although it is less likely with the new behaviour that we'll hit\nthese limits due to more eager freezing of transactions, it is still\nimportant for users to have easy access to this information, and\ntuning this for storage size is not useless information.\n\n- The reason why VACUUM is essential to the long-term consistency of\nPostgres' MVCC system\n Informing the user about our use of 32-bit transaction IDs and\nthat we update an epoch when this XID wraps around does not\nautomatically make the user aware of the issues that surface around\nXID wraparound. Retaining the explainer for XID wraparound in the docs\nseems like a decent idea - it may be moved, but please don't delete\nit.\n\n- Special transaction IDs, their meaning and where they can occur\n I can't seem to find any other information in the docs section, and\nit is useful to have users understand that certain values are\nconsidered special: FrozenTransactionId and BootstrapTransactionId.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 5 Jan 2023 19:19:07 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 5:21 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Some reviews (untested; only code review so far) on these versions of\n> the patches:\n\nThanks for the review!\n\n> > [PATCH v14 1/3] Add eager and lazy freezing strategies to VACUUM.\n\n> I don't think the mention of 'cutoff point' is necessary when it has\n> 'Threshold'.\n\nFair. Will fix.\n\n> > + int freeze_strategy_threshold; /* threshold to use eager\n> > [...]\n> > + BlockNumber freeze_strategy_threshold;\n>\n> Is there a way to disable the 'eager' freezing strategy? `int` cannot\n> hold the maximum BlockNumber...\n\nI'm going to fix this by switching over to making the GUC (and the\nreloption) GUC_UNIT_MB, while keeping it in ConfigureNamesInt[]. That\napproach is a little bit more cumbersome, but not by much. That'll\nsolve this problem.\n\n> > + lazy_scan_strategy(vacrel);\n> > if (verbose)\n>\n> I'm slightly suprised you didn't update the message for verbose vacuum\n> to indicate whether we used the eager strategy: there are several GUCs\n> for tuning this behaviour, so you'd expect to want direct confirmation\n> that the configuration is effective.\n\nPerhaps that would be worth doing, but I don't think that it's all\nthat useful in the grand scheme of things. I wouldn't mind including\nit, but I think that it shouldn't be given much prominence. It's\ncertainly far less important than \"aggressive vs non-aggressive\" is\nright now.\n\nEagerness is not just a synonym of aggressiveness. For example, every\nVACUUM of a table like pgbench_tellers or pgbench_branches will use\neager scanning strategy. More generally, you have to bear in mind that\nthe actual state of the table is just as important as the GUCs\nthemselves. 
We try to avoid obligations that could be very hard or\neven impossible for vacuumlazy.c to fulfill.\n\nThere are far weaker constraints on things like the final relfrozenxid\nvalue we'll set in pg_class (more on this below, when I talk about\nMinXid/MinMulti). It will advance far more frequently and by many more\nXIDs than it would today, on average. But occasionally it will allow a\nfar earlier relfrozenxid than aggressive mode would ever allow, since\nmaking some small amount of progress now is almost always much better\nthan making no progress at all.\n\n> (looks at further patches) I see that the message for verbose vacuum\n> sees significant changes in patch 2 instead.\n\nIt just works out to be slightly simpler that way. I want to add the\nscanned_pages stuff to VERBOSE in the vmsnap/scanning strategies\ncommit, so I need to make significant changes to the initial VERBOSE\nmessage in that commit. There is little point in preserving\ninformation about aggressive mode if it's removed in the very next\ncommit anyway.\n\n> > [PATCH v14 2/3] Add eager and lazy VM strategies to VACUUM.\n\n> Right now, we don't use syncscan to determine a startpoint. I can't\n> find the reason why in the available documentation: [0] discusses the\n> issue but without clearly describing an issue why it wouldn't be\n> interesting from a 'nothing lost' perspective.\n\nThat's not something I've given much thought to. It's a separate issue, I think.\n\nThough I will say that one reason why I think that the vm snapshot\nconcept will become important is that working off an immutable\nstructure makes various things much easier, in fairly obvious ways. It\nmakes it straightforward to reorder work. So things like parallel heap\nvacuuming are a lot more straightforward.\n\nI also think that it would be useful to teach VACUUM to speculatively\nscan a random sample of pages, just like a normal VACUUM. We start out\ndoing a normal VACUUM that just processes scanned_pages in a random\norder. 
At some point we look at the state of pages so far. If it looks\nlike the table really doesn't urgently need to be vacuumed, then we\ncan give up before paying much of a cost. If it looks like the table\nreally needs to be VACUUM'd, we can press on almost like any other\nVACUUM would.\n\nThis is related to the problem of bad statistics that drive\nautovacuum. Deciding as much as possible at runtime, dynamically,\nseems promising to me.\n\n> In addition, I noticed that progress reporting of blocks scanned\n> (\"heap_blocks_scanned\", duh) includes skipped pages. Now that we have\n> a solid grasp of how many blocks we're planning to scan, we can update\n> the reported stats to how many blocks we're planning to scan (and have\n> scanned), increasing the user value of that progress view.\n\nYeah, that's definitely a natural direction to go with this. Knowing\nscanned_pages from the start is a basis for much more useful progress\nreporting.\n\n> > + double tableagefrac;\n>\n> I think this can use some extra info on the field itself, that it is\n> the fraction of how \"old\" the relfrozenxid and relminmxid fields are,\n> as a fraction between 0 (latest values; nextXID and nextMXID), and 1\n> (values that are old by at least freeze_table_age and\n> multixact_freeze_table_age (multi)transaction ids, respectively).\n\nAgreed that that needs more than that in comments above the\n\"tableagefrac\" struct field.\n\n> > + * Decide vmsnap scanning strategy.\n> > *\n> > - * This test also enables more frequent relfrozenxid advancement during\n> > - * non-aggressive VACUUMs. If the range has any all-visible pages then\n> > - * skipping makes updating relfrozenxid unsafe, which is a real downside.\n> > + * First acquire a visibility map snapshot, which determines the number of\n> > + * pages that each vmsnap scanning strategy is required to scan for us in\n> > + * passing.\n>\n> I think we should not take disk-backed vm snapshots when\n> force_scan_all is set. 
We need VACUUM to be able to run on very\n> resource-constrained environments, and this does not do that - it adds\n> a disk space requirement for the VM snapshot for all but the smallest\n> relation sizes, which is bad when you realize that we need VACUUM when\n> we want to clean up things like CLOG.\n\nI agree that I still have work to do to make visibility map snapshots\nas robust as possible in resource constrained environments, including\nin cases where there is simply no disk space at all. They should\ngracefully degrade even when there isn't space on disk to store a copy\nof the VM in temp files, or even a single page.\n\n> Additionally, it took me several reads of the code and comments to\n> understand what the decision-making process for lazy vs eager is, and\n> why. The comments are interspersed with the code, with no single place\n> that describes it from a bird's eyes' view.\n\nYou probably have a good point there. I'll try to come up with\nsomething, possibly based on your suggested wording.\n\n> > @@ -1885,6 +1902,30 @@ retry:\n> > tuples_frozen = 0; /* avoid miscounts in instrumentation */\n> > }\n> >\n> > /*\n> > + * There should never be dead or deleted tuples when PD_ALL_VISIBLE is\n> > + * set. Check that here in passing.\n> > + *\n> > [...]\n>\n> I'm not sure this patch is the appropriate place for this added check.\n> I don't disagree with the change, I just think that it's unrelated to\n> the rest of the patch. Same with some of the changes in\n> lazy_scan_heap.\n\nThis issue is hard to explain. 
I kind of need to do this in the VM\nsnapshot/scanning strategies commit, because it removes the\nall_visible_according_to_vm local variable used inside lazy_scan_heap.\n\nThis change that you highlight detects cases where PD_ALL_VISIBLE is\nset incorrectly earlier in lazy_scan_prune is part of that, and then\nunsets it, so that once lazy_scan_prune returns and lazy_scan_heap\nneeds to consider setting the VM, it can trust PD_ALL_VISIBLE -- it is\ndefinitely up to date at that point, even in cases involving\ncorruption. So the steps where we consider setting the VM now always\nstarts from a clean slate.\n\nNow we won't just unset both PD_ALL_VISIBLE and the VM bits in the\nevent of corruption like this. We'll complain about it in\nlazy_scan_prune, then fully fix the issue in the most appropriate way\nin lazy_scan_heap (could be setting the page all-visible now, even\nthough it shouldn't have been set but was set when we first arrived).\nWe also won't fail to complain about PD_ALL_VISIBLE corruption because\nlazy_scan_prune \"destroyed the evidence\" before lazy_scan_heap had the\nchance to notice the problem. PD_ALL_VISIBLE corruption should never\nhappen, obviously, so we should make a point of complaining about it\nwhenever it can be detected. Which is much more often than what you\nsee on HEAD today.\n\n> > +vm_snap_stage_blocks\n>\n> Doesn't this waste a lot of cycles on skipping frozen blocks if most\n> of the relation is frozen? I'd expected something more like a byte- or\n> word-wise processing of skippable blocks, as opposed to this per-block\n> loop. I don't think it's strictly necessary to patch, but I think it\n> would be a very useful addition for those with larger tables.\n\nI agree that the visibility map snapshot stuff could stand to be a bit\nmore frugal with memory. 
It's certainly not critical, but it is\nprobably fairly easy to do better here, and so I should do better.\n\n> > + XIDFrac = (double) (nextXID - cutoffs->relfrozenxid) /\n> > + ((double) freeze_table_age + 0.5);\n>\n> I don't quite understand what this `+ 0.5` is used for, could you explain?\n\nIt avoids division by zero.\n\n> > + [...] Freezing and advancing\n> > + <structname>pg_class</structname>.<structfield>relfrozenxid</structfield>\n> > + now take place more proactively, in every\n> > + <command>VACUUM</command> operation.\n>\n> This claim that it happens more proactively in \"every\" VACUUM\n> operation is false, so I think the removal of \"every\" would be better.\n\nGood catch. Will fix.\n\n> > [PATCH v14 3/3] Finish removing aggressive mode VACUUM.\n\n> I don't quite enjoy the refactoring+rewriting of the docs section;\n> it's difficult to determine what changed when so many things changed\n> line lengths and were moved around. Tomorrow I'll take a closer look,\n> but a separation of changes vs moved would be useful for review.\n\nI think that I should break out the doc changes some more. The docs\nare likely the least worked out thing at this point.\n\n> > + cutoffs->MinXid = nextXID - (freeze_table_age * 0.95);\n> > [...]\n> > + cutoffs->MinMulti = nextMXID - (multixact_freeze_table_age * 0.95);\n>\n> Why are these values adjusted down (up?) by 5%? If I configure this\n> GUC, I'd expect this to be used effectively verbatim; not adjusted by\n> an arbitrary factor.\n\nIt is kind of arbitrary, but not in the way that you suggest. This\nisn't documented in the user docs, and shouldn't really need to be. It\nshould have very little if any noticeable impact on our final\nrelfrozenxid/relminmxid in practice. 
If it does have any noticeable\nimpact, I strongly suspect it'll be a useful, positive impact.\n\nMinXid/MinMulti control the behavior around whether or not\nlazy_scan_noprune is willing to wait the hard way for a cleanup lock,\nno matter how long it takes. We do still need something like that, but\nit can be far looser than it is right now. The problem with aggressive\nmode is that it absolutely insists on a certain outcome, no matter the\ncost, and regardless of whether or not a slightly inferior outcome is\nacceptable. It's extremely rigid. Rigid things tend to break. Loose,\nspringy things much less so.\n\nI think that it's an extremely bad idea to wait indefinitely for a\ncleanup lock. Sure, it'll work out the vast majority of the time --\nit's *very* likely to work. But when it doesn't work right away, there\nis no telling how long the wait will be -- all bets are off. Could be\na day, a week, a month -- who knows? The application itself is the\ncrucial factor here, and in general the application can do whatever it\nwants to do -- that is the reality. So we should be willing to kick\nthe can down the road in almost all cases -- that is actually the\nresponsible thing to do under the circumstances. We need to get on\nwith freezing every other page in the table!\n\nThere just cannot be very many pages that can't be cleanup locked at\nany given time, so waiting indefinitely is a very drastic measure in\nresponse to a problem that is quite likely to go away on its own. A\nproblem that waiting doesn't really solve anyway. Maybe the only thing\nthat will work is waiting for a very long time, but we have nothing to\nlose (and everything to gain) by waiting to wait.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 6 Jan 2023 15:07:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 10:19 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Could this use something like autovacuum_cost_delay? I don't quite\n> like the use of arbitrary hardcoded millisecond delays\n\nIt's not unlike (say) the way that there can sometimes be hardcoded\nwaits inside GetMultiXactIdMembers(), which does run during VACUUM.\n\nIt's not supposed to be noticeable at all. If it is noticeable in any\npractical sense, then the design is flawed, and should be fixed.\n\n> it can slow a\n> system down by a significant fraction, especially on high-contention\n> systems, and this potential of 60ms delay per scanned page can limit\n> the throughput of this new vacuum strategy to < 17 pages/second\n> (<136kB/sec) for highly contended sections, which is not great.\n\nWe're only willing to wait the full 60ms when smaller waits don't work\nout. And when 60ms doesn't do it, we'll then accept an older final\nNewRelfrozenXid value. Our willingness to wait at all is conditioned\non the existing NewRelfrozenXid tracker being affected at all by\nwhether or not we accept reduced lazy_scan_noprune processing for the\npage. So the waits are naturally self-limiting.\n\nYou may be right that I need to do more about the possibility of\nsomething like that happening -- it's a legitimate concern. But I\nthink that this may be enough on its own. I've never seen a workload\nwhere more than a small fraction of all pages couldn't be cleanup\nlocked right away. But I *have* seen workloads where VACUUM vainly\nwaited forever for a cleanup lock on one single heap page.\n\n> It is also not unlikely that in the time it was waiting, the page\n> contents were updated significantly (concurrent prune, DELETEs\n> committed), which could result in improved bounds. I think we should\n> redo the dead items check if we waited, but failed to get a lock - any\n> tuples removed now reduce work we'll have to do later.\n\nI don't think that it matters very much. 
That's always true. It seems\nvery unlikely that we'll get better bounds here, unless it happens by\ngetting a full cleanup lock and then doing full lazy_scan_prune\nprocessing after all.\n\nSure, it's possible that a concurrent opportunistic prune could make\nthe crucial difference, even though we ourselves couldn't get a\ncleanup lock despite going to considerable trouble. I just don't think\nthat it's worth doing anything about.\n\n> > +++ b/doc/src/sgml/ref/vacuum.sgml\n> > [...] Pages where\n> > + all tuples are known to be frozen are always skipped.\n>\n> \"...are always skipped, unless the >DISABLE_PAGE_SKIPPING< option is used.\"\n\nI'll look into changing this.\n\n> > +++ b/doc/src/sgml/maintenance.sgml\n>\n> There are a lot of details being lost from the previous version of\n> that document. Some of the details are obsolete (mentions of\n> aggressive VACUUM and freezing behavior), but others are not\n> (FrozenTransactionId in rows from a pre-9.4 system, the need for\n> vacuum for prevention of issues surrounding XID wraparound).\n\nI will admit that I really hate the \"Routine Vacuuming\" docs, and\nthink that they explain things in just about the worst possible way.\n\nI also think that this needs to be broken up into pieces. As I said\nrecently, the docs are the part of the patch series that is the least\nworked out.\n\n> I also am not sure this is the best place to store most of these\n> mentions, but I can't find a different place where these details on\n> certain interesting parts of the system are documented, and plain\n> removal of the information does not sit right with me.\n\nI'm usually the person that argues for describing more implementation\ndetails in the docs. But starting with low-level details here is\ndeeply confusing. 
At most these are things that should be discussed in\nthe context of internals, as part of some completely different\nchapter.\n\nI'll see about moving details of things like FrozenTransactionId somewhere else.\n\n> Specifically, I don't like the removal of the following information\n> from our documentation:\n>\n> - Size of pg_xact and pg_commit_ts data in relation to autovacuum_freeze_max_age\n> Although it is less likely with the new behaviour that we'll hit\n> these limits due to more eager freezing of transactions, it is still\n> important for users to have easy access to this information, and\n> tuning this for storage size is not useless information.\n\nThat is a fair point. Though note that these things have weaker\nrelationships with settings like autovacuum_freeze_max_age now. Mostly\nthis is a positive improvement (in the sense that we can truncate\nSLRUs much more aggressively on average), but not always.\n\n> - The reason why VACUUM is essential to the long-term consistency of\n> Postgres' MVCC system\n> Informing the user about our use of 32-bit transaction IDs and\n> that we update an epoch when this XID wraps around does not\n> automatically make the user aware of the issues that surface around\n> XID wraparound. Retaining the explainer for XID wraparound in the docs\n> seems like a decent idea - it may be moved, but please don't delete\n> it.\n\nWe do need to stop telling users to enter single user mode. It's quite\nsimply obsolete, bad advice, and has been since Postgres 14. It's the\nworst thing that you could do, in fact.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 6 Jan 2023 15:28:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 12:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v14.\n\nThis has stopped applying due to conflicts with nearby work on VACUUM\nfrom Tom. So I attached a new revision, v15, just to make CFTester\ngreen again.\n\nI didn't have time to incorporate any of the feedback from Matthias\njust yet. That will have to wait until v16.\n\n--\nPeter Geoghegan",
"msg_date": "Sun, 8 Jan 2023 17:45:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 7:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 3, 2023 at 12:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached is v14.\n>\n> This has stopped applying due to conflicts with nearby work on VACUUM\n> from Tom. So I attached a new revision, v15, just to make CFTester\n> green again.\n>\n> I didn't have time to incorporate any of the feedback from Matthias\n> just yet. That will have to wait until v16.\n>\nI have looked into the patch set, I think 0001 looks good to me about\n0002 I have a few questions, 0003 I haven't yet looked at\n\n1.\n+ /*\n+ * Finally, set tableagefrac for VACUUM. This can come from either XID or\n+ * XMID table age (whichever is greater currently).\n+ */\n+ XIDFrac = (double) (nextXID - cutoffs->relfrozenxid) /\n+ ((double) freeze_table_age + 0.5);\n\nI think '(nextXID - cutoffs->relfrozenxid) / freeze_table_age' should\nbe the actual fraction right? What is the point of adding 0.5 to the\ndivisor? If there is a logical reason, maybe we can explain in the\ncomments.\n\n2.\nWhile looking into the logic of 'lazy_scan_strategy', I think the idea\nlooks very good but the only thing is that\nwe have kept eager freeze and eager scan completely independent.\nDon't you think that if a table is chosen for an eager scan\nthen we should force the eager freezing as well?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Jan 2023 10:42:56 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sun, Jan 15, 2023 at 9:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I have looked into the patch set, I think 0001 looks good to me about\n> 0002 I have a few questions, 0003 I haven't yet looked at\n\nThanks for taking a look.\n\n> I think '(nextXID - cutoffs->relfrozenxid) / freeze_table_age' should\n> be the actual fraction right? What is the point of adding 0.5 to the\n> divisor? If there is a logical reason, maybe we can explain in the\n> comments.\n\nIt's just a way of avoiding division by zero.\n\n> While looking into the logic of 'lazy_scan_strategy', I think the idea\n> looks very good but the only thing is that\n> we have kept eager freeze and eager scan completely independent.\n> Don't you think that if a table is chosen for an eager scan\n> then we should force the eager freezing as well?\n\nEarlier versions of the patch kind of worked that way.\nlazy_scan_strategy would actually use twice the GUC setting to\ndetermine scanning strategy. That approach could make our \"transition\nfrom lazy to eager strategies\" involve an excessive amount of\n\"catch-up freezing\" in the VACUUM operation that advanced relfrozenxid\nfor the first time, which you see an example of here:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch\n\nNow we treat the scanning and freezing strategies as two independent\nchoices. Of course they're not independent in any practical sense, but\nI think it's slightly simpler and more elegant that way -- it makes\nthe GUC vacuum_freeze_strategy_threshold strictly about freezing\nstrategy, while still leading to VACUUM advancing relfrozenxid in a\nway that makes sense. It just happens as a second order effect. Why\nadd a special case?\n\nIn principle the break-even point for eager scanning strategy (i.e.\nadvancing relfrozenxid) is based on the added cost only under this\nscheme. 
There is no reason for lazy_scan_strategy to care about what\nhappened in the past to make the eager scanning strategy look like a\ngood idea. Similarly, there isn't any practical reason why\nlazy_scan_strategy needs to anticipate what will happen in the near\nfuture with freezing.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 10:00:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 5:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I didn't have time to incorporate any of the feedback from Matthias\n> just yet. That will have to wait until v16.\n\nAttached is v16, which incorporates some of Matthias' feedback.\n\nI've rolled back the major restructuring to the \"Routine Vacuuming\"\ndocs that previously appeared in 0003, preferring to take a much more\nincremental approach. I do still think that somebody needs to do some\nmajor reworking of that, just in general. That can be done by a\nseparate patch. There are now only fairly mechanical doc updates in\nall 3 patches.\n\nOther changes:\n\n* vacuum_freeze_strategy_threshold is now MB-based, and can be set up to 512TB.\n\n* Various refinements to comments.\n\n--\nPeter Geoghegan",
"msg_date": "Mon, 16 Jan 2023 10:10:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 10:10 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached is v16, which incorporates some of Matthias' feedback.\n\n0001 (the freezing strategies patch) is now committable IMV. Or at\nleast will be once I polish the docs a bit more. I plan on committing\n0001 some time next week, barring any objections.\n\nI should point out that 0001 is far shorter and simpler than the\npage-level freezing commit that already went in (commit 1de58df4). The\nonly thing in 0001 that seems like it might be a bit controversial\n(when considered on its own) is the addition of the\nvacuum_freeze_strategy_threshold GUC/reloption. Note in particular\nthat vacuum_freeze_strategy_threshold doesn't look like any other\nexisting GUC; it gets applied as a threshold on the size of the rel's\nmain fork at the beginning of vacuumlazy.c processing. As far as I\nknow there are no objections to that approach at this time, but it\ndoes still seem worth drawing attention to now.\n\n0001 also makes unlogged tables and temp tables always use eager\nfreezing strategy, no matter how the GUC/reloption are set. This seems\n*very* easy to justify, since the potential downside of such a policy\nis obviously extremely low, even when we make very pessimistic\nassumptions. The usual cost we need to worry about when it comes to\nfreezing is the added WAL overhead -- that clearly won't apply when\nwe're vacuuming non-permanent tables. That really just leaves the cost\nof dirtying extra pages, which in general could have a noticeable\nsystem-level impact in the case of unlogged tables.\n\nDirtying extra pages when vacuuming an unlogged table is also a\nnon-issue. Even the eager freezing strategy only freezes \"extra\" pages\n(\"extra\" relative to the lazy strategy behavior) given a page that\nwill be set all-visible in any case [1]. Such a page will need to have\nits page-level PD_ALL_VISIBLE bit set in any case -- which is already\nenough to dirty the page. 
And so there can never be any additional\npages dirtied as a result of the special policy 0001 adds for\nnon-permanent relations.\n\n[1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_2\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 17:55:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 10:00 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Now we treat the scanning and freezing strategies as two independent\n> choices. Of course they're not independent in any practical sense, but\n> I think it's slightly simpler and more elegant that way -- it makes\n> the GUC vacuum_freeze_strategy_threshold strictly about freezing\n> strategy, while still leading to VACUUM advancing relfrozenxid in a\n> way that makes sense. It just happens as a second order effect. Why\n> add a special case?\n\nThis might be a better way to explain it:\n\nThe main page-level freezing commit (commit 1de58df4) already added an\noptimization that triggers page-level freezing \"early\" (early relative\nto vacuum_freeze_min_age). This happens whenever a page already needs\nto have an FPI logged inside lazy_scan_prune -- even when we're using\nthe lazy freezing strategy. The optimization isn't configurable, and\ngets applied regardless of freezing strategy (technically there is no\nsuch thing as freezing strategies on HEAD just yet, though HEAD still\nhas this optimization).\n\nThere will be workloads where the FPI optimization will result in\nfreezing many more pages -- especially when data checksums are in use\n(since then we could easily need to log an FPI just so pruning can set\na hint bit). As a result, certain VACUUMs that use the lazy freezing\nstrategy will freeze in almost the same way as an equivalent VACUUM\nusing the eager freezing strategy. Such a \"nominally lazy but actually\nquite eager\" VACUUM operation should get the same benefit in terms of\nrelfrozenxid advancement as it would if it really had used the eager\nfreezing strategy instead. 
It's fairly obvious that we'll get the same\nbenefit in relfrozenxid advancement (comparable relfrozenxid results\nfor comparable freezing work), since the way that VACUUM decides on\nits scanning strategy is not conditioned on freezing strategy (whether\nby the ongoing VACUUM or any other VACUUM against the same table).\n\nAll that matters is the conditions in the table (in particular the\nadded cost of opting for eager scanning over lazy scanning) as\nindicated by the visibility map at the start of each VACUUM -- how\nthose conditions came about really isn't interesting at that point.\nAnd so lazy_scan_strategy doesn't care about them when it chooses\nVACUUM's scanning strategy.\n\nThere are even tables/workloads where relfrozenxid will be able to\njump forward by a huge amount whenever VACUUM choosing the eager\nscanning strategy, despite the fact that VACUUM generally does little\nor no freezing to make that possible:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_3\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 18:24:53 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 11:31 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> > I think '(nextXID - cutoffs->relfrozenxid) / freeze_table_age' should\n> > be the actual fraction right? What is the point of adding 0.5 to the\n> > divisor? If there is a logical reason, maybe we can explain in the\n> > comments.\n>\n> It's just a way of avoiding division by zero.\n\noh, correct :)\n\n> > While looking into the logic of 'lazy_scan_strategy', I think the idea\n> > looks very good but the only thing is that\n> > we have kept eager freeze and eager scan completely independent.\n> > Don't you think that if a table is chosen for an eager scan\n> > then we should force the eager freezing as well?\n>\n> Earlier versions of the patch kind of worked that way.\n> lazy_scan_strategy would actually use twice the GUC setting to\n> determine scanning strategy. That approach could make our \"transition\n> from lazy to eager strategies\" involve an excessive amount of\n> \"catch-up freezing\" in the VACUUM operation that advanced relfrozenxid\n> for the first time, which you see an example of here:\n>\n> https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch\n>\n> Now we treat the scanning and freezing strategies as two independent\n> choices. Of course they're not independent in any practical sense, but\n> I think it's slightly simpler and more elegant that way -- it makes\n> the GUC vacuum_freeze_strategy_threshold strictly about freezing\n> strategy, while still leading to VACUUM advancing relfrozenxid in a\n> way that makes sense. It just happens as a second order effect. Why\n> add a special case?\n\nI think that it makes sense to keep 'vacuum_freeze_strategy_threshold'\nstrictly for freezing. 
But the point is that the eager scanning\nstrategy is driven by table freezing needs of the table (tableagefrac)\nthat make sense, but if we have selected the eager freezing based on\nthe table age and its freezing need then why don't we force the eager\nfreezing as well if we have selected eager scanning, after all the\neager scanning is selected for satisfying the freezing need. But\nOTOH, the eager scanning might get selected if it appears that we\nmight not have to scan too many extra pages compared to lazy scan so\nin those cases forcing the eager freezing might not be wise. So maybe\nit is a good idea to keep them the way you have in your patch.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 09:43:12 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 8:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I think that it makes sense to keep 'vacuum_freeze_strategy_threshold'\n> strictly for freezing. But the point is that the eager scanning\n> strategy is driven by table freezing needs of the table (tableagefrac)\n> that make sense, but if we have selected the eager freezing based on\n> the table age and its freezing need then why don't we force the eager\n> freezing as well if we have selected eager scanning, after all the\n> eager scanning is selected for satisfying the freezing need.\n\nDon't think of eager scanning as the new name for aggressive mode --\nit's a fairly different concept, because we care about costs now.\nEager scanning can be chosen just because it's very cheap relative to\nthe alternative of lazy scanning, even when relfrozenxid is still very\nrecent. (This kind of behavior isn't really new [1], but the exact\nimplementation from the patch is new.)\n\nTables such as pgbench_branches and pgbench_tellers will reliably use\neager scanning strategy, no matter how any GUC has been set -- just\nbecause the added cost is always zero (relative to lazy scanning). It\nreally doesn't matter how far along tableagefrac here, ever. These\nsame tables will never use eager freezing strategy, unless the\nvacuum_freeze_strategy_threshold GUC is misconfigured. (This is\nanother example of how scanning strategy and freezing strategy may\ndiffer for the same table.)\n\nYou do have a good point, though. I think that I know what you mean.\nNote that antiwraparound autovacuums (or VACUUMs of tables very near\nto that point) *will* always use both the eager freezing strategy and\nthe eager scanning strategy -- which is probably close to what you\nmeant.\n\nThe important point is that there can be more than one reason to\nprefer one strategy to another -- and the reasons can be rather\ndifferent. 
Occasionally it'll be a combination of two factors together\nthat push things in favor of one strategy over the other -- even\nthough either factor on its own would not have resulted in the same\nchoice.\n\n[1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Constantly_updated_tables_.28usually_smaller_tables.29\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 16 Jan 2023 20:35:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 10:05 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jan 16, 2023 at 8:13 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > I think that it makes sense to keep 'vacuum_freeze_strategy_threshold'\n> > strictly for freezing. But the point is that the eager scanning\n> > strategy is driven by table freezing needs of the table (tableagefrac)\n> > that make sense, but if we have selected the eager freezing based on\n> > the table age and its freezing need then why don't we force the eager\n> > freezing as well if we have selected eager scanning, after all the\n> > eager scanning is selected for satisfying the freezing need.\n>\n> Don't think of eager scanning as the new name for aggressive mode --\n> it's a fairly different concept, because we care about costs now.\n> Eager scanning can be chosen just because it's very cheap relative to\n> the alternative of lazy scanning, even when relfrozenxid is still very\n> recent. (This kind of behavior isn't really new [1], but the exact\n> implementation from the patch is new.)\n>\n> Tables such as pgbench_branches and pgbench_tellers will reliably use\n> eager scanning strategy, no matter how any GUC has been set -- just\n> because the added cost is always zero (relative to lazy scanning). It\n> really doesn't matter how far along tableagefrac here, ever. These\n> same tables will never use eager freezing strategy, unless the\n> vacuum_freeze_strategy_threshold GUC is misconfigured. (This is\n> another example of how scanning strategy and freezing strategy may\n> differ for the same table.)\n\nYes, I agree with that. Thanks for explaining in detail.\n\n> You do have a good point, though. 
I think that I know what you mean.\n> Note that antiwraparound autovacuums (or VACUUMs of tables very near\n> to that point) *will* always use both the eager freezing strategy and\n> the eager scanning strategy -- which is probably close to what you\n> meant.\n\nRight\n\n> The important point is that there can be more than one reason to\n> prefer one strategy to another -- and the reasons can be rather\n> different. Occasionally it'll be a combination of two factors together\n> that push things in favor of one strategy over the other -- even\n> though either factor on its own would not have resulted in the same\n> choice.\n\nYes, that makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:47:44 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 1:47 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 10:05 AM Peter Geoghegan <pg@bowt.ie> wrote:\n\nMy final set of comments for 0002\n\n1.\n+struct vmsnapshot\n+{\n+ /* Target heap rel */\n+ Relation rel;\n+ /* Scanning strategy used by VACUUM operation */\n+ vmstrategy strat;\n+ /* Per-strategy final scanned_pages */\n+ BlockNumber rel_pages;\n+ BlockNumber scanned_pages_lazy;\n+ BlockNumber scanned_pages_eager;\n\nI do not understand much use of maintaining these two\n'scanned_pages_lazy' and 'scanned_pages_eager' variables. I think\njust maintaining 'scanned_pages' should be sufficient. I do not see\nin patches also they are really used. lazy_scan_strategy() is using\nthese variables but this is getting values of these out parameters\nfrom visibilitymap_snap_acquire(). And visibilitymap_snap_strategy()\nis also using this, but it seems there we just need the final result\nof 'scanned_pages' instead of these two variables.\n\n2.\n\n+#define MAX_PAGES_YOUNG_TABLEAGE 0.05 /* 5% of rel_pages */\n+#define MAX_PAGES_OLD_TABLEAGE 0.70 /* 70% of rel_pages */\n\nWhy is the logic behind 5% and 70% are those based on some\nexperiments? Should those be tuning parameters so that with real\nworld use cases if we realise that it would be good if the eager scan\nis getting selected more frequently or less frequently then we can\ntune those parameters?\n\n3.\n+ /*\n+ * VACUUM's DISABLE_PAGE_SKIPPING option overrides our decision by forcing\n+ * VACUUM to scan every page (VACUUM effectively distrusts rel's VM)\n+ */\n+ if (force_scan_all)\n+ vacrel->vmstrat = VMSNAP_SCAN_ALL;\n\nI think this should be moved as first if case, I mean why to do all\nthe calculations based on the 'tableagefrac' and\n'TABLEAGEFRAC_XXPOINT' if we are forced to scan them all. 
I agree the\nextra computation we are doing might not really matter compared to the\nvacuum work we are going to perform but still seems logical to me to\ndo the simple check first.\n\n4. Should we move prefetching as a separate patch, instead of merging\nwith the scanning strategy?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Jan 2023 16:47:09 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 3:17 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> My final set of comments for 0002\n\nThanks for the review!\n\n> I do not understand much use of maintaining these two\n> 'scanned_pages_lazy' and 'scanned_pages_eager' variables. I think\n> just maintaining 'scanned_pages' should be sufficient. I do not see\n> in patches also they are really used.\n\nI agree that the visibility map snapshot struct could stand to be\ncleaned up -- some of that state may not be needed, and it wouldn't be\nthat hard to use memory a little more economically, particularly with\nvery small tables. It's on my TODO list already.\n\n> +#define MAX_PAGES_YOUNG_TABLEAGE 0.05 /* 5% of rel_pages */\n> +#define MAX_PAGES_OLD_TABLEAGE 0.70 /* 70% of rel_pages */\n>\n> Why is the logic behind 5% and 70% are those based on some\n> experiments? Should those be tuning parameters so that with real\n> world use cases if we realise that it would be good if the eager scan\n> is getting selected more frequently or less frequently then we can\n> tune those parameters?\n\nThe specific multipliers constants chosen (for\nMAX_PAGES_YOUNG_TABLEAGE and MAX_PAGES_OLD_TABLEAGE) were based on\nboth experiments and intuition. The precise values could be somewhat\ndifferent without it really mattering, though. For example, with a\ntable like pgbench_history (which is a really important case for the\npatch in general), there won't be any all-visible pages at all (at\nleast after a short while), so it won't matter what these constants\nare -- eager scanning will always be chosen.\n\nI don't think that they should be parameters. The useful parameter for\nusers remains vacuum_freeze_table_age/autovacuum_freeze_max_age (note\nthat vacuum_freeze_table_age usually gets its value from\nautovacuum_freeze_max_age due to changes in 0002). Like today,\nvacuum_freeze_table_age forces VACUUM to scan all not-all-frozen pages\nso that relfrozenxid can be advanced. 
Unlike today, it forces eager\nscanning (not aggressive mode). But even long before eager scanning is\n*forced*, pressure to use eager scanning gradually builds. That\npressure will usually cause some VACUUM to use eager scanning before\nit's strictly necessary. Overall,\nvacuum_freeze_table_age/autovacuum_freeze_max_age now provide loose\nguidance.\n\nIt really has to be loose in this sense in order for\nlazy_scan_strategy() to have the freedom to do the right thing based\non the characteristics of the table as a whole, according to its\nvisibility map snapshot. This allows lazy_scan_strategy() to stumble\nupon once-off opportunities to advance relfrozenxid inexpensively,\nincluding cases where it could never happen with the current model.\nThese opportunities are side-effects of workload characteristics that\ncan be hard to predict [1][2].\n\n> I think this should be moved as first if case, I mean why to do all\n> the calculations based on the 'tableagefrac' and\n> 'TABLEAGEFRAC_XXPOINT' if we are forced to scan them all. I agree the\n> extra computation we are doing might not really matter compared to the\n> vacuum work we are going to perform but still seems logical to me to\n> do the simple check first.\n\nThis is only needed for DISABLE_PAGE_SKIPPING, which is an escape\nhatch option that is never supposed to be needed. I don't think that\nit's worth going to the trouble of indenting the code more just so\nthis is avoided -- it really is an afterthought. Besides, the compiler\nmight well be doing this for us.\n\n> 4. Should we move prefetching as a separate patch, instead of merging\n> with the scanning strategy?\n\nI don't think that breaking that out would be an improvement. 
A lot of\nthe prefetching stuff informs how the visibility map code is\nstructured.\n\n[1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_3\n[2] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Opportunistically_advancing_relfrozenxid_with_bursty.2C_real-world_workloads\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 23 Jan 2023 10:01:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> 0001 (the freezing strategies patch) is now committable IMV. Or at\n> least will be once I polish the docs a bit more. I plan on committing\n> 0001 some time next week, barring any objections.\n\nI plan on committing 0001 (the freezing strategies commit) tomorrow\nmorning, US Pacific time.\n\nAttached is v17. There are no significant differences compared to v17.\nI decided to post a new version now, ahead of commit, to show how I've\ncleaned up the docs in 0001 -- docs describing the new GUC, freeze\nstrategies, and so on.\n\n--\nPeter Geoghegan",
"msg_date": "Tue, 24 Jan 2023 14:49:38 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Tue, 24 Jan 2023 at 23:50, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jan 16, 2023 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > 0001 (the freezing strategies patch) is now committable IMV. Or at\n> > least will be once I polish the docs a bit more. I plan on committing\n> > 0001 some time next week, barring any objections.\n>\n> I plan on committing 0001 (the freezing strategies commit) tomorrow\n> morning, US Pacific time.\n>\n> Attached is v17. There are no significant differences compared to v17.\n> I decided to post a new version now, ahead of commit, to show how I've\n> cleaned up the docs in 0001 -- docs describing the new GUC, freeze\n> strategies, and so on.\n\nLGTM, +1 on 0001\n\nSome more comments on 0002:\n\n> +lazy_scan_strategy(LVRelState *vacrel, bool force_scan_all)\n> scanned_pages_lazy & scanned_pages_eager\n\nWe have not yet scanned the pages, so I suggest plan/scan_pages_eager\nand *_lazy as variable names instead, to minimize confusion about the\nnaming.\n\nI'll await the next iteration of 0002 in which you've completed more\nTODOs before I'll dig deeper into that patch.\n\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Wed, 25 Jan 2023 16:51:33 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 14:49:38 -0800, Peter Geoghegan wrote:\n> On Mon, Jan 16, 2023 at 5:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > 0001 (the freezing strategies patch) is now committable IMV. Or at\n> > least will be once I polish the docs a bit more. I plan on committing\n> > 0001 some time next week, barring any objections.\n>\n> I plan on committing 0001 (the freezing strategies commit) tomorrow\n> morning, US Pacific time.\n\nI unfortunately haven't been able to keep up with the thread and saw this just\nnow. But I've expressed the concern below several times before, so it\nshouldn't come as a surprise.\n\nI think, as committed, this will cause serious issues for some reasonably\ncommon workloads, due to substantially increased WAL traffic.\n\n\nThe most common problematic scenario I see are tables full of rows with\nlimited lifetime. E.g. because rows get aggregated up after a while. Before\nthose rows practically never got frozen - but now we'll freeze them all the\ntime.\n\n\nI whipped up a quick test: 15 pgbench threads insert rows, 1 psql \\while loop\ndeletes older rows.\n\nWorkload fits in s_b:\n\nAutovacuum on average generates between 1.5x-7x as much WAL as before,\ndepending on how things interact with checkpoints. And not just that, each\nautovac cycle also takes substantially longer than before - the average time\nfor an autovacuum roughly doubled. Which of course increases the amount of\nbloat.\n\n\nWhen workload doesn't fit in s_b:\n\nTime for vacuuming goes up to ~5x. WAL volume to ~9x. Autovacuum can't keep up\nwith bloat, every vacuum takes longer than the prior one:\n65s->78s->139s->176s\nAnd that's with autovac cost limits removed! Relation size nearly doubles due\nto bloat.\n\n\nAfter I disabled the new strategy autovac started to catch up again:\n124s->101s->103->46s->20s->28s->24s\n\n\nThis is significantly worse than I predicted. This was my first attempt at\ncoming up with a problematic workload. 
There'll likely be way worse in\nproduction.\n\n\n\nI think as-is this logic will cause massive issues.\n\nAndres\n\n\n",
"msg_date": "Wed, 25 Jan 2023 16:43:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-24 14:49:38 -0800, Peter Geoghegan wrote:\n> From e41d3f45fcd6f639b768c22139006ad11422575f Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Thu, 24 Nov 2022 18:20:36 -0800\n> Subject: [PATCH v17 1/3] Add eager and lazy freezing strategies to VACUUM.\n> \n> Eager freezing strategy avoids large build-ups of all-visible pages. It\n> makes VACUUM trigger page-level freezing whenever doing so will enable\n> the page to become all-frozen in the visibility map. This is useful for\n> tables that experience continual growth, particularly strict append-only\n> tables such as pgbench's history table. Eager freezing significantly\n> improves performance stability by spreading out the cost of freezing\n> over time, rather than doing most freezing during aggressive VACUUMs.\n> It complements the insert autovacuum mechanism added by commit b07642db.\n\nHowever, it significantly increases the overall work when rows have a somewhat\nlimited lifetime. The documented reason why vacuum_freeze_min_age exist -\nalthough I think it doesn't really achieve its documented goal anymore, after\nthe recent changes page-level freezing changes.\n\n\n> VACUUM determines its freezing strategy based on the value of the new\n> vacuum_freeze_strategy_threshold GUC (or reloption) with logged tables;\n> tables that exceed the size threshold use the eager freezing strategy.\n\nI think that's not a sufficient guard at all. The size of a table doesn't say\nmuch about how a table is used.\n\n\n> Unlogged tables and temp tables will always use eager freezing strategy,\n> since there is essentially no downside.\n\nI somewhat doubt that that is true, but certainly the cost is lower.\n\n\n> Eager freezing is strictly more aggressive than lazy freezing. Settings\n> like vacuum_freeze_min_age still get applied in just the same way in\n> every VACUUM, independent of the strategy in use. 
The only mechanical\n> difference between eager and lazy freezing strategies is that only the\n> former applies its own additional criteria to trigger freezing pages.\n\nThat's only true because vacuum_freeze_min_age being has been fairly radically\nredefined recently.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:15:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> I unfortunately haven't been able to keep up with the thread and saw this just\n> now. But I've expressed the concern below several times before, so it\n> shouldn't come as a surprise.\n\nYou missed the announcement 9 days ago, and the similar clear\nsignalling of a commit from yesterday. I guess I'll need to start\npersonally reaching out to you any time I commit anything in this area\nin the future. I almost considered doing that here, in fact.\n\n> The most common problematic scenario I see are tables full of rows with\n> limited lifetime. E.g. because rows get aggregated up after a while. Before\n> those rows practically never got frozen - but now we'll freeze them all the\n> time.\n\nFundamentally, the choice to freeze or not freeze is driven by\nspeculation about the needs of the table, with some guidance from the\nuser. That isn't new. It seems to me that it will always be possible\nfor you to come up with an adversarial case that makes any given\napproach look bad, no matter how good it is. Of course that doesn't\nmean that this particular complaint has no validity; but it does mean\nthat you need to be willing to draw the line somewhere.\n\nIn particular, it would be very useful to know what the parameters of\nthe discussion are. Obviously I cannot come up with an algorithm that\ncan literally predict the future. But I may be able to handle specific\ncases of concern better, or to better help users cope in whatever way.\n\n> I whipped up a quick test: 15 pgbench threads insert rows, 1 psql \\while loop\n> deletes older rows.\n\nCan you post the script? And what setting did you use?\n\n> Workload fits in s_b:\n>\n> Autovacuum on average generates between 1.5x-7x as much WAL as before,\n> depending on how things interact with checkpoints. 
And not just that, each\n> autovac cycle also takes substantially longer than before - the average time\n> for an autovacuum roughly doubled. Which of course increases the amount of\n> bloat.\n\nAnything that causes an autovacuum to take longer will effectively\nmake autovacuum think that it has removed more bloat than it really\nhas, which will then make autovacuum less aggressive when it really\nshould be more aggressive. That's a preexisting issue, that needs to\nbe accounted for in the context of this discussion.\n\n> This is significantly worse than I predicted. This was my first attempt at\n> coming up with a problematic workload. There'll likely be way worse in\n> production.\n\nAs I said in the commit message, the current default for\nvacuum_freeze_strategy_threshold is considered low, and was always\nintended to be provisional. Something that I explicitly noted would be\nreviewed after the beta period is over, once we gained more experience\nwith the setting.\n\nI think that a far higher setting could be almost as effective. 32GB,\nor even 64GB could work quite well, since you'll still have the FPI\noptimization.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:22:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 16:43:47 -0800, Andres Freund wrote:\n> I think, as committed, this will cause serious issues for some reasonably\n> common workloads, due to substantially increased WAL traffic.\n> \n> \n> The most common problematic scenario I see are tables full of rows with\n> limited lifetime. E.g. because rows get aggregated up after a while. Before\n> those rows practically never got frozen - but now we'll freeze them all the\n> time.\n\nAnother bad scenario: Some longrunning / hung transaction caused us to get\nclose to the xid wraparound. Problem was resolved, autovacuum runs. Previously\nwe wouldn't have frozen the portion of the table that was actively changing,\nnow we will. Consequence: We get closer to the \"no write\" limit / the outage\nlasts longer.\n\nI don't see an alternative to reverting this for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:26:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 5:15 PM Andres Freund <andres@anarazel.de> wrote:\n> However, it significantly increases the overall work when rows have a somewhat\n> limited lifetime. The documented reason why vacuum_freeze_min_age exist -\n> although I think it doesn't really achieve its documented goal anymore, after\n> the recent changes page-level freezing changes.\n\nHuh? vacuum_freeze_min_age hasn't done that, at all. At least not\nsince the visibility map went in back in 8.4:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Today.2C_on_Postgres_HEAD_2\n\nThat's why we literally do ~100% of all freezing in aggressive mode\nVACUUM with append-only or append-mostly tables.\n\n> > VACUUM determines its freezing strategy based on the value of the new\n> > vacuum_freeze_strategy_threshold GUC (or reloption) with logged tables;\n> > tables that exceed the size threshold use the eager freezing strategy.\n>\n> I think that's not a sufficient guard at all. The size of a table doesn't say\n> much about how a table is used.\n\nSufficient for what purpose?\n\n> > Eager freezing is strictly more aggressive than lazy freezing. Settings\n> > like vacuum_freeze_min_age still get applied in just the same way in\n> > every VACUUM, independent of the strategy in use. The only mechanical\n> > difference between eager and lazy freezing strategies is that only the\n> > former applies its own additional criteria to trigger freezing pages.\n>\n> That's only true because vacuum_freeze_min_age being has been fairly radically\n> redefined recently.\n\nSo? This part of the commit message is a simple statement of fact.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:28:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n> Another bad scenario: Some longrunning / hung transaction caused us to get\n> close to the xid wraparound. Problem was resolved, autovacuum runs. Previously\n> we wouldn't have frozen the portion of the table that was actively changing,\n> now we will. Consequence: We get closer to the \"no write\" limit / the outage\n> lasts longer.\n\nObviously it isn't difficult to just invent a new rule that gets\napplied by lazy_scan_strategy. For example, it would take me less than\n5 minutes to write a patch that disables eager freezing when the\nfailsafe is in effect.\n\n> I don't see an alternative to reverting this for now.\n\nI want to see your test case before acting.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:37:17 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 17:22:32 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 25, 2023 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > I unfortunately haven't been able to keep up with the thread and saw this just\n> > now. But I've expressed the concern below several times before, so it\n> > shouldn't come as a surprise.\n> \n> You missed the announcement 9 days ago, and the similar clear\n> signalling of a commit from yesterday. I guess I'll need to start\n> personally reaching out to you any time I commit anything in this area\n> in the future. I almost considered doing that here, in fact.\n\nThere's just too much email on -hackers to keep up with, if I ever want to do\nany development of my own. I raised this concern before though, so it's not\nlike it's a surprise.\n\n\n> > The most common problematic scenario I see are tables full of rows with\n> > limited lifetime. E.g. because rows get aggregated up after a while. Before\n> > those rows practically never got frozen - but now we'll freeze them all the\n> > time.\n> \n> Fundamentally, the choice to freeze or not freeze is driven by\n> speculation about the needs of the table, with some guidance from the\n> user. That isn't new. It seems to me that it will always be possible\n> for you to come up with an adversarial case that makes any given\n> approach look bad, no matter how good it is. Of course that doesn't\n> mean that this particular complaint has no validity; but it does mean\n> that you need to be willing to draw the line somewhere.\n\nSure. But significantly regressing plausible if not common workloads is\ndifferent than knowing that there'll be some edge case where we'll do\nsomething worse.\n\n\n> > I whipped up a quick test: 15 pgbench threads insert rows, 1 psql \\while loop\n> > deletes older rows.\n> \n> Can you post the script? 
And what setting did you use?\n\nprep:\nCREATE TABLE pgbench_time_data(client_id int8 NOT NULL, ts timestamptz NOT NULL, filla int8 NOT NULL, fillb int8 not null, fillc int8 not null);\nCREATE INDEX ON pgbench_time_data(ts);\nALTER SYSTEM SET autovacuum_naptime = '10s';\nALTER SYSTEM SET autovacuum_vacuum_cost_delay TO -1;\nALTER SYSTEM SET synchronous_commit = off; -- otherwise more clients are needed\n\npgbench script, with 15 clients:\nINSERT INTO pgbench_time_data(client_id, ts, filla, fillb, fillc) VALUES (:client_id, now(), 0, 0, 0);\n\npsql session deleting old data:\nEXPLAIN ANALYZE DELETE FROM pgbench_time_data WHERE ts < now() - '120s'::interval \\watch 1\n\nRealistically the time should be longer, but I didn't want to wait that long\nfor the deletions to actually start.\n\n\nI reproduced both with checkpoint_timeout=5min and 1min. 1min is easier for\nimpatient me.\n\n\nI switched between vacuum_freeze_strategy_threshold=0 and\nvacuum_freeze_strategy_threshold=too-high, because it's quicker/takes less\nwarmup to set up something with smaller tables.\n\nshared_buffers=32GB for fits in s_b, 1GB otherwise.\n\nmax_wal_size=150GB, log_autovacuum_min_duration=0, and a bunch of logging\nsettings.\n\n\n> > Workload fits in s_b:\n> >\n> > Autovacuum on average generates between 1.5x-7x as much WAL as before,\n> > depending on how things interact with checkpoints. And not just that, each\n> > autovac cycle also takes substantially longer than before - the average time\n> > for an autovacuum roughly doubled. Which of course increases the amount of\n> > bloat.\n> \n> Anything that causes an autovacuum to take longer will effectively\n> make autovacuum think that it has removed more bloat than it really\n> has, which will then make autovacuum less aggressive when it really\n> should be more aggressive. 
That's a preexisting issue, that needs to\n> be accounted for in the context of this discussion.\n\nThat's not the problem here - on my system autovac starts again very\nquickly. The problem is that we accumulate bloat while autovacuum is\nrunning. Wasting time/WAL volume on freezing pages that don't need to be\nfrozen is an issue.\n\n\n\n> In particular, it would be very useful to know what the parameters of\n> the discussion are. Obviously I cannot come up with an algorithm that\n> can literally predict the future. But I may be able to handle specific\n> cases of concern better, or to better help users cope in whatever way.\n\n> > This is significantly worse than I predicted. This was my first attempt at\n> > coming up with a problematic workload. There'll likely be way worse in\n> > production.\n> \n> As I said in the commit message, the current default for\n> vacuum_freeze_strategy_threshold is considered low, and was always\n> intended to be provisional. Something that I explicitly noted would be\n> reviewed after the beta period is over, once we gained more experience\n> with the setting.\n\n> I think that a far higher setting could be almost as effective. 32GB,\n> or even 64GB could work quite well, since you'll still have the FPI\n> optimization.\n\nThe concrete setting of vacuum_freeze_strategy_threshold doesn't matter.\nTable size simply isn't a usable proxy for whether eager freezing is a good\nidea or not.\n\nYou can have a 1TB table full of transient data, or you can have a 1TB table\nwhere part of the data is transient and only settles after a time. In neither\ncase eager freezing is ok.\n\nOr you can have an append-only table. In which case eager freezing is great.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:49:28 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 17:37:17 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 25, 2023 at 5:26 PM Andres Freund <andres@anarazel.de> wrote:\n> > Another bad scenario: Some longrunning / hung transaction caused us to get\n> > close to the xid wraparound. Problem was resolved, autovacuum runs. Previously\n> > we wouldn't have frozen the portion of the table that was actively changing,\n> > now we will. Consequence: We get closer to the \"no write\" limit / the outage\n> > lasts longer.\n> \n> Obviously it isn't difficult to just invent a new rule that gets\n> applied by lazy_scan_strategy. For example, it would take me less than\n> 5 minutes to write a patch that disables eager freezing when the\n> failsafe is in effect.\n\nSure. I'm not saying that these issues cannot be addressed. Of course no patch\nof a meaningful size is perfect and we all can't predict the future. But this\nis a very significant behavioural change to vacuum, and there are pretty\nsimple scenarios in which it causes significant regressions. And at least some\nof the issues have been pointed out before.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 17:56:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 5:49 PM Andres Freund <andres@anarazel.de> wrote:\n> Sure. But significantly regressing plausible if not common workloads is\n> different than knowing that there'll be some edge case where we'll do\n> something worse.\n\nThat's very vague. Significant to whom, for what purpose?\n\n> prep:\n> CREATE TABLE pgbench_time_data(client_id int8 NOT NULL, ts timestamptz NOT NULL, filla int8 NOT NULL, fillb int8 not null, fillc int8 not null);\n> CREATE INDEX ON pgbench_time_data(ts);\n> ALTER SYSTEM SET autovacuum_naptime = '10s';\n> ALTER SYSTEM SET autovacuum_vacuum_cost_delay TO -1;\n> ALTER SYSTEM SET synchronous_commit = off; -- otherwise more clients are needed\n>\n> pgbench script, with 15 clients:\n> INSERT INTO pgbench_time_data(client_id, ts, filla, fillb, fillc) VALUES (:client_id, now(), 0, 0, 0);\n>\n> psql session deleting old data:\n> EXPLAIN ANALYZE DELETE FROM pgbench_time_data WHERE ts < now() - '120s'::interval \\watch 1\n>\n> Realistically the time should be longer, but I didn't want to wait that long\n> for the deletions to actually start.\n\nI'll review this tomorrow.\n\n> I reproduced both with checkpoint_timeout=5min and 1min. 1min is easier for\n> impatient me.\n\nYou said \"Autovacuum on average generates between 1.5x-7x as much WAL\nas before\". Why stop there, though? There's a *big* multiplicative\neffect in play here from FPIs, obviously, so the sky's the limit. Why\nnot set checkpoint_timeout to 30s?\n\n> I switched between vacuum_freeze_strategy_threshold=0 and\n> vacuum_freeze_strategy_threshold=too-high, because it's quicker/takes less\n> warmup to set up something with smaller tables.\n\nThis makes no sense to me, at all.\n\n> The concrete setting of vacuum_freeze_strategy_threshold doesn't matter.\n> Table size simply isn't a usable proxy for whether eager freezing is a good\n> idea or not.\n\nIt's not supposed to be - you have it backwards. 
It's intended to work\nas a proxy for whether lazy freezing is a bad idea, particularly in\nthe worst case.\n\nThere is also an effect that likely would have been protective with\nyour test case had you used a larger table with the same test case\n(and had you not lowered vacuum_freeze_strategy_threshold from its\nalready low default). In general there'd be a much better chance of\nconcurrent reuse of space by new inserts discouraging page-level\nfreezing, since VACUUM would take much longer relative to everything\nelse, as compared to a small table.\n\n> You can have a 1TB table full of transient data, or you can have a 1TB table\n> where part of the data is transient and only settles after a time. In neither\n> case eager freezing is ok.\n\nIt sounds like you're not willing to accept any kind of trade-off.\nHow, in general, can we detect what kind of 1TB table it will be, in\nthe absence of user input? And in the absence of user input, why would\nwe prefer to default to a behavior that is highly destabilizing when\nwe get it wrong?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 18:31:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 17:28:48 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 25, 2023 at 5:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > However, it significantly increases the overall work when rows have a somewhat\n> > limited lifetime. The documented reason why vacuum_freeze_min_age exist -\n> > although I think it doesn't really achieve its documented goal anymore, after\n> > the recent changes page-level freezing changes.\n> \n> Huh? vacuum_freeze_min_age hasn't done that, at all. At least not\n> since the visibility map went in back in 8.4:\n\nMy point was the other way round. That vacuum_freeze_min_age *prevented* us\nfrom freezing rows \"too soon\" - obviously a very blunt instrument.\n\nSince page level freezing, it only partially does that, because we'll freeze\neven newer rows, if pruning triggered an FPI (I don't think that's quite the\nright check, but that's a separate discussion).\n\nAs far as I can tell, with the eager strategy, the only thing\nvacuum_freeze_min_age really influences is whether we'll block waiting for a\ncleanup lock. IOW, VACUUM on a table > vacuum_freeze_strategy_threshold is\nnow a slightly less-blocking version of VACUUM FREEZE.\n\n\nThe paragraph I was referencing:\n <para>\n One disadvantage of decreasing <varname>vacuum_freeze_min_age</varname> is that\n it might cause <command>VACUUM</command> to do useless work: freezing a row\n version is a waste of time if the row is modified\n soon thereafter (causing it to acquire a new XID). 
So the setting should\n be large enough that rows are not frozen until they are unlikely to change\n any more.\n </para>\n\nBut now vacuum_freeze_min_age doesn't reliably influence whether we'll freeze\nrow anymore.\n\nAm I missing something here?\n\n\n\n> > > VACUUM determines its freezing strategy based on the value of the new\n> > > vacuum_freeze_strategy_threshold GUC (or reloption) with logged tables;\n> > > tables that exceed the size threshold use the eager freezing strategy.\n> >\n> > I think that's not a sufficient guard at all. The size of a table doesn't say\n> > much about how a table is used.\n> \n> Sufficient for what purpose?\n\nNot not regress a substantial portion of our userbase.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 18:33:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 6:33 PM Andres Freund <andres@anarazel.de> wrote:\n> My point was the other way round. That vacuum_freeze_min_age *prevented* us\n> from freezing rows \"too soon\" - obviously a very blunt instrument.\n\nYes, not freezing at all until aggressive vacuum is definitely good\nwhen you don't really need to freeze at all.\n\n> Since page level freezing, it only partially does that, because we'll freeze\n> even newer rows, if pruning triggered an FPI (I don't think that's quite the\n> right check, but that's a separate discussion).\n\nBut the added cost is very low, and it might well make all the difference.\n\n> As far as I can tell, with the eager strategy, the only thing\n> vacuum_freeze_min_age really influences is whether we'll block waiting for a\n> cleanup lock. IOW, VACUUM on a table > vacuum_freeze_strategy_threshold is\n> now a slightly less-blocking version of VACUUM FREEZE.\n\nThat's simply not true, at all. I'm very surprised that you think\nthat. The commit message very clearly addresses this. You know, the\npart that you specifically quoted to complain about today!\n\nOnce again I'll refer you to my Wiki page on this:\n\nhttps://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_2\n\nThe difference between this and VACUUM FREEZE is described here:\n\n\"Note how we freeze most pages, but still leave a significant number\nunfrozen each time, despite using an eager approach to freezing\n(2981204 scanned - 2355230 frozen = 625974 pages scanned but left\nunfrozen). Again, this is because we don't freeze pages unless they're\nalready eligible to be set all-visible. We saw the same effect with\nthe first pgbench_history example, but it was hardly noticeable at all\nthere. 
Whereas here we see that even eager freezing opts to hold off\non freezing relatively many individual heap pages, due to the observed\nconditions on those particular heap pages.\"\n\nIf it was true that eager freezing strategy behaved just the same as\nVACUUM FREEZE (at least as far as freezing is concerned) then\nscenarios like this one would show that VACUUM froze practically all\nof the pages it scanned -- maybe fully 100% of all scanned pages would\nbe frozen. This effect is absent from small tables, and I suspect that\nit's absent from your test case in part because you used a table that\nwas too small.\n\nObviously the way that eager freezing strategy avoids freezing\nconcurrently modified pages isn't perfect. It's one approach to\nlimiting the downside from eager freezing, in tables (or even\nindividual pages) where it's inappropriate. Of course that isn't\nperfect, but it's a significant factor.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 18:43:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hk,\n\nOn 2023-01-25 18:31:16 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 25, 2023 at 5:49 PM Andres Freund <andres@anarazel.de> wrote:\n> > Sure. But significantly regressing plausible if not common workloads is\n> > different than knowing that there'll be some edge case where we'll do\n> > something worse.\n> \n> That's very vague. Significant to whom, for what purpose?\n\nSure it's vague. But you can't tell me that it's uncommon to use postgres to\nstore rows that isn't retained for > 50million xids.\n\n\n\n> > I reproduced both with checkpoint_timeout=5min and 1min. 1min is easier for\n> > impatient me.\n> \n> You said \"Autovacuum on average generates between 1.5x-7x as much WAL\n> as before\". Why stop there, though? There's a *big* multiplicative\n> effect in play here from FPIs, obviously, so the sky's the limit. Why\n> not set checkpoint_timeout to 30s?\n\nThe amount of WAL increases substantially even with 5min, the degree of the\nincrease varies more though. But that largely vanishes if you increase the\ntime after which rows are deleted a bit. I just am not patient enough to wait\nfor that.\n\n\n> > I switched between vacuum_freeze_strategy_threshold=0 and\n> > vacuum_freeze_strategy_threshold=too-high, because it's quicker/takes less\n> > warmup to set up something with smaller tables.\n> \n> This makes no sense to me, at all.\n\nIt's quicker to run the workload with a table that initially is below 4GB, but\nstill be able to test the eager strategy. 
It wouldn't change anything\nfundamental to just make the rows a bit wider, or to have a static portion of\nthe table.\n\nAnd changing between vacuum_freeze_strategy_threshold=0/very-large (or I\nassume -1, didn't check) while the workload is running having to wait until\nthe 120s to start deleting have passed..\n\n\n> > The concrete setting of vacuum_freeze_strategy_threshold doesn't matter.\n> > Table size simply isn't a usable proxy for whether eager freezing is a good\n> > idea or not.\n> \n> It's not supposed to be - you have it backwards. It's intended to work\n> as a proxy for whether lazy freezing is a bad idea, particularly in\n> the worst case.\n\nThat's a distinction without a difference.\n\n\n> There is also an effect that likely would have been protective with\n> your test case had you used a larger table with the same test case\n> (and had you not lowered vacuum_freeze_strategy_threshold from its\n> already low default).\n\nAgain, you just need a less heavily changing portion of the the table or a\nslightly larger \"deletion delay\" and you end up with a table well over\n4GB. Even as stated I end up with > 4GB after a bit of running.\n\nIt's just a shortcut to make testing this easier.\n\n\n\n> > You can have a 1TB table full of transient data, or you can have a 1TB table\n> > where part of the data is transient and only settles after a time. In neither\n> > case eager freezing is ok.\n> \n> It sounds like you're not willing to accept any kind of trade-off.\n\nI am. Just not every tradeoff. I just don't see any useful tradeoffs purely\nbased on the relation size.\n\n\n> How, in general, can we detect what kind of 1TB table it will be, in the\n> absence of user input?\n\nI suspect we'll need some form of heuristics to differentiate between tables\nthat are more append heavy and tables that are changing more heavily. 
I think\nit might be preferrable to not have a hard cliff but a gradual changeover -\nhard cliffs tend to lead to issue one can't see coming.\n\nI think several of the heuristics below become easier once we introduce \"xid\nage\" vacuums.\n\n\nOne idea is to start tracking the number of all-frozen pages in pg_class. If\nthere's a significant percentage of all-visible but not all-frozen pages,\nvacuum should be more eager. If only a small portion of the table is not\nfrozen, there's no need to be eager. If only a small portion of the table is\nall-visible, there similarly is no need to freeze eagerly.\n\n\nI IIRC previously was handwaving at keeping track of the average age of tuples\non all-visible pages. That could extend the prior heuristic. A heavily\nchanging table will have a relatively young average, a more append only table\nwill have an increasing average age.\n\n\nIt might also make sense to look at the age of relfrozenxid - there's really\nno point in being overly eager if the relation is quite young. And a very\nheavily changing table will tend to be younger. But likely the approach of\ntracking the age of all-visible pages will be more accurate.\n\n\n\nThe heuristics don't have to be perfect. If we get progressively more eager,\nan occasional somewhat eager vacuum isn't a huge issue, as long as it then\nleads to the next few vacuums to become less eager.\n\n\n\n> And in the absence of user input, why would we prefer to default to a\n> behavior that is highly destabilizing when we get it wrong?\n\nUsers know the current behaviour. Introducing significant issues that didn't\npreviously exist will cause new issues and new frustrations.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 19:10:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 8:49 PM Andres Freund <andres@anarazel.de> wrote:\n> The concrete setting of vacuum_freeze_strategy_threshold doesn't matter.\n> Table size simply isn't a usable proxy for whether eager freezing is a good\n> idea or not.\n\nI strongly agree. I can't imagine how a size-based threshold can make\nany sense at all.\n\nBoth Andres and I have repeatedly expressed concern about how much is\nbeing changed in the behavior of vacuum, and how quickly, and IMHO on\nthe basis of very limited evidence that the changes are improvements.\nThe fact that Andres was very quickly able to find cases where the\npatch produces large regression is just more evidence of that. It's\nalso hard to even understand what has been changed, because the\ndescriptions are so theoretical.\n\nI think we're on a very dangerous path here. I want VACUUM to be\nbetter as the next person, but I really don't believe that's the\ndirection we're headed. I think if we release like this, we're going\nto experience more VACUUM pain, not less. And worse still, I don't\nthink anyone other than Peter and Andres is going to understand why\nit's happening.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Jan 2023 22:41:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 7:11 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I switched between vacuum_freeze_strategy_threshold=0 and\n> > > vacuum_freeze_strategy_threshold=too-high, because it's quicker/takes less\n> > > warmup to set up something with smaller tables.\n> >\n> > This makes no sense to me, at all.\n>\n> It's quicker to run the workload with a table that initially is below 4GB, but\n> still be able to test the eager strategy. It wouldn't change anything\n> fundamental to just make the rows a bit wider, or to have a static portion of\n> the table.\n\nWhat does that actually mean? Wouldn't change anything fundamental?\n\nWhat it would do is significantly reduce the write amplification\neffect that you encountered. You came up with numbers of up to 7x, a\nnumber that you used without any mention of checkpoint_timeout being\nlowered to only 1 minutes (I had to push to get that information). Had\nyou done things differently (larger table, larger setting) then that\nwould have made the regression far smaller. So yeah, \"nothing\nfundamental\".\n\n> > How, in general, can we detect what kind of 1TB table it will be, in the\n> > absence of user input?\n>\n> I suspect we'll need some form of heuristics to differentiate between tables\n> that are more append heavy and tables that are changing more heavily.\n\nThe TPC-C tables are actually a perfect adversarial cases for this,\nbecause it's both, together. What then?\n\n> I think\n> it might be preferrable to not have a hard cliff but a gradual changeover -\n> hard cliffs tend to lead to issue one can't see coming.\n\nAs soon as you change your behavior you have to account for the fact\nthat you behaved lazily up until all prior VACUUMs. I think that\nyou're better off just being eager with new pages and modified pages,\nwhile not specifically going\n\n> I IIRC previously was handwaving at keeping track of the average age of tuples\n> on all-visible pages. 
That could extend the prior heuristic. A heavily\n> changing table will have a relatively young average, a more append only table\n> will have an increasing average age.\n>\n>\n> It might also make sense to look at the age of relfrozenxid - there's really\n> no point in being overly eager if the relation is quite young.\n\nI don't think that's true. What about bulk loading? It's a totally\nvalid and common requirement.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 19:48:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 18:43:10 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 25, 2023 at 6:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > As far as I can tell, with the eager strategy, the only thing\n> > vacuum_freeze_min_age really influences is whether we'll block waiting for a\n> > cleanup lock. IOW, VACUUM on a table > vacuum_freeze_strategy_threshold is\n> > now a slightly less-blocking version of VACUUM FREEZE.\n>\n> That's simply not true, at all. I'm very surprised that you think\n> that. The commit message very clearly addresses this.\n\nIt says something like that, but it's not really true:\n\nLooking at the results of\n DROP TABLE IF EXISTS frak;\n -- autovac disabled so we see just the result of the vacuum below\n CREATE TABLE frak WITH (autovacuum_enabled=0) AS SELECT generate_series(1, 10000000);\n VACUUM frak;\n SELECT pg_relation_size('frak') / 8192 AS relsize_pages, SUM(all_visible::int) all_vis_pages, SUM(all_frozen::int) all_frozen_pages FROM pg_visibility('frak');\n\nacross releases.\n\nIn < 16 you'll get:\n┌───────────────┬───────────────┬──────────────────┐\n│ relsize_pages │ all_vis_pages │ all_frozen_pages │\n├───────────────┼───────────────┼──────────────────┤\n│ 44248 │ 44248 │ 0 │\n└───────────────┴───────────────┴──────────────────┘\n\nYou simply can't freeze these rows, because they're not vacuum_freeze_min_age\nxids old.\n\nWith 16 and the default vacuum_freeze_strategy_threshold you'll get the same\n(even though we wouldn't actually trigger an FPW).\n\nWith 16 and vacuum_freeze_strategy_threshold=0, you'll get:\n┌───────────────┬───────────────┬──────────────────┐\n│ relsize_pages │ all_vis_pages │ all_frozen_pages │\n├───────────────┼───────────────┼──────────────────┤\n│ 44248 │ 44248 │ 44248 │\n└───────────────┴───────────────┴──────────────────┘\n\nIOW, basically what you get with VACUUM FREEZE.\n\n\nThat's actually what I was complaining about. 
The commit message in a way is\nright that\n Settings\n like vacuum_freeze_min_age still get applied in just the same way in\n every VACUUM, independent of the strategy in use. The only mechanical\n difference between eager and lazy freezing strategies is that only the\n former applies its own additional criteria to trigger freezing pages.\n\nbut that's only true because page level freezing neutered\nvacuum_freeze_min_age. Compared to <16, it's a *huge* change.\n\n\nYes, it's true that VACUUM still is less agressive than VACUUM FREEZE, even\ndisregarding cleanup locks, because it won't freeze if there's non-removable\nrows on the page. But more often than not that's a pretty small difference.\n\n\n\n> Once again I'll refer you to my Wiki page on this:\n>\n> https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_2\n>\n> The difference between this and VACUUM FREEZE is described here:\n>\n> \"Note how we freeze most pages, but still leave a significant number\n> unfrozen each time, despite using an eager approach to freezing\n> (2981204 scanned - 2355230 frozen = 625974 pages scanned but left\n> unfrozen). Again, this is because we don't freeze pages unless they're\n> already eligible to be set all-visible.\n\nThe only reason there is a substantial difference is because of pgbench's\nuniform access pattern. Most real-world applications don't have that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Jan 2023 19:56:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 10:11 AM Andres Freund <andres@anarazel.de> wrote:\n\n> I am. Just not every tradeoff. I just don't see any useful tradeoffs\npurely\n> based on the relation size.\n\nI expressed reservations about relation size six weeks ago:\n\nOn Wed, Dec 14, 2022 at 12:16 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Dec 13, 2022 at 12:29 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > If the number of unfrozen heap pages is the thing we care about,\nperhaps that, and not the total size of the table, should be the parameter\nthat drives freezing strategy?\n>\n> That's not the only thing we care about, though.\n\nThat was followed by several paragraphs that never got around to explaining\nwhy table size should drive freezing strategy. Review is a feedback\nmechanism alerting the patch author to possible problems. Listening to\nfeedback is like vacuum, in a way: If it hurts, you're not doing it enough.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Jan 26, 2023 at 10:11 AM Andres Freund <andres@anarazel.de> wrote:> I am. Just not every tradeoff. I just don't see any useful tradeoffs purely> based on the relation size.I expressed reservations about relation size six weeks ago:On Wed, Dec 14, 2022 at 12:16 AM Peter Geoghegan <pg@bowt.ie> wrote:>> On Tue, Dec 13, 2022 at 12:29 AM John Naylor> <john.naylor@enterprisedb.com> wrote:> > If the number of unfrozen heap pages is the thing we care about, perhaps that, and not the total size of the table, should be the parameter that drives freezing strategy?>> That's not the only thing we care about, though.That was followed by several paragraphs that never got around to explaining why table size should drive freezing strategy. Review is a feedback mechanism alerting the patch author to possible problems. Listening to feedback is like vacuum, in a way: If it hurts, you're not doing it enough. --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 26 Jan 2023 11:12:22 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 7:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Both Andres and I have repeatedly expressed concern about how much is\n> being changed in the behavior of vacuum, and how quickly, and IMHO on\n> the basis of very limited evidence that the changes are improvements.\n> The fact that Andres was very quickly able to find cases where the\n> patch produces large regression is just more evidence of that. It's\n> also hard to even understand what has been changed, because the\n> descriptions are so theoretical.\n\nDid you actually read the motivating examples Wiki page?\n\n> I think we're on a very dangerous path here. I want VACUUM to be\n> better as the next person, but I really don't believe that's the\n> direction we're headed. I think if we release like this, we're going\n> to experience more VACUUM pain, not less. And worse still, I don't\n> think anyone other than Peter and Andres is going to understand why\n> it's happening.\n\nI think that the only sensible course of action at this point is for\nme to revert the page-level freezing commit from today, and abandon\nall outstanding work on VACUUM. I will still stand by the basic\npage-level freezing work, at least to the extent that I am able to.\nHonestly, just typing that makes me feel a big sense of relief.\n\nI am a proud, stubborn man. While the experience of working on the\nearlier related stuff for Postgres 15 was itself enough to make me\nseriously reassess my choice to work on VACUUM in general, I still\nwanted to finish off what I'd started. I don't see how that'll be\npossible now -- I'm just not in a position to be in the center of\nanother controversy, and I just don't seem to be able to avoid them\nhere, as a practical matter. I will resolve to be a less stubborn\nperson. I don't have the constitution for it anymore.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 20:24:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 8:12 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> That was followed by several paragraphs that never got around to explaining why table size should drive freezing strategy.\n\nYou were talking about the system level view of freeze debt, and how\nthe table view might not be a sufficient proxy for that. What does\nthat have to do with anything that we've discussed on this thread\nrecently?\n\n> Review is a feedback mechanism alerting the patch author to possible problems. Listening to feedback is like vacuum, in a way: If it hurts, you're not doing it enough.\n\nAn elegant analogy.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 20:36:11 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 8:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think we're on a very dangerous path here. I want VACUUM to be\n> > better as the next person, but I really don't believe that's the\n> > direction we're headed. I think if we release like this, we're going\n> > to experience more VACUUM pain, not less. And worse still, I don't\n> > think anyone other than Peter and Andres is going to understand why\n> > it's happening.\n>\n> I think that the only sensible course of action at this point is for\n> me to revert the page-level freezing commit from today, and abandon\n> all outstanding work on VACUUM. I will still stand by the basic\n> page-level freezing work, at least to the extent that I am able to.\n\nI have now reverted today's commit. I have also withdrawn all\nremaining work from the patch series as a whole, which is reflected in\nthe CF app. Separately, I have withdrawn 2 other VACUUM related\npatches of mine via the CF app: the antiwraparound autovacuum patch\nseries, plus a patch that did some further work on freezing\nMultiXacts.\n\nI have no intention of picking any of these patches back up again. I\nalso intend to completely avoid new work on both VACUUM and\nautovacuum, not including ambulkdelete() code run by index access\nmethods. I will continue to do maintenance and bugfix work when it\nhappens to involve VACUUM, though.\n\nFor the record, in case it matters: I certainly have no objection to\nanybody else picking up any of this unfinished work for themselves, in\npart or in full.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 25 Jan 2023 22:38:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 11:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Jan 25, 2023 at 7:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Both Andres and I have repeatedly expressed concern about how much is\n> > being changed in the behavior of vacuum, and how quickly, and IMHO on\n> > the basis of very limited evidence that the changes are improvements.\n> > The fact that Andres was very quickly able to find cases where the\n> > patch produces large regression is just more evidence of that. It's\n> > also hard to even understand what has been changed, because the\n> > descriptions are so theoretical.\n>\n> Did you actually read the motivating examples Wiki page?\n\nI don't know. I've read a lot of stuff that you've written on this\ntopic, which has taken a significant amount of time, and I still don't\nunderstand a lot of what you're changing, and I don't agree with all\nof the things that I do understand. I can't state with confidence that\nthe motivating examples wiki page was or was not among the things that\nI read. But, you know, when people start running PostgreSQL 16, and\nhave some problem, they're not going to read the motivating examples\nwiki page. They're going to read the documentation. If they can't find\nthe answer there, they (or some hacker that they contact) will\nprobably read the code comments and the relevant commit messages.\nThose either clearly explain what was changed in a way that somebody\ncan understand, or they don't. If they don't, *the commits are not\ngood enough*, regardless of what other information may exist in any\nother place.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 08:41:06 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 10:56 PM Andres Freund <andres@anarazel.de> wrote:\n> but that's only true because page level freezing neutered\n> vacuum_freeze_min_age. Compared to <16, it's a *huge* change.\n\nDo you think that page-level freezing\n(1de58df4fec7325d91f5a8345757314be7ac05da) was improvidently\ncommitted?\n\nI have always been a bit skeptical of vacuum_freeze_min_age as a\nmechanism. It's certainly true that it is a waste of energy to freeze\ntuples that will soon be removed anyway, but on the other hand,\nrepeatedly dirtying the same page for various different freezing and\nvisibility related reasons *really sucks*, and even repeatedly reading\nthe page because we kept deciding not to do anything yet isn't great.\nIt seems possible that the page-level freezing mechanism could help\nwith that quite a bit, and I think that the heuristic that patch\nproposes is basically reasonable: if there's at least one tuple on the\npage that is old enough to justify freezing, it doesn't seem like a\nbad bet to freeze all the others that can be frozen at the same time,\nat least if it means that we can mark the page all-visible or\nall-frozen. If it doesn't, then I'm not so sure; maybe we're best off\ndeferring as much work as possible to a time when we *can* mark the\npage all-visible or all-frozen.\n\nIn short, I think that neutering vacuum_freeze_min_age at least to\nsome degree might be a good thing, but that's not to say that I'm\naltogether confident in that patch, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 09:20:57 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 7:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples#Patch_2\n> >\n> > The difference between this and VACUUM FREEZE is described here:\n> >\n> > \"Note how we freeze most pages, but still leave a significant number\n> > unfrozen each time, despite using an eager approach to freezing\n> > (2981204 scanned - 2355230 frozen = 625974 pages scanned but left\n> > unfrozen). Again, this is because we don't freeze pages unless they're\n> > already eligible to be set all-visible.\n>\n> The only reason there is a substantial difference is because of pgbench's\n> uniform access pattern. Most real-world applications don't have that.\n\nIt's not pgbench! It's TPC-C. It's actually an adversarial case for\nthe patch series.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 08:24:21 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 5:41 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Jan 25, 2023 at 11:25 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Wed, Jan 25, 2023 at 7:41 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > Both Andres and I have repeatedly expressed concern about how much is\n> > > being changed in the behavior of vacuum, and how quickly, and IMHO on\n> > > the basis of very limited evidence that the changes are improvements.\n> > > The fact that Andres was very quickly able to find cases where the\n> > > patch produces large regression is just more evidence of that. It's\n> > > also hard to even understand what has been changed, because the\n> > > descriptions are so theoretical.\n> >\n> > Did you actually read the motivating examples Wiki page?\n>\n> I don't know. I've read a lot of stuff that you've written on this\n> topic, which has taken a significant amount of time, and I still don't\n> understand a lot of what you're changing, and I don't agree with all\n> of the things that I do understand.\n\nYou complained about the descriptions being theoretical. But there's\nnothing theoretical about the fact that we more or less do *all*\nfreezing in an eventual aggressive VACUUM in many important cases,\nincluding very simple cases like pgbench_history -- the simplest\npossible append-only table case. We'll merrily rewrite the entire\ntable, all at once, for no good reason at all. Consistently, reliably.\nIt's so incredibly obvious that this makes zero sense! And yet I don't\nthink you've ever engaged with such basic points as that one.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 08:35:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 09:20:57 -0500, Robert Haas wrote:\n> On Wed, Jan 25, 2023 at 10:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > but that's only true because page level freezing neutered\n> > vacuum_freeze_min_age. Compared to <16, it's a *huge* change.\n> \n> Do you think that page-level freezing\n> (1de58df4fec7325d91f5a8345757314be7ac05da) was improvidently\n> committed?\n\nI think it's probably ok, but perhaps deserves a bit more thought about when\nto \"opportunistically\" freeze. Perhaps to make it *more* aggressive than it's\nnow.\n\nWith \"opportunistic freezing\" I mean freezing the page, even though we don't\n*have* to freeze any of the tuples.\n\nThe overall condition gating freezing is:\n\tif (pagefrz.freeze_required || tuples_frozen == 0 ||\n\t\t(prunestate->all_visible && prunestate->all_frozen &&\n\t\t fpi_before != pgWalUsage.wal_fpi))\n\nfpi_before is set before the heap_page_prune() call.\n\nTo me the\n fpi_before != pgWalUsage.wal_fpi\"\npart doesn't make a whole lot of sense. For one, it won't at all work if\nfull_page_writes=off. But more importantly, it also means we'll not freeze\nwhen VACUUMing a recently modified page, even if pruning already emitted a WAL\nrecord and we'd not emit an FPI if we freezed the page now.\n\n\nTo me a condition that checked if the buffer is already dirty and if another\nXLogInsert() would be likely to generate an FPI would make more sense. The\nrare race case of a checkpoint starting concurrently doesn't matter IMO.\n\n\nA minor complaint I have about the code is that the \"tuples_frozen == 0\" path\nimo is confusing. We go into the \"freeze\" path, which then inside has another\nif for the tuples_frozen == 0 part. I get that this deduplicates the\nNewRelFrozenXid handling, but it still looks odd.\n\n\n> I have always been a bit skeptical of vacuum_freeze_min_age as a\n> mechanism. 
It's certainly true that it is a waste of energy to freeze\n> tuples that will soon be removed anyway, but on the other hand,\n> repeatedly dirtying the same page for various different freezing and\n> visibility related reasons *really sucks*, and even repeatedly reading\n> the page because we kept deciding not to do anything yet isn't great.\n> It seems possible that the page-level freezing mechanism could help\n> with that quite a bit, and I think that the heuristic that patch\n> proposes is basically reasonable: if there's at least one tuple on the\n> page that is old enough to justify freezing, it doesn't seem like a\n> bad bet to freeze all the others that can be frozen at the same time,\n> at least if it means that we can mark the page all-visible or\n> all-frozen. If it doesn't, then I'm not so sure; maybe we're best off\n> deferring as much work as possible to a time when we *can* mark the\n> page all-visible or all-frozen.\n\nAgreed. Freezing everything if we need to freeze some things seems quite safe\nto me.\n\n\n> In short, I think that neutering vacuum_freeze_min_age at least to\n> some degree might be a good thing, but that's not to say that I'm\n> altogether confident in that patch, either.\n\nI am not too woried about the neutering in the page level freezing patch.\n\nThe combination of the page level work with the eager strategy is where the\nsensibly-more-aggressive freeze_min_age got turbocharged to an imo dangerous\ndegree.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 08:35:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 8:35 AM Andres Freund <andres@anarazel.de> wrote:\n> I think it's probably ok, but perhaps deserves a bit more thought about when\n> to \"opportunistically\" freeze. Perhaps to make it *more* aggressive than it's\n> now.\n>\n> With \"opportunistic freezing\" I mean freezing the page, even though we don't\n> *have* to freeze any of the tuples.\n>\n> The overall condition gating freezing is:\n> if (pagefrz.freeze_required || tuples_frozen == 0 ||\n> (prunestate->all_visible && prunestate->all_frozen &&\n> fpi_before != pgWalUsage.wal_fpi))\n>\n> fpi_before is set before the heap_page_prune() call.\n\nHave you considered page-level checksums, and how the impact on hint\nbits needs to be accounted for here?\n\nAll RDS customers use page-level checksums. And I've noticed that it's\nvery common for the number of FPIs to only be very slightly less than\nthe number of pages dirtied. Much of which is just hint bits. The\n\"fpi_before != pgWalUsage.wal_fpi\" test catches that.\n\n> To me a condition that checked if the buffer is already dirty and if another\n> XLogInsert() would be likely to generate an FPI would make more sense. The\n> rare race case of a checkpoint starting concurrently doesn't matter IMO.\n\nThat's going to be very significantly more aggressive. For example\nit'll impact small tables very differently.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 08:54:55 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 08:54:55 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 26, 2023 at 8:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think it's probably ok, but perhaps deserves a bit more thought about when\n> > to \"opportunistically\" freeze. Perhaps to make it *more* aggressive than it's\n> > now.\n> >\n> > With \"opportunistic freezing\" I mean freezing the page, even though we don't\n> > *have* to freeze any of the tuples.\n> >\n> > The overall condition gating freezing is:\n> > if (pagefrz.freeze_required || tuples_frozen == 0 ||\n> > (prunestate->all_visible && prunestate->all_frozen &&\n> > fpi_before != pgWalUsage.wal_fpi))\n> >\n> > fpi_before is set before the heap_page_prune() call.\n> \n> Have you considered page-level checksums, and how the impact on hint\n> bits needs to be accounted for here?\n> \n> All RDS customers use page-level checksums. And I've noticed that it's\n> very common for the number of FPIs to only be very slightly less than\n> the number of pages dirtied. Much of which is just hint bits. The\n> \"fpi_before != pgWalUsage.wal_fpi\" test catches that.\n\nI assume the case you're thinking of is that pruning did *not* do any changes,\nbut in the process of figuring out that nothing needed to be pruned, we did a\nMarkBufferDirtyHint(), and as part of that emitted an FPI?\n\n\n> > To me a condition that checked if the buffer is already dirty and if another\n> > XLogInsert() would be likely to generate an FPI would make more sense. The\n> > rare race case of a checkpoint starting concurrently doesn't matter IMO.\n> \n> That's going to be very significantly more aggressive. For example\n> it'll impact small tables very differently.\n\nMaybe it would be too aggressive, not sure. 
The cost of a freeze WAL record is\nrelatively small, with one important exception below, if we are 99.99% sure\nthat it's not going to require an FPI and isn't going to dirty the page.\n\nThe exception is that a newer LSN on the page can cause the ringbuffer\nreplacement to trigger more more aggressive WAL flushing. No meaningful\ndifference if we modified the page during pruning, or if the page was already\nin s_b (since it likely won't be written out via the ringbuffer in that case),\nbut if checksums are off and we just hint-dirtied the page, it could be a\nsignificant issue.\n\nThus a modification of the above logic could be to opportunistically freeze if\na ) it won't cause an FPI and either\nb1) the page was already dirty before pruning, as we'll not do a ringbuffer\n replacement in that case\nor\nb2) We wrote a WAL record during pruning, as the difference in flush position\n is marginal\n\nAn even more aggressive version would be to replace b1) with logic that'd\nallow newly dirtying the page if it wasn't read through the ringbuffer. But\nnewly dirtying the page feels like it'd be more dangerous.\n\n\nA less aggressive version would be to check if any WAL records were emitted\nduring heap_page_prune() (instead of FPIs) and whether we'd emit an FPI if we\nmodified the page again. Similar to what we do now, except not requiring an\nFPI to have been emitted.\n\nBut to me it seems a bit odd that VACUUM now is more aggressive if checksums /\nwal_log_hint bits is on, than without them. Which I think is how using either\nof pgWalUsage.wal_fpi, pgWalUsage.wal_records ends up working?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 09:53:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 9:53 AM Andres Freund <andres@anarazel.de> wrote:\n> I assume the case you're thinking of is that pruning did *not* do any changes,\n> but in the process of figuring out that nothing needed to be pruned, we did a\n> MarkBufferDirtyHint(), and as part of that emitted an FPI?\n\nYes.\n\n> > That's going to be very significantly more aggressive. For example\n> > it'll impact small tables very differently.\n>\n> Maybe it would be too aggressive, not sure. The cost of a freeze WAL record is\n> relatively small, with one important exception below, if we are 99.99% sure\n> that it's not going to require an FPI and isn't going to dirty the page.\n>\n> The exception is that a newer LSN on the page can cause the ringbuffer\n> replacement to trigger more more aggressive WAL flushing. No meaningful\n> difference if we modified the page during pruning, or if the page was already\n> in s_b (since it likely won't be written out via the ringbuffer in that case),\n> but if checksums are off and we just hint-dirtied the page, it could be a\n> significant issue.\n\nMost of the overhead of FREEZE WAL records (with freeze plan\ndeduplication and page-level freezing in) is generic WAL record header\noverhead. Your recent adversarial test case is going to choke on that,\ntoo. At least if you set checkpoint_timeout to 1 minute again.\n\n> Thus a modification of the above logic could be to opportunistically freeze if\n> a ) it won't cause an FPI and either\n> b1) the page was already dirty before pruning, as we'll not do a ringbuffer\n> replacement in that case\n> or\n> b2) We wrote a WAL record during pruning, as the difference in flush position\n> is marginal\n>\n> An even more aggressive version would be to replace b1) with logic that'd\n> allow newly dirtying the page if it wasn't read through the ringbuffer. 
But\n> newly dirtying the page feels like it'd be more dangerous.\n\nIn many cases we'll have to dirty the page anyway, just to set\nPD_ALL_VISIBLE. The whole way the logic works is conditioned (whether\ntriggered by an FPI or triggered by my now-reverted GUC) on being able\nto set the whole page all-frozen in the VM.\n\n> A less aggressive version would be to check if any WAL records were emitted\n> during heap_page_prune() (instead of FPIs) and whether we'd emit an FPI if we\n> modified the page again. Similar to what we do now, except not requiring an\n> FPI to have been emitted.\n\nAlso way more aggressive. Not nearly enough on its own.\n\n> But to me it seems a bit odd that VACUUM now is more aggressive if checksums /\n> wal_log_hint bits is on, than without them. Which I think is how using either\n> of pgWalUsage.wal_fpi, pgWalUsage.wal_records ends up working?\n\nWhich part is the odd part? Is it odd that page-level freezing works\nthat way, or is it odd that page-level checksums work that way?\n\nIn any case this seems like an odd thing for you to say, having\neviscerated a patch that really just made the same behavior trigger\nindependently of FPIs in some tables, controlled via a GUC.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 10:44:45 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, 26 Jan 2023 at 19:45, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Jan 26, 2023 at 9:53 AM Andres Freund <andres@anarazel.de> wrote:\n> > I assume the case you're thinking of is that pruning did *not* do any changes,\n> > but in the process of figuring out that nothing needed to be pruned, we did a\n> > MarkBufferDirtyHint(), and as part of that emitted an FPI?\n>\n> Yes.\n>\n> > > That's going to be very significantly more aggressive. For example\n> > > it'll impact small tables very differently.\n> >\n> > Maybe it would be too aggressive, not sure. The cost of a freeze WAL record is\n> > relatively small, with one important exception below, if we are 99.99% sure\n> > that it's not going to require an FPI and isn't going to dirty the page.\n> >\n> > The exception is that a newer LSN on the page can cause the ringbuffer\n> > replacement to trigger more more aggressive WAL flushing. No meaningful\n> > difference if we modified the page during pruning, or if the page was already\n> > in s_b (since it likely won't be written out via the ringbuffer in that case),\n> > but if checksums are off and we just hint-dirtied the page, it could be a\n> > significant issue.\n>\n> Most of the overhead of FREEZE WAL records (with freeze plan\n> deduplication and page-level freezing in) is generic WAL record header\n> overhead. Your recent adversarial test case is going to choke on that,\n> too. At least if you set checkpoint_timeout to 1 minute again.\n\nCould someone explain to me why we don't currently (optionally)\ninclude the functionality of page freezing in the PRUNE records? I\nthink they're quite closely related (in that they both execute in\nVACUUM and are required for long-term system stability), and are even\nmore related now that we have opportunistic page-level freezing. 
I\nthink adding a \"freeze this page as well\"-flag in PRUNE records would\ngo a long way to reducing the WAL overhead of aggressive and more\nopportunistic freezing.\n\n-Matthias\n\n\n",
"msg_date": "Thu, 26 Jan 2023 20:26:00 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 11:35 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> You complained about the descriptions being theoretical. But there's\n> nothing theoretical about the fact that we more or less do *all*\n> freezing in an eventual aggressive VACUUM in many important cases,\n> including very simple cases like pgbench_history -- the simplest\n> possible append-only table case. We'll merrily rewrite the entire\n> table, all at once, for no good reason at all. Consistently, reliably.\n> It's so incredibly obvious that this makes zero sense! And yet I don't\n> think you've ever engaged with such basic points as that one.\n\nI'm aware that that's a problem, and I agree that it sucks. I think\nthat what this patch does is make vacuum more aggressively, and I\nexpect that would help this problem. I haven't said much about that\nbecause I don't think it's controversial. However, the patch also has\na cost, and that's what I think is controversial.\n\nI think it's pretty much impossible to freeze more aggressively\nwithout losing in some scenario or other. If waiting longer to freeze\nwould have resulted in the data getting updated again or deleted\nbefore we froze it, then waiting longer reduces the total amount of\nfreezing work that ever has to be done. Freezing more aggressively\ninevitably gives up some amount of that potential benefit in order to\ntry to secure some other benefit. It's a trade-off.\n\nI think that the goal of a patch that makes vacuum more (or less)\naggressive should be to make the cases where we lose as obscure as\npossible, and the cases where we win as broad as possible. I think\nthat, in order to be a good patch, it needs to be relatively difficult\nto find cases where we incur a big loss. If it's easy to find a big\nloss, then I think it's better to stick with the current behavior,\neven if it's also easy to find a big gain. 
There's nothing wonderful\nabout the current behavior, but (to paraphrase what I think Andres has\nalready said several times) it's better to keep shipping code with the\nsame bad behavior than to put out a new major release with behaviors\nthat are just as bad, but different.\n\nI feel like your emails sometimes seem to suppose that I think that\nyou're a bad person, or a bad developer, or that you have no good\nideas, or that you have no good ideas about this topic, or that this\ntopic is not important, or that we don't need to do better than we are\ncurrently doing. I think none of those things. However, I'm also not\nprepared to go all the way to the other end of the spectrum and say\nthat all of your ideas and everything in this patch are great. I don't\nthink either of those things, either.\n\nI certainly think that freezing more aggressively in some scenarios\ncould be a great idea, but it seems like the patch's theory is to be\nvery nearly maximally aggressive in every vacuum run if the table size\nis greater than some threshold, and I don't think that's right at all.\nI'm not exactly sure what information we should use to decide how\naggressive to be, but I am pretty sure that the size of the table is\nnot it. It's true that, for a small table, the cost of having to\neventually vacuum the whole table at once isn't going to be very high,\nwhereas for a large table, it will be. That line of reasoning makes a\nsize threshold sound reasonable. However, the amount of extra work\nthat we can potentially do by vacuuming more aggressively *also*\nincreases with the table size, which to me means using that a\ncriterion actually isn't sensible at all.\n\nOne idea that I've had about how to solve this problem is to try to\nmake vacuum try to aggressively freeze some portion of the table on\neach pass, and to behave less aggressively on the rest of the table so\nthat, hopefully, no single vacuum does too much work. 
Unfortunately, I\ndon't really know how to do that effectively. If we knew that the\ntable was going to see 10 vacuums before we hit\nautovacuum_freeze_max_age, we could try to have each one do 10% of the\namount of freezing that was going to need to be done rather than\nletting any single vacuum do all of it, but we don't have that sort of\ninformation. Also, even if we did have that sort of information, the\nidea only works if the pages that we freeze sooner are ones that we're\nnot about to update or delete again, and we don't have any idea what\nis likely there. In theory we could have some system that tracks how\nrecently each page range in a table has been modified, and direct our\nfreezing activity toward the ones less-recently modified on the theory\nthat they're not so likely to be modified again in the near future,\nbut in reality we have no such system. So I don't really feel like I\nknow what the right answer is here, yet.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 14:27:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 11:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I think it's pretty much impossible to freeze more aggressively\n> without losing in some scenario or other. If waiting longer to freeze\n> would have resulted in the data getting updated again or deleted\n> before we froze it, then waiting longer reduces the total amount of\n> freezing work that ever has to be done. Freezing more aggressively\n> inevitably gives up some amount of that potential benefit in order to\n> try to secure some other benefit. It's a trade-off.\n\nThere is no question about that.\n\n> I think that the goal of a patch that makes vacuum more (or less)\n> aggressive should be to make the cases where we lose as obscure as\n> possible, and the cases where we win as broad as possible. I think\n> that, in order to be a good patch, it needs to be relatively difficult\n> to find cases where we incur a big loss. If it's easy to find a big\n> loss, then I think it's better to stick with the current behavior,\n> even if it's also easy to find a big gain.\n\nAgain, this seems totally uncontroversial. It's just incredibly vague,\nand not at all actionable.\n\nRelatively difficult for Andres, or for somebody else? What are the\nreal parameters here? Obviously there are no clear answers available.\n\n> However, I'm also not\n> prepared to go all the way to the other end of the spectrum and say\n> that all of your ideas and everything in this patch are great. I don't\n> think either of those things, either.\n\nIt doesn't matter. I'm done with it. This is not a negotiation about\nwhat gets in and what doesn't get in.\n\nAll that I aim to do now is to draw some kind of line under the basic\npage-level freezing work, since of course I'm still responsible for\nthat. 
And perhaps to defend my personal reputation.\n\n> I certainly think that freezing more aggressively in some scenarios\n> could be a great idea, but it seems like the patch's theory is to be\n> very nearly maximally aggressive in every vacuum run if the table size\n> is greater than some threshold, and I don't think that's right at all.\n\nWe'll systematically avoid accumulating debt past a certain point --\nthat's its purpose. That is, we'll avoid accumulating all-visible\npages that eventually need to be frozen.\n\n> I'm not exactly sure what information we should use to decide how\n> aggressive to be, but I am pretty sure that the size of the table is\n> not it. It's true that, for a small table, the cost of having to\n> eventually vacuum the whole table at once isn't going to be very high,\n> whereas for a large table, it will be. That line of reasoning makes a\n> size threshold sound reasonable. However, the amount of extra work\n> that we can potentially do by vacuuming more aggressively *also*\n> increases with the table size, which to me means using that a\n> criterion actually isn't sensible at all.\n\nThe overwhelming cost is usually FPIs in any case. If you're not\nmostly focussing on that, you're focussing on the wrong thing. At\nleast with larger tables. You just have to focus on the picture over\ntime, across multiple VACUUM operations.\n\n> One idea that I've had about how to solve this problem is to try to\n> make vacuum try to aggressively freeze some portion of the table on\n> each pass, and to behave less aggressively on the rest of the table so\n> that, hopefully, no single vacuum does too much work. Unfortunately, I\n> don't really know how to do that effectively.\n\nThat has been proposed a couple of times in the context of this\nthread. It won't work, because the way autovacuum works in general\n(and likely always will work) doesn't allow it. 
With an append-only\ntable, each VACUUM will naturally have to scan significantly more\npages than the last one, forever (barring antiwraparound vacuums). Why\nwouldn't it continue that way? I mean it might not (the table might\nstop growing altogether), but then it doesn't matter much what we do.\n\nIf you're not behaving very proactively at the level of each VACUUM\noperation, then the picture over time is that you're *already* falling\nbehind. At least with an append-only table. You have to think of the\nsequence of operations, not just one.\n\n> In theory we could have some system that tracks how\n> recently each page range in a table has been modified, and direct our\n> freezing activity toward the ones less-recently modified on the theory\n> that they're not so likely to be modified again in the near future,\n> but in reality we have no such system. So I don't really feel like I\n> know what the right answer is here, yet.\n\nSo we need to come up with a way of getting reliable information from\nthe future, about an application that we have no particular\nunderstanding of. As opposed to just eating the cost to some degree,\nand making it configurable.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 11:56:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 11:26 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> Could someone explain to me why we don't currently (optionally)\n> include the functionality of page freezing in the PRUNE records? I\n> think they're quite closely related (in that they both execute in\n> VACUUM and are required for long-term system stability), and are even\n> more related now that we have opportunistic page-level freezing. I\n> think adding a \"freeze this page as well\"-flag in PRUNE records would\n> go a long way to reducing the WAL overhead of aggressive and more\n> opportunistic freezing.\n\nYeah, we've talked about doing that in the past year. It's quite\npossible. It would make quite a lot of sense, because the actual\noverhead of the WAL record for freezing tends to come from the generic\nWAL record header stuff itself. If there was only one record for both,\nthen you'd only need to include the relfilenode and block number (and\nso on) once.\n\nIt would be tricky to handle Multis, so what you'd probably do is just\nfreezing xmin, and possibly aborted and locker XIDs in xmax. So you\nwouldn't completely get rid of the main freeze record, but would be\nable to avoid it in many important cases.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 12:32:01 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 10:44:45 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 26, 2023 at 9:53 AM Andres Freund <andres@anarazel.de> wrote:\n> > > That's going to be very significantly more aggressive. For example\n> > > it'll impact small tables very differently.\n> >\n> > Maybe it would be too aggressive, not sure. The cost of a freeze WAL record is\n> > relatively small, with one important exception below, if we are 99.99% sure\n> > that it's not going to require an FPI and isn't going to dirty the page.\n> >\n> > The exception is that a newer LSN on the page can cause the ringbuffer\n> > replacement to trigger more more aggressive WAL flushing. No meaningful\n> > difference if we modified the page during pruning, or if the page was already\n> > in s_b (since it likely won't be written out via the ringbuffer in that case),\n> > but if checksums are off and we just hint-dirtied the page, it could be a\n> > significant issue.\n> \n> Most of the overhead of FREEZE WAL records (with freeze plan\n> deduplication and page-level freezing in) is generic WAL record header\n> overhead. Your recent adversarial test case is going to choke on that,\n> too. At least if you set checkpoint_timeout to 1 minute again.\n\nI don't quite follow. What do you mean with \"record header overhead\"? Unless\nthat includes FPIs, I don't think that's that commonly true?\n\nThe problematic case I am talking about is when we do *not* emit a WAL record\nduring pruning (because there's nothing to prune), but want to freeze the\ntable. If you don't log an FPI, the remaining big overhead is that increasing\nthe LSN on the page will often cause an XLogFlush() when writing out the\nbuffer.\n\nI don't see what your reference to checkpoint timeout is about here?\n\nAlso, as I mentioned before, the problem isn't specific to checkpoint_timeout\n= 1min. 
It just makes it cheaper to reproduce.\n\n\n> > Thus a modification of the above logic could be to opportunistically freeze if\n> > a ) it won't cause an FPI and either\n> > b1) the page was already dirty before pruning, as we'll not do a ringbuffer\n> > replacement in that case\n> > or\n> > b2) We wrote a WAL record during pruning, as the difference in flush position\n> > is marginal\n> >\n> > An even more aggressive version would be to replace b1) with logic that'd\n> > allow newly dirtying the page if it wasn't read through the ringbuffer. But\n> > newly dirtying the page feels like it'd be more dangerous.\n> \n> In many cases we'll have to dirty the page anyway, just to set\n> PD_ALL_VISIBLE. The whole way the logic works is conditioned (whether\n> triggered by an FPI or triggered by my now-reverted GUC) on being able\n> to set the whole page all-frozen in the VM.\n\nIIRC setting PD_ALL_VISIBLE doesn't trigger an FPI unless we need to log hint\nbits. But freezing does trigger one even without wal_log_hint_bits.\n\nYou're right, it makes sense to consider whether we'll emit a\nXLOG_HEAP2_VISIBLE anyway.\n\n\n> > A less aggressive version would be to check if any WAL records were emitted\n> > during heap_page_prune() (instead of FPIs) and whether we'd emit an FPI if we\n> > modified the page again. Similar to what we do now, except not requiring an\n> > FPI to have been emitted.\n> \n> Also way more aggressive. Not nearly enough on its own.\n\nIn which cases will it be problematically more aggressive?\n\nIf we emitted a WAL record during pruning we've already set the LSN of the\npage to a very recent LSN. We know the page is dirty. Thus we'll already\ntrigger an XLogFlush() during ringbuffer replacement. We won't emit an FPI.\n\n\n\n> > But to me it seems a bit odd that VACUUM now is more aggressive if checksums /\n> > wal_log_hint bits is on, than without them. 
Which I think is how using either\n> > of pgWalUsage.wal_fpi, pgWalUsage.wal_records ends up working?\n> \n> Which part is the odd part? Is it odd that page-level freezing works\n> that way, or is it odd that page-level checksums work that way?\n\nThat page-level freezing works that way.\n\n\n> In any case this seems like an odd thing for you to say, having\n> eviscerated a patch that really just made the same behavior trigger\n> independently of FPIs in some tables, controlled via a GUC.\n\njdksjfkjdlkajsd;lfkjasd;lkfj;alskdfj\n\nThat behaviour I criticized was causing a torrent of FPIs and additional\ndirtying of pages. My proposed replacement for the current FPI check doesn't,\nbecause a) it only triggers when we wrote a WAL record b) It doesn't trigger\nif we would write an FPI.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 12:45:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 2:57 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Relatively difficult for Andres, or for somebody else? What are the\n> real parameters here? Obviously there are no clear answers available.\n\nAndres is certainly smarter than the average guy, but practically any\nscenario that someone can create in a few lines of SQL is something to\nwhich code will be exposed to on some real-world system. If Andres\ncame along and said, hey, well I found a way to make this patch suck,\nand proceeded to describe a scenario that involved a complex set of\ntables and multiple workloads running simultaneously and using a\ndebugger to trigger some race condition and whatever, I'd be like \"OK,\nbut is that really going to happen?\". The actual scenario he came up\nwith is three lines of SQL, and it's nothing remotely obscure. That\nkind of thing is going to happen *all the time*.\n\n> The overwhelming cost is usually FPIs in any case. If you're not\n> mostly focussing on that, you're focussing on the wrong thing. At\n> least with larger tables. You just have to focus on the picture over\n> time, across multiple VACUUM operations.\n\nI think that's all mostly true, but the cases where being more\naggressive can cause *extra* FPIs are worthy of just as much attention\nas the cases where we can reduce them.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 15:54:17 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 20:26:00 +0100, Matthias van de Meent wrote:\n> Could someone explain to me why we don't currently (optionally)\n> include the functionality of page freezing in the PRUNE records?\n\nI think we definitely should (and have argued for it a couple times). It's not\njust about reducing WAL overhead, it's also about reducing redundant\nvisibility checks - which are where a very significant portion of the CPU time\nfor VACUUMing goes to.\n\nBesides performance considerations, it's also just plain weird that\nlazy_scan_prune() can end up with a different visibility than\nheap_page_prune() (mostly due to concurrent aborts).\n\n\nThe number of WAL records we often end up emitting for a processing a single\npage in vacuum is just plain absurd:\n- PRUNE\n- FREEZE_PAGE\n- VISIBLE\n\nThere's afaict no justification whatsoever for these to be separate records.\n\n\n> I think they're quite closely related (in that they both execute in VACUUM\n> and are required for long-term system stability), and are even more related\n> now that we have opportunistic page-level freezing. I think adding a \"freeze\n> this page as well\"-flag in PRUNE records would go a long way to reducing the\n> WAL overhead of aggressive and more opportunistic freezing.\n\nYep.\n\nI think we should also seriously consider setting all-visible during on-access\npruning, and freezing rows during on-access pruning.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 12:55:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 12:54 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The overwhelming cost is usually FPIs in any case. If you're not\n> > mostly focussing on that, you're focussing on the wrong thing. At\n> > least with larger tables. You just have to focus on the picture over\n> > time, across multiple VACUUM operations.\n>\n> I think that's all mostly true, but the cases where being more\n> aggressive can cause *extra* FPIs are worthy of just as much attention\n> as the cases where we can reduce them.\n\nIt's a question of our exposure to real problems, in no small part.\nWhat can we afford to be wrong about? What problem can be fixed by the\nuser more or less as it emerges, and what problem doesn't have that\nquality?\n\nThere is very good reason to believe that the large majority of all\ndata that people store in a system like Postgres is extremely cold\ndata:\n\nhttps://www.microsoft.com/en-us/research/video/cost-performance-in-modern-data-stores-how-data-cashing-systems-succeed/\nhttps://brandur.org/fragments/events\n\nHaving a separate aggressive step that rewrites an entire large table,\napparently at random, is just a huge burden to users. You've said that\nyou agree that it sucks, but somehow I still can't shake the feeling\nthat you don't fully understand just how much it sucks.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 13:06:31 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 4:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There is very good reason to believe that the large majority of all\n> data that people store in a system like Postgres is extremely cold\n> data:\n\nThe systems where I end up troubleshooting problems seem to be, most\ntypically, busy OLTP systems. I'm not in a position to say whether\nthat's more or less common than systems with extremely cold data, but\nI am in a position to say that my employer will have a lot fewer happy\ncustomers if we regress that use case. Naturally I'm keen to avoid\nthat.\n\n> Having a separate aggressive step that rewrites an entire large table,\n> apparently at random, is just a huge burden to users. You've said that\n> you agree that it sucks, but somehow I still can't shake the feeling\n> that you don't fully understand just how much it sucks.\n\nHa!\n\nWell, that's possible. But maybe you don't understand how much your\npatch makes other things suck.\n\nI don't think we can really get anywhere here by postulating that the\nproblem is the other person's lack of understanding, even if such a\npostulate should happen to be correct.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 16:21:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 1:22 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Jan 26, 2023 at 4:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > There is very good reason to believe that the large majority of all\n> > data that people store in a system like Postgres is extremely cold\n> > data:\n>\n> The systems where I end up troubleshooting problems seem to be, most\n> typically, busy OLTP systems. I'm not in a position to say whether\n> that's more or less common than systems with extremely cold data, but\n> I am in a position to say that my employer will have a lot fewer happy\n> customers if we regress that use case. Naturally I'm keen to avoid\n> that.\n\nThis is the kind of remark that makes me think that you don't get it.\n\nThe most influential OLTP benchmark of all time is TPC-C, which has\nexactly this problem. In spades -- it's enormously disruptive. Which\nis one reason why I used it as a showcase for a lot of this work. Plus\npractical experience (like the Heroku database in the blog post I\nlinked to) fully agrees with that benchmark, as far as this stuff goes\n-- that was also a busy OLTP database.\n\nOnline transaction involves transactions. Right? There is presumably\nsome kind of ledger, some kind of orders table. Naturally these have\nentries that age out fairly predictably. After a while, almost all the\ndata is cold data. It is usually about that simple.\n\nOne of the key strengths of systems like Postgres is the ability to\ninexpensively store a relatively large amount of data that has just\nabout zero chance of being read, let alone modified. While at the same\ntime having decent OLTP performance for the hot data. Not nearly as\ngood as an in-memory system, mind you -- and yet in-memory systems\nremain largely a niche thing.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 13:51:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 12:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > Most of the overhead of FREEZE WAL records (with freeze plan\n> > deduplication and page-level freezing in) is generic WAL record header\n> > overhead. Your recent adversarial test case is going to choke on that,\n> > too. At least if you set checkpoint_timeout to 1 minute again.\n>\n> I don't quite follow. What do you mean with \"record header overhead\"? Unless\n> that includes FPIs, I don't think that's that commonly true?\n\nEven if there are no directly observable FPIs, there is still extra\nWAL, which can cause FPIs indirectly, just by making checkpoints more\nfrequent. I feel ridiculous even having to explain this to you.\n\n> The problematic case I am talking about is when we do *not* emit a WAL record\n> during pruning (because there's nothing to prune), but want to freeze the\n> table. If you don't log an FPI, the remaining big overhead is that increasing\n> the LSN on the page will often cause an XLogFlush() when writing out the\n> buffer.\n>\n> I don't see what your reference to checkpoint timeout is about here?\n>\n> Also, as I mentioned before, the problem isn't specific to checkpoint_timeout\n> = 1min. It just makes it cheaper to reproduce.\n\nThat's flagrantly intellectually dishonest. Sure, it made it easier to\nreproduce. But that's not all it did!\n\nYou had *lots* of specific numbers and technical details in your first\nemail, such as \"Time for vacuuming goes up to ~5x. WAL volume to\n~9x.\". But you did not feel that it was worth bothering with details\nlike having set checkpoint_timeout to 1 minute, which is a setting\nthat nobody uses, and obviously had a multiplicative effect. That\ndetail was unimportant. 
I had to drag it out of you!\n\nYou basically found a way to add WAL overhead to a system/workload\nthat is already in a write amplification vicious cycle, with latent\ntipping point type behavior.\n\nThere is a practical point here, that is equally obvious, and yet\nsomehow still needs to be said: benchmarks like that one are basically\ncompletely free of useful information. If we can't agree on how to\nassess such things in general, then what can we agree on when it comes\nto what should be done about it, what trade-off to make, when it comes\nto any similar question?\n\n> > In many cases we'll have to dirty the page anyway, just to set\n> > PD_ALL_VISIBLE. The whole way the logic works is conditioned (whether\n> > triggered by an FPI or triggered by my now-reverted GUC) on being able\n> > to set the whole page all-frozen in the VM.\n>\n> IIRC setting PD_ALL_VISIBLE doesn't trigger an FPI unless we need to log hint\n> bits. But freezing does trigger one even without wal_log_hint_bits.\n\nThat is correct.\n\n> You're right, it makes sense to consider whether we'll emit a\n> XLOG_HEAP2_VISIBLE anyway.\n\nAs written the page-level freezing FPI mechanism probably doesn't\nreally stand to benefit much from doing that. Either checksums are\ndisabled and it's just a hint, or they're enabled and there is a very\nhigh chance that we'll get an FPI inside lazy_scan_prune rather than\nright after it is called, when PD_ALL_VISIBLE is set.\n\nThat's not perfect, of course, but it doesn't have to be. Perhaps it\nshould still be improved, just on general principle.\n\n> > > A less aggressive version would be to check if any WAL records were emitted\n> > > during heap_page_prune() (instead of FPIs) and whether we'd emit an FPI if we\n> > > modified the page again. Similar to what we do now, except not requiring an\n> > > FPI to have been emitted.\n> >\n> > Also way more aggressive. 
Not nearly enough on its own.\n>\n> In which cases will it be problematically more aggressive?\n>\n> If we emitted a WAL record during pruning we've already set the LSN of the\n> page to a very recent LSN. We know the page is dirty. Thus we'll already\n> trigger an XLogFlush() during ringbuffer replacement. We won't emit an FPI.\n\nYou seem to be talking about this as if the only thing that could\nmatter is the immediate FPI -- the first order effects -- and not any\nsecond order effects. You certainly didn't get to 9x extra WAL\noverhead by controlling for that before. Should I take it that you've\ndecided to assess these things more sensibly now? Out of curiosity:\nwhy the change of heart?\n\n> > > But to me it seems a bit odd that VACUUM now is more aggressive if checksums /\n> > > wal_log_hint bits is on, than without them. Which I think is how using either\n> > > of pgWalUsage.wal_fpi, pgWalUsage.wal_records ends up working?\n> >\n> > Which part is the odd part? Is it odd that page-level freezing works\n> > that way, or is it odd that page-level checksums work that way?\n>\n> That page-level freezing works that way.\n\nI think that it will probably cause a little confusion, and should be\nspecifically documented. But other than that, it seems reasonable\nenough to me. I mean, should I not do something that's going to be of\nsignificant help to users with checksums, just because it'll be\nsomewhat confusing to a small minority of them?\n\n> > In any case this seems like an odd thing for you to say, having\n> > eviscerated a patch that really just made the same behavior trigger\n> > independently of FPIs in some tables, controlled via a GUC.\n>\n> jdksjfkjdlkajsd;lfkjasd;lkfj;alskdfj\n>\n> That behaviour I critizied was causing a torrent of FPIs and additional\n> dirtying of pages. 
My proposed replacement for the current FPI check doesn't,\n> because a) it only triggers when we wrote a WAL record b) It doesn't trigger\n> if we would write an FPI.\n\nIt increases the WAL written in many important cases that\nvacuum_freeze_strategy_threshold avoided. Sure, it did have some\nproblems, but the general idea of adding some high level\ncontext/strategies seems essential to me.\n\nYou also seem to be suggesting that your proposed change to how basic\npage-level freezing works will make freezing of pages on databases\nwith page-level checksums similar to an equivalent case without\nchecksums enabled. Even assuming that that's an important goal, you\nwon't be much closer to achieving it under your scheme, since hint\nbits being set during VACUUM and requiring an FPI still make a huge\ndifference. Tables like pgbench_history have pages that generally\naren't pruned, that don't need to log an FPI just to set\nPD_ALL_VISIBLE once checksums are disabled.\n\nThat's the difference that users are going to notice between checksums\nenabled vs disabled, if they notice any -- it's the most important one\nby far.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 15:36:52 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 14:27:53 -0500, Robert Haas wrote:\n> One idea that I've had about how to solve this problem is to try to\n> make vacuum try to aggressively freeze some portion of the table on\n> each pass, and to behave less aggressively on the rest of the table so\n> that, hopefully, no single vacuum does too much work.\n\nI agree that this rough direction is worthwhile to purse.\n\n\n> Unfortunately, I don't really know how to do that effectively. If we knew\n> that the table was going to see 10 vacuums before we hit\n> autovacuum_freeze_max_age, we could try to have each one do 10% of the\n> amount of freezing that was going to need to be done rather than letting any\n> single vacuum do all of it, but we don't have that sort of information.\n\nI think, quite fundamentally, it's not possible to bound the amount of work an\nanti-wraparound vacuum has to do if we don't have an age based autovacuum\ntrigger kicking in before autovacuum_freeze_max_age. After all, there might be\nno autovacuum before that's autovacuum_freeze_max_age is reached.\n\nBut there's just no reason to not have a trigger below\nautovacuum_freeze_max_age. That's why I think Peter's patch to split age and\nanti-\"auto-cancel\" autovacuums is an strictly necessary change if we want to\nmake autovacuum fundamentally suck less. There's a few boring details to\nfigure out how to set/compute those limits, but I don't think there's anything\nfundamentally hard.\n\n\nI think we also need the number of all-frozen pages in pg_class if we want to\nmake better scheduling decision. As we already compute the number of\nall-visible pages at the end of vacuuming, we can compute the number of\nall-frozen pages as well. The space for another integer in pg_class doesn't\nbother me one bit.\n\n\nLet's say we had a autovacuum_vacuum_age trigger of 100m, and\nautovacuum_freeze_max_age=500m. 
We know that we're roughly going to be\nvacuuming 5 times before reaching autovacuum_freeze_max_age (very slow\nautovacuums are an issue, but if one autovacuum takes 100m+ xids long, there's\nnot much we can do).\n\nWith that we could determine the eager percentage along the lines of:\n frozen_target = Min(age(relfrozenxid), autovacuum_freeze_max_age)/autovacuum_freeze_max_age\n eager_percentage = Min(0, frozen_target * relpages - pg_class.relallfrozen * relpages)\n\nOne thing I don't know fully how to handle is how to ensure that we try to\nfreeze a different part of the table each vacuum. I guess we could store a\npage number in pgstats?\n\n\nThis would help address the \"cliff\" issue of reaching\nautovacuum_freeze_max_age. What it would *not* address, on its own, is the\nnumber of times we rewrite pages.\n\nI can guess at a few ways to heuristically identify when tables are \"append\nmostly\" from vacuum's view (a table can be update heavy, but very localized to\nrecent rows, and still be append mostly from vacuum's view). There's obvious\ncases, e.g. when there are way more inserts than dead rows. But other cases\nare harder.\n\n\n\n> Also, even if we did have that sort of information, the idea only works if\n> the pages that we freeze sooner are ones that we're not about to update or\n> delete again, and we don't have any idea what is likely there.\n\nPerhaps we could use something like\n (age(relfrozenxid) - age(newest_xid_on_page)) / age(relfrozenxid)\nas a heuristic?\n\nI have a gut feeling that we should somehow collect/use statistics about the\nnumber of frozen pages, marked as such by the last (or recent?) vacuum, that\nhad to be \"defrosted\" by backends. But I don't quite know how to yet. 
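[Editor's sketch of the eager-percentage arithmetic above, in Python, purely illustrative. It reads the hypothetical pg_class.relallfrozen counter as a plain page count, and clamps the shortfall at zero with Max -- the Min written above would always yield a non-positive value, so Max is presumably what was intended.]

```python
def eager_freeze_fraction(relfrozenxid_age, autovacuum_freeze_max_age,
                          relpages, relallfrozen):
    # frozen_target rises linearly from 0 to 1 as relfrozenxid ages
    # toward autovacuum_freeze_max_age.
    frozen_target = (min(relfrozenxid_age, autovacuum_freeze_max_age)
                     / autovacuum_freeze_max_age)
    # Pages we are short of the linear target; clamped at zero.
    pages_to_freeze = max(0, frozen_target * relpages - relallfrozen)
    return pages_to_freeze / relpages if relpages else 0.0

# 100m of a 500m xid budget used, nothing frozen yet: aim to freeze ~20%.
print(eager_freeze_fraction(100_000_000, 500_000_000, 1000, 0))    # 0.2
# Already ahead of the linear target: no eager freezing needed.
print(eager_freeze_fraction(100_000_000, 500_000_000, 1000, 400))  # 0.0
```

Each of the roughly 5 vacuums before autovacuum_freeze_max_age then does a bounded slice of the freezing work, instead of the last one doing all of it.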
I think\nwe could collect statistics about that by storing the LSN of the last vacuum\nin the shared stats, and incrementing that counter when defrosting.\n\nA lot of things like that would work a whole lot better if we had statistics\nthat take older data into account, but weigh it less than more recent\ndata. But that's hard/expensive to collect.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 17:15:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 15:36:52 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 26, 2023 at 12:45 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Most of the overhead of FREEZE WAL records (with freeze plan\n> > > deduplication and page-level freezing in) is generic WAL record header\n> > > overhead. Your recent adversarial test case is going to choke on that,\n> > > too. At least if you set checkpoint_timeout to 1 minute again.\n> >\n> > I don't quite follow. What do you mean with \"record header overhead\"? Unless\n> > that includes FPIs, I don't think that's that commonly true?\n>\n> Even if there are no directly observable FPIs, there is still extra\n> WAL, which can cause FPIs indirectly, just by making checkpoints more\n> frequent. I feel ridiculous even having to explain this to you.\n\nWhat does that have to do with \"generic WAL record overhead\"?\n\n\nI also don't really see how that is responsive to anything else in my\nemail. That's just as true for the current gating condition (the issuance of\nan FPI during heap_page_prune() / HTSV()).\n\nWhat I was wondering about is whether we should replace the\n fpi_before != pgWalUsage.wal_fpi\nwith\n records_before != pgWalUsage.wal_records && !WouldIssueFpi(page)\n\n\n> > The problematic case I am talking about is when we do *not* emit a WAL record\n> > during pruning (because there's nothing to prune), but want to freeze the\n> > table. If you don't log an FPI, the remaining big overhead is that increasing\n> > the LSN on the page will often cause an XLogFlush() when writing out the\n> > buffer.\n> >\n> > I don't see what your reference to checkpoint timeout is about here?\n> >\n> > Also, as I mentioned before, the problem isn't specific to checkpoint_timeout\n> > = 1min. It just makes it cheaper to reproduce.\n>\n> That's flagrantly intellectually dishonest. Sure, it made it easier to\n> reproduce. 
But that's not all it did!\n>\n> You had *lots* of specific numbers and technical details in your first\n> email, such as \"Time for vacuuming goes up to ~5x. WAL volume to\n> ~9x.\". But you did not feel that it was worth bothering with details\n> like having set checkpoint_timeout to 1 minute, which is a setting\n> that nobody uses, and obviously had a multiplicative effect. That\n> detail was unimportant. I had to drag it out of you!\n\nThe multiples were for checkpoint_timeout=5min, with\n '250s' instead of WHERE ts < now() - '120s'\n\nI started out with checkpoint_timeout=1min, as I wanted to quickly test my\ntheory. Then I increased checkpoint_timeout back to 5min, adjusted the query\nto some randomly guessed value. Happened to get nearly the same results.\n\nI then experimented more with '1min', because it's less annoying to have to\nwait for 120s until deletions start, than to wait for 250s. Because it's\nquicker to run I thought I'd share the less resource intensive version. A\nmistake as I now realize.\n\n\nThis wasn't intended as a carefully designed benchmark, or anything. It was a\nquick proof for a problem that I found obvious. And it's not something worth\ntesting carefully - e.g. the constants in the test are actually quite hardware\nspecific, because the insert/seconds rate is very machine specific, and it's\ncompletely unnecessarily hardware intensive due to the use of single-row\ninserts, instead of batched operations. It's just a POC.\n\n\n\n> You basically found a way to add WAL overhead to a system/workload\n> that is already in a write amplification vicious cycle, with latent\n> tipping point type behavior.\n>\n> There is a practical point here, that is equally obvious, and yet\n> somehow still needs to be said: benchmarks like that one are basically\n> completely free of useful information. 
If we can't agree on how to\n> assess such things in general, then what can we agree on when it comes\n> to what should be done about it, what trade-off to make, when it comes\n> to any similar question?\n\nIt's not at all free of useful information. It reproduces a problem I\npredicted repeatedly, that others in the discussion also wondered about, that\nyou refused to acknowledge or address.\n\nIt's not a good benchmark - I completely agree with that much. It was not\ndesigned to carefully benchmark different settings or such. It was designed to\nshow a problem. And it does that.\n\n\n\n> > You're right, it makes sense to consider whether we'll emit a\n> > XLOG_HEAP2_VISIBLE anyway.\n>\n> As written the page-level freezing FPI mechanism probably doesn't\n> really stand to benefit much from doing that. Either checksums are\n> disabled and it's just a hint, or they're enabled and there is a very\n> high chance that we'll get an FPI inside lazy_scan_prune rather than\n> right after it is called, when PD_ALL_VISIBLE is set.\n\nI think it might be useful with logged hint bits, consider cases where all the\ntuples on the page were already fully hinted. That's not uncommon, I think?\n\n\n> > > > A less aggressive version would be to check if any WAL records were emitted\n> > > > during heap_page_prune() (instead of FPIs) and whether we'd emit an FPI if we\n> > > > modified the page again. Similar to what we do now, except not requiring an\n> > > > FPI to have been emitted.\n> > >\n> > > Also way more aggressive. Not nearly enough on its own.\n> >\n> > In which cases will it be problematically more aggressive?\n> >\n> > If we emitted a WAL record during pruning we've already set the LSN of the\n> > page to a very recent LSN. We know the page is dirty. Thus we'll already\n> > trigger an XLogFlush() during ringbuffer replacement. 
We won't emit an FPI.\n>\n> You seem to be talking about this as if the only thing that could\n> matter is the immediate FPI -- the first order effects -- and not any\n> second order effects.\n\n\t * Freeze the page when heap_prepare_freeze_tuple indicates that at least\n\t * one XID/MXID from before FreezeLimit/MultiXactCutoff is present. Also\n\t * freeze when pruning generated an FPI, if doing so means that we set the\n\t * page all-frozen afterwards (might not happen until final heap pass).\n\t */\n\tif (pagefrz.freeze_required || tuples_frozen == 0 ||\n\t\t(prunestate->all_visible && prunestate->all_frozen &&\n\t\t fpi_before != pgWalUsage.wal_fpi))\n\nThat's just as true for this.\n\nWhat I'd like to know is why the second order effects of the above are lesser\nthan for\n\tif (pagefrz.freeze_required || tuples_frozen == 0 ||\n\t\t(prunestate->all_visible && prunestate->all_frozen &&\n\t\t records_before != pgWalUsage.wal_records && !WouldIssueFpi(page)))\n\n\n\n\n> You certainly didn't get to 9x extra WAL\n> overhead by controlling for that before. Should I take it that you've\n> decided to assess these things more sensibly now? Out of curiosity:\n> why the change of heart?\n\nDude.\n\nWhat would the point have been to invest a lot of time in a repro for a\npredicted problem? It's a problem repro, not a carefully designed benchmark.\n\n\n\n> > > In any case this seems like an odd thing for you to say, having\n> > > eviscerated a patch that really just made the same behavior trigger\n> > > independently of FPIs in some tables, controlled via a GUC.\n> >\n> > jdksjfkjdlkajsd;lfkjasd;lkfj;alskdfj\n> >\n> > That behaviour I critizied was causing a torrent of FPIs and additional\n> > dirtying of pages. 
My proposed replacement for the current FPI check doesn't,\nbecause a) it only triggers when we wrote a WAL record b) It doesn't trigger\nif we would write an FPI.\n>\n> It increases the WAL written in many important cases that\n> vacuum_freeze_strategy_threshold avoided. Sure, it did have some\n> problems, but the general idea of adding some high level\n> context/strategies seems essential to me.\n\nI was discussing changing the conditions for the \"opportunistic pruning\"\nlogic, not a replacement for the eager freezing strategy.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 18:37:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 6:37 PM Andres Freund <andres@anarazel.de> wrote:\n> I also don't really see how that is responsive to anything else in my\n> email. That's just as true for the current gating condition (the issuance of\n> an FPI during heap_page_prune() / HTSV()).\n>\n> What I was wondering about is whether we should replace the\n> fpi_before != pgWalUsage.wal_fpi\n> with\n> records_before != pgWalUsage.wal_records && !WouldIssueFpi(page)\n\nI understand that. What I'm saying is that that's going to create a\nhuge problem of its own, unless you separately account for that\nproblem.\n\nThe simplest and obvious example is something like a pgbench_tellers\ntable. VACUUM will generally run fast enough relative to the workload\nthat it will set some of those pages all-visible. Now it's going to\nfreeze them, too. Arguably it shouldn't even be setting the pages\nall-visible, but now you make that existing problem much worse.\n\nThe important point is that there doesn't seem to be any good way\naround thinking about the table as a whole if you're going to freeze\nspeculatively. This is not the same dynamic as we see with the FPI\nthing IMV -- that's not nearly so speculative as what you're talking\nabout, since it is speculative in roughly the same sense that eager\nfreezing was speculative (hence the suggestion that something like\nvacuum_freeze_strategy_threshold could have a roll to play).\n\nThe FPI thing is mostly about the cost now versus the cost later on.\nYou're gambling that you won't get another FPI later on if you freeze\nnow. But the cost of a second FPI later on is so much higher than the\nadded cost of freezing now that that's a very favorable bet, that we\ncan afford to \"lose\" many times while still coming out ahead overall.\nAnd even when we lose, you generally still won't have been completely\nwrong -- even then there generally will indeed be a second FPI later\non for the same page, to go with everything else. 
This makes the\nwasted freezing even less significant, on a comparative basis!\n\nIt's also likely true that an FPI in lazy_scan_prune is a much\nstronger signal, but I think that the important dynamic is that we're\nreasoning about \"costs now vs costs later on\". The asymmetry is really\nimportant.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 19:01:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 19:01:03 -0800, Peter Geoghegan wrote:\n> On Thu, Jan 26, 2023 at 6:37 PM Andres Freund <andres@anarazel.de> wrote:\n> > I also don't really see how that is responsive to anything else in my\n> > email. That's just as true for the current gating condition (the issuance of\n> > an FPI during heap_page_prune() / HTSV()).\n> >\n> > What I was wondering about is whether we should replace the\n> > fpi_before != pgWalUsage.wal_fpi\n> > with\n> > records_before != pgWalUsage.wal_records && !WouldIssueFpi(page)\n>\n> I understand that. What I'm saying is that that's going to create a\n> huge problem of its own, unless you separately account for that\n> problem.\n\n> The simplest and obvious example is something like a pgbench_tellers\n> table. VACUUM will generally run fast enough relative to the workload\n> that it will set some of those pages all-visible. Now it's going to\n> freeze them, too. Arguably it shouldn't even be setting the pages\n> all-visible, but now you make that existing problem much worse.\n\nSo the benefit of the FPI condition is that it indicates that the page hasn't\nbeen updated all that recently, because, after all, a checkpoint has happened\nsince? If that's the intention, it needs a huge honking comment - at least I\ncan't read that out of:\n\n Also freeze when pruning generated an FPI, if doing so means that we set the\n page all-frozen afterwards (might not happen until final heap pass).\n\n\nIt doesn't seem like a great proxy to me. ISTM that this means that how\naggressive vacuum is about opportunistically freezing pages depends on config\nvariables like checkpoint_timeout & max_wal_size (less common opportunistic\nfreezing), full_page_writes & use of unlogged tables (no opportunistic\nfreezing), and the largely random scheduling of autovac workers.\n\n\nI can see it making a difference for pgbench_tellers, but it's a pretty small\ndifference in overall WAL volume. 
I can think of more adverse workloads though\n- but even there the difference seems not huge, and not predictably\nreached. Due to the freeze plan stuff you added, the amount of WAL for\nfreezing a page is pretty darn small compared\nto the amount of WAL needed to fill a page with non-frozen tuples.\n\nThat's not to say we shouldn't reduce the risk - I agree that both the \"any\nfpi\" and the \"any record\" condition can have adverse effects!\n\n\nHowever, an already dirty page getting frozen is also the one case where\nfreezing won't have a meaningful write amplification effect. So I think it's worth\nspending effort figuring out how we can make freezing in that situation\nhave unlikely and small downsides.\n\n\nThe cases with downsides are tables that are very heavily updated throughout,\nwhere the page is going to be defrosted again almost immediately. As you say,\nthe all-visible marking has a similar problem.\n\n\nEssentially the \"any fpi\" logic is a very coarse grained way of using the page\nLSN as a measurement. As I said, I don't think \"has a checkpoint occurred\nsince the last write\" is a good metric to avoid unnecessary freezing - it's\ntoo coarse. But I think using the LSN is the right thought. What about\nsomething like\n\n   lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\n   if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)\n      FreezeMe();\n\nI probably got some details wrong, what I am going for with lsn_threshold is\nthat we'd freeze an already dirty page if it's not been updated within 10% of\nthe LSN distance to the last VACUUM.\n\n\n\n> The important point is that there doesn't seem to be any good way\n> around thinking about the table as a whole if you're going to freeze\n> speculatively. This is not the same dynamic as we see with the FPI\n> thing IMV -- that's not nearly so speculative as what you're talking\n> about, since it is speculative in roughly the same sense that eager\n> freezing was speculative (hence the suggestion that something like\n> vacuum_freeze_strategy_threshold could have a role to play).\n\nI don't think the speculation is that fundamentally different - a heavily\nupdated table with a bit of a historic, non-changing portion, makes\nvacuum_freeze_strategy_threshold freeze way more aggressively than either \"any\nrecord\" or \"any fpi\".\n\n\n> The FPI thing is mostly about the cost now versus the cost later on.\n> You're gambling that you won't get another FPI later on if you freeze\n> now. But the cost of a second FPI later on is so much higher than the\n> added cost of freezing now that that's a very favorable bet, that we\n> can afford to \"lose\" many times while still coming out ahead overall.\n\nAgreed. And not just avoiding FPIs, avoiding another dirtying of the page! The\nlatter part is especially huge IMO. Depending on s_b size it can also avoid\nanother *read* of the page...\n\n\n> And even when we lose, you generally still won't have been completely\n> wrong -- even then there generally will indeed be a second FPI later\n> on for the same page, to go with everything else. This makes the\n> wasted freezing even less significant, on a comparative basis!\n\nThis is precisely why I think that we can afford to be quite aggressive about\nfreezing already dirty pages...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 26 Jan 2023 21:58:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 9:58 PM Andres Freund <andres@anarazel.de> wrote:\n> It doesn't seem like a great proxy to me. ISTM that this means that how\n> aggressive vacuum is about opportunistically freezing pages depends on config\n> variables like checkpoint_timeout & max_wal_size (less common opportunistic\n> freezing), full_page_writes & use of unlogged tables (no opportunistic\n> freezing), and the largely random scheduling of autovac workers.\n\nThe FPI thing was originally supposed to complement the freezing\nstrategies stuff, and possibly other rules that live in\nlazy_scan_prune. Obviously you can freeze a page by following any rule\nthat you care to invent -- you can decide by calling random(). Two\nrules can coexist during the same VACUUM (actually, they do already).\n\n> Essentially the \"any fpi\" logic is a very coarse grained way of using the page\n> LSN as a measurement. As I said, I don't think \"has a checkpoint occurred\n> since the last write\" is a good metric to avoid unnecessary freezing - it's\n> too coarse. But I think using the LSN is the right thought. What about\n> something like\n>\n> lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\n> if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)\n> FreezeMe();\n>\n> I probably got some details wrong, what I am going for with lsn_threshold is\n> that we'd freeze an already dirty page if it's not been updated within 10% of\n> the LSN distance to the last VACUUM.\n\nIt seems to me that you're reinventing something akin to eager\nfreezing strategy here. At least that's how I define it, since now\nyou're bringing the high level context into it; what happens with the\ntable, with VACUUM operations, and so on. Obviously this requires\ntracking the metadata that you suppose will be available in some way\nor other, in particular things like lsn_of_last_vacuum.\n\nWhat about unlogged/temporary tables? 
The obvious thing to do there is\nwhat I did in the patch that was reverted (freeze whenever the page\nwill thereby become all-frozen), and forget about LSNs. But you have\nalready objected to that part, specifically.\n\nBTW, you still haven't changed the fact that you get rather different\nbehavior with checksums/wal_log_hints. I think that that's good, but\nyou didn't seem to.\n\n> I don't think the speculation is that fundamentally different - a heavily\n> updated table with a bit of a historic, non-changing portion, makes\n> vacuum_freeze_strategy_threshold freeze way more aggressively than either \"any\n> record\" or \"any fpi\".\n\nThat's true. The point I was making is that both this proposal and\neager freezing are based on some kind of high level picture of the\nneeds of the table, based on high level metadata. To me that's the\ndefining characteristic.\n\n> > And even when we lose, you generally still won't have been completely\n> > wrong -- even then there generally will indeed be a second FPI later\n> > on for the same page, to go with everything else. This makes the\n> > wasted freezing even less significant, on a comparative basis!\n>\n> This is precisely why I think that we can afford to be quite aggressive about\n> freezing already dirty pages...\n\nI'm beginning to warm to this idea, now that I understand it a little better.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 26 Jan 2023 23:11:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-26 23:11:41 -0800, Peter Geoghegan wrote:\n> > Essentially the \"any fpi\" logic is a very coarse grained way of using the page\n> > LSN as a measurement. As I said, I don't think \"has a checkpoint occurred\n> > since the last write\" is a good metric to avoid unnecessary freezing - it's\n> > too coarse. But I think using the LSN is the right thought. What about\n> > something like\n> >\n> > lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\n> > if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)\n> > FreezeMe();\n> >\n> > I probably got some details wrong, what I am going for with lsn_threshold is\n> > that we'd freeze an already dirty page if it's not been updated within 10% of\n> > the LSN distance to the last VACUUM.\n> \n> It seems to me that you're reinventing something akin to eager\n> freezing strategy here. At least that's how I define it, since now\n> you're bringing the high level context into it; what happens with the\n> table, with VACUUM operations, and so on. Obviously this requires\n> tracking the metadata that you suppose will be available in some way\n> or other, in particular things like lsn_of_last_vacuum.\n\nI agree with bringing high-level context into the decision about whether to\nfreeze agressively - my problem with the eager freezing strategy patch isn't\nthat it did that too much, it's that it didn't do it enough.\n\n\nBut I also don't think what I describe above is really comparable to \"table\nlevel\" eager freezing though - the potential worst case overhead is a small\nfraction of the WAL volume, and there's zero increase in data write volume. I\nsuspect the absolute worst case of \"always freeze dirty pages\" is when a\nsingle tuple on the page gets updated immediately after every time we freeze\nthe page - a single tuple is where the freeze record is the least space\nefficient. The smallest update is about the same size as the smallest freeze\nrecord. 
For that to amount to a large WAL increase you'd a crazy rate of such\nupdates interspersed with vacuums. In slightly more realistic cases (i.e. not\ncolumn less tuples that constantly get updated and freezing happening all the\ntime) you end up with a reasonably small WAL rate overhead.\n\nThat worst case of \"freeze dirty\" is bad enough to spend some brain and\ncompute cycles to prevent. But if we don't always get it right in some\nworkload, it's not *awful*.\n\n\nThe worst case of the \"eager freeze strategy\" is a lot larger - it's probably\nsomething like updating one narrow tuple every page, once per checkpoint, so\nthat each freeze generates an FPI. I think that results in a max overhead of\n2x for data writes, and about 150x for WAL volume (ratio of one update record\nwith an FPI). Obviously that's a pointless workload, but I do think that\nanalyzing the \"outer boundaries\" of the regression something can cause, can be\nhelpful.\n\n\nI think one way forward with the eager strategy approach would be to have a\nvery narrow gating condition for now, and then incrementally expand it in\nlater releases.\n\nOne use-case where the eager strategy is particularly useful is\n[nearly-]append-only tables - and it's also the one workload that's reasonably\neasy to detect using stats. Maybe something like\n(dead_tuples_since_last_vacuum / inserts_since_last_vacuum) < 0.05\nor so.\n\nThat'll definitely leave out loads of workloads where eager freezing would be\nuseful - but are there semi-reasonable workloads where it'll hurt badly? I\ndon't *think* so.\n\n\n> What about unlogged/temporary tables? The obvious thing to do there is\n> what I did in the patch that was reverted (freeze whenever the page\n> will thereby become all-frozen), and forget about LSNs. But you have\n> already objected to that part, specifically.\n\nMy main concern about that is the data write amplification it could cause when\npage is clean when we start freezing. 
But I can't see a large potential\ndownside to always freezing unlogged/temp tables when the page is already\ndirty.\n\n\n> BTW, you still haven't changed the fact that you get rather different\n> behavior with checksums/wal_log_hints. I think that that's good, but\n> you didn't seem to.\n\nI think that, if we had something like the recency test I was talking about,\nwe could afford to alway freeze when the page is already dirty and not very\nrecently modified. I.e. not even insist on a WAL record having been generated\nduring pruning/HTSV. But I need to think through the dangers of that more.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 00:51:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-27 00:51:59 -0800, Andres Freund wrote:\n> One use-case where the eager strategy is particularly useful is\n> [nearly-]append-only tables - and it's also the one workload that's reasonably\n> easy to detect using stats. Maybe something like\n> (dead_tuples_since_last_vacuum / inserts_since_last_vacuum) < 0.05\n> or so.\n> \n> That'll definitely leave out loads of workloads where eager freezing would be\n> useful - but are there semi-reasonable workloads where it'll hurt badly? I\n> don't *think* so.\n\nThat 0.05 could be a GUC + relopt combo, which'd allow users to opt in tables\nwith known usage pattern into always using eager freezing.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 01:02:53 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 4:51 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This is the kind of remark that makes me think that you don't get it.\n>\n> The most influential OLTP benchmark of all time is TPC-C, which has\n> exactly this problem. In spades -- it's enormously disruptive. Which\n> is one reason why I used it as a showcase for a lot of this work. Plus\n> practical experience (like the Heroku database in the blog post I\n> linked to) fully agrees with that benchmark, as far as this stuff goes\n> -- that was also a busy OLTP database.\n>\n> Online transaction involves transactions. Right? There is presumably\n> some kind of ledger, some kind of orders table. Naturally these have\n> entries that age out fairly predictably. After a while, almost all the\n> data is cold data. It is usually about that simple.\n>\n> One of the key strengths of systems like Postgres is the ability to\n> inexpensively store a relatively large amount of data that has just\n> about zero chance of being read, let alone modified. While at the same\n> time having decent OLTP performance for the hot data. Not nearly as\n> good as an in-memory system, mind you -- and yet in-memory systems\n> remain largely a niche thing.\n\nI think it's interesting that TPC-C suffers from the kind of problem\nthat your patch was intended to address. I hadn't considered that. But\nI do not think it detracts from the basic point I was making, which is\nthat you need to think about the downsides of your patch, not just the\nupsides.\n\nIf you want to argue that there is *no* OLTP workload that will be\nharmed by freezing as aggressively as possible, then that would be a\ngood argument in favor of your patch, because it would be arguing that\nthe downside simply doesn't exist, at least for OLTP workloads. 
The\nfact that you can think of *one particular* OLTP workload that can\nbenefit from the patch is just doubling down on the \"my patch has an\nupside\" argument, which literally no one is disputing.\n\nI don't think you can make such an argument stick, though. OLTP\nworkloads come in all shapes and sizes. It's pretty common to have\ntables where the application inserts a bunch of data, updates it over\nand over again, truncates the table, and starts over. In such a\ncase, aggressive freezing has to be a loss, because no freezing is\never needed. It's also surprisingly common to have tables where a\nbunch of data is inserted and then, after a bit of processing, a bunch\nof rows are updated exactly once, after which the data is not modified\nany further. In those kinds of cases, aggressive freezing is a great\nidea if it happens after that round of updates but a poor idea if it\nhappens before that round of updates.\n\nIt's also pretty common to have cases where portions of the table\nbecome very hot, get a lot of updates for a while, and then that part\nof the table becomes cool and some other part of the table becomes\nvery hot for a while. I think it's possible that aggressive freezing\nmight do OK in such environments, actually. It will be a negative if\nwe aggressively freeze the part of the table that's currently hot, but\nI think typically tables that have this access pattern are quite big,\nso VACUUM isn't going to sweep through the table all that often. It\nwill probably freeze a lot more data-that-was-hot-a-bit-ago than it\nwill freeze data-that-is-hot-this-very-minute. Then again, maybe that\nwould happen without the patch, too. Maybe this kind of case is a wash\nfor your patch? I don't know.\n\nWhatever you think of these examples, I don't see how it can be right\nto suppose that *in general* freezing very aggressively has no\ndownsides. If that were true, then we probably wouldn't have\nvacuum_freeze_min_age at all. We would always just freeze everything\nASAP. I mean, you could theorize that whoever invented that GUC is an\nidiot and that they had absolutely no good reason for introducing it,\nbut that seems pretty ridiculous. Someone put guards against\noverly-aggressive freezing into the system *for a reason* and if you\njust go rip them all out, you're going to reintroduce the problems\nagainst which they were intended to guard.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 09:48:43 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I don't see what your reference to checkpoint timeout is about here?\n> >\n> > Also, as I mentioned before, the problem isn't specific to checkpoint_timeout\n> > = 1min. It just makes it cheaper to reproduce.\n>\n> That's flagrantly intellectually dishonest.\n\nThis kind of ad hominum attack has no place on this mailing list, or\nanywhere in the PostgreSQL community.\n\nIf you think there's a problem with Andres's test case, or his\nanalysis of it, you can talk about those problems without accusing him\nof intellectual dishonesty.\n\nI don't see anything to indicate that he was being intentionally\ndishonest, either. At most he was mistaken. More than likely, not even\nthat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 09:53:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 6:48 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > One of the key strengths of systems like Postgres is the ability to\n> > inexpensively store a relatively large amount of data that has just\n> > about zero chance of being read, let alone modified. While at the same\n> > time having decent OLTP performance for the hot data. Not nearly as\n> > good as an in-memory system, mind you -- and yet in-memory systems\n> > remain largely a niche thing.\n>\n> I think it's interesting that TPC-C suffers from the kind of problem\n> that your patch was intended to address. I hadn't considered that. But\n> I do not think it detracts from the basic point I was making, which is\n> that you need to think about the downsides of your patch, not just the\n> upsides.\n>\n> If you want to argue that there is *no* OLTP workload that will be\n> harmed by freezing as aggressively as possible, then that would be a\n> good argument in favor of your patch, because it would be arguing that\n> the downside simply doesn't exist, at least for OLTP workloads. The\n> fact that you can think of *one particular* OLTP workload that can\n> benefit from the patch is just doubling down on the \"my patch has an\n> upside\" argument, which literally no one is disputing.\n\nYou've treated me to another multi paragraph talking down, as if I was\nstill clinging to my original position, which is of course not the\ncase. I've literally said I'm done with VACUUM for good, and that I\njust want to put a line under this. Yet you still persist in doing\nthis sort of thing. I'm not fighting you, I'm not fighting Andres.\n\nI was making a point about the need to do something in this area in\ngeneral. That's all.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 27 Jan 2023 08:22:23 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 12:58 AM Andres Freund <andres@anarazel.de> wrote:\n> Essentially the \"any fpi\" logic is a very coarse grained way of using the page\n> LSN as a measurement. As I said, I don't think \"has a checkpoint occurred\n> since the last write\" is a good metric to avoid unnecessary freezing - it's\n> too coarse. But I think using the LSN is the right thought. What about\n> something like\n>\n> lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\n> if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)\n> FreezeMe();\n>\n> I probably got some details wrong, what I am going for with lsn_threshold is\n> that we'd freeze an already dirty page if it's not been updated within 10% of\n> the LSN distance to the last VACUUM.\n\nI think this might not be quite the right idea for a couple of reasons.\n\nFirst, suppose that the table is being processed just by autovacuum\n(no manual VACUUM operations) and that the rate of WAL generation is\npretty even, so that LSN age is a good proxy for time. If autovacuum\nprocesses the table once per hour, this will freeze if it hasn't been\nupdated in the last six minutes. That sounds good. But if autovacuum\nprocesses the table once per day, then this will freeze if it hasn't\nbeen updated in 2.4 hours. That might be OK, but it sounds a little on\nthe long side. If autovacuum processes the table once per week, then\nthis will freeze if it hasn't been updated in 16.8 hours. That sounds\ntoo conservative. Conversely, if autovacuum processes the table every\n3 minutes, then this will freeze the data if it hasn't been updated in\nthe last 18 seconds, which sounds awfully aggressive. Maybe I'm wrong\nhere, but I feel like the absolute amount of wall-clock time we're\ntalking about here probably matters to some degree. I'm not sure\nwhether a strict time-based threshold like, say, 10 minutes would be a\ngood idea, leaving aside the difficulties of implementation. 
It might\nbe right to think that if the table is being vacuumed a lot, freezing\nmore aggressively is smart, and if it's being vacuumed infrequently,\nfreezing less aggressively is smart, because if the table has enough\nactivity that it's being vacuumed frequently, that might also be a\nsign that we need to freeze more aggressively in order to avoid having\nthings go sideways. However, I'm not completely sure about that, and I\nthink it's possible that we need some guardrails to avoid going too\nfar in either direction.\n\nSecond, and more seriously, I think this would, in some circumstances,\nlead to tremendously unstable behavior. Suppose somebody does a bunch\nof work on a table and then they're like \"oh, we should clean up,\nVACUUM\" and it completes quickly because it's been a while since the\nlast vacuum and so it doesn't freeze much. Then, for whatever reason,\nthey decide to run it one more time, and it goes bananas and starts\nfreezing all kinds of stuff because the LSN distance since the last\nvacuum is basically zero. Or equally, you run a manual VACUUM, and you\nget completely different behavior depending on how long it's been\nsince the last autovacuum ran.\n\nIn some ways, I think this proposal has many of the same problems as\nvacuum_freeze_min_age. In both cases, the instinct is that we should\nuse something on the page to let us know how long it's been since the\npage was modified, and proceed on the theory that if the page has not\nbeen modified recently, it probably isn't about to be modified again.\nThat's a reasonable instinct, but the rate of XID advancement and the\nrate of LSN advancement are both highly variable, even on a system\nthat's always under some load.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 12:53:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-27 12:53:58 -0500, Robert Haas wrote:\n> On Fri, Jan 27, 2023 at 12:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > Essentially the \"any fpi\" logic is a very coarse grained way of using the page\n> > LSN as a measurement. As I said, I don't think \"has a checkpoint occurred\n> > since the last write\" is a good metric to avoid unnecessary freezing - it's\n> > too coarse. But I think using the LSN is the right thought. What about\n> > something like\n> >\n> > lsn_threshold = insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\n> > if (/* other conds */ && PageGetLSN(page) <= lsn_threshold)\n> > FreezeMe();\n> >\n> > I probably got some details wrong, what I am going for with lsn_threshold is\n> > that we'd freeze an already dirty page if it's not been updated within 10% of\n> > the LSN distance to the last VACUUM.\n>\n> I think this might not be quite the right idea for a couple of reasons.\n\nIt's definitely not perfect. If we had an approximate LSN->time map as\ngeneral infrastructure, we could do a lot better. I think it'd be reasonably\neasy to maintain that in the autovacuum launcher, for example.\n\n\nOne thing worth calling out here, because it's not obvious from the code\nquoted above in isolation, is that what I was trying to refine here was the\ndecision when to perform opportunistic freezing *of already dirty pages that\ndo not require an FPI*.\n\nSo all that we need to prevent here is freezing very hotly updated data, where\nthe WAL overhead of the freeze records would be noticable, because we\nconstantly VACUUM, due to the high turnover.\n\n\n> First, suppose that the table is being processed just by autovacuum\n> (no manual VACUUM operations) and that the rate of WAL generation is\n> pretty even, so that LSN age is a good proxy for time. If autovacuum\n> processes the table once per hour, this will freeze if it hasn't been\n> updated in the last six minutes. That sounds good. 
But if autovacuum\n> processes the table once per day, then this will freeze if it hasn't\n> been updated in 2.4 hours. That might be OK, but it sounds a little on\n> the long side.\n\nYou're right. I was thinking of the \"lsn_since_last_vacuum\" because I was\nposulating it being useful elsewhere in the thread (but for eager strategy\nlogic) - but here that's really not very relevant.\n\nGiven that we're dealing with already dirty pages not requiring an FPI, I\nthink a much better \"reference LSN\" would be the LSN of the last checkpoint\n(LSN of the last checkpoint record, not the current REDO pointer).\n\n\n> Second, and more seriously, I think this would, in some circumstances,\n> lead to tremendously unstable behavior. Suppose somebody does a bunch\n> of work on a table and then they're like \"oh, we should clean up,\n> VACUUM\" and it completes quickly because it's been a while since the\n> last vacuum and so it doesn't freeze much. Then, for whatever reason,\n> they decide to run it one more time, and it goes bananas and starts\n> freezing all kinds of stuff because the LSN distance since the last\n> vacuum is basically zero. Or equally, you run a manual VACUUM, and you\n> get completely different behavior depending on how long it's been\n> since the last autovacuum ran.\n\nI don't think this quite applies to the scenario at hand, because it's\nrestricted to already dirty pages. And the max increased overhead is also\nsmall due to that - so occasionally getting it wrong is that impactful.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 10:36:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 12:52 AM Andres Freund <andres@anarazel.de> wrote:\n> I agree with bringing high-level context into the decision about whether to\n> freeze agressively - my problem with the eager freezing strategy patch isn't\n> that it did that too much, it's that it didn't do it enough.\n>\n>\n> But I also don't think what I describe above is really comparable to \"table\n> level\" eager freezing though - the potential worst case overhead is a small\n> fraction of the WAL volume, and there's zero increase in data write volume.\n\nAll I meant was that I initially thought that you were trying to\nreplace the FPI thing with something at the same level of ambition,\nthat could work in a low context way. But I now see that you're\nactually talking about something quite a bit more ambitious for\nPostgres 16, which is structurally similar to a freezing strategy,\nfrom a code point of view -- it relies on high-level context for the\nVACUUM/table as a whole. I wasn't equating it with the eager freezing\nstrategy in any other way.\n\nIt might also be true that this other thing happens to render the FPI\nmechanism redundant. I'm actually not completely sure that it will\njust yet. Let me verify my understanding of your proposal:\n\nYou mean that we'd take the page LSN before doing anything with the\npage, right at the top of lazy_scan_prune, at the same point that\n\"fpi_before\" is initialized currently. Then, if we subsequently\ndirtied the page (as determined by its LSN, so as to focus on \"dirtied\nvia WAL logged operation\") during pruning, *and* if the \"lsn_before\"\nof the page was from before our cutoff (derived via \" lsn_threshold =\n insert_lsn - (insert_lsn - lsn_of_last_vacuum) * 0.1\" or similar),\n*and* if the page is eligible to become all-frozen, then we'd freeze\nthe page.\n\nThat's it, right? 
It's about pages that *we* (VACUUM) dirtied, and\nwrote records and/or FPIs for already?\n\n> I suspect the absolute worst case of \"always freeze dirty pages\" is when a\n> single tuple on the page gets updated immediately after every time we freeze\n> the page - a single tuple is where the freeze record is the least space\n> efficient. The smallest update is about the same size as the smallest freeze\n> record. For that to amount to a large WAL increase you'd a crazy rate of such\n> updates interspersed with vacuums. In slightly more realistic cases (i.e. not\n> column less tuples that constantly get updated and freezing happening all the\n> time) you end up with a reasonably small WAL rate overhead.\n\nOther thing is that we'd be doing this in situations where we already\nknow that a VISIBLE record is required, which is comparable in size to\na FREEZE_PAGE record with one tuple/plan (around 64 bytes). The\nsmallest WAL records are mostly just generic WAL record header\noverhead.\n\n> Obviously that's a pointless workload, but I do think that\n> analyzing the \"outer boundaries\" of the regression something can cause, can be\n> helpful.\n\nI agree about the \"outer boundaries\" being a useful guide.\n\n> I think one way forward with the eager strategy approach would be to have a\n> very narrow gating condition for now, and then incrementally expand it in\n> later releases.\n>\n> One use-case where the eager strategy is particularly useful is\n> [nearly-]append-only tables - and it's also the one workload that's reasonably\n> easy to detect using stats. Maybe something like\n> (dead_tuples_since_last_vacuum / inserts_since_last_vacuum) < 0.05\n> or so.\n>\n> That'll definitely leave out loads of workloads where eager freezing would be\n> useful - but are there semi-reasonable workloads where it'll hurt badly? I\n> don't *think* so.\n\nI have no further plans to work on eager freezing strategy, or\nanything of the sort, in light of recent developments. 
My goal at this\npoint is very unambitious: to get the basic page-level freezing work\ninto a form that makes sense as a standalone thing for Postgres 16. To\nput things on a good footing, so that I can permanently bow out of all\nwork on VACUUM having left everything in good order. That's all.\n\nNow, that might still mean that I'd facilitate future work of this\nsort, by getting the right basic structure in place. But my\ninvolvement in any work on freezing or anything of the sort ends here,\nboth as a patch author and a committer of anybody else's work. I'm\nproud of the work I've done on VACUUM, but I'm keen to move on from\nit.\n\n> > What about unlogged/temporary tables? The obvious thing to do there is\n> > what I did in the patch that was reverted (freeze whenever the page\n> > will thereby become all-frozen), and forget about LSNs. But you have\n> > already objected to that part, specifically.\n>\n> My main concern about that is the data write amplification it could cause when\n> page is clean when we start freezing. But I can't see a large potential\n> downside to always freezing unlogged/temp tables when the page is already\n> dirty.\n\nBut we have to dirty the page anyway, just to set PD_ALL_VISIBLE. That\nwas always a gating condition. Actually, that may have depended on not\nhaving SKIP_PAGES_THRESHOLD, which the vm snapshot infrastructure\nwould have removed. That's not happening now, so I may need to\nreassess. But even with SKIP_PAGES_THRESHOLD, it should be fine.\n\n> > BTW, you still haven't changed the fact that you get rather different\n> > behavior with checksums/wal_log_hints. I think that that's good, but\n> > you didn't seem to.\n>\n> I think that, if we had something like the recency test I was talking about,\n> we could afford to alway freeze when the page is already dirty and not very\n> recently modified. I.e. not even insist on a WAL record having been generated\n> during pruning/HTSV. 
But I need to think through the dangers of that more.\n\nNow I'm confused. I thought that the recency test you talked about was\npurely to be used to do something a bit like the FPI thing, but using\nsome high level context. Now I don't know what to think.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 27 Jan 2023 10:40:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: New strategies for freezing, advancing relfrozenxid early"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI've found typos in ja.po, and fixed them.\nThe patch is attached.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION",
"msg_date": "Fri, 26 Aug 2022 10:23:01 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Fix japanese translation of log messages"
},
{
"msg_contents": "At Fri, 26 Aug 2022 10:23:01 +0900, Shinya Kato <Shinya11.Kato@oss.nttdata.com> wrote in \n> I've found typos in ja.po, and fixed them.\n> The patch is attached.\n\n(This is not for -hackers but I'm fine with it being posted here;p)\n\nThanks for the report! Pushed to 10 to 15 of translation repository\nwith some minor changes. They will be reflected in the code tree some\ntime later.\n\nmsgid \"More details may be available in the server log.\"\n-msgstr \"詳細な情報がはサーバログにあるかもしれません。\"\n+msgstr \"詳細な情報はサーバログにあるかもしれません。\"\n\nI prefer \"詳細な情報が\" than \"詳細な情報は\" here. (The existnce of\nthe details is unknown here.)\n\n msgid \"cannot drop active portal \\\"%s\\\"\"\n-msgstr \"アクテイブなポータル\\\"%s\\\"を削除できません\"\n+msgstr \"アクティブなポータル\\\"%s\\\"を削除できません\"\n\nI canged it to \"アクティブなポータル\\\"%s\\\"は削除できません\". (It\ndescribes state facts, not telling the result of an action.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 14:07:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "On 2022-08-26 10:23, Shinya Kato wrote:\n> Hi hackers,\n> \n> I've found typos in ja.po, and fixed them.\n> The patch is attached.\n\nThanks for the patch!\nLGTM.\n\nI had found a similar typo before in ja.po, so I added that as well.\n\n @@ -12739,7 +12739,7 @@ msgstr \"ロールオプション\\\"%s\\\"が認識できません\"\n> #: gram.y:1588 gram.y:1604\n> #, c-format\n> msgid \"CREATE SCHEMA IF NOT EXISTS cannot include schema elements\"\n> -msgstr \"CREATE SCHEMA IF NOT EXISTSんはスキーマ要素を含めることはできません\"\n> +msgstr \"CREATE SCHEMA IF NOT EXISTSはスキーマ要素を含めることはできません\"\n\nHow do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION",
"msg_date": "Fri, 26 Aug 2022 14:28:26 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "At Fri, 26 Aug 2022 14:28:26 +0900, torikoshia <torikoshia@oss.nttdata.com> wrote in \n> \"\n> > #: gram.y:1588 gram.y:1604\n> > #, c-format\n> > msgid \"CREATE SCHEMA IF NOT EXISTS cannot include schema elements\"\n> > -msgstr \"CREATE SCHEMA IF NOT EXISTSんはスキーマ要素を含めることはでき\n> > -ません\"\n> > +msgstr \"CREATE SCHEMA IF NOT EXISTSはスキーマ要素を含めることはできま\n> > せん\"\n> \n> How do you think?\n\n\"NHa\" Darn... That kind of mistypes are inevitable when I worked on\nnearly or over a thousand of messages at once.. Currently I'm working\nin an incremental fashion and I only process at most up to 10 or so\nmessages at a time thus that kind of silly mistake cannot happen..\n\nIt's a mistake of \"には\". I'll load it into the next ship. The next\nrelease is 9/8 and I'm not sure the limit of translation commits for\nthe release, though..\n\nAnyway, thank you for reporting!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 15:20:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "On 2022-08-26 15:20, Kyotaro Horiguchi wrote:\n> At Fri, 26 Aug 2022 14:28:26 +0900, torikoshia\n> <torikoshia@oss.nttdata.com> wrote in\n>> > #: gram.y:1588 gram.y:1604\n>> > #, c-format\n>> > msgid \"CREATE SCHEMA IF NOT EXISTS cannot include schema elements\"\n>> > -msgstr \"CREATE SCHEMA IF NOT EXISTSんはスキーマ要素を含めることはでき\n>> > -ません\"\n>> > +msgstr \"CREATE SCHEMA IF NOT EXISTSはスキーマ要素を含めることはできま\n>> > せん\"\n>> \n>> How do you think?\n> \n> \"NHa\" Darn... That kind of mistypes are inevitable when I worked on\n> nearly or over a thousand of messages at once.. Currently I'm working\n> in an incremental fashion and I only process at most up to 10 or so\n> messages at a time thus that kind of silly mistake cannot happen..\n\nNo problem, rather thanks for working on this!\n\n> It's a mistake of \"には\".\n\nAh, I got it.\n\n> I'll load it into the next ship. The next\n> release is 9/8 and I'm not sure the limit of translation commits for\n> the release, though..\n\nThanks a lot!\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 26 Aug 2022 15:34:44 +0900",
"msg_from": "torikoshia <torikoshia@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "On 2022-08-26 14:07, Kyotaro Horiguchi wrote:\n> At Fri, 26 Aug 2022 10:23:01 +0900, Shinya Kato\n> <Shinya11.Kato@oss.nttdata.com> wrote in\n>> I've found typos in ja.po, and fixed them.\n>> The patch is attached.\n> \n> (This is not for -hackers but I'm fine with it being posted here;p)\nSorry, I didn't know there was an pgsql-translators.\n\n> Thanks for the report! Pushed to 10 to 15 of translation repository\n> with some minor changes. They will be reflected in the code tree some\n> time later.\nThanks!\n\n> msgid \"More details may be available in the server log.\"\n> -msgstr \"詳細な情報がはサーバログにあるかもしれません。\"\n> +msgstr \"詳細な情報はサーバログにあるかもしれません。\"\n> \n> I prefer \"詳細な情報が\" than \"詳細な情報は\" here. (The existnce of\n> the details is unknown here.)\n> \n> msgid \"cannot drop active portal \\\"%s\\\"\"\n> -msgstr \"アクテイブなポータル\\\"%s\\\"を削除できません\"\n> +msgstr \"アクティブなポータル\\\"%s\\\"を削除できません\"\n> \n> I canged it to \"アクティブなポータル\\\"%s\\\"は削除できません\". (It\n> describes state facts, not telling the result of an action.)\nThanks, LGTM.\n\n-- \nRegards,\n\n--\nShinya Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:13:16 +0900",
"msg_from": "Shinya Kato <Shinya11.Kato@oss.nttdata.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "On 2022-Aug-26, Kyotaro Horiguchi wrote:\n\n> It's a mistake of \"には\". I'll load it into the next ship. The next\n> release is 9/8 and I'm not sure the limit of translation commits for\n> the release, though..\n\nTypically the translations are updated from the pgtranslation repository\non Monday of the release week, at around noon European time. You can\nkeep translating till the previous Sunday if you feel like it :-)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 26 Aug 2022 10:25:17 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "At Fri, 26 Aug 2022 10:25:17 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> Typically the translations are updated from the pgtranslation repository\n> on Monday of the release week, at around noon European time. You can\n> keep translating till the previous Sunday if you feel like it :-)\n\nYeah... . .\n\nSo.. the limit is around 9/5 12:00 CEST(?).. is.. 9/5 19:00 JST?\n\nThank you very much for the significant info.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 26 Aug 2022 17:48:16 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "On 2022-Aug-26, Kyotaro Horiguchi wrote:\n\n> At Fri, 26 Aug 2022 10:25:17 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > Typically the translations are updated from the pgtranslation repository\n> > on Monday of the release week, at around noon European time. You can\n> > keep translating till the previous Sunday if you feel like it :-)\n> \n> Yeah... . .\n> \n> So.. the limit is around 9/5 12:00 CEST(?).. is.. 9/5 19:00 JST?\n\nWell, Sept 8th is the date of 15 beta4. I suppose there'll be at least\ntwo or three weeks from beta4 to the RC1, and maybe one or two more\nweeks from there to 15.0. You can obviously continue to translate until\nthen, if you want these translations to appear in 15.0. And as for\nstable branches, the next one is scheduled for early November, so you\nhave until then to fix typos in those.\n\nFor any translations that do not appear in 15.0, you have three more\nmonths until 15.1 ... and so on.\n\nIt never ends. Blessing or curse?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n\n",
"msg_date": "Fri, 26 Aug 2022 11:07:13 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
},
{
"msg_contents": "At Fri, 26 Aug 2022 11:07:13 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> On 2022-Aug-26, Kyotaro Horiguchi wrote:\n> \n> > At Fri, 26 Aug 2022 10:25:17 +0200, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> > > Typically the translations are updated from the pgtranslation repository\n> > > on Monday of the release week, at around noon European time. You can\n> > > keep translating till the previous Sunday if you feel like it :-)\n> > \n> > Yeah... . .\n> > \n> > So.. the limit is around 9/5 12:00 CEST(?).. is.. 9/5 19:00 JST?\n> \n> Well, Sept 8th is the date of 15 beta4. I suppose there'll be at least\n> two or three weeks from beta4 to the RC1, and maybe one or two more\n> weeks from there to 15.0. You can obviously continue to translate until\n> then, if you want these translations to appear in 15.0. And as for\n> stable branches, the next one is scheduled for early November, so you\n> have until then to fix typos in those.\n> \n> For any translations that do not appear in 15.0, you have three more\n> months until 15.1 ... and so on.\n> \n> It never ends. Blessing or curse?\n\nEven if it's a curse, it is easily gone by just stopping.. but..\n\nI have refrained from committing too frequently to the repo. (Still it\nmight a bit too often..) If it is not a problem to commit at most once\nper day, things will get a bit easier to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:02:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix japanese translation of log messages"
}
] |
[
{
"msg_contents": "Hi,\n\nSince doing some work for PG15 for speeding up sorts, I've been a\nlittle irritated by the fact that dumptuples() calls WRITETUP, (which\nis now always calling writetuple()) and calls pfree() on the tuple\nonly for dumptuples() to do\nMemoryContextReset(state->base.tuplecontext) directly afterwards.\nThese pfrees are just a waste of effort and we might as well leave it\nto the context reset to do the cleanup. (Probably especially so when\nusing AllocSet for storing tuples)\n\nThere are only 2 calls to WRITETUP and the other one is always called\nwhen state->slabAllocatorUsed is true. writetuple() checks for that\nbefore freeing the tuple, which is a bit of a wasted branch since\nit'll always prove to be false for the use case in mergeonerun().\n(It's possible the compiler might inline that now anyway since the\nWRITETUP macro always calls writetuple() directly now)\n\nI've attached 3 patches aimed to do a small amount of cleanup work in\ntuplesort.c\n\n0001: Just fixes a broken looking comment in writetuple()\n0002: Gets rid of the WRITETUP marco. That does not do anything useful\nsince 097366c45\n0003: Changes writetuple to tell it what it should do in regards to\nfreeing and adjusting the memory accounting.\n\nProbably 0003 could be done differently. I'm certainly not set on the\nbool args. I understand that I'm never calling it with \"freetup\" ==\ntrue. So other options include 1) rip out the pfree code and that\nparameter; or 2) just do the inlining manually at both call sites.\n\nI'll throw this in the September CF to see if anyone wants to look.\nThere's probably lots more cleaning jobs that could be done in\ntuplesort.c.\n\nThe performance improvement from 0003 is not that impressive, but it\nlooks like it makes things very slightly faster, so probably worth it\nif the patch makes the code cleaner. See attached gif and script for\nthe benchmark I ran to test it. 
I think the gains might go up\nslightly with [1] applied as that patch seems to do more to improve\nthe speed of palloc() than it does to improve the speed of pfree().\n\nDavid\n\n[1] https://commitfest.postgresql.org/39/3810/",
"msg_date": "Fri, 26 Aug 2022 16:48:18 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Small cleanups to tuplesort.c and a bonus small performance\n improvement"
},
{
"msg_contents": "On Fri, 26 Aug 2022 at 16:48, David Rowley <dgrowleyml@gmail.com> wrote:\n> 0003: Changes writetuple to tell it what it should do in regards to\n> freeing and adjusting the memory accounting.\n>\n> Probably 0003 could be done differently. I'm certainly not set on the\n> bool args. I understand that I'm never calling it with \"freetup\" ==\n> true. So other options include 1) rip out the pfree code and that\n> parameter; or 2) just do the inlining manually at both call sites.\n\nThis patch series needed to be rebased and on looking it at again,\nsince the pfree() code is never used I felt it makes very little sense\nto keep it, so I decided that it might be better just to keep the\nWRITETUP macro and just completely get rid of the writetuple function\nand have the macro call the function pointed to be the \"writetup\"\npointer. The only extra code we needed from writetuple() was the\nmemory accounting code which was only used in dumptuples(), so I've\njust included that code in that function instead.\n\nI also noticed that dumptuples() had a pretty braindead method of\nzeroing out state->memtupcount by subtracting 1 from it on each loop.\nSince that's not being used to keep track of the loop's progress, I've\njust moved it out the loop and changed the code to set it to 0 once\nthe loop is done.\n\n> I'll throw this in the September CF to see if anyone wants to look.\n> There's probably lots more cleaning jobs that could be done in\n> tuplesort.c.\n\nMy current thoughts are that this is a very trivial patch and unless\nthere's any objections I plan to push it soon.\n\nDavid",
"msg_date": "Wed, 31 Aug 2022 22:39:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small cleanups to tuplesort.c and a bonus small performance\n improvement"
},
{
"msg_contents": "On Wed, 31 Aug 2022 at 22:39, David Rowley <dgrowleyml@gmail.com> wrote:\n> My current thoughts are that this is a very trivial patch and unless\n> there's any objections I plan to push it soon.\n\nPushed.\n\nDavid\n\n\n",
"msg_date": "Thu, 1 Sep 2022 11:28:14 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Small cleanups to tuplesort.c and a bonus small performance\n improvement"
}
] |
[
{
"msg_contents": "Hi,\n\nWe allow $SUBJECT on Windows. I'm not sure exactly how we finished up\nwith that, maybe a historical mistake, but I find it misleading today.\nModern Windows flushes drive write caches for fsync (= _commit()) and\nfdatasync (= FLUSH_FLAGS_FILE_DATA_SYNC_ONLY). In fact it is possible\nto tell Windows to write out file data without flushing the drive\ncache (= FLUSH_FLAGS_NO_SYNC), but I don't believe anyone is\ninterested in new weaker levels. Any reason not to just get rid of\nit?\n\nOn macOS, our fsync and fdatasync levels *don't* flush drive caches,\nbecause those system calls don't on that OS, and they offer a weird\nspecial fcntl, so there we offer $SUBJECT for a good reason. Now that\nmacOS 10.2 systems are thoroughly extinct, I think we might as well\ndrop the configure probe, though, while we're doing a lot of that sort\nof thing.\n\nThe documentation also says a couple of things that aren't quite\ncorrect about wal_sync_level. (I would also like to revise other\nnearby outdated paragraphs about volatile write caches, sector sizes\netc, but that'll take some more research.)",
"msg_date": "Fri, 26 Aug 2022 16:55:05 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "wal_sync_method=fsync_writethrough"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 6:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> We allow $SUBJECT on Windows. I'm not sure exactly how we finished up\n> with that, maybe a historical mistake, but I find it misleading today.\n> Modern Windows flushes drive write caches for fsync (= _commit()) and\n> fdatasync (= FLUSH_FLAGS_FILE_DATA_SYNC_ONLY). In fact it is possible\n> to tell Windows to write out file data without flushing the drive\n> cache (= FLUSH_FLAGS_NO_SYNC), but I don't believe anyone is\n> interested in new weaker levels. Any reason not to just get rid of\n> it?\n\nSo, I don't know how it works now, but the history at least was this:\nit was not about the disk caches, it was about raid controller caches.\n\nBasically, we determined that windows didn't fsync it all the way. But\nit would with But if we changed wal_sync_method=fsync to actually\n*do* that, then people who had paid big money for raid controllers\nwith flash or battery backed cache would lose a ton of performance. So\nwe needed one level that would sync out of the OS but not through the\nRAID cache, and another one that would sync it out of the RAID cache\nas well. Which would/could be different from the drive caches\nthemselves, and they often behaved differently. And I think it may\nhave even been dependent on the individual RAID drivers what the\ndefault would be.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n",
"msg_date": "Fri, 26 Aug 2022 14:17:21 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: wal_sync_method=fsync_writethrough"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 12:17 AM Magnus Hagander <magnus@hagander.net> wrote:\n> So, I don't know how it works now, but the history at least was this:\n> it was not about the disk caches, it was about raid controller caches.\n> Basically, we determined that windows didn't fsync it all the way. But\n> it would with But if we changed wal_sync_method=fsync to actually\n> *do* that, then people who had paid big money for raid controllers\n> with flash or battery backed cache would lose a ton of performance. So\n> we needed one level that would sync out of the OS but not through the\n> RAID cache, and another one that would sync it out of the RAID cache\n> as well. Which would/could be different from the drive caches\n> themselves, and they often behaved differently. And I think it may\n> have even been dependent on the individual RAID drivers what the\n> default would be.\n\nThanks for the background. Yeah, that makes sense to motivate\nopen_datasync for Windows. Not sure what you meant about fsync or\nmeant to write after \"would with\".\n\nIt seems like the 2005 discussions were primarily about open_datasync\nbut also had the by-product of introducing the name\nfsync_writethrough. If I'm reading between the lines[1] correctly,\nperhaps the logic went like this:\n\n1. We noticed that _commit() AKA FlushFileBuffers() issued\nSYNCHRONIZE CACHE (or equivalent) on Windows.\n\n2. At that time in history, Linux (and other Unixes) probably did not\nissue SYNCHRONIZE CACHE when you called fsync()/fdatasync().\n\n3. We concluded therefore that Windows was strange and we needed to\nuse a different level name for the setting to reflect this extra\neffect.\n\nNow it looks strange: we have both \"fsync\" and \"fsync_writethrough\"\ndoing exactly the same thing while vaguely implying otherwise, and the\ncontrast with other operating systems (if I divined that aspect\ncorrectly) mostly doesn't apply. 
How flush commands affect various\ncaches in modern storage stacks is also not really OS-specific AFAIK.\n\n(Obviously macOS is a different story...)\n\n[1] https://www.postgresql.org/message-id/flat/26109.1111084860%40sss.pgh.pa.us#e7f8c2e14d76cad76b1857e89c8a6314\n\n\n",
"msg_date": "Sat, 27 Aug 2022 09:28:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: wal_sync_method=fsync_writethrough"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 11:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Sat, Aug 27, 2022 at 12:17 AM Magnus Hagander <magnus@hagander.net> wrote:\n> > So, I don't know how it works now, but the history at least was this:\n> > it was not about the disk caches, it was about raid controller caches.\n> > Basically, we determined that windows didn't fsync it all the way. But\n> > it would with But if we changed wal_sync_method=fsync to actually\n> > *do* that, then people who had paid big money for raid controllers\n> > with flash or battery backed cache would lose a ton of performance. So\n> > we needed one level that would sync out of the OS but not through the\n> > RAID cache, and another one that would sync it out of the RAID cache\n> > as well. Which would/could be different from the drive caches\n> > themselves, and they often behaved differently. And I think it may\n> > have even been dependent on the individual RAID drivers what the\n> > default would be.\n>\n> Thanks for the background. Yeah, that makes sense to motivate\n> open_datasync for Windows. Not sure what you meant about fsync or\n> meant to write after \"would with\".\n\nThat's a good question indeed :) I think I meant it would with\nFILE_FLAG_WRITE_THROUGH.\n\n\n> It seems like the 2005 discussions were primarily about open_datasync\n> but also had the by-product of introducing the name\n> fsync_writethrough. If I'm reading between the lines[1] correctly,\n> perhaps the logic went like this:\n>\n> 1. We noticed that _commit() AKA FlushFileBuffers() issued\n> SYNCHRONIZE CACHE (or equivalent) on Windows.\n>\n> 2. At that time in history, Linux (and other Unixes) probably did not\n> issue SYNCHRONIZE CACHE when you called fsync()/fdatasync().\n\nI think it may have been driver dependent there (as well), at the time.\n\n\n> 3. 
We concluded therefore that Windows was strange and we needed to\n> use a different level name for the setting to reflect this extra\n> effect.\n\nIt was certainly strange to us :)\n\n\n> Now it looks strange: we have both \"fsync\" and \"fsync_writethrough\"\n> doing exactly the same thing while vaguely implying otherwise, and the\n> contrast with other operating systems (if I divined that aspect\n> correctly) mostly doesn't apply. How flush commands affect various\n> caches in modern storage stacks is also not really OS-specific AFAIK.\n>\n> (Obviously macOS is a different story...)\n\nGiven that it does vary (because macOS is actually an OS :D), we might\nneed to start from a matrix of exactly what happens in different\nstates, and then try to map that to a set? I fully agree that if\nthings actually behave the same, they should be called the same.\n\nAnd it may also be that there is no longer a difference between\ndirect-drive and RAID-with-battery-or-flash, which used to be the huge\ndifference back then, where you had to tune for it. For many cases\nthat has been negated by just not using that (and using NVME and\npossibly software raid instead), but there are certainly still people\nusing such systems...\n\n//Magnus\n\n\n",
"msg_date": "Mon, 29 Aug 2022 17:44:25 +0200",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: wal_sync_method=fsync_writethrough"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 3:44 AM Magnus Hagander <magnus@hagander.net> wrote:\n> On Fri, Aug 26, 2022 at 11:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Now it looks strange: we have both \"fsync\" and \"fsync_writethrough\"\n> > doing exactly the same thing while vaguely implying otherwise, and the\n> > contrast with other operating systems (if I divined that aspect\n> > correctly) mostly doesn't apply. How flush commands affect various\n> > caches in modern storage stacks is also not really OS-specific AFAIK.\n> >\n> > (Obviously macOS is a different story...)\n>\n> Given that it does vary (because macOS is actually an OS :D), we might\n> need to start from a matrix of exactly what happens in different\n> states, and then try to map that to a set? I fully agree that if\n> things actually behave the same, they should be called the same.\n\nThanks, I'll take that as a +1 for dropping the redundant level for\nWindows. (Of course it stays for macOS).\n\nI like that our current levels are the literal names of standard\ninterfaces we call, since the rest is out of our hands. I'm not sure\nwhat you could actually *do* with the information that some OS doesn't\nflush write caches, other than document it and suggest a remedy (e.g.\nturn it off). I would even prefer it if fsync_writethrough were\ncalled F_FULLFSYNC, following that just-say-what-it-does-directly\nphilosophy, but that horse is already over the horizon.\n\n> And it may also be that there is no longer a difference between\n> direct-drive and RAID-with-battery-or-flash, which used to be the huge\n> difference back then, where you had to tune for it. For many cases\n> that has been negated by just not using that (and using NVME and\n> possibly software raid instead), but there are certainly still people\n> using such systems...\n\nI believe modern systems are a lot better at negotiating the need for\nflushes (i.e. for *volatile* caches). 
In contrast, the FUA situation\n(as used for FILE_FLAG_WRITE_THROUGH) seems like a multi-level\ndumpster fire on anything but high-end gear, from what I've been able\nto figure out so far, though I'm no expert.\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:14:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: wal_sync_method=fsync_writethrough"
}
] |
[
{
"msg_contents": "Hi,\n\nOn Mon, Apr 4, 2022 at 11:46 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I think there's a few more things that'd be good to check. For example\n> amcheck\n> doesn't verify that HOT chains are reasonable, which can often be spotted\n> looking at an individual page. Which is a bit unfortunate, given how many\n> bugs\n> we had in that area.\n>\n> Stuff to check around that:\n> - target of redirect has HEAP_ONLY_TUPLE, HEAP_UPDATED set\n> - In a valid ctid chain within a page (i.e. xmax = xmin):\n> - tuples have HEAP_UPDATED set\n> - HEAP_ONLY_TUPLE / HEAP_HOT_UPDATED matches across chains elements\n\n\n(I changed the subject because the attached patch is related to HOT chain\nvalidation).\n\nPlease find attached the patch with the above idea of HOT chain's\nvalidation(within a Page) and a few more validation as below.\n\n* If the predecessor’s xmin is aborted or in progress, the current tuples\nxmin should be aborted or in progress respectively. Also, Both xmin must be\nequal.\n* If the predecessor’s xmin is not frozen, then-current tuple’s shouldn’t\nbe either.\n* If the predecessor’s xmin is equal to the current tuple’s xmin, the\ncurrent tuple’s cmin should be greater than the predecessor’s xmin.\n* If the current tuple is not HOT then its predecessor’s tuple must not be\nHEAP_HOT_UPDATED.\n* If the current Tuple is HOT then its predecessor’s tuple must be\nHEAP_HOT_UPDATED and vice-versa.\n* If xmax is 0, which means it's the last tuple in the chain, then it must\nnot be HEAP_HOT_UPDATED.\n* If the current tuple is the last tuple in the HOT chain then the next\ntuple should not be HOT.\n\nI am looking into the process of adding the TAP test for these changes and\nfinding a way to corrupt a page in the tap test. Will try to include these\ntest cases in my Upcoming version of the patch.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 26 Aug 2022 11:50:08 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nThanks for working on this.\n\n+ htup = (HeapTupleHeader) PageGetItem(ctx.page, rditem);\n+ if (!(HeapTupleHeaderIsHeapOnly(htup) &&\nhtup->t_infomask & HEAP_UPDATED))\n+ report_corruption(&ctx,\n+ psprintf(\"Redirected Tuple at\nline pointer offset %u is not HEAP_ONLY_TUPLE or HEAP_UPDATED tuple\",\n+ (unsigned) rdoffnum));\n\nThis isn't safe because the line pointer referenced by rditem may not\nhave been sanity-checked yet. Refer to the comment just below where it\nsays \"Sanity-check the line pointer's offset and length values\".\n\nThere are multiple problems with this error message. First, if you\ntake a look at the existing messages - which is always a good thing to\ndo when adding new ones - you will see that they are capitalized\ndifferently. Try to match the existing style. Second, we made a real\neffort with the existing messages to avoid referring to the names of\nidentifiers that only exist at the C level. For example, just above\nyou will see a message which says \"line pointer redirection to item at\noffset %u precedes minimum offset %u\". It deliberately does not say\n\"line pointer redirection to item at offset %u is less than\nFirstOffsetNumber\" even though that would be an equally correct\nstatement of the problem. The intent here is to make the messages at\nleast somewhat accessible to users who are somewhat familiar with how\nPostgreSQL storage works but may not read the C code. These comments\napply to every message you add in the patch.\n\nThe message also does not match the code. The code tests for\nHEAP_UPDATED, but the message claims that the code is testing for\neither HEAP_ONLY_TUPLE or HEAP_UPDATED. As a general rule, it is best\nnot to club related tests together in cases like this, because it\nenables better and more specific error messages.\n\nIt would be clearer to make an explicit comparison to 0, like\n(htup->t_infomask & HEAP_UPDATED) != 0, rather than relying on 0 being\nfalse and non-zero being true. It doesn't matter to the compiler, but\nit may help human readers.\n\n+ /*\n+ * Add line pointer offset to predecessor array.\n+ * 1) If xmax is matching with xmin of next\nTuple(reaching via its t_ctid).\n+ * 2) If next tuple is in the same page.\n+ * Raise corruption if:\n+ * We have two tuples having same predecessor.\n+ *\n+ * We add offset to predecessor irrespective of\ntransaction(t_xmin) status. We will\n+ * do validation related to transaction status(and also\nall other validations)\n+ * when we loop over predecessor array.\n+ */\n\nThe formatting of this comment will, I think, be mangled if pgindent\nis run against the file. You can use ----- markers to prevent that, I\nbelieve, or (better) write this as a paragraph without relying on the\nlines ending up uneven in length.\n\n+ if (predecessor[nextTupOffnum] != 0)\n+ {\n+ report_corruption(&ctx,\n+ psprintf(\"Tuple at offset %u is\nreachable from two or more updated tuple\",\n+ (unsigned) nextTupOffnum));\n+ continue;\n+ }\n\nYou need to do this test after xmin/xmax matching. Otherwise you might\nget false positives. Also, the message should be more specific and\nmatch the style of the existing messages. ctx.offnum is already going\nto be reported in another column, but we can report both nextTupOffnum\nand predecessor[nextTupOffnum] here e.g. \"updated version at offset %u\nis also the updated version of tuple at offset %u\".\n\n+ currTupXmax = HeapTupleHeaderGetUpdateXid(ctx.tuphdr);\n+ lp = PageGetItemId(ctx.page, nextTupOffnum);\n+\n+ htup = (HeapTupleHeader) PageGetItem(ctx.page, lp);\n\nThis has the same problem I mentioned in my first comment above,\nnamely, we haven't necessarily sanity-checked the length and offset\nvalues for nextTupOffnum yet. Saying that another way, if the contents\nof lp are corrupt and point off the page, we want that to be reported\nas corruption (which the current code will already do) and we want\nthis check to be skipped so that we don't crash or access random\nmemory addresses. You need to think about how to rearrange the code so\nthat we only perform checks that are known to be safe.\n\n+ /* Now loop over offset and validate data in predecessor array.*/\n+ for ( ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\n+ ctx.offnum = OffsetNumberNext(ctx.offnum))\n\nPlease take the time to format your code according to the PostgeSQL\nstandard practice. If you don't know what that looks like, use\npgindent.\n\n+ {\n+ HeapTupleHeader pred_htup, curr_htup;\n+ TransactionId pred_xmin, curr_xmin, curr_xmax;\n+ ItemId pred_lp, curr_lp;\n\nSame here.\n\nWithin this loop, you need to think about what to include in the\ncolumns of the output other than 'msg' and what to include in the\nmessage itself. There's no reason to include ctx.offnum in the message\ntext because it's already included in the 'offnum' column of the\noutput.\n\nI think it would actually be a good idea to set ctx.offnum to the\npredecessor's offset number, and use a separate variable for the\ncurrent offset number. The reason why I think this is that I believe\nit will make it easier to phrase the messages appropriately. For\nexample, if ctx.offnum is the predecessor tuple, then we can issue\ncomplaints like this:\n\ntuple with uncommitted xmin %u was updated to produce a tuple at\noffset %u with differing xmin %u\nunfrozen tuple was updated to produce a tuple at offset %u which is not frozen\ntuple with uncommitted xmin %u has cmin %u, but was updated to produce\na tuple with cmin %u\nnon-heap-only update produced a heap-only tuple at offset %u\nheap-only update produced a non-heap only tuple at offset %u\n\n+ if (!TransactionIdIsValid(curr_xmax) &&\n+ HeapTupleHeaderIsHotUpdated(curr_htup))\n+ {\n+ report_corruption(&ctx,\n+ psprintf(\"Current tuple at offset %u is\nHOT but is last tuple in the HOT chain.\",\n+ (unsigned) ctx.offnum));\n+ }\n\nThis check has nothing to do with the predecessor[] array, so it seems\nlike it belongs in check_tuple() rather than here. Also, the message\nis rather confused, because the test is checking whether the tuple has\nbeen HOT-updated, while the message is talking about whether the tuple\nwas *itself* created by a HOT update. Also, when we're dealing with\ncorruption, statements like \"is last tuple in the HOT chain\" are\npretty ambiguous. Also, isn't this an issue for both HOT-updated\ntuples and also just regular updated tuples? i.e. maybe what we should\nbe complaining about here is something like \"tuple has been updated,\nbut xmax is 0\" and then make the test check exactly that.\n\n+ if (!HeapTupleHeaderIsHotUpdated(pred_htup) &&\n+ HeapTupleHeaderIsHeapOnly(pred_htup) &&\n+ HeapTupleHeaderIsHeapOnly(curr_htup))\n+ {\n+ report_corruption(&ctx,\n+ psprintf(\"Current tuple at offset %u is\nHOT but it is next updated tuple of last Tuple in HOT chain.\",\n+ (unsigned) ctx.offnum));\n+ }\n\nThree if-statements up, you tested two out of these three conditions\nand complained if they were met. So any time this fires, that will\nhave also fired.\n\n...Robert\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:17:02 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Robert,\n\nThanks for sharing the feedback.\n\nOn Sat, Aug 27, 2022 at 1:47 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> + htup = (HeapTupleHeader) PageGetItem(ctx.page, rditem);\n> + if (!(HeapTupleHeaderIsHeapOnly(htup) &&\n> htup->t_infomask & HEAP_UPDATED))\n> + report_corruption(&ctx,\n> + psprintf(\"Redirected Tuple at\n> line pointer offset %u is not HEAP_ONLY_TUPLE or HEAP_UPDATED tuple\",\n> + (unsigned) rdoffnum));\n>\n> This isn't safe because the line pointer referenced by rditem may not\n> have been sanity-checked yet. Refer to the comment just below where it\n> says \"Sanity-check the line pointer's offset and length values\".\n>\n> handled by creating a new function check_lp and calling it before\naccessing the redirected tuple.\n\n>\n>\n> + /*\n> + * Add line pointer offset to predecessor array.\n> + * 1) If xmax is matching with xmin of next\n> Tuple(reaching via its t_ctid).\n> + * 2) If next tuple is in the same page.\n> + * Raise corruption if:\n> + * We have two tuples having same predecessor.\n> + *\n> + * We add offset to predecessor irrespective of\n> transaction(t_xmin) status. We will\n> + * do validation related to transaction status(and also\n> all other validations)\n> + * when we loop over predecessor array.\n> + */\n>\n> The formatting of this comment will, I think, be mangled if pgindent\n> is run against the file. You can use ----- markers to prevent that, I\n> believe, or (better) write this as a paragraph without relying on the\n> lines ending up uneven in length.\n>\n>\nDone, reformatted using pg_indent.\n\n+ if (predecessor[nextTupOffnum] != 0)\n> + {\n> + report_corruption(&ctx,\n> + psprintf(\"Tuple at offset %u is\n> reachable from two or more updated tuple\",\n> + (unsigned) nextTupOffnum));\n> + continue;\n> + }\n>\n> You need to do this test after xmin/xmax matching. Otherwise you might\n> get false positives. Also, the message should be more specific and\n> match the style of the existing messages. ctx.offnum is already going\n> to be reported in another column, but we can report both nextTupOffnum\n> and predecessor[nextTupOffnum] here e.g. \"updated version at offset %u\n> is also the updated version of tuple at offset %u\".\n>\n>\nagree, done.\n\n+ currTupXmax = HeapTupleHeaderGetUpdateXid(ctx.tuphdr);\n> + lp = PageGetItemId(ctx.page, nextTupOffnum);\n> +\n> + htup = (HeapTupleHeader) PageGetItem(ctx.page, lp);\n>\n> This has the same problem I mentioned in my first comment above,\n> namely, we haven't necessarily sanity-checked the length and offset\n> values for nextTupOffnum yet. Saying that another way, if the contents\n> of lp are corrupt and point off the page, we want that to be reported\n> as corruption (which the current code will already do) and we want\n> this check to be skipped so that we don't crash or access random\n> memory addresses. You need to think about how to rearrange the code so\n> that we only perform checks that are known to be safe.\n>\n>\nMoved logic of sanity checked to a new function check_lp() and called\nbefore accessing the next tuple while populating the predecessor array.\n\n\n> Please take the time to format your code according to the PostgeSQL\n> standard practice. If you don't know what that looks like, use\n> pgindent.\n>\n> + {\n> + HeapTupleHeader pred_htup, curr_htup;\n> + TransactionId pred_xmin, curr_xmin, curr_xmax;\n> + ItemId pred_lp, curr_lp;\n>\n> Same here.\n>\n\nDone.\n\nI think it would actually be a good idea to set ctx.offnum to the\n> predecessor's offset number, and use a separate variable for the\n> current offset number. The reason why I think this is that I believe\n> it will make it easier to phrase the messages appropriately. For\n> example, if ctx.offnum is the predecessor tuple, then we can issue\n> complaints like this:\n>\n> tuple with uncommitted xmin %u was updated to produce a tuple at\n> offset %u with differing xmin %u\n> unfrozen tuple was updated to produce a tuple at offset %u which is not\n> frozen\n> tuple with uncommitted xmin %u has cmin %u, but was updated to produce\n> a tuple with cmin %u\n> non-heap-only update produced a heap-only tuple at offset %u\n> heap-only update produced a non-heap only tuple at offset %u\n>\n>\nAgree, Done.\n\n\n> + if (!TransactionIdIsValid(curr_xmax) &&\n> + HeapTupleHeaderIsHotUpdated(curr_htup))\n> + {\n> + report_corruption(&ctx,\n> + psprintf(\"Current tuple at offset %u is\n> HOT but is last tuple in the HOT chain.\",\n> + (unsigned) ctx.offnum));\n> + }\n>\n> This check has nothing to do with the predecessor[] array, so it seems\n> like it belongs in check_tuple() rather than here. Also, the message\n> is rather confused, because the test is checking whether the tuple has\n> been HOT-updated, while the message is talking about whether the tuple\n> was *itself* created by a HOT update. Also, when we're dealing with\n> corruption, statements like \"is last tuple in the HOT chain\" are\n> pretty ambiguous. Also, isn't this an issue for both HOT-updated\n> tuples and also just regular updated tuples? i.e. maybe what we should\n> be complaining about here is something like \"tuple has been updated,\n> but xmax is 0\" and then make the test check exactly that.\n>\n\nMoved to check_tuple_header. This should be applicable for both HOT and\nnormal updates but even the last updated tuple in the normal update is\nHEAP_UPDATED so not sure how we can apply this check for a normal update?\n\n+ if (!HeapTupleHeaderIsHotUpdated(pred_htup) &&\n> + HeapTupleHeaderIsHeapOnly(pred_htup) &&\n> + HeapTupleHeaderIsHeapOnly(curr_htup))\n> + {\n> + report_corruption(&ctx,\n> + psprintf(\"Current tuple at offset %u is\n> HOT but it is next updated tuple of last Tuple in HOT chain.\",\n> + (unsigned) ctx.offnum));\n> + }\n>\n> Three if-statements up, you tested two out of these three conditions\n> and complained if they were met. So any time this fires, that will\n> have also fired.\n>\n\nYes, the above condition is not required. Now removed.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 6 Sep 2022 16:04:10 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Himanshu,\n\nMany thanks for working on this!\n\n> Please find attached the patch with the above idea of HOT chain's validation\n\nPlease correct me if I'm wrong, but don't we have a race condition here:\n\n```\n+ if ((TransactionIdDidAbort(pred_xmin) ||\nTransactionIdIsInProgress(pred_xmin))\n+ && !TransactionIdEquals(pred_xmin, curr_xmin))\n {\n```\n\nThe scenario that concerns me is the following:\n\n1. TransactionIdDidAbort(pred_xmin) returns false\n2. The transaction aborts\n3. TransactionIdIsInProgress(pred_xmin) returns false\n4. (false || false) gives us false. An error is reported, although\nactually the condition should have been true.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:29:22 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi again,\n\n> I am looking into the process of adding the TAP test for these changes and finding a way to corrupt a page in the tap test\n\nPlease note that currently the patch breaks many existing tests. I\nsuggest fixing these first.\n\nFor the details please see the cfbot report [1] or execute the tests\nlocally. Personally I'm using a little script for this [2].\n\n[1]: http://cfbot.cputube.org/\n[2]: https://github.com/afiskon/pgscripts/blob/master/full-build.sh\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 6 Sep 2022 15:35:34 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi hackers,\n\n> Please correct me if I'm wrong, but don't we have a race condition here:\n>\n> ```\n> + if ((TransactionIdDidAbort(pred_xmin) ||\n> TransactionIdIsInProgress(pred_xmin))\n> + && !TransactionIdEquals(pred_xmin, curr_xmin))\n> {\n> ```\n>\n> The scenario that concerns me is the following:\n>\n> 1. TransactionIdDidAbort(pred_xmin) returns false\n> 2. The transaction aborts\n> 3. TransactionIdIsInProgress(pred_xmin) returns false\n> 4. (false || false) gives us false. An error is reported, although\n> actually the condition should have been true.\n\nIt looks like I had a slight brain fade here.\n\nIn order to report a false error either TransactionIdDidAbort() or\nTransactionIdIsInProgress() should return true and\nTransactionIdEquals() should be false. So actually under rare\nconditions the error will NOT be reported while it should. Other than\nthat we seem to be safe from the concurrency perspective, unless I'm\nmissing something again.\n\nPersonally I don't have a strong opinion on whether we should bother\nabout this scenario. Probably an explicit comment will not hurt.\n\nAlso I suggest checking TransactionIdEquals() first though since it's cheaper.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 6 Sep 2022 16:38:01 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 9:38 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > Please correct me if I'm wrong, but don't we have a race condition here:\n> >\n> > ```\n> > + if ((TransactionIdDidAbort(pred_xmin) ||\n> > TransactionIdIsInProgress(pred_xmin))\n> > + && !TransactionIdEquals(pred_xmin, curr_xmin))\n> > {\n> > ```\n>\n> It looks like I had a slight brain fade here.\n>\n> In order to report a false error either TransactionIdDidAbort() or\n> TransactionIdIsInProgress() should return true and\n> TransactionIdEquals() should be false. So actually under rare\n> conditions the error will NOT be reported while it should. Other than\n> that we seem to be safe from the concurrency perspective, unless I'm\n> missing something again.\n>\n> Personally I don't have a strong opinion on whether we should bother\n> about this scenario. Probably an explicit comment will not hurt.\n>\n> Also I suggest checking TransactionIdEquals() first though since it's cheaper.\n\nI think the check should be written like this:\n\n!TransactionIdEquals(pred_xmin, curr_xmin) && !TransctionIdDidCommit(pred_xmin)\n\nThe TransactionIdEquals check should be done first for the reason you\nstate: it's cheaper.\n\nI think that we shouldn't be using TransactionIdDidAbort() at all,\nbecause it can sometimes return false even when the transaction\nactually did abort. See test_lockmode_for_conflict() and\nTransactionIdIsInProgress() for examples of logic that copes with\nthis. What's happening here is that TransactionIdDidAbort doesn't ever\nget called if the system crashes while a transaction is running. So we\ncan use TransactionIdDidAbort() only as an optimization: if it returns\ntrue, the transaction is definitely aborted, but if it returns false,\nwe have to check whether it's still running. If not, it aborted\nanyway.\n\nTransactionIdDidCommit() does not have the same issue. A transaction\ncan abort without updating CLOG if the system crashes, but it can\nnever commit without updating CLOG. If the transaction didn't commit,\nthen it is either aborted or still in progress (and we don't care\nwhich, because neither is an error here).\n\nAs to whether the existing formulation of the test has an error\ncondition, you're generally right that we should test\nTransactionIdIsInProgress() before TransactionIdDidCommit/Abort,\nbecause we during commit or abort, we first set the status in CLOG -\nwhich is queried by TransactionIdDidCommit/Abort - and only afterward\nupdate the procarray - which is queried by TransactionIdIsInProgress.\nSo normally TransactionIdIsInProgress should be checked first, and\nTransactionIdDidCommit/Abort should only be checked if it returns\nfalse, at which point we know that the return values of the latter\ncalls can't ever change. Possibly there is an argument for including\nthe TransactionIdInProgress check here too:\n\n!TransactionIdEquals(pred_xmin, curr_xmin) &&\n(TransactionIdIsInProgress(pred_xmin) ||\n!TransctionIdDidCommit(pred_xmin))\n\n...but I don't think it could change the answer. Most places that\ncheck TransactionIdIsInProgress() first are concerned with MVCC\nsemantics, and here we are not. I think the only effects of including\nor excluding the TransactionIdIsInProgress() test are (1) performance,\nin that searching the procarray might avoid expense if it's cheaper\nthan searching clog, or add expense if the reverse is true and (2)\nslightly changing the time at which we're first able to detect this\nform of corruption. I am inclined to prefer the simpler form of the\ntest without TransactionIdIsInProgress() unless someone can say why we\nshouldn't go that route.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:41:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 6, 2022 at 6:34 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n>> This isn't safe because the line pointer referenced by rditem may not\n>> have been sanity-checked yet. Refer to the comment just below where it\n>> says \"Sanity-check the line pointer's offset and length values\".\n>>\n> handled by creating a new function check_lp and calling it before accessing the redirected tuple.\n\nI think this is going to result in duplicate error messages, because\nif A redirects to B, what keeps us from calling check_lp(B) once when\nwe reach A and again when we reach B?\n\nI am kind of generally suspicious of the idea that, both for redirects\nand for ctid links, you just have it check_lp() on the target line\npointer and then maybe try to skip doing that again later when we get\nthere. That feels error-prone to me. I think we should try to find a\nway of organizing the code where we do the check_lp() checks on all\nline pointers in order without skipping around or backing up. It's not\n100% clear to me what the best way of accomplishing that is, though.\n\nBut here's one random idea: add a successor[] array and an lp_valid[]\narray. In the first loop, set lp_valid[offset] = true if it passes the\ncheck_lp() checks, and set successor[A] = B if A redirects to B or has\na CTID link to B, without matching xmin/xmax. Then, in a second loop,\niterate over the successor[] array. If successor[A] = B && lp_valid[A]\n&& lp_valid[B], then check whether A.xmax = B.xmin; if so, then\ncomplain if predecessor[B] is already set, else set predecessor[B] =\nA. Then, in the third loop, iterate over the predecessor array just as\nyou're doing now. Then it's clear that we do the lp_valid checks\nexactly once for every offset that might need them, and in order. And\nit's also clear that the predecessor-based checks can never happen\nunless the lp_valid checks passed for both of the offsets involved.\n\n> Done, reformatted using pg_indent.\n\nThanks, but the new check_lp() function's declaration is not formatted\naccording to pgindent guidelines. It's not enough to fix the problems\nonce, you have to avoid reintroducing them.\n\n>> + if (!TransactionIdIsValid(curr_xmax) &&\n>> + HeapTupleHeaderIsHotUpdated(curr_htup))\n>> + {\n>> + report_corruption(&ctx,\n>> + psprintf(\"Current tuple at offset %u is\n>> HOT but is last tuple in the HOT chain.\",\n>> + (unsigned) ctx.offnum));\n>> + }\n>>\n>> This check has nothing to do with the predecessor[] array, so it seems\n>> like it belongs in check_tuple() rather than here. Also, the message\n>> is rather confused, because the test is checking whether the tuple has\n>> been HOT-updated, while the message is talking about whether the tuple\n>> was *itself* created by a HOT update. Also, when we're dealing with\n>> corruption, statements like \"is last tuple in the HOT chain\" are\n>> pretty ambiguous. Also, isn't this an issue for both HOT-updated\n>> tuples and also just regular updated tuples? i.e. maybe what we should\n>> be complaining about here is something like \"tuple has been updated,\n>> but xmax is 0\" and then make the test check exactly that.\n>\n> Moved to check_tuple_header. This should be applicable for both HOT and normal updates but even the last updated tuple in the normal update is HEAP_UPDATED so not sure how we can apply this check for a normal update?\n\nOh, yeah. You're right. I was thinking that HEAP_UPDATED was like\nHEAP_HOT_UPDATED, but it's not: HEAP_UPDATED gets set on the new\ntuple, while HEAP_HOT_UPDATED gets set on the old tuple.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 6 Sep 2022 17:19:00 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 2:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> But here's one random idea: add a successor[] array and an lp_valid[]\n> array. In the first loop, set lp_valid[offset] = true if it passes the\n> check_lp() checks, and set successor[A] = B if A redirects to B or has\n> a CTID link to B, without matching xmin/xmax. Then, in a second loop,\n> iterate over the successor[] array. If successor[A] = B && lp_valid[A]\n> && lp_valid[B], then check whether A.xmax = B.xmin; if so, then\n> complain if predecessor[B] is already set, else set predecessor[B] =\n> A. Then, in the third loop, iterate over the predecessor array just as\n> you're doing now. Then it's clear that we do the lp_valid checks\n> exactly once for every offset that might need them, and in order. And\n> it's also clear that the predecessor-based checks can never happen\n> unless the lp_valid checks passed for both of the offsets involved.\n>\n>\n>\nApproach of introducing a successor array is good but I see one overhead\nwith having both successor and predecessor array, that is, we will traverse\neach offset on page thrice(one more for original loop on offset) and with\neach offset we have to retrieve\n/reach an ItemID(PageGetItemId) and Item(PageGetItem) itself. This is not\nmuch overhead as they are all preprocessors but there will be some overhead.\nHow about having new array(initialised with LP_NOT_CHECKED) of enum\nLPStatus as below\n\ntypedef enum LPStatus\n{\nLP_NOT_CHECKED,\nLP_VALID,\nLP_NOT_VALID\n}LPStatus;\n\nand validating and setting with proper status at three places\n1) while validating Redirect Tuple\n2) while validating populating predecessor array and\n3) at original place of \"sanity check\"\n\n\nsomething like:\n\" if (lpStatus[rdoffnum] == LP_NOT_CHECKED)\n {\n ctx.offnum = rdoffnum;\n if (!check_lp(&ctx,\nItemIdGetLength(rditem), ItemIdGetOffset(rditem)))\n {\n lpStatus[rdoffnum] =\nLP_NOT_VALID;\n continue;\n }\n lpStatus[rdoffnum] = LP_VALID;\n }\n else if (lpStatus[rdoffnum] == LP_NOT_VALID)\n continue;\n\"\n\n\n\nthoughts?\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 9 Sep 2022 19:30:07 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Fri, Sep 9, 2022 at 10:00 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Approach of introducing a successor array is good but I see one overhead with having both successor and predecessor array, that is, we will traverse each offset on page thrice(one more for original loop on offset) and with each offset we have to retrieve\n> /reach an ItemID(PageGetItemId) and Item(PageGetItem) itself. This is not much overhead as they are all preprocessors but there will be some overhead.\n> How about having new array(initialised with LP_NOT_CHECKED) of enum LPStatus as below\n>\n> typedef enum LPStatus\n> {\n> LP_NOT_CHECKED,\n> LP_VALID,\n> LP_NOT_VALID\n> }LPStatus;\n>\n> and validating and setting with proper status at three places\n> 1) while validating Redirect Tuple\n> 2) while validating populating predecessor array and\n> 3) at original place of \"sanity check\"\n\nWell, having to duplicate the logic in three places doesn't seem all\nthat clean to me. Admittedly, I haven't tried implementing my\nproposal, so maybe that doesn't come out very clean either. I don't\nknow. But I think having the code be clearly correct here is the most\nimportant thing, not shaving a few CPU cycles here or there. It's not\neven clear to me that your way would be cheaper, because an \"if\"\nstatement is certainly not free, and in fact is probably more\nexpensive than an extra call to PageGetItem() or PageGetItemId().\nBranches are usually more expensive than math. But actually I doubt\nthat it matters much either way. I think the way to figure out whether\nwe have a performance problem is to write the code in the\nstylistically best way and then test it. There may be no problem at\nall, and if there is a problem, it may not be where we think it will\nbe.\n\nIn short, let's apply Knuth's optimization principle.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 9 Sep 2022 10:53:47 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 2:49 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> But here's one random idea: add a successor[] array and an lp_valid[]\n> array. In the first loop, set lp_valid[offset] = true if it passes the\n> check_lp() checks, and set successor[A] = B if A redirects to B or has\n> a CTID link to B, without matching xmin/xmax. Then, in a second loop,\n> iterate over the successor[] array. If successor[A] = B && lp_valid[A]\n> && lp_valid[B], then check whether A.xmax = B.xmin; if so, then\n> complain if predecessor[B] is already set, else set predecessor[B] =\n> A. Then, in the third loop, iterate over the predecessor array just as\n> you're doing now. Then it's clear that we do the lp_valid checks\n> exactly once for every offset that might need them, and in order. And\n> it's also clear that the predecessor-based checks can never happen\n> unless the lp_valid checks passed for both of the offsets involved.\n>\n>\nok, I have introduced a new approach to first construct a successor array\nand then loop over the successor array to construct a predecessor array.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Sep 2022 13:58:03 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Sep 7, 2022 at 12:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I think the check should be written like this:\n>\n> !TransactionIdEquals(pred_xmin, curr_xmin) &&\n> !TransctionIdDidCommit(pred_xmin)\n>\n> The TransactionIdEquals check should be done first for the reason you\n> state: it's cheaper.\n>\n> I think that we shouldn't be using TransactionIdDidAbort() at all,\n> because it can sometimes return false even when the transaction\n> actually did abort. See test_lockmode_for_conflict() and\n> TransactionIdIsInProgress() for examples of logic that copes with\n> this. What's happening here is that TransactionIdDidAbort doesn't ever\n> get called if the system crashes while a transaction is running. So we\n> can use TransactionIdDidAbort() only as an optimization: if it returns\n> true, the transaction is definitely aborted, but if it returns false,\n> we have to check whether it's still running. If not, it aborted\n> anyway.\n>\n> TransactionIdDidCommit() does not have the same issue. A transaction\n> can abort without updating CLOG if the system crashes, but it can\n> never commit without updating CLOG. If the transaction didn't commit,\n> then it is either aborted or still in progress (and we don't care\n> which, because neither is an error here).\n>\n> As to whether the existing formulation of the test has an error\n> condition, you're generally right that we should test\n> TransactionIdIsInProgress() before TransactionIdDidCommit/Abort,\n> because we during commit or abort, we first set the status in CLOG -\n> which is queried by TransactionIdDidCommit/Abort - and only afterward\n> update the procarray - which is queried by TransactionIdIsInProgress.\n> So normally TransactionIdIsInProgress should be checked first, and\n> TransactionIdDidCommit/Abort should only be checked if it returns\n> false, at which point we know that the return values of the latter\n> calls can't ever change. 
Possibly there is an argument for including\n> the TransactionIdInProgress check here too:\n>\n> !TransactionIdEquals(pred_xmin, curr_xmin) &&\n> (TransactionIdIsInProgress(pred_xmin) ||\n> !TransctionIdDidCommit(pred_xmin))\n>\n> ...but I don't think it could change the answer. Most places that\n> check TransactionIdIsInProgress() first are concerned with MVCC\n> semantics, and here we are not. I think the only effects of including\n> or excluding the TransactionIdIsInProgress() test are (1) performance,\n> in that searching the procarray might avoid expense if it's cheaper\n> than searching clog, or add expense if the reverse is true and (2)\n> slightly changing the time at which we're first able to detect this\n> form of corruption. I am inclined to prefer the simpler form of the\n> test without TransactionIdIsInProgress() unless someone can say why we\n> shouldn't go that route.\n>\n> Done, updated in the v3 patch.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Sep 2022 14:03:11 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Himanshu,\n\n> Done, updated in the v3 patch.\n\nThanks for the updated patch.\n\nHere is v4 with fixed compiler warnings and some minor tweaks from me.\n\nI didn't put too much thought into the algorithm but I already see\nsomething strange. At verify_heapam.c:553 you declared curr_xmax and\nnext_xmin. However the variables are not used/initialized until you\ndo:\n\n```\n if (lp_valid[nextoffnum] && lp_valid[ctx.offnum] &&\n TransactionIdIsValid(curr_xmax) &&\n TransactionIdEquals(curr_xmax, next_xmin)) {\n/* ... */\n```\n\nIn v4 I elected to initialize both curr_xmax and next_xmin with\nInvalidTransactionId for safety and in order to silence the compiler\nbut still there is no way this condition can succeed.\n\nPlease make sure there is no logic missing.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Mon, 19 Sep 2022 17:57:36 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 8:27 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Himanshu,\n>\n> > Done, updated in the v3 patch.\n>\n> Thanks for the updated patch.\n>\n> Here is v4 with fixed compiler warnings and some minor tweaks from me.\n>\n> I didn't put too much thought into the algorithm but I already see\n> something strange. At verify_heapam.c:553 you declared curr_xmax and\n> next_xmin. However the variables are not used/initialized until you\n> do:\n>\n> ```\n> if (lp_valid[nextoffnum] && lp_valid[ctx.offnum] &&\n> TransactionIdIsValid(curr_xmax) &&\n> TransactionIdEquals(curr_xmax, next_xmin)) {\n> /* ... */\n> ```\n>\n> In v4 I elected to initialize both curr_xmax and next_xmin with\n> InvalidTransactionId for safety and in order to silence the compiler\n> but still there is no way this condition can succeed.\n>\n> Please make sure there is no logic missing.\n>\n>\nHi Aleksander,\n\nThanks for sharing the feedback,\nIt's my mistake, sorry about that, I was trying to merge two if conditions\nand forgot to move the initialization part for xmin and xmax. Now I think\nthat it will be good to have nested if, and have an inner if condition to\ntest xmax and xmin matching. This way we can retrieve and populate xmin and\nxmax when it is actually required for the inner if condition.\nI have changed this in the attached patch.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 19 Sep 2022 21:50:40 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Himanshu,\n\n> I have changed this in the attached patch.\n\nIf it's not too much trouble could you please base your changes on v4\nthat I submitted? I put some effort into writing a proper commit\nmessage, editing the comments, etc. The easiest way of doing this is\nusing `git am` and `git format-patch`.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 19 Sep 2022 19:34:05 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Sep 19, 2022 at 10:04 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\n> Hi Himanshu,\n>\n> > I have changed this in the attached patch.\n>\n> If it's not too much trouble could you please base your changes on v4\n> that I submitted? I put some effort into writing a proper commit\n> message, editing the comments, etc. The easiest way of doing this is\n> using `git am` and `git format-patch`.\n>\n> Please find it attached.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 20 Sep 2022 14:29:52 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 5:00 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Please find it attached.\n\nThis patch still has no test cases. Just as we have test cases for the\nexisting corruption checks, we should have test cases for these new\ncorruption checks, showing cases where they actually fire.\n\nI think I would be inclined to set lp_valid[x] = true in both the\nredirected and non-redirected case, and then have the very first thing\nthat the second loop does be if (nextoffnum == 0 ||\n!lp_valid[ctx.offnum]) continue. I think that would be more clear\nabout the intent to ignore line pointers that failed validation. Also,\nif you did it that way, then you could catch the case of a redirected\nline pointer pointing to another redirected line pointer, which is a\ncorruption condition that the current code does not appear to check.\n\n+ /*\n+ * Validation via the predecessor array. 1) If the predecessor's\n+ * xmin is aborted or in progress, the current tuples xmin should\n+ * be aborted or in progress respectively. Also both xmin's must\n+ * be equal. 2) If the predecessor's xmin is not frozen, then\n+ * current tuple's shouldn't be either. 3) If the predecessor's\n+ * xmin is equal to the current tuple's xmin, the current tuple's\n+ * cmin should be greater than the predecessor's cmin. 4) If the\n+ * current tuple is not HOT then its predecessor's tuple must not\n+ * be HEAP_HOT_UPDATED. 
5) If the current tuple is HOT then its\n+ * predecessor's tuple must be HEAP_HOT_UPDATED.\n+ */\n\nThis comment needs to be split up into pieces and the pieces need to\nbe moved closer to the tests to which they correspond.\n\n+ psprintf(\"unfrozen tuple was\nupdated to produce a tuple at offset %u which is not frozen\",\n\nShouldn't this say \"which is frozen\"?\n\n+ * Not a corruption if current tuple is updated/deleted by a\n+ * different transaction, means t_cid will point to cmax (that is\n+ * command id of deleting transaction) and cid of predecessor not\n+ * necessarily will be smaller than cid of current tuple. t_cid\n\nI think that the next person who reads this code is likely to\nunderstand that the CIDs of different transactions are numerically\nunrelated. What's less obvious is that if the XID is the same, the\nnewer update must have a higher CID.\n\n+ * can hold combo command id but we are not worrying here since\n+ * combo command id of the next updated tuple (if present) must be\n+ * greater than combo command id of the current tuple. So here we\n+ * are not checking HEAP_COMBOCID flag and simply doing t_cid\n+ * comparison.\n\nI disapprove of ignoring the HEAP_COMBOCID flag. Emitting a message\nclaiming that the CID has a certain value when that's actually a combo\nCID is misleading, so at least a different message wording is needed\nin such cases. But it's also not clear to me that the newer update has\nto have a higher combo CID, because combo CIDs can be reused. If you\nhave multiple cursors open in the same transaction, the updates can be\ninterleaved, and it seems to me that it might be possible for an older\nCID to have created a certain combo CID after a newer CID, and then\nboth cursors could update the same page in succession and end up with\ncombo CIDs out of numerical order. 
Unless somebody can make a\nconvincing argument that this isn't possible, I think we should just\nskip this check for cases where either tuple has a combo CID.\n\n+ if (TransactionIdEquals(pred_xmin, curr_xmin) &&\n+ (TransactionIdEquals(curr_xmin, curr_xmax) ||\n+ !TransactionIdIsValid(curr_xmax)) && pred_cmin >= curr_cmin)\n\nI don't understand the reason for the middle part of this condition --\nTransactionIdEquals(curr_xmin, curr_xmax) ||\n!TransactionIdIsValid(curr_xmax). I suppose the comment is meant to\nexplain this, but I still don't get it. If a tuple with XMIN 12345\nCMIN 2 is updated to produce a tuple with XMIN 12345 CMIN 1, that's\ncorruption, regardless of what the XMAX of the second tuple may happen\nto be.\n\n+ if (HeapTupleHeaderIsHeapOnly(curr_htup) &&\n+ !HeapTupleHeaderIsHotUpdated(pred_htup))\n\n+ if (!HeapTupleHeaderIsHeapOnly(curr_htup) &&\n+ HeapTupleHeaderIsHotUpdated(pred_htup))\n\nI think it would be slightly clearer to write these tests the other\nway around i.e. check the previous tuple's state first.\n\n+ if (!TransactionIdIsValid(curr_xmax) &&\nHeapTupleHeaderIsHotUpdated(tuphdr))\n+ {\n+ report_corruption(ctx,\n+ psprintf(\"tuple has been updated, but xmax is 0\"));\n+ result = false;\n+ }\n\nI guess this message needs to say \"tuple has been HOT updated, but\nxmax is 0\" or something like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Sep 2022 09:13:19 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 6:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> I disapprove of ignoring the HEAP_COMBOCID flag. Emitting a message\n> claiming that the CID has a certain value when that's actually a combo\n> CID is misleading, so at least a different message wording is needed\n> in such cases. But it's also not clear to me that the newer update has\n> to have a higher combo CID, because combo CIDs can be reused. If you\n> have multiple cursors open in the same transaction, the updates can be\n> interleaved, and it seems to me that it might be possible for an older\n> CID to have created a certain combo CID after a newer CID, and then\n> both cursors could update the same page in succession and end up with\n> combo CIDs out of numerical order. Unless somebody can make a\n> convincing argument that this isn't possible, I think we should just\n> skip this check for cases where either tuple has a combo CID.\n>\n> Here our objective is to validate if both Predecessor's xmin and current\nTuple's xmin are same then cmin of predecessor must be less than current\nTuple's cmin. In case when both tuple xmin's are same then I think\npredecessor's t_cid will always hold combo CID.\nThen either one or both tuple will always have a combo CID and skipping\nthis check based on \"either tuple has a combo CID\" will make this if\ncondition to be evaluated to false ''.\n\n\n> + if (TransactionIdEquals(pred_xmin, curr_xmin) &&\n> + (TransactionIdEquals(curr_xmin, curr_xmax) ||\n> + !TransactionIdIsValid(curr_xmax)) && pred_cmin >=\n> curr_cmin)\n>\n> I don't understand the reason for the middle part of this condition --\n> TransactionIdEquals(curr_xmin, curr_xmax) ||\n> !TransactionIdIsValid(curr_xmax). I suppose the comment is meant to\n> explain this, but I still don't get it. 
If a tuple with XMIN 12345\n> CMIN 2 is updated to produce a tuple with XMIN 12345 CMIN 1, that's\n> corruption, regardless of what the XMAX of the second tuple may happen\n> to be.\n>\n\ntuple | t_xmin | t_xmax | t_cid | t_ctid | tuple_data_split\n |\nheap_tuple_infomask_flags\n\n-------+--------+--------+-------+--------+---------------------------------------------+------------------------------------------------------------------------------------------------------------------\n-------------\n 1 | 971 | 971 | 0 | (0,3) |\n{\"\\\\x1774657374312020202020\",\"\\\\x01000000\"} |\n(\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_HOT_UPDATED}\",{})\n 2 | 971 | 0 | 1 | (0,2) |\n{\"\\\\x1774657374322020202020\",\"\\\\x02000000\"} |\n(\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID}\",{})\n 3 | 971 | 971 | 1 | (0,4) |\n{\"\\\\x1774657374322020202020\",\"\\\\x01000000\"} |\n(\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY\n_TUPLE}\",{})\n 4 | 971 | 972 | 0 | (0,5) |\n{\"\\\\x1774657374332020202020\",\"\\\\x01000000\"} |\n(\"{HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{})\n 5 | 972 | 0 | 0 | (0,5) |\n{\"\\\\x1774657374342020202020\",\"\\\\x01000000\"} |\n(\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})\n\nIn the above case Tuple 1->3->4 is inserted and updated by xid 971 and\ntuple 4 is next update by xid 972, here t_cid of tuple 4 is 0 where as its\npredecessor's t_cid is 1, because in Tuple 4 t_cid is having command ID of\ndeleting transaction(cmax), that is why we need to check xmax of the Tuple.\n\nPlease correct me if I am missing anything here?\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sat, 24 Sep 2022 18:14:51 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Sat, Sep 24, 2022 at 8:45 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Here our objective is to validate if both Predecessor's xmin and current Tuple's xmin are same then cmin of predecessor must be less than current Tuple's cmin. In case when both tuple xmin's are same then I think predecessor's t_cid will always hold combo CID.\n> Then either one or both tuple will always have a combo CID and skipping this check based on \"either tuple has a combo CID\" will make this if condition to be evaluated to false ''.\n\nFair point. I think maybe we should just remove the CID validation\naltogether. I thought initially that it would be possible to have a\nnewer update with a numerically lower combo CID, but after some\nexperimentation I don't see a way to do it. However, it also doesn't\nseem very useful to me to check that the combo CIDs are in ascending\norder. I mean, even if that's not supposed to happen and does anyway,\nthere aren't really any enduring consequences, because command IDs are\nignored completely outside of the transaction that performed the\noperation originally. So even if the combo CIDs were set to completely\nrandom values, is that really corruption? At most it messes things up\nfor the duration of one transaction. And if we just have plain CIDs\nrather than combo CIDs, the same thing is true: they could be totally\nmessed up and it wouldn't really matter beyond the lifetime of that\none transaction.\n\nAlso, it would be a lot more tempting to check this if we could check\nit in all cases, but we can't. If a tuple is inserted in transaction\nT1 and ten updated twice in transaction T2, we'll have only one combo\nCID and nothing to compare it against, nor any way to decode what CMIN\nand CMAX it originally represented. 
And this is probably a pretty\ncommon type of case.\n\n>> + if (TransactionIdEquals(pred_xmin, curr_xmin) &&\n>> + (TransactionIdEquals(curr_xmin, curr_xmax) ||\n>> + !TransactionIdIsValid(curr_xmax)) && pred_cmin >= curr_cmin)\n>>\n>> I don't understand the reason for the middle part of this condition --\n>> TransactionIdEquals(curr_xmin, curr_xmax) ||\n>> !TransactionIdIsValid(curr_xmax). I suppose the comment is meant to\n>> explain this, but I still don't get it. If a tuple with XMIN 12345\n>> CMIN 2 is updated to produce a tuple with XMIN 12345 CMIN 1, that's\n>> corruption, regardless of what the XMAX of the second tuple may happen\n>> to be.\n>\n> tuple | t_xmin | t_xmax | t_cid | t_ctid | tuple_data_split | heap_tuple_infomask_flags\n>\n> -------+--------+--------+-------+--------+---------------------------------------------+------------------------------------------------------------------------------------------------------------------\n> -------------\n> 1 | 971 | 971 | 0 | (0,3) | {\"\\\\x1774657374312020202020\",\"\\\\x01000000\"} | (\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_HOT_UPDATED}\",{})\n> 2 | 971 | 0 | 1 | (0,2) | {\"\\\\x1774657374322020202020\",\"\\\\x02000000\"} | (\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID}\",{})\n> 3 | 971 | 971 | 1 | (0,4) | {\"\\\\x1774657374322020202020\",\"\\\\x01000000\"} | (\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY\n> _TUPLE}\",{})\n> 4 | 971 | 972 | 0 | (0,5) | {\"\\\\x1774657374332020202020\",\"\\\\x01000000\"} | (\"{HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{})\n> 5 | 972 | 0 | 0 | (0,5) | {\"\\\\x1774657374342020202020\",\"\\\\x01000000\"} | (\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})\n>\n> In the above case Tuple 1->3->4 is inserted and updated by xid 971 and tuple 4 is next update by xid 972, here t_cid of tuple 4 is 0 where as its 
predecessor's t_cid is 1, because in Tuple 4 t_cid is having command ID of deleting transaction(cmax), that is why we need to check xmax of the Tuple.\n>\n> Please correct me if I am missing anything here?\n\nHmm, I see, so basically you're trying to check whether the CID field\ncontains a CMIN as opposed to a CMAX. But I'm not sure this test is\nentirely reliable, because heap_prepare/execute_freeze_tuple() can set\na tuple's xmax to InvalidTransactionId even after it's had some other\nvalue, and that won't do anything to the contents of the CID field.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:05:46 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 27, 2022 at 1:35 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sat, Sep 24, 2022 at 8:45 AM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> > Here our objective is to validate if both Predecessor's xmin and current\n> Tuple's xmin are same then cmin of predecessor must be less than current\n> Tuple's cmin. In case when both tuple xmin's are same then I think\n> predecessor's t_cid will always hold combo CID.\n> > Then either one or both tuple will always have a combo CID and skipping\n> this check based on \"either tuple has a combo CID\" will make this if\n> condition to be evaluated to false ''.\n>\n> Fair point. I think maybe we should just remove the CID validation\n> altogether. I thought initially that it would be possible to have a\n> newer update with a numerically lower combo CID, but after some\n> experimentation I don't see a way to do it. However, it also doesn't\n> seem very useful to me to check that the combo CIDs are in ascending\n> order. I mean, even if that's not supposed to happen and does anyway,\n> there aren't really any enduring consequences, because command IDs are\n> ignored completely outside of the transaction that performed the\n> operation originally. So even if the combo CIDs were set to completely\n> random values, is that really corruption? At most it messes things up\n> for the duration of one transaction. And if we just have plain CIDs\n> rather than combo CIDs, the same thing is true: they could be totally\n> messed up and it wouldn't really matter beyond the lifetime of that\n> one transaction.\n>\n> Also, it would be a lot more tempting to check this if we could check\n> it in all cases, but we can't. If a tuple is inserted in transaction\n> T1 and ten updated twice in transaction T2, we'll have only one combo\n> CID and nothing to compare it against, nor any way to decode what CMIN\n> and CMAX it originally represented. 
And this is probably a pretty\n> common type of case.\n>\n> ok, I will be removing this entire validation of cmin/cid in my next patch.\n\n\n> >> + if (TransactionIdEquals(pred_xmin, curr_xmin) &&\n> >> + (TransactionIdEquals(curr_xmin, curr_xmax) ||\n> >> + !TransactionIdIsValid(curr_xmax)) && pred_cmin >=\n> curr_cmin)\n> >>\n> >> I don't understand the reason for the middle part of this condition --\n> >> TransactionIdEquals(curr_xmin, curr_xmax) ||\n> >> !TransactionIdIsValid(curr_xmax). I suppose the comment is meant to\n> >> explain this, but I still don't get it. If a tuple with XMIN 12345\n> >> CMIN 2 is updated to produce a tuple with XMIN 12345 CMIN 1, that's\n> >> corruption, regardless of what the XMAX of the second tuple may happen\n> >> to be.\n> >\n> > tuple | t_xmin | t_xmax | t_cid | t_ctid |\n> tuple_data_split |\n> heap_tuple_infomask_flags\n> >\n> >\n> -------+--------+--------+-------+--------+---------------------------------------------+------------------------------------------------------------------------------------------------------------------\n> > -------------\n> > 1 | 971 | 971 | 0 | (0,3) |\n> {\"\\\\x1774657374312020202020\",\"\\\\x01000000\"} |\n> (\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_HOT_UPDATED}\",{})\n> > 2 | 971 | 0 | 1 | (0,2) |\n> {\"\\\\x1774657374322020202020\",\"\\\\x02000000\"} |\n> (\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID}\",{})\n> > 3 | 971 | 971 | 1 | (0,4) |\n> {\"\\\\x1774657374322020202020\",\"\\\\x01000000\"} |\n> (\"{HEAP_HASVARWIDTH,HEAP_COMBOCID,HEAP_XMIN_COMMITTED,HEAP_XMAX_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY\n> > _TUPLE}\",{})\n> > 4 | 971 | 972 | 0 | (0,5) |\n> {\"\\\\x1774657374332020202020\",\"\\\\x01000000\"} |\n> (\"{HEAP_HASVARWIDTH,HEAP_XMIN_COMMITTED,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{})\n> > 5 | 972 | 0 | 0 | (0,5) |\n> {\"\\\\x1774657374342020202020\",\"\\\\x01000000\"} |\n> 
(\"{HEAP_HASVARWIDTH,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})\n> >\n> > In the above case Tuple 1->3->4 is inserted and updated by xid 971 and\n> tuple 4 is next update by xid 972, here t_cid of tuple 4 is 0 where as its\n> predecessor's t_cid is 1, because in Tuple 4 t_cid is having command ID of\n> deleting transaction(cmax), that is why we need to check xmax of the Tuple.\n> >\n> > Please correct me if I am missing anything here?\n>\n> Hmm, I see, so basically you're trying to check whether the CID field\n> contains a CMIN as opposed to a CMAX. But I'm not sure this test is\n> entirely reliable, because heap_prepare/execute_freeze_tuple() can set\n> a tuple's xmax to InvalidTransactionId even after it's had some other\n> value, and that won't do anything to the contents of the CID field.\n>\n\nok, Got it, as we are removing this cmin/cid validation so we don't need\nany change here.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
But I'm not sure this test is\nentirely reliable, because heap_prepare/execute_freeze_tuple() can set\na tuple's xmax to InvalidTransactionId even after it's had some other\nvalue, and that won't do anything to the contents of the CID field. ok, Got it, as we are removing this cmin/cid validation so we don't need any change here. -- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 27 Sep 2022 12:10:01 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Sep 20, 2022 at 6:43 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Sep 20, 2022 at 5:00 AM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> > Please find it attached.\n>\n> This patch still has no test cases. Just as we have test cases for the\n> existing corruption checks, we should have test cases for these new\n> corruption checks, showing cases where they actually fire.\n>\n> Test cases are now part of this v6 patch.\n\n\n> I think I would be inclined to set lp_valid[x] = true in both the\n> redirected and non-redirected case, and then have the very first thing\n> that the second loop does be if (nextoffnum == 0 ||\n> !lp_valid[ctx.offnum]) continue. I think that would be more clear\n> about the intent to ignore line pointers that failed validation. Also,\n> if you did it that way, then you could catch the case of a redirected\n> line pointer pointing to another redirected line pointer, which is a\n> corruption condition that the current code does not appear to check.\n>\n> Yes, it's a good idea to do this additional validation with a redirected\nline pointer. Done.\n\n> + /*\n> + * Validation via the predecessor array. 1) If the\n> predecessor's\n> + * xmin is aborted or in progress, the current tuples xmin\n> should\n> + * be aborted or in progress respectively. Also both xmin's\n> must\n> + * be equal. 2) If the predecessor's xmin is not frozen, then\n> + * current tuple's shouldn't be either. 3) If the\n> predecessor's\n> + * xmin is equal to the current tuple's xmin, the current\n> tuple's\n> + * cmin should be greater than the predecessor's cmin. 4) If\n> the\n> + * current tuple is not HOT then its predecessor's tuple must\n> not\n> + * be HEAP_HOT_UPDATED. 
5) If the current tuple is HOT then\n> its\n> + * predecessor's tuple must be HEAP_HOT_UPDATED.\n> + */\n>\n> This comment needs to be split up into pieces and the pieces need to\n> be moved closer to the tests to which they correspond.\n>\n> Done.\n\n\n> + psprintf(\"unfrozen tuple was\n> updated to produce a tuple at offset %u which is not frozen\",\n>\nShouldn't this say \"which is frozen\"?\n>\n> Done.\n\n\n> + * Not a corruption if current tuple is updated/deleted by a\n> + * different transaction, means t_cid will point to cmax\n> (that is\n> + * command id of deleting transaction) and cid of predecessor\n> not\n> + * necessarily will be smaller than cid of current tuple.\n> t_cid\n>\n> I think that the next person who reads this code is likely to\n> understand that the CIDs of different transactions are numerically\n> unrelated. What's less obvious is that if the XID is the same, the\n> newer update must have a higher CID.\n>\n> + * can hold combo command id but we are not worrying here\n> since\n> + * combo command id of the next updated tuple (if present)\n> must be\n> + * greater than combo command id of the current tuple. So\n> here we\n> + * are not checking HEAP_COMBOCID flag and simply doing t_cid\n> + * comparison.\n>\n> I disapprove of ignoring the HEAP_COMBOCID flag. Emitting a message\n> claiming that the CID has a certain value when that's actually a combo\n> CID is misleading, so at least a different message wording is needed\n> in such cases. But it's also not clear to me that the newer update has\n> to have a higher combo CID, because combo CIDs can be reused. If you\n> have multiple cursors open in the same transaction, the updates can be\n> interleaved, and it seems to me that it might be possible for an older\n> CID to have created a certain combo CID after a newer CID, and then\n> both cursors could update the same page in succession and end up with\n> combo CIDs out of numerical order. 
Unless somebody can make a\n> convincing argument that this isn't possible, I think we should just\n> skip this check for cases where either tuple has a combo CID.\n>\n> + if (TransactionIdEquals(pred_xmin, curr_xmin) &&\n> + (TransactionIdEquals(curr_xmin, curr_xmax) ||\n> + !TransactionIdIsValid(curr_xmax)) && pred_cmin >=\n> curr_cmin)\n>\n> I don't understand the reason for the middle part of this condition --\n> TransactionIdEquals(curr_xmin, curr_xmax) ||\n> !TransactionIdIsValid(curr_xmax). I suppose the comment is meant to\n> explain this, but I still don't get it. If a tuple with XMIN 12345\n> CMIN 2 is updated to produce a tuple with XMIN 12345 CMIN 1, that's\n> corruption, regardless of what the XMAX of the second tuple may happen\n> to be.\n>\n> As discussed in our last discussion, I am removing this check altogether.\n\n\n> + if (HeapTupleHeaderIsHeapOnly(curr_htup) &&\n> + !HeapTupleHeaderIsHotUpdated(pred_htup))\n>\n> + if (!HeapTupleHeaderIsHeapOnly(curr_htup) &&\n> + HeapTupleHeaderIsHotUpdated(pred_htup))\n>\n> I think it would be slightly clearer to write these tests the other\n> way around i.e. check the previous tuple's state first.\n>\n> Done.\n\n\n> + if (!TransactionIdIsValid(curr_xmax) &&\n> HeapTupleHeaderIsHotUpdated(tuphdr))\n> + {\n> + report_corruption(ctx,\n> + psprintf(\"tuple has been updated, but xmax is\n> 0\"));\n> + result = false;\n> + }\n>\n> I guess this message needs to say \"tuple has been HOT updated, but\n> xmax is 0\" or something like that.\n>\n> Done.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Sep 2022 19:24:36 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Himanshu,\n\n> Test cases are now part of this v6 patch.\n\nI believe the patch is in pretty good shape now. I'm going to change\nits status to \"Ready for Committer\" soon unless there are going to be\nany objections.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 9 Nov 2022 17:06:58 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nTo start with: I think this is an extremely helpful and important\nfeature. Both for checking production systems and for finding problems during\ndevelopment.\n\n\n> From 08fe01f5073c0a850541265494bb4a875bec7d3f Mon Sep 17 00:00:00 2001\n> From: Himanshu Upadhyaya <himanshu.upadhyaya@enterprisedb.com>\n> Date: Fri, 30 Sep 2022 17:44:56 +0530\n> Subject: [PATCH v6] Implement HOT chain validation in verify_heapam()\n> \n> Himanshu Upadhyaya, reviewed by Robert Haas, Aleksander Alekseev\n> \n> Discussion: https://postgr.es/m/CAPF61jBBR2-iE-EmN_9v0hcQEfyz_17e5Lbb0%2Bu2%3D9ukA9sWmQ%40mail.gmail.com\n> ---\n> contrib/amcheck/verify_heapam.c | 207 ++++++++++++++++++++++\n> src/bin/pg_amcheck/t/004_verify_heapam.pl | 192 ++++++++++++++++++--\n> 2 files changed, 388 insertions(+), 11 deletions(-)\n> \n> diff --git a/contrib/amcheck/verify_heapam.c b/contrib/amcheck/verify_heapam.c\n> index c875f3e5a2..007f7b2f37 100644\n> --- a/contrib/amcheck/verify_heapam.c\n> +++ b/contrib/amcheck/verify_heapam.c\n> @@ -399,6 +399,9 @@ verify_heapam(PG_FUNCTION_ARGS)\n> \tfor (ctx.blkno = first_block; ctx.blkno <= last_block; ctx.blkno++)\n> \t{\n> \t\tOffsetNumber maxoff;\n> +\t\tOffsetNumber predecessor[MaxOffsetNumber] = {0};\n> +\t\tOffsetNumber successor[MaxOffsetNumber] = {0};\n> +\t\tbool\t\tlp_valid[MaxOffsetNumber] = {false};\n> \n> \t\tCHECK_FOR_INTERRUPTS();\n> \n> @@ -433,6 +436,8 @@ verify_heapam(PG_FUNCTION_ARGS)\n> \t\tfor (ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\n> \t\t\t ctx.offnum = OffsetNumberNext(ctx.offnum))\n> \t\t{\n> +\t\t\tOffsetNumber nextoffnum;\n> +\n> \t\t\tctx.itemid = PageGetItemId(ctx.page, ctx.offnum);\n> \n> \t\t\t/* Skip over unused/dead line pointers */\n> @@ -469,6 +474,13 @@ verify_heapam(PG_FUNCTION_ARGS)\n> \t\t\t\t\treport_corruption(&ctx,\n> \t\t\t\t\t\t\t\t\t psprintf(\"line pointer redirection to unused item at offset %u\",\n> \t\t\t\t\t\t\t\t\t\t\t (unsigned) rdoffnum));\n> +\n> +\t\t\t\t/*\n> +\t\t\t\t * 
make entry in successor array, redirected tuple will be\n> +\t\t\t\t * validated at the time when we loop over successor array\n> +\t\t\t\t */\n> +\t\t\t\tsuccessor[ctx.offnum] = rdoffnum;\n> +\t\t\t\tlp_valid[ctx.offnum] = true;\n> \t\t\t\tcontinue;\n> \t\t\t}\n> \n> @@ -504,9 +516,197 @@ verify_heapam(PG_FUNCTION_ARGS)\n> \t\t\t/* It should be safe to examine the tuple's header, at least */\n> \t\t\tctx.tuphdr = (HeapTupleHeader) PageGetItem(ctx.page, ctx.itemid);\n> \t\t\tctx.natts = HeapTupleHeaderGetNatts(ctx.tuphdr);\n> +\t\t\tlp_valid[ctx.offnum] = true;\n> \n> \t\t\t/* Ok, ready to check this next tuple */\n> \t\t\tcheck_tuple(&ctx);\n> +\n> +\t\t\t/*\n> +\t\t\t * Add the data to the successor array if next updated tuple is in\n> +\t\t\t * the same page. It will be used later to generate the\n> +\t\t\t * predecessor array.\n> +\t\t\t *\n> +\t\t\t * We need to access the tuple's header to populate the\n> +\t\t\t * predecessor array. However the tuple is not necessarily sanity\n> +\t\t\t * checked yet so delaying construction of predecessor array until\n> +\t\t\t * all tuples are sanity checked.\n> +\t\t\t */\n> +\t\t\tnextoffnum = ItemPointerGetOffsetNumber(&(ctx.tuphdr)->t_ctid);\n> +\t\t\tif (ItemPointerGetBlockNumber(&(ctx.tuphdr)->t_ctid) == ctx.blkno &&\n> +\t\t\t\tnextoffnum != ctx.offnum)\n> +\t\t\t{\n> +\t\t\t\tsuccessor[ctx.offnum] = nextoffnum;\n> +\t\t\t}\n\nI don't really understand this logic - why can't we populate the predecessor\narray, if we can construct a successor entry?\n\n\n> +\t\t}\n> +\n> +\t\t/*\n> +\t\t * Loop over offset and populate predecessor array from all entries\n> +\t\t * that are present in successor array.\n> +\t\t */\n> +\t\tctx.attnum = -1;\n> +\t\tfor (ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\n> +\t\t\t ctx.offnum = OffsetNumberNext(ctx.offnum))\n> +\t\t{\n> +\t\t\tItemId\t\tcurr_lp;\n> +\t\t\tItemId\t\tnext_lp;\n> +\t\t\tHeapTupleHeader curr_htup;\n> +\t\t\tHeapTupleHeader next_htup;\n> 
+\t\t\tTransactionId curr_xmax;\n> +\t\t\tTransactionId next_xmin;\n> +\n> +\t\t\tOffsetNumber nextoffnum = successor[ctx.offnum];\n> +\n> +\t\t\tcurr_lp = PageGetItemId(ctx.page, ctx.offnum);\n\nWhy do we get the item when nextoffnum is 0?\n\n\n> +\t\t\tif (nextoffnum == 0 || !lp_valid[ctx.offnum] || !lp_valid[nextoffnum])\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * This is either the last updated tuple in the chain or a\n> +\t\t\t\t * corruption raised for this tuple.\n> +\t\t\t\t */\n\n\"or a corruption raised\" isn't quite right grammatically.\n\n\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\t\t\tif (ItemIdIsRedirected(curr_lp))\n> +\t\t\t{\n> +\t\t\t\tnext_lp = PageGetItemId(ctx.page, nextoffnum);\n> +\t\t\t\tif (ItemIdIsRedirected(next_lp))\n> +\t\t\t\t{\n> +\t\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t\t psprintf(\"redirected line pointer pointing to another redirected line pointer at offset %u\",\n> +\t\t\t\t\t\t\t\t\t\t\t (unsigned) nextoffnum));\n> +\t\t\t\t\tcontinue;\n> +\t\t\t\t}\n> +\t\t\t\tnext_htup = (HeapTupleHeader) PageGetItem(ctx.page, next_lp);\n> +\t\t\t\tif (!HeapTupleHeaderIsHeapOnly(next_htup))\n> +\t\t\t\t{\n> +\t\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t\t psprintf(\"redirected tuple at line pointer offset %u is not heap only tuple\",\n> +\t\t\t\t\t\t\t\t\t\t\t (unsigned) nextoffnum));\n> +\t\t\t\t}\n> +\t\t\t\tif ((next_htup->t_infomask & HEAP_UPDATED) == 0)\n> +\t\t\t\t{\n> +\t\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t\t psprintf(\"redirected tuple at line pointer offset %u is not heap updated tuple\",\n> +\t\t\t\t\t\t\t\t\t\t\t (unsigned) nextoffnum));\n> +\t\t\t\t}\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\t/*\n> +\t\t\t * Add a line pointer offset to the predecessor array if xmax is\n> +\t\t\t * matching with xmin of next tuple (reaching via its t_ctid).\n> +\t\t\t * Prior to PostgreSQL 9.4, we actually changed the xmin to\n> +\t\t\t * FrozenTransactionId\n\nI'm doubtful it's a good idea to try to validate 
the 9.4 case. The likelihood\nof getting that right seems low and I don't see us gaining much by even trying.\n\n\n> so we must add offset to predecessor\n> +\t\t\t * array(irrespective of xmax-xmin matching) if updated tuple xmin\n> +\t\t\t * is frozen, so that we can later do validation related to frozen\n> +\t\t\t * xmin. Raise corruption if we have two tuples having the same\n> +\t\t\t * predecessor.\n> +\t\t\t * We add the offset to the predecessor array irrespective of the\n> +\t\t\t * transaction (t_xmin) status. We will do validation related to\n> +\t\t\t * the transaction status (and also all other validations) when we\n> +\t\t\t * loop over the predecessor array.\n> +\t\t\t */\n> +\t\t\tcurr_htup = (HeapTupleHeader) PageGetItem(ctx.page, curr_lp);\n> +\t\t\tcurr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\n> +\t\t\tnext_lp = PageGetItemId(ctx.page, nextoffnum);\n> +\t\t\tnext_htup = (HeapTupleHeader) PageGetItem(ctx.page, next_lp);\n> +\t\t\tnext_xmin = HeapTupleHeaderGetXmin(next_htup);\n> +\t\t\tif (TransactionIdIsValid(curr_xmax) &&\n> +\t\t\t\t(TransactionIdEquals(curr_xmax, next_xmin) ||\n> +\t\t\t\t next_xmin == FrozenTransactionId))\n> +\t\t\t{\n> +\t\t\t\tif (predecessor[nextoffnum] != 0)\n> +\t\t\t\t{\n> +\t\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t\t psprintf(\"updated version at offset %u is also the updated version of tuple at offset %u\",\n> +\t\t\t\t\t\t\t\t\t\t\t (unsigned) nextoffnum, (unsigned) predecessor[nextoffnum]));\n> +\t\t\t\t\tcontinue;\n\nI doubt it is correct to enter this path with next_xmin ==\nFrozenTransactionId. This is following a ctid chain that we normally wouldn't\nfollow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin condition.\n\nI don't immediately see what prevents the frozen tuple being from an entirely\ndifferent HOT chain than the two tuples pointing to it.\n\n\n\n\n> +\t\t}\n> +\n> +\t\t/* Loop over offsets and validate the data in the predecessor array. 
*/\n> +\t\tfor (OffsetNumber currentoffnum = FirstOffsetNumber; currentoffnum <= maxoff;\n> +\t\t\t currentoffnum = OffsetNumberNext(currentoffnum))\n> +\t\t{\n> +\t\t\tHeapTupleHeader pred_htup;\n> +\t\t\tHeapTupleHeader curr_htup;\n> +\t\t\tTransactionId pred_xmin;\n> +\t\t\tTransactionId curr_xmin;\n> +\t\t\tItemId\t\tpred_lp;\n> +\t\t\tItemId\t\tcurr_lp;\n> +\n> +\t\t\tctx.offnum = predecessor[currentoffnum];\n> +\t\t\tctx.attnum = -1;\n> +\n> +\t\t\tif (ctx.offnum == 0)\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * Either the root of the chain or an xmin-aborted tuple from\n> +\t\t\t\t * an abandoned portion of the HOT chain.\n> +\t\t\t\t */\n\nHm - couldn't we check that the tuple could conceivably be at the root of a\nchain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n\n\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n> +\n> +\t\t\tcurr_lp = PageGetItemId(ctx.page, currentoffnum);\n> +\t\t\tcurr_htup = (HeapTupleHeader) PageGetItem(ctx.page, curr_lp);\n> +\t\t\tcurr_xmin = HeapTupleHeaderGetXmin(curr_htup);\n> +\n> +\t\t\tctx.itemid = pred_lp = PageGetItemId(ctx.page, ctx.offnum);\n> +\t\t\tpred_htup = (HeapTupleHeader) PageGetItem(ctx.page, pred_lp);\n> +\t\t\tpred_xmin = HeapTupleHeaderGetXmin(pred_htup);\n> +\n> +\t\t\t/*\n> +\t\t\t * If the predecessor's xmin is aborted or in progress, the\n> +\t\t\t * current tuples xmin should be aborted or in progress\n> +\t\t\t * respectively. Also both xmin's must be equal.\n> +\t\t\t */\n> +\t\t\tif (!TransactionIdEquals(pred_xmin, curr_xmin) &&\n> +\t\t\t\t!TransactionIdDidCommit(pred_xmin))\n> +\t\t\t{\n> +\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t psprintf(\"tuple with uncommitted xmin %u was updated to produce a tuple at offset %u with differing xmin %u\",\n> +\t\t\t\t\t\t\t\t\t\t (unsigned) pred_xmin, (unsigned) currentoffnum, (unsigned) curr_xmin));\n\nIs this necessarily true? 
What about a tuple that was inserted in a\nsubtransaction and then updated in another subtransaction of the same toplevel\ntransaction?\n\n\n> +\t\t\t}\n> +\n> +\t\t\t/*\n> +\t\t\t * If the predecessor's xmin is not frozen, then current tuple's\n> +\t\t\t * shouldn't be either.\n> +\t\t\t */\n> +\t\t\tif (pred_xmin != FrozenTransactionId && curr_xmin == FrozenTransactionId)\n> +\t\t\t{\n> +\t\t\t\treport_corruption(&ctx,\n> +\t\t\t\t\t\t\t\t psprintf(\"unfrozen tuple was updated to produce a tuple at offset %u which is frozen\",\n> +\t\t\t\t\t\t\t\t\t\t (unsigned) currentoffnum));\n> +\t\t\t}\n\nCan't we have an update chain that is e.g.\nxmin 10, xmax 5 -> xmin 5, xmax invalid\n\nand a vacuum cutoff of 7? That'd prevent the first tuple from being removed,\nbut would allow 5 to be frozen.\n\nI think there were recent patches proposing we don't freeze in that case, but\nwe'll have done that in the past....\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Nov 2022 14:08:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 2:08 PM Andres Freund <andres@anarazel.de> wrote:\n> To start with: I think this is an extremely helpful and important\n> feature. Both for checking production systems and for finding problems during\n> development.\n\n+1.\n\nIt's painful to get this in, in part because we now have to actually\ndecide what the rules really are with total precision, including for\nall of the tricky edge cases. The exercise of writing this code should\n\"keep us honest\" about whether or not we really know what the\ninvariants are, which is more than half the battle.\n\n> > + /*\n> > + * Add a line pointer offset to the predecessor array if xmax is\n> > + * matching with xmin of next tuple (reaching via its t_ctid).\n> > + * Prior to PostgreSQL 9.4, we actually changed the xmin to\n> > + * FrozenTransactionId\n>\n> I'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\n> of getting that right seems low and I don't see us gaining much by even trying.\n\nThis is the kind of comment that I'd usually agree with, but I\ndisagree in this instance because of special considerations that apply\nto amcheck (or should IMV apply, at least). We're living in a world\nwhere we have to assume that the pre-9.4 format can occur in the\nfield. If we can't get it right in amcheck, what chance do we have\nwith other new code that tickles the same areas? I think that we need\nto support obsolescent heapam representations (both\nFrozenTransactionId and xvac) here on general principle.\n\nBesides, why not accept some small chance of getting this wrong? The\nworst that can happen is that we'll have a relatively benign bug. If\nwe get it wrong then it's a short term problem, but also an\nopportunity to be less wrong in the future -- including in places\nwhere the consequences of being wrong are much more serious.\n\n> I doubt it is correct to enter this path with next_xmin ==\n> FrozenTransactionId. 
This is following a ctid chain that we normally wouldn't\n> follow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin condition.\n\nWe should never see FrozenTransactionId in an xmax field (nor should\nit be returned by HeapTupleHeaderGetUpdateXid() under any\ncircumstances). We can \"freeze xmax\" during VACUUM, but that actually\nmeans setting xmax to InvalidTransactionId (in rare cases it might\nmean replacing a Multi with a new Multi). The terminology in this area\nis a bit tricky.\n\nAnyway, it follows that we cannot expect \"next_xmin ==\nFrozenTransactionId\", because that would mean that we'd called\nHeapTupleHeaderGetUpdateXid() which returned FrozenTransactionId -- an\nimpossibility. (Maybe we should be checking that it really is an\nimpossibility by checking the HeapTupleHeaderGetUpdateXid() return\nvalue, but that should be enough.)\n\n> I don't immediately see what prevents the frozen tuple being from an entirely\n> different HOT chain than the two tuples pointing to it.\n\nIn my view it simply isn't possible for a valid HOT chain to be in\nthis state in the first place. So by definition it wouldn't be a HOT\nchain. That would be a form of corruption, which is something that\nwould probably be detected by noticing orphaned heap-only tuples\n(heap-only tuples not reachable from some root item on the same page,\nor some other intermediary heap-only tuple reachable from a root\nitem).\n\n> Can't we have an update chain that is e.g.\n> xmin 10, xmax 5 -> xmin 5, xmax invalid\n>\n> and a vacuum cutoff of 7? That'd prevent the first tuple from being removed,\n> but would allow 5 to be frozen.\n\nI don't see how that can be possible. That is contradictory, and\ncannot possibly work, since it supposes a situation where every\npossible MVCC snapshot sees the update that generated the\nsecond/successor tuple as committed, while at the same time also\nsomehow needing the original tuple to stay in place. 
Surely both\nthings can never be true at the same time.\n\nI believe you're right that an update chain that looks like this one\nis possible. However, I don't think it's possible for\nOldestXmin/FreezeLimit to take on a value like that (i.e. a value that\n\"skewers\" the update chain like this, the value 7 from your example).\nWe ought to be able to rely on an OldestXmin value that can never let\nsuch a situation emerge. Right?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Nov 2022 15:03:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
        "msg_contents": "Hi,\n\nOn 2022-11-09 15:03:39 -0800, Peter Geoghegan wrote:\n> > > + /*\n> > > + * Add a line pointer offset to the predecessor array if xmax is\n> > > + * matching with xmin of next tuple (reaching via its t_ctid).\n> > > + * Prior to PostgreSQL 9.4, we actually changed the xmin to\n> > > + * FrozenTransactionId\n> >\n> > I'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\n> > of getting that right seems low and I don't see us gaining much by even trying.\n>\n> This is the kind of comment that I'd usually agree with, but I\n> disagree in this instance because of special considerations that apply\n> to amcheck (or should IMV apply, at least). We're living in a world\n> where we have to assume that the pre-9.4 format can occur in the\n> field. If we can't get it right in amcheck, what chance do we have\n> with other new code that tickles the same areas? I think that we need\n> to support obsolescent heapam representations (both\n> FrozenTransactionId and xvac) here on general principle.\n\nTo me this is extending the problem into more areas rather than reducing\nit. I'd have *zero* confidence in any warnings that amcheck issued that\ninvolved <9.4 special cases.\n\nWe've previously discussed adding a pg_class column tracking the PG version that\nlast scanned the whole relation. We really should get to that one of these\ndecades :(.\n\n\n> Besides, why not accept some small chance of getting this wrong? The\n> worst that can happen is that we'll have a relatively benign bug. If\n> we get it wrong then it's a short term problem, but also an\n> opportunity to be less wrong in the future -- including in places\n> where the consequences of being wrong are much more serious.\n\nI think it doesn't just affect the < 9.4 path, but also makes us implement\nthings differently for >= 9.4. And we lose some accuracy due to that.\n\n\n> > I doubt it is correct to enter this path with next_xmin ==\n> > FrozenTransactionId. 
This is following a ctid chain that we normally wouldn't\n> > follow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin condition.\n> \n> We should never see FrozenTransactionId in an xmax field (nor should\n> it be returned by HeapTupleHeaderGetUpdateXid() under any\n> circumstances).\n\nThe field we check for FrozenTransactionId in the code I was quoting is the\nxmin of the follower tuple. We follow the chain if either\ncur->xmax == next->xmin or if next->xmin == FrozenTransactionId\n\nWhat I'm doubting is the FrozenTransactionId path.\n\n\n> Anyway, it follows that we cannot expect \"next_xmin ==\n> FrozenTransactionId\", because that would mean that we'd called\n> HeapTupleHeaderGetUpdateXid() which returned FrozenTransactionId -- an\n> impossibility. (Maybe we should be checking that it really is an\n> impossibility by checking the HeapTupleHeaderGetUpdateXid() return\n> value, but that should be enough.)\n\nnext_xmin is acquired via HeapTupleHeaderGetXmin(next_htup), not\nHeapTupleHeaderGetUpdateXid(cur_typ).\n\n\n> > I don't immediately see what prevents the frozen tuple being from an entirely\n> > different HOT chain than the two tuples pointing to it.\n> \n> In my view it simply isn't possible for a valid HOT chain to be in\n> this state in the first place. So by definition it wouldn't be a HOT\n> chain.\n\nWe haven't done any visibility checking at this point and my whole point is\nthat there's no guarantee that the pointed-to tuple actually belongs to the\nsame hot chain, given that we follow as soon as \"xmin == FrozenXid\". 
So the\npointing tuple might be an orphaned tuple.\n\n\n> That would be a form of corruption, which is something that\n> would probably be detected by noticing orphaned heap-only tuples\n> (heap-only tuples not reachable from some root item on the same page,\n> or some other intermediary heap-only tuple reachable from a root\n> item).\n\nYou're saying that there's no way that there's a tuple pointing to another\ntuple on the same page, with the pointed-to tuple belonging to a different\nHOT chain?\n\nI'm fairly certain that that at least used to be possible, and likely is still\npossible. Isn't that pretty much what you'd expect to happen if there are\nconcurrent aborts leading to abandoned hot chains?\n\n\n\n> > Can't we have an update chain that is e.g.\n> > xmin 10, xmax 5 -> xmin 5, xmax invalid\n> >\n> > and a vacuum cutoff of 7? That'd prevent the first tuple from being removed,\n> > but would allow 5 to be frozen.\n>\n> I don't see how that can be possible. That is contradictory, and\n> cannot possibly work, since it supposes a situation where every\n> possible MVCC snapshot sees the update that generated the\n> second/successor tuple as committed, while at the same time also\n> somehow needing the original tuple to stay in place. Surely both\n> things can never be true at the same time.\n\nThe xmin horizon is very coarse grained. Just because it is 7 doesn't mean\nthat xid 10 is still running. All it means is that one backend or slot has an\nxmin or xid of 7.\n\ns1: acquire xid 5\ns2: acquire xid 7\ns3: acquire xid 10\n\ns3: insert\ns3: commit\ns1: update\ns1: commit\n\ns2: get a new snapshot, xmin 7 (or just hold no snapshot)\n\nAt this point the xmin horizon is 7. The first tuple's xmin can't be\nfrozen. The second tuple's xmin can be.\n\nNote that indeed no backend could actually see the first tuple - xid 7's\nsnapshot won't have it marked as running, therefore it will be invisible. 
But\nwe will think the first tuple is recently dead rather than buried deeply,\nbecause the xmin horizon is only 7.\n\n\n> I believe you're right that an update chain that looks like this one\n> is possible. However, I don't think it's possible for\n> OldestXmin/FreezeLimit to take on a value like that (i.e. a value that\n> \"skewers\" the update chain like this, the value 7 from your example).\n> We ought to be able to rely on an OldestXmin value that can never let\n> such a situation emerge. Right?\n\nI don't see anything that'd guarantee that currently, nor do I immediately see a\npossible way to get there.\n\nWhat do you think prevents such an OldestXmin?\n\nI might be missing something myself...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Nov 2022 16:15:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
        "msg_contents": "On Wed, Nov 9, 2022 at 4:15 PM Andres Freund <andres@anarazel.de> wrote:\n> To me this is extending the problem into more areas rather than reducing\n> it. I'd have *zero* confidence in any warnings that amcheck issued that\n> involved <9.4 special cases.\n\nMaybe you would at first. But then we get to learn what mistake we\nmade. And then we get to fix the bug, and get to know better next time\naround. Next time (or maybe the time after that) you really will have\nconfidence in amcheck, because it'll have been battle tested at that\npoint.\n\nFor something like this that seems like the best way to go. Either we\nsupport an on-disk format that includes legacy representations, or we\ndon't.\n\n> I think it doesn't just affect the < 9.4 path, but also makes us implement\n> things differently for >= 9.4. And we lose some accuracy due to that.\n\nI don't follow. How so?\n\n> The field we check for FrozenTransactionId in the code I was quoting is the\n> xmin of the follower tuple. We follow the chain if either\n> cur->xmax == next->xmin or if next->xmin == FrozenTransactionId\n>\n> What I'm doubting is the FrozenTransactionId path.\n\nAFAICT we shouldn't be treating it as part of the same HOT chain. Only\nthe first heap-only tuple in a valid HOT chain should have an xmin\nthat is FrozenTransactionId (and only with an LP_REDIRECT root item, I\nthink). Otherwise the \"prev_xmax==xmin\" HOT chain traversal logic used\nin places like heap_hot_search_buffer() simply won't work.\n\n> > In my view it simply isn't possible for a valid HOT chain to be in\n> > this state in the first place. So by definition it wouldn't be a HOT\n> > chain.\n>\n> We haven't done any visibility checking at this point and my whole point is\n> that there's no guarantee that the pointed-to tuple actually belongs to the\n> same hot chain, given that we follow as soon as \"xmin == FrozenXid\". 
So the\n> pointing tuple might be an orphaned tuple.\n\nBut an orphaned heap-only tuple shouldn't ever have\nxmin==FrozenTransactionId to begin with. The only valid source of\norphaned heap-only tuples is transaction aborts. Aborted XID xmin\nfields are never frozen.\n\n> > That would be a form of corruption, which is something that\n> > would probably be detected by noticing orphaned heap-only tuples\n> > (heap-only tuples not reachable from some root item on the same page,\n> > or some other intermediary heap-only tuple reachable from a root\n> > item).\n>\n> You're saying that there's no way that there's a tuple pointing to another\n> tuple on the same page, which the pointed-to tuple belonging to a different\n> HOT chain?\n\nDefine \"belongs to a different HOT chain\".\n\nYou can get orphaned heap-only tuples, obviously. But only due to\ntransaction abort. Any page with an orphaned heap-only tuple that is\nnot consistent with it being from an earlier abort is a corrupt heap\npage.\n\n> > > Can't we have a an update chain that is e.g.\n> > > xmin 10, xmax 5 -> xmin 5, xmax invalid\n> > >\n> > > and a vacuum cutoff of 7? That'd preent the first tuple from being removed,\n> > > but would allow 5 to be frozen.\n> >\n> > I don't see how that can be possible. That is contradictory, and\n> > cannot possibly work, since it supposes a situation where every\n> > possible MVCC snapshot sees the update that generated the\n> > second/successor tuple as committed, while at the same time also\n> > somehow needing the original tuple to stay in place. Surely both\n> > things can never be true at the same time.\n>\n> The xmin horizon is very coarse grained. Just because it is 7 doesn't mean\n> that xid 10 is still running. All it means that one backend or slot has an\n> xmin or xid of 7.\n\nOf course that's true. 
But I wasn't talking about the general case --\nI was talking about your \"xmin 10, xmax 5 -> xmin 5, xmax invalid\"\nupdate chain case specifically, with its \"skewered\" OldestXmin of 7.\n\n> s1: acquire xid 5\n> s2: acquire xid 7\n> s3: acquire xid 10\n>\n> s3: insert\n> s3: commit\n> s1: update\n> s1: commit\n>\n> s2: get a new snapshot, xmin 7 (or just hold no snapshot)\n>\n> At this point the xmin horizon is 7. The first tuple's xmin can't be\n> frozen. The second tuple's xmin can be.\n\nBasically what I'm saying about OldestXmin is that it ought to \"work\ntransitively\", from the updater to the inserter that inserted the\nnow-updated tuple. That is, the OldestXmin should either count both\nXIDs that appear in the update chain, or neither XID.\n\n> > I believe you're right that an update chain that looks like this one\n> > is possible. However, I don't think it's possible for\n> > OldestXmin/FreezeLimit to take on a value like that (i.e. a value that\n> > \"skewers\" the update chain like this, the value 7 from your example).\n> > We ought to be able to rely on an OldestXmin value that can never let\n> > such a situation emerge. Right?\n>\n> I don't see anything that'd guarantee that currently, nor do immediately see a\n> possible way to get there.\n>\n> What do you think prevents such an OldestXmin?\n\nComputeXidHorizons() computes VACUUM's OldestXmin (actually it\ncomputes h->data_oldest_nonremovable values) by scanning the proc\narray. And counts PGPROC.xmin from each running xact. So ultimately\nthe inserter and updater are tied together by that. It's either an\nOldestXmin that includes both, or one that includes neither.\n\nHere are some facts that I think we both agree on already:\n\n1. It is definitely possible to have an update chain like your \"xmin\n10, xmax 5 -> xmin 5, xmax invalid\" example.\n\n2. 
It is definitely not possible to \"freeze xmax\" by setting its value\nto FrozenTransactionId or something similar -- there is simply no code\npath that can do that, and never has been. (The term \"freeze xmax\" is\na bit ambiguous, though it usually means set xmax to\nInvalidTransactionId.)\n\n3. There is no specific reason to believe that there is a live bug here.\n\nPutting all 3 together: doesn't it seem quite likely that the way that\nwe compute OldestXmin is the factor that prevents \"skewering\" of an\nupdate chain? What else could possibly be preventing corruption here?\n(Theoretically it might never have been discovered, but that seems\npretty hard to believe.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Nov 2022 17:32:46 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-09 17:32:46 -0800, Peter Geoghegan wrote:\n> > The xmin horizon is very coarse grained. Just because it is 7 doesn't mean\n> > that xid 10 is still running. All it means that one backend or slot has an\n> > xmin or xid of 7.\n> \n> Of course that's true. But I wasn't talking about the general case --\n> I was talking about your \"xmin 10, xmax 5 -> xmin 5, xmax invalid\"\n> update chain case specifically, with its \"skewered\" OldestXmin of 7.\n\nThe sequence below produces such an OldestXmin:\n\n> > s1: acquire xid 5\n> > s2: acquire xid 7\n> > s3: acquire xid 10\n> >\n> > s3: insert\n> > s3: commit\n> > s1: update\n> > s1: commit\n> >\n> > s2: get a new snapshot, xmin 7 (or just hold no snapshot)\n> >\n> > At this point the xmin horizon is 7. The first tuple's xmin can't be\n> > frozen. The second tuple's xmin can be.\n> \n> Basically what I'm saying about OldestXmin is that it ought to \"work\n> transitively\", from the updater to the inserter that inserted the\n> now-updated tuple. That is, the OldestXmin should either count both\n> XIDs that appear in the update chain, or neither XID.\n\nIt doesn't work that way. The above sequence shows one case where it doesn't.\n\n\n> > > I believe you're right that an update chain that looks like this one\n> > > is possible. However, I don't think it's possible for\n> > > OldestXmin/FreezeLimit to take on a value like that (i.e. a value that\n> > > \"skewers\" the update chain like this, the value 7 from your example).\n> > > We ought to be able to rely on an OldestXmin value that can never let\n> > > such a situation emerge. Right?\n> >\n> > I don't see anything that'd guarantee that currently, nor do immediately see a\n> > possible way to get there.\n> >\n> > What do you think prevents such an OldestXmin?\n> \n> ComputeXidHorizons() computes VACUUM's OldestXmin (actually it\n> computes h->data_oldest_nonremovable values) by scanning the proc\n> array. 
And counts PGPROC.xmin from each running xact. So ultimately\n> the inserter and updater are tied together by that. It's either an\n> OldestXmin that includes both, or one that includes neither.\n\n> Here are some facts that I think we both agree on already:\n> \n> 1. It is definitely possible to have an update chain like your \"xmin\n> 10, xmax 5 -> xmin 5, xmax invalid\" example.\n> \n> 2. It is definitely not possible to \"freeze xmax\" by setting its value\n> to FrozenTransactionId or something similar -- there is simply no code\n> path that can do that, and never has been. (The term \"freeze xmax\" is\n> a bit ambiguous, though it usually means set xmax to\n> InvalidTransactionId.)\n> \n> 3. There is no specific reason to believe that there is a live bug here.\n\nI don't think there's a live bug here. I think the patch isn't dealing\ncorrectly with that issue though.\n\n\n> Putting all 3 together: doesn't it seem quite likely that the way that\n> we compute OldestXmin is the factor that prevents \"skewering\" of an\n> update chain? What else could possibly be preventing corruption here?\n> (Theoretically it might never have been discovered, but that seems\n> pretty hard to believe.)\n\nI don't see how that follows. The existing code is just ok with that. In fact\nwe have explicit code trying to exploit this:\n\n\t\t/*\n\t\t * If the DEAD tuple is at the end of the chain, the entire chain is\n\t\t * dead and the root line pointer can be marked dead. Otherwise just\n\t\t * redirect the root to the correct chain member.\n\t\t */\n\t\tif (i >= nchain)\n\t\t\theap_prune_record_dead(prstate, rootoffnum);\n\t\telse\n\t\t\theap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 9 Nov 2022 17:46:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nAnd thinking about it, it'd be quite bad if the horizon worked that way. You can easily construct a workload where every single xid would \"skewer\" some chain, never allowing the horizon to be raised.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\nHi,And thinking about it, it'd be quite bad if the horizon worked that way. You can easily construct a workload where every single xid would \"skewer\" some chain, never allowing the horizon to be raised.Andres-- Sent from my Android device with K-9 Mail. Please excuse my brevity.",
"msg_date": "Wed, 09 Nov 2022 18:10:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 5:46 PM Andres Freund <andres@anarazel.de> wrote:\n> > Putting all 3 together: doesn't it seem quite likely that the way that\n> > we compute OldestXmin is the factor that prevents \"skewering\" of an\n> > update chain? What else could possibly be preventing corruption here?\n> > (Theoretically it might never have been discovered, but that seems\n> > pretty hard to believe.)\n>\n> I don't see how that follows. The existing code is just ok with that.\n\nMy remarks about \"3 facts we agree on\" were not intended to be a\nwatertight argument. More like: what else could it possibly be that\nprevents problems in practice, if not *something* to do with how we\ncompute OldestXmin?\n\nLeaving aside the specifics of how OldestXmin is computed for a\nmoment: what alternative explanation is even remotely plausible? There\njust aren't that many moving parts involved here. The idea that we can\never freeze the xmin of a successor tuple/version from an update chain\nwithout also pruning away earlier versions of the same chain is wildly\nimplausible. It sounds totally contradictory.\n\n> In fact\n> we have explicit code trying to exploit this:\n>\n> /*\n> * If the DEAD tuple is at the end of the chain, the entire chain is\n> * dead and the root line pointer can be marked dead. Otherwise just\n> * redirect the root to the correct chain member.\n> */\n> if (i >= nchain)\n> heap_prune_record_dead(prstate, rootoffnum);\n> else\n> heap_prune_record_redirect(prstate, rootoffnum, chainitems[i]);\n\nI don't see why this code is relevant.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Nov 2022 18:13:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 6:10 PM Andres Freund <andres@anarazel.de> wrote:\n> And thinking about it, it'd be quite bad if the horizon worked that way. You can easily construct a workload where every single xid would \"skewer\" some chain, never allowing the horizon to be raised.\n\nYour whole scenario is one involving a insert of a tuple by XID 10,\nwhich is then updated by XID 5 -- a lower XID. Obviously that's\npossible, but it's relatively rare. I have to imagine that the vast\nmajority of updates affect tuples inserted by an XID before the XID of\nthe updater.\n\nMy use of the term \"skewer\" was limited to updates that look like\nthat. So I don't know what you mean about never allowing the horizon\nto be raised.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 9 Nov 2022 18:35:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-09 18:35:12 -0800, Peter Geoghegan wrote:\n> On Wed, Nov 9, 2022 at 6:10 PM Andres Freund <andres@anarazel.de> wrote:\n> > And thinking about it, it'd be quite bad if the horizon worked that way. You can easily construct a workload where every single xid would \"skewer\" some chain, never allowing the horizon to be raised.\n> \n> Your whole scenario is one involving a insert of a tuple by XID 10,\n> which is then updated by XID 5 -- a lower XID. Obviously that's\n> possible, but it's relatively rare. I have to imagine that the vast\n> majority of updates affect tuples inserted by an XID before the XID of\n> the updater.\n\n> My use of the term \"skewer\" was limited to updates that look like\n> that. So I don't know what you mean about never allowing the horizon\n> to be raised.\n\nYou don't need it to happen all the time, it's enough when it happens\noccasionally, since that'd \"block\" the whole range of xids between. So you\nyou'd just need occasional transactions to prevent the horizon from\nincreasing.\n\n\nAnyway, I played a bit around with this. It's hard to hit, not because we\nsomehow won't choose such a horizon, but because we'll commonly prune the\nearlier tuple version away due to xmax being old enough. It *is* possible to\nhit, if the horizon increases between the two tuple version checks (e.g. if\nthere's another tuple inbetween that we check the visibility of).\n\nI think there's another way it can happen in older cluster, but don't want to\nspend the time to verify it.\n\nEither way, we can't error out in this situation - there's nothing invalid\nabout it.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Nov 2022 09:38:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 9:38 AM Andres Freund <andres@anarazel.de> wrote:\n> Anyway, I played a bit around with this. It's hard to hit, not because we\n> somehow won't choose such a horizon, but because we'll commonly prune the\n> earlier tuple version away due to xmax being old enough.\n\nThat must be a bug, then. Since, as I said, I can't see how it could\npossibly be okay to freeze an xmin of tuple in a HOT chain without\nalso making sure that it has no earlier versions left behind. If there\nare earlier versions that we have to go through to get to the\nfrozen-xmin tuple (not just an LP_REDIRECT), we're going to break the\nHOT chain traversal logic in code like heap_hot_search_buffer in a\nrather obvious way.\n\nHOT chain traversal logic code will interpret the frozen xmin from the\ntuple as FrozenTransactionId (not as its raw xmin). So traversal is\njust broken in this scenario.\n\n> It *is* possible to\n> hit, if the horizon increases between the two tuple version checks (e.g. if\n> there's another tuple inbetween that we check the visibility of).\n\nI suppose that that's the detail that \"protects\" us, then -- that\nwould explain the apparent lack of problems in the field. Your\nsequence requires 3 sessions, not just 2.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Nov 2022 09:50:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 9, 2022 at 5:08 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't really understand this logic - why can't we populate the predecessor\n> array, if we can construct a successor entry?\n\nThis whole thing was my idea, so let me try to explain. I think the\nnaming and comments need work, but I believe the fundamental idea may\nbe sound.\n\nsuccessor[x] = y means that when we looked at line pointer x, we saw\nthat it was either a redirect to line pointer y, or else it had\nstorage and the associated tuple's CTID pointed to line pointer y. At\nthis point, we do not have any idea whether y is at all sane, nor we\ndo we know anything about which of x and y is larger. Furthermore, it\nis possible that successor[x] = successor[x'] since the page might be\ncorrupted and we haven't checked otherwise.\n\npredecessor[y] = x means that successor[x] = y but in addition we've\nchecked that y is sane, and that x.xmax=y.xmin. If there are multiple\ntuples for which these conditions hold, we've issued complaints about\nall but one and entered the last into the predecessor array.\n\nAn earlier version of the algorithm had only a predecessor[] array but\nthe code to try to populate in a single pass was complex and looked\nugly and error-prone. To set a predecessor entry in one step, we had\nto sanity-check each of x and y but only if that hadn't yet been done,\nwhich was quite awkward. For example, imagine line pointers 1 and 2\nboth point to 3, and line pointer 3 points backward to line pointer 1\n(because of corruption, since it shouldn't ever be circular). We can't\nreason about the relationship between 1 and 3 without first making\nsure that each one is sane in isolation. But if we do that when we're\nat line pointer 1, then when we get to 2, we need to check 2 but don't\nneed to recheck 3, and when we get to 3 we need to recheck neither 3\nnor 1. 
This algorithm lets us run through and do all the basic sanity\nchecks first, while populating the successor array, and then check\nrelationships in later stages.\n\nPart of the motivation here is also driven by trying to figure out how\nto word the complaints. We have a dedicated field in the amcheck that\ncan hold one tuple offset or the other, but if we're checking the\nrelationships between tuples, what do we put there? I feel it will be\neasiest to understand if we put the offset of the older tuple in that\nfield and then phrase the complaint as the patch does, e.g.:\n\ntuple with uncommitted xmin %u was updated to produce a tuple at\noffset %u with differing xmin %u\n\nWe could flip that around and put the newer tuple offset in the field\nand then phrase the complaint the other way around, but it seems a bit\nawkward, e.g.:\n\ntuple with uncommited xmin %u at offset %u was updated to produce this\ntuple with differing xmin %u\n\nI think if we did do it that way around (and figured out how to phrase\nthe messages) we might not need both arrays any more (though I'm not\npositive about that). It seems hard to avoid needing at least one,\nelse you can't explicitly notice two converging HOT chains, which\nseems like a case we probably ought to notice. But the \"to produce\nthis tuple\" phrasing is just confusing, I think, and removing \"this\"\ndoesn't help. You need to somehow get people to understand that the\noffset they probably saw in another field is the second tuple, not the\nfirst one. Maybe:\n\nxmin %u does not match xmax %u of prior tuple at offset %u\n\nHmm.\n\nAnyway, whether it was the right idea or not, the desire to have the\nearlier tuple be the focus of the error messages was part of the\nmotivation here.\n\n> I'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\n> of getting that right seems low and I don't see us gaining much by even trying.\n\nI agree with Peter. We have to try to get that case right. 
If we can\neventually eliminate it as a valid case by some mechanism, hooray. But\nin the meantime we have to deal with it as best we can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:27:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-14 09:50:48 -0800, Peter Geoghegan wrote:\n> On Mon, Nov 14, 2022 at 9:38 AM Andres Freund <andres@anarazel.de> wrote:\n> > Anyway, I played a bit around with this. It's hard to hit, not because we\n> > somehow won't choose such a horizon, but because we'll commonly prune the\n> > earlier tuple version away due to xmax being old enough.\n>\n> That must be a bug, then. Since, as I said, I can't see how it could\n> possibly be okay to freeze an xmin of tuple in a HOT chain without\n> also making sure that it has no earlier versions left behind.\n\nHard to imagine us having bugs in this code. Ahem.\n\nI really wish I knew of a reasonably complex way to utilize coverage guided\nfuzzing on heap pruning / vacuuming.\n\n\nI wonder if we ought to add an error check to heap_prepare_freeze_tuple()\nagainst this scenario. We're working towards being more aggressive around\nfreezing, which will make it more likely to hit corner cases around this.\n\n\n\n> If there are earlier versions that we have to go through to get to the\n> frozen-xmin tuple (not just an LP_REDIRECT), we're going to break the HOT\n> chain traversal logic in code like heap_hot_search_buffer in a rather\n> obvious way.\n>\n> HOT chain traversal logic code will interpret the frozen xmin from the\n> tuple as FrozenTransactionId (not as its raw xmin). So traversal is\n> just broken in this scenario.\n>\n\nWhich'd still be fine if the whole chain were already \"fully dead\". One way I\nthink this can happen is <= PG 13's HEAPTUPLE_DEAD handling in\nlazy_scan_heap().\n\nI now suspect that the seemingly-odd \"We will advance past RECENTLY_DEAD\ntuples just in case there's a DEAD one after them;\" logic in\nheap_prune_chain() might be required for correctness. Which IIRC we'd been\ntalking about getting rid elsewhere?\n\n<tinkers>\n\nAt least as long as correctness requires not ending up in endless loops -\nindeed. We end up with lazy_scan_prune() endlessly retrying. 
Without a chance\nto interrupt. Shouldn't there at least be a CFI somewhere? The attached\nisolationtester spec has a commented out test for this.\n\n\nI think the problem partially is that the proposed verify_heapam() code is too\n\"aggressive\" considering things to be part of the same hot chain - which then\nmeans we have to be very careful about erroring out.\n\nThe attached isolationtester test triggers:\n\"unfrozen tuple was updated to produce a tuple at offset %u which is frozen\"\n\"updated version at offset 3 is also the updated version of tuple at offset %u\"\n\nDespite there afaict not being any corruption. Worth noting that this happens\nregardless of hot/non-hot updates being used (uncomment s3ci to see).\n\n\n> > It *is* possible to\n> > hit, if the horizon increases between the two tuple version checks (e.g. if\n> > there's another tuple inbetween that we check the visibility of).\n>\n> I suppose that that's the detail that \"protects\" us, then -- that\n> would explain the apparent lack of problems in the field. Your\n> sequence requires 3 sessions, not just 2.\n\nOne important protection right now is that vacuumlazy.c uses a more\npessimistic horizon than pruneheap.c. Even if visibility determinations within\npruning recompute the horizon, vacuumlazy.c won't freeze based on the advanced\nhorizon. I don't quite know where we we'd best put a comment with a warning\nabout this fact.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Mon, 14 Nov 2022 12:58:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 11:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Part of the motivation here is also driven by trying to figure out how\n> to word the complaints. We have a dedicated field in the amcheck that\n> can hold one tuple offset or the other, but if we're checking the\n> relationships between tuples, what do we put there? I feel it will be\n> easiest to understand if we put the offset of the older tuple in that\n> field and then phrase the complaint as the patch does, e.g.:\n\nThat makes a lot of sense to me, and reminds me of how things work in\nverify_nbtree.c.\n\nAt a high level verify_nbtree.c works by doing a breadth-first\ntraversal of the tree. The search makes each distinct page the \"target\npage\" exactly once. The target page is the clear focal point for\neverything -- almost every complaint about corruption frames the\nproblem as a problem in the target page. We consistently describe\nthings in terms of their relationship with the target page, so under\nthis scheme everybody is...on the same page (ahem).\n\nBeing very deliberate about that probably had some small downsides.\nMaybe it would have made a little more sense to word certain\nparticular corruption report messages in a way that placed blame on\n\"ancillary\" pages like sibling/child pages (not the target page) as\nproblems in the ancillary page itself, not the target page. This still\nseems like the right trade-off -- the control flow can be broken up\ninto understandable parts once you understand that the target page is\nthe thing that we use to describe every other page.\n\n> > I'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\n> > of getting that right seems low and I don't see us gaining much by even trying.\n>\n> I agree with Peter. We have to try to get that case right. If we can\n> eventually eliminate it as a valid case by some mechanism, hooray. 
But\n> in the meantime we have to deal with it as best we can.\n\nPracticed intellectual humility seems like the way to go here. On some\nlevel I suspect that we'll have problems in exactly the places that we\ndon't look for them.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Nov 2022 13:20:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-14 14:27:54 -0500, Robert Haas wrote:\n> On Wed, Nov 9, 2022 at 5:08 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't really understand this logic - why can't we populate the predecessor\n> > array, if we can construct a successor entry?\n> \n> This whole thing was my idea, so let me try to explain. I think the\n> naming and comments need work, but I believe the fundamental idea may\n> be sound.\n> \n> successor[x] = y means that when we looked at line pointer x, we saw\n> that it was either a redirect to line pointer y, or else it had\n> storage and the associated tuple's CTID pointed to line pointer y.\n\n> At this point, we do not have any idea whether y is at all sane, nor we do\n> we know anything about which of x and y is larger.\n\nWhat do you mean with \"larger\" here?\n\n\n> Furthermore, it is\n> possible that successor[x] = successor[x'] since the page might be corrupted\n> and we haven't checked otherwise.\n> \n> predecessor[y] = x means that successor[x] = y but in addition we've\n> checked that y is sane, and that x.xmax=y.xmin. If there are multiple\n> tuples for which these conditions hold, we've issued complaints about\n> all but one and entered the last into the predecessor array.\n\nAs shown by the isolationtester test I just posted, this doesn't quite work\nright now. Probably fixable.\n\nI don't think we can follow non-HOT ctid chains if they're older than the xmin\nhorizon, including all cases of xmin being frozen. There's just nothing\nguaranteeing that the tuples are actually \"related\".\n\n\nIt seems like we should do a bit more validation within a chain of\ntuples. E.g. that no live tuple can follow an !DidCommit xmin?\n\n\n\n> > I'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\n> > of getting that right seems low and I don't see us gaining much by even trying.\n> \n> I agree with Peter. We have to try to get that case right. 
If we can\n> eventually eliminate it as a valid case by some mechanism, hooray. But\n> in the meantime we have to deal with it as best we can.\n\nI now think that the 9.4 specific reasoning is bogus in the first place. The\npatch says:\n\n\t\t\t * Add a line pointer offset to the predecessor array if xmax is\n\t\t\t * matching with xmin of next tuple (reaching via its t_ctid).\n\t\t\t * Prior to PostgreSQL 9.4, we actually changed the xmin to\n\t\t\t * FrozenTransactionId so we must add offset to predecessor\n\t\t\t * array(irrespective of xmax-xmin matching) if updated tuple xmin\n\t\t\t * is frozen, so that we can later do validation related to frozen\n\t\t\t * xmin. Raise corruption if we have two tuples having the same\n\t\t\t * predecessor.\n\nbut it's simply not correct to iterate through xmin=FrozenTransactionId - as\nshown in the isolationtester test. And that's unrelated to 9.4, because we\ncouldn't rely on the raw xmin value either, because even if they match, they\ncould be from different epochs.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:02:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 12:58 PM Andres Freund <andres@anarazel.de> wrote:\n> I wonder if we ought to add an error check to heap_prepare_freeze_tuple()\n> against this scenario. We're working towards being more aggressive around\n> freezing, which will make it more likely to hit corner cases around this.\n\nIn theory my work on freezing doesn't change the basic rules about how\nfreezing works, and doesn't know anything about HOT, so it shouldn't\nintroduce any new risk. Even still, I agree that this seems like\nsomething to do in the scope of the same work, just in case. Plus it's\njust important.\n\nIt would be possible to have exhaustive heap_prepare_freeze_tuple\nchecks in assert-only builds -- we can exhaustively check the final\narray of prepared freeze plans that we collected for a given heap\npage, and check it against the page exhaustively right before freezing\nis executed. That's not perfect, but it would be a big improvement.\n\nRight now I am not entirely sure what I would need to check in such a\nmechanism. I am legitimately unsure of what the rules are in light of\nthis new information.\n\n> Which'd still be fine if the whole chain were already \"fully dead\". One way I\n> think this can happen is <= PG 13's HEAPTUPLE_DEAD handling in\n> lazy_scan_heap().\n\nYou mean the tupgone thing? Perhaps it would have avoided this\nparticular problem, or one like it. But it had so many other problems\nthat I don't see why it matters now.\n\n> At least as long as correctness requires not ending up in endless loops -\n> indeed. We end up with lazy_scan_prune() endlessly retrying. Without a chance\n> to interrupt. Shouldn't there at least be a CFI somewhere?\n\nProbably, but that wouldn't change the fact that it's a bug when this\nhappens. 
Obviously it's more important to avoid such a bug than it is\nto ameliorate it.\n\n> I think the problem partially is that the proposed verify_heapam() code is too\n> \"aggressive\" considering things to be part of the same hot chain - which then\n> means we have to be very careful about erroring out.\n>\n> The attached isolationtester test triggers:\n> \"unfrozen tuple was updated to produce a tuple at offset %u which is frozen\"\n> \"updated version at offset 3 is also the updated version of tuple at offset %u\"\n>\n> Despite there afaict not being any corruption. Worth noting that this happens\n> regardless of hot/non-hot updates being used (uncomment s3ci to see).\n\nWhy don't you think that there is corruption?\n\nThe terminology here is tricky. It's possible that the amcheck patch\nmakes a very good point here, even without necessarily complaining\nabout a state that leads to obviously wrong behavior. It's also\npossible that there really is wrong behavior, at least in my mind -- I\ndon't know what your remarks about no corruption are really based on.\n\nI feel like I'm repeating myself more than I should, but: why isn't it\nas simple as \"HOT chain traversal logic is broken by frozen xmin in\nthe obvious way, therefore all bets are off\"? Maybe you're right about\nthe proposed new functionality getting things wrong with your\nadversarial isolation test, but I seem to have missed the underlying\nargument. Are you just talking about regular update chains here, not\nHOT chains? Something else?\n\n> One important protection right now is that vacuumlazy.c uses a more\n> pessimistic horizon than pruneheap.c. Even if visibility determinations within\n> pruning recompute the horizon, vacuumlazy.c won't freeze based on the advanced\n> horizon. 
I don't quite know where we'd best put a comment with a warning\n> about this fact.\n\nWe already have comments discussing the relationship between\nOldestXmin and vistest (as well as rel_pages) in heap_vacuum_rel().\nThat seems like the obvious place to put something like this, at least\nto me.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:13:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-14 14:13:10 -0800, Peter Geoghegan wrote:\n> > I think the problem partially is that the proposed verify_heapam() code is too\n> > \"aggressive\" considering things to be part of the same hot chain - which then\n> > means we have to be very careful about erroring out.\n> >\n> > The attached isolationtester test triggers:\n> > \"unfrozen tuple was updated to produce a tuple at offset %u which is frozen\"\n> > \"updated version at offset 3 is also the updated version of tuple at offset %u\"\n> >\n> > Despite there afaict not being any corruption. Worth noting that this happens\n> > regardless of hot/non-hot updates being used (uncomment s3ci to see).\n> \n> Why don't you think that there is corruption?\n\nI looked at the state after the test and the complaint is bogus. It's caused\nby the patch ignoring the cur->xmax == next->xmin condition if next->xmin is\nFrozenTransactionId. The isolationtester test creates a situation where that\nleads to verify_heapam() considering tuples to be part of the same chain even\nthough they aren't.\n\n\n> Because I feel like I'm repeating myself more than I should, but: why isn't\n> it as simple as \"HOT chain traversal logic is broken by frozen xmin in the\n> obvious way, therefore all bets are off\"?\n\nBecause that's irrelevant for the testcase and a good number of my concerns.\n\n\n> Maybe you're right about the proposed new functionality getting things wrong\n> with your adversarial isolation test, but I seem to have missed the\n> underlying argument. Are you just talking about regular update chains here,\n> not HOT chains? Something else?\n\nAs I noted, it happens regardless of HOT being used or not. The tuples aren't\npart of the same chain, but the patch treats them as if they were. The reason\nthe patch considers them to be part of the same chain is precisely the\nFrozenTransactionId condition I was worried about. 
Just because t_ctid points\nto a tuple on the same page and the next tuple has xmin ==\nFrozenTransactionId, doesn't mean they're part of the same chain. Once you\nencounter a tuple with a frozen xmin you simply cannot assume it's part of the\nchain you've been following.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:33:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 2:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > Why don't you think that there is corruption?\n>\n> I looked at the state after the test and the complaint is bogus. It's caused\n> by the patch ignoring the cur->xmax == next->xmin condition if next->xmin is\n> FrozenTransactionId. The isolationtester test creates a situation where that\n> leads to verify_heapam() considering tuples to be part of the same chain even\n> though they aren't.\n\nHaving looked at your isolation test in more detail, it seems like you\nwere complaining about a fairly specific and uncontroversial\nshortcoming in the patch itself: it complains about a newly inserted\ntuple that gets frozen. It thinks that the inserted tuple is part of\nthe same HOT chain (or at least the same update chain) as other tuples\non the same heap page, when in fact it's just some wholly unrelated\ntuple/logical row. It seems as if the new amcheck code doesn't get all\nthe details of validating HOT chains right, and so jumps the gun here,\nreporting corruption based on a faulty assumption that the frozen-xmin\ntuple is in any way related to the chain.\n\nI was confused about whether we were talking about this patch, bugs in\nHEAD, or both.\n\n> > Maybe you're right about the proposed new functionality getting things wrong\n> > with your adversarial isolation test, but I seem to have missed the\n> > underlying argument. Are you just talking about regular update chains here,\n> > not HOT chains? Something else?\n>\n> As I noted, it happens regardless of HOT being used or not. The tuples aren't\n> part of the same chain, but the patch treats them as if they were. The reason\n> the patch considers them to be part of the same chain is precisely the\n> FrozenTransactionId condition I was worried about. Just because t_ctid points\n> to a tuple on the same page and the next tuple has xmin ==\n> FrozenTransactionId, doesn't mean they're part of the same chain. 
Once you\n> encounter a tuple with a frozen xmin you simply cannot assume it's part of the\n> chain you've been following.\n\nGot it.\n\nThat seems relatively straightforward and uncontroversial to me,\nbecause it's just how code like heap_hot_search_buffer (HOT chain\ntraversal code) works already. The patch got some of those details\nwrong, and should be revised.\n\nWhat does this have to tell us, if anything, about the implications\nfor code on HEAD? I don't see any connection between this problem and\nthe possibility of a live bug on HEAD involving freezing later tuple\nversions in a HOT chain, leaving earlier non-frozen versions behind to\nbreak HOT chain traversal code. Should I have noticed such a\nconnection?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:42:16 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-14 14:42:16 -0800, Peter Geoghegan wrote:\n> What does this have to tell us, if anything, about the implications\n> for code on HEAD?\n\nNothing, really; with the test I sent (*) I wanted to advance the discussion about the\npatch being wrong as-is in a concrete way.\n\nThis logic was one of my main complaints in\nhttps://postgr.es/m/20221109220803.t25sosmfvkeglhy4%40awork3.anarazel.de\nand you went in a very different direction in your reply. Hence a test\nshowcasing the issue.\n\nNote that neither of my complaints around FrozenTransactionId in that email\nactually requires that HOT is involved. The code in the patch doesn't\ndifferentiate between hot and not-hot until later.\n\n\n> I don't see any connection between this problem and the possibility of a\n> live bug on HEAD involving freezing later tuple versions in a HOT chain,\n> leaving earlier non-frozen versions behind to break HOT chain traversal\n> code. Should I have noticed such a connection?\n\nNo.\n\nGreetings,\n\nAndres Freund\n\n(*) the commented-out test perhaps is an argument for expanding the comment\naround \"We will advance past RECENTLY_DEAD tuples just in case there's a DEAD one\nafter them;\" in heap_prune_chain()\n\n\n",
"msg_date": "Mon, 14 Nov 2022 14:58:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 2:58 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-14 14:42:16 -0800, Peter Geoghegan wrote:\n> > What does this have to tell us, if anything, about the implications\n> > for code on HEAD?\n>\n> Nothing, really; with the test I sent (*) I wanted to advance the discussion about the\n> patch being wrong as-is in a concrete way.\n\nGot it.\n\n> This logic was one of my main complaints in\n> https://postgr.es/m/20221109220803.t25sosmfvkeglhy4%40awork3.anarazel.de\n> and you went in a very different direction in your reply. Hence a test\n> showcasing the issue.\n\nI guess I was also confused by the fact that you called it\n\"skewer.diff\", which is terminology I invented to describe the scary\nHOT chain freezing bug. You probably started out writing an isolation\ntest to do something like that, but then repurposed it to show a bug\nin the patch. Anyway, never mind, I understand you now.\n\n> Note that neither of my complaints around FrozenTransactionId in that email\n> actually requires that HOT is involved. The code in the patch doesn't\n> differentiate between hot and not-hot until later.\n\nI understand. I mentioned HOT only because it's more obviously not\nokay with HOT -- you can point to the precise code that is broken\nquite easily (index scans break because HOT chain traversal gives wrong\nanswers in the problem scenario of freezing HOT chains in the wrong\nplace).\n\nI'd really like to know if the scary HOT chain freezing scenario is\npossible, for the very obvious reason. Have you tried to write a test\ncase for that?\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 14 Nov 2022 15:07:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 14, 2022 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-14 14:27:54 -0500, Robert Haas wrote:\n> > On Wed, Nov 9, 2022 at 5:08 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I don't really understand this logic - why can't we populate the predecessor\n> > > array, if we can construct a successor entry?\n> >\n> > This whole thing was my idea, so let me try to explain. I think the\n> > naming and comments need work, but I believe the fundamental idea may\n> > be sound.\n> >\n> > successor[x] = y means that when we looked at line pointer x, we saw\n> > that it was either a redirect to line pointer y, or else it had\n> > storage and the associated tuple's CTID pointed to line pointer y.\n>\n> > At this point, we do not have any idea whether y is at all sane, nor we do\n> > we know anything about which of x and y is larger.\n>\n> What do you mean with \"larger\" here?\n\nNumerically bigger. As in, a redirect line pointer or CTID will most\ncommonly point to a tuple that appears later in the line pointer\narray, because we assign offset numbers in ascending order. But we\nalso reuse line pointers, so it's possible for a redirect line pointer\nor CTID to point backwards to a lower-number offset.\n\n> > Furthermore, it is\n> > possible that successor[x] = successor[x'] since the page might be corrupted\n> > and we haven't checked otherwise.\n> >\n> > predecessor[y] = x means that successor[x] = y but in addition we've\n> > checked that y is sane, and that x.xmax=y.xmin. If there are multiple\n> > tuples for which these conditions hold, we've issued complaints about\n> > all but one and entered the last into the predecessor array.\n>\n> As shown by the isolationtester test I just posted, this doesn't quite work\n> right now. Probably fixable.\n>\n> I don't think we can follow non-HOT ctid chains if they're older than the xmin\n> horizon, including all cases of xmin being frozen. 
There's just nothing\n> guaranteeing that the tuples are actually \"related\".\n\nYeah, glad you caught that. I think it's clearly wrong to regard A and\nB as related if B.xmin is frozen. It doesn't matter whether it's\n\"old-style\" frozen where we actually put 2 into the xmin field, or\nnew-style frozen where we set hint bits. Because, even if it's\nnew-style frozen and the value actually stored in B.xmin is\nnumerically equal to A.xmax, it could be from a different epoch. If\nit's from the same epoch, then something is corrupted, because when we\nfroze B we should have pruned A. But we have no way of knowing whether\nthat's the case, and shouldn't assume corruption.\n\n> It seems like we should do a bit more validation within a chain of\n> tuples. E.g. that no live tuple can follow an !DidCommit xmin?\n\nI think this check is already present in stronger form. If we see a\n!DidCommit xmin, the xmin of the next tuple in the chain not only\ncan't be committed, but had better be the same. See \"tuple with\nuncommitted xmin %u was updated to produce a tuple at offset %u with\ndiffering xmin %u\".\n\n> I now think that the 9.4 specific reasoning is bogus in the first place. The\n> patch says:\n>\n> * Add a line pointer offset to the predecessor array if xmax is\n> * matching with xmin of next tuple (reaching via its t_ctid).\n> * Prior to PostgreSQL 9.4, we actually changed the xmin to\n> * FrozenTransactionId so we must add offset to predecessor\n> * array(irrespective of xmax-xmin matching) if updated tuple xmin\n> * is frozen, so that we can later do validation related to frozen\n> * xmin. Raise corruption if we have two tuples having the same\n> * predecessor.\n>\n> but it's simply not correct to iterate through xmin=FrozenTransactionId - as\n> shown in the isolationtester test. 
And that's unrelated to 9.4, because we\n> couldn't rely on the raw xmin value either, because even if they match, they\n> could be from different epochs.\n\nI agree completely.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Nov 2022 11:36:21 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-15 11:36:21 -0500, Robert Haas wrote:\n> On Mon, Nov 14, 2022 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > It seems like we should do a bit more validation within a chain of\n> > tuples. E.g. that no live tuple can follow an !DidCommit xmin?\n> \n> I think this check is already present in stronger form. If we see a\n> !DidCommit xmin, the xmin of the next tuple in the chain not only can't be\n> committed, but had better be the same.\n\nAs I think I mentioned before, I don't think the \"better be the same\" aspect\nis correct; think subxacts. E.g.\n\noff 0: xmin: top, xmax: child_1\noff 1: xmin: child_1, xmax: invalid\n\nIf top hasn't committed yet, the current logic afaict will warn about this\nsituation, no? And I don't think we can generally determine the subxid parent at this\npoint, unfortunately (might have truncated subtrans).\n\n\nDifferent aspect: Is it ok that we use TransactionIdDidCommit() without a\npreceding IsInProgress() check?\n\n\nI do think there's some potential for additional checks that don't run into\nthe above issue, e.g. checking that no in-progress xids follow an explicitly\naborted xact, that a committed xid can't follow an uncommitted xid etc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 11:50:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-15 11:36:21 -0500, Robert Haas wrote:\n> > On Mon, Nov 14, 2022 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > > It seems like we should do a bit more validation within a chain of\n> > > tuples. E.g. that no live tuple can follow an !DidCommit xmin?\n> >\n> > I think this check is already present in stronger form. If we see a\n> > !DidCommit xmin, the xmin of the next tuple in the chain not only can't be\n> > committed, but had better be the same.\n>\n> As I think I mentioned before, I don't think the \"better be the same\" aspect\n> is correct; think subxacts. E.g.\n>\n> off 0: xmin: top, xmax: child_1\n> off 1: xmin: child_1, xmax: invalid\n>\n> If top hasn't committed yet, the current logic afaict will warn about this\n> situation, no? And I don't think we can generally determine the subxid parent at this\n> point, unfortunately (might have truncated subtrans).\n\nWoops, you're right.\n\n> Different aspect: Is it ok that we use TransactionIdDidCommit() without a\n> preceding IsInProgress() check?\n\nWell, the code doesn't match the comments here, sadly. The comments\nclaim that we want to check that if the prior tuple's xmin was aborted\nor in progress the current one is in the same state. If that's\nactually what the code checked, we'd definitely need to check both\nTransactionIdIsInProgress() and TransactionIdDidCommit() and be wary of the\npossibility of the value changing concurrently. But the code doesn't\nactually check the status of more than one XID, nor does it care about\nthe distinction between aborted and in progress, so I don't think that\nthe current code is buggy in that particular way, just in a bunch of\nother ways.\n\n> I do think there's some potential for additional checks that don't run into\n> the above issue, e.g. 
checking that no in-progress xids follow an explicitly\n> aborted xact, that a committed xid can't follow an uncommitted xid etc.\n\nYeah, maybe so.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 15 Nov 2022 15:28:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-14 15:07:05 -0800, Peter Geoghegan wrote:\n> I'd really like to know if the scary HOT chain freezing scenario is\n> possible, for the very obvious reason. Have you tried to write a test\n> case for that?\n\nI tried. Unfortunately, even if the bug exists, we currently don't have the\ninfrastructure to write isolationtester tests for it. There are just too many\npoints where we'd need to wait, and I don't know of ways to wait there with\nisolationtester.\n\nI'm quite certain that it's possible to end up freezing an earlier row\nversion in a HOT chain in < 14; I got there with careful gdb\norchestration. Of course it's possible I screwed something up, given I did it once,\ninteractively. Not sure if trying to fix it is worth the risk of backpatching\nall the necessary changes to switch to the retry approach.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 15 Nov 2022 22:55:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 3:38 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > + }\n> > +\n> > + /*\n> > + * Loop over offset and populate predecessor array from\n> all entries\n> > + * that are present in successor array.\n> > + */\n> > + ctx.attnum = -1;\n> > + for (ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\n> > + ctx.offnum = OffsetNumberNext(ctx.offnum))\n> > + {\n> > + ItemId curr_lp;\n> > + ItemId next_lp;\n> > + HeapTupleHeader curr_htup;\n> > + HeapTupleHeader next_htup;\n> > + TransactionId curr_xmax;\n> > + TransactionId next_xmin;\n> > +\n> > + OffsetNumber nextoffnum = successor[ctx.offnum];\n> > +\n> > + curr_lp = PageGetItemId(ctx.page, ctx.offnum);\n>\n> Why do we get the item when nextoffnum is 0?\n>\n> Yes, right, I will move this call to PageGetItemId, just after the next\n\"if\" condition in the patch.\n\n>\n> > + if (nextoffnum == 0 || !lp_valid[ctx.offnum] ||\n> !lp_valid[nextoffnum])\n> > + {\n> > + /*\n> > + * This is either the last updated tuple\n> in the chain or a\n> > + * corruption raised for this tuple.\n> > + */\n>\n> \"or a corruption raised\" isn't quite right grammatically.\n>\n> will change to \"This is either the last updated tuple in the chain or\ncorruption has been raised for this tuple\"\n\n>\n> > + continue;\n> > + }\n> > + if (ItemIdIsRedirected(curr_lp))\n> > + {\n> > + next_lp = PageGetItemId(ctx.page,\n> nextoffnum);\n> > + if (ItemIdIsRedirected(next_lp))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected line pointer pointing to another redirected line\n> pointer at offset %u\",\n> > +\n> (unsigned) nextoffnum));\n> > + continue;\n> > + }\n> > + next_htup = (HeapTupleHeader) PageGetItem(\n> ctx.page, next_lp);\n> > + if (!HeapTupleHeaderIsHeapOnly(next_htup))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected tuple at line pointer offset %u is not heap only\n> tuple\",\n> > +\n> (unsigned) nextoffnum));\n> > + }\n> > + if 
((next_htup->t_infomask & HEAP_UPDATED)\n> == 0)\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected tuple at line pointer offset %u is not heap updated\n> tuple\",\n> > +\n> (unsigned) nextoffnum));\n> > + }\n> > + continue;\n> > + }\n> > +\n> > + /*\n> > + * Add a line pointer offset to the predecessor\n> array if xmax is\n> > + * matching with xmin of next tuple (reaching via\n> its t_ctid).\n> > + * Prior to PostgreSQL 9.4, we actually changed\n> the xmin to\n> > + * FrozenTransactionId\n>\n> I'm doubtful it's a good idea to try to validate the 9.4 case. The\n> likelihood\n> of getting that right seems low and I don't see us gaining much by even\n> trying.\n>\n>\n> > so we must add offset to predecessor\n> > + * array(irrespective of xmax-xmin matching) if\n> updated tuple xmin\n> > + * is frozen, so that we can later do validation\n> related to frozen\n> > + * xmin. Raise corruption if we have two tuples\n> having the same\n> > + * predecessor.\n> > + * We add the offset to the predecessor array\n> irrespective of the\n> > + * transaction (t_xmin) status. 
We will do\n> validation related to\n> > + * the transaction status (and also all other\n> validations) when we\n> > + * loop over the predecessor array.\n> > + */\n> > + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> > + curr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\n> > + next_lp = PageGetItemId(ctx.page, nextoffnum);\n> > + next_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> next_lp);\n> > + next_xmin = HeapTupleHeaderGetXmin(next_htup);\n> > + if (TransactionIdIsValid(curr_xmax) &&\n> > + (TransactionIdEquals(curr_xmax, next_xmin)\n> ||\n> > + next_xmin == FrozenTransactionId))\n> > + {\n> > + if (predecessor[nextoffnum] != 0)\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"updated version at offset %u is also the updated version of\n> tuple at offset %u\",\n> > +\n> (unsigned) nextoffnum, (unsigned) predecessor[nextoffnum]));\n> > + continue;\n>\n> I doubt it is correct to enter this path with next_xmin ==\n> FrozenTransactionId. This is following a ctid chain that we normally\n> wouldn't\n> follow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin\n> condition.\n>\n> I don't immediately see what prevents the frozen tuple being from an\n> entirely\n> different HOT chain than the two tuples pointing to it.\n>\n>\n>\n> Prior to 9.4 we can have xmin updated with FrozenTransactionId but with\n9.4 (or later) we set XMIN_FROZEN bit in t_infomask. 
If the updated tuple\nwas frozen prior to 9.4, then \"TransactionIdEquals(curr_xmax, next_xmin)\" will\nbe false for the frozen tuple.\nThe intention of adding \"next_xmin == FrozenTransactionId\" to the path is\nthat we wanted to do validation around frozen tuples when we loop over the\npredecessor array.\n\n I need to look at the isolation test in detail to understand how this can\nproduce a false alarm, but if there is a valid case then we can remove the\nlogic of raising corruption related to frozen tuples.\n\n\n> > + }\n> > +\n> > + /* Loop over offsets and validate the data in the\n> predecessor array. */\n> > + for (OffsetNumber currentoffnum = FirstOffsetNumber;\n> currentoffnum <= maxoff;\n> > + currentoffnum = OffsetNumberNext(currentoffnum))\n> > + {\n> > + HeapTupleHeader pred_htup;\n> > + HeapTupleHeader curr_htup;\n> > + TransactionId pred_xmin;\n> > + TransactionId curr_xmin;\n> > + ItemId pred_lp;\n> > + ItemId curr_lp;\n> > +\n> > + ctx.offnum = predecessor[currentoffnum];\n> > + ctx.attnum = -1;\n> > +\n> > + if (ctx.offnum == 0)\n> > + {\n> > + /*\n> > + * Either the root of the chain or an\n> xmin-aborted tuple from\n> > + * an abandoned portion of the HOT chain.\n> > + */\n>\n> Hm - couldn't we check that the tuple could conceivably be at the root of a\n> chain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n>\n>\n I don't see a way to check if a tuple is at the root of a HOT chain because\nthe predecessor array will always have either an xmin from a non-abandoned\ntransaction or it will be zero. 
We can't differentiate between the root and a tuple\ninserted via an abandoned transaction.\n\n\n> > + continue;\n> > + }\n> > +\n> > + curr_lp = PageGetItemId(ctx.page, currentoffnum);\n> > + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> > + curr_xmin = HeapTupleHeaderGetXmin(curr_htup);\n> > +\n> > + ctx.itemid = pred_lp = PageGetItemId(ctx.page,\n> ctx.offnum);\n> > + pred_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> pred_lp);\n> > + pred_xmin = HeapTupleHeaderGetXmin(pred_htup);\n> > +\n> > + /*\n> > + * If the predecessor's xmin is aborted or in\n> progress, the\n> > + * current tuples xmin should be aborted or in\n> progress\n> > + * respectively. Also both xmin's must be equal.\n> > + */\n> > + if (!TransactionIdEquals(pred_xmin, curr_xmin) &&\n> > + !TransactionIdDidCommit(pred_xmin))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"tuple with uncommitted xmin %u was updated to produce a tuple at\n> offset %u with differing xmin %u\",\n> > +\n> (unsigned) pred_xmin, (unsigned) currentoffnum, (unsigned)\n> curr_xmin));\n\n> Is this necessarily true? What about a tuple that was inserted in a\n> subtransaction and then updated in another subtransaction of the same\n> toplevel\n> transaction?\n>\n>\nNot sure if I am getting this? 
I have tried with below test and don't see any\nissue,\n\n‘postgres[14723]=#’drop table test2;\nDROP TABLE\n‘postgres[14723]=#’create table test2 (a int, b int primary key);\nCREATE TABLE\n‘postgres[14723]=#’insert into test2 values (1,1);\nINSERT 0 1\n‘postgres[14723]=#’BEGIN;\nBEGIN\n‘postgres[14723]=#*’update test2 set a =2 where a =1;\nUPDATE 1\n‘postgres[14723]=#*’savepoint s1;\nSAVEPOINT\n‘postgres[14723]=#*’update test2 set a =6;\nUPDATE 1\n‘postgres[14723]=#*’rollback to savepoint s1;\nROLLBACK\n‘postgres[14723]=#*’update test2 set a =6;\nUPDATE 1\n‘postgres[14723]=#*’savepoint s2;\nSAVEPOINT\n‘postgres[14723]=#*’update test2 set a =7;\nUPDATE 1\n‘postgres[14723]=#*’end;\nCOMMIT\n‘postgres[14723]=#’SELECT lp as tuple, t_xmin, t_xmax, t_field3 as t_cid,\nt_ctid,tuple_data_split('test2'::regclass, t_data, t_infomask, t_infomask2,\nt_bits), heap_tuple_infomask_flags(t_infomask, t_infomask2) FROM\nheap_page_items(get_raw_page('test2', 0));\n tuple | t_xmin | t_xmax | t_cid | t_ctid | tuple_data_split |\n heap_tuple_infomask_flags\n-------+--------+--------+-------+--------+-------------------------------+---------------------------------------------------------------------------\n 1 | 1254 | 1255 | 0 | (0,2) | {\"\\\\x01000000\",\"\\\\x01000000\"} |\n(\"{HEAP_XMIN_COMMITTED,HEAP_HOT_UPDATED}\",{})\n 2 | 1255 | 1257 | 1 | (0,4) | {\"\\\\x02000000\",\"\\\\x01000000\"} |\n(\"{HEAP_COMBOCID,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{})\n 3 | 1256 | 0 | 1 | (0,3) | {\"\\\\x06000000\",\"\\\\x01000000\"} |\n(\"{HEAP_XMIN_INVALID,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})\n 4 | 1257 | 1258 | 2 | (0,5) | {\"\\\\x06000000\",\"\\\\x01000000\"} |\n(\"{HEAP_COMBOCID,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{})\n 5 | 1258 | 0 | 3 | (0,5) | {\"\\\\x07000000\",\"\\\\x01000000\"} |\n(\"{HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})\n(5 rows)\n\n\n>\n> > + }\n> > +\n> > + /*\n> > + * If the predecessor's xmin is not frozen, then\n> current 
tuple's\n> > + * shouldn't be either.\n> > + */\n> > + if (pred_xmin != FrozenTransactionId && curr_xmin\n> == FrozenTransactionId)\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"unfrozen tuple was updated to produce a tuple at offset %u which\n> is frozen\",\n> > +\n> (unsigned) currentoffnum));\n> > + }\n>\n> Can't we have a an update chain that is e.g.\n> xmin 10, xmax 5 -> xmin 5, xmax invalid\n>\n> and a vacuum cutoff of 7? That'd preent the first tuple from being removed,\n> but would allow 5 to be frozen.\n>\n> I think there were recent patches proposing we don't freeze in that case,\n> but\n> we'll having done that in the past....\n>\n>\nNot very sure about this, was trying with such case but found hard to\nreproduce this.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Nov 10, 2022 at 3:38 AM Andres Freund <andres@anarazel.de> wrote:\r\n> + }\r\n> +\r\n> + /*\r\n> + * Loop over offset and populate predecessor array from all entries\r\n> + * that are present in successor array.\r\n> + */\r\n> + ctx.attnum = -1;\r\n> + for (ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\r\n> + ctx.offnum = OffsetNumberNext(ctx.offnum))\r\n> + {\r\n> + ItemId curr_lp;\r\n> + ItemId next_lp;\r\n> + HeapTupleHeader curr_htup;\r\n> + HeapTupleHeader next_htup;\r\n> + TransactionId curr_xmax;\r\n> + TransactionId next_xmin;\r\n> +\r\n> + OffsetNumber nextoffnum = successor[ctx.offnum];\r\n> +\r\n> + curr_lp = PageGetItemId(ctx.page, ctx.offnum);\n\r\nWhy do we get the item when nextoffnum is 0?\nYes, right, I will move this call to PageGetItemId, just after the next \"if\" condition in the patch.\n\r\n> + if (nextoffnum == 0 || !lp_valid[ctx.offnum] || !lp_valid[nextoffnum])\r\n> + {\r\n> + /*\r\n> + * This is either the last updated tuple in the chain or a\r\n> + * corruption raised for this tuple.\r\n> + */\n\r\n\"or a corruption raised\" isn't quite right grammatically.\nwill change to \"This is either the last 
updated tuple in the chain or corruption has been raised for this tuple\" \n\r\n> + continue;\r\n> + }\r\n> + if (ItemIdIsRedirected(curr_lp))\r\n> + {\r\n> + next_lp = PageGetItemId(ctx.page, nextoffnum);\r\n> + if (ItemIdIsRedirected(next_lp))\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"redirected line pointer pointing to another redirected line pointer at offset %u\",\r\n> + (unsigned) nextoffnum));\r\n> + continue;\r\n> + }\r\n> + next_htup = (HeapTupleHeader) PageGetItem(ctx.page, next_lp);\r\n> + if (!HeapTupleHeaderIsHeapOnly(next_htup))\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"redirected tuple at line pointer offset %u is not heap only tuple\",\r\n> + (unsigned) nextoffnum));\r\n> + }\r\n> + if ((next_htup->t_infomask & HEAP_UPDATED) == 0)\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"redirected tuple at line pointer offset %u is not heap updated tuple\",\r\n> + (unsigned) nextoffnum));\r\n> + }\r\n> + continue;\r\n> + }\r\n> +\r\n> + /*\r\n> + * Add a line pointer offset to the predecessor array if xmax is\r\n> + * matching with xmin of next tuple (reaching via its t_ctid).\r\n> + * Prior to PostgreSQL 9.4, we actually changed the xmin to\r\n> + * FrozenTransactionId\n\r\nI'm doubtful it's a good idea to try to validate the 9.4 case. The likelihood\r\nof getting that right seems low and I don't see us gaining much by even trying.\n\n\r\n> so we must add offset to predecessor\r\n> + * array(irrespective of xmax-xmin matching) if updated tuple xmin\r\n> + * is frozen, so that we can later do validation related to frozen\r\n> + * xmin. Raise corruption if we have two tuples having the same\r\n> + * predecessor.\r\n> + * We add the offset to the predecessor array irrespective of the\r\n> + * transaction (t_xmin) status. 
We will do validation related to\r\n> + * the transaction status (and also all other validations) when we\r\n> + * loop over the predecessor array.\r\n> + */\r\n> + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page, curr_lp);\r\n> + curr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\r\n> + next_lp = PageGetItemId(ctx.page, nextoffnum);\r\n> + next_htup = (HeapTupleHeader) PageGetItem(ctx.page, next_lp);\r\n> + next_xmin = HeapTupleHeaderGetXmin(next_htup);\r\n> + if (TransactionIdIsValid(curr_xmax) &&\r\n> + (TransactionIdEquals(curr_xmax, next_xmin) ||\r\n> + next_xmin == FrozenTransactionId))\r\n> + {\r\n> + if (predecessor[nextoffnum] != 0)\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"updated version at offset %u is also the updated version of tuple at offset %u\",\r\n> + (unsigned) nextoffnum, (unsigned) predecessor[nextoffnum]));\r\n> + continue;\n\r\nI doubt it is correct to enter this path with next_xmin ==\r\nFrozenTransactionId. This is following a ctid chain that we normally wouldn't\r\nfollow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin condition.\n\r\nI don't immediately see what prevents the frozen tuple being from an entirely\r\ndifferent HOT chain than the two tuples pointing to it.\n\n\nPrior to 9.4 we can have xmin updated with FrozenTransactionId but with 9.4 (or later) we set XMIN_FROZEN bit in t_infomask. if updated tuple is via prior of 9.4 then \"TransactionIdEquals(curr_xmax, next_xmin)\" will be false for Frozen Tuple.The Intention of adding \"next_xmin == FrozenTransactionId\" to the path is because we wanted to do validation around Frozen Tuple when we loop over predecessor array. I need to look at the isolation test in details to understand how this can provide false alarm and but if there is a valid case then we can remove logic of raising corruption related with Frozen Tuple?\n\r\n> + }\r\n> +\r\n> + /* Loop over offsets and validate the data in the predecessor array. 
*/\r\n> + for (OffsetNumber currentoffnum = FirstOffsetNumber; currentoffnum <= maxoff;\r\n> + currentoffnum = OffsetNumberNext(currentoffnum))\r\n> + {\r\n> + HeapTupleHeader pred_htup;\r\n> + HeapTupleHeader curr_htup;\r\n> + TransactionId pred_xmin;\r\n> + TransactionId curr_xmin;\r\n> + ItemId pred_lp;\r\n> + ItemId curr_lp;\r\n> +\r\n> + ctx.offnum = predecessor[currentoffnum];\r\n> + ctx.attnum = -1;\r\n> +\r\n> + if (ctx.offnum == 0)\r\n> + {\r\n> + /*\r\n> + * Either the root of the chain or an xmin-aborted tuple from\r\n> + * an abandoned portion of the HOT chain.\r\n> + */\n\r\nHm - couldn't we check that the tuple could conceivably be at the root of a\r\nchain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n I don't see a way to check if tuple is at the root of HOT chain because predecessor array will always be having either xmin from non-abandoned transaction or it will be zero. We can't differentiate root or tuple inserted via abandoned transaction.\n\r\n> + continue;\r\n> + }\r\n> +\r\n> + curr_lp = PageGetItemId(ctx.page, currentoffnum);\r\n> + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page, curr_lp);\r\n> + curr_xmin = HeapTupleHeaderGetXmin(curr_htup);\r\n> +\r\n> + ctx.itemid = pred_lp = PageGetItemId(ctx.page, ctx.offnum);\r\n> + pred_htup = (HeapTupleHeader) PageGetItem(ctx.page, pred_lp);\r\n> + pred_xmin = HeapTupleHeaderGetXmin(pred_htup);\r\n> +\r\n> + /*\r\n> + * If the predecessor's xmin is aborted or in progress, the\r\n> + * current tuples xmin should be aborted or in progress\r\n> + * respectively. Also both xmin's must be equal.\r\n> + */\r\n> + if (!TransactionIdEquals(pred_xmin, curr_xmin) &&\r\n> + !TransactionIdDidCommit(pred_xmin))\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"tuple with uncommitted xmin %u was updated to produce a tuple at offset %u with differing xmin %u\",\r\n> + (unsigned) pred_xmin, (unsigned) currentoffnum, (unsigned) curr_xmin));\n\r\nIs this necessarily true? 
What about a tuple that was inserted in a\r\nsubtransaction and then updated in another subtransaction of the same toplevel\r\ntransaction?\nnot sure if I am getting? I have tried with below test and don't see any issue,‘postgres[14723]=#’drop table test2;DROP TABLE‘postgres[14723]=#’create table test2 (a int, b int primary key);CREATE TABLE‘postgres[14723]=#’insert into test2 values (1,1);INSERT 0 1‘postgres[14723]=#’BEGIN;BEGIN‘postgres[14723]=#*’update test2 set a =2 where a =1;UPDATE 1‘postgres[14723]=#*’savepoint s1;SAVEPOINT‘postgres[14723]=#*’update test2 set a =6;UPDATE 1‘postgres[14723]=#*’rollback to savepoint s1;ROLLBACK‘postgres[14723]=#*’update test2 set a =6;UPDATE 1‘postgres[14723]=#*’savepoint s2;SAVEPOINT‘postgres[14723]=#*’update test2 set a =7;UPDATE 1‘postgres[14723]=#*’end;COMMIT‘postgres[14723]=#’SELECT lp as tuple, t_xmin, t_xmax, t_field3 as t_cid, t_ctid,tuple_data_split('test2'::regclass, t_data, t_infomask, t_infomask2, t_bits), heap_tuple_infomask_flags(t_infomask, t_infomask2) FROM heap_page_items(get_raw_page('test2', 0)); tuple | t_xmin | t_xmax | t_cid | t_ctid | tuple_data_split | heap_tuple_infomask_flags -------+--------+--------+-------+--------+-------------------------------+--------------------------------------------------------------------------- 1 | 1254 | 1255 | 0 | (0,2) | {\"\\\\x01000000\",\"\\\\x01000000\"} | (\"{HEAP_XMIN_COMMITTED,HEAP_HOT_UPDATED}\",{}) 2 | 1255 | 1257 | 1 | (0,4) | {\"\\\\x02000000\",\"\\\\x01000000\"} | (\"{HEAP_COMBOCID,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{}) 3 | 1256 | 0 | 1 | (0,3) | {\"\\\\x06000000\",\"\\\\x01000000\"} | (\"{HEAP_XMIN_INVALID,HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{}) 4 | 1257 | 1258 | 2 | (0,5) | {\"\\\\x06000000\",\"\\\\x01000000\"} | (\"{HEAP_COMBOCID,HEAP_UPDATED,HEAP_HOT_UPDATED,HEAP_ONLY_TUPLE}\",{}) 5 | 1258 | 0 | 3 | (0,5) | {\"\\\\x07000000\",\"\\\\x01000000\"} | (\"{HEAP_XMAX_INVALID,HEAP_UPDATED,HEAP_ONLY_TUPLE}\",{})(5 rows) \n\r\n> + 
}\r\n> +\r\n> + /*\r\n> + * If the predecessor's xmin is not frozen, then current tuple's\r\n> + * shouldn't be either.\r\n> + */\r\n> + if (pred_xmin != FrozenTransactionId && curr_xmin == FrozenTransactionId)\r\n> + {\r\n> + report_corruption(&ctx,\r\n> + psprintf(\"unfrozen tuple was updated to produce a tuple at offset %u which is frozen\",\r\n> + (unsigned) currentoffnum));\r\n> + }\n\r\nCan't we have a an update chain that is e.g.\r\nxmin 10, xmax 5 -> xmin 5, xmax invalid\n\r\nand a vacuum cutoff of 7? That'd preent the first tuple from being removed,\r\nbut would allow 5 to be frozen.\n\r\nI think there were recent patches proposing we don't freeze in that case, but\r\nwe'll having done that in the past....\nNot very sure about this, was trying with such case but found hard to reproduce this.-- Regards,\nHimanshu Upadhyaya\r\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 16 Nov 2022 12:41:13 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Nov 15, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-11-15 11:36:21 -0500, Robert Haas wrote:\n> > > On Mon, Nov 14, 2022 at 5:02 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > It seems like we should do a bit more validation within a chain of\n> > > > tuples. E.g. that no live tuple can follow an !DidCommit xmin?\n> > >\n> > > I think this check is already present in stronger form. If we see a\n> > > !DidCommit xmin, the xmin of the next tuple in the chain not only\n> can't be\n> > > committed, but had better be the same.\n> >\n> > As I think I mentioned before, I don't think the \"better be the same\"\n> aspect\n> > is correct, think subxacts. E.g.\n> >\n> > off 0: xmin: top, xmax: child_1\n> > off 1: xmin: child_1, xmax: invalid\n> >\n> > If top hasn't committed yet, the current logic afaict will warn about\n> this\n> > situation, no? And I don't think we can generally the subxid parent at\n> this\n> > point, unfortunately (might have truncated subtrans).\n>\n> Woops, you're right.\n\n\nyes, got it, have tried to test and it is giving false corruption in case\nof subtransaction.\nI think a better way to have this check is, we need to check that if\npred_xmin is\naborted then current_xmin should be aborted only. So there is no way that we\nvalidate corruption with in_progress txid.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Nov 16, 2022 at 1:58 AM Robert Haas <robertmhaas@gmail.com> wrote:On Tue, Nov 15, 2022 at 2:50 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-11-15 11:36:21 -0500, Robert Haas wrote:\n> > On Mon, Nov 14, 2022 at 5:02 PM Andres Freund <andres@anarazel.de> wrote:\n> > > It seems like we should do a bit more validation within a chain of\n> > > tuples. E.g. 
that no live tuple can follow an !DidCommit xmin?\n> >\n> > I think this check is already present in stronger form. If we see a\n> > !DidCommit xmin, the xmin of the next tuple in the chain not only can't be\n> > committed, but had better be the same.\n>\n> As I think I mentioned before, I don't think the \"better be the same\" aspect\n> is correct, think subxacts. E.g.\n>\n> off 0: xmin: top, xmax: child_1\n> off 1: xmin: child_1, xmax: invalid\n>\n> If top hasn't committed yet, the current logic afaict will warn about this\n> situation, no? And I don't think we can generally the subxid parent at this\n> point, unfortunately (might have truncated subtrans).\n\nWoops, you're right.yes, got it, have tried to test and it is giving false corruption in case of subtransaction.I think a better way to have this check is, we need to check that if pred_xmin is aborted then current_xmin should be aborted only. So there is no way that wevalidate corruption with in_progress txid.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 16 Nov 2022 15:20:35 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 4:51 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> yes, got it, have tried to test and it is giving false corruption in case of subtransaction.\n> I think a better way to have this check is, we need to check that if pred_xmin is\n> aborted then current_xmin should be aborted only. So there is no way that we\n> validate corruption with in_progress txid.\n\nPlease note that you can't use TransactionIdDidAbort here, because\nthat will return false for transactions aborted by a crash. You have\nto check that it's not in progress and then afterwards check that it's\nnot committed. Also note that if you check whether it's committed\nfirst and then check whether it's in progress afterwards, there's a\nrace condition: it might commit just after you verify that it isn't\ncommitted yet, and then it won't be in progress any more and will look\naborted.\n\nI disagree with the idea that we can't check in progress. I think the\nchecks could look something like this:\n\npred_in_progress = TransactionIdIsInProgress(pred_xmin);\ncurrent_in_progress = TransactionIdIsInProgress(current_xmin);\nif (pred_in_progress)\n{\n if (current_in_progress)\n return ok;\n // recheck to avoid race condition\n if (TransactionIdIsInProgress(pred_xmin))\n {\n if (TransactionIdDidCommit(current_xmin))\n return corruption: predecessor xmin in progress, but\ncurrent xmin committed;\n else\n return corruption: predecessor xmin in progress, but\ncurrent xmin aborted;\n }\n // fallthrough: when we entered this if-block pred_xmin was still\nin progress but no longer;\n pred_in_progress = false;\n}\n\nif (TransactionIdDidCommit(pred_xmin))\n return ok;\n\nif (current_in_progress)\n return corruption: predecessor xmin aborted, but current xmin in progress;\nelse if (TransactionIdDidCommit(current_xmin))\n return corruption: predecessor xmin aborted, but current xmin committed;\n\nThe error messages as phrased here aren't actually what we should 
use;\nthey would need rephrasing. But I think, or hope anyway, that the\nlogic works. I think you basically just need the 1 recheck: if you see\nthe predecessor xmin in progress and then the current xmin in\nprogress, you have to go back and check that the predecessor xmin is\nstill in progress, because otherwise both could have committed or\naborted together in between.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 16 Nov 2022 12:53:42 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 11:23 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Nov 16, 2022 at 4:51 AM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> > yes, got it, have tried to test and it is giving false corruption in\n> case of subtransaction.\n> > I think a better way to have this check is, we need to check that if\n> pred_xmin is\n> > aborted then current_xmin should be aborted only. So there is no way\n> that we\n> > validate corruption with in_progress txid.\n>\n> Please note that you can't use TransactionIdDidAbort here, because\n> that will return false for transactions aborted by a crash. You have\n> to check that it's not in progress and then afterwards check that it's\n> not committed. Also note that if you check whether it's committed\n> first and then check whether it's in progress afterwards, there's a\n> race condition: it might commit just after you verify that it isn't\n> committed yet, and then it won't be in progress any more and will look\n> aborted.\n>\n> I disagree with the idea that we can't check in progress. 
I think the\n> checks could look something like this:\n>\n> pred_in_progress = TransactionIdIsInProgress(pred_xmin);\n> current_in_progress = TransactionIdIsInProgress(current_xmin);\n> if (pred_in_progress)\n> {\n> if (current_in_progress)\n> return ok;\n> // recheck to avoid race condition\n> if (TransactionIdIsInProgress(pred_xmin))\n> {\n> if (TransactionIdDidCommit(current_xmin))\n> return corruption: predecessor xmin in progress, but\n> current xmin committed;\n> else\n> return corruption: predecessor xmin in progress, but\n> current xmin aborted;\n> }\n>\nI think we can have a situation where pred_xmin is in progress but\ncurr_xmin is aborted, consider below example:\n ‘postgres[14723]=#’BEGIN;\nBEGIN\n‘postgres[14723]=#*’insert into test2 values (1,1);\nINSERT 0 1\n‘postgres[14723]=#*’savepoint s1;\nSAVEPOINT\n‘postgres[14723]=#*’update test2 set a =2;\nUPDATE 1\n‘postgres[14723]=#*’rollback to savepoint s1;\nROLLBACK\n\nNow pred_xmin is in progress but curr_xmin is aborted, am I missing\nanything here?\nI think if pred_xmin is aborted and curr_xmin is in progress we should\nconsider it as a corruption case but vice versa is not true.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Nov 16, 2022 at 11:23 PM Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Nov 16, 2022 at 4:51 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> yes, got it, have tried to test and it is giving false corruption in case of subtransaction.\n> I think a better way to have this check is, we need to check that if pred_xmin is\n> aborted then current_xmin should be aborted only. So there is no way that we\n> validate corruption with in_progress txid.\n\nPlease note that you can't use TransactionIdDidAbort here, because\nthat will return false for transactions aborted by a crash. You have\nto check that it's not in progress and then afterwards check that it's\nnot committed. 
Also note that if you check whether it's committed\nfirst and then check whether it's in progress afterwards, there's a\nrace condition: it might commit just after you verify that it isn't\ncommitted yet, and then it won't be in progress any more and will look\naborted.\n\nI disagree with the idea that we can't check in progress. I think the\nchecks could look something like this:\n\npred_in_progress = TransactionIdIsInProgress(pred_xmin);\ncurrent_in_progress = TransactionIdIsInProgress(current_xmin);\nif (pred_in_progress)\n{\n if (current_in_progress)\n return ok;\n // recheck to avoid race condition\n if (TransactionIdIsInProgress(pred_xmin))\n {\n if (TransactionIdDidCommit(current_xmin))\n return corruption: predecessor xmin in progress, but\ncurrent xmin committed;\n else\n return corruption: predecessor xmin in progress, but\ncurrent xmin aborted;\n }I think we can have a situation where pred_xmin is in progress but curr_xmin is aborted, consider below example: ‘postgres[14723]=#’BEGIN;BEGIN‘postgres[14723]=#*’insert into test2 values (1,1);INSERT 0 1‘postgres[14723]=#*’savepoint s1;SAVEPOINT‘postgres[14723]=#*’update test2 set a =2;UPDATE 1‘postgres[14723]=#*’rollback to savepoint s1;ROLLBACKNow pred_xmin is in progress but curr_xmin is aborted, am I missing anything here?I think if pred_xmin is aborted and curr_xmin is in progress we should consider it as a corruption case but vice versa is not true.-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Nov 2022 09:27:19 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 10:57 PM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> I think if pred_xmin is aborted and curr_xmin is in progress we should consider it as a corruption case but vice versa is not true.\n\nYeah, you're right. I'm being stupid about subtransactions again.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 17 Nov 2022 08:36:33 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Nov 16, 2022 at 12:41 PM Himanshu Upadhyaya <\nupadhyaya.himanshu@gmail.com> wrote:\n\n>\n>\n>> > + }\n>> > +\n>> > + /* Loop over offsets and validate the data in the\n>> predecessor array. */\n>> > + for (OffsetNumber currentoffnum = FirstOffsetNumber;\n>> currentoffnum <= maxoff;\n>> > + currentoffnum = OffsetNumberNext(currentoffnum))\n>> > + {\n>> > + HeapTupleHeader pred_htup;\n>> > + HeapTupleHeader curr_htup;\n>> > + TransactionId pred_xmin;\n>> > + TransactionId curr_xmin;\n>> > + ItemId pred_lp;\n>> > + ItemId curr_lp;\n>> > +\n>> > + ctx.offnum = predecessor[currentoffnum];\n>> > + ctx.attnum = -1;\n>> > +\n>> > + if (ctx.offnum == 0)\n>> > + {\n>> > + /*\n>> > + * Either the root of the chain or an\n>> xmin-aborted tuple from\n>> > + * an abandoned portion of the HOT chain.\n>> > + */\n>>\n>> Hm - couldn't we check that the tuple could conceivably be at the root of\n>> a\n>> chain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n>>\n>>\n> I don't see a way to check if tuple is at the root of HOT chain because\n> predecessor array will always be having either xmin from non-abandoned\n> transaction or it will be zero. We can't differentiate root or tuple\n> inserted via abandoned transaction.\n>\n> I was wrong here. I think this can be done and will be doing these changes\nin my next patch.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Wed, Nov 16, 2022 at 12:41 PM Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com> wrote:\n\n> + }\n> +\n> + /* Loop over offsets and validate the data in the predecessor array. 
*/\n> + for (OffsetNumber currentoffnum = FirstOffsetNumber; currentoffnum <= maxoff;\n> + currentoffnum = OffsetNumberNext(currentoffnum))\n> + {\n> + HeapTupleHeader pred_htup;\n> + HeapTupleHeader curr_htup;\n> + TransactionId pred_xmin;\n> + TransactionId curr_xmin;\n> + ItemId pred_lp;\n> + ItemId curr_lp;\n> +\n> + ctx.offnum = predecessor[currentoffnum];\n> + ctx.attnum = -1;\n> +\n> + if (ctx.offnum == 0)\n> + {\n> + /*\n> + * Either the root of the chain or an xmin-aborted tuple from\n> + * an abandoned portion of the HOT chain.\n> + */\n\nHm - couldn't we check that the tuple could conceivably be at the root of a\nchain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n I don't see a way to check if tuple is at the root of HOT chain because predecessor array will always be having either xmin from non-abandoned transaction or it will be zero. We can't differentiate root or tuple inserted via abandoned transaction.I was wrong here. I think this can be done and will be doing these changes in my next patch. -- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Nov 2022 21:26:57 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 3:32 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > Furthermore, it is\n> > possible that successor[x] = successor[x'] since the page might be\n> corrupted\n> > and we haven't checked otherwise.\n> >\n> > predecessor[y] = x means that successor[x] = y but in addition we've\n> > checked that y is sane, and that x.xmax=y.xmin. If there are multiple\n> > tuples for which these conditions hold, we've issued complaints about\n> > all but one and entered the last into the predecessor array.\n>\n> As shown by the isolationtester test I just posted, this doesn't quite work\n> right now. Probably fixable.\n>\n> I don't think we can follow non-HOT ctid chains if they're older than the\n> xmin\n> horizon, including all cases of xmin being frozen. There's just nothing\n> guaranteeing that the tuples are actually \"related\".\n>\n> I understand the problem with frozen tuples but don't understand the\nconcern with non-HOT chains,\ncould you please help with some explanation around it?\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Tue, Nov 15, 2022 at 3:32 AM Andres Freund <andres@anarazel.de> wrote:\n> Furthermore, it is\n> possible that successor[x] = successor[x'] since the page might be corrupted\n> and we haven't checked otherwise.\n> \n> predecessor[y] = x means that successor[x] = y but in addition we've\n> checked that y is sane, and that x.xmax=y.xmin. If there are multiple\n> tuples for which these conditions hold, we've issued complaints about\n> all but one and entered the last into the predecessor array.\n\nAs shown by the isolationtester test I just posted, this doesn't quite work\nright now. Probably fixable.\n\nI don't think we can follow non-HOT ctid chains if they're older than the xmin\nhorizon, including all cases of xmin being frozen. 
There's just nothing\nguaranteeing that the tuples are actually \"related\".\nI understand the problem with frozen tuples but don't understand the concern with non-HOT chains,could you please help with some explanation around it?-- Regards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Nov 2022 21:33:17 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-17 21:33:17 +0530, Himanshu Upadhyaya wrote:\n> On Tue, Nov 15, 2022 at 3:32 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Furthermore, it is\n> > > possible that successor[x] = successor[x'] since the page might be\n> > corrupted\n> > > and we haven't checked otherwise.\n> > >\n> > > predecessor[y] = x means that successor[x] = y but in addition we've\n> > > checked that y is sane, and that x.xmax=y.xmin. If there are multiple\n> > > tuples for which these conditions hold, we've issued complaints about\n> > > all but one and entered the last into the predecessor array.\n> >\n> > As shown by the isolationtester test I just posted, this doesn't quite work\n> > right now. Probably fixable.\n> >\n> > I don't think we can follow non-HOT ctid chains if they're older than the\n> > xmin\n> > horizon, including all cases of xmin being frozen. There's just nothing\n> > guaranteeing that the tuples are actually \"related\".\n> >\n> I understand the problem with frozen tuples but don't understand the\n> concern with non-HOT chains,\n> could you please help with some explanation around it?\n\nI think there might be cases where following non-HOT ctid-chains across tuples\nwithin a page will trigger spurious errors, if the tuple versions are older\nthan the xmin horizon. But it's a bit hard to say without seeing the code with\na bunch of the other bugs fixed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 17 Nov 2022 09:53:52 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Nov 15, 2022 at 10:55 PM Andres Freund <andres@anarazel.de> wrote:\n> I'm quite certain that it's possible to end up freezing an earlier row\n> versions in a hot chain in < 14, I got there with careful gdb\n> orchestration. Of course possible I screwed something up, given I did it once,\n> interactively. Not sure if trying to fix it is worth the risk of backpatching\n> all the necessary changes to switch to the retry approach.\n\nThere is code in heap_prepare_freeze_tuple() that treats a raw xmax as\n\"xmax_already_frozen = true\", even when the raw xmax value isn't\nalready set to InvalidTransactionId. I'm referring to this code:\n\n if ( ... ) // process raw xmax\n ....\n else if (TransactionIdIsNormal(xid))\n ....\n else if ((tuple->t_infomask & HEAP_XMAX_INVALID) ||\n !TransactionIdIsValid(HeapTupleHeaderGetRawXmax(tuple)))\n {\n freeze_xmax = false;\n xmax_already_frozen = true;\n /* No need for relfrozenxid_out handling for already-frozen xmax */\n }\n else\n ereport(ERROR,\n (errcode(ERRCODE_DATA_CORRUPTED),\n errmsg_internal(\"found xmax %u (infomask 0x%04x) not\nfrozen, not multi, not normal\",\n xid, tuple->t_infomask)));\n\nWhy should it be okay to not process xmax during this call (by setting\n\"xmax_already_frozen = true\"), just because HEAP_XMAX_INVALID happens\nto be set? Isn't HEAP_XMAX_INVALID purely a hint? (HEAP_XMIN_FROZEN is\n*not* a hint, but we're dealing with xmax here.)\n\nI'm not sure how relevant this is to the concerns you have about\nfrozen xmax, or even if it's any kind of problem, but it still seems\nworth fixing. It seems to me that there should be clear rules on what\nspecial transaction IDs can appear in xmax. 
Namely: the only special\ntransaction ID that can ever appear in xmax is InvalidTransactionId.\n(Also, it's not okay to see *any* other XID in the\n\"xmax_already_frozen = true\" path, nor would it be okay to leave any\nother XID behind in xmax in the nearby \"freeze_xmax = true\" path.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 20 Nov 2022 11:58:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-20 11:58:12 -0800, Peter Geoghegan wrote:\n> There is code in heap_prepare_freeze_tuple() that treats a raw xmax as\n> \"xmax_already_frozen = true\", even when the raw xmax value isn't\n> already set to InvalidTransactionId. I'm referring to this code:\n>\n> if ( ... ) // process raw xmax\n> ....\n> else if (TransactionIdIsNormal(xid))\n> ....\n> else if ((tuple->t_infomask & HEAP_XMAX_INVALID) ||\n> !TransactionIdIsValid(HeapTupleHeaderGetRawXmax(tuple)))\n> {\n> freeze_xmax = false;\n> xmax_already_frozen = true;\n> /* No need for relfrozenxid_out handling for already-frozen xmax */\n> }\n> else\n> ereport(ERROR,\n> (errcode(ERRCODE_DATA_CORRUPTED),\n> errmsg_internal(\"found xmax %u (infomask 0x%04x) not\n> frozen, not multi, not normal\",\n> xid, tuple->t_infomask)));\n>\n> Why should it be okay to not process xmax during this call (by setting\n> \"xmax_already_frozen = true\"), just because HEAP_XMAX_INVALID happens\n> to be set? Isn't HEAP_XMAX_INVALID purely a hint? (HEAP_XMIN_FROZEN is\n> *not* a hint, but we're dealing with xmax here.)\n\nHm. But to get to that point we already need to have decided that xmax\nis not a normal xid. Unhelpfully we reuse the 'xid' variable for xmax as\nwell:\n\txid = HeapTupleHeaderGetRawXmax(tuple);\n\nI don't really know the HEAP_XMAX_INVALID branch is trying to do. For\none, xid already is set to HeapTupleHeaderGetRawXmax(), why is it\nrefetching the value?\n\nSo it looks to me like this path should just test !TransactionIdIsValid(xid)?\n\n\n> It seems to me that there should be clear rules on what\n> special transaction IDs can appear in xmax. Namely: the only special\n> transaction ID that can ever appear in xmax is InvalidTransactionId.\n> (Also, it's not okay to see *any* other XID in the\n> \"xmax_already_frozen = true\" path, nor would it be okay to leave any\n> other XID behind in xmax in the nearby \"freeze_xmax = true\" path.)\n\nYea.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Nov 2022 13:34:45 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Nov 21, 2022 at 1:34 PM Andres Freund <andres@anarazel.de> wrote:\n> Hm. But to get to that point we already need to have decided that xmax\n> is not a normal xid. Unhelpfully we reuse the 'xid' variable for xmax as\n> well:\n> xid = HeapTupleHeaderGetRawXmax(tuple);\n>\n> I don't really know the HEAP_XMAX_INVALID branch is trying to do. For\n> one, xid already is set to HeapTupleHeaderGetRawXmax(), why is it\n> refetching the value?\n\nRight, that detail is correct, but still weird. And suggests that it\nmight not have been super well thought through.\n\n> So it looks to me like this path should just test !TransactionIdIsValid(xid)?\n\nAgreed. Plus there should be a comment that reminds you that this is a\nnormal regular transaction ID (easy to miss, because the initial \"if\"\nblock for Multis is rather large).\n\nI will push something like that soon.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 21 Nov 2022 14:06:32 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Nov 10, 2022 at 3:38 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > + }\n> > +\n> > + /*\n> > + * Loop over offset and populate predecessor array from\n> all entries\n> > + * that are present in successor array.\n> > + */\n> > + ctx.attnum = -1;\n> > + for (ctx.offnum = FirstOffsetNumber; ctx.offnum <= maxoff;\n> > + ctx.offnum = OffsetNumberNext(ctx.offnum))\n> > + {\n> > + ItemId curr_lp;\n> > + ItemId next_lp;\n> > + HeapTupleHeader curr_htup;\n> > + HeapTupleHeader next_htup;\n> > + TransactionId curr_xmax;\n> > + TransactionId next_xmin;\n> > +\n> > + OffsetNumber nextoffnum = successor[ctx.offnum];\n> > +\n> > + curr_lp = PageGetItemId(ctx.page, ctx.offnum);\n>\n> Why do we get the item when nextoffnum is 0?\n>\n\nFixed by moving PageGetItemId() call after the 'if' check.\n\n\n> > + if (nextoffnum == 0 || !lp_valid[ctx.offnum] ||\n> !lp_valid[nextoffnum])\n> > + {\n> > + /*\n> > + * This is either the last updated tuple\n> in the chain or a\n> > + * corruption raised for this tuple.\n> > + */\n>\n> \"or a corruption raised\" isn't quite right grammatically.\n>\n\ndone.\n\n>\n> > + continue;\n> > + }\n> > + if (ItemIdIsRedirected(curr_lp))\n> > + {\n> > + next_lp = PageGetItemId(ctx.page,\n> nextoffnum);\n> > + if (ItemIdIsRedirected(next_lp))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected line pointer pointing to another redirected line\n> pointer at offset %u\",\n> > +\n> (unsigned) nextoffnum));\n> > + continue;\n> > + }\n> > + next_htup = (HeapTupleHeader) PageGetItem(\n> ctx.page, next_lp);\n> > + if (!HeapTupleHeaderIsHeapOnly(next_htup))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected tuple at line pointer offset %u is not heap only\n> tuple\",\n> > +\n> (unsigned) nextoffnum));\n> > + }\n> > + if ((next_htup->t_infomask & HEAP_UPDATED)\n> == 0)\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"redirected tuple at line pointer offset %u is not heap updated\n> 
tuple\",\n> > +\n> (unsigned) nextoffnum));\n> > + }\n> > + continue;\n> > + }\n> > +\n> > + /*\n> > + * Add a line pointer offset to the predecessor\n> array if xmax is\n> > + * matching with xmin of next tuple (reaching via\n> its t_ctid).\n> > + * Prior to PostgreSQL 9.4, we actually changed\n> the xmin to\n> > + * FrozenTransactionId\n>\n> I'm doubtful it's a good idea to try to validate the 9.4 case. The\n> likelihood\n> of getting that right seems low and I don't see us gaining much by even\n> trying.\n>\n>\n>\nremoved code with regards to frozen tuple checks.\n\n> so we must add offset to predecessor\n> > + * array(irrespective of xmax-xmin matching) if\n> updated tuple xmin\n> > + * is frozen, so that we can later do validation\n> related to frozen\n> > + * xmin. Raise corruption if we have two tuples\n> having the same\n> > + * predecessor.\n> > + * We add the offset to the predecessor array\n> irrespective of the\n> > + * transaction (t_xmin) status. We will do\n> validation related to\n> > + * the transaction status (and also all other\n> validations) when we\n> > + * loop over the predecessor array.\n> > + */\n> > + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> > + curr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\n> > + next_lp = PageGetItemId(ctx.page, nextoffnum);\n> > + next_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> next_lp);\n> > + next_xmin = HeapTupleHeaderGetXmin(next_htup);\n> > + if (TransactionIdIsValid(curr_xmax) &&\n> > + (TransactionIdEquals(curr_xmax, next_xmin)\n> ||\n> > + next_xmin == FrozenTransactionId))\n> > + {\n> > + if (predecessor[nextoffnum] != 0)\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"updated version at offset %u is also the updated version of\n> tuple at offset %u\",\n> > +\n> (unsigned) nextoffnum, (unsigned) predecessor[nextoffnum]));\n> > + continue;\n>\n> I doubt it is correct to enter this path with next_xmin ==\n> FrozenTransactionId. 
This is following a ctid chain that we normally\n> wouldn't\n> follow, because it doesn't satisfy the t_self->xmax == t_ctid->xmin\n> condition.\n>\n> removed this frozen check.\n\n> + }\n> > +\n> > + /* Loop over offsets and validate the data in the\n> predecessor array. */\n> > + for (OffsetNumber currentoffnum = FirstOffsetNumber;\n> currentoffnum <= maxoff;\n> > + currentoffnum = OffsetNumberNext(currentoffnum))\n> > + {\n> > + HeapTupleHeader pred_htup;\n> > + HeapTupleHeader curr_htup;\n> > + TransactionId pred_xmin;\n> > + TransactionId curr_xmin;\n> > + ItemId pred_lp;\n> > + ItemId curr_lp;\n> > +\n> > + ctx.offnum = predecessor[currentoffnum];\n> > + ctx.attnum = -1;\n> > +\n> > + if (ctx.offnum == 0)\n> > + {\n> > + /*\n> > + * Either the root of the chain or an\n> xmin-aborted tuple from\n> > + * an abandoned portion of the HOT chain.\n> > + */\n>\n> Hm - couldn't we check that the tuple could conceivably be at the root of a\n> chain? I.e. isn't HEAP_HOT_UPDATED? Or alternatively has an aborted xmin?\n>\n> Done, I have added code to identify cases of missing offset in the\npredecessor[] array and added validation that root of the chain must not be\nHEAP_ONLY_TUPLE.\n\n>\n> > + continue;\n> > + }\n> > +\n> > + curr_lp = PageGetItemId(ctx.page, currentoffnum);\n> > + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> > + curr_xmin = HeapTupleHeaderGetXmin(curr_htup);\n> > +\n> > + ctx.itemid = pred_lp = PageGetItemId(ctx.page,\n> ctx.offnum);\n> > + pred_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> pred_lp);\n> > + pred_xmin = HeapTupleHeaderGetXmin(pred_htup);\n> > +\n> > + /*\n> > + * If the predecessor's xmin is aborted or in\n> progress, the\n> > + * current tuples xmin should be aborted or in\n> progress\n> > + * respectively. 
Also both xmin's must be equal.\n> > + */\n> > + if (!TransactionIdEquals(pred_xmin, curr_xmin) &&\n> > + !TransactionIdDidCommit(pred_xmin))\n> > + {\n> > + report_corruption(&ctx,\n> > +\n> psprintf(\"tuple with uncommitted xmin %u was updated to produce a tuple at\n> offset %u with differing xmin %u\",\n> > +\n> (unsigned) pred_xmin, (unsigned) currentoffnum, (unsigned)\n> curr_xmin));\n>\n> Is this necessarily true? What about a tuple that was inserted in a\n> subtransaction and then updated in another subtransaction of the same\n> toplevel\n> transaction?\n>\n>\npatch has been updated to handle cases of sub-transaction.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 30 Nov 2022 16:09:19 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2022-11-30 16:09:19 +0530, Himanshu Upadhyaya wrote:\n> has been updated to handle cases of sub-transaction.\n\nThanks!\n\n\n> +\t\t/* Loop over offsets and validate the data in the predecessor array. */\n> +\t\tfor (OffsetNumber currentoffnum = FirstOffsetNumber; currentoffnum <= maxoff;\n> +\t\t\t currentoffnum = OffsetNumberNext(currentoffnum))\n> +\t\t{\n> +\t\t\tHeapTupleHeader pred_htup;\n> +\t\t\tHeapTupleHeader curr_htup;\n> +\t\t\tTransactionId pred_xmin;\n> +\t\t\tTransactionId curr_xmin;\n> +\t\t\tItemId\t\tpred_lp;\n> +\t\t\tItemId\t\tcurr_lp;\n> +\t\t\tbool\t\tpred_in_progress;\n> +\t\t\tXidCommitStatus xid_commit_status;\n> +\t\t\tXidBoundsViolation xid_status;\n> +\n> +\t\t\tctx.offnum = predecessor[currentoffnum];\n> +\t\t\tctx.attnum = -1;\n> +\t\t\tcurr_lp = PageGetItemId(ctx.page, currentoffnum);\n> +\t\t\tif (!lp_valid[currentoffnum] || ItemIdIsRedirected(curr_lp))\n> +\t\t\t\tcontinue;\n\nI don't think we should do PageGetItemId(ctx.page, currentoffnum); if !lp_valid[currentoffnum].\n\n\n> +\t\t\tcurr_htup = (HeapTupleHeader) PageGetItem(ctx.page, curr_lp);\n> +\t\t\tcurr_xmin = HeapTupleHeaderGetXmin(curr_htup);\n> +\t\t\txid_status = get_xid_status(curr_xmin, &ctx, &xid_commit_status);\n> +\t\t\tif (!(xid_status == XID_BOUNDS_OK || xid_status == XID_INVALID))\n> +\t\t\t\tcontinue;\n\nWhy can we even get here if the xid status isn't XID_BOUNDS_OK?\n\n\n> +\t\t\tif (ctx.offnum == 0)\n\nFor one, I think it'd be better to use InvalidOffsetNumber here. But more\ngenerally, storing the predecessor in ctx.offnum seems quite confusing.\n\n\n> +\t\t\t{\n> +\t\t\t\t/*\n> +\t\t\t\t * No harm in overriding value of ctx.offnum as we will always\n> +\t\t\t\t * continue if we are here.\n> +\t\t\t\t */\n> +\t\t\t\tctx.offnum = currentoffnum;\n> +\t\t\t\tif (TransactionIdIsInProgress(curr_xmin) || TransactionIdDidCommit(curr_xmin))\n\nIs it actually ok to call TransactionIdDidCommit() here? 
There's a reason\nget_xid_status() exists after all. And we do have the xid status for xmin\nalready, so this could just check xid_commit_status, no?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 1 Dec 2022 16:13:21 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\n\nOn Fri, Dec 2, 2022 at 5:43 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > + curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> > + curr_xmin = HeapTupleHeaderGetXmin(curr_htup);\n> > + xid_status = get_xid_status(curr_xmin, &ctx,\n> &xid_commit_status);\n> > + if (!(xid_status == XID_BOUNDS_OK || xid_status ==\n> XID_INVALID))\n> > + continue;\n>\n> Why can we even get here if the xid status isn't XID_BOUNDS_OK?\n>\n>\n\n @@ -504,9 +516,269 @@ verify_heapam(PG_FUNCTION_ARGS)\n /* It should be safe to examine the tuple's header,\nat least */\n ctx.tuphdr = (HeapTupleHeader) PageGetItem(ctx.page,\nctx.itemid);\n ctx.natts = HeapTupleHeaderGetNatts(ctx.tuphdr);\n+ lp_valid[ctx.offnum] = true;\n\n /* Ok, ready to check this next tuple */\n check_tuple(&ctx);\n\nreferring above code, check_tuple(&ctx); do have this check but we populate\nlp_valid before that call.\nPopulating lp_valid before check_tuple() is intentional because even if we\ndo changes to get the return status from check_tuple() to populate that in\nlp_valid, it will be hard to validate cases that are dependent on aborted\ntransaction (like \"tuple with aborted xmin %u was updated to produce a\ntuple at offset %u with committed xmin %u\") because check_tuple_visibility\nis also looking for aborted xmin and return false if tuple's xmin is\naborted, in fact we can add one more parameter to check_tuple and get the\nstatus of transaction if it is aborted and accordingly set lp_valid to true\nbut that will add unnecessary complexity and don't find it convincing\nimplementation. Alternatively, I found rechecking xid_status is simpler and\nstraight.\n\n\n>\n> > + if (ctx.offnum == 0)\n>\n> For one, I think it'd be better to use InvalidOffsetNumber here. 
But more\n> generally, storing the predecessor in ctx.offnum seems quite confusing.\n>\n> ok, I will change it to InvalidOffsetNumber at all the places, we need\nctx.offnum to have the value of the predecessor array as this will be\ninternally used by report_corruption function to generate the message(eg.\nbelow), and the format of these message's seems more simple and meaningful\nto report corruption.\n\n report_corruption(&ctx,\n\npsprintf(\"heap-only update produced a non-heap only tuple at offset %u\",\n\n (unsigned) currentoffnum));\nHere we don't need to mention ctx.offnum explicitly in the above message as\nthis will be taken care of by the code below.\n\n\"report_corruption_internal(Tuplestorestate *tupstore, TupleDesc tupdesc,\n BlockNumber blkno,\nOffsetNumber offnum,\n AttrNumber attnum, char\n*msg)\n{\n Datum values[HEAPCHECK_RELATION_COLS] = {0};\n bool nulls[HEAPCHECK_RELATION_COLS] = {0};\n HeapTuple tuple;\n\n values[0] = Int64GetDatum(blkno);\n values[1] = Int32GetDatum(offnum);\"\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 2 Dec 2022 13:20:54 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Fri, Dec 2, 2022 at 5:43 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > + /* Loop over offsets and validate the data in the\n> predecessor array. */\n> > + for (OffsetNumber currentoffnum = FirstOffsetNumber;\n> currentoffnum <= maxoff;\n> > + currentoffnum = OffsetNumberNext(currentoffnum))\n> > + {\n> > + HeapTupleHeader pred_htup;\n> > + HeapTupleHeader curr_htup;\n> > + TransactionId pred_xmin;\n> > + TransactionId curr_xmin;\n> > + ItemId pred_lp;\n> > + ItemId curr_lp;\n> > + bool pred_in_progress;\n> > + XidCommitStatus xid_commit_status;\n> > + XidBoundsViolation xid_status;\n> > +\n> > + ctx.offnum = predecessor[currentoffnum];\n> > + ctx.attnum = -1;\n> > + curr_lp = PageGetItemId(ctx.page, currentoffnum);\n> > + if (!lp_valid[currentoffnum] ||\n> ItemIdIsRedirected(curr_lp))\n> > + continue;\n>\n> I don't think we should do PageGetItemId(ctx.page, currentoffnum); if\n> !lp_valid[currentoffnum].\n>\n> Fixed.\n\n>\n> > + if (ctx.offnum == 0)\n>\n> For one, I think it'd be better to use InvalidOffsetNumber here. But more\n> generally, storing the predecessor in ctx.offnum seems quite confusing.\n>\n> changed all relevant places to use InvalidOffsetNumber.\n\n>\n> > + {\n> > + /*\n> > + * No harm in overriding value of\n> ctx.offnum as we will always\n> > + * continue if we are here.\n> > + */\n> > + ctx.offnum = currentoffnum;\n> > + if (TransactionIdIsInProgress(curr_xmin)\n> || TransactionIdDidCommit(curr_xmin))\n>\n> Is it actually ok to call TransactionIdDidCommit() here? There's a reason\n> get_xid_status() exists after all. And we do have the xid status for xmin\n> already, so this could just check xid_commit_status, no?\n>\n>\n> I think it will be good to pass NULL to get_xid_status like\n\"get_xid_status(curr_xmin, &ctx, NULL);\" so that we can only check the xid\nstatus at the time when it is actually required. 
This way we can avoid\nchecking xid status in cases when we simply 'continue' due to some check.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 5 Dec 2022 18:38:29 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi hackers,\n\n> Fixed.\n\nI noticed that this patch stuck a little and decided to take another look.\n\nIt seems to be well written, covered with tests and my understanding\nis that all the previous feedback was accounted for. To your knowledge\nis there anything that prevents us from moving it to \"Ready for\nCommitter\"?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Thu, 19 Jan 2023 16:55:18 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 8:55 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I noticed that this patch stuck a little and decided to take another look.\n>\n> It seems to be well written, covered with tests and my understanding\n> is that all the previous feedback was accounted for. To your knowledge\n> is there anything that prevents us from moving it to \"Ready for\n> Committer\"?\n\nThanks for taking a look, and for pinging the thread.\n\nI think that the handling of lp_valid[] in the loop that begins with\n\"Loop over offset and populate predecessor array from all entries that\nare present in successor array\" is very confusing. I think that\nlp_valid[] should be answering the question \"is the line pointer\nbasically sane?\". That is, if it's a redirect, it needs to point to\nsomething within the line pointer array (and we also check that it\nmust be an entry in the line pointer array that is used, which seems\nfine). If it's not a redirect, it needs to point to space that's\nentirely within the block, properly aligned, and big enough to contain\na tuple. We determine the answers to all of these questions in the\nfirst loop, the one that starts with /* Perform tuple checks */.\n\nNothing that happens in the second loop, where we populate the\npredecessor array, can reverse our previous conclusion that the line\npointer is valid, so this loop shouldn't be resetting entries in\nlp_valid[] to false. The reason that it's doing so seems to be that it\nwants to use lp_valid[] to control the behavior of the third loop,\nwhere we perform checks against things that have entries in the\npredecessor array. As written, the code ensures that we always set\nlp_valid[nextoffnum] to false unless we set predecessor[nextoffnum] to\na value other than InvalidOffsetNumber. But that is needlessly\ncomplex: the third loop doesn't need to look at lp_valid[] at all. It\ncan just check whether predecessor[currentoffnum] is valid. If it is,\nperform checks. 
Otherwise, skip it. It seems to me that this would be\nsignificantly simpler.\n\nTo put the above complaint another way, a variable shouldn't mean two\ndifferent things depending on where you are in the function. Right\nnow, at the end of the first loop, lp_valid[x] answers the question\n\"is line pointer x basically valid?\". But by the end of the second\nloop, it answers the question \"is line pointer x valid and does it\nalso have a valid predecessor?\". That kind of definitional change is\nsomething to be avoided.\n\nThe test if (pred_in_progress || TransactionIdDidCommit(curr_xmin))\nseems wrong to me. Shouldn't it be &&? Has this code been tested at\nall? It doesn't seem to have a test case. Some of these other errors\ndon't, either. Maybe there's some that we can't easily test in an\nautomated way, but we should test what we can. I guess maybe casual\ntesting wouldn't reveal the problem here because of the recheck, but\nit's worrying to find logic that doesn't look right with no\ncorresponding comments or test cases.\n\nSome error message kibitzing:\n\n psprintf(\"redirected tuple at line pointer offset %u is not heap only tuple\",\n\nIt seems to me that this should say \"redirected line pointer pointing\nto a non-heap-only tuple at offset %u\". 
There is no such thing as a\nredirected tuple -- and even if there were, what we have here is\nclearly a redirected line pointer.\n\npsprintf(\"redirected tuple at line pointer offset %u is not heap only tuple\",\n\nAnd I think for the same reasons this one should say something like\n\"redirected line pointer pointing to a non-heap-only tuple at offset\n%u\".\n\n psprintf(\"redirected tuple at line pointer offset %u is not heap\nupdated tuple\",\n\nPossibly all of these would sound better with \"points\" rather than\n\"pointing\" -- if so, we'd need to change an existing message in the\nsame way.\n\nAnd this one should say something like \"redirected line pointer\npointing to a tuple not produced by an update at offset %u\".\n\n psprintf(\"tuple is root of chain but it is marked as heap-only tuple\"));\n\nI think this would sound better if you deleted the word \"it\".\n\nI don't know whether it's worth arguing about -- it feels like we've\nargued too much already about this sort of thing -- but I am not very\nconvinced by initializers like OffsetNumber\npredecessor[MaxOffsetNumber] = {InvalidOffsetNumber}. That style is\nonly correct because InvalidOffsetNumber happens to be zero. If it\nwere up to me, I'd use memset to clear the predecessor array. I would\nnot bulk initialize successor and lp_valid but make sure that the first\nloop always sets them, possibly by having the top of the loop set them\nto InvalidOffsetNumber and false initially and then letting code later\nin the loop change the value, or possibly in some other way.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 19 Jan 2023 14:08:24 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 12:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> I think that the handling of lp_valid[] in the loop that begins with\n> \"Loop over offset and populate predecessor array from all entries that\n> are present in successor array\" is very confusing. I think that\n> lp_valid[] should be answering the question \"is the line pointer\n> basically sane?\". That is, if it's a redirect, it needs to point to\n> something within the line pointer array (and we also check that it\n> must be an entry in the line pointer array that is used, which seems\n> fine). If it's not a redirect, it needs to point to space that's\n> entirely within the block, properly aligned, and big enough to contain\n> a tuple. We determine the answers to all of these questions in the\n> first loop, the one that starts with /* Perform tuple checks */.\n>\n> Nothing that happens in the second loop, where we populate the\n> predecessor array, can reverse our previous conclusion that the line\n> pointer is valid, so this loop shouldn't be resetting entries in\n> lp_valid[] to false. The reason that it's doing so seems to be that it\n> wants to use lp_valid[] to control the behavior of the third loop,\n> where we perform checks against things that have entries in the\n> predecessor array. As written, the code ensures that we always set\n> lp_valid[nextoffnum] to false unless we set predecessor[nextoffnum] to\n> a value other than InvalidOffsetNumber. But that is needlessly\n> complex: the third loop doesn't need to look at lp_valid[] at all. It\n> can just check whether predecessor[currentoffnum] is valid. If it is,\n> perform checks. Otherwise, skip it. 
It seems to me that this would be\n> significantly simpler.\n>\nI was trying to use lp_valid as I need to identify the root of the HOT\nchain and we are doing validation on the root of the HOT chain when we loop\nover the predecessor array.\n if (nextoffnum == InvalidOffsetNumber ||\n!lp_valid[ctx.offnum] || !lp_valid[nextoffnum])\n {\n /*\n * Set lp_valid of nextoffnum to false if\ncurrent tuple's\n * lp_valid is true. We don't add this to\npredecessor array as\n * it's of no use to validate tuple if its\npredecessor is\n * already corrupted but we need to\nidentify all those tuple's\n * so that we can differentiate between all\nthe cases of\n * missing offset in predecessor array,\nthis will help in\n * validating the root of chain when we\nloop over predecessor\n * array.\n */\n if (!lp_valid[ctx.offnum] &&\nlp_valid[nextoffnum])\n lp_valid[nextoffnum] = false;\nWas resetting lp_valid in the last patch because we don't add data to\npredecessor[] and while looping over the predecessor array we need to\nisolate (and identify) all cases of missing data in the predecessor array\nto exactly identify the root of HOT chain.\nOne solution is to always add data to predecessor array while looping over\nsuccessor array and then while looping over predecessor array we can\ncontinue for other validation \"if (lp_valid [predecessor[currentoffnum]] &&\nlp_valid[currentoffnum]\" is true but in this case also our third loop will\nalso look at lp_valid[].\n\nTo put the above complaint another way, a variable shouldn't mean two\n> different things depending on where you are in the function. Right\n> now, at the end of the first loop, lp_valid[x] answers the question\n> \"is line pointer x basically valid?\". But by the end of the second\n> loop, it answers the question \"is line pointer x valid and does it\n> also have a valid predecessor?\". 
That kind of definitional change is\n> something to be avoided.\n>\n> agree.\n\n\n> The test if (pred_in_progress || TransactionIdDidCommit(curr_xmin))\n> seems wrong to me. Shouldn't it be &&? Has this code been tested at\n> all? It doesn't seem to have a test case. Some of these other errors\n> don't, either. Maybe there's some that we can't easily test in an\n> automated way, but we should test what we can. I guess maybe casual\n> testing wouldn't reveal the problem here because of the recheck, but\n> it's worrying to find logic that doesn't look right with no\n> corresponding comments or test cases.\n>\n> This is totally my Mistake, apologies for that. I will fix this in my next\npatch. Regarding the missing test cases, I need one in-progress transaction\nfor these test cases to be included in 004_verify_heapam.pl but I\ndon't find a clear way to have an in-progress transaction(as per the design\nof 004_verify_heapam.pl ) that I can use in the test cases. I will be doing\nmore research on a solution to add these missing test cases.\n\n> Some error message kibitizing:\n>\n> psprintf(\"redirected tuple at line pointer offset %u is not heap only\n> tuple\",\n>\n> It seems to me that this should say \"redirected line pointer pointing\n> to a non-heap-only tuple at offset %u\". 
There is no such thing as a\n> redirected tuple -- and even if there were, what we have here is\n> clearly a redirected line pointer.\n>\n> psprintf(\"redirected tuple at line pointer offset %u is not heap only\n> tuple\",\n>\n> And I think for the same reasons this one should say something like\n> \"redirected line pointer pointing to a non-heap-only tuple at offset\n> %u\".\n>\n> psprintf(\"redirected tuple at line pointer offset %u is not heap\n> updated tuple\",\n>\n> Possibly all of these would sound better with \"points\" rather than\n> \"pointing\" -- if so, we'd need to change an existing message in the\n> same way.\n>\n> And this one should say something like \"redirected line pointer\n> pointing to a tuple not produced by an update at offset %u\".\n>\n> psprintf(\"tuple is root of chain but it is marked as heap-only tuple\"));\n>\n> I think this would sound better if you deleted the word \"it\".\n>\n> Will change accordingly in my next patch.\n\n> I don't know whether it's worth arguing about -- it feels like we've\n> argued too much already about this sort of thing -- but I am not very\n> convinced by initializers like OffsetNumber\n> predecessor[MaxOffsetNumber] = {InvalidOffsetNumber}. That style is\n> only correct because InvalidOffsetNumber happens to be zero. If it\n> were up to me, I'd use memset to clear the predecessor array. 
I would\n> not bulk initialize successor and lp_valid but make sure that the first\n> loop always sets them, possibly by having the top of the loop set them\n> to InvalidOffsetNumber and false initially and then letting code later\n> in the loop change the value, or possibly in some other way.\n>\n> agree, will fix in my next patch\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 22 Jan 2023 20:48:23 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Sun, Jan 22, 2023 at 10:19 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> I was trying to use lp_valid as I need to identify the root of the HOT chain and we are doing validation on the root of the HOT chain when we loop over the predecessor array.\n> Was resetting lp_valid in the last patch because we don't add data to predecessor[] and while looping over the predecessor array we need to isolate (and identify) all cases of missing data in the predecessor array to exactly identify the root of HOT chain.\n> One solution is to always add data to predecessor array while looping over successor array and then while looping over predecessor array we can continue for other validation \"if (lp_valid [predecessor[currentoffnum]] && lp_valid[currentoffnum]\" is true but in this case also our third loop will also look at lp_valid[].\n\nI don't mind if the third loop looks at lp_valid if it has a reason to\ndo that, but I don't think we should be resetting values from true to\nfalse. Once we know a line pointer to be valid, it doesn't stop being\nvalid later because we found out some other thing about something\nelse.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 24 Jan 2023 14:18:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi Hackers,\n\nOn Sun, Jan 22, 2023 at 8:48 PM Himanshu Upadhyaya <\nupadhyaya.himanshu@gmail.com> wrote:\n\n>\n> The test if (pred_in_progress || TransactionIdDidCommit(curr_xmin))\n>> seems wrong to me. Shouldn't it be &&? Has this code been tested at\n>> all? It doesn't seem to have a test case. Some of these other errors\n>> don't, either. Maybe there's some that we can't easily test in an\n>> automated way, but we should test what we can. I guess maybe casual\n>> testing wouldn't reveal the problem here because of the recheck, but\n>> it's worrying to find logic that doesn't look right with no\n>> corresponding comments or test cases.\n>>\n>> This is totally my Mistake, apologies for that. I will fix this in my\n> next patch. Regarding the missing test cases, I need one in-progress\n> transaction for these test cases to be included in 004_verify_heapam.pl\n> but I don't find a clear way to have an in-progress transaction(as per the\n> design of 004_verify_heapam.pl ) that I can use in the test cases. I will\n> be doing more research on a solution to add these missing test cases.\n>\n>>\n>> I am trying to add test cases related to in-progress transactions in\n004_verify_heapam.pl but I am not able to find a proper way to achieve\nthis.\nWe have a logic where we manually corrupt each tuple.\nPlease refer to the code just after the below comment in\n004_verify_heapam.pl\n\n\"# Corrupt the tuples, one type of corruption per tuple. Some types of\n# corruption cause verify_heapam to skip to the next tuple without\n# performing any remaining checks, so we can't exercise the system properly\nif\n# we focus all our corruption on a single tuple.\"\n\nBefore this we stop the node by \"$node->stop;\" and then only we progress to\nmanual corruption. 
This will abort all running/in-progress transactions.\nSo, if we create an in-progress transaction and comment \"$node->stop;\"\nthen somehow all the code that we have for manual corruption does not work.\n\nI think it is required to stop the server and then only proceed for manual\ncorruption?\nIf this is the case then please suggest if there is a way to get an\nin-progress transaction\nthat we can use for manual corruption.\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Jan 2023 18:53:28 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 8:24 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Before this we stop the node by \"$node->stop;\" and then only we progress to\n> manual corruption. This will abort all running/in-progress transactions.\n> So, if we create an in-progress transaction and comment \"$node->stop;\"\n> then somehow all the code that we have for manual corruption does not work.\n>\n> I think it is required to stop the server and then only proceed for manual corruption?\n> If this is the case then please suggest if there is a way to get an in-progress transaction\n> that we can use for manual corruption.\n\nHow about using a prepared transaction?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 31 Jan 2023 08:50:29 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 7:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Jan 30, 2023 at 8:24 AM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> > Before this we stop the node by \"$node->stop;\" and then only we progress\n> to\n> > manual corruption. This will abort all running/in-progress transactions.\n> > So, if we create an in-progress transaction and comment \"$node->stop;\"\n> > then somehow all the code that we have for manual corruption does not\n> work.\n> >\n> > I think it is required to stop the server and then only proceed for\n> manual corruption?\n> > If this is the case then please suggest if there is a way to get an\n> in-progress transaction\n> > that we can use for manual corruption.\n>\n> How about using a prepared transaction?\n>\n> Thanks, yes it's working fine with Prepared Transaction.\nPlease find attached the v9 patch incorporating all the review comments.\n\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Sun, 5 Feb 2023 14:27:09 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Sun, Feb 5, 2023 at 3:57 AM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Thanks, yes it's working fine with Prepared Transaction.\n> Please find attached the v9 patch incorporating all the review comments.\n\nI don't know quite how we're still going around in circles about this,\nbut this code makes no sense to me at all:\n\n /*\n * Add data to the predecessor array even if the current or\n * successor's LP is not valid. We will not process/validate these\n * offset entries while looping over the predecessor array but\n * having all entries in the predecessor array will help in\n * identifying(and validating) the Root of a chain.\n */\n if (!lp_valid[ctx.offnum] || !lp_valid[nextoffnum])\n {\n predecessor[nextoffnum] = ctx.offnum;\n continue;\n }\n\nIf the current offset number is not for a valid line pointer, then it\nmakes no sense to talk about the successor. An invalid redirected line\npointer is one that points off the end of the line pointer array, or\nto before the beginning of the line pointer array, or to a line\npointer that is unused. An invalid line pointer that is LP_USED is one\nwhich points to a location outside the page, or to a location inside\nthe page. In none of these cases does it make any sense to talk about\nthe next tuple. If the line pointer isn't valid, it's pointing to some\ninvalid location where there cannot possibly be a tuple. In other\nwords, if lp_valid[ctx.offnum] is false, then nextoffnum is a garbage\nvalue, and therefore referencing predecessor[nextoffnum] is useless\nand dangerous.\n\nIf the next offset number is not for a valid line pointer, we could in\ntheory still assign to the predecessor array, as you propose here. In\nthat case, the tuple or line pointer at ctx.offnum is pointing to the\nline pointer at nextoffnum and that is all fine. But what is the\npoint? The comment claims that the point is that it will help us\nidentify and validate the root of the hot chain. 
But if the line\npointer at nextoffnum is not valid, it can't be the root of a hot\nchain. When we're talking about the root of a HOT chain, we're\nspeaking about a tuple. If lp_valid[nextoffnum] is false, there is no\ntuple. Instead of pointing to a tuple, that line pointer is pointing\nto garbage.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 12:46:51 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 11:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Sun, Feb 5, 2023 at 3:57 AM Himanshu Upadhyaya\n> <upadhyaya.himanshu@gmail.com> wrote:\n> > Thanks, yes it's working fine with Prepared Transaction.\n> > Please find attached the v9 patch incorporating all the review comments.\n>\n> I don't know quite how we're still going around in circles about this,\n> but this code makes no sense to me at all:\n>\n> /*\n> * Add data to the predecessor array even if the current or\n> * successor's LP is not valid. We will not process/validate\n> these\n> * offset entries while looping over the predecessor array but\n> * having all entries in the predecessor array will help in\n> * identifying(and validating) the Root of a chain.\n> */\n> if (!lp_valid[ctx.offnum] || !lp_valid[nextoffnum])\n> {\n> predecessor[nextoffnum] = ctx.offnum;\n> continue;\n> }\n>\n> If the current offset number is not for a valid line pointer, then it\n> makes no sense to talk about the successor. An invalid redirected line\n> pointer is one that points off the end of the line pointer array, or\n> to before the beginning of the line pointer array, or to a line\n> pointer that is unused. An invalid line pointer that is LP_USED is one\n> which points to a location outside the page, or to a location inside\n> the page. In none of these cases does it make any sense to talk about\n> the next tuple. If the line pointer isn't valid, it's pointing to some\n> invalid location where there cannot possibly be a tuple. In other\n> words, if lp_valid[ctx.offnum] is false, then nextoffnum is a garbage\n> value, and therefore referencing predecessor[nextoffnum] is useless\n> and dangerous.\n>\n> If the next offset number is not for a valid line pointer, we could in\n> theory still assign to the predecessor array, as you propose here. In\n> that case, the tuple or line pointer at ctx.offnum is pointing to the\n> line pointer at nextoffnum and that is all fine. 
But what is the\n> point? The comment claims that the point is that it will help us\n> identify and validate the root of the hot chain. But if the line\n> pointer at nextoffnum is not valid, it can't be the root of a hot\n> chain. When we're talking about the root of a HOT chain, we're\n> speaking about a tuple. If lp_valid[nextoffnum] is false, there is no\n> tuple. Instead of pointing to a tuple, that line pointer is pointing\n> to garbage.\n>\n>\nInitially while implementing logic to identify the root of the HOT chain\nI was getting crash and regression failure's that time I thought of having\nthis check along with a few other changes that were required,\nbut you are right, it's unnecessary to add data to the predecessor\narray(in this case) and is not required. I am removing this from the patch.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 9 Feb 2023 22:39:09 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Feb 9, 2023 at 12:09 PM Himanshu Upadhyaya\n<upadhyaya.himanshu@gmail.com> wrote:\n> Initially while implementing logic to identify the root of the HOT chain\n> I was getting crash and regression failure's that time I thought of having\n> this check along with a few other changes that were required,\n> but you are right, it's unnecessary to add data to the predecessor\n> array(in this case) and is not required. I am removing this from the patch.\n\nI finally found time to look at this today -- apologies for the long\ndelay -- and I don't think that it addresses my objections. When I\nproposed lp_valid, I had a very simple idea in mind: it tells you\nwhether or not the line pointer is, at some basic level, valid. Like,\nit contains numbers that could point to a tuple on the page, at least\nhypothetically. But that is something that can be determined strictly\nby inspecting the line pointer, and yet you have\ncheck_tuple_visibility() changing the value based on the visibility\nstatus of xmin. So it seems that we still don't have a patch where the\nvalue of a variable called lp_valid corresponds to whether or not the\nL.P. is valid.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:36:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 12:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> So it seems that we still don't have a patch where the\n> value of a variable called lp_valid corresponds to whether or not the\n> L.P. is valid.\n\nHere's a worked-over version of this patch. Changes:\n\n- I got rid of the code that sets lp_valid in funny places and instead\narranged to have check_tuple_visibility() pass up the information on\nthe XID status. That's important, because we can't casually apply\noperations like TransactionIdIsCommitted() to XIDs that, for all we\nknow, might not even be in the range covered by CLOG. In such cases,\nwe should not perform any HOT chain validation because we can't do it\nsensibly; the new code accomplishes this, and also reduces the number\nof CLOG lookups as compared with your version.\n\n- I moved most of the HOT chain checks from the loop over the\npredecessor[] array to the loop over the successor[] array. It didn't\nseem to have any value to put them in the third loop; it forces us to\nexpend extra code to distinguish between redirects and tuples,\ninformation that we already had in the second loop. The only check\nthat seems to make sense to do in that last loop is the one for a HOT\nchain that starts with a HOT tuple, which can't be done any earlier.\n\n- I realized that your patch had a guard against setting the\npredecessor[] when it was set already only for tuples, not for\nredirects. That means if a redirect pointed into the middle of a HOT\nchain we might not report corruption appropriately. I fixed this and\nreworded the associated messages a bit.\n\n- Assorted cosmetic and comment changes.\n\nI think this is easier to follow and more nearly correct, but what do\nyou (and others) think?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 7 Mar 2023 13:16:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "\n\n> On Mar 7, 2023, at 10:16 AM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Mon, Mar 6, 2023 at 12:36 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> So it seems that we still don't have a patch where the\n>> value of a variable called lp_valid corresponds to whether or not the\n>> L.P. is valid.\n> \n> Here's a worked-over version of this patch. Changes:\n> \n> - I got rid of the code that sets lp_valid in funny places and instead\n> arranged to have check_tuple_visibility() pass up the information on\n> the XID status. That's important, because we can't casually apply\n> operations like TransactionIdIsCommitted() to XIDs that, for all we\n> know, might not even be in the range covered by CLOG. In such cases,\n> we should not perform any HOT chain validation because we can't do it\n> sensibly; the new code accomplishes this, and also reduces the number\n> of CLOG lookups as compared with your version.\n> \n> - I moved most of the HOT chain checks from the loop over the\n> predecessor[] array to the loop over the successor[] array. It didn't\n> seem to have any value to put them in the third loop; it forces us to\n> expend extra code to distinguish between redirects and tuples,\n> information that we already had in the second loop. The only check\n> that seems to make sense to do in that last loop is the one for a HOT\n> chain that starts with a HOT tuple, which can't be done any earlier.\n> \n> - I realized that your patch had a guard against setting the\n> predecessor[] when it was set already only for tuples, not for\n> redirects. That means if a redirect pointed into the middle of a HOT\n> chain we might not report corruption appropriately. I fixed this and\n> reworded the associated messages a bit.\n> \n> - Assorted cosmetic and comment changes.\n> \n> I think this is easier to follow and more nearly correct, but what do\n> you (and others) think?\n\nThanks, Robert. 
Quickly skimming over this patch, it looks like something reviewable. Your changes to t/004_verify_heapam.pl appear to be consistent with how that test was intended to function.\n\nNote that I have not tried any of this yet.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n",
"msg_date": "Tue, 7 Mar 2023 10:29:47 -0800",
"msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\n> Note that I have not tried any of this yet.\n\nI did, both with Meson and Autotools. All in all the patch looks very\ngood, but I have a few little nitpicks.\n\n```\n+ /* HOT chains should not intersect. */\n+ if (predecessor[nextoffnum] != InvalidOffsetNumber)\n+ {\n+ report_corruption(&ctx,\n+ psprintf(\"redirect line pointer\npoints to offset %u, but offset %u also points there\",\n+ (unsigned) nextoffnum,\n(unsigned) predecessor[nextoffnum]));\n+ continue;\n+ }\n```\n\nThis type of corruption doesn't seem to be test-covered.\n\n```\n+ /*\n+ * If the next line pointer is a redirect, or if it's a tuple\n+ * but the XMAX of this tuple doesn't match the XMIN of the next\n+ * tuple, then the two aren't part of the same update chain and\n+ * there is nothing more to do.\n+ */\n+ if (ItemIdIsRedirected(next_lp))\n+ continue;\n```\n\nlcov shows that the `continue` path is never executed. This is\nprobably not a big deal however.\n\n```\n+$node->append_conf('postgresql.conf','max_prepared_transactions=100');\n```\n\n From what I can tell this line is not needed.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 8 Mar 2023 13:35:40 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 5:35 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> I did, both with Meson and Autotools. All in all the patch looks very\n> good, but I have a few little nitpicks.\n\nThank you for the nitpicks.\n\n> + /* HOT chains should not intersect. */\n> + if (predecessor[nextoffnum] != InvalidOffsetNumber)\n> + {\n> + report_corruption(&ctx,\n> + psprintf(\"redirect line pointer\n> points to offset %u, but offset %u also points there\",\n> + (unsigned) nextoffnum,\n> (unsigned) predecessor[nextoffnum]));\n> + continue;\n> + }\n> ```\n>\n> This type of corruption doesn't seem to be test-covered.\n\nHimanshu, would you be able to try to write a test case for this? I\nthink you need something like this: update a tuple with a lower TID to\nproduce a tuple with a higher TID, e.g. (0,10) is updated to produce\n(0,11). But then have a redirect line pointer that also points to the\nresult of the update, in this case (0,11).\n\n> ```\n> + /*\n> + * If the next line pointer is a redirect, or if it's a tuple\n> + * but the XMAX of this tuple doesn't match the XMIN of the next\n> + * tuple, then the two aren't part of the same update chain and\n> + * there is nothing more to do.\n> + */\n> + if (ItemIdIsRedirected(next_lp))\n> + continue;\n> ```\n>\n> lcov shows that the `continue` path is never executed. This is\n> probably not a big deal however.\n\nIt might be good to have a negative test case for this, though. Let's\nsay we, e.g. update (0,1) to produce (0,2), but then abort. The page\nis HOT-pruned. Then we insert a new tuple at (0,2), HOT-update it\nto produce (0,3), and commit. Then we HOT-prune again. 
Possibly we\ncould try to write a test case that verifies that this does NOT\nproduce any corruption indication.\n\n> ```\n> +$node->append_conf('postgresql.conf','max_prepared_transactions=100');\n> ```\n>\n> From what I can tell this line is not needed.\n\nThat surprises me, because the new test cases involve preparing a\ntransaction, and by default max_prepared_transactions=0. So it seems\nto me (without testing) that this ought to be required. Did you test\nthat it works without this setting?\n\nThe value of 100 seems a bit excessive, though. Most TAP tests seem to use 10.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Mar 2023 08:35:53 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\n> > ```\n> > +$node->append_conf('postgresql.conf','max_prepared_transactions=100');\n> > ```\n> >\n> > From what I can tell this line is not needed.\n>\n> That surprises me, because the new test cases involve preparing a\n> transaction, and by default max_prepared_transactions=0. So it seems\n> to me (without testing) that this ought to be required. Did you test\n> that it works without this setting?\n\nSorry, I was wrong the first time. The test fails without this line:\n\n```\n112/238 postgresql:pg_amcheck / pg_amcheck/004_verify_heapam ERROR\n4.94s exit status 29\n```\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:59:39 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 7:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> > + /* HOT chains should not intersect. */\n> > + if (predecessor[nextoffnum] != InvalidOffsetNumber)\n> > + {\n> > + report_corruption(&ctx,\n> > + psprintf(\"redirect line pointer\n> > points to offset %u, but offset %u also points there\",\n> > + (unsigned) nextoffnum,\n> > (unsigned) predecessor[nextoffnum]));\n> > + continue;\n> > + }\n> > ```\n> >\n> > This type of corruption doesn't seem to be test-covered.\n>\n> Himanshu, would you be able to try to write a test case for this? I\n> think you need something like this: update a tuple with a lower TID to\n> produce a tuple with a higher TID, e.g. (0,10) is updated to produce\n> (0,11). But then have a redirect line pointer that also points to the\n> result of the update, in this case (0,11).\n>\n> Sure Robert, I will work on this.\n\n> > ```\n> > + /*\n> > + * If the next line pointer is a redirect, or if it's a\n> tuple\n> > + * but the XMAX of this tuple doesn't match the XMIN of the\n> next\n> > + * tuple, then the two aren't part of the same update chain\n> and\n> > + * there is nothing more to do.\n> > + */\n> > + if (ItemIdIsRedirected(next_lp))\n> > + continue;\n> > ```\n> >\n> > lcov shows that the `continue` path is never executed. This is\n> > probably not a big deal however.\n>\n> It might be good to have a negative test case for this, though. Let's\n> say we, e.g. update (0,1) to produce (0,2), but then abort. The page\n> is HOT-pruned. Then we add insert a new tuple at (0,2), HOT-update it\n> to produce (0,3), and commit. Then we HOT-prune again. 
Possibly we\n> could try to write a test case that verifies that this does NOT\n> produce any corruption indication.\n>\n> will work on this too.\n\n> > ```\n> > +$node->append_conf('postgresql.conf','max_prepared_transactions=100');\n> > ```\n> >\n> > From what I can tell this line is not needed.\n>\n> That surprises me, because the new test cases involve preparing a\n> transaction, and by default max_prepared_transactions=0. So it seems\n> to me (without testing) that this ought to be required. Did you test\n> that it works without this setting?\n>\n> The value of 100 seems a bit excessive, though. Most TAP tests seem to use\n> 10.\n>\n> We need this for prepare transaction, will change it to 10.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 8 Mar 2023 19:30:02 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 8, 2023 at 7:30 PM Himanshu Upadhyaya <\nupadhyaya.himanshu@gmail.com> wrote:\nPlease find the v11 patch with all these changes.\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 9 Mar 2023 21:24:38 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\n> On Wed, Mar 8, 2023 at 7:30 PM Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com> wrote:\n> Please find the v11 patch with all these changes.\n\nThe patch needed a rebase due to a4f23f9b. PFA v12.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Fri, 17 Mar 2023 15:31:33 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Fri, Mar 17, 2023 at 8:31 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> The patch needed a rebase due to a4f23f9b. PFA v12.\n\nI have committed this after tidying up a bunch of things in the test\ncase file that I found too difficult to understand -- or in some cases\njust incorrect, like:\n\n elsif ($offnum == 35)\n {\n- # set xmax to invalid transaction id.\n $tup->{t_xmin} = $in_progress_xid;\n $tup->{t_infomask} &= ~HEAP_XMIN_COMMITTED;\n push @expected,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 22 Mar 2023 09:19:18 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I have committed this after tidying up a bunch of things in the test\n> case file that I found too difficult to understand -- or in some cases\n> just incorrect, like:\n\nMy animal mamba doesn't like this one bit.\n\nI suspect the reason is that it's big-endian (PPC) and the endianness\nhacking in the test is simply wrong:\n\n syswrite($file,\n pack(\"L\", $ENDIANNESS eq 'little' ? 0x00010019 : 0x19000100))\n or BAIL_OUT(\"syswrite failed: $!\");\n\npack's L code should already be performing an endianness swap, so why\nare we doing another one in the argument?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Mar 2023 15:27:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 09:19:18 -0400, Robert Haas wrote:\n> On Fri, Mar 17, 2023 at 8:31 AM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n> > The patch needed a rebase due to a4f23f9b. PFA v12.\n>\n> I have committed this after tidying up a bunch of things in the test\n> case file that I found too difficult to understand -- or in some cases\n> just incorrect, like:\n\nAs noticed by Andrew\nhttps://postgr.es/m/bfa5bd2b-c0e6-9d65-62ce-97f4766b1c42%40dunslane.net and\nthen reproduced on HEAD by me, there's something not right here.\n\nAt the very least there's missing verification that tids actually exist in the\n\"Update chain validation\" loop, leading to:\nTRAP: failed Assert(\"ItemIdHasStorage(itemId)\"), File: \"../../../../home/andres/src/postgresql/src/include/storage/bufpage.h\", Line: 355, PID: 645093\n\nWhich makes sense - the earlier loop adds t_ctid to the successor array, which\nwe then query without checking if there still is such a tid on the page.\n\nI suspect we don't just need a !ItemIdIsUsed(), but also a check against the\nmax offset on the page.\n\nWRT these failures:\nnon-heap-only update produced a heap-only tuple at offset 20\n\nI think that's largely a consequence of HeapTupleHeaderIsHotUpdated()'s\ndefinition:\n/*\n * Note that we stop considering a tuple HOT-updated as soon as it is known\n * aborted or the would-be updating transaction is known aborted. 
For best\n * efficiency, check tuple visibility before using this macro, so that the\n * INVALID bits will be as up to date as possible.\n */\n#define HeapTupleHeaderIsHotUpdated(tup) \\\n( \\\n\t((tup)->t_infomask2 & HEAP_HOT_UPDATED) != 0 && \\\n\t((tup)->t_infomask & HEAP_XMAX_INVALID) == 0 && \\\n\t!HeapTupleHeaderXminInvalid(tup) \\\n)\n\n\nCurrently the new verify_heapam() follows ctid chains when XMAX_INVALID is set\nand expects to find an item it can dereference - but I don't think that's\nsomething we can rely on: Afaics HOT pruning can break chains, but doesn't\nreset xmax.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Mar 2023 13:45:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 1:45 PM Andres Freund <andres@anarazel.de> wrote:\n> At the very least there's missing verification that tids actually exist in the\n> \"Update chain validation\" loop, leading to:\n> TRAP: failed Assert(\"ItemIdHasStorage(itemId)\"), File: \"../../../../home/andres/src/postgresql/src/include/storage/bufpage.h\", Line: 355, PID: 645093\n>\n> Which makes sense - the earlier loop adds t_ctid to the successor array, which\n> we then query without checking if there still is such a tid on the page.\n>\n> I suspect we don't just need a !ItemIdIsUsed(), but also a check against the\n> max offset on the page.\n\nWe definitely need to do it that way, since a heap-only tuple's t_ctid\nis allowed to point to almost anything. I guess it can't point to some\ncompletely different heap block, but that's about the only\nrestriction. In particular, it can point to an item that's past the\nend of the page following line pointer array truncation (truncation\ncan happen during pruning or when the second heap pass takes place in\nVACUUM).\n\nOTOH the rules for LP_REDIRECT items *are* very strict. They need to\nbe, since it's the root item of the HOT chain, referenced by TIDs in\nindexes, and have no heap tuple header metadata to use in cross-checks\nthat take place during HOT chain traversal (traversal by code in\nplaces such as heap_hot_search_buffer).\n\n> WRT these failures:\n> non-heap-only update produced a heap-only tuple at offset 20\n>\n> I think that's largely a consequence of HeapTupleHeaderIsHotUpdated()'s\n> definition:\n\nThat has to be a problem for verify_heapam.\n\n> Currently the new verify_heapam() follows ctid chains when XMAX_INVALID is set\n> and expects to find an item it can dereference - but I don't think that's\n> something we can rely on: Afaics HOT pruning can break chains, but doesn't\n> reset xmax.\n\nI think that we need two passes to be completely thorough.
An initial\npass, that works pretty much as-is, plus a second pass that locates\nany orphaned heap-only tuples -- heap-only tuples that were not deemed\npart of a valid HOT chain during the first pass. These remaining\norphaned heap-only tuples should be verified as having come from\nnow-aborted transactions (they should definitely be fully DEAD) --\notherwise we have corruption.\n\nThat's what my abandoned patch to make heap pruning more robust did,\nyou'll recall.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Mar 2023 14:14:38 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 2:14 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Currently the new verify_heapam() follows ctid chains when XMAX_INVALID is set\n> > and expects to find an item it can dereference - but I don't think that's\n> > something we can rely on: Afaics HOT pruning can break chains, but doesn't\n> > reset xmax.\n>\n> I think that we need two passes to be completely thorough. An initial\n> pass, that works pretty much as-is, plus a second pass that locates\n> any orphaned heap-only tuples -- heap-only tuples that were not deemed\n> part of a valid HOT chain during the first pass. These remaining\n> orphaned heap-only tuples should be verified as having come from\n> now-aborted transactions (they should definitely be fully DEAD) --\n> otherwise we have corruption.\n\nI see that there is a second pass over the heap page in\nverify_heapam(), in fact. Kind of. I'm referring to this loop:\n\n /*\n * An update chain can start either with a non-heap-only tuple or with\n * a redirect line pointer, but not with a heap-only tuple.\n *\n * (This check is in a separate loop because we need the predecessor\n * array to be fully populated before we can perform it.)\n */\n for (ctx.offnum = FirstOffsetNumber;\n ctx.offnum <= maxoff;\n ctx.offnum = OffsetNumberNext(ctx.offnum))\n {\n if (xmin_commit_status_ok[ctx.offnum] &&\n (xmin_commit_status[ctx.offnum] == XID_COMMITTED ||\n xmin_commit_status[ctx.offnum] == XID_IN_PROGRESS) &&\n predecessor[ctx.offnum] == InvalidOffsetNumber)\n {\n ItemId curr_lp;\n\n curr_lp = PageGetItemId(ctx.page, ctx.offnum);\n if (!ItemIdIsRedirected(curr_lp))\n {\n HeapTupleHeader curr_htup;\n\n curr_htup = (HeapTupleHeader)\n PageGetItem(ctx.page, curr_lp);\n if (HeapTupleHeaderIsHeapOnly(curr_htup))\n report_corruption(&ctx,\n psprintf(\"tuple is root of\nchain but is marked as heap-only tuple\"));\n }\n }\n }\n\n\nHowever, this \"second pass over page\" loop has roughly the same\nproblem as the nearby
HeapTupleHeaderIsHotUpdated() coding pattern: it\ndoesn't account for the fact that a tuple whose xmin was\nXID_IN_PROGRESS a little earlier on may not be in that state once we\nreach the second pass loop. Concurrent transaction abort needs to be\naccounted for. The loop needs to recheck xmin status, at least in the\ninitially-XID_IN_PROGRESS-xmin case.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 22 Mar 2023 14:41:50 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 13:45:52 -0700, Andres Freund wrote:\n> On 2023-03-22 09:19:18 -0400, Robert Haas wrote:\n> > On Fri, Mar 17, 2023 at 8:31 AM Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> > > The patch needed a rebase due to a4f23f9b. PFA v12.\n> >\n> > I have committed this after tidying up a bunch of things in the test\n> > case file that I found too difficult to understand -- or in some cases\n> > just incorrect, like:\n>\n> As noticed by Andrew\n> https://postgr.es/m/bfa5bd2b-c0e6-9d65-62ce-97f4766b1c42%40dunslane.net and\n> then reproduced on HEAD by me, there's something not right here.\n>\n> At the very least there's missing verification that tids actually exist in the\n> \"Update chain validation\" loop, leading to:\n> TRAP: failed Assert(\"ItemIdHasStorage(itemId)\"), File: \"../../../../home/andres/src/postgresql/src/include/storage/bufpage.h\", Line: 355, PID: 645093\n>\n> Which makes sense - the earlier loop adds t_ctid to the successor array, which\n> we then query without checking if there still is such a tid on the page.\n\nIt's not quite so simple - I see now that the lp_valid check tries to prevent\nthat. However, it's not sufficient, because there is no guarantee that\nlp_valid[nextoffnum] is initialized. Consider what happens if t_ctid of a heap\ntuple points to beyond the end of the item array - lp_valid[nextoffnum] won't\nbe initialized.\n\nWhy are redirections now checked in two places? There already was an\nItemIdIsUsed() check in the \"/* Perform tuple checks */\" loop, but now there's\nthe ItemIdIsRedirected() check in the \"Update chain validation.\" loop as well\n- and the output of that is confusing, because it'll just mention the target\nof the redirect.\n\n\nI also think it's not quite right that some of the checks inside if\n(ItemIdIsRedirected()) continue in case of corruption, others don't. While\nthere's a later continue, that means the corrupt tuples get added to the\npredecessor array.
Similarly, in the non-redirect portion, the successor\narray gets filled with corrupt tuples, which doesn't seem quite right to me.\n\n\nA patch addressing some, but not all, of those is attached. With that I don't\nsee any crashes or false-positives anymore.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 22 Mar 2023 14:56:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 14:56:22 -0700, Andres Freund wrote:\n> A patch addressing some, but not all, of those is attached. With that I don't\n> see any crashes or false-positives anymore.\n\nThat patch missed that, as committed, the first if (ItemIdIsRedirected())\ncheck sets lp_valid[n] = true even if the target of the redirect is unused.\n\nWith that fixed, 004_verify_heapam doesn't cause a crash anymore - it doesn't\npass though, because there's a bunch of unadjusted error messages.\n\nAndres",
"msg_date": "Wed, 22 Mar 2023 15:07:57 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-22 13:45:52 -0700, Andres Freund wrote:\n> On 2023-03-22 09:19:18 -0400, Robert Haas wrote:\n> > On Fri, Mar 17, 2023 at 8:31 AM Aleksander Alekseev\n> > <aleksander@timescale.com> wrote:\n> > > The patch needed a rebase due to a4f23f9b. PFA v12.\n> >\n> > I have committed this after tidying up a bunch of things in the test\n> > case file that I found too difficult to understand -- or in some cases\n> > just incorrect, like:\n> \n> As noticed by Andrew\n> https://postgr.es/m/bfa5bd2b-c0e6-9d65-62ce-97f4766b1c42%40dunslane.net and\n> then reproduced on HEAD by me, there's something not right here.\n\nskink / valgrind reported in a while back and found another issue:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-03-22%2021%3A53%3A41\n\n==2490364== VALGRINDERROR-BEGIN\n==2490364== Conditional jump or move depends on uninitialised value(s)\n==2490364== at 0x11D459F2: check_tuple_visibility (verify_heapam.c:1379)\n==2490364== by 0x11D46262: check_tuple (verify_heapam.c:1812)\n==2490364== by 0x11D46FDF: verify_heapam (verify_heapam.c:535)\n==2490364== by 0x3D5B2C: ExecMakeTableFunctionResult (execSRF.c:235)\n==2490364== by 0x3E8225: FunctionNext (nodeFunctionscan.c:95)\n==2490364== by 0x3D6685: ExecScanFetch (execScan.c:133)\n==2490364== by 0x3D6709: ExecScan (execScan.c:182)\n==2490364== by 0x3E813A: ExecFunctionScan (nodeFunctionscan.c:270)\n==2490364== by 0x3D31C4: ExecProcNodeFirst (execProcnode.c:464)\n==2490364== by 0x3FF7E7: ExecProcNode (executor.h:262)\n==2490364== by 0x3FFB15: ExecNestLoop (nodeNestloop.c:160)\n==2490364== by 0x3D31C4: ExecProcNodeFirst (execProcnode.c:464)\n==2490364== Uninitialised value was created by a stack allocation\n==2490364== at 0x11D45325: check_tuple_visibility (verify_heapam.c:994)\n==2490364== \n==2490364== VALGRINDERROR-END\n==2490364== VALGRINDERROR-BEGIN\n==2490364== Conditional jump or move depends on uninitialised value(s)\n==2490364== at 0x11D45AC6:
check_tuple_visibility (verify_heapam.c:1379)\n==2490364== by 0x11D46262: check_tuple (verify_heapam.c:1812)\n==2490364== by 0x11D46FDF: verify_heapam (verify_heapam.c:535)\n==2490364== by 0x3D5B2C: ExecMakeTableFunctionResult (execSRF.c:235)\n==2490364== by 0x3E8225: FunctionNext (nodeFunctionscan.c:95)\n==2490364== by 0x3D6685: ExecScanFetch (execScan.c:133)\n==2490364== by 0x3D6709: ExecScan (execScan.c:182)\n==2490364== by 0x3E813A: ExecFunctionScan (nodeFunctionscan.c:270)\n==2490364== by 0x3D31C4: ExecProcNodeFirst (execProcnode.c:464)\n==2490364== by 0x3FF7E7: ExecProcNode (executor.h:262)\n==2490364== by 0x3FFB15: ExecNestLoop (nodeNestloop.c:160)\n==2490364== by 0x3D31C4: ExecProcNodeFirst (execProcnode.c:464)\n==2490364== Uninitialised value was created by a stack allocation\n==2490364== at 0x11D45325: check_tuple_visibility (verify_heapam.c:994)\n==2490364==\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 22 Mar 2023 17:38:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 2:15 AM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> Currently the new verify_heapam() follows ctid chains when XMAX_INVALID is\n> set\n> and expects to find an item it can dereference - but I don't think that's\n> something we can rely on: Afaics HOT pruning can break chains, but doesn't\n> reset xmax.\n>\n> We have below code which I think takes care of xmin and xmax matching and\nif they match then only we add them to the predecessor array.\n /*\n * If the next line pointer is a redirect, or if\nit's a tuple\n * but the XMAX of this tuple doesn't match the\nXMIN of the next\n * tuple, then the two aren't part of the same\nupdate chain and\n * there is nothing more to do.\n */\n if (ItemIdIsRedirected(next_lp))\n continue;\n curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\ncurr_lp);\n curr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\n next_htup = (HeapTupleHeader) PageGetItem(ctx.page,\nnext_lp);\n next_xmin = HeapTupleHeaderGetXmin(next_htup);\n if (!TransactionIdIsValid(curr_xmax) ||\n !TransactionIdEquals(curr_xmax, next_xmin))\n continue;\n\n-- \nRegards,\nHimanshu Upadhyaya\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Mar 2023 11:20:04 +0530",
"msg_from": "Himanshu Upadhyaya <upadhyaya.himanshu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 3:27 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> My animal mamba doesn't like this one bit.\n>\n> I suspect the reason is that it's big-endian (PPC) and the endianness\n> hacking in the test is simply wrong:\n>\n> syswrite($file,\n> pack(\"L\", $ENDIANNESS eq 'little' ? 0x00010019 : 0x19000100))\n> or BAIL_OUT(\"syswrite failed: $!\");\n>\n> pack's L code should already be performing an endianness swap, so why\n> are we doing another one in the argument?\n\n(Apologies for having missed responding to this yesterday afternoon.)\n\nHmph. I didn't think very hard about that code and just assumed\nHimanshu had tested it. I don't have convenient access to a Big-endian\ntest machine myself. Are you able to check whether using 0x00010019\nunconditionally works?\n\nI think part of the reason that I thought this looked OK was because\nthere are other places in this test case that do stuff differently\nbased on endian-ness, but now that you mention it, I realize that\nstuff has to do with the varlena representation, which is\nendian-dependent in a way that the ItemIdData representation is not.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 09:42:16 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 4:45 PM Andres Freund <andres@anarazel.de> wrote:\n> At the very least there's missing verification that tids actually exist in the\n> \"Update chain validation\" loop, leading to:\n> TRAP: failed Assert(\"ItemIdHasStorage(itemId)\"), File: \"../../../../home/andres/src/postgresql/src/include/storage/bufpage.h\", Line: 355, PID: 645093\n>\n> Which makes sense - the earlier loop adds t_ctid to the successor array, which\n> we then query without checking if there still is such a tid on the page.\n\nAh, crap. If the /* Perform tuple checks loop */ finds a redirect line\npointer, it verifies that the target is between FirstOffsetNumber and\nmaxoff before setting lp_valid[ctx.offnum] = true. But in the case\nwhere it's a CTID link, the equivalent checks are missing. We could\nfix that like this:\n\n--- a/contrib/amcheck/verify_heapam.c\n+++ b/contrib/amcheck/verify_heapam.c\n@@ -543,7 +543,8 @@ verify_heapam(PG_FUNCTION_ARGS)\n */\n nextblkno = ItemPointerGetBlockNumber(&(ctx.tuphdr)->t_ctid);\n nextoffnum = ItemPointerGetOffsetNumber(&(ctx.tuphdr)->t_ctid);\n- if (nextblkno == ctx.blkno && nextoffnum != ctx.offnum)\n+ if (nextblkno == ctx.blkno && nextoffnum != ctx.offnum &&\n+ nextoffnum >= FirstOffsetNumber && nextoffnum <= maxoff)\n successor[ctx.offnum] = nextoffnum;\n }\n\n> I suspect we don't just need a !ItemIdIsUsed(), but also a check against the\n> max offset on the page.\n\nI don't see why we need an ItemIdIsUsed check any place where we don't\nhave one already.
lp_valid[x] can't be true if the item x isn't used,\nunless we're referencing off the initialized portion of the array,\nwhich we shouldn't do.\n\n> WRT these failures:\n> non-heap-only update produced a heap-only tuple at offset 20\n>\n> I think that's largely a consequence of HeapTupleHeaderIsHotUpdated()'s\n> definition:\n> /*\n> * Note that we stop considering a tuple HOT-updated as soon as it is known\n> * aborted or the would-be updating transaction is known aborted. For best\n> * efficiency, check tuple visibility before using this macro, so that the\n> * INVALID bits will be as up to date as possible.\n> */\n> #define HeapTupleHeaderIsHotUpdated(tup) \\\n> ( \\\n> ((tup)->t_infomask2 & HEAP_HOT_UPDATED) != 0 && \\\n> ((tup)->t_infomask & HEAP_XMAX_INVALID) == 0 && \\\n> !HeapTupleHeaderXminInvalid(tup) \\\n> )\n\nYeah, it's not good that we're looking at the hint bit or the xmin\nthere -- it should just be checking the infomask2 bit and nothing\nelse, I think.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:04:11 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 5:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Why are redirections now checked in two places? There already was an\n> ItemIdIsUsed() check in the \"/* Perform tuple checks */\" loop, but now there's\n> the ItemIdIsRedirected() check in the \"Update chain validation.\" loop as well\n> - and the output of that is confusing, because it'll just mention the target\n> of the redirect.\n\nctx.offnum is reported as part of the context message, and doesn't\nneed to be duplicated in the message itself. Any other offset numbers\ndo need to be mentioned in the message itself. I have attempted to\nmake sure that the code consistently follows this rule but if you see\na case where I have failed to do so, let me know.\n\n> I also think it's not quite right that some of the checks inside if\n> (ItemIdIsRedirected()) continue in case of corruption, others don't. While\n> there's a later continue, that means the corrupt tuples get added to the\n> predecessor array. Similarly, in the non-redirect portion, the successor\n> array gets filled with corrupt tuples, which doesn't seem quite right to me.\n\nI'm not entirely sure I agree with this. I mean, we should skip\nfurther checks if the data is so corrupt that further checks are not\nsensible e.g. if the line pointer is just gibberish we can't validate\nanything about the tuple, because we can't even find the tuple.\nHowever, we don't want to skip further checks as soon as we see any\nkind of a problem whatsoever. For instance, even if the tuple data is\nutter gibberish, that should not and does not keep us from checking\nwhether the update chain looks sane. If the tuple header is garbage\n(e.g.
xmin and xmax are outside the bounds of clog) then at least some\nof the update-chain checks are not possible, because we can't know the\ncommit status of the tuples, but garbage in the tuple data isn't\nreally a problem for update chain validation per se.\n\nSo I think the places that lack a continue are making a judgement that\ncontinuing with further checks is reasonable in those scenarios, and\nit's not obvious to me that those judgements are wrong. For instance,\nsuppose we reach the \"Can only redirect to a HOT tuple.\" case. If we\ncontinue there, we won't check whether the HEAP_UPDATED is set on the\nsuccessor tuple, and we won't perform the intersecting-HOT-chain\ncheck, and we won't set the predecessor[] array, and I don't really\nsee why we shouldn't do those things. In my view, the fact that the\nredirected line pointer points to a tuple that is not marked as HOT\ndoes mean we have corruption, but it doesn't mean that we should give\nup on thinking of those two offset numbers as part of an update chain.\n\n> A patch addressing some, but not all, of those is attached.
With that I don't\n> see any crashes or false-positives anymore.\n\n+ else if (ItemIdIsDead(rditem))\n+ report_corruption(&ctx,\n+\n psprintf(\"line pointer redirection to dead item at offset %u\",\n+\n (unsigned) rdoffnum));\n+ else if (ItemIdIsRedirected(rditem))\n+ report_corruption(&ctx,\n+\n psprintf(\"line pointer redirection to another redirect at offset\n%u\",\n+\n (unsigned) rdoffnum));\n\nHmm, the first of these definitely seems like a good additional check.\nThe second one moves an existing check from the subsequent loop to\nthis loop, which if we're going to add that additional check makes\nsense to do for consistency.\n\n+ /* the current line pointer may not have a successor */\n+ if (nextoffnum == InvalidOffsetNumber)\n+ continue;\n+\n /*\n- * The current line pointer may not have a successor, either\n- * because it's not valid or because it didn't point to anything.\n- * In either case, we have to give up.\n- *\n- * If the current line pointer does point to something, it's\n- * possible that the target line pointer isn't valid. We have to\n- * give up in that case, too.\n+ * The successor is located beyond the end of the line item array,\n+ * which can happen when the array is truncated.\n */\n- if (nextoffnum == InvalidOffsetNumber || !lp_valid[nextoffnum])\n+ if (nextoffnum > maxoff)\n+ continue;\n+\n+ /* the successor is not valid, have to give up */\n+ if (!lp_valid[nextoffnum])\n continue;\n\nI don't agree with this part -- I think we should instead do what I\nproposed before and prevent the successor[] array from ever getting\nentries that are out of bounds.\n\n- * Redirects are created by updates, so successor should be\n- * the result of an update.\n+ * Redirects are created by HOT updates, so successor should\n+ * be the result of an HOT update.\n+ *\n+ * XXX: HeapTupleHeaderIsHeapOnly() should always imply\n+ * HEAP_UPDATED. This should be checked even when the tuple\n+ * isn't a target of a redirect.\n\nHmm, OK.
So the question is where to put this check. Maybe inside\ncheck_tuple_header(), making it independent of the update chain\nvalidation stuff?\n\n+ *\n+ * NB: Can't use HeapTupleHeaderIsHotUpdated() as it checks if\n+ * hint bits indicate xmin/xmax aborted.\n */\n- if (!HeapTupleHeaderIsHotUpdated(curr_htup) &&\n+ if (!(curr_htup->t_infomask2 & HEAP_HOT_UPDATED) &&\n HeapTupleHeaderIsHeapOnly(next_htup))\n {\n report_corruption(&ctx,\n psprintf(\"non-heap-only update\nproduced a heap-only tuple at offset %u\",\n (unsigned) nextoffnum));\n }\n- if (HeapTupleHeaderIsHotUpdated(curr_htup) &&\n+ if ((curr_htup->t_infomask2 & HEAP_HOT_UPDATED) &&\n\nMakes sense.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 11:41:52 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 5:42 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> However, this \"second pass over page\" loop has roughly the same\n> problem as the nearby HeapTupleHeaderIsHotUpdated() coding pattern: it\n> doesn't account for the fact that a tuple whose xmin was\n> XID_IN_PROGRESS a little earlier on may not be in that state once we\n> reach the second pass loop. Concurrent transaction abort needs to be\n> accounted for. The loop needs to recheck xmin status, at least in the\n> initially-XID_IN_PROGRESS-xmin case.\n\nI don't understand why it would need to do that. If the transaction\nhas subsequently committed, it doesn't change anything: we'll get the\nsame report we would have gotten anyway. If the transaction has\nsubsequently aborted, we'll get a report about corruption that would\nnot have been reported if the abort had occurred slightly earlier.\nHowever, the abort doesn't remove the corruption, just our ability to\ndetect it.\n\nConsider a page where TID 1 is a redirect to TID 4; TID 2 is dead; and\nTIDs 3 and 4 are heap-only tuples. Any other line pointers on the page\nare unused. The only way this can validly happen is if there was a\ntuple at TID 2 and it got updated to produce the tuple at TID 3 and\nthen that transaction aborted. Then it got updated again and produced\nthe tuple at TID 4 and that transaction was committed. But this\nimplies that the xmin of TID 3 must be aborted. If we observe that\nit's in-progress, we know that the transaction that created TID 3 was\nstill running after TID 4 had already shown up, which should be\nimpossible, and so it's fair to report corruption. If the xmin of TID\n3 then goes on to abort, a future attempt to verify this page won't be\nable to notice the corruption any more, because it won't be able to\nprove that TID 3's xmin aborted after TID 4's xmin committed.
But a\ncurrent attempt to verify this page that has seen TID 3's xmin as\nin-progress at any point after locking the page knows for sure that\nTID 4 showed up before TID 3's inserter aborted, and that's\ninconsistent with any legal order of operations.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 12:49:10 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 9:42 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Hmph. I didn't think very hard about that code and just assumed\n> Himanshu had tested it. I don't have convenient access to a Big-endian\n> test machine myself. Are you able to check whether using 0x00010019\n> unconditionally works?\n\nOh, I see now you already pushed a fix. Thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 13:06:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-23 11:41:52 -0400, Robert Haas wrote:\n> On Wed, Mar 22, 2023 at 5:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > I also think it's not quite right that some of the checks inside if\n> > (ItemIdIsRedirected()) continue in case of corruption, others don't. While\n> > there's a later continue, that means the corrupt tuples get added to the\n> > predecessor array. Similarly, in the non-redirect portion, the successor\n> > array gets filled with corrupt tuples, which doesn't seem quite right to me.\n> \n> I'm not entirely sure I agree with this. I mean, we should skip\n> further checks if the data is so corrupt that further checks are not\n> sensible e.g. if the line pointer is just gibberish we can't validate\n> anything about the tuple, because we can't even find the tuple.\n> However, we don't want to skip further checks as soon as we see any\n> kind of a problem whatsoever. For instance, even if the tuple data is\n> utter gibberish, that should not and does not keep us from checking\n> whether the update chain looks sane. If the tuple header is garbage\n> (e.g. xmin and xmax are outside the bounds of clog) then at least some\n> of the update-chain checks are not possible, because we can't know the\n> commit status of the tuples, but garbage in the tuple data isn't\n> really a problem for update chain validation per se.\n\nThe cases I was complaining about were metadata that's important. We\nshouldn't enter a tuple into lp_valid[] or successor[] if it failed validity\nchecks - the subsequent error reports that that generates won't be helpful -\nor we'll crash.\n\nE.g.
continuing after:\n\n\t\t\t\trditem = PageGetItemId(ctx.page, rdoffnum);\n\t\t\t\tif (!ItemIdIsUsed(rditem))\n\t\t\t\t\treport_corruption(&ctx,\n\t\t\t\t\t\t\t\t\t psprintf(\"line pointer redirection to unused item at offset %u\",\n\t\t\t\t\t\t\t\t\t\t\t (unsigned) rdoffnum));\n\nmeans we'll look into the tuple in the \"update chain validation\" loop for\nunused items. Where it afaict will lead to a crash or bogus results, because:\n\t\t\t\t/* Can only redirect to a HOT tuple. */\n\t\t\t\tnext_htup = (HeapTupleHeader) PageGetItem(ctx.page, next_lp);\n\t\t\t\tif (!HeapTupleHeaderIsHeapOnly(next_htup))\n\t\t\t\t{\n\t\t\t\t\treport_corruption(&ctx,\n\t\t\t\t\t\t\t\t\t psprintf(\"redirected line pointer points to a non-heap-only tuple at offset %u\",\n\t\t\t\t\t\t\t\t\t\t\t (unsigned) nextoffnum));\n\t\t\t\t}\n\nwill just dereference the unused item.\n\n\n\n> - * Redirects are created by updates, so successor should be\n> - * the result of an update.\n> + * Redirects are created by HOT updates, so successor should\n> + * be the result of an HOT update.\n> + *\n> + * XXX: HeapTupleHeaderIsHeapOnly() should always imply\n> + * HEAP_UPDATED. This should be checked even when the tuple\n> + * isn't a target of a redirect.\n> \n> Hmm, OK. So the question is where to put this check. Maybe inside\n> check_tuple_header(), making it independent of the update chain\n> validation stuff?\n\nYes, check_tuple_header sounds sensible to me.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:26:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 1:26 PM Andres Freund <andres@anarazel.de> wrote:\n> E.g. continuing after:\n>\n> rditem = PageGetItemId(ctx.page, rdoffnum);\n> if (!ItemIdIsUsed(rditem))\n> report_corruption(&ctx,\n> psprintf(\"line pointer redirection to unused item at offset %u\",\n> (unsigned) rdoffnum));\n>\n> means we'll look into the tuple in the \"update chain validation\" loop for\n> unused items.\n\nAh, yes, that's a goof for sure.\n\n> > - * Redirects are created by updates, so successor should be\n> > - * the result of an update.\n> > + * Redirects are created by HOT updates, so successor should\n> > + * be the result of an HOT update.\n> > + *\n> > + * XXX: HeapTupleHeaderIsHeapOnly() should always imply\n> > + * HEAP_UPDATED. This should be checked even when the tuple\n> > + * isn't a target of a redirect.\n> >\n> > Hmm, OK. So the question is where to put this check. Maybe inside\n> > check_tuple_header(), making it independent of the update chain\n> > validation stuff?\n>\n> Yes, check_tuple_header sounds sensible to me.\n\nOK, let me spend some more time on this and I'll post a patch (or\npatches) in a bit.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 13:34:31 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-23 11:20:04 +0530, Himanshu Upadhyaya wrote:\n> On Thu, Mar 23, 2023 at 2:15 AM Andres Freund <andres@anarazel.de> wrote:\n> \n> >\n> > Currently the new verify_heapam() follows ctid chains when XMAX_INVALID is\n> > set\n> > and expects to find an item it can dereference - but I don't think that's\n> > something we can rely on: Afaics HOT pruning can break chains, but doesn't\n> > reset xmax.\n> >\n> > We have below code which I think takes care of xmin and xmax matching and\n> if they match then only we add them to the predecessor array.\n> /*\n> * If the next line pointer is a redirect, or if\n> it's a tuple\n> * but the XMAX of this tuple doesn't match the\n> XMIN of the next\n> * tuple, then the two aren't part of the same\n> update chain and\n> * there is nothing more to do.\n> */\n> if (ItemIdIsRedirected(next_lp))\n> continue;\n> curr_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> curr_lp);\n> curr_xmax = HeapTupleHeaderGetUpdateXid(curr_htup);\n> next_htup = (HeapTupleHeader) PageGetItem(ctx.page,\n> next_lp);\n> next_xmin = HeapTupleHeaderGetXmin(next_htup);\n> if (!TransactionIdIsValid(curr_xmax) ||\n> !TransactionIdEquals(curr_xmax, next_xmin))\n> continue;\n\nThe problem is that that doesn't help if the tuple points to past maxoff,\nbecause we can't even fetch the tuple and thus won't even reach these\nchecks. But Robert now put in defenses against that.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:36:31 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 1:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> OK, let me spend some more time on this and I'll post a patch (or\n> patches) in a bit.\n\nAll right, here are some more fixups.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Mar 2023 15:06:01 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Wed, Mar 22, 2023 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> skink / valgrind reported in a while back and found another issue:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-03-22%2021%3A53%3A41\n>\n> ==2490364== VALGRINDERROR-BEGIN\n> ==2490364== Conditional jump or move depends on uninitialised value(s)\n> ==2490364== at 0x11D459F2: check_tuple_visibility (verify_heapam.c:1379)\n...\n> ==2490364== Uninitialised value was created by a stack allocation\n> ==2490364== at 0x11D45325: check_tuple_visibility (verify_heapam.c:994)\n\nOK, so this is an interesting one. It's complaining about switch\n(xmax_status), because the get_xid_status(xmax, ctx, &xmax_status)\nused in the previous switch might not actually initialize xmax_status,\nand apparently didn't in this case. get_xid_status() does not set\nxmax_status except when it returns XID_BOUNDS_OK, and the previous\nswitch falls through both in that case and also when get_xid_status()\nreturns XID_INVALID. That seems like it must be the issue here. As far\nas I can see, this isn't related to any of the recent changes but has\nbeen like this since this code was introduced, so I'm a little\nconfused about why it's only causing a problem now.\n\nNonetheless, here's a patch. I notice that there's a similar problem\nin another place, too. get_xid_status() is called a total of five\ntimes and it looks like only three of them got it right. I suppose\nthat if this is correct we should back-patch it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 23 Mar 2023 15:37:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "Hi,\n\nOn 2023-03-23 15:37:15 -0400, Robert Haas wrote:\n> On Wed, Mar 22, 2023 at 8:38 PM Andres Freund <andres@anarazel.de> wrote:\n> > skink / valgrind reported in a while back and found another issue:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2023-03-22%2021%3A53%3A41\n> >\n> > ==2490364== VALGRINDERROR-BEGIN\n> > ==2490364== Conditional jump or move depends on uninitialised value(s)\n> > ==2490364== at 0x11D459F2: check_tuple_visibility (verify_heapam.c:1379)\n> ...\n> > ==2490364== Uninitialised value was created by a stack allocation\n> > ==2490364== at 0x11D45325: check_tuple_visibility (verify_heapam.c:994)\n> \n> OK, so this is an interesting one. It's complaining about switch\n> (xmax_status), because the get_xid_status(xmax, ctx, &xmax_status)\n> used in the previous switch might not actually initialize xmax_status,\n> and apparently didn't in this case. get_xid_status() does not set\n> xmax_status except when it returns XID_BOUNDS_OK, and the previous\n> switch falls through both in that case and also when get_xid_status()\n> returns XID_INVALID. That seems like it must be the issue here. As far\n> as I can see, this isn't related to any of the recent changes but has\n> been like this since this code was introduced, so I'm a little\n> confused about why it's only causing a problem now.\n\nCould it be that the tests didn't exercise the path before?\n\n\n> Nonetheless, here's a patch. I notice that there's a similar problem\n> in another place, too. get_xid_status() is called a total of five\n> times and it looks like only three of them got it right. I suppose\n> that if this is correct we should back-patch it.\n\nYea, I think you're right.\n\n\n> +\t\t\treport_corruption(ctx,\n> +\t\t\t\t\t\t\t pstrdup(\"xmin is invalid\"));\n\nNot a correctnes issue: Nearly all callers to report_corruption() do a\npsprintf(), the remaining a pstrdup(), as here. Seems like it'd be cleaner to\njust make report_corruption() accept a format string?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 23 Mar 2023 13:36:56 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 4:36 PM Andres Freund <andres@anarazel.de> wrote:\n> Could it be that the tests didn't exercise the path before?\n\nHmm, perhaps.\n\n> > Nonetheless, here's a patch. I notice that there's a similar problem\n> > in another place, too. get_xid_status() is called a total of five\n> > times and it looks like only three of them got it right. I suppose\n> > that if this is correct we should back-patch it.\n>\n> Yea, I think you're right.\n\nOK.\n\n> > + report_corruption(ctx,\n> > + pstrdup(\"xmin is invalid\"));\n>\n> Not a correctnes issue: Nearly all callers to report_corruption() do a\n> psprintf(), the remaining a pstrdup(), as here. Seems like it'd be cleaner to\n> just make report_corruption() accept a format string?\n\nMeh.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 23 Mar 2023 19:20:29 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
},
{
"msg_contents": "On Thu, Mar 23, 2023 at 3:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Mar 23, 2023 at 1:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > OK, let me spend some more time on this and I'll post a patch (or\n> > patches) in a bit.\n>\n> All right, here are some more fixups.\n\nIt looks like e88754a1965c0f40a723e6e46d670cacda9e19bd make skink\nhappy (although Peter Geoghegan has spotted a problem with it, see the\nthread that begins with the commit email) so I went ahead and\ncommitted these fixups. Hopefully that won't again make the buildfarm\nunhappy, but I guess we'll see.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Mar 2023 13:43:07 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: HOT chain validation in verify_heapam()"
}
],
[
{
"msg_contents": "Hi hackers,\n\nDuring the work in [1] we created a new TAP test to test the SYSTEM_USER \nbehavior with peer authentication.\n\nIt turns out that there is currently no TAP test for the peer \nauthentication, so we think (thanks Michael for the suggestion [2]) that \nit's better to split the work in [1] between \"pure\" SYSTEM_USER related \nwork and the \"pure\" peer authentication TAP test work.\n\nThat's the reason of this new thread, please find attached a patch to \nadd a new TAP test for the peer authentication.\n\n[1]: \nhttps://www.postgresql.org/message-id/flat/7e692b8c-0b11-45db-1cad-3afc5b57409f%40amazon.com\n\n[2]: https://www.postgresql.org/message-id/YwgboqQUV1%2BY/k6z%40paquier.xyz\n\nRegards,\n\n-- \nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 26 Aug 2022 10:43:43 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 10:43:43AM +0200, Drouvot, Bertrand wrote:\n> During the work in [1] we created a new TAP test to test the SYSTEM_USER\n> behavior with peer authentication.\n> \n> It turns out that there is currently no TAP test for the peer\n> authentication, so we think (thanks Michael for the suggestion [2]) that\n> it's better to split the work in [1] between \"pure\" SYSTEM_USER related work\n> and the \"pure\" peer authentication TAP test work.\n> \n> That's the reason of this new thread, please find attached a patch to add a\n> new TAP test for the peer authentication.\n\n+# Get the session_user to define the user name map test.\n+my $session_user =\n+ $node->safe_psql('postgres', 'select session_user');\n[...]\n+# Define a user name map.\n+$node->append_conf('pg_ident.conf', qq{mypeermap $session_user testmap$session_user});\n+\n+# Set pg_hba.conf with the peer authentication and the user name map.\n+reset_pg_hba($node, 'peer map=mypeermap');\n\nA map consists of a \"MAPNAME SYSTEM_USER PG_USER\". Why does this test\nuse a Postgres role (from session_user) as the system user for the\npeer map?\n--\nMichael",
"msg_date": "Wed, 28 Sep 2022 14:52:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "Hi,\n\nOn 9/28/22 7:52 AM, Michael Paquier wrote:\n> On Fri, Aug 26, 2022 at 10:43:43AM +0200, Drouvot, Bertrand wrote:\n>> During the work in [1] we created a new TAP test to test the SYSTEM_USER\n>> behavior with peer authentication.\n>>\n>> It turns out that there is currently no TAP test for the peer\n>> authentication, so we think (thanks Michael for the suggestion [2]) that\n>> it's better to split the work in [1] between \"pure\" SYSTEM_USER related work\n>> and the \"pure\" peer authentication TAP test work.\n>>\n>> That's the reason of this new thread, please find attached a patch to add a\n>> new TAP test for the peer authentication.\n> \n> +# Get the session_user to define the user name map test.\n> +my $session_user =\n> + $node->safe_psql('postgres', 'select session_user');\n> [...]\n> +# Define a user name map.\n> +$node->append_conf('pg_ident.conf', qq{mypeermap $session_user testmap$session_user});\n> +\n> +# Set pg_hba.conf with the peer authentication and the user name map.\n> +reset_pg_hba($node, 'peer map=mypeermap');\n> \n> A map consists of a \"MAPNAME SYSTEM_USER PG_USER\". Why does this test\n> use a Postgres role (from session_user) as the system user for the\n> peer map?\n\nThanks for looking at it!\n\nInitially selecting the session_user with a \"local\" connection and no \nuser provided during the connection is a way I came up to retrieve the \n\"SYSTEM_USER\" to be used later on in the map.\n\nMaybe the variable name should be system_user instead or should we use \nanother way to get the \"SYSTEM_USER\" to be used in the map?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 28 Sep 2022 09:12:57 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 09:12:57AM +0200, Drouvot, Bertrand wrote:\n> Maybe the variable name should be system_user instead or should we use\n> another way to get the \"SYSTEM_USER\" to be used in the map?\n\nHmm, indeed. It would be more reliable to rely on the contents\nreturned by getpeereid()/getpwuid() after one successful peer\nconnection, then use it in the map. I was wondering whether using\nstuff like getpwuid() in the perl script itself would be better, but\nit sounds less of a headache in terms of portability to just rely on\nauthn_id via SYSTEM_USER to generate the contents of the correct map.\n--\nMichael",
"msg_date": "Wed, 28 Sep 2022 16:24:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Wed, Sep 28, 2022 at 04:24:44PM +0900, Michael Paquier wrote:\n> Hmm, indeed. It would be more reliable to rely on the contents\n> returned by getpeereid()/getpwuid() after one successful peer\n> connection, then use it in the map. I was wondering whether using\n> stuff like getpwuid() in the perl script itself would be better, but\n> it sounds less of a headache in terms of portability to just rely on\n> authn_id via SYSTEM_USER to generate the contents of the correct map.\n\nBy the way, on an extra read I have found a few things that can be\nsimplified\n- I think that test_role() should be reworked so as the log patterns\nexpected are passed down to connect_ok() and connect_fails() rather\nthan involving find_in_log(). You still need find_in_log() to skip\nproperly the case where peer is not supported by the platform, of\ncourse.\n- get_log_size() is not necessary. You should be able to get the same\ninformation with \"-s $self->logfile\".\n- Nit: a newline should be added at the end of 003_peer.pl.\n--\nMichael",
"msg_date": "Fri, 30 Sep 2022 09:00:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "Hi,\n\nOn 9/30/22 2:00 AM, Michael Paquier wrote:\n> On Wed, Sep 28, 2022 at 04:24:44PM +0900, Michael Paquier wrote:\n>> Hmm, indeed. It would be more reliable to rely on the contents\n>> returned by getpeereid()/getpwuid() after one successful peer\n>> connection, then use it in the map. I was wondering whether using\n>> stuff like getpwuid() in the perl script itself would be better, but\n>> it sounds less of a headache in terms of portability to just rely on\n>> authn_id via SYSTEM_USER to generate the contents of the correct map.\n> \n> By the way, on an extra read I have found a few things that can be\n> simplified\n> - I think that test_role() should be reworked so as the log patterns\n> expected are passed down to connect_ok() and connect_fails() rather\n> than involving find_in_log(). You still need find_in_log() to skip\n> properly the case where peer is not supported by the platform, of\n> course.\n> - get_log_size() is not necessary. You should be able to get the same\n> information with \"-s $self->logfile\".\n> - Nit: a newline should be added at the end of 003_peer.pl.\n> --\n\nAgree that it could be simplified, thanks for the hints!\n\nAttached a simplified version.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 30 Sep 2022 19:51:29 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Fri, Sep 30, 2022 at 07:51:29PM +0200, Drouvot, Bertrand wrote:\n> Agree that it could be simplified, thanks for the hints!\n> \n> Attached a simplified version.\n\nWhile looking at that, I have noticed that it is possible to reduce\nthe number of connection attempts (for example no need to re-test that\nthe connection works when the map is not set, and the authn log would\nbe the same with the map in place). Note that a path's meson.build\nneeds a refresh for any new file added into the tree, with 003_peer.pl\nmissing so this new test was not running in the recent CI runs. The\nindentation was also a bit wrong and I have tweaked a few comments,\nbefore finally applying it.\n--\nMichael",
"msg_date": "Mon, 3 Oct 2022 16:46:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "Hi,\n\nOn 10/3/22 9:46 AM, Michael Paquier wrote:\n> On Fri, Sep 30, 2022 at 07:51:29PM +0200, Drouvot, Bertrand wrote:\n>> Agree that it could be simplified, thanks for the hints!\n>>\n>> Attached a simplified version.\n> \n> While looking at that, I have noticed that it is possible to reduce\n> the number of connection attempts (for example no need to re-test that\n> the connection works when the map is not set, and the authn log would\n> be the same with the map in place).\n\nYeah that's right, thanks for simplifying further.\n\n> Note that a path's meson.build\n> needs a refresh for any new file added into the tree, with 003_peer.pl\n> missing so this new test was not running in the recent CI runs. The\n> indentation was also a bit wrong and I have tweaked a few comments,\n> before finally applying it.\n\nThanks!\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 3 Oct 2022 10:59:21 +0200",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "Hello!\n\nOn Windows this test fails with error:\n# connection error: 'psql: error: connection to server at \"127.0.0.1\", port xxxxx failed:\n# FATAL: no pg_hba.conf entry for host \"127.0.0.1\", user \"buildfarm\", database \"postgres\", no encryption'\n\nMay be disable this test for windows like in 001_password.pl and 002_saslprep.pl?\n\n\nBest wishes,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 25 Nov 2022 07:56:08 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 07:56:08AM +0300, Anton A. Melnikov wrote:\n> On Windows this test fails with error:\n> # connection error: 'psql: error: connection to server at \"127.0.0.1\", port xxxxx failed:\n> # FATAL: no pg_hba.conf entry for host \"127.0.0.1\", user \"buildfarm\", database \"postgres\", no encryption'\n> \n> May be disable this test for windows like in 001_password.pl and 002_saslprep.pl?\n\nYou are not using MSVC but MinGW, are you? The buildfarm members with\nTAP tests enabled are drongo, fairywren, bowerbord and jacana. Even\nthough none of them are running the tests from\nsrc/test/authentication/, this is running on a periodic basis in the\nCI, where we are able to skip the test in MSVC already: \npostgresql:authentication / authentication/003_peer SKIP 9.73s\n\nSo yes, it is plausible that we are missing more safeguards here.\n\nYour suggestion to skip under !$use_unix_sockets makes sense, as not\nhaving unix sockets is not going to work for peer and WIN32 needs SSPI\nto be secure with pg_regress. Where is your test failing? On the\nfirst $node->psql('postgres') at the beginning of the test? Just\nwondering..\n--\nMichael",
"msg_date": "Fri, 25 Nov 2022 14:18:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "Hello, thanks for rapid answer!\n\nOn 25.11.2022 08:18, Michael Paquier wrote:\n> You are not using MSVC but MinGW, are you? The buildfarm members with\n> TAP tests enabled are drongo, fairywren, bowerbord and jacana. Even\n> though none of them are running the tests from\n> src/test/authentication/, this is running on a periodic basis in the\n> CI, where we are able to skip the test in MSVC already:\n> postgresql:authentication / authentication/003_peer SKIP 9.73s\n\nThere is MSVC on my PC. The project was build with\nMicrosoft (R) C/C++ Optimizing Compiler Version 19.29.30136 for x64\n\n> So yes, it is plausible that we are missing more safeguards here.\n> \n> Your suggestion to skip under !$use_unix_sockets makes sense, as not\n> having unix sockets is not going to work for peer and WIN32 needs SSPI\n> to be secure with pg_regress. Where is your test failing? On the\n> first $node->psql('postgres') at the beginning of the test? Just\n> wondering..\n\nThe test fails almost at the beginning in reset_pg_hba call after\nmodification pg_hba.conf and node reloading:\n#t/003_peer.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n#No subtests run\n\nLogs regress_log_003_peer and 003_peer_node.log are attached.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 25 Nov 2022 10:13:29 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "On Fri, Nov 25, 2022 at 10:13:29AM +0300, Anton A. Melnikov wrote:\n> The test fails almost at the beginning in reset_pg_hba call after\n> modification pg_hba.conf and node reloading:\n> #t/003_peer.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n> #No subtests run\n> \n> Logs regress_log_003_peer and 003_peer_node.log are attached.\n\nYeah, that's failing exactly at the position I am pointing to. I'll\ngo apply what you have..\n--\nMichael",
"msg_date": "Fri, 25 Nov 2022 16:34:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
},
{
"msg_contents": "\nOn 25.11.2022 10:34, Michael Paquier wrote:\n> On Fri, Nov 25, 2022 at 10:13:29AM +0300, Anton A. Melnikov wrote:\n>> The test fails almost at the beginning in reset_pg_hba call after\n>> modification pg_hba.conf and node reloading:\n>> #t/003_peer.pl .. Dubious, test returned 2 (wstat 512, 0x200)\n>> #No subtests run\n>>\n>> Logs regress_log_003_peer and 003_peer_node.log are attached.\n> \n> Yeah, that's failing exactly at the position I am pointing to. I'll\n> go apply what you have..\n> --\n> Michael\n\nThanks!\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 25 Nov 2022 10:40:43 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add peer authentication TAP test"
}
],
[
{
"msg_contents": "The last 20 some consecutive builds failed:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nlike this:\n[09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n[09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n[09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n[09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n\nI imagine it may be due to an error hit while rebuilding the ci's docker image.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 26 Aug 2022 06:55:46 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 06:55:46 -0500, Justin Pryzby wrote:\n> The last 20 some consecutive builds failed:\n> https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n> \n> like this:\n> [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> \n> I imagine it may be due to an error hit while rebuilding the ci's docker image.\n\nI don't think it's CI specific, see\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2022-08-26%2011%3A00%3A11\n\nLooks like the failures might have started with\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=121d2d3d70ecdb2113b340c5f3b99a61341291af\nbased on\nhttps://cirrus-ci.com/github/postgres/postgres/\n\nNot immediately obvious why that would be.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 06:21:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 06:21:51 -0700, Andres Freund wrote:\n> On 2022-08-26 06:55:46 -0500, Justin Pryzby wrote:\n> > The last 20 some consecutive builds failed:\n> > https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n> > \n> > like this:\n> > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> > \n> > I imagine it may be due to an error hit while rebuilding the ci's docker image.\n> \n> I don't think it's CI specific, see\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2022-08-26%2011%3A00%3A11\n> \n> Looks like the failures might have started with\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=121d2d3d70ecdb2113b340c5f3b99a61341291af\n> based on\n> https://cirrus-ci.com/github/postgres/postgres/\n> \n> Not immediately obvious why that would be.\n\nReproduces in a VM, it starts to fail with that commit. Looks like somehow\ndifferent macros are trampling on each other. Something in perl is interfering\nwith msvc's malloc.h, turning\n\n if (_Marker == _ALLOCA_S_HEAP_MARKER)\n {\n free(_Memory);\n }\ninto\n\n if (_Marker == 0xDDDD)\n {\n (*(my_perl->IMem)->pFree)((my_perl->IMem), (_Memory));\n }\n\nafter preprocessing. No idea how.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 06:40:47 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 06:40:47 -0700, Andres Freund wrote:\n> On 2022-08-26 06:21:51 -0700, Andres Freund wrote:\n> > On 2022-08-26 06:55:46 -0500, Justin Pryzby wrote:\n> > > The last 20 some consecutive builds failed:\n> > > https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n> > > \n> > > like this:\n> > > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> > > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2065: 'my_perl': undeclared identifier (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> > > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/SPI.c) [c:\\cirrus\\plperl.vcxproj]\n> > > [09:29:27.711] C:\\Program Files (x86)\\Windows Kits\\10\\Include\\10.0.20348.0\\ucrt\\malloc.h(159,17): error C2223: left of '->IMem' must point to struct/union (compiling source file src/pl/plperl/Util.c) [c:\\cirrus\\plperl.vcxproj]\n> > > \n> > > I imagine it may be due to an error hit while rebuilding the ci's docker image.\n> > \n> > I don't think it's CI specific, see\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hamerkop&dt=2022-08-26%2011%3A00%3A11\n> > \n> > Looks like the failures might have started with\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=121d2d3d70ecdb2113b340c5f3b99a61341291af\n> > based on\n> > https://cirrus-ci.com/github/postgres/postgres/\n> > \n> > Not immediately obvious why that would be.\n> \n> Reproduces in a VM, it starts to fail with that commit. Looks like somehow\n> different macros are trampling on each other. Something in perl is interfering\n> with msvc's malloc.h, turning\n> \n> if (_Marker == _ALLOCA_S_HEAP_MARKER)\n> {\n> free(_Memory);\n> }\n> into\n> \n> if (_Marker == 0xDDDD)\n> {\n> (*(my_perl->IMem)->pFree)((my_perl->IMem), (_Memory));\n> }\n> \n> after preprocessing. No idea how.\n\nBecause perl, extremely unhelpfully, #defines free. Which, not surprisingly,\ncauses issues when including system headers referencing free as well.\n\nI don't really see a good solution to this other than hoisting the\nmb/pg_wchar.h include out to before we include all the perl stuff. That does\nfix the issue.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 07:27:00 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 9:27 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Because perl, extremely unhelpfully, #defines free. Which, not surprisingly,\n> causes issues when including system headers referencing free as well.\n>\n> I don't really see a good solution to this other than hoisting the\n> mb/pg_wchar.h include out to before we include all the perl stuff. That does\n> fix the issue.\n\nWe could also move is_valid_ascii somewhere else. It's only\ntangentially related to \"wide chars\" anyway.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 21:39:05 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 21:39:05 +0700, John Naylor wrote:\n> On Fri, Aug 26, 2022 at 9:27 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Because perl, extremely unhelpfully, #defines free. Which, not surprisingly,\n> > causes issues when including system headers referencing free as well.\n> >\n> > I don't really see a good solution to this other than hoisting the\n> > mb/pg_wchar.h include out to before we include all the perl stuff. That does\n> > fix the issue.\n> \n> We could also move is_valid_ascii somewhere else. It's only\n> tangentially related to \"wide chars\" anyway.\n\nGiven the crazy defines of stuff like free, it seems like a good idea to have\na rule that no headers should be included after plperl.h with\nPG_NEED_PERL_XSUB_H defined. It's not like there's not other chances of of\npulling in malloc.h from within pg_wchar.h somehow.\n\nIt's a bit ugly to have the mb/pg_wchar.h in plperl.h instead of\nplperl_helpers.h, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 07:47:40 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nTom, Ilmari, you seem to have hacked on this stuff most (not so) recently. Do\nyou have a better suggestion than moving the mb/pg_wchar.h include out of\nplperl_helpers.h as I suggest below?\n\nOn 2022-08-26 07:47:40 -0700, Andres Freund wrote:\n> On 2022-08-26 21:39:05 +0700, John Naylor wrote:\n> > On Fri, Aug 26, 2022 at 9:27 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Because perl, extremely unhelpfully, #defines free. Which, not surprisingly,\n> > > causes issues when including system headers referencing free as well.\n> > >\n> > > I don't really see a good solution to this other than hoisting the\n> > > mb/pg_wchar.h include out to before we include all the perl stuff. That does\n> > > fix the issue.\n> >\n> > We could also move is_valid_ascii somewhere else. It's only\n> > tangentially related to \"wide chars\" anyway.\n>\n> Given the crazy defines of stuff like free, it seems like a good idea to have\n> a rule that no headers should be included after plperl.h with\n> PG_NEED_PERL_XSUB_H defined. It's not like there's not other chances of of\n> pulling in malloc.h from within pg_wchar.h somehow.\n>\n> It's a bit ugly to have the mb/pg_wchar.h in plperl.h instead of\n> plperl_helpers.h, but ...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 13:28:30 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Tom, Ilmari, you seem to have hacked on this stuff most (not so) recently. Do\n> you have a better suggestion than moving the mb/pg_wchar.h include out of\n> plperl_helpers.h as I suggest below?\n\nI agree with the conclusion that we'd better #include all our own\nheaders before any of Perl's. No strong opinions about which\nrearrangement is least ugly --- but let's add some comments about\nthat requirement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 16:32:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "\nOn 2022-08-26 Fr 10:47, Andres Freund wrote:\n> Hi,\n>\n> On 2022-08-26 21:39:05 +0700, John Naylor wrote:\n>> On Fri, Aug 26, 2022 at 9:27 PM Andres Freund <andres@anarazel.de> wrote:\n>>> Because perl, extremely unhelpfully, #defines free. Which, not surprisingly,\n>>> causes issues when including system headers referencing free as well.\n>>>\n>>> I don't really see a good solution to this other than hoisting the\n>>> mb/pg_wchar.h include out to before we include all the perl stuff. That does\n>>> fix the issue.\n>> We could also move is_valid_ascii somewhere else. It's only\n>> tangentially related to \"wide chars\" anyway.\n> Given the crazy defines of stuff like free, it seems like a good idea to have\n> a rule that no headers should be included after plperl.h with\n> PG_NEED_PERL_XSUB_H defined. It's not like there's not other chances of of\n> pulling in malloc.h from within pg_wchar.h somehow.\n>\n> It's a bit ugly to have the mb/pg_wchar.h in plperl.h instead of\n> plperl_helpers.h, but ...\n>\n\n\nIt's already included directly in plperl.c, so couldn't we just lift it\ndirectly into SPI.xs and Util.xs?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Fri, 26 Aug 2022 17:05:52 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-26 17:05:52 -0400, Andrew Dunstan wrote:\n> On 2022-08-26 Fr 10:47, Andres Freund wrote:\n> > Given the crazy defines of stuff like free, it seems like a good idea to have\n> > a rule that no headers should be included after plperl.h with\n> > PG_NEED_PERL_XSUB_H defined. It's not like there's not other chances of of\n> > pulling in malloc.h from within pg_wchar.h somehow.\n> >\n> > It's a bit ugly to have the mb/pg_wchar.h in plperl.h instead of\n> > plperl_helpers.h, but ...\n\n> It's already included directly in plperl.c, so couldn't we just lift it\n> directly into SPI.xs and Util.xs?\n\nI think it'd also be needed in hstore_plperl.c, jsonb_plperl.c. Putting the\ninclude in plperl.h would keep that aspect transparent, because plperl_utils.h\nincludes plperl.h.\n\nI don't think manually including all dependencies, even if it's just one, in\neach of the six files currently using plperl_utils.h is a good approach.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 26 Aug 2022 14:15:07 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-26 17:05:52 -0400, Andrew Dunstan wrote:\n> > On 2022-08-26 Fr 10:47, Andres Freund wrote:\n> > > Given the crazy defines of stuff like free, it seems like a good idea to have\n> > > a rule that no headers should be included after plperl.h with\n> > > PG_NEED_PERL_XSUB_H defined. It's not like there's not other chances of of\n> > > pulling in malloc.h from within pg_wchar.h somehow.\n> > >\n> > > It's a bit ugly to have the mb/pg_wchar.h in plperl.h instead of\n> > > plperl_helpers.h, but ...\n>\n> > It's already included directly in plperl.c, so couldn't we just lift it\n> > directly into SPI.xs and Util.xs?\n>\n> I think it'd also be needed in hstore_plperl.c, jsonb_plperl.c. Putting the\n> include in plperl.h would keep that aspect transparent, because plperl_utils.h\n> includes plperl.h.\n\nSince plperl_helpers.h already includes plperl.h, I'm not sure why\nboth are included everywhere the former is. If .c/.xs files didn't\ninclude plperl.h directly, we could keep pg_wchar.h in\nplperl_helpers.h. Not sure if that's workable or any better...\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 09:32:45 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> On Sat, Aug 27, 2022 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n>> I think it'd also be needed in hstore_plperl.c, jsonb_plperl.c. Putting the\n>> include in plperl.h would keep that aspect transparent, because plperl_utils.h\n>> includes plperl.h.\n\n> Since plperl_helpers.h already includes plperl.h, I'm not sure why\n> both are included everywhere the former is. If .c/.xs files didn't\n> include plperl.h directly, we could keep pg_wchar.h in\n> plperl_helpers.h. Not sure if that's workable or any better...\n\nMaybe we should flush the separate plperl_helpers.h header and just\nput those static-inline functions in plperl.h.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Aug 2022 23:02:06 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Sat, Aug 27, 2022 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n> >> I think it'd also be needed in hstore_plperl.c, jsonb_plperl.c. Putting the\n> >> include in plperl.h would keep that aspect transparent, because plperl_utils.h\n> >> includes plperl.h.\n>\n> > Since plperl_helpers.h already includes plperl.h, I'm not sure why\n> > both are included everywhere the former is. If .c/.xs files didn't\n> > include plperl.h directly, we could keep pg_wchar.h in\n> > plperl_helpers.h. Not sure if that's workable or any better...\n>\n> Maybe we should flush the separate plperl_helpers.h header and just\n> put those static-inline functions in plperl.h.\n\nHere's a patch with that idea, not tested on Windows yet.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 27 Aug 2022 11:20:29 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 11:20 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> Here's a patch with that idea, not tested on Windows yet.\n\nUpdate: I tried taking the CI for a spin, but ran into IT issues with\nGithub when I tried to push my branch to remote.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 12:53:24 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On 2022-08-26 23:02:06 -0400, Tom Lane wrote:\n> John Naylor <john.naylor@enterprisedb.com> writes:\n> > On Sat, Aug 27, 2022 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n> >> I think it'd also be needed in hstore_plperl.c, jsonb_plperl.c. Putting the\n> >> include in plperl.h would keep that aspect transparent, because plperl_utils.h\n> >> includes plperl.h.\n> \n> > Since plperl_helpers.h already includes plperl.h, I'm not sure why\n> > both are included everywhere the former is. If .c/.xs files didn't\n> > include plperl.h directly, we could keep pg_wchar.h in\n> > plperl_helpers.h. Not sure if that's workable or any better...\n> \n> Maybe we should flush the separate plperl_helpers.h header and just\n> put those static-inline functions in plperl.h.\n\n+1\n\n\n",
"msg_date": "Sat, 27 Aug 2022 00:11:13 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "Hi,\n\nOn 2022-08-27 12:53:24 +0700, John Naylor wrote:\n> On Sat, Aug 27, 2022 at 11:20 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> >\n> > Here's a patch with that idea, not tested on Windows yet.\n> \n> Update: I tried taking the CI for a spin, but ran into IT issues with\n> Github when I tried to push my branch to remote.\n\nA github, not a CI issue? Just making sure...\n\nAs a workaround you can just open a CF entry, that'll run the patch soon.\n\n\nBut either way, I ran the patch \"manually\" in a windows VM that I had running\nanyway. With the meson patchset, but I don't see how it could matter here.\n\n1/5 postgresql:setup / tmp_install OK 1.30s\n2/5 postgresql:jsonb_plperl / jsonb_plperl/regress OK 8.30s\n3/5 postgresql:bool_plperl / bool_plperl/regress OK 8.30s\n4/5 postgresql:hstore_plperl / hstore_plperl/regress OK 8.64s\n5/5 postgresql:plperl / plperl/regress OK 10.41s\n\nOk: 5\n\n\nI didn't test other platforms.\n\n\nWRT the patch's commit message: The issue isn't that perl's free() is\nredefined, it's that perl's #define free (which references perl globals!)\nbreaks windows' header...\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 27 Aug 2022 00:23:44 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
},
{
"msg_contents": "On Sat, Aug 27, 2022 at 2:23 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-08-27 12:53:24 +0700, John Naylor wrote:\n> > Update: I tried taking the CI for a spin, but ran into IT issues with\n> > Github when I tried to push my branch to remote.\n>\n> A github, not a CI issue? Just making sure...\n\nYeah, I forked PG from the Github page, cloned it locally, applied the\npatch and tried to push to origin.\n\n> As a workaround you can just open a CF entry, that'll run the patch soon.\n\nYeah, I did that after taking a break -- there are compiler warnings\nfor contrib/sepgsql/label.c where pfree's argument is cast to void *,\nso seems unrelated.\n\n> But either way, I ran the patch \"manually\" in a windows VM that I had running\n> anyway. With the meson patchset, but I don't see how it could matter here.\n>\n> 1/5 postgresql:setup / tmp_install OK 1.30s\n> 2/5 postgresql:jsonb_plperl / jsonb_plperl/regress OK 8.30s\n> 3/5 postgresql:bool_plperl / bool_plperl/regress OK 8.30s\n> 4/5 postgresql:hstore_plperl / hstore_plperl/regress OK 8.64s\n> 5/5 postgresql:plperl / plperl/regress OK 10.41s\n>\n> Ok: 5\n>\n>\n> I didn't test other platforms.\n>\n>\n> WRT the patch's commit message: The issue isn't that perl's free() is\n> redefined, it's that perl's #define free (which references perl globals!)\n> breaks windows' header...\n\nAh, thanks for that detail and for testing, will push.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 27 Aug 2022 14:36:09 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: windows cfbot failing: my_perl"
}
]
[
{
"msg_contents": "Hi,\n\nAt function has_matching_range, if variable ranges->nranges == 0,\nwe exit quickly with a result equal to false.\n\nThis means that nranges can be zero.\nIt occurs then that it is possible then to occur an array out of bonds, in\nthe initialization of the variable maxvalue.\nSo if nranges is equal to zero, there is no need to initialize minvalue and\nmaxvalue.\n\nThe patch tries to fix it, avoiding possible errors by using maxvalue.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 26 Aug 2022 10:28:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "At Fri, 26 Aug 2022 10:28:50 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> At function has_matching_range, if variable ranges->nranges == 0,\n> we exit quickly with a result equal to false.\n> \n> This means that nranges can be zero.\n> It occurs then that it is possible then to occur an array out of bonds, in\n> the initialization of the variable maxvalue.\n> So if nranges is equal to zero, there is no need to initialize minvalue and\n> maxvalue.\n> \n> The patch tries to fix it, avoiding possible errors by using maxvalue.\n\nHowever it seems that nranges will never be zero, still the fix looks\ngood to me since it is generally allowed to be zero. I don't find a\nsimilar mistake related to Range.nranges.\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 29 Aug 2022 10:06:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em dom., 28 de ago. de 2022 às 22:06, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> At Fri, 26 Aug 2022 10:28:50 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > At function has_matching_range, if variable ranges->nranges == 0,\n> > we exit quickly with a result equal to false.\n> >\n> > This means that nranges can be zero.\n> > It occurs then that it is possible then to occur an array out of bonds,\n> in\n> > the initialization of the variable maxvalue.\n> > So if nranges is equal to zero, there is no need to initialize minvalue\n> and\n> > maxvalue.\n> >\n> > The patch tries to fix it, avoiding possible errors by using maxvalue.\n>\n> However it seems that nranges will never be zero, still the fix looks\n> good to me since it is generally allowed to be zero. I don't find a\n> similar mistake related to Range.nranges.\n>\nThanks Kyotaro for taking a look at this.\n\nregards,\nRanier Vilela\n\nEm dom., 28 de ago. de 2022 às 22:06, Kyotaro Horiguchi <horikyota.ntt@gmail.com> escreveu:At Fri, 26 Aug 2022 10:28:50 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> At function has_matching_range, if variable ranges->nranges == 0,\n> we exit quickly with a result equal to false.\n> \n> This means that nranges can be zero.\n> It occurs then that it is possible then to occur an array out of bonds, in\n> the initialization of the variable maxvalue.\n> So if nranges is equal to zero, there is no need to initialize minvalue and\n> maxvalue.\n> \n> The patch tries to fix it, avoiding possible errors by using maxvalue.\n\nHowever it seems that nranges will never be zero, still the fix looks\ngood to me since it is generally allowed to be zero. I don't find a\nsimilar mistake related to Range.nranges.Thanks \nKyotaro for taking a look at this. regards,Ranier Vilela",
"msg_date": "Mon, 29 Aug 2022 09:51:28 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Sat, 27 Aug 2022 at 01:29, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> At function has_matching_range, if variable ranges->nranges == 0,\n> we exit quickly with a result equal to false.\n>\n> This means that nranges can be zero.\n> It occurs then that it is possible then to occur an array out of bonds, in the initialization of the variable maxvalue.\n> So if nranges is equal to zero, there is no need to initialize minvalue and maxvalue.\n\nI think there's more strange coding in the same file that might need\naddressed, for example, AssertCheckRanges() does:\n\nif (ranges->nranges == 0)\nbreak;\n\nfrom within the first for() loop. Why can't that check be outside of\nthe loop. Nothing seems to make any changes to that field from within\nthe loop.\n\nAlso, in the final loop of the same function there's:\n\nif (ranges->nsorted == 0)\nbreak;\n\nIt's not very obvious to me why we don't only run that loop when\nranges->nsorted > 0. Also, isn't it an array overrun to access:\n\nDatum value = ranges->values[2 * ranges->nranges + i];\n\nIf there's only 1 range stored in the array, then there should be 2\nelements, but that code will try to access the 3rd element with\nranges->values[2].\n\nThis is not so critical, but I'll note it down anyway. The following\nlooks a bit suboptimal in brin_minmax_multi_summary_out():\n\nStringInfoData str;\n\ninitStringInfo(&str);\n\na = FunctionCall1(&fmgrinfo, ranges_deserialized->values[idx++]);\n\nappendStringInfoString(&str, DatumGetCString(a));\n\nb = cstring_to_text(str.data);\n\nWhy do we need a StringInfoData there? Why not just do:\n\nb = cstring_to_text(DatumGetCString(a)); ?\n\nThat requires less memcpy()s and pallocs().\n\nI've included Tomas just in case I've misunderstood the nrange stuff.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Sep 2022 12:27:45 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em qui., 1 de set. de 2022 às 21:27, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Sat, 27 Aug 2022 at 01:29, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > At function has_matching_range, if variable ranges->nranges == 0,\n> > we exit quickly with a result equal to false.\n> >\n> > This means that nranges can be zero.\n> > It occurs then that it is possible then to occur an array out of bonds,\n> in the initialization of the variable maxvalue.\n> > So if nranges is equal to zero, there is no need to initialize minvalue\n> and maxvalue.\n>\n> I think there's more strange coding in the same file that might need\n> addressed, for example, AssertCheckRanges() does:\n>\n> if (ranges->nranges == 0)\n> break;\n>\n> from within the first for() loop. Why can't that check be outside of\n> the loop. Nothing seems to make any changes to that field from within\n> the loop.\n>\n> Also, in the final loop of the same function there's:\n>\n> if (ranges->nsorted == 0)\n> break;\n>\n> It's not very obvious to me why we don't only run that loop when\n> ranges->nsorted > 0. Also, isn't it an array overrun to access:\n>\n> Datum value = ranges->values[2 * ranges->nranges + i];\n>\n> If there's only 1 range stored in the array, then there should be 2\n> elements, but that code will try to access the 3rd element with\n> ranges->values[2].\n>\nYeah, it seems to me that both nranges and nsorted are invariant there,\nso we can safely avoid loops.\n\n\n>\n> This is not so critical, but I'll note it down anyway. The following\n> looks a bit suboptimal in brin_minmax_multi_summary_out():\n>\n> StringInfoData str;\n>\n> initStringInfo(&str);\n>\n> a = FunctionCall1(&fmgrinfo, ranges_deserialized->values[idx++]);\n>\n> appendStringInfoString(&str, DatumGetCString(a));\n>\n> b = cstring_to_text(str.data);\n>\n> Why do we need a StringInfoData there? 
Why not just do:\n>\n> b = cstring_to_text(DatumGetCString(a)); ?\n>\n> That requires less memcpy()s and pallocs().\n>\nI agree that StringInfoData is not needed there.\nIs it strange to convert char * to only store a temporary str.data.\n\nWhy not?\nastate_values = accumArrayResult(astate_values,\n PointerGetDatum(a),\n false,\n TEXTOID,\n CurrentMemoryContext);\n\nIs it possible to avoid cstring_to_text conversion?\n\nregards,\nRanier Vilela\n\nEm qui., 1 de set. de 2022 às 21:27, David Rowley <dgrowleyml@gmail.com> escreveu:On Sat, 27 Aug 2022 at 01:29, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> At function has_matching_range, if variable ranges->nranges == 0,\n> we exit quickly with a result equal to false.\n>\n> This means that nranges can be zero.\n> It occurs then that it is possible then to occur an array out of bonds, in the initialization of the variable maxvalue.\n> So if nranges is equal to zero, there is no need to initialize minvalue and maxvalue.\n\nI think there's more strange coding in the same file that might need\naddressed, for example, AssertCheckRanges() does:\n\nif (ranges->nranges == 0)\nbreak;\n\nfrom within the first for() loop. Why can't that check be outside of\nthe loop. Nothing seems to make any changes to that field from within\nthe loop.\n\nAlso, in the final loop of the same function there's:\n\nif (ranges->nsorted == 0)\nbreak;\n\nIt's not very obvious to me why we don't only run that loop when\nranges->nsorted > 0. Also, isn't it an array overrun to access:\n\nDatum value = ranges->values[2 * ranges->nranges + i];\n\nIf there's only 1 range stored in the array, then there should be 2\nelements, but that code will try to access the 3rd element with\nranges->values[2].Yeah, it seems to me that both nranges and nsorted are invariant there, so we can safely avoid loops. \n\nThis is not so critical, but I'll note it down anyway. 
The following\nlooks a bit suboptimal in brin_minmax_multi_summary_out():\n\nStringInfoData str;\n\ninitStringInfo(&str);\n\na = FunctionCall1(&fmgrinfo, ranges_deserialized->values[idx++]);\n\nappendStringInfoString(&str, DatumGetCString(a));\n\nb = cstring_to_text(str.data);\n\nWhy do we need a StringInfoData there? Why not just do:\n\nb = cstring_to_text(DatumGetCString(a)); ?\n\nThat requires less memcpy()s and pallocs().I agree that StringInfoData is not needed there.Is it strange to convert char * to only store a temporary str.data.Why not?\t\tastate_values = accumArrayResult(astate_values, PointerGetDatum(a), false, TEXTOID, CurrentMemoryContext);Is it possible to avoid cstring_to_text conversion?regards,Ranier Vilela",
"msg_date": "Thu, 1 Sep 2022 21:55:25 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Fri, 2 Sept 2022 at 12:55, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Why not?\n> astate_values = accumArrayResult(astate_values,\n> PointerGetDatum(a),\n> false,\n> TEXTOID,\n> CurrentMemoryContext);\n>\n> Is it possible to avoid cstring_to_text conversion?\n\nNote the element_type is being passed to accumArrayResult() as\nTEXTOID, so we should be passing a text type, not a cstring type. The\nconversion to text is required.\n\nDavid\n\n\n",
"msg_date": "Fri, 2 Sep 2022 12:58:37 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Mon, Aug 29, 2022 at 10:06:55AM +0900, Kyotaro Horiguchi wrote:\n> At Fri, 26 Aug 2022 10:28:50 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> > At function has_matching_range, if variable ranges->nranges == 0,\n> > we exit quickly with a result equal to false.\n> > \n> > This means that nranges can be zero.\n> > It occurs then that it is possible then to occur an array out of bonds, in\n> > the initialization of the variable maxvalue.\n> > So if nranges is equal to zero, there is no need to initialize minvalue and\n> > maxvalue.\n> > \n> > The patch tries to fix it, avoiding possible errors by using maxvalue.\n> \n> However it seems that nranges will never be zero, still the fix looks\n> good to me since it is generally allowed to be zero. I don't find a\n> similar mistake related to Range.nranges.\n\nActually, the nranges==0 branch is hit during regression tests:\nhttps://coverage.postgresql.org/src/backend/access/brin/brin_minmax_multi.c.gcov.html\n\nI'm not sure, but I *suspect* that compilers usually check\n ranges->nranges==0\nbefore reading ranges->values[2 * ranges->nranges - 1];\n\nEspecially since it's a static function.\n\nEven if they didn't (say, under -O0), values[-1] would probably point to\na palloc header, which would be enough to \"not crash\" before returning\none line later.\n\nBut +1 to fix this and other issues even if they would never crash.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 2 Sep 2022 07:01:30 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em sex., 2 de set. de 2022 às 09:01, Justin Pryzby <pryzby@telsasoft.com>\nescreveu:\n\n> On Mon, Aug 29, 2022 at 10:06:55AM +0900, Kyotaro Horiguchi wrote:\n> > At Fri, 26 Aug 2022 10:28:50 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > > At function has_matching_range, if variable ranges->nranges == 0,\n> > > we exit quickly with a result equal to false.\n> > >\n> > > This means that nranges can be zero.\n> > > It occurs then that it is possible then to occur an array out of\n> bonds, in\n> > > the initialization of the variable maxvalue.\n> > > So if nranges is equal to zero, there is no need to initialize\n> minvalue and\n> > > maxvalue.\n> > >\n> > > The patch tries to fix it, avoiding possible errors by using maxvalue.\n> >\n> > However it seems that nranges will never be zero, still the fix looks\n> > good to me since it is generally allowed to be zero. I don't find a\n> > similar mistake related to Range.nranges.\n>\n> Actually, the nranges==0 branch is hit during regression tests:\n>\n> https://coverage.postgresql.org/src/backend/access/brin/brin_minmax_multi.c.gcov.html\n>\n> I'm not sure, but I *suspect* that compilers usually check\n> ranges->nranges==0\n> before reading ranges->values[2 * ranges->nranges - 1];\n>\n> Especially since it's a static function.\n>\n> Even if they didn't (say, under -O0), values[-1] would probably point to\n> a palloc header, which would be enough to \"not crash\" before returning\n> one line later.\n>\n> But +1 to fix this and other issues even if they would never crash.\n>\nThanks Justin.\n\nBased on comments by David, I made a new patch.\nTo simplify I've included the 0001 in the 0002 patch.\n\nSummary:\n1. Once that ranges->nranges is invariant, avoid the loop if\nranges->nranges <= 0.\nThis matches the current behavior.\n\n2. Once that ranges->nsorted is invariant, avoid the loop if\nranges->nsorted <= 0.\nThis matches the current behavior.\n\n3. 
Remove the invariant cxt from ranges->nsorted loop.\n\n4. Avoid possible overflows when using int to store length strings.\n\n5. Avoid possible out-of-bounds when ranges->nranges == 0.\n\n6. Avoid overhead when using unnecessary StringInfoData to convert Datum a\nto Text b.\n\nAttached is 0002.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 2 Sep 2022 09:36:50 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Sat, 3 Sept 2022 at 00:37, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> But +1 to fix this and other issues even if they would never crash.\n\nYeah, I don't think any of this coding would lead to a crash, but it's\npretty weird coding and we should fix it.\n\n> 1. Once that ranges->nranges is invariant, avoid the loop if ranges->nranges <= 0.\n> This matches the current behavior.\n>\n> 2. Once that ranges->nsorted is invariant, avoid the loop if ranges->nsorted <= 0.\n> This matches the current behavior.\n>\n> 3. Remove the invariant cxt from ranges->nsorted loop.\n>\n> 4. Avoid possible overflows when using int to store length strings.\n>\n> 5. Avoid possible out-of-bounds when ranges->nranges == 0.\n>\n> 6. Avoid overhead when using unnecessary StringInfoData to convert Datum a to Text b.\n\nI've ripped out #4 and #6 for now. I think we should do #6 in master\nonly, probably as part of a wider cleanup of StringInfo misusages.\n\nI also spent some time trying to ensure I understand this code\ncorrectly. I was unable to work out what the nsorted field was from\njust the comments. I needed to look at the code to figure it out, so I\nthink the comments for that field need to be improved. A few of the\nothers were not that clear either. I hope I've improved them in the\nattached.\n\nI was also a bit confused at various other comments. e.g:\n\n/*\n* Is the value greater than the maxval? If yes, we'll recurse to the\n* right side of range array.\n*/\n\nI don't see any sort of recursion going on here. 
All I see are\nskipping of values that are out of bounds of the lower bound of the\nlowest range, and above the upper bound of the highest range.\n\nI propose to backpatch the attached into v14 tomorrow morning (about\n12 hours from now).\n\nThe only other weird coding I found was in brin_range_deserialize:\n\nfor (i = 0; (i < nvalues) && (!typbyval); i++)\n\nI imagine most compilers would optimize that so that the typbyval\ncheck is done before the first loop and not done on every loop, but I\ndon't think that coding practice helps the human readers out much. I\nleft that one alone, for now.\n\nDavid",
"msg_date": "Mon, 5 Sep 2022 22:15:26 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 5 de set. de 2022 às 07:15, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Sat, 3 Sept 2022 at 00:37, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >> But +1 to fix this and other issues even if they would never crash.\n>\n> Yeah, I don't think any of this coding would lead to a crash, but it's\n> pretty weird coding and we should fix it.\n>\n> > 1. Once that ranges->nranges is invariant, avoid the loop if\n> ranges->nranges <= 0.\n> > This matches the current behavior.\n> >\n> > 2. Once that ranges->nsorted is invariant, avoid the loop if\n> ranges->nsorted <= 0.\n> > This matches the current behavior.\n> >\n> > 3. Remove the invariant cxt from ranges->nsorted loop.\n> >\n> > 4. Avoid possible overflows when using int to store length strings.\n> >\n> > 5. Avoid possible out-of-bounds when ranges->nranges == 0.\n> >\n> > 6. Avoid overhead when using unnecessary StringInfoData to convert Datum\n> a to Text b.\n>\n> I've ripped out #4 and #6 for now. I think we should do #6 in master\n> only, probably as part of a wider cleanup of StringInfo misusages.\n>\n> I also spent some time trying to ensure I understand this code\n> correctly. I was unable to work out what the nsorted field was from\n> just the comments. I needed to look at the code to figure it out, so I\n> think the comments for that field need to be improved. A few of the\n> others were not that clear either. I hope I've improved them in the\n> attached.\n>\n> I was also a bit confused at various other comments. e.g:\n>\n> /*\n> * Is the value greater than the maxval? If yes, we'll recurse to the\n> * right side of range array.\n> */\n>\nThe second comment in the v3 patch, must be:\n/*\n* Is the value greater than the maxval? If yes, we'll recurse\n* to the right side of the range array.\n*/\n\nI think this is copy-and-paste thinko with the word \"minval\".\n\n\n>\n> I don't see any sort of recursion going on here. All I see are\n> skipping of values that are out of bounds of the lower bound of the\n> lowest range, and above the upper bound of the highest range.\n>\nI think this kind recursion, because the loop is restarted\nwith:\nstart = (midpoint + 1);\ncontinue;\n\n\n>\n> I propose to backpatch the attached into v14 tomorrow morning (about\n> 12 hours from now).\n>\n> The only other weird coding I found was in brin_range_deserialize:\n>\n> for (i = 0; (i < nvalues) && (!typbyval); i++)\n>\n> I imagine most compilers would optimize that so that the typbyval\n> check is done before the first loop and not done on every loop, but I\n> don't think that coding practice helps the human readers out much. I\n> left that one alone, for now.\n>\nYeah, I prefer write:\nif (!typbyval)\n{\n for (i = 0; (i < nvalues); i++)\n}\n\nregards,\nRanier Vilela\n\nEm seg., 5 de set. de 2022 às 07:15, David Rowley <dgrowleyml@gmail.com> escreveu:On Sat, 3 Sept 2022 at 00:37, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>> But +1 to fix this and other issues even if they would never crash.\n\nYeah, I don't think any of this coding would lead to a crash, but it's\npretty weird coding and we should fix it.\n\n> 1. Once that ranges->nranges is invariant, avoid the loop if ranges->nranges <= 0.\n> This matches the current behavior.\n>\n> 2. Once that ranges->nsorted is invariant, avoid the loop if ranges->nsorted <= 0.\n> This matches the current behavior.\n>\n> 3. Remove the invariant cxt from ranges->nsorted loop.\n>\n> 4. Avoid possible overflows when using int to store length strings.\n>\n> 5. Avoid possible out-of-bounds when ranges->nranges == 0.\n>\n> 6. Avoid overhead when using unnecessary StringInfoData to convert Datum a to Text b.\n\nI've ripped out #4 and #6 for now. I think we should do #6 in master\nonly, probably as part of a wider cleanup of StringInfo misusages.\n\nI also spent some time trying to ensure I understand this code\ncorrectly. I was unable to work out what the nsorted field was from\njust the comments. I needed to look at the code to figure it out, so I\nthink the comments for that field need to be improved. A few of the\nothers were not that clear either. I hope I've improved them in the\nattached.\n\nI was also a bit confused at various other comments. e.g:\n\n/*\n* Is the value greater than the maxval? If yes, we'll recurse to the\n* right side of range array.\n*/The second comment in the v3 patch, must be:\t\t\t\t/*\t\t\t\t * Is the value greater than the maxval? If yes, we'll recurse\t\t\t\t * to the right side of the range array.\t\t\t\t */I think this is copy-and-paste thinko with the word \"minval\". \n\nI don't see any sort of recursion going on here. All I see are\nskipping of values that are out of bounds of the lower bound of the\nlowest range, and above the upper bound of the highest range.I think this kind recursion, because the loop is restartedwith:\t\t\t\t\tstart = (midpoint + 1);\t\t\t\t\tcontinue; \n\nI propose to backpatch the attached into v14 tomorrow morning (about\n12 hours from now).\n\nThe only other weird coding I found was in brin_range_deserialize:\n\nfor (i = 0; (i < nvalues) && (!typbyval); i++)\n\nI imagine most compilers would optimize that so that the typbyval\ncheck is done before the first loop and not done on every loop, but I\ndon't think that coding practice helps the human readers out much. I\nleft that one alone, for now.Yeah, I prefer write:if (!typbyval){ for (i = 0; (i < nvalues); i++)\n\n} regards,Ranier Vilela",
"msg_date": "Mon, 5 Sep 2022 09:17:21 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Mon, 5 Sept 2022 at 22:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Sat, 3 Sept 2022 at 00:37, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > 6. Avoid overhead when using unnecessary StringInfoData to convert Datum a to Text b.\n>\n> I've ripped out #4 and #6 for now. I think we should do #6 in master\n> only, probably as part of a wider cleanup of StringInfo misusages.\n\nI've attached a patch which does various other string operation cleanups.\n\n* This changes cstring_to_text() to use cstring_to_text_with_len when\nwe're working with a StringInfo and can just access the .len field.\n* Uses appendStringInfoString instead of appendStringInfo when there\nis special formatting.\n* Uses pstrdup(str) instead of psprintf(\"%s\", str). In many cases\nthis will save a bit of memory\n* Uses appendPQExpBufferChar instead of appendPQExpBufferStr() when\nappending a 1 byte string.\n* Uses appendStringInfoChar() instead of appendStringInfo() when no\nformatting and string is 1 byte.\n* Uses appendStringInfoChar() instead of appendStringInfoString() when\nstring is 1 byte.\n* Uses appendPQExpBuffer(b , ...) instead of appendPQExpBufferStr(b, \"%s\" ...)\n\nI'm aware there are a few other places that we could use\ncstring_to_text_with_len() instead of cstring_to_text(). For example,\nusing the return value of snprintf() to obtain the length. I just\ndidn't do that because we need to take care to check the return value\nisn't -1.\n\nMy grep patterns didn't account for these function calls spanning\nmultiple lines, so I may have missed a few.\n\nDavid",
"msg_date": "Tue, 6 Sep 2022 01:40:43 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 5 de set. de 2022 às 10:40, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Mon, 5 Sept 2022 at 22:15, David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Sat, 3 Sept 2022 at 00:37, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > > 6. Avoid overhead when using unnecessary StringInfoData to convert\n> Datum a to Text b.\n> >\n> > I've ripped out #4 and #6 for now. I think we should do #6 in master\n> > only, probably as part of a wider cleanup of StringInfo misusages.\n>\n> I've attached a patch which does various other string operation cleanups.\n>\n> * This changes cstring_to_text() to use cstring_to_text_with_len when\n> we're working with a StringInfo and can just access the .len field.\n> * Uses appendStringInfoString instead of appendStringInfo when there\n> is special formatting.\n> * Uses pstrdup(str) instead of psprintf(\"%s\", str). In many cases\n> this will save a bit of memory\n> * Uses appendPQExpBufferChar instead of appendPQExpBufferStr() when\n> appending a 1 byte string.\n> * Uses appendStringInfoChar() instead of appendStringInfo() when no\n> formatting and string is 1 byte.\n> * Uses appendStringInfoChar() instead of appendStringInfoString() when\n> string is 1 byte.\n> * Uses appendPQExpBuffer(b , ...) instead of appendPQExpBufferStr(b, \"%s\"\n> ...)\n>\n> I'm aware there are a few other places that we could use\n> cstring_to_text_with_len() instead of cstring_to_text(). For example,\n> using the return value of snprintf() to obtain the length. I just\n> didn't do that because we need to take care to check the return value\n> isn't -1.\n>\n> My grep patterns didn't account for these function calls spanning\n> multiple lines, so I may have missed a few.\n>\nI did a search and found a few more places.\nv1 attached.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 5 Sep 2022 15:07:27 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 06:07, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I did a search and found a few more places.\n> v1 attached.\n\nThanks. I've done a bit more looking and found a few more places that\nwe can improve and I've pushed the result.\n\nIt feels like it would be good if we had a way to detect a few of\nthese issues much earlier than we are currently. There's been a long\nseries of commits fixing up this sort of thing. If we had a tool to\nparse the .c files and look for things like a function call to\nappendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\nno va_arg arguments).\n\nI'll hold off a few days before pushing the other patch. Tom stamped\nbeta4 earlier, so I'll hold off until after the tag.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 13:29:01 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 6 Sept 2022 at 06:07, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > I did a search and found a few more places.\n> > v1 attached.\n>\n> Thanks. I've done a bit more looking and found a few more places that\n> we can improve and I've pushed the result.\n>\nThanks.\n\n\n>\n> It feels like it would be good if we had a way to detect a few of\n> these issues much earlier than we are currently. There's been a long\n> series of commits fixing up this sort of thing. If we had a tool to\n> parse the .c files and look for things like a function call to\n> appendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\n> no va_arg arguments).\n>\nStaticAssert could check va_arg no?\n\nregards,\nRanier Vilela\n\nEm seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com> escreveu:On Tue, 6 Sept 2022 at 06:07, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> I did a search and found a few more places.\n> v1 attached.\n\nThanks. I've done a bit more looking and found a few more places that\nwe can improve and I've pushed the result.Thanks. \n\nIt feels like it would be good if we had a way to detect a few of\nthese issues much earlier than we are currently. There's been a long\nseries of commits fixing up this sort of thing. If we had a tool to\nparse the .c files and look for things like a function call to\nappendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\nno va_arg arguments).StaticAssert could check va_arg no? regards,Ranier Vilela",
"msg_date": "Mon, 5 Sep 2022 22:52:34 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 13:52, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com> escreveu:\n>> It feels like it would be good if we had a way to detect a few of\n>> these issues much earlier than we are currently. There's been a long\n>> series of commits fixing up this sort of thing. If we had a tool to\n>> parse the .c files and look for things like a function call to\n>> appendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\n>> no va_arg arguments).\n>\n> StaticAssert could check va_arg no?\n\nI'm not sure exactly what you have in mind. If you think you have a\nway to make that work, it would be good to see a patch with it.\n\nDavid\n\n\n",
"msg_date": "Tue, 6 Sep 2022 14:02:42 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 5 de set. de 2022 às 23:02, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 6 Sept 2022 at 13:52, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com>\n> escreveu:\n> >> It feels like it would be good if we had a way to detect a few of\n> >> these issues much earlier than we are currently. There's been a long\n> >> series of commits fixing up this sort of thing. If we had a tool to\n> >> parse the .c files and look for things like a function call to\n> >> appendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\n> >> no va_arg arguments).\n> >\n> > StaticAssert could check va_arg no?\n>\n> I'm not sure exactly what you have in mind. If you think you have a\n> way to make that work, it would be good to see a patch with it.\n>\nI will study the matter.\nBut first, I would like to continue with this correction of using strings.\nIn the following cases:\nfprintf -> fputs -> fputc\nprintf -> puts -> putchar\n\nThere are many occurrences, do you think it would be worth the effort?\n\nregards,\nRanier Vilela\n\nEm seg., 5 de set. de 2022 às 23:02, David Rowley <dgrowleyml@gmail.com> escreveu:On Tue, 6 Sept 2022 at 13:52, Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com> escreveu:\n>> It feels like it would be good if we had a way to detect a few of\n>> these issues much earlier than we are currently. There's been a long\n>> series of commits fixing up this sort of thing. If we had a tool to\n>> parse the .c files and look for things like a function call to\n>> appendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\n>> no va_arg arguments).\n>\n> StaticAssert could check va_arg no?\n\nI'm not sure exactly what you have in mind. If you think you have a\nway to make that work, it would be good to see a patch with it.I will study the matter.But first, I would like to continue with this correction of using strings.In the following cases:fprintf -> fputs -> fputcprintf -> puts -> putcharThere are many occurrences, do you think it would be worth the effort?regards,Ranier Vilela",
"msg_date": "Tue, 6 Sep 2022 08:25:22 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 23:25, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> But first, I would like to continue with this correction of using strings.\n> In the following cases:\n> fprintf -> fputs -> fputc\n> printf -> puts -> putchar\n>\n> There are many occurrences, do you think it would be worth the effort?\n\nI'm pretty unexcited about that. Quite a bit of churn and adding\nanother precedent that we currently have no good way to enforce or\nmaintain.\n\nIn addition to that, puts() is a fairly seldom used function, which\nperhaps is because it's a bit quirky and appends a \\n to the end of\nthe string. I'm just imagining all the bugs where we append an extra\nnewline. But, feel free to open another thread about it and see if you\ncan drum up any support.\n\nDavid\n\n\n",
"msg_date": "Wed, 7 Sep 2022 08:59:56 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "On Tue, 6 Sept 2022 at 13:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll hold off a few days before pushing the other patch. Tom stamped\n> beta4 earlier, so I'll hold off until after the tag.\n\nI've now pushed this.\n\nDavid\n\n\n",
"msg_date": "Tue, 13 Sep 2022 11:06:33 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 12 de set. de 2022 às 20:06, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 6 Sept 2022 at 13:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'll hold off a few days before pushing the other patch. Tom stamped\n> > beta4 earlier, so I'll hold off until after the tag.\n>\n> I've now pushed this.\n>\nThank you David.\nBut the correct thing is to put you also as author, after all, there's more\nof your code there than mine.\nAnyway, I appreciate the consideration.\n\nregards,\nRanier Vilela\n\nEm seg., 12 de set. de 2022 às 20:06, David Rowley <dgrowleyml@gmail.com> escreveu:On Tue, 6 Sept 2022 at 13:29, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll hold off a few days before pushing the other patch. Tom stamped\n> beta4 earlier, so I'll hold off until after the tag.\n\nI've now pushed this.Thank you David.But the correct thing is to put you also as author, after all, there's more of your code there than mine.Anyway, I appreciate the consideration.regards,Ranier Vilela",
"msg_date": "Mon, 12 Sep 2022 20:56:46 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
},
{
"msg_contents": "Em seg., 5 de set. de 2022 às 23:02, David Rowley <dgrowleyml@gmail.com>\nescreveu:\n\n> On Tue, 6 Sept 2022 at 13:52, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> >\n> > Em seg., 5 de set. de 2022 às 22:29, David Rowley <dgrowleyml@gmail.com>\n> escreveu:\n> >> It feels like it would be good if we had a way to detect a few of\n> >> these issues much earlier than we are currently. There's been a long\n> >> series of commits fixing up this sort of thing. If we had a tool to\n> >> parse the .c files and look for things like a function call to\n> >> appendPQExpBuffer() and appendStringInfo() with only 2 parameters (i.e\n> >> no va_arg arguments).\n> >\n> > StaticAssert could check va_arg no?\n>\n> I'm not sure exactly what you have in mind. If you think you have a\n> way to make that work, it would be good to see a patch with it.\n>\nAbout this:\n\n1. StaticAssertSmt can not help.\nAlthough some posts on the web show that it is possible to calculate the\nnumber of arguments,\nI didn't get anything useful.\nSo I left this option.\n\n2. Compiler supports\nBest solution.\nBut currently does not allow the suggestion to use another function.\n\n3. Owner tool\nTemporary solution.\nCan help, until the compilers build support for it.\n\nSo, I made one very simple tool, can do the basics here.\nNot meant to be some universal lint.\nIt only processes previously coded functions.\n\npg_check test1.c\nline (1): should be appendPQExpBufferStr?\nline (2): should be appendPQExpBufferChar?\nline (4): should be appendPQExpBufferStr?\nline (5): should be appendPQExpBufferStr?\n\nI don't think it's anywhere near the quality to be considered Postgres, but\nit could be a start.\nIf it helps, great, if not, fine.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 13 Sep 2022 21:34:51 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix possible bogus array out of bonds\n (src/backend/access/brin/brin_minmax_multi.c)"
}
] |
[
{
"msg_contents": "According to pg_has_role, it's possible to have USAGE WITH ADMIN\nOPTION on a role without having USAGE:\n\ntemplate1=# create role foo;\nCREATE ROLE\ntemplate1=# create role admin;\nCREATE ROLE\ntemplate1=# grant foo to admin with inherit false, admin true;\nGRANT ROLE\ntemplate1=# select p.priv, pg_has_role('admin', 'foo', p.priv) from\n(values ('USAGE'), ('MEMBER'),('USAGE WITH ADMIN OPTION'), ('MEMBER\nWITH ADMIN OPTION')) p(priv);\n priv | pg_has_role\n--------------------------+-------------\n USAGE | f\n MEMBER | t\n USAGE WITH ADMIN OPTION | t\n MEMBER WITH ADMIN OPTION | t\n(4 rows)\n\nTo me it seems wrong to say that you can have \"X WITH Y\" without\nhaving X. If I order a hamburger with fries, I do not only get fries:\nI also get a hamburger. I think the problem here is that pg_has_role()\nis defined to work like has_table_privilege(), and for table\nprivileges, each underlying privilege bit has a corresponding bit\nrepresenting the right to grant that privilege, and you can't grant\nthe right to set the privilege without first granting the privilege.\nFor roles, you just get ADMIN OPTION on the role, and that entitles\nyou to grant or revoke any privilege associated with the role. So the\nwhole way this function is defined seems wrong to me. It seems like it\nwould be more reasonable to have the third argument be, e.g. MEMBER,\nUSAGE, or ADMIN and forget about this WITH ADMIN OPTION stuff. That\nwould be a behavior change, though.\n\nIf we don't do that, then I think things just get weirder if we add\nsome more privileges around role memberships. Let's say that in\naddition to INHERIT OPTION and GRANT OPTION, we add some other things\nthat one role could do to another, let's say FLUMMOX, PERTURB, and\nDISCOMBOBULATE, then we'll just end up with more and more synonyms for\n\"does this role have admin option\". That is:\n\n column1 | column2\n----------------------------------+---------------------------------------------\n USAGE | Is this grant inheritable?\n MEMBER | Does a grant even exist in the first place?\n FLUMMOX | Can this grant flummox?\n PERTURB | Can this grant perturb?\n DISCOMBOBULATE | Can this grant discombobulate?\n USAGE WITH ADMIN OPTION | Does this grant have ADMIN OPTION?\n MEMBER WITH ADMIN OPTION | Does this grant have ADMIN OPTION?\n FLUMMOX WITH ADMIN OPTION | Does this grant have ADMIN OPTION?\n PERTURB WITH ADMIN OPTION | Does this grant have ADMIN OPTION?\n DISCOMBOBULATE WITH ADMIN OPTION | Does this grant have ADMIN OPTION?\n\nMaybe everybody else thinks that would be just fine? To me it seems\nfairly misleading.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 11:55:08 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_has_role's handling of ADMIN OPTION is pretty weird"
},
{
"msg_contents": "On Fri, Aug 26, 2022 at 11:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> According to pg_has_role, it's possible to have USAGE WITH ADMIN\n> OPTION on a role without having USAGE:\n\nOne more thing about this. The documentation about how this function\nactually works seems never to have been very good, and I think it's\nactually worse starting in v13. In v12 and prior it wasn't terribly\nclear, but we said this:\n\n\"pg_has_role checks whether a user can access a role in a particular\nway. Its argument possibilities are analogous to has_table_privilege,\nexcept that public is not allowed as a user name. The desired access\nprivilege type must evaluate to some combination of MEMBER or USAGE.\nMEMBER denotes direct or indirect membership in the role (that is, the\nright to do SET ROLE), while USAGE denotes whether the privileges of\nthe role are immediately available without doing SET ROLE.\"\n\nNow, has_table_privilege() allows you to specify multiple table\noptions and to append WITH GRANT OPTION to any or all of them. That\nactually works for pg_has_role() too, and a particularly sharp user\nmight suppose based on what we say elsewhere in the documentation\nthat, in the case of roles, we normally write WITH ADMIN OPTION rather\nthan WITH GRANT OPTION. So possibly someone could figure out what this\nfunction actually does without reading the source code, at least if\nthey have a PhD degree in PostgreSQL-ology.\n\nStarting in v13, the only explicit mention of pg_has_role() is this table entry:\n\n\"pg_has_role ( [ user name or oid, ] role text or oid, privilege text\n) → boolean\n\nDoes user have privilege for role? Allowable privilege types are\nMEMBER and USAGE. MEMBER denotes direct or indirect membership in the\nrole (that is, the right to do SET ROLE), while USAGE denotes whether\nthe privileges of the role are immediately available without doing SET\nROLE. This function does not allow the special case of setting user to\npublic, because the PUBLIC pseudo-role can never be a member of real\nroles.\"\n\nThat gives no hint that you can specify multiple privileges, let alone\nappend WITH ADMIN OPTION or WITH GRANT OPTION. Everything else in this\ntable has the same problem. There is some text above the table which\nexplains what's going on here and from which it might be possible to\ninfer the behavior of pg_has_role(), but only if you actually read\nthat text and understand that it actually acts as a modifier to\neverything as follows. None of the functions actually do what they say\nthey do; they all do approximately that, but as modified to fit the\nscheme described in this paragraph.\n\nAt the very least, these table entries should say that the last\nargument is called \"privileges\" not \"privilege\" so that someone might\nhave a clue that more than one can be specified. And for the ones\nwhere you can add \"WITH GRANT OPTION\" or \"WITH ADMIN OPTION\" that\nshould be mentioned in the table itself.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 26 Aug 2022 13:16:15 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_has_role's handling of ADMIN OPTION is pretty weird"
}
] |
[
{
"msg_contents": "Hi all,\n\nJust a reminder that September 2022 commitfest will begin this\n\ncoming Thursday, September 1.\n\nAs of now, there have been “267” patches in total. Out of these\n\n267 patches, “22” patches required committer attention. Unfortunately,\n\nonly three patches have a committer. I think the author needs to find a\n\ncommitter, or the committer needs to look at the patches.\n\n\n 1.\n\n Fix assertion failure with barriers in parallel hash join\n 2.\n\n pg_dump - read data for some options from external file\n 3.\n\n Add non-blocking version of PQcancel\n 4.\n\n use has_privs_of_role() for pg_hba.conf (jconway)\n 5.\n\n explain analyze rows=%.0f\n 6.\n\n Allow pageinspect's bt_page_stats function to return a set of rows\n instead of a single row\n 7.\n\n pg_stat_statements: Track statement entry timestamp\n 8.\n\n Add connection active, idle time to pg_stat_activity\n 9.\n\n Add Amcheck option for checking unique constraints in btree indexes\n 10.\n\n jit_warn_above_fraction parameter\n 11.\n\n fix spinlock contention in LogwrtResult\n 12.\n\n Faster pglz compression (fuzzycz)\n 13.\n\n Parallel Hash Full Join (macdice)\n 14.\n\n KnownAssignedXidsGetAndSetXmin performance\n 15.\n\n psql - refactor echo code\n 16.\n\n Use \"WAL segment\" instead of \"log segment\" consistently in user-facing\n messages\n 17.\n\n Avoid erroring out when unable to remove or parse logical rewrite files\n to save checkpoint work\n 18.\n\n pg_receivewal fail to streams when the partial file to write is not\n fully initialized present in the wal receiver directory\n 19.\n\n On client login event trigger\n 20.\n\n Update relfrozenxmin when truncating temp tables\n 21.\n\n XID formatting and SLRU refactorings (Independent part of: Add 64-bit\n XIDs into PostgreSQL 15)\n 22.\n\n Unit tests for SLRU\n\n\nCurrently, we have 31 patches which require the author's attention.\n\nIf you already fixed and replied, change the status.\n\nI'll send out reminders this week to get your patches\n\nregistered/rebased, I'll update stale statuses in the CF app.\n\nThanks,\n\n--\nIbrar Ahmed.\n\nHi all,Just a reminder that September 2022 commitfest will begin this coming Thursday, September 1. As of now, there have been “267” patches in total. Out of these267 patches, “22” patches required committer attention. Unfortunately, only three patches have a committer. I think the author needs to find a committer, or the committer needs to look at the patches. Fix assertion failure with barriers in parallel hash joinpg_dump - read data for some options from external fileAdd non-blocking version of PQcanceluse has_privs_of_role() for pg_hba.conf (jconway)explain analyze rows=%.0fAllow pageinspect's bt_page_stats function to return a set of rows instead of a single rowpg_stat_statements: Track statement entry timestampAdd connection active, idle time to pg_stat_activityAdd Amcheck option for checking unique constraints in btree indexesjit_warn_above_fraction parameterfix spinlock contention in LogwrtResultFaster pglz compression (fuzzycz)Parallel Hash Full Join (macdice)KnownAssignedXidsGetAndSetXmin performancepsql - refactor echo codeUse \"WAL segment\" instead of \"log segment\" consistently in user-facing messagesAvoid erroring out when unable to remove or parse logical rewrite files to save checkpoint workpg_receivewal fail to streams when the partial file to write is not fully initialized present in the wal receiver directoryOn client login event triggerUpdate relfrozenxmin when truncating temp tablesXID formatting and SLRU refactorings (Independent part of: Add 64-bit XIDs into PostgreSQL 15)Unit tests for SLRUCurrently, we have 31 patches which require the author's attention. If you already fixed and replied, change the status. I'll send out reminders this week to get your patchesregistered/rebased, I'll update stale statuses in the CF app.Thanks,--Ibrar Ahmed.",
"msg_date": "Sun, 28 Aug 2022 00:28:26 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmed@percona.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2022-09] Begins This Thursday"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 12:28:26AM +0500, Ibrar Ahmed wrote:\n> Just a reminder that September 2022 commitfest will begin this\n> \n> coming Thursday, September 1.\n\nWe are the 1st of September AoE [1], so I have taken the liberty to\nswitch the CF as in progress.\n\n[1]: https://www.timeanddate.com/time/zones/aoe\n--\nMichael",
"msg_date": "Fri, 2 Sep 2022 17:14:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2022-09] Begins This Thursday"
}
] |
[
{
"msg_contents": "I once wrote code like this:\n\n char *oid = get_from_somewhere();\n ...\n\n values[i++] = ObjectIdGetDatum(oid);\n\nThis compiles cleanly and even appears to work in practice, except of \ncourse it doesn't.\n\nThe FooGetDatum() macros just cast whatever you give it to Datum, \nwithout checking whether the input was really foo.\n\nTo address this, I converted these macros to inline functions, which \nenables type checking of the input argument. For symmetry, I also \nconverted the corresponding DatumGetFoo() macros (but those are less \nlikely to cover mistakes, since the input argument is always Datum). \nThis is patch 0002.\n\n(I left some of the DatumGet... of the varlena types in fmgr.h as \nmacros. These ultimately map to functions that do type checking, so \nthere would be little more to be learnt from that. But we could do \nthose for consistency as well.)\n\nThis whole thing threw up a bunch of compiler warnings and errors, which \nrevealed a number of existing misuses. These are fixed in patch 0001. \nThese include\n\n- using FooGetDatum on things that are already Datum,\n\n- using DatumGetPointer on things that are already pointers,\n\n- using PG_RETURN_TYPE on things that are Datum,\n\n- using PG_RETURN_TYPE of the wrong type,\n\nand others, including my personal favorite:\n\n- using PointerGetDatum where DatumGetPointer should be used.\n\n(AFAICT, unlike my initial example, I don't think any of those would \ncause wrong behavior.)",
"msg_date": "Sun, 28 Aug 2022 17:55:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi Peter,\n\n> To address this, I converted these macros to inline functions\n\nThis is a great change!\n\nI encountered passing the wrong arguments to these macros many times,\nand this is indeed pretty annoying. I wish we could forbid doing other\nstupid things as well, e.g. comparing two Datum's directly, which for\nTimestamps works just fine but only on 64-bit platforms. Although this\nis certainly out of scope of this thread.\n\nThe patch looks good to me, I merely added a link to the discussion. I\nadded it to the CF application. Cfbot is making its mind at the moment\nof writing.\n\nDo you think this should be backported?\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 30 Aug 2022 16:36:11 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Do you think this should be backported?\n\nImpossible, it's an ABI break.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 09:46:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi Tom,\n\n> Do you think this should be backported?\n>> Impossible, it's an ABI break.\n\nOK, got it.\n\nJust to clarify, a break in this case is going to be the fact that we\nare adding new functions, although inlined, correct? Or maybe\nsomething else? I'm sorry this is the first time I encounter the\nquestion of ABI compatibility in the context of Postgres, so I would\nappreciate it if you could elaborate a bit.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Aug 2022 16:59:45 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Just to clarify, a break in this case is going to be the fact that we\n> are adding new functions, although inlined, correct? Or maybe\n> something else? I'm sorry this is the first time I encounter the\n> question of ABI compatibility in the context of Postgres, so I would\n> appreciate it if you could elaborate a bit.\n\nAfter absorbing a bit more caffeine, I suppose that replacing a\nmacro with a \"static inline\" function would not be an ABI break,\nat least not with most modern compilers, because the code should\nend up the same. I'd still vote against back-patching though.\nI don't think the risk-reward ratio is good, especially not for\nthe pre-C99 branches which don't necessarily have \"inline\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:13:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 10:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Aleksander Alekseev <aleksander@timescale.com> writes:\n> > Just to clarify, a break in this case is going to be the fact that we\n> > are adding new functions, although inlined, correct? Or maybe\n> > something else? I'm sorry this is the first time I encounter the\n> > question of ABI compatibility in the context of Postgres, so I would\n> > appreciate it if you could elaborate a bit.\n>\n> After absorbing a bit more caffeine, I suppose that replacing a\n> macro with a \"static inline\" function would not be an ABI break,\n> at least not with most modern compilers, because the code should\n> end up the same. I'd still vote against back-patching though.\n> I don't think the risk-reward ratio is good, especially not for\n> the pre-C99 branches which don't necessarily have \"inline\".\n\nYeah, I don't see a reason to back-patch a change like this, certainly\nnot right away. If over time it turns out that the different\ndefinitions on different branches cause too many headaches, we could\nreconsider. However, I'm not sure that will happen, because the whole\npoint is that the static inline functions are intended to behave in\nthe same way as the macros, just with better type-checking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:16:45 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Tom, Robert,\n\n> Yeah, I don't see a reason to back-patch a change like this\n\nMaybe we should consider backporting at least 0001 patch, partially\nperhaps? I believe if fixes pretty cursed pieces of code, e.g:\n\n```\n pg_cryptohash_ctx *context =\n- (pg_cryptohash_ctx *) PointerGetDatum(foundres);\n+ (pg_cryptohash_ctx *) DatumGetPointer(foundres);\n```\n\nThis being said, personally I don't have a strong opinion here. After\nall, the code works and passes the tests. Maybe I'm just being a\nperfectionist here.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:25:32 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On Tue, Aug 30, 2022 at 10:25 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > Yeah, I don't see a reason to back-patch a change like this\n>\n> Maybe we should consider backporting at least 0001 patch, partially\n> perhaps? I believe if fixes pretty cursed pieces of code, e.g:\n>\n> ```\n> pg_cryptohash_ctx *context =\n> - (pg_cryptohash_ctx *) PointerGetDatum(foundres);\n> + (pg_cryptohash_ctx *) DatumGetPointer(foundres);\n> ```\n\nSure, back-porting the bug fixes would make sense to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:27:48 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n> Maybe we should consider backporting at least 0001 patch, partially\n> perhaps? I believe if fixes pretty cursed pieces of code, e.g:\n\nCertainly if there are any parts of it that fix actual bugs,\nwe ought to backport those. I'm not in a big hurry to backport\ncosmetic fixes though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 30 Aug 2022 10:28:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi hackers,\n\n> Cfbot is making its mind at the moment of writing.\n\nHere is v3 with silenced compiler warnings.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 30 Aug 2022 17:52:51 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi hackers,\n\n> Here is v3 with silenced compiler warnings.\n\nSome more warnings were reported by cfbot, so here is v4. Apologies\nfor the noise.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Tue, 30 Aug 2022 21:15:40 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On 30.08.22 20:15, Aleksander Alekseev wrote:\n>> Here is v3 with silenced compiler warnings.\n> \n> Some more warnings were reported by cfbot, so here is v4. Apologies\n> for the noise.\n\nLooking at these warnings you are fixing, I think there is a small \nproblem we need to address.\n\nI have defined PointerGetDatum() with a const argument:\n\nPointerGetDatum(const void *X)\n\nThis is because in some places the thing that is being passed into that \nis itself defined as const, so this is the clean way to avoid warnings \nabout dropping constness.\n\nHowever, some support functions for gist and text search pass back \nreturn values via pointer arguments, like\n\n DirectFunctionCall3(g_int_same,\n entry->key,\n PointerGetDatum(query),\n PointerGetDatum(&retval));\n\nThe compiler you are using apparently thinks that passing &retval to a \nconst pointer argument cannot change retval, which seems quite \nreasonable. But that isn't actually what's happening here, so we're \nlying a bit.\n\n(Which compiler is that, by the way?)\n\nI think to resolve that we could either\n\n1. Not define PointerGetDatum() with a const argument, and just sprinkle \nin a few unconstify calls where necessary.\n\n2. Maybe add a NonconstPointerGetDatum() for those few cases where \npointer arguments are used for return values.\n\n3. Go with your patch and just fix up the warnings about uninitialized \nvariables. But that seems the least principled to me.\n\n\n",
"msg_date": "Mon, 5 Sep 2022 14:57:10 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi Peter,\n\n> Which compiler is that, by the way?\n\nThe warnings were reported by cfbot during the \"clang_warning\" step.\nAccording to the logs:\n\n```\nusing compiler=Debian clang version 11.0.1-2\n```\n\nPersonally I use Clang 14 on MacOS and I don't get these warnings.\n\n> I think to resolve that we could either\n>\n> 1. Not define PointerGetDatum() with a const argument, and just sprinkle\n> in a few unconstify calls where necessary.\n>\n> 2. Maybe add a NonconstPointerGetDatum() for those few cases where\n> pointer arguments are used for return values.\n>\n> 3. Go with your patch and just fix up the warnings about uninitialized\n> variables. But that seems the least principled to me.\n\nIMO the 3rd option is the lesser evil. Initializing four bools/ints in\norder to make Clang 11 happy doesn't strike me as such a big deal. At\nleast until somebody reports a bottleneck for this particular reason.\nWe can optimize the code when and if this will happen.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Mon, 5 Sep 2022 17:20:03 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi Peter,\n\n> > 3. Go with your patch and just fix up the warnings about uninitialized\n> > variables. But that seems the least principled to me.\n>\n> IMO the 3rd option is the lesser evil. Initializing four bools/ints in\n> order to make Clang 11 happy doesn't strike me as such a big deal. At\n> least until somebody reports a bottleneck for this particular reason.\n> We can optimize the code when and if this will happen.\n\nSince the first patch was applied, cfbot now complains that it can't\napply the patchset. Here is the rebased version.\n\n-- \nBest regards,\nAleksander Alekseev",
"msg_date": "Thu, 8 Sep 2022 12:26:06 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On 08.09.22 11:26, Aleksander Alekseev wrote:\n>>> 3. Go with your patch and just fix up the warnings about uninitialized\n>>> variables. But that seems the least principled to me.\n>>\n>> IMO the 3rd option is the lesser evil. Initializing four bools/ints in\n>> order to make Clang 11 happy doesn't strike me as such a big deal. At\n>> least until somebody reports a bottleneck for this particular reason.\n>> We can optimize the code when and if this will happen.\n> \n> Since the first patch was applied, cfbot now complains that it can't\n> apply the patchset. Here is the rebased version.\n\ncommitted, thanks\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 17:59:09 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Hi,\n\nOn Mon, Sep 12, 2022 at 05:59:09PM +0200, Peter Eisentraut wrote:\n>\n> committed, thanks\n\nFTR lapwing is complaining about this commit:\nhttps://brekka.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-12%2016%3A40%3A18.\n\nSnapper is also failing with similar problems:\nhttps://brekka.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2022-09-12%2016%3A42%3A10\n\n\n",
"msg_date": "Tue, 13 Sep 2022 01:03:14 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On 12.09.22 19:03, Julien Rouhaud wrote:\n> On Mon, Sep 12, 2022 at 05:59:09PM +0200, Peter Eisentraut wrote:\n>>\n>> committed, thanks\n> \n> FTR lapwing is complaining about this commit:\n> https://brekka.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-12%2016%3A40%3A18.\n> \n> Snapper is also failing with similar problems:\n> https://brekka.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2022-09-12%2016%3A42%3A10\n\nOk, it has problems with 32-bit platforms. I can reproduce it locally. \nI'll need to take another look at this. I have reverted the patch for now.\n\n\n\n",
"msg_date": "Mon, 12 Sep 2022 19:59:36 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On 12.09.22 19:59, Peter Eisentraut wrote:\n> On 12.09.22 19:03, Julien Rouhaud wrote:\n>> On Mon, Sep 12, 2022 at 05:59:09PM +0200, Peter Eisentraut wrote:\n>>>\n>>> committed, thanks\n>>\n>> FTR lapwing is complaining about this commit:\n>> https://brekka.postgresql.org/cgi-bin/show_log.pl?nm=lapwing&dt=2022-09-12%2016%3A40%3A18.\n>>\n>> Snapper is also failing with similar problems:\n>> https://brekka.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2022-09-12%2016%3A42%3A10\n> \n> Ok, it has problems with 32-bit platforms. I can reproduce it locally. \n> I'll need to take another look at this. I have reverted the patch for now.\n\nI have tried to analyze these issues, but I'm quite stuck. If anyone \nelse has any ideas, it would be helpful.\n\n\n\n",
"msg_date": "Mon, 26 Sep 2022 16:55:22 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Ok, it has problems with 32-bit platforms. I can reproduce it locally. \n>> I'll need to take another look at this. I have reverted the patch for now.\n\n> I have tried to analyze these issues, but I'm quite stuck. If anyone \n> else has any ideas, it would be helpful.\n\nIt looks to me like the problem is with the rewrite of Int64GetDatumFast\nand Float8GetDatumFast:\n\n+static inline Datum\n+Int64GetDatumFast(int64 X)\n+{\n+#ifdef USE_FLOAT8_BYVAL\n+\treturn Int64GetDatum(X);\n+#else\n+\treturn PointerGetDatum(&X);\n+#endif\n+}\n\nIn the by-ref code path, this is going to return the address of the\nparameter local variable, which of course is broken as soon as the\nfunction exits. To test, I reverted the mods to those two macros,\nand I got through check-world OK in a 32-bit VM.\n\nI think we can do this while still having reasonable type-safety\nby adding AssertVariableIsOfTypeMacro() checks to the macros.\nAn advantage of that solution is that we verify that the code\nwill be safe for a 32-bit build even in 64-bit builds. (Of\ncourse, it's just checking the variable's type not its lifespan,\nbut this is still a step forward.)\n\n0001 attached is what you committed, 0002 is a proposed delta\nto fix the Fast macros.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 26 Sep 2022 13:34:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "On 26.09.22 19:34, Tom Lane wrote:\n> I think we can do this while still having reasonable type-safety\n> by adding AssertVariableIsOfTypeMacro() checks to the macros.\n> An advantage of that solution is that we verify that the code\n> will be safe for a 32-bit build even in 64-bit builds. (Of\n> course, it's just checking the variable's type not its lifespan,\n> but this is still a step forward.)\n> \n> 0001 attached is what you committed, 0002 is a proposed delta\n> to fix the Fast macros.\n\nThanks, I committed it like that.\n\n(I had looked into AssertVariableIsOfTypeMacro() for an earlier variant \nof this patch, before I had the idea with the inline functions. It's in \ngeneral a bit too strict, such as with short vs int, and signed vs \nunsigned, but it should work ok for this limited set of uses.)\n\n\n\n",
"msg_date": "Tue, 27 Sep 2022 21:26:20 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 26.09.22 19:34, Tom Lane wrote:\n>> I think we can do this while still having reasonable type-safety\n>> by adding AssertVariableIsOfTypeMacro() checks to the macros.\n\n> (I had looked into AssertVariableIsOfTypeMacro() for an earlier variant \n> of this patch, before I had the idea with the inline functions. It's in \n> general a bit too strict, such as with short vs int, and signed vs \n> unsigned, but it should work ok for this limited set of uses.)\n\nYeah. I had sort of expected to need a UInt64GetDatumFast variant\nthat would accept uint64, but there doesn't appear to be anyplace\nthat wants that today. We should be willing to add it if anyone\ncomplains, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Sep 2022 16:13:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Convert *GetDatum() and DatumGet*() macros to inline functions"
}
] |
[
{
"msg_contents": "I noticed that the pg_upgrade check_ functions were determining failures found\nin a few different ways. Some keep a boolen flag variable, and some (like\ncheck_for_incompatible_polymorphics) check the state of the script filehandle\nwhich is guaranteed to be set (with the error message referring to the path of\nsaid file). Others like check_loadable_libraries only check the flag variable\nand fclose the handle assuming it was opened.\n\nThe attached diff changes the functions to do it consistently in one way, by\nchecking the state of the filehandle. Since we are referring to the file by\npath in the printed error message it seemed the cleanest approach, and it saves\na few lines of code without IMO reducing readability.\n\nThere is no change in functionality, just code consistency.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/",
"msg_date": "Sun, 28 Aug 2022 22:42:24 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Slight refactoring of state check in pg_upgrade check_ function"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 10:42:24PM +0200, Daniel Gustafsson wrote:\n> I noticed that the pg_upgrade check_ functions were determining failures found\n> in a few different ways. Some keep a boolen flag variable, and some (like\n> check_for_incompatible_polymorphics) check the state of the script filehandle\n> which is guaranteed to be set (with the error message referring to the path of\n> said file). Others like check_loadable_libraries only check the flag variable\n> and fclose the handle assuming it was opened.\n> \n> The attached diff changes the functions to do it consistently in one way, by\n> checking the state of the filehandle. Since we are referring to the file by\n> path in the printed error message it seemed the cleanest approach, and it saves\n> a few lines of code without IMO reducing readability.\n> \n> There is no change in functionality, just code consistency.\n\nThe patch looks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 28 Aug 2022 15:06:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Slight refactoring of state check in pg_upgrade check_ function"
},
{
"msg_contents": "On Sun, Aug 28, 2022 at 03:06:09PM -0700, Nathan Bossart wrote:\n> On Sun, Aug 28, 2022 at 10:42:24PM +0200, Daniel Gustafsson wrote:\n> > I noticed that the pg_upgrade check_ functions were determining failures found\n> > in a few different ways. Some keep a boolen flag variable, and some (like\n> > check_for_incompatible_polymorphics) check the state of the script filehandle\n> > which is guaranteed to be set (with the error message referring to the path of\n> > said file). Others like check_loadable_libraries only check the flag variable\n> > and fclose the handle assuming it was opened.\n> > \n> > The attached diff changes the functions to do it consistently in one way, by\n> > checking the state of the filehandle. Since we are referring to the file by\n> > path in the printed error message it seemed the cleanest approach, and it saves\n> > a few lines of code without IMO reducing readability.\n> > \n> > There is no change in functionality, just code consistency.\n> \n> The patch looks reasonable to me.\n\n+1. Those checks have accumulated over time with different authors,\nhence the stylistic differences.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n",
"msg_date": "Tue, 30 Aug 2022 17:08:02 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Slight refactoring of state check in pg_upgrade check_ function"
},
{
"msg_contents": "> On 30 Aug 2022, at 23:08, Bruce Momjian <bruce@momjian.us> wrote:\n> On Sun, Aug 28, 2022 at 03:06:09PM -0700, Nathan Bossart wrote:\n\n>> The patch looks reasonable to me.\n> \n> +1. Those checks have accumulated over time with different authors,\n> hence the stylistic differences.\n\nPushed, thanks for review!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Wed, 31 Aug 2022 15:10:47 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": true,
"msg_subject": "Re: Slight refactoring of state check in pg_upgrade check_ function"
}
] |