[ { "msg_contents": "Hi,\n\nI asked this question in the Postgres Slack, and was recommended to ask\nhere instead.\n\nA few times, I've been in a situation where I want to join a table to\nitself on its primary key. That typically happens because I have some kind\nof summary view, which I then want to join to the original table (using its\nprimary key) to flesh out the summary data with other columns. That's\nexecuted as a join, which surprised me. But in this case, I could extend\nthe view to have all of the columns of the original table to avoid the join.\n\nBut there's another case that's harder to solve this way: combining views\ntogether. Here's a trivial example:\n\nCREATE TABLE users (id BIGINT PRIMARY KEY, name VARCHAR);\nCREATE VIEW only_some_users AS (SELECT * FROM users WHERE id < 10);\nCREATE VIEW some_other_users AS (SELECT * FROM users WHERE id > 3);\n\nEXPLAIN SELECT * FROM only_some_users\nINNER JOIN some_other_users ON only_some_users.id = some_other_users.id;\n\nHash Join  (cost=29.23..43.32 rows=90 width=144)\n  Hash Cond: (users.id = users_1.id)\n  ->  Bitmap Heap Scan on users  (cost=6.24..19.62 rows=270 width=72)\n        Recheck Cond: (id < 10)\n        ->  Bitmap Index Scan on users_pkey  (cost=0.00..6.18 rows=270 width=0)\n              Index Cond: (id < 10)\n  ->  Hash  (cost=19.62..19.62 rows=270 width=72)\n        ->  Bitmap Heap Scan on users users_1  (cost=6.24..19.62 rows=270 width=72)\n              Recheck Cond: (id > 3)\n              ->  Bitmap Index Scan on users_pkey  (cost=0.00..6.18 rows=270 width=0)\n                    Index Cond: (id > 3)\n\nIs there a reason why Postgres doesn't have an optimisation built in to\noptimise this JOIN? What I'm imagining is that a join between two aliases\nfor the same table on its primary key could be optimised by treating them\nas the same table. I think the same would be true for self-joins on any\nnon-null columns covered by a uniqueness constraint.\n\nIf this is considered a desirable change, I'd be keen to work on it (with\nsome guidance).\n\nThanks,\n\nHywel", "msg_date": "Thu, 11 Mar 2021 14:06:23 +0000", "msg_from": "Hywel Carver <hywel@skillerwhale.com>", "msg_from_op": true, "msg_subject": "Self-join optimisation" },
{ "msg_contents": "On Thu, 11 Mar 2021 at 15:15, Hywel Carver <hywel@skillerwhale.com> wrote:\n>\n> Hi,\n>\n> I asked this question in the Postgres Slack, and was recommended to ask here instead.\n>\n> A few times, I've been in a situation where I want to join a table to itself on its primary key. That typically happens because I have some kind of summary view, which I then want to join to the original table (using its primary key) to flesh out the summary data with other columns. That's executed as a join, which surprised me. But in this case, I could extend the view to have all of the columns of the original table to avoid the join.\n>\n> But there's another case that's harder to solve this way: combining views together. 
Here's a trivial example:\n>\n> CREATE TABLE users (id BIGINT PRIMARY KEY, varchar name);\n> CREATE VIEW only_some_users AS (SELECT * FROM users WHERE id < 10);\n> CREATE VIEW some_other_users AS (SELECT * FROM users WHERE id > 3);\n>\n> EXPLAIN SELECT * FROM only_some_users\n> INNER JOIN some_other_users ON only_some_users.id = some_other_users.id;\n>\n> Hash Join (cost=29.23..43.32 rows=90 width=144)\n> Hash Cond: (users.id = users_1.id)\n> -> Bitmap Heap Scan on users (cost=6.24..19.62 rows=270 width=72)\n> Recheck Cond: (id < 10)\n> -> Bitmap Index Scan on users_pkey (cost=0.00..6.18 rows=270 width=0)\n> Index Cond: (id < 10)\n> -> Hash (cost=19.62..19.62 rows=270 width=72)\n> -> Bitmap Heap Scan on users users_1 (cost=6.24..19.62 rows=270 width=72)\n> Recheck Cond: (id > 3)\n> -> Bitmap Index Scan on users_pkey (cost=0.00..6.18 rows=270 width=0)\n> Index Cond: (id > 3)\n>\n> Is there a reason why Postgres doesn't have an optimisation built in to optimise this JOIN? What I'm imagining is that a join between two aliases for the same table on its primary key could be optimised by treating them as the same table. I think the same would be true for self-joins on any non-null columns covered by a uniqueness constraint.\n>\n> If this is considered a desirable change, I'd be keen to work on it (with some guidance).\n\nThere's currently a patch registered in the commitfest that could fix\nthis for you, called \"Remove self join on a unique column\" [0].\n\n\nWith regards,\n\nMatthias van de Meent\n\n[0] https://commitfest.postgresql.org/31/1712/, thread at\nhttps://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3@postgrespro.ru\n\n\n", "msg_date": "Thu, 11 Mar 2021 15:32:16 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Self-join optimisation" }, { "msg_contents": "Great! It looks like it's been in commitfest status for a few years. 
Is\nthere anything someone like me (outside the pgsql-hackers community) can do\nto help it get reviewed this time around?\n\nOn Thu, Mar 11, 2021 at 2:32 PM Matthias van de Meent <\nboekewurm+postgres@gmail.com> wrote:\n\n> On Thu, 11 Mar 2021 at 15:15, Hywel Carver <hywel@skillerwhale.com> wrote:\n> >\n> > Hi,\n> >\n> > I asked this question in the Postgres Slack, and was recommended to ask\n> here instead.\n> >\n> > A few times, I've been in a situation where I want to join a table to\n> itself on its primary key. That typically happens because I have some kind\n> of summary view, which I then want to join to the original table (using its\n> primary key) to flesh out the summary data with other columns. That's\n> executed as a join, which surprised me. But in this case, I could extend\n> the view to have all of the columns of the original table to avoid the join.\n> >\n> > But there's another case that's harder to solve this way: combining\n> views together. Here's a trivial example:\n> >\n> > CREATE TABLE users (id BIGINT PRIMARY KEY, name VARCHAR);\n> > CREATE VIEW only_some_users AS (SELECT * FROM users WHERE id < 10);\n> > CREATE VIEW some_other_users AS (SELECT * FROM users WHERE id > 3);\n> >\n> > EXPLAIN SELECT * FROM only_some_users\n> > INNER JOIN some_other_users ON only_some_users.id = some_other_users.id;\n> >\n> > Hash Join (cost=29.23..43.32 rows=90 width=144)\n> > Hash Cond: (users.id = users_1.id)\n> > -> Bitmap Heap Scan on users (cost=6.24..19.62 rows=270 width=72)\n> > Recheck Cond: (id < 10)\n> > -> Bitmap Index Scan on users_pkey (cost=0.00..6.18 rows=270\n> width=0)\n> > Index Cond: (id < 10)\n> > -> Hash (cost=19.62..19.62 rows=270 width=72)\n> > -> Bitmap Heap Scan on users users_1 (cost=6.24..19.62\n> rows=270 width=72)\n> > Recheck Cond: (id > 3)\n> > -> Bitmap Index Scan on users_pkey (cost=0.00..6.18\n> rows=270 width=0)\n> > Index Cond: (id > 3)\n> >\n> > Is there a reason why Postgres doesn't have an optimisation built in to\n> optimise this JOIN? What I'm imagining is that a join between two aliases\n> for the same table on its primary key could be optimised by treating them\n> as the same table. I think the same would be true for self-joins on any\n> non-null columns covered by a uniqueness constraint.\n> >\n> > If this is considered a desirable change, I'd be keen to work on it\n> (with some guidance).\n>\n> There's currently a patch registered in the commitfest that could fix\n> this for you, called \"Remove self join on a unique column\" [0].\n>\n>\n> With regards,\n>\n> Matthias van de Meent\n>\n> [0] https://commitfest.postgresql.org/31/1712/, thread at\n>\n> https://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3@postgrespro.ru\n>", "msg_date": "Thu, 11 Mar 2021 14:39:55 +0000", "msg_from": "Hywel Carver <hywel@skillerwhale.com>", "msg_from_op": true, "msg_subject": "Re: Self-join optimisation" },
{ "msg_contents": "On Thu, Mar 11, 2021 at 03:32:16PM +0100, Matthias van de Meent wrote:\n> On Thu, 11 Mar 2021 at 15:15, Hywel Carver <hywel@skillerwhale.com> wrote:\n> > I asked this question in the Postgres Slack, and was recommended to ask here instead.\n> >\n> > A few times, I've been in a situation where I want to join a table to itself on its primary key. That typically happens because I have some kind of summary view, which I then want to join to the original table (using its primary key) to flesh out the summary data with other columns. That's executed as a join, which surprised me. But in this case, I could extend the view to have all of the columns of the original table to avoid the join.\n> >\n> > But there's another case that's harder to solve this way: combining views together. 
Here's a trivial example:\n> >\n> > CREATE TABLE users (id BIGINT PRIMARY KEY, varchar name);\n> > CREATE VIEW only_some_users AS (SELECT * FROM users WHERE id < 10);\n> > CREATE VIEW some_other_users AS (SELECT * FROM users WHERE id > 3);\n> >\n> > EXPLAIN SELECT * FROM only_some_users\n> > INNER JOIN some_other_users ON only_some_users.id = some_other_users.id;\n> >\n> > Hash Join (cost=29.23..43.32 rows=90 width=144)\n> > Hash Cond: (users.id = users_1.id)\n> > -> Bitmap Heap Scan on users (cost=6.24..19.62 rows=270 width=72)\n> > Recheck Cond: (id < 10)\n> > -> Bitmap Index Scan on users_pkey (cost=0.00..6.18 rows=270 width=0)\n> > Index Cond: (id < 10)\n> > -> Hash (cost=19.62..19.62 rows=270 width=72)\n> > -> Bitmap Heap Scan on users users_1 (cost=6.24..19.62 rows=270 width=72)\n> > Recheck Cond: (id > 3)\n> > -> Bitmap Index Scan on users_pkey (cost=0.00..6.18 rows=270 width=0)\n> > Index Cond: (id > 3)\n> >\n> > Is there a reason why Postgres doesn't have an optimisation built in to optimise this JOIN? What I'm imagining is that a join between two aliases for the same table on its primary key could be optimised by treating them as the same table. I think the same would be true for self-joins on any non-null columns covered by a uniqueness constraint.\n> >\n> > If this is considered a desirable change, I'd be keen to work on it (with some guidance).\n> \n> There's currently a patch registered in the commitfest that could fix\n> this for you, called \"Remove self join on a unique column\" [0].\n\nMaybe you'd want to test the patch and send a review.\nhttps://commitfest.postgresql.org/32/1712/\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 11 Mar 2021 09:30:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Self-join optimisation" }, { "msg_contents": "On 3/11/21 3:39 PM, Hywel Carver wrote:\n> Great! It looks like it's been in commitfest status for a few years. 
Is\n> there anything someone like me (outside the pgsql-hackers community) can\n> do to help it get reviewed this time around?\n> \n\nWell, you could do a review, or at least test it with the queries your\napplication is actually running. And explain why your application is\ndoing queries like this, and why it can't be changed to not\ngenerate such queries.\n\nThe first couple of messages from the patch thread [1] (particularly the\nmessages from May 2018) are a good explanation of why patches like this are\ntricky to get through.\n\nThe basic assumption is that such queries are a bit silly, and it'd\nprobably be easier to modify the application not to generate them instead\nof undoing the harm in the database planner. The problem is this makes\nthe planner more expensive for everyone, including people who carefully\nwrite \"good\" queries.\n\n\nAnd we don't want to do that, so we need to find a way to make this\noptimization very cheap (essentially \"free\" if not applicable), but\nthat's difficult because there may be cases where the self-joins are\nintentional, and undoing them would make the query slower. And making a\ngood decision requires enough information, but this decision needs to\nhappen quite early in the planning, when we have very little info.\n\nSo while it seems like a simple optimization, it's actually quite tricky\nto get right.\n\n(Of course, there are cases where you may get such queries even if you\ntry writing good SQL, say when joining views etc.)\n\nregards\n\n[1]\nhttps://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3@postgrespro.ru\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 11 Mar 2021 23:50:05 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Self-join optimisation" }, { "msg_contents": "Hi, thanks for your replies. 
I've tested the patch and it works for me in\nthe cases where I'd use it.\n\n> And explain why your application is doing queries like this, and why it\n> can't be changed to not generate such queries.\n\nReading the thread, it looks like some of the requests for this feature are\ncoming from people using ORMs that generate bad queries. That's not been my\nexperience - I've always been able to find a way to construct the right\nquery through the ORM or just write correct SQL. When I've wanted this\nfeature, it has always been in relation to combining views.\n\nFor example, I was recently helping out a company that runs a booking\nsystem for leisure activities, and their database has a view for how many\nstaff are available on a given day to supervise a given activity (e.g.\nsurfing), and a view for how much equipment is available on a given day\n(e.g. how many surfboards). They also have a view for the equipment\nrequirements for a given activity (e.g. some boat trips require a minimum\nof 2 boats and 4 oars). When they want to make bookings, they have to\ncombine data from these views, and the tables that create them.\n\nIt would definitely be possible to write one view that had all of this data\nin (and maintain the other views too, which are needed elsewhere in the\nsite). And it could be made wide to have all of the columns from the source\ntables. But it would, to me, feel like much better code to keep the\nseparate decomposed views and join them together for the booking query.\nRight now, that query's performance suffers in a way that this feature\nwould fix. So the current choices are: accept worse performance with\ndecomposed views, write one very large and performant view but duplicate\nsome of the logic, or use their ORM to generate the SQL that they'd\nnormally put in a view.\n\nOn Thu, Mar 11, 2021 at 10:50 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 3/11/21 3:39 PM, Hywel Carver wrote:\n> > Great! It looks like it's been in commitfest status for a few years. Is\n> > there anything someone like me (outside the pgsql-hackers community) can\n> > do to help it get reviewed this time around?\n> >\n>\n> Well, you could do a review, or at least test it with the queries your\n> application is actually running. And explain why your application is\n> doing queries like this, and why it can't be changed to not\n> generate such queries.\n>\n> The first couple of messages from the patch thread [1] (particularly the\n> messages from May 2018) are a good explanation of why patches like this are\n> tricky to get through.\n>\n> The basic assumption is that such queries are a bit silly, and it'd\n> probably be easier to modify the application not to generate them instead\n> of undoing the harm in the database planner. The problem is this makes\n> the planner more expensive for everyone, including people who carefully\n> write \"good\" queries.\n>\n>\n> And we don't want to do that, so we need to find a way to make this\n> optimization very cheap (essentially \"free\" if not applicable), but\n> that's difficult because there may be cases where the self-joins are\n> intentional, and undoing them would make the query slower. And making a\n> good decision requires enough information, but this decision needs to\n> happen quite early in the planning, when we have very little info.\n>\n> So while it seems like a simple optimization, it's actually quite tricky\n> to get right.\n>\n> (Of course, there are cases where you may get such queries even if you\n> try writing good SQL, say when joining views etc.)\n>\n> regards\n>\n> [1]\n>\n> https://www.postgresql.org/message-id/flat/64486b0b-0404-e39e-322d-0801154901f3@postgrespro.ru\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>", "msg_date": "Fri, 12 Mar 2021 09:20:18 +0000", "msg_from": "Hywel Carver <hywel@skillerwhale.com>", "msg_from_op": true, "msg_subject": "Re: Self-join optimisation" } ]
[ { "msg_contents": "Hi hackers,\n\nWhen we enable hot standby, HotStandbyActive() returns true on hot standby.\nThen, when we promote the hot standby, the SHM variable `XLogCtl->SharedHotStandbyActive`\nremains true. So, HotStandbyActive() still returns true until the next call of\n`XLOGShmemInit()` even if the data node was promoted.\n`XLogWalRcvSendHSFeedback()` is the only caller of HotStandbyActive(),\nand it's probably not covered by the test cases.\n\nIs it the expected behavior or a bug in postgres? Probably a bug.\nI don't have much knowledge of hot standby; a simple fix might be\nto set XLogCtl->SharedHotStandbyActive to false when\nthe recovery process almost finishes. See the attachment.\n\nRegards,\nHao Wu", "msg_date": "Fri, 12 Mar 2021 02:14:42 +0000", "msg_from": "Hao Wu <hawu@vmware.com>", "msg_from_op": true, "msg_subject": "HotStandbyActive() issue in postgres" },
{ "msg_contents": "\n\nOn 2021/03/12 11:14, Hao Wu wrote:\n> Hi hackers,\n> \n> When we enable hot standby, HotStandbyActive() returns true on hot standby.\n> Then, when we promote the hot standby, the SHM variable `XLogCtl->SharedHotStandbyActive`\n> remains true. So, HotStandbyActive() still returns true until the next call of\n> `XLOGShmemInit()` even if the data node was promoted.\n> `XLogWalRcvSendHSFeedback()` is the only caller of HotStandbyActive(),\n> and it's probably not covered by the test cases.\n> \n> Is it the expected behavior or a bug in postgres? Probably a bug.\n> I don't have much knowledge of hot standby; a simple fix might be\n> to set XLogCtl->SharedHotStandbyActive to false when\n> the recovery process almost finishes. See the attachment.\n\nSo if walreceiver is the only user of HotStandbyActive(), that means there is no user of it after recovery finishes, because walreceiver exits at the end of recovery? If this understanding is right, ISTM that HotStandbyActive() doesn't need to return false after recovery finishes because there is no user of it. 
No?\n\nOr are you implementing something that uses HotStandbyActive(), and so want it to return false after the recovery?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 12 Mar 2021 17:58:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: HotStandbyActive() issue in postgres" },
{ "msg_contents": "Yes, I have an extension/UDF that needs to know if the server is currently\nrunning as a hot standby. For example, a UDF foo() wants to run on\nboth the primary and secondary and runs different behaviors for different\nroles.\nA promoted secondary looks the same as the primary since it's no longer\na real hot standby. Does that make sense?\n\nRegards,\nHao Wu", "msg_date": "Tue, 16 Mar 2021 03:24:41 +0000", "msg_from": "Hao Wu <hawu@vmware.com>", "msg_from_op": true, "msg_subject": "Re: HotStandbyActive() issue in postgres" },
{ "msg_contents": "\n\nOn 2021/03/16 12:24, Hao Wu wrote:\n> Yes, I have an extension/UDF that needs to know if the server is currently\n> running as a hot standby. For example, a UDF foo() wants to run on\n> both the primary and secondary and runs different behaviors for different\n> roles.\n> A promoted secondary looks the same as the primary since it's no longer\n> a real hot standby. Does that make sense?\n\nFor a UDF, is using RecoveryInProgress() enough? If RecoveryInProgress() returns\nfalse, UDF can determine that the server is working as a primary. 
If true is\nreturned, UDF can determine that the server is working as a secondary and\nhot standby is active because you can connect to the server and UDF can be\ncalled. If hot standby is not active during recovery, you cannot connect to\nthe server and cannot run UDF, so UDF doesn't need to handle the case where\nhot standby is not active during recovery. Thoughts?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 16 Mar 2021 17:44:55 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: HotStandbyActive() issue in postgres" } ]
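The suggestion above has a SQL-level counterpart: `pg_is_in_recovery()` exposes the same state as the C function RecoveryInProgress(). A minimal PL/pgSQL sketch (the function name `foo` follows the example in the thread; the message strings are illustrative):

```sql
-- pg_is_in_recovery() returns true while the server is still in
-- recovery (i.e. acting as a standby) and false once it has been
-- promoted, which is exactly the distinction discussed above.
CREATE OR REPLACE FUNCTION foo() RETURNS text AS $$
BEGIN
    IF pg_is_in_recovery() THEN
        RETURN 'running as a hot standby';
    ELSE
        RETURN 'running on the primary';
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT foo();
```

As the reply notes, a UDF can only run at all while hot standby is active or after promotion, so this two-way check covers every state in which the function can be called.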
[ { "msg_contents": "Hi, hackers\r\n\r\n\r\nDue to configure with parameter --enable-cassert, the debug_assertions is on by default, as follows:\r\npostgres=# show debug_assertions;\r\ndebug_assertions\r\n-------------------\r\non\r\n\r\n\r\nBecause of pgbench performance testing, I need to disable the assert function. Following the doc below, I tried to set plpgsql.check_asserts to off to disable the assert function.\r\nhttps://www.postgresql.org/docs/13/plpgsql-errors-and-messages.html\r\n\r\n\r\nHowever, it prompts the following error, not sure if I missed something, any thoughts about it?\r\npostgres=# alter system set plpgsql.check_asserts = off;\r\nERROR: unrecognized configuration parameter \"plpgsql.check_asserts\"\r\n\r\n\r\nenv:\r\nPG: 13.2\r\nOS: redhat 7.4 3.10.0-693.17.1.e17.x86_64\r\nconfigure parameter: --enable-coverage --enable-tap-tests --enable-cassert --enable-debug --enable-nls --with-perl --with-python --with-tcl --with-openssl --with-ldap --with-libxml --with-libxslt --with-uuid=e2fs --with-segsize=10 --with-wal-blocksize=16 --with-llvm LLVM_CONFIG=xxx CLANG=xxx\r\n\r\n\r\nthanks\r\nwalker", "msg_date": "Fri, 12 Mar 2021 18:53:26 +0800", "msg_from": "\"=?ISO-8859-1?B?V2Fsa2Vy?=\" <failaway@qq.com>", "msg_from_op": true, "msg_subject": "unrecognized configuration parameter \"plpgsql.check_asserts\"" },
{ "msg_contents": "pá 12. 3. 2021 v 11:54 odesílatel Walker <failaway@qq.com> napsal:\n\n> Hi, hackers\n>\n> Due to configure with parameter --enable-cassert, the debug_assertions is\n> on by default, as follows:\n> postgres=# show debug_assertions;\n> debug_assertions\n> -------------------\n> on\n>\n> Because of pgbench performance testing, I need to disable the assert\n> function. Following the doc below, I tried to set plpgsql.check_asserts to\n> off to disable the assert function.\n> https://www.postgresql.org/docs/13/plpgsql-errors-and-messages.html\n>\n> However, it prompts the following error, not sure if I missed something,\n> any thoughts about it?\n> postgres=# alter system set plpgsql.check_asserts = off;\n> ERROR: unrecognized configuration parameter \"plpgsql.check_asserts\"\n>\n\nyou cannot disable debug_assertions. It is possible only via configure and\nmake\n\nplpgsql.check_asserts controls evaluation of the plpgsql statement ASSERT\n\nPavel\n\n\n\n> env:\n> PG: 13.2\n> OS: redhat 7.4 3.10.0-693.17.1.e17.x86_64\n> configure parameter: --enable-coverage --enable-tap-tests --enable-cassert\n> --enable-debug --enable-nls --with-perl --with-python --with-tcl\n> --with-openssl --with-ldap --with-libxml --with-libxslt --with-uuid=e2fs\n> --with-segsize=10 --with-wal-blocksize=16 --with-llvm LLVM_CONFIG=xxx\n> CLANG=xxx\n>\n>\n> thanks\n> walker\n>\n>\n>", "msg_date": "Fri, 12 Mar 2021 12:11:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unrecognized configuration parameter \"plpgsql.check_asserts\"" } ]
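The distinction discussed in the thread above can be demonstrated in a short psql session. This is a minimal sketch (assuming PostgreSQL 13 with the default plpgsql installation): plpgsql.check_asserts affects only plpgsql's ASSERT statement, while debug_assertions merely reports the compile-time --enable-cassert setting and cannot be changed at run time.

```sql
-- Make sure plpgsql is loaded in this backend so its custom GUC is defined.
LOAD 'plpgsql';

-- plpgsql ASSERT fires by default:
SET plpgsql.check_asserts = on;
DO $$ BEGIN ASSERT 1 = 2, 'assertion fires'; END $$;
-- ERROR:  assertion fires

-- Disabling it skips ASSERT statements entirely:
SET plpgsql.check_asserts = off;
DO $$ BEGIN ASSERT 1 = 2, 'assertion fires'; END $$;
-- DO (completes without error)

-- debug_assertions is read-only; it only reports the build option:
SHOW debug_assertions;
```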
[ { "msg_contents": "Hi, Pavel\r\n\r\n\r\nThanks for your comments.\r\n\r\n\r\nIf I get rid of --enable-cassert while configuring, debug_assertions is shown as off. In this case, the ASSERT statement also can't be used, right?\r\n\r\n\r\nWhen --enable-cassert is enabled, can we use plpgsql.check_asserts to control the ASSERT statement, and how?\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n------------------ Original Message ------------------\r\nFrom: \"Pavel Stehule\" <pavel.stehule@gmail.com>;\r\nDate: Fri, Mar 12, 2021, 7:11 PM\r\nTo: \"Walker\"<failaway@qq.com>;\r\nCc: \"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: unrecognized configuration parameter \"plpgsql.check_asserts\"\r\n\r\n\r\npá 12. 3. 2021 v 11:54 odesílatel Walker <failaway@qq.com> napsal:\r\n\r\nHi, hackers\r\n\r\n\r\nDue to configure with parameter --enable-cassert, the debug_assertions is on by default, as follows:\r\npostgres=# show debug_assertions;\r\ndebug_assertions\r\n-------------------\r\non\r\n\r\n\r\nBecause of pgbench performance testing, I need to disable the assert function. Following the doc below, I tried to set plpgsql.check_asserts to off to disable the assert function.\r\nhttps://www.postgresql.org/docs/13/plpgsql-errors-and-messages.html\r\n\r\n\r\nHowever, it prompts the following error, not sure if I missed something, any thoughts about it?\r\npostgres=# alter system set plpgsql.check_asserts = off;\r\nERROR: unrecognized configuration parameter \"plpgsql.check_asserts\"\r\n\r\n\r\nyou cannot disable debug_assertions. It is possible only via configure and make\r\n\r\nplpgsql.check_asserts controls evaluation of the plpgsql statement ASSERT\r\n\r\n\r\nPavel\r\n\r\n\r\nenv:\r\nPG: 13.2\r\nOS: redhat 7.4 3.10.0-693.17.1.e17.x86_64\r\nconfigure parameter: --enable-coverage --enable-tap-tests --enable-cassert --enable-debug --enable-nls --with-perl --with-python --with-tcl --with-openssl --with-ldap --with-libxml --with-libxslt --with-uuid=e2fs --with-segsize=10 --with-wal-blocksize=16 --with-llvm LLVM_CONFIG=xxx CLANG=xxx\r\n\r\n\r\nthanks\r\nwalker", "msg_date": "Fri, 12 Mar 2021 20:12:53 +0800", "msg_from": "\"=?gb18030?B?V2Fsa2Vy?=\" <failaway@qq.com>", "msg_from_op": true, "msg_subject": "=?gb18030?B?u9i4tKO6IHVucmVjb2duaXplZCBjb25maWd1cmF0?=\n =?gb18030?B?aW9uIHBhcmFtZXRlciAicGxwZ3NxbC5jaGVja19h?=\n =?gb18030?B?c3NlcnRzIg==?=" },
{ "msg_contents": "pá 12. 3. 2021 v 13:13 odesílatel Walker <failaway@qq.com> napsal:\n\n> Hi, Pavel\n>\n> Thanks for your comments.\n>\n> If I get rid of --enable-cassert while configuring, debug_assertions is\n> shown as off. In this case, the ASSERT statement also can't be used, right?\n>\n\nno - debug assertions and plpgsql assertions are two absolutely independent\nfeatures.\n\n--enable-cassert enables C assertions, and requires recompiling postgres's\nsource code. It is designed for and used by Postgres's core developers.\n\nplpgsql's ASSERT is a user-space feature, and can be enabled or disabled as\nnecessary by plpgsql.check_asserts.\n\n\n> When --enable-cassert is enabled, can we use plpgsql.check_asserts to control\n> the ASSERT statement, and how?\n>\n> thanks\n> walker", "msg_date": "Fri, 12 Mar 2021 13:25:59 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: unrecognized configuration parameter \"plpgsql.check_asserts\"" },
{ "msg_contents": "On Fri, Mar 12, 2021 at 08:12:53PM +0800, Walker wrote:\n> If I get rid of --enable-cassert while configuring, debug_assertions is shown as off. In this case, the ASSERT statement also can't be used, right?\n\nNo, those are two different things. plpgsql ASSERT is only controlled by the\nplpgsql.check_asserts configuration option, whether or not the server was\ncompiled with --enable-cassert.\n\n\n", "msg_date": "Fri, 12 Mar 2021 20:27:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: =?utf-8?B?5Zue5aSN77yaIHVucmVjb2duaXpl?= =?utf-8?Q?d?=\n configuration parameter \"plpgsql.check_asserts\"" },
{ "msg_contents": "thanks Julien & Pavel\r\n\r\n\r\nit's crystal clear now. thanks again for your kind help\r\n\r\n\r\nthanks\r\nwalker\r\n\r\n\r\n------------------ Original ------------------\r\nFrom: \"Julien Rouhaud\" <rjuju123@gmail.com>;\r\nDate: Fri, Mar 12, 2021 08:27 PM\r\nTo: \"Walker\"<failaway@qq.com>;\r\nCc: \"Pavel Stehule\"<pavel.stehule@gmail.com>;\"pgsql-hackers\"<pgsql-hackers@lists.postgresql.org>;\r\nSubject: Re: 回复: unrecognized configuration parameter \"plpgsql.check_asserts\"\r\n\r\n\r\n\r\nOn Fri, Mar 12, 2021 at 08:12:53PM +0800, Walker wrote:\r\n> If I get rid of --enable-cassert while configuring, debug_assertions is shown as off. In this case, the ASSERT statement also can't be used, right?\r\n\r\nNo, those are two different things. plpgsql ASSERT is only controlled by the\r\nplpgsql.check_asserts configuration option, whether or not the server was\r\ncompiled with --enable-cassert.", "msg_date": "Fri, 12 Mar 2021 20:38:51 +0800", "msg_from": "\"=?gb18030?B?V2Fsa2Vy?=\" <failaway@qq.com>", "msg_from_op": true, "msg_subject": "=?gb18030?B?UmU6ICC72Li0o7ogdW5yZWNvZ25pemVkIGNvbmZp?=\n =?gb18030?B?Z3VyYXRpb24gcGFyYW1ldGVyICJwbHBnc3FsLmNo?=\n =?gb18030?B?ZWNrX2Fzc2VydHMi?=" } ]
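A practical note on the ALTER SYSTEM failure in this thread: custom GUCs such as plpgsql.check_asserts are only recognized once the module that defines them has been loaded into the current backend. The following sketch shows an assumed workflow (superuser required; behavior as observed on PostgreSQL 13):

```sql
-- Fails if plpgsql has not yet been loaded in this backend:
-- ALTER SYSTEM SET plpgsql.check_asserts = off;
-- ERROR:  unrecognized configuration parameter "plpgsql.check_asserts"

LOAD 'plpgsql';                                -- registers plpgsql's custom GUCs
ALTER SYSTEM SET plpgsql.check_asserts = off;  -- now accepted
SELECT pg_reload_conf();                       -- make new sessions pick it up
```

Alternatively, the line `plpgsql.check_asserts = off` can be placed directly in postgresql.conf, since the config file accepts dotted custom parameters as placeholders.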
[ { "msg_contents": "Peter,\n\nI'd like your advice on the following observations, if you'd be so kind:\n\n\nUsing the pg_amcheck command committed yesterday (thanks, Robert! thanks Tom!), I have begun investigating segfaults that sometimes occur when running the amcheck routines on intentionally corrupted databases. We've discussed this before, and there are limits to how much hardening is possible, particularly if it comes at the expense of backend performance under normal conditions. There are also serious problems with corruption schemes that differ from what is likely to go wrong in the wild.\n\nThese segfaults present a real usage problem for pg_amcheck. We made the decision [3] to not continue checking if we get a FATAL or PANIC error. Getting a segfault in just one database while checking just one index can abort a pg_amcheck run that spans multiple databases, tables and indexes, and therefore is not easy to blow off. Perhaps the decision in [3] was wrong, but if not, some hardening of amcheck might make this problem less common.\n\nThe testing strategy I'm using is to corrupt heap and btree pages in schema \"public\" of the \"regression\" database created by `make installcheck`, by overwriting random bytes in randomly selected locations on pages after the page header. Then I run `pg_amcheck regression` and see if anything segfaults. Doing this repeatedly, with random bytes and locations within files not the same from one run to the next, I can find the locations of segfaults that are particularly common.\n\nAdmittedly, this is not what is likely to be wrong in real-world installations. I had a patch as part of the pg_amcheck series that I ultimately abandoned for v14 given the schedule. It reverts tables and indexes (or portions thereof) to prior states. I'll probably move on to testing with that in a bit.\n\n\nThe first common problem [1] happens at verify_nbtree.c:1422 when a corruption report is being generated. 
The generation does not seem entirely safe, and the problematic bit can be avoided, though I suspect you could do better than the brute-force solution I'm using locally:\n\ndiff --git a/contrib/amcheck/verify_nbtree.c b/contrib/amcheck/verify_nbtree.c\nindex c4ca633918..fa8b7d5163 100644\n--- a/contrib/amcheck/verify_nbtree.c\n+++ b/contrib/amcheck/verify_nbtree.c\n@@ -1418,9 +1418,13 @@ bt_target_page_check(BtreeCheckState *state)\n OffsetNumberNext(offset));\n itup = (IndexTuple) PageGetItem(state->target, itemid);\n tid = BTreeTupleGetPointsToTID(itup);\n+#if 0\n nhtid = psprintf(\"(%u,%u)\",\n ItemPointerGetBlockNumberNoCheck(tid),\n ItemPointerGetOffsetNumberNoCheck(tid));\n+#else\n+ nhtid = \"(UNRESOLVED,UNRESOLVED)\";\n+#endif\n \n ereport(ERROR,\n (errcode(ERRCODE_INDEX_CORRUPTED),\n\nThe ItemPointerGetBlockNumberNoCheck(tid) seems to be unsafe here. I get much the same crash at verify_nbtree.c:1136, but it's so similar I'm not bothering to include a crash report for it.\n\n\nThe second common problem [2] happens at verify_nbtree.c:2762 where it uses _bt_compare, which ends up calling pg_detoast_datum_packed on a garbage value. I'm not sure we can fix that at the level of _bt_compare, since that would have performance consequences on backends under normal conditions, but perhaps we could have a function that sanity-checks datums, and call that from invariant_l_offset() prior to _bt_compare? I have observed a variant of this crash where the text data is not toasted but crashes anyway:\n\n0 libsystem_platform.dylib 0x00007fff738ec929 _platform_memmove$VARIANT$Haswell + 41\n1 postgres 0x000000010bf1af34 varstr_cmp + 532 (varlena.c:1678)\n2 postgres 0x000000010bf1b6c9 text_cmp + 617 (varlena.c:1770)\n3 postgres 0x000000010bf1bfe5 bttextcmp + 69 (varlena.c:1990)\n4 postgres 0x000000010bf68c87 FunctionCall2Coll + 167 (fmgr.c:1163)\n5 postgres 0x000000010b8a7cb5 _bt_compare + 1445 (nbtsearch.c:721)\n6 amcheck.so 0x000000011525eaeb invariant_l_offset + 123 (verify_nbtree.c:2758)\n7 amcheck.so 0x000000011525cd92 bt_target_page_check + 4754 (verify_nbtree.c:1398)\n\n\n[1] (crash report attachment)\n\n[2] (crash report attachment)\n\n[3] https://www.postgresql.org/message-id/CA%2BTgmob2c0eM8%2B5kzkXaqdc9XbBCkHmtihSOSk-jCzzT4BFhFQ%40mail.gmail.com\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 13 Mar 2021 10:35:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "amcheck hardening" }, { "msg_contents": "On Sat, Mar 13, 2021 at 10:35 AM Mark Dilger\n<mark.dilger@enterprisedb.com> wrote:\n> The testing strategy I'm using is to corrupt heap and btree pages in schema \"public\" of the \"regression\" database created by `make installcheck`, by overwriting random bytes in randomly selected locations on pages after the page header. Then I run `pg_amcheck regression` and see if anything segfaults. Doing this repeatedly, with random bytes and locations within files not the same from one run to the next, I can find the locations of segfaults that are particularly common.\n\nThat seems like a reasonable starting point to me.\n\n> The first common problem [1] happens at verify_nbtree.c:1422 when a corruption report is being generated. The generation does not seem entirely safe, and the problematic bit can be avoided\n\n> The ItemPointerGetBlockNumberNoCheck(tid) seems to be unsafe here.\n\nI think that the best way to further harden verify_nbtree.c at this\npoint is to do the most basic validation of IndexTuples in our own new\nvariant of a core bufpage.h macro: PageGetItemCareful(). In other\nwords, we should invent the IndexTuple equivalent of the existing\nPageGetItemIdCareful() function (which already does the same thing for\nitem pointers), and then replace all current PageGetItem() calls with\nPageGetItemCareful() calls -- we ban raw PageGetItem() calls from\nverify_nbtree.c forever.\n\nHere is some of the stuff that could go in PageGetItemCareful(), just\noff the top of my head:\n\n* The existing \"if (tupsize != ItemIdGetLength(itemid))\" check at the\ntop of bt_target_page_check() -- make sure the length from the\ncaller's line pointer agrees with IndexTupleSize().\n\n* Basic validation against the index's tuple descriptor -- in\nparticular, that varlena headers are basically sane, and that the\napparent range of datums is safely within the space on the page for\nthe tuple.\n\n* Similarly, BTreeTupleGetHeapTID() should not be able to return a\npointer that doesn't actually point somewhere inside the space that\nthe target page has for the IndexTuple.\n\n* No external TOAST pointers, since this is an index AM, and so\ndoesn't allow that.\n\nIn general this kind of very basic validation should be pushed down to\nthe lowest level code, so that it detects the problem as early as\npossible, before slightly higher level code has the opportunity to\nrun. 
Higher level code is always going to be at risk of making\nassumptions about the data not being corrupt, because there is so much\nmore of it, and also because it tends to roughly look like idiomatic\nAM code.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 13 Mar 2021 11:06:35 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: amcheck hardening" }, { "msg_contents": "\n\n> On Mar 13, 2021, at 11:06 AM, Peter Geoghegan <pg@bowt.ie> wrote:\n> \n> On Sat, Mar 13, 2021 at 10:35 AM Mark Dilger\n> <mark.dilger@enterprisedb.com> wrote:\n>> The testing strategy I'm using is to corrupt heap and btree pages in schema \"public\" of the \"regression\" database created by `make installcheck`, by overwriting random bytes in randomly selected locations on pages after the page header. Then I run `pg_amcheck regression` and see if anything segfaults. Doing this repeatedly, with random bytes and locations within files not the same from one run to the next, I can find the locations of segfaults that are particularly common.\n> \n> That seems like a reasonable starting point to me.\n> \n>> The first common problem [1] happens at verify_nbtree.c:1422 when a corruption report is being generated. The generation does not seem entirely safe, and the problematic bit can be avoided\n> \n>> The temPointerGetBlockNumberNoCheck(tid) seems to be unsafe here.\n> \n> I think that the best way to further harden verify_nbtree.c at this\n> point is to do the most basic validation of IndexTuples in our own new\n> variant of a core bufpage.h macro: PageGetItemCareful(). 
In other\n> words, we should invent the IndexTuple equivalent of the existing\n> PageGetItemIdCareful() function (which already does the same thing for\n> item pointers), and then replace all current PageGetItem() calls with\n> PageGetItemCareful() calls -- we ban raw PageGetItem() calls from\n> verify_nbtree.c forever.\n> \n> Here is some of the stuff that could go in PageGetItemCareful(), just\n> off the top of my head:\n> \n> * The existing \"if (tupsize != ItemIdGetLength(itemid))\" check at the\n> top of bt_target_page_check() -- make sure the length from the\n> caller's line pointer agrees with IndexTupleSize().\n> \n> * Basic validation against the index's tuple descriptor -- in\n> particular, that varlena headers are basically sane, and that the\n> apparent range of datums is safely within the space on the page for\n> the tuple.\n> \n> * Similarly, BTreeTupleGetHeapTID() should not be able to return a\n> pointer that doesn't actually point somewhere inside the space that\n> the target page has for the IndexTuple.\n> \n> * No external TOAST pointers, since this is an index AM, and so\n> doesn't allow that.\n> \n> In general this kind of very basic validation should be pushed down to\n> the lowest level code, so that it detects the problem as early as\n> possible, before slightly higher level code has the opportunity to\n> run. Higher level code is always going to be at risk of making\n> assumptions about the data not being corrupt, because there is so much\n> more of it, and also because it tends to roughly look like idiomatic\n> AM code.\n\nExcellent write-up! Thanks!\n\nI'll work on this and get a patch set going if time allows.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 13 Mar 2021 11:29:49 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: amcheck hardening" } ]
[ { "msg_contents": "A Case For Inlining Immediate Referential Integrity Checks\n----------------------------------------------------------\n\nThe following is an overview of how Postgres currently implemented\nreferential integrity, the some problems with that architecture, attempted\nsolutions for those problems, and a suggstion of another possible solution.\n\nNotes On Notation and Referential Integrity In General\n------------------------------------------------------\n\nAll referential integrity is ultimately of this form:\n\nR(X) => T(Y)\n\nWhere one referencing table R has a set of columns X That references a set\nof columns Y which comprise a unique constraint on a target table T. Note\nthat the Y-set of columns is usually the primary key of T, but does not\nhave to be.\n\nThe basic referential integrity checks fall into two basic categories,\nInsert and Delete, which can be checked Immediately following the\nstatement, or can be Deferred to the end of the transaction.\n\nThe Insert check is fairly straightforward. Any insert to R, or update of R\nthat modifies [1] any column in X, is checked to see if all of the X\ncolumns are NOT NULL, and if so, a lookup is done on T to find a matching\nrow tuple of Y. If none is found, then an error is raised.\n\nThe Update check is more complicated, as it covers any UPDATE operation\nthat modifies [1] any column in Y, where all of the values of Y are NOT\nNUL, as well as DELETE operation where all of the columns of Y are NOT\nNULL. For any Update check, the table R is scanned for any matching X\ntuples matching Y in the previous, and for any matches found, an action is\ntaken. That action can be to fail the operation (NO ACTION, RESTRICT),\nupdate the X values to fixed values (SET NULL, SET DEFAULT), or to delete\nthose rows in R (CASCADE).\n\n\nCurrent Implementation\n----------------------\n\nCurrently, these operations are handled via per-row triggers. 
In our\ngeneral case, one trigger is placed on R for INSERT operations, and one\ntrigger is placed on T for DELETE operations, and an additional trigger is\nplaced on T for UPDATE operations that affect any column of Y.\n\nThese Insert trigger functions invoke the C function RI_FKey_check() [2].\nThe trigger is fired unconditionally, and the trigger itself determines if\nthere is a referential integrity constraint to be made or not. Ultimately\nthis trigger invokes an SPI query of the form SELECT 1 FROM <T> WHERE (<X =\nY>) FOR KEY SHARE. This query is generally quite straightforward to the\nplanner, as it becomes either a scan of a single unique index, or a\npartition search followed by a scan of a single unique index. The operation\nsucceeds if a row is found, and fails if it does not.\n\nThe Update trigger functions are implemented with a set of C functions\nRI_[noaction|restrict|cascade|setnull|setdefault]_[upd|del]() [3]. These\nfunctions each generate a variation of SPI query in one of the following\nforms\n\n cascade: DELETE FROM <R> WHERE <X = Y>\n restrict/noaction: SELECT 1 FROM <R> WHERE <X = Y> FOR KEY SHARE\n setnull: UPDATE <R> SET x1 = NULL, ... WHERE <X = Y>\n setdefault: UPDATE <R> SET x1 = DEFAULT, ... WHERE <X = Y>\n\nThese triggers are either executed at statement time (Immediate) or are\nqueued for execution as a part of the transaction commit (Deferred).\n\nProblems With The Current Implementation\n----------------------------------------\n\nThe main problems with this architecture come down to visiblity and\nperformance.\n\nThe foremost problem with this implementation is that these extra queries\nare not visible to the end user in any way. 
It is possible to infer that\nthe functions executed by looking at the constraint defnitions and\ncomparing pg_stat_user_tables or pg_stat_user_indexes before and after the\noperation, but in general the time spent in these functions accrues to the\nDML statement (Immediate) or COMMIT statement (Deferred) without any insght\ninto what took place. This is especially vexing in situations where an\noperation as simple as \"DELETE FROM highly_referenced_table WHERE id = 1\"\nhits the primary key index, but takes several seconds to run.\n\nThe performance of Insert operations is generally not too bad, in that\nquery boils down to an Index Scan for a single row. The problem, however,\nis that this query must be executed for every row inserted. The query\nitself is only planned once, and that query plan is cached for later\nre-use. That removes some of the query overhead, but also incurs a growing\ncache of plans which can create memory pressure if the number of foreign\nkeys is large, and indeed this has become a problem for at least one\ncustomer [4]. Some profiling of the RI check indicated that about half of\nthe time of the insert was spent in SPI functions that could be bypassed if\nthe C function called index_beginscan and index_rescan directly [5]. And\nthese indications bore out when Amit Langote wrote a patch [6] which finds\nthe designanted index from the constraint (with some drilling through\npartitions if need be) and then invokes the scan functions. This method\nshowed about a halving of the time involved, while also avoiding the memory\npressure from many cached plans [7].\n\nThe performance of Delete operations is far less certain, and the potential\nperformance impact is far greater. There are four main reasons for this.\nFirst, there is no guarantee of a suitable index on the R table let alone\nan optimal index, so a Sequential Scan is possible. 
Second, the operation\nmay match multiple rows in the R table, so the scan must exhaust the whole\ntable/index of R. Third, this already sub-optimal operation must be\nperformed for every row affected by the Delete operation, with no ability\nto coordinate between row triggers which means that in a pathological case,\na large table is sequential scanned once per row deleted. Lastly, the\ncascade, setnull and setdefault variants have the potential to fire more\ntriggers.\n\nAttempts at Statement Level Triggers\n------------------------------------\n\nBack in 2016, Kevin Grittner made a mock up of handling RI delete checks\nwith statement-level triggers [8] and got a 98% reduction in execution\ntime. The source of the benefit was not hard to see: doing 1 query\nfiltering for N values is almost always faster than N queries filtering for\n1 each. It especially helps the most pathological case where a Sequential\nScan is done in the row-trigger case, now a hash join to the transition\ntable is possible.\n\nIt was with that thinking that I made my own attempt at re-implementing RI\nwith statement-level triggers [9]. This effort was incomplete, and raised\nquestions from Alvaro Herrera [10] and Kevin Grittner [11]. Also, the\nimplementation was only seeing about a 25% improvement instead of the 98%\nthat was hoped. Antonin Houska found some bugs in the patch [12] and\nsuggested keeping row level triggers. The chief source of baggage seemed to\nbe that the transition tables contained the entire before/after rows, not\njust the columns needed to process the trigger, and that level of memory\nconsumption was quite significant.\n\nSome time later, Antonin followed through with his idea for how to improve\nthe patch [13], which sparked a lively discussion where it was observed\nthat this discussion had happened before, in other forms, at least as far\nback as 2013 [14]. 
The chief challenges being how to optimize a many-row\nupdate without penalizing a single row update.\n\nAll of the discussion, however, was about architecture and performance of\ntriggers, without any mention of improving the visibility of RI checks to\nthe user.\n\nCurrent State Of Affairs\n------------------------\n\nAmit's patch to remove all SPI from Insert triggers [6] appears to work,\nand is about as minimally invasive as possible under the circumstances, and\nI think it is important that it be included for v14. However, concerns have\nbeen raised about the patch bypassing permission checks, how it would\nhandle foreign tables, alternate access methods, etc.\n\nEven if all issues are resolved with Amit's patch, it does nothing for\nUpdates, and nothing for visiblity. As was mentioned before, the chief\nproblem for Update performance is that statement-level triggers are too\nmuch overhead for single-row updates, and row level triggers need extra\noverhead to see if they can find common cause with other trigger firings.\n\nIt has occurred to me that the solution to both of these issues is to,\nwhere possible, roll the trigger operation into the query itself, and in\ncases where that isn't possible, at least indicate that a trigger could be\nfired.\n\nThe Proposed Changes\n--------------------\n\nThe changes I'm proposing boil down to the following:\n\n1. Add informational nodes to query plans to indicate that a certain named\ntrigger would be fired, whether it is a row or statement trigger, and in\nthe case of per-row triggers, how many are expected to be fired.\n2. To the extent possible, accrue time/cost expended on triggers to that\nnode.\n3. 
Make the planner capable of seeing these triggers, and where possible,\nbring the logic of the RI check inside the query itself.\n\n\nAdding Trigger Nodes to a Query Plan\n------------------------------------\n\nWhen a query that modifies a table is being planned, the planner would\ninspect the triggers on that table and create a node for each named\ntrigger that would be fired.\n\n1. Disabled triggers are ignored outright.\n2. User-defined triggers and Deferred RI triggers will simply be named,\nalong with an estimate of the number of firings that will occur.\n3. RI Insert triggers that are in immediate mode will be inlined as a node\nReferentialIntegrityInsert.\n4. RI Update/Delete triggers of type ON DELETE RESTRICT and ON DELETE NO\nACTION that are in immediate mode will be inlined as a node\nReferentialIntegrityDeleteCheck.\n5. RI Update/Delete triggers of type CASCADE will be inlined as a node\nReferentialIntegrityDeleteCascade.\n6. RI Update/Delete triggers of type SET NULL and SET DEFAULT will be\ninlined as a node ReferentialIntegrityUpdateSet.\n\nNote that ReferentialIntegrityDeleteCascade and\nReferentialIntegrityUpdateSet nodes will themselves modify tables, which\ncan in turn have RI constraints that would create more\nReferentialIntegrityDeleteCascade and ReferentialIntegrityUpdateSet nodes.\nIn theory, this could continue infinitely. Practically speaking, such\nsituations either have a DEFERRED constraint somewhere in the loop, or one\nof the references starts off initially NULL. Still, the planner can't see\nthat, so it makes sense to put a depth limit on such chains, and any\nUPDATEs/DELETEs that cascade beyond that limit are simply not inlined, and\nare queued just as user-defined immediate triggers are.\n\nReferentialIntegrityInsert Node\n-------------------------------\nThis node looks up every tuple in the input set against the\nunique index (normal or partitioned) on the referenced table. 
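That lookup could be expressed as a set-level anti-join; here is a minimal sketch in SQL, with hypothetical names new_rows (the inserted tuples) and t (the referenced table):

```sql
-- Set-based form of the insert-side RI check: any row this query
-- returns represents a foreign key value with no match in t, i.e.
-- an RI violation that would abort the statement.
SELECT n.x
  FROM new_rows n
  LEFT OUTER JOIN t ON t.y = n.x
 WHERE n.x IS NOT NULL   -- rows with NULL key values are not checked
   AND t.y IS NULL;      -- no matching row found in t
```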
The node\nwould have the option of first doing a distinct on the set of inputs, or\nit might instead choose to cache the values already looked up to avoid\nduplicate index traversals, in which case the operation would more closely\nresemble a LEFT OUTER JOIN where, instead of returning a null row on a\nmiss, the query would fail with an RI violation.\n\nReferentialIntegrityDeleteCheck Node\n------------------------------------\nIn many ways, this node is the opposite of ReferentialIntegrityInsert, in\nthe sense that it fails if there IS a match found rather than if there is\nnot. The power of this node lies in the fact that the planner can see how\nmany tuples will be input, and can decide on a nested loop traversal, a\nhash join, or a merge join if amenable indexes are present.\n\nReferentialIntegrityDeleteCascade Node\n--------------------------------------\nThis node amounts to DELETE FROM referencing_table WHERE id IN (\nset_of_input_tuples ) with no returning clause and no need for one. The\nnode cannot itself filter any rows, nor can it raise any errors, but the RI\nconstraints of referencing_table might raise errors.\n\nReferentialIntegrityUpdateSet Node\n----------------------------------\nThis node amounts to UPDATE referencing_table SET col = NULL|DEFAULT WHERE\nid IN ( set_of_input_tuples ) with no returning clause and no need for one.\nThe node cannot itself filter any rows, nor can it raise any errors, but\nthe RI constraints of referencing_table might raise errors.\n\nAdvantages To This Approach\n---------------------------\n\n1. Visibility into triggers is provided, regardless of which triggers are\neligible for inlining.\n2. All triggers have a fall-back case of operating exactly as they do now.\n3. Inlining could be enabled/disabled by a session setting or other GUC.\n4. 
The planner itself has visibility into the number of rows affected, and\ntherefore can choose single-row vs. set operations as appropriate.\n\nWhat Might This Look Like?\n--------------------------\n\nSay there's a fact table class_registration with foreign keys to the\ndimension tables class, student, and semester; you'd see something like\nthis:\n\nEXPLAIN INSERT INTO class_registration VALUES (...);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on class_registration (cost=0.00..0.01 rows=1 width=40)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> ReferentialIntegrityInsert: class_registration_class_id_fkey\n -> IndexOnlyScan class_pk (...)\n -> ReferentialIntegrityInsert: class_registration_student_id_fkey\n -> IndexOnlyScan student_uq (...)\n -> ReferentialIntegrityInsert: class_registration_semester_id_fkey\n -> IndexOnlyScan semester_pk (...)\n\nOr, in the case of deferred constraints:\n\nEXPLAIN INSERT INTO class_registration VALUES (...);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Insert on class_registration (cost=0.00..0.01 rows=1 width=40)\n -> Result (cost=0.00..0.01 rows=1 width=4)\n -> ForeignKey Check: class_registration_class_id_fkey\n(RI_FKey_check_ins)\n -> Row Deferred (Execute 1 Time)\n -> ForeignKey Check: class_registration_student_id_fkey\n(RI_FKey_check_ins)\n -> Row Deferred (Execute 1 Time)\n -> ForeignKey Check: class_registration_semester_id_fkey\n(RI_FKey_check_ins)\n -> Row Deferred (Execute 1 Time)\n\nIs there anything actionable for the user? 
Not in this case, but now they know the\nextra lifting they were asking the database to perform.\n\nThe actual tunables would be DELETEs and UPDATEs:\n\nEXPLAIN DELETE FROM class WHERE class_id <= 4;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------\n Delete on class (cost=0.00..0.04 rows=3 width=60)\n -> Index Only Scan on class_pk (...)\n -> ReferentialIntegrityDeleteCascade: class_class_registration_fkey\n -> Hash Cond: (r1.class_id = class.id)\n -> Hash (...)\n -> Seq Scan on class_registration c1 (...)\n -> Hash (...)\n -> Materialize Affected Tuples\n\nThe reader could see that an index was missing and work to fix it. The\nplanner, for its part, saw that a Seq Scan would be needed, but rather\nthan scan once per row as per-row triggers would have done, it does the\nscan once, hashes it, and then compares against a similar hash of the\nrows deleted.\n\nIt's possible that the plan would also view the DELETE as a CTE whose\noutput is then fed as input to progressive layers of RI checks.\n\nConclusion\n----------\n\nI think there are improvements to be made in RI checks in terms of\nperformance, visibility, and tunability. 
It is my hope that this sparks\ndiscussion that leads to better performance and visibility of Referential\nIntegrity.\n\n\nFootnotes:\n[1] Updates that set the column value to the value it already has do not\nrequire an integrity check.\n[2]\nhttps://doxygen.postgresql.org/ri__triggers_8c.html#a14c6f5e65d657bd5b4fd45769b4c0197\n[3]\nhttps://doxygen.postgresql.org/ri__triggers_8c.html#a43b79b7b3f05fc8bec34e5fb6c37ba47\nis the first alphabetically\n[4]\nhttps://www.postgresql.org/message-id/CAKkQ508Z6r5e3jdqhfPWSzSajLpHo3OYYOAmfeSAuPTo5VGfgw@mail.gmail.com\n[5]\nhttps://www.postgresql.org/message-id/20201126.121818.26523414172308697.horikyota.ntt@gmail.com\n[6]\nhttps://www.postgresql.org/message-id/CA+HiwqGkfJfYdeq5vHPh6eqPKjSbfpDDY+j-kXYFePQedtSLeg@mail.gmail.com\n[7]\nhttps://www.postgresql.org/message-id/CAKkQ50_h8TcBkY5KYQfneejrZ_d3veFcK3nGmN-WxucEu_QrCw%40mail.gmail.com\n[8]\nhttps://www.postgresql.org/message-id/CACjxUsM4s9%3DCUmPU4YFOYiD5f%3D2ULVDBjuFSo20Twe7KbUe8Mw%40mail.gmail.com\n[9]\nhttps://www.postgresql.org/message-id/CADkLM=dFuNHNiZ9Pop1pqa+HPh4T9WuwnjwSf6UAvnxcgUaQdA@mail.gmail.com\n[10]\nhttps://www.postgresql.org/message-id/20181217172729.mjfkflaelii2boaj%40alvherre.pgsql\n[11]\nhttps://www.postgresql.org/message-id/CACjxUsOY-CXoNMPit%2Bk1PC_5LjYkvYPz2VJwK5YDDtzRh4J7vw%40mail.gmail.com\n[12] https://www.postgresql.org/message-id/17100.1550662686%40localhost\n[13] https://www.postgresql.org/message-id/1813.1586363881@antos\n[14]\nhttps://www.postgresql.org/message-id/flat/CA%2BU5nMLM1DaHBC6JXtUMfcG6f7FgV5mPSpufO7GRnbFKkF2f7g%40mail.gmail.com", "msg_date": "Sun, 14 Mar 2021 15:49:27 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": true, "msg_subject": "A Case For Inlining Immediate Referential Integrity Checks" } ]
[ { "msg_contents": "Hi,\n\nIn one of our environments, PostgreSQL 12 was upgraded to PostgreSQL 13.1\nusing pg_upgrade with hard links. The OS was Ubuntu 16.04.7 LTS (Xenial\nXerus). pg_repack was used to rebuild all the tables across the database\nright after the upgrade to PG 13.\n\nA new server with Ubuntu 20.04.1 LTS was later provisioned, and streaming\nreplication was set up from the old server on Ubuntu 16 to the new server\non Ubuntu 20, both running PostgreSQL 13.1.\n\nReplication ran fine, but after the failover to the new server, an UPDATE\nof a few random rows (not on the same page) caused a segmentation fault\nand crashed Postgres.\nSelecting those records, via the index or directly from the table, works\nabsolutely fine, but when the same records are updated, it hits the\nfollowing error.\n\n2021-03-12 17:20:01.979 CET p#7 s#604b8fa9.7 t#0 LOG: terminating any\nother active server processes\n2021-03-12 17:20:01.979 CET p#41 s#604b9212.29 t#0 WARNING: terminating\nconnection because of crash of another server process\n2021-03-12 17:20:01.979 CET p#41 s#604b9212.29 t#0 DETAIL: The postmaster\nhas commanded this server process to roll back the current transaction and\nexit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2021-03-12 17:20:01.979 CET p#41 s#604b9212.29 t#0 HINT: In a moment you\nshould be able to reconnect to the database and repeat your command.\n\nThe gdb backtrace, with debug symbols installed, looks like the following.\n\n(gdb) bt\n#0 __memmove_avx_unaligned_erms () at\n../sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:533\n#1 0x000055b72761c370 in memmove (__len=<optimized out>,\n__src=0x55b72930e9c7, __dest=<optimized out>)\n at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:40\n#2 _bt_swap_posting (newitem=newitem@entry=0x55b7292010c0,\noposting=oposting@entry=0x7f3b46f94778,\n postingoff=postingoff@entry=2) 
at\n./build/../src/backend/access/nbtree/nbtdedup.c:796\n#3 0x000055b72761d40b in _bt_insertonpg (rel=0x7f3acd8a49c0,\nitup_key=0x55b7292bc6a8, buf=507, cbuf=0, stack=0x55b7292d5f98,\n itup=0x55b7292010c0, itemsz=32, newitemoff=48, postingoff=2,\nsplit_only_page=false)\n at ./build/../src/backend/access/nbtree/nbtinsert.c:1167\n#4 0x000055b72761eae9 in _bt_doinsert (rel=rel@entry=0x7f3acd8a49c0,\nitup=itup@entry=0x55b7292bc848,\n checkUnique=checkUnique@entry=UNIQUE_CHECK_NO, heapRel=heapRel@entry\n=0x7f3acd894f70)\n at ./build/../src/backend/access/nbtree/nbtinsert.c:1009\n#5 0x000055b727621e2e in btinsert (rel=0x7f3acd8a49c0, values=<optimized\nout>, isnull=<optimized out>, ht_ctid=0x55b7292d4578,\n heapRel=0x7f3acd894f70, checkUnique=UNIQUE_CHECK_NO,\nindexInfo=0x55b7292bc238)\n at ./build/../src/backend/access/nbtree/nbtree.c:210\n#6 0x000055b727757487 in ExecInsertIndexTuples\n(slot=slot@entry=0x55b7292d4548,\nestate=estate@entry=0x55b7291ff1f8,\n noDupErr=noDupErr@entry=false, specConflict=specConflict@entry=0x0,\narbiterIndexes=arbiterIndexes@entry=0x0)\n at ./build/../src/backend/executor/execIndexing.c:393\n#7 0x000055b7277807a8 in ExecUpdate (mtstate=0x55b7292bb2c8,\ntupleid=0x7fff45ea318a, oldtuple=0x0, slot=0x55b7292d4548,\n planSlot=0x55b7292c04e8, epqstate=0x55b7292bb3c0,\nestate=0x55b7291ff1f8, canSetTag=true)\n at ./build/../src/backend/executor/nodeModifyTable.c:1479\n#8 0x000055b727781655 in ExecModifyTable (pstate=0x55b7292bb2c8) at\n./build/../src/backend/executor/nodeModifyTable.c:2253\n#9 0x000055b727758424 in ExecProcNode (node=0x55b7292bb2c8) at\n./build/../src/include/executor/executor.h:248\n#10 ExecutePlan (execute_once=<optimized out>, dest=0x55b7292c1728,\ndirection=<optimized out>, numberTuples=0,\n sendTuples=<optimized out>, operation=CMD_UPDATE,\nuse_parallel_mode=<optimized out>, planstate=0x55b7292bb2c8,\n estate=0x55b7291ff1f8) at\n./build/../src/backend/executor/execMain.c:1632\n#11 standard_ExecutorRun 
(queryDesc=0x55b7292ba578, direction=<optimized\nout>, count=0, execute_once=<optimized out>)\n at ./build/../src/backend/executor/execMain.c:350\n#12 0x000055b7278bebf7 in ProcessQuery (plan=<optimized out>,\nsourceText=0x55b72919efa8 \"\\031)\\267U\", params=0x0, queryEnv=0x0,\n dest=0x55b7292c1728, qc=0x7fff45ea34c0) at\n./build/../src/backend/tcop/pquery.c:160\n#13 0x000055b7278bedf9 in PortalRunMulti (portal=portal@entry=0x55b729254128,\nisTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x55b7292c1728,\naltdest=altdest@entry=0x55b7292c1728,\n qc=qc@entry=0x7fff45ea34c0) at ./build/../src/backend/tcop/pquery.c:1265\n#14 0x000055b7278bf847 in PortalRun (portal=portal@entry=0x55b729254128,\ncount=count@entry=9223372036854775807,\n isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\ndest=dest@entry=0x55b7292c1728,\n--Type <RET> for more, q to quit, c to continue without paging--\n\n\nIs this expected when replication is happening between PostgreSQL databases\nhosted on different OS versions like Ubuntu 16 and Ubuntu 20? Or do we\nthink this is some sort of corruption?\n\n-- \nRegards,\nAvi.\n\n
--Type <RET> for more, q to quit, c to continue without paging--Is this expected when replication is happening between PostgreSQL databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ? Or, do we think this is some sort of corruption ? -- Regards,Avi.", "msg_date": "Sun, 14 Mar 2021 20:14:40 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Mon, Mar 15, 2021 at 1:29 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Is this expected when replication is happening between PostgreSQL databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ? Or, do we think this is some sort of corruption ?\n\nIs this index on a text datatype, and using a collation other than \"C\"?\n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\nNot that I expect it to crash if that's the cause, I thought it'd just\nget confused.\n\n\n", "msg_date": "Mon, 15 Mar 2021 13:39:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." 
}, { "msg_contents": "Hi Thomas,\n\nOn Sun, Mar 14, 2021 at 9:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Mar 15, 2021 at 1:29 PM Avinash Kumar\n> <avinash.vallarapu@gmail.com> wrote:\n> > Is this expected when replication is happening between PostgreSQL\n> databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ?\n> Or, do we think this is some sort of corruption ?\n>\n> Is this index on a text datatype, and using a collation other than \"C\"?\n>\nIts en_US.UTF-8\n\n>\n> https://wiki.postgresql.org/wiki/Locale_data_changes\n>\n> Not that I expect it to crash if that's the cause, I thought it'd just\n> get confused.\n>\nOn Ubuntu 16 server,\n\n*$* ldd --version\n\nldd (Ubuntu GLIBC 2.23-0ubuntu11.2) 2.23\n\nOn New Server Ubuntu 20,\n\n*$* ldd --version\n\nldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31\n\n\n-- \nRegards,\nAvi.\n\nHi Thomas,On Sun, Mar 14, 2021 at 9:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Mon, Mar 15, 2021 at 1:29 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Is this expected when replication is happening between PostgreSQL databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ? Or, do we think this is some sort of corruption ?\n\nIs this index on a text datatype, and using a collation other than \"C\"?\nIts en_US.UTF-8 \n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\nNot that I expect it to crash if that's the cause, I thought it'd just\nget confused.\nOn Ubuntu 16 server,\n$ ldd --version\nldd (Ubuntu GLIBC 2.23-0ubuntu11.2) 2.23On New Server Ubuntu 20,\n$ ldd --version\nldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31-- Regards,Avi.", "msg_date": "Sun, 14 Mar 2021 22:01:28 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." 
}, { "msg_contents": "Hi Thomas,\n\nOn Sun, Mar 14, 2021 at 10:01 PM Avinash Kumar <avinash.vallarapu@gmail.com>\nwrote:\n\n> Hi Thomas,\n>\n> On Sun, Mar 14, 2021 at 9:40 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n>\n>> On Mon, Mar 15, 2021 at 1:29 PM Avinash Kumar\n>> <avinash.vallarapu@gmail.com> wrote:\n>> > Is this expected when replication is happening between PostgreSQL\n>> databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ?\n>> Or, do we think this is some sort of corruption ?\n>>\n>> Is this index on a text datatype, and using a collation other than \"C\"?\n>>\n> Its en_US.UTF-8\n>\n\n\n> Also the datatype is bigint\n>\n\n\n>\n>\n\n>> https://wiki.postgresql.org/wiki/Locale_data_changes\n>>\n>> Not that I expect it to crash if that's the cause, I thought it'd just\n>> get confused.\n>>\n> On Ubuntu 16 server,\n>\n> *$* ldd --version\n>\n> ldd (Ubuntu GLIBC 2.23-0ubuntu11.2) 2.23\n>\n> On New Server Ubuntu 20,\n>\n> *$* ldd --version\n>\n> ldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31\n>\n>\n> --\n> Regards,\n> Avi.\n>\n\n\n-- \nRegards,\nAvinash Vallarapu\n+1-902-221-5976\n\nHi Thomas,On Sun, Mar 14, 2021 at 10:01 PM Avinash Kumar <avinash.vallarapu@gmail.com> wrote:Hi Thomas,On Sun, Mar 14, 2021 at 9:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:On Mon, Mar 15, 2021 at 1:29 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Is this expected when replication is happening between PostgreSQL databases hosted on different OS versions like Ubuntu 16 and Ubuntu 20 ? 
Or, do we think this is some sort of corruption ?\n\nIs this index on a text datatype, and using a collation other than \"C\"?\nIts en_US.UTF-8  Also the datatype is bigint  \n\nhttps://wiki.postgresql.org/wiki/Locale_data_changes\n\nNot that I expect it to crash if that's the cause, I thought it'd just\nget confused.\nOn Ubuntu 16 server,\n$ ldd --version\nldd (Ubuntu GLIBC 2.23-0ubuntu11.2) 2.23On New Server Ubuntu 20,\n$ ldd --version\nldd (Ubuntu GLIBC 2.31-0ubuntu9.2) 2.31-- Regards,Avi.\n-- Regards,Avinash Vallarapu+1-902-221-5976", "msg_date": "Sun, 14 Mar 2021 22:05:40 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "[Dropping pgsql-general@ from the CC, because cross-posting triggers\nmoderation; sorry I didn't notice that on my first reply]\n\nOn Mon, Mar 15, 2021 at 2:05 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> On Sun, Mar 14, 2021 at 10:01 PM Avinash Kumar <avinash.vallarapu@gmail.com> wrote:\n>> Also the datatype is bigint\n\nOk. Collation changes are the most common cause of index problems\nwhen upgrading OSes, but here we can rule that out if your index is on\nbigint. So it seems like this is some other kind of corruption in\nyour database, or a bug in the deduplication code.\n\n\n", "msg_date": "Mon, 15 Mar 2021 14:16:34 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." 
}, { "msg_contents": "Hi,\n\nOn Sun, Mar 14, 2021 at 10:17 PM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> [Dropping pgsql-general@ from the CC, because cross-posting triggers\n> moderation; sorry I didn't notice that on my first reply]\n>\n> On Mon, Mar 15, 2021 at 2:05 PM Avinash Kumar\n> <avinash.vallarapu@gmail.com> wrote:\n> > On Sun, Mar 14, 2021 at 10:01 PM Avinash Kumar <\n> avinash.vallarapu@gmail.com> wrote:\n> >> Also the datatype is bigint\n>\n> Ok. Collation changes are the most common cause of index problems\n> when upgrading OSes, but here we can rule that out if your index is on\n> bigint. So it seems like this is some other kind of corruption in\n> your database, or a bug in the deduplication code.\n>\nI suspect the same.\nWhen i tried to perform a pg_filedump to see the entry of the ID in the\nindex, it was strange that the entry did not exist in the Index. But, the\nSELECT using an Index only scan was still working okay. I have chosen the\nstart and end page perfectly and there should not be any mistake there.\n\nFollowing may be helpful to understand what I meant.\n\nI have renamed the table and index names before adding it here.\n\n=# select pg_size_pretty(pg_relation_size('idx_id_mtime')) as size,\nrelpages from pg_class where relname = 'idx_id_mtime';\n size | relpages\n-------+----------\n 71 MB | 8439\n\n=# select pg_relation_filepath('idx_id_mtime');\n pg_relation_filepath\n----------------------\n base/16404/346644309\n\n=# \\d+ idx_id_mtime\n Index \"public.idx_id_mtime\"\n Column | Type | Key? 
| Definition | Storage | Stats\ntarget\n-----------+--------------------------+------+------------+---------+--------------\n sometable_id | bigint | yes | sometable_id | plain |\n mtime | timestamp with time zone | yes | mtime | plain |\nbtree, for table \"public.sometable\"\n\n$ pg_filedump -R 1 8439 -D bigint,timestamp\n/flash/berta13/base/16404/346644309 > 12345.txt\n\n$ cat 12345.txt | grep -w 70334\n--> No Output.\n\nWe don't see the entry for the ID : 70334 in the output of pg_filedump.\n*But, the SELECT statement is still using the same Index. *\n\n=*# EXPLAIN select * from sometable where sometable_id = 70334;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Index Scan using idx_id_mtime on sometable (cost=0.43..2.45 rows=1\nwidth=869)\n Index Cond: (sometable_id = 70334)\n(2 rows)\n\n=*# EXPLAIN ANALYZE select * from sometable where sometable_id = 70334;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_id_mtime on sometable (cost=0.43..2.45 rows=1\nwidth=869) (actual time=0.166..0.168 rows=1 loops=1)\n Index Cond: (sometable_id = 70334)\n Planning Time: 0.154 ms\n Execution Time: 0.195 ms\n(4 rows)\n\n=*# update sometable set sometable_id = 70334 where sometable_id = 70334;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!?>\n\nNow, let us see the next ID. 
Here, the entry is visible in the output of\npg_filedump.\n\n$ cat 12345.txt | grep -w 10819\nCOPY: 10819 2018-03-21 15:16:41.202277\n\nThe update still fails with the same error.\n\n=*# update sometable set sometable_id = 10819 where sometable_id = 10819;\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!?>\n\nHi,On Sun, Mar 14, 2021 at 10:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:[Dropping pgsql-general@ from the CC, because cross-posting triggers\nmoderation; sorry I didn't notice that on my first reply]\n\nOn Mon, Mar 15, 2021 at 2:05 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> On Sun, Mar 14, 2021 at 10:01 PM Avinash Kumar <avinash.vallarapu@gmail.com> wrote:\n>> Also the datatype is bigint\n\nOk.  Collation changes are the most common cause of index problems\nwhen upgrading OSes, but here we can rule that out if your index is on\nbigint.  So it seems like this is some other kind of corruption in\nyour database, or a bug in the deduplication code.I suspect the same. When i tried to perform a pg_filedump to see the entry of the ID in the index, it was strange that the entry did not exist in the Index. But, the SELECT using an Index only scan was still working okay. I have chosen the start and end page perfectly and there should not be any mistake there. Following may be helpful to understand what I meant. I have renamed the table and index names before adding it here. =# select pg_size_pretty(pg_relation_size('idx_id_mtime')) as size, relpages from pg_class where relname = 'idx_id_mtime'; size  | relpages -------+---------- 71 MB |     8439=# select pg_relation_filepath('idx_id_mtime'); pg_relation_filepath ---------------------- base/16404/346644309=# \\d+ idx_id_mtime                           Index \"public.idx_id_mtime\"  Column   |           Type           | Key? 
| Definition | Storage | Stats target -----------+--------------------------+------+------------+---------+-------------- sometable_id | bigint                   | yes  | sometable_id  | plain   |  mtime     \t  | timestamp with time zone | yes  | mtime      | plain   | btree, for table \"public.sometable\"$ pg_filedump -R 1 8439 -D bigint,timestamp /flash/berta13/base/16404/346644309 > 12345.txt$ cat 12345.txt | grep -w 70334--> No Output.We don't see the entry for the ID : 70334 in the output of pg_filedump. But, the SELECT statement is still using the same Index. =*# EXPLAIN select * from sometable where sometable_id = 70334;                                   QUERY PLAN                                   -------------------------------------------------------------------------------- Index Scan using idx_id_mtime on sometable  (cost=0.43..2.45 rows=1 width=869)   Index Cond: (sometable_id = 70334)(2 rows)=*# EXPLAIN ANALYZE select * from sometable where sometable_id = 70334;                                                        QUERY PLAN                                                        -------------------------------------------------------------------------------------------------------------------------- Index Scan using idx_id_mtime on sometable  (cost=0.43..2.45 rows=1 width=869) (actual time=0.166..0.168 rows=1 loops=1)   Index Cond: (sometable_id = 70334) Planning Time: 0.154 ms Execution Time: 0.195 ms(4 rows)=*# update sometable set sometable_id = 70334 where sometable_id = 70334;server closed the connection unexpectedly\tThis probably means the server terminated abnormally\tbefore or while processing the request.The connection to the server was lost. Attempting reset: Failed.!?>Now, let us see the next ID. Here, the entry is visible in the output of pg_filedump. 
$ cat 12345.txt | grep -w 10819COPY: 10819\t2018-03-21 15:16:41.202277The update still fails with the same error.=*# update sometable set sometable_id = 10819 where sometable_id = 10819;server closed the connection unexpectedly\tThis probably means the server terminated abnormally\tbefore or while processing the request.The connection to the server was lost. Attempting reset: Failed.!?>", "msg_date": "Sun, 14 Mar 2021 22:54:13 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Sun, Mar 14, 2021 at 6:54 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Following may be helpful to understand what I meant.\n>\n> I have renamed the table and index names before adding it here.\n\nIt should be possible to run amcheck on your database, which will\ndetect corrupt posting list tuples on Postgres 13. It's a contrib\nextension, so you must first run \"CREATE EXTENSION amcheck;\". 
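[Editor's note, not part of the original thread: as an alternative to running pg_filedump against the raw relation file, the same leaf pages can be inspected from inside the server with the contrib `pageinspect` extension. A sketch only; the index name and block number are illustrative, and the `dead`/`htid` columns of `bt_page_items` assume PostgreSQL 13.]

```sql
-- Hedged sketch: inspect btree pages in-server instead of with pg_filedump.
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Metapage: root block, tree level, deduplication status.
SELECT * FROM bt_metap('idx_id_mtime');

-- Items on one leaf block (block 1 here is illustrative); posting list
-- tuples from deduplication show multiple heap TIDs per index tuple.
SELECT itemoffset, ctid, itemlen, dead, htid
FROM bt_page_items('idx_id_mtime', 1);
```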
From\nthere, you can run a query like the following (you may want to\ncustomize this):\n\nSELECT bt_index_parent_check(index => c.oid, heapallindexed => true),\nc.relname,\nc.relpages\nFROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid\nJOIN pg_am am ON op.opcmethod = am.oid\nJOIN pg_class c ON i.indexrelid = c.oid\nJOIN pg_namespace n ON c.relnamespace = n.oid\nWHERE am.amname = 'btree'\n-- Don't check temp tables, which may be from another session:\nAND c.relpersistence != 't'\n-- Function may throw an error when this is omitted:\nAND c.relkind = 'i' AND i.indisready AND i.indisvalid\nORDER BY c.relpages DESC;\n\nIf this query takes too long to complete you may find it useful to add\nsomething to limit the indexes check, such as: AND n.nspname =\n'public' -- that change to the SQL will make the query just test\nindexes from the public schema.\n\nDo \"SET client_min_messages=DEBUG1 \" to get a kind of rudimentary\nprogress indicator, if that seems useful to you.\n\nThe docs have further information on what this bt_index_parent_check\nfunction does, should you need it:\nhttps://www.postgresql.org/docs/13/amcheck.html\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 Mar 2021 19:23:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "Hi,\n\nOn Sun, Mar 14, 2021 at 11:24 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Sun, Mar 14, 2021 at 6:54 PM Avinash Kumar\n> <avinash.vallarapu@gmail.com> wrote:\n> > Following may be helpful to understand what I meant.\n> >\n> > I have renamed the table and index names before adding it here.\n>\n> It should be possible to run amcheck on your database, which will\n> detect corrupt posting list tuples on Postgres 13. It's a contrib\n> extension, so you must first run \"CREATE EXTENSION amcheck;\". 
From\n> there, you can run a query like the following (you may want to\n> customize this):\n>\n> SELECT bt_index_parent_check(index => c.oid, heapallindexed => true),\n> c.relname,\n> c.relpages\n> FROM pg_index i\n> JOIN pg_opclass op ON i.indclass[0] = op.oid\n> JOIN pg_am am ON op.opcmethod = am.oid\n> JOIN pg_class c ON i.indexrelid = c.oid\n> JOIN pg_namespace n ON c.relnamespace = n.oid\n> WHERE am.amname = 'btree'\n> -- Don't check temp tables, which may be from another session:\n> AND c.relpersistence != 't'\n> -- Function may throw an error when this is omitted:\n> AND c.relkind = 'i' AND i.indisready AND i.indisvalid\n> ORDER BY c.relpages DESC;\n>\n> If this query takes too long to complete you may find it useful to add\n> something to limit the indexes check, such as: AND n.nspname =\n> 'public' -- that change to the SQL will make the query just test\n> indexes from the public schema.\n>\n> Do \"SET client_min_messages=DEBUG1 \" to get a kind of rudimentary\n> progress indicator, if that seems useful to you.\n>\nI see that there are 26 Indexes for which there are 100 to thousands of\nentries similar to the following. 
All are of course btree indexes.\n\npsql:amchecksql.sql:17: DEBUG: leaf block 1043751 of index \"idx_id_mtime\"\nhas no first data item\n\nAnd one error as follows.\n\npsql:amchecksql.sql:17: ERROR: down-link lower bound invariant violated\nfor index \"some_other_index\"\n\n>\n> The docs have further information on what this bt_index_parent_check\n> function does, should you need it:\n> https://www.postgresql.org/docs/13/amcheck.html\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nRegards,\nAvi.\n\nHi,On Sun, Mar 14, 2021 at 11:24 PM Peter Geoghegan <pg@bowt.ie> wrote:On Sun, Mar 14, 2021 at 6:54 PM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Following may be helpful to understand what I meant.\n>\n> I have renamed the table and index names before adding it here.\n\nIt should be possible to run amcheck on your database, which will\ndetect corrupt posting list tuples on Postgres 13. It's a contrib\nextension, so you must first run \"CREATE EXTENSION amcheck;\". From\nthere, you can run a query like the following (you may want to\ncustomize this):\n\nSELECT bt_index_parent_check(index => c.oid, heapallindexed => true),\nc.relname,\nc.relpages\nFROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid\nJOIN pg_am am ON op.opcmethod = am.oid\nJOIN pg_class c ON i.indexrelid = c.oid\nJOIN pg_namespace n ON c.relnamespace = n.oid\nWHERE am.amname = 'btree'\n-- Don't check temp tables, which may be from another session:\nAND c.relpersistence != 't'\n-- Function may throw an error when this is omitted:\nAND c.relkind = 'i' AND i.indisready AND i.indisvalid\nORDER BY c.relpages DESC;\n\nIf this query takes too long to complete you may find it useful to add\nsomething to limit the indexes check, such as: AND n.nspname =\n'public' -- that change to the SQL will make the query just test\nindexes from the public schema.\n\nDo \"SET client_min_messages=DEBUG1 \" to get a kind of rudimentary\nprogress indicator, if that seems useful to you.I see that there are 26 Indexes 
for which there are 100 to thousands of entries similar to the following. All are of course btree indexes. psql:amchecksql.sql:17: DEBUG:  leaf block 1043751 of index \"idx_id_mtime\" has no first data itemAnd one error as follows.psql:amchecksql.sql:17: ERROR:  down-link lower bound invariant violated for index \"some_other_index\" \n\nThe docs have further information on what this bt_index_parent_check\nfunction does, should you need it:\nhttps://www.postgresql.org/docs/13/amcheck.html\n\n-- \nPeter Geoghegan\n-- Regards,Avi.", "msg_date": "Mon, 15 Mar 2021 10:56:14 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Mon, Mar 15, 2021 at 6:56 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> psql:amchecksql.sql:17: DEBUG: leaf block 1043751 of index \"idx_id_mtime\" has no first data item\n\nThat one is harmless.\n\n> And one error as follows.\n>\n> psql:amchecksql.sql:17: ERROR: down-link lower bound invariant violated for index \"some_other_index\"\n\nThat indicates corruption. Can you tie this back to the crash? Is it\nthe same index?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 15 Mar 2021 09:18:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "Hi,\n\nOn Mon, Mar 15, 2021 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Mon, Mar 15, 2021 at 6:56 AM Avinash Kumar\n> <avinash.vallarapu@gmail.com> wrote:\n> > psql:amchecksql.sql:17: DEBUG: leaf block 1043751 of index\n> \"idx_id_mtime\" has no first data item\n>\n> That one is harmless.\n>\n> > And one error as follows.\n> >\n> > psql:amchecksql.sql:17: ERROR: down-link lower bound invariant violated\n> for index \"some_other_index\"\n>\n> That indicates corruption. Can you tie this back to the crash? 
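[Editor's note, not part of the original thread: once a specific index has been flagged, it can be re-verified on its own rather than re-running the whole-schema query above. A minimal sketch using the same function and arguments Peter's query uses; `some_other_index` stands in for the index that failed.]

```sql
-- Hedged sketch: verify a single suspect index.
CREATE EXTENSION IF NOT EXISTS amcheck;

SELECT bt_index_parent_check(index => 'some_other_index'::regclass,
                             heapallindexed => true);
```

`heapallindexed => true` additionally confirms every heap tuple has an index entry, which is the check that would have caught the missing `70334` entry seen in the pg_filedump output earlier in the thread.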
Is it\n> the same index?\n>\nNo, that's not the same index. The Index discussed in the previous\nmessages shows the following output.\n\nDEBUG: verifying consistency of tree structure for index \"idx_id_mtime\"\nwith cross-level checks\nDEBUG: verifying level 2 (true root level)\nDEBUG: verifying level 1\nDEBUG: verifying level 0 (leaf level)\nDEBUG: verifying that tuples from index \"idx_id_mtime\" are present in\n\"player\"\nDEBUG: finished verifying presence of 1966412 tuples from table \"player\"\nwith bitset 29.89% set\nLOG: duration: 3341.755 ms statement: SELECT bt_index_parent_check(index\n=> c.oid, heapallindexed => true), c.relname, c.relpages FROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid JOIN pg_am am ON op.opcmethod\n= am.oid JOIN pg_class c ON i.indexrelid = c.oid JOIN pg_namespace n ON\nc.relnamespace = n.oid WHERE am.amname = 'btree' AND c.relpersistence !=\n't' AND c.relkind = 'i' AND i.indisready AND i.indisvalid AND indexrelid =\n80774 AND n.nspname = 'public' ORDER BY c.relpages DESC;\n bt_index_parent_check | relname | relpages\n-----------------------+-----------------+----------\n | idx_id_mtime | 8439\n(1 row)\n\n\n> --\n> Peter Geoghegan\n>\n\n\n-- \nRegards,\nAvi.\n\nHi,On Mon, Mar 15, 2021 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:On Mon, Mar 15, 2021 at 6:56 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> psql:amchecksql.sql:17: DEBUG:  leaf block 1043751 of index \"idx_id_mtime\" has no first data item\n\nThat one is harmless.\n\n> And one error as follows.\n>\n> psql:amchecksql.sql:17: ERROR:  down-link lower bound invariant violated for index \"some_other_index\"\n\nThat indicates corruption. Can you tie this back to the crash? Is it\nthe same index?No, that's not the same index.  The Index discussed in the previous messages shows the following output. 
DEBUG:  verifying consistency of tree structure for index \"idx_id_mtime\" with cross-level checksDEBUG:  verifying level 2 (true root level)DEBUG:  verifying level 1DEBUG:  verifying level 0 (leaf level)DEBUG:  verifying that tuples from index \"idx_id_mtime\" are present in \"player\"DEBUG:  finished verifying presence of 1966412 tuples from table \"player\" with bitset 29.89% setLOG:  duration: 3341.755 ms  statement: SELECT bt_index_parent_check(index => c.oid, heapallindexed => true), c.relname, c.relpages FROM pg_index i JOIN pg_opclass op ON i.indclass[0] = op.oid JOIN pg_am am ON op.opcmethod = am.oid JOIN pg_class c ON i.indexrelid = c.oid JOIN pg_namespace n ON c.relnamespace = n.oid WHERE am.amname = 'btree' AND c.relpersistence != 't' AND c.relkind = 'i' AND i.indisready AND i.indisvalid AND indexrelid = 80774 AND n.nspname = 'public' ORDER BY c.relpages DESC; bt_index_parent_check |     relname     | relpages -----------------------+-----------------+----------                       | idx_id_mtime |     8439(1 row)\n\n-- \nPeter Geoghegan\n-- Regards,Avi.", "msg_date": "Mon, 15 Mar 2021 15:21:33 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Mon, Mar 15, 2021 at 3:21 PM Avinash Kumar <avinash.vallarapu@gmail.com>\nwrote:\n\n> Hi,\n>\n> On Mon, Mar 15, 2021 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n>> On Mon, Mar 15, 2021 at 6:56 AM Avinash Kumar\n>> <avinash.vallarapu@gmail.com> wrote:\n>> > psql:amchecksql.sql:17: DEBUG: leaf block 1043751 of index\n>> \"idx_id_mtime\" has no first data item\n>>\n>> That one is harmless.\n>>\n>> > And one error as follows.\n>> >\n>> > psql:amchecksql.sql:17: ERROR: down-link lower bound invariant\n>> violated for index \"some_other_index\"\n>>\n>> That indicates corruption. Can you tie this back to the crash? 
Is it\n>> the same index?\n>>\n> No, that's not the same index. The Index discussed in the previous\n> messages shows the following output.\n>\n> DEBUG: verifying consistency of tree structure for index \"idx_id_mtime\"\n> with cross-level checks\n> DEBUG: verifying level 2 (true root level)\n> DEBUG: verifying level 1\n> DEBUG: verifying level 0 (leaf level)\n> DEBUG: verifying that tuples from index \"idx_id_mtime\" are present in\n> \"player\"\n> DEBUG: finished verifying presence of 1966412 tuples from table \"player\"\n> with bitset 29.89% set\n> LOG: duration: 3341.755 ms statement: SELECT bt_index_parent_check(index\n> => c.oid, heapallindexed => true), c.relname, c.relpages FROM pg_index i\n> JOIN pg_opclass op ON i.indclass[0] = op.oid JOIN pg_am am ON op.opcmethod\n> = am.oid JOIN pg_class c ON i.indexrelid = c.oid JOIN pg_namespace n ON\n> c.relnamespace = n.oid WHERE am.amname = 'btree' AND c.relpersistence !=\n> 't' AND c.relkind = 'i' AND i.indisready AND i.indisvalid AND indexrelid =\n> 80774 AND n.nspname = 'public' ORDER BY c.relpages DESC;\n> bt_index_parent_check | relname | relpages\n> -----------------------+-----------------+----------\n> | idx_id_mtime | 8439\n> (1 row)\n>\n>\n>> --\n>> Peter Geoghegan\n>>\n>\n>\n> --\n> Regards,\n> Avi.\n>\n\nI am afraid that it looks to me like a deduplication bug but not sure how\nthis can be pin-pointed. 
If there is something I could do to determine\nthat, I would be more than happy.\n\n-- \nRegards,\nAvi\n\nOn Mon, Mar 15, 2021 at 3:21 PM Avinash Kumar <avinash.vallarapu@gmail.com> wrote:Hi,On Mon, Mar 15, 2021 at 1:18 PM Peter Geoghegan <pg@bowt.ie> wrote:On Mon, Mar 15, 2021 at 6:56 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> psql:amchecksql.sql:17: DEBUG:  leaf block 1043751 of index \"idx_id_mtime\" has no first data item\n\nThat one is harmless.\n\n> And one error as follows.\n>\n> psql:amchecksql.sql:17: ERROR:  down-link lower bound invariant violated for index \"some_other_index\"\n\nThat indicates corruption. Can you tie this back to the crash? Is it\nthe same index?No, that's not the same index.  The Index discussed in the previous messages shows the following output. DEBUG:  verifying consistency of tree structure for index \"idx_id_mtime\" with cross-level checksDEBUG:  verifying level 2 (true root level)DEBUG:  verifying level 1DEBUG:  verifying level 0 (leaf level)DEBUG:  verifying that tuples from index \"idx_id_mtime\" are present in \"player\"DEBUG:  finished verifying presence of 1966412 tuples from table \"player\" with bitset 29.89% setLOG:  duration: 3341.755 ms  statement: SELECT bt_index_parent_check(index => c.oid, heapallindexed => true), c.relname, c.relpages FROM pg_index i JOIN pg_opclass op ON i.indclass[0] = op.oid JOIN pg_am am ON op.opcmethod = am.oid JOIN pg_class c ON i.indexrelid = c.oid JOIN pg_namespace n ON c.relnamespace = n.oid WHERE am.amname = 'btree' AND c.relpersistence != 't' AND c.relkind = 'i' AND i.indisready AND i.indisvalid AND indexrelid = 80774 AND n.nspname = 'public' ORDER BY c.relpages DESC; bt_index_parent_check |     relname     | relpages -----------------------+-----------------+----------                       | idx_id_mtime |     8439(1 row)\n\n-- \nPeter Geoghegan\n-- Regards,Avi.I am afraid that it looks to me like a deduplication bug but not sure how this can be pin-pointed. 
If there is something I could do to determine that, I would be more than happy. -- Regards,Avi", "msg_date": "Tue, 16 Mar 2021 09:01:29 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Tue, Mar 16, 2021 at 5:01 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> I am afraid that it looks to me like a deduplication bug but not sure how this can be pin-pointed. If there is something I could do to determine that, I would be more than happy.\n\nThat cannot be ruled out, but I don't consider it to be the most\nlikely explanation. The index in question passes amcheck verification,\nwhich includes verification of the posting list tuple structure, and\neven includes making sure the index has an entry for each row from the\ntable. It's highly unlikely that it is corrupt, and it's hard to see\nhow you get from a non-corrupt index to the segfault. At the same time\nwe see that some other index is corrupt -- it fails amcheck due to a\ncross-level inconsistency, which is very unlikely to be related to\ndeduplication in any way. It's hard to believe that the problem is\nsquarely with _bt_swap_posting().\n\nDid you actually run amcheck on the failed-over server, not the original server?\n\nNote that you can disable deduplication selectively -- perhaps doing\nso will make it possible to isolate the issue. Something like this\nshould do it (you need to reindex here to actually change the on-disk\nrepresentation to not have any posting list tuples from\ndeduplication):\n\nalter index idx_id_mtime set (deduplicate_items = off);\nreindex index idx_id_mtime;\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Mar 2021 09:44:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." 
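[Editor's note, not part of the original thread: a variant of Peter's suggestion for a busy server. A sketch under the assumption that the server is PostgreSQL 12 or later, where `REINDEX CONCURRENTLY` is available; the plain `REINDEX INDEX` form Peter gave takes a stronger lock but is otherwise equivalent.]

```sql
-- Hedged sketch: rebuild without posting list tuples, with less blocking.
ALTER INDEX idx_id_mtime SET (deduplicate_items = off);
REINDEX INDEX CONCURRENTLY idx_id_mtime;
```

If the crash stops reproducing after this, that isolates the problem to the deduplicated on-disk representation, as suggested above.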
}, { "msg_contents": "Hi,\n\n\nOn Tue, Mar 16, 2021 at 1:44 PM Peter Geoghegan <pg@bowt.ie> wrote:\n\n> On Tue, Mar 16, 2021 at 5:01 AM Avinash Kumar\n> <avinash.vallarapu@gmail.com> wrote:\n> > I am afraid that it looks to me like a deduplication bug but not sure\n> how this can be pin-pointed. If there is something I could do to determine\n> that, I would be more than happy.\n>\n> That cannot be ruled out, but I don't consider it to be the most\n> likely explanation. The index in question passes amcheck verification,\n> which includes verification of the posting list tuple structure, and\n> even includes making sure the index has an entry for each row from the\n> table. It's highly unlikely that it is corrupt, and it's hard to see\n> how you get from a non-corrupt index to the segfault. At the same time\n> we see that some other index is corrupt -- it fails amcheck due to a\n> cross-level inconsistency, which is very unlikely to be related to\n> deduplication in any way. It's hard to believe that the problem is\n> squarely with _bt_swap_posting().\n>\n> Did you actually run amcheck on the failed-over server, not the original\n> server?\n>\nYes, it was on the failover-over server where the issue is currently seen.\nTook a snapshot of the data directory so that the issue can be analyzed.\n\n>\n> Note that you can disable deduplication selectively -- perhaps doing\n> so will make it possible to isolate the issue. Something like this\n> should do it (you need to reindex here to actually change the on-disk\n> representation to not have any posting list tuples from\n> deduplication):\n>\n> alter index idx_id_mtime set (deduplicate_items = off);\n> reindex index idx_id_mtime;\n>\nI can do this. But, to add here, when we do a pg_repack or rebuild of\nIndexes, automatically this is resolved. 
But, not sure if we get the same\nissue again.\n\n>\n> --\n> Peter Geoghegan\n>\n\n\n-- \nRegards,\nAvi.\n\nHi,On Tue, Mar 16, 2021 at 1:44 PM Peter Geoghegan <pg@bowt.ie> wrote:On Tue, Mar 16, 2021 at 5:01 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> I am afraid that it looks to me like a deduplication bug but not sure how this can be pin-pointed. If there is something I could do to determine that, I would be more than happy.\n\nThat cannot be ruled out, but I don't consider it to be the most\nlikely explanation. The index in question passes amcheck verification,\nwhich includes verification of the posting list tuple structure, and\neven includes making sure the index has an entry for each row from the\ntable. It's highly unlikely that it is corrupt, and it's hard to see\nhow you get from a non-corrupt index to the segfault. At the same time\nwe see that some other index is corrupt -- it fails amcheck due to a\ncross-level inconsistency, which is very unlikely to be related to\ndeduplication in any way. It's hard to believe that the problem is\nsquarely with _bt_swap_posting().\n\nDid you actually run amcheck on the failed-over server, not the original server?Yes, it was on the failover-over server where the issue is currently seen. Took a snapshot of the data directory so that the issue can be analyzed.  \n\nNote that you can disable deduplication selectively -- perhaps doing\nso will make it possible to isolate the issue. Something like this\nshould do it (you need to reindex here to actually change the on-disk\nrepresentation to not have any posting list tuples from\ndeduplication):\n\nalter index idx_id_mtime set (deduplicate_items = off);\nreindex index idx_id_mtime;I can do this. But, to add here, when we do a pg_repack or rebuild of Indexes, automatically this is resolved. But, not sure if we get the same issue again.  
\n\n-- \nPeter Geoghegan\n-- Regards,Avi.", "msg_date": "Tue, 16 Mar 2021 13:50:10 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Tue, Mar 16, 2021 at 9:50 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> Yes, it was on the failover-over server where the issue is currently seen. Took a snapshot of the data directory so that the issue can be analyzed.\n\nI would be very cautious when using LVM snapshots with a Postgres data\ndirectory, or VM-based snapshotting tools. There are many things that\ncan go wrong with these tools, which are usually not sensitive to the\nvery specific requirements of a database system like Postgres (e.g.\ninconsistencies between WAL and data files can emerge in many\nscenarios).\n\nMy general recommendation is to avoid these tools completely --\nconsistently use a backup solution like pgBackrest instead.\n\nBTW, running pg_repack is something that creates additional risk of\ndatabase corruption, at least to some degree. That seems less likely\nto have been the problem here (I think that it's probably something\nwith snapshots). Something to consider.\n\n> I can do this. But, to add here, when we do a pg_repack or rebuild of Indexes, automatically this is resolved.\n\nYour bug report was useful to me, because it made me realize that the\nposting list split code in _bt_swap_posting() is unnecessarily\ntrusting of the on-disk data -- especially compared to _bt_split(),\nthe page split code. While I consider it unlikely that the problem\nthat you see is truly a bug in Postgres, it is still true that the\ncrash that you saw should probably have just been an error.\n\nWe don't promise that the database cannot crash even with corrupt\ndata, but we do try to avoid it whenever possible. I may be able to\nharden _bt_swap_posting(), to make failures like this a little more\nfriendly. 
It's an infrequently hit code path, so we can easily afford\nto make the code more careful/less trusting.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Mar 2021 10:02:46 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> ... It's hard to believe that the problem is\n> squarely with _bt_swap_posting().\n\nIIUC, the problem is seen on a replica server but not the primary?\nIn that case, my thoughts would run towards a bug in WAL log creation\nor replay, causing the index contents to be different/wrong on the\nreplica.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Mar 2021 14:08:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Tue, Mar 16, 2021 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > ... It's hard to believe that the problem is\n> > squarely with _bt_swap_posting().\n>\n> IIUC, the problem is seen on a replica server but not the primary?\n> In that case, my thoughts would run towards a bug in WAL log creation\n> or replay, causing the index contents to be different/wrong on the\n> replica.\n>\nRight, observed after the replica Server after it got promoted.\nThe replica is of the same Postgres minor version - 13.1 but, the OS is\nUbuntu 16 on Primary and Ubuntu 20 on Replica (that got promoted).\nReplica was setup using a backup taken using pg_basebackup.\n\nI can share any detail that would help here.\n\n\n> regards, tom lane\n>\n\n\n-- \nRegards,\nAvi\n\n", "msg_date": "Tue, 16 Mar 2021 15:20:34 -0300", "msg_from": "Avinash Kumar <avinash.vallarapu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Tue, Mar 16, 2021 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > ... It's hard to believe that the problem is\n> > squarely with _bt_swap_posting().\n>\n> IIUC, the problem is seen on a replica server but not the primary?\n> In that case, my thoughts would run towards a bug in WAL log creation\n> or replay, causing the index contents to be different/wrong on the\n> replica.\n\nMy remarks were intended to include problems during recovery\n(_bt_swap_posting() is run inside REDO routines). Though I did\nconsider recovery specifically when thinking through the problem.\n\nMy assessment is that the index is highly unlikely to be corrupt\n(whether it happened during recovery or at some other time), because\nit passes validation by bt_index_parent_check(), with the optional\nheapallindexed index-matches-table verification option enabled. 
This\nincludes exhaustive verification of posting list tuple invariants.\n\nAnything is possible, but I find it easier to believe that the issue\nis somewhere else -- we see the problem in _bt_swap_posting() because\nit happens to go further than other code in trusting that the tuple\nisn't corrupt (which it shouldn't). Another unrelated index *was*\nreported corrupt by amcheck, though the error in question does not\nsuggest an issue with deduplication.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Mar 2021 11:23:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." }, { "msg_contents": "On Tue, Mar 16, 2021 at 11:20 AM Avinash Kumar\n<avinash.vallarapu@gmail.com> wrote:\n> I can share any detail that would help here.\n\nI would like to know what you see when you run a slightly modified\nversion of the same amcheck query. The same query as before, but with\nthe call to bt_index_parent_check() replaced with a call to\nbt_index_check(). Can you do that, please?\n\nThis is what I mean:\n\nSELECT bt_index_check(index => c.oid, heapallindexed => true),\nc.relname,\nc.relpages\nFROM pg_index i\nJOIN pg_opclass op ON i.indclass[0] = op.oid\nJOIN pg_am am ON op.opcmethod = am.oid\nJOIN pg_class c ON i.indexrelid = c.oid\nJOIN pg_namespace n ON c.relnamespace = n.oid\nWHERE am.amname = 'btree'\n-- Don't check temp tables, which may be from another session:\nAND c.relpersistence != 't'\n-- Function may throw an error when this is omitted:\nAND c.relkind = 'i' AND i.indisready AND i.indisvalid\nORDER BY c.relpages DESC;\n\nThe error that you reported was a cross-level invariant violation,\nfrom one of the tests that bt_index_parent_check() performs but\nbt_index_check() does not perform (the former performs checks that are\na superset of the latter). 
It's possible that we'll get a more\ninteresting error message from bt_index_check() here, because it might\ngo on for a bit longer -- it might conceivably reach a corrupt posting\nlist tuple on the leaf level, and report it as such.\n\nOf course we don't see any corruption in the index that you had the\ncrash with at all, but it can't hurt to do this as well -- just in\ncase the issue is transient or something.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 16 Mar 2021 11:31:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Postgres crashes at memcopy() after upgrade to PG 13." } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing the patch for parallel REFRESH MATERIALIZED VIEW, I\nnoticed that select_parallel.sql and write_parallel.sql believe that\n(1) the tests are supposed to work with serializable as a default\nisolation level, and (2) parallelism would be inhibited by that, so\nthey'd better use something else explicitly. Here's a patch to update\nthat second thing in light of commit bb16aba5. I don't think it\nmatters enough to bother back-patching it.\n\nHowever, since commit 862ef372d6b, there *is* one test that fails if\nyou run make installcheck against a cluster running with -c\ndefault_transaction_isolation=serializable: transaction.sql. Is that\na mistake? Is it a goal to be able to run this test suite against all\n3 isolation levels?\n\n@@ -1032,7 +1032,7 @@\n SHOW transaction_isolation; -- out of transaction block\n transaction_isolation\n -----------------------\n- read committed\n+ serializable\n (1 row)", "msg_date": "Mon, 15 Mar 2021 17:24:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Mon, Mar 15, 2021 at 9:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> While reviewing the patch for parallel REFRESH MATERIALIZED VIEW, I\n> noticed that select_parallel.sql and write_parallel.sql believe that\n> (1) the tests are supposed to work with serializable as a default\n> isolation level, and (2) parallelism would be inhibited by that, so\n> they'd better use something else explicitly. Here's a patch to update\n> that second thing in light of commit bb16aba5. I don't think it\n> matters enough to bother back-patching it.\n\n+1, patch basically LGTM. I have one point - do we also need to remove\n\"begin isolation level repeatable read;\" in aggreates.sql, explain.sql\nand insert_parallel.sql? 
And in insert_parallel.sql, the comment also\nsays \"Serializable isolation would disable parallel query\", which is\nnot true after bb16aba5. Do we need to change that too?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Mar 2021 10:44:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Mon, Mar 15, 2021 at 6:14 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> On Mon, Mar 15, 2021 at 9:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > While reviewing the patch for parallel REFRESH MATERIALIZED VIEW, I\n> > noticed that select_parallel.sql and write_parallel.sql believe that\n> > (1) the tests are supposed to work with serializable as a default\n> > isolation level, and (2) parallelism would be inhibited by that, so\n> > they'd better use something else explicitly. Here's a patch to update\n> > that second thing in light of commit bb16aba5. I don't think it\n> > matters enough to bother back-patching it.\n>\n> +1, patch basically LGTM. I have one point - do we also need to remove\n> \"begin isolation level repeatable read;\" in aggreates.sql, explain.sql\n> and insert_parallel.sql? And in insert_parallel.sql, the comment also\n> says \"Serializable isolation would disable parallel query\", which is\n> not true after bb16aba5. Do we need to change that too?\n\nYeah, you're right. 
That brings us to the attached.", "msg_date": "Mon, 15 Mar 2021 18:33:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Mon, Mar 15, 2021 at 11:04 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Mar 15, 2021 at 6:14 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > On Mon, Mar 15, 2021 at 9:54 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > While reviewing the patch for parallel REFRESH MATERIALIZED VIEW, I\n> > > noticed that select_parallel.sql and write_parallel.sql believe that\n> > > (1) the tests are supposed to work with serializable as a default\n> > > isolation level, and (2) parallelism would be inhibited by that, so\n> > > they'd better use something else explicitly. Here's a patch to update\n> > > that second thing in light of commit bb16aba5. I don't think it\n> > > matters enough to bother back-patching it.\n> >\n> > +1, patch basically LGTM. I have one point - do we also need to remove\n> > \"begin isolation level repeatable read;\" in aggreates.sql, explain.sql\n> > and insert_parallel.sql? And in insert_parallel.sql, the comment also\n> > says \"Serializable isolation would disable parallel query\", which is\n> > not true after bb16aba5. Do we need to change that too?\n>\n> Yeah, you're right. That brings us to the attached.\n\nThanks. v2 LGTM, both make check and make check-world passes on my dev system.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 15 Mar 2021 12:29:56 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Mon, Mar 15, 2021 at 8:00 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. 
v2 LGTM, both make check and make check-world passes on my dev system.\n\nPushed. Thanks!\n\n\n", "msg_date": "Mon, 15 Mar 2021 23:35:10 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Mon, Mar 15, 2021 at 5:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> However, since commit 862ef372d6b, there *is* one test that fails if\n> you run make installcheck against a cluster running with -c\n> default_transaction_isolation=serializable: transaction.sql. Is that\n> a mistake? Is it a goal to be able to run this test suite against all\n> 3 isolation levels?\n\nHere's a fix.", "msg_date": "Mon, 15 Mar 2021 23:51:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Mon, Mar 15, 2021 at 5:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> However, since commit 862ef372d6b, there *is* one test that fails if\n>> you run make installcheck against a cluster running with -c\n>> default_transaction_isolation=serializable: transaction.sql. Is that\n>> a mistake? 
Is it a goal to be able to run this test suite against all\n>> 3 isolation levels?\n\n> Here's a fix.\n\nUsually, if we issue a SET in the regression tests, we explicitly RESET\nas soon thereafter as practical, so as to have a well-defined scope\nwhere the script is running under unusual conditions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Mar 2021 10:28:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "On Tue, Mar 16, 2021 at 3:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > On Mon, Mar 15, 2021 at 5:24 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >> However, since commit 862ef372d6b, there *is* one test that fails if\n> >> you run make installcheck against a cluster running with -c\n> >> default_transaction_isolation=serializable: transaction.sql. Is that\n> >> a mistake? Is it a goal to be able to run this test suite against all\n> >> 3 isolation levels?\n>\n> > Here's a fix.\n>\n> Usually, if we issue a SET in the regression tests, we explicitly RESET\n> as soon thereafter as practical, so as to have a well-defined scope\n> where the script is running under unusual conditions.\n\nOh, of course. Thanks.\n\nI was wrong to blame that commit, and there are many other tests that\nfail in the back branches. 
But since we were down to just one, I went\nahead and fixed this in the master branch only.\n\n\n", "msg_date": "Wed, 17 Mar 2021 17:28:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Regression tests vs SERIALIZABLE" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Tue, Mar 16, 2021 at 3:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Usually, if we issue a SET in the regression tests, we explicitly RESET\n>> as soon thereafter as practical, so as to have a well-defined scope\n>> where the script is running under unusual conditions.\n\n> Oh, of course. Thanks.\n\n> I was wrong to blame that commit, and there are many other tests that\n> fail in the back branches. But since we were down to just one, I went\n> ahead and fixed this in the master branch only.\n\nMakes sense to me. Committed patch looks good.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 00:31:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Regression tests vs SERIALIZABLE" } ]
[ { "msg_contents": "Hello,\n\nForgive me for probably naive questions, being so talkative like the\nfollowing. But the less one knows the more one must explain. And I don't\nknow much regarding RLS.\n\n1. Some time ago I've implemented in my schema a poore mans' RLS using\nthe rule system.\n\n2. like half a year ago I've discovered postgreSQL native implementation\nwith policies, so I've decided to give it a try.\n\n3. to my ultimate surprise, this turned out to be like 10 times slower.\nSo I abondened the project.\n\n4. but it bites me, one question in particular .... which requires the\nlengthy explanations:\n\n5. My experiments with RLS was like following:\n\t- I've implemented a STABLE function, that returns INTEGER 1/0\n\t- I've linked that function as POLICY to my tables\n\t- I've GRANTED PUBLIC access to those tables\n\t---> and all works as predicted.... only slow (10x slower!).\n\nAs I understand it, RLS took time to get implemented in postgreSQL for\nmay reasons, one of which was the requirement to prevent \"not belonging\"\nrows from leaking into the query results of library buffers. Eventually,\nthis was somehow achieved.\n\nFMHE (for my eyes) the most striking change the policy (as of step 5)\nintroduces is a change from \"access denied\" error, which GRANT would\nraise when it declines access, to a \"silent omission\", which POLICY does\n... AT THE SAME SITUATION.\n\nThis lead me to the following conclusions:\n1. in the pass (like I was implementing poor mans RLS with rules), I\nfound it very useful for some GRANTs to silently omit access to object\ninstead of raising an error. But this is impossible, isn't it?\n\n2. in particular, I thought I could partition a table (using\ninheritance) and do RLS on GRANT/REVOKE into individual partitions. It\nwould certainly hard limit any rows leaking into library buffers,\nparticularly if partitions are on separate tablespaces. 
But\nunfortunately GRANT/REVOKE did raise an error, (and doesn't simply\nsilently ignore those not granted).\n\n3. So, what if one could change the way GRANT/REVOKE behave when denying\naccess?\n\n4. one feature necessary for such scenario to work, is the ability to\nselect one particular (single) ROLE, from all the ROLEs a particular\nsession_user has, that would \"solely\" be used for RLS checking of such\n\"silent GRANT/REVOKE\" validates. (a multitenant database). I mean here\nsomething along the lines of: \"SET ROLE XXXX [FOR RLS]\".\n\n5. the above should come in pair with \"CHECK (RLS = XXXX)\" at partition\nlevel. This way, when postgresql-session does NOT HAVE the \"role for\nrls\" set, all GRANT/REVOKE would work as usual, i.e.: ignore that CHECK\nand normally raise \"access denied\".\n\nIMHO, such an implementation would not suffer the performance hit that the current\nimplementation of POLICIES does.\n\nSo, I have two questions here:\n1. does the above scenario look safe enough regarding unauthorised\nrows leaking (and as substitute for POLICIES)?\n2. would it be feasible to add such variant of RLS, should one attempt\nto implement it? (i.e. would the community accept it?).\n\nThose questions come from my bad experience with POLICY performance.\nUnfortunately I did that test like half a year ago, so I don't have\nresults at hand to quote them, but should anybody be interested, I may\ntry to do it again in a couple of days.\n\nwith best regards,\n\n-R\n\n\n", "msg_date": "Mon, 15 Mar 2021 08:57:35 +0100", "msg_from": "Rafal Pietrak <rafal@ztk-rp.eu>", "msg_from_op": true, "msg_subject": "row level security (RLS)" }, { "msg_contents": "Hello,\n\nI was told that pgsql-hackers is not the right list for the following\nquestions. So I'm reposting to general.\n\nDoes anybody have an opinion regarding the following questions?\n\n\n-------------------------------\nHello,\n\nForgive me for probably naive questions, being so talkative like the\nfollowing. 
But the less one knows the more one must explain. And I don't\nknow much regarding RLS.\n\n1. Some time ago I've implemented in my schema a poor man's RLS using\nthe rule system.\n\n2. like half a year ago I've discovered postgreSQL native implementation\nwith policies, so I've decided to give it a try.\n\n3. to my ultimate surprise, this turned out to be like 10 times slower.\nSo I abandoned the project.\n\n4. but it bites me, one question in particular .... which requires the\nlengthy explanations:\n\n5. My experiments with RLS was like following:\n\t- I've implemented a STABLE function, that returns INTEGER 1/0\n\t- I've linked that function as POLICY to my tables\n\t- I've GRANTED PUBLIC access to those tables\n\t---> and all works as predicted.... only slow (10x slower!).\n\nAs I understand it, RLS took time to get implemented in postgreSQL for\nmany reasons, one of which was the requirement to prevent \"not belonging\"\nrows from leaking into the query results of library buffers. Eventually,\nthis was somehow achieved.\n\nFMHE (for my eyes) the most striking change the policy (as of step 5)\nintroduces is a change from \"access denied\" error, which GRANT would\nraise when it declines access, to a \"silent omission\", which POLICY does\n... AT THE SAME SITUATION.\n\nThis led me to the following conclusions:\n1. in the past (like I was implementing poor man's RLS with rules), I\nfound it very useful for some GRANTs to silently omit access to object\ninstead of raising an error. But this is impossible, isn't it?\n\n2. in particular, I thought I could partition a table (using\ninheritance) and do RLS on GRANT/REVOKE into individual partitions. It\nwould certainly hard limit any rows leaking into library buffers,\nparticularly if partitions are on separate tablespaces. But\nunfortunately GRANT/REVOKE did raise an error, (and doesn't simply\nsilently ignore those not granted).\n\n3. 
So, what if one could change the way GRANT/REVOKE behave when denying\naccess?\n\n4. one feature necessary for such scenario to work, is the ability to\nselect one particular (single) ROLE, from all the ROLEs a particular\nsession_user has, that would \"solely\" be used for RLS checking of such\n\"silent GRANT/REVOKE\" validates. (a multitenant database). I mean here\nsomething along the lines of: \"SET ROLE XXXX [FOR RLS]\".\n\n5. the above should come in pair with \"CHECK (RLS = XXXX)\" at partition\nlevel. This way, when postgresql-session does NOT HAVE the \"role for\nrls\" set, all GRANT/REVOKE would work as usual, i.e.: ignore that CHECK\nand normally raise \"access denied\".\n\nIMHO, such an implementation would not suffer the performance hit that the current\nimplementation of POLICIES does.\n\nSo, I have two questions here:\n1. does the above scenario look safe enough regarding unauthorised\nrows leaking (and as substitute for POLICIES)?\n2. would it be feasible to add such variant of RLS, should one attempt\nto implement it? (i.e. would the community accept it?).\n\nThose questions come from my bad experience with POLICY performance.\nUnfortunately I did that test like half a year ago, so I don't have\nresults at hand to quote them, but should anybody be interested, I may\ntry to do it again in a couple of days.\n\nwith best regards,\n\n-R\n\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 16:28:42 +0100", "msg_from": "Rafal Pietrak <rafal@ztk-rp.eu>", "msg_from_op": true, "msg_subject": "Fwd: row level security (RLS)" }, { "msg_contents": "On Mon, 2021-03-15 at 16:28 +0100, Rafal Pietrak wrote:\n> 5. My experiments with RLS was like following:\n> - I've implemented a STABLE function, that returns INTEGER 1/0\n> - I've linked that function as POLICY to my tables\n> - I've GRANTED PUBLIC access to those tables\n> ---> and all works as predicted.... 
only slow (10x slower!).\n>\n> [lots of questions about how to solve this in some other way]\n>\n> Those questions come from my bad experience with POLICY performance.\n\nYou should figure out why RLS was so slow.\n\nThe key to this is \"EXPLAIN (ANALYZE, BUFFERS)\" for the query -\nthat will tell you what is slow and why, so that you can tackle the\nproblem.\n\nYours,\nLaurenz Albe\n-- \nCybertec | https://www.cybertec-postgresql.com\n\n\n\n", "msg_date": "Mon, 15 Mar 2021 18:01:23 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Fwd: row level security (RLS)" }, { "msg_contents": "\n\nOn 15.03.2021 at 18:01, Laurenz Albe wrote:\n> On Mon, 2021-03-15 at 16:28 +0100, Rafal Pietrak wrote:\n>> 5. My experiments with RLS was like following:\n>> - I've implemented a STABLE function, that returns INTEGER 1/0\n>> - I've linked that function as POLICY to my tables\n>> - I've GRANTED PUBLIC access to those tables\n>> ---> and all works as predicted.... only slow (10x slower!).\n>>\n>> [lots of questions about how to solve this in some other way]\n>>\n>> Those questions come from my bad experience with POLICY performance.\n> \n> You should figure out why RLS was so slow.\n\nYes I should... although I didn't. Somewhat because I thought it was\nobvious (an additional function call on every row). Still, as I've\nmentioned in my initial post, I'm going to revisit the case in a\ncouple of days and gather more evidence.\n\nHaving said that, I'm really interested in any comments on the way I've\n\"imagined\" addressing RLS years ago (and described it in the post), when\nI've looked for a solution and settled for the rule system. The question is\nabout the partition/check/role approach, irrespective of where it comes from.\n\nPlease address the following reasoning:\n1. POLICY calls a function on every row to check its visibility to the\nclient (for 1mln rows, 1mln checks).\n2. 
\"alternative\" does just one check on all the rows contained in a\nparticular partition (for 100 tenants, 100 checks)\n\nNo matter how hard one optimises the POLICY function, it will always lose.\n\nThen again, I'll be back with some \"ANALYSE\" in a couple of days.\n\n-R\n\n\n", "msg_date": "Mon, 15 Mar 2021 23:24:21 +0100", "msg_from": "Rafal Pietrak <rafal@ztk-rp.eu>", "msg_from_op": true, "msg_subject": "Re: Fwd: row level security (RLS)" } ]
[ { "msg_contents": "Hi,\n\nThis seems to be a new low frequency failure, I didn't see it mentioned already:\n\n# Failed test 'DROP SUBSCRIPTION during error can clean up the slots\non the publisher'\n# at t/004_sync.pl line 171.\n# got: '1'\n# expected: '0'\n\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=culicidae&dt=2021-03-15%2010:40:10\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=piculet&dt=2021-03-15%2001:10:14\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=francolin&dt=2021-03-15%2001:10:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-02-26%2006:44:45\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2021-02-23%2019:00:09\n https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=calliphoridae&dt=2021-02-23%2009:40:08\n\n\n", "msg_date": "Tue, 16 Mar 2021 01:29:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "subscriptionCheck failures" }, { "msg_contents": "On Mon, Mar 15, 2021 at 6:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> Hi,\n>\n> This seems to be a new low frequency failure, I didn't see it mentioned already:\n>\n\nThanks for reporting, I'll look into it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Mar 2021 09:00:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptionCheck failures" }, { "msg_contents": "Hello\r\n\r\nOn Tuesday, March 16, 2021 12:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Mon, Mar 15, 2021 at 6:00 PM Thomas Munro\r\n> <thomas.munro@gmail.com> wrote:\r\n> >\r\n> > Hi,\r\n> >\r\n> > This seems to be a new low frequency failure, I didn't see it mentioned\r\n> already:\r\nOh, this is the test I wrote and included as part of the commit ce0fdbfe\r\n# Failed test 'DROP SUBSCRIPTION during error can clean up the slots on the publisher'\r\n# at t/004_sync.pl line 171.\r\n# got: '1'\r\n# expected: 
'0'\r\n# Looks like you failed 1 test of 8.\r\nI apologize this and will check the reason.\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 16 Mar 2021 04:13:26 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: subscriptionCheck failures" }, { "msg_contents": "On Tue, Mar 16, 2021 at 9:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 15, 2021 at 6:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > This seems to be a new low frequency failure, I didn't see it mentioned already:\n> >\n>\n> Thanks for reporting, I'll look into it.\n>\n\nBy looking at the logs [1] in the buildfarm, I think I know what is\ngoing on here. After Create Subscription, the tablesync worker is\nlaunched and tries to create the slot for doing the initial copy but\nbefore it could finish creating the slot, we issued the Drop\nSubscription which first stops the tablesync worker and then tried to\ndrop its slot. Now, it is quite possible that by the time Drop\nSubscription tries to drop the tablesync slot, it is not yet created.\nWe treat this condition okay and just Logs the message. I don't think\nthis is an issue because anyway generally such a slot created on the\nserver will be dropped before we persist it but the test was checking\nthe existence of slots on server before it gets dropped. 
I think we\ncan avoid such a situation by preventing cancel/die interrupts while\ncreating tablesync slot.\n\nThis is a timing issue, so I have reproduced it via debugger and\ntested that the attached patch fixes it.\n\n\n[1]:\n2021-02-23 09:57:47.593 UTC [6034d19b.291aed:7] LOG: received\nreplication command: CREATE_REPLICATION_SLOT\n\"pg_16396_sync_16384_6932396177428838370\" LOGICAL pgoutput\nUSE_SNAPSHOT\n2021-02-23 09:57:47.593 UTC [6034d19b.291aed:8] STATEMENT:\nCREATE_REPLICATION_SLOT \"pg_16396_sync_16384_6932396177428838370\"\nLOGICAL pgoutput USE_SNAPSHOT\n2021-02-23 09:57:47.664 UTC [6034d19b.291ae2:14] LOG: disconnection:\nsession time: 0:00:00.130 user=andres database=postgres host=[local]\n2021-02-23 09:57:47.686 UTC [6034d19b.291af3:1] LOG: connection\nreceived: host=[local]\n2021-02-23 09:57:47.687 UTC [6034d19b.291af3:2] LOG: replication\nconnection authorized: user=andres application_name=tap_sub\n2021-02-23 09:57:47.688 UTC [6034d19b.291af3:3] LOG: statement:\nSELECT pg_catalog.set_config('search_path', '', false);\n2021-02-23 09:57:47.688 UTC [6034d19b.291af3:4] LOG: received\nreplication command: DROP_REPLICATION_SLOT\npg_16396_sync_16384_6932396177428838370 WAIT\n2021-02-23 09:57:47.688 UTC [6034d19b.291af3:5] STATEMENT:\nDROP_REPLICATION_SLOT pg_16396_sync_16384_6932396177428838370 WAIT\n2021-02-23 09:57:47.688 UTC [6034d19b.291af3:6] ERROR: replication\nslot \"pg_16396_sync_16384_6932396177428838370\" does not exist\n2021-02-23 09:57:47.688 UTC [6034d19b.291af3:7] STATEMENT:\nDROP_REPLICATION_SLOT pg_16396_sync_16384_6932396177428838370 WAIT\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 16 Mar 2021 12:29:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptionCheck failures" }, { "msg_contents": "On Tue, Mar 16, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 9:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On 
Mon, Mar 15, 2021 at 6:00 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > This seems to be a new low frequency failure, I didn't see it mentioned already:\n> > >\n> >\n> > Thanks for reporting, I'll look into it.\n> >\n>\n> By looking at the logs [1] in the buildfarm, I think I know what is\n> going on here. After Create Subscription, the tablesync worker is\n> launched and tries to create the slot for doing the initial copy but\n> before it could finish creating the slot, we issued the Drop\n> Subscription which first stops the tablesync worker and then tried to\n> drop its slot. Now, it is quite possible that by the time Drop\n> Subscription tries to drop the tablesync slot, it is not yet created.\n> We treat this condition okay and just Logs the message. I don't think\n> this is an issue because anyway generally such a slot created on the\n> server will be dropped before we persist it but the test was checking\n> the existence of slots on server before it gets dropped. I think we\n> can avoid such a situation by preventing cancel/die interrupts while\n> creating tablesync slot.\n>\n> This is a timing issue, so I have reproduced it via debugger and\n> tested that the attached patch fixes it.\n>\n\nThanks for the patch.\nI was able to reproduce the issue using debugger by making it wait at\nCreateReplicationSlot. 
After applying the patch the issue gets solved.\n\nFew minor comments:\n1) subscrition should be subscription in the below change:\n+ * Prevent cancel/die interrupts while creating slot here because it is\n+ * possible that before the server finishes this command a concurrent drop\n+ * subscrition happens which would complete without removing this slot\n+ * leading to a dangling slot on the server.\n */\n\n2) \"finishes this command a concurrent drop\" should be \"finishes this\ncommand, a concurrent drop\" in the below change:\n+ * Prevent cancel/die interrupts while creating slot here because it is\n+ * possible that before the server finishes this command a concurrent drop\n+ * subscrition happens which would complete without removing this slot\n+ * leading to a dangling slot on the server.\n */\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 16 Mar 2021 12:45:15 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptionCheck failures" }, { "msg_contents": "Hi\r\n\r\n\r\nOn Tuesday, March 16, 2021 4:15 PM vignesh C <vignesh21@gmail.com> wrote:\r\n> On Tue, Mar 16, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Mar 16, 2021 at 9:00 AM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> > > On Mon, Mar 15, 2021 at 6:00 PM Thomas Munro\r\n> <thomas.munro@gmail.com> wrote:\r\n> > > >\r\n> > > > Hi,\r\n> > > >\r\n> > > > This seems to be a new low frequency failure, I didn't see it mentioned\r\n> already:\r\n> > > >\r\n> > >\r\n> > > Thanks for reporting, I'll look into it.\r\n> > >\r\n> >\r\n> > By looking at the logs [1] in the buildfarm, I think I know what is\r\n> > going on here. 
After Create Subscription, the tablesync worker is\r\n> > launched and tries to create the slot for doing the initial copy but\r\n> > before it could finish creating the slot, we issued the Drop\r\n> > Subscription which first stops the tablesync worker and then tried to\r\n> > drop its slot. Now, it is quite possible that by the time Drop\r\n> > Subscription tries to drop the tablesync slot, it is not yet created.\r\n> > We treat this condition okay and just Logs the message. I don't think\r\n> > this is an issue because anyway generally such a slot created on the\r\n> > server will be dropped before we persist it but the test was checking\r\n> > the existence of slots on server before it gets dropped. I think we\r\n> > can avoid such a situation by preventing cancel/die interrupts while\r\n> > creating tablesync slot.\r\n> >\r\n> > This is a timing issue, so I have reproduced it via debugger and\r\n> > tested that the attached patch fixes it.\r\n> >\r\n> \r\n> Thanks for the patch.\r\n> I was able to reproduce the issue using debugger by making it wait at\r\n> CreateReplicationSlot. 
After applying the patch the issue gets solved.\r\nI really appreciate everyone's help.\r\n\r\nFor the double check, I utilized the patch and debugger too.\r\nI also put one while loop at the top of CreateReplicationSlot to control walsender.\r\n\r\nWithout the patch, DROP SUBSCRIPTION goes forward,\r\neven when the table sync worker cannot move by the CreateReplicationSlot loop,\r\nand the table sync worker is killed by DROP SUBSCRIPTION command.\r\nOn the other hand, with the patch contents, I observed that\r\nDROP SUBSCRIPTION hangs and waits\r\nuntil I release the walsender process from CreateReplicationSlot.\r\nAfter this, the command drops two slots like below.\r\n\r\nNOTICE: dropped replication slot \"pg_16391_sync_16385_6940222843739406079\" on publisher\r\nNOTICE: dropped replication slot \"mysub1\" on publisher\r\nDROP SUBSCRIPTION\r\n\r\nTo me, this correctly works because\r\nthe timing I put the while loop and stops the walsender\r\nmakes the DROP SUBSCRIPTION affects two slots. Any comments ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 16 Mar 2021 12:52:23 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: subscriptionCheck failures" }, { "msg_contents": "On Tue, Mar 16, 2021 at 6:22 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n\n>\n> To me, this correctly works because\n> the timing I put the while loop and stops the walsender\n> makes the DROP SUBSCRIPTION affects two slots. Any comments ?\n>\n\nNo, your testing looks fine. I have also done the similar test. Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 17 Mar 2021 08:40:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: subscriptionCheck failures" } ]
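The fix discussed in the thread above prevents cancel/die interrupts while the tablesync slot is being created, so a concurrent DROP SUBSCRIPTION can only act once the slot is fully registered on the publisher and is therefore droppable. As a rough, language-neutral illustration of that hold/resume pattern (a sketch only, not the actual patch, which uses PostgreSQL's own interrupt-holding machinery), here is a Python analogue that blocks termination signals around the creation step:

```python
import signal

def create_slot_uninterruptibly(publisher_slots, slot_name):
    # Block SIGINT/SIGTERM for the duration of the "remote slot creation",
    # so a concurrent drop request is delivered only after the slot is
    # fully registered and can be found and removed.
    old_mask = signal.pthread_sigmask(signal.SIG_BLOCK,
                                      {signal.SIGINT, signal.SIGTERM})
    try:
        # Stand-in for the real CREATE_REPLICATION_SLOT round trip.
        publisher_slots.add(slot_name)
    finally:
        # Any pending signal is delivered here, once the slot is droppable.
        signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)
    return publisher_slots

slots = create_slot_uninterruptibly(set(), "pg_16396_sync_16384")
print(sorted(slots))  # ['pg_16396_sync_16384']
```

The design point is the same as in the patch: the dangerous window is not the work itself but an interrupt landing between "resource exists remotely" and "resource is recorded locally", so interrupts are simply held across that window.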
[ { "msg_contents": "Hi all,\nMy country(Ethiopia) is one of the nations that uses different kind of\ncalendar than what PostgreSQL have so we are deprived from the benefit of\ndata datatype. We just uses String to store date that limits our\napplication quality greatly. The lag became even worst once application and\nsystem time support is available and it seems to me it is not fair to\nsuggest to add other date data type kind and implementation for just\ndifferent calendar that even not minor user group. Having calendar support\nto localization will be very very very very exciting feature for none\nGregorian calendar user group and make so loved. As far as i can see the\ndifficult thing is understanding different calendar. I can prepare a patch\nfor Ethiopian calendar once we have consensus.\n\nI cc Thomas Munro and Vik because they have interest on this area\n\nPlease don't suggests to fork from PostgreSQL just for this feature\n\nregards\nSurafel\n\n", "msg_date": "Mon, 15 Mar 2021 07:47:49 -0700", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Calendar support in localization" }, { "msg_contents": "Hi Surafel,\n\nOn Tue, Mar 16, 2021 at 3:48 AM Surafel Temesgen <surafel3000@gmail.com> wrote:\n> My country(Ethiopia) is one of the nations that uses different kind of calendar than what PostgreSQL have so we are deprived from the benefit of data datatype. We just uses String to store date that limits our application quality greatly. The lag became even worst once application and system time support is available and it seems to me it is not fair to suggest to add other date data type kind and implementation for just different calendar that even not minor user group. Having calendar support to localization will be very very very very exciting feature for none Gregorian calendar user group and make so loved. As far as i can see the difficult thing is understanding different calendar. I can prepare a patch for Ethiopian calendar once we have consensus.\n\nOne key question here is whether you need a different date type or\njust different operations (functions, operators etc) on the existing\ntypes.\n\n> I cc Thomas Munro and Vik because they have interest on this area\n\nLast time it came up[1], I got as far as wondering if the best way\nwould be to write a set of ICU-based calendar functions. 
Would it be\nenough for your needs to have Ethiopic calendar-aware date arithmetic\n(add, subtract a month etc), date part extraction (get the current\nEthiopic day/month/year of a date), display and parsing, and have all\nof these as functions that you have to call explicitly, but have them\ntake the standard built-in date and timestamp types, so that your\ntables would store regular date and timestamp values? If not, what\nelse do you need?\n\nICU is very well maintained and widely used software, and PostgreSQL\nalready depends on it optionally, and that's enabled in all common\ndistributions. In other words, maybe all the logic you want exists\nalready in your process's memory, we just have to figure out how to\nreach it from SQL. Another reason to use ICU is that we can solve\nthis problem once and then it'll work for many other calendars.\n\n> Please don't suggests to fork from PostgreSQL just for this feature\n\nI would start with an extension, and I'd try to do a small set of\nsimple functions, to let me write things like:\n\n icu_format(now(), 'fr_FR@calendar=buddhist') to get a Buddhist\ncalendar with French words\n\n icu_date_part('year', current_date, 'am_ET@calendar=traditional') to\nget the current year in the Ethiopic calendar (2013 apparently)\n\nWell, the first one probably also needs a format string too, actual\ndetails to be worked out by reading the ICU manual...\n\nMaybe instead of making a new extension, I might try to start from\nhttps://github.com/dverite/icu_ext and see if it makes sense to extend\nit to cover calendars.\n\nMaybe one day ICU will become a hard dependency of PostgreSQL and\nsomeone will propose all that stuff into core, and then maybe we could\nstart to think about the possibility of tighter integration with the\nbuilt-in date/time functions (and LC_TIME setting? seems complicated,\nsee also problems with setting LC_COLLATE/datcollate to an ICU\ncollation name, but I digress and that's a far off problem). 
I would\nalso study the SQL standard and maybe DB2 (highly subjective comment:\nat a wild guess, the most likely commercial RDBMS to have done a good\njob of this if anyone has) to see if they contemplate non-Gregorian\ncalendars, to get some feel for whether that would eventually be a\npossibility to conform with whatever the standard says.\n\nIn summary, getting something of very high quality by using a widely\nused open source library that we already use seems like a better plan\nthan trying to write and maintain our own specialist knowledge about\nindividual calendars. If there's something you need that can't be\ndone with its APIs working on top of our regular date and timestamp\ntypes, could you elaborate?\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BybW0LJuLtj3yAUsbOw3DrzK00pGk8JyfpCREzi_LSsg%40mail.gmail.com#393d827f1be589d0ad6ca6b016905e80\n\n\n", "msg_date": "Tue, 16 Mar 2021 10:57:33 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "Hi Thomas\n\nOn Mon, Mar 15, 2021 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n>\n> One key question here is whether you need a different date type or\n> just different operations (functions, operators etc) on the existing\n> types.\n>\n>\nI am thinking of having a converter to a specific calendar after each\noperation and function for display or storage. It works on\nEthiopice calendar and i expect it will work on other calendar too\n\n\n> > I cc Thomas Munro and Vik because they have interest on this area\n>\n> Last time it came up[1], I got as far as wondering if the best way\n> would be to write a set of ICU-based calendar functions. 
Would it be\n> enough for your needs to have Ethiopic calendar-aware date arithmetic\n> (add, subtract a month etc), date part extraction (get the current\n> Ethiopic day/month/year of a date), display and parsing, and have all\n> of these as functions that you have to call explicitly, but have them\n> take the standard built-in date and timestamp types, so that your\n> tables would store regular date and timestamp values? If not, what\n> else do you need?\n>\n>\nEthiopice calendar have 13 months so it can not be stored as date and\ntimestamp type and you approach seems more complicated and i suggest to\nhave this feature on the purpose of PostgreSQL popularities too not only\nfor my need\n\n\n> ICU is very well maintained and widely used software, and PostgreSQL\n> already depends on it optionally, and that's enabled in all common\n> distributions. In other words, maybe all the logic you want exists\n> already in your process's memory, we just have to figure out how to\n> reach it from SQL. 
Another reason to use ICU is that we can solve\n> this problem once and then it'll work for many other calendars.\n>\n>\nEach calendar-aware date arithmetic is different so solving one calendar\nproblem didn't help on other calendar\n\n\n> > Please don't suggests to fork from PostgreSQL just for this feature\n>\n> I would start with an extension, and I'd try to do a small set of\n> simple functions, to let me write things like:\n>\n> icu_format(now(), 'fr_FR@calendar=buddhist') to get a Buddhist\n> calendar with French words\n>\n> icu_date_part('year', current_date, 'am_ET@calendar=traditional') to\n> get the current year in the Ethiopic calendar (2013 apparently)\n>\n> Well, the first one probably also needs a format string too, actual\n> details to be worked out by reading the ICU manual...\n>\n\nI think you suggesting this by expecting the implementation is difficult\nbut it's not that much difficult once you fully understand Gregorian\ncalendar and the calendar you work on\n\n\n>\n> Maybe instead of making a new extension, I might try to start from\n> https://github.com/dverite/icu_ext and see if it makes sense to extend\n> it to cover calendars.\n>\n> Maybe one day ICU will become a hard dependency of PostgreSQL and\n> someone will propose all that stuff into core, and then maybe we could\n> start to think about the possibility of tighter integration with the\n> built-in date/time functions (and LC_TIME setting? seems complicated,\n> see also problems with setting LC_COLLATE/datcollate to an ICU\n> collation name, but I digress and that's a far off problem). 
I would\n> also study the SQL standard and maybe DB2 (highly subjective comment:\n> at a wild guess, the most likely commercial RDBMS to have done a good\n> job of this if anyone has) to see if they contemplate non-Gregorian\n> calendars, to get some feel for whether that would eventually be a\n> possibility to conform with whatever the standard says.\n>\n> In summary, getting something of very high quality by using a widely\n> used open source library that we already use seems like a better plan\n> than trying to write and maintain our own specialist knowledge about\n> individual calendars. If there's something you need that can't be\n> done with its APIs working on top of our regular date and timestamp\n> types, could you elaborate?\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BybW0LJuLtj3yAUsbOw3DrzK00pGk8JyfpCREzi_LSsg%40mail.gmail.com#393d827f1be589d0ad6ca6b016905e80\n\n\nI don't know how you see this but for me the feature deserves a specialist\nand it is not that much difficult to have one because i guess every majore\ncalendar have english documentation\n\nregards\nSurafel\n\n", "msg_date": "Tue, 16 Mar 2021 10:30:51 -0700", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Wed, Mar 17, 2021 at 6:31 AM Surafel Temesgen <surafel3000@gmail.com> wrote:\n> Ethiopice calendar have 13 months so it can not be stored as date and timestamp type and you approach seems more complicated and i suggest to have this feature on the purpose of PostgreSQL popularities too not only for my need\n\nI know, but the DATE and TIMESTAMPTZ datatypes don't intrinsically\nknow anything about months or other calendar concepts. Internally,\nthey are just a single number that counts the number of days or\nseconds since an arbitrary epoch time. We are all in agreement about\nhow many times the Earth has rotated since then*. The calendar\nconcepts such as \"day\", \"month\", \"year\", whether Gregorian, Ethiopic,\nIslamic, ... are all derivable from those numbers, if you know the\nrules.\n\nSo I think you should seriously consider using the same types.\n\n> Each calendar-aware date arithmetic is different so solving one calendar problem didn't help on other calendar\n\nThey have a *lot* in common though. They have similar \"fields\" (day,\nmonth, year etc), based on the Earth, moon, sun etc, so it is possible\nto use a common abstraction to interact with them. 
I haven't studied\nit too closely, but it looks like ICU can give you a \"Calendar\" object\nfor a given Locale (which you create from a string like\n\"am_ET@calendar=traditional\") and timezone (\"Africa/Addis_Ababa\").\nThen you can set the object's time to X seconds since an epoch, based\non UTC seconds without leap seconds -- which is exactly like our\nTIMESTAMPTZ's internal value -- and then you can query it to get\nfields like month etc. Or do the opposite, or use formatting and\nparsing routines etc. Internally, ICU has a C++ class for each\ncalendar with a name like EthiopicCalendar, IslamicCalendar etc which\nencapsulates all the logic, but it's not necessary to use them\ndirectly: we could just look them up with names via the C API and then\ntreat them all the same.\n\n> I think you suggesting this by expecting the implementation is difficult but it's not that much difficult once you fully understand Gregorian calendar and the calendar you work on\n\nYeah, I am sure it's all just a bunch of simple integer maths. But\nI'm talking about things like software architecture, maintainability,\ncohesion, and getting maximum impact for the work we do.\n\nI may be missing some key detail though: why do you think it should be\na different type? The two reasons I can think of are: (1) the\nslightly tricky detail that the date apparently changes at 1:00am\n(which I don't think is a show stopper for this approach, I could\nelaborate), (2) you may want dates to be formatted on the screen with\nthe Ethiopic calendar in common software like psql and GUI clients,\nwhich may be easier to arrange with different types, but that seems to\nbe a cosmetic thing that could eventually be handled with tighter\nlocale integration with ICU. 
In the early stages you'd access\ncalendar logic though special functions with names like\nicu_format_date(), or whatever.\n\nMaybe I'm totally wrong about all of this, but this is the first way\nI'd probably try to tackle this problem, and I suspect it has the\nhighest chance of eventually being included in core PostgreSQL.\n\n*I mean, we can discuss the different \"timelines\" like UT, UTC, TAI\netc, but that's getting into the weeds, the usual timeline for\ncomputer software outside specialist scientific purposes is UTC\nwithout leap seconds.\n\n\n", "msg_date": "Wed, 17 Mar 2021 08:20:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Tue, Mar 16, 2021 at 12:20 PM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> On Wed, Mar 17, 2021 at 6:31 AM Surafel Temesgen <surafel3000@gmail.com>\n> wrote:\n> > Ethiopice calendar have 13 months so it can not be stored as date and\n> timestamp type and you approach seems more complicated and i suggest to\n> have this feature on the purpose of PostgreSQL popularities too not only\n> for my need\n>\n> I know, but the DATE and TIMESTAMPTZ datatypes don't intrinsically\n> know anything about months or other calendar concepts. Internally,\n> they are just a single number that counts the number of days or\n> seconds since an arbitrary epoch time. We are all in agreement about\n> how many times the Earth has rotated since then*. The calendar\n> concepts such as \"day\", \"month\", \"year\", whether Gregorian, Ethiopic,\n> Islamic, ... are all derivable from those numbers, if you know the\n> rules.\n>\n>\nOkay\n\n\n>\n>\n> > I think you suggesting this by expecting the implementation is difficult\n> but it's not that much difficult once you fully understand Gregorian\n> calendar and the calendar you work on\n>\n> Yeah, I am sure it's all just a bunch of simple integer maths. 
But\n> I'm talking about things like software architecture, maintainability,\n> cohesion, and getting maximum impact for the work we do.\n>\n> I may be missing some key detail though: why do you think it should be\n> a different type? The two reasons I can think of are: (1) the\n> slightly tricky detail that the date apparently changes at 1:00am\n> (which I don't think is a show stopper for this approach, I could\n> elaborate), (2) you may want dates to be formatted on the screen with\n> the Ethiopic calendar in common software like psql and GUI clients,\n> which may be easier to arrange with different types, but that seems to\n> be a cosmetic thing that could eventually be handled with tighter\n> locale integration with ICU. In the early stages you'd access\n> calendar logic though special functions with names like\n> icu_format_date(), or whatever.\n>\n>\nAs you mention above whatever the calendar type is we ended up storing an\ninteger that represent the date so rather than re-implementing every\nfunction and operation for every calendar we can use existing Gerigorian\nimplementation as a base implementation and if new calendar want to perform\nsame function or operation it translate to Gregorian one and use the\nexisting function and operation and translate to back to working calendar.\nIn this approach the only function we want for supporting a new calendar is\na translator from the new calendar to Gregorian one and from Gerigorian\ncalendar to the new calendar and may be input/ output function. 
What do you\nthink of this implementation?\n\nregards\nSurafel\n\n", "msg_date": "Wed, 17 Mar 2021 06:53:46 -0700", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Wed, Mar 17, 2021 at 9:54 AM Surafel Temesgen <surafel3000@gmail.com> wrote:\n> As you mention above whatever the calendar type is we ended up storing an integer that  represent the date so rather than re-implementing every function and operation for every calendar we can use existing Gerigorian implementation as a base implementation and if new calendar want to perform same function or operation it translate to Gregorian one and use the existing function and operation and translate to back to working calendar. In this approach the only function we want for supporting a new calendar is a translator from the new calendar to Gregorian one and from Gerigorian calendar to the new calendar and may be input/ output function. 
What do you think of this implementation?\n\nI'm not sure what the best way of tackling this problem is, but I\nwanted to mention another possible approach: instead of actually using\nthe timestamptz data type, have another data type that is\nbinary-compatible with timestamptz - that is, it's the same thing on\ndisk, so you can cast between the two data types for free. Then have\nseparate input/output functions for it, separate operators and\nfunctions and so forth.\n\nIt's not very obvious how to scale this kind of approach to a wide\nvariety of calendar types, and as Thomas says, it would be much cooler to\nbe able to handle all of the ones that ICU knows how to support rather\nthan just one. But, the problem I see with using timestamptz is that\nit's not so obvious how to get a different output format ... unless, I\nguess, we can cram it into DateStyle. And it's also much less obvious\nhow you get the other functions and operators to do what you want, if\nit's different.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Mar 2021 10:16:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> It's not very obvious how to scale this kind of approach to a wide\n> variety of calendar types, and as Thomas says, it would much cooler to\n> be able to handle all of the ones that ICU knows how to support rather\n> than just one. But, the problem I see with using timestamptz is that\n> it's not so obvious how to get a different output format ... unless, I\n> guess, we can cram it into DateStyle. And it's also much less obvious\n> how you get the other functions and operators to do what you want, if\n> it's different.\n\nYeah, I'm afraid that it probably is different. 
The most obvious\nexample is in operations involving type interval:\n\tselect now() + '1 month'::interval;\nThat should almost certainly give a different answer when using a\ndifferent calendar --- indeed the units of interest might not even\nbe the same. (Do all human calendars use the concept of months?)\n\nI don't feel like DateStyle is chartered to affect the behavior\nof datetime operators; it's understood as tweaking the I/O behavior\nonly. There might be more of a case for letting LC_TIME choose\nthis behavior, but I bet the relevant standards only contemplate\nGregorian calendars. Also, the SQL spec says in so many words\nthat the SQL-defined datetime types follow the Gregorian calendar.\n\nSo on the whole, new datatypes and operators seem like the way\nto go. I concur that if ICU has solved this problem, piggybacking\non it seems more promising than writing our own code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 10:48:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Thu, Mar 18, 2021 at 3:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > It's not very obvious how to scale this kind of approach to a wide\n> > variety of calendar types, and as Thomas says, it would much cooler to\n> > be able to handle all of the ones that ICU knows how to support rather\n> > than just one. But, the problem I see with using timestamptz is that\n> > it's not so obvious how to get a different output format ... unless, I\n> > guess, we can cram it into DateStyle. And it's also much less obvious\n> > how you get the other functions and operators to do what you want, if\n> > it's different.\n>\n> Yeah, I'm afraid that it probably is different. 
The most obvious\n> example is in operations involving type interval:\n>         select now() + '1 month'::interval;\n> That should almost certainly give a different answer when using a\n> different calendar --- indeed the units of interest might not even\n> be the same. (Do all human calendars use the concept of months?)\n\nRight, so if this is done by trying to extend Daniel Verite's icu_ext\nextension (link given earlier) and Robert's idea of a fast-castable\ntype, I suppose you might want now()::icu_date + '1 month'::interval\nto advance you by one Ethiopic month if you have done SET\nicu_ext.ICU_LC_TIME = 'am_ET@calendar=traditional'. Or if using my\nfirst idea of just sticking with the core types, perhaps you'd have to\nreplace stuff via search path... I admit that sounds rather error\nprone and fragile (I was thinking mainly of different functions, not\noperators). Either way, I suppose there'd also be more explicit\nfunctions for various operations including ones that take an extra\nargument if you want to use an explicit locale instead of relying on\nthe ICU_LC_TIME setting. I dunno.\n\nAs for whether all calendars have months, it looks like ICU's model\nhas just the familiar looking standardised fields; whether some of\nthem make no sense in some calendars, I don't know, but it has stuff\nlike x.get(field, &error), x.set(field, &error), x.add(field, amount,\n&error) and if it fails for some field on your particular calendar, or\nfor some value (you can't set a Gregorian date's month to 13\n(apparently we call this month \"undecember\", hah), but you can for a\nHebrew or Ethiopic one) I suppose we'd just report the error?\n\n> I don't feel like DateStyle is chartered to affect the behavior\n> of datetime operators; it's understood as tweaking the I/O behavior\n> only. There might be more of a case for letting LC_TIME choose\n> this behavior, but I bet the relevant standards only contemplate\n\nAbout LC_TIME... 
I suppose in one possible future we eventually use\nICU for more core stuff, and someone proposes to merge hypothetical\nicu_date etc types into the core date etc types, and then LC_TIME\ncontrols that. But then you might have a version of the problem that\nPeter E ran into in attempts so far to use ICU collations as the\ndefault: if you put ICU's funky extensible locale names into the\nLC_XXX environment variables, then your libc will see it too, and\nmight get upset, since PostgreSQL uses the environment. I suspect that ICU\nwill understand typical libc locale names, but common libcs won't\nunderstand ICU's highly customisable syntax, but I haven't looked into\nit. If that's generally true, then perhaps the solution to both\nproblems is a kind of partial separation: regular LC_XXX, and then\nalso ICU_LC_XXX which defaults to the same value but can be changed to\naccess more advanced stuff, and is used only for interacting with ICU.\n\n> Gregorian calendars. Also, the SQL spec says in so many words\n> that the SQL-defined datetime types follow the Gregorian calendar.\n\n:-(\n\n\n", "msg_date": "Thu, 18 Mar 2021 11:38:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On 3/17/21 3:48 PM, Tom Lane wrote:\n> Also, the SQL spec says in so many words\n> that the SQL-defined datetime types follow the Gregorian calendar.\n\nWe already don't follow the SQL spec for timestamps (and I, for one,\nthink we are better for it) so I don't think we should worry about that.\n-- \nVik Fearing\n\n\n", "msg_date": "Thu, 18 Mar 2021 02:14:36 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Wed, Mar 17, 2021 at 3:39 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Mar 18, 2021 at 3:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Right, so if this is done by 
trying to extend Daniel Verite's icu_ext\n> extension (link given earlier) and Robert's idea of a fast-castable\n> type, I suppose you might want now()::icu_date + '1 month'::internal\n> to advance you by one Ethiopic month if you have done SET\n> icu_ext.ICU_LC_TIME = 'am_ET@calendar=traditional'. Or if using my\n> first idea of just sticking with the core types, perhaps you'd have to\n> replace stuff via search path... I admit that sounds rather error\n> prone and fragile (I was thinking mainly of different functions, not\n> operators). Either way, I suppose there'd also be more explicit\n> functions for various operations including ones that take an extra\n> argument if you want to use an explicit locale instead of relying on\n> the ICU_LC_TIME setting. I dunno.\n>\n>\nAs you know, internally the timestamptz data type doesn't exist as such;\nit is stored as a kind of integer, and we depend on the operating system\nand external libraries for our date data type support. So I think that\nputs us in the position of not being the first one to implement the\ntimestamptz data type itself, and I don't know who would give us the\ncasting for free?\n\nregards\nSurafel\n", "msg_date": "Thu, 18 Mar 2021 06:40:43 -0700", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "\tThomas Munro wrote:\n\n> Right, so if this is done by trying to extend Daniel Verite's icu_ext\n> extension (link given earlier) and Robert's idea of a fast-castable\n> type, I suppose you might want now()::icu_date + '1 month'::internal\n> to advance you by one Ethiopic month if you have done SET\n> icu_ext.ICU_LC_TIME = 'am_ET@calendar=traditional'.\n\nI've pushed a calendar branch on icu_ext [1] with preliminary support\nfor non-gregorian calendars through ICU, so far with only format and parse\nof timestamptz.\nThe ICU locale drives both the localization of field names (language) and the\nchoice of calendar.\n\nIt looks like this:\n\n\\set fmt 'dd/MMMM/yyyy GGGG HH:mm:ss.SSS zz'\n\n WITH list(cal) AS ( values \n ('gregorian'),\n ('japanese'),\n ('buddhist'),\n ('roc'),\n ('persian'),\n ('islamic-civil'),\n ('islamic'),\n ('hebrew'),\n ('chinese'),\n ('indian'),\n ('coptic'),\n ('ethiopic'),\n ('ethiopic-amete-alem'),\n ('iso8601'),\n ('dangi')\n),\nfmt AS (\n select\n cal,\n icu_format_date(now(), :'fmt', 'fr@calendar='||cal) as now_str,\n icu_format_date(now()+'1 month'::interval, :'fmt', 'fr@calendar='||cal) as\nplus_1m\n from list\n)\nSELECT\n cal,\n now_str,\n icu_parse_date(now_str, :'fmt', 
'fr@calendar='||cal) as now_parsed,\n plus_1m,\n icu_parse_date(plus_1m, :'fmt', 'fr@calendar='||cal) as plus_1m_parsed\nFROM fmt;\n\n\n-[ RECORD 1 ]--+-------------------------------------------------------\ncal\t | gregorian\nnow_str | 26/mars/2021 après Jésus-Christ 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 26/avril/2021 après Jésus-Christ 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 2 ]--+-------------------------------------------------------\ncal\t | japanese\nnow_str | 26/mars/0033 Heisei 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 26/avril/0033 Heisei 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 3 ]--+-------------------------------------------------------\ncal\t | buddhist\nnow_str | 26/mars/2564 ère bouddhique 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 26/avril/2564 ère bouddhique 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 4 ]--+-------------------------------------------------------\ncal\t | roc\nnow_str | 26/mars/0110 RdC 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 26/avril/0110 RdC 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 5 ]--+-------------------------------------------------------\ncal\t | persian\nnow_str | 06/farvardin/1400 Anno Persico 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 06/ordibehešt/1400 Anno Persico 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 6 ]--+-------------------------------------------------------\ncal\t | islamic-civil\nnow_str | 12/chaabane/1442 ère de l’Hégire 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 14/ramadan/1442 ère de l’Hégire 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 7 ]--+-------------------------------------------------------\ncal\t | islamic\nnow_str | 
13/chaabane/1442 ère de l’Hégire 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 14/ramadan/1442 ère de l’Hégire 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 8 ]--+-------------------------------------------------------\ncal\t | hebrew\nnow_str | 13/nissan/5781 Anno Mundi 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 14/iyar/5781 Anno Mundi 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 9 ]--+-------------------------------------------------------\ncal\t | chinese\nnow_str | 14/èryuè/0038 78 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 15/sānyuè/0038 78 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 10 ]-+-------------------------------------------------------\ncal\t | indian\nnow_str | 05/chaitra/1943 ère Saka 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 06/vaishākh/1943 ère Saka 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 11 ]-+-------------------------------------------------------\ncal\t | coptic\nnow_str | 17/barmahât/1737 après Dioclétien 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 18/barmoudah/1737 après Dioclétien 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 12 ]-+-------------------------------------------------------\ncal\t | ethiopic\nnow_str | 17/mägabit/2013 après l’Incarnation 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 18/miyazya/2013 après l’Incarnation 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 13 ]-+-------------------------------------------------------\ncal\t | ethiopic-amete-alem\nnow_str | 17/mägabit/7513 ERA0 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 18/miyazya/7513 ERA0 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 14 
]-+-------------------------------------------------------\ncal\t | iso8601\nnow_str | 26/mars/2021 après Jésus-Christ 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 26/avril/2021 après Jésus-Christ 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n-[ RECORD 15 ]-+-------------------------------------------------------\ncal\t | dangi\nnow_str | 14/èryuè/0038 78 18:22:07.566 UTC+1\nnow_parsed | 2021-03-26 18:22:07.566+01\nplus_1m | 15/sānyuè/0038 78 18:22:07.566 UTC+2\nplus_1m_parsed | 2021-04-26 18:22:07.566+02\n\n\n\nI understand that adding months or years with some of the non-gregorian\ncalendars should lead to different points in time than with the gregorian\ncalendar.\n\nFor instance with the ethiopic calendar, the query above displays today as\n17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\nwhile the correct result is probably 17/miyazya/2013 (?)\n\nI'm not sure at this point that there should be a new set of\ndata/interval/timestamp types though, especially if considering\nthe integration in core.\n\nAbout intervals, if there were locale-aware functions like\n add_interval(timestamptz, interval [, locale]) returns timestamptz\nor\n sub_timestamp(timestamptz, timestamptz [,locale]) returns interval\nthat would use ICU to compute the results according to the locale,\nwouldn't it be good enough?\n\nAnother argument for new datatypes could be that getting the\nlocalized-by-ICU display/parsing without function calls around the dates\nmeans new I/O functions. In the context of the extension, probably,\nbut in core, if DateStyle is extended to divert the I/O of date/timestamp[tz]\nto ICU, I guess it could work with the existing types.\n\nAnother reason to have new datatypes could be that users would like\nto use a localized calendar only on specific fields. 
I don't know if that\nmakes sense.\n\n\n[1] https://github.com/dverite/icu_ext/tree/calendar\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 26 Mar 2021 18:51:33 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Sat, Mar 27, 2021 at 6:51 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n> now_str | 17/mägabit/2013 après l’Incarnation 18:22:07.566 UTC+1\n\nVery nice!\n\n> For instance with the ethiopic calendar, the query above displays today as\n> 17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\n> while the correct result is probably 17/miyazya/2013 (?)\n>\n>\n> I'm not sure at this point that there should be a new set of\n> data/interval/timestamp types though, especially if considering\n> the integration in core.\n>\n> About intervals, if there were locale-aware functions like\n> add_interval(timestamptz, interval [, locale]) returns timestamptz\n> or\n> sub_timestamp(timestamptz, timestamptz [,locale]) returns interval\n> that would use ICU to compute the results according to the locale,\n> wouldn't it be good enough?\n\n+1, I'd probably do that next if I were hacking on this...\n\n\n", "msg_date": "Mon, 29 Mar 2021 22:21:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Fri, 26 Mar 2021 at 18:51, Daniel Verite <daniel@manitou-mail.org> wrote:\n> [...]\n> -[ RECORD 2 ]--+-------------------------------------------------------\n> cal | japanese\n> now_str | 26/mars/0033 Heisei 18:22:07.566 UTC+1\n> now_parsed | 2021-03-26 18:22:07.566+01\n> plus_1m | 26/avril/0033 Heisei 18:22:07.566 UTC+2\n> plus_1m_parsed | 2021-04-26 18:22:07.566+02\n> -[ RECORD 3 ]--+-------------------------------------------------------\n> [...]\n> -[ 
RECORD 12 ]-+-------------------------------------------------------\n> cal | ethiopic\n> now_str | 17/mägabit/2013 après l’Incarnation 18:22:07.566 UTC+1\n> now_parsed | 2021-03-26 18:22:07.566+01\n> plus_1m | 18/miyazya/2013 après l’Incarnation 18:22:07.566 UTC+2\n> plus_1m_parsed | 2021-04-26 18:22:07.566+02\n> -[ RECORD 13 ]-+-------------------------------------------------------\n> cal | ethiopic-amete-alem\n> now_str | 17/mägabit/7513 ERA0 18:22:07.566 UTC+1\n> now_parsed | 2021-03-26 18:22:07.566+01\n> plus_1m | 18/miyazya/7513 ERA0 18:22:07.566 UTC+2\n> plus_1m_parsed | 2021-04-26 18:22:07.566+02\n> [...]\n> I understand that adding months or years with some of the non-gregorian\n> calendars should lead to different points in time than with the gregorian\n> calendar.\n>\n> For instance with the ethiopic calendar, the query above displays today as\n> 17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\n> while the correct result is probably 17/miyazya/2013 (?)\n\nSeeing the results for Japanese locale, you might want to update your\nICU library, which could fix this potential inconsistency.\n\nThe results for the Japanese locale should be \"0003 Reiwa\" instead of\n\"0033 Heisei\", as the era changed in 2019. ICU releases have since\nimplemented this and other corrections; this specific change was\nimplemented in the batched release of ICU versions on 2019-04-12.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 29 Mar 2021 12:57:55 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "\tMatthias van de Meent wrote:\n\n> The results for the Japanese locale should be \"0003 Reiwa\" instead of\n> \"0033 Heisei\", as the era changed in 2019. ICU releases have since\n> implemented this and other corrections; this specific change was\n> implemented in the batched release of ICU versions on 2019-04-12.\n\nRight. 
I've run this test on an Ubuntu 18.04 desktop which comes with\nlibicu60 . The current version for my system is 60.2-3ubuntu3.1.\nUbuntu maintainers did not pick up the change of the new Japanese era.\nAs a guess, it's because it's not a security fix.\nThis contrasts with the baseline maintainers, who did an\nexceptional effort to backpatch this down to ICU 53\n(exceptional in the sense that they don't do that for bugfixes).\n\n>> For instance with the ethiopic calendar, the query above displays today as\n>> 17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\n>> while the correct result is probably 17/miyazya/2013 (?)\n\n> Seeing the results for Japanese locale, you might want to update your\n> ICU library, which could fix this potential inconsistency.\n\nI agree it's always best to have the latest ICU version, but in the\ncontext of Postgres, we have to work with the versions that are\ntypically installed on users systems. People who have pre-2019\nversions will simply be stuck with the previous Japanese era.\n\nAnyway, for the specific problem that the interval datatype cannot be\nused seamlessly across all calendars, it's essentially about how days\nare mapped into calendars, and it's unrelated to ICU updates AFAIU.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 29 Mar 2021 14:33:37 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Mon, 29 Mar 2021 at 14:33, Daniel Verite <daniel@manitou-mail.org> wrote:\n>\n> Matthias van de Meent wrote:\n>\n> > The results for the Japanese locale should be \"0003 Reiwa\" instead of\n> > \"0033 Heisei\", as the era changed in 2019. ICU releases have since\n> > implemented this and other corrections; this specific change was\n> > implemented in the batched release of ICU versions on 2019-04-12.\n>\n> Right. 
I've run this test on an Ubuntu 18.04 desktop which comes with\n> libicu60 . The current version for my system is 60.2-3ubuntu3.1.\n> Ubuntu maintainers did not pick up the change of the new Japanese era.\n> As a guess, it's because it's not a security fix.\n> This contrasts with the baseline maintainers, who did an\n> exceptional effort to backpatch this down to ICU 53\n> (exceptional in the sense that they don't do that for bugfixes).\n>\n> >> For instance with the ethiopic calendar, the query above displays today as\n> >> 17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\n> >> while the correct result is probably 17/miyazya/2013 (?)\n>\n> > Seeing the results for Japanese locale, you might want to update your\n> > ICU library, which could fix this potential inconsistency.\n>\n> I agree it's always best to have the latest ICU version, but in the\n> context of Postgres, we have to work with the versions that are\n> typically installed on users systems. People who have pre-2019\n> versions will simply be stuck with the previous Japanese era.\n>\n> Anyway, for the specific problem that the interval datatype cannot be\n> used seamlessly across all calendars, it's essentially about how days\n> are mapped into calendars, and it's unrelated to ICU updates AFAIU.\n\nAh, yes, I only glanced over the supplied query and misunderstood it\ndue to not taking enough time. I understood it as 'use icu locale info\nto add 1 month to the current date', which would use ICU knowledge\nabout months in the locale and would be consistent with the question\nmark, instead of 'use icu to interpret the result of adding one\nnon-icu-locale-dependent month to the current non-icu-locale-dependent\ndate'. If it were the former, my response would have made more sense,\nbut it doesn't in this case. 
So, sorry for the noise.\n\n> About intervals, if there were locale-aware functions like\n> add_interval(timestamptz, interval [, locale]) returns timestamptz\n> or\n> sub_timestamp(timestamptz, timestamptz [,locale]) returns interval\n> that would use ICU to compute the results according to the locale,\n> wouldn't it be good enough?\n\nI agree, that should fix the issues at hand / grant a workable path\nfor locale-aware timestamp manipulation.\n\n\nWith regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Mon, 29 Mar 2021 15:18:37 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "Hi Daniel,\n\nOn Fri, Mar 26, 2021 at 8:51 PM Daniel Verite <daniel@manitou-mail.org>\nwrote:\n\n> Thomas Munro wrote:\n>\n> > Right, so if this is done by trying to extend Daniel Verite's icu_ext\n> > extension (link given earlier) and Robert's idea of a fast-castable\n> > type, I suppose you might want now()::icu_date + '1 month'::internal\n> > to advance you by one Ethiopic month if you have done SET\n> > icu_ext.ICU_LC_TIME = 'am_ET@calendar=traditional'.\n>\n> I've pushed a calendar branch on icu_ext [1] with preliminary support\n> for non-gregorian calendars through ICU, so far with only format and parse\n> of timetamptz.\n>\n\nThanks\n\n\n>\n>\n> I understand that adding months or years with some of the non-gregorian\n> calendars should lead to different points in time than with the gregorian\n> calendar.\n>\n> For instance with the ethiopic calendar, the query above displays today as\n> 17/mägabit/2013 and 1 month from now as 18/miyazya/2013,\n> while the correct result is probably 17/miyazya/2013 (?)\n>\n>\nyes it should be 17/miyazya/2013 (?)\n\n\n> I'm not sure at this point that there should be a new set of\n> data/interval/timestamp types though, especially if considering\n> the integration in core.\n>\n> About intervals, if there were locale-aware 
functions like\n> add_interval(timestamptz, interval [, locale]) returns timestamptz\n> or\n> sub_timestamp(timestamptz, timestamptz [,locale]) returns interval\n> that would use ICU to compute the results according to the locale,\n> wouldn't it be good enough?\n>\n>\nYes, it can be enough for now, but there are patches proposed to support\nthe system- and application-time periods that are in the SQL standard, and\nif we have that feature the calendar support has to be in core. It doesn't\nappear hard to me to support the calendars ourselves: PostgreSQL itself\ndoesn't store Gregorian dates, it stores Julian dates (which are more\naccurate than the Gregorian calendar), and almost all functions and\noperations are done using the Julian date converted to seconds\n(TimestampTz). So what it takes to support a calendar ourselves is\ninput/output functions and a converter to and from the Julian calendar,\nand that may not be that hard, since most of the world's calendars are\nbased on the Julian or Gregorian calendar [0]. Thoughts?\n\n
0.https://en.wikipedia.org/wiki/List_of_calendarsregardsSurafel", "msg_date": "Mon, 29 Mar 2021 17:58:01 +0300", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "\tSurafel Temesgen wrote:\n\n> > About intervals, if there were locale-aware functions like\n> > add_interval(timestamptz, interval [, locale]) returns timestamptz\n> > or\n> > sub_timestamp(timestamptz, timestamptz [,locale]) returns interval\n> > that would use ICU to compute the results according to the locale,\n> > wouldn't it be good enough?\n> >\n> >\n> Yes it can be enough for now but there are patches proposed to support the\n> system and application time period which are in SQL standard \n\nTo clarify, these function signatures are not meant to oppose\na core vs extension implementation, nor an ICU vs non-ICU\nimplementation. They're meant to illustrate the case of using\nspecific functions instead of adding specific data types.\nAFAIU, adding data types come from the idea that since\n(non-gregorian-date + interval) doesn't have the same result as\n(gregorian-date + interval), we could use a different type for\nnon-gregorian-date and so a different \"+\" operator, maybe\neven a specific interval type.\n\nFor the case of temporal tables, I'm not quite familiar with the\nfeature, but I notice that the patch says:\n\n+ When system versioning is specified two columns are added which\n+ record the start timestamp and end timestamp of each row verson.\n+ The data type of these columns will be TIMESTAMP WITH TIME ZONE.\n\nThe user doesn't get to choose the data type, so if we'd require to\nuse specific data types for non-gregorian calendars, that would\nseemingly complicate things for this feature. 
This is consistent\nwith the remark upthread that the SQL standard assumes the\ngregorian calendar.\n\n\n> what it takes to support calendar locally is input/output function\n> and a converter from and to julian calendar and that may not be that\n> much hard since most of the world calendar is based on julian or\n> gregorian calendar[0]\n\nThe conversions from julian dates are not necessarily hard, but the\nI/O functions mean having localized names for all days, months, eras\nof all calendars in all supported languages. If you're thinking of\nimplementing this from scratch (without the ICU dependency), where\nwould these names come from? OTOH if we're using ICU, then why\nbother reinventing the julian-to-calendars conversions that ICU\nalready does?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 30 Mar 2021 20:16:32 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:16 AM Daniel Verite <daniel@manitou-mail.org>\nwrote:\n\n>\n> The conversions from julian dates are not necessarily hard, but the\n> I/O functions mean having localized names for all days, months, eras\n> of all calendars in all supported languages. If you're thinking of\n> implementing this from scratch (without the ICU dependency), where\n> would these names come from? OTOH if we're using ICU, then why\n> bother reinventing the julian-to-calendars conversions that ICU\n> already does?\n>\n>\nI don't know why, but currently we are using our own functions for\nconverting (see j2date and date2j); maybe they were written before ICU, but\nI think ICU helps in adding other calendar support easily. Regarding I/O\nfunctions, PostgreSQL hard-codes the day and month names in arrays and just\nparses and string-compares; if a name is not in the list, it is an error\n(see datetime.c). It would be the same for another calendar, but I think we\ndon't need all that if we use ICU.\n\nregards\nSurafel", "msg_date": "Wed, 31 Mar 2021 07:38:27 -0700", "msg_from": "Surafel Temesgen <surafel3000@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Calendar support in localization" }, { "msg_contents": "On Wed, Mar 17, 2021 at 8:20 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> *I mean, we can discuss the different \"timelines\" like UT, UTC, TAI\n> etc, but that's getting into the weeds, the usual timeline for\n> computer software outside specialist scientific purposes is UTC\n> without leap seconds.\n\n(Erm, rereading this thread, I meant to write \"time scales\" there.)\n\n\n", "msg_date": "Mon, 12 Apr 2021 10:56:03 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Calendar support in localization" }
]
[ { "msg_contents": "Sending this to the pgsql-hackers list to create a CommitFest entry with the attached patch proposal.\r\n\r\nHello,\r\nWe noticed that the logical replication could fail when the Standby::RUNNING_XACT record is generated in the middle of a catalog modifying transaction and if the logical decoding has to restart from the RUNNING_XACT\r\nWAL entry.\r\nThe Standby::RUNNING_XACT record is generated periodically (roughly every 15s by default) or during a CHECKPOINT operation.\r\n\r\nDetailed problem description:\r\nTested on 11.8 & current master.\r\nThe logical replication slot restart_lsn advances in the middle of an open txn that modified the catalog (e.g. a TRUNCATE operation).\r\nShould the logical decoding have to restart, it could fail with an error like this:\r\nERROR: could not map filenode \"base/13237/442428\"\r\n\r\nCurrently, the system relies on processing Heap2::NEW_CID to keep track of catalog modifying (sub)transactions.\r\nThis context is lost if the logical decoding has to restart from a Standby::RUNNING_XACTS that is written between the NEW_CID record and its parent txn commit.\r\nIf the logical stream restarts from this restart_lsn, then it doesn't have the xid responsible for modifying the catalog.\r\n\r\nRepro steps:\r\n1. We need to generate the Standby::RUNNING_XACT record deterministically using CHECKPOINT. Hence we'll delay LOG_SNAPSHOT_INTERVAL_MS using the following patch:\r\ndiff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c\r\nindex 3e6ffb05b9..b776e8d566 100644\r\n--- a/src/backend/postmaster/bgwriter.c\r\n+++ b/src/backend/postmaster/bgwriter.c\r\n@@ -76,7 +76,7 @@ int BgWriterDelay = 200;\r\n\r\n /*\r\n  * Interval in which standby snapshots are logged into the WAL stream, in\r\n  * milliseconds.\r\n  */\r\n-#define LOG_SNAPSHOT_INTERVAL_MS 15000\r\n+#define LOG_SNAPSHOT_INTERVAL_MS 1500000\r\n2. Create a table:\r\npostgres=# create table bdt (a int);\r\nCREATE TABLE\r\n3. 
Create a logical replication slot:\r\npostgres=# select pg_create_logical_replication_slot('bdt_slot','test_decoding');\r\npg_create_logical_replication_slot\r\n------------------------------------\r\n(bdt_slot,0/FFAA1C70)\r\n(1 row)\r\n4. Start reading the slot in a shell (keep the shell so that we can stop reading later):\r\n./bin/pg_recvlogical --slot bdt_slot --start -f bdt.out -d postgres\r\n5. Execute the workload across 2 different clients in the following order:\r\nSession1:\r\nbegin;\r\nsavepoint b1;\r\ntruncate bdt;\r\n\r\nSession2:\r\nselect * from pg_replication_slots; /* keep note of the confirmed_flush_lsn */\r\ncheckpoint;\r\n/* Repeat the following query until the confirmed_flush_lsn changes */\r\nselect * from pg_replication_slots;\r\n\r\nOnce confirmed_flush_lsn changes:\r\nSession1:\r\nend;\r\nbegin;\r\ninsert into bdt values (1);\r\nSession2:\r\nselect * from pg_replication_slots; /* keep note of both restart_lsn AND the confirmed_flush_lsn */\r\ncheckpoint;\r\n/* Repeat the following query until both restart_lsn AND confirmed_flush_lsn change */\r\nselect * from pg_replication_slots;\r\n6. Stop pg_recvlogical (Control-C).\r\n7. Then commit the insert txn:\r\nSession1:\r\nend;\r\n8. Get/peek the replication slot changes:\r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\nERROR: could not map filenode \"base/13237/442428\" to relation OID\r\n\r\nProposed solution:\r\nIf we’re decoding a catalog modifying commit record, then check whether it’s part of the RUNNING_XACT xids processed @ the restart_lsn. 
If so, then add its xid & subxacts in the committed txns list in the logical decoding snapshot.\r\n\r\nPlease refer to the attachment for the proposed patch.\r\n\r\nThanks,\r\nMike", "msg_date": "Mon, 15 Mar 2021 16:34:56 +0000", "msg_from": "\"Oh, Mike\" <minsoo@amazon.com>", "msg_from_op": true, "msg_subject": "[BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have also seen this error with logical replication using the pglogical extension; will this patch also address a similar problem with pglogical?", "msg_date": "Fri, 07 May 2021 11:50:24 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "\nOn Fri, 07 May 2021 at 19:50, ahsan hadi <ahsan.hadi@gmail.com> wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: not tested\n>\n> I have also seen this error with logical replication using pglogical extension, will this patch also address similar problem with pglogical?\n\nIs there a test case to reproduce this problem (using pglogical)?\nI encountered this; however, I haven't found a case to reproduce it.\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Sat, 08 May 2021 11:17:13 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode 
\"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, May 8, 2021 at 8:17 AM Japin Li <japinli@hotmail.com> wrote:\n\n>\n> On Fri, 07 May 2021 at 19:50, ahsan hadi <ahsan.hadi@gmail.com> wrote:\n> > The following review has been posted through the commitfest application:\n> > make installcheck-world: tested, passed\n> > Implements feature: tested, passed\n> > Spec compliant: tested, passed\n> > Documentation: not tested\n> >\n> > I have also seen this error with logical replication using pglogical\n> extension, will this patch also address similar problem with pglogical?\n>\n> Does there is a test case to reproduce this problem (using pglogical)?\n> I encountered this, however I'm not find a case to reproduce it.\n>\n\nI have seen a user run into this with pglogical; the error is produced\nafter logical decoding finds an inconsistent point. However, we haven't been\nable to reproduce the user scenario locally...\n\n\n> --\n> Regards,\n> Japin Li.\n> ChengDu WenWu Information Technology Co.,Ltd.\n>\n\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca", "msg_date": "Sat, 8 May 2021 23:51:45 +0500", "msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "This patch should address the same problem for pglogical as well.\r\n\r\nThanks,\r\nMike\r\n\r\nOn 6/4/21, 3:55 PM, \"ahsan hadi\" <ahsan.hadi@gmail.com> wrote:\r\n\r\n The following review has been posted through the commitfest application:\r\n make installcheck-world: tested, passed\r\n Implements feature: tested, passed\r\n Spec compliant: tested, passed\r\n Documentation: not tested\r\n\r\n I have also seen this error with logical replication using pglogical extension, will this patch also address similar problem with pglogical?\r\n\r\n", "msg_date": "Fri, 4 Jun 2021 22:59:22 +0000", "msg_from": "\"Oh, Mike\" <minsoo@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Mar 16, 2021 at 1:35 AM Oh, Mike <minsoo@amazon.com> wrote:\n>\n> Sending this to pgsql-hackers list to create a CommitFest entry with the attached patch proposal.\n>\n>\n>\n> Hello,\n>\n> We noticed that the logical replication could fail when the Standby::RUNNING_XACT record is generated in the middle of a catalog modifying transaction and if the logical 
decoding has to restart from the RUNNING_XACT\n>\n> WAL entry.\n>\n> The Standby::RUNNING_XACT record is generated periodically (roughly every 15s by default) or during a CHECKPOINT operation.\n>\n>\n>\n> Detailed problem description:\n>\n> Tested on 11.8 & current master.\n>\n> The logical replication slot restart_lsn advances in the middle of an open txn that modified the catalog (e.g. TRUNCATE operation).\n>\n> Should the logical decoding has to restart it could fail with an error like this:\n>\n> ERROR: could not map filenode \"base/13237/442428\"\n\nThank you for reporting the issue.\n\nI could reproduce this issue by the steps you shared.\n\n>\n> Currently, the system relies on processing Heap2::NEW_CID to keep track of catalog modifying (sub)transactions.\n>\n> This context is lost if the logical decoding has to restart from a Standby::RUNNING_XACTS that is written between the NEW_CID record and its parent txn commit.\n>\n> If the logical stream restarts from this restart_lsn, then it doesn't have the xid responsible for modifying the catalog.\n>\n\nI agree with your analysis. Since we don’t use commit WAL record to\ntrack the transaction that has modified system catalogs, if we decode\nonly the commit record of such transaction, we cannot know that the\ntransaction has modified system catalogs, resulting in the\nsubsequent transactions scanning the system catalog with the wrong snapshot.\n\nWith the patch, if the commit WAL record has a XACT_XINFO_HAS_INVALS\nflag and its xid is included in RUNNING_XACT record written at\nrestart_lsn, we forcibly add the top XID and its sub XIDs as a\ncommitted transaction that has modified system catalogs to the\nsnapshot. I might be missing something about your patch but I have\nsome comments on this approach:\n\n1. Commit WAL record may not have invalidation message for system\ncatalogs (e.g., when commit record has only invalidation message for\nrelcache) even if it has XACT_XINFO_HAS_INVALS flag. 
In this case, the\ntransaction is wrongly added to the snapshot, is that okay?\n\n2. We might add a subtransaction XID as a committed transaction that\nhas modified system catalogs even if it actually didn't. As the\ncomment in SnapBuildBuildSnapshot() describes, we track only the\ntransactions that have modified the system catalog and store them in the\nsnapshot (in the ‘xip' array). The patch could break that assumption.\nHowever, I’m really not sure how to deal with this point. We cannot\nknow which subtransaction has actually modified system catalogs by\nusing only the commit WAL record.\n\n3. The patch covers only the case where the restart_lsn exactly\nmatches the LSN of RUNNING_XACT. I wonder if there could be a case\nwhere the decoding starts at a WAL record other than RUNNING_XACT but\nthe next WAL record is RUNNING_XACT.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 29 Jul 2021 17:25:01 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi,\n\nOn 7/29/21 10:25 AM, Masahiko Sawada wrote:\n> Thank you for reporting the issue.\n>\n> I could reproduce this issue by the steps you shared.\n\nThanks for looking at it!\n\n>\n>> Currently, the system relies on processing Heap2::NEW_CID to keep track of catalog modifying (sub)transactions.\n>>\n>> This context is lost if the logical decoding has to restart from a Standby::RUNNING_XACTS that is written between the NEW_CID record and its parent txn commit.\n>>\n>> If the logical stream restarts from this restart_lsn, then it doesn't have the xid responsible for modifying the catalog.\n>>\n> I agree with your analysis. 
Since we don’t use commit WAL record to\n> track the transaction that has modified system catalogs, if we decode\n> only the commit record of such transaction, we cannot know the\n> transaction has been modified system catalogs, resulting in the\n> subsequent transaction scans system catalog with the wrong snapshot.\nRight.\n>\n> With the patch, if the commit WAL record has a XACT_XINFO_HAS_INVALS\n> flag and its xid is included in RUNNING_XACT record written at\n> restart_lsn, we forcibly add the top XID and its sub XIDs as a\n> committed transaction that has modified system catalogs to the\n> snapshot. I might be missing something about your patch but I have\n> some comments on this approach:\n>\n> 1. Commit WAL record may not have invalidation message for system\n> catalogs (e.g., when commit record has only invalidation message for\n> relcache) even if it has XACT_XINFO_HAS_INVALS flag.\n\nRight, good point (create policy for example would lead to an \ninvalidation for relcache only).\n\n> In this case, the\n> transaction wrongly is added to the snapshot, is that okay?\nThis transaction is a committed one, and IIUC the snapshot would be used \nonly for catalog visibility, so i don't see any issue to add it in the \nsnapshot, what do you think?\n>\n> 2. We might add a subtransaction XID as a committed transaction that\n> has modified system catalogs even if it actually didn't.\n\nRight, like when needs_timetravel is true.\n\n> As the\n> comment in SnapBuildBuildSnapshot() describes, we track only the\n> transactions that have modified the system catalog and store in the\n> snapshot (in the ‘xip' array). The patch could break that assumption.\nRight. It looks to me that breaking this assumption is not an issue.\n\nIIUC currently the committed ones that are not modifying the catalog are \nnot stored \"just\" because we don't need them.\n> However, I’m really not sure how to deal with this point. 
We cannot\n> know which subtransaction has actually modified system catalogs by\n> using only the commit WAL record.\nRight, unless we rewrite this patch so that a commit WAL record will \nproduce this information.\n>\n> 3. The patch covers only the case where the restart_lsn exactly\n> matches the LSN of RUNNING_XACT.\nRight.\n> I wonder if there could be a case\n> where the decoding starts at a WAL record other than RUNNING_XACT but\n> the next WAL record is RUNNING_XACT.\n\nNot sure, but could a restart_lsn not be a RUNNING_XACTS?\n\nThanks\n\nBertrand\n\n\n\n", "msg_date": "Thu, 23 Sep 2021 10:44:16 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi\r\n\r\n\r\nOn Tuesday, March 16, 2021 1:35 AM Oh, Mike <minsoo@amazon.com> wrote:\r\n> We noticed that the logical replication could fail when the\r\n> Standby::RUNNING_XACT record is generated in the middle of a catalog\r\n> modifying transaction and if the logical decoding has to restart from the\r\n> RUNNING_XACT\r\n> WAL entry.\r\n...\r\n> Proposed solution:\r\n> If we’re decoding a catalog modifying commit record, then check whether\r\n> it’s part of the RUNNING_XACT xid’s processed @ the restart_lsn. 
If so,\r\n> then add its xid & subxacts in the committed txns list in the logical decoding\r\n> snapshot.\r\n> \r\n> Please refer to the attachment for the proposed patch.\r\n\r\n\r\nLet me share some review comments for the patch.\r\n\r\n(1) last_running declaration\r\n\r\nIsn't it better to add static for this variable,\r\nbecause we don't use it in other places?\r\n\r\n@@ -85,6 +86,9 @@ static bool DecodeTXNNeedSkip(LogicalDecodingContext *ctx,\r\n XLogRecordBuffer *buf, Oid dbId,\r\n RepOriginId origin_id);\r\n\r\n+/* record previous restart_lsn running xacts */\r\n+xl_running_xacts *last_running = NULL;\r\n\r\n\r\n(2) DecodeStandbyOp's memory free\r\n\r\nI'm not sure when\r\nwe pass this condition with an already allocated last_running,\r\nbut do you need to free its xid array here as well,\r\nif last_running isn't null?\r\nOtherwise, we'll miss the chance after this.\r\n\r\n+ /* record restart_lsn running xacts */\r\n+ if (MyReplicationSlot && (buf->origptr == MyReplicationSlot->data.restart_lsn))\r\n+ {\r\n+ if (last_running)\r\n+ free(last_running);\r\n+\r\n+ last_running = NULL;\r\n\r\n(3) suggestion of a small readability improvement\r\n\r\nWe calculate the same size twice, here and in DecodeCommit.\r\nI suggest you declare a variable that stores the computed size,\r\nwhich might shorten this code.\r\n\r\n+ /*\r\n+ * xl_running_xacts contains a xids Flexible Array\r\n+ * and its size is subxcnt + xcnt.\r\n+ * Take that into account while allocating\r\n+ * the memory for last_running.\r\n+ */\r\n+ last_running = (xl_running_xacts *) malloc(sizeof(xl_running_xacts)\r\n+ + sizeof(TransactionId )\r\n+ * (running->subxcnt + running->xcnt));\r\n+ memcpy(last_running, running, sizeof(xl_running_xacts)\r\n+ + (sizeof(TransactionId)\r\n+ * (running->subxcnt + running->xcnt)));\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 24 Sep 2021 08:02:38 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", 
"msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On 7/29/21 01:25, Masahiko Sawada wrote:\n> On Tue, Mar 16, 2021 at 1:35 AM Oh, Mike <minsoo@amazon.com> wrote:\n>>\n>> Sending this to pgsql-hackers list to create a CommitFest entry with the attached patch proposal.\n>>\n>> ...\n>>\n>> Detailed problem description:\n>>\n>> Tested on 11.8 & current master.\n>>\n>> The logical replication slot restart_lsn advances in the middle of an open txn that modified the catalog (e.g. TRUNCATE operation).\n>>\n>> Should the logical decoding has to restart it could fail with an error like this:\n>>\n>> ERROR: could not map filenode \"base/13237/442428\"\n> \n> Thank you for reporting the issue.\n> \n> I could reproduce this issue by the steps you shared.\n\n\nI also noticed a bug report earlier this year with another PG user\nreporting the same error - on version 12.3\n\nhttps://www.postgresql.org/message-id/flat/16812-3d9df99bd77ff616%40postgresql.org\n\nToday I received a report from a new PG user of this same error message\ncausing their logical replication to break. 
This customer was also\nrunning PostgreSQL 12.3 on both source and target side.\n\nHaven't yet dumped WAL or anything, but wanted to point out that the\nerror is being seen in the wild - I hope we can get a version of this\npatch committed soon, as it will help with at least one cause.\n\n\n-Jeremy\n\n-- \nJeremy Schneider\nDatabase Engineer\nAmazon Web Services\n\n\n", "msg_date": "Fri, 1 Oct 2021 12:49:34 -0700", "msg_from": "Jeremy Schneider <schnjere@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Friday, September 24, 2021 5:03 PM I wrote:\r\n> On Tuesday, March 16, 2021 1:35 AM Oh, Mike <minsoo@amazon.com> wrote:\r\n> > We noticed that the logical replication could fail when the\r\n> > Standby::RUNNING_XACT record is generated in the middle of a catalog\r\n> > modifying transaction and if the logical decoding has to restart from\r\n> > the RUNNING_XACT WAL entry.\r\n> ...\r\n> > Proposed solution:\r\n> > If we’re decoding a catalog modifying commit record, then check\r\n> > whether it’s part of the RUNNING_XACT xid’s processed @ the\r\n> > restart_lsn. 
If so, then add its xid & subxacts in the committed txns\r\n> > list in the logical decoding snapshot.\r\n> >\r\n> > Please refer to the attachment for the proposed patch.\r\n> \r\n> \r\n> Let me share some review comments for the patch.\r\n....\r\n> (3) suggestion of small readability improvement\r\n> \r\n> We calculate the same size twice here and DecodeCommit.\r\n> I suggest you declare a variable that stores the computed result of size, which\r\n> might shorten those codes.\r\n> \r\n> + /*\r\n> + * xl_running_xacts contains a xids\r\n> Flexible Array\r\n> + * and its size is subxcnt + xcnt.\r\n> + * Take that into account while\r\n> allocating\r\n> + * the memory for last_running.\r\n> + */\r\n> + last_running = (xl_running_xacts *)\r\n> malloc(sizeof(xl_running_xacts)\r\n> +\r\n> + sizeof(TransactionId )\r\n> +\r\n> * (running->subxcnt + running->xcnt));\r\n> + memcpy(last_running, running,\r\n> sizeof(xl_running_xacts)\r\n> +\r\n> + (sizeof(TransactionId)\r\n> +\r\n> + * (running->subxcnt + running->xcnt)));\r\nLet me add one more basic review comment in DecodeStandbyOp().\r\n\r\nWhy do you call raw malloc directly ?\r\nYou don't have the basic check whether the return value is\r\nNULL or not and intended to call palloc here instead ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 5 Oct 2021 07:37:23 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Sep 23, 2021 at 5:44 PM Drouvot, Bertrand <bdrouvot@amazon.com> wrote:\n>\n> Hi,\n>\n> On 7/29/21 10:25 AM, Masahiko Sawada wrote:\n> > Thank you for reporting the issue.\n> >\n> > I could reproduce this issue by the steps you shared.\n>\n> Thanks for looking at it!\n>\n> >\n> >> Currently, the system relies on processing Heap2::NEW_CID to keep 
track of catalog modifying (sub)transactions.\n> >>\n> >> This context is lost if the logical decoding has to restart from a Standby::RUNNING_XACTS that is written between the NEW_CID record and its parent txn commit.\n> >>\n> >> If the logical stream restarts from this restart_lsn, then it doesn't have the xid responsible for modifying the catalog.\n> >>\n> > I agree with your analysis. Since we don’t use commit WAL record to\n> > track the transaction that has modified system catalogs, if we decode\n> > only the commit record of such transaction, we cannot know the\n> > transaction has been modified system catalogs, resulting in the\n> > subsequent transaction scans system catalog with the wrong snapshot.\n> Right.\n> >\n> > With the patch, if the commit WAL record has a XACT_XINFO_HAS_INVALS\n> > flag and its xid is included in RUNNING_XACT record written at\n> > restart_lsn, we forcibly add the top XID and its sub XIDs as a\n> > committed transaction that has modified system catalogs to the\n> > snapshot. I might be missing something about your patch but I have\n> > some comments on this approach:\n> >\n> > 1. Commit WAL record may not have invalidation message for system\n> > catalogs (e.g., when commit record has only invalidation message for\n> > relcache) even if it has XACT_XINFO_HAS_INVALS flag.\n>\n> Right, good point (create policy for example would lead to an\n> invalidation for relcache only).\n>\n> > In this case, the\n> > transaction wrongly is added to the snapshot, is that okay?\n> This transaction is a committed one, and IIUC the snapshot would be used\n> only for catalog visibility, so i don't see any issue to add it in the\n> snapshot, what do you think?\n\nIt seems to me that it's no problem since we always mark the transaction with\ncatalog-changed when decoding XLOG_XACT_INVALIDATIONS records.\n\n> >\n> > 2. 
We might add a subtransaction XID as a committed transaction that\n> > has modified system catalogs even if it actually didn't.\n>\n> Right, like when needs_timetravel is true.\n>\n> > As the\n> > comment in SnapBuildBuildSnapshot() describes, we track only the\n> > transactions that have modified the system catalog and store in the\n> > snapshot (in the ‘xip' array). The patch could break that assumption.\n> Right. It looks to me that breaking this assumption is not an issue.\n>\n> IIUC currently the committed ones that are not modifying the catalog are\n> not stored \"just\" because we don't need them.\n> > However, I’m really not sure how to deal with this point. We cannot\n> > know which subtransaction has actually modified system catalogs by\n> > using only the commit WAL record.\n> Right, unless we rewrite this patch so that a commit WAL record will\n> produce this information.\n> >\n> > 3. The patch covers only the case where the restart_lsn exactly\n> > matches the LSN of RUNNING_XACT.\n> Right.\n> > I wonder if there could be a case\n> > where the decoding starts at a WAL record other than RUNNING_XACT but\n> > the next WAL record is RUNNING_XACT.\n>\n> Not sure, but could a restart_lsn not be a RUNNING_XACTS?\n\nI guess the decoding always starts from RUNNING_XACTS.\nAfter more thought, I think that the basic approach of the proposed\npatch is probably a good idea, in which we add xids whose commit records\nhave XACT_XINFO_HAS_INVALS to the snapshot. The problem as I see it is\nthat while decoding a COMMIT record we cannot know which transactions\n(top transaction or subtransactions) actually did catalog changes. But\ngiven that even if XLOG_XACT_INVALIDATION has only a relcache\ninvalidation message we always mark the transaction with\ncatalog-changed, it seems no problem. 
Therefore, in the reported\ncases, probably we can add both the top transaction xid and its\nsubtransaction xids to the snapshot.\n\nRegarding the patch details, I have two comments:\n\n---\n+ if ((parsed->xinfo & XACT_XINFO_HAS_INVALS) && last_running)\n+ {\n+ /* make last_running->xids bsearch()able */\n+ qsort(last_running->xids,\n+ last_running->subxcnt + last_running->xcnt,\n+ sizeof(TransactionId), xidComparator);\n\nThe patch does qsort() every time the commit record has\nXACT_XINFO_HAS_INVALS. IIUC the only xids we need to remember are the\nones recorded in the first replayed XLOG_RUNNING_XACTS,\nright? If so, we need to do qsort() only once, can remove each xid from the\narray once it gets committed, and can then eventually make\nlast_running empty so that we can skip even TransactionIdInArray().\n\n---\nAlso, last_running is allocated by malloc() but it isn't freed even\nafter logical decoding finishes.\n\n\nAnother idea to fix this problem would be that before calling\nSnapBuildCommitTxn() we create transaction entries in ReorderBuffer\nfor (sub)transactions whose COMMIT record has XACT_XINFO_HAS_INVALS,\nand then mark all of them as catalog-changed by calling\nReorderBufferXidSetCatalogChanges(). I've attached a PoC patch for\nthis idea. What the patch does is essentially the same as what the\nproposed patch does. But the patch doesn't modify\nSnapBuildCommitTxn(). 
And we remember the list of last running\ntransactions in reorder buffer and the list is periodically purged\nduring decoding RUNNING_XACTS records, eventually making it empty.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 7 Oct 2021 13:20:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thursday, October 7, 2021 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> Regarding the patch details, I have two comments:\r\n> \r\n> ---\r\n> + if ((parsed->xinfo & XACT_XINFO_HAS_INVALS) && last_running) {\r\n> + /* make last_running->xids bsearch()able */\r\n> + qsort(last_running->xids,\r\n> + last_running->subxcnt + last_running->xcnt,\r\n> + sizeof(TransactionId), xidComparator);\r\n> \r\n> The patch does qsort() every time when the commit message has\r\n> XACT_XINFO_HAS_INVALS. IIUC the xids we need to remember is the only\r\n> xids that are recorded in the first replayed XLOG_RUNNING_XACTS, right? If so,\r\n> we need to do qsort() once, can remove xid from the array once it gets\r\n> committed, and then can eventually make last_running empty so that we can\r\n> skip even TransactionIdInArray().\r\n> \r\n> ---\r\n> Since last_running is allocated by malloc() and it isn't freed even after finishing\r\n> logical decoding.\r\n> \r\n> \r\n> Another idea to fix this problem would be that before calling\r\n> SnapBuildCommitTxn() we create transaction entries in ReorderBuffer for\r\n> (sub)transactions whose COMMIT record has XACT_XINFO_HAS_INVALS,\r\n> and then mark all of them as catalog-changed by calling\r\n> ReorderBufferXidSetCatalogChanges(). 
I've attached a PoC patch for this idea.\r\n> What the patch does is essentially the same as what the proposed patch does.\r\n> But the patch doesn't modify the SnapBuildCommitTxn(). And we remember\r\n> the list of last running transactions in reorder buffer and the list is periodically\r\n> purged during decoding RUNNING_XACTS records, eventually making it\r\n> empty.\r\nThanks for the patch.\r\n\r\nConducted a quick check of the POC.\r\n\r\nTest of check-world PASSED with your patch and head.\r\nAlso, the original scenario described in [1] looks fine\r\nwith your revised patch and LOG_SNAPSHOT_INTERVAL_MS expansion in the procedure.\r\n\r\nThe last command in the provided steps showed below.\r\n\r\npostgres=# select * from pg_logical_slot_get_changes('bdt_slot', null, null);\r\n lsn | xid | data \r\n-----------+-----+----------------------------------------\r\n 0/1560020 | 710 | BEGIN 710\r\n 0/1560020 | 710 | table public.bdt: INSERT: a[integer]:1\r\n 0/1560140 | 710 | COMMIT 710\r\n\r\n\r\nMinor comments for DecodeStandbyOp changes I noticed instantly\r\n(1) minor suggestion of your comment.\r\n\r\n\r\n+ * has done catalog changes without these records, we miss to add\r\n+ * the xid to the snapshot so up creating the wrong snapshot. 
To\r\n\r\n\"miss to add\" would be miss adding or fail to add.\r\nAnd \"up creating\" is natural in this sentence ?\r\n\r\n(2) a full-width space between \"it'\" and \"s\" in the next sentence.\r\n\r\n+ * mark an xid that actually has not done that but it’s not a\r\n\r\n\r\n\r\n[1] - https://www.postgresql.org/message-id/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n\r\n", "msg_date": "Fri, 8 Oct 2021 03:07:00 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Thu, 7 Oct 2021 13:20:14 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> Another idea to fix this problem would be that before calling\n> SnapBuildCommitTxn() we create transaction entries in ReorderBuffer\n> for (sub)transactions whose COMMIT record has XACT_XINFO_HAS_INVALS,\n> and then mark all of them as catalog-changed by calling\n> ReorderBufferXidSetCatalogChanges(). I've attached a PoC patch for\n> this idea. What the patch does is essentially the same as what the\n> proposed patch does. But the patch doesn't modify the\n> SnapBuildCommitTxn(). And we remember the list of last running\n> transactions in reorder buffer and the list is periodically purged\n> during decoding RUNNING_XACTS records, eventually making it empty.\n\nI came up with the third way. SnapBuildCommitTxn already properly\nhandles the case where a ReorderBufferTXN with\nRBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by create\nsuch ReorderBufferTXNs in SnapBuildProcessRunningXacts.\n\nOne problem with this is that change creates the case where multiple\nReorderBufferTXNs share the same first_lsn. 
I haven't come up with a\nclean idea to avoid relaxing the restriction of AssertTXNLsnOrder..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\nindex 46e66608cf..503116764f 100644\n--- a/src/backend/replication/logical/reorderbuffer.c\n+++ b/src/backend/replication/logical/reorderbuffer.c\n@@ -887,9 +887,14 @@ AssertTXNLsnOrder(ReorderBuffer *rb)\n \t\tif (cur_txn->end_lsn != InvalidXLogRecPtr)\n \t\t\tAssert(cur_txn->first_lsn <= cur_txn->end_lsn);\n \n-\t\t/* Current initial LSN must be strictly higher than previous */\n+\t\t/*\n+\t\t * Current initial LSN must be strictly higher than previous. except\n+\t\t * this transaction is created by XLOG_RUNNING_XACTS. If one\n+\t\t * XLOG_RUNNING_XACTS creates multiple transactions, they share the\n+\t\t * same LSN. See SnapBuildProcessRunningXacts.\n+\t\t */\n \t\tif (prev_first_lsn != InvalidXLogRecPtr)\n-\t\t\tAssert(prev_first_lsn < cur_txn->first_lsn);\n+\t\t\tAssert(prev_first_lsn <= cur_txn->first_lsn);\n \n \t\t/* known-as-subtxn txns must not be listed */\n \t\tAssert(!rbtxn_is_known_subxact(cur_txn));\ndiff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\nindex a5333349a8..58859112dc 100644\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -1097,6 +1097,20 @@ SnapBuildProcessRunningXacts(SnapBuild *builder, XLogRecPtr lsn, xl_running_xact\n \t */\n \tif (builder->state < SNAPBUILD_CONSISTENT)\n \t{\n+\t\t/*\n+\t\t * At the time we passed the first XLOG_RUNNING_XACTS record, the\n+\t\t * transactions notified by the record may have updated\n+\t\t * catalogs. Register the transactions with marking them as having\n+\t\t * caused catalog changes. 
The worst misbehavior here is some spurious\n+\t\t * invalidation at decoding start.\n+\t\t */\n+\t\tif (builder->state == SNAPBUILD_START)\n+\t\t{\n+\t\t\tfor (int i = 0 ; i < running->xcnt + running->subxcnt ; i++)\n+\t\t\t\tReorderBufferXidSetCatalogChanges(builder->reorder,\n+\t\t\t\t\t\t\t\t\t\t\t\t running->xids[i], lsn);\n+\t\t}\n+\n \t\t/* returns false if there's no point in performing cleanup just yet */\n \t\tif (!SnapBuildFindSnapshot(builder, lsn, running))\n \t\t\treturn;", "msg_date": "Fri, 08 Oct 2021 16:50:55 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": ".\n\nOn Fri, Oct 8, 2021 at 4:50 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 7 Oct 2021 13:20:14 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > Another idea to fix this problem would be that before calling\n> > SnapBuildCommitTxn() we create transaction entries in ReorderBuffer\n> > for (sub)transactions whose COMMIT record has XACT_XINFO_HAS_INVALS,\n> > and then mark all of them as catalog-changed by calling\n> > ReorderBufferXidSetCatalogChanges(). I've attached a PoC patch for\n> > this idea. What the patch does is essentially the same as what the\n> > proposed patch does. But the patch doesn't modify the\n> > SnapBuildCommitTxn(). And we remember the list of last running\n> > transactions in reorder buffer and the list is periodically purged\n> > during decoding RUNNING_XACTS records, eventually making it empty.\n>\n> I came up with the third way. SnapBuildCommitTxn already properly\n> handles the case where a ReorderBufferTXN with\n> RBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by create\n> such ReorderBufferTXNs in SnapBuildProcessRunningXacts.\n\nThank you for the idea and patch!\n\nIt's much simpler than mine. 
I think that creating an entry of a\ncatalog-changed transaction in the reorder buffer before\nSnapBuildCommitTxn() is the right direction.\n\nAfter more thought, given DDLs are less likely to happen than DML in\npractice, probably we can always mark both the top transaction and its\nsubtransactions as containing catalog changes if the commit record has\nXACT_XINFO_HAS_INVALS? I believe this is not likely to lead to\noverhead in practice. That way, the patch could be simpler and\ndoesn't need the change to AssertTXNLsnOrder().\n\nI've attached another PoC patch. Also, I've added the tests for this\nissue in test_decoding.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 11 Oct 2021 15:27:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi,\n\nOn 10/11/21 8:27 AM, Masahiko Sawada wrote:\n> On Fri, Oct 8, 2021 at 4:50 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> At Thu, 7 Oct 2021 13:20:14 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n>>> Another idea to fix this problem would be that before calling\n>>> SnapBuildCommitTxn() we create transaction entries in ReorderBuffer\n>>> for (sub)transactions whose COMMIT record has XACT_XINFO_HAS_INVALS,\n>>> and then mark all of them as catalog-changed by calling\n>>> ReorderBufferXidSetCatalogChanges(). I've attached a PoC patch for\n>>> this idea. What the patch does is essentially the same as what the\n>>> proposed patch does. But the patch doesn't modify the\n>>> SnapBuildCommitTxn(). And we remember the list of last running\n>>> transactions in reorder buffer and the list is periodically purged\n>>> during decoding RUNNING_XACTS records, eventually making it empty.\n>> I came up with the third way. 
SnapBuildCommitTxn already properly\n>> handles the case where a ReorderBufferTXN with\n>> RBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by create\n>> such ReorderBufferTXNs in SnapBuildProcessRunningXacts.\n> Thank you for the idea and patch!\n\nThanks you both for your new patches proposal!\n\nI liked Sawada's one but also do \"prefer\" Horiguchi's one.\n\n>\n> It's much simpler than mine. I think that creating an entry of a\n> catalog-changed transaction in the reorder buffer before\n> SunapBuildCommitTxn() is the right direction.\n+1\n>\n> After more thought, given DDLs are not likely to happen than DML in\n> practice, probably we can always mark both the top transaction and its\n> subtransactions as containing catalog changes if the commit record has\n> XACT_XINFO_HAS_INVALS? I believe this is not likely to lead to\n> overhead in practice. That way, the patch could be more simple and\n> doesn't need the change of AssertTXNLsnOrder().\n>\n> I've attached another PoC patch. Also, I've added the tests for this\n> issue in test_decoding.\n\nThanks!\n\nIt looks good to me, just have a remark about the comment:\n\n+   /*\n+    * Mark the top transaction and its subtransactions as containing \ncatalog\n+    * changes, if the commit record has invalidation message. This is \nnecessary\n+    * for the case where we decode only the commit record of the \ntransaction\n+    * that actually has done catalog changes.\n+    */\n\nWhat about?\n\n     /*\n      * Mark the top transaction and its subtransactions as containing \ncatalog\n      * changes, if the commit record has invalidation message. This is \nnecessary\n      * for the case where we did not decode the transaction that did \nthe catalog\n      * change(s) (the decoding restarted after). 
So that we are \ndecoding only the\n      * commit record of the transaction that actually has done catalog \nchanges.\n      */\n\nThanks\n\nBertrand\n\n\n\n", "msg_date": "Mon, 11 Oct 2021 11:44:21 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On 10/10/21 23:27, Masahiko Sawada wrote:\n> \n> After more thought, given DDLs are not likely to happen than DML in\n> practice, ...\n\nI haven't looked closely at the patch, but I'd be careful about\nworkloads where people create and drop \"temporary tables\". I've seen\nthis pattern used a few times, especially by developers who came from a\nSQL Server background, for some reason.\n\nI certainly don't think we need to optimize for this workload - which is\nnot a best practice on PostgreSQL. I'd just want to be careful not to\nmake PostgreSQL logical replication crumble underneath it, if PG was\npreviously keeping up, albeit with difficulty. 
That would be a sad upgrade\nexperience!\n\n-Jeremy\n\n-- \nhttp://about.me/jeremy_schneider\n\n\n", "msg_date": "Tue, 12 Oct 2021 15:55:53 -0700", "msg_from": "Jeremy Schneider <schneider@ardentperf.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Monday, October 11, 2021 3:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> On Fri, Oct 8, 2021 at 4:50 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\r\n> wrote:\r\n> >\r\n> > At Thu, 7 Oct 2021 13:20:14 +0900, Masahiko Sawada\r\n> > <sawada.mshk@gmail.com> wrote in\r\n> > > Another idea to fix this problem would be that before calling\r\n> > > SnapBuildCommitTxn() we create transaction entries in ReorderBuffer\r\n> > > for (sub)transactions whose COMMIT record has\r\n> XACT_XINFO_HAS_INVALS,\r\n> > > and then mark all of them as catalog-changed by calling\r\n> > > ReorderBufferXidSetCatalogChanges(). I've attached a PoC patch for\r\n> > > this idea. What the patch does is essentially the same as what the\r\n> > > proposed patch does. But the patch doesn't modify the\r\n> > > SnapBuildCommitTxn(). And we remember the list of last running\r\n> > > transactions in reorder buffer and the list is periodically purged\r\n> > > during decoding RUNNING_XACTS records, eventually making it empty.\r\n> >\r\n> > I came up with the third way. SnapBuildCommitTxn already properly\r\n> > handles the case where a ReorderBufferTXN with\r\n> > RBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by\r\n> create\r\n> > such ReorderBufferTXNs in SnapBuildProcessRunningXacts.\r\n> \r\n> Thank you for the idea and patch!\r\n> \r\n> It's much simpler than mine. 
I think that creating an entry of a catalog-changed\r\n> transaction in the reorder buffer before\r\n> SunapBuildCommitTxn() is the right direction.\r\n> \r\n> After more thought, given DDLs are not likely to happen than DML in practice,\r\n> probably we can always mark both the top transaction and its subtransactions\r\n> as containing catalog changes if the commit record has\r\n> XACT_XINFO_HAS_INVALS? I believe this is not likely to lead to overhead in\r\n> practice. That way, the patch could be more simple and doesn't need the\r\n> change of AssertTXNLsnOrder().\r\n> \r\n> I've attached another PoC patch. Also, I've added the tests for this issue in\r\n> test_decoding.\r\nI also felt that your patch addresses the problem in a good way.\r\nEven without setting the xid during NEW_CID decoding as in the original scenario,\r\nwe can set the catalog change flag.\r\n\r\nOne really minor comment I have is that\r\nin DecodeCommit(), you don't need to declare i. It's defined at the top of the function.\r\n\r\n+ for (int i = 0; i < parsed->nsubxacts; i++)\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Wed, 13 Oct 2021 12:58:34 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Oct 13, 2021 at 7:55 AM Jeremy Schneider\n<schneider@ardentperf.com> wrote:\n>\n> On 10/10/21 23:27, Masahiko Sawada wrote:\n> >\n> > After more thought, given DDLs are not likely to happen than DML in\n> > practice, ...\n>\n> I haven't looked closely at the patch, but I'd be careful about\n> workloads where people create and drop \"temporary tables\". I've seen\n> this pattern used a few times, especially by developers who came from a\n> SQL server background, for some reason.\n\nTrue. 
But since the snapshot builder is designed on the same\nassumption it would not be problematic. It keeps track of the\ncommitted catalog modifying transaction instead of keeping track of\nall running transactions. See the header comment of snapbuild.c\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 14 Oct 2021 10:39:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Mon, 11 Oct 2021 15:27:41 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> .\n> \n> On Fri, Oct 8, 2021 at 4:50 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I came up with the third way. SnapBuildCommitTxn already properly\n> > handles the case where a ReorderBufferTXN with\n> > RBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by create\n> > such ReorderBufferTXNs in SnapBuildProcessRunningXacts.\n> \n> Thank you for the idea and patch!\n> \n> It's much simpler than mine. I think that creating an entry of a\n> catalog-changed transaction in the reorder buffer before\n> SunapBuildCommitTxn() is the right direction.\n> \n> After more thought, given DDLs are not likely to happen than DML in\n> practice, probably we can always mark both the top transaction and its\n> subtransactions as containing catalog changes if the commit record has\n> XACT_XINFO_HAS_INVALS? I believe this is not likely to lead to\n> overhead in practice. That way, the patch could be more simple and\n> doesn't need the change of AssertTXNLsnOrder().\n> \n> I've attached another PoC patch. Also, I've added the tests for this\n> issue in test_decoding.\n\nThanks for the test script. 
(I did that with the TAP framework, but the\nisolation tester version is simpler.)\n\nIt adds a call to ReorderBufferAssignChild but usually subtransactions\nare assigned to the top level elsewhere. In addition to that,\nReorderBufferCommitChild(), called just later, does the same thing. We\nare adding a third call to the same function, which looks a bit odd.\n\nAnd I'm not sure it is wise to mark all subtransactions as \"catalog\nchanged\" always when the top transaction is XACT_XINFO_HAS_INVALS. The\nreason I did that in the snapshot building phase is to prevent adding\nto DecodeCommit extra code that is needed only while any\ntransaction that was running before replication start is still surviving.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 14 Oct 2021 11:21:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thursday, October 14, 2021 11:21 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> At Mon, 11 Oct 2021 15:27:41 +0900, Masahiko Sawada\n> <sawada.mshk@gmail.com> wrote in\n> >\n> > On Fri, Oct 8, 2021 at 4:50 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > I came up with the third way. SnapBuildCommitTxn already properly\n> > > handles the case where a ReorderBufferTXN with\n> > > RBTXN_HAS_CATALOG_CHANGES. So this issue can be resolved by\n> create\n> > > such ReorderBufferTXNs in SnapBuildProcessRunningXacts.\n> \n> Thank you for the idea and patch!\n> \n> It's much simpler than mine. 
I think that creating an entry of a\n> > catalog-changed transaction in the reorder buffer before\n> > SunapBuildCommitTxn() is the right direction.\n> >\n> > After more thought, given DDLs are not likely to happen than DML in\n> > practice, probably we can always mark both the top transaction and its\n> > subtransactions as containing catalog changes if the commit record has\n> > XACT_XINFO_HAS_INVALS? I believe this is not likely to lead to\n> > overhead in practice. That way, the patch could be more simple and\n> > doesn't need the change of AssertTXNLsnOrder().\n> >\n> > I've attached another PoC patch. Also, I've added the tests for this\n> > issue in test_decoding.\n> \n> Thanks for the test script. (I did that with TAP framework but isolation tester\n> version is simpler.)\n> \n> It adds a call to ReorderBufferAssignChild but usually subtransactions are\n> assigned to top level elsewherae. Addition to that\n> ReorderBufferCommitChild() called just later does the same thing. We are\n> adding the third call to the same function, which looks a bit odd.\nIt can be odd. However, we\nhave a check at the top of ReorderBufferAssignChild\nto judge whether the subtransaction is already associated or not,\nand we skip the processing if it is.\n\n> And I'm not sure it is wise to mark all subtransactions as \"catalog changed\"\n> always when the top transaction is XACT_XINFO_HAS_INVALS.\nIn order to avoid this,\ncan't we have a new flag (for example, in the reorderbuffer struct) to check\nwhether we started decoding from RUNNING_XACTS, similar to the first patch of [1],\nand use it at DecodeCommit? 
This still leads to some extra specific code added\nto DecodeCommit, and this solution becomes a bit similar to other previous patches though.\n\n\n[1] - https://www.postgresql.org/message-id/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com\n\n\nBest Regards,\n\tTakamichi Osumi\n \n\n\n", "msg_date": "Tue, 19 Oct 2021 02:45:24 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 19 Oct 2021 02:45:24 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in \n> On Thursday, October 14, 2021 11:21 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > It adds a call to ReorderBufferAssignChild but usually subtransactions are\n> > assigned to top level elsewherae. Addition to that\n> > ReorderBufferCommitChild() called just later does the same thing. We are\n> > adding the third call to the same function, which looks a bit odd.\n> It can be odd. However, we\n> have a check at the top of ReorderBufferAssignChild\n> to judge if the sub transaction is already associated or not\n> and skip the processings if it is.\n\nMy question was why we need to make the extra call to\nReorderBufferCommitChild when XACT_XINFO_HAS_INVALS is set, in spite of the\nexisting call to the same function that is made unconditionally. It\ndoesn't cost so much, but it's also not free.\n\n> > And I'm not sure it is wise to mark all subtransactions as \"catalog changed\"\n> > always when the top transaction is XACT_XINFO_HAS_INVALS.\n> In order to avoid this,\n> can't we have a new flag (for example, in reorderbuffer struct) to check\n> if we start decoding from RUNNING_XACTS, which is similar to the first patch of [1]\n> and use it at DecodeCommit ? 
This still leads to some extra specific codes added\n> to DecodeCommit and this solution becomes a bit similar to other previous patches though.\n\nIf it is somehow wrong in any sense that we add subtransactions in\nSnapBuildProcessRunningXacts (for example, if we should avoid relaxing\nthe assertion condition), I think we would go another way. Otherwise\nwe don't even need that additional flag. (But Sawada-san's recent PoC\nalso needs that relaxation.)\n\nAFAICS, and unless I'm missing something (the odds of which are relatively\nhigh :p), we need the specially added subtransactions only for the\ntransactions that were running when we passed the first RUNNING_XACTS,\nbecause otherwise (substantial) subtransactions are assigned to the\ntoplevel by the first record of the subtransaction.\n\nBefore reaching consistency, DecodeCommit feeds the subtransactions to\nReorderBufferForget individually, so the subtransactions do not need\nto be assigned to the top transaction at all. Since the\nsubtransactions added by the first RUNNING_XACT are processed that\nway, we don't need to call ReorderBufferCommitChild\nfor such subtransactions in the first place.\n\n> [1] - https://www.postgresql.org/message-id/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:43:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi,\n\nOn 10/19/21 8:43 AM, Kyotaro Horiguchi wrote:\n> At Tue, 19 Oct 2021 02:45:24 +0000, \"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com> wrote in\n>> On Thursday, October 14, 2021 11:21 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>> It adds a call to ReorderBufferAssignChild but usually subtransactions are\n>>> assigned to top 
level elsewherae. Addition to that\n>>> ReorderBufferCommitChild() called just later does the same thing. We are\n>>> adding the third call to the same function, which looks a bit odd.\n>> It can be odd. However, we\n>> have a check at the top of ReorderBufferAssignChild\n>> to judge if the sub transaction is already associated or not\n>> and skip the processings if it is.\n> My question was why do we need to make the extra call to\n> ReorerBufferCommitChild when XACT_XINFO_HAS_INVALS in spite of the\n> existing call to the same fuction that unconditionally made. It\n> doesn't cost so much but also it's not free.\n>\n>>> And I'm not sure it is wise to mark all subtransactions as \"catalog changed\"\n>>> always when the top transaction is XACT_XINFO_HAS_INVALS.\n>> In order to avoid this,\n>> can't we have a new flag (for example, in reorderbuffer struct) to check\n>> if we start decoding from RUNNING_XACTS, which is similar to the first patch of [1]\n>> and use it at DecodeCommit ? This still leads to some extra specific codes added\n>> to DecodeCommit and this solution becomes a bit similar to other previous patches though.\n> If it is somehow wrong in any sense that we add subtransactions in\n> SnapBuildProcessRunningXacts (for example, we should avoid relaxing\n> the assertion condition.), I think we would go another way. Otherwise\n> we don't even need that additional flag. (But Sawadasan's recent PoC\n> also needs that relaxation.)\n>\n> ASAICS, and unless I'm missing something (that odds are rtlatively\n> high:p), we need the specially added subransactions only for the\n> transactions that were running at passing the first RUNNING_XACTS,\n> becuase otherwise (substantial) subtransactions are assigned to\n> toplevel by the first record of the subtransaction.\n>\n> Before reaching consistency, DecodeCommit feeds the subtransactions to\n> ReorderBufferForget individually so the subtransactions are not needed\n> to be assigned to the top transaction at all. 
Since the\n> subtransactions added by the first RUNNING_XACT are processed that\n> way, we don't need in the first place to call ReorderBufferCommitChild\n> for such subtransactions.\n>\n>> [1] - https://www.postgresql.org/message-id/81D0D8B0-E7C4-4999-B616-1E5004DBDCD2%40amazon.com\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\nJust rebased (minor change in the contrib/test_decoding/Makefile) the \nlast POC version linked to the CF entry as it was failing the CF bot.\n\nThanks\n\nBertrand", "msg_date": "Mon, 21 Feb 2022 11:29:51 +0100", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Oct 11, 2021 at 11:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> It's much simpler than mine. I think that creating an entry of a\n> catalog-changed transaction in the reorder buffer before\n> SunapBuildCommitTxn() is the right direction.\n>\n> After more thought, given DDLs are not likely to happen than DML in\n> practice, probably we can always mark both the top transaction and its\n> subtransactions as containing catalog changes if the commit record has\n> XACT_XINFO_HAS_INVALS?\n>\n\nI have some observations and thoughts on this work.\n\n1.\n+# For the transaction that TRUNCATEd the table tbl1, the last decoding decodes\n+# only its COMMIT record, because it starts from the RUNNING_XACT\nrecord emitted\n+# during the second checkpoint execution. 
This transaction must be marked as\n+# containing catalog changes during decoding the COMMIT record and the decoding\n+# of the INSERT record must read the pg_class with the correct\nhistoric snapshot.\n+permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_truncate\"\n\"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s0_begin\" \"s0_insert\"\n\"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s1_get_changes\"\n\nIn the first line of the comment, do you want to say \"... record emitted\nduring the first checkpoint\"? Only then can it start from the\ncommit record of the transaction that has performed the truncate.\n\n2.\n+ /*\n+ * Mark the top transaction and its subtransactions as containing catalog\n+ * changes, if the commit record has invalidation message. This is necessary\n+ * for the case where we decode only the commit record of the transaction\n+ * that actually has done catalog changes.\n+ */\n+ if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n+ {\n+ ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n+\n+ for (int i = 0; i < parsed->nsubxacts; i++)\n+ {\n+ ReorderBufferAssignChild(ctx->reorder, xid, parsed->subxacts[i],\n+ buf->origptr);\n+ ReorderBufferXidSetCatalogChanges(ctx->reorder, parsed->subxacts[i],\n+ buf->origptr);\n+ }\n+ }\n+\n SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,\n parsed->nsubxacts, parsed->subxacts);\n\nMarking it before SnapBuildCommitTxn has one disadvantage: we\nsometimes do this work even if the snapshot state is SNAPBUILD_START\nor SNAPBUILD_BUILDING_SNAPSHOT, in which case SnapBuildCommitTxn\nwouldn't do anything. Now, whereas this will fix the issue, it\nseems we need to do this work even when we would have already marked\nthe txn as having catalog changes, and then there are probably cases where we\nmark them when it is not required, as discussed in this thread.\n\nI think if we don't have any better ideas then we should go with\neither this or one of the other proposals in this thread. 
The other\nidea that occurred to me is whether we can somehow update the snapshot\nwe have serialized on disk about this information. On each\nrunning_xact record when we serialize the snapshot, we also try to\npurge the committed xacts (via SnapBuildPurgeCommittedTxn). So, during\nthat we can check if there are committed xacts to be purged and if we\nhave previously serialized the snapshot for the prior running xact\nrecord, if so, we can update it with the list of xacts that have\ncatalog changes. If this is feasible then I think we need to somehow\nremember the point where we last serialized the snapshot (maybe by\nusing builder->last_serialized_snapshot). Even, if this is feasible we\nmay not be able to do this in back-branches because of the disk-format\nchange required for this.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 21 May 2022 15:35:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> I think if we don't have any better ideas then we should go with\n> either this or one of the other proposals in this thread. The other\n> idea that occurred to me is whether we can somehow update the snapshot\n> we have serialized on disk about this information. On each\n> running_xact record when we serialize the snapshot, we also try to\n> purge the committed xacts (via SnapBuildPurgeCommittedTxn). So, during\n> that we can check if there are committed xacts to be purged and if we\n> have previously serialized the snapshot for the prior running xact\n> record, if so, we can update it with the list of xacts that have\n> catalog changes. 
If this is feasible then I think we need to somehow\n> remember the point where we last serialized the snapshot (maybe by\n> using builder->last_serialized_snapshot). Even, if this is feasible we\n> may not be able to do this in back-branches because of the disk-format\n> change required for this.\n> \n> Thoughts?\n\nI didn't look at it closely, but it seems to work. I'm not sure how much\nspurious invalidations at replication start impact performance,\nbut it is promising if the impact is significant. That being said, I'm\na bit negative about doing that in the post-beta1 stage.\n\nI thought for a moment that RUNNING_XACT might be able to contain\ninvalidation information but it seems too complex to happen with such\na frequency.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 23 May 2022 13:33:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, May 23, 2022 at 10:03 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > I think if we don't have any better ideas then we should go with\n> > either this or one of the other proposals in this thread. The other\n> > idea that occurred to me is whether we can somehow update the snapshot\n> > we have serialized on disk about this information. 
If this is feasible then I think we need to somehow\n> > remember the point where we last serialized the snapshot (maybe by\n> > using builder->last_serialized_snapshot). Even, if this is feasible we\n> > may not be able to do this in back-branches because of the disk-format\n> > change required for this.\n> >\n> > Thoughts?\n>\n> I didn't look it closer, but it seems to work. I'm not sure how much\n> spurious invalidations at replication start impacts on performance,\n> but it is promising if the impact is significant.\n>\n\nIt seems Sawada-San's patch is doing at each commit not at the start\nof replication and I think that is required because we need this each\ntime for replication restart. So, I feel this will be an ongoing\noverhead for spurious cases with the current approach.\n\n> That being said I'm\n> a bit negative for doing that in post-beta1 stage.\n>\n\nFair point. We can use the do it early in PG-16 if the approach is\nfeasible, and backpatch something on lines of what Sawada-San or you\nproposed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 23 May 2022 11:09:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, May 23, 2022 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 23, 2022 at 10:03 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > I think if we don't have any better ideas then we should go with\n> > > either this or one of the other proposals in this thread. The other\n> > > idea that occurred to me is whether we can somehow update the snapshot\n> > > we have serialized on disk about this information. 
On each\n> > > running_xact record when we serialize the snapshot, we also try to\n> > > purge the committed xacts (via SnapBuildPurgeCommittedTxn). So, during\n> > > that we can check if there are committed xacts to be purged and if we\n> > > have previously serialized the snapshot for the prior running xact\n> > > record, if so, we can update it with the list of xacts that have\n> > > catalog changes. If this is feasible then I think we need to somehow\n> > > remember the point where we last serialized the snapshot (maybe by\n> > > using builder->last_serialized_snapshot). Even, if this is feasible we\n> > > may not be able to do this in back-branches because of the disk-format\n> > > change required for this.\n> > >\n> > > Thoughts?\n\nIt seems to work, could you draft the patch?\n\n> >\n> > I didn't look it closer, but it seems to work. I'm not sure how much\n> > spurious invalidations at replication start impacts on performance,\n> > but it is promising if the impact is significant.\n> >\n>\n> It seems Sawada-San's patch is doing at each commit not at the start\n> of replication and I think that is required because we need this each\n> time for replication restart. So, I feel this will be an ongoing\n> overhead for spurious cases with the current approach.\n>\n> > That being said I'm\n> > a bit negative for doing that in post-beta1 stage.\n> >\n>\n> Fair point. We can use the do it early in PG-16 if the approach is\n> feasible, and backpatch something on lines of what Sawada-San or you\n> proposed.\n\n+1.\n\nI proposed two approaches: [1] and [2,] and I prefer [1].\nHoriguchi-san's idea[3] also looks good but I think it's better to\nsomehow deal with the problem he mentioned:\n\n> One problem with this is that change creates the case where multiple\n> ReorderBufferTXNs share the same first_lsn. 
I haven't come up with a\n> clean idea to avoid relaxing the restriction of AssertTXNLsnOrder..\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAn-k6OpZ6HSAH_G91tpTXR6KYvkf663kg6EqW-f6sz1w%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAD21AoD00wV4gt-53ze%2BZB8n4bqJrdH8J_UnDHddy8S2A%2Ba25g%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/20211008.165055.1621145185927268721.horikyota.ntt%40gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 24 May 2022 11:27:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, May 24, 2022 at 7:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 23, 2022 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 23, 2022 at 10:03 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > I think if we don't have any better ideas then we should go with\n> > > > either this or one of the other proposals in this thread. The other\n> > > > idea that occurred to me is whether we can somehow update the snapshot\n> > > > we have serialized on disk about this information. On each\n> > > > running_xact record when we serialize the snapshot, we also try to\n> > > > purge the committed xacts (via SnapBuildPurgeCommittedTxn). So, during\n> > > > that we can check if there are committed xacts to be purged and if we\n> > > > have previously serialized the snapshot for the prior running xact\n> > > > record, if so, we can update it with the list of xacts that have\n> > > > catalog changes. 
If this is feasible then I think we need to somehow\n> > > > remember the point where we last serialized the snapshot (maybe by\n> > > > using builder->last_serialized_snapshot). Even, if this is feasible we\n> > > > may not be able to do this in back-branches because of the disk-format\n> > > > change required for this.\n> > > >\n> > > > Thoughts?\n>\n> It seems to work, could you draft the patch?\n>\n\nI can help with the review and discussion.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 24 May 2022 10:47:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, May 24, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, May 24, 2022 at 7:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 23, 2022 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 23, 2022 at 10:03 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > > I think if we don't have any better ideas then we should go with\n> > > > > either this or one of the other proposals in this thread. The other\n> > > > > idea that occurred to me is whether we can somehow update the snapshot\n> > > > > we have serialized on disk about this information. On each\n> > > > > running_xact record when we serialize the snapshot, we also try to\n> > > > > purge the committed xacts (via SnapBuildPurgeCommittedTxn). So, during\n> > > > > that we can check if there are committed xacts to be purged and if we\n> > > > > have previously serialized the snapshot for the prior running xact\n> > > > > record, if so, we can update it with the list of xacts that have\n> > > > > catalog changes. 
If this is feasible then I think we need to somehow\n> > > > > remember the point where we last serialized the snapshot (maybe by\n> > > > > using builder->last_serialized_snapshot). Even, if this is feasible we\n> > > > > may not be able to do this in back-branches because of the disk-format\n> > > > > change required for this.\n> > > > >\n> > > > > Thoughts?\n> >\n> > It seems to work, could you draft the patch?\n> >\n>\n> I can help with the review and discussion.\n\nOkay, I'll draft the patch for this idea.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 25 May 2022 12:11:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, May 25, 2022 at 12:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, May 24, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, May 24, 2022 at 7:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, May 23, 2022 at 2:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 23, 2022 at 10:03 AM Kyotaro Horiguchi\n> > > > <horikyota.ntt@gmail.com> wrote:\n> > > > >\n> > > > > At Sat, 21 May 2022 15:35:58 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > > > I think if we don't have any better ideas then we should go with\n> > > > > > either this or one of the other proposals in this thread. The other\n> > > > > > idea that occurred to me is whether we can somehow update the snapshot\n> > > > > > we have serialized on disk about this information. On each\n> > > > > > running_xact record when we serialize the snapshot, we also try to\n> > > > > > purge the committed xacts (via SnapBuildPurgeCommittedTxn). 
So, during\n> > > > > > that we can check if there are committed xacts to be purged and if we\n> > > > > > have previously serialized the snapshot for the prior running xact\n> > > > > > record, if so, we can update it with the list of xacts that have\n> > > > > > catalog changes. If this is feasible then I think we need to somehow\n> > > > > > remember the point where we last serialized the snapshot (maybe by\n> > > > > > using builder->last_serialized_snapshot). Even, if this is feasible we\n> > > > > > may not be able to do this in back-branches because of the disk-format\n> > > > > > change required for this.\n> > > > > >\n> > > > > > Thoughts?\n> > >\n> > > It seems to work, could you draft the patch?\n> > >\n> >\n> > I can help with the review and discussion.\n>\n> Okay, I'll draft the patch for this idea.\n\nI've attached three POC patches:\n\npoc_remember_last_running_xacts_v2.patch is a rebased patch of my\nprevious proposal[1]. This is based on the original proposal: we\nremember the last-running-xacts list of the first decoded\nRUNNING_XACTS record and check if the transaction whose commit record\nhas XACT_XINFO_HAS_INVALS and whose xid is in the list. This doesn’t\nrequire any file format changes but the transaction will end up being\nadded to the snapshot even if it has only relcache invalidations.\n\npoc_add_running_catchanges_xacts_to_serialized_snapshot.patch is a\npatch for the idea Amit Kapila proposed with some changes. The basic\napproach is to remember the list of xids that changed catalogs and\nwere running when serializing the snapshot. The list of xids is kept\nin SnapShotBuilder and is serialized and restored to/from the\nserialized snapshot. When decoding a commit record, we check if the\ntransaction is already marked as catalog-changes or its xid is in the\nlist. If so, we add it to the snapshot. 
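For illustration only, here is a standalone sketch of that membership test (the names here are hypothetical stand-ins, not the patch's actual code; in the patch the sorted array is builder->catchanges.xip and the lookup is a bsearch() with xidComparator):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;

/*
 * Plain numeric comparison of TransactionIds, in the spirit of
 * PostgreSQL's xidComparator; the catalog-modifying xid array is
 * assumed to be kept sorted in this order.
 */
static int
xid_cmp(const void *arg1, const void *arg2)
{
	TransactionId xid1 = *(const TransactionId *) arg1;
	TransactionId xid2 = *(const TransactionId *) arg2;

	if (xid1 < xid2)
		return -1;
	if (xid1 > xid2)
		return 1;
	return 0;
}

/*
 * Hypothetical stand-in for the check made while decoding a commit
 * record: was this (sub)transaction in the catalog-modifying list
 * restored from the serialized snapshot?
 */
static bool
xid_in_catchanges(TransactionId xid, const TransactionId *xip, size_t xcnt)
{
	return bsearch(&xid, xip, xcnt, sizeof(TransactionId), xid_cmp) != NULL;
}
```

If the lookup succeeds (or the transaction is already known to have catalog changes), the transaction is treated as containing catalog changes and added to the snapshot.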
Unlike the first patch, it can\nproperly add only those transactions that have changed catalogs, but as Amit\nmentioned before, this idea cannot be back-patched as this changes the\non-disk format of the serialized snapshot.\n\npoc_add_regression_tests.patch adds regression tests for this bug. The\nregression tests are required for both HEAD and back-patching but I've\nseparated this patch for testing the above two patches easily.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 30 May 2022 14:42:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 25, 2022 at 12:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> poc_add_regression_tests.patch adds regression tests for this bug. 
What if we get an additional running_xact\nrecord between steps \"s0_commit\" and \"s0_begin\" that is logged via\nbgwriter? You can mimic that by adding an additional checkpoint\nbetween those two steps. If we do that, the test will pass even\nwithout the patch because I think the last decoding will start\ndecoding from this new running_xact record.\n\n2.\n+step \"s1_get_changes\" { SELECT data FROM\npg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n'include-xids', '0'); }\n\nIt is better to skip empty transactions by using 'skip-empty-xacts' to\navoid any transaction from a background process like autovacuum. We\nhave previously seen some buildfarm failures due to that.\n\n3. Did you intentionally omit the .out from the test case patch?\n\n4.\nThis transaction must be marked as\n+# catalog-changes while decoding the COMMIT record and the decoding\nof the INSERT\n+# record must read the pg_class with the correct historic snapshot.\n\n/marked as catalog-changes/marked as containing catalog changes\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 7 Jun 2022 18:02:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jun 7, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 25, 2022 at 12:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > poc_add_regression_tests.patch adds regression tests for this bug. 
The\n> > regression tests are required for both HEAD and back-patching but I've\n> > separated this patch for testing the above two patches easily.\n> >\n\nThank you for the comments.\n\n>\n> Few comments on the test case patch:\n> ===============================\n> 1.\n> +# For the transaction that TRUNCATEd the table tbl1, the last decoding decodes\n> +# only its COMMIT record, because it starts from the RUNNING_XACT\n> record emitted\n> +# during the first checkpoint execution. This transaction must be marked as\n> +# catalog-changes while decoding the COMMIT record and the decoding\n> of the INSERT\n> +# record must read the pg_class with the correct historic snapshot.\n> +permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_truncate\"\n> \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s0_begin\" \"s0_insert\"\n> \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s1_get_changes\"\n>\n> Will this test always work? What if we get an additional running_xact\n> record between steps \"s0_commit\" and \"s0_begin\" that is logged via\n> bgwriter? You can mimic that by adding an additional checkpoint\n> between those two steps. If we do that, the test will pass even\n> without the patch because I think the last decoding will start\n> decoding from this new running_xact record.\n\nRight. It could pass depending on the timing but doesn't fail\ndepending on the timing. I think we need to somehow stop bgwriter to\nmake the test case stable but it seems unrealistic. Do you have any\nbetter ideas?\n\n>\n> 2.\n> +step \"s1_get_changes\" { SELECT data FROM\n> pg_logical_slot_get_changes('isolation_slot', NULL, NULL,\n> 'include-xids', '0'); }\n>\n> It is better to skip empty transactions by using 'skip-empty-xacts' to\n> avoid any transaction from a background process like autovacuum. We\n> have previously seen some buildfarm failures due to that.\n\nAgreed.\n\n>\n> 3. 
Did you intentionally omit the .out from the test case patch?\n\nNo, I'll add .out file in the next version patch.\n\n>\n> 4.\n> This transaction must be marked as\n> +# catalog-changes while decoding the COMMIT record and the decoding\n> of the INSERT\n> +# record must read the pg_class with the correct historic snapshot.\n>\n> /marked as catalog-changes/marked as containing catalog changes\n\nAgreed.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 13 Jun 2022 11:58:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jun 13, 2022 at 8:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jun 7, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, May 25, 2022 at 12:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > poc_add_regression_tests.patch adds regression tests for this bug. The\n> > > regression tests are required for both HEAD and back-patching but I've\n> > > separated this patch for testing the above two patches easily.\n> > >\n>\n> Thank you for the comments.\n>\n> >\n> > Few comments on the test case patch:\n> > ===============================\n> > 1.\n> > +# For the transaction that TRUNCATEd the table tbl1, the last decoding decodes\n> > +# only its COMMIT record, because it starts from the RUNNING_XACT\n> > record emitted\n> > +# during the first checkpoint execution. 
This transaction must be marked as\n> > +# catalog-changes while decoding the COMMIT record and the decoding\n> > of the INSERT\n> > +# record must read the pg_class with the correct historic snapshot.\n> > +permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_truncate\"\n> > \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s0_begin\" \"s0_insert\"\n> > \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s1_get_changes\"\n> >\n> > Will this test always work? What if we get an additional running_xact\n> > record between steps \"s0_commit\" and \"s0_begin\" that is logged via\n> > bgwriter? You can mimic that by adding an additional checkpoint\n> > between those two steps. If we do that, the test will pass even\n> > without the patch because I think the last decoding will start\n> > decoding from this new running_xact record.\n>\n> Right. It could pass depending on the timing but doesn't fail\n> depending on the timing. I think we need to somehow stop bgwriter to\n> make the test case stable but it seems unrealistic.\n>\n\nAgreed, in my local testing for this case, I use to increase\nLOG_SNAPSHOT_INTERVAL_MS to avoid such a situation but I understand it\nis not practical via test.\n\n> Do you have any\n> better ideas?\n>\n\nNo, I don't have any better ideas. 
I think it is better to add some\ninformation related to this in the comments because it may help to\nimprove the test in the future if we come up with a better idea.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 14 Jun 2022 12:26:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jun 14, 2022 at 3:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jun 13, 2022 at 8:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jun 7, 2022 at 9:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, May 25, 2022 at 12:11 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > poc_add_regression_tests.patch adds regression tests for this bug. The\n> > > > regression tests are required for both HEAD and back-patching but I've\n> > > > separated this patch for testing the above two patches easily.\n> > > >\n> >\n> > Thank you for the comments.\n> >\n> > >\n> > > Few comments on the test case patch:\n> > > ===============================\n> > > 1.\n> > > +# For the transaction that TRUNCATEd the table tbl1, the last decoding decodes\n> > > +# only its COMMIT record, because it starts from the RUNNING_XACT\n> > > record emitted\n> > > +# during the first checkpoint execution. 
This transaction must be marked as\n> > > +# catalog-changes while decoding the COMMIT record and the decoding\n> > > of the INSERT\n> > > +# record must read the pg_class with the correct historic snapshot.\n> > > +permutation \"s0_init\" \"s0_begin\" \"s0_savepoint\" \"s0_truncate\"\n> > > \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s0_begin\" \"s0_insert\"\n> > > \"s1_checkpoint\" \"s1_get_changes\" \"s0_commit\" \"s1_get_changes\"\n> > >\n> > > Will this test always work? What if we get an additional running_xact\n> > > record between steps \"s0_commit\" and \"s0_begin\" that is logged via\n> > > bgwriter? You can mimic that by adding an additional checkpoint\n> > > between those two steps. If we do that, the test will pass even\n> > > without the patch because I think the last decoding will start\n> > > decoding from this new running_xact record.\n> >\n> > Right. It could pass depending on the timing but doesn't fail\n> > depending on the timing. I think we need to somehow stop bgwriter to\n> > make the test case stable but it seems unrealistic.\n> >\n>\n> Agreed, in my local testing for this case, I use to increase\n> LOG_SNAPSHOT_INTERVAL_MS to avoid such a situation but I understand it\n> is not practical via test.\n>\n> > Do you have any\n> > better ideas?\n> >\n>\n> No, I don't have any better ideas. I think it is better to add some\n> information related to this in the comments because it may help to\n> improve the test in the future if we come up with a better idea.\n\nI also don't have any better ideas to make it stable, and agreed. 
I've\nattached an updated version patch for adding regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 15 Jun 2022 10:34:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached three POC patches:\n>\n\nI think it will be a good idea if you can add a short commit message\nat least to say which patch is proposed for HEAD and which one is for\nback branches. Also, it would be good if you can add some description\nof the fix in the commit message. Let's remove poc* from the patch\nname.\n\nReview poc_add_running_catchanges_xacts_to_serialized_snapshot\n=====================================================\n1.\n+ /*\n+ * Array of transactions that were running when the snapshot serialization\n+ * and changed system catalogs,\n\nThe part of the sentence after serialization is not very clear.\n\n2.\n- if (ReorderBufferXidHasCatalogChanges(builder->reorder, subxid))\n+ if (ReorderBufferXidHasCatalogChanges(builder->reorder, subxid) ||\n+ bsearch(&xid, builder->catchanges.xip, builder->catchanges.xcnt,\n+ sizeof(TransactionId), xidComparator) != NULL)\n\nWhy are you using xid instead of subxid in bsearch call? Can we add a\ncomment to say why it is okay to use xid if there is a valid reason?\nBut note, we are using subxid to add to the committed xact array so\nnot sure if this is a good idea but I might be missing something.\n\nSuggestions for improvement in comments:\n- /*\n- * Update the transactions that are running and changes\ncatalogs that are\n- * not committed.\n- */\n+ /* Update the catalog modifying transactions that are yet not\ncommitted. 
*/\n if (builder->catchanges.xip)\n pfree(builder->catchanges.xip);\n builder->catchanges.xip =\nReorderBufferGetCatalogChangesXacts(builder->reorder,\n@@ -1647,7 +1644,7 @@ SnapBuildSerialize(SnapBuild *builder, XLogRecPtr lsn)\n COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n ondisk_c += sz;\n\n- /* copy catalog-changes xacts */\n+ /* copy catalog modifying xacts */\n sz = sizeof(TransactionId) * builder->catchanges.xcnt;\n memcpy(ondisk_c, builder->catchanges.xip, sz);\n COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 4 Jul 2022 18:12:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 4, 2022 at 6:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached three POC patches:\n> >\n>\n> I think it will be a good idea if you can add a short commit message\n> at least to say which patch is proposed for HEAD and which one is for\n> back branches. Also, it would be good if you can add some description\n> of the fix in the commit message. Let's remove poc* from the patch\n> name.\n>\n> Review poc_add_running_catchanges_xacts_to_serialized_snapshot\n> =====================================================\n\nFew more comments:\n1.\n+\n+ /* This array must be sorted in xidComparator order */\n+ TransactionId *xip;\n+ } catchanges;\n };\n\nThis array contains the transaction ids for subtransactions as well. I\nthink it is better to mention that in the comments.\n\n2. Are we anytime removing transaction ids from catchanges->xip array?\nIf not, is there a reason for the same? 
I think we can remove it\neither at commit/abort or even immediately after adding the xid/subxid\nto committed->xip array.\n\n3.\n+ if (readBytes != sz)\n+ {\n+ int save_errno = errno;\n+\n+ CloseTransientFile(fd);\n+\n+ if (readBytes < 0)\n+ {\n+ errno = save_errno;\n+ ereport(ERROR,\n+ (errcode_for_file_access(),\n+ errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n+ }\n+ else\n+ ereport(ERROR,\n+ (errcode(ERRCODE_DATA_CORRUPTED),\n+ errmsg(\"could not read file \\\"%s\\\": read %d of %zu\",\n+ path, readBytes, sz)));\n+ }\n\nThis is the fourth instance of similar error handling code in\nSnapBuildRestore(). Isn't it better to extract this into a separate\nfunction?\n\n4.\n+TransactionId *\n+ReorderBufferGetCatalogChangesXacts(ReorderBuffer *rb, size_t *xcnt_p)\n+{\n+ HASH_SEQ_STATUS hash_seq;\n+ ReorderBufferTXNByIdEnt *ent;\n+ TransactionId *xids;\n+ size_t xcnt = 0;\n+ size_t xcnt_space = 64; /* arbitrary number */\n+\n+ xids = (TransactionId *) palloc(sizeof(TransactionId) * xcnt_space);\n+\n+ hash_seq_init(&hash_seq, rb->by_txn);\n+ while ((ent = hash_seq_search(&hash_seq)) != NULL)\n+ {\n+ ReorderBufferTXN *txn = ent->txn;\n+\n+ if (!rbtxn_has_catalog_changes(txn))\n+ continue;\n\nIt would be better to allocate memory the first time we have to store\nxids. There is a good chance that many a time this function will do\njust palloc without having to store any xid.\n\n5. Do you think we should do some performance testing for a mix of\nddl/dml workload to see if it adds any overhead in decoding due to\nserialize/restore doing additional work? 
I don't think it should add\nsome meaningful overhead but OTOH there is no harm in doing some\ntesting of the same.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 5 Jul 2022 16:30:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 4, 2022 at 9:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached three POC patches:\n> >\n>\n> I think it will be a good idea if you can add a short commit message\n> at least to say which patch is proposed for HEAD and which one is for\n> back branches. Also, it would be good if you can add some description\n> of the fix in the commit message. Let's remove poc* from the patch\n> name.\n\nUpdated.\n\n>\n> Review poc_add_running_catchanges_xacts_to_serialized_snapshot\n> =====================================================\n> 1.\n> + /*\n> + * Array of transactions that were running when the snapshot serialization\n> + * and changed system catalogs,\n>\n> The part of the sentence after serialization is not very clear.\n\nUpdated.\n\n>\n> 2.\n> - if (ReorderBufferXidHasCatalogChanges(builder->reorder, subxid))\n> + if (ReorderBufferXidHasCatalogChanges(builder->reorder, subxid) ||\n> + bsearch(&xid, builder->catchanges.xip, builder->catchanges.xcnt,\n> + sizeof(TransactionId), xidComparator) != NULL)\n>\n> Why are you using xid instead of subxid in bsearch call? 
Can we add a\n> comment to say why it is okay to use xid if there is a valid reason?\n> But note, we are using subxid to add to the committed xact array so\n> not sure if this is a good idea but I might be missing something.\n\nYou're right, subxid should be used here.\n\n>\n> Suggestions for improvement in comments:\n> - /*\n> - * Update the transactions that are running and changes\n> catalogs that are\n> - * not committed.\n> - */\n> + /* Update the catalog modifying transactions that are yet not\n> committed. */\n> if (builder->catchanges.xip)\n> pfree(builder->catchanges.xip);\n> builder->catchanges.xip =\n> ReorderBufferGetCatalogChangesXacts(builder->reorder,\n> @@ -1647,7 +1644,7 @@ SnapBuildSerialize(SnapBuild *builder, XLogRecPtr lsn)\n> COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n> ondisk_c += sz;\n>\n> - /* copy catalog-changes xacts */\n> + /* copy catalog modifying xacts */\n> sz = sizeof(TransactionId) * builder->catchanges.xcnt;\n> memcpy(ondisk_c, builder->catchanges.xip, sz);\n> COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n\nUpdated.\n\nI'll post a new version patch in the next email with replying to other comments.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:07:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 6, 2022 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I'll post a new version patch in the next email with replying to other comments.\n>\n\nOkay, thanks for working on this. 
Few comments/suggestions on\npoc_remember_last_running_xacts_v2 patch:\n\n1.\n+ReorderBufferSetLastRunningXactsCatalogChanges(ReorderBuffer *rb,\nTransactionId xid,\n+ uint32 xinfo, int subxcnt,\n+ TransactionId *subxacts, XLogRecPtr lsn)\n+{\n...\n...\n+\n+ test = bsearch(&xid, rb->last_running_xacts, rb->n_last_running_xacts,\n+ sizeof(TransactionId), xidComparator);\n+\n+ if (test == NULL)\n+ {\n+ for (int i = 0; i < subxcnt; i++)\n+ {\n+ test = bsearch(&subxacts[i], rb->last_running_xacts, rb->n_last_running_xacts,\n+ sizeof(TransactionId), xidComparator);\n...\n\nIs there ever a possibility that the top transaction id is not in the\nrunning_xacts list but one of its subxids is present? If yes, it is\nnot very obvious at least to me so adding a comment here could be\nuseful. If not, then why do we need this additional check for each of\nthe sub-transaction ids?\n\n2.\n@@ -627,6 +647,15 @@ DecodeCommit(LogicalDecodingContext *ctx,\nXLogRecordBuffer *buf,\n commit_time = parsed->origin_timestamp;\n }\n\n+ /*\n+ * Set the last running xacts as containing catalog change if necessary.\n+ * This must be done before SnapBuildCommitTxn() so that we include catalog\n+ * change transactions to the historic snapshot.\n+ */\n+ ReorderBufferSetLastRunningXactsCatalogChanges(ctx->reorder, xid,\nparsed->xinfo,\n+ parsed->nsubxacts, parsed->subxacts,\n+ buf->origptr);\n+\n SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,\n parsed->nsubxacts, parsed->subxacts);\n\nAs mentioned previously as well, marking it before SnapBuildCommitTxn\nhas one disadvantage, we sometimes do this work even if the snapshot\nstate is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT in which case\nSnapBuildCommitTxn wouldn't do anything. Can we instead check whether\nthe particular txn has invalidations and is present in the\nlast_running_xacts list along with the check\nReorderBufferXidHasCatalogChanges? 
I think that has the additional\nadvantage that we don't need this additional marking if the xact is\nalready marked as containing catalog changes.\n\n3.\n1.\n+ /*\n+ * We rely on HEAP2_NEW_CID records and XACT_INVALIDATIONS to know\n+ * if the transaction has changed the catalog, and that information\n+ * is not serialized to SnapBuilder. Therefore, if the logical\n+ * decoding decodes the commit record of the transaction that actually\n+ * has done catalog changes without these records, we miss to add\n+ * the xid to the snapshot so up creating the wrong snapshot.\n\nThe part of the sentence \"... snapshot so up creating the wrong\nsnapshot.\" is not clear. In this comment, at one place you have used\ntwo spaces after a full stop, and at another place, there is one\nspace. I think let's follow nearby code practice to use a single space\nbefore a new sentence.\n\n4.\n+void\n+ReorderBufferProcessLastRunningXacts(ReorderBuffer *rb,\nxl_running_xacts *running)\n+{\n+ /* Quick exit if there is no longer last running xacts */\n+ if (likely(rb->n_last_running_xacts == 0))\n+ return;\n+\n+ /* First call, build the last running xact list */\n+ if (rb->n_last_running_xacts == -1)\n+ {\n+ int nxacts = running->subxcnt + running->xcnt;\n+ Size sz = sizeof(TransactionId) * nxacts;;\n+\n+ rb->last_running_xacts = MemoryContextAlloc(rb->context, sz);\n+ memcpy(rb->last_running_xacts, running->xids, sz);\n+ qsort(rb->last_running_xacts, nxacts, sizeof(TransactionId), xidComparator);\n+\n+ rb->n_last_running_xacts = nxacts;\n+\n+ return;\n+ }\n\na. Can we add the function header comments for this function?\nb. We seem to be tracking the running_xact information for the first\nrunning_xact record after start/restart. 
The name last_running_xacts\ndoesn't sound appropriate for that, how about initial_running_xacts?\n\n5.\n+ /*\n+ * Purge xids in the last running xacts list if we can do that for at least\n+ * one xid.\n+ */\n+ if (NormalTransactionIdPrecedes(rb->last_running_xacts[0],\n+ running->oldestRunningXid))\n\nI think it would be a good idea to add a few lines here explaining why\nit is safe to purge. IIUC, it is because the commit for those xacts\nwould have already been processed and we don't need such a xid\nanymore.\n\n6. As per the discussion above in this thread having\nXACT_XINFO_HAS_INVALS in the commit record doesn't indicate that the\nxact has catalog changes, so can we add somewhere in comments that for\nsuch a case we can't distinguish whether the txn has catalog change\nbut we still mark the txn has catalog changes? Can you please share\none example for this case?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Jul 2022 11:30:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 5, 2022 at 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 4, 2022 at 6:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 30, 2022 at 11:13 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached three POC patches:\n> > >\n> >\n> > I think it will be a good idea if you can add a short commit message\n> > at least to say which patch is proposed for HEAD and which one is for\n> > back branches. Also, it would be good if you can add some description\n> > of the fix in the commit message. 
Let's remove poc* from the patch\n> > name.\n> >\n> > Review poc_add_running_catchanges_xacts_to_serialized_snapshot\n> > =====================================================\n>\n> Few more comments:\n\nThank you for the comments.\n\n> 1.\n> +\n> + /* This array must be sorted in xidComparator order */\n> + TransactionId *xip;\n> + } catchanges;\n> };\n>\n> This array contains the transaction ids for subtransactions as well. I\n> think it is better mention the same in comments.\n\nUpdated.\n\n>\n> 2. Are we anytime removing transaction ids from catchanges->xip array?\n\nNo.\n\n> If not, is there a reason for the same? I think we can remove it\n> either at commit/abort or even immediately after adding the xid/subxid\n> to committed->xip array.\n\nIt might be a good idea but I'm concerned that removing XID from the\narray at every commit/abort or after adding it to committed->xip array\nmight be costly as it requires adjustment of the array to keep its\norder. Removing XIDs from the array would make bsearch faster but the\narray is updated reasonably often (every 15 sec).\n\n>\n> 3.\n> + if (readBytes != sz)\n> + {\n> + int save_errno = errno;\n> +\n> + CloseTransientFile(fd);\n> +\n> + if (readBytes < 0)\n> + {\n> + errno = save_errno;\n> + ereport(ERROR,\n> + (errcode_for_file_access(),\n> + errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n> + }\n> + else\n> + ereport(ERROR,\n> + (errcode(ERRCODE_DATA_CORRUPTED),\n> + errmsg(\"could not read file \\\"%s\\\": read %d of %zu\",\n> + path, readBytes, sz)));\n> + }\n>\n> This is the fourth instance of similar error handling code in\n> SnapBuildRestore(). 
Isn't it better to extract this into a separate\n> function?\n\nGood idea, updated.\n\n>\n> 4.\n> +TransactionId *\n> +ReorderBufferGetCatalogChangesXacts(ReorderBuffer *rb, size_t *xcnt_p)\n> +{\n> + HASH_SEQ_STATUS hash_seq;\n> + ReorderBufferTXNByIdEnt *ent;\n> + TransactionId *xids;\n> + size_t xcnt = 0;\n> + size_t xcnt_space = 64; /* arbitrary number */\n> +\n> + xids = (TransactionId *) palloc(sizeof(TransactionId) * xcnt_space);\n> +\n> + hash_seq_init(&hash_seq, rb->by_txn);\n> + while ((ent = hash_seq_search(&hash_seq)) != NULL)\n> + {\n> + ReorderBufferTXN *txn = ent->txn;\n> +\n> + if (!rbtxn_has_catalog_changes(txn))\n> + continue;\n>\n> It would be better to allocate memory the first time we have to store\n> xids. There is a good chance that many a time this function will do\n> just palloc without having to store any xid.\n\nAgreed.\n\n>\n> 5. Do you think we should do some performance testing for a mix of\n> ddl/dml workload to see if it adds any overhead in decoding due to\n> serialize/restore doing additional work? I don't think it should add\n> some meaningful overhead but OTOH there is no harm in doing some\n> testing of the same.\n\nYes, it would be worth trying. I also believe this change doesn't\nintroduce noticeable overhead but let's check just in case.\n\nI've attached an updated patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 6 Jul 2022 15:48:40 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 6, 2022 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 5, 2022 at 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 2. 
Are we anytime removing transaction ids from catchanges->xip array?\n>\n> No.\n>\n> > If not, is there a reason for the same? I think we can remove it\n> > either at commit/abort or even immediately after adding the xid/subxid\n> > to committed->xip array.\n>\n> It might be a good idea but I'm concerned that removing XID from the\n> array at every commit/abort or after adding it to committed->xip array\n> might be costly as it requires adjustment of the array to keep its\n> order. Removing XIDs from the array would make bsearch faster but the\n> array is updated reasonably often (every 15 sec).\n>\n\nFair point. However, I am slightly worried that we are unnecessarily\nsearching in this new array even when ReorderBufferTxn has the\nrequired information. To avoid that, in function\nSnapBuildXidHasCatalogChange(), we can first check\nReorderBufferXidHasCatalogChanges() and then check the array if the\nfirst check doesn't return true. Also, by the way, do we need to\nalways keep builder->catchanges.xip updated via SnapBuildRestore()?\nIsn't it sufficient that we just read and throw away contents from a\nsnapshot if builder->catchanges.xip is non-NULL?\n\nI had additionally thought if we can further optimize this solution to\njust store this additional information when we need to serialize for the\ncheckpoint record but I think that won't work because the walsender can\nrestart even without a restart of the server, in which case the same problem\ncan occur.
I am not sure if there is a way to further optimize this\nsolution, let me know if you have any ideas?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 6 Jul 2022 14:25:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 6, 2022 at 5:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 6, 2022 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 5, 2022 at 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > 2. Are we anytime removing transaction ids from catchanges->xip array?\n> >\n> > No.\n> >\n> > > If not, is there a reason for the same? I think we can remove it\n> > > either at commit/abort or even immediately after adding the xid/subxid\n> > > to committed->xip array.\n> >\n> > It might be a good idea but I'm concerned that removing XID from the\n> > array at every commit/abort or after adding it to committed->xip array\n> > might be costly as it requires adjustment of the array to keep its\n> > order. Removing XIDs from the array would make bsearch faster but the\n> > array is updated reasonably often (every 15 sec).\n> >\n>\n> Fair point. However, I am slightly worried that we are unnecessarily\n> searching in this new array even when ReorderBufferTxn has the\n> required information. To avoid that, in function\n> SnapBuildXidHasCatalogChange(), we can first check\n> ReorderBufferXidHasCatalogChanges() and then check the array if the\n> first check doesn't return true.
Also, by the way, do we need to\n> always keep builder->catchanges.xip updated via SnapBuildRestore()?\n> Isn't it sufficient that we just read and throw away contents from a\n> snapshot if builder->catchanges.xip is non-NULL?\n\nIIUC catchanges.xip is restored only once when restoring a consistent\nsnapshot via SnapBuildRestore(). I think it's necessary to set\ncatchanges.xip for later use in SnapBuildXidHasCatalogChange(). Or did\nyou mean via SnapBuildSerialize()?\n\n>\n> I had additionally thought if can further optimize this solution to\n> just store this additional information when we need to serialize for\n> checkpoint record but I think that won't work because walsender can\n> restart even without resatart of server in which case the same problem\n> can occur.\n\nYes, probably we need to write catalog modifying transactions for\nevery serialized snapshot.\n\n> I am not if sure there is a way to further optimize this\n> solution, let me know if you have any ideas?\n\nI suppose that writing additional information to serialized snapshots\nwould not be a noticeable overhead since we need 4 bytes per\ntransaction and we would not expect there is a huge number of\nconcurrent catalog modifying transactions. But both collecting catalog\nmodifying transactions (especially when there are many ongoing\ntransactions) and bsearch'ing on the XID list every time we decode a\nCOMMIT record could bring overhead.\n\nA solution for the first point would be to keep track of catalog\nmodifying transactions by using a linked list so that we can avoid\nchecking all ongoing transactions.\n\nRegarding the second point, on reflection, I think we need to look up\nthe XID list until all XIDs in the list are committed/aborted. We can\nremove XIDs from the list after adding it to committed.xip as you\nsuggested.
Or when decoding a RUNNING_XACTS record, we can remove XIDs\nolder than builder->xmin from the list like we do for committed.xip in\nSnapBuildPurgeCommittedTxn().\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 7 Jul 2022 11:50:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 7, 2022 at 8:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 6, 2022 at 5:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 6, 2022 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 5, 2022 at 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > 2. Are we anytime removing transaction ids from catchanges->xip array?\n> > >\n> > > No.\n> > >\n> > > > If not, is there a reason for the same? I think we can remove it\n> > > > either at commit/abort or even immediately after adding the xid/subxid\n> > > > to committed->xip array.\n> > >\n> > > It might be a good idea but I'm concerned that removing XID from the\n> > > array at every commit/abort or after adding it to committed->xip array\n> > > might be costly as it requires adjustment of the array to keep its\n> > > order. Removing XIDs from the array would make bsearch faster but the\n> > > array is updated reasonably often (every 15 sec).\n> > >\n> >\n> > Fair point. However, I am slightly worried that we are unnecessarily\n> > searching in this new array even when ReorderBufferTxn has the\n> > required information. To avoid that, in function\n> > SnapBuildXidHasCatalogChange(), we can first check\n> > ReorderBufferXidHasCatalogChanges() and then check the array if the\n> > first check doesn't return true. 
Also, by the way, do we need to\n> > always keep builder->catchanges.xip updated via SnapBuildRestore()?\n> > Isn't it sufficient that we just read and throw away contents from a\n> > snapshot if builder->catchanges.xip is non-NULL?\n>\n> IIUC catchanges.xip is restored only once when restoring a consistent\n> snapshot via SnapBuildRestore(). I think it's necessary to set\n> catchanges.xip for later use in SnapBuildXidHasCatalogChange(). Or did\n> you mean via SnapBuildSerialize()?∫\n>\n\nSorry, I got confused about the way restore is used. You are right, it\nwill be done once. My main worry is that we shouldn't look at\nbuilder->catchanges.xip array on an ongoing basis which I think can be\ndealt with by one of the ideas you mentioned below. But, I think we\ncan still follow the other suggestion related to moving\nReorderBufferXidHasCatalogChanges() check prior to checking array.\n\n> >\n> > I had additionally thought if can further optimize this solution to\n> > just store this additional information when we need to serialize for\n> > checkpoint record but I think that won't work because walsender can\n> > restart even without resatart of server in which case the same problem\n> > can occur.\n>\n> Yes, probably we need to write catalog modifying transactions for\n> every serialized snapshot.\n>\n> > I am not if sure there is a way to further optimize this\n> > solution, let me know if you have any ideas?\n>\n> I suppose that writing additional information to serialized snapshots\n> would not be a noticeable overhead since we need 4 bytes per\n> transaction and we would not expect there is a huge number of\n> concurrent catalog modifying transactions. 
But both collecting catalog\n> modifying transactions (especially when there are many ongoing\n> transactions) and bsearch'ing on the XID list every time decoding the\n> COMMIT record could bring overhead.\n>\n> A solution for the first point would be to keep track of catalog\n> modifying transactions by using a linked list so that we can avoid\n> checking all ongoing transactions.\n>\n\nThis sounds reasonable to me.\n\n> Regarding the second point, on reflection, I think we need to look up\n> the XID list until all XID in the list is committed/aborted. We can\n> remove XIDs from the list after adding it to committed.xip as you\n> suggested. Or when decoding a RUNNING_XACTS record, we can remove XIDs\n> older than builder->xmin from the list like we do for committed.xip in\n> SnapBuildPurgeCommittedTxn().\n>\n\nI think doing along with RUNNING_XACTS should be fine. At each\ncommit/abort, the cost could be high because we need to maintain the\nsort order. In general, I feel any one of these should be okay because\nonce the array becomes empty, it won't be used again and there won't\nbe any operation related to it during ongoing replication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 7 Jul 2022 12:10:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 7, 2022 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 7, 2022 at 8:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 6, 2022 at 5:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 6, 2022 at 12:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jul 5, 2022 at 8:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > 2. 
Are we anytime removing transaction ids from catchanges->xip array?\n> > > >\n> > > > No.\n> > > >\n> > > > > If not, is there a reason for the same? I think we can remove it\n> > > > > either at commit/abort or even immediately after adding the xid/subxid\n> > > > > to committed->xip array.\n> > > >\n> > > > It might be a good idea but I'm concerned that removing XID from the\n> > > > array at every commit/abort or after adding it to committed->xip array\n> > > > might be costly as it requires adjustment of the array to keep its\n> > > > order. Removing XIDs from the array would make bsearch faster but the\n> > > > array is updated reasonably often (every 15 sec).\n> > > >\n> > >\n> > > Fair point. However, I am slightly worried that we are unnecessarily\n> > > searching in this new array even when ReorderBufferTxn has the\n> > > required information. To avoid that, in function\n> > > SnapBuildXidHasCatalogChange(), we can first check\n> > > ReorderBufferXidHasCatalogChanges() and then check the array if the\n> > > first check doesn't return true. Also, by the way, do we need to\n> > > always keep builder->catchanges.xip updated via SnapBuildRestore()?\n> > > Isn't it sufficient that we just read and throw away contents from a\n> > > snapshot if builder->catchanges.xip is non-NULL?\n> >\n> > IIUC catchanges.xip is restored only once when restoring a consistent\n> > snapshot via SnapBuildRestore(). I think it's necessary to set\n> > catchanges.xip for later use in SnapBuildXidHasCatalogChange(). Or did\n> > you mean via SnapBuildSerialize()?∫\n> >\n>\n> Sorry, I got confused about the way restore is used. You are right, it\n> will be done once. My main worry is that we shouldn't look at\n> builder->catchanges.xip array on an ongoing basis which I think can be\n> dealt with by one of the ideas you mentioned below. 
But, I think we\n> can still follow the other suggestion related to moving\n> ReorderBufferXidHasCatalogChanges() check prior to checking array.\n\nAgreed. I've incorporated this change in the new version patch.\n\n>\n> > >\n> > > I had additionally thought if can further optimize this solution to\n> > > just store this additional information when we need to serialize for\n> > > checkpoint record but I think that won't work because walsender can\n> > > restart even without resatart of server in which case the same problem\n> > > can occur.\n> >\n> > Yes, probably we need to write catalog modifying transactions for\n> > every serialized snapshot.\n> >\n> > > I am not if sure there is a way to further optimize this\n> > > solution, let me know if you have any ideas?\n> >\n> > I suppose that writing additional information to serialized snapshots\n> > would not be a noticeable overhead since we need 4 bytes per\n> > transaction and we would not expect there is a huge number of\n> > concurrent catalog modifying transactions. But both collecting catalog\n> > modifying transactions (especially when there are many ongoing\n> > transactions) and bsearch'ing on the XID list every time decoding the\n> > COMMIT record could bring overhead.\n> >\n> > A solution for the first point would be to keep track of catalog\n> > modifying transactions by using a linked list so that we can avoid\n> > checking all ongoing transactions.\n> >\n>\n> This sounds reasonable to me.\n>\n> > Regarding the second point, on reflection, I think we need to look up\n> > the XID list until all XID in the list is committed/aborted. We can\n> > remove XIDs from the list after adding it to committed.xip as you\n> > suggested. Or when decoding a RUNNING_XACTS record, we can remove XIDs\n> > older than builder->xmin from the list like we do for committed.xip in\n> > SnapBuildPurgeCommittedTxn().\n> >\n>\n> I think doing along with RUNNING_XACTS should be fine. 
At each\n> commit/abort, the cost could be high because we need to maintain the\n> sort order. In general, I feel any one of these should be okay because\n> once the array becomes empty, it won't be used again and there won't\n> be any operation related to it during ongoing replication.\n\nI've attached the new version patch that incorporates the comments and\nthe optimizations discussed above.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 8 Jul 2022 10:14:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 8, 2022 at 6:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 7, 2022 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 7, 2022 at 8:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the new version patch that incorporates the comments and\n> the optimizations discussed above.\n>\n\nThanks, few minor comments:\n1.\nIn ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\nlist length of 'catchange_txns' to allocate xids array? If we can do\nso, then we will save the need to repalloc as well.\n\n2.\n/* ->committed manipulation */\nstatic void SnapBuildPurgeCommittedTxn(SnapBuild *builder);\n\nThe above comment also needs to be changed.\n\n3. 
As SnapBuildPurgeCommittedTxn() removes xacts both from committed\nand catchange arrays, the function name no more remains appropriate.\nWe can either rename to something like SnapBuildPurgeOlderTxn() or\nmove the catchange logic to a different function and call it from\nSnapBuildProcessRunningXacts.\n\n4.\n+ if (TransactionIdEquals(builder->catchange.xip[off],\n+ builder->xmin) ||\n+ NormalTransactionIdFollows(builder->catchange.xip[off],\n+ builder->xmin))\n\nCan we use TransactionIdFollowsOrEquals() instead of above?\n\n5. Comment change suggestion:\n/*\n * Remove knowledge about transactions we treat as committed or\ncontaining catalog\n * changes that are smaller than ->xmin. Those won't ever get checked via\n- * the ->committed array and ->catchange, respectively. The committed xids will\n- * get checked via the clog machinery.\n+ * the ->committed or ->catchange array, respectively. The committed xids will\n+ * get checked via the clog machinery. We can ideally remove the transaction\n+ * from catchange array once it is finished (committed/aborted) but that could\n+ * be costly as we need to maintain the xids order in the array.\n */\n\nApart from the above, I think there are pending comments for the\nback-branch patch and some performance testing of this work.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 8 Jul 2022 11:57:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 8, 2022 at 6:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 7, 2022 at 3:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 7, 2022 at 8:21 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the 
new version patch that incorporates the comments and\n> > the optimizations discussed above.\n> >\n>\n> Thanks, few minor comments:\n\nThank you for the comments.\n\n> 1.\n> In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> list length of 'catchange_txns' to allocate xids array? If we can do\n> so, then we will save the need to repalloc as well.\n\nSince ReorderBufferGetcatalogChangesXacts() collects all ongoing\ncatalog modifying transactions, the length of the array could be\nbigger than the one taken last time. We can start with the previous\nlength but I think we cannot remove the need for repalloc.\n\n> 2.\n> /* ->committed manipulation */\n> static void SnapBuildPurgeCommittedTxn(SnapBuild *builder);\n>\n> The above comment also needs to be changed.\n>\n> 3. As SnapBuildPurgeCommittedTxn() removes xacts both from committed\n> and catchange arrays, the function name no more remains appropriate.\n> We can either rename to something like SnapBuildPurgeOlderTxn() or\n> move the catchange logic to a different function and call it from\n> SnapBuildProcessRunningXacts.\n>\n> 4.\n> + if (TransactionIdEquals(builder->catchange.xip[off],\n> + builder->xmin) ||\n> + NormalTransactionIdFollows(builder->catchange.xip[off],\n> + builder->xmin))\n>\n> Can we use TransactionIdFollowsOrEquals() instead of above?\n>\n> 5. Comment change suggestion:\n> /*\n> * Remove knowledge about transactions we treat as committed or\n> containing catalog\n> * changes that are smaller than ->xmin. Those won't ever get checked via\n> - * the ->committed array and ->catchange, respectively. The committed xids will\n> - * get checked via the clog machinery.\n> + * the ->committed or ->catchange array, respectively. The committed xids will\n> + * get checked via the clog machinery. 
We can ideally remove the transaction\n> + * from catchange array once it is finished (committed/aborted) but that could\n> + * be costly as we need to maintain the xids order in the array.\n> */\n>\n\nAgreed with the above comments.\n\n> Apart from the above, I think there are pending comments for the\n> back-branch patch and some performance testing of this work.\n\nRight. I'll incorporate all comments I got so far into these patches\nand submit them. Also, will do some benchmark tests.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 8 Jul 2022 16:15:28 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 8, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > 1.\n> > In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> > list length of 'catchange_txns' to allocate xids array? If we can do\n> > so, then we will save the need to repalloc as well.\n>\n> Since ReorderBufferGetcatalogChangesXacts() collects all ongoing\n> catalog modifying transactions, the length of the array could be\n> bigger than the one taken last time. We can start with the previous\n> length but I think we cannot remove the need for repalloc.\n>\n\nIt is using the list \"catchange_txns\" to form xid array which\nshouldn't change for the duration of\nReorderBufferGetCatalogChangesXacts(). Then the caller frees the xid\narray after its use. 
Next time in\nReorderBufferGetCatalogChangesXacts(), the fresh allocation for xid\narray happens, so not sure why repalloc would be required?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 8 Jul 2022 14:29:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 8, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 8, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > 1.\n> > > In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> > > list length of 'catchange_txns' to allocate xids array? If we can do\n> > > so, then we will save the need to repalloc as well.\n> >\n> > Since ReorderBufferGetcatalogChangesXacts() collects all ongoing\n> > catalog modifying transactions, the length of the array could be\n> > bigger than the one taken last time. We can start with the previous\n> > length but I think we cannot remove the need for repalloc.\n> >\n>\n> It is using the list \"catchange_txns\" to form xid array which\n> shouldn't change for the duration of\n> ReorderBufferGetCatalogChangesXacts(). Then the caller frees the xid\n> array after its use. Next time in\n> ReorderBufferGetCatalogChangesXacts(), the fresh allocation for xid\n> array happens, so not sure why repalloc would be required?\n\nOops, I mistook catchange_txns for catchange->xcnt. 
You're right.\nStarting with the length of catchange_txns should be sufficient.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 8 Jul 2022 20:20:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 6, 2022 at 3:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 6, 2022 at 7:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I'll post a new version patch in the next email with replying to other comments.\n> >\n>\n> Okay, thanks for working on this. Few comments/suggestions on\n> poc_remember_last_running_xacts_v2 patch:\n>\n> 1.\n> +ReorderBufferSetLastRunningXactsCatalogChanges(ReorderBuffer *rb,\n> TransactionId xid,\n> + uint32 xinfo, int subxcnt,\n> + TransactionId *subxacts, XLogRecPtr lsn)\n> +{\n> ...\n> ...\n> +\n> + test = bsearch(&xid, rb->last_running_xacts, rb->n_last_running_xacts,\n> + sizeof(TransactionId), xidComparator);\n> +\n> + if (test == NULL)\n> + {\n> + for (int i = 0; i < subxcnt; i++)\n> + {\n> + test = bsearch(&subxacts[i], rb->last_running_xacts, rb->n_last_running_xacts,\n> + sizeof(TransactionId), xidComparator);\n> ...\n>\n> Is there ever a possibility that the top transaction id is not in the\n> running_xacts list but one of its subxids is present? If yes, it is\n> not very obvious at least to me so adding a comment here could be\n> useful. If not, then why do we need this additional check for each of\n> the sub-transaction ids?\n\nI think there is no possibility. 
The check for subtransactions is not necessary.\n\n>\n> 2.\n> @@ -627,6 +647,15 @@ DecodeCommit(LogicalDecodingContext *ctx,\n> XLogRecordBuffer *buf,\n> commit_time = parsed->origin_timestamp;\n> }\n>\n> + /*\n> + * Set the last running xacts as containing catalog change if necessary.\n> + * This must be done before SnapBuildCommitTxn() so that we include catalog\n> + * change transactions to the historic snapshot.\n> + */\n> + ReorderBufferSetLastRunningXactsCatalogChanges(ctx->reorder, xid,\n> parsed->xinfo,\n> + parsed->nsubxacts, parsed->subxacts,\n> + buf->origptr);\n> +\n> SnapBuildCommitTxn(ctx->snapshot_builder, buf->origptr, xid,\n> parsed->nsubxacts, parsed->subxacts);\n>\n> As mentioned previously as well, marking it before SnapBuildCommitTxn\n> has one disadvantage, we sometimes do this work even if the snapshot\n> state is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT in which case\n> SnapBuildCommitTxn wouldn't do anything. Can we instead check whether\n> the particular txn has invalidations and is present in the\n> last_running_xacts list along with the check\n> ReorderBufferXidHasCatalogChanges? I think that has the additional\n> advantage that we don't need this additional marking if the xact is\n> already marked as containing catalog changes.\n\nAgreed.\n\n>\n> 3.\n> 1.\n> + /*\n> + * We rely on HEAP2_NEW_CID records and XACT_INVALIDATIONS to know\n> + * if the transaction has changed the catalog, and that information\n> + * is not serialized to SnapBuilder. Therefore, if the logical\n> + * decoding decodes the commit record of the transaction that actually\n> + * has done catalog changes without these records, we miss to add\n> + * the xid to the snapshot so up creating the wrong snapshot.\n>\n> The part of the sentence \"... snapshot so up creating the wrong\n> snapshot.\" is not clear. In this comment, at one place you have used\n> two spaces after a full stop, and at another place, there is one\n> space. 
I think let's follow nearby code practice to use a single space\n> before a new sentence.\n\nAgreed.\n\n>\n> 4.\n> +void\n> +ReorderBufferProcessLastRunningXacts(ReorderBuffer *rb,\n> xl_running_xacts *running)\n> +{\n> + /* Quick exit if there is no longer last running xacts */\n> + if (likely(rb->n_last_running_xacts == 0))\n> + return;\n> +\n> + /* First call, build the last running xact list */\n> + if (rb->n_last_running_xacts == -1)\n> + {\n> + int nxacts = running->subxcnt + running->xcnt;\n> + Size sz = sizeof(TransactionId) * nxacts;;\n> +\n> + rb->last_running_xacts = MemoryContextAlloc(rb->context, sz);\n> + memcpy(rb->last_running_xacts, running->xids, sz);\n> + qsort(rb->last_running_xacts, nxacts, sizeof(TransactionId), xidComparator);\n> +\n> + rb->n_last_running_xacts = nxacts;\n> +\n> + return;\n> + }\n>\n> a. Can we add the function header comments for this function?\n\nUpdated.\n\n> b. We seem to be tracking the running_xact information for the first\n> running_xact record after start/restart. The name last_running_xacts\n> doesn't sound appropriate for that, how about initial_running_xacts?\n\nSound good, updated.\n\n>\n> 5.\n> + /*\n> + * Purge xids in the last running xacts list if we can do that for at least\n> + * one xid.\n> + */\n> + if (NormalTransactionIdPrecedes(rb->last_running_xacts[0],\n> + running->oldestRunningXid))\n>\n> I think it would be a good idea to add a few lines here explaining why\n> it is safe to purge. IIUC, it is because the commit for those xacts\n> would have already been processed and we don't need such a xid\n> anymore.\n\nRight, updated.\n\n>\n> 6. 
As per the discussion above in this thread having\n> XACT_XINFO_HAS_INVALS in the commit record doesn't indicate that the\n> xact has catalog changes, so can we add somewhere in comments that for\n> such a case we can't distinguish whether the txn has catalog change\n> but we still mark the txn has catalog changes?\n\nAgreed.\n\n> Can you please share one example for this case?\n\nI think it depends on what we did in the transaction but one example I\nhave is that a commit record for ALTER DATABASE has only a snapshot\ninvalidation message:\n\n=# alter database postgrse set log_statement to 'all';\nALTER DATABASE\n\n$ pg_waldump $PGDATA/pg_wal/000000010000000000000001 | tail -1\nrmgr: Transaction len (rec/tot): 66/ 66, tx: 821, lsn:\n0/019B50A8, prev 0/019B5070, desc: COMMIT 2022-07-11 21:38:44.036513\nJST; inval msgs: snapshot 2964\n\nI've attached an updated patch, please review it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 11 Jul 2022 22:54:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 8, 2022 at 8:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 8, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 8, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > > 1.\n> > > > In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> > > > list length of 'catchange_txns' to allocate xids array? 
If we can do\n> > > > so, then we will save the need to repalloc as well.\n> > >\n> > > Since ReorderBufferGetcatalogChangesXacts() collects all ongoing\n> > > catalog modifying transactions, the length of the array could be\n> > > bigger than the one taken last time. We can start with the previous\n> > > length but I think we cannot remove the need for repalloc.\n> > >\n> >\n> > It is using the list \"catchange_txns\" to form xid array which\n> > shouldn't change for the duration of\n> > ReorderBufferGetCatalogChangesXacts(). Then the caller frees the xid\n> > array after its use. Next time in\n> > ReorderBufferGetCatalogChangesXacts(), the fresh allocation for xid\n> > array happens, so not sure why repalloc would be required?\n>\n> Oops, I mistook catchange_txns for catchange->xcnt. You're right.\n> Starting with the length of catchange_txns should be sufficient.\n>\n\nI've attached an updated patch.\n\nWhile trying this idea, I noticed there is no API to get the length of\na dlist, as we discussed offlist. An alternative idea was to use a List\n(T_XidList), but I'm not sure it's a great idea since deleting an xid\nfrom the list is O(N), we would need to implement list_delete_xid, and\nwe would need to make sure to allocate the list nodes in the reorder\nbuffer context. So in the patch, I added a variable, catchange_ntxns,\nto keep track of the length of the dlist. 
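To make the counted-dlist idea concrete, here is a minimal self-contained C
sketch of an intrusive list that carries its own element counter, so the
length query is O(1) (hypothetical names, not the actual ilist.h or
reorderbuffer code):

```c
#include <assert.h>

/* Minimal intrusive doubly-linked list node, in the spirit of ilist.h. */
typedef struct node
{
    struct node *prev;
    struct node *next;
} node;

/*
 * List head that also tracks the number of members: the hypothetical
 * counterpart of keeping catchange_ntxns next to the dlist. The length
 * becomes O(1) at the cost of updating the counter on every mutation.
 */
typedef struct counted_list
{
    node head;   /* circular: head.next is first, head.prev is last */
    int  count;
} counted_list;

static void
list_init(counted_list *list)
{
    list->head.prev = list->head.next = &list->head;
    list->count = 0;
}

static void
list_push_tail(counted_list *list, node *n)
{
    n->prev = list->head.prev;
    n->next = &list->head;
    n->prev->next = n;
    list->head.prev = n;
    list->count++;              /* keep the counter in sync on insert */
}

static void
list_delete(counted_list *list, node *n)
{
    n->prev->next = n->next;
    n->next->prev = n->prev;
    list->count--;              /* ... and on delete */
}

static int
list_length(const counted_list *list)
{
    return list->count;         /* no traversal needed */
}
```

Only the insert and delete paths have to remember to touch the counter,
which is why forgetting one of them is the classic bug with this pattern.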
Please review it.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 12 Jul 2022 09:48:52 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 9:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 8, 2022 at 8:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 8, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 8, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > > 1.\n> > > > > In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> > > > > list length of 'catchange_txns' to allocate xids array? If we can do\n> > > > > so, then we will save the need to repalloc as well.\n> > > >\n> > > > Since ReorderBufferGetcatalogChangesXacts() collects all ongoing\n> > > > catalog modifying transactions, the length of the array could be\n> > > > bigger than the one taken last time. We can start with the previous\n> > > > length but I think we cannot remove the need for repalloc.\n> > > >\n> > >\n> > > It is using the list \"catchange_txns\" to form xid array which\n> > > shouldn't change for the duration of\n> > > ReorderBufferGetCatalogChangesXacts(). Then the caller frees the xid\n> > > array after its use. Next time in\n> > > ReorderBufferGetCatalogChangesXacts(), the fresh allocation for xid\n> > > array happens, so not sure why repalloc would be required?\n> >\n> > Oops, I mistook catchange_txns for catchange->xcnt. 
You're right.\n> > Starting with the length of catchange_txns should be sufficient.\n> >\n>\n> I've attached an updated patch.\n>\n> While trying this idea, I noticed there is no API to get the length of\n> dlist, as we discussed offlist. Alternative idea was to use List\n> (T_XidList) but I'm not sure it's a great idea since deleting an xid\n> from the list is O(N), we need to implement list_delete_xid, and we\n> need to make sure allocating list node in the reorder buffer context.\n> So in the patch, I added a variable, catchange_ntxns, to keep track of\n> the length of the dlist. Please review it.\n>\n\nI'm doing benchmark tests and will share the results.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 12 Jul 2022 10:28:17 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 8:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch.\r\n> \r\n> While trying this idea, I noticed there is no API to get the length of\r\n> dlist, as we discussed offlist. Alternative idea was to use List\r\n> (T_XidList) but I'm not sure it's a great idea since deleting an xid\r\n> from the list is O(N), we need to implement list_delete_xid, and we\r\n> need to make sure allocating list node in the reorder buffer context.\r\n> So in the patch, I added a variable, catchange_ntxns, to keep track of\r\n> the length of the dlist. Please review it.\r\n> \r\n\r\nThanks for your patch. 
Here are some comments on the master patch.\r\n\r\n1.\r\nIn catalog_change_snapshot.spec, should we use \"RUNNING_XACTS record\" instead of\r\n\"RUNNING_XACT record\" / \"XACT_RUNNING record\" in the comment?\r\n\r\n2.\r\n+\t\t * Since catchange.xip is sorted, we find the lower bound of\r\n+\t\t * xids that sill are interesting.\r\n\r\nTypo?\r\n\"sill\" -> \"still\"\r\n\r\n3.\r\n+\t * This array is set once when restoring the snapshot, xids are removed\r\n+\t * from the array when decoding xl_running_xacts record, and then eventually\r\n+\t * becomes an empty.\r\n\r\n+\t\t\t/* catchange list becomes an empty */\r\n+\t\t\tpfree(builder->catchange.xip);\r\n+\t\t\tbuilder->catchange.xip = NULL;\r\n\r\nShould \"becomes an empty\" be modified to \"becomes empty\"?\r\n\r\n4.\r\n+ * changes that are smaller than ->xmin. Those won't ever get checked via\r\n+ * the ->committed array and ->catchange, respectively. The committed xids will\r\n\r\nShould we change \r\n\"the ->committed array and ->catchange\"\r\nto\r\n\"the ->committed or ->catchange array\"\r\n?\r\n\r\nRegards,\r\nShi yu\r\n\r\n", "msg_date": "Tue, 12 Jul 2022 03:40:44 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 9:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 8, 2022 at 8:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 8, 2022 at 5:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jul 8, 2022 at 12:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Jul 8, 2022 at 3:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > >\n> > > > > > 1.\n> > 
> > > > In ReorderBufferGetCatalogChangesXacts(), isn't it better to use the\n> > > > > > list length of 'catchange_txns' to allocate xids array? If we can do\n> > > > > > so, then we will save the need to repalloc as well.\n> > > > >\n> > > > > Since ReorderBufferGetcatalogChangesXacts() collects all ongoing\n> > > > > catalog modifying transactions, the length of the array could be\n> > > > > bigger than the one taken last time. We can start with the previous\n> > > > > length but I think we cannot remove the need for repalloc.\n> > > > >\n> > > >\n> > > > It is using the list \"catchange_txns\" to form xid array which\n> > > > shouldn't change for the duration of\n> > > > ReorderBufferGetCatalogChangesXacts(). Then the caller frees the xid\n> > > > array after its use. Next time in\n> > > > ReorderBufferGetCatalogChangesXacts(), the fresh allocation for xid\n> > > > array happens, so not sure why repalloc would be required?\n> > >\n> > > Oops, I mistook catchange_txns for catchange->xcnt. You're right.\n> > > Starting with the length of catchange_txns should be sufficient.\n> > >\n> >\n> > I've attached an updated patch.\n> >\n> > While trying this idea, I noticed there is no API to get the length of\n> > dlist, as we discussed offlist. Alternative idea was to use List\n> > (T_XidList) but I'm not sure it's a great idea since deleting an xid\n> > from the list is O(N), we need to implement list_delete_xid, and we\n> > need to make sure allocating list node in the reorder buffer context.\n> > So in the patch, I added a variable, catchange_ntxns, to keep track of\n> > the length of the dlist. Please review it.\n> >\n>\n> I'm doing benchmark tests and will share the results.\n>\n\nI've done benchmark tests to measure the overhead introduced by doing\nbsearch() every time when decoding a commit record. 
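For reference, the per-commit check being benchmarked boils down to a binary
search over a sorted xid array; here is a self-contained C sketch of that
lookup (illustrative names, not the actual snapbuild.c code, though the
comparator follows xidComparator's plain uint32 ordering):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;

/* Comparator in the style of xidComparator: plain uint32 ordering. */
static int
xid_cmp(const void *arg1, const void *arg2)
{
    TransactionId id1 = *(const TransactionId *) arg1;
    TransactionId id2 = *(const TransactionId *) arg2;

    if (id1 < id2)
        return -1;
    if (id1 > id2)
        return 1;
    return 0;
}

/*
 * The check done once per decoded commit record: is this xid among the
 * catalog-modifying transactions restored into the sorted array?
 * O(log n) over a sorted array; illustrative only.
 */
static bool
xid_in_catchange_array(TransactionId xid,
                       const TransactionId *xip, size_t xcnt)
{
    if (xcnt == 0)
        return false;           /* common case: array already purged */
    return bsearch(&xid, xip, xcnt, sizeof(TransactionId), xid_cmp) != NULL;
}
```

While the array is non-empty, each decoded commit record pays one O(log n)
probe; once the array has been purged to empty, the check is a single branch.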
I've simulated a\nvery intensified situation where we decode 1M commit records while\nkeeping builder->catchange.xip array but the overhead is negligible:\n\nHEAD: 584 ms\nPatched: 614 ms\n\nI've attached the benchmark script I used. With increasing\nLOG_SNAPSHOT_INTERVAL_MS to 90000, the last decoding by\npg_logical_slot_get_changes() decodes 1M commit records while keeping\ncatalog modifying transactions.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 12 Jul 2022 15:07:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 11:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I'm doing benchmark tests and will share the results.\n> >\n>\n> I've done benchmark tests to measure the overhead introduced by doing\n> bsearch() every time when decoding a commit record. I've simulated a\n> very intensified situation where we decode 1M commit records while\n> keeping builder->catchange.xip array but the overhead is negligible:\n>\n> HEAD: 584 ms\n> Patched: 614 ms\n>\n> I've attached the benchmark script I used. With increasing\n> LOG_SNAPSHOT_INTERVAL_MS to 90000, the last decoding by\n> pg_logical_slot_get_changes() decodes 1M commit records while keeping\n> catalog modifying transactions.\n>\n\nThanks for the test. 
We should also see how it performs when (a) we\ndon't change LOG_SNAPSHOT_INTERVAL_MS, and (b) we have more DDL xacts\nso that the array to search is somewhat bigger\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Jul 2022 11:55:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 11:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > I'm doing benchmark tests and will share the results.\n> > >\n> >\n> > I've done benchmark tests to measure the overhead introduced by doing\n> > bsearch() every time when decoding a commit record. I've simulated a\n> > very intensified situation where we decode 1M commit records while\n> > keeping builder->catchange.xip array but the overhead is negilible:\n> >\n> > HEAD: 584 ms\n> > Patched: 614 ms\n> >\n> > I've attached the benchmark script I used. With increasing\n> > LOG_SNAPSHOT_INTERVAL_MS to 90000, the last decoding by\n> > pg_logicla_slot_get_changes() decodes 1M commit records while keeping\n> > catalog modifying transactions.\n> >\n>\n> Thanks for the test. We should also see how it performs when (a) we\n> don't change LOG_SNAPSHOT_INTERVAL_MS,\n\nWhat point do you want to see in this test? I think the performance\noverhead depends on how many times we do bsearch() and how many\ntransactions are in the list. I increased this value to easily\nsimulate the situation where we decode many commit records while\nkeeping catalog modifying transactions. 
But even if we don't change\nthis value, the result would not change if we don't change how many\ncommit records we decode.\n\n> and (b) we have more DDL xacts\n> so that the array to search is somewhat bigger\n\nI've done the same performance tests while creating 64 catalog\nmodifying transactions. The result is:\n\nHEAD: 595 ms\nPatched: 628 ms\n\nThere was no big overhead.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 12 Jul 2022 16:42:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 at 11:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > I'm doing benchmark tests and will share the results.\n> > > >\n> > >\n> > > I've done benchmark tests to measure the overhead introduced by doing\n> > > bsearch() every time when decoding a commit record. I've simulated a\n> > > very intensified situation where we decode 1M commit records while\n> > > keeping builder->catchange.xip array but the overhead is negilible:\n> > >\n> > > HEAD: 584 ms\n> > > Patched: 614 ms\n> > >\n> > > I've attached the benchmark script I used. With increasing\n> > > LOG_SNAPSHOT_INTERVAL_MS to 90000, the last decoding by\n> > > pg_logicla_slot_get_changes() decodes 1M commit records while keeping\n> > > catalog modifying transactions.\n> > >\n> >\n> > Thanks for the test. We should also see how it performs when (a) we\n> > don't change LOG_SNAPSHOT_INTERVAL_MS,\n>\n> What point do you want to see in this test? 
I think the performance\n> overhead depends on how many times we do bsearch() and how many\n> transactions are in the list.\n>\n\nRight, I am not expecting any visible performance difference in this\ncase. This is to ensure that we are not incurring any overhead in the\nmore usual scenarios (or default cases). As per my understanding, the\npurpose of increasing the value of LOG_SNAPSHOT_INTERVAL_MS is to\nsimulate a stress case for the changes made by the patch, and keeping\nits value default will test the more usual scenarios.\n\n> I increased this value to easily\n> simulate the situation where we decode many commit records while\n> keeping catalog modifying transactions. But even if we don't change\n> this value, the result would not change if we don't change how many\n> commit records we decode.\n>\n> > and (b) we have more DDL xacts\n> > so that the array to search is somewhat bigger\n>\n> I've done the same performance tests while creating 64 catalog\n> modifying transactions. The result is:\n>\n> HEAD: 595 ms\n> Patched: 628 ms\n>\n> There was no big overhead.\n>\n\nYeah, especially considering you have simulated a stress case for the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Jul 2022 14:22:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 8:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch.\r\n> \r\n\r\nHi,\r\n\r\nI met a segmentation fault in test_decoding test after applying the patch for master\r\nbranch. 
Attach the backtrace.\r\n\r\nIt happened when executing the following code because it tried to free a NULL\r\npointer (catchange_xip).\r\n\r\n\t/* be tidy */\r\n \tif (ondisk)\r\n \t\tpfree(ondisk);\r\n+\tif (catchange_xip)\r\n+\t\tpfree(catchange_xip);\r\n }\r\n\r\nIt seems to be related to configure option. I could reproduce it when using\r\n`./configure --enable-debug`.\r\nBut I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -ggdb\"`.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Tue, 12 Jul 2022 08:58:50 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Jul 12, 2022 8:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch.\n> >\n>\n> Hi,\n>\n> I met a segmentation fault in test_decoding test after applying the patch for master\n> branch. Attach the backtrace.\n\nThank you for testing the patch!\n\n>\n> It happened when executing the following code because it tried to free a NULL\n> pointer (catchange_xip).\n>\n> /* be tidy */\n> if (ondisk)\n> pfree(ondisk);\n> + if (catchange_xip)\n> + pfree(catchange_xip);\n> }\n>\n> It seems to be related to configure option. I could reproduce it when using\n> `./configure --enable-debug`.\n> But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -ggdb\"`.\n\nHmm, I could not reproduce this problem even if I use ./configure\n--enable-debug. And it's weird that we checked if catchange_xip is not\nnull but we did pfree for it:\n\n#1 pfree (pointer=0x0) at mcxt.c:1177\n#2 0x000000000078186b in SnapBuildSerialize (builder=0x1fd5e78,\nlsn=25719712) at snapbuild.c:1792\n\nIs it reproducible in your environment? 
If so, could you test it again\nwith the following changes?\n\ndiff --git a/src/backend/replication/logical/snapbuild.c\nb/src/backend/replication/logical/snapbuild.c\nindex d015c06ced..a6e76e3781 100644\n--- a/src/backend/replication/logical/snapbuild.c\n+++ b/src/backend/replication/logical/snapbuild.c\n@@ -1788,7 +1788,7 @@ out:\n /* be tidy */\n if (ondisk)\n pfree(ondisk);\n- if (catchange_xip)\n+ if (catchange_xip != NULL)\n pfree(catchange_xip);\n }\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 12 Jul 2022 18:22:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> >\n> > It happened when executing the following code because it tried to free a NULL\n> > pointer (catchange_xip).\n> >\n> > /* be tidy */\n> > if (ondisk)\n> > pfree(ondisk);\n> > + if (catchange_xip)\n> > + pfree(catchange_xip);\n> > }\n> >\n> > It seems to be related to configure option. I could reproduce it when using\n> > `./configure --enable-debug`.\n> > But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -ggdb\"`.\n>\n> Hmm, I could not reproduce this problem even if I use ./configure\n> --enable-debug. And it's weird that we checked if catchange_xip is not\n> null but we did pfree for it:\n>\n\nYeah, this looks weird to me as well but one difference in running\ntests could be the timing of WAL LOG for XLOG_RUNNING_XACTS. That may\nchange the timing of SnapBuildSerialize. 
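As an aside on the crash itself: standard free() is defined by C to be a
no-op on a NULL pointer, but pfree() requires a valid palloc'd chunk and
crashes on NULL, which is what the reported backtrace with pointer=0x0
shows. A standalone C sketch of the guarded-release pattern (hypothetical
names, not the actual mcxt.c code):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * A pfree()-style release that, like PostgreSQL's pfree(), must not be
 * handed a NULL pointer (unlike standard free(NULL), which is defined
 * as a no-op). Hypothetical stand-in for illustration only.
 */
static void
strict_free(void *pointer)
{
    assert(pointer != NULL);    /* caller must guard, as in "if (ptr) pfree(ptr)" */
    free(pointer);
}

/* The guarded pattern used at the end of a serialize-style function. */
static void
release_if_set(void **pointer)
{
    if (*pointer != NULL)
    {
        strict_free(*pointer);
        *pointer = NULL;        /* avoid a double-free on a later pass */
    }
}
```

Resetting the pointer to NULL after freeing additionally makes a second
pass over the same cleanup path harmless.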
The other thing we can try is\nby checking the value of catchange_xcnt before pfree.\n\nBTW, I think ReorderBufferGetCatalogChangesXacts should have an Assert\nto ensure rb->catchange_ntxns and xcnt are equal. We can probably then\navoid having xcnt_p as an out parameter as the caller can use\nrb->catchange_ntxns instead.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 12 Jul 2022 16:28:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 7:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 2:53 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > >\n> > > It happened when executing the following code because it tried to free a NULL\n> > > pointer (catchange_xip).\n> > >\n> > > /* be tidy */\n> > > if (ondisk)\n> > > pfree(ondisk);\n> > > + if (catchange_xip)\n> > > + pfree(catchange_xip);\n> > > }\n> > >\n> > > It seems to be related to configure option. I could reproduce it when using\n> > > `./configure --enable-debug`.\n> > > But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -ggdb\"`.\n> >\n> > Hmm, I could not reproduce this problem even if I use ./configure\n> > --enable-debug. And it's weird that we checked if catchange_xip is not\n> > null but we did pfree for it:\n> >\n>\n> Yeah, this looks weird to me as well but one difference in running\n> tests could be the timing of WAL LOG for XLOG_RUNNING_XACTS. That may\n> change the timing of SnapBuildSerialize. 
The other thing we can try is\n> by checking the value of catchange_xcnt before pfree.\n\nYeah, we can try that.\n\nWhile reading the code, I realized that we try to pfree both ondisk\nand catchange_xip also when we jumped to 'out:':\n\nout:\n ReorderBufferSetRestartPoint(builder->reorder,\n builder->last_serialized_snapshot);\n /* be tidy */\n if (ondisk)\n pfree(ondisk);\n if (catchange_xip)\n pfree(catchange_xip);\n\nBut we use both ondisk and catchange_xip only if we didn't jump to\n'out:'. If this problem is related to compiler optimization with\n'goto' statement, moving them before 'out:' might be worth trying.\n\n>\n> BTW, I think ReorderBufferGetCatalogChangesXacts should have an Assert\n> to ensure rb->catchange_ntxns and xcnt are equal. We can probably then\n> avoid having xcnt_p as an out parameter as the caller can use\n> rb->catchange_ntxns instead.\n>\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 13 Jul 2022 09:36:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 5:52 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 12, 2022 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 12, 2022 at 11:38 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Tue, Jul 12, 2022 at 10:28 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > I'm doing benchmark tests and will share the results.\n> > > > >\n> > > >\n> > > > I've done benchmark tests to measure the overhead introduced by doing\n> > > > bsearch() every time when decoding a commit record. 
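For context on what this benchmark is exercising: the per-commit-record check is essentially a binary search of the sorted array of catalog-modifying xids. Below is a minimal stand-alone sketch of that lookup; the names are illustrative (this is not the actual SnapBuildXidHasCatalogChanges() code), and plain integer comparison stands in for the wraparound-aware xid ordering that the real code must use.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint32_t TransactionId;

/* Comparator for bsearch(); plain integer order, ignoring xid wraparound. */
static int xid_cmp(const void *a, const void *b)
{
    TransactionId xa = *(const TransactionId *) a;
    TransactionId xb = *(const TransactionId *) b;

    return (xa > xb) - (xa < xb);
}

/*
 * Hypothetical sketch of the per-commit lookup: return 1 if xid is present
 * in the sorted catchange array, 0 otherwise.
 */
int xid_has_catalog_changes(TransactionId xid,
                            const TransactionId *xip, size_t xcnt)
{
    return xcnt > 0 &&
        bsearch(&xid, xip, xcnt, sizeof(TransactionId), xid_cmp) != NULL;
}
```

Each commit record costs one O(log N) lookup here, N being the number of tracked catalog-modifying transactions, which is consistent with the small difference between the HEAD and patched timings reported above.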
I've simulated a\n> > > > very intensified situation where we decode 1M commit records while\n> > > > keeping builder->catchange.xip array but the overhead is negligible:\n> > > >\n> > > > HEAD: 584 ms\n> > > > Patched: 614 ms\n> > > >\n> > > > I've attached the benchmark script I used. With increasing\n> > > > LOG_SNAPSHOT_INTERVAL_MS to 90000, the last decoding by\n> > > > pg_logical_slot_get_changes() decodes 1M commit records while keeping\n> > > > catalog modifying transactions.\n> > > >\n> > >\n> > > Thanks for the test. We should also see how it performs when (a) we\n> > > don't change LOG_SNAPSHOT_INTERVAL_MS,\n> >\n> > What point do you want to see in this test? I think the performance\n> > overhead depends on how many times we do bsearch() and how many\n> > transactions are in the list.\n> >\n>\n> Right, I am not expecting any visible performance difference in this\n> case. This is to ensure that we are not incurring any overhead in the\n> more usual scenarios (or default cases). 
As per my understanding, the\n> purpose of increasing the value of LOG_SNAPSHOT_INTERVAL_MS is to\n> simulate a stress case for the changes made by the patch, and keeping\n> its value default will test the more usual scenarios.\n\nAgreed.\n\nI've done simple benchmark tests to decode 100k pgbench transactions:\n\nHEAD: 10.34 s\nPatched: 10.29 s\n\nI've attached an updated patch that incorporated comments from Amit and Shi.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 14 Jul 2022 10:30:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 at 12:40 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Jul 12, 2022 8:49 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch.\n> >\n> > While trying this idea, I noticed there is no API to get the length of\n> > dlist, as we discussed offlist. Alternative idea was to use List\n> > (T_XidList) but I'm not sure it's a great idea since deleting an xid\n> > from the list is O(N), we need to implement list_delete_xid, and we\n> > need to make sure allocating list node in the reorder buffer context.\n> > So in the patch, I added a variable, catchange_ntxns, to keep track of\n> > the length of the dlist. Please review it.\n> >\n>\n> Thanks for your patch. 
Here are some comments on the master patch.\n\nThank you for the comments.\n\n>\n> 1.\n> In catalog_change_snapshot.spec, should we use \"RUNNING_XACTS record\" instead of\n> \"RUNNING_XACT record\" / \"XACT_RUNNING record\" in the comment?\n>\n> 2.\n> + * Since catchange.xip is sorted, we find the lower bound of\n> + * xids that sill are interesting.\n>\n> Typo?\n> \"sill\" -> \"still\"\n>\n> 3.\n> + * This array is set once when restoring the snapshot, xids are removed\n> + * from the array when decoding xl_running_xacts record, and then eventually\n> + * becomes an empty.\n>\n> + /* catchange list becomes an empty */\n> + pfree(builder->catchange.xip);\n> + builder->catchange.xip = NULL;\n>\n> Should \"becomes an empty\" be modified to \"becomes empty\"?\n>\n> 4.\n> + * changes that are smaller than ->xmin. Those won't ever get checked via\n> + * the ->committed array and ->catchange, respectively. The committed xids will\n>\n> Should we change\n> \"the ->committed array and ->catchange\"\n> to\n> \"the ->committed or ->catchange array\"\n> ?\n\nAgreed with all the above comments. 
These are incorporated in the\nlatest v4 patch I just sent[1].\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAD21AoAyNPrOFg%2BQGh%2B%3D4205TU0%3DyrE%2BQyMgzStkH85uBZXptQ%40mail.gmail.com\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 14 Jul 2022 10:32:06 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 12, 2022 5:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> >\r\n> > It happened when executing the following code because it tried to free a\r\n> NULL\r\n> > pointer (catchange_xip).\r\n> >\r\n> > /* be tidy */\r\n> > if (ondisk)\r\n> > pfree(ondisk);\r\n> > + if (catchange_xip)\r\n> > + pfree(catchange_xip);\r\n> > }\r\n> >\r\n> > It seems to be related to configure option. I could reproduce it when using\r\n> > `./configure --enable-debug`.\r\n> > But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -\r\n> ggdb\"`.\r\n> \r\n> Hmm, I could not reproduce this problem even if I use ./configure\r\n> --enable-debug. And it's weird that we checked if catchange_xip is not\r\n> null but we did pfree for it:\r\n> \r\n> #1 pfree (pointer=0x0) at mcxt.c:1177\r\n> #2 0x000000000078186b in SnapBuildSerialize (builder=0x1fd5e78,\r\n> lsn=25719712) at snapbuild.c:1792\r\n> \r\n> Is it reproducible in your environment?\r\n\r\nThanks for your reply! Yes, it is reproducible. 
And I also reproduced it on the\r\nv4 patch you posted [1].\r\n\r\n[1] https://www.postgresql.org/message-id/CAD21AoAyNPrOFg%2BQGh%2B%3D4205TU0%3DyrE%2BQyMgzStkH85uBZXptQ%40mail.gmail.com\r\n\r\n> If so, could you test it again\r\n> with the following changes?\r\n> \r\n> diff --git a/src/backend/replication/logical/snapbuild.c\r\n> b/src/backend/replication/logical/snapbuild.c\r\n> index d015c06ced..a6e76e3781 100644\r\n> --- a/src/backend/replication/logical/snapbuild.c\r\n> +++ b/src/backend/replication/logical/snapbuild.c\r\n> @@ -1788,7 +1788,7 @@ out:\r\n> /* be tidy */\r\n> if (ondisk)\r\n> pfree(ondisk);\r\n> - if (catchange_xip)\r\n> + if (catchange_xip != NULL)\r\n> pfree(catchange_xip);\r\n> }\r\n> \r\n\r\nI tried this and could still reproduce the problem.\r\n\r\nBesides, I tried the suggestion from Amit [2], it could be fixed by checking\r\nthe value of catchange_xcnt instead of catchange_xip before pfree.\r\n\r\n[2] https://www.postgresql.org/message-id/CAA4eK1%2BXPdm8G%3DEhUJA12Pi1YvQAfcz2%3DkTd9a4BjVx4%3Dgk-MA%40mail.gmail.com\r\n\r\ndiff --git a/src/backend/replication/logical/snapbuild.c b/src/backend/replication/logical/snapbuild.c\r\nindex c482e906b0..68b9c4ef7d 100644\r\n--- a/src/backend/replication/logical/snapbuild.c\r\n+++ b/src/backend/replication/logical/snapbuild.c\r\n@@ -1573,7 +1573,7 @@ SnapBuildSerialize(SnapBuild *builder, XLogRecPtr lsn)\r\n Size needed_length;\r\n SnapBuildOnDisk *ondisk = NULL;\r\n TransactionId *catchange_xip = NULL;\r\n- size_t catchange_xcnt;\r\n+ size_t catchange_xcnt = 0;\r\n char *ondisk_c;\r\n int fd;\r\n char tmppath[MAXPGPATH];\r\n@@ -1788,7 +1788,7 @@ out:\r\n /* be tidy */\r\n if (ondisk)\r\n pfree(ondisk);\r\n- if (catchange_xip)\r\n+ if (catchange_xcnt != 0)\r\n pfree(catchange_xip);\r\n }\r\n\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Thu, 14 Jul 2022 02:16:00 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication 
failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 14, 2022 at 11:16 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Tue, Jul 12, 2022 5:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > It happened when executing the following code because it tried to free a\n> > NULL\n> > > pointer (catchange_xip).\n> > >\n> > > /* be tidy */\n> > > if (ondisk)\n> > > pfree(ondisk);\n> > > + if (catchange_xip)\n> > > + pfree(catchange_xip);\n> > > }\n> > >\n> > > It seems to be related to configure option. I could reproduce it when using\n> > > `./configure --enable-debug`.\n> > > But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -\n> > ggdb\"`.\n> >\n> > Hmm, I could not reproduce this problem even if I use ./configure\n> > --enable-debug. And it's weird that we checked if catchange_xip is not\n> > null but we did pfree for it:\n> >\n> > #1 pfree (pointer=0x0) at mcxt.c:1177\n> > #2 0x000000000078186b in SnapBuildSerialize (builder=0x1fd5e78,\n> > lsn=25719712) at snapbuild.c:1792\n> >\n> > Is it reproducible in your environment?\n>\n> Thanks for your reply! Yes, it is reproducible. 
And I also reproduced it on the\n> v4 patch you posted [1].\n\nThank you for testing!\n\n>\n> [1] https://www.postgresql.org/message-id/CAD21AoAyNPrOFg%2BQGh%2B%3D4205TU0%3DyrE%2BQyMgzStkH85uBZXptQ%40mail.gmail.com\n>\n> > If so, could you test it again\n> > with the following changes?\n> >\n> > diff --git a/src/backend/replication/logical/snapbuild.c\n> > b/src/backend/replication/logical/snapbuild.c\n> > index d015c06ced..a6e76e3781 100644\n> > --- a/src/backend/replication/logical/snapbuild.c\n> > +++ b/src/backend/replication/logical/snapbuild.c\n> > @@ -1788,7 +1788,7 @@ out:\n> > /* be tidy */\n> > if (ondisk)\n> > pfree(ondisk);\n> > - if (catchange_xip)\n> > + if (catchange_xip != NULL)\n> > pfree(catchange_xip);\n> > }\n> >\n>\n> I tried this and could still reproduce the problem.\n\nDoes the backtrace still show we attempt to pfree a null-pointer?\n\n>\n> Besides, I tried the suggestion from Amit [2], it could be fixed by checking\n> the value of catchange_xcnt instead of catchange_xip before pfree.\n\nCould you check if this problem occurred when we reached there via\ngoto pass, i.e., did we call ReorderBufferGetCatalogChangesXacts() or\nnot?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 14 Jul 2022 12:06:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 14, 2022 at 12:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 14, 2022 at 11:16 AM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Tue, Jul 12, 2022 5:23 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 12, 2022 at 5:58 PM shiy.fnst@fujitsu.com\n> > > <shiy.fnst@fujitsu.com> wrote:\n> > > >\n> > > > It happened when executing the following code because it 
tried to free a\n> > > NULL\n> > > > pointer (catchange_xip).\n> > > >\n> > > > /* be tidy */\n> > > > if (ondisk)\n> > > > pfree(ondisk);\n> > > > + if (catchange_xip)\n> > > > + pfree(catchange_xip);\n> > > > }\n> > > >\n> > > > It seems to be related to configure option. I could reproduce it when using\n> > > > `./configure --enable-debug`.\n> > > > But I couldn't reproduce with `./configure --enable-debug CFLAGS=\"-Og -\n> > > ggdb\"`.\n> > >\n> > > Hmm, I could not reproduce this problem even if I use ./configure\n> > > --enable-debug. And it's weird that we checked if catchange_xip is not\n> > > null but we did pfree for it:\n> > >\n> > > #1 pfree (pointer=0x0) at mcxt.c:1177\n> > > #2 0x000000000078186b in SnapBuildSerialize (builder=0x1fd5e78,\n> > > lsn=25719712) at snapbuild.c:1792\n> > >\n> > > Is it reproducible in your environment?\n> >\n> > Thanks for your reply! Yes, it is reproducible. And I also reproduced it on the\n> > v4 patch you posted [1].\n>\n> Thank you for testing!\n\nI've found out the exact cause of this problem and how to fix it. I'll\nsubmit an updated patch next week with my analysis.\n\nThank you for testing and providing additional information off-list, Shi yu.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 15 Jul 2022 02:36:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 11, 2022 9:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached an updated patch, please review it.\r\n> \r\n\r\nThanks for your patch. 
Here are some comments for the REL14-v1 patch.\r\n\r\n1.\r\n+\t\tSize\t\tsz = sizeof(TransactionId) * nxacts;;\r\n\r\nThere is a redundant semicolon at the end.\r\n\r\n2.\r\n+\tworkspace = MemoryContextAlloc(rb->context, rb->n_initial_running_xacts);\r\n\r\nShould it be:\r\n+\tworkspace = MemoryContextAlloc(rb->context, sizeof(TransactionId) * rb->n_initial_running_xacts);\r\n\r\n3.\r\n+\t/* bound check if there is at least one transaction to be removed */\r\n+\tif (NormalTransactionIdPrecedes(rb->initial_running_xacts[0],\r\n+\t\t\t\t\t\t\t\t\trunning->oldestRunningXid))\r\n+\t\treturn;\r\n+\r\n\r\nHere, I think it should return if rb->initial_running_xacts[0] is older than\r\noldestRunningXid, right? Should it be changed to:\r\n\r\n+\tif (!NormalTransactionIdPrecedes(rb->initial_running_xacts[0],\r\n+\t\t\t\t\t\t\t\t\trunning->oldestRunningXid))\r\n+\t\treturn;\r\n\r\n4.\r\n+\tif ((parsed->xinfo & XACT_XINFO_HAS_INVALS) != 0)\r\n\r\nMaybe we can change it like the following, to be consistent with other places in\r\nthis file. 
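To make the bound check in comment 3 above concrete, here is a stand-alone sketch of the purge step over a sorted xid array. The names are illustrative, and plain integer comparison stands in for the wraparound-aware NormalTransactionIdPrecedes() that the real code must use.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint32_t TransactionId;

/*
 * Remove all xids strictly older than oldest_running from the sorted array,
 * compacting the survivors to the front; returns the new count. Plain
 * integer comparison is used here for illustration only.
 */
size_t purge_older_xids(TransactionId *xip, size_t xcnt,
                        TransactionId oldest_running)
{
    size_t off;

    /* bound check: nothing to do if even the smallest xid is still running */
    if (xcnt == 0 || xip[0] >= oldest_running)
        return xcnt;

    for (off = 0; off < xcnt && xip[off] < oldest_running; off++)
        ;
    for (size_t i = off; i < xcnt; i++)
        xip[i - off] = xip[i];
    return xcnt - off;
}
```

The early return is exactly the bound check questioned in comment 3: it must fire when the first (smallest) entry is not older than oldestRunningXid, since then no entry can be purged.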
It's also fine if you don't change it.\r\n\r\n+\tif (parsed->xinfo & XACT_XINFO_HAS_INVALS)\r\n\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Fri, 15 Jul 2022 06:32:10 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thursday, July 14, 2022 10:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached an updated patch that incorporated comments from Amit and Shi.\r\nHi,\r\n\r\n\r\nMinor comments for v4.\r\n\r\n(1) typo in the commit message\r\n\r\n\"When decoding a COMMIT record, we check both the list and the ReorderBuffer to see if\r\nif the transaction has modified catalogs.\"\r\n\r\nThere are two 'if's in succession in the last sentence of the second paragraph.\r\n\r\n(2) The header comment for the spec test\r\n\r\n+# Test that decoding only the commit record of the transaction that have\r\n+# catalog-changed.\r\n\r\nRewording of this part looks required, because \"test that ... 
\" requires a complete sentence\r\nafter that, right ?\r\n\r\n\r\n(3) SnapBuildRestore\r\n\r\nsnapshot_not_interesting:\r\n if (ondisk.builder.committed.xip != NULL)\r\n pfree(ondisk.builder.committed.xip);\r\n return false;\r\n}\r\n\r\nDo we need to add pfree for ondisk.builder.catchange.xip after the 'snapshot_not_interesting' label ?\r\n\r\n\r\n(4) SnapBuildPurgeOlderTxn\r\n\r\n+ elog(DEBUG3, \"purged catalog modifying transactions from %d to %d\",\r\n+ (uint32) builder->catchange.xcnt, surviving_xids);\r\n\r\nTo make this part more aligned with existing codes,\r\nprobably we can have a look at another elog for debug in the same function.\r\n\r\nWe should use %u for casted xcnt & surviving_xids,\r\nwhile adding a format for xmin if necessary ?\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Fri, 15 Jul 2022 13:43:14 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 15, 2022 at 10:43 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Thursday, July 14, 2022 10:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached an updated patch that incorporated comments from Amit and Shi.\n> Hi,\n>\n>\n> Minor comments for v4.\n\nThank you for the comments!\n\n>\n> (1) typo in the commit message\n>\n> \"When decoding a COMMIT record, we check both the list and the ReorderBuffer to see if\n> if the transaction has modified catalogs.\"\n>\n> There are two 'if's in succession in the last sentence of the second paragraph.\n>\n> (2) The header comment for the spec test\n>\n> +# Test that decoding only the commit record of the transaction that have\n> +# catalog-changed.\n>\n> Rewording of this part looks required, because \"test that ... 
\" requires a complete sentence\n> after that, right ?\n>\n>\n> (3) SnapBuildRestore\n>\n> snapshot_not_interesting:\n> if (ondisk.builder.committed.xip != NULL)\n> pfree(ondisk.builder.committed.xip);\n> return false;\n> }\n>\n> Do we need to add pfree for ondisk.builder.catchange.xip after the 'snapshot_not_interesting' label ?\n>\n>\n> (4) SnapBuildPurgeOlderTxn\n>\n> + elog(DEBUG3, \"purged catalog modifying transactions from %d to %d\",\n> + (uint32) builder->catchange.xcnt, surviving_xids);\n>\n> To make this part more aligned with existing codes,\n> probably we can have a look at another elog for debug in the same function.\n>\n> We should use %u for casted xcnt & surviving_xids,\n> while adding a format for xmin if necessary ?\n\nI agreed with all the above comments and incorporated them into the\nupdated patch.\n\nThis patch should have the fix for the issue that Shi yu reported. Shi\nyu, could you please test it again with this patch?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 15 Jul 2022 23:39:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 15, 2022 at 3:32 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jul 11, 2022 9:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached an updated patch, please review it.\n> >\n>\n> Thanks for your patch. 
Here are some comments for the REL14-v1 patch.\n>\n> 1.\n> + Size sz = sizeof(TransactionId) * nxacts;;\n>\n> There is a redundant semicolon at the end.\n>\n> 2.\n> + workspace = MemoryContextAlloc(rb->context, rb->n_initial_running_xacts);\n>\n> Should it be:\n> + workspace = MemoryContextAlloc(rb->context, sizeof(TransactionId) * rb->n_initial_running_xacts);\n>\n> 3.\n> + /* bound check if there is at least one transaction to be removed */\n> + if (NormalTransactionIdPrecedes(rb->initial_running_xacts[0],\n> + running->oldestRunningXid))\n> + return;\n> +\n>\n> Here, I think it should return if rb->initial_running_xacts[0] is older than\n> oldestRunningXid, right? Should it be changed to:\n>\n> + if (!NormalTransactionIdPrecedes(rb->initial_running_xacts[0],\n> + running->oldestRunningXid))\n> + return;\n>\n> 4.\n> + if ((parsed->xinfo & XACT_XINFO_HAS_INVALS) != 0)\n>\n> Maybe we can change it like the following, to be consistent with other places in\n> this file. It's also fine if you don't change it.\n>\n> + if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n\nThank you for the comments!\n\nI've attached patches for all supported branches including the master.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Sun, 17 Jul 2022 21:58:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 15, 2022 10:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> This patch should have the fix for the issue that Shi yu reported. 
Shi\r\n> yu, could you please test it again with this patch?\r\n> \r\n\r\nThanks for updating the patch!\r\nI have tested and confirmed that the problem I found has been fixed.\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Mon, 18 Jul 2022 03:28:03 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 15, 2022 at 8:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> This patch should have the fix for the issue that Shi yu reported. Shi\n> yu, could you please test it again with this patch?\n>\n\nCan you explain the cause of the failure and your fix for the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Jul 2022 09:42:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sun, Jul 17, 2022 at 6:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 15, 2022 at 3:32 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n>\n> I've attached patches for all supported branches including the master.\n>\n\nFor back branch patches,\n* Wouldn't it be better to move the purge logic into the\nSnapBuildPurge* function for the sake of consistency?\n* Do we really need ReorderBufferInitialXactsSetCatalogChanges()?\nCan't we instead have a function similar to\nSnapBuildXidHasCatalogChanges() as we have for the master branch? 
That\nwill avoid calling it when the snapshot\nstate is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 18 Jul 2022 17:19:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 18, 2022 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 15, 2022 at 8:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > This patch should have the fix for the issue that Shi yu reported. Shi\n> > yu, could you please test it again with this patch?\n> >\n>\n> Can you explain the cause of the failure and your fix for the same?\n\n@@ -1694,6 +1788,8 @@ out:\n /* be tidy */\n if (ondisk)\n pfree(ondisk);\n+ if (catchange_xip)\n+ pfree(catchange_xip);\n\nRegarding the above code in the previous version patch, looking at the\ngenerated assembler code shared by Shi yu offlist, I realized that the\n“if (catchange_xip)” is removed (folded) by gcc optimization. This is\nbecause we dereference catchange_xip before null-pointer check as\nfollow:\n\n+ /* copy catalog modifying xacts */\n+ sz = sizeof(TransactionId) * catchange_xcnt;\n+ memcpy(ondisk_c, catchange_xip, sz);\n+ COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n+ ondisk_c += sz;\n\nSince sz is 0 in this case, memcpy doesn’t do anything actually.\n\nBy checking the assembler code, I’ve confirmed that gcc does the\noptimization for these code and setting\n-fno-delete-null-pointer-checks flag prevents the if statement from\nbeing folded. Also, I’ve confirmed that adding the check if\n\"catchange.xcnt > 0” before the null-pointer check also can prevent\nthat. Adding a check if \"catchange.xcnt > 0” looks more robust. I’ve\nadded a similar check for builder->committed.xcnt as well for\nconsistency. 
builder->committed.xip could have no transactions.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 10:03:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 18, 2022 at 12:28 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Fri, Jul 15, 2022 10:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > This patch should have the fix for the issue that Shi yu reported. Shi\n> > yu, could you please test it again with this patch?\n> >\n>\n> Thanks for updating the patch!\n> I have tested and confirmed that the problem I found has been fixed.\n\nThank you for testing!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 10:13:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 6:34 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 18, 2022 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 15, 2022 at 8:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > This patch should have the fix for the issue that Shi yu reported. 
Shi\n> > > yu, could you please test it again with this patch?\n> > >\n> >\n> > Can you explain the cause of the failure and your fix for the same?\n>\n> @@ -1694,6 +1788,8 @@ out:\n> /* be tidy */\n> if (ondisk)\n> pfree(ondisk);\n> + if (catchange_xip)\n> + pfree(catchange_xip);\n>\n> Regarding the above code in the previous version patch, looking at the\n> generated assembler code shared by Shi yu offlist, I realized that the\n> “if (catchange_xip)” is removed (folded) by gcc optimization. This is\n> because we dereference catchange_xip before null-pointer check as\n> follow:\n>\n> + /* copy catalog modifying xacts */\n> + sz = sizeof(TransactionId) * catchange_xcnt;\n> + memcpy(ondisk_c, catchange_xip, sz);\n> + COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n> + ondisk_c += sz;\n>\n> Since sz is 0 in this case, memcpy doesn’t do anything actually.\n>\n> By checking the assembler code, I’ve confirmed that gcc does the\n> optimization for these code and setting\n> -fno-delete-null-pointer-checks flag prevents the if statement from\n> being folded. Also, I’ve confirmed that adding the check if\n> \"catchange.xcnt > 0” before the null-pointer check also can prevent\n> that. Adding a check if \"catchange.xcnt > 0” looks more robust. I’ve\n> added a similar check for builder->committed.xcnt as well for\n> consistency. builder->committed.xip could have no transactions.\n>\n\nGood work. I wonder without comments this may create a problem in the\nfuture. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\nfreeing the memory any less robust. 
Also, for consistency, we can use\na similar check based on xcnt in the SnapBuildRestore to free the\nmemory in the below code:\n+ /* set catalog modifying transactions */\n+ if (builder->catchange.xip)\n+ pfree(builder->catchange.xip);\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jul 2022 10:17:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 19, 2022 at 6:34 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 18, 2022 at 1:12 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 15, 2022 at 8:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > This patch should have the fix for the issue that Shi yu reported. Shi\n> > > > yu, could you please test it again with this patch?\n> > > >\n> > >\n> > > Can you explain the cause of the failure and your fix for the same?\n> >\n> > @@ -1694,6 +1788,8 @@ out:\n> > /* be tidy */\n> > if (ondisk)\n> > pfree(ondisk);\n> > + if (catchange_xip)\n> > + pfree(catchange_xip);\n> >\n> > Regarding the above code in the previous version patch, looking at the\n> > generated assembler code shared by Shi yu offlist, I realized that the\n> > “if (catchange_xip)” is removed (folded) by gcc optimization. 
This is\n> > because we dereference catchange_xip before null-pointer check as\n> > follow:\n> >\n> > + /* copy catalog modifying xacts */\n> > + sz = sizeof(TransactionId) * catchange_xcnt;\n> > + memcpy(ondisk_c, catchange_xip, sz);\n> > + COMP_CRC32C(ondisk->checksum, ondisk_c, sz);\n> > + ondisk_c += sz;\n> >\n> > Since sz is 0 in this case, memcpy doesn’t do anything actually.\n> >\n> > By checking the assembler code, I’ve confirmed that gcc does the\n> > optimization for these code and setting\n> > -fno-delete-null-pointer-checks flag prevents the if statement from\n> > being folded. Also, I’ve confirmed that adding the check if\n> > \"catchange.xcnt > 0” before the null-pointer check also can prevent\n> > that. Adding a check if \"catchange.xcnt > 0” looks more robust. I’ve\n> > added a similar check for builder->committed.xcnt as well for\n> > consistency. builder->committed.xip could have no transactions.\n> >\n>\n> Good work. I wonder without comments this may create a problem in the\n> future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> freeing the memory any less robust. Also, for consistency, we can use\n> a similar check based on xcnt in the SnapBuildRestore to free the\n> memory in the below code:\n> + /* set catalog modifying transactions */\n> + if (builder->catchange.xip)\n> + pfree(builder->catchange.xip);\n\nI would hesitate to add comments about preventing the particular\noptimization. 
I think we do null-pointer-check-then-pfree in many places.\nIt seems to me that checking the array length before memcpy is more\nnatural than checking both the array length and the array existence\nbefore pfree.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:02:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sunday, July 17, 2022 9:59 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> I've attached patches for all supported branches including the master.\r\nHi,\r\n\r\n\r\nMinor comments for REL14.\r\n\r\n(1) There are some foreign characters in the patches (in the commit message)\r\n\r\nWhen I had a look at your patch for back branches with some editor,\r\nI could see some unfamiliar full-width characters like below two cases,\r\nmainly around \"single quotes\" in the sentences.\r\n\r\nCould you please check the entire patches,\r\nprobably by some tool that helps you to detect this kind of characters ?\r\n\r\n* the 2nd paragraph of the commit message\r\n\r\n...mark the transaction as containing catalog changes if it窶冱 in the list of the\r\ninitial running transactions ...\r\n\r\n* the 3rd paragraph of the same\r\n\r\nIt doesn窶冲 have the information on which (sub) transaction has catalog changes....\r\n\r\nFYI, this comment applies to other patches for REL13, REL12, REL11, REL10.\r\n\r\n\r\n(2) typo in the commit message\r\n\r\nFROM:\r\nTo fix this problem, this change the reorder buffer so that...\r\nTO:\r\nTo fix this problem, this changes the reorder buffer so that...\r\n\r\n\r\n(3) typo in ReorderBufferProcessInitialXacts\r\n\r\n+ /*\r\n+ * Remove transactions that would have been processed and we don't need to\r\n+ * keep track off anymore.\r\n\r\n\r\nKindly 
change\r\nFROM:\r\nkeep track off\r\nTO:\r\nkeep track of\r\n\r\n\r\n\r\nBest Regards,\r\n\tTakamichi Osumi\r\n\r\n", "msg_date": "Tue, 19 Jul 2022 07:28:15 +0000", "msg_from": "\"osumi.takamichi@fujitsu.com\" <osumi.takamichi@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 19 Jul 2022 10:17:15 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Good work. I wonder without comments this may create a problem in the\n> future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> freeing the memory any less robust. Also, for consistency, we can use\n> a similar check based on xcnt in the SnapBuildRestore to free the\n> memory in the below code:\n> + /* set catalog modifying transactions */\n> + if (builder->catchange.xip)\n> + pfree(builder->catchange.xip);\n\nBut xip must be positive there. We can add a comment explains that.\n\n\n+\t * Array of transactions and subtransactions that had modified catalogs\n+\t * and were running when the snapshot was serialized.\n+\t *\n+\t * We normally rely on HEAP2_NEW_CID and XLOG_XACT_INVALIDATIONS records to\n+\t * know if the transaction has changed the catalog. But it could happen that\n+\t * the logical decoding decodes only the commit record of the transaction.\n+\t * This array keeps track of the transactions that have modified catalogs\n\n(Might be only me, but) \"track\" makes me think that xids are added and\nremoved by activities. 
On the other hand the array just remembers\ncatalog-modifying xids in the last life until all the xids in the list\nare gone.\n\n+ * and were running when serializing a snapshot, and this array is used to\n+ * add such transactions to the snapshot.\n+ *\n+ * This array is set once when restoring the snapshot, xids are removed\n\n(So I want to add \"only\" between \"are removed\").\n\n+ * from the array when decoding xl_running_xacts record, and then eventually\n+ * becomes empty.\n\n\n+ catchange_xip = ReorderBufferGetCatalogChangesXacts(builder->reorder);\n\ncatchange_xip is allocated in the current context, but ondisk is\nallocated in builder->context. I see it kind of inconsistent (even if\nthe current context is the same as builder->context).\n\n\n+ if (builder->committed.xcnt > 0)\n+ {\n\nIt seems to me committed.xip is always non-null, so we don't need this.\nI don't strongly object to doing that, though.\n\n- * Remove TXN from its containing list.\n+ * Remove TXN from its containing lists.\n\nThe comment body only describes about txn->nodes. I think we need to\nadd that for catchange_node.\n\n\n+ Assert((xcnt > 0) && (xcnt == rb->catchange_ntxns));\n\n(xcnt > 0) is obvious here (otherwise means dlist_foreach is broken..).\n(xcnt == rb->catchange_ntxns) is not what should be checked here. The\nassert just requires that catchange_txns and catchange_ntxns are\nconsistent so it should be checked just after dlist_empty.. 
I think.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:35:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 18, 2022 at 8:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jul 17, 2022 at 6:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 15, 2022 at 3:32 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> >\n> > I've attached patches for all supported branches including the master.\n> >\n>\n> For back branch patches,\n> * Wouldn't it be better to move purge logic into the function\n> SnapBuildPurge* function for the sake of consistency?\n\nAgreed.\n\n> * Do we really need ReorderBufferInitialXactsSetCatalogChanges()?\n> Can't we instead have a function similar to\n> SnapBuildXidHasCatalogChanges() as we have for the master branch? That\n> will avoid calling it when the snapshot\n> state is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT\n\nSeems a good idea. We would need to pass the information about\n(parsed->xinfo & XACT_XINFO_HAS_INVALS) to the function but probably\nwe can change ReorderBufferXidHasCatalogChanges() so that it checks\nthe RBTXN_HAS_CATALOG_CHANGES flag and then the initial running xacts\narray.\n\nBTW on backbranches, I think that the reason why we add\ninitial_running_xacts stuff to ReorderBuffer is that we cannot modify\nSnapBuild that could be serialized. Can we add a (private) array for\nthe initial running xacts in snapbuild.c instead of adding new\nvariables to ReorderBuffer? 
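For example, a rough standalone sketch of such a file-private array (all names here are hypothetical and only illustrate the remember-then-purge lifecycle, not actual patch code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef unsigned int TransactionId;

/*
 * Hypothetical file-private state, standing in for what could live in
 * snapbuild.c instead of in the exposed ReorderBuffer struct.
 */
static TransactionId *initial_running_xacts = NULL;
static int n_initial_running_xacts = 0;

/* Remember the xacts that were running when the serialized snapshot was restored. */
void
remember_initial_running_xacts(const TransactionId *xip, int xcnt)
{
    n_initial_running_xacts = xcnt;
    if (xcnt > 0)
    {
        initial_running_xacts = malloc(sizeof(TransactionId) * xcnt);
        memcpy(initial_running_xacts, xip, sizeof(TransactionId) * xcnt);
    }
}

/*
 * Drop entries below oldestRunningXid, as would happen while decoding an
 * xl_running_xacts record; the array eventually becomes empty.  A plain
 * ">=" stands in for a wraparound-aware comparison here.
 */
void
purge_initial_running_xacts(TransactionId oldestRunningXid)
{
    int keep = 0;

    for (int i = 0; i < n_initial_running_xacts; i++)
    {
        if (initial_running_xacts[i] >= oldestRunningXid)
            initial_running_xacts[keep++] = initial_running_xacts[i];
    }
    n_initial_running_xacts = keep;
}

int
n_remembered_xacts(void)
{
    return n_initial_running_xacts;
}
```

A real version would of course use the xid-wraparound-aware comparators rather than a plain comparison.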
That way, the code would become more\nconsistent with the changes on the master branch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:39:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 4:28 PM osumi.takamichi@fujitsu.com\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> On Sunday, July 17, 2022 9:59 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > I've attached patches for all supported branches including the master.\n> Hi,\n>\n>\n> Minor comments for REL14.\n>\n> (1) There are some foreign characters in the patches (in the commit message)\n>\n> When I had a look at your patch for back branches with some editor,\n> I could see some unfamiliar full-width characters like below two cases,\n> mainly around \"single quotes\" in the sentences.\n>\n> Could you please check the entire patches,\n> probably by some tool that helps you to detect this kind of characters ?\n>\n> * the 2nd paragraph of the commit message\n>\n> ...mark the transaction as containing catalog changes if it窶冱 in the list of the\n> initial running transactions ...\n>\n> * the 3rd paragraph of the same\n>\n> It doesn窶冲 have the information on which (sub) transaction has catalog changes....\n>\n> FYI, this comment applies to other patches for REL13, REL12, REL11, REL10.\n>\n>\n> (2) typo in the commit message\n>\n> FROM:\n> To fix this problem, this change the reorder buffer so that...\n> TO:\n> To fix this problem, this changes the reorder buffer so that...\n>\n>\n> (3) typo in ReorderBufferProcessInitialXacts\n>\n> + /*\n> + * Remove transactions that would have been processed and we don't need to\n> + * keep track off anymore.\n>\n>\n> Kindly change\n> FROM:\n> keep track off\n> TO:\n> keep track 
of\n\nThank you for the comments! I'll address these comments in the next\nversion patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:40:41 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 19 Jul 2022 16:02:26 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Tue, Jul 19, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > Good work. I wonder without comments this may create a problem in the\n> > future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > freeing the memory any less robust. Also, for consistency, we can use\n> > a similar check based on xcnt in the SnapBuildRestore to free the\n> > memory in the below code:\n> > + /* set catalog modifying transactions */\n> > + if (builder->catchange.xip)\n> > + pfree(builder->catchange.xip);\n> \n> I would hesitate to add comments about preventing the particular\n> optimization. I think we do null-pointer-check-then-pfree many place.\n> It seems to me that checking the array length before memcpy is more\n> natural than checking both the array length and the array existence\n> before pfree.\n\nAnyway, according to the commit message of 46ab07ffda, POSIX forbids\nmemcpy(NULL, NULL, 0). It seems to me that it is the cause of the\nfalse (or over) optimization. 
So if we add some comment, it would be\nfor memcpy, not pfree..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Jul 2022 16:57:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 19 Jul 2022 16:57:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Tue, 19 Jul 2022 16:02:26 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> > On Tue, Jul 19, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > Good work. I wonder without comments this may create a problem in the\n> > > future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > > freeing the memory any less robust. Also, for consistency, we can use\n> > > a similar check based on xcnt in the SnapBuildRestore to free the\n> > > memory in the below code:\n> > > + /* set catalog modifying transactions */\n> > > + if (builder->catchange.xip)\n> > > + pfree(builder->catchange.xip);\n> > \n> > I would hesitate to add comments about preventing the particular\n> > optimization. I think we do null-pointer-check-then-pfree many place.\n> > It seems to me that checking the array length before memcpy is more\n> > natural than checking both the array length and the array existence\n> > before pfree.\n> \n> Anyway according to commit message of 46ab07ffda, POSIX forbits\n> memcpy(NULL, NULL, 0). It seems to me that it is the cause of the\n> false (or over) optimization. 
So if we add some comment, it would be\n> for memcpy, not pfree..\n\nFor clarity, I meant that I don't think we need that comment.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 19 Jul 2022 17:13:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 4:35 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n\nThank you for the comments!\n\n>\n> At Tue, 19 Jul 2022 10:17:15 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > Good work. I wonder without comments this may create a problem in the\n> > future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > freeing the memory any less robust. Also, for consistency, we can use\n> > a similar check based on xcnt in the SnapBuildRestore to free the\n> > memory in the below code:\n> > + /* set catalog modifying transactions */\n> > + if (builder->catchange.xip)\n> > + pfree(builder->catchange.xip);\n>\n> But xip must be positive there. We can add a comment explains that.\n>\n\nYes, if we add the comment for it, probably we need to explain gcc's\noptimization but it seems to be too much to me.\n\n>\n> + * Array of transactions and subtransactions that had modified catalogs\n> + * and were running when the snapshot was serialized.\n> + *\n> + * We normally rely on HEAP2_NEW_CID and XLOG_XACT_INVALIDATIONS records to\n> + * know if the transaction has changed the catalog. But it could happen that\n> + * the logical decoding decodes only the commit record of the transaction.\n> + * This array keeps track of the transactions that have modified catalogs\n>\n> (Might be only me, but) \"track\" makes me think that xids are added and\n> removed by activities. 
On the other hand the array just remembers\n> catalog-modifying xids in the last life until the all xids in the list\n> gone.\n>\n> + * and were running when serializing a snapshot, and this array is used to\n> + * add such transactions to the snapshot.\n> + *\n> + * This array is set once when restoring the snapshot, xids are removed\n>\n> (So I want to add \"only\" between \"are removed\").\n>\n> + * from the array when decoding xl_running_xacts record, and then eventually\n> + * becomes empty.\n\nAgreed. Will fix.\n\n>\n>\n> + catchange_xip = ReorderBufferGetCatalogChangesXacts(builder->reorder);\n>\n> catchange_xip is allocated in the current context, but ondisk is\n> allocated in builder->context. I see it kind of inconsistent (even if\n> the current context is same with build->context).\n\nRight. I thought that since the lifetime of catchange_xip is short,\nuntil the end of the SnapBuildSerialize() function we didn't need to\nallocate it in builder->context. But given ondisk, we need to do that\nfor catchange_xip as well. Will fix it.\n\n>\n>\n> + if (builder->committed.xcnt > 0)\n> + {\n>\n> It seems to me comitted.xip is always non-null, so we don't need this.\n> I don't strongly object to do that, though.\n\nBut committed.xcnt could be 0, right? We don't need to copy anything\nby calling memcpy with size = 0 in this case. Also, it looks more\nconsistent with what we do for catchange_xcnt.\n\n>\n> - * Remove TXN from its containing list.\n> + * Remove TXN from its containing lists.\n>\n> The comment body only describes abut txn->nodes. I think we need to\n> add that for catchange_node.\n\nWill add.\n\n>\n>\n> + Assert((xcnt > 0) && (xcnt == rb->catchange_ntxns));\n>\n> (xcnt > 0) is obvious here (otherwise means dlist_foreach is broken..).\n> (xcnt == rb->catchange_ntxns) is not what should be checked here. The\n> assert just requires that catchange_txns and catchange_ntxns are\n> consistent so it should be checked just after dlist_empty.. 
I think.\n>\n\nIf we want to check if catchange_txns and catchange_ntxns are\nconsistent, should we check (xcnt == rb->catchange_ntxns) as well, no?\nThis function requires the caller to use rb->catchange_ntxns as the\nlength of the returned array. I think this assertion ensures that the\nactual length of the array is consistent with the length we\npre-calculated.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 17:31:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 1:43 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 19 Jul 2022 16:57:14 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n> > At Tue, 19 Jul 2022 16:02:26 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > > On Tue, Jul 19, 2022 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > Good work. I wonder without comments this may create a problem in the\n> > > > future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > > > freeing the memory any less robust. Also, for consistency, we can use\n> > > > a similar check based on xcnt in the SnapBuildRestore to free the\n> > > > memory in the below code:\n> > > > + /* set catalog modifying transactions */\n> > > > + if (builder->catchange.xip)\n> > > > + pfree(builder->catchange.xip);\n> > >\n> > > I would hesitate to add comments about preventing the particular\n> > > optimization. 
I think we do null-pointer-check-then-pfree many place.\n> > > It seems to me that checking the array length before memcpy is more\n> > > natural than checking both the array length and the array existence\n> > > before pfree.\n> >\n> > Anyway according to commit message of 46ab07ffda, POSIX forbits\n> > memcpy(NULL, NULL, 0). It seems to me that it is the cause of the\n> > false (or over) optimization. So if we add some comment, it would be\n> > for memcpy, not pfree..\n>\n> For clarilty, I meant that I don't think we need that comment.\n>\n\nFair enough. I think commit 46ab07ffda clearly explains why it is a\ngood idea to add a check as Sawada-San did in his latest version. I\nalso agree that we don't need any comment for this change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jul 2022 17:26:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 1:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 18, 2022 at 8:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Jul 17, 2022 at 6:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 15, 2022 at 3:32 PM shiy.fnst@fujitsu.com\n> > > <shiy.fnst@fujitsu.com> wrote:\n> > > >\n> > >\n> > > I've attached patches for all supported branches including the master.\n> > >\n> >\n> > For back branch patches,\n> > * Wouldn't it be better to move purge logic into the function\n> > SnapBuildPurge* function for the sake of consistency?\n>\n> Agreed.\n>\n> > * Do we really need ReorderBufferInitialXactsSetCatalogChanges()?\n> > Can't we instead have a function similar to\n> > SnapBuildXidHasCatalogChanges() as we have for the master branch? 
That\n> > will avoid calling it when the snapshot\n> > state is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT\n>\n> Seems a good idea. We would need to pass the information about\n> (parsed->xinfo & XACT_XINFO_HAS_INVALS) to the function but probably\n> we can change ReorderBufferXidHasCatalogChanges() so that it checks\n> the RBTXN_HAS_CATALOG_CHANGES flag and then the initial running xacts\n> array.\n>\n\nLet's try to keep this as similar to the master branch patch as possible.\n\n> BTW on backbranches, I think that the reason why we add\n> initial_running_xacts stuff to ReorderBuffer is that we cannot modify\n> SnapBuild that could be serialized. Can we add a (private) array for\n> the initial running xacts in snapbuild.c instead of adding new\n> variables to ReorderBuffer?\n>\n\nWhile thinking about this, I wonder if the current patch for back\nbranches can lead to an ABI break as it changes the exposed structure?\nIf so, it may be another reason to change it to some other way\nprobably as you are suggesting.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jul 2022 17:55:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 2:01 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 19, 2022 at 4:35 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>\n> >\n> >\n> > + Assert((xcnt > 0) && (xcnt == rb->catchange_ntxns));\n> >\n> > (xcnt > 0) is obvious here (otherwise means dlist_foreach is broken..).\n> > (xcnt == rb->catchange_ntxns) is not what should be checked here. The\n> > assert just requires that catchange_txns and catchange_ntxns are\n> > consistent so it should be checked just after dlist_empty.. 
I think.\n> >\n>\n> If we want to check if catchange_txns and catchange_ntxns are\n> consistent, should we check (xcnt == rb->catchange_ntxns) as well, no?\n> This function requires the caller to use rb->catchange_ntxns as the\n> length of the returned array. I think this assertion ensures that the\n> actual length of the array is consistent with the length we\n> pre-calculated.\n>\n\nRight, so, I think it is better to keep this assertion but remove\n(xcnt > 0) part as pointed out by Horiguchi-San.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 19 Jul 2022 18:12:22 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 9:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 19, 2022 at 1:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 18, 2022 at 8:49 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sun, Jul 17, 2022 at 6:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Jul 15, 2022 at 3:32 PM shiy.fnst@fujitsu.com\n> > > > <shiy.fnst@fujitsu.com> wrote:\n> > > > >\n> > > >\n> > > > I've attached patches for all supported branches including the master.\n> > > >\n> > >\n> > > For back branch patches,\n> > > * Wouldn't it be better to move purge logic into the function\n> > > SnapBuildPurge* function for the sake of consistency?\n> >\n> > Agreed.\n> >\n> > > * Do we really need ReorderBufferInitialXactsSetCatalogChanges()?\n> > > Can't we instead have a function similar to\n> > > SnapBuildXidHasCatalogChanges() as we have for the master branch? That\n> > > will avoid calling it when the snapshot\n> > > state is SNAPBUILD_START or SNAPBUILD_BUILDING_SNAPSHOT\n> >\n> > Seems a good idea. 
We would need to pass the information about\n> > (parsed->xinfo & XACT_XINFO_HAS_INVALS) to the function but probably\n> > we can change ReorderBufferXidHasCatalogChanges() so that it checks\n> > the RBTXN_HAS_CATALOG_CHANGES flag and then the initial running xacts\n> > array.\n> >\n>\n> Let's try to keep this as much similar to the master branch patch as possible.\n>\n> > BTW on backbranches, I think that the reason why we add\n> > initial_running_xacts stuff to ReorderBuffer is that we cannot modify\n> > SnapBuild that could be serialized. Can we add a (private) array for\n> > the initial running xacts in snapbuild.c instead of adding new\n> > variables to ReorderBuffer?\n> >\n>\n> While thinking about this, I wonder if the current patch for back\n> branches can lead to an ABI break as it changes the exposed structure?\n> If so, it may be another reason to change it to some other way\n> probably as you are suggesting.\n\nYeah, it changes the size of ReorderBuffer, which is not good.\nChanging the function names and arguments would also break ABI. So\nprobably we cannot do the above idea of removing\nReorderBufferInitialXactsSetCatalogChanges() as well.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Jul 2022 22:58:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 19 Jul 2022 17:31:07 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Tue, Jul 19, 2022 at 4:35 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Tue, 19 Jul 2022 10:17:15 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > Good work. I wonder without comments this may create a problem in the\n> > > future. 
OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > > freeing the memory any less robust. Also, for consistency, we can use\n> > > a similar check based on xcnt in the SnapBuildRestore to free the\n> > > memory in the below code:\n> > > + /* set catalog modifying transactions */\n> > > + if (builder->catchange.xip)\n> > > + pfree(builder->catchange.xip);\n> >\n> > But xip must be positive there. We can add a comment explains that.\n> >\n>\n> Yes, if we add the comment for it, probably we need to explain a gcc's\n> optimization but it seems to be too much to me.\n\nAh, sorry. I confused it with another place, in SnapBuildPurgeCommitedTxn.\nI agree with you that we don't need an additional comment *there*.\n\n> > + catchange_xip = ReorderBufferGetCatalogChangesXacts(builder->reorder);\n> >\n> > catchange_xip is allocated in the current context, but ondisk is\n> > allocated in builder->context. I see it kind of inconsistent (even if\n> > the current context is same with build->context).\n>\n> Right. I thought that since the lifetime of catchange_xip is short,\n> until the end of SnapBuildSerialize() function we didn't need to\n> allocate it in builder->context. But given ondisk, we need to do that\n> for catchange_xip as well. Will fix it.\n\nThanks.\n\n> > + if (builder->committed.xcnt > 0)\n> > + {\n> >\n> > It seems to me comitted.xip is always non-null, so we don't need this.\n> > I don't strongly object to do that, though.\n>\n> But committed.xcnt could be 0, right? We don't need to copy anything\n> by calling memcpy with size = 0 in this case. Also, it looks more\n> consistent with what we do for catchange_xcnt.\n\nMmm. The patch changed that behavior. AllocateSnapshotBuilder always\nallocates the array with a fixed size. SnapBuildAddCommittedTxn still\nassumes builder->committed.xip is non-NULL. SnapBuildRestore *kept*\nondisk.builder.commited.xip populated with a valid array pointer. 
But\nthe patch allows committed.xip to be NULL, and in that case\nSnapBuildAddCommitedTxn calls repalloc(NULL), which triggers an assertion\nfailure.\n\n> > + Assert((xcnt > 0) && (xcnt == rb->catchange_ntxns));\n> >\n> > (xcnt > 0) is obvious here (otherwise means dlist_foreach is broken..).\n> > (xcnt == rb->catchange_ntxns) is not what should be checked here. The\n> > assert just requires that catchange_txns and catchange_ntxns are\n> > consistent so it should be checked just after dlist_empty.. I think.\n> >\n>\n> If we want to check if catchange_txns and catchange_ntxns are\n> consistent, should we check (xcnt == rb->catchange_ntxns) as well, no?\n> This function requires the caller to use rb->catchange_ntxns as the\n> length of the returned array. I think this assertion ensures that the\n> actual length of the array is consistent with the length we\n> pre-calculated.\n\nSorry again. Please forget the comment about xcnt == rb->catchange_ntxns..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 Jul 2022 09:58:08 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 9:58 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 19 Jul 2022 17:31:07 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Tue, Jul 19, 2022 at 4:35 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > At Tue, 19 Jul 2022 10:17:15 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > > Good work. I wonder without comments this may create a problem in the\n> > > > future. OTOH, I don't see adding a check \"catchange.xcnt > 0\" before\n> > > > freeing the memory any less robust. 
Also, for consistency, we can use\n> > > > a similar check based on xcnt in the SnapBuildRestore to free the\n> > > > memory in the below code:\n> > > > + /* set catalog modifying transactions */\n> > > > + if (builder->catchange.xip)\n> > > > + pfree(builder->catchange.xip);\n> > >\n> > > But xip must be positive there. We can add a comment explains that.\n> > >\n> >\n> > Yes, if we add the comment for it, probably we need to explain a gcc's\n> > optimization but it seems to be too much to me.\n>\n> Ah, sorry. I confused with other place in SnapBuildPurgeCommitedTxn.\n> I agree to you, that we don't need additional comment *there*.\n>\n> > > + catchange_xip = ReorderBufferGetCatalogChangesXacts(builder->reorder);\n> > >\n> > > catchange_xip is allocated in the current context, but ondisk is\n> > > allocated in builder->context. I see it kind of inconsistent (even if\n> > > the current context is same with build->context).\n> >\n> > Right. I thought that since the lifetime of catchange_xip is short,\n> > until the end of SnapBuildSerialize() function we didn't need to\n> > allocate it in builder->context. But given ondisk, we need to do that\n> > for catchange_xip as well. Will fix it.\n>\n> Thanks.\n>\n> > > + if (builder->committed.xcnt > 0)\n> > > + {\n> > >\n> > > It seems to me comitted.xip is always non-null, so we don't need this.\n> > > I don't strongly object to do that, though.\n> >\n> > But committed.xcnt could be 0, right? We don't need to copy anything\n> > by calling memcpy with size = 0 in this case. Also, it looks more\n> > consistent with what we do for catchange_xcnt.\n>\n> Mmm. the patch changed that behavior. AllocateSnapshotBuilder always\n> allocate the array with a fixed size. SnapBuildAddCommittedTxn still\n > assumes builder->committed.xip is non-NULL. SnapBuildRestore *kept*\n> ondisk.builder.commited.xip populated with a valid array pointer. 
But\n> the patch allows committed.xip be NULL, thus in that case,\n> SnapBuildAddCommitedTxn calls repalloc(NULL) which triggers assertion\n> failure.\n\nIIUC the patch doesn't allow committed.xip to be NULL since we don't\noverwrite it if builder->committed.xcnt is 0 (i.e.,\nondisk.builder.committed.xip is NULL):\n\n builder->committed.xcnt = ondisk.builder.committed.xcnt;\n /* We only allocated/stored xcnt, not xcnt_space xids ! */\n /* don't overwrite preallocated xip, if we don't have anything here */\n if (builder->committed.xcnt > 0)\n {\n pfree(builder->committed.xip);\n builder->committed.xcnt_space = ondisk.builder.committed.xcnt;\n builder->committed.xip = ondisk.builder.committed.xip;\n }\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 20 Jul 2022 10:58:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 19, 2022 at 7:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Jul 19, 2022 at 9:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jul 19, 2022 at 1:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > > BTW on backbranches, I think that the reason why we add\n> > > initial_running_xacts stuff to ReorderBuffer is that we cannot modify\n> > > SnapBuild that could be serialized. 
Can we add a (private) array for\n> > > the initial running xacts in snapbuild.c instead of adding new\n> > > variables to ReorderBuffer?\n> > >\n> >\n> > While thinking about this, I wonder if the current patch for back\n> > branches can lead to an ABI break as it changes the exposed structure?\n> > If so, it may be another reason to change it to some other way\n> > probably as you are suggesting.\n>\n> Yeah, it changes the size of ReorderBuffer, which is not good.\n>\n\nSo, are you planning to give a try with your idea of making a private\narray for the initial running xacts? I am not sure but I guess you are\nproposing to add it in SnapBuild structure, if so, that seems safe as\nthat structure is not exposed.\n\n> Changing the function names and arguments would also break ABI. So\n> probably we cannot do the above idea of removing\n> ReorderBufferInitialXactsSetCatalogChanges() as well.\n>\n\nWhy do you think we can't remove\nReorderBufferInitialXactsSetCatalogChanges() from the back branch\npatch? 
I think we don't need to change the existing function\nReorderBufferXidHasCatalogChanges() but instead can have a wrapper\nlike SnapBuildXidHasCatalogChanges() similar to master branch patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jul 2022 08:41:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 12:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 19, 2022 at 7:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Jul 19, 2022 at 9:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 19, 2022 at 1:10 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > > BTW on backbranches, I think that the reason why we add\n> > > > initial_running_xacts stuff to ReorderBuffer is that we cannot modify\n> > > > SnapBuild that could be serialized. Can we add a (private) array for\n> > > > the initial running xacts in snapbuild.c instead of adding new\n> > > > variables to ReorderBuffer?\n> > > >\n> > >\n> > > While thinking about this, I wonder if the current patch for back\n> > > branches can lead to an ABI break as it changes the exposed structure?\n> > > If so, it may be another reason to change it to some other way\n> > > probably as you are suggesting.\n> >\n> > Yeah, it changes the size of ReorderBuffer, which is not good.\n> >\n>\n> So, are you planning to give a try with your idea of making a private\n> array for the initial running xacts?\n\nYes.\n\n> I am not sure but I guess you are\n> proposing to add it in SnapBuild structure, if so, that seems safe as\n> that structure is not exposed.\n\nWe cannot add it in SnapBuild as it gets serialized to the disk.\n\n>\n> > Changing the function names and arguments would also break ABI. 
So\n> > probably we cannot do the above idea of removing\n> > ReorderBufferInitialXactsSetCatalogChanges() as well.\n> >\n>\n> Why do you think we can't remove\n> ReorderBufferInitialXactsSetCatalogChanges() from the back branch\n> patch? I think we don't need to change the existing function\n> ReorderBufferXidHasCatalogChanges() but instead can have a wrapper\n> like SnapBuildXidHasCatalogChanges() similar to master branch patch.\n\nIIUC we need to change SnapBuildCommitTxn() but it's exposed.\n\nCurrently, we call like DecodeCommit() -> SnapBuildCommitTxn() ->\nReorderBufferXidHasCatalogChanges(). If we have a wrapper function, we\ncall like DecodeCommit() -> SnapBuildCommitTxn() ->\nSnapBuildXidHasCatalogChanges() ->\nReorderBufferXidHasCatalogChanges(). In\nSnapBuildXidHasCatalogChanges(), we need to check if the transaction\nhas XACT_XINFO_HAS_INVALS, which means DecodeCommit() needs to pass\neither parsed->xinfo or (parsed->xinfo & XACT_XINFO_HAS_INVALS != 0)\ndown to SnapBuildXidHasCatalogChanges(). However, since\nSnapBuildCommitTxn(), between DecodeCommit() and\nSnapBuildXidHasCatalogChanges(), is exposed we cannot change it.\n\nAnother idea would be to have functions, say\nSnapBuildCommitTxnWithXInfo() and SnapBuildCommitTxn_ext(). 
The latter\ndoes actual work of handling transaction commits and both\nSnapBuildCommitTxn() and SnapBuildCommit() call\nSnapBuildCommitTxnWithXInfo() with different arguments.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 20 Jul 2022 12:30:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 12:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jul 19, 2022 at 7:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Why do you think we can't remove\n> > ReorderBufferInitialXactsSetCatalogChanges() from the back branch\n> > patch? I think we don't need to change the existing function\n> > ReorderBufferXidHasCatalogChanges() but instead can have a wrapper\n> > like SnapBuildXidHasCatalogChanges() similar to master branch patch.\n>\n> IIUC we need to change SnapBuildCommitTxn() but it's exposed.\n>\n> Currently, we call like DecodeCommit() -> SnapBuildCommitTxn() ->\n> ReorderBufferXidHasCatalogChanges(). If we have a wrapper function, we\n> call like DecodeCommit() -> SnapBuildCommitTxn() ->\n> SnapBuildXidHasCatalogChanges() ->\n> ReorderBufferXidHasCatalogChanges(). In\n> SnapBuildXidHasCatalogChanges(), we need to check if the transaction\n> has XACT_XINFO_HAS_INVALS, which means DecodeCommit() needs to pass\n> either parsed->xinfo or (parsed->xinfo & XACT_XINFO_HAS_INVALS != 0)\n> down to SnapBuildXidHasCatalogChanges(). 
However, since\n> SnapBuildCommitTxn(), between DecodeCommit() and\n> SnapBuildXidHasCatalogChanges(), is exposed we cannot change it.\n>\n\nAgreed.\n\n> Another idea would be to have functions, say\n> SnapBuildCommitTxnWithXInfo() and SnapBuildCommitTxn_ext(). The latter\n> does actual work of handling transaction commits and both\n> SnapBuildCommitTxn() and SnapBuildCommit() call\n> SnapBuildCommitTxnWithXInfo() with different arguments.\n>\n\nDo you want to say DecodeCommit() instead of SnapBuildCommit() in\nabove para? Yet another idea could be to have another flag\nRBTXN_HAS_INVALS which will be set by DecodeCommit for top-level TXN.\nThen, we can retrieve it even for each of the subtxn's if and when\nrequired.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jul 2022 10:48:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Wed, 20 Jul 2022 10:58:16 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in \n> On Wed, Jul 20, 2022 at 9:58 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Mmm. the patch changed that behavior. AllocateSnapshotBuilder always\n> > allocate the array with a fixed size. SnapBuildAddCommittedTxn still\n> > assumes builder->committed.xip is non-NULL. SnapBuildRestore *kept*\n> > ondisk.builder.commited.xip populated with a valid array pointer. But\n> > the patch allows committed.xip be NULL, thus in that case,\n> > SnapBuildAddCommitedTxn calls repalloc(NULL) which triggers assertion\n> > failure.\n> \n> IIUC the patch doesn't allow committed.xip to be NULL since we don't\n> overwrite it if builder->committed.xcnt is 0 (i.e.,\n> ondisk.builder.committed.xip is NULL):\n\nI meant that ondisk.builder.committed.xip can be NULL.. But looking\nagain that cannot be. 
I don't understand what I was looking at at\nthat time.\n\nSo, sorry for the noise.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 20 Jul 2022 16:16:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 4:16 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 20 Jul 2022 10:58:16 +0900, Masahiko Sawada <sawada.mshk@gmail.com> wrote in\n> > On Wed, Jul 20, 2022 at 9:58 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > Mmm. the patch changed that behavior. AllocateSnapshotBuilder always\n> > > allocate the array with a fixed size. SnapBuildAddCommittedTxn still\n> > > assumes builder->committed.xip is non-NULL. SnapBuildRestore *kept*\n> > > ondisk.builder.commited.xip populated with a valid array pointer. But\n> > > the patch allows committed.xip be NULL, thus in that case,\n> > > SnapBuildAddCommitedTxn calls repalloc(NULL) which triggers assertion\n> > > failure.\n> >\n> > IIUC the patch doesn't allow committed.xip to be NULL since we don't\n> > overwrite it if builder->committed.xcnt is 0 (i.e.,\n> > ondisk.builder.committed.xip is NULL):\n>\n> I meant that ondisk.builder.committed.xip can be NULL.. But looking\n> again that cannot be. I don't understand what I was looking at at\n> that time.\n>\n> So, sorry for the noise.\n\nNo problem. 
Thank you for your review and comments!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 20 Jul 2022 16:20:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 2:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 12:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jul 19, 2022 at 7:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Why do you think we can't remove\n> > > ReorderBufferInitialXactsSetCatalogChanges() from the back branch\n> > > patch? I think we don't need to change the existing function\n> > > ReorderBufferXidHasCatalogChanges() but instead can have a wrapper\n> > > like SnapBuildXidHasCatalogChanges() similar to master branch patch.\n> >\n> > IIUC we need to change SnapBuildCommitTxn() but it's exposed.\n> >\n> > Currently, we call like DecodeCommit() -> SnapBuildCommitTxn() ->\n> > ReorderBufferXidHasCatalogChanges(). If we have a wrapper function, we\n> > call like DecodeCommit() -> SnapBuildCommitTxn() ->\n> > SnapBuildXidHasCatalogChanges() ->\n> > ReorderBufferXidHasCatalogChanges(). In\n> > SnapBuildXidHasCatalogChanges(), we need to check if the transaction\n> > has XACT_XINFO_HAS_INVALS, which means DecodeCommit() needs to pass\n> > either parsed->xinfo or (parsed->xinfo & XACT_XINFO_HAS_INVALS != 0)\n> > down to SnapBuildXidHasCatalogChanges(). 
However, since\n> > SnapBuildCommitTxn(), between DecodeCommit() and\n> > SnapBuildXidHasCatalogChanges(), is exposed we cannot change it.\n> >\n>\n> Agreed.\n>\n> > Another idea would be to have functions, say\n> > SnapBuildCommitTxnWithXInfo() and SnapBuildCommitTxn_ext(). The latter\n> > does actual work of handling transaction commits and both\n> > SnapBuildCommitTxn() and SnapBuildCommit() call\n> > SnapBuildCommitTxnWithXInfo() with different arguments.\n> >\n>\n> Do you want to say DecodeCommit() instead of SnapBuildCommit() in\n> above para?\n\nI meant that we will call like DecodeCommit() ->\nSnapBuildCommitTxnWithXInfo() -> SnapBuildCommitTxn_ext(has_invals =\ntrue) -> SnapBuildXidHasCatalogChanges(has_invals = true) -> ... If\nSnapBuildCommitTxn() gets called, it calls SnapBuildCommitTxn_ext()\nwith has_invals = false and behaves the same as before.\n\n> Yet another idea could be to have another flag\n> RBTXN_HAS_INVALS which will be set by DecodeCommit for top-level TXN.\n> Then, we can retrieve it even for each of the subtxn's if and when\n> required.\n\nDo you mean that when checking if the subtransaction has catalog\nchanges, we check if its top-level XID has this new flag? Why do we\nneed the new flag?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 20 Jul 2022 16:57:35 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 2:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > > Another idea would be to have functions, say\n> > > SnapBuildCommitTxnWithXInfo() and SnapBuildCommitTxn_ext(). 
The latter\n> > > does actual work of handling transaction commits and both\n> > > SnapBuildCommitTxn() and SnapBuildCommit() call\n> > > SnapBuildCommitTxnWithXInfo() with different arguments.\n> > >\n> >\n> > Do you want to say DecodeCommit() instead of SnapBuildCommit() in\n> > above para?\n>\n> I meant that we will call like DecodeCommit() ->\n> SnapBuildCommitTxnWithXInfo() -> SnapBuildCommitTxn_ext(has_invals =\n> true) -> SnapBuildXidHasCatalogChanges(has_invals = true) -> ... If\n> SnapBuildCommitTxn() gets called, it calls SnapBuildCommitTxn_ext()\n> with has_invals = false and behaves the same as before.\n>\n\nOkay, understood. This will work.\n\n> > Yet another idea could be to have another flag\n> > RBTXN_HAS_INVALS which will be set by DecodeCommit for top-level TXN.\n> > Then, we can retrieve it even for each of the subtxn's if and when\n> > required.\n>\n> Do you mean that when checking if the subtransaction has catalog\n> changes, we check if its top-level XID has this new flag?\n>\n\nYes.\n\n> Why do we\n> need the new flag?\n>\n\nThis is required if we don't want to introduce a new set of functions\nas you proposed above. 
I am not sure which one is better w.r.t back\npatching effort later but it seems to me using flag stuff would make\nfuture back patches easier if we make any changes in\nSnapBuildCommitTxn.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 20 Jul 2022 14:20:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 20, 2022 at 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 2:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 20, 2022 at 9:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > Another idea would be to have functions, say\n> > > > SnapBuildCommitTxnWithXInfo() and SnapBuildCommitTxn_ext(). The latter\n> > > > does actual work of handling transaction commits and both\n> > > > SnapBuildCommitTxn() and SnapBuildCommit() call\n> > > > SnapBuildCommitTxnWithXInfo() with different arguments.\n> > > >\n> > >\n> > > Do you want to say DecodeCommit() instead of SnapBuildCommit() in\n> > > above para?\n> >\n> > I meant that we will call like DecodeCommit() ->\n> > SnapBuildCommitTxnWithXInfo() -> SnapBuildCommitTxn_ext(has_invals =\n> > true) -> SnapBuildXidHasCatalogChanges(has_invals = true) -> ... If\n> > SnapBuildCommitTxn() gets called, it calls SnapBuildCommitTxn_ext()\n> > with has_invals = false and behaves the same as before.\n> >\n>\n> Okay, understood. 
This will work.\n>\n> > > Yet another idea could be to have another flag\n> > > RBTXN_HAS_INVALS which will be set by DecodeCommit for top-level TXN.\n> > > Then, we can retrieve it even for each of the subtxn's if and when\n> > > required.\n> >\n> > Do you mean that when checking if the subtransaction has catalog\n> > changes, we check if its top-level XID has this new flag?\n> >\n>\n> Yes.\n>\n> > Why do we\n> > need the new flag?\n> >\n>\n> This is required if we don't want to introduce a new set of functions\n> as you proposed above. I am not sure which one is better w.r.t back\n> patching effort later but it seems to me using flag stuff would make\n> future back patches easier if we make any changes in\n> SnapBuildCommitTxn.\n\nUnderstood.\n\nI've implemented this idea as well for discussion. Both patches have\nthe common change to remember the initial running transactions and to\npurge them when decoding xl_running_xacts records. The difference is\nhow to mark the transactions as needing to be added to the snapshot.\n\nIn v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during.patch,\nwhen the transaction is in the initial running xact list and its\ncommit record has XINFO_HAS_INVAL flag, we mark both the top\ntransaction and its all subtransactions as containing catalog changes\n(which also means to create ReorderBufferTXN entries for them). These\ntransactions are added to the snapshot in SnapBuildCommitTxn() since\nReorderBufferXidHasCatalogChanges () for them returns true.\n\nIn poc_mark_top_txn_has_inval.patch, when the transaction is in the\ninitial running xacts list and its commit record has XINFO_HAS_INVALS\nflag, we set a new flag, say RBTXN_COMMIT_HAS_INVALS, only to the top\ntransaction. 
In SnapBuildCommitTxn(), we add all subtransactions to\nthe snapshot without checking ReorderBufferXidHasCatalogChanges() for\nsubtransactions if its top transaction has the RBTXN_COMMIT_HAS_INVALS\nflag.\n\nA difference between the two ideas is the scope of changes: the former\nchanges only snapbuild.c but the latter changes both snapbuild.c and\nreorderbuffer.c. Moreover, while the former uses the existing flag,\nthe latter adds a new flag to the reorder buffer for dealing with only\nthis case. I think the former idea is simpler in terms of that. But,\nan advantage of the latter idea is that the latter idea can save to\ncreate ReorderBufferTXN entries for subtransactions.\n\nOverall I prefer the former for now but I'd like to hear what others think.\n\nFWIW, I didn't try the idea of adding wrapper functions since it would\nbe costly in terms of back patching effort in the future.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 22 Jul 2022 15:17:50 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 22, 2022 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 20, 2022 at 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > This is required if we don't want to introduce a new set of functions\n> > as you proposed above. I am not sure which one is better w.r.t back\n> > patching effort later but it seems to me using flag stuff would make\n> > future back patches easier if we make any changes in\n> > SnapBuildCommitTxn.\n>\n> Understood.\n>\n> I've implemented this idea as well for discussion. 
Both patches have\n> the common change to remember the initial running transactions and to\n> purge them when decoding xl_running_xacts records. The difference is\n> how to mark the transactions as needing to be added to the snapshot.\n>\n> In v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during.patch,\n> when the transaction is in the initial running xact list and its\n> commit record has XINFO_HAS_INVAL flag, we mark both the top\n> transaction and its all subtransactions as containing catalog changes\n> (which also means to create ReorderBufferTXN entries for them). These\n> transactions are added to the snapshot in SnapBuildCommitTxn() since\n> ReorderBufferXidHasCatalogChanges () for them returns true.\n>\n> In poc_mark_top_txn_has_inval.patch, when the transaction is in the\n> initial running xacts list and its commit record has XINFO_HAS_INVALS\n> flag, we set a new flag, say RBTXN_COMMIT_HAS_INVALS, only to the top\n> transaction.\n>\n\nIt seems that the patch has missed the part to check if the xid is in\nthe initial running xacts list?\n\n> In SnapBuildCommitTxn(), we add all subtransactions to\n> the snapshot without checking ReorderBufferXidHasCatalogChanges() for\n> subtransactions if its top transaction has the RBTXN_COMMIT_HAS_INVALS\n> flag.\n>\n> A difference between the two ideas is the scope of changes: the former\n> changes only snapbuild.c but the latter changes both snapbuild.c and\n> reorderbuffer.c. Moreover, while the former uses the existing flag,\n> the latter adds a new flag to the reorder buffer for dealing with only\n> this case. I think the former idea is simpler in terms of that. 
But,\n> an advantage of the latter idea is that the latter idea can save to\n> create ReorderBufferTXN entries for subtransactions.\n>\n> Overall I prefer the former for now but I'd like to hear what others think.\n>\n\nI agree that the latter idea can have better performance in extremely\nspecial scenarios but introducing a new flag for the same sounds a bit\nugly to me. So, I would also prefer to go with the former idea,\nhowever, I would also like to hear what Horiguchi-San and others have\nto say.\n\nFew comments on v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during:\n1.\n+void\n+SnapBuildInitialXactSetCatalogChanges(SnapBuild *builder, TransactionId xid,\n+ int subxcnt, TransactionId *subxacts,\n+ XLogRecPtr lsn)\n+{\n\nI think it is better to name this function as\nSnapBuildXIDSetCatalogChanges as we use this to mark a particular\ntransaction as having catalog changes.\n\n2. Changed/added a few comments in the attached.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 23 Jul 2022 17:02:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, Jul 23, 2022 at 8:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 22, 2022 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 20, 2022 at 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 20, 2022 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > This is required if we don't want to introduce a new set of functions\n> > > as you proposed above. 
I am not sure which one is better w.r.t back\n> > > patching effort later but it seems to me using flag stuff would make\n> > > future back patches easier if we make any changes in\n> > > SnapBuildCommitTxn.\n> >\n> > Understood.\n> >\n> > I've implemented this idea as well for discussion. Both patches have\n> > the common change to remember the initial running transactions and to\n> > purge them when decoding xl_running_xacts records. The difference is\n> > how to mark the transactions as needing to be added to the snapshot.\n> >\n> > In v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during.patch,\n> > when the transaction is in the initial running xact list and its\n> > commit record has XINFO_HAS_INVAL flag, we mark both the top\n> > transaction and its all subtransactions as containing catalog changes\n> > (which also means to create ReorderBufferTXN entries for them). These\n> > transactions are added to the snapshot in SnapBuildCommitTxn() since\n> > ReorderBufferXidHasCatalogChanges () for them returns true.\n> >\n> > In poc_mark_top_txn_has_inval.patch, when the transaction is in the\n> > initial running xacts list and its commit record has XINFO_HAS_INVALS\n> > flag, we set a new flag, say RBTXN_COMMIT_HAS_INVALS, only to the top\n> > transaction.\n> >\n>\n> It seems that the patch has missed the part to check if the xid is in\n> the initial running xacts list?\n\nOops, right.\n\n>\n> > In SnapBuildCommitTxn(), we add all subtransactions to\n> > the snapshot without checking ReorderBufferXidHasCatalogChanges() for\n> > subtransactions if its top transaction has the RBTXN_COMMIT_HAS_INVALS\n> > flag.\n> >\n> > A difference between the two ideas is the scope of changes: the former\n> > changes only snapbuild.c but the latter changes both snapbuild.c and\n> > reorderbuffer.c. Moreover, while the former uses the existing flag,\n> > the latter adds a new flag to the reorder buffer for dealing with only\n> > this case. 
I think the former idea is simpler in terms of that. But,\n> > an advantage of the latter idea is that the latter idea can save to\n> > create ReorderBufferTXN entries for subtransactions.\n> >\n> > Overall I prefer the former for now but I'd like to hear what others think.\n> >\n>\n> I agree that the latter idea can have better performance in extremely\n> special scenarios but introducing a new flag for the same sounds a bit\n> ugly to me. So, I would also prefer to go with the former idea,\n> however, I would also like to hear what Horiguchi-San and others have\n> to say.\n\nAgreed.\n\n>\n> Few comments on v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during:\n> 1.\n> +void\n> +SnapBuildInitialXactSetCatalogChanges(SnapBuild *builder, TransactionId xid,\n> + int subxcnt, TransactionId *subxacts,\n> + XLogRecPtr lsn)\n> +{\n>\n> I think it is better to name this function as\n> SnapBuildXIDSetCatalogChanges as we use this to mark a particular\n> transaction as having catalog changes.\n>\n> 2. 
Changed/added a few comments in the attached.\n\nThank you for the comments.\n\nI've attached updated version patches for the master and back branches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 25 Jul 2022 10:45:46 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 25, 2022 at 10:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Jul 23, 2022 at 8:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 22, 2022 at 11:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Jul 20, 2022 at 5:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Jul 20, 2022 at 1:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > This is required if we don't want to introduce a new set of functions\n> > > > as you proposed above. I am not sure which one is better w.r.t back\n> > > > patching effort later but it seems to me using flag stuff would make\n> > > > future back patches easier if we make any changes in\n> > > > SnapBuildCommitTxn.\n> > >\n> > > Understood.\n> > >\n> > > I've implemented this idea as well for discussion. Both patches have\n> > > the common change to remember the initial running transactions and to\n> > > purge them when decoding xl_running_xacts records. 
The difference is\n> > > how to mark the transactions as needing to be added to the snapshot.\n> > >\n> > > In v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during.patch,\n> > > when the transaction is in the initial running xact list and its\n> > > commit record has XINFO_HAS_INVAL flag, we mark both the top\n> > > transaction and its all subtransactions as containing catalog changes\n> > > (which also means to create ReorderBufferTXN entries for them). These\n> > > transactions are added to the snapshot in SnapBuildCommitTxn() since\n> > > ReorderBufferXidHasCatalogChanges () for them returns true.\n> > >\n> > > In poc_mark_top_txn_has_inval.patch, when the transaction is in the\n> > > initial running xacts list and its commit record has XINFO_HAS_INVALS\n> > > flag, we set a new flag, say RBTXN_COMMIT_HAS_INVALS, only to the top\n> > > transaction.\n> > >\n> >\n> > It seems that the patch has missed the part to check if the xid is in\n> > the initial running xacts list?\n>\n> Oops, right.\n>\n> >\n> > > In SnapBuildCommitTxn(), we add all subtransactions to\n> > > the snapshot without checking ReorderBufferXidHasCatalogChanges() for\n> > > subtransactions if its top transaction has the RBTXN_COMMIT_HAS_INVALS\n> > > flag.\n> > >\n> > > A difference between the two ideas is the scope of changes: the former\n> > > changes only snapbuild.c but the latter changes both snapbuild.c and\n> > > reorderbuffer.c. Moreover, while the former uses the existing flag,\n> > > the latter adds a new flag to the reorder buffer for dealing with only\n> > > this case. I think the former idea is simpler in terms of that. 
But,\n> > > an advantage of the latter idea is that the latter idea can save to\n> > > create ReorderBufferTXN entries for subtransactions.\n> > >\n> > > Overall I prefer the former for now but I'd like to hear what others think.\n> > >\n> >\n> > I agree that the latter idea can have better performance in extremely\n> > special scenarios but introducing a new flag for the same sounds a bit\n> > ugly to me. So, I would also prefer to go with the former idea,\n> > however, I would also like to hear what Horiguchi-San and others have\n> > to say.\n>\n> Agreed.\n>\n> >\n> > Few comments on v7-0001-Fix-catalog-lookup-with-the-wrong-snapshot-during:\n> > 1.\n> > +void\n> > +SnapBuildInitialXactSetCatalogChanges(SnapBuild *builder, TransactionId xid,\n> > + int subxcnt, TransactionId *subxacts,\n> > + XLogRecPtr lsn)\n> > +{\n> >\n> > I think it is better to name this function as\n> > SnapBuildXIDSetCatalogChanges as we use this to mark a particular\n> > transaction as having catalog changes.\n> >\n> > 2. Changed/added a few comments in the attached.\n>\n> Thank you for the comments.\n>\n> I've attached updated version patches for the master and back branches.\n\nI've attached the patch for REl15 that I forgot.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 25 Jul 2022 14:55:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi,\r\n\r\nI did some performance test for the master branch patch (based on v6 patch) to\r\nsee if the bsearch() added by this patch will cause any overhead.\r\n\r\nI tested them three times and took the average.\r\n\r\nThe results are as follows, and attach the bar chart.\r\n\r\ncase 1\r\n---------\r\nNo catalog modifying transaction.\r\nDecode 800k pgbench transactions. 
(8 clients, 100k transactions per client)\r\n\r\nmaster 7.5417\r\npatched 7.4107\r\n\r\ncase 2\r\n---------\r\nThere's one catalog modifying transaction.\r\nDecode 100k/500k/1M transactions.\r\n\r\n 100k 500k 1M\r\nmaster 0.0576 0.1491 0.4346\r\npatched 0.0586 0.1500 0.4344\r\n\r\ncase 3\r\n---------\r\nThere are 64 catalog modifying transactions.\r\nDecode 100k/500k/1M transactions.\r\n\r\n 100k 500k 1M\r\nmaster 0.0600 0.1666 0.4876\r\npatched 0.0620 0.1653 0.4795\r\n\r\n(Because the result of case 3 shows that there is an overhead of about 3% in the\r\ncase decoding 100k transactions with 64 catalog modifying transactions, I\r\ntested the next run of 100k xacts with or without catalog modifying\r\ntransactions, to see if it affects subsequent decoding.)\r\n\r\ncase 4.1\r\n---------\r\nAfter the test steps in case 3 (64 catalog modifying transactions, decode 100k\r\ntransactions), run 100k xacts and then decode.\r\n\r\nmaster 0.3699\r\npatched 0.3701\r\n\r\ncase 4.2\r\n---------\r\nAfter the test steps in case 3 (64 catalog modifying transactions, decode 100k\r\ntransactions), run 64 DDLs (without checkpoint) and 100k xacts, then decode.\r\n\r\nmaster 0.3687\r\npatched 0.3696\r\n\r\nSummary of the tests:\r\nAfter applying this patch, there is an overhead of about 3% in the case decoding\r\n100k transactions with 64 catalog modifying transactions. This is an extreme\r\ncase, so maybe it's okay. And cases 4.1 and 4.2 show that the patch has no\r\neffect on subsequent decoding. 
In other cases, there are no significant\r\ndifferences.\r\n\r\nFor your information, here are the parameters specified in postgresql.conf in\r\nthe test.\r\n\r\nshared_buffers = 8GB\r\ncheckpoint_timeout = 30min\r\nmax_wal_size = 20GB\r\nmin_wal_size = 10GB\r\nautovacuum = off\r\n\r\nRegards,\r\nShi yu", "msg_date": "Mon, 25 Jul 2022 10:57:43 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 25, 2022 at 7:57 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> I did some performance test for the master branch patch (based on v6 patch) to\n> see if the bsearch() added by this patch will cause any overhead.\n\nThank you for doing performance tests!\n\n>\n> I tested them three times and took the average.\n>\n> The results are as follows, and attach the bar chart.\n>\n> case 1\n> ---------\n> No catalog modifying transaction.\n> Decode 800k pgbench transactions. 
(8 clients, 100k transactions per client)\n>\n> master 7.5417\n> patched 7.4107\n>\n> case 2\n> ---------\n> There's one catalog modifying transaction.\n> Decode 100k/500k/1M transactions.\n>\n> 100k 500k 1M\n> master 0.0576 0.1491 0.4346\n> patched 0.0586 0.1500 0.4344\n>\n> case 3\n> ---------\n> There are 64 catalog modifying transactions.\n> Decode 100k/500k/1M transactions.\n>\n> 100k 500k 1M\n> master 0.0600 0.1666 0.4876\n> patched 0.0620 0.1653 0.4795\n>\n> (Because the result of case 3 shows that there is an overhead of about 3% in the\n> case decoding 100k transactions with 64 catalog modifying transactions, I\n> tested the next run of 100k xacts with or without catalog modifying\n> transactions, to see if it affects subsequent decoding.)\n>\n> case 4.1\n> ---------\n> After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> transactions), run 100k xacts and then decode.\n>\n> master 0.3699\n> patched 0.3701\n>\n> case 4.2\n> ---------\n> After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> transactions), run 64 DDLs (without checkpoint) and 100k xacts, then decode.\n>\n> master 0.3687\n> patched 0.3696\n>\n> Summary of the tests:\n> After applying this patch, there is an overhead of about 3% in the case decoding\n> 100k transactions with 64 catalog modifying transactions. This is an extreme\n> case, so maybe it's okay.\n\nYes. If we're worried about the overhead and doing bsearch() is the\ncause, probably we can try simplehash instead of the array.\n\nAn improvement idea is that we pass the parsed->xinfo down to\nSnapBuildXidHasCatalogChanges(), and then return from that function\nbefore doing bsearch() if the parsed->xinfo doesn't have\nXACT_XINFO_HAS_INVALS. That would save calling bsearch() for\nnon-catalog-modifying transactions. 
Is it worth trying?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 26 Jul 2022 10:29:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 26, 2022 at 7:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 7:57 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > Hi,\n> >\n> > I did some performance test for the master branch patch (based on v6 patch) to\n> > see if the bsearch() added by this patch will cause any overhead.\n>\n> Thank you for doing performance tests!\n>\n> >\n> > I tested them three times and took the average.\n> >\n> > The results are as follows, and attach the bar chart.\n> >\n> > case 1\n> > ---------\n> > No catalog modifying transaction.\n> > Decode 800k pgbench transactions. 
(8 clients, 100k transactions per client)\n> >\n> > master 7.5417\n> > patched 7.4107\n> >\n> > case 2\n> > ---------\n> > There's one catalog modifying transaction.\n> > Decode 100k/500k/1M transactions.\n> >\n> > 100k 500k 1M\n> > master 0.0576 0.1491 0.4346\n> > patched 0.0586 0.1500 0.4344\n> >\n> > case 3\n> > ---------\n> > There are 64 catalog modifying transactions.\n> > Decode 100k/500k/1M transactions.\n> >\n> > 100k 500k 1M\n> > master 0.0600 0.1666 0.4876\n> > patched 0.0620 0.1653 0.4795\n> >\n> > (Because the result of case 3 shows that there is a overhead of about 3% in the\n> > case decoding 100k transactions with 64 catalog modifying transactions, I\n> > tested the next run of 100k xacts with or without catalog modifying\n> > transactions, to see if it affects subsequent decoding.)\n> >\n> > case 4.1\n> > ---------\n> > After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> > transactions), run 100k xacts and then decode.\n> >\n> > master 0.3699\n> > patched 0.3701\n> >\n> > case 4.2\n> > ---------\n> > After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> > transactions), run 64 DDLs(without checkpoint) and 100k xacts, then decode.\n> >\n> > master 0.3687\n> > patched 0.3696\n> >\n> > Summary of the tests:\n> > After applying this patch, there is a overhead of about 3% in the case decoding\n> > 100k transactions with 64 catalog modifying transactions. This is an extreme\n> > case, so maybe it's okay.\n>\n> Yes. If we're worried about the overhead and doing bsearch() is the\n> cause, probably we can try simplehash instead of the array.\n>\n\nI am not sure if we need to go that far for this extremely corner\ncase. Let's first try your below idea.\n\n> An improvement idea is that we pass the parsed->xinfo down to\n> SnapBuildXidHasCatalogChanges(), and then return from that function\n> before doing bearch() if the parsed->xinfo doesn't have\n> XACT_XINFO_HAS_INVALS. 
That would save calling bsearch() for\n> non-catalog-modifying transactions. Is it worth trying?\n>\n\nI think this is worth trying and this might reduce some of the\noverhead as well in the case presented by Shi-San.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 26 Jul 2022 10:48:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 26, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 7:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 25, 2022 at 7:57 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I did some performance test for the master branch patch (based on v6 patch) to\n> > > see if the bsearch() added by this patch will cause any overhead.\n> >\n> > Thank you for doing performance tests!\n> >\n> > >\n> > > I tested them three times and took the average.\n> > >\n> > > The results are as follows, and attach the bar chart.\n> > >\n> > > case 1\n> > > ---------\n> > > No catalog modifying transaction.\n> > > Decode 800k pgbench transactions. 
(8 clients, 100k transactions per client)\n> > >\n> > > master 7.5417\n> > > patched 7.4107\n> > >\n> > > case 2\n> > > ---------\n> > > There's one catalog modifying transaction.\n> > > Decode 100k/500k/1M transactions.\n> > >\n> > > 100k 500k 1M\n> > > master 0.0576 0.1491 0.4346\n> > > patched 0.0586 0.1500 0.4344\n> > >\n> > > case 3\n> > > ---------\n> > > There are 64 catalog modifying transactions.\n> > > Decode 100k/500k/1M transactions.\n> > >\n> > > 100k 500k 1M\n> > > master 0.0600 0.1666 0.4876\n> > > patched 0.0620 0.1653 0.4795\n> > >\n> > > (Because the result of case 3 shows that there is a overhead of about 3% in the\n> > > case decoding 100k transactions with 64 catalog modifying transactions, I\n> > > tested the next run of 100k xacts with or without catalog modifying\n> > > transactions, to see if it affects subsequent decoding.)\n> > >\n> > > case 4.1\n> > > ---------\n> > > After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> > > transactions), run 100k xacts and then decode.\n> > >\n> > > master 0.3699\n> > > patched 0.3701\n> > >\n> > > case 4.2\n> > > ---------\n> > > After the test steps in case 3 (64 catalog modifying transactions, decode 100k\n> > > transactions), run 64 DDLs(without checkpoint) and 100k xacts, then decode.\n> > >\n> > > master 0.3687\n> > > patched 0.3696\n> > >\n> > > Summary of the tests:\n> > > After applying this patch, there is a overhead of about 3% in the case decoding\n> > > 100k transactions with 64 catalog modifying transactions. This is an extreme\n> > > case, so maybe it's okay.\n> >\n> > Yes. If we're worried about the overhead and doing bsearch() is the\n> > cause, probably we can try simplehash instead of the array.\n> >\n>\n> I am not sure if we need to go that far for this extremely corner\n> case. 
Let's first try your below idea.\n>\n> > An improvement idea is that we pass the parsed->xinfo down to\n> > SnapBuildXidHasCatalogChanges(), and then return from that function\n> > before doing bearch() if the parsed->xinfo doesn't have\n> > XACT_XINFO_HAS_INVALS. That would save calling bsearch() for\n> > non-catalog-modifying transactions. Is it worth trying?\n> >\n>\n> I think this is worth trying and this might reduce some of the\n> overhead as well in the case presented by Shi-San.\n\nOkay, I've attached an updated patch that does the above idea. Could\nyou please do the performance tests again to see if the idea can help\nreduce the overhead, Shi yu?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 26 Jul 2022 16:51:33 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 26, 2022 3:52 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> On Tue, Jul 26, 2022 at 2:18 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> >\r\n> > On Tue, Jul 26, 2022 at 7:00 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> > >\r\n> > > On Mon, Jul 25, 2022 at 7:57 PM shiy.fnst@fujitsu.com\r\n> > > <shiy.fnst@fujitsu.com> wrote:\r\n> > > >\r\n> > > >\r\n> > > > case 3\r\n> > > > ---------\r\n> > > > There are 64 catalog modifying transactions.\r\n> > > > Decode 100k/500k/1M transactions.\r\n> > > >\r\n> > > > 100k 500k 1M\r\n> > > > master 0.0600 0.1666 0.4876\r\n> > > > patched 0.0620 0.1653 0.4795\r\n> > > >\r\n> > > >\r\n> > > > Summary of the tests:\r\n> > > > After applying this patch, there is a overhead of about 3% in the case\r\n> decoding\r\n> > > > 100k transactions with 64 catalog modifying transactions. This is an\r\n> extreme\r\n> > > > case, so maybe it's okay.\r\n> > >\r\n> > > Yes. 
If we're worried about the overhead and doing bsearch() is the\r\n> > > cause, probably we can try simplehash instead of the array.\r\n> > >\r\n> >\r\n> > I am not sure if we need to go that far for this extremely corner\r\n> > case. Let's first try your below idea.\r\n> >\r\n> > > An improvement idea is that we pass the parsed->xinfo down to\r\n> > > SnapBuildXidHasCatalogChanges(), and then return from that function\r\n> > > before doing bearch() if the parsed->xinfo doesn't have\r\n> > > XACT_XINFO_HAS_INVALS. That would save calling bsearch() for\r\n> > > non-catalog-modifying transactions. Is it worth trying?\r\n> > >\r\n> >\r\n> > I think this is worth trying and this might reduce some of the\r\n> > overhead as well in the case presented by Shi-San.\r\n> \r\n> Okay, I've attached an updated patch that does the above idea. Could\r\n> you please do the performance tests again to see if the idea can help\r\n> reduce the overhead, Shi yu?\r\n> \r\n\r\nThanks for your improvement. I have tested the case which shows overhead before\r\n(decoding 100k transactions with 64 catalog modifying transactions) for the v9\r\npatch, the result is as follows.\r\n\r\nmaster 0.0607\r\npatched 0.0613\r\n\r\nThere's almost no difference compared with master (less than 1%), which looks\r\ngood to me.\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Tue, 26 Jul 2022 09:24:58 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Jul 25, 2022 at 11:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 10:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the patch for REl15 that I forgot.\n>\n\nI feel the place to remember running xacts information in\nSnapBuildProcessRunningXacts is not appropriate. 
Because in cases\nwhere there are no running xacts or when xl_running_xact is old enough\nthat we can't use it, we don't need that information. I feel we need\nit only when we have to reuse the already serialized snapshot, so,\nwon't it be better to initialize at that place in\nSnapBuildFindSnapshot()? I have changed accordingly in the attached\nand apart from that slightly modified the comments and commit message.\nDo let me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 27 Jul 2022 17:03:21 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Jul 27, 2022 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Jul 25, 2022 at 11:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Jul 25, 2022 at 10:45 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the patch for REl15 that I forgot.\n> >\n>\n> I feel the place to remember running xacts information in\n> SnapBuildProcessRunningXacts is not appropriate. Because in cases\n> where there are no running xacts or when xl_running_xact is old enough\n> that we can't use it, we don't need that information. I feel we need\n> it only when we have to reuse the already serialized snapshot, so,\n> won't it be better to initialize at that place in\n> SnapBuildFindSnapshot()?\n\nGood point, agreed.\n\n> I have changed accordingly in the attached\n> and apart from that slightly modified the comments and commit message.\n> Do let me know what you think of the attached?\n\nIt would be better to remember the initial running xacts after\nSnapBuildRestore() returns true? 
Because otherwise, we could end up\nallocating InitialRunningXacts multiple times while leaking the old\nones if there are no serialized snapshots that we are interested in.\n\n---\n+ if (builder->state == SNAPBUILD_START)\n+ {\n+ int nxacts =\nrunning->subxcnt + running->xcnt;\n+ Size sz = sizeof(TransactionId) * nxacts;\n+\n+ NInitialRunningXacts = nxacts;\n+ InitialRunningXacts =\nMemoryContextAlloc(builder->context, sz);\n+ memcpy(InitialRunningXacts, running->xids, sz);\n+ qsort(InitialRunningXacts, nxacts,\nsizeof(TransactionId), xidComparator);\n+ }\n\nWe should allocate the memory for InitialRunningXacts only when\n(running->subxcnt + running->xcnt) > 0.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Jul 2022 10:48:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Jul 27, 2022 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I have changed accordingly in the attached\n> > and apart from that slightly modified the comments and commit message.\n> > Do let me know what you think of the attached?\n>\n> It would be better to remember the initial running xacts after\n> SnapBuildRestore() returns true? Because otherwise, we could end up\n> allocating InitialRunningXacts multiple times while leaking the old\n> ones if there are no serialized snapshots that we are interested in.\n>\n\nRight, this makes sense. 
But note that you can no longer have a check
(builder->state == SNAPBUILD_START) which I believe is not required.
We need to do this after restore, in whichever state snapshot was as
any state other than SNAPBUILD_CONSISTENT can have commits without all
their changes.

Accordingly, I think the comment: \"Remember the transactions and
subtransactions that were running when xl_running_xacts record that we
decoded first was written.\" needs to be slightly modified to something
like: \"Remember the transactions and subtransactions that were running
when xl_running_xacts record that we decoded was written.\". Change
this if it is used at any other place in the patch.

> ---
> + if (builder->state == SNAPBUILD_START)
> + {
> + int nxacts =
> running->subxcnt + running->xcnt;
> + Size sz = sizeof(TransactionId) * nxacts;
> +
> + NInitialRunningXacts = nxacts;
> + InitialRunningXacts =
> MemoryContextAlloc(builder->context, sz);
> + memcpy(InitialRunningXacts, running->xids, sz);
> + qsort(InitialRunningXacts, nxacts,
> sizeof(TransactionId), xidComparator);
> + }
>
> We should allocate the memory for InitialRunningXacts only when
> (running->subxcnt + running->xcnt) > 0.
>

There is no harm in doing that but ideally, that case would have been
covered by an earlier check \"if (running->oldestRunningXid ==
running->nextXid)\" which suggests \"No transactions were running, so we
can jump to consistent.\"

Kindly make the required changes and submit the back branch patches again.

-- 
With Regards,
Amit Kapila.


", "msg_date": "Thu, 28 Jul 2022 08:51:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
>
> On Thu, Jul 28, 2022 at 7:18 
AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Jul 27, 2022 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I have changed accordingly in the attached\n> > > and apart from that slightly modified the comments and commit message.\n> > > Do let me know what you think of the attached?\n> >\n> > It would be better to remember the initial running xacts after\n> > SnapBuildRestore() returns true? Because otherwise, we could end up\n> > allocating InitialRunningXacts multiple times while leaking the old\n> > ones if there are no serialized snapshots that we are interested in.\n> >\n>\n> Right, this makes sense. But note that you can no longer have a check\n> (builder->state == SNAPBUILD_START) which I believe is not required.\n> We need to do this after restore, in whichever state snapshot was as\n> any state other than SNAPBUILD_CONSISTENT can have commits without all\n> their changes.\n\nRight.\n\n>\n> Accordingly, I think the comment: \"Remember the transactions and\n> subtransactions that were running when xl_running_xacts record that we\n> decoded first was written.\" needs to be slightly modified to something\n> like: \"Remember the transactions and subtransactions that were running\n> when xl_running_xacts record that we decoded was written.\". 
Change
> this if it is used at any other place in the patch.

Agreed.

>
> > ---
> > + if (builder->state == SNAPBUILD_START)
> > + {
> > + int nxacts =
> > running->subxcnt + running->xcnt;
> > + Size sz = sizeof(TransactionId) * nxacts;
> > +
> > + NInitialRunningXacts = nxacts;
> > + InitialRunningXacts =
> > MemoryContextAlloc(builder->context, sz);
> > + memcpy(InitialRunningXacts, running->xids, sz);
> > + qsort(InitialRunningXacts, nxacts,
> > sizeof(TransactionId), xidComparator);
> > + }
> >
> > We should allocate the memory for InitialRunningXacts only when
> > (running->subxcnt + running->xcnt) > 0.
> >
>
> There is no harm in doing that but ideally, that case would have been
> covered by an earlier check \"if (running->oldestRunningXid ==
> running->nextXid)\" which suggests \"No transactions were running, so we
> can jump to consistent.\"

You're right.

While editing back branch patches, I realized that the following
(parsed->xinfo & XACT_XINFO_HAS_INVALS) and (parsed->nmsgs > 0) are
equivalent:

+ /*
+ * If the COMMIT record has invalidation messages, it could have catalog
+ * changes. It is possible that we didn't mark this transaction as
+ * containing catalog changes when the decoding starts from a commit
+ * record without decoding the transaction's other changes. 
So, we ensure\n+ * to mark such transactions as containing catalog change.\n+ *\n+ * This must be done before SnapBuildCommitTxn() so that we can include\n+ * these transactions in the historic snapshot.\n+ */\n+ if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n+ SnapBuildXidSetCatalogChanges(ctx->snapshot_builder, xid,\n+ parsed->nsubxacts, parsed->subxacts,\n+ buf->origptr);\n+\n /*\n * Process invalidation messages, even if we're not interested in the\n * transaction's contents, since the various caches need to always be\n * consistent.\n */\n if (parsed->nmsgs > 0)\n {\n if (!ctx->fast_forward)\n ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,\n parsed->nmsgs, parsed->msgs);\n ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n }\n\nIf that's right, I think we can merge these if branches. We can call\nReorderBufferXidSetCatalogChanges() for top-txn and in\nSnapBuildXidSetCatalogChanges() we mark its subtransactions if top-txn\nis in the list. What do you think?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Jul 2022 15:25:33 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 11:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n>\n> While editing back branch patches, I realized that the following\n> (parsed->xinfo & XACT_XINFO_HAS_INVALS) and (parsed->nmsgs > 0) are\n> equivalent:\n>\n> + /*\n> + * If the COMMIT record has invalidation messages, it could have catalog\n> + * changes. 
It is possible that we didn't mark this transaction as\n> + * containing catalog changes when the decoding starts from a commit\n> + * record without decoding the transaction's other changes. So, we ensure\n> + * to mark such transactions as containing catalog change.\n> + *\n> + * This must be done before SnapBuildCommitTxn() so that we can include\n> + * these transactions in the historic snapshot.\n> + */\n> + if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n> + SnapBuildXidSetCatalogChanges(ctx->snapshot_builder, xid,\n> + parsed->nsubxacts, parsed->subxacts,\n> + buf->origptr);\n> +\n> /*\n> * Process invalidation messages, even if we're not interested in the\n> * transaction's contents, since the various caches need to always be\n> * consistent.\n> */\n> if (parsed->nmsgs > 0)\n> {\n> if (!ctx->fast_forward)\n> ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,\n> parsed->nmsgs, parsed->msgs);\n> ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n> }\n>\n> If that's right, I think we can merge these if branches. We can call\n> ReorderBufferXidSetCatalogChanges() for top-txn and in\n> SnapBuildXidSetCatalogChanges() we mark its subtransactions if top-txn\n> is in the list. What do you think?\n>\n\nNote that this code doesn't exist in 14 and 15, so we need to create\ndifferent patches for those. 
BTW, how in 13 and lower versions did we\nidentify and mark subxacts as having catalog changes without our\npatch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 28 Jul 2022 12:43:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 11:56 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 12:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 28, 2022 at 7:18 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> >\n> > While editing back branch patches, I realized that the following\n> > (parsed->xinfo & XACT_XINFO_HAS_INVALS) and (parsed->nmsgs > 0) are\n> > equivalent:\n> >\n> > + /*\n> > + * If the COMMIT record has invalidation messages, it could have catalog\n> > + * changes. It is possible that we didn't mark this transaction as\n> > + * containing catalog changes when the decoding starts from a commit\n> > + * record without decoding the transaction's other changes. 
So, we ensure\n> > + * to mark such transactions as containing catalog change.\n> > + *\n> > + * This must be done before SnapBuildCommitTxn() so that we can include\n> > + * these transactions in the historic snapshot.\n> > + */\n> > + if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n> > + SnapBuildXidSetCatalogChanges(ctx->snapshot_builder, xid,\n> > + parsed->nsubxacts, parsed->subxacts,\n> > + buf->origptr);\n> > +\n> > /*\n> > * Process invalidation messages, even if we're not interested in the\n> > * transaction's contents, since the various caches need to always be\n> > * consistent.\n> > */\n> > if (parsed->nmsgs > 0)\n> > {\n> > if (!ctx->fast_forward)\n> > ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,\n> > parsed->nmsgs, parsed->msgs);\n> > ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n> > }\n> >\n> > If that's right, I think we can merge these if branches. We can call\n> > ReorderBufferXidSetCatalogChanges() for top-txn and in\n> > SnapBuildXidSetCatalogChanges() we mark its subtransactions if top-txn\n> > is in the list. 
What do you think?\n> >\n>\n> Note that this code doesn't exist in 14 and 15, so we need to create\n> different patches for those.\n\nRight.\n\n> BTW, how in 13 and lower versions did we\n> identify and mark subxacts as having catalog changes without our\n> patch?\n\nI think we use HEAP_INPLACE and XLOG_HEAP2_NEW_CID to mark subxacts as well.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 28 Jul 2022 16:26:19 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 12:56 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > While editing back branch patches, I realized that the following\n> > > (parsed->xinfo & XACT_XINFO_HAS_INVALS) and (parsed->nmsgs > 0) are\n> > > equivalent:\n> > >\n> > > + /*\n> > > + * If the COMMIT record has invalidation messages, it could have catalog\n> > > + * changes. It is possible that we didn't mark this transaction as\n> > > + * containing catalog changes when the decoding starts from a commit\n> > > + * record without decoding the transaction's other changes. 
So, we ensure\n> > > + * to mark such transactions as containing catalog change.\n> > > + *\n> > > + * This must be done before SnapBuildCommitTxn() so that we can include\n> > > + * these transactions in the historic snapshot.\n> > > + */\n> > > + if (parsed->xinfo & XACT_XINFO_HAS_INVALS)\n> > > + SnapBuildXidSetCatalogChanges(ctx->snapshot_builder, xid,\n> > > + parsed->nsubxacts, parsed->subxacts,\n> > > + buf->origptr);\n> > > +\n> > > /*\n> > > * Process invalidation messages, even if we're not interested in the\n> > > * transaction's contents, since the various caches need to always be\n> > > * consistent.\n> > > */\n> > > if (parsed->nmsgs > 0)\n> > > {\n> > > if (!ctx->fast_forward)\n> > > ReorderBufferAddInvalidations(ctx->reorder, xid, buf->origptr,\n> > > parsed->nmsgs, parsed->msgs);\n> > > ReorderBufferXidSetCatalogChanges(ctx->reorder, xid, buf->origptr);\n> > > }\n> > >\n> > > If that's right, I think we can merge these if branches. We can call\n> > > ReorderBufferXidSetCatalogChanges() for top-txn and in\n> > > SnapBuildXidSetCatalogChanges() we mark its subtransactions if top-txn\n> > > is in the list. What do you think?\n> > >\n> >\n> > Note that this code doesn't exist in 14 and 15, so we need to create\n> > different patches for those.\n>\n> Right.\n>\n\nOkay, then this sounds reasonable to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 28 Jul 2022 14:27:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Jul 26, 2022 at 1:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Okay, I've attached an updated patch that does the above idea. 
Could\n> you please do the performance tests again to see if the idea can help\n> reduce the overhead, Shi yu?\n>\n\nWhile reviewing the patch for HEAD, I have changed a few comments. See\nattached, if you agree with these changes then include them in the\nnext version.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 28 Jul 2022 15:23:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jul 26, 2022 at 1:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Okay, I've attached an updated patch that does the above idea. Could\n> > you please do the performance tests again to see if the idea can help\n> > reduce the overhead, Shi yu?\n> >\n>\n> While reviewing the patch for HEAD, I have changed a few comments. See\n> attached, if you agree with these changes then include them in the\n> next version.\n>\n\nI have another comment on this patch:\nSnapBuildPurgeOlderTxn()\n{\n...\n+ if (surviving_xids > 0)\n+ memmove(builder->catchange.xip, &(builder->catchange.xip[off]),\n+ surviving_xids * sizeof(TransactionId))\n...\n\nFor this code to hit, we must have a situation where one or more of\nthe xacts in this array must be still running. And, if that is true,\nwe would not have started from the restart point where the\ncorresponding snapshot (that contains the still running xacts) has\nbeen serialized because we advance the restart point to not before the\noldest running xacts restart_decoding_lsn. This may not be easy to\nunderstand so let me take an example to explain. Say we have two\ntransactions t1 and t2, and both have made catalog changes. We want a\nsituation where one of those gets purged and the other remains in\nbuilder->catchange.xip array. 
I have tried variants of the below\nsequence to see if I can get into the required situation but am not\nable to make it.\n\nSession-1\nCheckpoint -1;\nT1\nDDL\n\nSession-2\nT2\nDDL\n\nSession-3\nCheckpoint-2;\npg_logical_slot_get_changes()\n -- Here when we serialize the snapshot corresponding to\nCHECKPOINT-2's running_xact record, we will serialize both t1 and t2\nas catalog-changing xacts.\n\nSession-1\nT1\nCommit;\n\nCheckpoint;\npg_logical_slot_get_changes()\n -- Here we will restore from Checkpoint-1's serialized snapshot and\nwon't be able to move restart_point to Checkpoint-2 because T2 is\nstill open.\n\nNow, as per my understanding, it is only possible to move\nrestart_point to Checkpoint-2 if T2 gets committed/rolled-back in\nwhich case we will never have that in surviving_xids array after the\npurge.\n\nIt is possible I am missing something here. Do let me know your thoughts.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 28 Jul 2022 17:27:26 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Jul 28, 2022 at 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jul 26, 2022 at 1:22 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Okay, I've attached an updated patch that does the above idea. Could\n> > > you please do the performance tests again to see if the idea can help\n> > > reduce the overhead, Shi yu?\n> > >\n> >\n> > While reviewing the patch for HEAD, I have changed a few comments. 
See\n> > attached, if you agree with these changes then include them in the\n> > next version.\n> >\n>\n> I have another comment on this patch:\n> SnapBuildPurgeOlderTxn()\n> {\n> ...\n> + if (surviving_xids > 0)\n> + memmove(builder->catchange.xip, &(builder->catchange.xip[off]),\n> + surviving_xids * sizeof(TransactionId))\n> ...\n>\n> For this code to hit, we must have a situation where one or more of\n> the xacts in this array must be still running. And, if that is true,\n> we would not have started from the restart point where the\n> corresponding snapshot (that contains the still running xacts) has\n> been serialized because we advance the restart point to not before the\n> oldest running xacts restart_decoding_lsn. This may not be easy to\n> understand so let me take an example to explain. Say we have two\n> transactions t1 and t2, and both have made catalog changes. We want a\n> situation where one of those gets purged and the other remains in\n> builder->catchange.xip array. I have tried variants of the below\n> sequence to see if I can get into the required situation but am not\n> able to make it.\n>\n> Session-1\n> Checkpoint -1;\n> T1\n> DDL\n>\n> Session-2\n> T2\n> DDL\n>\n> Session-3\n> Checkpoint-2;\n> pg_logical_slot_get_changes()\n> -- Here when we serialize the snapshot corresponding to\n> CHECKPOINT-2's running_xact record, we will serialize both t1 and t2\n> as catalog-changing xacts.\n>\n> Session-1\n> T1\n> Commit;\n>\n> Checkpoint;\n> pg_logical_slot_get_changes()\n> -- Here we will restore from Checkpoint-1's serialized snapshot and\n> won't be able to move restart_point to Checkpoint-2 because T2 is\n> still open.\n>\n> Now, as per my understanding, it is only possible to move\n> restart_point to Checkpoint-2 if T2 gets committed/rolled-back in\n> which case we will never have that in surviving_xids array after the\n> purge.\n>\n> It is possible I am missing something here. 
Do let me know your thoughts.\n\nYeah, your description makes sense to me. I've also considered how to\nhit this path but I guess it is never hit. Thinking of it in another\nway, first of all, at least 2 catalog modifying transactions have to\nbe running while writing a xl_running_xacts. The serialized snapshot\nthat is written when we decode the first xl_running_xact has two\ntransactions. Then, one of them is committed before the second\nxl_running_xacts. The second serialized snapshot has only one\ntransaction. Then, the transaction is also committed after that. Now,\nin order to execute the path, we need to start decoding from the first\nserialized snapshot. However, if we start from there, we cannot decode\nthe full contents of the transaction that was committed later.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 29 Jul 2022 09:06:08 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 29, 2022 at 5:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Jul 28, 2022 at 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I have another comment on this patch:\n> > SnapBuildPurgeOlderTxn()\n> > {\n> > ...\n> > + if (surviving_xids > 0)\n> > + memmove(builder->catchange.xip, &(builder->catchange.xip[off]),\n> > + surviving_xids * sizeof(TransactionId))\n> > ...\n> >\n> > For this code to hit, we must have a situation where one or more of\n> > the xacts in this array must be still running. 
And, if that is true,\n> > we would not have started from the restart point where the\n> > corresponding snapshot (that contains the still running xacts) has\n> > been serialized because we advance the restart point to not before the\n> > oldest running xacts restart_decoding_lsn. This may not be easy to\n> > understand so let me take an example to explain. Say we have two\n> > transactions t1 and t2, and both have made catalog changes. We want a\n> > situation where one of those gets purged and the other remains in\n> > builder->catchange.xip array. I have tried variants of the below\n> > sequence to see if I can get into the required situation but am not\n> > able to make it.\n> >\n> > Session-1\n> > Checkpoint -1;\n> > T1\n> > DDL\n> >\n> > Session-2\n> > T2\n> > DDL\n> >\n> > Session-3\n> > Checkpoint-2;\n> > pg_logical_slot_get_changes()\n> > -- Here when we serialize the snapshot corresponding to\n> > CHECKPOINT-2's running_xact record, we will serialize both t1 and t2\n> > as catalog-changing xacts.\n> >\n> > Session-1\n> > T1\n> > Commit;\n> >\n> > Checkpoint;\n> > pg_logical_slot_get_changes()\n> > -- Here we will restore from Checkpoint-1's serialized snapshot and\n> > won't be able to move restart_point to Checkpoint-2 because T2 is\n> > still open.\n> >\n> > Now, as per my understanding, it is only possible to move\n> > restart_point to Checkpoint-2 if T2 gets committed/rolled-back in\n> > which case we will never have that in surviving_xids array after the\n> > purge.\n> >\n> > It is possible I am missing something here. Do let me know your thoughts.\n>\n> Yeah, your description makes sense to me. I've also considered how to\n> hit this path but I guess it is never hit. Thinking of it in another\n> way, first of all, at least 2 catalog modifying transactions have to\n> be running while writing a xl_running_xacts. The serialized snapshot\n> that is written when we decode the first xl_running_xact has two\n> transactions. 
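To make the purge step quoted in this thread concrete, here is a minimal standalone sketch of the compaction logic. The type alias, function name, and signature are simplified assumptions for illustration (the real code lives in SnapBuildPurgeOlderTxn() and uses wraparound-aware xid comparisons, not plain `<`); only the `memmove` of the surviving xids mirrors the quoted fragment:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;   /* simplified; no wraparound handling */

/*
 * Drop every xid older than xmin from the sorted array xip[] and slide
 * the survivors to the front, as in the quoted snapbuild.c fragment.
 * Returns the number of surviving xids.
 */
static size_t
purge_older_txn(TransactionId *xip, size_t nxip, TransactionId xmin)
{
	size_t		off = 0;
	size_t		surviving_xids;

	/* Skip past the xids that are old enough to be purged. */
	while (off < nxip && xip[off] < xmin)
		off++;

	surviving_xids = nxip - off;

	/* The fragment under discussion: compact survivors to the front. */
	if (surviving_xids > 0)
		memmove(xip, &xip[off], surviving_xids * sizeof(TransactionId));

	return surviving_xids;
}
```

The point being debated above is that, in practice, decoding never starts from a snapshot where this path can leave a *partial* remainder — either `off` ends up 0 or it ends up `nxip` — so the `memmove` branch is effectively dead on the purge of catalog-change xids.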
Then, one of them is committed before the second\n> xl_running_xacts. The second serialized snapshot has only one\n> transaction. Then, the transaction is also committed after that. Now,\n> in order to execute the path, we need to start decoding from the first\n> serialized snapshot. However, if we start from there, we cannot decode\n> the full contents of the transaction that was committed later.\n>\n\nI think then we should change this code in the master branch patch\nwith an additional comment on the lines of: \"Either all the xacts got\npurged or none. It is only possible to partially remove the xids from\nthis array if one or more of the xids are still running but not all.\nThat can happen if we start decoding from a point (LSN where the\nsnapshot state became consistent) where all the xacts in this were\nrunning and then at least one of those got committed and a few are\nstill running. We will never start from such a point because we won't\nmove the slot's restart_lsn past the point where the oldest running\ntransaction's restart_decoding_lsn is.\"\n\nI suggest keeping the back branch as it is w.r.t this change as if\nthis logic proves to be faulty it won't affect the stable branches. 
We\ncan always back-patch this small change if required.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 29 Jul 2022 12:15:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 29, 2022 at 5:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Jul 28, 2022 at 8:57 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Jul 28, 2022 at 3:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I have another comment on this patch:\n> > > SnapBuildPurgeOlderTxn()\n> > > {\n> > > ...\n> > > + if (surviving_xids > 0)\n> > > + memmove(builder->catchange.xip, &(builder->catchange.xip[off]),\n> > > + surviving_xids * sizeof(TransactionId))\n> > > ...\n> > >\n> > > For this code to hit, we must have a situation where one or more of\n> > > the xacts in this array must be still running. And, if that is true,\n> > > we would not have started from the restart point where the\n> > > corresponding snapshot (that contains the still running xacts) has\n> > > been serialized because we advance the restart point to not before the\n> > > oldest running xacts restart_decoding_lsn. This may not be easy to\n> > > understand so let me take an example to explain. Say we have two\n> > > transactions t1 and t2, and both have made catalog changes. We want a\n> > > situation where one of those gets purged and the other remains in\n> > > builder->catchange.xip array. 
I have tried variants of the below\n> > > sequence to see if I can get into the required situation but am not\n> > > able to make it.\n> > >\n> > > Session-1\n> > > Checkpoint -1;\n> > > T1\n> > > DDL\n> > >\n> > > Session-2\n> > > T2\n> > > DDL\n> > >\n> > > Session-3\n> > > Checkpoint-2;\n> > > pg_logical_slot_get_changes()\n> > > -- Here when we serialize the snapshot corresponding to\n> > > CHECKPOINT-2's running_xact record, we will serialize both t1 and t2\n> > > as catalog-changing xacts.\n> > >\n> > > Session-1\n> > > T1\n> > > Commit;\n> > >\n> > > Checkpoint;\n> > > pg_logical_slot_get_changes()\n> > > -- Here we will restore from Checkpoint-1's serialized snapshot and\n> > > won't be able to move restart_point to Checkpoint-2 because T2 is\n> > > still open.\n> > >\n> > > Now, as per my understanding, it is only possible to move\n> > > restart_point to Checkpoint-2 if T2 gets committed/rolled-back in\n> > > which case we will never have that in surviving_xids array after the\n> > > purge.\n> > >\n> > > It is possible I am missing something here. Do let me know your thoughts.\n> >\n> > Yeah, your description makes sense to me. I've also considered how to\n> > hit this path but I guess it is never hit. Thinking of it in another\n> > way, first of all, at least 2 catalog modifying transactions have to\n> > be running while writing a xl_running_xacts. The serialized snapshot\n> > that is written when we decode the first xl_running_xact has two\n> > transactions. Then, one of them is committed before the second\n> > xl_running_xacts. The second serialized snapshot has only one\n> > transaction. Then, the transaction is also committed after that. Now,\n> > in order to execute the path, we need to start decoding from the first\n> > serialized snapshot. 
However, if we start from there, we cannot decode\n> > the full contents of the transaction that was committed later.\n> >\n>\n> I think then we should change this code in the master branch patch\n> with an additional comment on the lines of: \"Either all the xacts got\n> purged or none. It is only possible to partially remove the xids from\n> this array if one or more of the xids are still running but not all.\n> That can happen if we start decoding from a point (LSN where the\n> snapshot state became consistent) where all the xacts in this were\n> running and then at least one of those got committed and a few are\n> still running. We will never start from such a point because we won't\n> move the slot's restart_lsn past the point where the oldest running\n> transaction's restart_decoding_lsn is.\"\n\nAgreed.\n\n>\n> I suggest keeping the back branch as it is w.r.t this change as if\n> this logic proves to be faulty it won't affect the stable branches. We\n> can always back-patch this small change if required.\n\nYes, during PG16 release cycle, we can have time for evaluating\nwhether the approach in the master branch is correct. We can always\nback-patch the part.\n\nI've attached updated patches for all branches. Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 1 Aug 2022 11:16:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> I've attached updated patches for all branches. Please review them.\n>\n\nThanks, the patches look mostly good to me. 
I have made minor edits by\nremoving 'likely' from a few places as those don't seem to be adding\nmuch value, changed comments at a few places, and was getting\ncompilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\ninitial declarations are only allowed in C99 mode) which I have fixed.\nSee attached, unless there are major comments/suggestions, I am\nplanning to push this day after tomorrow (by Wednesday) after another\npass.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Mon, 1 Aug 2022 20:01:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Mon, 1 Aug 2022 20:01:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > I've attached updated patches for all branches. Please review them.\n> >\n> \n> Thanks, the patches look mostly good to me. I have made minor edits by\n> removing 'likely' from a few places as those don't seem to be adding\n> much value, changed comments at a few places, and was getting\n> compilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\n> initial declarations are only allowed in C99 mode) which I have fixed.\n> See attached, unless there are major comments/suggestions, I am\n> planning to push this day after tomorrow (by Wednesday) after another\n> pass.\n\nmaster:\n+ * Read the contents of the serialized snapshot to the dest.\n\nDo we need the \"the\" before the \"dest\"?\n\n+\t{\n+\t\tint\t\t\tsave_errno = errno;\n+\n+\t\tCloseTransientFile(fd);\n+\n+\t\tif (readBytes < 0)\n+\t\t{\n+\t\t\terrno = save_errno;\n+\t\t\tereport(ERROR,\n\nDo we need the CloseTransientFile(fd) there? 
This call requires errno\nto be remembered but anyway OpenTransientFile'd files are to be close\nat transaction end. Actually CloseTransientFile() is not called\nbefore error'ing-out at error in other places.\n\n\n+\t * from the LSN-ordered list of toplevel TXNs. We remove TXN from the list\n\nWe remove \"the\" TXN\"?\n\n+\tif (dlist_is_empty(&rb->catchange_txns))\n+\t{\n+\t\tAssert(rb->catchange_ntxns == 0);\n+\t\treturn NULL;\n+\t}\n\nIt seems that the assert is far simpler than dlist_is_empty(). Why\ndon't we swap the conditions for if() and Assert() in the above?\n\n+\t * the oldest running transaction窶冱 restart_decoding_lsn is.\n\nThe line contains a broken characters.\n\n\n+\t * Either all the xacts got purged or none. It is only possible to\n+\t * partially remove the xids from this array if one or more of the xids\n+\t * are still running but not all. That can happen if we start decoding\n\nAssuming this, the commment below seems getting stale.\n\n+\t * catalog. We remove xids from this array when they become old enough to\n+\t * matter, and then it eventually becomes empty.\n\n\"We discard this array when the all containing xids are gone. 
See\nSnapBuildPurgeOlderTxn for details.\" or something like?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 02 Aug 2022 15:30:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Aug 2, 2022 at 12:00 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 1 Aug 2022 20:01:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > I've attached updated patches for all branches. Please review them.\n> > >\n> >\n> > Thanks, the patches look mostly good to me. I have made minor edits by\n> > removing 'likely' from a few places as those don't seem to be adding\n> > much value, changed comments at a few places, and was getting\n> > compilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\n> > initial declarations are only allowed in C99 mode) which I have fixed.\n> > See attached, unless there are major comments/suggestions, I am\n> > planning to push this day after tomorrow (by Wednesday) after another\n> > pass.\n>\n>\n> + {\n> + int save_errno = errno;\n> +\n> + CloseTransientFile(fd);\n> +\n> + if (readBytes < 0)\n> + {\n> + errno = save_errno;\n> + ereport(ERROR,\n>\n> Do we need the CloseTransientFile(fd) there? This call requires errno\n> to be remembered but anyway OpenTransientFile'd files are to be close\n> at transaction end. Actually CloseTransientFile() is not called\n> before error'ing-out at error in other places.\n>\n\nBut this part of the code is just a copy of the existing code. 
See:\n\n- if (readBytes != sizeof(SnapBuild))\n- {\n- int save_errno = errno;\n-\n- CloseTransientFile(fd);\n-\n- if (readBytes < 0)\n- {\n- errno = save_errno;\n- ereport(ERROR,\n- (errcode_for_file_access(),\n- errmsg(\"could not read file \\\"%s\\\": %m\", path)));\n- }\n- else\n- ereport(ERROR,\n- (errcode(ERRCODE_DATA_CORRUPTED),\n- errmsg(\"could not read file \\\"%s\\\": read %d of %zu\",\n- path, readBytes, sizeof(SnapBuild))));\n- }\n\nWe just moved it to a separate function as the same code is being\nduplicated to multiple places.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 2 Aug 2022 13:54:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Aug 1, 2022 10:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada\r\n> <sawada.mshk@gmail.com> wrote:\r\n> >\r\n> > On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com>\r\n> wrote:\r\n> > >\r\n> >\r\n> > I've attached updated patches for all branches. Please review them.\r\n> >\r\n> \r\n> Thanks, the patches look mostly good to me. I have made minor edits by\r\n> removing 'likely' from a few places as those don't seem to be adding\r\n> much value, changed comments at a few places, and was getting\r\n> compilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\r\n> initial declarations are only allowed in C99 mode) which I have fixed.\r\n> See attached, unless there are major comments/suggestions, I am\r\n> planning to push this day after tomorrow (by Wednesday) after another\r\n> pass.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nHere are some minor comments:\r\n\r\n1.\r\npatches for REL10 ~ REL13:\r\n+ * Mark the transaction as containing catalog changes. 
In addition, if the\r\n+ * given xid is in the list of the initial running xacts, we mark the\r\n+ * its subtransactions as well. See comments for NInitialRunningXacts and\r\n+ * InitialRunningXacts for additional info.\r\n\r\n\"mark the its subtransactions\"\r\n->\r\n\"mark its subtransactions\"\r\n\r\n2.\r\npatches for REL10 ~ REL15:\r\nIn the comment in catalog_change_snapshot.spec, maybe we can use \"RUNNING_XACTS\"\r\ninstead of \"RUNNING_XACT\" \"XACT_RUNNING\", same as the patch for master branch.\r\n\r\nRegards,\r\nShi yu\r\n\r\n", "msg_date": "Tue, 2 Aug 2022 08:31:04 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Tue, 2 Aug 2022 13:54:43 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Tue, Aug 2, 2022 at 12:00 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > + {\n> > + int save_errno = errno;\n> > +\n> > + CloseTransientFile(fd);\n> > +\n> > + if (readBytes < 0)\n> > + {\n> > + errno = save_errno;\n> > + ereport(ERROR,\n> >\n> > Do we need the CloseTransientFile(fd) there? This call requires errno\n> > to be remembered but anyway OpenTransientFile'd files are to be close\n> > at transaction end. Actually CloseTransientFile() is not called\n> > before error'ing-out at error in other places.\n..\n> We just moved it to a separate function as the same code is being\n> duplicated to multiple places.\n\nThere are code paths that doesn't CloseTransientFile() explicitly,\ntoo. If there were no need of save_errno there, that'd be fine. 
But\notherwise I guess we prefer to let the orphan fds closed by ERROR and\nI don't think we need to preserve the less-preferred code pattern (if\nwe actually prefer not to have the explicit call).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Aug 2022 10:20:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Aug 3, 2022 at 10:20 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 2 Aug 2022 13:54:43 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Tue, Aug 2, 2022 at 12:00 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > >\n> > > + {\n> > > + int save_errno = errno;\n> > > +\n> > > + CloseTransientFile(fd);\n> > > +\n> > > + if (readBytes < 0)\n> > > + {\n> > > + errno = save_errno;\n> > > + ereport(ERROR,\n> > >\n> > > Do we need the CloseTransientFile(fd) there? This call requires errno\n> > > to be remembered but anyway OpenTransientFile'd files are to be close\n> > > at transaction end. Actually CloseTransientFile() is not called\n> > > before error'ing-out at error in other places.\n> ..\n> > We just moved it to a separate function as the same code is being\n> > duplicated to multiple places.\n>\n> There are code paths that doesn't CloseTransientFile() explicitly,\n> too. If there were no need of save_errno there, that'd be fine. But\n> otherwise I guess we prefer to let the orphan fds closed by ERROR and\n> I don't think we need to preserve the less-preferred code pattern (if\n> we actually prefer not to have the explicit call).\n\nLooking at other codes in snapbuild.c, we call CloseTransientFile()\nbefore erroring out in SnapBuildSerialize(). I think it's better to\nkeep it consistent with nearby codes in this patch. 
I think if we\nprefer the style of closing the file by ereport(ERROR), it should be\ndone for all of them in a separate patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 3 Aug 2022 10:35:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Aug 3, 2022 at 7:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 10:20 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 2 Aug 2022 13:54:43 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > > On Tue, Aug 2, 2022 at 12:00 PM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > > >\n> > > > + {\n> > > > + int save_errno = errno;\n> > > > +\n> > > > + CloseTransientFile(fd);\n> > > > +\n> > > > + if (readBytes < 0)\n> > > > + {\n> > > > + errno = save_errno;\n> > > > + ereport(ERROR,\n> > > >\n> > > > Do we need the CloseTransientFile(fd) there? This call requires errno\n> > > > to be remembered but anyway OpenTransientFile'd files are to be close\n> > > > at transaction end. Actually CloseTransientFile() is not called\n> > > > before error'ing-out at error in other places.\n> > ..\n> > > We just moved it to a separate function as the same code is being\n> > > duplicated to multiple places.\n> >\n> > There are code paths that doesn't CloseTransientFile() explicitly,\n> > too. If there were no need of save_errno there, that'd be fine. But\n> > otherwise I guess we prefer to let the orphan fds closed by ERROR and\n> > I don't think we need to preserve the less-preferred code pattern (if\n> > we actually prefer not to have the explicit call).\n>\n> Looking at other codes in snapbuild.c, we call CloseTransientFile()\n> before erroring out in SnapBuildSerialize(). 
I think it's better to\n> keep it consistent with nearby codes in this patch. I think if we\n> prefer the style of closing the file by ereport(ERROR), it should be\n> done for all of them in a separate patch.\n>\n\n+1. I also feel it is better to change it in a separate patch as this\nis not a pattern introduced by this patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 3 Aug 2022 08:51:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Aug 2, 2022 at 3:30 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 1 Aug 2022 20:01:00 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > I've attached updated patches for all branches. Please review them.\n> > >\n> >\n> > Thanks, the patches look mostly good to me. I have made minor edits by\n> > removing 'likely' from a few places as those don't seem to be adding\n> > much value, changed comments at a few places, and was getting\n> > compilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\n> > initial declarations are only allowed in C99 mode) which I have fixed.\n> > See attached, unless there are major comments/suggestions, I am\n> > planning to push this day after tomorrow (by Wednesday) after another\n> > pass.\n>\n> master:\n> + * Read the contents of the serialized snapshot to the dest.\n>\n> Do we need the \"the\" before the \"dest\"?\n\nFixed.\n\n>\n> + {\n> + int save_errno = errno;\n> +\n> + CloseTransientFile(fd);\n> +\n> + if (readBytes < 0)\n> + {\n> + errno = save_errno;\n> + ereport(ERROR,\n>\n> Do we need the CloseTransientFile(fd) there? 
This call requires errno\n> to be remembered but anyway OpenTransientFile'd files are to be close\n> at transaction end. Actually CloseTransientFile() is not called\n> before error'ing-out at error in other places.\n\nAs Amit mentioned, it's just moved from SnapBuildRestore(). Looking at\nother code in snapbuild.c, we call CloseTransientFile before erroring\nout. I think it's better to keep it consistent with nearby codes.\n\n>\n>\n> + * from the LSN-ordered list of toplevel TXNs. We remove TXN from the list\n>\n> We remove \"the\" TXN\"?\n\nFixed.\n\n>\n> + if (dlist_is_empty(&rb->catchange_txns))\n> + {\n> + Assert(rb->catchange_ntxns == 0);\n> + return NULL;\n> + }\n>\n> It seems that the assert is far simpler than dlist_is_empty(). Why\n> don't we swap the conditions for if() and Assert() in the above?\n\nChanged.\n\n>\n> + * the oldest running transaction窶冱 restart_decoding_lsn is.\n>\n> The line contains a broken characters.\n\nFixed.\n\n>\n>\n> + * Either all the xacts got purged or none. It is only possible to\n> + * partially remove the xids from this array if one or more of the xids\n> + * are still running but not all. That can happen if we start decoding\n>\n> Assuming this, the commment below seems getting stale.\n>\n> + * catalog. We remove xids from this array when they become old enough to\n> + * matter, and then it eventually becomes empty.\n>\n> \"We discard this array when the all containing xids are gone. See\n> SnapBuildPurgeOlderTxn for details.\" or something like?\n\nChanged to:\n\nWe discard this array when all the xids in the list become old enough\nto matter. See SnapBuildPurgeOlderTxn for details.\n\nI've attached updated patches that incorporated the above comments as\nwell as the comments from Shi yu. 
Please review them.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 3 Aug 2022 13:05:30 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Tue, Aug 2, 2022 at 5:31 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Aug 1, 2022 10:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Aug 1, 2022 at 7:46 AM Masahiko Sawada\n> > <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 29, 2022 at 3:45 PM Amit Kapila <amit.kapila16@gmail.com>\n> > wrote:\n> > > >\n> > >\n> > > I've attached updated patches for all branches. Please review them.\n> > >\n> >\n> > Thanks, the patches look mostly good to me. I have made minor edits by\n> > removing 'likely' from a few places as those don't seem to be adding\n> > much value, changed comments at a few places, and was getting\n> > compilation in error in v11/10 (snapbuild.c:2111:3: error: ‘for’ loop\n> > initial declarations are only allowed in C99 mode) which I have fixed.\n> > See attached, unless there are major comments/suggestions, I am\n> > planning to push this day after tomorrow (by Wednesday) after another\n> > pass.\n> >\n>\n> Thanks for updating the patch.\n>\n> Here are some minor comments:\n>\n> 1.\n> patches for REL10 ~ REL13:\n> + * Mark the transaction as containing catalog changes. In addition, if the\n> + * given xid is in the list of the initial running xacts, we mark the\n> + * its subtransactions as well. 
See comments for NInitialRunningXacts and\n> + * InitialRunningXacts for additional info.\n>\n> \"mark the its subtransactions\"\n> ->\n> \"mark its subtransactions\"\n>\n> 2.\n> patches for REL10 ~ REL15:\n> In the comment in catalog_change_snapshot.spec, maybe we can use \"RUNNING_XACTS\"\n> instead of \"RUNNING_XACT\" \"XACT_RUNNING\", same as the patch for master branch.\n>\n\nThank you for the comments! These have been incorporated in the latest\nversion v12 patch I just submitted.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 3 Aug 2022 13:06:25 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "At Wed, 3 Aug 2022 08:51:40 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Aug 3, 2022 at 7:05 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > Looking at other codes in snapbuild.c, we call CloseTransientFile()\n> > before erroring out in SnapBuildSerialize(). I think it's better to\n> > keep it consistent with nearby codes in this patch. I think if we\n> > prefer the style of closing the file by ereport(ERROR), it should be\n> > done for all of them in a separate patch.\n> >\n> \n> +1. 
I also feel it is better to change it in a separate patch as this\n> is not a pattern introduced by this patch.\n\nAgreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 03 Aug 2022 15:27:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map\n filenode \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Aug 3, 2022 12:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\r\n> \r\n> I've attached updated patches that incorporated the above comments as\r\n> well as the comments from Shi yu. Please review them.\r\n> \r\n\r\nThanks for updating the patch.\r\n\r\nI noticed that in SnapBuildXidSetCatalogChanges(), \"i\" is initialized in the if\r\nbranch in REL10 patch, which is different from REL11 patch. Maybe we can modify\r\nREL11 patch to be consistent with REL10 patch.\r\n\r\nThe rest of the patch looks good to me.\r\n\r\nRegards,\r\nShi yu\r\n", "msg_date": "Wed, 3 Aug 2022 06:52:46 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Aug 3, 2022 at 3:52 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Aug 3, 2022 12:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached updated patches that incorporated the above comments as\n> > well as the comments from Shi yu. Please review them.\n> >\n>\n> Thanks for updating the patch.\n>\n> I noticed that in SnapBuildXidSetCatalogChanges(), \"i\" is initialized in the if\n> branch in REL10 patch, which is different from REL11 patch. 
Maybe we can modify\n> REL11 patch to be consistent with REL10 patch.\n>\n> The rest of the patch looks good to me.\n\nOops, thanks for pointing it out. I've fixed it and attached updated\npatches for all branches so as not to confuse the patch version. There\nis no update from v12 patch on REL12 - master patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 3 Aug 2022 16:49:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Wed, Aug 3, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 3:52 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n> >\n> > On Wed, Aug 3, 2022 12:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > I've attached updated patches that incorporated the above comments as\n> > > well as the comments from Shi yu. Please review them.\n> > >\n> >\n> > Thanks for updating the patch.\n> >\n> > I noticed that in SnapBuildXidSetCatalogChanges(), \"i\" is initialized in the if\n> > branch in REL10 patch, which is different from REL11 patch. Maybe we can modify\n> > REL11 patch to be consistent with REL10 patch.\n> >\n> > The rest of the patch looks good to me.\n>\n> Oops, thanks for pointing it out. I've fixed it and attached updated\n> patches for all branches so as not to confuse the patch version. There\n> is no update from v12 patch on REL12 - master patches.\n>\n\nThanks for the updated patches, the changes look good to me.\nHoriguchi-San, and others, do you have any further comments on this or\ndo you want to spend time in review of it? 
If not, I would like to\npush this after the current minor version release.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 8 Aug 2022 09:34:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Mon, Aug 8, 2022 at 9:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Aug 3, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > Oops, thanks for pointing it out. I've fixed it and attached updated\n> > patches for all branches so as not to confuse the patch version. There\n> > is no update from v12 patch on REL12 - master patches.\n> >\n>\n> Thanks for the updated patches, the changes look good to me.\n> Horiguchi-San, and others, do you have any further comments on this or\n> do you want to spend time in review of it? If not, I would like to\n> push this after the current minor version release.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 11 Aug 2022 11:40:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Thu, Aug 11, 2022 at 3:10 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Aug 8, 2022 at 9:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Aug 3, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > Oops, thanks for pointing it out. I've fixed it and attached updated\n> > > patches for all branches so as not to confuse the patch version. 
There\n> > > is no update from v12 patch on REL12 - master patches.\n> > >\n> >\n> > Thanks for the updated patches, the changes look good to me.\n> > Horiguchi-San, and others, do you have any further comments on this or\n> > do you want to spend time in review of it? If not, I would like to\n> > push this after the current minor version release.\n> >\n>\n> Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 12 Aug 2022 09:08:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "Hi,\n\nOn 8/11/22 8:10 AM, Amit Kapila wrote:\n> On Mon, Aug 8, 2022 at 9:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> On Wed, Aug 3, 2022 at 1:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>>>\n>>> Oops, thanks for pointing it out. I've fixed it and attached updated\n>>> patches for all branches so as not to confuse the patch version. There\n>>> is no update from v12 patch on REL12 - master patches.\n>>>\n>> Thanks for the updated patches, the changes look good to me.\n>> Horiguchi-San, and others, do you have any further comments on this or\n>> do you want to spend time in review of it? 
If not, I would like to\n>> push this after the current minor version release.\n>>\n> Pushed.\n\nThank you!\n\nI just marked the corresponding CF entry [1] as committed.\n\n[1]: https://commitfest.postgresql.org/39/3041/\n\nRegards,\n\n-- \n\nBertrand Drouvot\nAmazon Web Services: https://aws.amazon.com\n\n\n\n", "msg_date": "Fri, 12 Aug 2022 15:38:00 +0200", "msg_from": "\"Drouvot, Bertrand\" <bdrouvot@amazon.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Fri, Jul 29, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> >\n> > Yeah, your description makes sense to me. I've also considered how to\n> > hit this path but I guess it is never hit. Thinking of it in another\n> > way, first of all, at least 2 catalog modifying transactions have to\n> > be running while writing a xl_running_xacts. The serialized snapshot\n> > that is written when we decode the first xl_running_xact has two\n> > transactions. Then, one of them is committed before the second\n> > xl_running_xacts. The second serialized snapshot has only one\n> > transaction. Then, the transaction is also committed after that. Now,\n> > in order to execute the path, we need to start decoding from the first\n> > serialized snapshot. However, if we start from there, we cannot decode\n> > the full contents of the transaction that was committed later.\n> >\n>\n> I think then we should change this code in the master branch patch\n> with an additional comment on the lines of: \"Either all the xacts got\n> purged or none. 
It is only possible to partially remove the xids from\n> this array if one or more of the xids are still running but not all.\n> That can happen if we start decoding from a point (LSN where the\n> snapshot state became consistent) where all the xacts in this were\n> running and then at least one of those got committed and a few are\n> still running. We will never start from such a point because we won't\n> move the slot's restart_lsn past the point where the oldest running\n> transaction's restart_decoding_lsn is.\"\n>\n\nUnfortunately, this theory doesn't turn out to be true. While\ninvestigating the latest buildfarm failure [1], I see that it is\npossible that only part of the xacts in the restored catalog modifying\nxacts list needs to be purged. See the attached where I have\ndemonstrated it via a reproducible test. It seems the point we were\nmissing was that to start from a point where two or more catalog\nmodifying were serialized, it requires another open transaction before\nboth get committed, and then we need the checkpoint or other way to\nforce running_xacts record in-between the commit of initial two\ncatalog modifying xacts. There could possibly be other ways as well\nbut the theory above wasn't correct.\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=curculio&dt=2022-08-25%2004%3A15%3A34\n\n\n--\nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 27 Aug 2022 12:26:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, Aug 27, 2022 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jul 29, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Yeah, your description makes sense to me. I've also considered how to\n> > > hit this path but I guess it is never hit. 
Thinking of it in another\n> > > way, first of all, at least 2 catalog modifying transactions have to\n> > > be running while writing a xl_running_xacts. The serialized snapshot\n> > > that is written when we decode the first xl_running_xact has two\n> > > transactions. Then, one of them is committed before the second\n> > > xl_running_xacts. The second serialized snapshot has only one\n> > > transaction. Then, the transaction is also committed after that. Now,\n> > > in order to execute the path, we need to start decoding from the first\n> > > serialized snapshot. However, if we start from there, we cannot decode\n> > > the full contents of the transaction that was committed later.\n> > >\n> >\n> > I think then we should change this code in the master branch patch\n> > with an additional comment on the lines of: \"Either all the xacts got\n> > purged or none. It is only possible to partially remove the xids from\n> > this array if one or more of the xids are still running but not all.\n> > That can happen if we start decoding from a point (LSN where the\n> > snapshot state became consistent) where all the xacts in this were\n> > running and then at least one of those got committed and a few are\n> > still running. We will never start from such a point because we won't\n> > move the slot's restart_lsn past the point where the oldest running\n> > transaction's restart_decoding_lsn is.\"\n> >\n>\n> Unfortunately, this theory doesn't turn out to be true. While\n> investigating the latest buildfarm failure [1], I see that it is\n> possible that only part of the xacts in the restored catalog modifying\n> xacts list needs to be purged. See the attached where I have\n> demonstrated it via a reproducible test. 
It seems the point we were\n> missing was that to start from a point where two or more catalog\n> modifying were serialized, it requires another open transaction before\n> both get committed, and then we need the checkpoint or other way to\n> force running_xacts record in-between the commit of initial two\n> catalog modifying xacts. There could possibly be other ways as well\n> but the theory above wasn't correct.\n>\n\nThank you for the analysis and the patch. I have the same conclusion.\nSince we took this approach only on the master the back branches are\nnot affected.\n\nThe new test scenario makes sense to me and looks better than the one\nI have. Regarding the fix, I think we should use\nTransactionIdFollowsOrEquals() instead of\nNormalTransactionIdPrecedes():\n\n + for (off = 0; off < builder->catchange.xcnt; off++)\n + {\n + if (NormalTransactionIdPrecedes(builder->catchange.xip[off],\n + builder->xmin))\n + break;\n + }\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sat, 27 Aug 2022 16:35:52 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, Aug 27, 2022 at 1:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Aug 27, 2022 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Jul 29, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > >\n> > > > Yeah, your description makes sense to me. I've also considered how to\n> > > > hit this path but I guess it is never hit. Thinking of it in another\n> > > > way, first of all, at least 2 catalog modifying transactions have to\n> > > > be running while writing a xl_running_xacts. 
The serialized snapshot\n> > > > that is written when we decode the first xl_running_xact has two\n> > > > transactions. Then, one of them is committed before the second\n> > > > xl_running_xacts. The second serialized snapshot has only one\n> > > > transaction. Then, the transaction is also committed after that. Now,\n> > > > in order to execute the path, we need to start decoding from the first\n> > > > serialized snapshot. However, if we start from there, we cannot decode\n> > > > the full contents of the transaction that was committed later.\n> > > >\n> > >\n> > > I think then we should change this code in the master branch patch\n> > > with an additional comment on the lines of: \"Either all the xacts got\n> > > purged or none. It is only possible to partially remove the xids from\n> > > this array if one or more of the xids are still running but not all.\n> > > That can happen if we start decoding from a point (LSN where the\n> > > snapshot state became consistent) where all the xacts in this were\n> > > running and then at least one of those got committed and a few are\n> > > still running. We will never start from such a point because we won't\n> > > move the slot's restart_lsn past the point where the oldest running\n> > > transaction's restart_decoding_lsn is.\"\n> > >\n> >\n> > Unfortunately, this theory doesn't turn out to be true. While\n> > investigating the latest buildfarm failure [1], I see that it is\n> > possible that only part of the xacts in the restored catalog modifying\n> > xacts list needs to be purged. See the attached where I have\n> > demonstrated it via a reproducible test. It seems the point we were\n> > missing was that to start from a point where two or more catalog\n> > modifying were serialized, it requires another open transaction before\n> > both get committed, and then we need the checkpoint or other way to\n> > force running_xacts record in-between the commit of initial two\n> > catalog modifying xacts. 
There could possibly be other ways as well\n> > but the theory above wasn't correct.\n> >\n>\n> Thank you for the analysis and the patch. I have the same conclusion.\n> Since we took this approach only on the master the back branches are\n> not affected.\n>\n> The new test scenario makes sense to me and looks better than the one\n> I have. Regarding the fix, I think we should use\n> TransactionIdFollowsOrEquals() instead of\n> NormalTransactionIdPrecedes():\n>\n> + for (off = 0; off < builder->catchange.xcnt; off++)\n> + {\n> + if (NormalTransactionIdPrecedes(builder->catchange.xip[off],\n> + builder->xmin))\n> + break;\n> + }\n>\n\nRight, fixed.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 27 Aug 2022 15:54:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, Aug 27, 2022 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Aug 27, 2022 at 1:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Aug 27, 2022 at 3:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Jul 29, 2022 at 12:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > >\n> > > > > Yeah, your description makes sense to me. I've also considered how to\n> > > > > hit this path but I guess it is never hit. Thinking of it in another\n> > > > > way, first of all, at least 2 catalog modifying transactions have to\n> > > > > be running while writing a xl_running_xacts. The serialized snapshot\n> > > > > that is written when we decode the first xl_running_xact has two\n> > > > > transactions. Then, one of them is committed before the second\n> > > > > xl_running_xacts. The second serialized snapshot has only one\n> > > > > transaction. Then, the transaction is also committed after that. 
Now,\n> > > > > in order to execute the path, we need to start decoding from the first\n> > > > > serialized snapshot. However, if we start from there, we cannot decode\n> > > > > the full contents of the transaction that was committed later.\n> > > > >\n> > > >\n> > > > I think then we should change this code in the master branch patch\n> > > > with an additional comment on the lines of: \"Either all the xacts got\n> > > > purged or none. It is only possible to partially remove the xids from\n> > > > this array if one or more of the xids are still running but not all.\n> > > > That can happen if we start decoding from a point (LSN where the\n> > > > snapshot state became consistent) where all the xacts in this were\n> > > > running and then at least one of those got committed and a few are\n> > > > still running. We will never start from such a point because we won't\n> > > > move the slot's restart_lsn past the point where the oldest running\n> > > > transaction's restart_decoding_lsn is.\"\n> > > >\n> > >\n> > > Unfortunately, this theory doesn't turn out to be true. While\n> > > investigating the latest buildfarm failure [1], I see that it is\n> > > possible that only part of the xacts in the restored catalog modifying\n> > > xacts list needs to be purged. See the attached where I have\n> > > demonstrated it via a reproducible test. It seems the point we were\n> > > missing was that to start from a point where two or more catalog\n> > > modifying were serialized, it requires another open transaction before\n> > > both get committed, and then we need the checkpoint or other way to\n> > > force running_xacts record in-between the commit of initial two\n> > > catalog modifying xacts. There could possibly be other ways as well\n> > > but the theory above wasn't correct.\n> > >\n> >\n> > Thank you for the analysis and the patch. 
I have the same conclusion.\n> > Since we took this approach only on the master the back branches are\n> > not affected.\n> >\n> > The new test scenario makes sense to me and looks better than the one\n> > I have. Regarding the fix, I think we should use\n> > TransactionIdFollowsOrEquals() instead of\n> > NormalTransactionIdPrecedes():\n> >\n> > + for (off = 0; off < builder->catchange.xcnt; off++)\n> > + {\n> > + if (NormalTransactionIdPrecedes(builder->catchange.xip[off],\n> > + builder->xmin))\n> > + break;\n> > + }\n> >\n>\n> Right, fixed.\n\nThank you for updating the patch! It looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Sat, 27 Aug 2022 22:35:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "On Sat, Aug 27, 2022 at 7:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Aug 27, 2022 at 7:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Aug 27, 2022 at 1:06 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > I think then we should change this code in the master branch patch\n> > > > > with an additional comment on the lines of: \"Either all the xacts got\n> > > > > purged or none. It is only possible to partially remove the xids from\n> > > > > this array if one or more of the xids are still running but not all.\n> > > > > That can happen if we start decoding from a point (LSN where the\n> > > > > snapshot state became consistent) where all the xacts in this were\n> > > > > running and then at least one of those got committed and a few are\n> > > > > still running. 
We will never start from such a point because we won't\n> > > > > move the slot's restart_lsn past the point where the oldest running\n> > > > > transaction's restart_decoding_lsn is.\"\n> > > > >\n> > > >\n> > > > Unfortunately, this theory doesn't turn out to be true. While\n> > > > investigating the latest buildfarm failure [1], I see that it is\n> > > > possible that only part of the xacts in the restored catalog modifying\n> > > > xacts list needs to be purged. See the attached where I have\n> > > > demonstrated it via a reproducible test. It seems the point we were\n> > > > missing was that to start from a point where two or more catalog\n> > > > modifying were serialized, it requires another open transaction before\n> > > > both get committed, and then we need the checkpoint or other way to\n> > > > force running_xacts record in-between the commit of initial two\n> > > > catalog modifying xacts. There could possibly be other ways as well\n> > > > but the theory above wasn't correct.\n> > > >\n> > >\n> > > Thank you for the analysis and the patch. I have the same conclusion.\n> > > Since we took this approach only on the master the back branches are\n> > > not affected.\n> > >\n> > > The new test scenario makes sense to me and looks better than the one\n> > > I have. Regarding the fix, I think we should use\n> > > TransactionIdFollowsOrEquals() instead of\n> > > NormalTransactionIdPrecedes():\n> > >\n> > > + for (off = 0; off < builder->catchange.xcnt; off++)\n> > > + {\n> > > + if (NormalTransactionIdPrecedes(builder->catchange.xip[off],\n> > > + builder->xmin))\n> > > + break;\n> > > + }\n> > >\n> >\n> > Right, fixed.\n>\n> Thank you for updating the patch! 
It looks good to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Aug 2022 11:47:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" }, { "msg_contents": "> Pushed.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n>\nHi!\n\nWhile working on the 64-bit XID patch set, I stumbled into problems with\nthe contrib/test_decoding/catalog_change_snapshot test [0].\n\nAFAICS, the problem is not related to the 64-bit XID patch set; the\nproblem is in the InitialRunningXacts array, allocated in\nbuilder->context. Do we really need it to be allocated that way?\n\n\n[0]\nhttps://www.postgresql.org/message-id/CACG%3DezZoz_KG%2BRyh9MrU_g5e0HiVoHocEvqFF%3DNRrhrwKmEQJQ%40mail.gmail.com\n\n\n-- \nBest regards,\nMaxim Orlov.\n", "msg_date": "Mon, 21 Nov 2022 16:08:05 +0300", "msg_from": "Maxim Orlov <orlovmg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] Logical replication failure \"ERROR: could not map filenode\n \"base/13237/442428\" to relation OID\" with catalog modifying txns" } ]
[ { "msg_contents": "The wait event WalReceiverWaitStart has been categorized in the type Client.\nBut why? Walreceiver is waiting for startup process to set the lsn and\ntimeline while it is reporting WalReceiverWaitStart. So its type should be IPC,\ninstead?\n\nThe wait event WalSenderWaitForWAL has also been categorized in the type\nClient. While this wait event is being reported, logical replication walsender\nis waiting for not only new WAL to be flushed but also the socket to be\nreadable and writeable (if there is pending data). I guess that this is why\nits type is Client. But ISTM walsender is *mainly* waiting for new WAL to be\nflushed by other processes during that period, so I think that it's better\nto use IPC as the type of the wait event WalSenderWaitForWAL. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 16 Mar 2021 03:12:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "At Tue, 16 Mar 2021 03:12:54 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> The wait event WalReceiverWaitStart has been categorized in the type\n> Client.\n> But why? Walreceiver is waiting for startup process to set the lsn and\n> timeline while it is reporting WalReceiverWaitStart. So its type\n> should be IPC,\n> instead?\n>\n> The wait event WalSenderWaitForWAL has also been categorized in the\n> type\n> Client. While this wait event is being reported, logical replication\n> walsender\n> is waiting for not only new WAL to be flushed but also the socket to\n> be\n> readable and writeable (if there is pending data). I guess that this\n> is why\n> its type is Client. 
But ISTM walsender is *mainly* waiting for new WAL\n> to be\n> flushed by other processes during that period, so I think that it's\n> better\n> to use IPC as the type of the wait event WalSenderWaitForWAL. Thought?\n\nI agree that it's definitely not a client wait. It would be either\nactivity or IPC. My reasoning for the latter is that it's similar to\nWAIT_EVENT_WAL_RECEIVER_MAIN since both are a wait for\nWalReceiverMain to continue, with the difference that in the latter\nstate walreceiver hears where to start.\n\nI don't object if it were categorized to IPC, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 16 Mar 2021 11:59:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and\n WalSenderWaitForWAL" }, { "msg_contents": "\n\nOn 2021/03/16 11:59, Kyotaro Horiguchi wrote:\n> At Tue, 16 Mar 2021 03:12:54 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> The wait event WalReceiverWaitStart has been categorized in the type\n>> Client.\n>> But why? Walreceiver is waiting for startup process to set the lsn and\n>> timeline while it is reporting WalReceiverWaitStart. So its type\n>> should be IPC,\n>> instead?\n>>\n>> The wait event WalSenderWaitForWAL has also been categorized in the\n>> type\n>> Client. While this wait event is being reported, logical replication\n>> walsender\n>> is waiting for not only new WAL to be flushed but also the socket to\n>> be\n>> readable and writeable (if there is pending data). I guess that this\n>> is why\n>> its type is Client. 
My reasoning for the latter is it's similar to\n> WAIT_EVENT_WAL_RECEIVER_MAIN since both are a wait while\n> WalReceiverMain to continue. With a difference thatin walreceiver\n> hears where to start in the latter state.\n> \n> I don't object if it were categorized to IPC, though.\n\nOk. And on my further thought;\nThere are three calls to WalSndWait() in walsender.c as follow.\n\n1. WalSndLoop() calls WalSndWait() with the wait event\n \"Activity:WalSenderMain\". Both physical and logical replication walsenders\n use this function.\n2. WalSndWriteData() calls WalSndWait() with the wait event\n \"Client:WalSenderWriteData\". Only logical replication walsender uses\n this function.\n3. WalSndWaitForWal() calls WalSndWait() with the wait event\n \"Client:WalSenderWaitForWAL\". Only logical replication walsender\n uses this function.\n\nThese three WalSndWait() basically do the same thing, i.e., wait for the latch\nset, timeout, postmaster death, the readable and writeable socket. So you\nmay think that it's strange to categorize them differently. 
Maybe it's better\nto categorize all of them in Activity?\n\nOr it's better to categorize only WalSenderMain in Activity, and the others\nin IPC because only WalSenderMain is reported in walsender's main loop.\nAt least for me the latter is better because the former, i.e., reporting\nthree different events for walsender's activity in main loop, seems a bit strange.\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 16 Mar 2021 15:42:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "At Tue, 16 Mar 2021 15:42:27 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/03/16 11:59, Kyotaro Horiguchi wrote:\n> > At Tue, 16 Mar 2021 03:12:54 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in\n> >> The wait event WalReceiverWaitStart has been categorized in the type\n> >> Client.\n> >> But why? Walreceiver is waiting for startup process to set the lsn and\n> >> timeline while it is reporting WalReceiverWaitStart. So its type\n> >> should be IPC,\n> >> instead?\n> >>\n> >> The wait event WalSenderWaitForWAL has also been categorized in the\n> >> type\n> >> Client. While this wait event is being reported, logical replication\n> >> walsender\n> >> is waiting for not only new WAL to be flushed but also the socket to\n> >> be\n> >> readable and writeable (if there is pending data). I guess that this\n> >> is why\n> >> its type is Client. But ISTM walsender is *mainly* waiting for new WAL\n> >> to be\n> >> flushed by other processes during that period, so I think that it's\n> >> better\n> >> to use IPC as the type of the wait event WalSenderWaitForWAL. Thought?\n> > I agree that it's definitely not a client wait. It would be either\n> > activity or IPC. 
My reasoning for the latter is that it's similar to\n> > WAIT_EVENT_WAL_RECEIVER_MAIN since both are a wait for\n> > WalReceiverMain to continue, with the difference that in the latter\n> > state walreceiver hears where to start.\n> > I don't object if it were categorized to IPC, though.\n> \n> Ok. And on my further thought:\n> There are three calls to WalSndWait() in walsender.c as follows.\n> \n> 1. WalSndLoop() calls WalSndWait() with the wait event\n> \"Activity:WalSenderMain\". Both physical and logical replication\n> walsenders\n> use this function.\n> 2. WalSndWriteData() calls WalSndWait() with the wait event\n> \"Client:WalSenderWriteData\". Only logical replication walsender uses\n> this function.\n> 3. WalSndWaitForWal() calls WalSndWait() with the wait event\n> \"Client:WalSenderWaitForWAL\". Only logical replication walsender\n> uses this function.\n> \n> These three WalSndWait() basically do the same thing, i.e., wait for\n> the latch\n> set, timeout, postmaster death, the readable and writeable socket. So\n> you\n> may think that it's strange to categorize them differently. Maybe it's\n> better\n> to categorize all of them in Activity?\n\nI think it'd be better that they are categorized by what it is waiting\nfor.\n\nActivity is waiting for something gating me to be released.\n\nIPC is waiting for the response for a request previously sent to\nanother process.\n\nWait-client is waiting for the peer over a network connection to allow\nme to proceed with my activity.\n\nSo whether the three fall into the same category or not doesn't matter\nto me.\n\n\nWAIT_EVENT_WAL_RECEIVER_MAIN(WalReceiverMain) is waiting for new data\nto arrive. This looks like an activity to me.\n\nWAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\nprocess to kick me. So it may be either IPC or Activity. Since\nwalreceiver hasn't sent anything to startup, it's activity, rather\nthan IPC. 
However, the behavior can be said to convey a piece of\ninformation from startup to wal receiver, so it also can be said to be\nan IPC. (That is the reason why I don't object to IPC.)\n\n1 (WAIT_EVENT_WAL_SENDER_MAIN, currently an activity) is waiting for\nsomething to happen on the connection to the peer\nreceiver/worker. This might either be an activity or a wait_client,\nbut I prefer it to be wait_client, as the same behavior of a client\nbackend is categorized as wait_client.\n\n2 (WAIT_EVENT_WAL_SENDER_WRITE_DATA, currently a wait_client) is the\nsame as 1.\n\n3 (WAIT_EVENT_WAL_SENDER_WAIT_WAL, currently a wait_client) is the\nsame as 1.\n\nAs a result I'd prefer to categorize all of them to Activity.\n\n> Or it's better to categorize only WalSenderMain in Activity, and the\n> others\n> in IPC because only WalSenderMain is reported in walsender's main\n> loop.\n\nI don't think 1, 2 and 3 are Activities. And Activity doesn't\nnecessarily mean the main loop, as I described. And as I said,\nWAIT_EVENT_WAL_RECEIVER_WAIT_START seems in between IPC and activity,\nso I don't object to categorizing it as IPC.\n\n> At least for me the latter is better because the former, i.e.,\n> reporting\n> three different events for walsender's activity in main loop seems a\n> bit strange.\n> Thought?\n\nMaybe it's the difference in what one considers as the same event.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 17 Mar 2021 15:31:37 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and\n WalSenderWaitForWAL" }, { "msg_contents": "At Wed, 17 Mar 2021 15:31:37 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> WAIT_EVENT_WAL_RECEIVER_MAIN(WalReceiverMain) is waiting for new data\n> to arrive. 
This looks like an activity to me.\n> \n> WAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\n> process to kick me. So it may be either IPC or Activity. Since\n> walreceiver hasn't sent anything to startup, it's activity, rather\n> than IPC. However, the behavior can be said to convey a piece of\n> information from startup to walreceiver, so it can also be said to be\n> an IPC. (That is the reason why I don't object to IPC.)\n> \n> 1 (WAIT_EVENT_WAL_SENDER_MAIN, currently an activity) is waiting for\n> something to happen on the connection to the peer\n> receiver/worker. This might either be an activity or a wait_client,\n> but I prefer it to be wait_client, as the same behavior of a client\n> backend is categorized as wait_client.\n> \n> 2 (WAIT_EVENT_WAL_SENDER_WRITE_DATA, currently a wait_client) is the\n> same as 1.\n> \n> 3 (WAIT_EVENT_WAL_SENDER_WAIT_WAL, currently a wait_client) is the\n> same as 1.\n\n- As the result I'd prefer to categorize all of them to Activity.\n\nYeah, I don't understand what I meant :(\n\n+ As the result I'd prefer to categorize the first two to Activity, and\n+ the last three to wait_client.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 17 Mar 2021 15:36:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and\n WalSenderWaitForWAL" }, { "msg_contents": "\n\nOn 2021/03/17 15:31, Kyotaro Horiguchi wrote:\n> I think it'd be better that they are categorized by what it is waiting\n> for.\n\nYes. And some processes can be waiting for several events at the same\nmoment.
In this case we should pick the event that those processes\n*mainly* are waiting for, as a wait event, I think.\n\n\n\n> Activity is waiting for something gating me to be released.\n> \n> IPC is waiting for the response for a request previously sent to\n> another process.\n> \n> Wait-client is waiting for the peer over a network connection to allow\n> me to proceed with my activity.\n\nI'm not sure if these definitions are really right or not because they\nseem to be slightly different from those in the document.\n\n\n> So whether the three fall into the same category or not doesn't matter\n> to me.\n\nUnderstood.\n\n\n> WAIT_EVENT_WAL_RECEIVER_MAIN(WalReceiverMain) is waiting for new data\n> to arrive. This looks like an activity to me.\n\n+1. So our consensus is not to change the category of this event.\n\n\n> WAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\n> process to kick me. So it may be either IPC or Activity. Since\n> walreceiver hasn't sent anything to startup, it's activity, rather\n> than IPC. However, the behavior can be said to convey a piece of\n> information from startup to walreceiver, so it can also be said to be\n> an IPC. (That is the reason why I don't object to IPC.)\n\nIMO this should be IPC because walreceiver is mainly waiting for the\ninteraction with the startup process, during this wait event. Since you can\nlive with IPC, probably our consensus is to use IPC?\n\n\n> 1 (WAIT_EVENT_WAL_SENDER_MAIN, currently an activity) is waiting for\n> something to happen on the connection to the peer\n> receiver/worker. This might either be an activity or a wait_client,\n> but I prefer it to be wait_client, as the same behavior of a client\n> backend is categorized as wait_client.\n\nYes, walsender is waiting for replies from the standby to arrive during\nthis event. But I think that it's *mainly* waiting for WAL to be flushed\nin order to send it.
So IPC is better for this event than Client.\nOn the other hand, wait events reported in the main loop are basically\ncategorized as Activity in other processes. So for the sake of consistency,\nI like Activity rather than IPC, for this event.\n\n\n> 2 (WAIT_EVENT_WAL_SENDER_WRITE_DATA, currently a wait_client) is the\n> same as 1.\n\nIIUC walsender is mainly waiting for the socket to be writeable, to send\nany pending data. So I agree to use Client for this event. Our consensus\nseems not to change the category of this event.\n\n\n> 3 (WAIT_EVENT_WAL_SENDER_WAIT_WAL, currently a wait_client) is the\n> same as 1.\n\nYes, walsender is waiting for replies from the standby to arrive during\nthis event. But I think that it's *mainly* waiting for WAL to be flushed\nin order to send it. So IPC is better for this event than Client.\nOn the other hand, while the server is idle, this event is reported for\nlogical walsender. This makes me think that it might be Activity, i.e.,\nwe should treat this as the wait event in logical walsender's main loop.\nSo I like Activity rather than IPC, for this event.\nIf we do this, it might be better to rename the event to WAIT_EVENT_LOGICAL_SENDER_MAIN.\n\n\nTherefore, my current idea is\n\nWAIT_EVENT_WAL_RECEIVER_MAIN should be in Activity (as currently it is)\nWAIT_EVENT_WAL_RECEIVER_WAIT_START should be moved to IPC\nWAIT_EVENT_WAL_SENDER_MAIN should be in Activity (as currently it is)\nWAIT_EVENT_WAL_SENDER_WRITE_DATA should be in Client (as currently it is)\nWAIT_EVENT_WAL_SENDER_WAIT_WAL should be moved to Activity.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:48:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "On 2021/03/18 18:48, Fujii Masao
wrote:\n>> WAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\n>> process to kick me. So it may be either IPC or Activity. Since\n>> walreceiver hasn't sent anything to startup, it's activity, rather\n>> than IPC. However, the behavior can be said to convey a piece of\n>> information from startup to walreceiver, so it can also be said to be\n>> an IPC. (That is the reason why I don't object to IPC.)\n> \n> IMO this should be IPC because walreceiver is mainly waiting for the\n> interaction with the startup process, during this wait event. Since you can\n> live with IPC, probably our consensus is to use IPC?\n\nIf this is ok, I'd like to apply the attached patch first.\nThis patch changes the type of WAIT_EVENT_WAL_RECEIVER_WAIT_START\nfrom Client to IPC.\n\nBTW, I found that recently the WalrcvExit wait event was introduced.\nBut this name is not consistent with other events. I'm thinking that\nit's better to rename it to WalReceiverExit.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 19 Mar 2021 14:01:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "At Thu, 18 Mar 2021 18:48:50 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2021/03/17 15:31, Kyotaro Horiguchi wrote:\n> > I think it'd be better that they are categorized by what it is waiting\n> > for.\n> \n> Yes. And some processes can be waiting for several events at the same\n> moment.
In this case we should pick the event that those processes\n> *mainly* are waiting for, as a wait event, I think.\n\nRight.\n\n> > Activity is waiting for something gating me to be released.\n> > IPC is waiting for the response for a request previously sent to\n> > another process.\n> > Wait-client is waiting for the peer over a network connection to allow\n> > me to proceed with my activity.\n> \n> I'm not sure if these definitions are really right or not because they\n> seem to be slightly different from those in the document.\n\nMaybe it depends on what \"main processing loop\" means. I found my\nwords are inaccurate. \"something gating me\" meant the main\nwork. In the case of the walsender main loop, it's the advance of the WAL\nflush location. In a broader sense it is a kind of IPC in most cases, but the\ndifference is, as you said, in what the wait is waiting for in those\ncases.\n\n> > So whether the three fall into the same category or not doesn't matter\n> > to me.\n> \n> Understood.\n> \n> \n> > WAIT_EVENT_WAL_RECEIVER_MAIN(WalReceiverMain) is waiting for new data\n> > to arrive. This looks like an activity to me.\n> \n> +1. So our consensus is not to change the category of this event.\n\nAgreed.\n\n> > WAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\n> > process to kick me. So it may be either IPC or Activity. Since\n> > walreceiver hasn't sent anything to startup, it's activity, rather\n> > than IPC. However, the behavior can be said to convey a piece of\n> > information from startup to walreceiver, so it can also be said to be\n> > an IPC. (That is the reason why I don't object to IPC.)\n> \n> IMO this should be IPC because walreceiver is mainly waiting for the\n> interaction with the startup process, during this wait event.
Since\n> you can\n> live with IPC, probably our consensus is to use IPC?\n\nExactly.\n\n> > 1 (WAIT_EVENT_WAL_SENDER_MAIN, currently an activity) is waiting for\n> > something to happen on the connection to the peer\n> > receiver/worker. This might either be an activity or a wait_client,\n> > but I prefer it to be wait_client, as the same behavior of a client\n> > backend is categorized as wait_client.\n> \n> Yes, walsender is waiting for replies from the standby to arrive\n> during\n> this event. But I think that it's *mainly* waiting for WAL to be\n> flushed\n> in order to send it. So IPC is better for this event than\n> Client.\n> On the other hand, wait events reported in the main loop are basically\n> categorized as Activity in other processes. So for the sake of\n> consistency,\n> I like Activity rather than IPC, for this event.\n\nMmm. I agree that it waits for WAL in most cases, but still the WAL-wait\nis activity for me because it is not waiting for being cued by\nsomeone, but waiting for new WAL to come to perform its main purpose.\nIf it's an IPC, should all waits other than pure sleep fall into\nIPC? (I was confused by the comment of WalSndWait, which doesn't\nstate that it is waiting for the latch..)\n\nAnother point I'd like to raise is that the client_wait case should be\ndistinct from the WAL-wait since it is a significant sign of what is\nhappening.\n\nSo I propose two changes here.\n\na. Rewrite the comment of WalSndWait so that it states that \"also\n   waiting for latch-set\".\n\nb. Split the event into two different events.\n\n-\tWalSndWait(wakeEvents, sleeptime, WAIT_EVENT_WAL_SENDER_MAIN);\n+\tWalSndWait(wakeEvents, sleeptime,\n+\t pq_is_send_pending() ?
WAIT_EVENT_WAL_SENDER_WRITE_DATA:\n+\t WAIT_EVENT_WAL_SENDER_MAIN);\n\nAnd _WRITE_DATA as client_wait and _SENDER_MAIN as activity.\n\nWhat do you think about this?\n\n> > 2 (WAIT_EVENT_WAL_SENDER_WRITE_DATA, currently a wait_client) is the\n> > same as 1.\n> \n> IIUC walsender is mainly waiting for the socket to be writeable, to\n> send\n> any pending data. So I agree to use Client for this event. Our\n> consensus\n> seems not to change the category of this event.\n\nRight.\n\n> > 3 (WAIT_EVENT_WAL_SENDER_WAIT_WAL, currently a wait_client) is the\n> > same as 1.\n> \n> Yes, walsender is waiting for replies from the standby to arrive\n> during\n> this event. But I think that it's *mainly* waiting for WAL to be\n> flushed\n> in order to send it. So IPC is better for this event than\n> Client.\n>\n> On the other hand, while the server is idle, this event is\n> reported for\n> logical walsender. This makes me think that it might be Activity,\n> i.e.,\n> we should treat this as the wait event in logical walsender's main\n> loop.\n> So I like Activity rather than IPC, for this event.\n> If we do this, it might be better to rename the event to\n> WAIT_EVENT_LOGICAL_SENDER_MAIN.\n\nYes. The WAIT_EVENT_WAL_SENDER_WAIT_WAL is equivalent to\nWAIT_EVENT_WAL_SENDER_MAIN in function. So I think it should be in\nthe same category as WAIT_EVENT_WAL_SENDER_MAIN.
And as in 1 above, the\nwait_client case should be distinct from the _MAIN event.\n\n> Therefore, my current idea is\n> \n> WAIT_EVENT_WAL_RECEIVER_MAIN should be in Activity (as currently it\n> is)\n> WAIT_EVENT_WAL_RECEIVER_WAIT_START should be moved to IPC\n> WAIT_EVENT_WAL_SENDER_MAIN should be in Activity (as currently it is)\n> WAIT_EVENT_WAL_SENDER_WRITE_DATA should be in Client (as currently it\n> is)\n> WAIT_EVENT_WAL_SENDER_WAIT_WAL should be moved to Activity.\n\nMine is:\n\n> WAIT_EVENT_WAL_RECEIVER_MAIN should be in Activity (as currently it\n> is)\n> WAIT_EVENT_WAL_RECEIVER_WAIT_START should be moved to IPC\n\nAgreed.\n\n> WAIT_EVENT_WAL_SENDER_MAIN should be in Activity (as currently it is)\n> WAIT_EVENT_WAL_SENDER_WRITE_DATA should be in Client (as currently it\n> is)\n> WAIT_EVENT_WAL_SENDER_WAIT_WAL should be moved to Activity.\n\nAgreed. And I'd like to add _SENDER_WRITE_DATA as the alternative\nevent for _SENDER_MAIN in the case pq_is_send_pending() == true.\n\nAnd also I'd like to propose editing the comment of WalSndWait().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 22 Mar 2021 12:01:21 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and\n WalSenderWaitForWAL" }, { "msg_contents": "On 2021/03/22 12:01, Kyotaro Horiguchi wrote:\n>>> WAIT_EVENT_WAL_RECEIVER_WAIT_START is waiting for the startup\n>>> process to kick me. So it may be either IPC or Activity. Since\n>>> walreceiver hasn't sent anything to startup, it's activity, rather\n>>> than IPC. However, the behavior can be said to convey a piece of\n>>> information from startup to walreceiver, so it can also be said to be\n>>> an IPC.
(That is the reason why I don't object to IPC.)\n>>\n>> IMO this should be IPC because walreceiver is mainly waiting for the\n>> interaction with the startup process, during this wait event. Since you can\n>> live with IPC, probably our consensus is to use IPC?\n> \n> Exactly.\n\nOk, so barring any objection, I will commit the patch that I posted upthread.\n\n\n> Mmm. I agree that it waits for WAL in most cases, but still the WAL-wait\n> is activity for me because it is not waiting for being cued by\n> someone, but waiting for new WAL to come to perform its main purpose.\n> If it's an IPC, should all waits other than pure sleep fall into\n> IPC? (I was confused by the comment of WalSndWait, which doesn't\n> state that it is waiting for the latch..)\n> \n> Another point I'd like to raise is that the client_wait case should be\n> distinct from the WAL-wait since it is a significant sign of what is\n> happening.\n> \n> So I propose two changes here.\n> \n> a. Rewrite the comment of WalSndWait so that it states that \"also\n> waiting for latch-set\".\n\n+1\n\n\n> b.
What about the attached patch (WalSenderWaitForWAL.patch)?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 22 Mar 2021 13:59:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "\n\nOn 2021/03/22 13:59, Fujii Masao wrote:\n> \n> Ok, so barring any objection, I will commit the patch that I posted upthread.\n\nPushed!\n\nI'm waiting for the other two patches to be reviewed :)\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:16:05 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and WalSenderWaitForWAL" }, { "msg_contents": "(I finally get to catch up here..)\n\nAt Mon, 22 Mar 2021 13:59:47 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2021/03/22 12:01, Kyotaro Horiguchi wrote:\n> > Mmm. I agree that it waits for WAL in most cases, but still the WAL-wait\n> > is activity for me because it is not waiting for being cued by\n> > someone, but waiting for new WAL to come to perform its main purpose.\n> > If it's an IPC, should all waits other than pure sleep fall into\n> > IPC? (I was confused by the comment of WalSndWait, which doesn't\n> > state that it is waiting for the latch..)\n> > Another point I'd like to raise is that the client_wait case should be\n> > distinct from the WAL-wait since it is a significant sign of what is\n> > happening.\n> > So I propose two changes here.\n> > a. Rewrite the comment of WalSndWait so that it states that \"also\n> > waiting for latch-set\".\n> \n> +1\n\nCool.\n\n> > b.
Split the event into two different events.\n> > -\tWalSndWait(wakeEvents, sleeptime, WAIT_EVENT_WAL_SENDER_MAIN);\n> > +\tWalSndWait(wakeEvents, sleeptime,\n> > +\t pq_is_send_pending() ? WAIT_EVENT_WAL_SENDER_WRITE_DATA:\n> > +\t WAIT_EVENT_WAL_SENDER_MAIN);\n> > And _WRITE_DATA as client_wait and _SENDER_MAIN as activity.\n> > What do you think about this?\n> \n> I'm ok with this. What about the attached patch\n> (WalSenderWriteData.patch)?\n\nYeah, that is better. I'm fine with it as a whole.\n\n+ * Overwrite wait_event with WAIT_EVENT_WAL_SENDER_WRITE_DATA\n+ * if we have pending data in the output buffer and are waiting to write\n+ * data to a client.\n\nSince the function doesn't check for that directly, I'd like to write it\nas follows.\n\nOverwrite wait_event with WAIT_EVENT_WAL_SENDER_WRITE_DATA if the\ncaller asked to wait for WL_SOCKET_WRITEABLE, which means that we have\npending data in the output buffer and are waiting to write data to a\nclient.\n\n\n> > Yes. The WAIT_EVENT_WAL_SENDER_WAIT_WAL is equivalent to\n> > WAIT_EVENT_WAL_SENDER_MAIN in function. So I think it should be in\n> > the same category as WAIT_EVENT_WAL_SENDER_MAIN. And as in 1 above,\n> > the wait_client case should be distinct from the _MAIN event.\n> \n> +1. What about the attached patch (WalSenderWaitForWAL.patch)?\n\nLooks good to me. Thanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 23 Mar 2021 13:50:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Type of wait events WalReceiverWaitStart and\n WalSenderWaitForWAL" } ]
[ { "msg_contents": "\nPostgreSQL 12 and onward supports nondeterministic collations. For \"GROUP\nBY x\", which value of 'x' will PostgreSQL return in this case? The first\nvalue of x?\n\nThe SQL standard (section 8.2) states that the specific value returned is\nimplementation-defined, but requires that the value must be one of the\nspecific values in the set of values that compare equally:\n\nd) Depending on the collation, two strings may compare as equal even if they\nare of different lengths or contain different sequences of characters. When\nany of the operations MAX, MIN, and DISTINCT reference a grouping column,\nand the UNION, EXCEPT, and INTERSECT operators refer to character strings,\n*the specific value selected by these operations from a set of such equal\nvalues is implementation-dependent*.\n\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 15 Mar 2021 14:31:52 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Nondeterministic collations and the value returned by GROUP BY x" }, { "msg_contents": "Jim Finnerty <jfinnert@amazon.com> writes:\n> PostgreSQL 12 and onward supports nondeterministic collations. For \"GROUP\n> BY x\", which value of 'x' will PostgreSQL return in this case? The first\n> value of x?\n\n> The SQL standard (section 8.2) states that the specific value returned is\n> implementation-defined, but requires that the value must be one of the\n> specific values in the set of values that compare equally:\n\n> d) Depending on the collation, two strings may compare as equal even if they\n> are of different lengths or contain different sequences of characters.
When\n> any of the operations MAX, MIN, and DISTINCT reference a grouping column,\n> and the UNION, EXCEPT, and INTERSECT operators refer to character strings,\n> *the specific value selected by these operations from a set of such equal\n> values is implementation-dependent*.\n\nAs I recall, \"implementation-dependent\" means specifically that we *don't*\nhave to make any promise about which particular value will be selected.\nIf it said \"implementation-defined\" then we would.\n\nI expect that in practice it'd be the first of the group that arrives at\nthe grouping plan node --- but that doesn't really get you any closer\nto being able to say which one it is exactly. The input is either not\nordered at all, or ordered by something like a Sort node, which itself\nis not going to make any promises about which one of a group of peers\nis delivered first.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 15 Mar 2021 17:46:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nondeterministic collations and the value returned by GROUP BY x" }, { "msg_contents": "Right.
It doesn't matter which of the values is returned; however, a\nplausible-sounding implementation would case-fold the value, like GROUP BY\nLOWER(x), but the case-folded value isn't necessarily one of the original\nvalues and so that could be subtly wrong in the case-insensitive case, and\ncould in principle be completely wrong in the most general nondeterministic\ncollation case where the case-folded value isn't even equal to the other\nmembers of the set.\n\ndoes the implementation in PG12 ensure that some member of the set of equal\nvalues is chosen as the representative value?\n\n\n\n-----\nJim Finnerty, AWS, Amazon Aurora PostgreSQL\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Tue, 16 Mar 2021 06:33:34 -0700 (MST)", "msg_from": "Jim Finnerty <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Nondeterministic collations and the value returned by GROUP BY\n x" }, { "msg_contents": "Jim Finnerty <jfinnert@amazon.com> writes:\n> right. It doesn't matter which of the values is returned; however, a\n> plausible-sounding implementation would case-fold the value, like GROUP BY\n> LOWER(x), but the case-folded value isn't necessarily one of the original\n> values and so that could be subtly wrong in the case-insensitive case, and\n> could in principle be completely wrong in the most general nondeterministic\n> collation case where the case-folded value isn't even equal to the other\n> members of the set.\n\n> does the implementation in PG12 ensure that some member of the set of equal\n> values is chosen as the representative value?\n\nWithout having actually looked, I'm pretty certain it does.\nConsiderations of data type independence would seem to rule out a hack\nlike applying case folding. 
There might be case folding happening\ninternally to comparison functions, like citext_cmp, but that wouldn't\naffect the grouping logic that is going to save aside one of the\ngroup of peer values.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Mar 2021 10:14:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Nondeterministic collations and the value returned by GROUP BY x" } ]
[ { "msg_contents": "Add libpq pipeline mode support to pgbench\n\nNew metacommands \\startpipeline and \\endpipeline allow the user to run\nqueries in libpq pipeline mode.\n\nAuthor: Daniel Vérité <daniel@manitou-mail.org>\nReviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\nDiscussion: https://postgr.es/m/b4e34135-2bd9-4b8a-94ca-27d760da26d7@manitou-mail.org\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/9aa491abbf07ca8385a429385be8d68517384fdf\n\nModified Files\n--------------\ndoc/src/sgml/ref/pgbench.sgml | 22 +++++\nsrc/bin/pgbench/pgbench.c | 131 ++++++++++++++++++++++++---\nsrc/bin/pgbench/t/001_pgbench_with_server.pl | 79 +++++++++++++++-\n3 files changed, 216 insertions(+), 16 deletions(-)", "msg_date": "Mon, 15 Mar 2021 21:35:10 +0000", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "pgsql: Add libpq pipeline mode support to pgbench" }, { "msg_contents": "Bonjour Daniel, Ola Alvaro,\n\n> Add libpq pipeline mode support to pgbench\n>\n> New metacommands \\startpipeline and \\endpipeline allow the user to run\n> queries in libpq pipeline mode.\n>\n> Author: Daniel Vérité <daniel@manitou-mail.org>\n> Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>\n> Discussion: https://postgr.es/m/b4e34135-2bd9-4b8a-94ca-27d760da26d7@manitou-mail.org\n\nI did not notice that the libpq pipeline mode thread had a pgbench patch \nattached, otherwise I would have looked at it.\n\nSome minor post-commit comments:\n\nFor consistency with the existing \\if … \\endif, ISTM that it could have \nbeen named \\batch … \\endbatch or \\pipeline … \\endpipeline?\n\nISTM that the constraint checks (nesting, no \\[ag]set) could be added to \nCheckConditional that could be renamed to CheckScript. 
That could allow simplifying the \nchecks in the command execution into mere Asserts.\n\n-- \nFabien.", "msg_date": "Wed, 17 Mar 2021 10:17:01 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add libpq pipeline mode support to pgbench" }, { "msg_contents": "\tFabien COELHO wrote:\n\n> For consistency with the existing \\if … \\endif, ISTM that it could have \n> been named \\batch … \\endbatch or \\pipeline … \\endpipeline?\n\n\"start\" mirrors \"end\". To me, the analogy with \\if-\\endif is not\nobvious.\nGrammatically \\if is meant to introduce the expression after it,\nwhereas \\startpipeline takes no argument.\nFunctionally \\startpipeline can be thought of as \"open the valve\"\nand \\endpipeline \"close the valve\". They're \"call-to-action\" kinds of\ncommands, and in that sense quite different from the \\if-\\endif pair.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: https://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 17 Mar 2021 14:24:55 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add libpq pipeline mode support to pgbench" }, { "msg_contents": "On 2021-Mar-17, Daniel Verite wrote:\n\n> \tFabien COELHO wrote:\n> \n> > For consistency with the existing \\if … \\endif, ISTM that it could have \n> > been named \\batch … \\endbatch or \\pipeline … \\endpipeline?\n> \n> \"start\" mirrors \"end\". To me, the analogy with \\if-\\endif is not\n> obvious.\n> Grammatically \\if is meant to introduce the expression after it,\n> whereas \\startpipeline takes no argument.\n> Functionally \\startpipeline can be thought of as \"open the valve\"\n> and \\endpipeline \"close the valve\".
They're \"call-to-action\" kinds of\n> commands, and in that sense quite different from the \\if-\\endif pair.\n\nI forgot to reply to this, but I did consider the naming of these\ncommands before commit, and I tend to side with Daniel here. I think\nit's not totally unreasonable to have in the future another command\n\\syncpipeline which does PQpipelineSync(); if the commands are\n\\pipeline and \\endpipeline then a \\syncpipeline in the middle makes less\nsense than if they are \\startpipeline and \\endpipeline. I grant this is\nquite subjective, though.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Fri, 9 Apr 2021 19:02:38 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add libpq pipeline mode support to pgbench" } ]
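As committed, the two metacommands simply bracket the queries to be sent as one pipeline. A minimal sketch of a pgbench script using them follows; the queries against pgbench_accounts are just the stock pgbench example, and as I read the committed behavior, pipeline mode needs the extended or prepared query protocol (e.g. `pgbench -M extended -f pipeline.sql`) while `\gset`/`\aset` are rejected inside a pipeline, per the constraint checks Fabien mentions above:

```
-- pipeline.sql: group several commands into one libpq pipeline
\startpipeline
SELECT abalance FROM pgbench_accounts WHERE aid = 1;
SELECT abalance FROM pgbench_accounts WHERE aid = 2;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = 1;
\endpipeline
```

All three statements are queued and flushed together at `\endpipeline`, which is where pgbench collects the results — the "close the valve" moment in Daniel's analogy.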
[ { "msg_contents": "I noticed that the PG docs [1] for the catalog pg_subscription doesn't\nhave any mention of the substream column.\n\nAccidental omission by commit 4648243 [2] from last year?\n\n----\n[1] https://www.postgresql.org/docs/devel/catalog-pg-subscription.html\n[2] https://github.com/postgres/postgres/commit/464824323e57dc4b397e8b05854d779908b55304\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 16 Mar 2021 09:05:05 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "pg_subscription - substream column?" }, { "msg_contents": "On Tue, Mar 16, 2021 at 3:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> I noticed that the PG docs [1] for the catalog pg_subscription doesn't\n> have any mention of the substream column.\n>\n> Accidental omission by commit 4648243 [2] from last year?\n>\n\nRight. I'll fix this unless you or someone else is interested in\nproviding a patch for this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 16 Mar 2021 13:50:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_subscription - substream column?" }, { "msg_contents": "On Tue, Mar 16, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 3:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > I noticed that the PG docs [1] for the catalog pg_subscription doesn't\n> > have any mention of the substream column.\n> >\n> > Accidental omission by commit 4648243 [2] from last year?\n> >\n>\n> Right. I'll fix this unless you or someone else is interested in\n> providing a patch for this.\n\nSure, I will send a patch for it tomorrow.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Tue, 16 Mar 2021 22:57:12 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_subscription - substream column?" 
}, { "msg_contents": "On Tue, Mar 16, 2021 at 5:27 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Mar 16, 2021 at 7:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 16, 2021 at 3:35 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > I noticed that the PG docs [1] for the catalog pg_subscription doesn't\n> > > have any mention of the substream column.\n> > >\n> > > Accidental omission by commit 4648243 [2] from last year?\n> > >\n> >\n> > Right. I'll fix this unless you or someone else is interested in\n> > providing a patch for this.\n>\n> Sure, I will send a patch for it tomorrow.\n>\n\nAttached, please find the patch to update the description of substream\nin pg_subscription.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 16 Mar 2021 19:15:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_subscription - substream column?" }, { "msg_contents": "On Wed, Mar 17, 2021 at 12:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> Attached, please find the patch to update the description of substream\n> in pg_subscription.\n>\n\nI applied your patch and regenerated the PG docs to check the result.\n\nLGTM.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Wed, 17 Mar 2021 10:26:28 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_subscription - substream column?" 
}, { "msg_contents": "On Wed, Mar 17, 2021 at 4:56 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Mar 17, 2021 at 12:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Attached, please find the patch to update the description of substream\n> > in pg_subscription.\n> >\n>\n> I applied your patch and regenerated the PG docs to check the result.\n>\n> LGTM.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 17 Mar 2021 08:38:43 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_subscription - substream column?" } ]
[ { "msg_contents": "Hi,\n\nThis thread came from another thread about collecting the WAL \nstats([1]).\n\nIs it better to make the stats collector shutdown without writing the \nstats files\nif the immediate shutdown is requested?\n\nThere was a related discussion([2]) although it's stopped on 12/1/2016.\nIIUC, the thread's consensus are\n\n(1) It's useful to make the stats collector shutdown quickly without \nwriting the stats files\n if the immediate shutdown is requested in some cases because there \nis a possibility\n that it takes a long time until the failover happens. The possible \nreasons are that\n disk write speed is slow, the stats files are big, and so on. And \nthere is no negative impact\n on the users because all stats files are removed in a crash recovery \nphase now.\n\n(2) As another aspect, it needs to change the behavior removing all \nstats files because autovacuum\n uses the stats. There are some ideas, for example writing the stats \nfiles every X minutes\n (using wal or another mechanism) and even if a crash occurs, the \nstartup process can restore\n the stats with slightly low freshness. 
But, it needs to find a way \nhow to handle the stats files\n when deleting on PITR rewind or stats collector crash happens.\n\nI agree that the above consensus and I think we can make separate two \npatches.\nThe first one is for (1) and the second one is for (2).\n\nWhy don't you apply the patch for (1) first?\nI attached the patch based on tsunakawa-san's patch([2]).\n(v1-0001-pgstat_avoid_writing_on_sigquit.patch)\n\nAt this time, there are no cons points for the users because\nthe stats files are removed in a crash recovery phase as pointed in the \ndiscussion.\n\n[1] \nhttps://www.postgresql.org/message-id/c616cf24bf4ecd818f7cab0900a40842%40oss.nttdata.com\n[2] \nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F5EF25A%40G01JPEXMBYT05\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 16 Mar 2021 08:50:25 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "make the stats collector shutdown without writing the statsfiles if\n the immediate shutdown is requested." }, { "msg_contents": "Dear Ikeda-san\n\nI think the idea is good.\n\nI read the patch and other sources, and I found process_startup_packet_die also execute _exit(1).\nI think they can be combined into one function and moved to interrupt.c, but \nsome important comments might be removed. How do you think?\n\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n", "msg_date": "Tue, 16 Mar 2021 04:44:22 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "On 2021-03-16 13:44, kuroda.hayato@fujitsu.com wrote:\n> Dear Ikeda-san\n> \n> I think the idea is good.\n> \n> I read the patch and other sources, and I found\n> process_startup_packet_die also execute _exit(1).\n> I think they can be combined into one function and moved to \n> interrupt.c, but\n> some important comments might be removed. How do you think?\n\nHi, Kuroda-san.\nThanks for your comments.\n\nI agreed that your idea.\nI combined them into one function and moved the comments to\nthe calling function side.\n(v2-0001-pgstat_avoid_writing_on_sigquit.patch)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Wed, 17 Mar 2021 08:09:27 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "RE: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Hi,\n\n+ * Simple signal handler for processes HAVE NOT yet touched or DO NOT\n\nI think there should be a 'which' between processes and HAVE. It seems the\nwords in Capital letters should be in lower case.\n\n+ * Simple signal handler for processes have touched shared memory to\n+ * exit quickly.\n\nAdd 'which' between processes and have.\n\n unlink(fname);\n+\n+ elog(DEBUG2, \"removing stats file \\\"%s\\\"\", fname);\n\nIt seems the return value from unlink should be checked and reflected in\nthe debug message.\n\nThanks\n\nOn Tue, Mar 16, 2021 at 4:09 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com>\nwrote:\n\n> On 2021-03-16 13:44, kuroda.hayato@fujitsu.com wrote:\n> > Dear Ikeda-san\n> >\n> > I think the idea is good.\n> >\n> > I read the patch and other sources, and I found\n> > process_startup_packet_die also execute _exit(1).\n> > I think they can be combined into one function and moved to\n> > interrupt.c, but\n> > some important comments might be removed. 
How do you think?\n>\n> Hi, Kuroda-san.\n> Thanks for your comments.\n>\n> I agreed that your idea.\n> I combined them into one function and moved the comments to\n> the calling function side.\n> (v2-0001-pgstat_avoid_writing_on_sigquit.patch)\n>\n> Regards,\n> --\n> Masahiro Ikeda\n> NTT DATA CORPORATION\n\nHi,+ * Simple signal handler for processes HAVE NOT yet touched or DO NOTI think there should be a 'which' between processes and HAVE. It seems the words in Capital letters should be in lower case.+ * Simple signal handler for processes have touched shared memory to+ * exit quickly.Add 'which' between processes and have.        unlink(fname);++       elog(DEBUG2, \"removing stats file \\\"%s\\\"\", fname);It seems the return value from unlink should be checked and reflected in the debug message.ThanksOn Tue, Mar 16, 2021 at 4:09 PM Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote:On 2021-03-16 13:44, kuroda.hayato@fujitsu.com wrote:\n> Dear Ikeda-san\n> \n> I think the idea is good.\n> \n> I read the patch and other sources, and I found\n> process_startup_packet_die also execute _exit(1).\n> I think they can be combined into one function and moved to \n> interrupt.c, but\n> some important comments might be removed. How do you think?\n\nHi, Kuroda-san.\nThanks for your comments.\n\nI agreed that your idea.\nI combined them into one function and moved the comments to\nthe calling function side.\n(v2-0001-pgstat_avoid_writing_on_sigquit.patch)\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 16 Mar 2021 16:25:48 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "On 2021-03-17 08:25, Zhihong Yu wrote:\n> Hi,\n\nThanks for your comments!\n\n> + * Simple signal handler for processes HAVE NOT yet touched or DO NOT\n> \n> I think there should be a 'which' between processes and HAVE. It seems\n> the words in Capital letters should be in lower case.\n> \n> + * Simple signal handler for processes have touched shared memory to\n> + * exit quickly.\n> \n> Add 'which' between processes and have.\n\nOK, I fixed them.\n\n> unlink(fname);\n> +\n> + elog(DEBUG2, \"removing stats file \\\"%s\\\"\", fname);\n> \n> It seems the return value from unlink should be checked and reflected\n> in the debug message.\n\nThere is related code that logs and calls unlink() in slru.c and \npgstat.c.\n\n```\npgstat_write_db_statsfile(PgStat_StatDBEntry *dbentry, bool permanent)\n{\n // some code\n\t\telog(DEBUG2, \"removing temporary stats file \\\"%s\\\"\", statfile);\n\t\tunlink(statfile)\n}\n```\n\nI fixed it in the same way instead of checking the return value because,\nIIUC, it's unimportant if an error has occurred. The log just records \nthat we try\nto remove it. Thoughts?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Wed, 17 Mar 2021 09:27:24 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Dear Ikeda-san,\n\nI confirmed the new patch and no problem was found. Thanks.\n(I'm not a native English speaker, so I cannot check your comments correctly, sorry)\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n", "msg_date": "Thu, 18 Mar 2021 02:59:25 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "\n\nOn 2021/03/18 11:59, kuroda.hayato@fujitsu.com wrote:\n> Dear Ikeda-san,\n> \n> I confirmed the new patch and no problem was found. Thanks.\n> (I'm not a native English speaker, so I cannot check your comments correctly, sorry)\n\nOne user-visible side-effect of this change is that, with the patch, the stats are\ncleared when only the stats collector is killed (with SIGQUIT) accidentally\nand restarted by the postmaster later. On the other hand, currently the stats are\nwritten in that case and the subsequently-restarted stats collector can use\nthat stats file. I'm not sure if we need to keep supporting this behavior, though.\n\nWhen only the stats collector exits by SIGQUIT, with the patch\nFreeWaitEventSet() is also skipped. Is this ok?\n\n-\t * Loop to process messages until we get SIGQUIT or detect ungraceful\n-\t * death of our parent postmaster.\n+\t * Loop to process messages until we get SIGTERM or SIGQUIT of our parent\n+\t * postmaster.\n\n\"SIGTERM or SIGQUIT of our parent postmaster\" should be\n\"SIGTERM, SIGQUIT, or detect ungraceful death of our parent postmaster\"?\n\n+SignalHandlerForUnsafeExit(SIGNAL_ARGS)\n\nI don't think SignalHandlerForUnsafeExit is a good name, because that's not an\n\"unsafe\" exit. No? Even after this signal handler is triggered, the server is\nstill running normally and a process that exits will be restarted later. What\nabout SignalHandlerForNonCrashExit or SignalHandlerForNonFatalExit?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 18 Mar 2021 13:37:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "On 2021-03-18 13:37, Fujii Masao wrote:\n> On 2021/03/18 11:59, kuroda.hayato@fujitsu.com wrote:\n>> Dear Ikeda-san,\n>> \n>> I confirmed new patch and no problem was found. Thanks.\n>> (I'm not a native English speaker, so I cannot check your comments \n>> correctly, sorry)\n> \n> One user-visible side-effect by this change is; with the patch, the \n> stats is\n> cleared when only the stats collector is killed (with SIGQUIT) \n> accidentally\n> and restarted by postmaster later.\n\nThanks for your comments.\n\nAs you said, the temporary stats files don't removed if the stats \ncollector is killed with SIGQUIT.\nSo, if the user change the GUC parameter \"stats_temp_directory\" after \nimmediate shutdown,\ntemporary stats file can't be removed forever.\n\nBut, I think this case is rarely happened and unimportant. Actually, \npgstat_write_statsfiles()\ndidn't check error of unlink() and the same problem is occurred if the \nserver is crashed now.\nThe documentation said following. I think it's enough.\n\n```\n For better performance, <varname>stats_temp_directory</varname> can \nbe\n pointed at a RAM-based file system, decreasing physical I/O \nrequirements.\n When the server shuts down cleanly, a permanent copy of the \nstatistics\n data is stored in the <filename>pg_stat</filename> subdirectory, so \nthat\n statistics can be retained across server restarts. When recovery is\n performed at server start (e.g., after immediate shutdown, server \ncrash,\n and point-in-time recovery), all statistics counters are reset.\n```\n\n\n> On the other than, currently the stats is\n> written in that case and subsequently-restarted stats collector can use\n> that stats file. 
I'm not sure if we need to keep supporting this\n> behavior, though.\n\nI don't have any strong opinion this behaivor is useless too.\n\nSince the reinitialized phase is not executed when only the stats \ncollector is crashed\n(since it didn't touch the shared memory), if the permanent stats file \nis exists, the\nstats collector can use it. But, IIUC, the case is rare.\n\nThe case is happened by operation mistake which a operator sends the \nSIGQUIT signal to\nthe stats collector. Please let me know if there are more important \ncase.\n\nBut, if SIGKILL is sent by operator, the stats can't be rescure now \nbecause the permanent stats\nfiles can't be written before exiting. Since the case which can rescure \nthe stats is rare,\nI think it's ok to initialize the stats even if SIGQUIT is sent.\n\nIf to support this feature, we need to implement the following first.\n\n> (2) As another aspect, it needs to change the behavior removing all \n> stats files because autovacuum\n> uses the stats. There are some ideas, for example writing the stats \n> files every X minutes\n> (using wal or another mechanism) and even if a crash occurs, the \n> startup process can restore\n> the stats with slightly low freshness. But, it needs to find a way \n> how to handle the stats files\n> when deleting on PITR rewind or stats collector crash happens.\n\n\n\n> When only the stats collector exits by SIGQUIT, with the patch\n> FreeWaitEventSet() is also skipped. Is this ok?\n\nThanks, I fixed it.\n\n\n> -\t * Loop to process messages until we get SIGQUIT or detect ungraceful\n> -\t * death of our parent postmaster.\n> +\t * Loop to process messages until we get SIGTERM or SIGQUIT of our \n> parent\n> +\t * postmaster.\n> \n> \"SIGTERM or SIGQUIT of our parent postmaster\" should be\n> \"SIGTERM, SIGQUIT, or detect ungraceful death of our parent \n> postmaster\"?\n\nYes, I fixed it.\n\n\n> +SignalHandlerForUnsafeExit(SIGNAL_ARGS)\n> \n> I don't think SignalHandlerForUnsafeExit is good name. 
Because that's \n> not\n> \"unsafe\" exit. No? Even after this signal handler is triggered, the \n> server is\n> still running normally and a process that exits will be restarted \n> later. What\n> about SignalHandlerForNonCrashExit or SignalHandlerForNonFatalExit?\n\nOK, I fixed.\nI changed to the SignalPgstatHandlerForNonCrashExit() to add \nFreeWaitEventSet()\nin the handler for the stats collector.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Thu, 18 Mar 2021 19:16:18 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/18 19:16, Masahiro Ikeda wrote:\n> As you said, the temporary stats files don't removed if the stats collector is killed with SIGQUIT.\n> So, if the user change the GUC parameter \"stats_temp_directory\" after immediate shutdown,\n> temporary stats file can't be removed forever.\n> \n> But, I think this case is rarely happened and unimportant. Actually, pgstat_write_statsfiles()\n> didn't check error of unlink() and the same problem is occurred if the server is crashed now.\n> The documentation said following. I think it's enough.\n\nThanks for considering this! Understood.\n\n\n> I don't have any strong opinion this behaivor is useless too.\n> \n> Since the reinitialized phase is not executed when only the stats collector is crashed\n> (since it didn't touch the shared memory), if the permanent stats file is exists, the\n> stats collector can use it. But, IIUC, the case is rare.\n> \n> The case is happened by operation mistake which a operator sends the SIGQUIT signal to\n> the stats collector. Please let me know if there are more important case.\n> \n> But, if SIGKILL is sent by operator, the stats can't be rescure now because the permanent stats\n> files can't be written before exiting. 
Since the case which can rescure the stats is rare,\n> I think it's ok to initialize the stats even if SIGQUIT is sent.\n\nSounds reasonable.\n\n\n>> When only the stats collector exits by SIGQUIT, with the patch\n>> FreeWaitEventSet() is also skipped. Is this ok?\n> \n> Thanks, I fixed it.\n\nI'm still not sure if FreeWaitEventSet() is actually necessary in that\nexit case. Could you tell me why you thought FreeWaitEventSet() is\nnecessary in the case?\n\nSince it's not good to do many things in a signal handler, even when\nFreeWaitEventSet() is really necessary, it should be called at subsequent\nstartup of the stats collector instead of calling it in the handler\nat the end? BTW, I found bgworker calls FreeWaitEventSet() via\nShutdownLatchSupport() at its startup. But I'm also not sure if this\nis really necessary at the startup...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Mar 2021 13:33:58 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": ">>> When only the stats collector exits by SIGQUIT, with the patch\n>>> FreeWaitEventSet() is also skipped. Is this ok?\n>> \n>> Thanks, I fixed it.\n> \n> I'm still not sure if FreeWaitEventSet() is actually necessary in that\n> exit case. 
Could you tell me why you thought FreeWaitEventSet() is\n> necessary in the case?\n\nSorry, there is no evidence we should call it.\nI thought that to call FreeWaitEventSet() is a manner after using\nCreateWaitEventSet() and the performance impact to call it seems to be \ntoo small.\n\n(Please let me know if my understanding is wrong.)\nThe reason why I said this is a manner because I thought it's no problem\neven without calling FreeWaitEventSet() before the process exits \nregardless\nof fast or immediate shutdown. Since the \"WaitEventSet *wes\" is a \nprocess local resource,\nthis will be released by OS even if FreeWaitEventSet() is not called.\n\nI'm now changing my mind to skip it is better because the code can be \nsimpler and,\nit's unimportant for the above reason especially when the immediate \nshutdown is\nrequested which is a bad manner itself.\n\nBTW, the SysLoggerMain() create the WaitEventSet, but it didn't call \nFreeWaitEventSet().\nIs it better to call FreeWaitEventSet() before exiting too?\n\n\n> Since it's not good to do many things in a signal handler, even when\n> FreeWaitEventSet() is really necessary, it should be called at \n> subsequent\n> startup of the stats collector instead of calling it in the handler\n> at the end? BTW, I found bgworker calls FreeWaitEventSet() via\n> ShutdownLatchSupport() at its startup. But I'm also not sure if this\n> is really necessary at the startup...\n\nOK, I understood that I need to change the signal handler's \nimplementation\nif FreeWaitEventSet() is necessary.\n\nIn my understanding from the following commit message, the ordinary \nbgworker\nwhich not access the shared memory doesn't use the latch which \npostmaster\ninstalled. So, ShutdownLatchSupport() is called at the startup. Though?\n\n2021/3/1 commit: 83709a0d5a46559db016c50ded1a95fd3b0d3be6\n```\nThe signal handler is now installed in all postmaster children by\nInitializeLatchSupport(). 
Those wishing to disconnect from it should\ncall ShutdownLatchSupport().\n```\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 20 Mar 2021 13:40:45 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/20 13:40, Masahiro Ikeda wrote:\n> Sorry, there is no evidence we should call it.\n> I thought that to call FreeWaitEventSet() is a manner after using\n> CreateWaitEventSet() and the performance impact to call it seems to be too small.\n> \n> (Please let me know if my understanding is wrong.)\n> The reason why I said this is a manner because I thought it's no problem\n> even without calling FreeWaitEventSet() before the process exits regardless\n> of fast or immediate shutdown. Since the \"WaitEventSet *wes\" is a process local resource,\n> this will be released by OS even if FreeWaitEventSet() is not called.\n> \n> I'm now changing my mind to skip it is better because the code can be simpler and,\n> it's unimportant for the above reason especially when the immediate shutdown is\n> requested which is a bad manner itself.\n\nThanks for explaining this! So you're thinking that v3 patch should be chosen?\nProbably some review comments I posted upthread need to be applied to\nv3 patch, e.g., rename of SignalHandlerForUnsafeExit() function.\n\n\n> BTW, the SysLoggerMain() create the WaitEventSet, but it didn't call FreeWaitEventSet().\n> Is it better to call FreeWaitEventSet() before exiting too?\n\nYes if which could cause actual issue. 
Otherwise I don't have strong\nreason to do that for now..\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 22 Mar 2021 23:59:02 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "On 2021/03/22 23:59, Fujii Masao wrote:\n> \n> \n> On 2021/03/20 13:40, Masahiro Ikeda wrote:\n>> Sorry, there is no evidence we should call it.\n>> I thought that to call FreeWaitEventSet() is a manner after using\n>> CreateWaitEventSet() and the performance impact to call it seems to be too\n>> small.\n>>\n>> (Please let me know if my understanding is wrong.)\n>> The reason why I said this is a manner because I thought it's no problem\n>> even without calling FreeWaitEventSet() before the process exits regardless\n>> of fast or immediate shutdown. Since the \"WaitEventSet *wes\" is a process\n>> local resource,\n>> this will be released by OS even if FreeWaitEventSet() is not called.\n>>\n>> I'm now changing my mind to skip it is better because the code can be\n>> simpler and,\n>> it's unimportant for the above reason especially when the immediate shutdown is\n>> requested which is a bad manner itself.\n> \n> Thanks for explaining this! So you're thinking that v3 patch should be chosen?\n> Probably some review comments I posted upthread need to be applied to\n> v3 patch, e.g., rename of SignalHandlerForUnsafeExit() function.\n\nYes. 
I attached the v5 patch based on v3 patch.\nI renamed SignalHandlerForUnsafeExit() and fixed the following comment.\n\n> \"SIGTERM or SIGQUIT of our parent postmaster\" should be\n> \"SIGTERM, SIGQUIT, or detect ungraceful death of our parent\n> postmaster\"?\n\n\n>> BTW, the SysLoggerMain() create the WaitEventSet, but it didn't call\n>> FreeWaitEventSet().\n>> Is it better to call FreeWaitEventSet() before exiting too?\n> \n> Yes if which could cause actual issue. Otherwise I don't have strong\n> reason to do that for now..\n\nOK. AFAIK, this doesn't lead critical problems like memory leak.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 23 Mar 2021 09:05:33 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/23 9:05, Masahiro Ikeda wrote:\n> Yes. I attached the v5 patch based on v3 patch.\n> I renamed SignalHandlerForUnsafeExit() and fixed the following comment.\n\nThanks for updating the patch!\n\nWhen the startup process exits because of recovery_target_action=shutdown,\nreaper() calls TerminateChildren(SIGTERM). This function sends SIGTERM to\nthe stats collector. Currently the stats collector ignores SIGTERM, but with\nthe patch it exits normally. This change of behavior might be problematic.\n\nThat is, TerminateChildren(SIGTERM) sends SIGTERM to various processes.\nBut currently the stats collector and checkpointer don't exit even when\nSIGTERM arrives because they ignore SIGTERM. After several processes\nother than the stats collector and checkpointer exit by SIGTERM,\nPostmasterStateMachine() and reaper() make checkpointer exit and then\nthe stats collector exit. The shutdown terminates the processes in this order.\n\nOn the other hand, with the patch, the stats collector exits by SIGTERM\nbefore checkpointer exits. 
This is not normal order of processes to exit in\nshutdown.\n\nTo address this issue, one idea is to use SIGUSR2 for normal exit of the stats\ncollector, instead of SIGTERM. If we do this, TerminateChildren(SIGTERM)\ncannot terminate the stats collector. Thought?\n\nIf we adopt this idea, the detail comment about why SIGUSR2 is used for that\nneeds to be added.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:40:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "On 2021/03/23 11:40, Fujii Masao wrote:\n> \n> \n> On 2021/03/23 9:05, Masahiro Ikeda wrote:\n>> Yes. I attached the v5 patch based on v3 patch.\n>> I renamed SignalHandlerForUnsafeExit() and fixed the following comment.\n> \n> Thanks for updating the patch!\n> \n> When the startup process exits because of recovery_target_action=shutdown,\n> reaper() calls TerminateChildren(SIGTERM). This function sends SIGTERM to\n> the stats collector. Currently the stats collector ignores SIGTERM, but with\n> the patch it exits normally. This change of behavior might be problematic.\n> \n> That is, TerminateChildren(SIGTERM) sends SIGTERM to various processes.\n> But currently the stats collector and checkpointer don't exit even when\n> SIGTERM arrives because they ignore SIGTERM. After several processes\n> other than the stats collector and checkpointer exit by SIGTERM,\n> PostmasterStateMachine() and reaper() make checkpointer exit and then\n> the stats collector exit. The shutdown terminates the processes in this order.\n> \n> On the other hand, with the patch, the stats collector exits by SIGTERM\n> before checkpointer exits. 
This is not normal order of processes to exit in\n> shutdown.\n> \n> To address this issue, one idea is to use SIGUSR2 for normal exit of the stats\n> collector, instead of SIGTERM. If we do this, TerminateChildren(SIGTERM)\n> cannot terminate the stats collector. Thought?\n> \n> If we adopt this idea, the detail comment about why SIGUSR2 is used for that\n> needs to be added.\n\nThanks for your comments. I agreed your concern and suggestion.\nAdditionally, we need to consider system shutdown cycle too as\nCheckpointerMain()'s comment said.\n\n```\n\t * Note: we deliberately ignore SIGTERM, because during a standard Unix\n\t * system shutdown cycle, init will SIGTERM all processes at once. We\n\t * want to wait for the backends to exit, whereupon the postmaster will\n\t * tell us it's okay to shut down (via SIGUSR2)\n```\n\nI changed the signal from SIGTERM to SIGUSR2 and add the comments why SIGUSR2\nis used.\n(v6-0001-pgstat_avoid_writing_on_sigquit.patch)\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 23 Mar 2021 14:54:34 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/23 14:54, Masahiro Ikeda wrote:\n> Thanks for your comments. I agreed your concern and suggestion.\n> Additionally, we need to consider system shutdown cycle too as\n> CheckpointerMain()'s comment said.\n> \n> ```\n> \t * Note: we deliberately ignore SIGTERM, because during a standard Unix\n> \t * system shutdown cycle, init will SIGTERM all processes at once. 
We\n> \t * want to wait for the backends to exit, whereupon the postmaster will\n> \t * tell us it's okay to shut down (via SIGUSR2)\n> ```\n\nGood catch!\n\n\n> I changed the signal from SIGTERM to SIGUSR2 and add the comments why SIGUSR2\n> is used.\n> (v6-0001-pgstat_avoid_writing_on_sigquit.patch)\n\nThanks for updating the patch!\n\n+ * The statistics collector is started by the postmaster as soon as the\n+ * startup subprocess finishes.\n\nThis comment needs to be updated? Because the stats collector can\nbe invoked when the startup process sends PMSIGNAL_BEGIN_HOT_STANDBY\nsignal.\n\nThis fact makes me wonder that if we collect the statistics about WAL writing\nfrom walreceiver as we discussed in other thread, the stats collector should\nbe invoked at more earlier stage. IIUC walreceiver can be invoked before\nPMSIGNAL_BEGIN_HOT_STANDBY is sent.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 23 Mar 2021 15:50:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Hi,\n\nOn 2021-03-23 15:50:46 +0900, Fujii Masao wrote:\n> This fact makes me wonder that if we collect the statistics about WAL writing\n> from walreceiver as we discussed in other thread, the stats collector should\n> be invoked at more earlier stage. 
IIUC walreceiver can be invoked before\n> PMSIGNAL_BEGIN_HOT_STANDBY is sent.\n\nFWIW, in the shared memory stats patch the stats subsystem is\ninitialized early on by the startup process.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:51:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "On 2021/03/23 15:50, Fujii Masao wrote:\n> + * The statistics collector is started by the postmaster as soon as the\n> + * startup subprocess finishes.\n> \n> This comment needs to be updated? Because the stats collector can\n> be invoked when the startup process sends PMSIGNAL_BEGIN_HOT_STANDBY\n> signal.\n\nI updated this comment in the patch.\nAttached is the updated version of the patch.\n\nBarring any objection, I'm thinking to commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 24 Mar 2021 18:36:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/24 3:51, Andres Freund wrote:\n> Hi,\n> \n> On 2021-03-23 15:50:46 +0900, Fujii Masao wrote:\n>> This fact makes me wonder that if we collect the statistics about WAL writing\n>> from walreceiver as we discussed in other thread, the stats collector should\n>> be invoked at more earlier stage. 
IIUC walreceiver can be invoked before\n>> PMSIGNAL_BEGIN_HOT_STANDBY is sent.\n> \n> FWIW, in the shared memory stats patch the stats subsystem is\n> initialized early on by the startup process.\n\nThis is good news!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Mar 2021 18:36:47 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/24 18:36, Fujii Masao wrote:\n> \n> \n> On 2021/03/24 3:51, Andres Freund wrote:\n>> Hi,\n>>\n>> On 2021-03-23 15:50:46 +0900, Fujii Masao wrote:\n>>> This fact makes me wonder that if we collect the statistics about WAL writing\n>>> from walreceiver as we discussed in other thread, the stats collector should\n>>> be invoked at more earlier stage. IIUC walreceiver can be invoked before\n>>> PMSIGNAL_BEGIN_HOT_STANDBY is sent.\n>>\n>> FWIW, in the shared memory stats patch the stats subsystem is\n>> initialized early on by the startup process.\n> \n> This is good news!\n\nFujii-san, Andres-san,\nThanks for your comments!\n\nI didn't think about the start order. From that point of view, I noticed that\nthe current source code has two other concerns.\n\n\n1. This problem is not only for the wal receiver.\n\nThe problem that the wal receiver starts before the stats collector\nis launched during archive recovery is not only for the wal receiver but\nalso for the checkpointer and the bgwriter. 
Before starting redo, the startup\nprocess sends the postmaster the \"PMSIGNAL_RECOVERY_STARTED\" signal to launch the\ncheckpointer and the bgwriter so that they are able to create restartpoints.\n\nAlthough the socket for communication between the stats collector and the\nother processes is made at an earlier stage via pgstat_init(), I agree that making\nthe stats collector start at an earlier stage is defensive. BTW, in my\nenvironments (linux, net.core.rmem_default = 212992), the socket can buffer\nalmost 300 WAL stats messages. This means that, as you said, if the redo phase\nis too long, it can lose the messages easily.\n\n\n2. To make the stats clear in the redo phase.\n\nThe statistics can be reset after the wal receiver, the checkpointer and\nthe wal writer are started in the redo phase. So, it's not enough that the stats\ncollector is invoked at an earlier stage. We need to fix it.\n\n\n\n(I hope I am not missing something.)\nThanks to Andres-san's work([1]), the above problems will be handled in the\nshared memory stats patch. The first problem will be resolved since the stats are\ncollected in shared memory, so the stats collector process itself is\nunnecessary. The second problem will be resolved by removing the reset code because the\ntemporary stats file won't be generated, and if the permanent stats file is\ncorrupted, we can just recreate it.\n\n[1]\nhttps://github.com/anarazel/postgres/compare/master...shmstat-before-split-2021-03-22\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 09:31:46 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "On 2021/03/24 18:36, Fujii Masao wrote:\n> On 2021/03/23 15:50, Fujii Masao wrote:\n>> + * The statistics collector is started by the postmaster as soon as the\n>> + * startup subprocess finishes.\n>>\n>> This comment needs to be updated? 
Because the stats collector can\n>> be invoked when the startup process sends PMSIGNAL_BEGIN_HOT_STANDBY\n>> signal.\n> \n> I updated this comment in the patch.\n> Attached is the updated version of the patch.\n> \n> Barring any objection, I'm thinking to commit this patch.\n\nThanks for your comments and updating the patch!\nI checked your patch and I don't have any comments.\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 09:32:29 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Hi,\n\nOn 2021-03-24 18:36:14 +0900, Fujii Masao wrote:\n> Barring any objection, I'm thinking to commit this patch.\n\nTo which branches? I am *strongly* opposed to backpatching this.\n\n\n> /*\n> - * Simple signal handler for exiting quickly as if due to a crash.\n> + * Simple signal handler for processes which have not yet touched or do not\n> + * touch shared memory to exit quickly.\n> *\n> - * Normally, this would be used for handling SIGQUIT.\n> + * Note that if processes already touched shared memory, use\n> + * SignalHandlerForCrashExit() instead and force the postmaster into\n> + * a system reset cycle because shared memory may be corrupted.\n> + */\n> +void\n> +SignalHandlerForNonCrashExit(SIGNAL_ARGS)\n> +{\n> +\t/*\n> +\t * Since we don't touch shared memory, we can just pull the plug and exit\n> +\t * without running any atexit handlers.\n> +\t */\n> +\t_exit(1);\n> +}\n\nThis strikes me as quite a misleading function name. Outside of very\nnarrow circumstances a normal exit shouldn't use _exit(1). Neither 1 nor\n_exit() (as opposed to exit()) seems appropriate. This seems like a bad\nidea.\n\nAlso, won't this lead to postmaster now starting to complain about\npgstat having crashed in an immediate shutdown? 
Right now only exit\nstatus 0 is expected:\n\n\t\tif (pid == PgStatPID)\n\t\t{\n\t\t\tPgStatPID = 0;\n\t\t\tif (!EXIT_STATUS_0(exitstatus))\n\t\t\t\tLogChildExit(LOG, _(\"statistics collector process\"),\n\t\t\t\t\t\t\t pid, exitstatus);\n\t\t\tif (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n\t\t\t\tPgStatPID = pgstat_start();\n\t\t\tcontinue;\n\t\t}\n\n\n\n> + * The statistics collector is started by the postmaster as soon as the\n> + * startup subprocess finishes, or as soon as the postmaster is ready\n> + * to accept read only connections during archive recovery. It remains\n> + * alive until the postmaster commands it to terminate. Normal\n> + * termination is by SIGUSR2 after the checkpointer must exit(0),\n> + * which instructs the statistics collector to save the final statistics\n> + * to reuse at next startup and then exit(0).\n\nI can't parse \"...after the checkpointer must exit(0), which instructs...\"\n\n\n> + * Emergency termination is by SIGQUIT; like any backend, the statistics\n> + * collector will exit quickly without saving the final statistics. It's\n> + * ok because the startup process will remove all statistics at next\n> + * startup after emergency termination.\n\nNormally there won't be any stats to remove, no? The permanent stats\nfile has been removed when the stats collector started up.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Mar 2021 17:58:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "\n\nOn 2021/03/25 9:31, Masahiro Ikeda wrote:\n> \n> \n> On 2021/03/24 18:36, Fujii Masao wrote:\n>>\n>>\n>> On 2021/03/24 3:51, Andres Freund wrote:\n>>> Hi,\n>>>\n>>> On 2021-03-23 15:50:46 +0900, Fujii Masao wrote:\n>>>> This fact makes me wonder that if we collect the statistics about WAL writing\n>>>> from walreceiver as we discussed in other thread, the stats collector should\n>>>> be invoked at more earlier stage. IIUC walreceiver can be invoked before\n>>>> PMSIGNAL_BEGIN_HOT_STANDBY is sent.\n>>>\n>>> FWIW, in the shared memory stats patch the stats subsystem is\n>>> initialized early on by the startup process.\n>>\n>> This is good news!\n> \n> Fujii-san, Andres-san,\n> Thanks for your comments!\n> \n> I didn't think about the start order. From the point of view, I noticed that\n> the current source code has two other concerns.\n> \n> \n> 1. This problem is not only for the wal receiver.\n> \n> The problem which the wal receiver starts before the stats collector\n> is launched during archive recovery is not only for the the wal receiver but\n> also the checkpointer and the bgwriter. Before starting redo, the startup\n> process sends the postmaster \"PMSIGNAL_RECOVERY_STARTED\" signal to launch the\n> checkpointer and the bgwriter to be able to perform creating restartpoint.\n> \n> Although the socket for communication between the stats collector and the\n> other processes is made in earlier stage via pgstat_init(), I agree to make\n> the stats collector starts earlier stage is defensive. BTW, in my\n> environments(linux, net.core.rmem_default = 212992), the socket can buffer\n> almost 300 WAL stats messages. This mean that, as you said, if the redo phase\n> is too long, it can lost the messages easily.\n> \n> \n> 2. To make the stats clear in redo phase.\n> \n> The statistics can be reset after the wal receiver, the checkpointer and\n> the wal writer are started in redo phase. 
So, it's not enough the stats\n> collector is invoked at more earlier stage. We need to fix it.\n> \n> \n> \n> (I hope I am not missing something.)\n> Thanks to Andres-san's work([1]), the above problems will be handle in the\n> shared memory stats patch. First problem will be resolved since the stats are\n> collected in shared memory, so the stats collector process is unnecessary\n> itself. Second problem will be resolved to remove the reset code because the\n> temporary stats file won't generated, and if the permanent stats file\n> corrupted, just recreate it.\n\nYes. So we should wait for the shared memory stats patch to be committed\nbefore working on walreceiver stats patch more?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 19:48:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/25 9:58, Andres Freund wrote:\n> Hi,\n> \n> On 2021-03-24 18:36:14 +0900, Fujii Masao wrote:\n>> Barring any objection, I'm thinking to commit this patch.\n> \n> To which branches?\n\nI was thinking to push the patch to the master branch\nbecause this is not a bug fix.\n\n> I am *strongly* opposed to backpatching this.\n\n+1\n\n> This strikes me as a quite a misleading function name.\n\nYeah, better name is always welcome :)\n\n> Outside of very\n> narrow circumstances a normal exit shouldn't use _exit(1). Neither 1 no\n> _exit() (as opposed to exit()) seem appropriate. This seems like a bad\n> idea.\n\nSo you're thinking that the stats collector should do proc_exit(0)\nor something even when it receives immediate shutdown request?\n\nOne idea to avoid using _exit(1) is to change the SIGQUIT handler\nso that it just sets the flag. 
Then if the stats collector detects that\nthe flag is set in the main loop, it gets out of the loop,\nskips writing the permanent stats file and then exits with exit(0).\nThat is, normal and immediate shutdown requests are treated\nalmost the same way in the stats collector. The only difference\nbetween them is whether it saves the stats to the file or not. Thought?\n\n> Also, won't this lead to postmaster now starting to complain about\n> pgstat having crashed in an immediate shutdown? Right now only exit\n> status 0 is expected:\n> \n> \t\tif (pid == PgStatPID)\n> \t\t{\n> \t\t\tPgStatPID = 0;\n> \t\t\tif (!EXIT_STATUS_0(exitstatus))\n> \t\t\t\tLogChildExit(LOG, _(\"statistics collector process\"),\n> \t\t\t\t\t\t\t pid, exitstatus);\n> \t\t\tif (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n> \t\t\t\tPgStatPID = pgstat_start();\n> \t\t\tcontinue;\n> \t\t}\n\nNo. In the immediate shutdown case, pmdie() sets the \"Shutdown\" variable to\nImmediateShutdown and HandleChildCrash() doesn't complain in that case\nbecause of the following.\n\n\t/*\n\t * We only log messages and send signals if this is the first process\n\t * crash and we're not doing an immediate shutdown; otherwise, we're only\n\t * here to update postmaster's idea of live processes. If we have already\n\t * signaled children, nonzero exit status is to be expected, so don't\n\t * clutter log.\n\t */\n\ttake_action = !FatalError && Shutdown != ImmediateShutdown;\n\n>> + * The statistics collector is started by the postmaster as soon as the\n>> + * startup subprocess finishes, or as soon as the postmaster is ready\n>> + * to accept read only connections during archive recovery. It remains\n>> + * alive until the postmaster commands it to terminate. 
Normal\n>> + * termination is by SIGUSR2 after the checkpointer must exit(0),\n>> + * which instructs the statistics collector to save the final statistics\n>> + * to reuse at next startup and then exit(0).\n> \n> I can't parse \"...after the checkpointer must exit(0), which instructs...\"\n\nWhat about changing that to the following?\n\n------------------------\nNormal termination is requested after the checkpointer exits. It's by SIGUSR2,\nwhich instructs the statistics collector to save the final statistics and\nthen exit(0).\n------------------------\n\n>> + * Emergency termination is by SIGQUIT; like any backend, the statistics\n>> + * collector will exit quickly without saving the final statistics. It's\n>> + * ok because the startup process will remove all statistics at next\n>> + * startup after emergency termination.\n> \n> Normally there won't be any stats to remove, no? The permanent stats\n> file has been removed when the stats collector started up.\n\nYes. In the normal case, when the stats collector starts up, it loads the stats\nfrom the file and removes the file. OTOH, when recovery is necessary,\nthe startup process instead calls pgstat_reset_all() and removes the stats files.\n\nAre you thinking that the above comments are confusing?\nIf so, what about the following?\n\n----------------------\nEmergency termination is by SIGQUIT; the statistics collector\nwill exit quickly without saving the final statistics. 
In this case\nthe statistics files don't need to be written because the next startup\nwill trigger a crash recovery and remove all statistics files forcibly\neven if they exist.\n----------------------\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 21:23:17 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\nOn 2021/03/25 21:23, Fujii Masao wrote:\n> On 2021/03/25 9:58, Andres Freund wrote:\n>> Outside of very\n>> narrow circumstances a normal exit shouldn't use _exit(1). Neither 1 no\n>> _exit() (as opposed to exit()) seem appropriate. This seems like a bad\n>> idea.\n> \n> So you're thinking that the stats collector should do proc_exit(0)\n> or something even when it receives immediate shutdown request?\n> \n> One idea to avoid using _exit(1) is to change the SIGQUIT handler\n> so that it just sets the flag. Then if the stats collector detects that\n> the flag is set in the main loop, it gets out of the loop,\n> skips writing the permanent stats file and then exits with exit(0).\n> That is, normal and immediate shutdown requests are treated\n> almost the same way in the stats collector. Only the difference of\n> them is whether it saves the stats to the file or not. Thought?\n\nAlthough I don't oppose the idea to change from _exit(1) to proc_exit(0), the\nreason why I used _exit(1) is that I thought the behavior to skip writing the\nstat counters is not normal exit. 
Actually, other background processes use\n_exit(2) instead of _exit(0) or proc_exit(0) in immediate shutdown although\nthe status code is different because they touch shared memory.\n\n\n>> Also, won't this lead to postmaster now starting to complain about\n>> pgstat having crashed in an immediate shutdown? Right now only exit\n>> status 0 is expected:\n>>\n>>         if (pid == PgStatPID)\n>>         {\n>>             PgStatPID = 0;\n>>             if (!EXIT_STATUS_0(exitstatus))\n>>                 LogChildExit(LOG, _(\"statistics collector process\"),\n>>                              pid, exitstatus);\n>>             if (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n>>                 PgStatPID = pgstat_start();\n>>             continue;\n>>         }\n>\n> No. In immediate shutdown case, pmdie() sets \"Shutdown\" variable to\n> ImmdiateShutdown and HandleChildCrash() doesn't complain that in that case\n> because of the following.\n>\n>     /*\n>      * We only log messages and send signals if this is the first process\n>      * crash and we're not doing an immediate shutdown; otherwise, we're only\n>      * here to update postmaster's idea of live processes.  If we have already\n>      * signaled children, nonzero exit status is to be expected, so don't\n>      * clutter log.\n>      */\n>     take_action = !FatalError && Shutdown != ImmediateShutdown;\n\nIIUC, in the immediate shutdown case (and other cases too?), HandleChildCrash() is\nnever invoked due to the exit of pgstat. My understanding of Andres-san's\ncomment is: is it ok to show a log message like the following?\n\n```\nLOG: statistics collector process (PID 64991) exited with exit code 1\n```\n\nSurely, other processes don't output a log message like this. 
So, I agree to suppress\nthe log message.\n\nFWIW, in immediate shutdown case, since pmdie() sets \"pmState\" variable to\n\"PM_WAIT_BACKENDS\", pgstat_start() won't be invoked.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 26 Mar 2021 09:25:47 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/25 19:48, Fujii Masao wrote:\n> \n> \n> On 2021/03/25 9:31, Masahiro Ikeda wrote:\n>>\n>>\n>> On 2021/03/24 18:36, Fujii Masao wrote:\n>>>\n>>>\n>>> On 2021/03/24 3:51, Andres Freund wrote:\n>>>> Hi,\n>>>>\n>>>> On 2021-03-23 15:50:46 +0900, Fujii Masao wrote:\n>>>>> This fact makes me wonder that if we collect the statistics about WAL\n>>>>> writing\n>>>>> from walreceiver as we discussed in other thread, the stats collector should\n>>>>> be invoked at more earlier stage. IIUC walreceiver can be invoked before\n>>>>> PMSIGNAL_BEGIN_HOT_STANDBY is sent.\n>>>>\n>>>> FWIW, in the shared memory stats patch the stats subsystem is\n>>>> initialized early on by the startup process.\n>>>\n>>> This is good news!\n>>\n>> Fujii-san, Andres-san,\n>> Thanks for your comments!\n>>\n>> I didn't think about the start order. From the point of view, I noticed that\n>> the current source code has two other concerns.\n>>\n>>\n>> 1. This problem is not only for the wal receiver.\n>>\n>> The problem which the wal receiver starts before the stats collector\n>> is launched during archive recovery is not only for the the wal receiver but\n>> also the checkpointer and the bgwriter. 
Before starting redo, the startup\n>> process sends the postmaster \"PMSIGNAL_RECOVERY_STARTED\" signal to launch the\n>> checkpointer and the bgwriter to be able to perform creating restartpoint.\n>>\n>> Although the socket for communication between the stats collector and the\n>> other processes is made in earlier stage via pgstat_init(), I agree to make\n>> the stats collector starts earlier stage is defensive. BTW, in my\n>> environments(linux, net.core.rmem_default = 212992), the socket can buffer\n>> almost 300 WAL stats messages. This mean that, as you said, if the redo phase\n>> is too long, it can lost the messages easily.\n>>\n>>\n>> 2. To make the stats clear in redo phase.\n>>\n>> The statistics can be reset after the wal receiver, the checkpointer and\n>> the wal writer are started in redo phase. So, it's not enough the stats\n>> collector is invoked at more earlier stage. We need to fix it.\n>>\n>>\n>>\n>> (I hope I am not missing something.)\n>> Thanks to Andres-san's work([1]), the above problems will be handle in the\n>> shared memory stats patch. First problem will be resolved since the stats are\n>> collected in shared memory, so the stats collector process is unnecessary\n>> itself. Second problem will be resolved to remove the reset code because the\n>> temporary stats file won't generated, and if the permanent stats file\n>> corrupted, just recreate it.\n> \n> Yes. So we should wait for the shared memory stats patch to be committed\n> before working on walreceiver stats patch more?\n\nYes, I agree.\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 26 Mar 2021 09:27:19 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "\n\nOn 2021/03/26 9:25, Masahiro Ikeda wrote:\n> \n> On 2021/03/25 21:23, Fujii Masao wrote:\n>> On 2021/03/25 9:58, Andres Freund wrote:\n>>> Outside of very\n>>> narrow circumstances a normal exit shouldn't use _exit(1). Neither 1 no\n>>> _exit() (as opposed to exit()) seem appropriate. This seems like a bad\n>>> idea.\n>>\n>> So you're thinking that the stats collector should do proc_exit(0)\n>> or something even when it receives immediate shutdown request?\n>>\n>> One idea to avoid using _exit(1) is to change the SIGQUIT handler\n>> so that it just sets the flag. Then if the stats collector detects that\n>> the flag is set in the main loop, it gets out of the loop,\n>> skips writing the permanent stats file and then exits with exit(0).\n>> That is, normal and immediate shutdown requests are treated\n>> almost the same way in the stats collector. Only the difference of\n>> them is whether it saves the stats to the file or not. Thought?\n> \n> Although I don't oppose the idea to change from _exit(1) to proc_exit(0), the\n> reason why I used _exit(1) is that I thought the behavior to skip writing the\n> stat counters is not normal exit. Actually, other background processes use\n> _exit(2) instead of _exit(0) or proc_exit(0) in immediate shutdown although\n> the status code is different because they touch shared memory.\n\nTrue.\n\n\n> \n> \n>>> Also, won't this lead to postmaster now starting to complain about\n>>> pgstat having crashed in an immediate shutdown? Right now only exit\n>>> status 0 is expected:\n>>>\n>>>         if (pid == PgStatPID)\n>>>         {\n>>>             PgStatPID = 0;\n>>>             if (!EXIT_STATUS_0(exitstatus))\n>>>                 LogChildExit(LOG, _(\"statistics collector process\"),\n>>>                              pid, exitstatus);\n>>>             if (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n>>>                 PgStatPID = pgstat_start();\n>>>             continue;\n>>>         }\n>>\n>> No. 
In immediate shutdown case, pmdie() sets \"Shutdown\" variable to\n>> ImmdiateShutdown and HandleChildCrash() doesn't complain that in that case\n>> because of the following.\n>>\n>>     /*\n>>      * We only log messages and send signals if this is the first process\n>>      * crash and we're not doing an immediate shutdown; otherwise, we're only\n>>      * here to update postmaster's idea of live processes.  If we have already\n>>      * signaled children, nonzero exit status is to be expected, so don't\n>>      * clutter log.\n>>      */\n>>     take_action = !FatalError && Shutdown != ImmediateShutdown;\n> \n> IIUC, in immediate shutdown case (and other cases too?), HandleChildCrash() is\n> never invoked due to the exit of pgstat. My understanding of Andres-san's\n> comment is that is it ok to show like the following log message?\n> \n> ```\n> LOG: statistics collector process (PID 64991) exited with exit code 1\n> ```\n> \n> Surely, other processes don't output the log like it. So, I agree to suppress\n> the log message.\n\nYes. I was mistakenly thinking that HandleChildCrash() was called\neven when the stats collector exits with non-zero code, like other processes.\nBut that's not true.\n\nHow should we do this? 
HandleChildCrash() calls LogChildExit()\nwhen FatalError = false and Shutdown != ImmediateShutdown.\nWe should use the same conditions for the stats collector as follows?\n\n if (pid == PgStatPID)\n {\n PgStatPID = 0;\n- if (!EXIT_STATUS_0(exitstatus))\n+ if (!EXIT_STATUS_0(exitstatus) && !FatalError &&\n+ Shutdown != ImmediateShutdown)\n LogChildExit(LOG, _(\"statistics collector process\"),\n pid, exitstatus);\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 26 Mar 2021 09:54:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "\n\nOn 2021/03/26 9:54, Fujii Masao wrote:\n> On 2021/03/26 9:25, Masahiro Ikeda wrote:\n>> On 2021/03/25 21:23, Fujii Masao wrote:\n>>> On 2021/03/25 9:58, Andres Freund wrote:\n>>>> Also, won't this lead to postmaster now starting to complain about\n>>>> pgstat having crashed in an immediate shutdown? Right now only exit\n>>>> status 0 is expected:\n>>>>\n>>>>          if (pid == PgStatPID)\n>>>>          {\n>>>>              PgStatPID = 0;\n>>>>              if (!EXIT_STATUS_0(exitstatus))\n>>>>                  LogChildExit(LOG, _(\"statistics collector process\"),\n>>>>                               pid, exitstatus);\n>>>>              if (pmState == PM_RUN || pmState == PM_HOT_STANDBY)\n>>>>                  PgStatPID = pgstat_start();\n>>>>              continue;\n>>>>          }\n>>>\n>>> No. 
In immediate shutdown case, pmdie() sets \"Shutdown\" variable to\n>>> ImmdiateShutdown and HandleChildCrash() doesn't complain that in that case\n>>> because of the following.\n>>>\n>>>      /*\n>>>       * We only log messages and send signals if this is the first process\n>>>       * crash and we're not doing an immediate shutdown; otherwise, we're only\n>>>       * here to update postmaster's idea of live processes.  If we have\n>>> already\n>>>       * signaled children, nonzero exit status is to be expected, so don't\n>>>       * clutter log.\n>>>       */\n>>>      take_action = !FatalError && Shutdown != ImmediateShutdown;\n>>\n>> IIUC, in immediate shutdown case (and other cases too?), HandleChildCrash() is\n>> never invoked due to the exit of pgstat. My understanding of Andres-san's\n>> comment is that is it ok to show like the following log message?\n>>\n>> ```\n>> LOG:  statistics collector process (PID 64991) exited with exit code 1\n>> ```\n>>\n>> Surely, other processes don't output the log like it. So, I agree to suppress\n>> the log message.\n> \n> Yes. I was mistakenly thinking that HandleChildCrash() was called\n> even when the stats collector exits with non-zero code, like other processes.\n> But that's not true.\n> \n> How should we do this? 
HandleChildCrash() calls LogChildExit()\n> when FatalError = false and Shutdown != ImmediateShutdown.\n> We should use the same conditions for the stats collector as follows?\n> \n>                 if (pid == PgStatPID)\n>                 {\n>                         PgStatPID = 0;\n> -                       if (!EXIT_STATUS_0(exitstatus))\n> +                       if (!EXIT_STATUS_0(exitstatus) && !FatalError &&\n> +                               Shutdown != ImmediateShutdown)\n>                                 LogChildExit(LOG, _(\"statistics collector\n> process\"),\n>                                                          pid, exitstatus);\n\nThanks, I agree the above code if it's ok that the exit status of the stats\ncollector is not 0 in immediate shutdown case.\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 26 Mar 2021 16:46:14 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Hi,\n\nOn 2021-03-25 21:23:17 +0900, Fujii Masao wrote:\n> > This strikes me as a quite a misleading function name.\n> \n> Yeah, better name is always welcome :)\n\nIt might just be best to not introduce a generic function and just open\ncode one just for the stats collector...\n\n\n> > Outside of very\n> > narrow circumstances a normal exit shouldn't use _exit(1). Neither 1 no\n> > _exit() (as opposed to exit()) seem appropriate. This seems like a bad\n> > idea.\n> \n> So you're thinking that the stats collector should do proc_exit(0)\n> or something even when it receives immediate shutdown request?\n\n> One idea to avoid using _exit(1) is to change the SIGQUIT handler\n> so that it just sets the flag. 
Then if the stats collector detects that\n> the flag is set in the main loop, it gets out of the loop,\n> skips writing the permanent stats file and then exits with exit(0).\n> That is, normal and immediate shutdown requests are treated\n> almost the same way in the stats collector. Only the difference of\n> them is whether it saves the stats to the file or not. Thought?\n\nMy main complaint isn't so much that you made the stats collector\n_exit(1). It's that you added a function that sounded generic, but\nshould basically not be used anywhere (we have very few non-shmem\nconnected processes left - I don't think that number will increase).\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:11:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." }, { "msg_contents": "Hi,\n\nOn 2021-03-26 09:27:19 +0900, Masahiro Ikeda wrote:\n> On 2021/03/25 19:48, Fujii Masao wrote:\n> > Yes. So we should wait for the shared memory stats patch to be committed\n> > before working on walreceiver stats patch more?\n> \n> Yes, I agree.\n\nAgreed.\n\nOne thing that I didn't quite see discussed enough in the thread so far:\nHave you considered what regressions the stats file removal in immediate\nshutdowns could cause? Right now we will - kind of accidentally - keep\nthe stats file for immediate shutdowns, which means that autovacuum etc\nwill still have stats to work with after the next start. After this, not\nso much?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:14:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." 
}, { "msg_contents": "\n\nOn 2021/03/27 2:14, Andres Freund wrote:\n> Hi,\n> \n> On 2021-03-26 09:27:19 +0900, Masahiro Ikeda wrote:\n>> On 2021/03/25 19:48, Fujii Masao wrote:\n>>> Yes. So we should wait for the shared memory stats patch to be\n>>> committed before working on walreceiver stats patch more?\n>> \n>> Yes, I agree.\n> \n> Agreed.\n\nOK. And I withdraw this thread since the shared memory stats patch will solve\nthis problem too.\n\n> One thing that I didn't quite see discussed enough in the thread so far: \n> Have you considered what regressions the stats file removal in immediate \n> shutdowns could cause? Right now we will - kind of accidentally - keep the\n> stats file for immediate shutdowns, which means that autovacuum etc will\n> still have stats to work with after the next start. After this, not so\n> much?\n\nYes([1]). Although we keep the stats file for immediate shutdowns, IIUC, we'll\nremove them in the redo phase via pgstat_reset_all() anyway. So, this means that\nautovacuum etc can't use the stats after the next start.\n\nFWIW, the issue is discussed in [2]. The consensus is that we need to change the\nbehavior of removing all stats files even if a server crash occurs, because\nautovacuum uses the stats. There are some ideas, for example writing the stats\nfiles every X minutes (using wal or another mechanism). Then, the startup\nprocess can restore the stats with slightly lower freshness even if a crash\noccurs. But, we need to find a way to handle the stats files when\ndeleting on PITR rewind.\n\nThis is not solved yet. If my understanding is correct, the shared memory\nstats patch doesn't handle the issue yet, but solving it makes the\npatch more complicated... 
I think it's better to work on the issue after the\nbase patch of the shared memory stats is committed.\n\n[1]\nhttps://www.postgresql.org/message-id/c96d8989100e4bce4fa586064aa7e0e9%40oss.nttdata.com\n[2]\nhttps://www.postgresql.org/message-id/flat/0A3221C70F24FB45833433255569204D1F5EF25A%40G01JPEXMBYT05\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 30 Mar 2021 09:55:36 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: make the stats collector shutdown without writing the statsfiles\n if the immediate shutdown is requested." } ]
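The shutdown sequence described in the thread above — a SIGQUIT handler that only sets a flag, with the collector's main loop noticing the flag and exiting without writing the permanent stats file — can be sketched roughly as below. This is an illustrative reduction, not the actual pgstat.c code; the names `immediate_shutdown_requested` and `collector_shutdown_writes_stats` are invented for the example.

```c
#include <assert.h>
#include <signal.h>
#include <stdbool.h>

/* Illustrative sketch only -- not the actual pgstat.c code.  The handler
 * just sets a flag; the main loop notices it and skips the stats write. */
static volatile sig_atomic_t immediate_shutdown_requested = 0;

static void
handle_sigquit(int signo)
{
    (void) signo;
    immediate_shutdown_requested = 1;   /* async-signal-safe: set a flag only */
}

/* One decision point of a simplified collector loop.  Returns true if the
 * permanent stats file would be written on the way out, false if the
 * immediate-shutdown path skips it. */
static bool
collector_shutdown_writes_stats(void)
{
    if (immediate_shutdown_requested)
        return false;           /* immediate shutdown: exit without writing */
    return true;                /* normal shutdown: write the stats file first */
}
```

As the quoted text says, the only behavioral difference between the normal and immediate shutdown paths is whether the stats file gets written on the way out.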
[ { "msg_contents": "Hi all,\n\nThere has been for the last couple of weeks a collection of reports\ncomplaining that the renaming of WAL segments is broken:\nhttps://www.postgresql.org/message-id/3861ff1e-0923-7838-e826-094cc9bef737@hot.ee\nhttps://www.postgresql.org/message-id/16874-c3eecd319e36a2bf@postgresql.org\nhttps://www.postgresql.org/message-id/095ccf8d-7f58-d928-427c-b17ace23cae6@burgess.co.nz\nhttps://www.postgresql.org/message-id/16927-67c570d968c99567%40postgresql.org\n\nThese have happened on a variety of Windows versions, 2019 and 2012 R2\nbeing mentioned when segments are recycled.\n\nThe number of those failures is alarming, and the information gathered\npoints at 13.1 and 13.2 as the culprits where those failures are\nhappening, so I'd like to believe that there is a regression in 13.\nFWIW, I have also been doing some tests on my side, and while I was not\nable to trigger the reported failure, I have been able to trigger the\nsame error with an archive_command doing a simple cp that failed\ncontinuously on EACCES.\n\nFujii-san has mentioned that on Twitter, but one area that has changed\nduring the v13 cycle is aaa3aed, where the code recycling segments has\nbeen switched from a pgrename() (with a retry loop) to a\nCreateHardLinkA()+pgunlink() (with a retry loop for the second). 
One\ntheory that I have in mind here is the case where we create the hard\nlink, but fail to finish the pgunlink() on the xlogtemp.N file,\nthough after some testing it did not seem to have any impact.\n\nI am running more tests with several scenarios (aggressive segment\nrecycling or segment rotation) to get more reproducible scenarios,\nbut I was wondering if anybody had ideas around that.\n\nSo, thoughts?\n--\nMichael", "msg_date": "Tue, 16 Mar 2021 16:20:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Tue, Mar 16, 2021 at 8:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> Hi all,\n>\n> There has been for the last couple of weeks a collection of reports\n> complaining that the renaming of WAL segments is broken:\n> https://www.postgresql.org/message-id/3861ff1e-0923-7838-e826-094cc9bef737@hot.ee\n> https://www.postgresql.org/message-id/16874-c3eecd319e36a2bf@postgresql.org\n> https://www.postgresql.org/message-id/095ccf8d-7f58-d928-427c-b17ace23cae6@burgess.co.nz\n> https://www.postgresql.org/message-id/16927-67c570d968c99567%40postgresql.org\n>\n> These have happened on a variety of Windows versions, 2019 and 2012 R2\n> being mentioned when segments are recycled.\n>\n> The number of those failures is alarming, and the information gathered\n> points at 13.1 and 13.2 as the culprits where those failures are\n> happening, so I'd like to believe that there is a regression in 13.\n\nAgreed.\n\n\n> FWIW, I have also been doing some tests on my side, and while I was not\n> able to trigger the reported failure, I have been able to trigger the\n> same error with an archive_command doing a simple cp that failed\n> continuously on EACCES.\n>\n> Fujii-san has mentioned that on Twitter, but one area that has changed\n> during the v13 cycle is aaa3aed, where the code recycling segments has\n> been switched from a pgrename() (with a 
retry loop) to a\n> CreateHardLinkA()+pgunlink() (with a retry loop for the second). One\n> theory that I have in mind here is the case where we create the hard\n> link, but fail to finish the pgunlink() on the xlogtemp.N file,\n> though after some testing it did not seem to have any impact.\n\nIf you back out that patch, does the problem you can reproduce with\narchive_command go away?\n\n\n> I am running more tests with several scenarios (aggressive segment\n> recycling or segment rotation) to get more reproducible scenarios,\n> but I was wondering if anybody had ideas around that.\n>\n> So, thoughts?\n\nI agree with your analysis in general. It certainly seems to hit right\nin the center of the problem scope.\n\nMaybe hardlinks on Windows have yet another \"weird behaviour\" vs what\nwe're used to from Unix.\n\nIt would definitely be more useful if we could figure out *when* this\nhappens. But failing that, I wonder if we could find a way to provide\na build with this patch backed out for the bug reporters to test out,\ngiven they all seem to have it fairly well reproducible. (But I am\nassuming they are unlikely to be able to create their own builds easily,\ngiven the complexity of doing so on Windows). 
Given that this is a\npretty isolated change, it should hopefully be easy enough to back out\nfor testing.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 16 Mar 2021 10:02:25 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Tue, Mar 16, 2021 at 10:02:25AM +0100, Magnus Hagander wrote:\n> If you back out that patch, does the problem you can reproduce with\n> archive_command go away?\n\nThat's the first thing I did after seeing the failure, and I saw\nnothing after 2~3 hours of pgbench :)\n\nThe second thing I did was to revert back to HEAD with more logging in\nthe area, but I was not able to see my error again. Perhaps I just\nneed to put more load, there are still too many guesses and not enough\nfacts.\n\n> I agree with your analysis in general. It certainly seems to hit right\n> in the center of the problem scope.\n> \n> Maybe hardlinks on Windows have yet another \"weird behaviour\" vs what\n> we're used to from Unix.\n\nYeah, I'd like to think that this is a rational explanation, and\nthat's why I was just focusing on reproducing this issue rather\nreliably as a first step.\n\n> It would definitely be more useful if we could figure out *when* this\n> happens. But failing that, I wonder if we could find a way to provide\n> a build with this patch backed out for the bug reporters to test out,\n> given they all seem to have it fairly well reproducible. (But I am\n> assuming they are unlikely to be able to create their own builds easily,\n> given the complexity of doing so on Windows). 
Given that this is a\n> pretty isolated change, it should hopefully be easy enough to back out\n> for testing.\n\nThere is a large pool of bug reporters, hopefully one of them may be\nable to help..\n--\nMichael", "msg_date": "Tue, 16 Mar 2021 19:22:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Tue, Mar 16, 2021 at 11:22 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 16, 2021 at 10:02:25AM +0100, Magnus Hagander wrote:\n> > If you back out that patch, does the problem you can reproduce with\n> > archive_command go away?\n>\n> That's the first thing I did after seeing the failure, and I saw\n> nothing after 2~3 hours of pgbench :)\n\n:) That's at least an \"almost\".\n\n\n> The second thing I did was to revert back to HEAD with more logging in\n> the area, but I was not able to see my error again. Perhaps I just\n> need to put more load, there are still too many guesses and not enough\n> facts.\n\nAgreed.\n\n\n\n> > I agree with your analysis in general. It certainly seems to hit right\n> > in the center of the problem scope.\n> >\n> > Maybe hardlinks on Windows have yet another \"weird behaviour\" vs what\n> > we're used to from Unix.\n>\n> Yeah, I'd like to think that this is a rational explanation, and\n> that's why I was just focusing on reproducing this issue rather\n> reliably as a first step.\n\nYeah, it'd definitely be good to figure out exactly what it is that\ntriggers the issue.\n\n\n> > It would definitely be more useful if we could figure out *when* this\n> > happens. But failing that, I wonder if we could find a way to provide\n> > a build with this patch backed out for the bug reporters to test out,\n> > given they all seem to have it fairly well reproducible. (But I am\n> > assuming they are unlikely to be able to create their own builds easily,\n> > given the complexity of doing so on Windows). 
Given that this is a\n> > pretty isolated change, it should hopefully be easy enough to back out\n> > for testing.\n>\n> There is a large pool of bug reporters, hopefully one of them may be\n> able to help..\n\nI think you're overestimating people's ability to get our build going\non Windows :)\n\nIf we can provide a new .EXE built with exactly the same flags as the\nEDB downloads that they can just drop into a directory, I think it's a\nlot easier to get that done.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 16 Mar 2021 11:40:12 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Tue, Mar 16, 2021 at 11:40:12AM +0100, Magnus Hagander wrote:\n> If we can provide a new .EXE built with exactly the same flags as the\n> EDB downloads that they can just drop into a directory, I think it's a\n> lot easier to get that done.\n\nYeah, multiple people have been complaining about that bug, so I have\njust produced two builds that people with those sensitive environments\ncan use, and sent some private links to get the builds. Let's see how\nit goes from this point, but, FWIW, I have not been able to reproduce\nagain my similar problem with the archive command :/\n--\nMichael", "msg_date": "Thu, 18 Mar 2021 09:55:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "Hi,\n\nOn 2021-03-16 16:20:37 +0900, Michael Paquier wrote:\n> Fujii-san has mentioned that on Twitter, but one area that has changed\n> during the v13 cycle is aaa3aed, where the code recycling segments has\n> been switched from a pgrename() (with a retry loop) to a\n> CreateHardLinkA()+pgunlink() (with a retry loop for the second). 
One\n> theory that I have in mind here is the case where we create the hard\n> link, but fail to finish the pgunlink() on the xlogtemp.N file,\n> though after some testing it did not seem to have any impact.\n\nA related question: What on earth is the point of using the unlink\napproach on any operating system? We use the durable_rename_excl() (and\nits predecessor, durable_link_or_rename(), and in turn its open-coded\npredecessors) for things like recycling WAL files at checkpoints.\n\nNow imagine that durable_rename_excl() fails to unlink the old\nfile. We'll still have the old file, but there's a second link to a new\nWAL file, which will be used. No error will be thrown, because we don't\ncheck unlink()'s return code (but if we did, we'd still have similar\nissues).\n\nAnd then imagine that that happens again, during the next checkpoint,\nbecause the permission or whatever issue is not yet resolved. We now\nwill have the same physical file in two locations in the future WAL\nstream.\n\nWelcome, impossible-to-debug issues.\n\nAnd all of this with the sole justification of \"paranoidly trying to\navoid overwriting an existing file (there shouldn't be one).\". A few\nlines after we either unlinked the target filename, or used stat() to\nfind an unused filename.\n\nIsn't the whole idea of durable_rename_excl() bad? There's not the same\ndanger of using it when we start from a temp filename, e.g. during\ncreation of new segments, or timeline files or whatnot. 
But I still\ndon't see what the whole thing is supposed to protect us against\nrealistically.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 18:48:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "Hi,\n\nOn 2021-03-18 09:55:46 +0900, Michael Paquier wrote:\n> Let's see how it goes from this point, but, FWIW, I have not been able\n> to reproduce again my similar problem with the archive command :/ --\n\nI suspect it might be easier to reproduce the issue with smaller WAL\nsegments, a short checkpoint_timeout, and multiple jobs generating WAL\nand then sleeping for random amounts of time. Not sure if that's the\nsole ingredient, but consider what happens when there are processes that\nXLogWrite() some WAL and then sleep. Typically such a process'\nopenLogFile will still point to the WAL segment. And they may still do\nthat when the next checkpoint finishes and we recycle the WAL file.\n\nI wonder if we actually fail to unlink() the file in\ndurable_link_or_rename(), and then end up recycling the same old file\ninto multiple \"future\" positions in the WAL stream.\n\nThere's also these interesting notes at\nhttps://docs.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-createhardlinka\n\n1)\n> The security descriptor belongs to the file to which a hard link\n> points. The link itself is only a directory entry, and does not have a\n> security descriptor. Therefore, when you change the security\n> descriptor of a hard link, you change the security descriptor of the\n> underlying file, and all hard links that point to the file allow the\n> newly specified access. You cannot give a file different security\n> descriptors on a per-hard-link basis.\n\n2)\n> Flags, attributes, access, and sharing that are specified in\n> CreateFile operate on a per-file basis. 
That is, if you open a file\n> that does not allow sharing, another application cannot share the file\n> by creating a new hard link to the file.\n\n3)\n> The maximum number of hard links that can be created with this\n> function is 1023 per file. If more than 1023 links are created for a\n> file, an error results.\n\n\n1) and 2) seem problematic for restore_command use. I wonder if there's\na chance that some of the reports ended up hitting 3), and that Windows\ndoesn't handle that well.\n\n\nIf you manage to reproduce, could you check what the link count of\nall the segments is? Apparently Sysinternals' FindLinks can do that.\n\nOr perhaps even better, add an error check that the number of links of\nWAL segments is 1 in a bunch of places (recycling, opening them, closing\nthem, maybe?).\n\nPlus error reporting for unlink failures, of course.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 19:30:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Wed, Mar 17, 2021 at 07:30:04PM -0700, Andres Freund wrote:\n> I suspect it might be easier to reproduce the issue with smaller WAL\n> segments, a short checkpoint_timeout, and multiple jobs generating WAL\n> and then sleeping for random amounts of time. Not sure if that's the\n> sole ingredient, but consider what happens when there are processes that\n> XLogWrite() some WAL and then sleep. Typically such a process'\n> openLogFile will still point to the WAL segment. And they may still do\n> that when the next checkpoint finishes and we recycle the WAL file.\n\nYep. That's basically the kinds of scenarios I have been testing to\nstress the recycling/removing, with pgbench putting some load into the\nserver. This has worked for me. Once. 
But I have little idea why it\ngets easier to reproduce in the environments of others, so there may\nbe an OS-version dependency in the equation here.\n\n> I wonder if we actually fail to unlink() the file in\n> durable_link_or_rename(), and then end up recycling the same old file\n> into multiple \"future\" positions in the WAL stream.\n\nYou actually mean durable_rename_excl() as of 13~, right? Yeah, this\nmatches my impression that it is a two-step failure:\n- Failure in one of the steps of durable_rename_excl().\n- Fallback to segment removal, where we get the complaint about\nrenaming.\n\n> 1) and 2) seem problematic for restore_command use. I wonder if there's\n> a chance that some of the reports ended up hitting 3), and that Windows\n> doesn't handle that well.\n\nYeap. I was thinking about 3) being the actual problem while going\nthrough those docs two days ago.\n\n> If you manage to reproduce, could you check what the link count of\n> all the segments is? Apparently Sysinternals' FindLinks can do that.\n> \n> Or perhaps even better, add an error check that the number of links of\n> WAL segments is 1 in a bunch of places (recycling, opening them, closing\n> them, maybe?).\n> \n> Plus error reporting for unlink failures, of course.\n\nYep, that's actually something I wrote for my own setups, with\nlog_checkpoints enabled to catch all concurrent checkpoint activity\nand some LOGs. 
Still no luck unfortunately :(\n--\nMichael", "msg_date": "Thu, 18 Mar 2021 12:01:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" }, { "msg_contents": "On Thu, Mar 18, 2021 at 12:01:40PM +0900, Michael Paquier wrote:\n> Yep, that's actually something I wrote for my own setups, with\n> log_checkpoints enabled to catch all concurrent checkpoint activity\n> and some LOGs. Still no luck unfortunately :(\n\nThe various reporters had more luck than I did in reproducing the\nissue, so I have applied 909b449e to address the issue. I am pretty\nsure that we should review this business more in the future, but I'd\nrather not touch the stable branches.\n--\nMichael", "msg_date": "Mon, 22 Mar 2021 14:46:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" } ]
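One concrete shape for the sanity check suggested in the thread above — verifying that a WAL segment has exactly one directory entry (link count 1) before reusing it, so that a leftover second link from an interrupted link()+unlink() rename would be caught — might look like the POSIX sketch below. The helper name is invented and this is illustrative only; it is not the fix that was actually applied.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

/* Illustrative sketch only: check that 'path' has exactly one hard link,
 * i.e. no leftover second directory entry from an interrupted
 * link()+unlink() style rename.  Helper name is invented for this example. */
static bool
has_single_link(const char *path)
{
    struct stat st;

    if (stat(path, &st) != 0)
        return false;           /* treat stat() failure as "not safe" */
    return st.st_nlink == 1;
}
```

Such a check could be added, as suggested upthread, at recycling time, at open time, and at close time, with an error report whenever the count is unexpectedly greater than one.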
[ { "msg_contents": "While I worked on a patch, I noticed a comment that is inconsistent\nwith the facts.\n\n> * SIGQUIT is the special signal that says exit without proc_exit\n> * and let the user know what's going on. But if SendStop is set\n> * (-s on command line), then we send SIGSTOP instead, so that we\n> * can get core dumps from all backends by hand.\n\nSendStop is set by the \"-T\" option. It was changed by 86c23a6eb2 from \"-s\"\nin 2006.\n\nThe attached patch fixes the comment for the master branch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 16 Mar 2021 16:51:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "comment fix in postmaster.c" }, { "msg_contents": "\n\nOn 2021/03/16 16:51, Kyotaro Horiguchi wrote:\n> While I worked on a patch, I noticed a comment that is inconsistent\n> with the facts.\n> \n>> * SIGQUIT is the special signal that says exit without proc_exit\n>> * and let the user know what's going on. But if SendStop is set\n>> * (-s on command line), then we send SIGSTOP instead, so that we\n>> * can get core dumps from all backends by hand.\n> \n> SendStop is set by the \"-T\" option. It was changed by 86c23a6eb2 from \"-s\"\n> in 2006.\n> \n> The attached patch fixes the comment for the master branch.\n\nThanks for the patch! LGTM. Barring any objection, I will push the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 16 Mar 2021 18:27:53 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: comment fix in postmaster.c" }, { "msg_contents": "\n\nOn 2021/03/16 18:27, Fujii Masao wrote:\n\n> Thanks for the patch! LGTM. Barring any objection, I will push the patch.\n\nPushed. 
Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 19 Mar 2021 11:31:27 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: comment fix in postmaster.c" } ]
[ { "msg_contents": ">Yeah, it'd definitely be good to figure out exactly what it is that\n>triggers the issue.\n\nI think that this issue is the same as 1.\nAnd IMHO the patch there solves it, but nobody is interested.\n\nI think that \"MOVEFILE_COPY_ALLOWED\" is what is missing.\nAt least on my machine, Postgres can rename statistics files.\n\nregards,\nRanier Vilela\n\n1.\nhttps://www.postgresql.org/message-id/20200616041018.GR20404%40telsasoft.com", "msg_date": "Tue, 16 Mar 2021 08:07:34 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Permission failures with WAL files in 13~ on Windows" } ]
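For illustration, the bounded retry behavior that pgrename() provides (and that the hard-link approach in 13 bypassed for the rename step) can be sketched on POSIX as below. The function name and retry policy are invented for this example; the real Windows code retries on sharing/permission violations, and, as the message above argues, MOVEFILE_COPY_ALLOWED may be a missing ingredient there — neither of which a plain POSIX rename() expresses.

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Illustrative POSIX sketch of a rename with a bounded retry loop, in the
 * spirit of pgrename()'s behavior (on Windows, transient failures from
 * antivirus or backup scanners holding the file open are common).
 * Name and policy are invented; this is not the actual PostgreSQL code. */
static int
rename_with_retries(const char *from, const char *to, int max_tries)
{
    for (int i = 0; i < max_tries; i++)
    {
        if (rename(from, to) == 0)
            return 0;
        if (errno != EACCES && errno != EBUSY)
            return -1;          /* not a transient condition: give up */
        usleep(100 * 1000);     /* back off 100ms before retrying */
    }
    return -1;
}
```

The point of the loop is that a transiently-locked target resolves itself after a short wait, while a persistent permission problem still surfaces as a hard failure after the retries are exhausted.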
[ { "msg_contents": "With HEAD (I think v12 and greater), I see $subject when trying out\nthe following scenario:\n\n-- in backend 1\ncreate table p (a int primary key);\ncreate table f (a int references p on update cascade deferrable\ninitially deferred);\ninsert into p values (1);\nbegin isolation level serializable;\ninsert into p values (3);\n\n-- in another backend\ninsert into f values (1)\n\n-- back in backend 1\nupdate p set a = a + 1;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nI see the following backtrace:\n\n#0 0x00007f747e6e2387 in raise () from /lib64/libc.so.6\n#1 0x00007f747e6e3a78 in abort () from /lib64/libc.so.6\n#2 0x0000000000ae056a in ExceptionalCondition (\n conditionName=0xb67c10 \"!ItemPointerEquals(&oldtup.t_self,\n&oldtup.t_data->t_ctid)\",\n errorType=0xb66d89 \"FailedAssertion\", fileName=0xb66e68\n\"heapam.c\", lineNumber=3560) at assert.c:69\n#3 0x00000000004eed16 in heap_update (relation=0x7f747f569590,\notid=0x7ffe6f236ec0, newtup=0x1c214b8, cid=2,\n crosscheck=0x1c317f8, wait=true, tmfd=0x7ffe6f236df0,\nlockmode=0x7ffe6f236dec) at heapam.c:3560\n#4 0x00000000004fdb52 in heapam_tuple_update\n(relation=0x7f747f569590, otid=0x7ffe6f236ec0, slot=0x1c43fc8, cid=2,\n snapshot=0x1c31a30, crosscheck=0x1c317f8, wait=true,\ntmfd=0x7ffe6f236df0, lockmode=0x7ffe6f236dec,\n update_indexes=0x7ffe6f236deb) at heapam_handler.c:327\n#5 0x000000000075a7fc in table_tuple_update (rel=0x7f747f569590,\notid=0x7ffe6f236ec0, slot=0x1c43fc8, cid=2,\n snapshot=0x1c31a30, crosscheck=0x1c317f8, wait=true,\ntmfd=0x7ffe6f236df0, lockmode=0x7ffe6f236dec,\n update_indexes=0x7ffe6f236deb) at ../../../src/include/access/tableam.h:1509\n#6 0x000000000075cd20 in ExecUpdate (mtstate=0x1c42540,\nresultRelInfo=0x1c42778, tupleid=0x7ffe6f236ec0, oldtuple=0x0,\n slot=0x1c43fc8, planSlot=0x1c43e78, 
epqstate=0x1c42638,\nestate=0x1c422d0, canSetTag=true) at nodeModifyTable.c:1498\n#7 0x000000000075e0a3 in ExecModifyTable (pstate=0x1c42540) at\nnodeModifyTable.c:2254\n#8 0x000000000072674e in ExecProcNodeFirst (node=0x1c42540) at\nexecProcnode.c:456\n#9 0x000000000071b13b in ExecProcNode (node=0x1c42540) at\n../../../src/include/executor/executor.h:247\n#10 0x000000000071d8f3 in ExecutePlan (estate=0x1c422d0,\nplanstate=0x1c42540, use_parallel_mode=false, operation=CMD_UPDATE,\n sendTuples=false, numberTuples=0, direction=ForwardScanDirection,\ndest=0xcb1440 <spi_printtupDR>, execute_once=true)\n at execMain.c:1531\n#11 0x000000000071b75f in standard_ExecutorRun (queryDesc=0x1c4cd18,\ndirection=ForwardScanDirection, count=0,\n execute_once=true) at execMain.c:350\n#12 0x000000000071b587 in ExecutorRun (queryDesc=0x1c4cd18,\ndirection=ForwardScanDirection, count=0, execute_once=true)\n at execMain.c:294\n#13 0x0000000000777a88 in _SPI_pquery (queryDesc=0x1c4cd18,\nfire_triggers=false, tcount=0) at spi.c:2729\n#14 0x00000000007774fa in _SPI_execute_plan (plan=0x1bf93d0,\nparamLI=0x1c402c0, snapshot=0x1036840 <SecondarySnapshotData>,\n crosscheck_snapshot=0x1c317f8, read_only=false,\nno_snapshots=false, fire_triggers=false, tcount=0, caller_dest=0x0,\n plan_owner=0x1bc1c10) at spi.c:2500\n#15 0x00000000007740a9 in SPI_execute_snapshot (plan=0x1bf93d0,\nValues=0x7ffe6f237340, Nulls=0x7ffe6f237300 \" \",\n snapshot=0x1036840 <SecondarySnapshotData>,\ncrosscheck_snapshot=0x1c317f8, read_only=false, fire_triggers=false,\ntcount=0)\n at spi.c:693\n#16 0x0000000000a52724 in ri_PerformCheck (riinfo=0x1c3f2f8,\nqkey=0x7ffe6f2378a0, qplan=0x1bf93d0, fk_rel=0x7f747f569590,\n pk_rel=0x7f747f564a30, oldslot=0x1c042b8, newslot=0x1c04420,\ndetectNewRows=true, expect_OK=9) at ri_triggers.c:2517\n#17 0x0000000000a4fee5 in RI_FKey_cascade_upd (fcinfo=0x7ffe6f237a60)\nat ri_triggers.c:1163\n#18 0x00000000006ea114 in ExecCallTriggerFunc\n(trigdata=0x7ffe6f237b00, tgindx=1, 
finfo=0x1bc5be0, instr=0x0,\n per_tuple_context=0x1c06760) at trigger.c:2141\n#19 0x00000000006ed216 in AfterTriggerExecute (estate=0x1bc52f0,\nevent=0x1c196c0, relInfo=0x1bc5798, trigdesc=0x1bc59d0,\n finfo=0x1bc5bb0, instr=0x0, per_tuple_context=0x1c06760,\ntrig_tuple_slot1=0x0, trig_tuple_slot2=0x0) at trigger.c:4030\n#20 0x00000000006ed6e5 in afterTriggerInvokeEvents (events=0x1c31ac8,\nfiring_id=1, estate=0x1bc52f0, delete_ok=false)\n at trigger.c:4244\n#21 0x00000000006ede4c in AfterTriggerEndQuery (estate=0x1bc52f0) at\ntrigger.c:4581\n#22 0x000000000071b90c in standard_ExecutorFinish\n(queryDesc=0x1c13040) at execMain.c:425\n#23 0x000000000071b803 in ExecutorFinish (queryDesc=0x1c13040) at execMain.c:393\n#24 0x0000000000955ec6 in ProcessQuery (plan=0x1c51b30,\nsourceText=0x1b424a0 \"update p set a = a + 1;\", params=0x0,\n queryEnv=0x0, dest=0x1c51ca0, qc=0x7ffe6f237f20) at pquery.c:190\n#25 0x0000000000957701 in PortalRunMulti (portal=0x1ba4980,\nisTopLevel=true, setHoldSnapshot=false, dest=0x1c51ca0,\n altdest=0x1c51ca0, qc=0x7ffe6f237f20) at pquery.c:1267\n#26 0x0000000000956ca5 in PortalRun (portal=0x1ba4980,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\n dest=0x1c51ca0, altdest=0x1c51ca0, qc=0x7ffe6f237f20) at pquery.c:779\n#27 0x0000000000950c2a in exec_simple_query (query_string=0x1b424a0\n\"update p set a = a + 1;\") at postgres.c:1173\n#28 0x0000000000954e06 in PostgresMain (argc=1, argv=0x7ffe6f2381b0,\ndbname=0x1b6c868 \"postgres\", username=0x1b6c848 \"amit\")\n at postgres.c:4327\n#29 0x0000000000896188 in BackendRun (port=0x1b64130) at postmaster.c:4465\n#30 0x0000000000895b0e in BackendStartup (port=0x1b64130) at postmaster.c:4187\n#31 0x0000000000892174 in ServerLoop () at postmaster.c:1736\n#32 0x0000000000891a4b in PostmasterMain (argc=3, argv=0x1b3cc20) at\npostmaster.c:1408\n#33 0x000000000079360f in main (argc=3, argv=0x1b3cc20) at main.c:209\n\nI haven't checked the failure in more detail yet other than that the\nfailed 
Assert was added in 5db6df0c0117.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 16 Mar 2021 23:02:38 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "crash during cascaded foreign key update" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> With HEAD (I think v12 and greater), I see $subject when trying out\n> the following scenario:\n\nI wonder if this is related to\n\nhttps://www.postgresql.org/message-id/flat/89429.1584443208%40antos\n\nwhich we've still not done anything about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Mar 2021 10:17:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: crash during cascaded foreign key update" }, { "msg_contents": "On Tue, Mar 16, 2021 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > With HEAD (I think v12 and greater), I see $subject when trying out\n> > the following scenario:\n>\n> I wonder if this is related to\n>\n> https://www.postgresql.org/message-id/flat/89429.1584443208%40antos\n>\n> which we've still not done anything about.\n\nAh, indeed the same issue. 
Will read through that discussion first, thanks.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 17 Mar 2021 11:01:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: crash during cascaded foreign key update" }, { "msg_contents": "On Wed, Mar 17, 2021 at 11:01 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Mar 16, 2021 at 11:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> > > With HEAD (I think v12 and greater), I see $subject when trying out\n> > > the following scenario:\n\nActually, the crash is reproducible in all supported versions (9.6~),\nalthough the failing Assert is different in branches older than 12.\nThe underlying issue is nevertheless the same.\n\n> > I wonder if this is related to\n> >\n> > https://www.postgresql.org/message-id/flat/89429.1584443208%40antos\n> >\n> > which we've still not done anything about.\n>\n> Ah, indeed the same issue. Will read through that discussion first, thanks.\n\nSo, it appears that there's no live bug that actually manifests in\nreal-world use cases, even though the failing Assert shows that the\ncrosscheck snapshot handling code has grown inconsistent with its\nsurrounding code since it was first added in commit 55d85f42a89,\nwhich, if I read correctly, also seems to be the main conclusion of the\nlinked thread. I found the 2nd email in that thread very helpful to\nunderstand the problem.\n\nAs for a solution, how about making heap_update() and heap_delete()\nthemselves report the error immediately upon a tuple failing the\ncrosscheck snapshot visibility test, instead of leaving it to the\ncaller, which would definitely report the error in this case AFAICS?\nIf we do that, we don't really have to bother with figuring out sane\nresult codes for the crosscheck snapshot failure case. 
It also sounds\nlike that would be the easiest solution to back-patch.\n\nI've attached a patch for that, which also adds isolation tests for these cases.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 18 Mar 2021 18:42:35 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "Re: crash during cascaded foreign key update" } ]
[ { "msg_contents": ">#0 0x00007f747e6e2387 in raise () from /lib64/libc.so.6\n>#1 0x00007f747e6e3a78 in abort () from /lib64/libc.so.6\n>#2 0x0000000000ae056a in ExceptionalCondition (\n>conditionName=0xb67c10 \"!ItemPointerEquals(&oldtup.t_self,\n>&oldtup.t_data->t_ctid)\",\n>errorType=0xb66d89 \"FailedAssertion\", fileName=0xb66e68\n>\"heapam.c\", lineNumber=3560) at assert.c:69\n>#3 0x00000000004eed16 in heap_update (relation=0x7f747f569590,\n>otid=0x7ffe6f236ec0, newtup=0x1c214b8, cid=2,\n>crosscheck=0x1c317f8, wait=true, tmfd=0x7ffe6f236df0,\n>lockmode=0x7ffe6f236dec) at heapam.c:3560\n\nI have this report from one static analysis tool:\nheapam.c (9379):\nDereferencing of a potential null pointer 'oldtup.t_data'\n\nregards,\nRanier Vilela", "msg_date": "Tue, 16 Mar 2021 11:52:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "re: crash during cascaded foreign key update" } ]
[ { "msg_contents": "Use pre-fetching for ANALYZE\n\nWhen we have posix_fadvise() available, we can improve the performance\nof an ANALYZE by quite a bit by using it to inform the kernel of the\nblocks that we're going to be asking for. Similar to bitmap index\nscans, the number of buffers pre-fetched is based off of the\nmaintenance_io_concurrency setting (for the particular tablespace or,\nif not set, globally, via get_tablespace_maintenance_io_concurrency()).\n\nReviewed-By: Heikki Linnakangas, Tomas Vondra\nDiscussion: https://www.postgresql.org/message-id/VI1PR0701MB69603A433348EDCF783C6ECBF6EF0%40VI1PR0701MB6960.eurprd07.prod.outlook.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/c6fc50cb40285141fad401321ae21becbaea1c59\n\nModified Files\n--------------\nsrc/backend/commands/analyze.c | 73 ++++++++++++++++++++++++++++++++++++++++--\n1 file changed, 71 insertions(+), 2 deletions(-)", "msg_date": "Tue, 16 Mar 2021 18:48:08 +0000", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "pgsql: Use pre-fetching for ANALYZE" }, { "msg_contents": "Hi,\n\nOn 2021-03-16 18:48:08 +0000, Stephen Frost wrote:\n> Use pre-fetching for ANALYZE\n> \n> When we have posix_fadvise() available, we can improve the performance\n> of an ANALYZE by quite a bit by using it to inform the kernel of the\n> blocks that we're going to be asking for. 
Similar to bitmap index\n> scans, the number of buffers pre-fetched is based off of the\n> maintenance_io_concurrency setting (for the particular tablespace or,\n> if not set, globally, via get_tablespace_maintenance_io_concurrency()).\n\nI just looked at this as part of debugging a crash / hang in the AIO patch.\n\nThe code does:\n\n\t\tblock_accepted = table_scan_analyze_next_block(scan, targblock, vac_strategy);\n\n#ifdef USE_PREFETCH\n\n\t\t/*\n\t\t * When pre-fetching, after we get a block, tell the kernel about the\n\t\t * next one we will want, if there's any left.\n\t\t *\n\t\t * We want to do this even if the table_scan_analyze_next_block() call\n\t\t * above decides against analyzing the block it picked.\n\t\t */\n\t\tif (prefetch_maximum && prefetch_targblock != InvalidBlockNumber)\n\t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_targblock);\n#endif\n\nI.e. we lock a buffer and *then* we prefetch another buffer. That seems like a\nquite bad idea to me. Why are we doing IO while holding a content lock, if we\ncan avoid it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 19:30:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Use pre-fetching for ANALYZE" }, { "msg_contents": "Hi,\n\nOn 2022-06-02 19:30:16 -0700, Andres Freund wrote:\n> On 2021-03-16 18:48:08 +0000, Stephen Frost wrote:\n> > Use pre-fetching for ANALYZE\n> > \n> > When we have posix_fadvise() available, we can improve the performance\n> > of an ANALYZE by quite a bit by using it to inform the kernel of the\n> > blocks that we're going to be asking for. 
Similar to bitmap index\n> > scans, the number of buffers pre-fetched is based off of the\n> > maintenance_io_concurrency setting (for the particular tablespace or,\n> > if not set, globally, via get_tablespace_maintenance_io_concurrency()).\n> \n> I just looked at this as part of debugging a crash / hang in the AIO patch.\n> \n> The code does:\n> \n> \t\tblock_accepted = table_scan_analyze_next_block(scan, targblock, vac_strategy);\n> \n> #ifdef USE_PREFETCH\n> \n> \t\t/*\n> \t\t * When pre-fetching, after we get a block, tell the kernel about the\n> \t\t * next one we will want, if there's any left.\n> \t\t *\n> \t\t * We want to do this even if the table_scan_analyze_next_block() call\n> \t\t * above decides against analyzing the block it picked.\n> \t\t */\n> \t\tif (prefetch_maximum && prefetch_targblock != InvalidBlockNumber)\n> \t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_targblock);\n> #endif\n> \n> I.e. we lock a buffer and *then* we prefetch another buffer. That seems like a\n> quite bad idea to me. Why are we doing IO while holding a content lock, if we\n> can avoid it?\n\nIt also seems decidedly not great from a layering POV to do the IO in\nanalyze.c. There's no guarantee that the tableam maps blocks in a way that's\ncompatible with PrefetchBuffer(). 
Yes, the bitmap heap scan code does\nsomething similar, but a) that is opt in by the AM, b) there's a comment\nsaying it's quite crufty and should be fixed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Jun 2022 19:35:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Use pre-fetching for ANALYZE" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2022-06-02 19:30:16 -0700, Andres Freund wrote:\n> > On 2021-03-16 18:48:08 +0000, Stephen Frost wrote:\n> > > Use pre-fetching for ANALYZE\n> > > \n> > > When we have posix_fadvise() available, we can improve the performance\n> > > of an ANALYZE by quite a bit by using it to inform the kernel of the\n> > > blocks that we're going to be asking for. Similar to bitmap index\n> > > scans, the number of buffers pre-fetched is based off of the\n> > > maintenance_io_concurrency setting (for the particular tablespace or,\n> > > if not set, globally, via get_tablespace_maintenance_io_concurrency()).\n> > \n> > I just looked at this as part of debugging a crash / hang in the AIO patch.\n> > \n> > The code does:\n> > \n> > \t\tblock_accepted = table_scan_analyze_next_block(scan, targblock, vac_strategy);\n> > \n> > #ifdef USE_PREFETCH\n> > \n> > \t\t/*\n> > \t\t * When pre-fetching, after we get a block, tell the kernel about the\n> > \t\t * next one we will want, if there's any left.\n> > \t\t *\n> > \t\t * We want to do this even if the table_scan_analyze_next_block() call\n> > \t\t * above decides against analyzing the block it picked.\n> > \t\t */\n> > \t\tif (prefetch_maximum && prefetch_targblock != InvalidBlockNumber)\n> > \t\t\tPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, prefetch_targblock);\n> > #endif\n> > \n> > I.e. we lock a buffer and *then* we prefetch another buffer. That seems like a\n> > quite bad idea to me. 
Why are we doing IO while holding a content lock, if we\n> > can avoid it?\n\nAt the end, we're doing a posix_fadvise() which is a kernel call but\nhopefully wouldn't do actual IO when we call it. Still, agreed that\nit'd be better to do that without holding locks and no objection to\nmaking such a change.\n\n> It also seems decidedly not great from a layering POV to do the IO in\n> analyze.c. There's no guarantee that the tableam maps blocks in a way that's\n> compatible with PrefetchBuffer(). Yes, the bitmap heap scan code does\n> something similar, but a) that is opt in by the AM, b) there's a comment\n> saying it's quite crufty and should be fixed.\n\nCertainly open to suggestions. Are you thinking it'd make sense to add\na 'prefetch_block' method to TableAmRoutine? Or did you have another\nthought?\n\nThanks!\n\nStephen", "msg_date": "Mon, 6 Jun 2022 14:52:03 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": true, "msg_subject": "Re: pgsql: Use pre-fetching for ANALYZE" } ]
[ { "msg_contents": "[ starting a new thread for this ]\n\nAndres Freund <andres@anarazel.de> writes:\n> I wonder if it'd be worth starting to explicitly annotate all the places\n> that do allocations and are fine with leaking them. E.g. by introducing\n> malloc_permanently() or such. Right now it's hard to use valgrind et al\n> to detect leaks because of all the false positives due to such \"ok to\n> leak\" allocations.\n\nOut of curiosity I poked at this for a little while. It doesn't appear\nto me that we leak much at all, at least not if you are willing to take\n\"still reachable\" blocks as not-leaked. Most of the problem is\nmore subtle than that.\n\nI found just a couple of things that really seem like leaks of permanent\ndata structures to valgrind:\n\n* Where ps_status.c copies the original \"environ\" array (on\nPS_USE_CLOBBER_ARGV platforms), valgrind thinks that's all leaked,\nimplying that it doesn't count the \"environ\" global as a valid\nreference to leakable data. I was able to shut that up by also saving\nthe pointer into an otherwise-unused static variable. (This is sort of\na poor man's implementation of your \"malloc_permanently\" idea; but I\ndoubt it's worth working harder, given the small number of use-cases.)\n\n* The postmaster's sock_paths and lock_files lists appear to be leaked,\nbut we're doing that to ourselves by throwing away the pointers to them\nwithout physically freeing the lists. We can just not do that.\n\nWhat I found out is that we have a lot of issues that seem to devolve\nto valgrind not being sure that a block is referenced. I identified\ntwo main causes of that:\n\n(1) We have a pointer, but it doesn't actually point right at the start\nof the block. A primary culprit here is lists of thingies that use the\nslist and dlist infrastructure. As an experiment, I moved the dlist_node\nfields of some popular structs to the beginning, and verified that that\nsilences associated complaints. 
I'm not sure that we want to insist on\nput-the-link-first as policy (although if we did, it could provide some\nnotational savings perhaps). However, unless someone knows of a way to\nteach valgrind about this situation, there may be no other way to silence\nthose leakage complaints. A secondary culprit is people randomly applying\nCACHELINEALIGN or the like to a palloc'd address, so that the address we\nhave isn't pointing right at the block start.\n\n(2) The only pointer to the start of a block is actually somewhere within\nthe block. This is common in dynahash tables, where we allocate a slab\nof entries in a single palloc and then thread them together. Each entry\nshould have exactly one referencing pointer, but that pointer is more\nlikely to be elsewhere within the same palloc block than in the external\nhash bucket array. AFAICT, all cases except where the slab's first entry\nis pointed to by a hash bucket pointer confuse valgrind to some extent.\nI was able to hack around this by preventing dynahash from allocating\nmore than one hash entry per palloc, but I wonder if there's a better way.\n\n\nAttached is a very crude hack, not meant for commit, that hacks things\nup enough to greatly reduce the number of complaints with\n\"--leak-check=full\".\n\nOne thing I've failed to silence so far is a bunch of entries like\n\n==00:00:03:56.088 3467702== 1,861 bytes in 67 blocks are definitely lost in loss record 1,290 of 1,418\n==00:00:03:56.088 3467702== at 0x950650: MemoryContextAlloc (mcxt.c:827)\n==00:00:03:56.088 3467702== by 0x951710: MemoryContextStrdup (mcxt.c:1179)\n==00:00:03:56.088 3467702== by 0x91C86E: RelationInitIndexAccessInfo (relcache.c:1444)\n==00:00:03:56.088 3467702== by 0x91DA9C: RelationBuildDesc (relcache.c:1200)\n\nwhich is complaining about the memory context identifiers for system\nindexes' rd_indexcxt contexts. Those are surely not being leaked in\nany real sense. 
I suspect that this has something to do with valgrind\nnot counting the context->ident fields as live pointers, but I don't\nhave enough valgrind-fu to fix that.\n\nAnyway, the bottom line is that I do not think that we have all that\nmany uses of the pattern you postulated originally. It's more that\nwe've designed some valgrind-unfriendly data structures. We need to\nimprove that situation to make much progress here.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Mar 2021 19:36:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nDavid, there's a question about a commit of yours below, hence adding\nyou.\n\nOn 2021-03-16 19:36:10 -0400, Tom Lane wrote:\n> Out of curiosity I poked at this for a little while.\n\nCool.\n\n\n> It doesn't appear to me that we leak much at all, at least not if you\n> are willing to take \"still reachable\" blocks as not-leaked.\n\nWell, I think for any sort of automated testing - which I think would be\nuseful - we'd really need *no* leaks. I know that I get a few bleats\nwhenever I forget to set --leak-check=no. It's also not just postgres\nitself, but some of the helper tools...\n\nAnd it's not just valgrind, also gcc/clang sanitizers...\n\n\n> What I found out is that we have a lot of issues that seem to devolve\n> to valgrind not being sure that a block is referenced. I identified\n> two main causes of that:\n>\n> (1) We have a pointer, but it doesn't actually point right at the start\n> of the block. A primary culprit here is lists of thingies that use the\n> slist and dlist infrastructure. As an experiment, I moved the dlist_node\n> fields of some popular structs to the beginning, and verified that that\n> silences associated complaints. I'm not sure that we want to insist on\n> put-the-link-first as policy (although if we did, it could provide some\n> notational savings perhaps). 
However, unless someone knows of a way to\n> teach valgrind about this situation, there may be no other way to silence\n> those leakage complaints. A secondary culprit is people randomly applying\n> CACHELINEALIGN or the like to a palloc'd address, so that the address we\n> have isn't pointing right at the block start.\n\nHm, do you still have a backtrace / suppression for one of those? I\ndidn't see any in a quick (*) serial installcheck I just ran. Or I\nwasn't able to pinpoint them to this issue.\n\n\nI think the run might have shown a genuine leak:\n\n==2048803== 16 bytes in 1 blocks are definitely lost in loss record 139 of 906\n==2048803== at 0x89D2EA: palloc (mcxt.c:975)\n==2048803== by 0x2392D3: heap_beginscan (heapam.c:1198)\n==2048803== by 0x264E8F: table_beginscan_strat (tableam.h:918)\n==2048803== by 0x265994: systable_beginscan (genam.c:453)\n==2048803== by 0x83C2D1: SearchCatCacheMiss (catcache.c:1359)\n==2048803== by 0x83C197: SearchCatCacheInternal (catcache.c:1299)\n==2048803== by 0x83BE9A: SearchCatCache1 (catcache.c:1167)\n==2048803== by 0x85876A: SearchSysCache1 (syscache.c:1134)\n==2048803== by 0x84CDB3: RelationInitTableAccessMethod (relcache.c:1795)\n==2048803== by 0x84F807: RelationBuildLocalRelation (relcache.c:3554)\n==2048803== by 0x303C9D: heap_create (heap.c:395)\n==2048803== by 0x305790: heap_create_with_catalog (heap.c:1291)\n==2048803== by 0x41A327: DefineRelation (tablecmds.c:885)\n==2048803== by 0x6C96B6: ProcessUtilitySlow (utility.c:1131)\n==2048803== by 0x6C948A: standard_ProcessUtility (utility.c:1034)\n==2048803== by 0x6C865F: ProcessUtility (utility.c:525)\n==2048803== by 0x6C7409: PortalRunUtility (pquery.c:1159)\n==2048803== by 0x6C7636: PortalRunMulti (pquery.c:1305)\n==2048803== by 0x6C6B11: PortalRun (pquery.c:779)\n==2048803== by 0x6C05AB: exec_simple_query (postgres.c:1173)\n==2048803==\n{\n <insert_a_suppression_name_here>\n Memcheck:Leak\n match-leak-kinds: definite\n fun:palloc\n fun:heap_beginscan\n 
fun:table_beginscan_strat\n fun:systable_beginscan\n fun:SearchCatCacheMiss\n fun:SearchCatCacheInternal\n fun:SearchCatCache1\n fun:SearchSysCache1\n fun:RelationInitTableAccessMethod\n fun:RelationBuildLocalRelation\n fun:heap_create\n fun:heap_create_with_catalog\n fun:DefineRelation\n fun:ProcessUtilitySlow\n fun:standard_ProcessUtility\n fun:ProcessUtility\n fun:PortalRunUtility\n fun:PortalRunMulti\n fun:PortalRun\n fun:exec_simple_query\n}\n\nSince 56788d2156fc heap_beginscan() allocates\n\tscan->rs_base.rs_private =\n\t\tpalloc(sizeof(ParallelBlockTableScanWorkerData));\nin heap_beginscan(). But doesn't free it in heap_endscan().\n\nIn most of the places heap scans are begun inside transient contexts,\nbut not always. In the above trace for example\nRelationBuildLocalRelation switched to CacheMemoryContext, and nothing\nswitched to something else.\n\nI'm a bit confused about the precise design of rs_private /\nParallelBlockTableScanWorkerData, specifically why it's been added to\nTableScanDesc, instead of just adding it to HeapScanDesc? And why is it\nallocated unconditionally, instead of just for parallel scans?\n\n\nI don't think this is a false positive, even though it theoretically\ncould be freed by resetting CacheMemoryContext (see below)?\n\n\nI saw a lot of false positives from autovacuum workers, because\nAutovacMemCxt is never deleted, and because table_toast_map is created\nin TopMemoryContext. Adding an explicit\nMemoryContextDelete(AutovacMemCxt) and parenting table_toast_map in that\nshut that up.\n\n\nWhich brings me to the question why allocations in CacheMemoryContext,\nAutovacMemCxt are considered to be \"lost\", even though they're still\n\"reachable\" via a context reset. I think valgrind ends up treating\nmemory allocated via memory contexts that still exist at process end as\nlost, regardless of being reachable via the the memory pool (from\nvalgrinds view). 
Which I guess actually makes sense, for things like\nTopMemoryContext and CacheContext - anything not reachable by means\nother than a context reset is effectively lost for those.\n\nFor autovac launcher / worker it seems like a sensible thing to just\ndelete AutovacMemCxt.\n\n\n\n> (2) The only pointer to the start of a block is actually somewhere within\n> the block. This is common in dynahash tables, where we allocate a slab\n> of entries in a single palloc and then thread them together. Each entry\n> should have exactly one referencing pointer, but that pointer is more\n> likely to be elsewhere within the same palloc block than in the external\n> hash bucket array. AFAICT, all cases except where the slab's first entry\n> is pointed to by a hash bucket pointer confuse valgrind to some extent.\n> I was able to hack around this by preventing dynahash from allocating\n> more than one hash entry per palloc, but I wonder if there's a better way.\n\nHm. For me the number of leaks seem to stay the same with or without\nyour changes related to this. Is this a USE_VALGRIND build?\n\nI'm using valgrind-3.16.1\n\n\n> Attached is a very crude hack, not meant for commit, that hacks things\n> up enough to greatly reduce the number of complaints with\n> \"--leak-check=full\".\n>\n> One thing I've failed to silence so far is a bunch of entries like\n>\n> ==00:00:03:56.088 3467702== 1,861 bytes in 67 blocks are definitely lost in loss record 1,290 of 1,418\n> ==00:00:03:56.088 3467702== at 0x950650: MemoryContextAlloc (mcxt.c:827)\n> ==00:00:03:56.088 3467702== by 0x951710: MemoryContextStrdup (mcxt.c:1179)\n> ==00:00:03:56.088 3467702== by 0x91C86E: RelationInitIndexAccessInfo (relcache.c:1444)\n> ==00:00:03:56.088 3467702== by 0x91DA9C: RelationBuildDesc (relcache.c:1200)\n>\n> which is complaining about the memory context identifiers for system\n> indexes' rd_indexcxt contexts. Those are surely not being leaked in\n> any real sense. 
I suspect that this has something to do with valgrind\n> not counting the context->ident fields as live pointers, but I don't\n> have enough valgrind-fu to fix that.\n\nYea. I suspect it's related to the fact that we mark the memory as a\nvalgrind mempool, I'll try to investigate.\n\n\nI do see a bunch of leaks bleats below fun:plpgsql_compile that I don't\nyet understand. E.g.\n\n==2054558== 32 bytes in 1 blocks are definitely lost in loss record 284 of 913\n==2054558== at 0x89D389: palloc (mcxt.c:975)\n==2054558== by 0x518732: new_list (list.c:134)\n==2054558== by 0x518C0C: lappend (list.c:341)\n==2054558== by 0x83CAE7: SearchCatCacheList (catcache.c:1691)\n==2054558== by 0x859A9C: SearchSysCacheList (syscache.c:1447)\n==2054558== by 0x313192: FuncnameGetCandidates (namespace.c:975)\n==2054558== by 0x313D91: FunctionIsVisible (namespace.c:1450)\n==2054558== by 0x7C2891: format_procedure_extended (regproc.c:375)\n==2054558== by 0x7C27C3: format_procedure (regproc.c:324)\n==2054558== by 0xA7693E1: do_compile (pl_comp.c:348)\n==2054558== by 0xA769130: plpgsql_compile (pl_comp.c:224)\n\nand\n\n==2054558== 30 bytes in 4 blocks are definitely lost in loss record 225 of 913\n==2054558== at 0x89D389: palloc (mcxt.c:975)\n==2054558== by 0x3ADDAE: downcase_identifier (scansup.c:52)\n==2054558== by 0x3ADD85: downcase_truncate_identifier (scansup.c:39)\n==2054558== by 0x3AB5E4: core_yylex (scan.l:1032)\n==2054558== by 0xA789B2D: internal_yylex (pl_scanner.c:321)\n==2054558== by 0xA7896E3: plpgsql_yylex (pl_scanner.c:152)\n==2054558== by 0xA780015: plpgsql_yyparse (pl_gram.c:1945)\n==2054558== by 0xA76A652: do_compile (pl_comp.c:788)\n==2054558== by 0xA769130: plpgsql_compile (pl_comp.c:224)\n==2054558== by 0xA78948F: plpgsql_validator (pl_handler.c:539)\n\nBased on the quick look I had (dinner is calling) I didn't yet\nunderstand how plpgsql_compile_tmp_cxt error handling works.\n\nGreetings,\n\nAndres Freund\n\n\n(*) or not so quick, I had to figure out why valgrind was so 
slow. It\nturned out that I had typed shared_buffers=32MB into\nshared_buffers=32GB...\n\n\n", "msg_date": "Tue, 16 Mar 2021 19:31:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\nFor the second last trace involving SearchCatCacheList (catcache.c:1691),\nthe ctlist's members are stored in cl->members array where cl is returned\nat the end of SearchCatCacheList.\n\nMaybe this was not accounted for by valgrind ?\n\nCheers\n\nOn Tue, Mar 16, 2021 at 7:31 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> David, there's a question about a commit of yours below, hence adding\n> you.\n>\n> On 2021-03-16 19:36:10 -0400, Tom Lane wrote:\n> > Out of curiosity I poked at this for a little while.\n>\n> Cool.\n>\n>\n> > It doesn't appear to me that we leak much at all, at least not if you\n> > are willing to take \"still reachable\" blocks as not-leaked.\n>\n> Well, I think for any sort of automated testing - which I think would be\n> useful - we'd really need *no* leaks. I know that I get a few bleats\n> whenever I forget to set --leak-check=no. It's also not just postgres\n> itself, but some of the helper tools...\n>\n> And it's not just valgrind, also gcc/clang sanitizers...\n>\n>\n> > What I found out is that we have a lot of issues that seem to devolve\n> > to valgrind not being sure that a block is referenced. I identified\n> > two main causes of that:\n> >\n> > (1) We have a pointer, but it doesn't actually point right at the start\n> > of the block. A primary culprit here is lists of thingies that use the\n> > slist and dlist infrastructure. As an experiment, I moved the dlist_node\n> > fields of some popular structs to the beginning, and verified that that\n> > silences associated complaints. 
I'm not sure that we want to insist on\n> > put-the-link-first as policy (although if we did, it could provide some\n> > notational savings perhaps). However, unless someone knows of a way to\n> > teach valgrind about this situation, there may be no other way to silence\n> > those leakage complaints. A secondary culprit is people randomly\n> applying\n> > CACHELINEALIGN or the like to a palloc'd address, so that the address we\n> > have isn't pointing right at the block start.\n>\n> Hm, do you still have a backtrace / suppression for one of those? I\n> didn't see any in a quick (*) serial installcheck I just ran. Or I\n> wasn't able to pinpoint them to this issue.\n>\n>\n> I think the run might have shown a genuine leak:\n>\n> ==2048803== 16 bytes in 1 blocks are definitely lost in loss record 139 of\n> 906\n> ==2048803== at 0x89D2EA: palloc (mcxt.c:975)\n> ==2048803== by 0x2392D3: heap_beginscan (heapam.c:1198)\n> ==2048803== by 0x264E8F: table_beginscan_strat (tableam.h:918)\n> ==2048803== by 0x265994: systable_beginscan (genam.c:453)\n> ==2048803== by 0x83C2D1: SearchCatCacheMiss (catcache.c:1359)\n> ==2048803== by 0x83C197: SearchCatCacheInternal (catcache.c:1299)\n> ==2048803== by 0x83BE9A: SearchCatCache1 (catcache.c:1167)\n> ==2048803== by 0x85876A: SearchSysCache1 (syscache.c:1134)\n> ==2048803== by 0x84CDB3: RelationInitTableAccessMethod (relcache.c:1795)\n> ==2048803== by 0x84F807: RelationBuildLocalRelation (relcache.c:3554)\n> ==2048803== by 0x303C9D: heap_create (heap.c:395)\n> ==2048803== by 0x305790: heap_create_with_catalog (heap.c:1291)\n> ==2048803== by 0x41A327: DefineRelation (tablecmds.c:885)\n> ==2048803== by 0x6C96B6: ProcessUtilitySlow (utility.c:1131)\n> ==2048803== by 0x6C948A: standard_ProcessUtility (utility.c:1034)\n> ==2048803== by 0x6C865F: ProcessUtility (utility.c:525)\n> ==2048803== by 0x6C7409: PortalRunUtility (pquery.c:1159)\n> ==2048803== by 0x6C7636: PortalRunMulti (pquery.c:1305)\n> ==2048803== by 0x6C6B11: PortalRun 
(pquery.c:779)\n> ==2048803== by 0x6C05AB: exec_simple_query (postgres.c:1173)\n> ==2048803==\n> {\n> <insert_a_suppression_name_here>\n> Memcheck:Leak\n> match-leak-kinds: definite\n> fun:palloc\n> fun:heap_beginscan\n> fun:table_beginscan_strat\n> fun:systable_beginscan\n> fun:SearchCatCacheMiss\n> fun:SearchCatCacheInternal\n> fun:SearchCatCache1\n> fun:SearchSysCache1\n> fun:RelationInitTableAccessMethod\n> fun:RelationBuildLocalRelation\n> fun:heap_create\n> fun:heap_create_with_catalog\n> fun:DefineRelation\n> fun:ProcessUtilitySlow\n> fun:standard_ProcessUtility\n> fun:ProcessUtility\n> fun:PortalRunUtility\n> fun:PortalRunMulti\n> fun:PortalRun\n> fun:exec_simple_query\n> }\n>\n> Since 56788d2156fc heap_beginscan() allocates\n> scan->rs_base.rs_private =\n> palloc(sizeof(ParallelBlockTableScanWorkerData));\n> in heap_beginscan(). But doesn't free it in heap_endscan().\n>\n> In most of the places heap scans are begun inside transient contexts,\n> but not always. In the above trace for example\n> RelationBuildLocalRelation switched to CacheMemoryContext, and nothing\n> switched to something else.\n>\n> I'm a bit confused about the precise design of rs_private /\n> ParallelBlockTableScanWorkerData, specifically why it's been added to\n> TableScanDesc, instead of just adding it to HeapScanDesc? And why is it\n> allocated unconditionally, instead of just for parallel scans?\n>\n>\n> I don't think this is a false positive, even though it theoretically\n> could be freed by resetting CacheMemoryContext (see below)?\n>\n>\n> I saw a lot of false positives from autovacuum workers, because\n> AutovacMemCxt is never deleted, and because table_toast_map is created\n> in TopMemoryContext. 
Adding an explicit\n> MemoryContextDelete(AutovacMemCxt) and parenting table_toast_map in that\n> shut that up.\n>\n>\n> Which brings me to the question why allocations in CacheMemoryContext,\n> AutovacMemCxt are considered to be \"lost\", even though they're still\n> \"reachable\" via a context reset. I think valgrind ends up treating\n> memory allocated via memory contexts that still exist at process end as\n> lost, regardless of being reachable via the the memory pool (from\n> valgrinds view). Which I guess actually makes sense, for things like\n> TopMemoryContext and CacheContext - anything not reachable by means\n> other than a context reset is effectively lost for those.\n>\n> For autovac launcher / worker it seems like a sensible thing to just\n> delete AutovacMemCxt.\n>\n>\n>\n> > (2) The only pointer to the start of a block is actually somewhere within\n> > the block. This is common in dynahash tables, where we allocate a slab\n> > of entries in a single palloc and then thread them together. Each entry\n> > should have exactly one referencing pointer, but that pointer is more\n> > likely to be elsewhere within the same palloc block than in the external\n> > hash bucket array. AFAICT, all cases except where the slab's first entry\n> > is pointed to by a hash bucket pointer confuse valgrind to some extent.\n> > I was able to hack around this by preventing dynahash from allocating\n> > more than one hash entry per palloc, but I wonder if there's a better\n> way.\n>\n> Hm. For me the number of leaks seem to stay the same with or without\n> your changes related to this. 
Is this a USE_VALGRIND build?\n>\n> I'm using valgrind-3.16.1\n>\n>\n> > Attached is a very crude hack, not meant for commit, that hacks things\n> > up enough to greatly reduce the number of complaints with\n> > \"--leak-check=full\".\n> >\n> > One thing I've failed to silence so far is a bunch of entries like\n> >\n> > ==00:00:03:56.088 3467702== 1,861 bytes in 67 blocks are definitely lost\n> in loss record 1,290 of 1,418\n> > ==00:00:03:56.088 3467702== at 0x950650: MemoryContextAlloc\n> (mcxt.c:827)\n> > ==00:00:03:56.088 3467702== by 0x951710: MemoryContextStrdup\n> (mcxt.c:1179)\n> > ==00:00:03:56.088 3467702== by 0x91C86E: RelationInitIndexAccessInfo\n> (relcache.c:1444)\n> > ==00:00:03:56.088 3467702== by 0x91DA9C: RelationBuildDesc\n> (relcache.c:1200)\n> >\n> > which is complaining about the memory context identifiers for system\n> > indexes' rd_indexcxt contexts. Those are surely not being leaked in\n> > any real sense. I suspect that this has something to do with valgrind\n> > not counting the context->ident fields as live pointers, but I don't\n> > have enough valgrind-fu to fix that.\n>\n> Yea. I suspect it's related to the fact that we mark the memory as a\n> valgrind mempool, I'll try to investigate.\n>\n>\n> I do see a bunch of leaks bleats below fun:plpgsql_compile that I don't\n> yet understand. 
E.g.\n>\n> ==2054558== 32 bytes in 1 blocks are definitely lost in loss record 284 of\n> 913\n> ==2054558== at 0x89D389: palloc (mcxt.c:975)\n> ==2054558== by 0x518732: new_list (list.c:134)\n> ==2054558== by 0x518C0C: lappend (list.c:341)\n> ==2054558== by 0x83CAE7: SearchCatCacheList (catcache.c:1691)\n> ==2054558== by 0x859A9C: SearchSysCacheList (syscache.c:1447)\n> ==2054558== by 0x313192: FuncnameGetCandidates (namespace.c:975)\n> ==2054558== by 0x313D91: FunctionIsVisible (namespace.c:1450)\n> ==2054558== by 0x7C2891: format_procedure_extended (regproc.c:375)\n> ==2054558== by 0x7C27C3: format_procedure (regproc.c:324)\n> ==2054558== by 0xA7693E1: do_compile (pl_comp.c:348)\n> ==2054558== by 0xA769130: plpgsql_compile (pl_comp.c:224)\n>\n> and\n>\n> ==2054558== 30 bytes in 4 blocks are definitely lost in loss record 225 of\n> 913\n> ==2054558== at 0x89D389: palloc (mcxt.c:975)\n> ==2054558== by 0x3ADDAE: downcase_identifier (scansup.c:52)\n> ==2054558== by 0x3ADD85: downcase_truncate_identifier (scansup.c:39)\n> ==2054558== by 0x3AB5E4: core_yylex (scan.l:1032)\n> ==2054558== by 0xA789B2D: internal_yylex (pl_scanner.c:321)\n> ==2054558== by 0xA7896E3: plpgsql_yylex (pl_scanner.c:152)\n> ==2054558== by 0xA780015: plpgsql_yyparse (pl_gram.c:1945)\n> ==2054558== by 0xA76A652: do_compile (pl_comp.c:788)\n> ==2054558== by 0xA769130: plpgsql_compile (pl_comp.c:224)\n> ==2054558== by 0xA78948F: plpgsql_validator (pl_handler.c:539)\n>\n> Based on the quick look I had (dinner is calling) I didn't yet\n> understand how plpgsql_compile_tmp_cxt error handling works.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n> (*) or not so quick, I had to figure out why valgrind was so slow. 
It\n> turned out that I had typed shared_buffers=32MB into\n> shared_buffers=32GB...\n>\n>\n>\n\nHi,For the second last trace involving SearchCatCacheList (catcache.c:1691), the ctlist's members are stored in cl->members array where cl is returned at the end of SearchCatCacheList.Maybe this was not accounted for by valgrind ?CheersOn Tue, Mar 16, 2021 at 7:31 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nDavid, there's a question about a commit of yours below, hence adding\nyou.\n\nOn 2021-03-16 19:36:10 -0400, Tom Lane wrote:\n> Out of curiosity I poked at this for a little while.\n\nCool.\n\n\n> It doesn't appear to me that we leak much at all, at least not if you\n> are willing to take \"still reachable\" blocks as not-leaked.\n\nWell, I think for any sort of automated testing - which I think would be\nuseful - we'd really need *no* leaks. I know that I get a few bleats\nwhenever I forget to set --leak-check=no. It's also not just postgres\nitself, but some of the helper tools...\n\nAnd it's not just valgrind, also gcc/clang sanitizers...\n\n\n> What I found out is that we have a lot of issues that seem to devolve\n> to valgrind not being sure that a block is referenced.  I identified\n> two main causes of that:\n>\n> (1) We have a pointer, but it doesn't actually point right at the start\n> of the block.  A primary culprit here is lists of thingies that use the\n> slist and dlist infrastructure.  As an experiment, I moved the dlist_node\n> fields of some popular structs to the beginning, and verified that that\n> silences associated complaints.  I'm not sure that we want to insist on\n> put-the-link-first as policy (although if we did, it could provide some\n> notational savings perhaps).  However, unless someone knows of a way to\n> teach valgrind about this situation, there may be no other way to silence\n> those leakage complaints.  
A secondary culprit is people randomly applying\n> CACHELINEALIGN or the like to a palloc'd address, so that the address we\n> have isn't pointing right at the block start.\n\nHm, do you still have a backtrace / suppression for one of those? I\ndidn't see any in a quick (*) serial installcheck I just ran. Or I\nwasn't able to pinpoint them to this issue.\n\n\nI think the run might have shown a genuine leak:\n\n==2048803== 16 bytes in 1 blocks are definitely lost in loss record 139 of 906\n==2048803==    at 0x89D2EA: palloc (mcxt.c:975)\n==2048803==    by 0x2392D3: heap_beginscan (heapam.c:1198)\n==2048803==    by 0x264E8F: table_beginscan_strat (tableam.h:918)\n==2048803==    by 0x265994: systable_beginscan (genam.c:453)\n==2048803==    by 0x83C2D1: SearchCatCacheMiss (catcache.c:1359)\n==2048803==    by 0x83C197: SearchCatCacheInternal (catcache.c:1299)\n==2048803==    by 0x83BE9A: SearchCatCache1 (catcache.c:1167)\n==2048803==    by 0x85876A: SearchSysCache1 (syscache.c:1134)\n==2048803==    by 0x84CDB3: RelationInitTableAccessMethod (relcache.c:1795)\n==2048803==    by 0x84F807: RelationBuildLocalRelation (relcache.c:3554)\n==2048803==    by 0x303C9D: heap_create (heap.c:395)\n==2048803==    by 0x305790: heap_create_with_catalog (heap.c:1291)\n==2048803==    by 0x41A327: DefineRelation (tablecmds.c:885)\n==2048803==    by 0x6C96B6: ProcessUtilitySlow (utility.c:1131)\n==2048803==    by 0x6C948A: standard_ProcessUtility (utility.c:1034)\n==2048803==    by 0x6C865F: ProcessUtility (utility.c:525)\n==2048803==    by 0x6C7409: PortalRunUtility (pquery.c:1159)\n==2048803==    by 0x6C7636: PortalRunMulti (pquery.c:1305)\n==2048803==    by 0x6C6B11: PortalRun (pquery.c:779)\n==2048803==    by 0x6C05AB: exec_simple_query (postgres.c:1173)\n==2048803==\n{\n   <insert_a_suppression_name_here>\n   Memcheck:Leak\n   match-leak-kinds: definite\n   fun:palloc\n   fun:heap_beginscan\n   fun:table_beginscan_strat\n   fun:systable_beginscan\n   fun:SearchCatCacheMiss\n   
fun:SearchCatCacheInternal\n   fun:SearchCatCache1\n   fun:SearchSysCache1\n   fun:RelationInitTableAccessMethod\n   fun:RelationBuildLocalRelation\n   fun:heap_create\n   fun:heap_create_with_catalog\n   fun:DefineRelation\n   fun:ProcessUtilitySlow\n   fun:standard_ProcessUtility\n   fun:ProcessUtility\n   fun:PortalRunUtility\n   fun:PortalRunMulti\n   fun:PortalRun\n   fun:exec_simple_query\n}\n\nSince 56788d2156fc heap_beginscan() allocates\n        scan->rs_base.rs_private =\n                palloc(sizeof(ParallelBlockTableScanWorkerData));\nin heap_beginscan(). But doesn't free it in heap_endscan().\n\nIn most of the places heap scans are begun inside transient contexts,\nbut not always. In the above trace for example\nRelationBuildLocalRelation switched to CacheMemoryContext, and nothing\nswitched to something else.\n\nI'm a bit confused about the precise design of rs_private /\nParallelBlockTableScanWorkerData, specifically why it's been added to\nTableScanDesc, instead of just adding it to HeapScanDesc? And why is it\nallocated unconditionally, instead of just for parallel scans?\n\n\nI don't think this is a false positive, even though it theoretically\ncould be freed by resetting CacheMemoryContext (see below)?\n\n\nI saw a lot of false positives from autovacuum workers, because\nAutovacMemCxt is never deleted, and because table_toast_map is created\nin TopMemoryContext.  Adding an explicit\nMemoryContextDelete(AutovacMemCxt) and parenting table_toast_map in that\nshut that up.\n\n\nWhich brings me to the question why allocations in CacheMemoryContext,\nAutovacMemCxt are considered to be \"lost\", even though they're still\n\"reachable\" via a context reset.  I think valgrind ends up treating\nmemory allocated via memory contexts that still exist at process end as\nlost, regardless of being reachable via the memory pool (from\nvalgrind's view). 
Which I guess actually makes sense, for things like\nTopMemoryContext and CacheContext - anything not reachable by means\nother than a context reset is effectively lost for those.\n\nFor autovac launcher / worker it seems like a sensible thing to just\ndelete AutovacMemCxt.\n\n\n\n> (2) The only pointer to the start of a block is actually somewhere within\n> the block.  This is common in dynahash tables, where we allocate a slab\n> of entries in a single palloc and then thread them together.  Each entry\n> should have exactly one referencing pointer, but that pointer is more\n> likely to be elsewhere within the same palloc block than in the external\n> hash bucket array.  AFAICT, all cases except where the slab's first entry\n> is pointed to by a hash bucket pointer confuse valgrind to some extent.\n> I was able to hack around this by preventing dynahash from allocating\n> more than one hash entry per palloc, but I wonder if there's a better way.\n\nHm. For me the number of leaks seem to stay the same with or without\nyour changes related to this. Is this a USE_VALGRIND build?\n\nI'm using valgrind-3.16.1\n\n\n> Attached is a very crude hack, not meant for commit, that hacks things\n> up enough to greatly reduce the number of complaints with\n> \"--leak-check=full\".\n>\n> One thing I've failed to silence so far is a bunch of entries like\n>\n> ==00:00:03:56.088 3467702== 1,861 bytes in 67 blocks are definitely lost in loss record 1,290 of 1,418\n> ==00:00:03:56.088 3467702==    at 0x950650: MemoryContextAlloc (mcxt.c:827)\n> ==00:00:03:56.088 3467702==    by 0x951710: MemoryContextStrdup (mcxt.c:1179)\n> ==00:00:03:56.088 3467702==    by 0x91C86E: RelationInitIndexAccessInfo (relcache.c:1444)\n> ==00:00:03:56.088 3467702==    by 0x91DA9C: RelationBuildDesc (relcache.c:1200)\n>\n> which is complaining about the memory context identifiers for system\n> indexes' rd_indexcxt contexts.  Those are surely not being leaked in\n> any real sense.  
I suspect that this has something to do with valgrind\n> not counting the context->ident fields as live pointers, but I don't\n> have enough valgrind-fu to fix that.\n\nYea. I suspect it's related to the fact that we mark the memory as a\nvalgrind mempool, I'll try to investigate.\n\n\nI do see a bunch of leaks bleats below fun:plpgsql_compile that I don't\nyet understand. E.g.\n\n==2054558== 32 bytes in 1 blocks are definitely lost in loss record 284 of 913\n==2054558==    at 0x89D389: palloc (mcxt.c:975)\n==2054558==    by 0x518732: new_list (list.c:134)\n==2054558==    by 0x518C0C: lappend (list.c:341)\n==2054558==    by 0x83CAE7: SearchCatCacheList (catcache.c:1691)\n==2054558==    by 0x859A9C: SearchSysCacheList (syscache.c:1447)\n==2054558==    by 0x313192: FuncnameGetCandidates (namespace.c:975)\n==2054558==    by 0x313D91: FunctionIsVisible (namespace.c:1450)\n==2054558==    by 0x7C2891: format_procedure_extended (regproc.c:375)\n==2054558==    by 0x7C27C3: format_procedure (regproc.c:324)\n==2054558==    by 0xA7693E1: do_compile (pl_comp.c:348)\n==2054558==    by 0xA769130: plpgsql_compile (pl_comp.c:224)\n\nand\n\n==2054558== 30 bytes in 4 blocks are definitely lost in loss record 225 of 913\n==2054558==    at 0x89D389: palloc (mcxt.c:975)\n==2054558==    by 0x3ADDAE: downcase_identifier (scansup.c:52)\n==2054558==    by 0x3ADD85: downcase_truncate_identifier (scansup.c:39)\n==2054558==    by 0x3AB5E4: core_yylex (scan.l:1032)\n==2054558==    by 0xA789B2D: internal_yylex (pl_scanner.c:321)\n==2054558==    by 0xA7896E3: plpgsql_yylex (pl_scanner.c:152)\n==2054558==    by 0xA780015: plpgsql_yyparse (pl_gram.c:1945)\n==2054558==    by 0xA76A652: do_compile (pl_comp.c:788)\n==2054558==    by 0xA769130: plpgsql_compile (pl_comp.c:224)\n==2054558==    by 0xA78948F: plpgsql_validator (pl_handler.c:539)\n\nBased on the quick look I had (dinner is calling) I didn't yet\nunderstand how plpgsql_compile_tmp_cxt error handling works.\n\nGreetings,\n\nAndres 
Freund\n\n\n(*) or not so quick, I had to figure out why valgrind was so slow. It\nturned out that I had typed shared_buffers=32MB into\nshared_buffers=32GB...", "msg_date": "Tue, 16 Mar 2021 20:00:36 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-03-16 19:36:10 -0400, Tom Lane wrote:\n>> It doesn't appear to me that we leak much at all, at least not if you\n>> are willing to take \"still reachable\" blocks as not-leaked.\n\n> Well, I think for any sort of automated testing - which I think would be\n> useful - we'd really need *no* leaks.\n\nThat seems both unnecessary and impractical. We have to consider that\neverything-still-reachable is an OK final state.\n\n> I think the run might have shown a genuine leak:\n\n> ==2048803== 16 bytes in 1 blocks are definitely lost in loss record 139 of 906\n> ==2048803== at 0x89D2EA: palloc (mcxt.c:975)\n> ==2048803== by 0x2392D3: heap_beginscan (heapam.c:1198)\n> ==2048803== by 0x264E8F: table_beginscan_strat (tableam.h:918)\n> ==2048803== by 0x265994: systable_beginscan (genam.c:453)\n> ==2048803== by 0x83C2D1: SearchCatCacheMiss (catcache.c:1359)\n> ==2048803== by 0x83C197: SearchCatCacheInternal (catcache.c:1299)\n\nI didn't see anything like that after applying the fixes I showed before.\nThere are a LOT of false positives from the fact that with our HEAD\ncode, valgrind believes that everything in the catalog caches and\nmost things in dynahash tables (including the relcache) are unreachable.\n\nI'm not trying to claim there are no leaks anywhere, just that the amount\nof noise from those issues swamps all the real problems. I particularly\nrecommend not believing anything related to catcache or relcache if you\nhaven't applied that admittedly-hacky patch.\n\n> Hm. 
For me the number of leaks seem to stay the same with or without\n> your changes related to this. Is this a USE_VALGRIND build?\n\nYeah ...\n\n> I do see a bunch of leaks bleats below fun:plpgsql_compile that I don't\n> yet understand. E.g.\n\nThose are probably a variant of what you were suggesting above, ie\nplpgsql isn't terribly careful not to leak random stuff while building\na long-lived function parse tree. It's supposed to use a temp context\nfor anything that might leak, but I suspect it's not thorough about it.\nWe could chase that sort of thing after we clean up the other problems.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 16 Mar 2021 23:01:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Hm. For me the number of leaks seem to stay the same with or without\n> your changes related to this. Is this a USE_VALGRIND build?\n\nNot sure how you arrived at that answer. I attach two logs of individual\nbackends running with\n\n--leak-check=full --track-origins=yes --read-var-info=yes --error-exitcode=0\n\nThe test scenario in both cases was just start up, run \"select 2+2;\",\nquit. 
The first one is before I'd made any of the changes shown\nbefore, the second is after.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 16 Mar 2021 23:23:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 16, 2021, at 20:01, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-03-16 19:36:10 -0400, Tom Lane wrote:\n> >> It doesn't appear to me that we leak much at all, at least not if you\n> >> are willing to take \"still reachable\" blocks as not-leaked.\n> \n> > Well, I think for any sort of automated testing - which I think would be\n> > useful - we'd really need *no* leaks.\n> \n> That seems both unnecessary and impractical. We have to consider that\n> everything-still-reachable is an OK final state.\n\nI don't consider \"still reachable\" a leak. Just definitely unreachable. And with a few tweaks that seems like we could achieve that?\n\n\n> > I think the run might have shown a genuine leak:\n> \n> > ==2048803== 16 bytes in 1 blocks are definitely lost in loss record 139 of 906\n> > ==2048803== at 0x89D2EA: palloc (mcxt.c:975)\n> > ==2048803== by 0x2392D3: heap_beginscan (heapam.c:1198)\n> > ==2048803== by 0x264E8F: table_beginscan_strat (tableam.h:918)\n> > ==2048803== by 0x265994: systable_beginscan (genam.c:453)\n> > ==2048803== by 0x83C2D1: SearchCatCacheMiss (catcache.c:1359)\n> > ==2048803== by 0x83C197: SearchCatCacheInternal (catcache.c:1299)\n> \n> I didn't see anything like that after applying the fixes I showed before.\n> There are a LOT of false positives from the fact that with our HEAD\n> code, valgrind believes that everything in the catalog caches and\n> most things in dynahash tables (including the relcache) are unreachable.\n\nI think it's actually unreachable memory (unless you count resetting the cache context), based on manually tracing the code... 
I'll try to repro.\n\n\n> > I do see a bunch of leaks bleats below fun:plpgsql_compile that I don't\n> > yet understand. E.g.\n> \n> Those are probably a variant of what you were suggesting above, ie\n> plpgsql isn't terribly careful not to leak random stuff while building\n> a long-lived function parse tree. It's supposed to use a temp context\n> for anything that might leak, but I suspect it's not thorough about it.\n\nWhat I meant was that I didn't understand how there's not a leak danger when compilation fails halfway through, given that the context in question is below TopMemoryContext and that I didn't see a relevant TRY block. But that probably there is something cleaning it up that I didn't see.\n\nAndres\n\n\n", "msg_date": "Tue, 16 Mar 2021 20:50:17 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "\"Andres Freund\" <andres@anarazel.de> writes:\n> On Tue, Mar 16, 2021, at 20:01, Tom Lane wrote:\n>> That seems both unnecessary and impractical. We have to consider that\n>> everything-still-reachable is an OK final state.\n\n> I don't consider \"still reachable\" a leak. Just definitely unreachable.\n\nOK, we're in violent agreement then -- I must've misread what you wrote.\n\n>>> I think the run might have shown a genuine leak:\n\n>> I didn't see anything like that after applying the fixes I showed before.\n\n> I think it's actually unreachable memory (unless you count resetting the cache context), based on manually tracing the code... 
I'll try to repro.\n\nI agree that having to reset CacheContext is not something we\nshould count as \"still reachable\", and I'm pretty sure that the\nexisting valgrind infrastructure doesn't count it that way.\n\nAs for the particular point about ParallelBlockTableScanWorkerData,\nI agree with your question to David about why that's in TableScanDesc\nnot HeapScanDesc, but I can't get excited about it not being freed in\nheap_endscan. That's mainly because I do not believe that anything as\ncomplex as a heap or indexscan should be counted on to be zero-leakage.\nThe right answer is to not do such operations in long-lived contexts.\nSo if we're running such a thing while switched into CacheContext,\n*that* is the bug, not that heap_endscan didn't free this particular\nallocation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 00:01:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn 2021-03-16 20:50:17 -0700, Andres Freund wrote:\n> What I meant was that I didn't understand how there's not a leak\n> danger when compilation fails halfway through, given that the context\n> in question is below TopMemoryContext and that I didn't see a relevant\n> TRY block. 
But that probably there is something cleaning it up that I\n> didn't see.\n\nLooks like it's an actual leak:\n\npostgres[2058957][1]=# DO $do$BEGIN CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;END;$$;^C\npostgres[2058957][1]=# SELECT count(*), SUM(total_bytes) FROM pg_backend_memory_contexts WHERE name = 'PL/pgSQL function';\n┌───────┬────────┐\n│ count │ sum │\n├───────┼────────┤\n│ 0 │ (null) │\n└───────┴────────┘\n(1 row)\n\nTime: 1.666 ms\npostgres[2058957][1]=# CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;END;$$;\nERROR: 42601: syntax error at or near \"frakbar\"\nLINE 1: ...ON foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;EN...\n ^\nLOCATION: scanner_yyerror, scan.l:1176\nTime: 5.463 ms\npostgres[2058957][1]=# CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;END;$$;\nERROR: 42601: syntax error at or near \"frakbar\"\nLINE 1: ...ON foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;EN...\n ^\nLOCATION: scanner_yyerror, scan.l:1176\nTime: 1.223 ms\npostgres[2058957][1]=# CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;END;$$;\nERROR: 42601: syntax error at or near \"frakbar\"\nLINE 1: ...ON foo() RETURNS VOID LANGUAGE plpgsql AS $$BEGIN frakbar;EN...\n ^\nLOCATION: scanner_yyerror, scan.l:1176\nTime: 1.194 ms\npostgres[2058957][1]=# SELECT count(*), SUM(total_bytes) FROM pg_backend_memory_contexts WHERE name = 'PL/pgSQL function';\n┌───────┬───────┐\n│ count │ sum │\n├───────┼───────┤\n│ 3 │ 24576 │\n└───────┴───────┘\n(1 row)\n\nSomething like\n\nDO $do$ BEGIN FOR i IN 1 .. 
10000 LOOP BEGIN EXECUTE $cf$CREATE OR REPLACE FUNCTION foo() RETURNS VOID LANGUAGE plpgsql AS $f$BEGIN frakbar; END;$f$;$cf$; EXCEPTION WHEN others THEN END; END LOOP; END;$do$;\n\nwill show the leak visible in top too (albeit slowly - a more\ncomplicated statement will leak more quickly I think).\n\n\npostgres[2059268][1]=# SELECT count(*), SUM(total_bytes) FROM pg_backend_memory_contexts WHERE name = 'PL/pgSQL function';\n┌───────┬──────────┐\n│ count │ sum │\n├───────┼──────────┤\n│ 10000 │ 81920000 │\n└───────┴──────────┘\n(1 row)\n\nThe leak appears to be not new, I see it in 9.6 as well. This seems like\na surprisingly easy to trigger leak...\n\n\nLooks like there's something else awry. The above DO statement takes\n2.2s on an 13 assert build, but 32s on a master assert build. Spending a\nlot of time doing dependency lookups:\n\n- 94.62% 0.01% postgres postgres [.] CreateFunction\n - 94.61% CreateFunction\n - 94.56% ProcedureCreate\n - 89.68% deleteDependencyRecordsFor\n - 89.38% systable_getnext\n - 89.33% index_getnext_slot\n - 56.00% index_fetch_heap\n + 54.64% table_index_fetch_tuple\n 0.09% heapam_index_fetch_tuple\n + 28.53% index_getnext_tid\n 2.77% ItemPointerEquals\n 0.10% table_index_fetch_tuple\n 0.09% btgettuple\n 0.03% index_getnext_tid\n\n1000 iterations: 521ms\n1000 iterations: 533ms\n2000 iterations: 1670ms\n3000 iterations: 3457ms\n3000 iterations: 3457ms\n10000 iterations: 31794ms\n\nThe quadratic seeming nature made me wonder if someone broke killtuples\nin this situation. And it seem that someone was me, in 623a9ba . 
We need\nto bump xactCompletionCount in the subxid abort case as well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 16 Mar 2021 22:57:18 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-03-16 20:50:17 -0700, Andres Freund wrote:\n>> What I meant was that I didn't understand how there's not a leak\n>> danger when compilation fails halfway through, given that the context\n>> in question is below TopMemoryContext and that I didn't see a relevant\n>> TRY block. But that probably there is something cleaning it up that I\n>> didn't see.\n\n> Looks like it's an actual leak:\n\nYeah, I believe that. On the other hand, I'm not sure that such cases\nrepresent any real problem for production usage. I'm inclined to focus\non non-error scenarios first.\n\n(Having said that, we probably have the ability to fix such things\nrelatively painlessly now, by reparenting an initially-temporary\ncontext once we're done parsing.)\n\nMeanwhile, I'm still trying to understand why valgrind is whining\nabout the rd_indexcxt identifier strings. AFAICS it shouldn't.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 10:16:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 17, 2021, at 07:16, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-03-16 20:50:17 -0700, Andres Freund wrote:\n> Meanwhile, I'm still trying to understand why valgrind is whining\n> about the rd_indexcxt identifier strings. AFAICS it shouldn't.\n\nI found a way around that late last night. Need to mark the context itself as an allocation. 
But I made a mess on the way to that and need to clean the patch up before sending it (and need to drop my girlfriend off first).\n\nAndres\n\n\n", "msg_date": "Wed, 17 Mar 2021 08:15:43 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\n(really need to fix my mobile phone mail program to keep the CC list...)\n\nOn 2021-03-17 08:15:43 -0700, Andres Freund wrote:\n> On Wed, Mar 17, 2021, at 07:16, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > On 2021-03-16 20:50:17 -0700, Andres Freund wrote:\n> > Meanwhile, I'm still trying to understand why valgrind is whining\n> > about the rd_indexcxt identifier strings. AFAICS it shouldn't.\n> \n> I found a way around that late last night. Need to mark the context\n> itself as an allocation. But I made a mess on the way to that and need\n> to clean the patch up before sending it (and need to drop my\n> girlfriend off first).\n\nUnfortunately I didn't immediately find a way to do this while keeping\nthe MEMPOOL_CREATE/DESTROY in mcxt.c. The attached patch moves the pool\ncreation into the memory context implementations, \"allocates\" the\ncontext itself as part of that pool, and changes the reset\nimplementation from MEMPOOL_DESTROY + MEMPOOL_CREATE to instead do\nMEMPOOL_TRIM. That leaves the memory context itself valid (and thus\ntracking ->ident etc), but marks all the other memory as freed.\n\nThis is just a first version, it probably needs more work, and\ndefinitely a few comments...\n\nAfter this, your changes, and the previously mentioned fixes, I get far\nfewer false positives. Also found a crash / memory leak in pgstat.c due\nto the new replication slot stats, but I'll start a separate thread.\n\n\nThere are a few leak warnings around guc.c that look like they might be\nreal, not false positives, and thus a bit concerning. 
Looks like several\nguc check hooks don't bother to free the old *extra before allocating a\nnew one.\n\n\nI suspect we might get better results from valgrind, not just for leaks\nbut also undefined value tracking, if we changed the way we represent\npools to utilize VALGRIND_MEMPOOL_METAPOOL |\nVALGRIND_MEMPOOL_AUTO_FREE. E.g. aset.c would associate AllocBlock using\nVALGRIND_MEMPOOL_ALLOC and then mcxt.c would use\nVALGRIND_MALLOCLIKE_BLOCK for the individual chunk allocation.\n\nhttps://www.valgrind.org/docs/manual/mc-manual.html#mc-manual.mempools\n\n\nI played with naming the allocations underlying aset.c using\nVALGRIND_CREATE_BLOCK(block, strlen(context->name), context->name).\nThat does produce better undefined-value warnings, but it seems that\ne.g. the leak detector doesn't have that information around. Nor does it\nseem to be usable for use-after-free. At least the latter likely because\nI had to VALGRIND_DISCARD by that point...\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 17 Mar 2021 11:15:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n>> I found a way around that late last night. Need to mark the context\n>> itself as an allocation. But I made a mess on the way to that and need\n>> to clean the patch up before sending it (and need to drop my\n>> girlfriend off first).\n\n> Unfortunately I didn't immediately find a way to do this while keeping\n> the MEMPOOL_CREATE/DESTROY in mcxt.c. The attached patch moves the pool\n> creation into the memory context implementations, \"allocates\" the\n> context itself as part of that pool, and changes the reset\n> implementation from MEMPOOL_DESTROY + MEMPOOL_CREATE to instead do\n> MEMPOOL_TRIM. That leaves the memory context itself valid (and thus\n> tracking ->ident etc), but marks all the other memory as freed.\n\nHuh, interesting. 
I wonder why that makes the ident problem go away?\nI'd supposed that valgrind would see the context headers as ordinary\nmemory belonging to the global \"malloc\" pool, so that any pointers\ninside them ought to be considered valid.\n\nAnyway, I don't have a problem with rearranging the responsibility\nlike this. It gives the individual allocators more freedom to do\nodd stuff, at the cost of very minor duplication of valgrind calls.\n\nI agree we need more comments -- would you like me to have a go at\nwriting them?\n\nOne thing I was stewing over last night is that a MemoryContextReset\nwill mess up any context identifier assigned with\nMemoryContextCopyAndSetIdentifier. I'd left that as a problem to\nfix later, because we don't currently have a need to reset contexts\nthat use copied identifiers. But that assumption obviously will bite\nus someday, so maybe now is a good time to think about it.\n\nThe very simplest fix would be to allocate non-constant idents with\nmalloc; which'd require adding a flag to track whether context->ident\nneeds to be free()d. We have room for another bool near the top of\nstruct MemoryContextData (and at some point we could turn those\nbool fields into a flags word). The only real cost here is one\nmore free() while destroying a labeled context, which is probably\nnegligible.\n\nOther ideas are possible but they seem to require getting the individual\nmcxt methods involved, and I doubt it's worth the complexity.\n\n> There are a few leak warnings around guc.c that look like they might be\n> real, not false positives, and thus a bit concerning. Looks like several\n> guc check hooks don't bother to free the old *extra before allocating a\n> new one.\n\nI'll take a look, but I'm pretty certain that guc.c, not the hooks, is\nresponsible for freeing those. 
Might be another case of valgrind not\nunderstanding what's happening.\n\n> I suspect we might get better results from valgrind, not just for leaks\n> but also undefined value tracking, if we changed the way we represent\n> pools to utilize VALGRIND_MEMPOOL_METAPOOL |\n> VALGRIND_MEMPOOL_AUTO_FREE.\n\nDon't really see why that'd help?  I mean, it could conceivably help\ncatch bugs in the allocators themselves, but I don't follow the argument\nthat it'd improve anything else.  Defined is defined, as far as I can\ntell from the valgrind manual.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 18:07:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn 2021-03-17 18:07:36 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> >> I found a way around that late last night. Need to mark the context\n> >> itself as an allocation. But I made a mess on the way to that and need\n> >> to clean the patch up before sending it (and need to drop my\n> >> girlfriend off first).\n> \n> > Unfortunately I didn't immediately find a way to do this while keeping\n> > the MEMPOOL_CREATE/DESTROY in mcxt.c. The attached patch moves the pool\n> > creation into the memory context implementations, \"allocates\" the\n> > context itself as part of that pool, and changes the reset\n> > implementation from MEMPOOL_DESTROY + MEMPOOL_CREATE to instead do\n> > MEMPOOL_TRIM. That leaves the memory context itself valid (and thus\n> > tracking ->ident etc), but marks all the other memory as freed.\n> \n> Huh, interesting. I wonder why that makes the ident problem go away?\n> I'd supposed that valgrind would see the context headers as ordinary\n> memory belonging to the global \"malloc\" pool, so that any pointers\n> inside them ought to be considered valid.\n\n\nI didn't quite understand it either at the time of writing the change. 
It\njust seemed the only explanation for some of the behaviour I was seeing, so\nI tried it. Only to be initially rebuffed due to errors when\naccessing the recycled sets...\n\nI spent a bit of time looking at valgrind, and it looks to be explicit\nbehaviour:\n\nmemcheck/mc_leakcheck.c\n\nstatic MC_Chunk**\nfind_active_chunks(Int* pn_chunks)\n{\n   // Our goal is to construct a set of chunks that includes every\n   // mempool chunk, and every malloc region that *doesn't* contain a\n   // mempool chunk.\n...\n   // Then we loop over the mempool tables. For each chunk in each\n   // pool, we set the entry in the Bool array corresponding to the\n   // malloc chunk containing the mempool chunk.\n   VG_(HT_ResetIter)(MC_(mempool_list));\n   while ( (mp = VG_(HT_Next)(MC_(mempool_list))) ) {\n      VG_(HT_ResetIter)(mp->chunks);\n      while ( (mc = VG_(HT_Next)(mp->chunks)) ) {\n\n         // We'll need to record this chunk.\n         n_chunks++;\n\n         // Possibly invalidate the malloc holding the beginning of this chunk.\n         m = find_chunk_for(mc->data, mallocs, n_mallocs);\n         if (m != -1 && malloc_chunk_holds_a_pool_chunk[m] == False) {\n            tl_assert(n_chunks > 0);\n            n_chunks--;\n            malloc_chunk_holds_a_pool_chunk[m] = True;\n         }\n\nI think that means it explicitly ignores the entire malloced allocation\nwhenever there's *any* mempool chunk in it, instead considering only the\nmempool chunks. So once aset allocates something in the init block, the\ncontext itself is ignored for leak checking purposes. But that wouldn't\nbe the case if we didn't have the init block...\n\nI guess that makes sense, but would definitely be nice to have had\ndocumented...\n\n\n\n> Anyway, I don't have a problem with rearranging the responsibility\n> like this. 
It gives the individual allocators more freedom to do\n> odd stuff, at the cost of very minor duplication of valgrind calls.\n\nYea, similar.\n\n\n> I agree we need more comments -- would you like me to have a go at\n> writing them?\n\nGladly!\n\n\n> One thing I was stewing over last night is that a MemoryContextReset\n> will mess up any context identifier assigned with\n> MemoryContextCopyAndSetIdentifier.\n\nValgrind should catch such problems. Not having the danger would be\nbetter, of course.\n\nWe could also add an assertion at reset time that the identifier isn't\nin the current context.\n\n\n> The very simplest fix would be to allocate non-constant idents with\n> malloc; which'd require adding a flag to track whether context->ident\n> needs to be free()d. We have room for another bool near the top of\n> struct MemoryContextData (and at some point we could turn those\n> bool fields into a flags word). The only real cost here is one\n> more free() while destroying a labeled context, which is probably\n> negligible.\n\nHm. A separate malloc()/free() could conceivably actually show up in\nprofiles at some point.\n\nWhat if we instead used that flag to indicate that MemoryContextReset()\nneeds to save the identifier? Then any cost would only be paid if the\ncontext is actually reset.\n\n\n> > There are a few leak warnings around guc.c that look like they might be\n> > real, not false positives, and thus a bit concerning. Looks like several\n> > guc check hooks don't bother to free the old *extra before allocating a\n> > new one.\n> \n> I'll take a look, but I'm pretty certain that guc.c, not the hooks, is\n> responsible for freeing those.\n\nYea, I had misattributed the leak to the callbacks. One of the things I\nsaw definitely is a leak: if call_string_check_hook() ereport(ERRORs)\nthe guc_strdup() of newval->stringval is lost.\n\nThere's another set of them that seems to be related to parallelism. 
But\nI've not hunted it down yet.\n\nI think it might be worth adding a VALGRIND_DO_CHANGED_LEAK_CHECK() at\nthe end of a transaction or such? That way it'd not be quite as hard to\npinpoint the source of a leak to individual statements as it is right\nnow.\n\n\n> Might be another case of valgrind not understanding what's happening.\n\nThose allocations seem very straightforward, plain malloc/free that is\nreferenced by a pointer to the start of the allocation. So that doesn't\nseem very likely?\n\n\n> > I suspect we might get better results from valgrind, not just for leaks\n> > but also undefined value tracking, if we changed the way we represent\n> > pools to utilize VALGRIND_MEMPOOL_METAPOOL |\n> > VALGRIND_MEMPOOL_AUTO_FREE.\n> \n> Don't really see why that'd help? I mean, it could conceivably help\n> catch bugs in the allocators themselves, but I don't follow the argument\n> that it'd improve anything else. Defined is defined, as far as I can\n> tell from the valgrind manual.\n\nI think it might improve the attribution of some use-after-free errors\nto the memory context. Looks like it also might reduce the cost of\nrunning valgrind a bit.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 17:26:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-03-17 18:07:36 -0400, Tom Lane wrote:\n>> Huh, interesting. I wonder why that makes the ident problem go away?\n\n> I spent a bit of time looking at valgrind, and it looks to be explicit\n> behaviour:\n>\n> // Our goal is to construct a set of chunks that includes every\n> // mempool chunk, and every malloc region that *doesn't* contain a\n> // mempool chunk.\n\nUgh.\n\n> I guess that makes sense, but would definitely be nice to have had\n> documented...\n\nIndeed. 
So this started happening to us when we merged the aset\nheader with its first allocation block.\n\n>>> There are a few leak warnings around guc.c that look like they might be\n>>> real, not false positives, and thus a bit concerning. Looks like several\n>>> guc check hooks don't bother to free the old *extra before allocating a\n>>> new one.\n\n>> I'll take a look, but I'm pretty certain that guc.c, not the hooks, is\n>> responsible for freeing those.\n\nI believe I've just tracked down the cause of that. Those errors\nare (as far as I've seen) only happening in parallel workers, and\nthe reason is this gem in RestoreGUCState:\n\n\t/* See comment at can_skip_gucvar(). */\n\tfor (i = 0; i < num_guc_variables; i++)\n\t\tif (!can_skip_gucvar(guc_variables[i]))\n\t\t\tInitializeOneGUCOption(guc_variables[i]);\n\nwhere InitializeOneGUCOption zeroes out that GUC variable, causing\nit to lose track of any allocations it was responsible for.\n\nAt minimum, somebody didn't understand the difference between \"initialize\"\nand \"reset\". But I suspect we could just nuke this loop altogether.\nI've not yet tried to grok the comment that purports to justify it,\nbut I fail to see why it'd ever be useful to drop values inherited\nfrom the postmaster.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 21:30:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn 2021-03-17 21:30:48 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-03-17 18:07:36 -0400, Tom Lane wrote:\n> >> Huh, interesting. 
I wonder why that makes the ident problem go away?\n> \n> > I spent a bit of time looking at valgrind, and it looks to be explicit\n> > behaviour:\n> >\n> > // Our goal is to construct a set of chunks that includes every\n> > // mempool chunk, and every malloc region that *doesn't* contain a\n> > // mempool chunk.\n> \n> Ugh.\n> \n> > I guess that makes sense, but would definitely be nice to have had\n> > documented...\n> \n> Indeed. So this started happening to us when we merged the aset\n> header with its first allocation block.\n\nYea.\n\nI have the feeling that valgrinds error detection capability in our\ncodebase used to be higher too. I wonder if that could be related\nsomehow. Or maybe it's just a figment of my imagination.\n\n\n> I believe I've just tracked down the cause of that. Those errors\n> are (as far as I've seen) only happening in parallel workers, and\n> the reason is this gem in RestoreGUCState:\n> \n> \t/* See comment at can_skip_gucvar(). */\n> \tfor (i = 0; i < num_guc_variables; i++)\n> \t\tif (!can_skip_gucvar(guc_variables[i]))\n> \t\t\tInitializeOneGUCOption(guc_variables[i]);\n> \n> where InitializeOneGUCOption zeroes out that GUC variable, causing\n> it to lose track of any allocations it was responsible for.\n\nOuch. At least it's a short lived issue rather than something permanently\nleaking memory on every SIGHUP...\n\n\n\n> At minimum, somebody didn't understand the difference between \"initialize\"\n> and \"reset\". But I suspect we could just nuke this loop altogether.\n> I've not yet tried to grok the comment that purports to justify it,\n> but I fail to see why it'd ever be useful to drop values inherited\n> from the postmaster.\n\nI can't really make sense of it either. 
I think it may be trying to\nrestore the GUC state to what it would have been in postmaster,\ndisregarding all the settings that were set as part of PostgresInit()\netc?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 19:05:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-03-17 21:30:48 -0400, Tom Lane wrote:\n>> I believe I've just tracked down the cause of that. Those errors\n>> are (as far as I've seen) only happening in parallel workers, and\n>> the reason is this gem in RestoreGUCState: ...\n\n> Ouch. At least it's a short lived issue rather than something permanently\n> leaking memory on every SIGHUP...\n\nYeah. This leak is really insignificant in practice, although I'm\nsuspicious that randomly init'ing GUCs like this might have semantic\nissues that we've not detected yet.\n\n>> I've not yet tried to grok the comment that purports to justify it,\n>> but I fail to see why it'd ever be useful to drop values inherited\n>> from the postmaster.\n\n> I can't really make sense of it either. I think it may be trying to\n> restore the GUC state to what it would have been in postmaster,\n> disregarding all the settings that were set as part of PostgresInit()\n> etc?\n\nAt least in a non-EXEC_BACKEND build, the most reliable way to reproduce\nthe postmaster's settings is to do nothing whatsoever. And I think the\nsame is true for EXEC_BACKEND, really, because the guc.c subsystem is\nresponsible for restoring what would have been the inherited-via-fork\nsettings. 
So I'm really not sure what this is on about, and I'm too\ntired to try to figure it out tonight.\n\nIn other news, I found that there's a genuine leak in\nRelationBuildLocalRelation: RelationInitTableAccessMethod\nwas added in a spot where CurrentMemoryContext is CacheMemoryContext,\nwhich is bad because it does a syscache lookup that can lead to\na table access which can leak some memory. Seems easy to fix though.\n\nThe plpgsql situation looks like a mess. As a short-term answer,\nI'm inclined to recommend adding an exclusion that will ignore anything\nallocated within plpgsql_compile(). Doing better will require a fair\namount of rewriting. (Although I suppose we could also consider adding\nan on_proc_exit callback that runs through and frees all the function\ncache entries.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 22:33:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn 2021-03-17 00:01:55 -0400, Tom Lane wrote:\n> As for the particular point about ParallelBlockTableScanWorkerData,\n> I agree with your question to David about why that's in TableScanDesc\n> not HeapScanDesc, but I can't get excited about it not being freed in\n> heap_endscan. That's mainly because I do not believe that anything as\n> complex as a heap or indexscan should be counted on to be zero-leakage.\n> The right answer is to not do such operations in long-lived contexts.\n> So if we're running such a thing while switched into CacheContext,\n> *that* is the bug, not that heap_endscan didn't free this particular\n> allocation.\n\nI agree that it's a bad idea to do scans in non-transient contexts. It\ndoes however seems like there's a number of places that do...\n\nI added the following hacky definition of \"permanent\" contexts\n\n/*\n * NB: Only for assertion use.\n *\n * TopMemoryContext itself obviously is permanent. 
Treat CacheMemoryContext\n * and all its children as permanent too.\n *\n * XXX: Might be worth adding this as an explicit flag on the context?\n */\nbool\nMemoryContextIsPermanent(MemoryContext c)\n{\n\tif (c == TopMemoryContext)\n\t\treturn true;\n\n\twhile (c)\n\t{\n\t\tif (c == CacheMemoryContext)\n\t\t\treturn true;\n\t\tc = c->parent;\n\t}\n\n\treturn false;\n}\n\nand checked that the CurrentMemoryContext is not permanent in\nSearchCatCacheInternal() and systable_beginscan(). Hit a number of\ntimes.\n\nThe most glaring case is the RelationInitTableAccessMethod() call in\nRelationBuildLocalRelation(). Seems like the best fix is to just move\nthe MemoryContextSwitchTo() to just before the\nRelationInitTableAccessMethod(). Although I wonder if we shouldn't go\nfurther, and move it to much earlier, somewhere after the rd_rel\nallocation.\n\nThere's plenty other hits, but I think I should get back to working on\nmaking the shared memory stats patch committable. I really wouldn't want\nit to slip yet another year.\n\nBut I think it might make sense to add a flag indicating contexts that\nshouldn't be used for non-transient data. Seems like we fairly regularly\nhave \"bugs\" around this?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 20:02:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "On 2021-03-17 22:33:59 -0400, Tom Lane wrote:\n> >> I've not yet tried to grok the comment that purports to justify it,\n> >> but I fail to see why it'd ever be useful to drop values inherited\n> >> from the postmaster.\n> \n> > I can't really make sense of it either. 
I think it may be trying to\n> > restore the GUC state to what it would have been in postmaster,\n> > disregarding all the settings that were set as part of PostgresInit()\n> > etc?\n> \n> At least in a non-EXEC_BACKEND build, the most reliable way to reproduce\n> the postmaster's settings is to do nothing whatsoever. And I think the\n> same is true for EXEC_BACKEND, really, because the guc.c subsystem is\n> responsible for restoring what would have been the inherited-via-fork\n> settings. So I'm really not sure what this is on about, and I'm too\n> tired to try to figure it out tonight.\n\nThe restore thing runs after we've already set and initialized GUCs,\nincluding things like user/database default GUCs. Is see things like\n\n==2251779== 4,560 bytes in 1 blocks are definitely lost in loss record 383 of 406\n==2251779== at 0x483877F: malloc (vg_replace_malloc.c:307)\n==2251779== by 0x714A45: ConvertTimeZoneAbbrevs (datetime.c:4556)\n==2251779== by 0x88DE95: load_tzoffsets (tzparser.c:465)\n==2251779== by 0x88511D: check_timezone_abbreviations (guc.c:11565)\n==2251779== by 0x884633: call_string_check_hook (guc.c:11232)\n==2251779== by 0x87CB57: parse_and_validate_value (guc.c:7012)\n==2251779== by 0x87DD5F: set_config_option (guc.c:7630)\n==2251779== by 0x88397F: ProcessGUCArray (guc.c:10784)\n==2251779== by 0x32BCCF: ApplySetting (pg_db_role_setting.c:256)\n==2251779== by 0x874CA2: process_settings (postinit.c:1163)\n==2251779== by 0x874A0B: InitPostgres (postinit.c:1048)\n==2251779== by 0x60129A: BackgroundWorkerInitializeConnectionByOid (postmaster.c:5681)\n==2251779== by 0x2BB283: ParallelWorkerMain (parallel.c:1374)\n==2251779== by 0x5EBA6A: StartBackgroundWorker (bgworker.c:864)\n==2251779== by 0x6014FE: do_start_bgworker (postmaster.c:5802)\n==2251779== by 0x6018D2: maybe_start_bgworkers (postmaster.c:6027)\n==2251779== by 0x600811: sigusr1_handler (postmaster.c:5190)\n==2251779== by 0x4DD513F: ??? 
(in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so)\n==2251779== by 0x556E865: select (select.c:41)\n==2251779== by 0x5FC0CB: ServerLoop (postmaster.c:1700)\n==2251779== by 0x5FBA68: PostmasterMain (postmaster.c:1408)\n==2251779== by 0x4F8BFD: main (main.c:209)\n\nThe BackgroundWorkerInitializeConnectionByOid() in that trace is before\nthe RestoreGUCState() later in ParallelWorkerMain(). But that doesn't\nreally explain the approach.\n\n\n> In other news, I found that there's a genuine leak in\n> RelationBuildLocalRelation: RelationInitTableAccessMethod\n> was added in a spot where CurrentMemoryContext is CacheMemoryContext,\n> which is bad because it does a syscache lookup that can lead to\n> a table access which can leak some memory. Seems easy to fix though.\n\nYea, that's the one I was talking about re\nParallelBlockTableScanWorkerData not being freed. While I think we\nshould also add that pfree, I think you're right, and we shouldn't do\nRelationInitTableAccessMethod() while in CacheMemoryContext.\n\n\n> The plpgsql situation looks like a mess. As a short-term answer,\n> I'm inclined to recommend adding an exclusion that will ignore anything\n> allocated within plpgsql_compile(). Doing better will require a fair\n> amount of rewriting. (Although I suppose we could also consider adding\n> an on_proc_exit callback that runs through and frees all the function\n> cache entries.)\n\nThe error variant of this one seems like it might actually be a\npractically relevant leak? As well as increasing memory usage for\ncompiled plpgsql functions unnecessarily, of course. 
The latter would be\ngood to fix, but the former seems like it might be a practical issue for\npoolers and the like?\n\nSo I think we should do at least the reparenting thing to address that?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 20:15:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The most glaring case is the RelationInitTableAccessMethod() call in\n> RelationBuildLocalRelation(). Seems like the best fix is to just move\n> the MemoryContextSwitchTo() to just before the\n> RelationInitTableAccessMethod(). Although I wonder if we shouldn't go\n> further, and move it to much earlier, somewhere after the rd_rel\n> allocation.\n\nYeah, same thing I did locally. Not sure if it's worth working harder.\n\n> There's plenty other hits, but I think I should get back to working on\n> making the shared memory stats patch committable. I really wouldn't want\n> it to slip yet another year.\n\n+1, so far there's little indication that we're finding any serious leaks\nhere. Might be best to set it all aside till there's more time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 23:21:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "I wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> There's plenty other hits, but I think I should get back to working on\n>> making the shared memory stats patch committable. I really wouldn't want\n>> it to slip yet another year.\n\n> +1, so far there's little indication that we're finding any serious leaks\n> here. 
Might be best to set it all aside till there's more time.\n\nWell, I failed to resist the temptation to keep poking at this issue,\nmainly because I felt that it'd be a good idea to make sure we'd gotten\nour arms around all of the detectable issues, even if we choose not to\nfix them right away. The attached patch, combined with your preceding\nmemory context patch, eliminates all but a very small number of the leak\ncomplaints in the core regression tests.\n\nThe remaining leak warnings that I see are:\n\n1. WaitForReplicationWorkerAttach leaks the BackgroundWorkerHandle it's\npassed. I'm not sure which function should clean that up, but in any\ncase it's only 16 bytes one time in one process, so it's hardly exciting.\n\n2. There's lots of leakage in text search dictionary initialization\nfunctions. This is hard to get around completely, because the API for\nthose functions has them being called in the dictionary's long-lived\ncontext. In any case, the leaks are not large (modulo comments below),\nand they would get cleaned up if the dictionary cache were thrown away.\n\n3.
partition_prune.sql repeatably produces this warning:\n\n==00:00:44:39.764 3813570== 32,768 bytes in 1 blocks are possibly lost in loss record 2,084 of 2,096\n==00:00:44:39.764 3813570== at 0x4C30F0B: malloc (vg_replace_malloc.c:307)\n==00:00:44:39.764 3813570== by 0x94315A: AllocSetAlloc (aset.c:941)\n==00:00:44:39.764 3813570== by 0x94B52F: MemoryContextAlloc (mcxt.c:804)\n==00:00:44:39.764 3813570== by 0x8023DA: LockAcquireExtended (lock.c:845)\n==00:00:44:39.764 3813570== by 0x7FFC7E: LockRelationOid (lmgr.c:116)\n==00:00:44:39.764 3813570== by 0x5CB89F: findDependentObjects (dependency.c:962)\n==00:00:44:39.764 3813570== by 0x5CBA66: findDependentObjects (dependency.c:1060)\n==00:00:44:39.764 3813570== by 0x5CBA66: findDependentObjects (dependency.c:1060)\n==00:00:44:39.764 3813570== by 0x5CCB72: performMultipleDeletions (dependency.c:409)\n==00:00:44:39.764 3813570== by 0x66F574: RemoveRelations (tablecmds.c:1395)\n==00:00:44:39.764 3813570== by 0x81C986: ProcessUtilitySlow.isra.8 (utility.c:1698)\n==00:00:44:39.764 3813570== by 0x81BCF2: standard_ProcessUtility (utility.c:1034)\n\nwhich evidently is warning that some LockMethodLocalHash entry is losing\ntrack of its lockOwners array. But I sure don't see how that could\nhappen, nor why it'd only happen in this test. Could this be a false\nreport?\n\nAs you can see from the patch's additions to src/tools/valgrind.supp,\nI punted on the issues around pl/pgsql's function-compile-time leaks.\nWe know that's an issue, but again the leaks are fairly small and\nthey are confined within function cache entries, so I'm not sure\nhow hard we should work on that.\n\n(An idea for silencing both this and the dictionary leak warnings is\nto install an on-proc-exit callback to drop the cached objects'\ncontexts.)\n\nAnyway, looking through the patch, I found several non-cosmetic issues:\n\n* You were right to suspect that there was residual leakage in guc.c's\nhandling of error cases. 
ProcessGUCArray and call_string_check_hook\nare both guilty of leaking malloc'd strings if an error in a proposed\nGUC setting is detected.\n\n* I still haven't tried to wrap my brain around the question of what\nRestoreGUCState really needs to be doing. I have, however, found that\ncheck-world passes just fine with the InitializeOneGUCOption calls\ndiked out entirely, so that's what's in this patch.\n\n* libpqwalreceiver.c leaks a malloc'd error string when\nlibpqrcv_check_conninfo detects an erroneous conninfo string.\n\n* spell.c leaks a compiled regular expression if an ispell dictionary\ncache entry is dropped. I fixed this by adding a memory context reset\ncallback to release the regex. This is potentially a rather large\nleak, if the regex is complex (though typically it wouldn't be).\n\n* BuildEventTriggerCache leaks working storage into\nEventTriggerCacheContext.\n\n* Likewise, load_domaintype_info leaks working storage into\na long-lived cache context.\n\n* RelationDestroyRelation leaks rd_statlist; the patch that added\nthat failed to emulate the rd_indexlist prototype very well.\n\n* As previously noted, RelationInitTableAccessMethod causes leaks.\n\n* I made some effort to remove easily-removable leakage in\nlookup_ts_dictionary_cache itself, although given the leaks in\nits callees, this isn't terribly exciting.\n\nI am suspicious that there's something still not quite right in the\nmemory context changes, because I noticed that the sanity_check.sql\ntest (and no other ones) complained about row_description_context being\nleaked. I realized that the warning was because my compiler optimizes\naway the row_description_context static variable altogether, leaving no\npointer to the context behind. I hacked around that in this patch by\nmarking that variable volatile. However, that explanation just begs\nthe question of why every run didn't show the same warning. 
I suppose\nthat valgrind was considering the context to be referenced by some\nchild or sibling context pointer, but if that's the explanation then\nwe should never have seen the warning. I'm forced to the conclusion\nthat valgrind is counting some but not all child/sibling context\npointers, which sure seems like a bug. Maybe we need the two-level-\nmempool mechanism after all to get that to work better.\n\nAnyway, I think we need to commit and even back-patch the portion\nof the attached changes that are in\n* libpqwalreceiver.c\n* spell.h / spell.c\n* relcache.c\n* guc.c (except the quick hack in RestoreGUCState)\nThose are all genuine leaks that are in places where they could be\nrepeated and thus potentially add up to something significant.\n\nThere's a weaker case for applying the changes in evtcache.c,\nts_cache.c, and typcache.c. Those are all basically leaving some cruft\nbehind in a long-lived cache entry. But the cruft isn't huge and it\nwould be recovered at cache flush, so I'm not convinced it amounts to a\nreal-world problem.\n\nThe rest of this is either working around valgrind shortcomings or\njumping through a hoop to convince it that some data structure that's\nstill around at exit is still referenced. Maybe we should commit some\nof it under \"#ifdef USE_VALGRIND\" tests. I really want to find some\nother answer than moving the dlist_node fields, though.\n\nComments?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 18 Mar 2021 17:51:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "On Wed, 17 Mar 2021 at 15:31, Andres Freund <andres@anarazel.de> wrote:\n> I'm a bit confused about the precise design of rs_private /\n> ParallelBlockTableScanWorkerData, specifically why it's been added to\n> TableScanDesc, instead of just adding it to HeapScanDesc? 
And why is it\n> allocated unconditionally, instead of just for parallel scans?\n\nThat's a good point. In hindsight, I didn't spend enough effort\nquestioning that design in the original patch. I see now that the\nrs_private field makes very little sense as we can just store what's\nprivate to heapam in HeapScanDescData.\n\nI've done that in the attached. I added the\nParallelBlockTableScanWorkerData as a pointer field in\nHeapScanDescData and change it so we only allocate memory for it for\njust parallel scans. The field is left as NULL for non-parallel\nscans.\n\nI've also added a pfree in heap_endscan() to free the memory when the\npointer is not NULL. I'm hoping that'll fix the valgrind warning, but\nI've not run it to check.\n\nDavid", "msg_date": "Mon, 29 Mar 2021 11:48:47 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "Hi,\n\nOn 2021-03-29 11:48:47 +1300, David Rowley wrote:\n> I've done that in the attached. I added the\n> ParallelBlockTableScanWorkerData as a pointer field in\n> HeapScanDescData and change it so we only allocate memory for it for\n> just parallel scans. The field is left as NULL for non-parallel\n> scans.\n\nLGTM.\n\n> I've also added a pfree in heap_endscan() to free the memory when the\n> pointer is not NULL. I'm hoping that'll fix the valgrind warning, but\n> I've not run it to check.\n\nCool. I think that's a good thing to do. 
The leak itself should already\nbe fixed, and was more my fault...\n\ncommit 415ffdc2205e209b6a73fb42a3fdd6e57e16c7b2\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2021-03-18 20:50:56 -0400\n\n Don't run RelationInitTableAccessMethod in a long-lived context.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Mar 2021 10:38:16 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" }, { "msg_contents": "On Tue, 30 Mar 2021 at 06:38, Andres Freund <andres@anarazel.de> wrote:\n> On 2021-03-29 11:48:47 +1300, David Rowley wrote:\n> > I've done that in the attached. I added the\n> > ParallelBlockTableScanWorkerData as a pointer field in\n> > HeapScanDescData and change it so we only allocate memory for it for\n> > just parallel scans. The field is left as NULL for non-parallel\n> > scans.\n>\n> LGTM.\n\nThanks for having a look.\n\nPushed.\n\nDavid\n\n\n", "msg_date": "Tue, 30 Mar 2021 10:18:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Getting better results from valgrind leak tracking" } ]
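The attribution rule quoted above from memcheck/mc_leakcheck.c can be sketched as a small self-contained model. This is our own toy code, not valgrind's: all names here are invented for illustration, addresses are plain integers, and the table is capped at 64 malloc blocks. It only demonstrates the counting rule — every mempool chunk is recorded, and a malloc block drops out of consideration as soon as any mempool chunk lives inside it.

```c
/*
 * Toy model of valgrind's find_active_chunks() attribution rule
 * (memcheck/mc_leakcheck.c): every mempool chunk is recorded, and a
 * malloc block stops being reported on its own as soon as any mempool
 * chunk lives inside it.  Invented names; not valgrind's actual code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_MALLOCS 64          /* toy limit on tracked malloc blocks */

typedef struct
{
    size_t start;               /* start address of the malloc'd block */
    size_t size;                /* its size in bytes */
} Block;

/*
 * Return how many chunks a leak check would consider, given n_mallocs
 * malloc blocks (n_mallocs <= MAX_MALLOCS) and n_pool mempool-chunk
 * start addresses.
 */
static int
count_active_chunks(const Block *mallocs, int n_mallocs,
                    const size_t *pool_chunks, int n_pool)
{
    bool holds_pool_chunk[MAX_MALLOCS] = {false};
    int  n_chunks = n_mallocs;  /* start with every malloc block */

    for (int i = 0; i < n_pool; i++)
    {
        n_chunks++;             /* every pool chunk is recorded ... */
        for (int m = 0; m < n_mallocs; m++)
        {
            if (pool_chunks[i] >= mallocs[m].start &&
                pool_chunks[i] < mallocs[m].start + mallocs[m].size &&
                !holds_pool_chunk[m])
            {
                /* ... at the cost of the malloc block holding it */
                n_chunks--;
                holds_pool_chunk[m] = true;
            }
        }
    }
    return n_chunks;
}
```

With one "context" block at address 100 of size 100 holding pool chunks at 120 and 150, the block itself drops out and only the two pool chunks stay visible — which matches the observation above: once any chunk is registered inside an aset's init block, a lost pointer to the context's own malloc goes unreported.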
[ { "msg_contents": "When table oids were removed by commit 578b229718e, the function\nCatalogTupleInsert() was modified to return void but its comment was\nleft intact. Here is a trivial patch to fix that.\n-- \nVik Fearing", "msg_date": "Wed, 17 Mar 2021 08:31:16 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Code comment fix" }, { "msg_contents": "On Wed, Mar 17, 2021 at 08:31:16AM +0100, Vik Fearing wrote:\n> When table oids were removed by commit 578b229718e, the function\n> CatalogTupleInsert() was modified to return void but its comment was\n> left intact. Here is a trivial patch to fix that.\n\nThanks, Vik. Good catch. I'll fix that in a bit.\n--\nMichael", "msg_date": "Wed, 17 Mar 2021 17:11:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Code comment fix" }, { "msg_contents": "On 3/17/21 9:11 AM, Michael Paquier wrote:\n> On Wed, Mar 17, 2021 at 08:31:16AM +0100, Vik Fearing wrote:\n>> When table oids were removed by commit 578b229718e, the function\n>> CatalogTupleInsert() was modified to return void but its comment was\n>> left intact. Here is a trivial patch to fix that.\n> \n> Thanks, Vik. Good catch. I'll fix that in a bit.\n\nCheers.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 17 Mar 2021 10:10:09 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Code comment fix" } ]
[ { "msg_contents": "Hey hackers,\n\nI had this idea, that I raised and cherished like my baby to add a switch\nin `pg_dump` to allow exporting stored functions (and procedures) only.\n\nHowever, when I finally got the time to look at it in detail, I found out\nthere was no way to solve the dependencies in the functions and procedures,\nso that the exported file, when re-played could lead to invalid objects.\n\nSo, I decided this would not make Postgres better and decide to walk off\nthis patch.\n\nAnyhow, when sharing my thoughts, several people told me to ask the\ncommunity about adding this feature because this could be useful in some\nuse cases. Another argument is that should you have all your functions in\none schema and your tables in another, exporting only the function schema\nwill lead to the same kind of file that could lead to invalid objects\ncreated when the file is re-played against a database that does not have\nthe tables.\n\nOf course, the documentation would add a warning against object invalidity\nshould only the stored functions/procedures be exported.\n\nSo, my question is: what do you think about such a feature? is it worth it?\n\nHave a nice day,\n\nLætitia", "msg_date": "Wed, 17 Mar 2021 18:00:41 +0100", "msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "pg_dump new feature: exporting functions only. Bad or good idea ?" },
{ "msg_contents": "On 3/17/21 6:00 PM, Lætitia Avrot wrote:\n> Hey hackers,\n> \n> I had this idea, that I raised and cherished like my baby to add a switch\n> in `pg_dump` to allow exporting stored functions (and procedures) only.\n> \n> However, when I finally got the time to look at it in detail, I found out\n> there was no way to solve the dependencies in the functions and procedures,\n> so that the exported file, when re-played could lead to invalid objects.\n> \n> So, I decided this would not make Postgres better and decide to walk off\n> this patch.\n> \n> Anyhow, when sharing my thoughts, several people told me to ask the\n> community about adding this feature because this could be useful in some\n> use cases. Another argument is that should you have all your functions in\n> one schema and your tables in another, exporting only the function schema\n> will lead to the same kind of file that could lead to invalid objects\n> created when the file is re-played against a database that does not have\n> the tables.\n> \n> Of course, the documentation would add a warning against object invalidity\n> should only the stored functions/procedures be exported.\n> \n> So, my question is: what do you think about such a feature?
is it worth it?\n\nYes, it is absolutely worth it to be able to extract just the functions\nfrom a database. I have wanted this several times.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 17 Mar 2021 18:10:21 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Wed, Mar 17, 2021 at 2:00 PM Lætitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n>\n> Hey hackers,\n>\n> I had this idea, that I raised and cherished like my baby to add a switch\nin `pg_dump` to allow exporting stored functions (and procedures) only.\n>\n> However, when I finally got the time to look at it in detail, I found out\nthere was no way to solve the dependencies in the functions and procedures,\nso that the exported file, when re-played could lead to invalid objects.\n>\n> So, I decided this would not make Postgres better and decide to walk off\nthis patch.\n>\n> Anyhow, when sharing my thoughts, several people told me to ask the\ncommunity about adding this feature because this could be useful in some\nuse cases. Another argument is that should you have all your functions in\none schema and your tables in another, exporting only the function schema\nwill lead to the same kind of file that could lead to invalid objects\ncreated when the file is re-played against a database that does not have\nthe tables.\n>\n> Of course, the documentation would add a warning against object\ninvalidity should only the stored functions/procedures be exported.\n>\n> So, my question is: what do you think about such a feature? is it worth\nit?\n>\n\nMake total sense since we already have --function=NAME(args) on pg_restore\nand it doesn't solve dependencies... 
so we can add it to also export
function/procedure contents.

+1 for general idea.

--
 Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/
 PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Wed, 17 Mar 2021 14:15:52 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. 
Bad or good idea ?" }, { "msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:
> On 3/17/21 6:00 PM, Lætitia Avrot wrote:
>> However, when I finally got the time to look at it in detail, I found out
>> there was no way to solve the dependencies in the functions and procedures,
>> so that the exported file, when re-played could lead to invalid objects.
>> ...
>> So, my question is: what do you think about such a feature? is it worth it?

> Yes, it is absolutely worth it to be able to extract just the functions
> from a database. I have wanted this several times.

Selective dumps always have a risk of not being restorable on their
own; I don't see that \"functions only\" is noticeably less safe than
\"just these tables\", or other cases that we support already.

What I'm wondering about is how this might interact with the
discussion at [1].

			regards, tom lane

[1] https://www.postgresql.org/message-id/flat/CAFj8pRB10wvW0CC9Xq=1XDs=zCQxer3cbLcNZa+qiX4cUH-G_A@mail.gmail.com


", "msg_date": "Wed, 17 Mar 2021 13:16:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hello,

You'll find enclosed the first version of my patch. I did not include the
possibility of using a file to list tables to be exported as Tom suggested
because I genuinely think it is a totally different matter. It does not
mean I'm not open to the possibility, it just felt weird.

The patch allows using a `--functions-only` flag in `pg_dump` to export
only functions and stored procedures. My code was built and passed tests on
the last master branch of the PostgreSQL project. 
I added regression tests.
Documentation has been updated too and generation of the documentation
(HTML, man page, pdf in A4 and letter US format) has been tested
successfully.

I did not add a warning in the documentation that the file provided might
end up in a not restorable file or in a file restoring broken functions or
procedures. Do you think I should?

I don't know if this patch has any impact on performance. I guess that
adding 4 if statements will slow down `pg_dump` a little bit.

Have a nice day,

Lætitia

On Wed, Mar 17, 2021 at 18:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Vik Fearing <vik@postgresfriends.org> writes:
> > On 3/17/21 6:00 PM, Lætitia Avrot wrote:
> >> However, when I finally got the time to look at it in detail, I found
> out
> >> there was no way to solve the dependencies in the functions and
> procedures,
> >> so that the exported file, when re-played could lead to invalid objects.
> >> ...
> >> So, my question is: what do you think about such a feature? is it worth
> it?
>
> > Yes, it is absolutely worth it to be able to extract just the functions
> > from a database. I have wanted this several times.
>
> Selective dumps always have a risk of not being restorable on their
> own; I don't see that \"functions only\" is noticeably less safe than
> \"just these tables\", or other cases that we support already.
>
> What I'm wondering about is how this might interact with the
> discussion at [1].
>
> regards, tom lane
>
> [1]
> https://www.postgresql.org/message-id/flat/CAFj8pRB10wvW0CC9Xq=1XDs=zCQxer3cbLcNZa+qiX4cUH-G_A@mail.gmail.com
>", "msg_date": "Sat, 27 Mar 2021 13:22:43 +0100", "msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "
On 3/27/21 8:22 AM, Lætitia Avrot wrote:
> Hello,
>
> You'll find enclosed the first version of my patch. 
I did not include\n> the possibility of using a file to list tables to be exported as Tom\n> suggested because I genuinely think it is a totally different matter.\n> It does not mean I'm not open to the possibility, it just felt weird.\n>\n> The patch allows using a `--functions-only` flag in `pg_dump` to\n> export only functions and stored procedures. My code was build and\n> passed tests on the last master branch of the PostgreSQL project. I\n> added regression tests. Documentation has been updated too and\n> generation of the documentation (HTML, man page, pdf in A4 and letter\n> US format) has been tested successfully.\n\n\nWe can bikeshed the name of the flag at some stage. --procedures-only\nmight also make sense\n\n\n>\n> I did not add a warning in the documentation that the file provided\n> might end up in a not restorable file or in a file restoring broken\n> functions or procedures. Do you think I should?\n\n\nNo, I don't think it's any different from any of the other similar switches.\n\n\n>\n> I don't know if this patch has any impact on performance. I guess that\n> adding 4 if statements will slow down `pg_dump` a little bit.\n>\n>\n\nNot likely to be noticeable.\n\n\nPlease add this to the next commitfest.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 27 Mar 2021 08:57:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Sat, Mar 27, 2021 at 6:23 AM Lætitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n> Hello,\n>\n> You'll find enclosed the first version of my patch.\n>\n\nI tested a couple simple use cases. This is great, Thank you!\n\n\n> I did not include the possibility of using a file to list tables to be\n> exported as Tom suggested because I genuinely think it is a totally\n> different matter. 
It does not mean I'm not open to the possibility, it just
> felt weird.
>
> The patch allows using a `--functions-only` flag in `pg_dump` to export
> only functions and stored procedures. My code was build and passed tests on
> the last master branch of the PostgreSQL project. I added regression tests.
> Documentation has been updated too and generation of the documentation
> (HTML, man page, pdf in A4 and letter US format) has been tested
> successfully.
>
> I did not add a warning in the documentation that the file provided might
> end up in a not restorable file or in a file restoring broken functions or
> procedures. Do you think I should?
>

The docs for both the --table and --schema options do warn about this. On
the other hand, --data-only has no such warning. I'd lean towards matching
--data-only for this.


>
> I don't know if this patch has any impact on performance. I guess that
> adding 4 if statements will slow down `pg_dump` a little bit.
>
> Have a nice day,
>
> Lætitia
>

Using --functions-only along with --table=<name> does not error out and
warn the user, instead it creates a dump containing only the SET commands.
An error similar to using --functions-only along with --data-only seems
like a good idea.

Cheers,

*Ryan Lambert*
RustProof Labs", "msg_date": "Sat, 27 Mar 2021 07:50:12 -0600", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": ">
>
>> Using --functions-only along with --table=<name> does not error out and
> warn the user, instead it creates a dump containing only the SET commands.
> An error similar to using --functions-only along with --data-only seems
> like a good idea.
>
>
Thank you for giving my patch a try.
I added the new if statement, so that the program will error should the
`--functions-only` be used alongside the `--table` option.

The patch has been added to the next commitfest.

Have a nice day,

Lætitia", "msg_date": "Sat, 27 Mar 2021 21:30:14 +0100", "msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" 
}, { "msg_contents": "The following review has been posted through the commitfest application:
make installcheck-world: tested, failed
Implements feature: tested, passed
Spec compliant: tested, passed
Documentation: not tested

Few minor comments:

- The latest patch has some hunk failures
- Regression with master has many failures with/without the patch, it is difficult to tell if the patch is causing any failures.
- Is it intended behaviour that the --functions-only switch also dumps stored procedures?
- If I create a procedure with
LANGUAGE plpgsql
SECURITY INVOKER
It is not including \"SECURITY INVOKER\" in the dump. That's probably because INVOKER is the default security setting.", "msg_date": "Thu, 29 Apr 2021 11:46:48 +0000", "msg_from": "ahsan hadi <ahsan.hadi@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hi,

I took a quick look at the patch today. There was some minor bitrot
requiring a rebase, so I attach the rebased patch as v3.

The separate 0002 part contains some minor fixes - a couple
typos/rewording in the docs (would be good if a native speaker looked at
it, though), and a slightly reworked chunk of code from pg_dump.c. The
change is more a matter of personal preference than correctness - it
just seems simpler this way, but ymmv. And I disliked that the comment
said \"If we have to export only the functions ..\" but the if condition
was actually the exact opposite of that.

The main question I have is whether this should include procedures. I'd
probably argue procedures should be considered different from functions
(i.e. requiring a separate --procedures-only option), because it pretty
much is meant to be a separate object type. We don't allow calling DROP
FUNCTION on a procedure, etc. 
It'd be silly to introduce an unnecessary\nambiguity in pg_dump and have to deal with it sometime later.\n\nI wonder if we should allow naming a function to dump, similarly to how\n--table works for tables, for example.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 10 Jul 2021 00:43:42 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The main question I have is whether this should include procedures.\n\nI feel a bit uncomfortable about sticking this sort of limited-purpose\nselectivity mechanism into pg_dump. I'd rather see a general filter\nmethod that can select object(s) of any type. Pavel was doing some\nwork towards that awhile ago, though I think he got frustrated about\nthe lack of consensus on details. Which is a problem, but I don't\nthink the solution is to accrue more and more independently-designed-\nand-implemented features that each solve some subset of the big problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 09 Jul 2021 19:43:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Fri, Jul 09, 2021 at 07:43:02PM -0400, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > The main question I have is whether this should include procedures.\n> \n> I feel a bit uncomfortable about sticking this sort of limited-purpose\n> selectivity mechanism into pg_dump. I'd rather see a general filter\n> method that can select object(s) of any type. 
Pavel was doing some
> work towards that awhile ago, though I think he got frustrated about
> the lack of consensus on details.

That's this: https://commitfest.postgresql.org/33/2573/

-- 
Justin


", "msg_date": "Fri, 9 Jul 2021 18:50:20 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On 7/10/21 1:43 AM, Tom Lane wrote:
> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:
>> The main question I have is whether this should include procedures.
> 
> I feel a bit uncomfortable about sticking this sort of limited-purpose
> selectivity mechanism into pg_dump. I'd rather see a general filter
> method that can select object(s) of any type. Pavel was doing some
> work towards that awhile ago, though I think he got frustrated about
> the lack of consensus on details. Which is a problem, but I don't
> think the solution is to accrue more and more independently-designed-
> and-implemented features that each solve some subset of the big problem.
> 

I'm not against introducing such a general filter mechanism, but why 
should it block this patch? I'd understand if the patch was adding a lot 
of code, but that's not the case - it's tiny. And we already have 
multiple filter options (to pick tables, schemas, extensions, ...).

And if there's no consensus on details of Pavel's patch after multiple 
commitfests, how likely is it it'll start moving forward?


regards

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


", "msg_date": "Sat, 10 Jul 2021 13:06:43 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" 
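[Editor's note: the "general filter method" debated above — include/exclude rules spanning all object types, with psql-style name patterns — could be prototyped roughly as below. The rule format and all names are invented for illustration; this is not the syntax from the referenced commitfest entry.]

```python
import re

def compile_pattern(pattern):
    # psql-style pattern: '*' matches any run of characters,
    # '?' any single character; everything else is literal.
    body = "".join(
        ".*" if c == "*" else "." if c == "?" else re.escape(c)
        for c in pattern
    )
    return re.compile("^(" + body + ")$")

def select_objects(objects, rules):
    """objects: iterable of (objtype, name) pairs.
    rules: list of ('include'|'exclude', objtype, pattern) triples.
    An object is kept if at least one include rule for its type
    matches its name and no exclude rule for its type does."""
    kept = []
    for objtype, name in objects:
        hits = [kind for kind, rtype, pat in rules
                if rtype == objtype and compile_pattern(pat).match(name)]
        if "include" in hits and "exclude" not in hits:
            kept.append((objtype, name))
    return kept
```

With rules such as `[("include", "table", "foo*"), ("include", "function", "bar*")]`, only tables named `foo…` and functions named `bar…` survive, mirroring the -t/-T style semantics discussed in the thread.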
}, { "msg_contents": "> On Sat, Jul 10, 2021 at 5:06 AM Tomas Vondra <\ntomas.vondra@enterprisedb.com> wrote:\n> On 7/10/21 1:43 AM, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> The main question I have is whether this should include procedures.\n>>\n>> I feel a bit uncomfortable about sticking this sort of limited-purpose\n>> selectivity mechanism into pg_dump. I'd rather see a general filter\n>> method that can select object(s) of any type. Pavel was doing some\n>> work towards that awhile ago, though I think he got frustrated about\n>> the lack of consensus on details. Which is a problem, but I don't\n>> think the solution is to accrue more and more independently-designed-\n>> and-implemented features that each solve some subset of the big problem.\n>>\n>\n> I'm not against introducing such general filter mechanism, but why\n> should it block this patch? I'd understand it the patch was adding a lot\n> of code, but that's not the case - it's tiny. And we already have\n> multiple filter options (to pick tables, schemas, extensions, ...).\n\n> And if there's no consensus on details of Pavel's patch after multiple\n> commitfests, how likely is it it'll start moving forward?\n\nIt seems to me that pg_dump already has plenty of limited-purpose options\nfor selectivity, adding this patch seems to fit in with the norm. I'm in\nfavor of this patch because it works in the same way the community is\nalready used to and meets the need. The other patch referenced upstream\nappears to be intended to solve a specific problem encountered with very\nlong commands. It is also introducing a new way of working with pg_dump\nvia a config file, and honestly I've never wanted that type of feature. Not\nsaying that wouldn't be useful, but that has not been a pain point for me\nand seems overly complex. 
For the use cases I imagine this patch will help
with, being required to go through a config file seems excessive vs pg_dump
--functions-only.

> On Fri, Jul 9, 2021 at 4:43 PM Tomas Vondra <tomas.vondra@enterprisedb.com>
wrote:
>
> The main question I have is whether this should include procedures. I'd
> probably argue procedures should be considered different from functions
> (i.e. requiring a separate --procedures-only option), because it pretty
> much is meant to be a separate object type. We don't allow calling DROP
> FUNCTION on a procedure, etc. It'd be silly to introduce an unnecessary
> ambiguity in pg_dump and have to deal with it sometime later.

+1

*Ryan Lambert*
RustProof Labs", "msg_date": "Sat, 17 Jul 2021 07:32:25 -0600", "msg_from": "Ryan Lambert <ryan@rustprooflabs.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good
 idea ?" }, { "msg_contents": ">
>
> > On Fri, Jul 9, 2021 at 4:43 PM Tomas Vondra <
> tomas.vondra@enterprisedb.com> wrote:
> >
> > The main question I have is whether this should include procedures. 
I'd
> > probably argue procedures should be considered different from functions
> > (i.e. requiring a separate --procedures-only option), because it pretty
> > much is meant to be a separate object type. We don't allow calling DROP
> > FUNCTION on a procedure, etc. It'd be silly to introduce an unnecessary
> > ambiguity in pg_dump and have to deal with it sometime later.
>
>
I respectfully disagree. In psql, the `\ef` and `\df` metacommands will
also list procedures, not just functions. So at one point we agreed to
consider for this client that functions were close enough to procedures to
use a simple metacommand to list/display without distinction. Why should it
be different for `pg_dump` ?

Have a nice day,

Lætitia", "msg_date": "Fri, 30 Jul 2021 12:55:42 +0200", "msg_from": "=?UTF-8?Q?L=C3=A6titia_Avrot?= <laetitia.avrot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" 
}, { "msg_contents": "> On 30 Jul 2021, at 12:55, Lætitia Avrot <laetitia.avrot@gmail.com> wrote:\n> \n> > On Fri, Jul 9, 2021 at 4:43 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:\n> > \n> > The main question I have is whether this should include procedures. I'd\n> > probably argue procedures should be considered different from functions\n> > (i.e. requiring a separate --procedures-only option), because it pretty\n> > much is meant to be a separate object type. We don't allow calling DROP\n> > FUNCTION on a procedure, etc. It'd be silly to introduce an unnecessary\n> > ambiguity in pg_dump and have to deal with it sometime later.\n> \n> I respectfully disagree. In psql, the `\\ef` and `\\df` metacommands will also list procedures, not just functions.\n\nI tend to agree that we should include both, while they are clearly different I\ndon't think it would be helpful in this case to distinguish.\n\nLooking at this thread I think it makes sense to go ahead with this patch. The\nfilter functionality worked on in another thread is dealing with cherry-picking\ncertain objects where this is an all-or-nothing switch, so I don't think they\nare at odds with each other.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 9 Nov 2021 15:23:07 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Tue, Nov 09, 2021 at 03:23:07PM +0100, Daniel Gustafsson wrote:\n> Looking at this thread I think it makes sense to go ahead with this patch. The\n> filter functionality worked on in another thread is dealing with cherry-picking\n> certain objects where this is an all-or-nothing switch, so I don't think they\n> are at odds with each other.\n\nIncluding both procedures and functions sounds natural from here. Now\nI have a different question, something that has not been discussed in\nthis thread at all. 
What about patterns? Switches like --table or\n--extension are able to digest a psql-like pattern to decide which\nobjects to dump. Is there a reason not to have this capability for\nthis new switch with procedure names? I mean to handle the case\nwithout the function arguments, even if the same name is used by\nmultiple functions with different arguments.\n--\nMichael", "msg_date": "Tue, 25 Jan 2022 14:49:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "It looks like this discussion has reached a bit of an impasse with Tom\nbeing against this approach and Michael and Daniel being for it. It\ndoesn't look like it's going to get committed this commitfest, shall\nwe move it forward or mark it returned with feedback?\n\n\n", "msg_date": "Thu, 24 Mar 2022 18:38:15 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "> On 24 Mar 2022, at 23:38, Greg Stark <stark@mit.edu> wrote:\n\n> It looks like this discussion has reached a bit of an impasse with Tom\n> being against this approach and Michael and Daniel being for it. It\n> doesn't look like it's going to get committed this commitfest, shall\n> we move it forward or mark it returned with feedback?\n\nLætitia mentioned the other day off-list that she was going to try and update\nthis patch with the pattern support proposed, so hopefully we will hear from\nher shortly on that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 00:00:14 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" 
}, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 24 Mar 2022, at 23:38, Greg Stark <stark@mit.edu> wrote:\n>> It looks like this discussion has reached a bit of an impasse with Tom\n>> being against this approach and Michael and Daniel being for it. It\n>> doesn't look like it's going to get committed this commitfest, shall\n>> we move it forward or mark it returned with feedback?\n\n> Lætitia mentioned the other day off-list that she was going to try and update\n> this patch with the pattern support proposed, so hopefully we will hear from\n> her shortly on that.\n\nTo clarify: I'm not against having an easy way to dump all-and-only\nfunctions. What concerns me is having such a feature that's designed\nin isolation, without a plan for anything else. I'd like to create\nsome sort of road map for future selective-dumping options, and then\nwe can make sure that this feature fits into the bigger picture.\nOtherwise we're going to end up with an accumulation of warts, with\ninconsistent naming and syntax, and who knows what other sources of\nconfusion.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:18:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On 03/27/21 08:57, Andrew Dunstan wrote:\n> We can bikeshed the name of the flag at some stage. --procedures-only\n> might also make sense\n\nAny takers for --routines-only ?\n\n\"Routine\" is the genuine, ISO SQL umbrella term for a function or\nprocedure, and we have used it that way in our docs and glossary.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:42:37 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good\n idea ?" 
}, { "msg_contents": "On Thu, Mar 24, 2022 at 4:42 PM Chapman Flack <chap@anastigmatix.net> wrote:\n\n> On 03/27/21 08:57, Andrew Dunstan wrote:\n> > We can bikeshed the name of the flag at some stage. --procedures-only\n> > might also make sense\n>\n> Any takers for --routines-only ?\n>\n> \"Routine\" is the genuine, ISO SQL umbrella term for a function or\n> procedure, and we have used it that way in our docs and glossary.\n>\n>\nRegardless of the prefix name choice neither blobs, tables, nor schemas use\nthe \"-only\" suffix so I don't see that this should. I have no issue if we\nadd three options for this: --routine/--procedure/--function (these are\nsingular because --table and --schema are singular)\n\n--blobs and --no-blobs are special so let us just build off of the API\nalready implemented for --table/--exclude-table\n\nNo short option is required, and honestly I don't think it is worthwhile to\ntake up short options for this, acknowledging that we are leaving -t/-T\n(and -n/-N) in place for legacy support.\n\n--blobs reacts to these additional object types in the same manner that it\nreacts to --table. As soon as any of these object type inclusion options\nis specified nothing except the options that are specified will be output.\nBoth data and schema, though, for most object types, data is not relevant.\nIf schema is not output then options that control schema content objects\nonly are ignored.\n\nThe --exclude-* options behave in the same way as defined for -t/-T,\nspecifically the note in -T about when both are present.\n\nAs with tables, the affirmative version of these overrides any --schema\n(-n/-N) specification provided. But the --exclude-* versions of these do\nomit the named objects from the dump should they have been selected by\n--schema.\n\nDavid J.\n\nOn Thu, Mar 24, 2022 at 4:42 PM Chapman Flack <chap@anastigmatix.net> wrote:On 03/27/21 08:57, Andrew Dunstan wrote:\n> We can bikeshed the name of the flag at some stage. 
--procedures-only\n> might also make sense\n\nAny takers for --routines-only ?\n\n\"Routine\" is the genuine, ISO SQL umbrella term for a function or\nprocedure, and we have used it that way in our docs and glossary.Regardless of the prefix name choice neither blobs, tables, nor schemas use the \"-only\" suffix so I don't see that this should.  I have no issue if we add three options for this: --routine/--procedure/--function (these are singular because --table and --schema are singular)--blobs and --no-blobs are special so let us just build off of the API already implemented for --table/--exclude-tableNo short option is required, and honestly I don't think it is worthwhile to take up short options for this, acknowledging that we are leaving -t/-T (and -n/-N) in place for legacy support.--blobs reacts to these additional object types in the same manner that it reacts to --table.  As soon as any of these object type inclusion options is specified nothing except the options that are specified will be output.  Both data and schema, though, for most object types, data is not relevant.  If schema is not output then options that control schema content objects only are ignored.The --exclude-* options behave in the same way as defined for -t/-T, specifically the note in -T about when both are present.As with tables, the affirmative version of these overrides any --schema (-n/-N) specification provided.  But the --exclude-* versions of these do omit the named objects from the dump should they have been selected by --schema.David J.", "msg_date": "Thu, 24 Mar 2022 17:16:47 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good\n idea ?" }, { "msg_contents": "On Mon, Jan 24, 2022 at 10:49 PM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> What about patterns? 
Switches like --table or\n> --extension are able to digest a psql-like pattern to decide which\n> objects to dump.\n>\n\nThe extension object type does not seem to have gotten the\n--exclude-extension capability that it would need to conform to the general\ndesign exemplified by --table and hopefully extended out to the routine\nobject types.\n\nDavid J.\n\nOn Mon, Jan 24, 2022 at 10:49 PM Michael Paquier <michael@paquier.xyz> wrote:What about patterns?  Switches like --table or\n--extension are able to digest a psql-like pattern to decide which\nobjects to dump.The extension object type does not seem to have gotten the --exclude-extension capability that it would need to conform to the general design exemplified by --table and hopefully extended out to the routine object types.David J.", "msg_date": "Thu, 24 Mar 2022 17:25:45 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The extension object type does not seem to have gotten the\n> --exclude-extension capability that it would need to conform to the general\n> design exemplified by --table and hopefully extended out to the routine\n> object types.\n\nWe're not going to instantly build out every feature that would be\nsuggested by a roadmap. However, I see in what you just wrote\na plausible roadmap: eventually, all or most object types should\nhave pg_dump switches comparable to, and syntactically aligned\nwith, the --table and --exclude-table switches. The expectation\nwould be that if any of these selective-dump switches appear,\nthen only objects matching at least one of them (and not matching\nany --exclude switch) will be dumped. So for example\n\n\tpg_dump --table=foo* --function=bar*\n\ndumps tables whose names start with foo, and functions whose\nnames start with bar, and nothing else. 
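[Editor's note] Tom's example invocation is a proposal, not an existing pg_dump option set. As a rough sketch of the intended semantics, the psql-like patterns select names much as shell globs do (the helper and the object names below are invented for illustration):

```shell
# Hypothetical roadmap invocation (NOT an implemented pg_dump option set):
#   pg_dump --table='foo*' --function='bar*' mydb
# The psql-like pattern 'foo*' selects names much like this shell glob:
matches() {
  name=$1 pattern=$2
  case $name in
    $pattern) echo "dump $name" ;;
    *)        echo "skip $name" ;;
  esac
}
matches foo_orders 'foo*'   # dump foo_orders
matches bar_total 'foo*'    # skip bar_total
```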
(We'd need to spell out\nhow these things interact with --schema, too.)\n\nIn this scheme, Lætitia's desired functionality should be spelled\n\"--function=*\", or possibly \"--routine=*\", depending on what she\nwanted to happen with procedures.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:40:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Thu, Mar 24, 2022 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > The extension object type does not seem to have gotten the\n> > --exclude-extension capability that it would need to conform to the\n> general\n> > design exemplified by --table and hopefully extended out to the routine\n> > object types.\n>\n> We're not going to instantly build out every feature that would be\n> suggested by a roadmap. However, I see in what you just wrote\n> a plausible roadmap: eventually, all or most object types should\n> have pg_dump switches comparable to, and syntactically aligned\n> with, the --table and --exclude-table switches. The expectation\n> would be that if any of these selective-dump switches appear,\n> then only objects matching at least one of them (and not matching\n> any --exclude switch) will be dumped. So for example\n>\n> pg_dump --table=foo* --function=bar*\n>\n> dumps tables whose names start with foo, and functions whose\n> names start with bar, and nothing else. (We'd need to spell out\n> how these things interact with --schema, too.)\n>\n> In this scheme, Lætitia's desired functionality should be spelled\n> \"--function=*\", or possibly \"--routine=*\", depending on what she\n> wanted to happen with procedures.\n>\n> Thoughts?\n>\n>\nMy longer first post today [1] indeed was that roadmap you were looking\nfor. 
I then re-read the part about --extension and realized I had missed\nits existence and felt it desirable to note that within that roadmap the\nexisting --extension object type did not conform.\n\nDavid J.\n\nhttps://www.postgresql.org/message-id/CAKFQuwYcw%2BA%2BMyDQoVahKkEqJtgih3c1i-JLY_YPMucNfgQDkg%40mail.gmail.com\n\nI think Gmail is messing with me by adding an unintended \"Re:\" to the\nsubject line which probably put my first response outside the thread.\n\nOn Thu, Mar 24, 2022 at 5:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> The extension object type does not seem to have gotten the\n> --exclude-extension capability that it would need to conform to the general\n> design exemplified by --table and hopefully extended out to the routine\n> object types.\n\nWe're not going to instantly build out every feature that would be\nsuggested by a roadmap.  However, I see in what you just wrote\na plausible roadmap: eventually, all or most object types should\nhave pg_dump switches comparable to, and syntactically aligned\nwith, the --table and --exclude-table switches.  The expectation\nwould be that if any of these selective-dump switches appear,\nthen only objects matching at least one of them (and not matching\nany --exclude switch) will be dumped.  So for example\n\n        pg_dump --table=foo* --function=bar*\n\ndumps tables whose names start with foo, and functions whose\nnames start with bar, and nothing else.  (We'd need to spell out\nhow these things interact with --schema, too.)\n\nIn this scheme, Lætitia's desired functionality should be spelled\n\"--function=*\", or possibly \"--routine=*\", depending on what she\nwanted to happen with procedures.\n\nThoughts?My longer first post today [1] indeed was that roadmap you were looking for.  
I then re-read the part about --extension and realized I had missed its existence and felt it desirable to note that within that roadmap the existing --extension object type did not conform.David J.https://www.postgresql.org/message-id/CAKFQuwYcw%2BA%2BMyDQoVahKkEqJtgih3c1i-JLY_YPMucNfgQDkg%40mail.gmail.comI think Gmail is messing with me by adding an unintended \"Re:\" to the subject line which probably put my first response outside the thread.", "msg_date": "Thu, 24 Mar 2022 18:02:29 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hello Michael,\n\nLe mar. 25 janv. 2022 à 06:49, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Tue, Nov 09, 2021 at 03:23:07PM +0100, Daniel Gustafsson wrote:\n> > Looking at this thread I think it makes sense to go ahead with this\n> patch. The\n> > filter functionality worked on in another thread is dealing with\n> cherry-picking\n> > certain objects where this is an all-or-nothing switch, so I don't think\n> they\n> > are at odds with each other.\n>\n> Including both procedures and functions sounds natural from here. Now\n> I have a different question, something that has not been discussed in\n> this thread at all. What about patterns? Switches like --table or\n> --extension are able to digest a psql-like pattern to decide which\n> objects to dump. Is there a reason not to have this capability for\n> this new switch with procedure names? I mean to handle the case\n> without the function arguments, even if the same name is used by\n> multiple functions with different arguments.\n>\n\nThank you for this suggestion.\nWe have --schema-only flag to export only the structure and then we have\n--schema=<pattern> flag to export the schemas following a pattern.\nI don't think both features can't exist for functions (and stored\nprocedures), but I see them as different features. 
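[Editor's note] The schema-side precedent Lætitia refers to already exists: pg_dump has both --schema-only (dump DDL, no data) and --schema=&lt;pattern&gt; (dump only matching schemas). A minimal sketch of what pattern-based selection means, with invented schema names:

```shell
# Both flags below are real pg_dump options; the schema names are invented.
#   pg_dump --schema-only mydb       # structure of everything, no data
#   pg_dump --schema='app_*' mydb    # only schemas matching the pattern
select_schemas() {
  for schema in "$@"; do
    case $schema in
      app_*) echo "selected: $schema" ;;
    esac
  done
}
select_schemas app_core app_audit public   # selects app_core and app_audit
```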
We could have\n--functions-only and --function=<pattern>.\nIn my humble opinion, the lack of --function=<pattern> feature should not block\nthis patch.\n\nHave a great day,\nLætitia\n\n\n\n> --\n> Michael\n>\n\nHello Michael,Le mar. 25 janv. 2022 à 06:49, Michael Paquier <michael@paquier.xyz> a écrit :On Tue, Nov 09, 2021 at 03:23:07PM +0100, Daniel Gustafsson wrote:\n> Looking at this thread I think it makes sense to go ahead with this patch.  The\n> filter functionality worked on in another thread is dealing with cherry-picking\n> certain objects where this is an all-or-nothing switch, so I don't think they\n> are at odds with each other.\n\nIncluding both procedures and functions sounds natural from here.  Now\nI have a different question, something that has not been discussed in\nthis thread at all.  What about patterns?  Switches like --table or\n--extension are able to digest a psql-like pattern to decide which\nobjects to dump.  Is there a reason not to have this capability for\nthis new switch with procedure names?  I mean to handle the case\nwithout the function arguments, even if the same name is used by\nmultiple functions with different arguments.Thank you for this suggestion.We have --schema-only flag to export only the structure and then we have --schema=<pattern> flag to export the schemas following a pattern.I don't think both features can't exist for functions (and stored procedures), but I see them as different features. We could have --functions-only and --function=<pattern>.In my humble opinion, the lack of --function=<pattern> feature should not block this patch.Have a great day,Lætitia \n--\nMichael", "msg_date": "Fri, 25 Mar 2022 17:23:36 +0100", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hello Tom,\n\nLe ven. 
25 mars 2022 à 00:18, Tom Lane <tgl@sss.pgh.pa.us> a écrit :\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 24 Mar 2022, at 23:38, Greg Stark <stark@mit.edu> wrote:\n> >> It looks like this discussion has reached a bit of an impasse with Tom\n> >> being against this approach and Michael and Daniel being for it. It\n> >> doesn't look like it's going to get committed this commitfest, shall\n> >> we move it forward or mark it returned with feedback?\n>\n> > Lætitia mentioned the other day off-list that she was going to try and\n> update\n> > this patch with the pattern support proposed, so hopefully we will hear\n> from\n> > her shortly on that.\n>\n> To clarify: I'm not against having an easy way to dump all-and-only\n> functions. What concerns me is having such a feature that's designed\n> in isolation, without a plan for anything else. I'd like to create\n> some sort of road map for future selective-dumping options, and then\n> we can make sure that this feature fits into the bigger picture.\n> Otherwise we're going to end up with an accumulation of warts, with\n> inconsistent naming and syntax, and who knows what other sources of\n> confusion.\n>\n\nThis totally makes sense.\n\nHave a nice day,\n\nLætitia\n\n\n>\n> regards, tom lane\n>\n\nHello Tom,Le ven. 25 mars 2022 à 00:18, Tom Lane <tgl@sss.pgh.pa.us> a écrit :Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 24 Mar 2022, at 23:38, Greg Stark <stark@mit.edu> wrote:\n>> It looks like this discussion has reached a bit of an impasse with Tom\n>> being against this approach and Michael and Daniel being for it. 
It\n>> doesn't look like it's going to get committed this commitfest, shall\n>> we move it forward or mark it returned with feedback?\n\n> Lætitia mentioned the other day off-list that she was going to try and update\n> this patch with the pattern support proposed, so hopefully we will hear from\n> her shortly on that.\n\nTo clarify: I'm not against having an easy way to dump all-and-only\nfunctions.  What concerns me is having such a feature that's designed\nin isolation, without a plan for anything else.  I'd like to create\nsome sort of road map for future selective-dumping options, and then\nwe can make sure that this feature fits into the bigger picture.\nOtherwise we're going to end up with an accumulation of warts, with\ninconsistent naming and syntax, and who knows what other sources of\nconfusion.This totally makes sense.Have a nice day,Lætitia \n\n                        regards, tom lane", "msg_date": "Fri, 25 Mar 2022 17:25:01 +0100", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hello Chapman,\n\nLe ven. 25 mars 2022 à 00:42, Chapman Flack <chap@anastigmatix.net> a\nécrit :\n\n> On 03/27/21 08:57, Andrew Dunstan wrote:\n> > We can bikeshed the name of the flag at some stage. --procedures-only\n> > might also make sense\n>\n> Any takers for --routines-only ?\n>\n> \"Routine\" is the genuine, ISO SQL umbrella term for a function or\n> procedure, and we have used it that way in our docs and glossary.\n>\n\nThank you so much for your suggestion. 
I was really excited to find a\ngeneric term for Functions and Procedures, but \"routine\" also includes\naggregation functions which I had excluded from my feature (see Postgres\nGlossary here:\nhttps://www.postgresql.org/docs/14/glossary.html#GLOSSARY-ROUTINE).\n\nI had decided not to include aggregate functions when I designed my patch\nbecause I thought most users wouldn't expect them in the result file. Was I\nwrong?\n\nHave a nice day,\n\nLætitia\n\n\n>\n> Regards,\n> -Chap\n>\n\nHello Chapman,Le ven. 25 mars 2022 à 00:42, Chapman Flack <chap@anastigmatix.net> a écrit :On 03/27/21 08:57, Andrew Dunstan wrote:\n> We can bikeshed the name of the flag at some stage. --procedures-only\n> might also make sense\n\nAny takers for --routines-only ?\n\n\"Routine\" is the genuine, ISO SQL umbrella term for a function or\nprocedure, and we have used it that way in our docs and glossary.Thank you so much for your suggestion. I was really excited to find a generic term for Functions and Procedures, but \"routine\" also includes aggregation functions which I had excluded from my feature (see Postgres Glossary here: https://www.postgresql.org/docs/14/glossary.html#GLOSSARY-ROUTINE).I had decided not to include aggregate functions when I designed my patch because I thought most users wouldn't expect them in the result file. Was I wrong?Have a nice day,Lætitia \n\nRegards,\n-Chap", "msg_date": "Fri, 25 Mar 2022 17:28:45 +0100", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good\n idea ?" }, { "msg_contents": "Hello David,\n\nthank you for your interest in that patch.\n\nLe ven. 25 mars 2022 à 01:17, David G. Johnston <david.g.johnston@gmail.com>\na écrit :\n\n> On Thu, Mar 24, 2022 at 4:42 PM Chapman Flack <chap@anastigmatix.net>\n> wrote:\n>\n>> On 03/27/21 08:57, Andrew Dunstan wrote:\n>> > We can bikeshed the name of the flag at some stage. 
--procedures-only\n>> > might also make sense\n>>\n>> Any takers for --routines-only ?\n>>\n>> \"Routine\" is the genuine, ISO SQL umbrella term for a function or\n>> procedure, and we have used it that way in our docs and glossary.\n>>\n>>\n> Regardless of the prefix name choice neither blobs, tables, nor schemas\n> use the \"-only\" suffix so I don't see that this should. I have no issue if\n> we add three options for this: --routine/--procedure/--function (these are\n> singular because --table and --schema are singular)\n>\n\nActually, I thought of it after the --schema-only flag (which is kind of\nconfusing, because it won't export only schema creation DDL). I agree to\nkeep the flag name singular if we're using a pattern to cherry-pick the\nobjects we want. My problem is how do you think we could get all the stored\nprocedures/functions at once? --function=* ? It seems to me that exporting\neverything at once is the main use case (I'd be happy to be proven wrong),\nand it does not feel intuitive to use --function=*.\n\nHow would you see to create 3 flags: --functions-only /\n--function=<pattern> / --exclude-function=<pattern> ?\nAnd then we could create the same series of 3 flags for other objects.\nWould that be too verbose?\n\n>\n> --blobs and --no-blobs are special so let us just build off of the API\n> already implemented for --table/--exclude-table\n>\n\n> No short option is required, and honestly I don't think it is worthwhile\n> to take up short options for this, acknowledging that we are leaving -t/-T\n> (and -n/-N) in place for legacy support.\n>\n\nI agree\n\n\n>\n> --blobs reacts to these additional object types in the same manner that it\n> reacts to --table. 
As soon as any of these object type inclusion options\n> is specified nothing except the options that are specified will be output.\n> Both data and schema, though, for most object types, data is not relevant.\n> If schema is not output then options that control schema content objects\n> only are ignored.\n>\n> The --exclude-* options behave in the same way as defined for -t/-T,\n> specifically the note in -T about when both are present.\n>\n> As with tables, the affirmative version of these overrides any --schema\n> (-n/-N) specification provided. But the --exclude-* versions of these do\n> omit the named objects from the dump should they have been selected by\n> --schema.\n>\n\nThat's fine with me.\n\nHave a nice day,\n\nLætitia\n\n\n>\n> David J.\n>\n>\n\nHello David,thank you for your interest in that patch.Le ven. 25 mars 2022 à 01:17, David G. Johnston <david.g.johnston@gmail.com> a écrit :On Thu, Mar 24, 2022 at 4:42 PM Chapman Flack <chap@anastigmatix.net> wrote:On 03/27/21 08:57, Andrew Dunstan wrote:\n> We can bikeshed the name of the flag at some stage. --procedures-only\n> might also make sense\n\nAny takers for --routines-only ?\n\n\"Routine\" is the genuine, ISO SQL umbrella term for a function or\nprocedure, and we have used it that way in our docs and glossary.Regardless of the prefix name choice neither blobs, tables, nor schemas use the \"-only\" suffix so I don't see that this should.  I have no issue if we add three options for this: --routine/--procedure/--function (these are singular because --table and --schema are singular)Actually, I thought of it after the --schema-only flag (which is kind of confusing, because it won't export only schema creation DDL). I agree to keep the flag name singular if we're using a pattern to cherry-pick the objects we want. My problem is how do you think we could get all the stored procedures/functions at once? --function=* ? 
It seems to me that exporting everything at once is the main use case (I'd be happy to be proven wrong), and it does not feel intuitive to use --function=*.How would you see to create 3 flags: --functions-only /  --function=<pattern> / --exclude-function=<pattern> ?And then we could create the same series of 3 flags for other objects. Would that be too verbose?--blobs and --no-blobs are special so let us just build off of the API already implemented for --table/--exclude-table No short option is required, and honestly I don't think it is worthwhile to take up short options for this, acknowledging that we are leaving -t/-T (and -n/-N) in place for legacy support.I agree --blobs reacts to these additional object types in the same manner that it reacts to --table.  As soon as any of these object type inclusion options is specified nothing except the options that are specified will be output.  Both data and schema, though, for most object types, data is not relevant.  If schema is not output then options that control schema content objects only are ignored.The --exclude-* options behave in the same way as defined for -t/-T, specifically the note in -T about when both are present.As with tables, the affirmative version of these overrides any --schema (-n/-N) specification provided.  But the --exclude-* versions of these do omit the named objects from the dump should they have been selected by --schema.That's fine with me.Have a nice day,Lætitia David J.", "msg_date": "Fri, 25 Mar 2022 17:44:00 +0100", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good\n idea ?" 
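[Editor's note] For reference, the documented -t/-T rule being extended in this exchange is that exclusion wins: a table matched by both -t and -T is not dumped. A small sketch of that precedence, with invented table names:

```shell
# Documented -t/-T precedence: a name matched by -T is excluded even
# when it also matches -t. Mirrored here with shell globs; the table
# names are invented.
selected() {
  case $1 in app_big_log) return 1 ;; esac   # like -T 'app_big_log'
  case $1 in app_*)       return 0 ;; esac   # like -t 'app_*'
  return 1
}
for t in app_users app_big_log misc; do
  if selected "$t"; then echo "dump $t"; fi
done
# prints only: dump app_users
```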
}, { "msg_contents": "On Fri, Mar 25, 2022 at 9:44 AM Laetitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n>\n> Actually, I thought of it after the --schema-only flag (which is kind of\n> confusing, because it won't export only schema creation DDL).\n>\n\n--schema-only is talking about the different sections of the dump file, not\nnamespace schema objects in the database.\n\n> My problem is how do you think we could get all the stored\n> procedures/functions at once? --function=* ? It seems to me that exporting\n> everything at once is the main use case (I'd be happy to be proven wrong),\n> and it does not feel intuitive to use --function=*.\n>\n\nHow does one specify \"all but only tables\" today? If the answer is \"we\ndon't\" then we get to decide now. I have no qualms with --objecttype=*\nmeaning all.\n\n(goes and checks)\npg_dump --schema-only -t '*' # indeed outputs all relations. Annoyingly,\nthis seems to include pg_catalog relations as well, so one basically has to\nspecify --exclude-table='pg_catalog.*' as well in the typical case of only\nwanting user objects. Solving this with a new --no-system-objects that\nwould apply firstly seems like a nice feature to this pre-existing\nbehavior. One might think that --exclude-schema='pg_catalog' would work,\nbut it doesn't by design. The design choice seems solid for user-space\nschema names so just dealing with the system objects is my preferred\nsolution.\n\nI don't find the --objecttype-only option to be desirable. pg_dump\n--tables-only --functions-only just seems odd, no longer are they \"only\".\nI would go with --function-all (and maybe --function-system and\n--function-user) if going down this path but the wildcard feature can\nhandle this just fine and we want that feature anyway. 
Except succinctly\nomitting system objects which should get its own general option.\n\nDavid J.\n\nOn Fri, Mar 25, 2022 at 9:44 AM Laetitia Avrot <laetitia.avrot@gmail.com> wrote:Actually, I thought of it after the --schema-only flag (which is kind of confusing, because it won't export only schema creation DDL).--schema-only is talking about the different sections of the dump file, not namespace schema objects in the database.My problem is how do you think we could get all the stored procedures/functions at once? --function=* ? It seems to me that exporting everything at once is the main use case (I'd be happy to be proven wrong), and it does not feel intuitive to use --function=*.How does one specify \"all but only tables\" today?  If the answer is \"we don't\" then we get to decide now.  I have no qualms with --objecttype=* meaning all.(goes and checks)pg_dump --schema-only -t '*'  # indeed outputs all relations.  Annoyingly, this seems to include pg_catalog relations as well, so one basically has to specify --exclude-table='pg_catalog.*' as well in the typical case of only wanting user objects.  Solving this with a new --no-system-objects that would apply firstly seems like a nice feature to this pre-existing behavior.  One might think that --exclude-schema='pg_catalog' would work, but it doesn't by design.  The design choice seems solid for user-space schema names so just dealing with the system objects is my preferred solution.I don't find the --objecttype-only option to be desirable.  pg_dump --tables-only --functions-only just seems odd, no longer are they \"only\".  I would go with --function-all (and maybe --function-system and --function-user) if going down this path but the wildcard feature can handle this just fine and we want that feature anyway.  Except succinctly omitting system objects which should get its own general option.David J.", "msg_date": "Fri, 25 Mar 2022 10:11:14 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump new feature: exporting functions only. Bad or good\n idea ?" }, { "msg_contents": "Laetitia Avrot <laetitia.avrot@gmail.com> writes:\n> Thank you so much for your suggestion. I was really excited to find a\n> generic term for Functions and Procedures, but \"routine\" also includes\n> aggregation functions which I had excluded from my feature (see Postgres\n> Glossary here:\n> https://www.postgresql.org/docs/14/glossary.html#GLOSSARY-ROUTINE).\n\n> I had decided not to include aggregate functions when I designed my patch\n> because I thought most users wouldn't expect them in the result file. Was I\n> wrong?\n\nI'd vote for treating them as functions for this purpose. I'd put\nthem in the same category as window functions: we use a separate\nname for them for historical reasons, but they still walk and quack\npretty much like functions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 13:34:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I don't find the --objectype-only option to be desirable. psql\n> --tables-only --functions-only just seems odd, no longer are they \"only\".\n> I would go with --function-all (and maybe --function-system and\n> --function-user) if going down this path but the wildcard feature can\n> handle this just fine and we want that feature anyway.\n\nAgreed. 
\"--function=*\" is more general than \"--function-only\",\nand shorter too, so what's not to like?\n\n> Except succinctly\n> omitting system objects which should get its own general option.\n\npg_dump never dumps system objects, so I don't see a need for\na switch to tell it not to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 13:56:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n>\n> > Except succinctly\n> > omitting system objects which should get its own general option.\n>\npg_dump never dumps system objects, so I don't see a need for\n> a switch to tell it not to.\n>\n>\nI considered pg_class to be a system object, which was dumped under -t '*'\n\n$ pg_dump -U postgres --schema-only -t '*' | grep 'CREATE.*pg_class'\nCREATE TABLE pg_catalog.pg_class (\nCREATE UNIQUE INDEX pg_class_oid_index ON pg_catalog.pg_class USING btree\n(oid);\nCREATE UNIQUE INDEX pg_class_relname_nsp_index ON pg_catalog.pg_class USING\nbtree (relname, relnamespace);\nCREATE INDEX pg_class_tblspc_relfilenode_index ON pg_catalog.pg_class USING\nbtree (reltablespace, relfilenode);\n\n$ psql -U postgres -c 'select version();'\n version\n\n----------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 13.6 (Ubuntu 13.6-1.pgdg20.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit\n(1 row)\n\nDavid J.\n\nOn Fri, Mar 25, 2022 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> Except succinctly\n> omitting system objects which should get its own general option. 
\npg_dump never dumps system objects, so I don't see a need for\na switch to tell it not to.I considered pg_class to be a system object, which was dumped under -t '*'$ pg_dump -U postgres --schema-only -t '*' | grep 'CREATE.*pg_class'CREATE TABLE pg_catalog.pg_class (CREATE UNIQUE INDEX pg_class_oid_index ON pg_catalog.pg_class USING btree (oid);CREATE UNIQUE INDEX pg_class_relname_nsp_index ON pg_catalog.pg_class USING btree (relname, relnamespace);CREATE INDEX pg_class_tblspc_relfilenode_index ON pg_catalog.pg_class USING btree (reltablespace, relfilenode);$ psql -U postgres -c 'select version();'                                                             version                                 ---------------------------------------------------------------------------------------------------------------------------------- PostgreSQL 13.6 (Ubuntu 13.6-1.pgdg20.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit(1 row)David J.", "msg_date": "Fri, 25 Mar 2022 11:29:08 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Mar 25, 2022 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> pg_dump never dumps system objects, so I don't see a need for\n>> a switch to tell it not to.\n\n> I considered pg_class to be a system object, which was dumped under -t '*'\n\nOh! You're right, the --table switches will include system objects.\nThat seems like a bug TBH. Even if it's intentional, it's surely\nnot behavior we want for functions. 
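[Editor's note] The surprise in David's experiment above is that a wildcard reaches into the system schemas. A sketch of the classification the thread wants applied by default (the schema names are real PostgreSQL system schemas; the helper itself is only illustrative):

```shell
# pg_catalog, information_schema and pg_toast are real system schemas;
# the helper is only a sketch of the default filter being discussed.
is_system_schema() {
  case $1 in
    pg_catalog|information_schema|pg_toast) echo yes ;;
    *) echo no ;;
  esac
}
is_system_schema pg_catalog   # yes
is_system_schema public       # no
```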
You can somewhat easily\nexclude system catalogs from matching --table since they all have\nnames starting with \"pg_\", but it'd be way more painful for functions\nbecause (a) there are thousands and (b) they're not very predictably\nnamed.\n\nI'd vote for changing the behavior of --table rather than trying to\nbe bug-compatible with this decision.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 14:37:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Friday, March 25, 2022, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Fri, Mar 25, 2022 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> pg_dump never dumps system objects, so I don't see a need for\n> >> a switch to tell it not to.\n>\n> > I considered pg_class to be a system object, which was dumped under -t\n> '*'\n>\n> I'd vote for changing the behavior of --table rather than trying to\n> be bug-compatible with this decision.\n>\n>\nAgreed.\n\nDavid J.\n\nOn Friday, March 25, 2022, Tom Lane <tgl@sss.pgh.pa.us> wrote:\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Fri, Mar 25, 2022 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> pg_dump never dumps system objects, so I don't see a need for\n>> a switch to tell it not to.\n\n> I considered pg_class to be a system object, which was dumped under -t '*'\nI'd vote for changing the behavior of --table rather than trying to\nbe bug-compatible with this decision.\nAgreed.David J.", "msg_date": "Fri, 25 Mar 2022 11:40:01 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "> On 25 Mar 2022, at 01:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> \"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n>> The extension object type does not seem to have gotten the\n>> --exclude-extension capability that it would need to conform to the general\n>> design exemplified by --table and hopefully extended out to the routine\n>> object types.\n> \n> We're not going to instantly build out every feature that would be\n> suggested by a roadmap.\n\nAgreed. In this case it seems that adding --exclude-extension would make sense\nto keep conistency. I took a quick stab at doing so with the attached while\nwe're here.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Fri, 25 Mar 2022 22:09:33 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "> On 25 Mar 2022, at 19:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I'd vote for changing the behavior of --table rather than trying to\n> be bug-compatible with this decision.\n\nAgreed. Question is what to do for \"-t pg_class\", should we still forbid\ndumping system catalogs when they are pattern matched without wildcard or is\nshould that be ok? And should this depend on if \"-n pg_catalog\" is used?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 25 Mar 2022 22:44:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 25 Mar 2022, at 19:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd vote for changing the behavior of --table rather than trying to\n>> be bug-compatible with this decision.\n\n> Agreed. Question is what to do for \"-t pg_class\", should we still forbid\n> dumping system catalogs when they are pattern matched without wildcard or is\n> should that be ok? 
And should this depend on if \"-n pg_catalog\" is used?\n\nI don't think there's anything really wrong with just \"we won't dump\nsystem objects, full stop\"; I don't see much use-case for doing that\nexcept maybe debugging, and even that is a pretty thin argument.\n\nHowever, a possible compromise is to say that we act as though\n--exclude-schema=pg_catalog is specified unless you explicitly\noverride that with \"--schema=pg_catalog\". (And the same for\ninformation_schema, I suppose.) This might be a bit hacky to\nimplement :-(\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 17:54:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Fri, Mar 25, 2022 at 2:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n> >> On 25 Mar 2022, at 19:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'd vote for changing the behavior of --table rather than trying to\n> >> be bug-compatible with this decision.\n>\n> > Agreed. Question is what to do for \"-t pg_class\", should we still forbid\n> > dumping system catalogs when they are pattern matched without wildcard\n> or is\n> > should that be ok? And should this depend on if \"-n pg_catalog\" is used?\n>\n> I don't think there's anything really wrong with just \"we won't dump\n> system objects, full stop\"; I don't see much use-case for doing that\n> except maybe debugging, and even that is a pretty thin argument.\n>\n\n+1\n\nWe could bug-fix in a compromise if we felt compelled by a user complaint\nbut I don't foresee any compelling ones for this. 
The catalogs are\nimplementation details that should never have been exposed in this manner\nin the first place.\n\nIf we want to choose the other position I would just go with\n\"--[no]-system-objects\" options to toggle whether pattern matching grabs\nthem by default (defaulting to no) and if someone wants to enable them for\nonly specific object types they can --system-objects and then\n--exclude-type='pg_catalog' any that shouldn't be enabled.\n\nThe documentation already says that the include options ignore -n/-N so the\nsolution that breaks this rule seems less appealing at a cursory glance.\n\nDavid J.", "msg_date": "Fri, 25 Mar 2022 15:02:46 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> If we want to choose the other position I would just go with\n> \"--[no]-system-objects\" options to toggle whether pattern matching grabs\n> them by default (defaulting to no) and if someone wants to enable them for\n> only specific object types they can --system-objects and then\n> --exclude-type='pg_catalog' any that shouldn't be enabled.\n\nYeah, I could live with that.  Per-object-type control doesn't\nseem necessary.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 18:07:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:09:33PM +0100, Daniel Gustafsson wrote:\n> Agreed.  In this case it seems that adding --exclude-extension would make sense\n> to keep conistency.  I took a quick stab at doing so with the attached while\n> we're here.\n\nsrc/test/modules/test_pg_dump would be the best place for the addition\nof a couple of tests with this new switch.  
Better to check as well\nwhat happens when a command collides with --extension and\n--exclude-extension.\n\n printf(_(\" -e, --extension=PATTERN dump the specified extension(s) only\\n\"));\n+ printf(_(\" --exclude-extension=PATTERN do NOT dump the specified extension(s)\\n\"));\nShouldn't this be listed closer to --exclude-table-data in the --help\noutput?\n--\nMichael", "msg_date": "Sat, 26 Mar 2022 09:13:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Hello all,\n\nLe sam. 26 mars 2022 à 01:13, Michael Paquier <michael@paquier.xyz> a\nécrit :\n\n> On Fri, Mar 25, 2022 at 10:09:33PM +0100, Daniel Gustafsson wrote:\n> > Agreed. In this case it seems that adding --exclude-extension would\n> make sense\n> > to keep conistency. I took a quick stab at doing so with the attached\n> while\n> > we're here.\n>\n> src/test/modules/test_pg_dump would be the best place for the addition\n> of a couple of tests with this new switch. Better to check as well\n> what happens when a command collides with --extension and\n> --exclude-extension.\n>\n> printf(_(\" -e, --extension=PATTERN dump the specified\n> extension(s) only\\n\"));\n> + printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\n> extension(s)\\n\"));\n> Shouldn't this be listed closer to --exclude-table-data in the --help\n> output?\n>\n\nI think it's time to sum up what we want to do:\n\n- We'd like to use switches to export objects according to a pattern.\n- For each object type we will have an --object=PATTERN flag and a\n--exclude-object=PATTERN\n- Having a short flag for each of the long flags is not mandatory\n- The object types that pg_dump can select so far are:\n - table (already written)\n - schema (already written)\n - extension (half-written, --exclude-extension not written)\n - routine (TBD ASAP). 
Routine flag operates on stored functions, stored\nprocedures, aggregate functions, and window functions.\n- By default, pg_dump does not export system objects but we found out that\nwe could use --table='pg_catalog.*' to export them. This is a bug and will\nbe fixed. pg_dump won't have the ability to export any system object\nanymore. Should the fix belong to that patch or do I need to create a\nseparate patch? (Seems to me it should be separated)\n\nIf everyone is ok with the points above, I'll write both patches.\n\nHave a nice day,\n\nLætitia", "msg_date": "Sat, 26 Mar 2022 09:53:19 +0100", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Sat, Mar 26, 2022 at 1:53 AM Laetitia Avrot <laetitia.avrot@gmail.com>\nwrote:\n\n> Hello all,\n>\n> Le sam. 26 mars 2022 à 01:13, Michael Paquier <michael@paquier.xyz> a\n> écrit :\n>\n>> On Fri, Mar 25, 2022 at 10:09:33PM +0100, Daniel Gustafsson wrote:\n>> > Agreed.  In this case it seems that adding --exclude-extension would\n>> make sense\n>> > to keep conistency.  I took a quick stab at doing so with the attached\n>> while\n>> > we're here.\n>>\n>> src/test/modules/test_pg_dump would be the best place for the addition\n>> of a couple of tests with this new switch.  
Better to check as well\n>> what happens when a command collides with --extension and\n>> --exclude-extension.\n>>\n>> printf(_(\" -e, --extension=PATTERN dump the specified\n>> extension(s) only\\n\"));\n>> + printf(_(\" --exclude-extension=PATTERN do NOT dump the specified\n>> extension(s)\\n\"));\n>> Shouldn't this be listed closer to --exclude-table-data in the --help\n>> output?\n>>\n>\n> I think it's time to sum up what we want to do:\n>\n> - We'd like to use switches to export objects according to a pattern.\n> - For each object type we will have an --object=PATTERN flag and a\n> --exclude-object=PATTERN\n> - Having a short flag for each of the long flags is not mandatory\n> - The object types that pg_dump can select so far are:\n> - table (already written)\n> - schema (already written)\n> - extension (half-written, --exclude-extension not written)\n> - routine (TBD ASAP). Routine flag operates on stored functions,\n> stored procedures, aggregate functions, and window functions.\n> - By default, pg_dump does not export system objects but we found out that\n> we could use --table='pg_catalog.*' to export them. This is a bug and will\n> be fixed. pg_dump won't have the ability to export any system object\n> anymore. Should the fix belong to that patch or do I need to create a\n> separate patch? (Seems to me it should be separated)\n>\n> If everyone is ok with the points above, I'll write both patches.\n>\n>\nThat looks correct.\n\nI would say we should make the --table change and the --exclude-extension\nchange as separate commits.\n\nMichael's question brought up a point that we should address. I do not\nthink having these (now) 4 pairs of options presented strictly\nalphabetically in the documentation is a good choice and we should deviate\nfrom that convention here for something more user-friendly, and to reduce\nthe repetitiveness that comes from having basically what could be one pair\nof options actually implemented as 3 pairs. 
My initial approach would be\nto move them all to a subsection after the --help parameter and before the\nsection header for -d. That section would be presented something like:\n\n\"\"\"\nThese options allow for fine-grained control of which user objects are\nproduced by the dump (system objects are never dumped). If no inclusion\noptions are specified all objects are dumped except those that are\nexplicitly excluded. If even one inclusion option is specified then only\nthose objects selected for inclusion, and not excluded, will appear in the\ndump.\n\nThese options can appear multiple times within a single pg_dump command\nline. For each of these there is a mandatory pattern value, so the actual\noption looks like, e.g., --table='public.*', which will select all\nrelations in the public schema. See (Patterns). When using wildcards, be\ncareful to quote the pattern if needed to prevent the shell from expanding\nthe wildcards.\n\nWhen using these options, dependencies of the selected objects are not\nautomatically dumped, thus making such a dump potentially unsuitable for\nrestoration into a clean database.\n\nThis subset of options control which schemas to select objects from for an\notherwise normal dump.\n\n--schema / -n\n--exclude-schema / -N\n\nThe following subset specifies which non-schema objects to include. These\nare added to the objects that end up selected due to their presence in a\nschema. Specifically, the --exclude-schema option is ignored while\nprocessing these options.\n\n--table / -t\n Considers all relations, not just tables. i.e., views, materialized\nviews, foreign tables, and sequences.\n--routine\n Considers functions, procedures, aggregates, window functions\n--extension / -e\n Considers extensions\n\nWhen dumping data, only local table data is dumped by default. 
Specific\ntable data can be excluded using the --exclude-table-data option.\n\nSpecifying a foreign server using --include-foreign-data will cause related\nforeign table data to also be dumped.\n\nThe following subset specifies which objects to exclude. An object that\nmatches one of these patterns will never be dumped.\n\n--exclude-table / -T\n--exclude-routine\n--exclude-extension\n\nThe following options control the dumping of large objects:\n\n-b\n--blobs\nInclude large objects in the dump. This is the default behavior except when\n--schema, --table, or --schema-only is specified. The -b switch is\ntherefore only useful to add large objects to dumps where a specific schema\nor table has been requested. Note that blobs are considered data and\ntherefore will be included when --data-only is used, but not when\n--schema-only is.\n\n-B\n--no-blobs\nExclude large objects in the dump.\n\nWhen both -b and -B are given, the behavior is to output large objects,\nwhen data is being dumped, see the -b documentation.\n\"\"\"\n\nI've kept the blob wording as-is but moved it here since it seems to fit\nthe criteria of the section. I don't necessarily believe the wording is\nthe best possible. In particular the mention of --schema-only should\nsimply be removed. The comment that blobs are data implies they are not\ndumped during --schema-only. Then, --schema and --table should be\nconsolidated to refer to the \"inclusion options\" listed immediately above.\nI don't know why we didn't just error if the user specified both -b and -B\nat the same time. Is it too late to change that decision?\n\nI'm tempted to move the three section-related options to the top of this as\nwell. 
Ordering --data-only and --schema-only alphabetically also doesn't\nmake sense to me, nor does keeping them away from the --section option.\n\n--no-privileges is separated from the other --no-[property] options (e.g.,\n--no-comments) due to its having an -X short option and everything with a\nshort option being listed before things without...I would move it, and\nprobably place all of the \"-no-[property] stuff, as-is, in a subsection\nwith a descriptive leading paragraph.  The blob options probably belong\nhere as well.\n\nYes, this is a fairly radical suggestion.  But combined with some xrefs\nfrom a table of contents at the top of the page I do believe it would make\nfor a better user experience.  We already do this for the database\nconnection parameters section of the documentation and the same rationale\nexists for taking the remaining large list of options and categorizing them\ninto meaningful units.\n\nDavid J.", "msg_date": "Sat, 26 Mar 2022 10:00:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "On Sat, Mar 26, 2022 at 09:53:19AM +0100, Laetitia Avrot wrote:\n> I think it's time to sum up what we want to do:\n> \n> - We'd like to use switches to export objects according to a pattern.\n> - For each object type we will have an --object=PATTERN flag and a\n> --exclude-object=PATTERN\n> - Having a short flag for each of the long flags is not mandatory\n> - The object types that pg_dump can select so far are:\n> - table (already written)\n> - schema (already written)\n\n> - extension (half-written, --exclude-extension not written)\n\nI would be to blame on this item.\n\n> - routine (TBD ASAP). 
Routine flag operates on stored functions, stored\n> procedures, aggregate functions, and window functions.\n> - By default, pg_dump does not export system objects but we found out that\n> we could use --table='pg_catalog.*' to export them. This is a bug and will\n> be fixed. pg_dump won't have the ability to export any system object\n> anymore. Should the fix belong to that patch or do I need to create a\n> separate patch? (Seems to me it should be separated)\n> \n> If everyone is ok with the points above, I'll write both patches.\n\nLooks clear to me that a different design is wanted here, and that\nthis won't make it for v15, so I have marked the patch as returned\nwith feedback in the CF app.\n--\nMichael", "msg_date": "Thu, 7 Apr 2022 14:43:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" }, { "msg_contents": "Le jeu. 7 avr. 2022, 07:43, Michael Paquier <michael@paquier.xyz> a écrit :\n\n>\n> Looks clear to me that a different design is wanted here, and that\n> this won't make it for v15, so I have marked the patch as returned\n> with feedback in the CF app.\n>\n> Hello,\n\nI agree with Michael, this won't be ready for PG15. I had planned to work\non this sooner but life happened...\n\nHave a great day,\n\nLætitia", "msg_date": "Fri, 8 Apr 2022 08:36:13 +0200", "msg_from": "Laetitia Avrot <laetitia.avrot@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_dump new feature: exporting functions only. Bad or good idea ?" } ]
[ { "msg_contents": "Hi,\n\nin the course of https://postgr.es/m/3471359.1615937770%40sss.pgh.pa.us\nI saw a leak in pgstat_read_statsfiles(), more precisely:\n\t/* Allocate the space for replication slot statistics */\n\treplSlotStats = palloc0(max_replication_slots * sizeof(PgStat_ReplSlotStats));\n\nthe issue is that the current memory context is not set by\npgstat_read_statsfiles().\n\nIn some cases CurrentMemoryContext is going to be a long-lived context,\naccumulating those allocations over time. In other contexts it will be a\ntoo short lived context, e.g. an ExprContext from the pg_stat_*\ninvocation in the query. A reproducer for the latter:\n\npostgres[2252294][1]=# SELECT pg_create_logical_replication_slot('test', 'test_decoding');\n┌────────────────────────────────────┐\n│ pg_create_logical_replication_slot │\n├────────────────────────────────────┤\n│ (test,0/456C1878) │\n└────────────────────────────────────┘\n(1 row)\n\npostgres[2252294][1]=# BEGIN ;\nBEGIN\n\npostgres[2252294][1]*=# SELECT * FROM pg_stat_replication_slots ;\n┌───────────┬────────────┬─────────────┬─────────────┬─────────────┬──────────────┬──────────────┬─────────────┐\n│ slot_name │ spill_txns │ spill_count │ spill_bytes │ stream_txns │ stream_count │ stream_bytes │ stats_reset │\n├───────────┼────────────┼─────────────┼─────────────┼─────────────┼──────────────┼──────────────┼─────────────┤\n│ test │ 0 │ 0 │ 0 │ 0 │ 0 │ 0 │ (null) │\n└───────────┴────────────┴─────────────┴─────────────┴─────────────┴──────────────┴──────────────┴─────────────┘\n(1 row)\n\npostgres[2252294][1]*=# SELECT * FROM pg_stat_replication_slots ;\n┌─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────>\n│ >\n├─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────>\n│ 
\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F\\x7F>\n└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────>\n(1 row)\n\nI'll push the minimal fix of forcing the allocation to happen in\npgStatLocalContext and setting it to NULL in pgstat_clear_snapshot().\n\n\nBut it seems like we just shouldn't allocate it dynamically at all?\nmax_replication_slots doesn't change during postmaster lifetime, so it\nseems like it should just be allocated once?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 16:04:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "replication slot stats memory bug" }, { "msg_contents": "Hi,\n\nOn 2021-03-17 16:04:47 -0700, Andres Freund wrote:\n> I'll push the minimal fix of forcing the allocation to happen in\n> pgStatLocalContext and setting it to NULL in pgstat_clear_snapshot().\n\nDone: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5f79580ad69f6e696365bdc63bc265f45bd77211\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 16:25:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: replication slot stats memory bug" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I saw a leak in pgstat_read_statsfiles(), more precisely:\n> \t/* Allocate the space for replication slot statistics */\n> \treplSlotStats = palloc0(max_replication_slots * sizeof(PgStat_ReplSlotStats));\n\nYeah, I just found that myself. I think your fix is good.\n\n> But it seems like we just shouldn't allocate it dynamically at all?\n> max_replication_slots doesn't change during postmaster lifetime, so it\n> seems like it should just be allocated once?\n\nMeh. 
I don't see a need to wire in such an assumption here.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 17 Mar 2021 19:36:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: replication slot stats memory bug" }, { "msg_contents": "On Thu, Mar 18, 2021 at 4:55 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-17 16:04:47 -0700, Andres Freund wrote:\n> > I'll push the minimal fix of forcing the allocation to happen in\n> > pgStatLocalContext and setting it to NULL in pgstat_clear_snapshot().\n>\n> Done: https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5f79580ad69f6e696365bdc63bc265f45bd77211\n>\n\nThank you!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Mar 2021 07:01:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication slot stats memory bug" }, { "msg_contents": "Hi,\n\nOn 2021-03-17 19:36:46 -0400, Tom Lane wrote:\n> > But it seems like we just shouldn't allocate it dynamically at all?\n> > max_replication_slots doesn't change during postmaster lifetime, so it\n> > seems like it should just be allocated once?\n> \n> Meh. I don't see a need to wire in such an assumption here.\n\nIt does make it easier for the shared memory stats patch, because if\nthere's a fixed number + location, the relevant stats reporting doesn't\nneed to go through a hashtable with the associated locking. I guess\nthat may have colored my perception that it's better to just have a\nstatically sized memory allocation for this. Noteworthy that SLRU stats\nare done in a fixed size allocation as well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 17 Mar 2021 18:51:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: replication slot stats memory bug" } ]
[ { "msg_contents": "Hi all,\n\nWell, as $subject tells, I just found confusing that \\h does not\ncomplete the so-said command, the only one using IMPORT as keyword,\nso I'd like to do the following:\n--- a/src/bin/psql/tab-complete.c\n+++ b/src/bin/psql/tab-complete.c\n@@ -1493,7 +1493,7 @@ psql_completion(const char *text, int start, int end)\n \"ABORT\", \"ALTER\", \"ANALYZE\", \"BEGIN\", \"CALL\", \"CHECKPOINT\", \"CLOSE\", \"CLUSTER\",\n \"COMMENT\", \"COMMIT\", \"COPY\", \"CREATE\", \"DEALLOCATE\", \"DECLARE\",\n \"DELETE FROM\", \"DISCARD\", \"DO\", \"DROP\", \"END\", \"EXECUTE\", \"EXPLAIN\",\n- \"FETCH\", \"GRANT\", \"IMPORT\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n+ \"FETCH\", \"GRANT\", \"IMPORT FOREIGN SCHEMA\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n \"MOVE\", \"NOTIFY\", \"PREPARE\",\n \"REASSIGN\", \"REFRESH MATERIALIZED VIEW\", \"REINDEX\", \"RELEASE\",\n \"RESET\", \"REVOKE\", \"ROLLBACK\",\n\nThat's not the patch of the year.\n\nThanks,\n--\nMichael", "msg_date": "Thu, 18 Mar 2021 15:58:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "psql tab completion for \\h with IMPORT FOREIGN SCHEMA" }, { "msg_contents": "On Thu, Mar 18, 2021 at 03:58:46PM +0900, Michael Paquier wrote:\n> \n> Well, as $subject tells, I just found confusing that \\h does not\n> complete the so-said command, the only one using IMPORT as keyword,\n> so I'd like to do the following:\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -1493,7 +1493,7 @@ psql_completion(const char *text, int start, int end)\n> \"ABORT\", \"ALTER\", \"ANALYZE\", \"BEGIN\", \"CALL\", \"CHECKPOINT\", \"CLOSE\", \"CLUSTER\",\n> \"COMMENT\", \"COMMIT\", \"COPY\", \"CREATE\", \"DEALLOCATE\", \"DECLARE\",\n> \"DELETE FROM\", \"DISCARD\", \"DO\", \"DROP\", \"END\", \"EXECUTE\", \"EXPLAIN\",\n> - \"FETCH\", \"GRANT\", \"IMPORT\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n> + \"FETCH\", \"GRANT\", \"IMPORT 
FOREIGN SCHEMA\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n> \"MOVE\", \"NOTIFY\", \"PREPARE\",\n> \"REASSIGN\", \"REFRESH MATERIALIZED VIEW\", \"REINDEX\", \"RELEASE\",\n> \"RESET\", \"REVOKE\", \"ROLLBACK\",\n\nLooks sensible to me.\n\n\n", "msg_date": "Thu, 18 Mar 2021 15:13:17 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql tab completion for \\h with IMPORT FOREIGN SCHEMA" }, { "msg_contents": "\n\n\n\n\n‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐\nOn Thursday, March 18, 2021 8:13 AM, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> On Thu, Mar 18, 2021 at 03:58:46PM +0900, Michael Paquier wrote:\n>\n> > Well, as $subject tells, I just found confusing that \\h does not\n> > complete the so-said command, the only one using IMPORT as keyword,\n> > so I'd like to do the following:\n> > --- a/src/bin/psql/tab-complete.c\n> > +++ b/src/bin/psql/tab-complete.c\n> > @@ -1493,7 +1493,7 @@ psql_completion(const char *text, int start, int end)\n> > \"ABORT\", \"ALTER\", \"ANALYZE\", \"BEGIN\", \"CALL\", \"CHECKPOINT\", \"CLOSE\", \"CLUSTER\",\n> > \"COMMENT\", \"COMMIT\", \"COPY\", \"CREATE\", \"DEALLOCATE\", \"DECLARE\",\n> > \"DELETE FROM\", \"DISCARD\", \"DO\", \"DROP\", \"END\", \"EXECUTE\", \"EXPLAIN\",\n> >\n> > - \"FETCH\", \"GRANT\", \"IMPORT\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n> >\n> >\n> >\n> > - \"FETCH\", \"GRANT\", \"IMPORT FOREIGN SCHEMA\", \"INSERT\", \"LISTEN\", \"LOAD\", \"LOCK\",\n> > \"MOVE\", \"NOTIFY\", \"PREPARE\",\n> > \"REASSIGN\", \"REFRESH MATERIALIZED VIEW\", \"REINDEX\", \"RELEASE\",\n> > \"RESET\", \"REVOKE\", \"ROLLBACK\",\n> >\n> >\n>\n> Looks sensible to me.\n\n\nIt seems helpful. 
Thank you.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 07:45:36 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": false, "msg_subject": "Re: psql tab completion for \\h with IMPORT FOREIGN SCHEMA" }, { "msg_contents": "On Thu, Mar 18, 2021 at 07:45:36AM +0000, gkokolatos@pm.me wrote:\n> It seems helpful. Thank you.\n\nThanks, applied then.\n--\nMichael", "msg_date": "Fri, 19 Mar 2021 09:23:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: psql tab completion for \\h with IMPORT FOREIGN SCHEMA" } ]
[ { "msg_contents": "Dear hacker:\r\n&nbsp; &nbsp; I am a student from Nanjing University. I have some troubles about command 'initdb'. After I modify sth about system catalog, I use initdb to init the database. But it caused this problem:&nbsp;\r\n&nbsp; &nbsp; 'performing post-bootstrap initialization ... Segmentation fault (core dumped)'\r\n&nbsp; &nbsp; 'child process exited with exit code 139'\r\n\r\n&nbsp; &nbsp;&nbsp;\r\n&nbsp; &nbsp; When I do 'make' and 'make install', there is no warning or error infomation appeared. Compilation step seems to have no problem. Also, I use gdb to backtrace initdb, the result is as below:\r\n\r\n\r\nperforming post-bootstrap initialization ... Segmentation fault (core dumped)\r\nProgram received signal SIGPIPE, Broken pipe.0x00007ffff7475224 in __GI___libc_write (fd=4, buf=0x555555838910, nbytes=71)&nbsp; &nbsp; at ../sysdeps/unix/sysv/linux/write.c:2727\t../sysdeps/unix/sysv/linux/write.c: No such file or directory.(gdb) bt#0&nbsp; 0x00007ffff7475224 in __GI___libc_write (fd=4, buf=0x555555838910,&nbsp;&nbsp; &nbsp; nbytes=71) at ../sysdeps/unix/sysv/linux/write.c:27#1&nbsp; 0x00007ffff73f028d in _IO_new_file_write (f=0x555555828040,&nbsp;&nbsp; &nbsp; data=0x555555838910, n=71) at fileops.c:1203#2&nbsp; 0x00007ffff73f2021 in new_do_write (to_do=71,&nbsp;&nbsp; &nbsp; data=0x555555838910 \" * string literal (including a function body!) or a multiline comment.\\n\\n\\n\\nguage';\\nge';\\n;\\n;\\nrministic, collencoding, collcollate, collctype)VALUES (pg_nextoid('pg_catalog.pg_collation', 'oid', 'pg_cata\"...,&nbsp;&nbsp; &nbsp; fp=0x555555828040) at fileops.c:457#3&nbsp; _IO_new_do_write (fp=0x555555828040,&nbsp;&nbsp; &nbsp; data=0x555555838910 \" * string literal (including a function body!) 
or a multiline comment.\\n\\n\\n\\nguage';\\nge';\\n;\\n;\\nrministic, collencoding, collcollate, collctype)VALUES (pg_nextoid('pg_catalog.pg_collation', 'oid', 'pg_cata\"...,\r\n    to_do=71) at fileops.c:433\r\n#4  0x00007ffff73ef858 in _IO_new_file_sync (fp=0x555555828040)\r\n    at fileops.c:813\r\n#5  0x00007ffff73e395d in __GI__IO_fflush (fp=0x555555828040) at iofflush.c:40\r\n#6  0x000055555555d5ec in setup_dictionary (cmdfd=0x555555828040)\r\n    at initdb.c:1675\r\n#7  0x000055555555f838 in initialize_data_directory () at initdb.c:2909\r\n#8  0x0000555555560234 in main (argc=3, argv=0x7fffffffded8) at initdb.c:3228\r\n\r\n    I am not able to fully interpret the backtrace, but it seems strange that it receives 'signal SIGPIPE, Broken pipe'.\r\n    Looking forward to your reply. 
Yours sincerely.", "msg_date": "Thu, 18 Mar 2021 16:35:48 +0800", "msg_from": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com>", "msg_from_op": true, "msg_subject": "Query about 'initdb' error" }, { "msg_contents": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com> writes:\n> &nbsp; &nbsp; When I do 'make' and 'make install', there is no warning or error infomation appeared. Compilation step seems to have no problem. Also, I use gdb to backtrace initdb, the result is as below:\n\n> performing post-bootstrap initialization ... Segmentation fault (core dumped)\n> Program received signal SIGPIPE, Broken pipe.\n\n> I am not able to fully recognize the insight of backtrace, but it's\n> strange that it receives 'signal SIGPIPE, Broken pipe'.&nbsp; &nbsp;\n> Looking forward to your reply. Yours sincerely.\n\nAt this phase, initdb is just shoving SQL commands down a pipe to a\n\"standalone backend\" that's doing the real work. Evidently your\nbackend dumped core, and you need to be looking at that dump not\ninitdb itself. Depending on how your machine is set up, you might\nneed to use initdb's --noclean option to keep it from throwing away\nthe incomplete data directory, as the backend might have dropped core\nin there.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 09:55:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query about 'initdb' error" } ]
[ { "msg_contents": "While reviewing/testing subscriber-side work for $SUBJECT [1], I\nnoticed a problem that seems to need a broader discussion, so started\nthis thread. We can get prepare for the same GID more than once for\nthe cases where we have defined multiple subscriptions for\npublications on the same server and prepared transaction has\noperations on tables subscribed to those subscriptions. For such\ncases, one of the prepare will be successful and others will fail in\nwhich case the server will send them again. Once the commit prepared\nis done for the first one, the next prepare will be successful. Now,\nthis is not ideal but will work.\n\nHowever, if the user has setup synchronous_standby_names for all the\nsubscriptions then we won't be able to proceed because the prepare on\npublisher will wait for all the subscriptions to ack and the\nsubscriptions are waiting for the first prepare to finish. See an\nexample below for such a situation. I think this can also happen if we\nget any key violation while applying the changes on the subscriber,\nbut for that, we can ask the user to remove the violating key on the\nsubscriber as that is what we suggest now also for commits. Similarly,\nsay the user has already prepared the transaction with the same GID on\nsubscriber-node, then also we can get into a similar situation but for\nthat, we can ask the user to commit such a GID.\n\nWe can think of appending some unique identifier (like subid) with GID\nbut that won't work for cascaded standby setup (where the prepares on\nsubscriber will be again sent to another subscriber) as the GID can\nbecome too long. So that might not be a good solution, maybe we can\noptimize it in some way that we append only when there is a GID clash.\nThe other thing we could do is to ask the user to temporarily disable\nthe subscription and change synchronous_standby_settings on the\npublisher node. 
Any better ideas?\n\nExample of the above scenario, you can see this problem after applying\nthe patches at [1].\n\nPublisher\n=================\nCREATE TABLE mytbl(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nBEGIN;\nINSERT INTO mytbl(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl(somedata, text) VALUES (1, 2);\nCOMMIT;\n\nCREATE PUBLICATION mypub FOR TABLE mytbl;\n\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nBEGIN;\nINSERT INTO mytbl1(somedata, text) VALUES (1, 1);\nINSERT INTO mytbl1(somedata, text) VALUES (1, 2);\nCOMMIT;\n\nCREATE PUBLICATION mypub1 FOR TABLE mytbl1;\n\nSubscriber\n=============\nCREATE TABLE mytbl(id SERIAL PRIMARY KEY, somedata int, text varchar(120));\n\nCREATE SUBSCRIPTION mysub\n CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypub WITH(two_phase = on);\n\nCREATE TABLE mytbl1(id SERIAL PRIMARY KEY, somedata int, text\nvarchar(120)); CREATE SUBSCRIPTION mysub1\n CONNECTION 'host=localhost port=5432 dbname=postgres'\n PUBLICATION mypub1 WITH(two_phase = on);\n\nNow, set synchronous_standby_names = 'FIRST 2 (mysub, mysub1)' on the\npublisher in postgresql.conf and restart both publisher and\nsubscriber, actually restart is not required as\nsynchronous_standby_names is a SIGHUP parameter.\n\nPublisher\n=============\nBEGIN;\nInsert into mytbl values(17,1,18);\nInsert into mytbl1 values(17,1,18);\nPrepare Transaction 'foo';\n\nNow, this Prepare transaction will wait forever because on subscriber\nwe are getting \"ERROR: transaction identifier \"foo\" is already in\nuse\" which means it is waiting for a publisher to send commit prepared\nfor first apply worker and publisher is waiting for both the\nsubscriptions to send ack. 
This is happening because the prepared\ntransaction on publisher operates on tables of both subscriptions.\n\nIn short, on the subscriber, both the apply workers (corresponding to\ntwo subscriptions) are getting the same prepare transaction GID,\nleading to an error on the subscriber and making the publisher wait\nforever.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/CAHut%2BPv3X7YH_nDEjH1ZJf5U6M6DHHtEjevu7PY5Dv5071jQ4A%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 18 Mar 2021 15:15:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Logical Replication vs. 2PC" }, { "msg_contents": "On Thu, Mar 18, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> While reviewing/testing subscriber-side work for $SUBJECT [1], I\n> noticed a problem that seems to need a broader discussion, so started\n> this thread. We can get prepare for the same GID more than once for\n> the cases where we have defined multiple subscriptions for\n> publications on the same server and prepared transaction has\n> operations on tables subscribed to those subscriptions. For such\n> cases, one of the prepare will be successful and others will fail in\n> which case the server will send them again. Once the commit prepared\n> is done for the first one, the next prepare will be successful. Now,\n> this is not ideal but will work.\n>\n> However, if the user has setup synchronous_standby_names for all the\n> subscriptions then we won't be able to proceed because the prepare on\n> publisher will wait for all the subscriptions to ack and the\n> subscriptions are waiting for the first prepare to finish. See an\n> example below for such a situation. I think this can also happen if we\n> get any key violation while applying the changes on the subscriber,\n> but for that, we can ask the user to remove the violating key on the\n> subscriber as that is what we suggest now also for commits. 
Similarly,\n> say the user has already prepared the transaction with the same GID on\n> subscriber-node, then also we can get into a similar situation but for\n> that, we can ask the user to commit such a GID.\n>\n> We can think of appending some unique identifier (like subid) with GID\n> but that won't work for cascaded standby setup (where the prepares on\n> subscriber will be again sent to another subscriber) as the GID can\n> become too long. So that might not be a good solution, maybe we can\n> optimize it in some way that we append only when there is a GID clash.\n> The other thing we could do is to ask the user to temporarily disable\n> the subscription and change synchronous_standby_settings on the\n> publisher node. Any better ideas?\n>\n> In short, on the subscriber, both the apply workers (corresponding to\n> two subscriptions) are getting the same prepare transaction GID,\n> leading to an error on the subscriber and making the publisher wait\n> forever.\n>\n> Thoughts?\n\nI see the main problem here is because the GID clashes as you have\nrightly pointed out. I'm not sure if we are allowed to change the\nGID's in the subscriber.\nIf we are allowed to change the GID's in the subscriber. Worker can do\nsomething like: When the apply worker is applying the prepared\ntransaction, try to apply the prepare transaction with the GID as is.\nIf there is an error GID already in use, workers can try to catch that\nerror and change the GID to a fixed length hash key of (GID,\nsubscription name, node name, timestamp,etc) to generate a unique hash\nkey(modified GID), prepare the transaction with the generated hash\nkey. Store this key and the original GID for later use, this will be\nrequired during commit prepared or in case of rollback prepared. When\napplying the commit prepared or rollback prepared, change the GID with\nthe hash key that was used during the prepare transaction.\nIf we are not allowed to change the GID's in the subscriber. 
This\nthought is in similar lines where in one of the earlier design\nprepared spool files was used. Can we have some mechanism where we can\nidentify this scenario and store the failing prepare transaction\ninformation, so that when the worker is restarted worker can use this\nstored information to identify the failed prepare transaction, once\nworker identifies that it is a failed prepare transaction then all of\nthis transaction can be serialized into a file and later when the\napply worker receives a commit prepared it can get the changes from\nthe file and apply this transaction or discard the file in case of\nrollback prepared.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 18 Mar 2021 17:31:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Thu, Mar 18, 2021 at 5:31 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Mar 18, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > In short, on the subscriber, both the apply workers (corresponding to\n> > two subscriptions) are getting the same prepare transaction GID,\n> > leading to an error on the subscriber and making the publisher wait\n> > forever.\n> >\n> > Thoughts?\n>\n> I see the main problem here is because the GID clashes as you have\n> rightly pointed out. I'm not sure if we are allowed to change the\n> GID's in the subscriber.\n> If we are allowed to change the GID's in the subscriber. Worker can do\n> something like: When the apply worker is applying the prepared\n> transaction, try to apply the prepare transaction with the GID as is.\n> If there is an error GID already in use, workers can try to catch that\n> error and change the GID to a fixed length hash key of (GID,\n> subscription name, node name, timestamp,etc) to generate a unique hash\n> key(modified GID), prepare the transaction with the generated hash\n> key. 
Store this key and the original GID for later use, this will be\n> required during commit prepared or in case of rollback prepared. When\n> applying the commit prepared or rollback prepared, change the GID with\n> the hash key that was used during the prepare transaction.\n>\n\nI think it will be tricky to distinguish the clash is due to the user\nhas already prepared a xact with the same GID on a subscriber or it is\nfrom one of the apply workers. For earlier cases, the user needs to\ntake action. You need to change both file format and WAL for this and\nnot sure but generating hash key for this looks a bit shaky. Now, we\nmight be able to make it work but how about if we always append subid\nwith GID for prepare and store GID and subid separately in WAL (I\nthink we can store additional subscriber-id information\nconditionally). Then during recovery, we will use both GID and subid\nfor prepare but for decoding, we will only use GID. This way for\ncascaded set up we can always send GID by reading WAL and the\ndownstream subscriber will append its subid to GID. I know this is\nalso not that straight-forward but I don't have any better ideas at\nthe moment.\n\n> If we are not allowed to change the GID's in the subscriber. This\n> thought is in similar lines where in one of the earlier design\n> prepared spool files was used. 
Can we have some mechanism where we can\n> identify this scenario and store the failing prepare transaction\n> information, so that when the worker is restarted worker can use this\n> stored information to identify the failed prepare transaction, once\n> worker identifies that it is a failed prepare transaction then all of\n> this transaction can be serialized into a file and later when the\n> apply worker receives a commit prepared it can get the changes from\n> the file and apply this transaction or discard the file in case of\n> rollback prepared.\n>\n\nHmm, this idea will face similar problems as described here [1].\n\nNote: added Petr Jelinek to see if he has any opinion on this matter.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1LVEdPYnjdajYzu3k6KEii1%2BF0jdQ6sWnYugiHcSGZD6Q%40mail.gmail.com\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 19 Mar 2021 09:05:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Thu, Mar 18, 2021 at 8:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n>\n>\n> However, if the user has setup synchronous_standby_names for all the\n> subscriptions then we won't be able to proceed because the prepare on\n> publisher will wait for all the subscriptions to ack and the\n> subscriptions are waiting for the first prepare to finish.\n>\n\nBut is it a valid use case to have two synchronous standbys which are two\nsubscriptions that are on the same server both with 2pc enabled?\nIf the purpose of synchronous standby is for durability to prevent data\nloss, then why split your tables across 2 subscriptions which are on the\nsame server?\nMaybe it could be documented warning users from having such a setup. 
Do we\nreally want to create a solution for an impractical scenario?\n\nregards,\nAjin Cherian\nFujitsu Australia\n\n", "msg_date": "Fri, 19 Mar 2021 16:50:59 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On 18.03.21 10:45, Amit Kapila wrote:\n> While reviewing/testing subscriber-side work for $SUBJECT [1], I\n> noticed a problem that seems to need a broader discussion, so started\n> this thread. We can get prepare for the same GID more than once for\n> the cases where we have defined multiple subscriptions for\n> publications on the same server and prepared transaction has\n> operations on tables subscribed to those subscriptions. For such\n> cases, one of the prepare will be successful and others will fail in\n> which case the server will send them again. Once the commit prepared\n> is done for the first one, the next prepare will be successful. Now,\n> this is not ideal but will work.\n\nThat's assuming you're using the same gid on the subscriber, which does \nnot apply to all use cases. 
It clearly depends on what you try to \nachieve by decoding in two phases, obviously.\n\nWe clearly don't have this issue in BDR, because we're using xids \n(together with a node id) to globally identify transactions and \nconstruct local (per-node) gids that don't clash.\n\n(Things get even more interesting if you take into account that users \nmay reuse the same gid for different transactions. Lag between \nsubscriptions could then lead to blocking between different origin \ntransactions...)\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Fri, 19 Mar 2021 16:52:12 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Fri, Mar 19, 2021 at 9:22 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 18.03.21 10:45, Amit Kapila wrote:\n> > While reviewing/testing subscriber-side work for $SUBJECT [1], I\n> > noticed a problem that seems to need a broader discussion, so started\n> > this thread. We can get prepare for the same GID more than once for\n> > the cases where we have defined multiple subscriptions for\n> > publications on the same server and prepared transaction has\n> > operations on tables subscribed to those subscriptions. For such\n> > cases, one of the prepare will be successful and others will fail in\n> > which case the server will send them again. Once the commit prepared\n> > is done for the first one, the next prepare will be successful. Now,\n> > this is not ideal but will work.\n>\n> That's assuming you're using the same gid on the subscriber, which does\n> not apply to all use cases. 
It clearly depends on what you try to\n> achieve by decoding in two phases, obviously.\n>\n> We clearly don't have this issue in BDR, because we're using xids\n> (together with a node id) to globally identify transactions and\n> construct local (per-node) gids that don't clash.\n>\n\nSo, I think you are using xid of publisher and origin_id of\nsubscription to achieve uniqueness because both will be accessible in\nprepare and commit prepared. Right? If so, I think that will work out\nhere as well. But if we think to use xid generated on subscriber then\nwe need to keep some mapping of original GID sent by publisher and GID\ngenerated by us (origin+xid of subscription) because, at commit\nprepared time, we won't know that xid.\n\n> (Things get even more interesting if you take into account that users\n> may reuse the same gid for different transactions.\n>\n\nAre you saying that users might use the same GID which we have\nconstructed internally (say by combining origin and xid: originid_xid)\nand then there will be conflict while replaying such transactions?\n\n\n> Lag between\n> subscriptions could then lead to blocking between different origin\n> transactions...)\n>\n\nRight and even for one subscription that can lead to blocking\ntransactions. But isn't it similar to what we get for a primary key\nviolation while replaying transactions? In that case, we suggest users\nremove conflicting rows, so in such cases, we can recommend users to\ncommit/rollback such prepared xacts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Mar 2021 07:47:41 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 
2PC" }, { "msg_contents": "On 20.03.21 03:17, Amit Kapila wrote:\n> Are you saying that users might use the same GID which we have\n> constructed internally (say by combining origin and xid: originid_xid)\n> and then there will be conflict while replaying such transactions?\n\nNo, I was pondering about a user doing (in short sequence):\n\n..\nPREPARE TRANSACTION 'foobar';\nCOMMIT PREPARED 'foobar';\n\nBEGIN;\n...\nPREPARE TRANSACTION 'foobar';\nCOMMIT PREPARED 'foobar';\n\n> Right and even for one subscription that can lead to blocking\n> transactions. But isn't it similar to what we get for a primary key\n> violation while replaying transactions?\n\nSure, it's a conflict that prevents application. A primary key conflict \nmay be different in that it does not eventually resolve, though.\n\n> In that case, we suggest users\n> remove conflicting rows, so in such cases, we can recommend users to\n> commit/rollback such prepared xacts?\n\nRight, if you use gids, you could ask the user to always provide unique \nidentifiers and not reuse them on any other node. That's putting the \nburden of coming up with unique identifiers on the user, but that's a \nperfectly fine and reasonable thing to do. (Lots of other systems out \nthere requiring a unique request id or such, which would get confused if \nyou issue requests with duplicate ids.)\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Sat, 20 Mar 2021 10:27:21 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sat, Mar 20, 2021 at 7:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 19, 2021 at 9:22 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n\n> So, I think you are using xid of publisher and origin_id of\n> subscription to achieve uniqueness because both will be accessible in\n> prepare and commit prepared. Right? If so, I think that will work out\n> here as well. 
But if we think to use xid generated on subscriber then\n> we need to keep some mapping of original GID sent by publisher and GID\n> generated by us (origin+xid of subscription) because, at commit\n> prepared time, we won't know that xid.\n\nI agree that if we use (publisher's xid + subscriber origin id)\ninstead of GID, we can resolve this deadlock issue. I was also\nthinking that is it okay to change the prepared transaction name on\nthe subscriber? I mean instead of GID if we use some other name then\nimagine a case where a user has prepared some transaction on the\npublisher and then tries to commit that on the subscriber using the\nprepared transaction name, then it will not work. But maybe this is\nnot really a practical use case. I mean why anyone would want to\nprepare a transaction on the publisher and commit that prepared\ntransaction directly on the subscriber. Thoughts?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 20 Mar 2021 16:02:24 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sat, Mar 20, 2021 at 2:57 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 20.03.21 03:17, Amit Kapila wrote:\n> > Are you saying that users might use the same GID which we have\n> > constructed internally (say by combining origin and xid: originid_xid)\n> > and then there will be conflict while replaying such transactions?\n>\n> No, I was pondering about a user doing (in short sequence):\n>\n> ..\n> PREPARE TRANSACTION 'foobar';\n> COMMIT PREPARED 'foobar';\n>\n> BEGIN;\n> ...\n> PREPARE TRANSACTION 'foobar';\n> COMMIT PREPARED 'foobar';\n>\n> > Right and even for one subscription that can lead to blocking\n> > transactions. But isn't it similar to what we get for a primary key\n> > violation while replaying transactions?\n>\n> Sure, it's a conflict that prevents application. 
A primary key conflict\n> may be different in that it does not eventually resolve, though.\n>\n> > In that case, we suggest users\n> > remove conflicting rows, so in such cases, we can recommend users to\n> > commit/rollback such prepared xacts?\n>\n> Right, if you use gids, you could ask the user to always provide unique\n> identifiers and not reuse them on any other node. That's putting the\n> burden of coming up with unique identifiers on the user, but that's a\n> perfectly fine and reasonable thing to do. (Lots of other systems out\n> there requiring a unique request id or such, which would get confused if\n> you issue requests with duplicate ids.)\n>\n\nRight, but I guess in our case using user-provided GID will conflict\nif we use multiple subscriptions on the same node. So, it is better to\ngenerate a unique identifier like we are discussing here, something\nlike (origin_id of subscription + xid of the publisher). Do you see\nany problem with that?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Mar 2021 20:44:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sat, Mar 20, 2021 at 4:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 7:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Fri, Mar 19, 2021 at 9:22 PM Markus Wanner\n> > <markus.wanner@enterprisedb.com> wrote:\n>\n> > So, I think you are using xid of publisher and origin_id of\n> > subscription to achieve uniqueness because both will be accessible in\n> > prepare and commit prepared. Right? If so, I think that will work out\n> > here as well. 
But if we think to use xid generated on subscriber then\n> > we need to keep some mapping of original GID sent by publisher and GID\n> > generated by us (origin+xid of subscription) because, at commit\n> > prepared time, we won't know that xid.\n>\n> I agree that if we use (publisher's xid + subscriber origin id)\n> instead of GID, we can resolve this deadlock issue.\n>\n\nYeah, the two things to keep in mind with this solution as well are\n(a) still it is possible that conflict can be generated if the user\nhas prepared the transaction with that name of subscriber, the chances\nof which are bleak and the user can always commit/rollback the\nconflicting GID; (b) the subscription has two publications at\ndifferent nodes and then there is some chance that both send the same\nxid, again the chances of this are bleak.\n\nI think even though in the above kind of cases there is a chance of\nconflict but it won't be a deadlock kind of situation. So, I guess it\nis better to do this solution, what do you think?\n\n> I was also\n> thinking that is it okay to change the prepared transaction name on\n> the subscriber? I mean instead of GID if we use some other name then\n> imagine a case where a user has prepared some transaction on the\n> publisher and then tries to commit that on the subscriber using the\n> prepared transaction name, then it will not work. But maybe this is\n> not really a practical use case. I mean why anyone would want to\n> prepare a transaction on the publisher and commit that prepared\n> transaction directly on the subscriber.\n>\n\nIt is not clear to me either if for such a purpose we need to use the\nsame GID as provided by the publisher. 
I don't know if there is any\nsuch use case but if there is one, maybe later we can provide an\noption with a subscription to use GID provided by the publisher when\ntwo_phase is enabled?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Mar 2021 20:53:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sat, Mar 20, 2021 at 8:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 4:02 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 7:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Fri, Mar 19, 2021 at 9:22 PM Markus Wanner\n> > > <markus.wanner@enterprisedb.com> wrote:\n> >\n> > > So, I think you are using xid of publisher and origin_id of\n> > > subscription to achieve uniqueness because both will be accessible in\n> > > prepare and commit prepared. Right? If so, I think that will work out\n> > > here as well. 
But if we think to use xid generated on subscriber then\n> > > we need to keep some mapping of original GID sent by publisher and GID\n> > > generated by us (origin+xid of subscription) because, at commit\n> > > prepared time, we won't know that xid.\n> >\n> > I agree that if we use (publisher's xid + subscriber origin id)\n> > instead of GID, we can resolve this deadlock issue.\n> >\n>\n> Yeah, the two things to keep in mind with this solution as well are\n> (a) still it is possible that conflict can be generated if the user\n> has prepared the transaction with that name of subscriber, the chances\n> of which are bleak and the user can always commit/rollback the\n> conflicting GID; (b) the subscription has two publications at\n> different nodes and then there is some chance that both send the same\n> xid, again the chances of this are bleak.\n>\n> I think even though in the above kind of cases there is a chance of\n> conflict but it won't be a deadlock kind of situation. So, I guess it\n> is better to do this solution, what do you think?\n>\n\nI have enhanced the patch for 2PC implementation on the\nsubscriber-side as per the solution discussed here [1].\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1KvXA34S24My1qnRhOn%2Bw30b2FdGNNzqh1pm0ENveGJJw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Mar 2021 13:09:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On 20.03.21 16:14, Amit Kapila wrote:\n> Right, but I guess in our case using user-provided GID will conflict\n> if we use multiple subscriptions on the same node. So, it is better to\n> generate a unique identifier like we are discussing here, something\n> like (origin_id of subscription + xid of the publisher). Do you see\n> any problem with that?\n\nNo, quite the opposite: I'm the one advocating the use of xids to \nidentify transactions. 
See my patch for filter_prepare.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Sun, 21 Mar 2021 10:17:20 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sun, Mar 21, 2021 at 2:47 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 20.03.21 16:14, Amit Kapila wrote:\n> > Right, but I guess in our case using user-provided GID will conflict\n> > if we use multiple subscriptions on the same node. So, it is better to\n> > generate a unique identifier like we are discussing here, something\n> > like (origin_id of subscription + xid of the publisher). Do you see\n> > any problem with that?\n>\n> No, quite the opposite: I'm the one advocating the use of xids to\n> identify transactions.\n>\n\nOkay.\n\n> See my patch for filter_prepare.\n>\n\nI'll think once again from this angle and respond on that thread,\nprobably one use case could be for the plugins which use xid to\ngenerate GID. In such cases, xid might be required to filter the\ntransaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:17:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Logical Replication vs. 2PC" }, { "msg_contents": "On Sunday, March 21, 2021 4:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n\r\n>I have enhanced the patch for 2PC implementation on the\r\n>subscriber-side as per the solution discussed here [1].\r\n\r\nFYI.\r\nI did the confirmation for the solution of unique GID problem raised at [1].\r\nThis problem in V61-patches at [2] is fixed in the latest V66-patches at [3].\r\n\r\nB.T.W. 
the NG (failure) log for the V61 patch set is attached, please take it as a reference.\r\n The test steps are the same as Amit described at [1].\r\n\r\n[1] - https://www.postgresql.org/message-id/CAA4eK1+opiV4aFTmWWUF9h_32=HfPOW9vZASHarT0UA5oBrtGw@mail.gmail.com\r\n[2] - https://www.postgresql.org/message-id/CAHut%2BPv3X7YH_nDEjH1ZJf5U6M6DHHtEjevu7PY5Dv5071jQ4A%40mail.gmail.com\r\n[3] - https://www.postgresql.org/message-id/CAA4eK1JPEoYAkggmLqbdD%2BcF%3DkWNpLkZb_wJ8eqj0QD2AjBTBA%40mail.gmail.com\r\n\r\nRegards,\r\nTang", "msg_date": "Wed, 24 Mar 2021 07:31:18 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: Logical Replication vs. 2PC" } ]
[ { "msg_contents": "Hello\nIn src/backend/utils/adt/genfile.c in pg_read_file we have errhint\n\n> errhint(\"Consider using %s, which is part of core, instead.\",\n>\t\t\t\t\t\t \"pg_file_read()\")\n\nShouldn't pg_read_file() be written here?\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 18 Mar 2021 12:57:46 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": true, "msg_subject": "hint Consider using pg_file_read()" }, { "msg_contents": "On Thu, Mar 18, 2021 at 10:58 AM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n> Hello\n> In src/backend/utils/adt/genfile.c in pg_read_file we have errhint\n>\n> > errhint(\"Consider using %s, which is part of core, instead.\",\n> > \"pg_file_read()\")\n>\n> Shouldn't pg_read_file() be written here?\n\nYup, it certainly looks that way. There's a really funky combination\nof names between SQL and backend there that I guess threw the original\nauthor off.\n\nWill fix.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 18 Mar 2021 11:21:45 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: hint Consider using pg_file_read()" }, { "msg_contents": "I noticed that the fix has been committed, thank you!\n\nregards, Sergei\n\n\n", "msg_date": "Thu, 18 Mar 2021 13:44:40 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": true, "msg_subject": "Re: hint Consider using pg_file_read()" } ]
[ { "msg_contents": "Hi All!\r\n\r\nHopefully I’m using correct mail list\r\nIf not please show me right direction 😊\r\n\r\nI’m quite struggling without native Change Data Capture feature in PostgreSQL.\r\n\r\nThat would be great to implement it, possibly in not so complicated way.\r\n\r\nCan Logical replication be a little bit modified or reused to not replicate data into destination table as is but to insert each change into “change table” (like in oracle 11 CDC)?\r\nThat change table should have at least a few additional columns\r\n\r\n * Operation (I/D/U)\r\n * txid\r\n * Commit_time_stamp\r\n\r\n\r\nThanks!\r\n\r\nStepan Yankevych\r\n", "msg_date": "Thu, 18 Mar 2021 13:03:17 +0000", "msg_from": "Stepan Yankevych <Stepan_Yankevych@epam.com>", "msg_from_op": true, "msg_subject": "CDC feature request" }, { "msg_contents": "On Thu, Mar 18, 2021 at 2:03 PM Stepan Yankevych <Stepan_Yankevych@epam.com>\nwrote:\n\n> Hi All!\n>\n>\n>\n> Hopefully I’m using correct mail list\n>\n> If not please show me right direction 😊\n>\n>\n>\n> I’m quite struggling without native Change Data Capture feature in\n> PostgreSQL.\n>\n>\n>\n> That would be great to implement it, possibly in not so complicated way.\n>\n>\n>\n> Can Logical replication be a little bit modified or reused to not\n> replicate data into destination table as is but to insert each change into\n> “change
table” (like in oracle 11 CDC)?\n>\n> That change table should have at least a few additional columns\n>\n> - Operation (I/D/U)\n> - txid\n> - Commit_time_stamp\n>\n>\nIf you look at logical decoding, that's basically what you have, isn't it?\nIt won't go into a table, but you can consume it into one if you want. Look\nat for example wal2json for examples of how to consume it -- but the system\nis pluggable so you can build your own or use one of the others available\nplugins.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>
", "msg_date": "Thu, 18 Mar 2021 14:07:41 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: CDC feature request" }, { "msg_contents": ">> \n>> That change table should have at least a few additional columns\n>> \n>> * Operation (I/D/U)\n>> * txid\n>> * Commit_time_stamp\n> \n> If you look at logical decoding, that's basically what you have, isn't\n> it? It won't go into a table, but you can consume it into one if you\n> want. Look at for example wal2json for examples of how to consume it\n> -- but the system is pluggable so you can build your own or use one of\n> the others available plugins.\n\nHello,\n\nAt my work, we basically did that, using wal2json, here: \nhttps://github.com/peopledoc/connemara/blob/master/connemara_replication/src/connemara_replication.c\n\nThe code is quite simple; you could probably take inspiration from it, \nor even use it directly if your needs are basic and match what is \noffered.
The replication code was written to be as fast and simple as \npossible.\n\n\n", "msg_date": "Thu, 18 Mar 2021 14:34:46 +0100", "msg_from": "Ronan Dunklau <ronan@dunklau.fr>", "msg_from_op": false, "msg_subject": "Re: CDC feature request" }, { "msg_contents": "On 18/3/21 14:03, Stepan Yankevych wrote:\n>\n> Hi All!\n>\n>  \n>\n> Hopefully I’m using correct mail list  \n>\n> If not please show me right direction 😊\n>\n>  \n>\n> I’m quite struggling without native Change Data Capture feature in\n> PostgreSQL.\n>\n>  \n>\n> That would be great to implement it, possibly in not so complicated way.\n>\n>  \n>\n> Can Logical replication be a little bit modified or reused to not\n> replicate data into destination table as is but to insert each change\n> into “change table” (like in oracle 11 CDC)?\n>\n> That change table should have at least a few additional columns\n>\n> * Operation (I/D/U)\n> * txid\n> * Commit_time_stamp\n>\n>  \n>\n> Thanks!\n>\n>  \n>\n> *Stepan Yankevych*\n>\n\n    Hi Stepan.\n\n    I would recommend checking https://debezium.io/: it stores every\nchange in Kafka with detailed metadata, and you can later transform\nand/or inject it into any destination, with a great level of flexibility,\nusing any of the database connectors available.\n\n   \n    Álvaro\n\n-- \n\nAlvaro Hernandez\n\n\n-----------\nOnGres", "msg_date": "Thu, 18 Mar 2021 14:37:01 +0100", "msg_from": "=?UTF-8?B?w4FsdmFybyBIZXJuw6FuZGV6?= <aht@ongres.com>", "msg_from_op": false, "msg_subject": "Re: CDC feature request" } ]
[ { "msg_contents": "Recent versions of git are capable of maintaining a list of commits\nfor \"git blame\" to ignore:\n\nhttps://www.moxio.com/blog/43/ignoring-bulk-change-commits-with-git-blame\n\nI tried this out myself, using my own list of pgindent commits. It\nworks very well -- much better than what you get when you ask git to\nheuristically ignore commits based on whitespace-only line changes.\nThis is not surprising, in a way; I don't actually want to avoid\nwhitespace. I just want to ignore pgindent commits.\n\nNote that there are still a small number of pgindent line changes,\neven with this. That's because sometimes it's unavoidable -- some\n\"substantively distinct lines of code\" are actually created by\npgindent. But these all seem to be <CR> lines that are only shown\nbecause there is legitimately no more appropriate commit to attribute\nthe line to. This seems like the ideal behavior to me.\n\nI propose that we (I suppose I actually mean Bruce) start maintaining\nour own file for this in git. It can be configured to run without any\nextra steps via a once-off \"git config blame.ignoreRevsFile\n.git-blame-ignore-revs\". It would only need to be updated whenever\nBruce or Tom runs pgindent.\n\nIt doesn't matter if this misses one or two smaller pgindent runs, it\nseems. Provided the really huge ones are in the file, everything works\nvery well.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 13:46:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 01:46:49PM -0700, Peter Geoghegan wrote:\n> Recent versions of git are capable of maintaining a list of commits\n> for \"git blame\" to ignore:\n> \n> https://www.moxio.com/blog/43/ignoring-bulk-change-commits-with-git-blame\n> \n> I tried this out myself, using my own list of pgindent commits. 
It\n> works very well -- much better than what you get when you ask git to\n> heuristically ignore commits based on whitespace-only line changes.\n> This is not surprising, in a way; I don't actually want to avoid\n> whitespace. I just want to ignore pgindent commits.\n> \n> Note that there are still a small number of pgindent line changes,\n> even with this. That's because sometimes it's unavoidable -- some\n> \"substantively distinct lines of code\" are actually created by\n> pgindent. But these all seem to be <CR> lines that are only shown\n> because there is legitimately no more appropriate commit to attribute\n> the line to. This seems like the ideal behavior to me.\n> \n> I propose that we (I suppose I actually mean Bruce) start maintaining\n\nActually, Tom Lane runs pgindent usually now. I do the copyright\nchange, but I don't think we would ignore those since the single-line\nchange is probably something we would want to blame.\n\n> our own file for this in git. It can be configured to run without any\n> extra steps via a once-off \"git config blame.ignoreRevsFile\n> .git-blame-ignore-revs\". It would only need to be updated whenever\n> Bruce or Tom runs pgindent.\n> \n> It doesn't matter if this misses one or two smaller pgindent runs, it\n> seems. Provided the really huge ones are in the file, everything works\n> very well.\n\nIt would certainly be very easy to pull pgindent commits out of git log\nand add them. 
I do wish we could cause everyone to honor that file, but\nit seems each user has to configure their repository to honor it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 17:10:12 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 2:10 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Actually, Tom Lane runs pgindent usually now. I do the copyright\n> change, but I don't think we would ignore those since the single-line\n> change is probably something we would want to blame.\n\nThe copyright change commits don't need to be considered here. In\npractice they're just not a problem because nobody wants or expects\n\"git blame\" to do anything more than attribute an affected line to a\ncopyright commit.\n\n> It would certainly be very easy to pull pgindent commits out of git log\n> and add them. I do wish we could cause everyone to honor that file, but\n> it seems each user has to configure their repository to honor it.\n\nThat doesn't seem like a huge problem. There is no reason why this\nshouldn't be easy to use and to maintain going forward. There just\naren't very many commits involved.\n\nAttached is my .git-blame-ignore-revs file, which has pgindent commits\nthat I'd like to exclude from git blame. The file is helpful on its\nown. 
But what we really ought to do is commit the file (perhaps with\nsome revisions) and require that it be maintained by the official\nproject workflow documented at src/tools/pgindent/README.\n\n-- \nPeter Geoghegan", "msg_date": "Thu, 18 Mar 2021 15:03:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 03:03:41PM -0700, Peter Geoghegan wrote:\n> Attached is my .git-blame-ignore-revs file, which has pgindent commits\n> that I'd like to exclude from git blame. The file is helpful on its\n> own. But what we really ought to do is commit the file (perhaps with\n> some revisions) and require that it be maintained by the official\n> project workflow documented at src/tools/pgindent/README.\n\nIt would be kind of nice if the file could be generated automatically. Have\nyou checked if 'pgindent' being on the first line of the commit is\nsufficient?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:12:27 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 3:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> It would be kind of nice if the file could be generated automatically. Have\n> you checked if 'pgindent' being on the first line of the commit is\n> sufficient?\n\nI generated the file by looking for commits that:\n\n1) Mentioned \"pgindent\" or \"PGINDENT\" in the entire commit message.\n\n2) Had more than 20 or 30 files changed.\n\nThis left me with fewer than 50 commits that cover over 20 years of\nhistory since the first pgindent commit.
I also added one or two\nothers that I somehow missed (maybe you happened to spell it \"pg\nindent\" that year) through trial and error. The file that I sent to\nthe list works really well for me.\n\nI don't think that it's a good idea to automate this process, because\nwe certainly don't want to let incorrect entries slip in. And because\nthere just isn't a lot left to automate -- running pgindent on the\ntree is something that happens no more than 2 or 3 times a year. It\ncould easily be added to the checklist in the README. It should take\nless than 5 minutes a year.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 15:20:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 03:20:37PM -0700, Peter Geoghegan wrote:\n> On Thu, Mar 18, 2021 at 3:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > It would be kind of nice if the file can be generated automatically. I\n> > have you checked if 'pgindent' being on the first line of the commit is\n> > sufficient?\n> \n> I generated the file by looking for commits that:\n> \n> 1) Mentioned \"pgindent\" or \"PGINDENT\" in the entire commit message.\n> \n> 2) Had more than 20 or 30 files changed.\n> \n> This left me with fewer than 50 commits that cover over 20 years of\n> history since the first pgindent commit. I also added one or two\n> others that I somehow missed (maybe you happened to spell it \"pg\n> indent\" that year) through trial and error. The file that I sent to\n> the list works really well for me.\n> \n> I don't think that it's a good idea to automate this process, because\n> we certainly don't want to let incorrect entries slip in. And because\n> there just isn't a lot left to automate -- running pgindent on the\n> tree is something that happens no more than 2 or 3 times a year. It\n> could easily be added to the checklist in the README. 
It should take\n> less than 5 minutes a year.\n\nSounds like a plan. We should mention adding to this file somewhere in\nour pgindent README.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:35:49 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Attached is my .git-blame-ignore-revs file, which has pgindent commits\n> that I'd like to exclude from git blame. The file is helpful on its\n> own. But what we really ought to do is commit the file (perhaps with\n> some revisions) and require that it be maintained by the official\n> project workflow documented at src/tools/pgindent/README.\n\nI don't object to maintaining such a file; if it makes \"git blame\"\nwork better, that's a huge win. However, the file as you have it\nseems rather unreadable. I'd much rather have something that includes\nthe commit date and/or first line of commit message. Is there any\nflexibility in the format, or does git blame insist it be just like this?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 18:39:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't object to maintaining such a file; if it makes \"git blame\"\n> work better, that's a huge win. However, the file as you have it\n> seems rather unreadable. I'd much rather have something that includes\n> the commit date and/or first line of commit message. 
Is there any\n> flexibility in the format, or does git blame insist it be just like this?\n\nI ended up doing it that way because I was in a hurry to see how much\nit helped. I can fix it up.\n\nWe could require (but not automatically enforce) that the first line\nof the commit message be included above each hash, as a comment. You\ncould also require reverse chronological ordering of commits. That\nwould make everything easy to follow.\n\nIt's worth noting that git insists that you provide the full hash of\ncommits here. This is not something I remember it insisting upon in\nany other area. There is probably a very good practical reason for\nthat.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 15:46:10 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 03:46:10PM -0700, Peter Geoghegan wrote:\n> It's worth noting that git insists that you provide the full hash of\n> commits here. This is not something I remember it insisting upon in\n> any other area. There is probably a very good practical reason for\n> that.\n\nProbably because later commits might collide with shorter hashes. When\nyou are reporting a hash that only looks _backward_, this is not an\nissue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:00:00 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 4:00 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Probably because later commits might collide with shorter hashes. 
When\n> you are reporting a hash that only looks _backward_, this is not an\n> issue.\n\nRight, but it's extremely unlikely to happen by accident. I was\nsuggesting that there might be a security issue. I could fairly easily\nmake my git commit match a prefix intended to uniquely identify your\ngit commit if I set out to do so.\n\nThere are projects that might have to consider that possibility,\nthough perhaps we're not one of them.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 16:03:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Mar 18, 2021 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I don't object to maintaining such a file; if it makes \"git blame\"\n>> work better, that's a huge win. However, the file as you have it\n>> seems rather unreadable. I'd much rather have something that includes\n>> the commit date and/or first line of commit message. Is there any\n>> flexibility in the format, or does git blame insist it be just like this?\n\n> I ended up doing it that way because I was in a hurry to see how much\n> it helped. I can fix it up.\n> We could require (but not automatically enforce) that the first line\n> of the commit message be included above each hash, as a comment. You\n> could also require reverse chronological ordering of commits. That\n> would make everything easy to follow.\n\nGiven that the file will be added to manually, I think just having an\nexisting format to follow will be easy enough. I'd suggest something\nlike\n\nb5d69b7c22ee4c44b8bb99cfa0466ffaf3b5fab9 # Sun Jun 7 16:57:08 2020 -0400\n# pgindent run prior to branching v13.\n\nwhich is easy to make from \"git log\" or \"git show\" output. 
(Because\nof the precedent of those tools, I'd rather write the commit hash\nbefore the rest of the entry.)\n\nThe date is important IMO because otherwise it's quite unclear whether\nto add a new entry at the top or the bottom.\n\nOther points: the file should be kept in src/tools/pgindent/, and\nit should definitely NOT have a name beginning with \".\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:05:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 07:05:03PM -0400, Tom Lane wrote:\n> Other points: the file should be kept in src/tools/pgindent/, and\n> it should definitely NOT have a name beginning with \".\".\n\nWell, if we want github and others to eventually honor a file, it seems\n.git-blame-ignore-revs at the top of the tree is the common location for\nthis. Of course, I don't know if they will ever do that, and can move\nit later if needed.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:31:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> b5d69b7c22ee4c44b8bb99cfa0466ffaf3b5fab9 # Sun Jun 7 16:57:08 2020 -0400\n> # pgindent run prior to branching v13.\n>\n> which is easy to make from \"git log\" or \"git show\" output. (Because\n> of the precedent of those tools, I'd rather write the commit hash\n> before the rest of the entry.)\n\nWFM.\n\nWhat about reformat-dat-files and perltidy runs? It seems that you\nhave sometimes used all three reformatting tools to produce one commit\n-- but not always. 
ISTM that I should get any of those that I missed.\nAnd that the pgindent README (which already mentions these other\ntools) ought to be updated to be explicit about the policy applying\nequally to commits that apply any of the two other tools in bulk.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 16:32:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> What about reformat-dat-files and perltidy runs? It seems that you\n> have sometimes used all three reformatting tools to produce one commit\n> -- but not always. ISTM that I should get any of those that I missed.\n> And that the pgindent README (which already mentions these other\n> tools) ought to be updated to be explicit about the policy applying\n> equally to commits that apply any of the two other tools in bulk.\n\nGood question. We don't have a standard about that (whether to\ndo those in separate or the same commits), but we could establish one\nif it seems helpful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 19:40:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 4:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Good question. We don't have a standard about that (whether to\n> do those in separate or the same commits), but we could establish one\n> if it seems helpful.\n\nI don't think that it matters too much, but it will necessitate\nupdating the file multiple times. 
It might become natural to just do\neverything together in a way that it wasn't before.\n\nThe really big wins come from excluding the enormous pgindent run\ncommits, especially for the few historic pgindent runs where the rules\nchanged -- there are no more than a handful of those. They tend to\ngenerate an enormous amount of churn that touches almost everything.\nSo it probably isn't necessary to worry about smaller things.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 18 Mar 2021 16:54:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Thu, Mar 18, 2021 at 4:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Good question. We don't have a standard about that (whether to\n>> do those in separate or the same commits), but we could establish one\n>> if it seems helpful.\n\n> I don't think that it matters too much, but it will necessitate\n> updating the file multiple times. It might become natural to just do\n> everything together in a way that it wasn't before.\n\nDoubt that it matters. The workflow would have to be \"commit and push\nthe mechanical updates, then edit the tracking file, commit and push\nthat\". You don't have the commit hash nailed down till you've pushed.\nSo if we decided to do the mechanical updates in several commits,\nnot just one, I'd still be inclined to do them all and then edit the\ntracking file once.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 18 Mar 2021 20:07:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 5:07 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Doubt that it matters. 
The workflow would have to be \"commit and push\n> the mechanical updates, then edit the tracking file, commit and push\n> that\". You don't have the commit hash nailed down till you've pushed.\n\nOkay. I have made a personal TODO list item for this. I'll pick this\nup again in April, once the final CF is over.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 19 Mar 2021 19:33:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Thu, Mar 18, 2021 at 4:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Mar 18, 2021 at 4:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > b5d69b7c22ee4c44b8bb99cfa0466ffaf3b5fab9 # Sun Jun 7 16:57:08 2020 -0400\n> > # pgindent run prior to branching v13.\n> >\n> > which is easy to make from \"git log\" or \"git show\" output. (Because\n> > of the precedent of those tools, I'd rather write the commit hash\n> > before the rest of the entry.)\n>\n> WFM.\n\nWhat do you think of the attached? I prefer the ISO date style myself,\nso I went with that.\n\nNote that I have included \"Modify BufferGetPage() to prepare for\n\"snapshot too old\" feature\", as well as \"Revert no-op changes to\nBufferGetPage()\". I've noticed that those two particular commits cause\nunhelpful noise when I run \"git blame\" on access method code. I see\nproblems with these commits often enough to matter. The latter commit\ncleanly reverted the former after only 12 days, so ignoring both seems\nokay to me.\n\nEverything else should be either pgindent/perltidy related or\nreformat-dat-files related.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 20 Jun 2021 21:28:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> What do you think of the attached? 
I prefer the ISO date style myself,\n> so I went with that.\n\nI grepped the git logs for \"indent\" and found a bunch more commits\nthat look like they should be included:\n\ndb6e2b4c5\nd84213909\n1e9c85809\nf04d4ac91\n9ef2dbefc\n651902deb\nce5548103\nb5bce6c1e\nde94e2af1\nd0cd7bda9\nbefa3e648\n7584649a1\n84288a86a\nd74714027\nb6b71b85b\n46785776c\n089003fb4\nea08e6cd5\n59f6a57e5\n\nIt's possible that some of these touch few enough lines that they\nare not worth listing; I did not check the commit delta sizes.\n\n> Note that I have included \"Modify BufferGetPage() to prepare for\n> \"snapshot too old\" feature\", as well as \"Revert no-op changes to\n> BufferGetPage()\". I've noticed that those two particular commits cause\n> unhelpful noise when I run \"git blame\" on access method code.\n\nMeh. I can get on board with the idea of adding commit+revert pairs\nto this list, but I'd like a more principled selection filter than\n\"which ones bother Peter\". Maybe the size of the reverted patch\nshould factor into it.\n\nDo we have an idea of how much adding entries to this list slows\ndown \"git blame\"? If we include commit+revert pairs more than\nvery sparingly, I'm afraid we'll soon have an enormous list, and\nI wonder what that will cost us.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 11:34:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Mon, Jun 21, 2021 at 8:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> It's possible that some of these touch few enough lines that they\n> are not worth listing; I did not check the commit delta sizes.\n\nCommits that touch very few lines weren't included originally, just\nbecause it didn't seem necessary. 
Even still, I've looked through the\nextra commits now, and everything that you picked out looks in scope.\nI'm just going to include these extra commits.\n\nAttached is a new version of the same file, based on your feedback\n(with those extra commits, and some commits from the last version\nremoved). I'll produce a conventional patch file in the next revision,\nmost likely.\n\n> Meh. I can get on board with the idea of adding commit+revert pairs\n> to this list, but I'd like a more principled selection filter than\n> \"which ones bother Peter\". Maybe the size of the reverted patch\n> should factor into it\n\nI have to admit that you're right. That was why I picked those two\nout. Of course I can defend this choice in detail, but in the interest\nof not setting a terrible precedent I won't do that. The commits in\nquestion have been removed from this revised version.\n\nI think it's important that we not get into the business of adding\nstuff to this willy-nilly. Inevitably everybody will have their own\npet peeve noisy commit, and will want to add to the list -- just like\nI did. Naturally nobody will be interested in arguing against\nincluding whatever individual pet peeve commit each time this comes\nup. Regardless of the merits of the case. Let's just not go there\n(unless perhaps it's truly awful for almost everybody).\n\n> Do we have an idea of how much adding entries to this list slows\n> down \"git blame\"? If we include commit+revert pairs more than\n> very sparingly, I'm afraid we'll soon have an enormous list, and\n> I wonder what that will cost us.\n\nI doubt it costs us much, at least in any way that has a very\nnoticeable relationship as new commits are added. I've now settled on\n68 commits, and expect that this won't need to grow very quickly, so\nthat seems fine. 
From my point of view it makes \"git blame\" far more\nuseful.\n\nLLVM uses a file with fewer entries, and have had such a file since last year:\n\nhttps://github.com/llvm/llvm-project/blob/main/.git-blame-ignore-revs\n\nThe list of commit hashes in the file that the Blender project uses is\nabout the same size:\n\nhttps://github.com/blender/blender/blob/master/.git-blame-ignore-revs\n\nWe're using more commits than either project here, but it's within an\norder of magnitude. Note that this is going to be opt-in, not opt-out.\nIt won't do anything unless an individual hacker decides to enable it\nlocally.\n\nThe convention seems to be that it is located in the top-level\ndirectory. ISTM that we should follow that convention, since following\nthe convention is good, and does not in itself force anybody to ignore\nany of the listed commits. Any thoughts on that?\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 21 Jun 2021 16:47:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> The convention seems to be that it is located in the top-level\n> directory. ISTM that we should follow that convention, since following\n> the convention is good, and does not in itself force anybody to ignore\n> any of the listed commits. Any thoughts on that?\n\nAgreed. I think I'd previously suggested something under src/tools,\nbut we might as well do like others are doing; especially since\nwe have .gitattributes and the like there already.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 21 Jun 2021 20:06:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Mon, Jun 21, 2021 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Agreed. 
I think I'd previously suggested something under src/tools,\n> but we might as well do like others are doing; especially since\n> we have .gitattributes and the like there already.\n\nGreat.\n\nAttached is a patch file that puts it all together. I would like to\ncommit this in the next couple of days.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 21 Jun 2021 19:43:59 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Mon, Jun 21, 2021 at 07:43:59PM -0700, Peter Geoghegan wrote:\n> On Mon, Jun 21, 2021 at 5:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Agreed. I think I'd previously suggested something under src/tools,\n> > but we might as well do like others are doing; especially since\n> > we have .gitattributes and the like there already.\n> \n> Great.\n> \n> Attached is a patch file that puts it all together. I would like to\n> commit this in the next couple of days.\n\n+1\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 21 Jun 2021 22:57:59 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Attached is a patch file that puts it all together. I would like to\n> commit this in the next couple of days.\n\nHmm, is the \"git config blame.ignoreRevsFile\" setting actually\nrepo-relative? 
I'm a bit confused as to whether an absolute\nfile path would be needed to ensure correct behavior.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:04:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Tue, Jun 22, 2021 at 8:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, is the \"git config blame.ignoreRevsFile\" setting actually\n> repo-relative? I'm a bit confused as to whether an absolute\n> file path would be needed to ensure correct behavior.\n\nThat seems to be standard practice, and certainly works for me.\n\nIf any of the hashes are not well formed, or even appear in\nabbreviated form, \"git blame\" breaks in a very obvious and visible\nway. So there is zero chance of it breaking without somebody noticing\nimmediately.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Jun 2021 08:15:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jun 22, 2021 at 8:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm, is the \"git config blame.ignoreRevsFile\" setting actually\n>> repo-relative? I'm a bit confused as to whether an absolute\n>> file path would be needed to ensure correct behavior.\n\n> That seems to be standard practice, and certainly works for me.\n\nOK, no objections then.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Jun 2021 11:18:05 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Tue, Jun 22, 2021 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> OK, no objections then.\n\nPushed. 
Thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Jun 2021 09:07:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jun 22, 2021 at 8:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> OK, no objections then.\n\n> Pushed. Thanks.\n\nUm. You probably should have waited for beta2 to be tagged.\nNo harm done likely, but it's not per release process.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 22 Jun 2021 12:43:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Tue, Jun 22, 2021 at 9:43 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Pushed. Thanks.\n>\n> Um. You probably should have waited for beta2 to be tagged.\n> No harm done likely, but it's not per release process.\n\nSorry about that. I was aware of the policy, but somehow overlooked\nthat we were in the window between stamping and tagging. I'll be more\ncareful about this in the future.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Jun 2021 09:49:05 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On 2021-Mar-18, Peter Geoghegan wrote:\n\n> On Thu, Mar 18, 2021 at 3:12 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > It would be kind of nice if the file can be generated automatically. I\n> > have you checked if 'pgindent' being on the first line of the commit is\n> > sufficient?\n> \n> I generated the file by looking for commits that:\n> \n> 1) Mentioned \"pgindent\" or \"PGINDENT\" in the entire commit message.\n> \n> 2) Had more than 20 or 30 files changed.\n\nIs there a minimum git version for this to work? 
It doesn't seem to\nwork for me.\n\n... ah, apparently one needs git 2.23:\nhttps://www.moxio.com/blog/43/ignoring-bulk-change-commits-with-git-blame\n\nI have 2.20.\n\n[ apt install -t buster-backports git ]\n\nI have 2.30. It works better. To be clear: some lines still appear as\noriginating in some pgindent commit, when they are created by such a\ncommit. But as far as I've seen, they're mostly uninteresting ones\n(whitespace, only braces, only \"else\", only \"for (;;)\" and similar).\n \nThe git blame experience seems much better. Thanks!\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")\n\n\n", "msg_date": "Tue, 22 Jun 2021 20:00:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" }, { "msg_contents": "On Tue, Jun 22, 2021 at 5:01 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> I have 2.30. It works better. To be clear: some lines still appear as\n> originating in some pgindent commit, when they are created by such a\n> commit. But as far as I've seen, they're mostly uninteresting ones\n> (whitespace, only braces, only \"else\", only \"for (;;)\" and similar).\n\nAs I understand it there are a small number of remaining lines that\nare fundamentally impossible to attribute to any commit but a pgindent\ncommit. These are lines that a pgindent commit created, typically when\nit adds a new single line of whitespace (carriage return). I think\nthat these remaining lines of whitespace probably *should* be\nattributed to a pgindent commit -- it's actually a good thing. In any\ncase they're unlikely to be called up because they're just whitespace.\n\n> The git blame experience seems much better. 
Thanks!\n\nI'm very pleased with the results myself.\n\nIt's particularly nice when you \"git blame\" an old file that has been\nthrough multiple huge pgindent changes. You can actually see\nreasonable attributions for commits that go back to the 1990s now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 22 Jun 2021 17:11:41 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Maintaining a list of pgindent commits for \"git blame\" to ignore" } ]
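The opt-in workflow discussed in the thread above can be sketched with a throwaway repository. This is only an illustration of the mechanism, not anything from the PostgreSQL tree -- the repository, file, and author names below are made up, and it assumes git >= 2.23 (the minimum version noted upthread):

```shell
# Demonstrate blame.ignoreRevsFile in a scratch repository (git >= 2.23).
# Everything below is hypothetical; no real project is touched.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name Alice
git config user.email alice@example.com

printf 'int x = 1;\n' > demo.c
git add demo.c
git commit -q -m 'Add demo.c'

# A "pgindent-style" commit: whitespace-only churn by a different author.
git config user.name Bob
git config user.email bob@example.com
printf 'int  x = 1;\n' > demo.c
git commit -q -am 'Mechanical reindent'

# git insists on full 40-character hashes in this file, as noted upthread.
git rev-parse HEAD > .git-blame-ignore-revs

# Opt in locally; a one-off "git blame --ignore-revs-file=..." also works.
git config blame.ignoreRevsFile .git-blame-ignore-revs

# Blame now attributes the line to Alice's original commit, looking
# straight past Bob's mechanical one.
git blame demo.c
```

Nothing here forces anyone to ignore the listed commits: behavior only changes in clones where the config setting (or the command-line flag) is actually applied, which matches the opt-in design settled on above.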
[ { "msg_contents": "We've noticed what may be a regression / bug in PG13.\n\nI work at Heroku on the Data team, where we manage a fleet of Postgres\nservices. This change has resulted in breaking the UX we offer to customers\nto manage their PG services. In particular, ‘forks’ and ‘point in time\nrestores’ seem broken for PG13.\n\nI believe it is related to this patch (\nhttps://www.postgresql.org/message-id/993736dd3f1713ec1f63fc3b653839f5%40lako.no\n)\n\nFor PG12, we expect:\n\n-- We create a new Postgres service from archive and provide a\nrecovery_target_xid\n-- PG replays the archive until the end of the archive is reached, and the\ncurrent transaction == recovery_target_xid\n-- We measure the current transaction via the query SELECT\npg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())\n-- Since the current transaction is exactly equal to the target\ntransaction, we perform the promotion\n\nFor PG12, what we get:\n\n-- This process completes smoothly, and the new postgres service is up and\nrunning\n\nFor PG13, we expect:\n\n-- We create a new Postgres service from archive and provide a\nrecovery_target_xid\n-- PG replays the archive until the end of the archive is reached, and the\ncurrent transaction == recovery_target_xid\n-- We measure the current transaction via the query SELECT\npg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())\n-- Since the current transaction is exactly equal to the target\ntransaction, we perform the promotion\n\nFor PG13, what we get:\n\n-- On promotion we see the postgres process exit with the following log\nlines:\n\nMar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\n[18-1] sql_error_code = 00000 LOG: promote trigger file found:\n/etc/postgresql/wal-e.d/pull-env/STANDBY_OFF\nMar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\n[19-1] sql_error_code = 00000 LOG: redo done at 0/60527E0\nMar 17 14:47:49 ip-10-0-146-54 
25a9551c_65ec_4870_99e9_df69151984a0[7]:\n[20-1] sql_error_code = 00000 LOG: last completed transaction was at log\ntime 2021-03-17 14:42:44.901448+00\nMar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\n[21-1] sql_error_code = XX000 FATAL: recovery ended before configured\nrecovery target was reached\nMar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[5]:\n[8-1] sql_error_code = 00000 LOG: startup process (PID 7) exited with exit\ncode 1\n\nEven though the transaction IDs are identical. It seems like the end of the\narchive was reached (in fact the last transaction), and while we arrived at\nthe correct transaction id, somehow PG decided it wasn’t done replaying?\nPerhaps because somehow the timestamps don’t line up? (Afaict we do not set\nthe recovery_target_time, just the recovery_target_xid)\n\nWe have the `recovery_target_inclusive` set to true, which is the default.\nIt really seems like the intent of that setting means that if the target\nequals the current transaction ID, recovery should be marked as complete.\nHowever we're seeing the opposite. While the current txn id == the target\ntransaction id, the server exits. This is surprising, and doesn't line up\nwith our expected behavior.\n\nWe have a workaround.\n\nRight before promotion, if we increment the transaction of the leader\ndatabase (the original PG service that we're forking from) by running\n`SELECT pg_catalog.txid_current()`, wait 120 seconds (double our archive\ntimeout value to allow for the WAL segment to be written / uploaded /\nread), and wait until the current transaction is strictly greater than the\ntarget transaction, then the promotion seems to work fine every time for\nPG13. But this seems like an off by one error?\n\nWhat do you think? Is this a bug? Is this expected? 
Is this user error on\nour end?\n\nThanks!\n\nSean\n", "msg_date": "Thu, 18 Mar 2021 17:59:34 -0400", "msg_from": "Sean Jezewski <sjezewski@salesforce.com>", "msg_from_op": true, "msg_subject": "PG13 fails to startup even though the current transaction is equal to\n the target transaction" }, { "msg_contents": "At Thu, 18 Mar 2021 17:59:34 -0400, Sean Jezewski <sjezewski@salesforce.com> wrote in \r\n> We've noticed what may be a regression / bug in PG13.\r\n> \r\n> I work at Heroku on the Data team, where we manage a fleet of Postgres\r\n> services. This change has resulted in breaking the UX we offer to customers\r\n> to manage their PG services. In particular, ‘forks’ and ‘point in time\r\n> restores’ seem broken for PG13.\r\n> \r\n> I believe it is related to this patch (\r\n> https://www.postgresql.org/message-id/993736dd3f1713ec1f63fc3b653839f5%40lako.no\r\n> )\r\n> \r\n> For PG12, we expect:\r\n> \r\n> -- We create a new Postgres service from archive and provide a\r\n> recovery_target_xid\r\n> -- PG replays the archive until the end of the archive is reached, and the\r\n> current transaction == recovery_target_xid\r\n> -- We measure the current transaction via the query SELECT\r\n> pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())\r\n> -- Since the current transaction is exactly equal to the target\r\n> transaction, we perform the promotion\r\n> \r\n> For PG12, what we get:\r\n> \r\n> -- This process completes smoothly, and the new postgres service is up and\r\n> running\r\n...\r\n> For PG13, what we get:\r\n> \r\n> -- On promotion we see the postgres process exit with the following log\r\n> lines:\r\n> \r\n> Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\r\n> [18-1] sql_error_code = 00000 LOG: promote trigger file found:\r\n> /etc/postgresql/wal-e.d/pull-env/STANDBY_OFF\r\n\r\nThis means someone other than the server itself has placed that file\r\nto cause the promotion, perhaps before reaching the target point of\r\nthe 
recovery. Even if that happened on PG12, the server is uninterested\r\nin the cause of the recovery stop and happily proceeds to\r\npromotion. Thus, it is likely that the configured target xid actually\r\nhas not been reached at promotion, at least in the PG13 case.\r\n\r\n> Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\r\n> [19-1] sql_error_code = 00000 LOG: redo done at 0/60527E0\r\n...\r\n> [21-1] sql_error_code = XX000 FATAL: recovery ended before configured\r\n> recovery target was reached\r\n\r\nIn PG13, the startup process complains like this even if recovery is\r\nstopped by operational (or manual) promotion. There might be other\r\nbehaviors but it seems to be reasonable to give priority to the\r\nconfiguration in postgresql.conf over on-the-fly operations like\r\npromotion triggering.\r\n\r\n> Even though the transaction IDs are identical. It seems like the end of the\r\n> archive was reached (in fact the last transaction), and while we arrived at\r\n> the correct transaction id, somehow PG decided it wasn’t done replaying?\r\n> Perhaps because somehow the timestamps don’t line up? (Afaict we do not set\r\n> the recovery_target_time, just the recovery_target_xid)\r\n> \r\n> We have the `recovery_target_inclusive` set to true, which is the default.\r\n> It really seems like the intent of that setting means that if the target\r\n> equals the current transaction ID, recovery should be marked as complete.\r\n> However we're seeing the opposite. While the current txn id == the target\r\n> transaction id, the server exits.
This is surprising, and doesn't line up\r\n> with our expected behavior.\r\n\r\nSo at least the issue raised here doesn't seem relevant to how\r\nxid-targeted PITR works.\r\n\r\n> We have a workaround.\r\n> \r\n> Right before promotion, if we increment the transaction of the leader\r\n> database (the original PG service that we're forking from) by running\r\n> `SELECT pg_catalog.txid_current()`, wait 120 seconds (double our archive\r\n> timeout value to allow for the WAL segment to be written / uploaded /\r\n> read), and wait until the current transaction is strictly greater than the\r\n> target transaction, then the promotion seems to work fine every time for\r\n> PG13. But this seems like an off by one error?\r\n\r\n(Note that transaction IDs are not always committed in the order of the\r\n integer values.)\r\n\r\nI'm not sure. The direct cause of the \"issue\" is a promotion trigger\r\nthat came before reaching recovery target. That won't happen if the\r\n\"someone\" doesn't do that.\r\n\r\n> What do you think? Is this a bug? Is this expected? Is this user error on\r\n> our end?\r\n\r\nSo in regard to the behavior that the server stops when targeted\r\nrecovery is prematurely ended due to manual promotion, my opinion is\r\nit's not a bug.\r\n\r\nregards.\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 22 Mar 2021 16:59:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG13 fails to startup even though the current transaction is\n equal to the target transaction" }, { "msg_contents": "Hi Kyotaro -\n\nThanks for the response.\n\nI think it boils down to your comment:\n\n> I'm not sure. The direct cause of the \"issue\" is a promotion trigger\n> that came before reaching recovery target.
That won't happen if the\n> \"someone\" doesn't do that.\n\nI think the question is 'under what conditions is it safe to do the\npromotion' ?\n\nWhat is your recommendation in this case? The end of the archive has been\nreached. All transactions have been replayed. And in fact the current\ntransaction id is exactly equal to the target recovery transaction id.\n\nSo by all the indicators I can see, this recovery is in fact done. All the\ndata that should be there is there. All the transactions that I want\nreplayed have been replayed. (In fact all the transactions in the archive\nhave been replayed).\n\nIf we stop and wait before hitting the promotion trigger, we could wait\nindefinitely (if the parent service has no more incoming transactions,\nwhich means no more WAL to replay).\n\nAre you recommending that we wait until another transaction happens on the\nparent DB?\n\nThanks,\nSean\n\nOn Mon, Mar 22, 2021 at 3:59 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 18 Mar 2021 17:59:34 -0400, Sean Jezewski <\n> sjezewski@salesforce.com> wrote in\n> > We've noticed what may be a regression / bug in PG13.\n> >\n> > I work at Heroku on the Data team, where we manage a fleet of Postgres\n> > services. This change has resulted in breaking the UX we offer to\n> customers\n> > to manage their PG services. 
In particular, ‘forks’ and ‘point in time\n> > restores’ seem broken for PG13.\n> >\n> > I believe it is related to this patch (\n> >\n> https://www.postgresql.org/message-id/993736dd3f1713ec1f63fc3b653839f5%40lako.no\n> > )\n> >\n> > For PG12, we expect:\n> >\n> > -- We create a new Postgres service from archive and provide a\n> > recovery_target_xid\n> > -- PG replays the archive until the end of the archive is reached, and\n> the\n> > current transaction == recovery_target_xid\n> > -- We measure the current transaction via the query SELECT\n> > pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot())\n> > -- Since the current transaction is exactly equal to the target\n> > transaction, we perform the promotion\n> >\n> > For PG12, what we get:\n> >\n> > -- This process completes smoothly, and the new postgres service is up\n> and\n> > running\n> ...\n> > For PG13, what we get:\n> >\n> > -- On promotion we see the postgres process exit with the following log\n> > lines:\n> >\n> > Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\n> > [18-1] sql_error_code = 00000 LOG: promote trigger file found:\n> > /etc/postgresql/wal-e.d/pull-env/STANDBY_OFF\n>\n> This means someone other than the server itself has placed that file\n> to cause the promotion, perhaps before reaching the target point of\n> the recovery. Even if that happened on PG12, server is uninterested\n> in the cause of the recovery stop and happily proceeds to\n> promotion. Thus, it is likely that the configured target xid actually\n> have not been reached at promotion at least in the PG13 case.\n>\n> > Mar 17 14:47:49 ip-10-0-146-54 25a9551c_65ec_4870_99e9_df69151984a0[7]:\n> > [19-1] sql_error_code = 00000 LOG: redo done at 0/60527E0\n> ...\n> > [21-1] sql_error_code = XX000 FATAL: recovery ended before configured\n> > recovery target was reached\n>\n> In PG13, startup process complains like this even if recovery is\n> stopped by operational (or manual) promotion. 
There might be other\n> behaviors but it seems to be reasonable to give priority on\n> configuration in postgresql.conf over on-the-fly operations like\n> promotion triggering.\n>\n> > Even though the transaction IDs are identical. It seems like the end of\n> the\n> > archive was reached (in fact the last transaction), and while we arrived\n> at\n> > the correct transaction id, somehow PG decided it wasn’t done replaying?\n> > Perhaps because somehow the timestamps don’t line up? (Afaict we do not\n> set\n> > the recovery_target_time, just the recovery_target_xid)\n> >\n> > We have the `recovery_target_inclusive` set to true, which is the\n> default.\n> > It really seems like the intent of that setting means that if the target\n> > equals the current transaction ID, recovery should be marked as complete.\n> > However we're seeing the opposite. While the current txn id == the target\n> > transaction id, the server exits. This is surprising, and doesn't line up\n> > with our expected behavior.\n>\n> So at least the issue raised here doesn't seem relevant to how\n> xid-targetted PITR works.\n>\n> > We have a workaround.\n> >\n> > Right before promotion, if we increment the transaction of the leader\n> > database (the original PG service that we're forking from) by running\n> > `SELECT pg_catalog.txid_current()`, wait 120 seconds (double our archive\n> > timeout value to allow for the WAL segment to be written / uploaded /\n> > read), and wait until the current transaction is strictly greater than\n> the\n> > target transaction, then the promotion seems to work fine every time for\n> > PG13. But this seems like an off by one error?\n>\n> (Note that transaction ID are not always commited in the order of the\n> integer values.)\n>\n> I'm not sure. The direct cause of the \"issue\" is a promotion trigger\n> that came before reaching recovery target. That won't happen if the\n> \"someone\" doesn't do that.\n>\n> > What do you think? Is this a bug? Is this expected? 
Is this user error on\n> > our end?\n>\n> So in regard to the behavior that server stopps when targetted\n> recovery is immaturely stopped due to manual promotion, my opinion is\n> it's not a bug.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n", "msg_date": "Mon, 22 Mar 2021 08:40:38 -0400", "msg_from": "Sean Jezewski <sjezewski@salesforce.com>", "msg_from_op": true, "msg_subject": "Re: PG13 fails to startup even though the current transaction is\n equal to the target transaction" }, { "msg_contents": "\n\nOn 2021/03/22 21:40, Sean Jezewski wrote:\n> Hi Kyotaro -\n> \n> Thanks for the response.\n> \n> I think it boils down to your comment:\n> \n> > I'm not sure.  The direct cause of the \"issue\" is a promotion trigger\n> > that came before reaching recovery target.  That won't happen if the\n> > \"someone\" doesn't do that.\n> \n> I think the question is 'under what conditions is it safe to do the promotion' ?\n> \n> What is your recommendation in this case? The end of the archive has been reached. All transactions have been replayed. And in fact the current transaction id is exactly equal to the target recovery transaction id.\n\nI guess that the transaction with this current XID has not been committed\nyet at that moment. Right? I thought that because you confirmed the XID\nby SELECT pg_catalog.txid_snapshot_xmax(pg_catalog.txid_current_snapshot()).\nIIUC this query doesn't return the XID of already-committed transaction.\n\nThe standby thinks that the recovery reaches the recovery target when\nthe XID of *committed* transaction get equal to the recovery_target_xid.\nSo in your case, the standby didn't reached the recovery target when you\nrequested the promotion. 
IMO this is why you got that FATAL error.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 22 Mar 2021 23:37:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: PG13 fails to startup even though the current transaction is\n equal to the target transaction" } ]
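The assigned-versus-committed distinction Fujii Masao draws at the end of this thread is easy to model. The sketch below is purely illustrative — the function, record format, and XID values are invented, and PostgreSQL's real stop logic lives in recoveryStopsBefore()/recoveryStopsAfter() (xlog.c in PG13); this only mimics how recovery_target_xid behaves with recovery_target_inclusive = on:

```python
# Toy model of an xid recovery target: the target is only "reached" when
# a *commit* record for recovery_target_xid has been replayed. An XID
# that was merely assigned on the primary (which is what
# txid_snapshot_xmax() reflects) does not satisfy the target, which is
# why promoting at that moment trips the PG13 FATAL above.

def target_reached(wal_records, target_xid):
    """wal_records: iterable of (xid, kind), kind in {'commit', 'abort'}."""
    for xid, kind in wal_records:
        # With recovery_target_inclusive = on (the default), recovery
        # stops only after the target transaction's own commit record
        # has been replayed.
        if kind == 'commit' and xid == target_xid:
            return True
    return False

# XID 1000 was assigned on the primary, but its commit record never
# made it into the archive before the primary was killed:
archive = [(998, 'commit'), (999, 'commit')]
print(target_reached(archive, 1000))                       # False
print(target_reached(archive + [(1000, 'commit')], 1000))  # True
```

In this model, Heroku's workaround — forcing one more transaction with txid_current() and waiting for it to be archived — amounts to guaranteeing that the commit record for the target XID is actually present in the archive before promotion is triggered.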
[ { "msg_contents": "Hi,\n\nThis is a proposal of a patch for the pg_stat_statements extension. It\ncorrects the accounting of deallocation events.\n\nSince 2e0fedf a view pg_stat_statements_info has been available in\nthe pg_stat_statements extension. It has a dealloc field, which should be a\ncounter of deallocation events that have happened.\nRight now it accounts only for automatic deallocation events, which happen when\nwe need a place for a new statement, but manual deallocation events\ncaused by the pg_stat_statements_reset() function for some subset of\ncollected statements are not accounted.\nMy opinion is that a manual deallocation is a deallocation too and it\nmust be accounted in the dealloc field of the pg_stat_statements_info view.\n\nLet's see how it happens:\n\npostgres=# select pg_stat_statements_reset();\npostgres=# select 1;\n ?column?\n----------\n 1\n(1 row)\npostgres=# select dealloc from pg_stat_statements_info ;\n dealloc\n---------\n 0\n(1 row)\n\npostgres=# select pg_stat_statements_reset(userid,dbid,queryid)\npostgres-# from pg_stat_statements where query = 'select $1';\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\npostgres=# select dealloc from pg_stat_statements_info ;\n dealloc\n---------\n 0 -- This should be 1 now, as a deallocation happened\n(1 row)\n\nThis patch adds accounting of manual deallocation events.\n\n-- \nAndrei Zubkov\nPostgres Professional\nThe Russian Postgres Company", "msg_date": "Fri, 19 Mar 2021 17:08:45 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "[PATCH] pg_stat_statements dealloc field ignores manual deallocation" }, { "msg_contents": "On Fri, Mar 19, 2021 at 05:08:45PM +0300, Андрей Зубков wrote:\n> \n> Since 2e0fedf there is a view pg_stat_statements_info is available in\n> pg_stat_statements extension.
It has a dealloc field, that should be a\n> counter of deallocation events happened.\n> Right now it accounts only automatic deallocation events, happened when\n> we need a place for a new statement,\n\nYes, and that behavior is documented:\n\ndealloc bigint\n\nTotal number of times pg_stat_statements entries about the least-executed\nstatements were deallocated because more distinct statements than\npg_stat_statements.max were observed\n\n> but manual deallocation events\n> caused by pg_stat_statements_reset() function for some subset of\n> collected statements is not accounted.\n> My opinion is that manual deallocation is a deallocation too and it\n> must be accounted in dealloc field of pg_stat_statements_info view.\n\nI disagree. The point of that field is to help users configuring\npg_stat_statements.max, as evictions have a huge overhead in many workloads.\n\nIf users remove entries for some reasons, we don't have to give the impression\nthat pg_stat_statements.max is too low and that it should be increased,\nespecially since it requires a restart.\n\n\n", "msg_date": "Fri, 19 Mar 2021 22:15:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_statements dealloc field ignores manual\n deallocation" }, { "msg_contents": "On Fri, 2021-03-19 at 22:15 +0800, Julien Rouhaud wrote:\n> I disagree. The point of that field is to help users configuring\n> pg_stat_statements.max, as evictions have a huge overhead in many\n> workloads.\n> \n> If users remove entries for some reasons, we don't have to give the\n> impression\n> that pg_stat_statements.max is too low and that it should be\n> increased,\n> especially since it requires a restart.\n> \nOk.\nBut when we are collecting aggregated statistics on pg_stat_statements\nperiodically it would be great to know about every deallocation\nthat happened. 
Maybe we need to add another counter for manual deallocations\ntracking?\n\n\n\n", "msg_date": "Mon, 22 Mar 2021 12:06:15 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] pg_stat_statements dealloc field ignores manual\n deallocation" } ]
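The disagreement in this thread — automatic evictions versus manual resets — can be made concrete with a toy in-memory model. This is not the extension's code: the class, method names, and eviction policy are invented for the sketch (the real entry_dealloc() evicts a batch of least-used entries, not one), and manual_dealloc is only the second counter Andrei proposes, sketched under the assumption it would be bumped from the reset path:

```python
# Toy model of pg_stat_statements entry accounting. The dealloc counter
# tracks only automatic evictions (hash table full), since its purpose
# is sizing pg_stat_statements.max; pg_stat_statements_reset() removes
# entries without touching it. manual_dealloc is the hypothetical extra
# counter discussed above.

class StatementStore:
    def __init__(self, max_entries):
        self.max = max_entries
        self.calls = {}           # queryid -> call count
        self.dealloc = 0          # automatic evictions only
        self.manual_dealloc = 0   # proposed second counter

    def record(self, queryid):
        if queryid not in self.calls and len(self.calls) >= self.max:
            # Evict the least-executed entry to make room, and count it.
            victim = min(self.calls, key=self.calls.get)
            del self.calls[victim]
            self.dealloc += 1
        self.calls[queryid] = self.calls.get(queryid, 0) + 1

    def reset(self, queryid):
        # Manual reset: frees the entry but leaves dealloc alone.
        if self.calls.pop(queryid, None) is not None:
            self.manual_dealloc += 1

store = StatementStore(max_entries=2)
for q in ('q1', 'q2', 'q3'):   # third distinct query forces one eviction
    store.record(q)
store.reset('q3')              # a manual reset is not an "eviction"
print(store.dealloc, store.manual_dealloc)  # 1 1
```

Keeping the two counts separate preserves Julien's reading (dealloc answers "is max too low?") while giving periodic samplers the total-removals signal Andrei asks for.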
[ { "msg_contents": "Hi, hackers!\n\nRecently, I was doing some experiments with primary/standby instances \ninteraction. In certain conditions I’ve hit and was able to reproduce a \ncrash on a failed assertion.\n\nThe scenario is the following:\n1. start primary server\n2. start standby server by pg_basebackup -P -R -X stream -c fast -p5432 \n-D data\n3. apply some load to the primary server by pgbench -p5432 -i -s 150 \npostgres\n4. kill primary server (with kill -9) and keep it down\n5. stop standby server by pg_ctl\n6. 
--git a/src/backend/storage/lmgr/proc.c \nb/src/backend/storage/lmgr/proc.c\nindex 897045ee272..b5f365f426d 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -525,7 +525,7 @@ InitAuxiliaryProcess(void)\n\n if (MyProc != NULL)\n elog(ERROR, \"you already exist\");\n-\n+ pg_usleep(5000000L);\n /*\n * We use the ProcStructLock to protect assignment and releasing \nof\n * AuxiliaryProcs entries.\n\nMaybe, this kinda behaviour would appear if a computer hosting instances \nis under significant side load, which cause delay to start db-instances \nunder a heavy load.\n\nConfiguration for a primary server is default with \"wal_level = logical\"\n\nConfiguration for a standby server is default with \"wal_level = logical\" \nand \"primary_conninfo = 'port=5432'\"\n\nI'm puzzled with this behavor. I'm pretty sure it is not what should be. \nAny ideas how this can be fixed?\n\n---\nBest regards,\nMaxim Orlov.\n\n\n", "msg_date": "Fri, 19 Mar 2021 20:25:47 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Failed assertion on standby while shutdown" }, { "msg_contents": "On 2021/03/20 2:25, Maxim Orlov wrote:\n> Hi, haсkers!\n> \n> Recently, I was doing some experiments with primary/standby instances interaction. In certain conditions I’ve got and was able to reproduce crash on failed assertion.\n> \n> The scenario is the following:\n> 1. start primary server\n> 2. start standby server by pg_basebackup -P -R -X stream -c fast -p5432 -D data\n> 3. apply some load to the primary server by pgbench -p5432 -i -s 150 postgres\n> 4. kill primary server (with kill -9) and keep it down\n> 5. stop standby server by pg_ctl\n> 6. 
run standby server\n> \n> Then any standby server termination will result in a failed assertion.\n> \n> The log with a backtrace is following:\n> \n> 2021-03-19 18:54:25.352 MSK [3508443] LOG:  received fast shutdown request\n> 2021-03-19 18:54:25.379 MSK [3508443] LOG:  aborting any active transactions\n> TRAP: FailedAssertion(\"SHMQueueEmpty(&(MyProc->myProcLocks[i]))\", File: \"/home/ziva/projects/pgpro/build-secondary/../postgrespro/src/backend/storage/lmgr/proc.c\", Line: 592, PID: 3508452)\n> postgres: walreceiver (ExceptionalCondition+0xd0)[0x555555d0526f]\n> postgres: walreceiver (InitAuxiliaryProcess+0x31c)[0x555555b43e31]\n> postgres: walreceiver (AuxiliaryProcessMain+0x54f)[0x55555574ae32]\n> postgres: walreceiver (+0x530bff)[0x555555a84bff]\n> postgres: walreceiver (+0x531044)[0x555555a85044]\n> postgres: walreceiver (+0x530959)[0x555555a84959]\n> /lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7ffff7a303c0]\n> /lib/x86_64-linux-gnu/libc.so.6(__select+0x1a)[0x7ffff72a40da]\n> postgres: walreceiver (+0x52bea4)[0x555555a7fea4]\n> postgres: walreceiver (PostmasterMain+0x129f)[0x555555a7f7c1]\n> postgres: walreceiver (+0x41ff1f)[0x555555973f1f]\n> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7ffff71b30b3]\n> postgres: walreceiver (_start+0x2e)[0x55555561abfe]\n> \n> After a brief investigation I found out that I can get this assert with 100% probability if I insert a sleep for about 5 sec into InitAuxiliaryProcess(void) in src/backend/storage/lmgr/proc.c:\n> \n> diff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\n> index 897045ee272..b5f365f426d 100644\n> --- a/src/backend/storage/lmgr/proc.c\n> +++ b/src/backend/storage/lmgr/proc.c\n> @@ -525,7 +525,7 @@ InitAuxiliaryProcess(void)\n> \n>         if (MyProc != NULL)\n>                 elog(ERROR, \"you already exist\");\n> -\n> +       pg_usleep(5000000L);\n>         /*\n>          * We use the ProcStructLock to protect assignment and releasing of\n>          * 
AuxiliaryProcs entries.\n\nThanks for the report! I could reproduce this issue by adding that sleep\ninto InitAuxiliaryProcess().\n\n> Maybe, this kinda behaviour would appear if a computer hosting instances is under significant side load, which cause delay to start db-instances under a heavy load.\n> \n> Configuration for a primary server is default with \"wal_level = logical\"\n> \n> Configuration for a standby server is default with \"wal_level = logical\" and \"primary_conninfo = 'port=5432'\"\n> \n> I'm puzzled with this behavor. I'm pretty sure it is not what should be. Any ideas how this can be fixed?\n\nISTM that the cause of this issue is that the startup process exits\nwithout releasing the locks that it was holding when shutdown is\nrequested. To address this issue, IMO the startup process should\ncall ShutdownRecoveryTransactionEnvironment() at its exit.\nAttached is the POC patch that changes the startup process that way.\n\nI've not tested the patch enough yet..\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Mon, 22 Mar 2021 22:40:54 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "On 2021-03-22 16:40, Fujii Masao wrote:\n> On 2021/03/20 2:25, Maxim Orlov wrote:\n>> Hi, haсkers!\n>> \n>> Recently, I was doing some experiments with primary/standby instances \n>> interaction. In certain conditions I’ve got and was able to reproduce \n>> crash on failed assertion.\n>> \n>> The scenario is the following:\n>> 1. start primary server\n>> 2. start standby server by pg_basebackup -P -R -X stream -c fast \n>> -p5432 -D data\n>> 3. apply some load to the primary server by pgbench -p5432 -i -s 150 \n>> postgres\n>> 4. kill primary server (with kill -9) and keep it down\n>> 5. stop standby server by pg_ctl\n>> 6. 
run standby server\n>> \n>> Then any standby server termination will result in a failed assertion.\n>> \n>> The log with a backtrace is following:\n>> \n>> 2021-03-19 18:54:25.352 MSK [3508443] LOG:  received fast shutdown \n>> request\n>> 2021-03-19 18:54:25.379 MSK [3508443] LOG:  aborting any active \n>> transactions\n>> TRAP: FailedAssertion(\"SHMQueueEmpty(&(MyProc->myProcLocks[i]))\", \n>> File: \n>> \"/home/ziva/projects/pgpro/build-secondary/../postgrespro/src/backend/storage/lmgr/proc.c\", \n>> Line: 592, PID: 3508452)\n>> postgres: walreceiver (ExceptionalCondition+0xd0)[0x555555d0526f]\n>> postgres: walreceiver (InitAuxiliaryProcess+0x31c)[0x555555b43e31]\n>> postgres: walreceiver (AuxiliaryProcessMain+0x54f)[0x55555574ae32]\n>> postgres: walreceiver (+0x530bff)[0x555555a84bff]\n>> postgres: walreceiver (+0x531044)[0x555555a85044]\n>> postgres: walreceiver (+0x530959)[0x555555a84959]\n>> /lib/x86_64-linux-gnu/libpthread.so.0(+0x153c0)[0x7ffff7a303c0]\n>> /lib/x86_64-linux-gnu/libc.so.6(__select+0x1a)[0x7ffff72a40da]\n>> postgres: walreceiver (+0x52bea4)[0x555555a7fea4]\n>> postgres: walreceiver (PostmasterMain+0x129f)[0x555555a7f7c1]\n>> postgres: walreceiver (+0x41ff1f)[0x555555973f1f]\n>> /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7ffff71b30b3]\n>> postgres: walreceiver (_start+0x2e)[0x55555561abfe]\n>> \n>> After a brief investigation I found out that I can get this assert \n>> with 100% probability if I insert a sleep for about 5 sec into \n>> InitAuxiliaryProcess(void) in src/backend/storage/lmgr/proc.c:\n>> \n>> diff --git a/src/backend/storage/lmgr/proc.c \n>> b/src/backend/storage/lmgr/proc.c\n>> index 897045ee272..b5f365f426d 100644\n>> --- a/src/backend/storage/lmgr/proc.c\n>> +++ b/src/backend/storage/lmgr/proc.c\n>> @@ -525,7 +525,7 @@ InitAuxiliaryProcess(void)\n>> \n>>         if (MyProc != NULL)\n>>                 elog(ERROR, \"you already exist\");\n>> -\n>> +       pg_usleep(5000000L);\n>>         /*\n>>          * We use 
the ProcStructLock to protect assignment and \n>> releasing of\n>>          * AuxiliaryProcs entries.\n> \n> Thanks for the report! I could reproduce this issue by adding that \n> sleep\n> into InitAuxiliaryProcess().\n> \n>> Maybe, this kinda behaviour would appear if a computer hosting \n>> instances is under significant side load, which cause delay to start \n>> db-instances under a heavy load.\n>> \n>> Configuration for a primary server is default with \"wal_level = \n>> logical\"\n>> \n>> Configuration for a standby server is default with \"wal_level = \n>> logical\" and \"primary_conninfo = 'port=5432'\"\n>> \n>> I'm puzzled with this behavor. I'm pretty sure it is not what should \n>> be. Any ideas how this can be fixed?\n> \n> ISTM that the cause of this issue is that the startup process exits\n> without releasing the locks that it was holding when shutdown is\n> requested. To address this issue, IMO the startup process should\n> call ShutdownRecoveryTransactionEnvironment() at its exit.\n> Attached is the POC patch that changes the startup process that way.\n> \n> I've not tested the patch enough yet..\n> \n> Regards,\n\nThank you for reply! As far as I understand, this is really the case. \nI've test your patch a bit. This annoying failed assertion is gone now.\n\nI think I should test more and report later about results.\n\nShould we put this patch to CF?\n\n---\nBest regards,\nMaxim Orlov.\n\n\n", "msg_date": "Wed, 24 Mar 2021 08:02:53 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "\n\nOn 2021/03/24 14:02, Maxim Orlov wrote:\n> Thank you for reply! As far as I understand, this is really the case. I've test your patch a bit.\n\nThanks for testing the patch!\n\n> This annoying failed assertion is gone now.\n\nGood news!\n\n> I think I should test more and report later about results.\n> \n> Should we put this patch to CF?\n\nYes. 
Since this is a bug, we can review and commit the patch\nwithout waiting for next CF. But I agree that it's better to\nadd the patch to next CF so that we don't forget the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 24 Mar 2021 16:52:56 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nAll the tests passed successfully.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Tue, 30 Mar 2021 17:44:03 +0000", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "never mind: Masha said they've already paid :)\n\n\nTue 30.03.21 20:44, Maxim Orlov wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> All the tests passed successfully.\n>\n> The new status of this patch is: Ready for Committer\n\n\n", "msg_date": "Tue, 30 Mar 2021 21:51:26 +0300", "msg_from": "igor levshin <i.levshin@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "On 2021-03-30 20:44, Maxim Orlov wrote:\n> The following review has been posted through the commitfest \n> application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> All the tests passed successfully.\n> \n> The new status of this patch is: 
Ready for Committer\n\nThe patch is good. One note, should we put a comment about \nShutdownRecoveryTransactionEnvironment not reentrant behaviour? Or maybe \nrename it to ShutdownRecoveryTransactionEnvironmentOnce?\n\n---\nBest regards,\nMaxim Orlov.\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:51:56 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "On 2021/03/31 19:51, Maxim Orlov wrote:\n> On 2021-03-30 20:44, Maxim Orlov wrote:\n>> The following review has been posted through the commitfest application:\n>> make installcheck-world:  tested, passed\n>> Implements feature:       tested, passed\n>> Spec compliant:           not tested\n>> Documentation:            not tested\n>>\n>> All the tests passed successfully.\n>>\n>> The new status of this patch is: Ready for Committer\n> \n> The patch is good. One note, should we put a comment about ShutdownRecoveryTransactionEnvironment not reentrant behaviour? Or maybe rename it to ShutdownRecoveryTransactionEnvironmentOnce?\n\n+1 to add more comments into ShutdownRecoveryTransactionEnvironment().\nI did that. 
What about the attached patch?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 1 Apr 2021 21:02:09 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "On 2021-04-01 15:02, Fujii Masao wrote:\n> On 2021/03/31 19:51, Maxim Orlov wrote:\n>> On 2021-03-30 20:44, Maxim Orlov wrote:\n>>> The following review has been posted through the commitfest \n>>> application:\n>>> make installcheck-world:  tested, passed\n>>> Implements feature:       tested, passed\n>>> Spec compliant:           not tested\n>>> Documentation:            not tested\n>>> \n>>> All the tests passed successfully.\n>>> \n>>> The new status of this patch is: Ready for Committer\n>> \n>> The patch is good. One note, should we put a comment about \n>> ShutdownRecoveryTransactionEnvironment not reentrant behaviour? Or \n>> maybe rename it to ShutdownRecoveryTransactionEnvironmentOnce?\n> \n> +1 to add more comments into ShutdownRecoveryTransactionEnvironment().\n> I did that. What about the attached patch?\n> \n> Regards,\n\nWell done! In my view is just what it's needed.\n\n---\nBest regards,\nMaxim Orlov.\n\n\n", "msg_date": "Mon, 05 Apr 2021 10:30:01 +0300", "msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Failed assertion on standby while shutdown" }, { "msg_contents": "\n\nOn 2021/04/05 16:30, Maxim Orlov wrote:\n> Well done! In my view is just what it's needed.\n\nThanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 6 Apr 2021 02:31:13 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Failed assertion on standby while shutdown" } ]
[ { "msg_contents": "In the thread about valgrind leak detection [1], we noticed that\nRestoreGUCState(), which is intended to load the leader process's\nGUC settings into a parallel worker, was causing visible memory\nleaks by invoking InitializeOneGUCOption() on already-set-up GUCs.\nI noted that simply removing that call made the leaks go away with\nno obvious ill effects, but didn't stop to look closer.\n\nI've now looked closer, and I see that the reason that removing\nthat call has no ill effects is in fact that it's a complete no-op.\nEvery GUC that this chooses to target:\n\n\t\tif (!can_skip_gucvar(guc_variables[i]))\n\t\t\tInitializeOneGUCOption(guc_variables[i]);\n\nis one that the leader backend will send a value for, so that the\nreinitialized value will certainly be overwritten in the next loop.\n(Actually, the set of forcibly-reinited GUCs is a strict subset of\nthose that the leader will send, since GUCs that have source\nPGC_S_DEFAULT in the newly-started worker might have other sources\nin the leader.)\n\nI wonder whether the intent was to do the negation of this test,\nie reset GUCs that the leader *isn't* going to send. But that's\nvery obviously the wrong thing, because it would lose the values\nof (at least) PGC_POSTMASTER variables.\n\nSo we can remove the code that does this, and I intend to go do so.\nHowever, given the unmistakable evidence of sloppy thinking here,\nI looked closer at exactly what can_skip_gucvar() is doing, and\nI think we're either sending too much or too little.\n\nThe argument for \"sending too little\" comes from the race condition\nthat's described in the function's comments: a variable that has\nsource PGC_S_DEFAULT (ie, has never moved off its compiled-in default)\nin the leader could have just been updated in the postmaster, due to\nre-reading postgresql.conf after SIGHUP. 
In that case, when the\npostmaster forks the worker it will inherit the new setting from\npostgresql.conf, and will run with that because the leader didn't send\nits value. So we risk having a situation where parallel workers are\nusing a setting that the leader won't adopt until it next goes idle.\n\nNow, this shouldn't cause any really fundamental problems (if it\ncould, the variable shouldn't have been marked as safe to change\nat SIGHUP). But you could imagine some minor query weirdness\nbeing traceable to that. I think that the authors of this code\njudged that the cost of sending default GUC values was more than\npreventing such weirdness is worth, and I can see the point.\nNeglecting the PGC_POSTMASTER and PGC_INTERNAL variables, which\nseem safe to not send, I see this in a regression test install:\n\n=# select source,count(*) from pg_settings where context not in ('postmaster', 'internal') group by 1;\n source | count \n----------------------+-------\n client | 2\n environment variable | 1\n configuration file | 6\n default | 246\n database | 6\n override | 3\n command line | 1\n(7 rows)\n\nSending 265 values to a new parallel worker instead of 19 would be\na pretty large cost to avoid a race condition that probably wouldn't\nhave significant ill effects anyway.\n\nHowever, if you are willing to accept that tradeoff, then this code\nis leaving a lot on the table, because there is no more reason for\nit to send values with any of these sources than there is to send\nPGC_S_DEFAULT ones:\n\n PGC_S_DYNAMIC_DEFAULT, /* default computed during initialization */\n PGC_S_ENV_VAR, /* postmaster environment variable */\n PGC_S_FILE, /* postgresql.conf */\n PGC_S_ARGV, /* postmaster command line */\n PGC_S_GLOBAL, /* global in-database setting */\n PGC_S_DATABASE, /* per-database setting */\n PGC_S_USER, /* per-user setting */\n PGC_S_DATABASE_USER, /* per-user-and-database setting */\n\nThe new worker will have absorbed all such values already during its\nregular 
InitPostgres() call.\n\nI suppose there's an argument to be made that skipping such values\nwidens the scope of the race hazard a bit, since a GUC that the DBA\nhas already chosen to move off of default (or set via ALTER USER/\nDATABASE) might be one she's more likely to change later. But that\nargument seems pretty tissue-thin to me.\n\nIn short, I think we really only need to transmit GUCs with sources\nof PGC_S_CLIENT and higher. In the regression environment that\nwould cut us down from sending 19 values to sending 5. In production\nthe win would likely be substantially more, since it's more likely\nthat the DBA would have tweaked more things in postgresql.conf;\nwhich are variables we are sending today and don't have to.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/3471359.1615937770%40sss.pgh.pa.us\n\n\n", "msg_date": "Fri, 19 Mar 2021 13:27:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Bringing some sanity to RestoreGUCState()" }, { "msg_contents": "I wrote:\n> The argument for \"sending too little\" comes from the race condition\n> that's described in the function's comments: a variable that has\n> source PGC_S_DEFAULT (ie, has never moved off its compiled-in default)\n> in the leader could have just been updated in the postmaster, due to\n> re-reading postgresql.conf after SIGHUP. In that case, when the\n> postmaster forks the worker it will inherit the new setting from\n> postgresql.conf, and will run with that because the leader didn't send\n> its value. 
So we risk having a situation where parallel workers are\n> using a setting that the leader won't adopt until it next goes idle.\n\nAfter further study I've realized that the above can't happen, because\nthe existing code is considerably more magical, delicate, and badly\ncommented than I'd realized.\n\nBasically it divides the GUCs into two categories: those that will\nnever be shipped based on their context or name (for which we assume\nthe worker will obtain correct values via other mechanisms), and all\nothers, which are shipped if they don't have their compiled-in\ndefault values. On the receiving side, the first loop in\nRestoreGUCOptions acts to ensure that all GUCs in the second category\nare at their compiled-in defaults, essentially throwing away whatever\nthe worker might've obtained from pg_db_role_setting or other places.\nThen, after receiving and applying the shipped GUCs, we have an exact\nmatch to the leader's state (given the assumption that the compiled-in\nvalues are identical, anyway), without any race conditions.\n\nThe magical/fragile part of this is that the same can_skip_guc test\nworks for both sides of the operation; it's not really obvious that\nthat must be so.\n\nForcing all the potentially-shipped GUCs into PGC_S_DEFAULT state has\nanother critical and undocumented property, which is that it ensures\nthat set_config_option won't refuse to apply any of the incoming\nsettings on the basis of their source priority being lower than what\nthe worker already has.\n\nSo we do need RestoreGUCOptions to be doing something equivalent to\nInitializeOneGUCOption, although preferably without the leaks.\nThat doesn't look too awful though, since we should be able to just\nAssert that the stack is empty; the only thing that may need to be\nfreed is the current values of string variables. 
I'll see about\nfixing that and improving the comments while this is all swapped in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 18:15:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Bringing some sanity to RestoreGUCState()" } ]
[ { "msg_contents": "The Red Hat folk are seeing a problem with that combination:\n\nhttps://bugzilla.redhat.com/show_bug.cgi?id=1940964\n\nwhich boils down to\n\n> Build fails with this error:\n> ERROR: failed to JIT module: Added modules have incompatible data layouts: E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-a:8:16-n32:64 (module) vs E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-v128:64-a:8:16-n32:64 (jit)\n\n(By \"build\", I imagine the reporter means \"regression tests\")\n\nSo I was wondering if we'd tested it yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 14:03:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Do we work with LLVM 12 on s390x?" }, { "msg_contents": "Hi,\n\nOn 2021-03-19 14:03:21 -0400, Tom Lane wrote:\n> The Red Hat folk are seeing a problem with that combination:\n> \n> https://bugzilla.redhat.com/show_bug.cgi?id=1940964\n> \n> which boils down to\n> \n> > Build fails with this error:\n> > ERROR: failed to JIT module: Added modules have incompatible data layouts: E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-a:8:16-n32:64 (module) vs E-m:e-i1:8:16-i8:8:16-i64:64-f128:64-v128:64-a:8:16-n32:64 (jit)\n> \n> (By \"build\", I imagine the reporter means \"regression tests\")\n> \n> So I was wondering if we'd tested it yet.\n\nYes, I did test it not too long ago, after Christoph Berg reported\nDebian s390x failing with jit. Which made me learn a bunch of s390x\nassembler and discover a bug in our code that only rarely happend (iirc\nsomething about booleans that are not exactly 0 or 1 not testing\ntrue)...\n\nhttps://www.postgresql.org/message-id/20201015222924.yyms42qjloydfvar%40alap3.anarazel.de\n\nI think the error above comes from a \"mismatch\" between the clang used\nto compile bitcode, and the LLVM version linked to. Normally we're\nsomewhat tolerant of differences between the two, but there was an ABI\nchange at some point, leading to that error. 
IIRC I hit that, but it\nvanished as soon as I used a matching libllvm and clang.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 Mar 2021 12:00:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think the error above comes from a \"mismatch\" between the clang used\n> to compile bitcode, and the LLVM version linked to. Normally we're\n> somewhat tolerant of differences between the two, but there was an ABI\n> change at some point, leading to that error. IIRC I hit that, but it\n> vanished as soon as I used a matching libllvm and clang.\n\nThanks, I passed that advice on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 19 Mar 2021 15:15:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" }, { "msg_contents": "On 3/19/21 8:15 PM, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> I think the error above comes from a \"mismatch\" between the clang used\n>> to compile bitcode, and the LLVM version linked to. Normally we're\n>> somewhat tolerant of differences between the two, but there was an ABI\n>> change at some point, leading to that error. IIRC I hit that, but it\n>> vanished as soon as I used a matching libllvm and clang.\n> \n> Thanks, I passed that advice on.\n> \n> \t\t\tregards, tom lane\n\nTom Stellard was so kind to look at this issue deeper with his LLVM \nskills and found PostgreSQL is not actually handling the LLVM perfectly. 
\nHe's working on improving the patch, but sharing even the first attempt \nwith upstream seems like a good idea:\n\nhttps://src.fedoraproject.org/rpms/postgresql/pull-request/29\n\nRegards,\nHonza\n\n\n\n", "msg_date": "Wed, 21 Apr 2021 15:40:02 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" }, { "msg_contents": "On 4/21/21 6:40 AM, Honza Horak wrote:\n> On 3/19/21 8:15 PM, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> I think the error above comes from a \"mismatch\" between the clang used\n>>> to compile bitcode, and the LLVM version linked to. Normally we're\n>>> somewhat tolerant of differences between the two, but there was an ABI\n>>> change at some point, leading to that error.  IIRC I hit that, but it\n>>> vanished as soon as I used a matching libllvm and clang.\n>>\n>> Thanks, I passed that advice on.\n>>\n>>             regards, tom lane\n> \n> Tom Stellard was so kind to look at this issue deeper with his LLVM skills and found PostgreSQL is not actually handling the LLVM perfectly. He's working on improving the patch, but sharing even the first attempt with upstream seems like a good idea:\n> \n> https://src.fedoraproject.org/rpms/postgresql/pull-request/29\n> \n\nI wrote a new patch based on the bug discussion[1]. It works around\nthe issue specifically on s390x rather than disabling specific\nCPUs and features for all targets. The patch is attached.\n\n\n[1] https://www.postgresql.org/message-id/flat/16971-5d004d34742a3d35%40postgresql.org\n\n\n> Regards,\n> Honza\n>", "msg_date": "Thu, 22 Apr 2021 09:35:48 -0700", "msg_from": "Tom Stellard <tstellar@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" 
}, { "msg_contents": "On 4/22/21 6:35 PM, Tom Stellard wrote:\n> On 4/21/21 6:40 AM, Honza Horak wrote:\n>> On 3/19/21 8:15 PM, Tom Lane wrote:\n>>> Andres Freund <andres@anarazel.de> writes:\n>>>> I think the error above comes from a \"mismatch\" between the clang used\n>>>> to compile bitcode, and the LLVM version linked to. Normally we're\n>>>> somewhat tolerant of differences between the two, but there was an ABI\n>>>> change at some point, leading to that error.  IIRC I hit that, but it\n>>>> vanished as soon as I used a matching libllvm and clang.\n>>>\n>>> Thanks, I passed that advice on.\n>>>\n>>>             regards, tom lane\n>>\n>> Tom Stellard was so kind to look at this issue deeper with his LLVM \n>> skills and found PostgreSQL is not actually handling the LLVM \n>> perfectly. He's working on improving the patch, but sharing even the \n>> first attempt with upstream seems like a good idea:\n>>\n>> https://src.fedoraproject.org/rpms/postgresql/pull-request/29\n>>\n> \n> I wrote a new patch based on the bug discussion[1].  It works around\n> the issue specifically on s390x rather than disabling specific\n> CPUs and features for all targets.  The patch is attached.\n> \n> \n> [1] \n> https://www.postgresql.org/message-id/flat/16971-5d004d34742a3d35%40postgresql.org \n\nThanks, Tom, it looks good in koji build, so merging so far. We very \nmuch appreciate your help here.\n\nCheers,\nHonza\n\n> \n>> Regards,\n>> Honza\n>>\n> \n\n\n\n", "msg_date": "Thu, 22 Apr 2021 22:39:53 +0200", "msg_from": "Honza Horak <hhorak@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" }, { "msg_contents": "Hi,\n\nOn 2021-04-22 09:35:48 -0700, Tom Stellard wrote:\n> On 4/21/21 6:40 AM, Honza Horak wrote:\n> I wrote a new patch based on the bug discussion[1]. It works around\n> the issue specifically on s390x rather than disabling specific\n> CPUs and features for all targets. 
The patch is attached.\n\nCool, this is a pretty clear improvement. There's a few minor things I'd\nchange to fit it into PG - do you mind if I send that to the thread at\n[1] for you to test before I push it?\n\n\n> +/*\n> + * For the systemz target, LLVM uses a different datalayout for z13 and newer\n> + * CPUs than it does for older CPUs. This can cause a mismatch in datalayouts\n> + * in the case where the llvm_types_module is compiled with a pre-z13 CPU\n> + * and the JIT is running on z13 or newer.\n> + * See computeDataLayout() function in\n> + * llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp for information on the\n> + * datalayout differences.\n> + */\n> +static bool\n> +needs_systemz_workaround(void)\n> +{\n> +\tbool ret = false;\n> +\tLLVMContextRef llvm_context;\n> +\tLLVMTypeRef vec_type;\n> +\tLLVMTargetDataRef llvm_layoutref;\n> +\tif (strncmp(LLVMGetTargetName(llvm_targetref), \"systemz\", strlen(\"systemz\")))\n> +\t{\n> +\t\treturn false;\n> +\t}\n> +\n> +\tllvm_context = LLVMGetModuleContext(llvm_types_module);\n> +\tvec_type = LLVMVectorType(LLVMIntTypeInContext(llvm_context, 32), 4);\n> +\tllvm_layoutref = LLVMCreateTargetData(llvm_layout);\n> +\tret = (LLVMABIAlignmentOfType(llvm_layoutref, vec_type) == 16);\n> +\tLLVMDisposeTargetData(llvm_layoutref);\n> +\treturn ret;\n> +}\n\nI wonder if it'd be better to compare LLVMCopyStringRepOfTargetData() of\nthe llvm_types_module with the one of the JIT target machine, and only\nspecify -vector in that case? We currently support older LLVM versions\nthan the one that introduced the vector specific handling for systemz,\nand I don't know what'd happen if we unnecessarily specified -vector.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:25:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" 
}, { "msg_contents": "On 4/22/21 3:25 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-04-22 09:35:48 -0700, Tom Stellard wrote:\n>> On 4/21/21 6:40 AM, Honza Horak wrote:\n>> I wrote a new patch based on the bug discussion[1]. It works around\n>> the issue specifically on s390x rather than disabling specific\n>> CPUs and features for all targets. The patch is attached.\n> \n> Cool, this is a pretty clear improvement. There's a few minor things I'd\n> change to fit it into PG - do you mind if I send that to the thread at\n> [1] for you to test before I push it?\n> \n\nSure, no problem.\n> \n>> +/*\n>> + * For the systemz target, LLVM uses a different datalayout for z13 and newer\n>> + * CPUs than it does for older CPUs. This can cause a mismatch in datalayouts\n>> + * in the case where the llvm_types_module is compiled with a pre-z13 CPU\n>> + * and the JIT is running on z13 or newer.\n>> + * See computeDataLayout() function in\n>> + * llvm/lib/Target/SystemZ/SystemZTargetMachine.cpp for information on the\n>> + * datalayout differences.\n>> + */\n>> +static bool\n>> +needs_systemz_workaround(void)\n>> +{\n>> +\tbool ret = false;\n>> +\tLLVMContextRef llvm_context;\n>> +\tLLVMTypeRef vec_type;\n>> +\tLLVMTargetDataRef llvm_layoutref;\n>> +\tif (strncmp(LLVMGetTargetName(llvm_targetref), \"systemz\", strlen(\"systemz\")))\n>> +\t{\n>> +\t\treturn false;\n>> +\t}\n>> +\n>> +\tllvm_context = LLVMGetModuleContext(llvm_types_module);\n>> +\tvec_type = LLVMVectorType(LLVMIntTypeInContext(llvm_context, 32), 4);\n>> +\tllvm_layoutref = LLVMCreateTargetData(llvm_layout);\n>> +\tret = (LLVMABIAlignmentOfType(llvm_layoutref, vec_type) == 16);\n>> +\tLLVMDisposeTargetData(llvm_layoutref);\n>> +\treturn ret;\n>> +}\n> \n> I wonder if it'd be better to compare LLVMCopyStringRepOfTargetData() of\n> the llvm_types_module with the one of the JIT target machine, and only\n> specify -vector in that case? 
We currently support older LLVM versions\n> than the one that introduced the vector specific handling for systemz,\n> and I don't know what'd happen if we unnecessarily specified -vector.\n> \n\nThe problem is that you have to pass the features to LLVMCreateTargetMachine\nin order to know what the data layout of the JIT target is going to be,\nso the only way to make this work, would be to create the TargetMachine\nwith the default features, check it's datalayout, and then re-create the\nTargetMachine in order to apply the workaround. Maybe that's not so bad?\n\nThe other question I had is should we #ifdef ARCH_S390x in\nneeds_sytemz_workaround(), so we don't need to check the target\nname.\n\n-Tom\n\n\n> Greetings,\n> \n> Andres Freund\n> \n\n\n\n", "msg_date": "Thu, 22 Apr 2021 15:57:49 -0700", "msg_from": "Tom Stellard <tstellar@redhat.com>", "msg_from_op": false, "msg_subject": "Re: Do we work with LLVM 12 on s390x?" } ]
[ { "msg_contents": "Hi,\n\nI started to write this as a reply to\nhttps://postgr.es/m/20210318015105.dcfa4ceybdjubf2i%40alap3.anarazel.de\nbut I think it doesn't really fit under that header anymore.\n\nOn 2021-03-17 18:51:05 -0700, Andres Freund wrote:\n> It does make it easier for the shared memory stats patch, because if\n> there's a fixed number + location, the relevant stats reporting doesn't\n> need to go through a hashtable with the associated locking. I guess\n> that may have colored my perception that it's better to just have a\n> statically sized memory allocation for this. Noteworthy that SLRU stats\n> are done in a fixed size allocation as well...\n\nAs part of reviewing the replication slot stats patch I looked at\nreplication slot stats a fair bit, and I've a few misgivings. First,\nabout the pgstat.c side of things:\n\n- If somehow slot stat drop messages got lost (remember pgstat\n communication is lossy!), we'll just stop maintaining stats for slots\n created later, because there'll eventually be no space for keeping\n stats for another slot.\n\n- If max_replication_slots was lowered between a restart,\n pgstat_read_statfile() will happily write beyond the end of\n replSlotStats.\n\n- pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. I\n think pgstat.c has absolutely no business doing things on that level.\n\n- We do a linear search through all replication slots whenever receiving\n stats for a slot. Even though there'd be a perfectly good index to\n just use all throughout - the slots index itself. It looks to me like\n slots stat reports can be fairly frequent in some workloads, so that\n doesn't seem great.\n\n- PgStat_ReplSlotStats etc use slotname[NAMEDATALEN]. Why not just NameData?\n\n- pgstat_report_replslot() already has a lot of stats parameters, it\n seems likely that we'll get more. 
Seems like we should just use a\n struct of stats updates.\n\n\nAnd then more generally about the feature:\n- If a slot was used to stream out a large amount of changes (say an\n initial data load), but then replication is interrupted before the\n transaction is committed/aborted, stream_bytes will not reflect the\n many gigabytes of data we may have sent.\n- I seems weird that we went to the trouble of inventing replication\n slot stats, but then limit them to logical slots, and even there don't\n record the obvious things like the total amount of data sent.\n\n\nI think the best way to address the more fundamental \"pgstat related\"\ncomplaints is to change how replication slot stats are\n\"addressed\". Instead of using the slots name, report stats using the\nindex in ReplicationSlotCtl->replication_slots.\n\nThat removes the risk of running out of \"replication slot stat slots\":\nIf we loose a drop message, the index eventually will be reused and we\nlikely can detect that the stats were for a different slot by comparing\nthe slot name.\n\nIt also makes it easy to handle the issue of max_replication_slots being\nlowered and there still being stats for a slot - we simply can skip\nrestoring that slots data, because we know the relevant slot can't exist\nanymore. 
And we can make the initial pgstat_report_replslot() during\nslot creation use a\n\nI'm wondering if we should just remove the slot name entirely from the\npgstat.c side of things, and have pg_stat_get_replication_slots()\ninquire about slots by index as well and get the list of slots to report\nstats for from slot.c infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 19 Mar 2021 11:52:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Replication slot stats misgivings" }, { "msg_contents": "On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> And then more generally about the feature:\n> - If a slot was used to stream out a large amount of changes (say an\n> initial data load), but then replication is interrupted before the\n> transaction is committed/aborted, stream_bytes will not reflect the\n> many gigabytes of data we may have sent.\n>\n\nWe can probably update the stats each time we spilled or streamed the\ntransaction data but it was not clear at that stage whether or how\nmuch it will be useful.\n\n> - I seems weird that we went to the trouble of inventing replication\n> slot stats, but then limit them to logical slots, and even there don't\n> record the obvious things like the total amount of data sent.\n>\n\nWon't spill_bytes and stream_bytes will give you the amount of data sent?\n\n>\n> I think the best way to address the more fundamental \"pgstat related\"\n> complaints is to change how replication slot stats are\n> \"addressed\". 
Instead of using the slots name, report stats using the\n> index in ReplicationSlotCtl->replication_slots.\n>\n> That removes the risk of running out of \"replication slot stat slots\":\n> If we loose a drop message, the index eventually will be reused and we\n> likely can detect that the stats were for a different slot by comparing\n> the slot name.\n>\n\nThis idea is worth exploring to address the complaints but what do we\ndo when we detect that the stats are from the different slot? It has\nmixed of stats from the old and new slot. We need to probably reset it\nafter we detect that. What if after some frequency (say whenever we\nrun out of indexes) we check whether the slots we are maintaining is\npgstat.c have some stale slot entry (entry exists but the actual slot\nis dropped)?\n\n> It also makes it easy to handle the issue of max_replication_slots being\n> lowered and there still being stats for a slot - we simply can skip\n> restoring that slots data, because we know the relevant slot can't exist\n> anymore. 
And we can make the initial pgstat_report_replslot() during\n> slot creation use a\n>\n\nHere, your last sentence seems to be incomplete.\n\n> I'm wondering if we should just remove the slot name entirely from the\n> pgstat.c side of things, and have pg_stat_get_replication_slots()\n> inquire about slots by index as well and get the list of slots to report\n> stats for from slot.c infrastructure.\n>\n\nBut how will you detect in your idea that some of the stats from the\nalready dropped slot?\n\nI'll create an entry for this in PG14 Open items wiki.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Mar 2021 09:25:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Mar 20, 2021 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > And then more generally about the feature:\n> > - If a slot was used to stream out a large amount of changes (say an\n> > initial data load), but then replication is interrupted before the\n> > transaction is committed/aborted, stream_bytes will not reflect the\n> > many gigabytes of data we may have sent.\n> >\n>\n> We can probably update the stats each time we spilled or streamed the\n> transaction data but it was not clear at that stage whether or how\n> much it will be useful.\n>\n> > - I seems weird that we went to the trouble of inventing replication\n> > slot stats, but then limit them to logical slots, and even there don't\n> > record the obvious things like the total amount of data sent.\n> >\n>\n> Won't spill_bytes and stream_bytes will give you the amount of data sent?\n>\n> >\n> > I think the best way to address the more fundamental \"pgstat related\"\n> > complaints is to change how replication slot stats are\n> > \"addressed\". 
Instead of using the slots name, report stats using the\n> > index in ReplicationSlotCtl->replication_slots.\n> >\n> > That removes the risk of running out of \"replication slot stat slots\":\n> > If we loose a drop message, the index eventually will be reused and we\n> > likely can detect that the stats were for a different slot by comparing\n> > the slot name.\n> >\n>\n> This idea is worth exploring to address the complaints but what do we\n> do when we detect that the stats are from the different slot? It has\n> mixed of stats from the old and new slot. We need to probably reset it\n> after we detect that.\n>\n\nWhat if the user created a slot with the same name after dropping the\nslot and it has used the same index. I think chances are less but\nstill a possibility, but maybe that is okay.\n\n> What if after some frequency (say whenever we\n> run out of indexes) we check whether the slots we are maintaining is\n> pgstat.c have some stale slot entry (entry exists but the actual slot\n> is dropped)?\n>\n\nA similar drawback (the user created a slot with the same name after\ndropping it) exists with this as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 20 Mar 2021 10:28:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-20 09:25:40 +0530, Amit Kapila wrote:\n> On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > And then more generally about the feature:\n> > - If a slot was used to stream out a large amount of changes (say an\n> > initial data load), but then replication is interrupted before the\n> > transaction is committed/aborted, stream_bytes will not reflect the\n> > many gigabytes of data we may have sent.\n> >\n> \n> We can probably update the stats each time we spilled or streamed the\n> transaction data but it was not clear at that stage whether or how\n> much it will be 
useful.\n\nIt seems like the obvious answer here is to sync stats when releasing\nthe slot?\n\n\n> > - It seems weird that we went to the trouble of inventing replication\n> > slot stats, but then limit them to logical slots, and even there don't\n> > record the obvious things like the total amount of data sent.\n> >\n> \n> Won't spill_bytes and stream_bytes give you the amount of data sent?\n\nI don't think either tracks changes that were neither spilled nor\nstreamed? And if they are, they're terribly misnamed?\n\n> >\n> > I think the best way to address the more fundamental \"pgstat related\"\n> > complaints is to change how replication slot stats are\n> > \"addressed\". Instead of using the slot's name, report stats using the\n> > index in ReplicationSlotCtl->replication_slots.\n> >\n> > That removes the risk of running out of \"replication slot stat slots\":\n> > If we lose a drop message, the index will eventually be reused and we\n> > can likely detect that the stats were for a different slot by comparing\n> > the slot name.\n> >\n> \n> This idea is worth exploring to address the complaints, but what do we\n> do when we detect that the stats are from a different slot?\n\nI think it's pretty easy to make that bulletproof. Add a\npgstat_report_replslot_create(), and use that in\nReplicationSlotCreate(). That is called with\nReplicationSlotAllocationLock held, so it can just safely zero out stats.\n\nI don't think:\n\n> It has a mix of stats from the old and new slot.\n\nCan happen in that scenario.\n\n\n> > It also makes it easy to handle the issue of max_replication_slots being\n> > lowered and there still being stats for a slot - we simply can skip\n> > restoring that slot's data, because we know the relevant slot can't exist\n> > anymore. 
And we can make the initial pgstat_report_replslot() during\n> > slot creation use a\n> >\n> \n> Here, your last sentence seems to be incomplete.\n\nOops, I was planning to suggest adding pgstat_report_replslot_create()\nthat zeroes out the pre-existing stats (or a parameter to\npgstat_report_replslot(), but I don't think that's better).\n\n\n> > I'm wondering if we should just remove the slot name entirely from the\n> > pgstat.c side of things, and have pg_stat_get_replication_slots()\n> > inquire about slots by index as well and get the list of slots to report\n> > stats for from slot.c infrastructure.\n> >\n> \n> But how will you detect in your idea that some of the stats from the\n> already dropped slot?\n\nI don't think that is possible with my sketch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Mar 2021 14:26:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-20 10:28:06 +0530, Amit Kapila wrote:\n> On Sat, Mar 20, 2021 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > This idea is worth exploring to address the complaints but what do we\n> > do when we detect that the stats are from the different slot? It has\n> > mixed of stats from the old and new slot. We need to probably reset it\n> > after we detect that.\n> >\n> \n> What if the user created a slot with the same name after dropping the\n> slot and it has used the same index. 
I think chances are less but\n> still a possibility, but maybe that is okay.\n> \n> > What if after some frequency (say whenever we\n> > run out of indexes) we check whether the slots we are maintaining is\n> > pgstat.c have some stale slot entry (entry exists but the actual slot\n> > is dropped)?\n> >\n> \n> A similar drawback (the user created a slot with the same name after\n> dropping it) exists with this as well.\n\npgstat_report_replslot_drop() already prevents that, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 20 Mar 2021 14:27:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Mar 21, 2021 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-20 10:28:06 +0530, Amit Kapila wrote:\n> > On Sat, Mar 20, 2021 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > This idea is worth exploring to address the complaints but what do we\n> > > do when we detect that the stats are from the different slot? It has\n> > > mixed of stats from the old and new slot. We need to probably reset it\n> > > after we detect that.\n> > >\n> >\n> > What if the user created a slot with the same name after dropping the\n> > slot and it has used the same index. 
I think chances are less but\n> > still a possibility, but maybe that is okay.\n> >\n> > > What if after some frequency (say whenever we\n> > > run out of indexes) we check whether the slots we are maintaining is\n> > > pgstat.c have some stale slot entry (entry exists but the actual slot\n> > > is dropped)?\n> > >\n> >\n> > A similar drawback (the user created a slot with the same name after\n> > dropping it) exists with this as well.\n>\n> pgstat_report_replslot_drop() already prevents that, no?\n>\n\nYeah, normally it would prevent that but what if a drop message is lost?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Mar 2021 16:08:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Mar 21, 2021 at 2:56 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-03-20 09:25:40 +0530, Amit Kapila wrote:\n> > On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > And then more generally about the feature:\n> > > - If a slot was used to stream out a large amount of changes (say an\n> > > initial data load), but then replication is interrupted before the\n> > > transaction is committed/aborted, stream_bytes will not reflect the\n> > > many gigabytes of data we may have sent.\n> > >\n> >\n> > We can probably update the stats each time we spilled or streamed the\n> > transaction data but it was not clear at that stage whether or how\n> > much it will be useful.\n>\n> It seems like the obvious answer here is to sync stats when releasing\n> the slot?\n>\n\nOkay, that makes sense.\n\n>\n> > > - I seems weird that we went to the trouble of inventing replication\n> > > slot stats, but then limit them to logical slots, and even there don't\n> > > record the obvious things like the total amount of data sent.\n> > >\n> >\n> > Won't spill_bytes and stream_bytes will give you the amount of data sent?\n>\n> I 
don't think either tracks changes that were neither spilled nor\n> streamed? And if they are, they're terribly misnamed?\n>\n\nRight, it won't track such changes but we can track that as well and I\nunderstand it will be good to track that information. I think we were\ntoo focused on stats for newly introduced features that we forget\nabout the non-spilled and non-streamed xacts.\n\nNote - I have now created an entry for this in PG14 Open Items [1].\n\n[1] - https://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Mar 2021 16:10:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-21 16:08:00 +0530, Amit Kapila wrote:\n> On Sun, Mar 21, 2021 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2021-03-20 10:28:06 +0530, Amit Kapila wrote:\n> > > On Sat, Mar 20, 2021 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > This idea is worth exploring to address the complaints but what do we\n> > > > do when we detect that the stats are from the different slot? It has\n> > > > mixed of stats from the old and new slot. We need to probably reset it\n> > > > after we detect that.\n> > > >\n> > >\n> > > What if the user created a slot with the same name after dropping the\n> > > slot and it has used the same index. 
I think chances are less but\n> > > still a possibility, but maybe that is okay.\n> > >\n> > > > What if after some frequency (say whenever we\n> > > > run out of indexes) we check whether the slots we are maintaining is\n> > > > pgstat.c have some stale slot entry (entry exists but the actual slot\n> > > > is dropped)?\n> > > >\n> > >\n> > > A similar drawback (the user created a slot with the same name after\n> > > dropping it) exists with this as well.\n> >\n> > pgstat_report_replslot_drop() already prevents that, no?\n> >\n> \n> Yeah, normally it would prevent that but what if a drop message is lost?\n\nThat already exists as a danger, no? pgstat_recv_replslot() uses\npgstat_replslot_index() to find the slot by name. So if a drop message\nis lost we'd potentially accumulate into stats of an older slot. It'd\nprobably a lower risk with what I suggested, because the initial stat\nreport slot.c would use something like pgstat_report_replslot_create(),\nwhich the stats collector can use to reset the stats to 0?\n\nIf we do it right the lossiness will be removed via shared memory stats\npatch... But architecturally the name based lookup and unpredictable\nnumber of stats doesn't fit in super well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Mar 2021 14:40:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Mar 22, 2021 at 3:10 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-03-21 16:08:00 +0530, Amit Kapila wrote:\n> > On Sun, Mar 21, 2021 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > > On 2021-03-20 10:28:06 +0530, Amit Kapila wrote:\n> > > > On Sat, Mar 20, 2021 at 9:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > This idea is worth exploring to address the complaints but what do we\n> > > > > do when we detect that the stats are from the different slot? 
It has a\n> > > > > mix of stats from the old and new slot. We need to probably reset it\n> > > > > after we detect that.\n> > > > >\n> > > >\n> > > > What if the user created a slot with the same name after dropping the\n> > > > slot and it has used the same index. I think chances are less but\n> > > > still a possibility, but maybe that is okay.\n> > > >\n> > > > > What if after some frequency (say whenever we\n> > > > > run out of indexes) we check whether the slots we are maintaining in\n> > > > > pgstat.c have some stale slot entry (entry exists but the actual slot\n> > > > > is dropped)?\n> > > > >\n> > > >\n> > > > A similar drawback (the user created a slot with the same name after\n> > > > dropping it) exists with this as well.\n> > >\n> > > pgstat_report_replslot_drop() already prevents that, no?\n> > >\n> >\n> > Yeah, normally it would prevent that but what if a drop message is lost?\n>\n> That already exists as a danger, no? pgstat_recv_replslot() uses\n> pgstat_replslot_index() to find the slot by name. So if a drop message\n> is lost we'd potentially accumulate into stats of an older slot. It'd\n> probably be a lower risk with what I suggested, because the initial stats\n> report in slot.c would use something like pgstat_report_replslot_create(),\n> which the stats collector can use to reset the stats to 0?\n>\n\nOkay, but I guess if we miss the create message as well then we will\nhave a similar danger. I think the benefit your idea will bring is to\nuse index-based lookup instead of name-based lookup. 
IIRC, we have\ninitially used the name here because we thought there is nothing like\nOID for slots but your suggestion of using\nReplicationSlotCtl->replication_slots can address that.\n\n> If we do it right the lossiness will be removed via shared memory stats\n> patch...\n>\n\nOkay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Mar 2021 08:26:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> - If max_replication_slots was lowered between a restart,\n> pgstat_read_statfile() will happily write beyond the end of\n> replSlotStats.\n\nI think we cannot restart the server after lowering\nmax_replication_slots to a value less than the number of replication\nslots actually created on the server. No?\n\n>\n> - pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. I\n> think pgstat.c has absolutely no business doing things on that level.\n\nAgreed.\n\n>\n> - PgStat_ReplSlotStats etc use slotname[NAMEDATALEN]. Why not just NameData?\n\nThat's because we followed other definitions in pgstat.h that use\nchar[NAMEDATALEN]. I'm okay with using NameData.\n\n>\n> - pgstat_report_replslot() already has a lot of stats parameters, it\n> seems likely that we'll get more. 
Seems like we should just use a\n> struct of stats updates.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 22 Mar 2021 13:25:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Mar 22, 2021 at 1:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > - If max_replication_slots was lowered between a restart,\n> > pgstat_read_statfile() will happily write beyond the end of\n> > replSlotStats.\n>\n> I think we cannot restart the server after lowering\n> max_replication_slots to a value less than the number of replication\n> slots actually created on the server. No?\n\nThis problem happens in the case where max_replication_slots is\nlowered and there still are stats for a slot.\n\nI understood the risk of running out of replSlotStats. If we use the\nindex in replSlotStats instead, IIUC we need to somehow synchronize\nthe indexes in between replSlotStats and\nReplicationSlotCtl->replication_slots. The order of replSlotStats is\npreserved across restarting whereas the order of\nReplicationSlotCtl->replication_slots isn’t (readdir() that is used by\nStartupReplicationSlots() doesn’t guarantee the order of the returned\nentries in the directory). Maybe we can compare the slot name in the\nreceived message to the name in the element of replSlotStats. If they\ndon’t match, we swap entries in replSlotStats to synchronize the index\nof the replication slot in ReplicationSlotCtl->replication_slots and\nreplSlotStats. If we cannot find the entry in replSlotStats that has\nthe name in the received message, it probably means either it's a new\nslot or the previous create message is dropped, we can create the new\nstats for the slot. 
Is that what you mean, Andres?\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 22 Mar 2021 15:49:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Mar 22, 2021 at 12:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 1:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > - If max_replication_slots was lowered between a restart,\n> > > pgstat_read_statfile() will happily write beyond the end of\n> > > replSlotStats.\n> >\n> > I think we cannot restart the server after lowering\n> > max_replication_slots to a value less than the number of replication\n> > slots actually created on the server. No?\n>\n> This problem happens in the case where max_replication_slots is\n> lowered and there still are stats for a slot.\n>\n\nI think this can happen only if the drop message is lost, right?\n\n> I understood the risk of running out of replSlotStats. If we use the\n> index in replSlotStats instead, IIUC we need to somehow synchronize\n> the indexes in between replSlotStats and\n> ReplicationSlotCtl->replication_slots. The order of replSlotStats is\n> preserved across restarting whereas the order of\n> ReplicationSlotCtl->replication_slots isn’t (readdir() that is used by\n> StartupReplicationSlots() doesn’t guarantee the order of the returned\n> entries in the directory). Maybe we can compare the slot name in the\n> received message to the name in the element of replSlotStats. If they\n> don’t match, we swap entries in replSlotStats to synchronize the index\n> of the replication slot in ReplicationSlotCtl->replication_slots and\n> replSlotStats. 
If we cannot find the entry in replSlotStats that has\n> the name in the received message, it probably means either it's a new\n> slot or the previous create message is dropped, we can create the new\n> stats for the slot. Is that what you mean, Andres?\n>\n\nI wonder how in this scheme, we will remove the risk of running out of\n'replSlotStats' and still restore correct stats assuming the drop\nmessage is lost? Do we want to check after restoring each slot info\nwhether the slot with that name exists?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:39:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Mar 23, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 12:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Mar 22, 2021 at 1:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > - If max_replication_slots was lowered between a restart,\n> > > > pgstat_read_statfile() will happily write beyond the end of\n> > > > replSlotStats.\n> > >\n> > > I think we cannot restart the server after lowering\n> > > max_replication_slots to a value less than the number of replication\n> > > slots actually created on the server. No?\n> >\n> > This problem happens in the case where max_replication_slots is\n> > lowered and there still are stats for a slot.\n> >\n>\n> I think this can happen only if the drop message is lost, right?\n\nYes, I think you're right. In that case, the stats file could have\nmore slots statistics than the lowered max_replication_slots.\n\n>\n> > I understood the risk of running out of replSlotStats. 
If we use the\n> > index in replSlotStats instead, IIUC we need to somehow synchronize\n> > the indexes in between replSlotStats and\n> > ReplicationSlotCtl->replication_slots. The order of replSlotStats is\n> > preserved across restarting whereas the order of\n> > ReplicationSlotCtl->replication_slots isn’t (readdir() that is used by\n> > StartupReplicationSlots() doesn’t guarantee the order of the returned\n> > entries in the directory). Maybe we can compare the slot name in the\n> > received message to the name in the element of replSlotStats. If they\n> > don’t match, we swap entries in replSlotStats to synchronize the index\n> > of the replication slot in ReplicationSlotCtl->replication_slots and\n> > replSlotStats. If we cannot find the entry in replSlotStats that has\n> > the name in the received message, it probably means either it's a new\n> > slot or the previous create message is dropped, we can create the new\n> > stats for the slot. Is that what you mean, Andres?\n> >\n>\n> I wonder how in this scheme, we will remove the risk of running out of\n> 'replSlotStats' and still restore correct stats assuming the drop\n> message is lost? Do we want to check after restoring each slot info\n> whether the slot with that name exists?\n\nYeah, I think we need such a check at least if the number of slot\nstats in the stats file is larger than max_replication_slots. 
Or we\ncan do that at every startup to remove orphaned slot stats.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 23 Mar 2021 23:37:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-23 23:37:14 +0900, Masahiko Sawada wrote:\n> On Tue, Mar 23, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Mar 22, 2021 at 12:20 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Mar 22, 2021 at 1:25 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > > - If max_replication_slots was lowered between a restart,\n> > > > > pgstat_read_statfile() will happily write beyond the end of\n> > > > > replSlotStats.\n> > > >\n> > > > I think we cannot restart the server after lowering\n> > > > max_replication_slots to a value less than the number of replication\n> > > > slots actually created on the server. No?\n> > >\n> > > This problem happens in the case where max_replication_slots is\n> > > lowered and there still are stats for a slot.\n> > >\n> >\n> > I think this can happen only if the drop message is lost, right?\n> \n> Yes, I think you're right. In that case, the stats file could have\n> more slots statistics than the lowered max_replication_slots.\n\nOr if slots are deleted on the file-system while the cluster is\nshutdown. Which obviously is at best a semi-supported thing, but it\nnormally does work.\n\n\n> > > I understood the risk of running out of replSlotStats. If we use the\n> > > index in replSlotStats instead, IIUC we need to somehow synchronize\n> > > the indexes in between replSlotStats and\n> > > ReplicationSlotCtl->replication_slots. 
The order of replSlotStats is\n> > > preserved across restarting whereas the order of\n> > > ReplicationSlotCtl->replication_slots isn’t (readdir() that is used by\n> > > StartupReplicationSlots() doesn’t guarantee the order of the returned\n> > > entries in the directory).\n\nVery good point. Even if readdir() order were fixed, we'd still have the\nproblem because there can be \"gaps\" in the indexes for slots\n(e.g. create slot_a, create slot_b, create slot_c, drop slot_b, leaving\nyou with index 0 and 2 used, and 1 unused).\n\n\n> > > Maybe we can compare the slot name in the\n> > > received message to the name in the element of replSlotStats. If they\n> > > don’t match, we swap entries in replSlotStats to synchronize the index\n> > > of the replication slot in ReplicationSlotCtl->replication_slots and\n> > > replSlotStats. If we cannot find the entry in replSlotStats that has\n> > > the name in the received message, it probably means either it's a new\n> > > slot or the previous create message is dropped, we can create the new\n> > > stats for the slot. Is that what you mean, Andres?\n\nThat doesn't seem great. Slot names are imo a poor identifier for\nsomething happening asynchronously. The stats collector regularly\ndoesn't process incoming messages for periods of time because it is busy\nwriting out the stats file. 
That's also when messages to it are most\nlikely to be dropped (likely because the incoming buffer is full).\n\nPerhaps we could have RestoreSlotFromDisk() send something to the stats\ncollector ensuring the mapping makes sense?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:24:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Mar 23, 2021 at 10:54 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2021-03-23 23:37:14 +0900, Masahiko Sawada wrote:\n>\n> > > > Maybe we can compare the slot name in the\n> > > > received message to the name in the element of replSlotStats. If they\n> > > > don’t match, we swap entries in replSlotStats to synchronize the index\n> > > > of the replication slot in ReplicationSlotCtl->replication_slots and\n> > > > replSlotStats. If we cannot find the entry in replSlotStats that has\n> > > > the name in the received message, it probably means either it's a new\n> > > > slot or the previous create message is dropped, we can create the new\n> > > > stats for the slot. Is that what you mean, Andres?\n>\n> That doesn't seem great. Slot names are imo a poor identifier for\n> something happening asynchronously. The stats collector regularly\n> doesn't process incoming messages for periods of time because it is busy\n> writing out the stats file. That's also when messages to it are most\n> likely to be dropped (likely because the incoming buffer is full).\n>\n\nLeaving aside restart case, without some sort of such sanity checking,\nif both drop (of old slot) and create (of new slot) messages are lost\nthen we will start accumulating stats in old slots. 
However, if only\none of them is lost then there won't be any such problem.\n\n> Perhaps we could have RestoreSlotFromDisk() send something to the stats\n> collector ensuring the mapping makes sense?\n>\n\nSay if we send just the index location of each slot then probably we\ncan setup replSlotStats. Now say before the restart if one of the drop\nmessages was missed (by stats collector) and that happens to be at\nsome middle location, then we would end up restoring some already\ndropped slot, leaving some of the still required ones. However, if\nthere is some sanity identifier like name along with the index, then I\nthink that would have worked for such a case.\n\nI think it would have been easier if we would have some OID type of\nidentifier for each slot. But, without that may be index location of\nReplicationSlotCtl->replication_slots and slotname combination can\nreduce the chances of slot stats go wrong quite less even if not zero.\nIf not name, do we have anything else in a slot that can be used for\nsome sort of sanity checking?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 24 Mar 2021 15:36:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Mar 24, 2021 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 23, 2021 at 10:54 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2021-03-23 23:37:14 +0900, Masahiko Sawada wrote:\n> >\n> > > > > Maybe we can compare the slot name in the\n> > > > > received message to the name in the element of replSlotStats. If they\n> > > > > don’t match, we swap entries in replSlotStats to synchronize the index\n> > > > > of the replication slot in ReplicationSlotCtl->replication_slots and\n> > > > > replSlotStats. 
If we cannot find the entry in replSlotStats that has\n> > > > > the name in the received message, it probably means either it's a new\n> > > > > slot or the previous create message is dropped, we can create the new\n> > > > > stats for the slot. Is that what you mean, Andres?\n> >\n> > That doesn't seem great. Slot names are imo a poor identifier for\n> > something happening asynchronously. The stats collector regularly\n> > doesn't process incoming messages for periods of time because it is busy\n> > writing out the stats file. That's also when messages to it are most\n> > likely to be dropped (likely because the incoming buffer is full).\n> >\n>\n> Leaving aside restart case, without some sort of such sanity checking,\n> if both drop (of old slot) and create (of new slot) messages are lost\n> then we will start accumulating stats in old slots. However, if only\n> one of them is lost then there won't be any such problem.\n>\n> > Perhaps we could have RestoreSlotFromDisk() send something to the stats\n> > collector ensuring the mapping makes sense?\n> >\n>\n> Say if we send just the index location of each slot then probably we\n> can setup replSlotStats. Now say before the restart if one of the drop\n> messages was missed (by stats collector) and that happens to be at\n> some middle location, then we would end up restoring some already\n> dropped slot, leaving some of the still required ones. However, if\n> there is some sanity identifier like name along with the index, then I\n> think that would have worked for such a case.\n\nEven such messages could also be lost? Given that any message could be\nlost under a UDP connection, I think we cannot rely on a single\nmessage. Instead, I think we need to loosely synchronize the indexes\nwhile assuming the indexes in replSlotStats and\nReplicationSlotCtl->replication_slots are not synchronized.\n\n>\n> I think it would have been easier if we would have some OID type of\n> identifier for each slot. 
But, without that may be index location of\n> ReplicationSlotCtl->replication_slots and slotname combination can\n> reduce the chances of slot stats go wrong quite less even if not zero.\n> If not name, do we have anything else in a slot that can be used for\n> some sort of sanity checking?\n\nI don't see any useful information in a slot for sanity checking.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 25 Mar 2021 15:05:51 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Mar 25, 2021 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Mar 24, 2021 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Leaving aside restart case, without some sort of such sanity checking,\n> > if both drop (of old slot) and create (of new slot) messages are lost\n> > then we will start accumulating stats in old slots. However, if only\n> > one of them is lost then there won't be any such problem.\n> >\n> > > Perhaps we could have RestoreSlotFromDisk() send something to the stats\n> > > collector ensuring the mapping makes sense?\n> > >\n> >\n> > Say if we send just the index location of each slot then probably we\n> > can setup replSlotStats. Now say before the restart if one of the drop\n> > messages was missed (by stats collector) and that happens to be at\n> > some middle location, then we would end up restoring some already\n> > dropped slot, leaving some of the still required ones. However, if\n> > there is some sanity identifier like name along with the index, then I\n> > think that would have worked for such a case.\n>\n> Even such messages could also be lost? Given that any message could be\n> lost under a UDP connection, I think we cannot rely on a single\n> message. 
Instead, I think we need to loosely synchronize the indexes\n> while assuming the indexes in replSlotStats and\n> ReplicationSlotCtl->replication_slots are not synchronized.\n>\n> >\n> > I think it would have been easier if we would have some OID type of\n> > identifier for each slot. But, without that may be index location of\n> > ReplicationSlotCtl->replication_slots and slotname combination can\n> > reduce the chances of slot stats go wrong quite less even if not zero.\n> > If not name, do we have anything else in a slot that can be used for\n> > some sort of sanity checking?\n>\n> I don't see any useful information in a slot for sanity checking.\n>\n\nIn that case, can we do a hard check for which slots exist if\nreplSlotStats runs out of space (that can probably happen only after\nrestart and when we lost some drop messages)?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 25 Mar 2021 17:12:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-25 17:12:31 +0530, Amit Kapila wrote:\n> On Thu, Mar 25, 2021 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Mar 24, 2021 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Leaving aside restart case, without some sort of such sanity checking,\n> > > if both drop (of old slot) and create (of new slot) messages are lost\n> > > then we will start accumulating stats in old slots. However, if only\n> > > one of them is lost then there won't be any such problem.\n> > >\n> > > > Perhaps we could have RestoreSlotFromDisk() send something to the stats\n> > > > collector ensuring the mapping makes sense?\n> > > >\n> > >\n> > > Say if we send just the index location of each slot then probably we\n> > > can setup replSlotStats. 
Now say before the restart if one of the drop\n> > > messages was missed (by stats collector) and that happens to be at\n> > > some middle location, then we would end up restoring some already\n> > > dropped slot, leaving some of the still required ones. However, if\n> > > there is some sanity identifier like name along with the index, then I\n> > > think that would have worked for such a case.\n> >\n> > Even such messages could also be lost? Given that any message could be\n> > lost under a UDP connection, I think we cannot rely on a single\n> > message. Instead, I think we need to loosely synchronize the indexes\n> > while assuming the indexes in replSlotStats and\n> > ReplicationSlotCtl->replication_slots are not synchronized.\n> >\n> > >\n> > > I think it would have been easier if we would have some OID type of\n> > > identifier for each slot. But, without that may be index location of\n> > > ReplicationSlotCtl->replication_slots and slotname combination can\n> > > reduce the chances of slot stats go wrong quite less even if not zero.\n> > > If not name, do we have anything else in a slot that can be used for\n> > > some sort of sanity checking?\n> >\n> > I don't see any useful information in a slot for sanity checking.\n> >\n> \n> In that case, can we do a hard check for which slots exist if\n> replSlotStats runs out of space (that can probably happen only after\n> restart and when we lost some drop messages)?\n\nI suggest we wait doing anything about this until we know if the shared\nstats patch gets in or not (I'd give it 50% maybe). 
If it does get in\nthings get a good bit easier, because we don't have to deal with the\nmessage loss issues anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Mar 2021 12:47:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Mar 26, 2021 at 1:17 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-25 17:12:31 +0530, Amit Kapila wrote:\n> > On Thu, Mar 25, 2021 at 11:36 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 24, 2021 at 7:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > Leaving aside restart case, without some sort of such sanity checking,\n> > > > if both drop (of old slot) and create (of new slot) messages are lost\n> > > > then we will start accumulating stats in old slots. However, if only\n> > > > one of them is lost then there won't be any such problem.\n> > > >\n> > > > > Perhaps we could have RestoreSlotFromDisk() send something to the stats\n> > > > > collector ensuring the mapping makes sense?\n> > > > >\n> > > >\n> > > > Say if we send just the index location of each slot then probably we\n> > > > can setup replSlotStats. Now say before the restart if one of the drop\n> > > > messages was missed (by stats collector) and that happens to be at\n> > > > some middle location, then we would end up restoring some already\n> > > > dropped slot, leaving some of the still required ones. However, if\n> > > > there is some sanity identifier like name along with the index, then I\n> > > > think that would have worked for such a case.\n> > >\n> > > Even such messages could also be lost? Given that any message could be\n> > > lost under a UDP connection, I think we cannot rely on a single\n> > > message. 
Instead, I think we need to loosely synchronize the indexes\n> > > while assuming the indexes in replSlotStats and\n> > > ReplicationSlotCtl->replication_slots are not synchronized.\n> > >\n> > > >\n> > > > I think it would have been easier if we would have some OID type of\n> > > > identifier for each slot. But, without that may be index location of\n> > > > ReplicationSlotCtl->replication_slots and slotname combination can\n> > > > reduce the chances of slot stats go wrong quite less even if not zero.\n> > > > If not name, do we have anything else in a slot that can be used for\n> > > > some sort of sanity checking?\n> > >\n> > > I don't see any useful information in a slot for sanity checking.\n> > >\n> >\n> > In that case, can we do a hard check for which slots exist if\n> > replSlotStats runs out of space (that can probably happen only after\n> > restart and when we lost some drop messages)?\n>\n> I suggest we wait doing anything about this until we know if the shared\n> stats patch gets in or not (I'd give it 50% maybe). If it does get in\n> things get a good bit easier, because we don't have to deal with the\n> message loss issues anymore.\n>\n\nOkay, that makes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Mar 2021 07:58:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-26 07:58:58 +0530, Amit Kapila wrote:\n> On Fri, Mar 26, 2021 at 1:17 AM Andres Freund <andres@anarazel.de> wrote:\n> > I suggest we wait doing anything about this until we know if the shared\n> > stats patch gets in or not (I'd give it 50% maybe). If it does get in\n> > things get a good bit easier, because we don't have to deal with the\n> > message loss issues anymore.\n> >\n> \n> Okay, that makes sense.\n\nAny chance you could write a tap test exercising a few of these cases?\nE.g. 
things like:\n\n- create a few slots, drop one of them, shut down, start up, verify\n stats are still sane\n- create a few slots, shut down, manually remove a slot, lower\n max_replication_slots, start up\n\nIMO, independent of the shutdown / startup issue, it'd be worth writing\na patch tracking the bytes sent independently of the slot stats storage\nissues. 
That would also make the testing for the above cheaper...\n\nI can try to write a patch for this if nobody objects.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 30 Mar 2021 10:13:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Hi,\n\nOn 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > Any chance you could write a tap test exercising a few of these cases?\n> \n> I can try to write a patch for this if nobody objects.\n\nCool!\n\n> > E.g. things like:\n> >\n> > - create a few slots, drop one of them, shut down, start up, verify\n> > stats are still sane\n> > - create a few slots, shut down, manually remove a slot, lower\n> > max_replication_slots, start up\n> \n> Here by \"manually remove a slot\", do you mean to remove the slot\n> manually from the pg_replslot folder?\n\nYep - thereby allowing max_replication_slots after the shutdown/start to\nbe lower than the number of slots-stats objects.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 29 Mar 2021 22:30:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Any chance you could write a tap test exercising a few of these cases?\n> >\n> > I can try to write a patch for this if nobody objects.\n>\n> Cool!\n>\n\nAttached a patch which has the test for the first scenario.\n\n> > > E.g. 
things like:\n> > >\n> > > - create a few slots, drop one of them, shut down, start up, verify\n> > > stats are still sane\n> > > - create a few slots, shut down, manually remove a slot, lower\n> > > max_replication_slots, start up\n> >\n> > Here by \"manually remove a slot\", do you mean to remove the slot\n> > manually from the pg_replslot folder?\n>\n> Yep - thereby allowing max_replication_slots after the shutdown/start to\n> be lower than the number of slots-stats objects.\n\nI have not included the 2nd test in the patch as the test fails with\nfollowing warnings and also displays the statistics of the removed\nslot:\nWARNING: problem in alloc set Statistics snapshot: detected write\npast chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\nWARNING: problem in alloc set Statistics snapshot: detected write\npast chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n\nThis happens because the statistics file has an additional slot\npresent even though the replication slot was removed. I felt this\nissue should be fixed. I will try to fix this issue and send the\nsecond test along with the fix.\n\nRegards,\nVignesh", "msg_date": "Wed, 31 Mar 2021 11:32:51 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > Any chance you could write a tap test exercising a few of these cases?\n> > >\n> > > I can try to write a patch for this if nobody objects.\n> >\n> > Cool!\n> >\n>\n> Attached a patch which has the test for the first scenario.\n>\n> > > > E.g. 
things like:\n> > > >\n> > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > stats are still sane\n> > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > max_replication_slots, start up\n> > >\n> > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > manually from the pg_replslot folder?\n> >\n> > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > be lower than the number of slots-stats objects.\n>\n> I have not included the 2nd test in the patch as the test fails with\n> following warnings and also displays the statistics of the removed\n> slot:\n> WARNING: problem in alloc set Statistics snapshot: detected write\n> past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> WARNING: problem in alloc set Statistics snapshot: detected write\n> past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n>\n> This happens because the statistics file has an additional slot\n> present even though the replication slot was removed. I felt this\n> issue should be fixed. I will try to fix this issue and send the\n> second test along with the fix.\n\nI felt from the statistics collector process, there is no way in which\nwe can identify if the replication slot is present or not because the\nstatistic collector process does not have access to shared memory.\nAnything that the statistic collector process does independently by\ntraversing and removing the statistics of the replication slot\nexceeding the max_replication_slot has its drawback of removing some\nvalid replication slot's statistics data.\nAny thoughts on how we can identify the replication slot which has been dropped?\nCan someone point me to the shared stats patch link with which message\nloss can be avoided. 
I wanted to see a scenario where something like\nthe slot is dropped but the statistics are not updated because of an\nimmediate shutdown or server going down abruptly can occur or not with\nthe shared stats patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Apr 2021 15:43:36 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > Any chance you could write a tap test exercising a few of these cases?\n> > > >\n> > > > I can try to write a patch for this if nobody objects.\n> > >\n> > > Cool!\n> > >\n> >\n> > Attached a patch which has the test for the first scenario.\n> >\n> > > > > E.g. 
things like:\n> > > > >\n> > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > stats are still sane\n> > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > max_replication_slots, start up\n> > > >\n> > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > manually from the pg_replslot folder?\n> > >\n> > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > be lower than the number of slots-stats objects.\n> >\n> > I have not included the 2nd test in the patch as the test fails with\n> > following warnings and also displays the statistics of the removed\n> > slot:\n> > WARNING: problem in alloc set Statistics snapshot: detected write\n> > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > WARNING: problem in alloc set Statistics snapshot: detected write\n> > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> >\n> > This happens because the statistics file has an additional slot\n> > present even though the replication slot was removed. I felt this\n> > issue should be fixed. I will try to fix this issue and send the\n> > second test along with the fix.\n>\n> I felt from the statistics collector process, there is no way in which\n> we can identify if the replication slot is present or not because the\n> statistic collector process does not have access to shared memory.\n> Anything that the statistic collector process does independently by\n> traversing and removing the statistics of the replication slot\n> exceeding the max_replication_slot has its drawback of removing some\n> valid replication slot's statistics data.\n> Any thoughts on how we can identify the replication slot which has been dropped?\n> Can someone point me to the shared stats patch link with which message\n> loss can be avoided. 
I wanted to see a scenario where something like\n> the slot is dropped but the statistics are not updated because of an\n> immediate shutdown or server going down abruptly can occur or not with\n> the shared stats patch.\n>\n\nI don't think it is easy to simulate a scenario where the 'drop'\nmessage is dropped and I think that is why the test contains the step\nto manually remove the slot. At this stage, you can probably provide a\ntest patch and a code-fix patch where it just drops the extra slots\nfrom the stats file. That will allow us to test it with a shared\nmemory stats patch on which Andres and Horiguchi-San are working. If\nwe still continue to pursue with current approach then as Andres\nsuggested we might send additional information from\nRestoreSlotFromDisk to keep it in sync.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Apr 2021 17:58:10 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Mar 30, 2021 at 9:58 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> IMO, independent of the shutdown / startup issue, it'd be worth writing\n> a patch tracking the bytes sent independently of the slot stats storage\n> issues. That would also make the testing for the above cheaper...\n\nAgreed.\n\nI think the bytes sent should be recorded by the decoding plugin, not\nby the core side. Given that table filtering and row filtering,\ntracking the bytes passed to the decoding plugin would not help gauge\nthe actual network I/O. 
In that sense, the description of stream_bytes\nin the doc seems not accurate:\n\n---\nThis and other streaming counters for this slot can be used to gauge\nthe network I/O which occurred during logical decoding and allow\ntuning logical_decoding_work_mem.\n---\n\nIt can surely be used to allow tuning logical_decoding_work_mem but it\ncould not be true for gauging the network I/O which occurred during\nlogical decoding.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 1 Apr 2021 21:48:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > Any chance you could write a tap test exercising a few of these cases?\n> > > > >\n> > > > > I can try to write a patch for this if nobody objects.\n> > > >\n> > > > Cool!\n> > > >\n> > >\n> > > Attached a patch which has the test for the first scenario.\n> > >\n> > > > > > E.g. 
things like:\n> > > > > >\n> > > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > > stats are still sane\n> > > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > > max_replication_slots, start up\n> > > > >\n> > > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > > manually from the pg_replslot folder?\n> > > >\n> > > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > > be lower than the number of slots-stats objects.\n> > >\n> > > I have not included the 2nd test in the patch as the test fails with\n> > > following warnings and also displays the statistics of the removed\n> > > slot:\n> > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > >\n> > > This happens because the statistics file has an additional slot\n> > > present even though the replication slot was removed. I felt this\n> > > issue should be fixed. I will try to fix this issue and send the\n> > > second test along with the fix.\n> >\n> > I felt from the statistics collector process, there is no way in which\n> > we can identify if the replication slot is present or not because the\n> > statistic collector process does not have access to shared memory.\n> > Anything that the statistic collector process does independently by\n> > traversing and removing the statistics of the replication slot\n> > exceeding the max_replication_slot has its drawback of removing some\n> > valid replication slot's statistics data.\n> > Any thoughts on how we can identify the replication slot which has been dropped?\n> > Can someone point me to the shared stats patch link with which message\n> > loss can be avoided. 
I wanted to see a scenario where something like\n> > the slot is dropped but the statistics are not updated because of an\n> > immediate shutdown or server going down abruptly can occur or not with\n> > the shared stats patch.\n> >\n>\n> I don't think it is easy to simulate a scenario where the 'drop'\n> message is dropped and I think that is why the test contains the step\n> to manually remove the slot. At this stage, you can probably provide a\n> test patch and a code-fix patch where it just drops the extra slots\n> from the stats file. That will allow us to test it with a shared\n> memory stats patch on which Andres and Horiguchi-San are working. If\n> we still continue to pursue with current approach then as Andres\n> suggested we might send additional information from\n> RestoreSlotFromDisk to keep it in sync.\n\nThanks for your comments, Attached patch has the fix for the same.\nAlso attached a couple of more patches which addresses the comments\nwhich Andres had listed i.e changing char to NameData type and also to\ndisplay the unspilled/unstreamed transaction information in the\nreplication statistics.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 1 Apr 2021 22:25:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 2, 2021 at 1:55 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > > Hi,\n> > > > >\n> > > > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > > Any chance you could write a tap test 
exercising a few of these cases?\n> > > > > >\n> > > > > > I can try to write a patch for this if nobody objects.\n> > > > >\n> > > > > Cool!\n> > > > >\n> > > >\n> > > > Attached a patch which has the test for the first scenario.\n> > > >\n> > > > > > > E.g. things like:\n> > > > > > >\n> > > > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > > > stats are still sane\n> > > > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > > > max_replication_slots, start up\n> > > > > >\n> > > > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > > > manually from the pg_replslot folder?\n> > > > >\n> > > > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > > > be lower than the number of slots-stats objects.\n> > > >\n> > > > I have not included the 2nd test in the patch as the test fails with\n> > > > following warnings and also displays the statistics of the removed\n> > > > slot:\n> > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > >\n> > > > This happens because the statistics file has an additional slot\n> > > > present even though the replication slot was removed. I felt this\n> > > > issue should be fixed. 
I will try to fix this issue and send the\n> > > > second test along with the fix.\n> > >\n> > > I felt from the statistics collector process, there is no way in which\n> > > we can identify if the replication slot is present or not because the\n> > > statistic collector process does not have access to shared memory.\n> > > Anything that the statistic collector process does independently by\n> > > traversing and removing the statistics of the replication slot\n> > > exceeding the max_replication_slot has its drawback of removing some\n> > > valid replication slot's statistics data.\n> > > Any thoughts on how we can identify the replication slot which has been dropped?\n> > > Can someone point me to the shared stats patch link with which message\n> > > loss can be avoided. I wanted to see a scenario where something like\n> > > the slot is dropped but the statistics are not updated because of an\n> > > immediate shutdown or server going down abruptly can occur or not with\n> > > the shared stats patch.\n> > >\n> >\n> > I don't think it is easy to simulate a scenario where the 'drop'\n> > message is dropped and I think that is why the test contains the step\n> > to manually remove the slot. At this stage, you can probably provide a\n> > test patch and a code-fix patch where it just drops the extra slots\n> > from the stats file. That will allow us to test it with a shared\n> > memory stats patch on which Andres and Horiguchi-San are working. 
If\n> > we still continue to pursue with current approach then as Andres\n> > suggested we might send additional information from\n> > RestoreSlotFromDisk to keep it in sync.\n>\n> Thanks for your comments, Attached patch has the fix for the same.\n> Also attached a couple of more patches which addresses the comments\n> which Andres had listed i.e changing char to NameData type and also to\n> display the unspilled/unstreamed transaction information in the\n> replication statistics.\n> Thoughts?\n\nThank you for the patches!\n\nI've looked at those patches and here are some comments on 0001, 0002,\nand 0003 patch:\n\n0001 patch:\n\n- values[0] = PointerGetDatum(cstring_to_text(s->slotname));\n+ values[0] = PointerGetDatum(cstring_to_text(s->slotname.data));\n\nWe can use NameGetDatum() instead.\n\n---\n0002 patch:\n\nThe patch uses logical replication to test replication slots\nstatistics but I think it's not necessarily necessary. It would be\nsimpler to use logical decoding. Maybe we can add TAP tests to\ncontrib/test_decoding.\n\n---\n0003 patch:\n\n void\n pgstat_report_replslot(const char *slotname, int spilltxns, int spillcount,\n- int spillbytes, int streamtxns, int\nstreamcount, int streambytes)\n+ int spillbytes, int streamtxns, int streamcount,\n+ int streambytes, int totaltxns, int totalbytes)\n {\n\nAs Andres pointed out, we should use a struct of stats updates rather\nthan adding more arguments to pgstat_report_replslot().\n\n---\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>total_bytes</structfield><type>bigint</type>\n+ </para>\n+ <para>\n+ Amount of decoded in-progress transaction data replicated to\nthe decoding\n+ output plugin while decoding changes from WAL for this slot.\nThis and other\n+ counters for this slot can be used to gauge the network I/O\nwhich occurred\n+ during logical decoding and allow tuning\n<literal>logical_decoding_work_mem</literal>.\n+ </para>\n+ </entry>\n+ </row>\n\nAs I 
mentioned in another reply, I think users should not gauge the\nnetwork I/O which occurred during logical decoding using by those\ncounters since the actual amount of network I/O is affected by table\nfiltering and row filtering discussed on another thread[1]. Also,\nsince this is total bytes I'm not sure how users can use this value to\ntune logical_decoding_work_mem. I agree to track both the total bytes\nand the total number of transactions passed to the decoding plugin but\nI think the description needs to be updated. How about the following\ndescription for example?\n\nAmount of decoded transaction data sent to the decoding output plugin\nwhile decoding changes from WAL for this slot. This and total_txn for\nthis slot can be used to gauge the total amount of data during logical\ndecoding.\n\n---\nI think we can merge 0001 and 0003 patches.\n\n[1] https://www.postgresql.org/message-id/CAHE3wggb715X%2BmK_DitLXF25B%3DjE6xyNCH4YOwM860JR7HarGQ%40mail.gmail.com\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 2 Apr 2021 12:58:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 2, 2021 at 9:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 1:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > > > Any chance you could 
write a tap test exercising a few of these cases?\n> > > > > > >\n> > > > > > > I can try to write a patch for this if nobody objects.\n> > > > > >\n> > > > > > Cool!\n> > > > > >\n> > > > >\n> > > > > Attached a patch which has the test for the first scenario.\n> > > > >\n> > > > > > > > E.g. things like:\n> > > > > > > >\n> > > > > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > > > > stats are still sane\n> > > > > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > > > > max_replication_slots, start up\n> > > > > > >\n> > > > > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > > > > manually from the pg_replslot folder?\n> > > > > >\n> > > > > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > > > > be lower than the number of slots-stats objects.\n> > > > >\n> > > > > I have not included the 2nd test in the patch as the test fails with\n> > > > > following warnings and also displays the statistics of the removed\n> > > > > slot:\n> > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > >\n> > > > > This happens because the statistics file has an additional slot\n> > > > > present even though the replication slot was removed. I felt this\n> > > > > issue should be fixed. 
I will try to fix this issue and send the\n> > > > > second test along with the fix.\n> > > >\n> > > > I felt from the statistics collector process, there is no way in which\n> > > > we can identify if the replication slot is present or not because the\n> > > > statistic collector process does not have access to shared memory.\n> > > > Anything that the statistic collector process does independently by\n> > > > traversing and removing the statistics of the replication slot\n> > > > exceeding the max_replication_slot has its drawback of removing some\n> > > > valid replication slot's statistics data.\n> > > > Any thoughts on how we can identify the replication slot which has been dropped?\n> > > > Can someone point me to the shared stats patch link with which message\n> > > > loss can be avoided. I wanted to see a scenario where something like\n> > > > the slot is dropped but the statistics are not updated because of an\n> > > > immediate shutdown or server going down abruptly can occur or not with\n> > > > the shared stats patch.\n> > > >\n> > >\n> > > I don't think it is easy to simulate a scenario where the 'drop'\n> > > message is dropped and I think that is why the test contains the step\n> > > to manually remove the slot. At this stage, you can probably provide a\n> > > test patch and a code-fix patch where it just drops the extra slots\n> > > from the stats file. That will allow us to test it with a shared\n> > > memory stats patch on which Andres and Horiguchi-San are working. 
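To make the proposed code-fix concrete, the idea of discarding stats-file entries for slots that no longer exist can be sketched as below. This is only an illustrative, self-contained model, not the actual pgstat.c code: the SlotStats struct, the function names, and the fixed NAMELEN are invented stand-ins, and the real fix must also cope with the shared-memory stats work mentioned above.

```c
#include <assert.h>
#include <string.h>

#define NAMELEN 64

/* Hypothetical stand-in for PgStat_ReplSlotStats. */
typedef struct SlotStats
{
    char        name[NAMELEN];
    long        total_txns;
    long        total_bytes;
} SlotStats;

/* Return 1 if 'name' is among the 'nlive' live slot names. */
static int
slot_exists(const char *name, const char live[][NAMELEN], int nlive)
{
    for (int i = 0; i < nlive; i++)
        if (strcmp(name, live[i]) == 0)
            return 1;
    return 0;
}

/*
 * Drop stats entries whose slot no longer exists (e.g. because the
 * 'drop' message was lost before a shutdown).  Returns the new count.
 */
static int
prune_slot_stats(SlotStats *stats, int nstats,
                 const char live[][NAMELEN], int nlive)
{
    int         keep = 0;

    for (int i = 0; i < nstats; i++)
        if (slot_exists(stats[i].name, live, nlive))
            stats[keep++] = stats[i];
    return keep;
}
```

The same shape would cover the manual-removal test case: after a restart with a slot file deleted, the leftover stats entry simply fails the existence check and is discarded.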
If\n> > > we still continue to pursue with current approach then as Andres\n> > > suggested we might send additional information from\n> > > RestoreSlotFromDisk to keep it in sync.\n> >\n> > Thanks for your comments, Attached patch has the fix for the same.\n> > Also attached a couple of more patches which addresses the comments\n> > which Andres had listed i.e changing char to NameData type and also to\n> > display the unspilled/unstreamed transaction information in the\n> > replication statistics.\n> > Thoughts?\n>\n> Thank you for the patches!\n>\n> I've looked at those patches and here are some comments on 0001, 0002,\n> and 0003 patch:\n>\n> 0001 patch:\n>\n> - values[0] = PointerGetDatum(cstring_to_text(s->slotname));\n> + values[0] = PointerGetDatum(cstring_to_text(s->slotname.data));\n>\n> We can use NameGetDatum() instead.\n>\n> ---\n> 0002 patch:\n>\n> The patch uses logical replication to test replication slots\n> statistics but I think it's necessarily necessary. It would be more\n> simple to use logical decoding. 
Maybe we can add TAP tests to\n> contrib/test_decoding.\n>\n> ---\n> 0003 patch:\n>\n> void\n> pgstat_report_replslot(const char *slotname, int spilltxns, int spillcount,\n> - int spillbytes, int streamtxns, int\n> streamcount, int streambytes)\n> + int spillbytes, int streamtxns, int streamcount,\n> + int streambytes, int totaltxns, int totalbytes)\n> {\n>\n> As Andreas pointed out, we should use a struct of stats updates rather\n> than adding more arguments to pgstat_report_replslot().\n>\n> ---\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>total_bytes</structfield><type>bigint</type>\n> + </para>\n> + <para>\n> + Amount of decoded in-progress transaction data replicated to\n> the decoding\n> + output plugin while decoding changes from WAL for this slot.\n> This and other\n> + counters for this slot can be used to gauge the network I/O\n> which occurred\n> + during logical decoding and allow tuning\n> <literal>logical_decoding_work_mem</literal>.\n> + </para>\n> + </entry>\n> + </row>\n>\n> As I mentioned in another reply, I think users should not gauge the\n> network I/O which occurred during logical decoding using by those\n> counters since the actual amount of network I/O is affected by table\n> filtering and row filtering discussed on another thread[1]. Also,\n> since this is total bytes I'm not sure how users can use this value to\n> tune logical_decoding_work_mem. I agree to track both the total bytes\n> and the total number of transactions passed to the decoding plugin but\n> I think the description needs to be updated. How about the following\n> description for example?\n>\n> Amount of decoded transaction data sent to the decoding output plugin\n> while decoding changes from WAL for this slot. 
This and total_txn for\n> this slot can be used to gauge the total amount of data during logical\n> decoding.\n>\n> ---\n> I think we can merge 0001 and 0003 patches.\n>\n\nThanks for the comments, I will fix the comments and provide a patch\nfor this soon.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 2 Apr 2021 09:57:03 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 1, 2021 at 6:19 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 9:58 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > IMO, independent of the shutdown / startup issue, it'd be worth writing\n> > a patch tracking the bytes sent independently of the slot stats storage\n> > issues. That would also make the testing for the above cheaper...\n>\n> Agreed.\n>\n> I think the bytes sent should be recorded by the decoding plugin, not\n> by the core side. Given that table filtering and row filtering,\n> tracking the bytes passed to the decoding plugin would not help gauge\n> the actual network I/O. In that sense, the description of stream_bytes\n> in the doc seems not accurate:\n>\n> ---\n> This and other streaming counters for this slot can be used to gauge\n> the network I/O which occurred during logical decoding and allow\n> tuning logical_decoding_work_mem.\n> ---\n>\n> It can surely be used to allow tuning logical_decoding_work_mem but it\n> could not be true for gauging the network I/O which occurred during\n> logical decoding.\n>\n\nAgreed. 
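The earlier suggestion to pass a struct of stats updates instead of growing pgstat_report_replslot()'s argument list can be illustrated with a minimal sketch. The struct and function names below are hypothetical, chosen only to mirror the counters discussed in this thread:

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical bundle of per-slot counter updates, in the spirit of
 * passing one struct instead of an ever-growing argument list.
 * Field names are invented; the real PostgreSQL struct differs.
 */
typedef struct ReplSlotStatsUpdate
{
    long        spill_txns;
    long        spill_bytes;
    long        stream_txns;
    long        stream_bytes;
    long        total_txns;
    long        total_bytes;
} ReplSlotStatsUpdate;

typedef struct ReplSlotStats
{
    char        name[64];
    ReplSlotStatsUpdate acc;    /* accumulated counters */
} ReplSlotStats;

/* Apply one batch of updates to the accumulated stats. */
static void
report_replslot(ReplSlotStats *slot, const ReplSlotStatsUpdate *upd)
{
    slot->acc.spill_txns += upd->spill_txns;
    slot->acc.spill_bytes += upd->spill_bytes;
    slot->acc.stream_txns += upd->stream_txns;
    slot->acc.stream_bytes += upd->stream_bytes;
    slot->acc.total_txns += upd->total_txns;
    slot->acc.total_bytes += upd->total_bytes;
}
```

With this shape, adding a counter such as total_bytes later means touching the struct and the apply function, not every call site.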
I think we can adjust the wording accordingly.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 2 Apr 2021 11:25:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 2, 2021 at 9:57 AM vignesh C <vignesh21@gmail.com> wrote:\n> Thanks for the comments, I will fix the comments and provide a patch\n> for this soon.\n\nHere are some comments:\n1) How about something like below\n+ (errmsg(\"skipping \\\"%s\\\" replication slot\nstatistics as the statistic collector process does not have enough\nstatistic slots\",\ninstead of\n+ (errmsg(\"skipping \\\"%s\\\" replication\nslot's statistic as the statistic collector process does not have\nenough statistic slots\",\n\n2) Does it mean \"pg_statistic slots\" when we say \"statistic slots\" in\nthe above warning? If yes, why can't we use \"pg_statistic slots\"\ninstead of \"statistic slots\" as with another existing message\n\"insufficient pg_statistic slots for array stats\"?\n\n3) Should we change the if condition to max_replication_slots <=\nnReplSlotStats instead of max_replication_slots == nReplSlotStats? In\nthe scenario, it is mentioned that \"one of the replication slots is\ndropped\", will this issue occur when multiple replication slots are\ndropped?\n\n4) Let's end the statement after this and start a new one, something like below\n+ * this. To avoid writing beyond the max_replication_slots\ninstead of\n+ * this, to avoid writing beyond the max_replication_slots\n\n5) How about something like below\n+ * this. 
To avoid writing beyond the max_replication_slots,\n+ * this replication slot statistics information will\nbe skipped.\n+ */\ninstead of\n+ * this, to avoid writing beyond the max_replication_slots\n+ * these replication slot statistic information will\nbe skipped.\n+ */\n\n6) Any specific reason to use a new local variable replSlotStat and\nlater memcpy into replSlotStats[nReplSlotStats]? Instead we could\ndirectly fread into &replSlotStats[nReplSlotStats] and do\nmemset(&replSlotStats[nReplSlotStats], 0,\nsizeof(PgStat_ReplSlotStats)); before the warnings. As warning\nscenarios seem to be less frequent, we could avoid doing memcpy\nalways.\n- if (fread(&replSlotStats[nReplSlotStats], 1,\nsizeof(PgStat_ReplSlotStats), fpin)\n+ if (fread(&replSlotStat, 1, sizeof(PgStat_ReplSlotStats), fpin)\n\n+ memcpy(&replSlotStats[nReplSlotStats], &replSlotStat,\nsizeof(PgStat_ReplSlotStats));\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Apr 2021 11:28:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 2, 2021 at 9:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 1:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > >\n> > > > > > Hi,\n> > > > > >\n> > > > > > On 2021-03-30 10:13:29 +0530, vignesh C wrote:\n> > > > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > > > Any chance you could write a tap test exercising a few of these cases?\n> > > > 
> > >\n> > > > > > > I can try to write a patch for this if nobody objects.\n> > > > > >\n> > > > > > Cool!\n> > > > > >\n> > > > >\n> > > > > Attached a patch which has the test for the first scenario.\n> > > > >\n> > > > > > > > E.g. things like:\n> > > > > > > >\n> > > > > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > > > > stats are still sane\n> > > > > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > > > > max_replication_slots, start up\n> > > > > > >\n> > > > > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > > > > manually from the pg_replslot folder?\n> > > > > >\n> > > > > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > > > > be lower than the number of slots-stats objects.\n> > > > >\n> > > > > I have not included the 2nd test in the patch as the test fails with\n> > > > > following warnings and also displays the statistics of the removed\n> > > > > slot:\n> > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > >\n> > > > > This happens because the statistics file has an additional slot\n> > > > > present even though the replication slot was removed. I felt this\n> > > > > issue should be fixed. 
I will try to fix this issue and send the\n> > > > > second test along with the fix.\n> > > >\n> > > > I felt from the statistics collector process, there is no way in which\n> > > > we can identify if the replication slot is present or not because the\n> > > > statistic collector process does not have access to shared memory.\n> > > > Anything that the statistic collector process does independently by\n> > > > traversing and removing the statistics of the replication slot\n> > > > exceeding the max_replication_slot has its drawback of removing some\n> > > > valid replication slot's statistics data.\n> > > > Any thoughts on how we can identify the replication slot which has been dropped?\n> > > > Can someone point me to the shared stats patch link with which message\n> > > > loss can be avoided. I wanted to see a scenario where something like\n> > > > the slot is dropped but the statistics are not updated because of an\n> > > > immediate shutdown or server going down abruptly can occur or not with\n> > > > the shared stats patch.\n> > > >\n> > >\n> > > I don't think it is easy to simulate a scenario where the 'drop'\n> > > message is dropped and I think that is why the test contains the step\n> > > to manually remove the slot. At this stage, you can probably provide a\n> > > test patch and a code-fix patch where it just drops the extra slots\n> > > from the stats file. That will allow us to test it with a shared\n> > > memory stats patch on which Andres and Horiguchi-San are working. 
If\n> > > we still continue to pursue with current approach then as Andres\n> > > suggested we might send additional information from\n> > > RestoreSlotFromDisk to keep it in sync.\n> >\n> > Thanks for your comments, Attached patch has the fix for the same.\n> > Also attached a couple of more patches which addresses the comments\n> > which Andres had listed i.e changing char to NameData type and also to\n> > display the unspilled/unstreamed transaction information in the\n> > replication statistics.\n> > Thoughts?\n>\n> Thank you for the patches!\n>\n> I've looked at those patches and here are some comments on 0001, 0002,\n> and 0003 patch:\n\nThanks for the comments.\n\n> 0001 patch:\n>\n> - values[0] = PointerGetDatum(cstring_to_text(s->slotname));\n> + values[0] = PointerGetDatum(cstring_to_text(s->slotname.data));\n>\n> We can use NameGetDatum() instead.\n\nI felt we will not be able to use NameGetDatum because this function\nwill not have access to the value throughout the loop and NameGetDatum\nmust ensure the pointed-to value has adequate lifetime.\n\n> ---\n> 0002 patch:\n>\n> The patch uses logical replication to test replication slots\n> statistics but I think it's necessarily necessary. It would be more\n> simple to use logical decoding. 
Maybe we can add TAP tests to\n> contrib/test_decoding.\n>\n\nI will try to change it to test_decoding if feasible and post in the\nnext version.\n\n> ---\n> 0003 patch:\n>\n> void\n> pgstat_report_replslot(const char *slotname, int spilltxns, int spillcount,\n> - int spillbytes, int streamtxns, int\n> streamcount, int streambytes)\n> + int spillbytes, int streamtxns, int streamcount,\n> + int streambytes, int totaltxns, int totalbytes)\n> {\n>\n> As Andreas pointed out, we should use a struct of stats updates rather\n> than adding more arguments to pgstat_report_replslot().\n>\n\nModified as suggested.\n\n> ---\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>total_bytes</structfield><type>bigint</type>\n> + </para>\n> + <para>\n> + Amount of decoded in-progress transaction data replicated to\n> the decoding\n> + output plugin while decoding changes from WAL for this slot.\n> This and other\n> + counters for this slot can be used to gauge the network I/O\n> which occurred\n> + during logical decoding and allow tuning\n> <literal>logical_decoding_work_mem</literal>.\n> + </para>\n> + </entry>\n> + </row>\n>\n> As I mentioned in another reply, I think users should not gauge the\n> network I/O which occurred during logical decoding using by those\n> counters since the actual amount of network I/O is affected by table\n> filtering and row filtering discussed on another thread[1]. Also,\n> since this is total bytes I'm not sure how users can use this value to\n> tune logical_decoding_work_mem. I agree to track both the total bytes\n> and the total number of transactions passed to the decoding plugin but\n> I think the description needs to be updated. How about the following\n> description for example?\n>\n> Amount of decoded transaction data sent to the decoding output plugin\n> while decoding changes from WAL for this slot. 
This and total_txn for\n> this slot can be used to gauge the total amount of data during logical\n> decoding.\n>\n\nModified as suggested.\n\n> ---\n> I think we can merge 0001 and 0003 patches.\n\nI have merged them.\nAttached V2 patch which has the fixes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sat, 3 Apr 2021 23:07:16 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 2, 2021 at 11:28 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 9:57 AM vignesh C <vignesh21@gmail.com> wrote:\n> > Thanks for the comments, I will fix the comments and provide a patch\n> > for this soon.\n>\n\nThanks for the comments.\n\n> Here are some comments:\n> 1) How about something like below\n> + (errmsg(\"skipping \\\"%s\\\" replication slot\n> statistics as the statistic collector process does not have enough\n> statistic slots\",\n> instead of\n> + (errmsg(\"skipping \\\"%s\\\" replication\n> slot's statistic as the statistic collector process does not have\n> enough statistic slots\",\n>\n\nModified.\n\n> 2) Does it mean \"pg_statistic slots\" when we say \"statistic slots\" in\n> the above warning? If yes, why can't we use \"pg_statistic slots\"\n> instead of \"statistic slots\" as with another existing message\n> \"insufficient pg_statistic slots for array stats\"?\n>\n\nHere pg_stat_replication_slots will not have enought slots. I changed\nit to below:\nerrmsg(\"skipping \\\"%s\\\" replication slot statistics as\npg_stat_replication_slots does not have enough slots\"\nThoughts?\n\n> 3) Should we change the if condition to max_replication_slots <=\n> nReplSlotStats instead of max_replication_slots == nReplSlotStats? 
In\n> the scenario, it is mentioned that \"one of the replication slots is\n> dropped\", will this issue occur when multiple replication slots are\n> dropped?\n>\n\nI felt it should be max_replication_slots == nReplSlotStats, if\nmax_replication_slots = 5, we will be able to store 5 replication slot\nstatistics from 0,1..4, from 5th we will not have space. I think this\nneed not be changed.\n\n> 4) Let's end the statement after this and start a new one, something like below\n> + * this. To avoid writing beyond the max_replication_slots\n> instead of\n> + * this, to avoid writing beyond the max_replication_slots\n>\n\nChanged it.\n\n> 5) How about something like below\n> + * this. To avoid writing beyond the max_replication_slots,\n> + * this replication slot statistics information will\n> be skipped.\n> + */\n> instead of\n> + * this, to avoid writing beyond the max_replication_slots\n> + * these replication slot statistic information will\n> be skipped.\n> + */\n>\n\nChanged it.\n\n> 6) Any specific reason to use a new local variable replSlotStat and\n> later memcpy into replSlotStats[nReplSlotStats]? Instead we could\n> directly fread into &replSlotStats[nReplSlotStats] and do\n> memset(&replSlotStats[nReplSlotStats], 0,\n> sizeof(PgStat_ReplSlotStats)); before the warnings. As warning\n> scenarios seem to be less frequent, we could avoid doing memcpy\n> always.\n> - if (fread(&replSlotStats[nReplSlotStats], 1,\n> sizeof(PgStat_ReplSlotStats), fpin)\n> + if (fread(&replSlotStat, 1, sizeof(PgStat_ReplSlotStats), fpin)\n>\n> + memcpy(&replSlotStats[nReplSlotStats], &replSlotStat,\n> sizeof(PgStat_ReplSlotStats));\n>\n\nI wanted to avoid the memcpy instructions multiple times, but your\nexplanation makes sense to keep the memcpy in failure path so that the\npositive flow can be faster. 
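The fread/memset point above can be sketched as a small self-contained model: read each fixed-size record straight into its destination array element, stop at the capacity limit, and clear the target entry only on the rare failure path. The record layout, MAX_SLOTS, and function name are assumptions for illustration; the real code would also emit a warning and skip the remaining entries rather than just stopping.

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAX_SLOTS 2             /* stand-in for max_replication_slots */

typedef struct SlotStats
{
    char        name[64];
    long        total_bytes;
} SlotStats;

static SlotStats stats[MAX_SLOTS];

/*
 * Restore fixed-size records directly into the destination array (no
 * intermediate copy), never writing beyond MAX_SLOTS entries, and
 * zeroing the target entry if a record cannot be read completely.
 * Returns the number of entries restored.
 */
static int
restore_slot_stats(FILE *fp)
{
    int         n = 0;

    while (n < MAX_SLOTS)
    {
        if (fread(&stats[n], sizeof(SlotStats), 1, fp) != 1)
        {
            /* failure path is rare: memset only happens here */
            memset(&stats[n], 0, sizeof(SlotStats));
            break;
        }
        n++;
    }
    return n;
}
```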
Changed it.\nThese comments are fixed in the v2 patch posted in my previous mail.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 3 Apr 2021 23:11:53 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 3, 2021 at 11:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> Here pg_stat_replication_slots will not have enought slots. I changed\n> it to below:\n> errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> pg_stat_replication_slots does not have enough slots\"\n> Thoughts?\n\nWFM.\n\n> > 3) Should we change the if condition to max_replication_slots <=\n> > nReplSlotStats instead of max_replication_slots == nReplSlotStats? In\n> > the scenario, it is mentioned that \"one of the replication slots is\n> > dropped\", will this issue occur when multiple replication slots are\n> > dropped?\n> >\n>\n> I felt it should be max_replication_slots == nReplSlotStats, if\n> max_replication_slots = 5, we will be able to store 5 replication slot\n> statistics from 0,1..4, from 5th we will not have space. I think this\n> need not be changed.\n\nI'm not sure whether we can have a situation where\nmax_replication_slots < nReplSlotStats i.e. max_replication_slots\ngetting set to lesser than nReplSlotStats. I think I didn't get the\nabove mentioned scenario i.e. max_replication_slots == nReplSlotStats\ncorrectly. 
It will be great if you could throw some light on that\nscenario and ensure that it's not possible to reach a situation where\nmax_replication_slots < nReplSlotStats.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Apr 2021 12:44:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 5, 2021 at 12:44 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Sat, Apr 3, 2021 at 11:12 PM vignesh C <vignesh21@gmail.com> wrote:\n> > Here pg_stat_replication_slots will not have enought slots. I changed\n> > it to below:\n> > errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> > pg_stat_replication_slots does not have enough slots\"\n> > Thoughts?\n>\n> WFM.\n>\n> > > 3) Should we change the if condition to max_replication_slots <=\n> > > nReplSlotStats instead of max_replication_slots == nReplSlotStats? In\n> > > the scenario, it is mentioned that \"one of the replication slots is\n> > > dropped\", will this issue occur when multiple replication slots are\n> > > dropped?\n> > >\n> >\n> > I felt it should be max_replication_slots == nReplSlotStats, if\n> > max_replication_slots = 5, we will be able to store 5 replication slot\n> > statistics from 0,1..4, from 5th we will not have space. I think this\n> > need not be changed.\n>\n> I'm not sure whether we can have a situation where\n> max_replication_slots < nReplSlotStats i.e. max_replication_slots\n> getting set to lesser than nReplSlotStats. I think I didn't get the\n> above mentioned scenario i.e. max_replication_slots == nReplSlotStats\n> correctly. 
It will be great if you could throw some light on that\n> scenario and ensure that it's not possible to reach a situation where\n> max_replication_slots < nReplSlotStats.\n\nUsually this will not happen; there is a remote chance that it will\nhappen in the below scenario:\nWhen a replication slot is created, an entry is also created in\npg_stat_replication_slots by the statistics collector; the number of\nentries cannot exceed max_replication_slots. Whenever a slot is\ndropped, the corresponding entry will be deleted from\npg_stat_replication_slots. The statistics collector uses the UDP\nprotocol for communication, hence there is no guarantee that the\nmessage is received by the statistics collector process. Suppose the\nserver is stopped right after the user has dropped the replication\nslot (before the statistics collector process has received the drop\nstatistics message), and the user then reduces max_replication_slots\nand starts the server. In this scenario the statistics collector\nprocess will have more replication slot statistics entries (as the\ndrop message was not received) than max_replication_slots.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 5 Apr 2021 19:24:26 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 3, 2021 at 11:07 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, Apr 2, 2021 at 9:29 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 2, 2021 at 1:55 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 1, 2021 at 5:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Apr 1, 2021 at 3:43 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Mar 31, 2021 at 11:32 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > >\n> > > > > > On Tue, Mar 30, 2021 at 11:00 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > >\n> > > > > > > Hi,\n> > > > > > >\n> > > > > > > On 2021-03-30 10:13:29 
+0530, vignesh C wrote:\n> > > > > > > > On Tue, Mar 30, 2021 at 6:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > > > > > > Any chance you could write a tap test exercising a few of these cases?\n> > > > > > > >\n> > > > > > > > I can try to write a patch for this if nobody objects.\n> > > > > > >\n> > > > > > > Cool!\n> > > > > > >\n> > > > > >\n> > > > > > Attached a patch which has the test for the first scenario.\n> > > > > >\n> > > > > > > > > E.g. things like:\n> > > > > > > > >\n> > > > > > > > > - create a few slots, drop one of them, shut down, start up, verify\n> > > > > > > > > stats are still sane\n> > > > > > > > > - create a few slots, shut down, manually remove a slot, lower\n> > > > > > > > > max_replication_slots, start up\n> > > > > > > >\n> > > > > > > > Here by \"manually remove a slot\", do you mean to remove the slot\n> > > > > > > > manually from the pg_replslot folder?\n> > > > > > >\n> > > > > > > Yep - thereby allowing max_replication_slots after the shutdown/start to\n> > > > > > > be lower than the number of slots-stats objects.\n> > > > > >\n> > > > > > I have not included the 2nd test in the patch as the test fails with\n> > > > > > following warnings and also displays the statistics of the removed\n> > > > > > slot:\n> > > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > > > WARNING: problem in alloc set Statistics snapshot: detected write\n> > > > > > past chunk end in block 0x55d038b8e410, chunk 0x55d038b8e438\n> > > > > >\n> > > > > > This happens because the statistics file has an additional slot\n> > > > > > present even though the replication slot was removed. I felt this\n> > > > > > issue should be fixed. 
I will try to fix this issue and send the\n> > > > > > second test along with the fix.\n> > > > >\n> > > > > I felt from the statistics collector process, there is no way in which\n> > > > > we can identify if the replication slot is present or not because the\n> > > > > statistic collector process does not have access to shared memory.\n> > > > > Anything that the statistic collector process does independently by\n> > > > > traversing and removing the statistics of the replication slot\n> > > > > exceeding the max_replication_slot has its drawback of removing some\n> > > > > valid replication slot's statistics data.\n> > > > > Any thoughts on how we can identify the replication slot which has been dropped?\n> > > > > Can someone point me to the shared stats patch link with which message\n> > > > > loss can be avoided. I wanted to see a scenario where something like\n> > > > > the slot is dropped but the statistics are not updated because of an\n> > > > > immediate shutdown or server going down abruptly can occur or not with\n> > > > > the shared stats patch.\n> > > > >\n> > > >\n> > > > I don't think it is easy to simulate a scenario where the 'drop'\n> > > > message is dropped and I think that is why the test contains the step\n> > > > to manually remove the slot. At this stage, you can probably provide a\n> > > > test patch and a code-fix patch where it just drops the extra slots\n> > > > from the stats file. That will allow us to test it with a shared\n> > > > memory stats patch on which Andres and Horiguchi-San are working. 
If\n> > > > we still continue to pursue with current approach then as Andres\n> > > > suggested we might send additional information from\n> > > > RestoreSlotFromDisk to keep it in sync.\n> > >\n> > > Thanks for your comments, Attached patch has the fix for the same.\n> > > Also attached a couple of more patches which addresses the comments\n> > > which Andres had listed i.e changing char to NameData type and also to\n> > > display the unspilled/unstreamed transaction information in the\n> > > replication statistics.\n> > > Thoughts?\n> >\n> > Thank you for the patches!\n> >\n> > I've looked at those patches and here are some comments on 0001, 0002,\n> > and 0003 patch:\n>\n> Thanks for the comments.\n>\n> > 0001 patch:\n> >\n> > - values[0] = PointerGetDatum(cstring_to_text(s->slotname));\n> > + values[0] = PointerGetDatum(cstring_to_text(s->slotname.data));\n> >\n> > We can use NameGetDatum() instead.\n>\n> I felt we will not be able to use NameGetDatum because this function\n> will not have access to the value throughout the loop and NameGetDatum\n> must ensure the pointed-to value has adequate lifetime.\n>\n> > ---\n> > 0002 patch:\n> >\n> > The patch uses logical replication to test replication slots\n> > statistics but I think it's necessarily necessary. It would be more\n> > simple to use logical decoding. 
Maybe we can add TAP tests to\n> > contrib/test_decoding.\n> >\n>\n> I will try to change it to test_decoding if feasible and post in the\n> next version.\n>\n\nI have modified the patch to include tap tests in contrib/test_decoding.\nAttached v3 patch has the changes for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 5 Apr 2021 20:51:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Mar 22, 2021 at 9:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> >\n> > - PgStat_ReplSlotStats etc use slotname[NAMEDATALEN]. Why not just NameData?\n>\n> That's because we followed other definitions in pgstat.h that use\n> char[NAMEDATALEN]. I'm okay with using NameData.\n>\n\nI see that at many places in code we use char[NAMEDATALEN] for names.\nHowever, for slotname, we use NameData, see:\ntypedef struct ReplicationSlotPersistentData\n{\n/* The slot's identifier */\nNameData name;\n\nSo, it will be better to use the same for pgstat purposes as well. 
In\nother words, I also agree with this decision and I see that Vignesh\nhas already used NameData for slot_name in his recent patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 6 Apr 2021 11:22:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 5, 2021 at 8:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nFew comments on the latest patches:\nComments on 0001\n--------------------------------\n1.\n@@ -659,6 +661,8 @@ ReorderBufferTXNByXid(ReorderBuffer *rb,\nTransactionId xid, bool create,\n dlist_push_tail(&rb->toplevel_by_lsn, &txn->node);\n AssertTXNLsnOrder(rb);\n }\n+\n+ rb->totalTxns++;\n }\n else\n txn = NULL; /* not found and not asked to create */\n@@ -3078,6 +3082,7 @@ ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,\n {\n txn->size += sz;\n rb->size += sz;\n+ rb->totalBytes += sz;\n\nI think this will include the txns that are aborted and for which we\ndon't send anything. It might be better to update these stats in\nReorderBufferProcessTXN or ReorderBufferReplay where we are sure we\nhave sent the data. We can probably use size/total_size in txn. We\nneed to be careful to not double include the totaltxn or totalBytes\nfor streaming xacts as we might process the same txn multiple times.\n\n2.\n+ Amount of decoded transactions data sent to the decoding output plugin\n+ while decoding the changes from WAL for this slot. This and total_txns\n+ for this slot can be used to gauge the total amount of data during\n+ logical decoding.\n\nI think we can slightly modify the second line here: \"This can be used\nto gauge the total amount of data sent during logical decoding.\". Why\nwe need to include total_txns along with it.\n\n0002\n----------\n3.\n+ -- we don't want to wait forever; loop will exit after 30 seconds\n+ FOR i IN 1 .. 
5 LOOP\n+\n...\n...\n+\n+ -- wait a little\n+ perform pg_sleep_for('100 milliseconds');\n\nI think this loop needs to be executed 300 times instead of 5 times,\nif the above comments and code needs to do what is expected here?\n\n\n4.\n+# Test to drop one of the subscribers and verify replication statistics data is\n+# fine after publisher is restarted.\n+$node->safe_psql('postgres', \"SELECT\npg_drop_replication_slot('regression_slot4')\");\n+\n+$node->stop;\n+$node->start;\n+\n+# Verify statistics data present in pg_stat_replication_slots are sane after\n+# publisher is restarted\n+$result = $node->safe_psql('postgres',\n+ \"SELECT slot_name, total_txns > 0 AS total_txn, total_bytes > 0 AS total_bytes\n+ FROM pg_stat_replication_slots ORDER BY slot_name\"\n\nVarious comments in the 0002 refer to publisher/subscriber which is\nnot what we are using here.\n\n5.\n+# Create table.\n+$node->safe_psql('postgres',\n+ \"CREATE TABLE test_repl_stat(col1 int)\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot1', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot2', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot4', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n\nI think we can save the above calls to pg_logical_slot_get_changes if\nwe create table before creating the slots in this test.\n\n0003\n---------\n6. In the tests/code, publisher is used at multiple places. 
I think\nthat is not required because this can happen via plugin as well.\n7.\n+ if (max_replication_slots == nReplSlotStats)\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\npg_stat_replication_slots does not have enough slots\",\n+ NameStr(replSlotStats[nReplSlotStats].slotname))));\n+ memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n\nDo we need memset here? Isn't this location is past the max location?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 6 Apr 2021 12:19:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 6, 2021 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 8:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Few comments on the latest patches:\n> Comments on 0001\n> --------------------------------\n> 1.\n> @@ -659,6 +661,8 @@ ReorderBufferTXNByXid(ReorderBuffer *rb,\n> TransactionId xid, bool create,\n> dlist_push_tail(&rb->toplevel_by_lsn, &txn->node);\n> AssertTXNLsnOrder(rb);\n> }\n> +\n> + rb->totalTxns++;\n> }\n> else\n> txn = NULL; /* not found and not asked to create */\n> @@ -3078,6 +3082,7 @@ ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,\n> {\n> txn->size += sz;\n> rb->size += sz;\n> + rb->totalBytes += sz;\n>\n> I think this will include the txns that are aborted and for which we\n> don't send anything. It might be better to update these stats in\n> ReorderBufferProcessTXN or ReorderBufferReplay where we are sure we\n> have sent the data. We can probably use size/total_size in txn. 
We\n> need to be careful to not double include the totaltxn or totalBytes\n> for streaming xacts as we might process the same txn multiple times.\n>\n> 2.\n> + Amount of decoded transactions data sent to the decoding output plugin\n> + while decoding the changes from WAL for this slot. This and total_txns\n> + for this slot can be used to gauge the total amount of data during\n> + logical decoding.\n>\n> I think we can slightly modify the second line here: \"This can be used\n> to gauge the total amount of data sent during logical decoding.\". Why\n> we need to include total_txns along with it.\n>\n> 0002\n> ----------\n> 3.\n> + -- we don't want to wait forever; loop will exit after 30 seconds\n> + FOR i IN 1 .. 5 LOOP\n> +\n> ...\n> ...\n> +\n> + -- wait a little\n> + perform pg_sleep_for('100 milliseconds');\n>\n> I think this loop needs to be executed 300 times instead of 5 times,\n> if the above comments and code needs to do what is expected here?\n>\n>\n> 4.\n> +# Test to drop one of the subscribers and verify replication statistics data is\n> +# fine after publisher is restarted.\n> +$node->safe_psql('postgres', \"SELECT\n> pg_drop_replication_slot('regression_slot4')\");\n> +\n> +$node->stop;\n> +$node->start;\n> +\n> +# Verify statistics data present in pg_stat_replication_slots are sane after\n> +# publisher is restarted\n> +$result = $node->safe_psql('postgres',\n> + \"SELECT slot_name, total_txns > 0 AS total_txn, total_bytes > 0 AS total_bytes\n> + FROM pg_stat_replication_slots ORDER BY slot_name\"\n>\n> Various comments in the 0002 refer to publisher/subscriber which is\n> not what we are using here.\n>\n> 5.\n> +# Create table.\n> +$node->safe_psql('postgres',\n> + \"CREATE TABLE test_repl_stat(col1 int)\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot1', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> 
pg_logical_slot_get_changes('regression_slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot4', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n>\n> I think we can save the above calls to pg_logical_slot_get_changes if\n> we create table before creating the slots in this test.\n>\n> 0003\n> ---------\n> 6. In the tests/code, publisher is used at multiple places. I think\n> that is not required because this can happen via plugin as well.\n> 7.\n> + if (max_replication_slots == nReplSlotStats)\n> + {\n> + ereport(pgStatRunningInCollector ? LOG : WARNING,\n> + (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> pg_stat_replication_slots does not have enough slots\",\n> + NameStr(replSlotStats[nReplSlotStats].slotname))));\n> + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n>\n> Do we need memset here? 
Isn't this location is past the max location?\n\nThanks for the comments, I will fix and post a patch for this soon.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 6 Apr 2021 17:28:25 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 6, 2021 at 12:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 5, 2021 at 8:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Few comments on the latest patches:\n> Comments on 0001\n> --------------------------------\n> 1.\n> @@ -659,6 +661,8 @@ ReorderBufferTXNByXid(ReorderBuffer *rb,\n> TransactionId xid, bool create,\n> dlist_push_tail(&rb->toplevel_by_lsn, &txn->node);\n> AssertTXNLsnOrder(rb);\n> }\n> +\n> + rb->totalTxns++;\n> }\n> else\n> txn = NULL; /* not found and not asked to create */\n> @@ -3078,6 +3082,7 @@ ReorderBufferChangeMemoryUpdate(ReorderBuffer *rb,\n> {\n> txn->size += sz;\n> rb->size += sz;\n> + rb->totalBytes += sz;\n>\n> I think this will include the txns that are aborted and for which we\n> don't send anything. It might be better to update these stats in\n> ReorderBufferProcessTXN or ReorderBufferReplay where we are sure we\n> have sent the data. We can probably use size/total_size in txn. We\n> need to be careful to not double include the totaltxn or totalBytes\n> for streaming xacts as we might process the same txn multiple times.\n\nModified it to update total_byte for spilled transactions and streamed\ntransactions where spill_bytes and stream_bytes are updated. For\nnon-stream/spilled transactions, total_bytes is updated in\nReorderBufferProcessTXN.\n\n> 2.\n> + Amount of decoded transactions data sent to the decoding output plugin\n> + while decoding the changes from WAL for this slot. 
This and total_txns\n> + for this slot can be used to gauge the total amount of data during\n> + logical decoding.\n>\n> I think we can slightly modify the second line here: \"This can be used\n> to gauge the total amount of data sent during logical decoding.\". Why\n> we need to include total_txns along with it.\n\nModified it.\n\n> 0002\n> ----------\n> 3.\n> + -- we don't want to wait forever; loop will exit after 30 seconds\n> + FOR i IN 1 .. 5 LOOP\n> +\n> ...\n> ...\n> +\n> + -- wait a little\n> + perform pg_sleep_for('100 milliseconds');\n>\n> I think this loop needs to be executed 300 times instead of 5 times,\n> if the above comments and code needs to do what is expected here?\n>\n\nModified it.\n\n> 4.\n> +# Test to drop one of the subscribers and verify replication statistics data is\n> +# fine after publisher is restarted.\n> +$node->safe_psql('postgres', \"SELECT\n> pg_drop_replication_slot('regression_slot4')\");\n> +\n> +$node->stop;\n> +$node->start;\n> +\n> +# Verify statistics data present in pg_stat_replication_slots are sane after\n> +# publisher is restarted\n> +$result = $node->safe_psql('postgres',\n> + \"SELECT slot_name, total_txns > 0 AS total_txn, total_bytes > 0 AS total_bytes\n> + FROM pg_stat_replication_slots ORDER BY slot_name\"\n>\n> Various comments in the 0002 refer to publisher/subscriber which is\n> not what we are using here.\n\nRemoved references to publisher/subscriber.\n\n> 5.\n> +# Create table.\n> +$node->safe_psql('postgres',\n> + \"CREATE TABLE test_repl_stat(col1 int)\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot1', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot3', 
NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot4', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n>\n> I think we can save the above calls to pg_logical_slot_get_changes if\n> we create table before creating the slots in this test.\n>\n\nModified it.\n\n> 0003\n> ---------\n> 6. In the tests/code, publisher is used at multiple places. I think\n> that is not required because this can happen via plugin as well.\n\nRemoved references to publisher.\n\n> 7.\n> + if (max_replication_slots == nReplSlotStats)\n> + {\n> + ereport(pgStatRunningInCollector ? LOG : WARNING,\n> + (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> pg_stat_replication_slots does not have enough slots\",\n> + NameStr(replSlotStats[nReplSlotStats].slotname))));\n> + memset(&replSlotStats[nReplSlotStats], 0, sizeof(PgStat_ReplSlotStats));\n>\n> Do we need memset here? Isn't this location is past the max location?\n\nThat is not required, I have modified it.\nAttached v4 patch has the fixes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 7 Apr 2021 14:50:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 7, 2021 at 2:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\n@@ -4069,6 +4069,24 @@ pgstat_read_statsfiles(Oid onlydb, bool\npermanent, bool deep)\n * slot follows.\n */\n case 'R':\n+ /*\n+ * There is a remote scenario where one of the replication slots\n+ * is dropped and the drop slot statistics message is not\n+ * received by the statistic collector process, now if the\n+ * max_replication_slots is reduced to the actual number of\n+ * replication slots that are in use and the server is\n+ * re-started then the statistics process will not be aware of\n+ * this. 
To avoid writing beyond the max_replication_slots\n+ * this replication slot statistic information will be skipped.\n+ */\n+ if (max_replication_slots == nReplSlotStats)\n+ {\n+ ereport(pgStatRunningInCollector ? LOG : WARNING,\n+ (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\npg_stat_replication_slots does not have enough slots\",\n+ NameStr(replSlotStats[nReplSlotStats].slotname))));\n+ goto done;\n+ }\n\nI think we might truncate some valid slots here. I have another idea\nto fix this case which is that while writing, we first write the\n'nReplSlotStats' and then write each slot info. Then while reading we\ncan allocate memory based on the required number of slots. Later when\nstartup process sends the slots, we can remove the already dropped\nslots from this array. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 8 Apr 2021 16:00:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 8, 2021 at 4:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 2:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> @@ -4069,6 +4069,24 @@ pgstat_read_statsfiles(Oid onlydb, bool\n> permanent, bool deep)\n> * slot follows.\n> */\n> case 'R':\n> + /*\n> + * There is a remote scenario where one of the replication slots\n> + * is dropped and the drop slot statistics message is not\n> + * received by the statistic collector process, now if the\n> + * max_replication_slots is reduced to the actual number of\n> + * replication slots that are in use and the server is\n> + * re-started then the statistics process will not be aware of\n> + * this. To avoid writing beyond the max_replication_slots\n> + * this replication slot statistic information will be skipped.\n> + */\n> + if (max_replication_slots == nReplSlotStats)\n> + {\n> + ereport(pgStatRunningInCollector ? 
LOG : WARNING,\n> + (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> pg_stat_replication_slots does not have enough slots\",\n> + NameStr(replSlotStats[nReplSlotStats].slotname))));\n> + goto done;\n> + }\n>\n> I think we might truncate some valid slots here. I have another idea\n> to fix this case which is that while writing, we first write the\n> 'nReplSlotStats' and then write each slot info. Then while reading we\n> can allocate memory based on the required number of slots. Later when\n> startup process sends the slots, we can remove the already dropped\n> slots from this array. What do you think?\n\nI felt this idea is better, the reason being in the earlier idea we\nmight end up deleting some valid replication slot statistics and that\nslot's statistics will never be available to the user.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 8 Apr 2021 16:08:14 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 7, 2021 at 2:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> That is not required, I have modified it.\n> Attached v4 patch has the fixes for the same.\n>\n\nFew comments:\n\n0001\n------\n1. The first patch includes changing char datatype to NameData\ndatatype for slotname. I feel this can be a separate patch from adding\nnew stats in the view. 
I think we can also move the change related to\nmoving stats to a structure rather than sending them individually in\nthe same patch.\n\n2.\n@@ -2051,6 +2054,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn,\n rb->begin(rb, txn);\n }\n\n+ /*\n+ * Update total transaction count and total transaction bytes, if\n+ * transaction is streamed or spilled it will be updated while the\n+ * transaction gets spilled or streamed.\n+ */\n+ if (!rb->streamBytes && !rb->spillBytes)\n+ {\n+ rb->totalTxns++;\n+ rb->totalBytes += rb->size;\n+ }\n\nI think this will skip a transaction if it is interleaved between a\nstreaming transaction. Assume, two transactions t1 and t2. t1 sends\nchanges in multiple streams and t2 sends all changes in one go at\ncommit time. So, now, if t2 is interleaved between multiple streams\nthen I think the above won't count t2.\n\n3.\n@@ -3524,9 +3538,11 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn)\n {\n rb->spillCount += 1;\n rb->spillBytes += size;\n+ rb->totalBytes += size;\n\n /* don't consider already serialized transactions */\n rb->spillTxns += (rbtxn_is_serialized(txn) ||\nrbtxn_is_serialized_clear(txn)) ? 0 : 1;\n+ rb->totalTxns += (rbtxn_is_serialized(txn) ||\nrbtxn_is_serialized_clear(txn)) ? 0 : 1;\n }\n\nWe do serialize each subtransaction separately. So totalTxns will\ninclude subtransaction count as well when serialized, otherwise not.\nThe description of totalTxns also says that it doesn't include\nsubtransactions. 
So, I think updating rb->totalTxns here is wrong.\n\n0002\n-----\n1.\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM pg_logical_slot_get_changes('regression_slot2',\nNULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n\nThe indentation of the second SELECT seems to bit off.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 9 Apr 2021 16:13:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 8, 2021 at 7:30 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 2:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> @@ -4069,6 +4069,24 @@ pgstat_read_statsfiles(Oid onlydb, bool\n> permanent, bool deep)\n> * slot follows.\n> */\n> case 'R':\n> + /*\n> + * There is a remote scenario where one of the replication slots\n> + * is dropped and the drop slot statistics message is not\n> + * received by the statistic collector process, now if the\n> + * max_replication_slots is reduced to the actual number of\n> + * replication slots that are in use and the server is\n> + * re-started then the statistics process will not be aware of\n> + * this. To avoid writing beyond the max_replication_slots\n> + * this replication slot statistic information will be skipped.\n> + */\n> + if (max_replication_slots == nReplSlotStats)\n> + {\n> + ereport(pgStatRunningInCollector ? LOG : WARNING,\n> + (errmsg(\"skipping \\\"%s\\\" replication slot statistics as\n> pg_stat_replication_slots does not have enough slots\",\n> + NameStr(replSlotStats[nReplSlotStats].slotname))));\n> + goto done;\n> + }\n>\n> I think we might truncate some valid slots here. 
I have another idea\n> to fix this case which is that while writing, we first write the\n> 'nReplSlotStats' and then write each slot info. Then while reading we\n> can allocate memory based on the required number of slots. Later when\n> startup process sends the slots, we can remove the already dropped\n> slots from this array. What do you think?\n\nIIUC there are two problems in the case where the drop message is lost:\n\n1. Writing beyond the end of replSlotStats.\nThis can happen if after restarting the number of slots whose stats\nare stored in the stats file exceeds max_replication_slots. Vignesh's\npatch addresses this problem.\n\n2. The stats for the new slot are not recorded.\nIf the stats for already-dropped slots remain in replSlotStats, the\nstats for the new slot cannot be registered due to the full of\nreplSlotStats. This can happen even when after restarting the number\nof slots whose stats are stored in the stat file does NOT exceed\nmax_replication_slots as well as even during the server running. The\npatch doesn’t address this problem. (If this happens, we will have to\nreset all slot stats since pg_stat_reset_replication_slot() cannot\nremove the slot stats with the non-existing name).\n\nI think we can use HTAB to store slot stats and have\npg_stat_get_replication_slot() inquire about stats by the slot name,\nresolving both problems. By using HTAB we're no longer concerned about\nthe problem of writing stats beyond the end of the replSlotStats\narray. Instead, we have to consider how and when to clean up the stats\nfor already-dropped slots. We can have the startup process send slot\nnames at startup time, which borrows the idea proposed by Amit. But\nmaybe we need to consider the case again where the message from the\nstartup process is lost? Another idea would be to have\npgstat_vacuum_stat() check the existing slots and call\npgstat_report_replslot_drop() if the slot in the stats file doesn't\nexist. 
That way, we can continuously check the stats for\nalready-dropped slots.\n\nI've written a PoC patch for the above idea; using HTAB and cleaning\nup slot stats at pgstat_vacuum_stat(). The patch can be applied on top\nof 0001 patch Vignesh proposed before[1].\n\nPlease note that this cannot resolve the problem of ending up\naccumulating the stats to the old slot if the slot is re-created with\nthe same name and the drop message is lost. To deal with this problem\nI think we would need to use something unique identifier for each slot\ninstead of slot name.\n\n[1] https://www.postgresql.org/message-id/CALDaNm195xL1bZq4VHKt%3D-wmXJ5kC4jxKh7LXK%2BpN7ESFjHO%2Bw%40mail.gmail.com\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Sat, 10 Apr 2021 10:54:02 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 9, 2021 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> 2.\n> @@ -2051,6 +2054,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn,\n> rb->begin(rb, txn);\n> }\n>\n> + /*\n> + * Update total transaction count and total transaction bytes, if\n> + * transaction is streamed or spilled it will be updated while the\n> + * transaction gets spilled or streamed.\n> + */\n> + if (!rb->streamBytes && !rb->spillBytes)\n> + {\n> + rb->totalTxns++;\n> + rb->totalBytes += rb->size;\n> + }\n>\n> I think this will skip a transaction if it is interleaved between a\n> streaming transaction. Assume, two transactions t1 and t2. t1 sends\n> changes in multiple streams and t2 sends all changes in one go at\n> commit time. 
So, now, if t2 is interleaved between multiple streams\n> then I think the above won't count t2.\n>\n> 3.\n> @@ -3524,9 +3538,11 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> {\n> rb->spillCount += 1;\n> rb->spillBytes += size;\n> + rb->totalBytes += size;\n>\n> /* don't consider already serialized transactions */\n> rb->spillTxns += (rbtxn_is_serialized(txn) ||\n> rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> + rb->totalTxns += (rbtxn_is_serialized(txn) ||\n> rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> }\n>\n> We do serialize each subtransaction separately. So totalTxns will\n> include subtransaction count as well when serialized, otherwise not.\n> The description of totalTxns also says that it doesn't include\n> subtransactions. So, I think updating rb->totalTxns here is wrong.\n>\n\nThe attached patch should fix the above two comments. I think it\nshould be sufficient if we just update the stats after processing the\nTXN. We need to ensure that don't count streamed transactions multiple\ntimes. 
I have not tested the attached, can you please review/test it\nand include it in the next set of patches if you agree with this\nchange.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 10 Apr 2021 09:50:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 9:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 9, 2021 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 2.\n> > @@ -2051,6 +2054,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\n> > ReorderBufferTXN *txn,\n> > rb->begin(rb, txn);\n> > }\n> >\n> > + /*\n> > + * Update total transaction count and total transaction bytes, if\n> > + * transaction is streamed or spilled it will be updated while the\n> > + * transaction gets spilled or streamed.\n> > + */\n> > + if (!rb->streamBytes && !rb->spillBytes)\n> > + {\n> > + rb->totalTxns++;\n> > + rb->totalBytes += rb->size;\n> > + }\n> >\n> > I think this will skip a transaction if it is interleaved between a\n> > streaming transaction. Assume, two transactions t1 and t2. t1 sends\n> > changes in multiple streams and t2 sends all changes in one go at\n> > commit time. So, now, if t2 is interleaved between multiple streams\n> > then I think the above won't count t2.\n> >\n> > 3.\n> > @@ -3524,9 +3538,11 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> > ReorderBufferTXN *txn)\n> > {\n> > rb->spillCount += 1;\n> > rb->spillBytes += size;\n> > + rb->totalBytes += size;\n> >\n> > /* don't consider already serialized transactions */\n> > rb->spillTxns += (rbtxn_is_serialized(txn) ||\n> > rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> > + rb->totalTxns += (rbtxn_is_serialized(txn) ||\n> > rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> > }\n> >\n> > We do serialize each subtransaction separately. 
So totalTxns will\n> > include subtransaction count as well when serialized, otherwise not.\n> > The description of totalTxns also says that it doesn't include\n> > subtransactions. So, I think updating rb->totalTxns here is wrong.\n> >\n>\n> The attached patch should fix the above two comments. I think it\n> should be sufficient if we just update the stats after processing the\n> TXN. We need to ensure that don't count streamed transactions multiple\n> times. I have not tested the attached, can you please review/test it\n> and include it in the next set of patches if you agree with this\n> change.\n>\n\noops, forgot to attach. Attaching now.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Sat, 10 Apr 2021 09:51:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 9:50 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 9, 2021 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > 2.\n> > @@ -2051,6 +2054,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\n> > ReorderBufferTXN *txn,\n> > rb->begin(rb, txn);\n> > }\n> >\n> > + /*\n> > + * Update total transaction count and total transaction bytes, if\n> > + * transaction is streamed or spilled it will be updated while the\n> > + * transaction gets spilled or streamed.\n> > + */\n> > + if (!rb->streamBytes && !rb->spillBytes)\n> > + {\n> > + rb->totalTxns++;\n> > + rb->totalBytes += rb->size;\n> > + }\n> >\n> > I think this will skip a transaction if it is interleaved between a\n> > streaming transaction. Assume, two transactions t1 and t2. t1 sends\n> > changes in multiple streams and t2 sends all changes in one go at\n> > commit time. 
So, now, if t2 is interleaved between multiple streams\n> > then I think the above won't count t2.\n> >\n> > 3.\n> > @@ -3524,9 +3538,11 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> > ReorderBufferTXN *txn)\n> > {\n> > rb->spillCount += 1;\n> > rb->spillBytes += size;\n> > + rb->totalBytes += size;\n> >\n> > /* don't consider already serialized transactions */\n> > rb->spillTxns += (rbtxn_is_serialized(txn) ||\n> > rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> > + rb->totalTxns += (rbtxn_is_serialized(txn) ||\n> > rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> > }\n> >\n> > We do serialize each subtransaction separately. So totalTxns will\n> > include subtransaction count as well when serialized, otherwise not.\n> > The description of totalTxns also says that it doesn't include\n> > subtransactions. So, I think updating rb->totalTxns here is wrong.\n> >\n>\n> The attached patch should fix the above two comments. I think it\n> should be sufficient if we just update the stats after processing the\n> TXN. We need to ensure that don't count streamed transactions multiple\n> times. I have not tested the attached, can you please review/test it\n> and include it in the next set of patches if you agree with this\n> change.\n\nThanks Amit for your Patch. I have merged your changes into my\npatchset. I did not find any issues in my testing.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Sat, 10 Apr 2021 13:06:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 7:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> IIUC there are two problems in the case where the drop message is lost:\n>\n> 1. Writing beyond the end of replSlotStats.\n> This can happen if after restarting the number of slots whose stats\n> are stored in the stats file exceeds max_replication_slots. Vignesh's\n> patch addresses this problem.\n>\n> 2. 
The stats for the new slot are not recorded.\n> If the stats for already-dropped slots remain in replSlotStats, the\n> stats for the new slot cannot be registered due to the full of\n> replSlotStats. This can happen even when after restarting the number\n> of slots whose stats are stored in the stat file does NOT exceed\n> max_replication_slots as well as even during the server running. The\n> patch doesn’t address this problem. (If this happens, we will have to\n> reset all slot stats since pg_stat_reset_replication_slot() cannot\n> remove the slot stats with the non-existing name).\n>\n> I think we can use HTAB to store slot stats and have\n> pg_stat_get_replication_slot() inquire about stats by the slot name,\n> resolving both problems. By using HTAB we're no longer concerned about\n> the problem of writing stats beyond the end of the replSlotStats\n> array. Instead, we have to consider how and when to clean up the stats\n> for already-dropped slots. We can have the startup process send slot\n> names at startup time, which borrows the idea proposed by Amit. But\n> maybe we need to consider the case again where the message from the\n> startup process is lost? Another idea would be to have\n> pgstat_vacuum_stat() check the existing slots and call\n> pgstat_report_replslot_drop() if the slot in the stats file doesn't\n> exist. That way, we can continuously check the stats for\n> already-dropped slots.\n>\n\nAgreed, I think checking periodically via pgstat_vacuum_stat is a\nbetter idea then sending once at start up time. I also think using\nslot_name is better than using 'idx' (index in\nReplicationSlotCtl->replication_slots) in this scheme because even\nafter startup 'idx' changes we will be able to drop the dead slot.\n\n> I've written a PoC patch for the above idea; using HTAB and cleaning\n> up slot stats at pgstat_vacuum_stat(). 
The patch can be applied on top\n> of 0001 patch Vignesh proposed before[1].\n>\n\nIt seems Vignesh has changed patches based on the latest set of\ncomments so you might want to rebase.\n\n> Please note that this cannot resolve the problem of ending up\n> accumulating the stats to the old slot if the slot is re-created with\n> the same name and the drop message is lost. To deal with this problem\n> I think we would need to use something unique identifier for each slot\n> instead of slot name.\n>\n\nRight, we can probably write it in comments and or docs about this\ncaveat and the user can probably use pg_stat_reset_replication_slot\nfor such slots.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 10 Apr 2021 18:23:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 1:06 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> Thanks Amit for your Patch. I have merged your changes into my\n> patchset. I did not find any issues in my testing.\n> Thoughts?\n>\n\n0001\n------\n PgStat_Counter m_stream_bytes;\n+ PgStat_Counter m_total_txns;\n+ PgStat_Counter m_total_bytes;\n } PgStat_MsgReplSlot;\n\n..\n..\n\n+ PgStat_Counter total_txns;\n+ PgStat_Counter total_bytes;\n TimestampTz stat_reset_timestamp;\n } PgStat_ReplSlotStats;\n\nDoesn't this change belong to the second patch?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 10 Apr 2021 18:24:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 6:24 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 10, 2021 at 1:06 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Thanks Amit for your Patch. I have merged your changes into my\n> > patchset. 
I did not find any issues in my testing.\n> > Thoughts?\n> >\n>\n> 0001\n> ------\n> PgStat_Counter m_stream_bytes;\n> + PgStat_Counter m_total_txns;\n> + PgStat_Counter m_total_bytes;\n> } PgStat_MsgReplSlot;\n>\n> ..\n> ..\n>\n> + PgStat_Counter total_txns;\n> + PgStat_Counter total_bytes;\n> TimestampTz stat_reset_timestamp;\n> } PgStat_ReplSlotStats;\n>\n> Doesn't this change belong to the second patch?\n\nMissed it while splitting the patches, it is fixed in the attached patch,\n\nRegards,\nVignesh", "msg_date": "Sat, 10 Apr 2021 18:51:32 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Thanks for the comments.\n\nOn Fri, Apr 9, 2021 at 4:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 2:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > That is not required, I have modified it.\n> > Attached v4 patch has the fixes for the same.\n> >\n>\n> Few comments:\n>\n> 0001\n> ------\n> 1. The first patch includes changing char datatype to NameData\n> datatype for slotname. I feel this can be a separate patch from adding\n> new stats in the view. I think we can also move the change related to\n> moving stats to a structure rather than sending them individually in\n> the same patch.\n\nI have split the patch as suggested.\n\n> 2.\n> @@ -2051,6 +2054,17 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn,\n> rb->begin(rb, txn);\n> }\n>\n> + /*\n> + * Update total transaction count and total transaction bytes, if\n> + * transaction is streamed or spilled it will be updated while the\n> + * transaction gets spilled or streamed.\n> + */\n> + if (!rb->streamBytes && !rb->spillBytes)\n> + {\n> + rb->totalTxns++;\n> + rb->totalBytes += rb->size;\n> + }\n>\n> I think this will skip a transaction if it is interleaved between a\n> streaming transaction. Assume, two transactions t1 and t2. 
t1 sends\n> changes in multiple streams and t2 sends all changes in one go at\n> commit time. So, now, if t2 is interleaved between multiple streams\n> then I think the above won't count t2.\n>\n\nModified it.\n\n> 3.\n> @@ -3524,9 +3538,11 @@ ReorderBufferSerializeTXN(ReorderBuffer *rb,\n> ReorderBufferTXN *txn)\n> {\n> rb->spillCount += 1;\n> rb->spillBytes += size;\n> + rb->totalBytes += size;\n>\n> /* don't consider already serialized transactions */\n> rb->spillTxns += (rbtxn_is_serialized(txn) ||\n> rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> + rb->totalTxns += (rbtxn_is_serialized(txn) ||\n> rbtxn_is_serialized_clear(txn)) ? 0 : 1;\n> }\n>\n> We do serialize each subtransaction separately. So totalTxns will\n> include subtransaction count as well when serialized, otherwise not.\n> The description of totalTxns also says that it doesn't include\n> subtransactions. So, I think updating rb->totalTxns here is wrong.\n>\n\nModified it.\n\n> 0002\n> -----\n> 1.\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM pg_logical_slot_get_changes('regression_slot2',\n> NULL, NULL, 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> +        \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n>\n> The indentation of the second SELECT seems to bit off.\n\nModified it.\nThese comments are fixed in the patch available at [1].\n\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm1A%3DbjSrQjBNwNsOtTig%2B6pZpunmAj_P7Au0H0XjtvCyA%40mail.gmail.com\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Apr 2021 09:11:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 10, 2021 at 7:24 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > IIUC there are two problems in the case where the drop message is lost:\n> >\n> > 1. Writing beyond the end of replSlotStats.\n> > This can happen if after restarting the number of slots whose stats\n> > are stored in the stats file exceeds max_replication_slots. Vignesh's\n> > patch addresses this problem.\n> >\n> > 2. The stats for the new slot are not recorded.\n> > If the stats for already-dropped slots remain in replSlotStats, the\n> > stats for the new slot cannot be registered due to the full of\n> > replSlotStats. This can happen even when after restarting the number\n> > of slots whose stats are stored in the stat file does NOT exceed\n> > max_replication_slots as well as even during the server running. The\n> > patch doesn’t address this problem. 
(If this happens, we will have to\n> > reset all slot stats since pg_stat_reset_replication_slot() cannot\n> > remove the slot stats with the non-existing name).\n> >\n> > I think we can use HTAB to store slot stats and have\n> > pg_stat_get_replication_slot() inquire about stats by the slot name,\n> > resolving both problems. By using HTAB we're no longer concerned about\n> > the problem of writing stats beyond the end of the replSlotStats\n> > array. Instead, we have to consider how and when to clean up the stats\n> > for already-dropped slots. We can have the startup process send slot\n> > names at startup time, which borrows the idea proposed by Amit. But\n> > maybe we need to consider the case again where the message from the\n> > startup process is lost? Another idea would be to have\n> > pgstat_vacuum_stat() check the existing slots and call\n> > pgstat_report_replslot_drop() if the slot in the stats file doesn't\n> > exist. That way, we can continuously check the stats for\n> > already-dropped slots.\n> >\n\nThanks for your comments.\n\n>\n> Agreed, I think checking periodically via pgstat_vacuum_stat is a\n> better idea then sending once at start up time. I also think using\n> slot_name is better than using 'idx' (index in\n> ReplicationSlotCtl->replication_slots) in this scheme because even\n> after startup 'idx' changes we will be able to drop the dead slot.\n>\n> > I've written a PoC patch for the above idea; using HTAB and cleaning\n> > up slot stats at pgstat_vacuum_stat(). The patch can be applied on top\n> > of 0001 patch Vignesh proposed before[1].\n> >\n>\n> It seems Vignesh has changed patches based on the latest set of\n> comments so you might want to rebase.\n\nI've merged my patch into the v6 patch set Vignesh submitted.\n\nI've attached the updated version of the patches. I didn't change\nanything in the patch that changes char[NAMEDATALEN] to NameData (0001\npatch) and patches that add tests. 
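The name-keyed stats table with a periodic cleanup pass, as discussed above, can be sketched as a small standalone model. This is not PostgreSQL code — the real patch would use dynahash (HTAB) and the pgstat machinery, and every name below is invented for illustration — but it shows why a lost "drop" message stops being fatal once lookup is by slot name and a vacuum-style pass prunes entries whose slot no longer exists:

```c
#include <assert.h>
#include <string.h>

/* Toy model: per-slot statistics kept in a table keyed by slot name.
 * A periodic "vacuum" pass drops entries whose slot no longer exists,
 * so a lost drop message cannot pin a stale entry forever. */

#define MAX_SLOT_STATS 8
#define SLOT_NAME_LEN 64

typedef struct SlotStatsEntry
{
    char        name[SLOT_NAME_LEN];
    long        total_bytes;
    int         in_use;
} SlotStatsEntry;

static SlotStatsEntry stats_table[MAX_SLOT_STATS];

/* Look up an entry by slot name; create it on first use. */
static SlotStatsEntry *
slotstats_get(const char *name)
{
    int         free_idx = -1;

    for (int i = 0; i < MAX_SLOT_STATS; i++)
    {
        if (stats_table[i].in_use &&
            strcmp(stats_table[i].name, name) == 0)
            return &stats_table[i];
        if (!stats_table[i].in_use && free_idx < 0)
            free_idx = i;
    }
    if (free_idx < 0)
        return NULL;            /* table full */
    stats_table[free_idx].in_use = 1;
    strncpy(stats_table[free_idx].name, name, SLOT_NAME_LEN - 1);
    stats_table[free_idx].total_bytes = 0;
    return &stats_table[free_idx];
}

/* The pgstat_vacuum_stat()-style cleanup discussed in the thread:
 * drop stats for any slot name not present in the live set. */
static void
slotstats_vacuum(const char **live, int nlive)
{
    for (int i = 0; i < MAX_SLOT_STATS; i++)
    {
        int         found = 0;

        if (!stats_table[i].in_use)
            continue;
        for (int j = 0; j < nlive; j++)
            if (strcmp(stats_table[i].name, live[j]) == 0)
                found = 1;
        if (!found)
            stats_table[i].in_use = 0;  /* prune the stale entry */
    }
}
```

In this model, even if the drop notification for a slot is lost, the next vacuum pass frees its entry, so a newly created slot can always register its stats.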
In 0003 patch I reordered the\noutput parameters of pg_stat_replication_slots; showing total number\nof transactions and total bytes followed by statistics for spilled and\nstreamed transactions seems appropriate to me. Since my patch resolved\nthe issue of writing stats beyond the end of the array, I've removed\nthe patch that writes the number of stats into the stats file\n(v6-0004-Handle-overwriting-of-replication-slot-statistic-.patch).\n\nApart from the above updates, the\ncontrib/test_decoding/001_repl_stats.pl add wait_for_decode_stats()\nfunction during testing but I think we can use poll_query_until()\ninstead. Also, I think we can merge 0004 and 0005 patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 12 Apr 2021 13:56:38 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > It seems Vignesh has changed patches based on the latest set of\n> > comments so you might want to rebase.\n>\n> I've merged my patch into the v6 patch set Vignesh submitted.\n>\n> I've attached the updated version of the patches. I didn't change\n> anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> patch) and patches that add tests.\n>\n\nI think we can push 0001. What do you think?\n\n> In 0003 patch I reordered the\n> output parameters of pg_stat_replication_slots; showing total number\n> of transactions and total bytes followed by statistics for spilled and\n> streamed transactions seems appropriate to me.\n>\n\nI am not sure about this because I think we might want to add some\ninfo of stream/spill bytes in total_bytes description (something like\nstream/spill bytes are not in addition to total_bytes). 
So probably\nkeeping these new counters at the end makes more sense to me.\n\n> Since my patch resolved\n> the issue of writing stats beyond the end of the array, I've removed\n> the patch that writes the number of stats into the stats file\n> (v6-0004-Handle-overwriting-of-replication-slot-statistic-.patch).\n>\n\nOkay, but I think it might be better to keep 0001, 0002, 0003 as\nVignesh had because those are agreed upon changes and are\nstraightforward. We can push those and then further review HTAB\nimplementation and also see if Andres has any suggestions on the same.\n\n> Apart from the above updates, the\n> contrib/test_decoding/001_repl_stats.pl add wait_for_decode_stats()\n> function during testing but I think we can use poll_query_until()\n> instead.\n\n+1. Can you please change it in the next version?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Apr 2021 14:49:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Mar 20, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > And then more generally about the feature:\n> > - If a slot was used to stream out a large amount of changes (say an\n> > initial data load), but then replication is interrupted before the\n> > transaction is committed/aborted, stream_bytes will not reflect the\n> > many gigabytes of data we may have sent.\n> >\n>\n> We can probably update the stats each time we spilled or streamed the\n> transaction data but it was not clear at that stage whether or how\n> much it will be useful.\n>\n\nI felt we can update the replication slot statistics data each time we\nspill/stream the transaction data instead of accumulating the\nstatistics and updating at the end. 
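The behavioural difference can be sketched with a toy model (invented names, not the actual reorderbuffer code): if the counters are pushed to the stats collector at each spill/stream rather than only when decoding finishes, an interrupted decode still leaves the already-spilled bytes visible.

```c
#include <assert.h>

/* Toy model of updating slot statistics at each spill/stream instead of
 * accumulating them and reporting only at the end of decoding. */

typedef struct ToyReorderBuffer
{
    long        pending_bytes;  /* decoded but not yet reported */
    long        reported_bytes; /* what the stats collector has seen */
} ToyReorderBuffer;

static void
toy_report_stats(ToyReorderBuffer *rb)
{
    rb->reported_bytes += rb->pending_bytes;
    rb->pending_bytes = 0;
}

/* Called when a transaction is spilled to disk or streamed:
 * report immediately, which is the point of the change above. */
static void
toy_spill_or_stream(ToyReorderBuffer *rb, long nbytes)
{
    rb->pending_bytes += nbytes;
    toy_report_stats(rb);
}

/* Called at commit for a transaction that fit in memory. */
static void
toy_commit(ToyReorderBuffer *rb, long nbytes)
{
    rb->pending_bytes += nbytes;
    toy_report_stats(rb);
}
```

With the old accumulate-then-report scheme, a decode interrupted after `toy_spill_or_stream()` would have reported nothing.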
I have tried this in the attached\npatch and the statistics data were getting updated.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Apr 2021 14:57:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > It seems Vignesh has changed patches based on the latest set of\n> > > comments so you might want to rebase.\n> >\n> > I've merged my patch into the v6 patch set Vignesh submitted.\n> >\n> > I've attached the updated version of the patches. I didn't change\n> > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > patch) and patches that add tests.\n> >\n>\n> I think we can push 0001. What do you think?\n\n+1\n\n>\n> > In 0003 patch I reordered the\n> > output parameters of pg_stat_replication_slots; showing total number\n> > of transactions and total bytes followed by statistics for spilled and\n> > streamed transactions seems appropriate to me.\n> >\n>\n> I am not sure about this because I think we might want to add some\n> info of stream/spill bytes in total_bytes description (something like\n> stream/spill bytes are not in addition to total_bytes).\n\nOkay.\n\n> So probably\n> keeping these new counters at the end makes more sense to me.\n\nBut I think all of those counters are new for users since\npg_stat_replication_slots view will be introduced to PG14, no?\n\n>\n> > Since my patch resolved\n> > the issue of writing stats beyond the end of the array, I've removed\n> > the patch that writes the number of stats into the stats file\n> > (v6-0004-Handle-overwriting-of-replication-slot-statistic-.patch).\n> >\n>\n> Okay, but I think it might be better to keep 0001, 
0002, 0003 as\n> Vignesh had because those are agreed upon changes and are\n> straightforward. We can push those and then further review HTAB\n> implementation and also see if Andres has any suggestions on the same.\n\nMakes sense. Maybe it should have written my patch as 0004 (i.g.,\napplied on top of the patch that adds total_txn and tota_bytes).\n\n>\n> > Apart from the above updates, the\n> > contrib/test_decoding/001_repl_stats.pl add wait_for_decode_stats()\n> > function during testing but I think we can use poll_query_until()\n> > instead.\n>\n> +1. Can you please change it in the next version?\n\nSure, I'll update the patches.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 12 Apr 2021 20:04:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > It seems Vignesh has changed patches based on the latest set of\n> > > > comments so you might want to rebase.\n> > >\n> > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > >\n> > > I've attached the updated version of the patches. I didn't change\n> > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > patch) and patches that add tests.\n> > >\n> >\n> > I think we can push 0001. 
What do you think?\n>\n> +1\n>\n> >\n> > > In 0003 patch I reordered the\n> > > output parameters of pg_stat_replication_slots; showing total number\n> > > of transactions and total bytes followed by statistics for spilled and\n> > > streamed transactions seems appropriate to me.\n> > >\n> >\n> > I am not sure about this because I think we might want to add some\n> > info of stream/spill bytes in total_bytes description (something like\n> > stream/spill bytes are not in addition to total_bytes).\n>\n> Okay.\n>\n> > So probably\n> > keeping these new counters at the end makes more sense to me.\n>\n> But I think all of those counters are new for users since\n> pg_stat_replication_slots view will be introduced to PG14, no?\n>\n\nRight, I was referring to total_txns and total_bytes attributes. I\nthink keeping them at end after spill and stream counters should be\nokay.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Apr 2021 16:38:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > It seems Vignesh has changed patches based on the latest set of\n> > > > comments so you might want to rebase.\n> > >\n> > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > >\n> > > I've attached the updated version of the patches. I didn't change\n> > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > patch) and patches that add tests.\n> > >\n> >\n> > I think we can push 0001. 
What do you think?\n>\n> +1\n>\n> >\n> > > In 0003 patch I reordered the\n> > > output parameters of pg_stat_replication_slots; showing total number\n> > > of transactions and total bytes followed by statistics for spilled and\n> > > streamed transactions seems appropriate to me.\n> > >\n> >\n> > I am not sure about this because I think we might want to add some\n> > info of stream/spill bytes in total_bytes description (something like\n> > stream/spill bytes are not in addition to total_bytes).\n>\n> Okay.\n>\n> > So probably\n> > keeping these new counters at the end makes more sense to me.\n>\n> But I think all of those counters are new for users since\n> pg_stat_replication_slots view will be introduced to PG14, no?\n>\n> >\n> > > Since my patch resolved\n> > > the issue of writing stats beyond the end of the array, I've removed\n> > > the patch that writes the number of stats into the stats file\n> > > (v6-0004-Handle-overwriting-of-replication-slot-statistic-.patch).\n> > >\n> >\n> > Okay, but I think it might be better to keep 0001, 0002, 0003 as\n> > Vignesh had because those are agreed upon changes and are\n> > straightforward. We can push those and then further review HTAB\n> > implementation and also see if Andres has any suggestions on the same.\n>\n> Makes sense. Maybe it should have written my patch as 0004 (i.g.,\n> applied on top of the patch that adds total_txn and tota_bytes).\n>\n> >\n> > > Apart from the above updates, the\n> > > contrib/test_decoding/001_repl_stats.pl add wait_for_decode_stats()\n> > > function during testing but I think we can use poll_query_until()\n> > > instead.\n> >\n> > +1. 
Can you please change it in the next version?\n>\n> Sure, I'll update the patches.\n\nI had started working on poll_query_until comment, I will test and\npost a patch for that comment shortly.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 12 Apr 2021 16:45:07 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Apr 10, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nThanks, 0001 and 0002 look good to me. I have a minor comment for 0002.\n\n<entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>total_bytes</structfield><type>bigint</type>\n+ </para>\n+ <para>\n+ Amount of decoded transactions data sent to the decoding output plugin\n+ while decoding the changes from WAL for this slot. This can be used to\n+ gauge the total amount of data sent during logical decoding.\n\nCan we slightly extend it to say something like: Note that this\nincludes the bytes streamed and or spilled. 
Similarly, we can extend\nit for total_txns.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Apr 2021 16:46:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > comments so you might want to rebase.\n> > > >\n> > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > >\n> > > > I've attached the updated version of the patches. I didn't change\n> > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > patch) and patches that add tests.\n> > > >\n> > >\n> > > I think we can push 0001. What do you think?\n> >\n> > +1\n> >\n> > >\n> > > > In 0003 patch I reordered the\n> > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > of transactions and total bytes followed by statistics for spilled and\n> > > > streamed transactions seems appropriate to me.\n> > > >\n> > >\n> > > I am not sure about this because I think we might want to add some\n> > > info of stream/spill bytes in total_bytes description (something like\n> > > stream/spill bytes are not in addition to total_bytes).\n\nBTW doesn't it confuse users that stream/spill bytes are not in\naddition to total_bytes? User will need to do \"total_bytes +\nspill/stream_bytes\" to know the actual total amount of data sent to\nthe decoding output plugin, is that right? 
The doc says \"Amount of\ndecoded transactions data sent to the decoding output plugin while\ndecoding the changes from WAL for this slot\" but I think we also send\ndecoded data that had been spilled to the decoding output plugin.\n\n> >\n> > Okay.\n> >\n> > > So probably\n> > > keeping these new counters at the end makes more sense to me.\n> >\n> > But I think all of those counters are new for users since\n> > pg_stat_replication_slots view will be introduced to PG14, no?\n> >\n>\n> Right, I was referring to total_txns and total_bytes attributes. I\n> think keeping them at end after spill and stream counters should be\n> okay.\n\nOkay, understood.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 12 Apr 2021 20:58:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Apr 10, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Thanks, 0001 and 0002 look good to me. I have a minor comment for 0002.\n>\n> <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>total_bytes</structfield><type>bigint</type>\n> + </para>\n> + <para>\n> + Amount of decoded transactions data sent to the decoding output plugin\n> + while decoding the changes from WAL for this slot. This can be used to\n> + gauge the total amount of data sent during logical decoding.\n>\n> Can we slightly extend it to say something like: Note that this\n> includes the bytes streamed and or spilled. 
Similarly, we can extend\n> it for total_txns.\n>\n\nThanks for the comments, the comments are fixed in the v8 patch attached.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 12 Apr 2021 17:46:37 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > comments so you might want to rebase.\n> > > > >\n> > > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > > >\n> > > > > I've attached the updated version of the patches. I didn't change\n> > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > patch) and patches that add tests.\n> > > > >\n> > > >\n> > > > I think we can push 0001. 
What do you think?\n> > >\n> > > +1\n> > >\n> > > >\n> > > > > In 0003 patch I reordered the\n> > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > streamed transactions seems appropriate to me.\n> > > > >\n> > > >\n> > > > I am not sure about this because I think we might want to add some\n> > > > info of stream/spill bytes in total_bytes description (something like\n> > > > stream/spill bytes are not in addition to total_bytes).\n>\n> BTW doesn't it confuse users that stream/spill bytes are not in\n> addition to total_bytes? User will need to do \"total_bytes +\n> spill/stream_bytes\" to know the actual total amount of data sent to\n> the decoding output plugin, is that right?\n>\n\nNo, total_bytes includes the spill/stream bytes. So, the user doesn't\nneed to do any calculation to compute totel_bytes sent to output\nplugin.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 12 Apr 2021 18:06:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > > comments so you might want to 
rebase.\n> > > > > >\n> > > > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > > > >\n> > > > > > I've attached the updated version of the patches. I didn't change\n> > > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > > patch) and patches that add tests.\n> > > > > >\n> > > > >\n> > > > > I think we can push 0001. What do you think?\n> > > >\n> > > > +1\n> > > >\n> > > > >\n> > > > > > In 0003 patch I reordered the\n> > > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > > streamed transactions seems appropriate to me.\n> > > > > >\n> > > > >\n> > > > > I am not sure about this because I think we might want to add some\n> > > > > info of stream/spill bytes in total_bytes description (something like\n> > > > > stream/spill bytes are not in addition to total_bytes).\n> >\n> > BTW doesn't it confuse users that stream/spill bytes are not in\n> > addition to total_bytes? User will need to do \"total_bytes +\n> > spill/stream_bytes\" to know the actual total amount of data sent to\n> > the decoding output plugin, is that right?\n> >\n>\n> No, total_bytes includes the spill/stream bytes. So, the user doesn't\n> need to do any calculation to compute totel_bytes sent to output\n> plugin.\n\nThe following test for the latest v8 patch seems to show different.\ntotal_bytes is 1808 whereas spill_bytes is 13200000. 
Am I missing\nsomething?\n\npostgres(1:85969)=# select pg_create_logical_replication_slot('s',\n'test_decoding');\n pg_create_logical_replication_slot\n------------------------------------\n (s,0/1884468)\n(1 row)\n\npostgres(1:85969)=# create table a (i int);\nCREATE TABLE\npostgres(1:85969)=# insert into a select generate_series(1, 100000);\nINSERT 0 100000\npostgres(1:85969)=# set logical_decoding_work_mem to 64;\nSET\npostgres(1:85969)=# select * from pg_stat_replication_slots ;\n slot_name | total_txns | total_bytes | spill_txns | spill_count |\nspill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n-----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n s | 0 | 0 | 0 | 0 |\n 0 | 0 | 0 | 0 |\n(1 row)\n\npostgres(1:85969)=# select count(*) from\npg_logical_slot_peek_changes('s', NULL, NULL);\n count\n--------\n 100004\n(1 row)\n\npostgres(1:85969)=# select * from pg_stat_replication_slots ;\n slot_name | total_txns | total_bytes | spill_txns | spill_count |\nspill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n-----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n s | 2 | 1808 | 1 | 202 |\n13200000 | 0 | 0 | 0 |\n(1 row)\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 12 Apr 2021 22:33:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 
12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > > > comments so you might want to rebase.\n> > > > > > >\n> > > > > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > > > > >\n> > > > > > > I've attached the updated version of the patches. I didn't change\n> > > > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > > > patch) and patches that add tests.\n> > > > > > >\n> > > > > >\n> > > > > > I think we can push 0001. What do you think?\n> > > > >\n> > > > > +1\n> > > > >\n> > > > > >\n> > > > > > > In 0003 patch I reordered the\n> > > > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > > > streamed transactions seems appropriate to me.\n> > > > > > >\n> > > > > >\n> > > > > > I am not sure about this because I think we might want to add some\n> > > > > > info of stream/spill bytes in total_bytes description (something like\n> > > > > > stream/spill bytes are not in addition to total_bytes).\n> > >\n> > > BTW doesn't it confuse users that stream/spill bytes are not in\n> > > addition to total_bytes? User will need to do \"total_bytes +\n> > > spill/stream_bytes\" to know the actual total amount of data sent to\n> > > the decoding output plugin, is that right?\n> > >\n> >\n> > No, total_bytes includes the spill/stream bytes. 
So, the user doesn't\n> > need to do any calculation to compute totel_bytes sent to output\n> > plugin.\n>\n> The following test for the latest v8 patch seems to show different.\n> total_bytes is 1808 whereas spill_bytes is 13200000. Am I missing\n> something?\n\nI will check this issue and post my analysis.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 12 Apr 2021 21:39:14 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 9:16 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Apr 10, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Thanks, 0001 and 0002 look good to me. I have a minor comment for 0002.\n> >\n> > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > + <structfield>total_bytes</structfield><type>bigint</type>\n> > + </para>\n> > + <para>\n> > + Amount of decoded transactions data sent to the decoding output plugin\n> > + while decoding the changes from WAL for this slot. This can be used to\n> > + gauge the total amount of data sent during logical decoding.\n> >\n> > Can we slightly extend it to say something like: Note that this\n> > includes the bytes streamed and or spilled. 
Similarly, we can extend\n> > it for total_txns.\n> >\n>\n> Thanks for the comments, the comments are fixed in the v8 patch attached.\n> Thoughts?\n\nHere are review comments on new TAP tests:\n\n+# Create replication slots.\n+$node->safe_psql('postgres',\n+ \"SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot1',\n'test_decoding')\");\n+$node->safe_psql('postgres',\n+ \"SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot2',\n'test_decoding')\");\n+$node->safe_psql('postgres',\n+ \"SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot3',\n'test_decoding')\");\n+$node->safe_psql('postgres',\n+ \"SELECT 'init' FROM\npg_create_logical_replication_slot('regression_slot4',\n'test_decoding')\");\n\nand\n\n+\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot1', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot2', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n+$node->safe_psql('postgres',\n+ \"SELECT data FROM\npg_logical_slot_get_changes('regression_slot4', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1')\");\n\nI think we can do those similar queries in a single psql connection\nlike follows:\n\n # Create replication slots.\n $node->safe_psql('postgres',\n qq[\nSELECT pg_create_logical_replication_slot('regression_slot1', 'test_decoding');\nSELECT pg_create_logical_replication_slot('regression_slot2', 'test_decoding');\nSELECT pg_create_logical_replication_slot('regression_slot3', 'test_decoding');\nSELECT pg_create_logical_replication_slot('regression_slot4', 'test_decoding');\n]);\n\nand\n\n$node->safe_psql('postgres',\n qq[\nSELECT data FROM 
pg_logical_slot_get_changes('regression_slot1', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\nSELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\nSELECT data FROM pg_logical_slot_get_changes('regression_slot3', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\nSELECT data FROM pg_logical_slot_get_changes('regression_slot4', NULL,\nNULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n ]);\n\n---\n+# Wait for the statistics to be updated.\n+my $slot1_stat_check_query =\n+ \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n= 'regression_slot1' AND total_txns > 0 AND total_bytes > 0;\";\n+my $slot2_stat_check_query =\n+ \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n= 'regression_slot2' AND total_txns > 0 AND total_bytes > 0;\";\n+my $slot3_stat_check_query =\n+ \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n= 'regression_slot3' AND total_txns > 0 AND total_bytes > 0;\";\n+my $slot4_stat_check_query =\n+ \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n= 'regression_slot4' AND total_txns > 0 AND total_bytes > 0;\";\n+\n+# Verify that the statistics have been updated.\n+$node->poll_query_until('postgres', $slot1_stat_check_query)\n+ or die \"Timed out while waiting for statistics to be updated\";\n+$node->poll_query_until('postgres', $slot2_stat_check_query)\n+ or die \"Timed out while waiting for statistics to be updated\";\n+$node->poll_query_until('postgres', $slot3_stat_check_query)\n+ or die \"Timed out while waiting for statistics to be updated\";\n+$node->poll_query_until('postgres', $slot4_stat_check_query)\n+ or die \"Timed out while waiting for statistics to be updated\";\n\nWe can simplify the above code to something like:\n\n$node->poll_query_until(\n 'postgres', qq[\nSELECT count(slot_name) >= 4\nFROM pg_stat_replication_slots\nWHERE slot_name ~ 'regression_slot'\n AND 
total_txns > 0\n AND total_bytes > 0;\n]) or die \"Timed out while waiting for statistics to be updated\";\n\n---\n+# Test to remove one of the replication slots and adjust max_replication_slots\n+# accordingly to the number of slots and verify replication statistics data is\n+# fine after restart.\n\nI think it's better if we explain in detail what cases we're trying to\ntest. How about the following description?\n\nTest to remove one of the replication slots and adjust\nmax_replication_slots accordingly to the number of slots. This leads\nto a mismatch of the number of slots between in the stats file and on\nshared memory, simulating the message for dropping a slot got lost. We\nverify replication statistics data is fine after restart.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 13 Apr 2021 14:15:49 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > >\n> > > > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > > > comments so you might want to rebase.\n> > > > > > >\n> > > > > > > I've merged my patch 
into the v6 patch set Vignesh submitted.\n> > > > > > >\n> > > > > > > I've attached the updated version of the patches. I didn't change\n> > > > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > > > patch) and patches that add tests.\n> > > > > > >\n> > > > > >\n> > > > > > I think we can push 0001. What do you think?\n> > > > >\n> > > > > +1\n> > > > >\n> > > > > >\n> > > > > > > In 0003 patch I reordered the\n> > > > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > > > streamed transactions seems appropriate to me.\n> > > > > > >\n> > > > > >\n> > > > > > I am not sure about this because I think we might want to add some\n> > > > > > info of stream/spill bytes in total_bytes description (something like\n> > > > > > stream/spill bytes are not in addition to total_bytes).\n> > >\n> > > BTW doesn't it confuse users that stream/spill bytes are not in\n> > > addition to total_bytes? User will need to do \"total_bytes +\n> > > spill/stream_bytes\" to know the actual total amount of data sent to\n> > > the decoding output plugin, is that right?\n> > >\n> >\n> > No, total_bytes includes the spill/stream bytes. So, the user doesn't\n> > need to do any calculation to compute totel_bytes sent to output\n> > plugin.\n>\n> The following test for the latest v8 patch seems to show different.\n> total_bytes is 1808 whereas spill_bytes is 13200000. 
Am I missing\n> something?\n>\n> postgres(1:85969)=# select pg_create_logical_replication_slot('s',\n> 'test_decoding');\n> pg_create_logical_replication_slot\n> ------------------------------------\n> (s,0/1884468)\n> (1 row)\n>\n> postgres(1:85969)=# create table a (i int);\n> CREATE TABLE\n> postgres(1:85969)=# insert into a select generate_series(1, 100000);\n> INSERT 0 100000\n> postgres(1:85969)=# set logical_decoding_work_mem to 64;\n> SET\n> postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> s | 0 | 0 | 0 | 0 |\n> 0 | 0 | 0 | 0 |\n> (1 row)\n>\n> postgres(1:85969)=# select count(*) from\n> pg_logical_slot_peek_changes('s', NULL, NULL);\n> count\n> --------\n> 100004\n> (1 row)\n>\n> postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> s | 2 | 1808 | 1 | 202 |\n> 13200000 | 0 | 0 | 0 |\n> (1 row)\n>\n\nThanks for identifying this issue, while spilling the transactions\nreorder buffer changes gets released, we will not be able to get the\ntotal size for spilled transactions from reorderbuffer size. I have\nfixed it by including spilledbytes to totalbytes in case of spilled\ntransactions. 
Attached patch has the fix for this.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Tue, 13 Apr 2021 13:36:55 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 13, 2021 at 10:46 AM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n>\n> On Mon, Apr 12, 2021 at 9:16 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 4:46 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> > >\n> > > On Sat, Apr 10, 2021 at 6:51 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > Thanks, 0001 and 0002 look good to me. I have a minor comment for\n0002.\n> > >\n> > > <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> > > + <structfield>total_bytes</structfield><type>bigint</type>\n> > > + </para>\n> > > + <para>\n> > > + Amount of decoded transactions data sent to the decoding\noutput plugin\n> > > + while decoding the changes from WAL for this slot. This can\nbe used to\n> > > + gauge the total amount of data sent during logical decoding.\n> > >\n> > > Can we slightly extend it to say something like: Note that this\n> > > includes the bytes streamed and or spilled. 
Similarly, we can extend\n> > > it for total_txns.\n> > >\n> >\n> > Thanks for the comments, the comments are fixed in the v8 patch\nattached.\n> > Thoughts?\n>\n> Here are review comments on new TAP tests:\n\nThanks for the comments.\n\n> +# Create replication slots.\n> +$node->safe_psql('postgres',\n> + \"SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot1',\n> 'test_decoding')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot2',\n> 'test_decoding')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot3',\n> 'test_decoding')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT 'init' FROM\n> pg_create_logical_replication_slot('regression_slot4',\n> 'test_decoding')\");\n>\n> and\n>\n> +\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot1', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot2', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot3', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n> +$node->safe_psql('postgres',\n> + \"SELECT data FROM\n> pg_logical_slot_get_changes('regression_slot4', NULL, NULL,\n> 'include-xids', '0', 'skip-empty-xacts', '1')\");\n>\n> I think we can do those similar queries in a single psql connection\n> like follows:\n>\n> # Create replication slots.\n> $node->safe_psql('postgres',\n> qq[\n> SELECT pg_create_logical_replication_slot('regression_slot1',\n'test_decoding');\n> SELECT pg_create_logical_replication_slot('regression_slot2',\n'test_decoding');\n> SELECT pg_create_logical_replication_slot('regression_slot3',\n'test_decoding');\n> SELECT 
pg_create_logical_replication_slot('regression_slot4',\n'test_decoding');\n> ]);\n>\n> and\n>\n> $node->safe_psql('postgres',\n> qq[\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot2', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot3', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> SELECT data FROM pg_logical_slot_get_changes('regression_slot4', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> ]);\n>\n\nModified.\n\n> ---\n> +# Wait for the statistics to be updated.\n> +my $slot1_stat_check_query =\n> + \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n> = 'regression_slot1' AND total_txns > 0 AND total_bytes > 0;\";\n> +my $slot2_stat_check_query =\n> + \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n> = 'regression_slot2' AND total_txns > 0 AND total_bytes > 0;\";\n> +my $slot3_stat_check_query =\n> + \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n> = 'regression_slot3' AND total_txns > 0 AND total_bytes > 0;\";\n> +my $slot4_stat_check_query =\n> + \"SELECT count(1) = 1 FROM pg_stat_replication_slots WHERE slot_name\n> = 'regression_slot4' AND total_txns > 0 AND total_bytes > 0;\";\n> +\n> +# Verify that the statistics have been updated.\n> +$node->poll_query_until('postgres', $slot1_stat_check_query)\n> + or die \"Timed out while waiting for statistics to be updated\";\n> +$node->poll_query_until('postgres', $slot2_stat_check_query)\n> + or die \"Timed out while waiting for statistics to be updated\";\n> +$node->poll_query_until('postgres', $slot3_stat_check_query)\n> + or die \"Timed out while waiting for statistics to be updated\";\n> +$node->poll_query_until('postgres', $slot4_stat_check_query)\n> + or die \"Timed out while waiting for 
statistics to be updated\";\n>\n> We can simplify the above code to something like:\n>\n> $node->poll_query_until(\n>    'postgres', qq[\n> SELECT count(slot_name) >= 4\n> FROM pg_stat_replication_slots\n> WHERE slot_name ~ 'regression_slot'\n>     AND total_txns > 0\n>     AND total_bytes > 0;\n> ]) or die \"Timed out while waiting for statistics to be updated\";\n>\n\nModified.\n\n> ---\n> +# Test to remove one of the replication slots and adjust\nmax_replication_slots\n> +# accordingly to the number of slots and verify replication statistics\ndata is\n> +# fine after restart.\n>\n> I think it's better if we explain in detail what cases we're trying to\n> test. How about the following description?\n>\n> Test to remove one of the replication slots and adjust\n> max_replication_slots accordingly to the number of slots. This leads\n> to a mismatch of the number of slots between in the stats file and on\n> shared memory, simulating the message for dropping a slot got lost. We\n> verify replication statistics data is fine after restart.\n\nSlightly reworded and modified it.\n\nThese comments are fixed as part of the v9 patch posted at [1].\n[1] -\nhttps://www.postgresql.org/message-id/CALDaNm3CtPUYkFjPhzX0AcuRiK2MzdCR%2B_w8ok1kCcykveuL2Q%40mail.gmail.com\n\nRegards,\nVignesh\n\n", "msg_date": "Wed, 14 Apr 2021 07:20:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 13, 2021 at 5:07 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > >\n> > > > > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > > > > comments so you might want to rebase.\n> > > > > > > >\n> > > > > > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > > > > > >\n> > > > > > > > I've attached the updated version of the patches. I didn't change\n> > > > > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > > > > patch) and patches that add tests.\n> > > > > > > >\n> > > > > > >\n> > > > > > > I think we can push 0001. 
What do you think?\n> > > > > >\n> > > > > > +1\n> > > > > >\n> > > > > > >\n> > > > > > > > In 0003 patch I reordered the\n> > > > > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > > > > streamed transactions seems appropriate to me.\n> > > > > > > >\n> > > > > > >\n> > > > > > > I am not sure about this because I think we might want to add some\n> > > > > > > info of stream/spill bytes in total_bytes description (something like\n> > > > > > > stream/spill bytes are not in addition to total_bytes).\n> > > >\n> > > > BTW doesn't it confuse users that stream/spill bytes are not in\n> > > > addition to total_bytes? User will need to do \"total_bytes +\n> > > > spill/stream_bytes\" to know the actual total amount of data sent to\n> > > > the decoding output plugin, is that right?\n> > > >\n> > >\n> > > No, total_bytes includes the spill/stream bytes. So, the user doesn't\n> > > need to do any calculation to compute totel_bytes sent to output\n> > > plugin.\n> >\n> > The following test for the latest v8 patch seems to show different.\n> > total_bytes is 1808 whereas spill_bytes is 13200000. 
Am I missing\n> > something?\n> >\n> > postgres(1:85969)=# select pg_create_logical_replication_slot('s',\n> > 'test_decoding');\n> > pg_create_logical_replication_slot\n> > ------------------------------------\n> > (s,0/1884468)\n> > (1 row)\n> >\n> > postgres(1:85969)=# create table a (i int);\n> > CREATE TABLE\n> > postgres(1:85969)=# insert into a select generate_series(1, 100000);\n> > INSERT 0 100000\n> > postgres(1:85969)=# set logical_decoding_work_mem to 64;\n> > SET\n> > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > s | 0 | 0 | 0 | 0 |\n> > 0 | 0 | 0 | 0 |\n> > (1 row)\n> >\n> > postgres(1:85969)=# select count(*) from\n> > pg_logical_slot_peek_changes('s', NULL, NULL);\n> > count\n> > --------\n> > 100004\n> > (1 row)\n> >\n> > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > s | 2 | 1808 | 1 | 202 |\n> > 13200000 | 0 | 0 | 0 |\n> > (1 row)\n> >\n>\n> Thanks for identifying this issue, while spilling the transactions\n> reorder buffer changes gets released, we will not be able to get the\n> total size for spilled transactions from reorderbuffer size. I have\n> fixed it by including spilledbytes to totalbytes in case of spilled\n> transactions. Attached patch has the fix for this.\n> Thoughts?\n\nI've not looked at the patches yet but as Amit mentioned before[1],\nit's better to move 0002 patch to after 0004. 
That is, 0001 patch\nchanges data type to NameData, 0002 patch adds total_txn and\ntotal_bytes, and 0003 patch adds regression tests. 0004 patch will be\nthe patch using HTAB (was 0002 patch) and get reviewed after pushing\n0001, 0002, and 0003 patches. 0005 patch adds more regression tests\nfor the problem 0004 patch addresses.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1Kd4ag6Vc6jO%2BntYmTMiR70x3t_%2BYQRMDP%3D9T5a2uzUHg%40mail.gmail.com\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 14 Apr 2021 11:21:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 14, 2021 at 7:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 13, 2021 at 5:07 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 12, 2021 at 9:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 12, 2021 at 5:29 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 12, 2021 at 8:08 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Mon, Apr 12, 2021 at 4:34 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Mon, Apr 12, 2021 at 6:19 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > > > On Mon, Apr 12, 2021 at 10:27 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > > > >\n> > > > > > > > > On Sat, Apr 10, 2021 at 9:53 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > > > >\n> > > > > > > > > >\n> > > > > > > > > > It seems Vignesh has changed patches based on the latest set of\n> > > > > > > > > > comments so you might want to rebase.\n> > > > > > > > >\n> > > > > > > > > I've merged my patch into the v6 patch set Vignesh submitted.\n> > > > 
> > > > >\n> > > > > > > > > I've attached the updated version of the patches. I didn't change\n> > > > > > > > > anything in the patch that changes char[NAMEDATALEN] to NameData (0001\n> > > > > > > > > patch) and patches that add tests.\n> > > > > > > > >\n> > > > > > > >\n> > > > > > > > I think we can push 0001. What do you think?\n> > > > > > >\n> > > > > > > +1\n> > > > > > >\n> > > > > > > >\n> > > > > > > > > In 0003 patch I reordered the\n> > > > > > > > > output parameters of pg_stat_replication_slots; showing total number\n> > > > > > > > > of transactions and total bytes followed by statistics for spilled and\n> > > > > > > > > streamed transactions seems appropriate to me.\n> > > > > > > > >\n> > > > > > > >\n> > > > > > > > I am not sure about this because I think we might want to add some\n> > > > > > > > info of stream/spill bytes in total_bytes description (something like\n> > > > > > > > stream/spill bytes are not in addition to total_bytes).\n> > > > >\n> > > > > BTW doesn't it confuse users that stream/spill bytes are not in\n> > > > > addition to total_bytes? User will need to do \"total_bytes +\n> > > > > spill/stream_bytes\" to know the actual total amount of data sent to\n> > > > > the decoding output plugin, is that right?\n> > > > >\n> > > >\n> > > > No, total_bytes includes the spill/stream bytes. So, the user doesn't\n> > > > need to do any calculation to compute totel_bytes sent to output\n> > > > plugin.\n> > >\n> > > The following test for the latest v8 patch seems to show different.\n> > > total_bytes is 1808 whereas spill_bytes is 13200000. 
Am I missing\n> > > something?\n> > >\n> > > postgres(1:85969)=# select pg_create_logical_replication_slot('s',\n> > > 'test_decoding');\n> > > pg_create_logical_replication_slot\n> > > ------------------------------------\n> > > (s,0/1884468)\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# create table a (i int);\n> > > CREATE TABLE\n> > > postgres(1:85969)=# insert into a select generate_series(1, 100000);\n> > > INSERT 0 100000\n> > > postgres(1:85969)=# set logical_decoding_work_mem to 64;\n> > > SET\n> > > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > > s | 0 | 0 | 0 | 0 |\n> > > 0 | 0 | 0 | 0 |\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# select count(*) from\n> > > pg_logical_slot_peek_changes('s', NULL, NULL);\n> > > count\n> > > --------\n> > > 100004\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > > s | 2 | 1808 | 1 | 202 |\n> > > 13200000 | 0 | 0 | 0 |\n> > > (1 row)\n> > >\n> >\n> > Thanks for identifying this issue, while spilling the transactions\n> > reorder buffer changes gets released, we will not be able to get the\n> > total size for spilled transactions from reorderbuffer size. I have\n> > fixed it by including spilledbytes to totalbytes in case of spilled\n> > transactions. 
Attached patch has the fix for this.\n> > Thoughts?\n>\n> I've not looked at the patches yet but as Amit mentioned before[1],\n> it's better to move 0002 patch to after 0004. That is, 0001 patch\n> changes data type to NameData, 0002 patch adds total_txn and\n> total_bytes, and 0003 patch adds regression tests. 0004 patch will be\n> the patch using HTAB (was 0002 patch) and get reviewed after pushing\n> 0001, 0002, and 0003 patches. 0005 patch adds more regression tests\n> for the problem 0004 patch addresses.\n\nI will make the change for this and post a patch for this.\nCurrently we have kept total_txns and total_bytes at the beginning of\npg_stat_replication_slots, I did not see any conclusion on this. I\npreferred it to be at the beginning.\nThoughts?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Apr 2021 08:04:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 14, 2021 at 8:04 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, Apr 14, 2021 at 7:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've not looked at the patches yet but as Amit mentioned before[1],\n> > it's better to move 0002 patch to after 0004. That is, 0001 patch\n> > changes data type to NameData, 0002 patch adds total_txn and\n> > total_bytes, and 0003 patch adds regression tests. 0004 patch will be\n> > the patch using HTAB (was 0002 patch) and get reviewed after pushing\n> > 0001, 0002, and 0003 patches. 0005 patch adds more regression tests\n> > for the problem 0004 patch addresses.\n>\n> I will make the change for this and post a patch for this.\n> Currently we have kept total_txns and total_bytes at the beginning of\n> pg_stat_replication_slots, I did not see any conclusion on this. I\n> preferred it to be at the beginning.\n> Thoughts?\n>\n\nI prefer those two fields after spill and stream fields. 
I have\nmentioned the same in one of the emails above.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 14 Apr 2021 08:18:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 13, 2021 at 1:37 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > The following test for the latest v8 patch seems to show different.\n> > total_bytes is 1808 whereas spill_bytes is 13200000. Am I missing\n> > something?\n> >\n> > postgres(1:85969)=# select pg_create_logical_replication_slot('s',\n> > 'test_decoding');\n> > pg_create_logical_replication_slot\n> > ------------------------------------\n> > (s,0/1884468)\n> > (1 row)\n> >\n> > postgres(1:85969)=# create table a (i int);\n> > CREATE TABLE\n> > postgres(1:85969)=# insert into a select generate_series(1, 100000);\n> > INSERT 0 100000\n> > postgres(1:85969)=# set logical_decoding_work_mem to 64;\n> > SET\n> > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > s | 0 | 0 | 0 | 0 |\n> > 0 | 0 | 0 | 0 |\n> > (1 row)\n> >\n> > postgres(1:85969)=# select count(*) from\n> > pg_logical_slot_peek_changes('s', NULL, NULL);\n> > count\n> > --------\n> > 100004\n> > (1 row)\n> >\n> > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > 
-----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > s | 2 | 1808 | 1 | 202 |\n> > 13200000 | 0 | 0 | 0 |\n> > (1 row)\n> >\n>\n> Thanks for identifying this issue, while spilling the transactions\n> reorder buffer changes gets released, we will not be able to get the\n> total size for spilled transactions from reorderbuffer size. I have\n> fixed it by including spilledbytes to totalbytes in case of spilled\n> transactions. Attached patch has the fix for this.\n> Thoughts?\n>\n\nI am not sure if that is the best way to fix it because sometimes we\nclear the serialized flag in which case it might not give the correct\nanswer. Another way to fix it could be that before we try to restore a\nnew set of changes, we update totalBytes counter. See, the attached\npatch atop your v6-0002-* patch.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 14 Apr 2021 12:09:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 14, 2021 at 12:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 13, 2021 at 1:37 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Apr 12, 2021 at 7:03 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > The following test for the latest v8 patch seems to show different.\n> > > total_bytes is 1808 whereas spill_bytes is 13200000. 
Am I missing\n> > > something?\n> > >\n> > > postgres(1:85969)=# select pg_create_logical_replication_slot('s',\n> > > 'test_decoding');\n> > > pg_create_logical_replication_slot\n> > > ------------------------------------\n> > > (s,0/1884468)\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# create table a (i int);\n> > > CREATE TABLE\n> > > postgres(1:85969)=# insert into a select generate_series(1, 100000);\n> > > INSERT 0 100000\n> > > postgres(1:85969)=# set logical_decoding_work_mem to 64;\n> > > SET\n> > > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > > s | 0 | 0 | 0 | 0 |\n> > > 0 | 0 | 0 | 0 |\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# select count(*) from\n> > > pg_logical_slot_peek_changes('s', NULL, NULL);\n> > > count\n> > > --------\n> > > 100004\n> > > (1 row)\n> > >\n> > > postgres(1:85969)=# select * from pg_stat_replication_slots ;\n> > > slot_name | total_txns | total_bytes | spill_txns | spill_count |\n> > > spill_bytes | stream_txns | stream_count | stream_bytes | stats_reset\n> > > -----------+------------+-------------+------------+-------------+-------------+-------------+--------------+--------------+-------------\n> > > s | 2 | 1808 | 1 | 202 |\n> > > 13200000 | 0 | 0 | 0 |\n> > > (1 row)\n> > >\n> >\n> > Thanks for identifying this issue, while spilling the transactions\n> > reorder buffer changes gets released, we will not be able to get the\n> > total size for spilled transactions from reorderbuffer size. I have\n> > fixed it by including spilledbytes to totalbytes in case of spilled\n> > transactions. 
Attached patch has the fix for this.\n> > Thoughts?\n> >\n>\n> I am not sure if that is the best way to fix it because sometimes we\n> clear the serialized flag in which case it might not give the correct\n> answer. Another way to fix it could be that before we try to restore a\n> new set of changes, we update totalBytes counter. See, the attached\n> patch atop your v6-0002-* patch.\n\nI felt calculating totalbytes this way is better than depending on\nspill_bytes. I have taken your changes. Attached patch includes the\nchanges suggested.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Wed, 14 Apr 2021 17:52:45 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 14, 2021 at 5:52 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nI have made minor changes to the 0001 and 0002 patches. Attached is\nthe combined patch for them, I think we can push them as one patch.\nChanges made are (a) minor editing in comments, (b) changed the\ncondition when to report stats such that unless we have processed any\nbytes, we shouldn't send those, (c) removed some unrelated changes\nfrom 0002, (d) ran pgindent.\n\nLet me know what you think of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 15 Apr 2021 11:52:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 14, 2021 at 5:52 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> I have made minor changes to the 0001 and 0002 patches. 
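[Editorial note: the fix discussed above — counting spilled data into total_bytes before restoring changes — can be sanity-checked against the earlier test_decoding session. This is a hedged sketch, not from the thread; it assumes the slot name 's' from that session and a server with the fix applied.]

```sql
-- Hypothetical check after re-running
-- pg_logical_slot_peek_changes('s', NULL, NULL) with the fix applied:
-- total_bytes should now account for the spilled changes as well,
-- so this should no longer show total_bytes far below spill_bytes.
SELECT slot_name,
       total_bytes >= spill_bytes AS total_covers_spilled
FROM pg_stat_replication_slots
WHERE slot_name = 's';
```
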
Attached is\n> the combined patch for them, I think we can push them as one patch.\n> Changes made are (a) minor editing in comments, (b) changed the\n> condition when to report stats such that unless we have processed any\n> bytes, we shouldn't send those, (c) removed some unrelated changes\n> from 0002, (d) ran pgindent.\n>\n> Let me know what you think of the attached?\n\nChanges look fine to me, the patch applies neatly and make check-world passes.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 15 Apr 2021 12:45:36 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 12:45 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 14, 2021 at 5:52 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > I have made minor changes to the 0001 and 0002 patches. Attached is\n> > the combined patch for them, I think we can push them as one patch.\n> > Changes made are (a) minor editing in comments, (b) changed the\n> > condition when to report stats such that unless we have processed any\n> > bytes, we shouldn't send those, (c) removed some unrelated changes\n> > from 0002, (d) ran pgindent.\n> >\n> > Let me know what you think of the attached?\n>\n> Changes look fine to me, the patch applies neatly and make check-world passes.\n>\n\nThanks! 
Sawada-San, others, unless you have any suggestions, I am\nplanning to push\nv11-0001-Add-information-of-total-data-processed-to-repli.patch\ntomorrow.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 15 Apr 2021 13:12:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 14, 2021 at 5:52 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> I have made minor changes to the 0001 and 0002 patches. Attached is\n> the combined patch for them, I think we can push them as one patch.\n> Changes made are (a) minor editing in comments, (b) changed the\n> condition when to report stats such that unless we have processed any\n> bytes, we shouldn't send those, (c) removed some unrelated changes\n> from 0002, (d) ran pgindent.\n>\n> Let me know what you think of the attached?\n\nThank you for updating the patch.\n\nI have one question on the doc change:\n\n+ so the counter is not incremented for subtransactions. Note that this\n+ includes the transactions streamed and or spilled.\n+ </para></entry>\n\nThe patch uses the sentence \"streamed and or spilled\" in two places.\nYou meant “streamed and spilled”? 
Even if it actually means “and or”,\nusing \"and or” (i.g., connecting “and” to “or” by a space) is general?\nI could not find we use it other places in the doc but found we're\nusing \"and/or\" instead.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 15 Apr 2021 16:42:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> Thank you for updating the patch.\n>\n> I have one question on the doc change:\n>\n> + so the counter is not incremented for subtransactions. Note that this\n> + includes the transactions streamed and or spilled.\n> + </para></entry>\n>\n> The patch uses the sentence \"streamed and or spilled\" in two places.\n> You meant “streamed and spilled”? Even if it actually means “and or”,\n> using \"and or” (i.g., connecting “and” to “or” by a space) is general?\n> I could not find we use it other places in the doc but found we're\n> using \"and/or\" instead.\n>\n\nI changed it to 'and/or' and made another minor change.\n\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 15 Apr 2021 14:46:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 6:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > Thank you for updating the patch.\n> >\n> > I have one question on the doc change:\n> >\n> > + so the counter is not incremented for subtransactions. 
Note that this\n> > + includes the transactions streamed and or spilled.\n> > + </para></entry>\n> >\n> > The patch uses the sentence \"streamed and or spilled\" in two places.\n> > You meant “streamed and spilled”? Even if it actually means “and or”,\n> > using \"and or” (i.g., connecting “and” to “or” by a space) is general?\n> > I could not find we use it other places in the doc but found we're\n> > using \"and/or\" instead.\n> >\n>\n> I changed it to 'and/or' and made another minor change.\n\n\nThank you for the update! The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 15 Apr 2021 20:04:31 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 02:46:35PM +0530, Amit Kapila wrote:\n> On Thu, Apr 15, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > Thank you for updating the patch.\n> >\n> > I have one question on the doc change:\n> >\n> > + so the counter is not incremented for subtransactions. Note that this\n> > + includes the transactions streamed and or spilled.\n> > + </para></entry>\n> >\n> > The patch uses the sentence \"streamed and or spilled\" in two places.\n> > You meant “streamed and spilled”? Even if it actually means “and or”,\n> > using \"and or” (i.g., connecting “and” to “or” by a space) is general?\n> > I could not find we use it other places in the doc but found we're\n> > using \"and/or\" instead.\n> >\n> \n> I changed it to 'and/or' and made another minor change.\n\nI'm suggesting some doc changes. 
If these are fine, I'll include in my next\nround of doc review, in case you don't want to make another commit just for\nthat.\n\ndiff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\nindex 1d90eb0f21..18c5bba254 100644\n--- a/doc/src/sgml/monitoring.sgml\n+++ b/doc/src/sgml/monitoring.sgml\n@@ -2720,9 +2720,9 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n </para>\n <para>\n Number of decoded transactions sent to the decoding output plugin for\n- this slot. This counter is used to maintain the top level transactions,\n- so the counter is not incremented for subtransactions. Note that this\n- includes the transactions that are streamed and/or spilled.\n+ this slot. This counts top-level transactions only,\n+ and is not incremented for subtransactions. Note that this\n+ includes transactions that are streamed and/or spilled.\n </para></entry>\n </row>\n \n@@ -2731,10 +2731,10 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i\n <structfield>total_bytes</structfield><type>bigint</type>\n </para>\n <para>\n- Amount of decoded transactions data sent to the decoding output plugin\n+ Amount of decoded transaction data sent to the decoding output plugin\n while decoding the changes from WAL for this slot. This can be used to\n gauge the total amount of data sent during logical decoding. 
Note that\n- this includes the data that is streamed and/or spilled.\n+ this includes data that is streamed and/or spilled.\n </para>\n </entry>\n </row>\n-- \n2.17.0\n\n\n\n", "msg_date": "Thu, 15 Apr 2021 21:52:18 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 8:22 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 02:46:35PM +0530, Amit Kapila wrote:\n> > On Thu, Apr 15, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > Thank you for updating the patch.\n> > >\n> > > I have one question on the doc change:\n> > >\n> > > + so the counter is not incremented for subtransactions. Note that this\n> > > + includes the transactions streamed and or spilled.\n> > > + </para></entry>\n> > >\n> > > The patch uses the sentence \"streamed and or spilled\" in two places.\n> > > You meant “streamed and spilled”? Even if it actually means “and or”,\n> > > using \"and or” (i.g., connecting “and” to “or” by a space) is general?\n> > > I could not find we use it other places in the doc but found we're\n> > > using \"and/or\" instead.\n> > >\n> >\n> > I changed it to 'and/or' and made another minor change.\n>\n> I'm suggesting some doc changes. If these are fine, I'll include in my next\n> round of doc review, in case you don't want to make another commit just for\n> that.\n>\n\nI am fine with your proposed changes. There are one or two more\npatches in this area. 
I can include your suggestions along with those\nif you don't mind?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Apr 2021 08:48:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 08:48:29AM +0530, Amit Kapila wrote:\n> I am fine with your proposed changes. There are one or two more\n> patches in this area. I can include your suggestions along with those\n> if you don't mind?\n\nHowever's convenient is fine \n\n-- \nJustin\n\n\n", "msg_date": "Thu, 15 Apr 2021 22:20:14 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 2:46 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 1:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 15, 2021 at 3:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > Thank you for updating the patch.\n> >\n> > I have one question on the doc change:\n> >\n> > + so the counter is not incremented for subtransactions. Note that this\n> > + includes the transactions streamed and or spilled.\n> > + </para></entry>\n> >\n> > The patch uses the sentence \"streamed and or spilled\" in two places.\n> > You meant “streamed and spilled”? Even if it actually means “and or”,\n> > using \"and or” (i.g., connecting “and” to “or” by a space) is general?\n> > I could not find we use it other places in the doc but found we're\n> > using \"and/or\" instead.\n> >\n>\n> I changed it to 'and/or' and made another minor change.\n\nI have rebased the remaining patches on top of head. 
Attached the\npatches for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Fri, 16 Apr 2021 09:08:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 15, 2021 at 4:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Thank you for the update! The patch looks good to me.\n>\n\nI have pushed the first patch. Comments on the next patch\nv13-0001-Use-HTAB-for-replication-slot-statistics:\n1.\n+ /*\n+ * Check for all replication slots in stats hash table. We do this check\n+ * when replSlotStats has more than max_replication_slots entries, i.e,\n+ * when there are stats for the already-dropped slot, to avoid frequent\n+ * call SearchNamedReplicationSlot() which acquires LWLock.\n+ */\n+ if (replSlotStats && hash_get_num_entries(replSlotStats) >\nmax_replication_slots)\n+ {\n+ PgStat_ReplSlotEntry *slotentry;\n+\n+ hash_seq_init(&hstat, replSlotStats);\n+ while ((slotentry = (PgStat_ReplSlotEntry *) hash_seq_search(&hstat)) != NULL)\n+ {\n+ if (SearchNamedReplicationSlot(NameStr(slotentry->slotname), true) == NULL)\n+ pgstat_report_replslot_drop(NameStr(slotentry->slotname));\n+ }\n+ }\n\nIs SearchNamedReplicationSlot() so frequently used that we need to do\nthis only when the hash table has entries more than\nmax_replication_slots? I think it would be better if we can do it\nwithout such a condition to reduce the chances of missing the slot\nstats. We don't have any such restrictions for any other cases in this\nfunction.\n\nI think it is better to add CHECK_FOR_INTERRUPTS in the above while loop?\n\n2.\n/*\n * Replication slot statistics kept in the stats collector\n */\n-typedef struct PgStat_ReplSlotStats\n+typedef struct PgStat_ReplSlotEntry\n\nI think the comment above this structure can be changed to \"The\ncollector's data per slot\" or something like that. 
Also, if we have to\nfollow table/function/db style, then probably this structure should be\nnamed as PgStat_StatReplSlotEntry.\n\n3.\n- * create the statistics for the replication slot.\n+ * create the statistics for the replication slot. In case where the\n+ * message for dropping the old slot gets lost and a slot with the same is\n\n/the same is/the same name is/.\n\nCan we mention something similar to what you have added here in docs as well?\n\n4.\n+CREATE VIEW pg_stat_replication_slots AS\n+ SELECT\n+ s.slot_name,\n+ s.spill_txns,\n+ s.spill_count,\n+ s.spill_bytes,\n+ s.stream_txns,\n+ s.stream_count,\n+ s.stream_bytes,\n+ s.total_txns,\n+ s.total_bytes,\n+ s.stats_reset\n+ FROM pg_replication_slots as r,\n+ LATERAL pg_stat_get_replication_slot(slot_name) as s\n+ WHERE r.datoid IS NOT NULL; -- excluding physical slots\n..\n..\n\n-/* Get the statistics for the replication slots */\n+/* Get the statistics for the replication slot */\n Datum\n-pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n+pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n {\n #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n- ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n+ text *slotname_text = PG_GETARG_TEXT_P(0);\n+ NameData slotname;\n\nI think with the above changes getting all the slot stats has become\nmuch costlier. 
Is there any reason why can't we get all the stats from\nthe new hash_table in one shot and return them to the user?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Apr 2021 11:28:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 12, 2021 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > And then more generally about the feature:\n> > > - If a slot was used to stream out a large amount of changes (say an\n> > > initial data load), but then replication is interrupted before the\n> > > transaction is committed/aborted, stream_bytes will not reflect the\n> > > many gigabytes of data we may have sent.\n> > >\n> >\n> > We can probably update the stats each time we spilled or streamed the\n> > transaction data but it was not clear at that stage whether or how\n> > much it will be useful.\n> >\n>\n> I felt we can update the replication slot statistics data each time we\n> spill/stream the transaction data instead of accumulating the\n> statistics and updating at the end. I have tried this in the attached\n> patch and the statistics data were getting updated.\n> Thoughts?\n>\n\nDid you check if we can update the stats when we release the slot as\ndiscussed above? I am not sure if it is easy to do at the time of slot\nrelease because this information might not be accessible there and in\nsome cases, we might have already released the decoding\ncontext/reorderbuffer where this information is stored. 
It might be\nokay to update this when we stream or spill but let's see if we can do\nit easily at the time of slot release.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 16 Apr 2021 15:16:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > And then more generally about the feature:\n> > > > - If a slot was used to stream out a large amount of changes (say an\n> > > > initial data load), but then replication is interrupted before the\n> > > > transaction is committed/aborted, stream_bytes will not reflect the\n> > > > many gigabytes of data we may have sent.\n> > > >\n> > >\n> > > We can probably update the stats each time we spilled or streamed the\n> > > transaction data but it was not clear at that stage whether or how\n> > > much it will be useful.\n> > >\n> >\n> > I felt we can update the replication slot statistics data each time we\n> > spill/stream the transaction data instead of accumulating the\n> > statistics and updating at the end. I have tried this in the attached\n> > patch and the statistics data were getting updated.\n> > Thoughts?\n> >\n>\n> Did you check if we can update the stats when we release the slot as\n> discussed above? I am not sure if it is easy to do at the time of slot\n> release because this information might not be accessible there and in\n> some cases, we might have already released the decoding\n> context/reorderbuffer where this information is stored. 
It might be\n> okay to update this when we stream or spill but let's see if we can do\n> it easily at the time of slot release.\n>\n\nI'm not sure if we will be able to update stats from here, as we will\nnot have access to decoding context/reorderbuffer at this place, and\nalso like you pointed out I noticed that the decoding context gets\nreleased earlier itself.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:55:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 11:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 4:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for the update! The patch looks good to me.\n> >\n>\n> I have pushed the first patch. Comments on the next patch\n> v13-0001-Use-HTAB-for-replication-slot-statistics:\n\nAlso should we change PGSTAT_FILE_FORMAT_ID as we have modified the\nreplication slot statistics?\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 16 Apr 2021 16:57:48 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I have pushed the first patch.\n\nThe buildfarm suggests that this isn't entirely stable:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n\nEach of those animals has also passed at least once since this went in,\nso I'm betting on a timing-dependent issue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Apr 2021 18:15:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "I wrote:\n> The buildfarm suggests that this isn't entirely 
stable:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n\nOh, I missed that hyrax is showing the identical symptom:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n\nSo you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\nthat way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 17 Apr 2021 18:21:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > The buildfarm suggests that this isn't entirely stable:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n>\n> Oh, I missed that hyrax is showing the identical symptom:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n>\n> So you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\n> that way.\n>\n\nI will try to check and identify why it is failing.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 18 Apr 2021 07:36:28 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 7:36 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Apr 18, 2021 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > I wrote:\n> > > The buildfarm suggests that this isn't entirely stable:\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n> >\n> > Oh, I missed that hyrax is showing the 
identical symptom:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n> >\n> > So you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\n> > that way.\n> >\n>\n> I will try to check and identify why it is failing.\n>\n\nI think the failure is due to the reason that in the new tests after\nreset, we are not waiting for the stats message to be delivered as we\nwere doing in other cases. Also, for the new test (non-spilled case),\nwe need to decode changes as we are doing for other tests, otherwise,\nit will show the old stats.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 18 Apr 2021 08:43:39 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 8:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Apr 18, 2021 at 7:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, Apr 18, 2021 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > I wrote:\n> > > > The buildfarm suggests that this isn't entirely stable:\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n> > >\n> > > Oh, I missed that hyrax is showing the identical symptom:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n> > >\n> > > So you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\n> > > that way.\n> > >\n> >\n> > I will try to check and identify why it is failing.\n> >\n>\n> I think the failure is due to the reason that in the new tests after\n> reset, we are not waiting for the stats message to be delivered as we\n> were doing in other cases. 
Also, for the new test (non-spilled case),\n> we need to decode changes as we are doing for other tests, otherwise,\n> it will show the old stats.\n\nI also felt that is the reason for the failure, I will fix and post a\npatch for this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 18 Apr 2021 09:02:42 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 12:13 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Apr 18, 2021 at 7:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, Apr 18, 2021 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > I wrote:\n> > > > The buildfarm suggests that this isn't entirely stable:\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n> > >\n> > > Oh, I missed that hyrax is showing the identical symptom:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n> > >\n> > > So you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\n> > > that way.\n> > >\n> >\n> > I will try to check and identify why it is failing.\n> >\n>\n> I think the failure is due to the reason that in the new tests after\n> reset, we are not waiting for the stats message to be delivered as we\n> were doing in other cases. 
Also, for the new test (non-spilled case),\n> we need to decode changes as we are doing for other tests, otherwise,\n> it will show the old stats.\n>\n\nYes, also the following expectation in expected/stats.out is wrong:\n\nSELECT slot_name, spill_txns = 0 AS spill_txns, spill_count = 0 AS\nspill_count, total_txns > 0 AS total_txns, total_bytes > 0 AS\ntotal_bytes FROM pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | total_txns | total_bytes\n-----------------+------------+-------------+------------+-------------\n regression_slot | f | f | t | t\n(1 row)\n\nWe should expect all values are 0. Please find attached the patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Sun, 18 Apr 2021 22:21:16 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 9:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, Apr 18, 2021 at 8:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Apr 18, 2021 at 7:36 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sun, Apr 18, 2021 at 3:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > I wrote:\n> > > > > The buildfarm suggests that this isn't entirely stable:\n> > > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2021-04-17%2011%3A14%3A49\n> > > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=bichir&dt=2021-04-17%2016%3A30%3A15\n> > > >\n> > > > Oh, I missed that hyrax is showing the identical symptom:\n> > > >\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2021-04-16%2007%3A05%3A44\n> > > >\n> > > > So you might try CLOBBER_CACHE_ALWAYS to see if you can reproduce it\n> > > > that way.\n> > > >\n> > >\n> > > I will try to check and identify why it is failing.\n> > >\n> >\n> > I think the failure is due to the reason that in the new tests after\n> > 
reset, we are not waiting for the stats message to be delivered as we\n> > were doing in other cases. Also, for the new test (non-spilled case),\n> > we need to decode changes as we are doing for other tests, otherwise,\n> > it will show the old stats.\n>\n> I also felt that is the reason for the failure, I will fix and post a\n> patch for this.\n\nAttached a patch which includes the changes for the fix. I have moved\nthe non-spilled transaction test to reduce the steps which reduces\ncalling pg_logical_slot_get_changes before this test.\n\nRegards,\nVignesh", "msg_date": "Sun, 18 Apr 2021 18:55:19 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 4:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for the update! The patch looks good to me.\n> >\n>\n> I have pushed the first patch. Comments on the next patch\n> v13-0001-Use-HTAB-for-replication-slot-statistics:\n> 1.\n> + /*\n> + * Check for all replication slots in stats hash table. 
We do this check\n> + * when replSlotStats has more than max_replication_slots entries, i.e,\n> + * when there are stats for the already-dropped slot, to avoid frequent\n> + * call SearchNamedReplicationSlot() which acquires LWLock.\n> + */\n> + if (replSlotStats && hash_get_num_entries(replSlotStats) >\n> max_replication_slots)\n> + {\n> + PgStat_ReplSlotEntry *slotentry;\n> +\n> + hash_seq_init(&hstat, replSlotStats);\n> + while ((slotentry = (PgStat_ReplSlotEntry *) hash_seq_search(&hstat)) != NULL)\n> + {\n> + if (SearchNamedReplicationSlot(NameStr(slotentry->slotname), true) == NULL)\n> + pgstat_report_replslot_drop(NameStr(slotentry->slotname));\n> + }\n> + }\n>\n> Is SearchNamedReplicationSlot() so frequently used that we need to do\n> this only when the hash table has entries more than\n> max_replication_slots? I think it would be better if we can do it\n> without such a condition to reduce the chances of missing the slot\n> stats. We don't have any such restrictions for any other cases in this\n> function.\n\nPlease see below comment on #4.\n\n>\n> I think it is better to add CHECK_FOR_INTERRUPTS in the above while loop?\n\nAgreed.\n\n>\n> 2.\n> /*\n> * Replication slot statistics kept in the stats collector\n> */\n> -typedef struct PgStat_ReplSlotStats\n> +typedef struct PgStat_ReplSlotEntry\n>\n> I think the comment above this structure can be changed to \"The\n> collector's data per slot\" or something like that. Also, if we have to\n> follow table/function/db style, then probably this structure should be\n> named as PgStat_StatReplSlotEntry.\n\nAgreed.\n\n>\n> 3.\n> - * create the statistics for the replication slot.\n> + * create the statistics for the replication slot. 
In case where the\n> + * message for dropping the old slot gets lost and a slot with the same is\n>\n> /the same is/the same name is/.\n>\n> Can we mention something similar to what you have added here in docs as well?\n\nAgreed.\n\n>\n> 4.\n> +CREATE VIEW pg_stat_replication_slots AS\n> + SELECT\n> + s.slot_name,\n> + s.spill_txns,\n> + s.spill_count,\n> + s.spill_bytes,\n> + s.stream_txns,\n> + s.stream_count,\n> + s.stream_bytes,\n> + s.total_txns,\n> + s.total_bytes,\n> + s.stats_reset\n> + FROM pg_replication_slots as r,\n> + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> ..\n> ..\n>\n> -/* Get the statistics for the replication slots */\n> +/* Get the statistics for the replication slot */\n> Datum\n> -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> {\n> #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> + text *slotname_text = PG_GETARG_TEXT_P(0);\n> + NameData slotname;\n>\n> I think with the above changes getting all the slot stats has become\n> much costlier. Is there any reason why can't we get all the stats from\n> the new hash_table in one shot and return them to the user?\n\nI think the advantage of this approach would be that it can avoid\nshowing the stats for already-dropped slots. Like other statistics\nviews such as pg_stat_all_tables and pg_stat_all_functions, searching\nthe stats by the name got from pg_replication_slots can show only\navailable slot stats even if the hash table has garbage slot stats.\nGiven that pg_stat_replication_slots doesn’t show garbage slot stats\neven if it has, I thought we can avoid checking those garbage stats\nfrequently. 
It should not essentially be a problem for the hash table\nto have entries up to max_replication_slots regardless of live or\nalready-dropped.\n\nAs another design, we can get all stats from the hash table in one\nshot as you suggested. If we do that, it's better to check garbage\nslot stats every time pgstat_vacuum_stat() is called so the view\ndoesn't show those stats but cannot avoid that completely.\n\nI'll change the code pointed out by #1 and #4 according to this design\ndiscussion.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 19 Apr 2021 12:29:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sun, Apr 18, 2021 at 6:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Yes, also the following expectation in expected/stats.out is wrong:\n>\n> SELECT slot_name, spill_txns = 0 AS spill_txns, spill_count = 0 AS\n> spill_count, total_txns > 0 AS total_txns, total_bytes > 0 AS\n> total_bytes FROM pg_stat_replication_slots;\n> slot_name | spill_txns | spill_count | total_txns | total_bytes\n> -----------------+------------+-------------+------------+-------------\n> regression_slot | f | f | t | t\n> (1 row)\n>\n> We should expect all values are 0. Please find attached the patch.\n>\n\nRight. Both your and Vignesh's patch will fix the problem but I mildly\nprefer Vignesh's one as that seems a bit simpler. So, I went ahead and\npushed his patch with minor other changes. 
Thanks to both of you.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:41:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 8:50 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Apr 16, 2021 at 08:48:29AM +0530, Amit Kapila wrote:\n> > I am fine with your proposed changes. There are one or two more\n> > patches in this area. I can include your suggestions along with those\n> > if you don't mind?\n>\n> However's convenient is fine\n>\n\nThanks for your suggestions. I have pushed your changes as part of the\ncommit c64dcc7fee.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:42:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > 4.\n> > +CREATE VIEW pg_stat_replication_slots AS\n> > + SELECT\n> > + s.slot_name,\n> > + s.spill_txns,\n> > + s.spill_count,\n> > + s.spill_bytes,\n> > + s.stream_txns,\n> > + s.stream_count,\n> > + s.stream_bytes,\n> > + s.total_txns,\n> > + s.total_bytes,\n> > + s.stats_reset\n> > + FROM pg_replication_slots as r,\n> > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > ..\n> > ..\n> >\n> > -/* Get the statistics for the replication slots */\n> > +/* Get the statistics for the replication slot */\n> > Datum\n> > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > {\n> > #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > + text *slotname_text = 
PG_GETARG_TEXT_P(0);\n> > + NameData slotname;\n> >\n> > I think with the above changes getting all the slot stats has become\n> > much costlier. Is there any reason why can't we get all the stats from\n> > the new hash_table in one shot and return them to the user?\n>\n> I think the advantage of this approach would be that it can avoid\n> showing the stats for already-dropped slots. Like other statistics\n> views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> the stats by the name got from pg_replication_slots can show only\n> available slot stats even if the hash table has garbage slot stats.\n>\n\nSounds reasonable. However, if the create_slot message is missed, it\nwill show an empty row for it. See below:\n\npostgres=# select slot_name, total_txns from pg_stat_replication_slots;\n slot_name | total_txns\n-----------+------------\n s1 | 0\n s2 | 0\n |\n(3 rows)\n\nHere, I have manually via debugger skipped sending the create_slot\nmessage for the third slot and we are showing an empty for it. This\nwon't happen for pg_stat_all_tables, as it will set 0 or other initial\nvalues in such a case. I think we need to address this case.\n\n> Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> even if it has, I thought we can avoid checking those garbage stats\n> frequently. 
It should not essentially be a problem for the hash table\n> to have entries up to max_replication_slots regardless of live or\n> already-dropped.\n>\n\nYeah, but I guess we still might not save much by not doing it,\nespecially because for the other cases like tables/functions, we are\ndoing it without any threshold limit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:44:11 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 19, 2021 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > 4.\n> > > +CREATE VIEW pg_stat_replication_slots AS\n> > > + SELECT\n> > > + s.slot_name,\n> > > + s.spill_txns,\n> > > + s.spill_count,\n> > > + s.spill_bytes,\n> > > + s.stream_txns,\n> > > + s.stream_count,\n> > > + s.stream_bytes,\n> > > + s.total_txns,\n> > > + s.total_bytes,\n> > > + s.stats_reset\n> > > + FROM pg_replication_slots as r,\n> > > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > > ..\n> > > ..\n> > >\n> > > -/* Get the statistics for the replication slots */\n> > > +/* Get the statistics for the replication slot */\n> > > Datum\n> > > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > > {\n> > > #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > > - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > > + text *slotname_text = PG_GETARG_TEXT_P(0);\n> > > + NameData slotname;\n> > >\n> > > I think with the above changes getting all the slot stats has become\n> > > much costlier. 
Is there any reason why can't we get all the stats from\n> > > the new hash_table in one shot and return them to the user?\n> >\n> > I think the advantage of this approach would be that it can avoid\n> > showing the stats for already-dropped slots. Like other statistics\n> > views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> > the stats by the name got from pg_replication_slots can show only\n> > available slot stats even if the hash table has garbage slot stats.\n> >\n>\n> Sounds reasonable. However, if the create_slot message is missed, it\n> will show an empty row for it. See below:\n>\n> postgres=# select slot_name, total_txns from pg_stat_replication_slots;\n> slot_name | total_txns\n> -----------+------------\n> s1 | 0\n> s2 | 0\n> |\n> (3 rows)\n>\n> Here, I have manually via debugger skipped sending the create_slot\n> message for the third slot and we are showing an empty for it. This\n> won't happen for pg_stat_all_tables, as it will set 0 or other initial\n> values in such a case. I think we need to address this case.\n\nGood catch. I think it's better to set 0 to all counters and NULL to\nreset_stats.\n\n>\n> > Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> > even if it has, I thought we can avoid checking those garbage stats\n> > frequently. 
It should not essentially be a problem for the hash table\n> > to have entries up to max_replication_slots regardless of live or\n> > already-dropped.\n> >\n>\n> Yeah, but I guess we still might not save much by not doing it,\n> especially because for the other cases like tables/functions, we are\n> doing it without any threshold limit.\n\nAgreed.\n\nI've attached the updated patch, please review it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Mon, 19 Apr 2021 16:48:34 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 3:16 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 12, 2021 at 2:57 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 20, 2021 at 12:22 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > And then more generally about the feature:\n> > > > - If a slot was used to stream out a large amount of changes (say an\n> > > > initial data load), but then replication is interrupted before the\n> > > > transaction is committed/aborted, stream_bytes will not reflect the\n> > > > many gigabytes of data we may have sent.\n> > > >\n> > >\n> > > We can probably update the stats each time we spilled or streamed the\n> > > transaction data but it was not clear at that stage whether or how\n> > > much it will be useful.\n> > >\n> >\n> > I felt we can update the replication slot statistics data each time we\n> > spill/stream the transaction data instead of accumulating the\n> > statistics and updating at the end. I have tried this in the attached\n> > patch and the statistics data were getting updated.\n> > Thoughts?\n> >\n>\n> Did you check if we can update the stats when we release the slot as\n> discussed above? 
I am not sure if it is easy to do at the time of slot\n> release because this information might not be accessible there and in\n> some cases, we might have already released the decoding\n> context/reorderbuffer where this information is stored. It might be\n> okay to update this when we stream or spill but let's see if we can do\n> it easily at the time of slot release.\n>\n\nI have made the changes to update the replication statistics at\nreplication slot release. Please find the patch attached for the same.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Mon, 19 Apr 2021 16:28:00 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 19, 2021 at 4:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > >\n> > > > 4.\n> > > > +CREATE VIEW pg_stat_replication_slots AS\n> > > > + SELECT\n> > > > + s.slot_name,\n> > > > + s.spill_txns,\n> > > > + s.spill_count,\n> > > > + s.spill_bytes,\n> > > > + s.stream_txns,\n> > > > + s.stream_count,\n> > > > + s.stream_bytes,\n> > > > + s.total_txns,\n> > > > + s.total_bytes,\n> > > > + s.stats_reset\n> > > > + FROM pg_replication_slots as r,\n> > > > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > > > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > > > ..\n> > > > ..\n> > > >\n> > > > -/* Get the statistics for the replication slots */\n> > > > +/* Get the statistics for the replication slot */\n> > > > Datum\n> > > > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > > > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > > > {\n> > > > #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > > > - ReturnSetInfo *rsinfo = 
(ReturnSetInfo *) fcinfo->resultinfo;\n> > > > + text *slotname_text = PG_GETARG_TEXT_P(0);\n> > > > + NameData slotname;\n> > > >\n> > > > I think with the above changes getting all the slot stats has become\n> > > > much costlier. Is there any reason why can't we get all the stats from\n> > > > the new hash_table in one shot and return them to the user?\n> > >\n> > > I think the advantage of this approach would be that it can avoid\n> > > showing the stats for already-dropped slots. Like other statistics\n> > > views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> > > the stats by the name got from pg_replication_slots can show only\n> > > available slot stats even if the hash table has garbage slot stats.\n> > >\n> >\n> > Sounds reasonable. However, if the create_slot message is missed, it\n> > will show an empty row for it. See below:\n> >\n> > postgres=# select slot_name, total_txns from pg_stat_replication_slots;\n> > slot_name | total_txns\n> > -----------+------------\n> > s1 | 0\n> > s2 | 0\n> > |\n> > (3 rows)\n> >\n> > Here, I have manually via debugger skipped sending the create_slot\n> > message for the third slot and we are showing an empty for it. This\n> > won't happen for pg_stat_all_tables, as it will set 0 or other initial\n> > values in such a case. I think we need to address this case.\n>\n> Good catch. I think it's better to set 0 to all counters and NULL to\n> reset_stats.\n>\n> >\n> > > Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> > > even if it has, I thought we can avoid checking those garbage stats\n> > > frequently. 
It should not essentially be a problem for the hash table\n> > > to have entries up to max_replication_slots regardless of live or\n> > > already-dropped.\n> > >\n> >\n> > Yeah, but I guess we still might not save much by not doing it,\n> > especially because for the other cases like tables/functions, we are\n> > doing it without any threshold limit.\n>\n> Agreed.\n>\n> I've attached the updated patch, please review it.\n\nI've attached the new version patch that fixed the compilation error\nreported off-line by Amit.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 20 Apr 2021 12:37:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 9:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I've attached the new version patch that fixed the compilation error\n> reported off-line by Amit.\n>\n\nI was thinking about whether we can someway avoid the below risk:\nIn case where the\n+ * message for dropping the old slot gets lost and a slot with the same\n+ * name is created, the stats will be accumulated into the old slots since\n+ * we use the slot name as the key. In that case, user can reset the\n+ * particular stats by pg_stat_reset_replication_slot().\n\nWhat if we send a separate message for create slot such that the stats\ncollector will initialize the entries even if the previous drop\nmessage is lost or came later? 
If we do that then if the drop message\nis lost, the create with same name won't accumulate the stats and if\nthe drop came later, it will remove the newly created stats but\nanyway, later stats from the same slot will again create the slot\nentry in the hash table.\n\nAlso, I think we can include the test case prepared by Vignesh in the email [1].\n\nApart from the above, I have made few minor modifications in the attached patch.\n(a) + if (slotent->stat_reset_timestamp == 0 || !slotent)\nI don't understand why second part of check is required? By this time\nslotent will anyway have some valid value.\n\n(b) + slotent = (PgStat_StatReplSlotEntry *) hash_search(replSlotStats,\n+ (void *) &name,\n+ create_it ? HASH_ENTER : HASH_FIND,\n+ &found);\n\nIt is better to use NameStr here.\n\n(c) made various changes in comments and some other cosmetic changes.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3yBctNFE6X2FV_haRF4uue9okm1_DVE6ZANWvOV_CvYw%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Tue, 20 Apr 2021 15:29:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 9:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 4:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > >\n> > > > > 4.\n> > > > > +CREATE VIEW pg_stat_replication_slots AS\n> > > > > + SELECT\n> > > > > + s.slot_name,\n> > > > > + s.spill_txns,\n> > > > > + s.spill_count,\n> > > > > + s.spill_bytes,\n> > > > > + s.stream_txns,\n> > > > > + s.stream_count,\n> > > > > + s.stream_bytes,\n> > > > > + s.total_txns,\n> > > > > + 
s.total_bytes,\n> > > > > + s.stats_reset\n> > > > > + FROM pg_replication_slots as r,\n> > > > > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > > > > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > > > > ..\n> > > > > ..\n> > > > >\n> > > > > -/* Get the statistics for the replication slots */\n> > > > > +/* Get the statistics for the replication slot */\n> > > > > Datum\n> > > > > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > > > > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > > > > {\n> > > > > #define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > > > > - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > > > > + text *slotname_text = PG_GETARG_TEXT_P(0);\n> > > > > + NameData slotname;\n> > > > >\n> > > > > I think with the above changes getting all the slot stats has become\n> > > > > much costlier. Is there any reason why can't we get all the stats from\n> > > > > the new hash_table in one shot and return them to the user?\n> > > >\n> > > > I think the advantage of this approach would be that it can avoid\n> > > > showing the stats for already-dropped slots. Like other statistics\n> > > > views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> > > > the stats by the name got from pg_replication_slots can show only\n> > > > available slot stats even if the hash table has garbage slot stats.\n> > > >\n> > >\n> > > Sounds reasonable. However, if the create_slot message is missed, it\n> > > will show an empty row for it. See below:\n> > >\n> > > postgres=# select slot_name, total_txns from pg_stat_replication_slots;\n> > > slot_name | total_txns\n> > > -----------+------------\n> > > s1 | 0\n> > > s2 | 0\n> > > |\n> > > (3 rows)\n> > >\n> > > Here, I have manually via debugger skipped sending the create_slot\n> > > message for the third slot and we are showing an empty for it. This\n> > > won't happen for pg_stat_all_tables, as it will set 0 or other initial\n> > > values in such a case. 
I think we need to address this case.\n> >\n> > Good catch. I think it's better to set 0 to all counters and NULL to\n> > reset_stats.\n> >\n> > >\n> > > > Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> > > > even if it has, I thought we can avoid checking those garbage stats\n> > > > frequently. It should not essentially be a problem for the hash table\n> > > > to have entries up to max_replication_slots regardless of live or\n> > > > already-dropped.\n> > > >\n> > >\n> > > Yeah, but I guess we still might not save much by not doing it,\n> > > especially because for the other cases like tables/functions, we are\n> > > doing it without any threshold limit.\n> >\n> > Agreed.\n> >\n> > I've attached the updated patch, please review it.\n>\n> I've attached the new version patch that fixed the compilation error\n> reported off-line by Amit.\n\nThanks for the updated patch, few comments:\n1) We can change \"slotent = pgstat_get_replslot_entry(slotname,\nfalse);\" to \"return pgstat_get_replslot_entry(slotname, false);\" and\nremove the slotent variable.\n\n+ PgStat_StatReplSlotEntry *slotent = NULL;\n+\n backend_read_statsfile();\n\n- *nslots_p = nReplSlotStats;\n- return replSlotStats;\n+ slotent = pgstat_get_replslot_entry(slotname, false);\n+\n+ return slotent;\n\n2) Should we change PGSTAT_FILE_FORMAT_ID as the statistic file format\nhas changed for replication statistics?\n\n3) We can include PgStat_StatReplSlotEntry in typedefs.lst and remove\nPgStat_ReplSlotStats from typedefs.lst\n\n4) Few indentation issues are there, we can run pgindent on pgstat.c changes:\n case 'R':\n- if\n(fread(&replSlotStats[nReplSlotStats], 1,\nsizeof(PgStat_ReplSlotStats), fpin)\n- != sizeof(PgStat_ReplSlotStats))\n+ {\n+ PgStat_StatReplSlotEntry slotbuf;\n+ PgStat_StatReplSlotEntry *slotent;\n+\n+ if (fread(&slotbuf, 1,\nsizeof(PgStat_StatReplSlotEntry), fpin)\n+ != sizeof(PgStat_StatReplSlotEntry))\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 20 Apr 2021 
15:52:40 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 6:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 9:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I've attached the new version patch that fixed the compilation error\n> > reported off-line by Amit.\n> >\n>\n> I was thinking about whether we can someway avoid the below risk:\n> In case where the\n> + * message for dropping the old slot gets lost and a slot with the same\n> + * name is created, the stats will be accumulated into the old slots since\n> + * we use the slot name as the key. In that case, user can reset the\n> + * particular stats by pg_stat_reset_replication_slot().\n>\n> What if we send a separate message for create slot such that the stats\n> collector will initialize the entries even if the previous drop\n> message is lost or came later? If we do that then if the drop message\n> is lost, the create with same name won't accumulate the stats and if\n> the drop came later, it will remove the newly created stats but\n> anyway, later stats from the same slot will again create the slot\n> entry in the hash table.\n\nSounds good to me. There is still little chance to happen if messages\nfor both creating and dropping slots with the same name got lost, but\nit's unlikely to happen in practice.\n\n>\n> Also, I think we can include the test case prepared by Vignesh in the email [1].\n>\n> Apart from the above, I have made few minor modifications in the attached patch.\n> (a) + if (slotent->stat_reset_timestamp == 0 || !slotent)\n> I don't understand why second part of check is required? By this time\n> slotent will anyway have some valid value.\n>\n> (b) + slotent = (PgStat_StatReplSlotEntry *) hash_search(replSlotStats,\n> + (void *) &name,\n> + create_it ? 
HASH_ENTER : HASH_FIND,\n> + &found);\n>\n> It is better to use NameStr here.\n>\n> (c) made various changes in comments and some other cosmetic changes.\n\nAll the above changes make sense to me.\n\nI'll submit the updated patch soon.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 20 Apr 2021 21:26:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 9:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 4:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 19, 2021 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > >\n> > > > > > 4.\n> > > > > > +CREATE VIEW pg_stat_replication_slots AS\n> > > > > > + SELECT\n> > > > > > + s.slot_name,\n> > > > > > + s.spill_txns,\n> > > > > > + s.spill_count,\n> > > > > > + s.spill_bytes,\n> > > > > > + s.stream_txns,\n> > > > > > + s.stream_count,\n> > > > > > + s.stream_bytes,\n> > > > > > + s.total_txns,\n> > > > > > + s.total_bytes,\n> > > > > > + s.stats_reset\n> > > > > > + FROM pg_replication_slots as r,\n> > > > > > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > > > > > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > > > > > ..\n> > > > > > ..\n> > > > > >\n> > > > > > -/* Get the statistics for the replication slots */\n> > > > > > +/* Get the statistics for the replication slot */\n> > > > > > Datum\n> > > > > > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > > > > > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > > > > > {\n> > > > > > 
#define PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > > > > > - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > > > > > + text *slotname_text = PG_GETARG_TEXT_P(0);\n> > > > > > + NameData slotname;\n> > > > > >\n> > > > > > I think with the above changes getting all the slot stats has become\n> > > > > > much costlier. Is there any reason why can't we get all the stats from\n> > > > > > the new hash_table in one shot and return them to the user?\n> > > > >\n> > > > > I think the advantage of this approach would be that it can avoid\n> > > > > showing the stats for already-dropped slots. Like other statistics\n> > > > > views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> > > > > the stats by the name got from pg_replication_slots can show only\n> > > > > available slot stats even if the hash table has garbage slot stats.\n> > > > >\n> > > >\n> > > > Sounds reasonable. However, if the create_slot message is missed, it\n> > > > will show an empty row for it. See below:\n> > > >\n> > > > postgres=# select slot_name, total_txns from pg_stat_replication_slots;\n> > > > slot_name | total_txns\n> > > > -----------+------------\n> > > > s1 | 0\n> > > > s2 | 0\n> > > > |\n> > > > (3 rows)\n> > > >\n> > > > Here, I have manually via debugger skipped sending the create_slot\n> > > > message for the third slot and we are showing an empty for it. This\n> > > > won't happen for pg_stat_all_tables, as it will set 0 or other initial\n> > > > values in such a case. I think we need to address this case.\n> > >\n> > > Good catch. I think it's better to set 0 to all counters and NULL to\n> > > reset_stats.\n> > >\n> > > >\n> > > > > Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> > > > > even if it has, I thought we can avoid checking those garbage stats\n> > > > > frequently. 
It should not essentially be a problem for the hash table\n> > > > > to have entries up to max_replication_slots regardless of live or\n> > > > > already-dropped.\n> > > > >\n> > > >\n> > > > Yeah, but I guess we still might not save much by not doing it,\n> > > > especially because for the other cases like tables/functions, we are\n> > > > doing it without any threshold limit.\n> > >\n> > > Agreed.\n> > >\n> > > I've attached the updated patch, please review it.\n> >\n> > I've attached the new version patch that fixed the compilation error\n> > reported off-line by Amit.\n>\n> Thanks for the updated patch, few comments:\n\nThank you for the review comments.\n\n> 1) We can change \"slotent = pgstat_get_replslot_entry(slotname,\n> false);\" to \"return pgstat_get_replslot_entry(slotname, false);\" and\n> remove the slotent variable.\n>\n> + PgStat_StatReplSlotEntry *slotent = NULL;\n> +\n> backend_read_statsfile();\n>\n> - *nslots_p = nReplSlotStats;\n> - return replSlotStats;\n> + slotent = pgstat_get_replslot_entry(slotname, false);\n> +\n> + return slotent;\n\nFixed.\n\n>\n> 2) Should we change PGSTAT_FILE_FORMAT_ID as the statistic file format\n> has changed for replication statistics?\n\nThe struct name is changed but I think the statistics file format has\nnot changed by this patch. No?\n\n>\n> 3) We can include PgStat_StatReplSlotEntry in typedefs.lst and remove\n> PgStat_ReplSlotStats from typedefs.lst\n\nFixed.\n\n>\n> 4) Few indentation issues are there, we can run pgindent on pgstat.c changes:\n> case 'R':\n> - if\n> (fread(&replSlotStats[nReplSlotStats], 1,\n> sizeof(PgStat_ReplSlotStats), fpin)\n> - != sizeof(PgStat_ReplSlotStats))\n> + {\n> + PgStat_StatReplSlotEntry slotbuf;\n> + PgStat_StatReplSlotEntry *slotent;\n> +\n> + if (fread(&slotbuf, 1,\n> sizeof(PgStat_StatReplSlotEntry), fpin)\n> + != sizeof(PgStat_StatReplSlotEntry))\n\nFixed.\n\nI've attached the patch. 
In addition to the test Vignesh prepared, I\nadded one test for the message for creating a slot that checks if the\nstatistics are initialized after re-creating the same name slot.\nPlease review it.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 20 Apr 2021 23:23:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n\nI have one question:\n\n+ /*\n+ * Create the replication slot stats hash table if we don't have\n+ * it already.\n+ */\n+ if (replSlotStats == NULL)\n {\n- if (namestrcmp(&replSlotStats[i].slotname, name) == 0)\n- return i; /* found */\n+ HASHCTL hash_ctl;\n+\n+ hash_ctl.keysize = sizeof(NameData);\n+ hash_ctl.entrysize = sizeof(PgStat_StatReplSlotEntry);\n+ hash_ctl.hcxt = pgStatLocalContext;\n+\n+ replSlotStats = hash_create(\"Replication slots hash\",\n+ PGSTAT_REPLSLOT_HASH_SIZE,\n+ &hash_ctl,\n+ HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\n }\n\nIt seems to me that the patch is always creating a hash table in\npgStatLocalContext? AFAIU, we need to create it in pgStatLocalContext\nwhen we read stats via backend_read_statsfile so that we can clear it\nat the end of the transaction. The db/function stats seems to be doing\nthe same. 
Is there a reason why here we need to always create it in\npgStatLocalContext?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:20:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 7:22 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 20, 2021 at 9:08 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 19, 2021 at 4:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 19, 2021 at 2:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 19, 2021 at 9:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > > >\n> > > > > > > 4.\n> > > > > > > +CREATE VIEW pg_stat_replication_slots AS\n> > > > > > > + SELECT\n> > > > > > > + s.slot_name,\n> > > > > > > + s.spill_txns,\n> > > > > > > + s.spill_count,\n> > > > > > > + s.spill_bytes,\n> > > > > > > + s.stream_txns,\n> > > > > > > + s.stream_count,\n> > > > > > > + s.stream_bytes,\n> > > > > > > + s.total_txns,\n> > > > > > > + s.total_bytes,\n> > > > > > > + s.stats_reset\n> > > > > > > + FROM pg_replication_slots as r,\n> > > > > > > + LATERAL pg_stat_get_replication_slot(slot_name) as s\n> > > > > > > + WHERE r.datoid IS NOT NULL; -- excluding physical slots\n> > > > > > > ..\n> > > > > > > ..\n> > > > > > >\n> > > > > > > -/* Get the statistics for the replication slots */\n> > > > > > > +/* Get the statistics for the replication slot */\n> > > > > > > Datum\n> > > > > > > -pg_stat_get_replication_slots(PG_FUNCTION_ARGS)\n> > > > > > > +pg_stat_get_replication_slot(PG_FUNCTION_ARGS)\n> > > > > > > {\n> > > > > > > #define 
PG_STAT_GET_REPLICATION_SLOT_COLS 10\n> > > > > > > - ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;\n> > > > > > > + text *slotname_text = PG_GETARG_TEXT_P(0);\n> > > > > > > + NameData slotname;\n> > > > > > >\n> > > > > > > I think with the above changes getting all the slot stats has become\n> > > > > > > much costlier. Is there any reason why can't we get all the stats from\n> > > > > > > the new hash_table in one shot and return them to the user?\n> > > > > >\n> > > > > > I think the advantage of this approach would be that it can avoid\n> > > > > > showing the stats for already-dropped slots. Like other statistics\n> > > > > > views such as pg_stat_all_tables and pg_stat_all_functions, searching\n> > > > > > the stats by the name got from pg_replication_slots can show only\n> > > > > > available slot stats even if the hash table has garbage slot stats.\n> > > > > >\n> > > > >\n> > > > > Sounds reasonable. However, if the create_slot message is missed, it\n> > > > > will show an empty row for it. See below:\n> > > > >\n> > > > > postgres=# select slot_name, total_txns from pg_stat_replication_slots;\n> > > > > slot_name | total_txns\n> > > > > -----------+------------\n> > > > > s1 | 0\n> > > > > s2 | 0\n> > > > > |\n> > > > > (3 rows)\n> > > > >\n> > > > > Here, I have manually via debugger skipped sending the create_slot\n> > > > > message for the third slot and we are showing an empty for it. This\n> > > > > won't happen for pg_stat_all_tables, as it will set 0 or other initial\n> > > > > values in such a case. I think we need to address this case.\n> > > >\n> > > > Good catch. I think it's better to set 0 to all counters and NULL to\n> > > > reset_stats.\n> > > >\n> > > > >\n> > > > > > Given that pg_stat_replication_slots doesn’t show garbage slot stats\n> > > > > > even if it has, I thought we can avoid checking those garbage stats\n> > > > > > frequently. 
It should not essentially be a problem for the hash table\n> > > > > > to have entries up to max_replication_slots regardless of live or\n> > > > > > already-dropped.\n> > > > > >\n> > > > >\n> > > > > Yeah, but I guess we still might not save much by not doing it,\n> > > > > especially because for the other cases like tables/functions, we are\n> > > > > doing it without any threshold limit.\n> > > >\n> > > > Agreed.\n> > > >\n> > > > I've attached the updated patch, please review it.\n> > >\n> > > I've attached the new version patch that fixed the compilation error\n> > > reported off-line by Amit.\n> >\n> > Thanks for the updated patch, few comments:\n>\n> Thank you for the review comments.\n>\n> > 1) We can change \"slotent = pgstat_get_replslot_entry(slotname,\n> > false);\" to \"return pgstat_get_replslot_entry(slotname, false);\" and\n> > remove the slotent variable.\n> >\n> > + PgStat_StatReplSlotEntry *slotent = NULL;\n> > +\n> > backend_read_statsfile();\n> >\n> > - *nslots_p = nReplSlotStats;\n> > - return replSlotStats;\n> > + slotent = pgstat_get_replslot_entry(slotname, false);\n> > +\n> > + return slotent;\n>\n> Fixed.\n>\n> >\n> > 2) Should we change PGSTAT_FILE_FORMAT_ID as the statistic file format\n> > has changed for replication statistics?\n>\n> The struct name is changed but I think the statistics file format has\n> not changed by this patch. 
No?\n\nI tried to create stats on head and then applied this patch and tried\nreading the stats, it could not get the values, the backtrace for the\nsame is:\n(gdb) bt\n#0 0x000055fe12f8a93d in pg_detoast_datum (datum=0x7f7f7f7f7f7f7f7f)\nat fmgr.c:1727\n#1 0x000055fe12ec2a03 in pg_stat_get_replication_slot\n(fcinfo=0x55fe1357e150) at pgstatfuncs.c:2316\n#2 0x000055fe12b6af23 in ExecMakeTableFunctionResult\n(setexpr=0x55fe13563c28, econtext=0x55fe13563b90,\nargContext=0x55fe1357e030, expectedDesc=0x55fe13564968,\n randomAccess=false) at execSRF.c:234\n#3 0x000055fe12b87ba3 in FunctionNext (node=0x55fe13563a78) at\nnodeFunctionscan.c:95\n#4 0x000055fe12b6c929 in ExecScanFetch (node=0x55fe13563a78,\naccessMtd=0x55fe12b87aee <FunctionNext>, recheckMtd=0x55fe12b87eea\n<FunctionRecheck>) at execScan.c:133\n#5 0x000055fe12b6c9a2 in ExecScan (node=0x55fe13563a78,\naccessMtd=0x55fe12b87aee <FunctionNext>, recheckMtd=0x55fe12b87eea\n<FunctionRecheck>) at execScan.c:182\n#6 0x000055fe12b87f40 in ExecFunctionScan (pstate=0x55fe13563a78) at\nnodeFunctionscan.c:270\n#7 0x000055fe12b687eb in ExecProcNodeFirst (node=0x55fe13563a78) at\nexecProcnode.c:462\n#8 0x000055fe12b5c713 in ExecProcNode (node=0x55fe13563a78) at\n../../../src/include/executor/executor.h:257\n#9 0x000055fe12b5f147 in ExecutePlan (estate=0x55fe135635f0,\nplanstate=0x55fe13563a78, use_parallel_mode=false,\noperation=CMD_SELECT, sendTuples=true, numberTuples=0,\n direction=ForwardScanDirection, dest=0x55fe13579558,\nexecute_once=true) at execMain.c:1551\n#10 0x000055fe12b5cded in standard_ExecutorRun\n(queryDesc=0x55fe1349acd0, direction=ForwardScanDirection, count=0,\nexecute_once=true) at execMain.c:361\n#11 0x000055fe12b5cbfc in ExecutorRun (queryDesc=0x55fe1349acd0,\ndirection=ForwardScanDirection, count=0, execute_once=true) at\nexecMain.c:305\n#12 0x000055fe12dca9ce in PortalRunSelect (portal=0x55fe134ed2f0,\nforward=true, count=0, dest=0x55fe13579558) at pquery.c:912\n#13 0x000055fe12dca607 in PortalRun 
(portal=0x55fe134ed2f0,\ncount=9223372036854775807, isTopLevel=true, run_once=true,\ndest=0x55fe13579558, altdest=0x55fe13579558,\n qc=0x7ffefa53cd30) at pquery.c:756\n#14 0x000055fe12dc3915 in exec_simple_query\n(query_string=0x55fe134796e0 \"select * from pg_stat_replication_slots\n;\") at postgres.c:1196\n\nI feel we can change CATALOG_VERSION_NO so that we will get this error\n\"The database cluster was initialized with CATALOG_VERSION_NO\n2021XXXXX, but the server was compiled with CATALOG_VERSION_NO\n2021XXXXX.\" which will prevent the above issue.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:39:38 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 9:39 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I feel we can change CATALOG_VERSION_NO so that we will get this error\n> \"The database cluster was initialized with CATALOG_VERSION_NO\n> 2021XXXXX, but the server was compiled with CATALOG_VERSION_NO\n> 2021XXXXX.\" which will prevent the above issue.\n>\n\nRight, but we normally do that just before commit. 
We might want to\nmention it in the commit message just as a Note so that we don't\nforget to bump it before commit but otherwise, we don't need to change\nit in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 09:47:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 9:39 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I feel we can change CATALOG_VERSION_NO so that we will get this error\n> > \"The database cluster was initialized with CATALOG_VERSION_NO\n> > 2021XXXXX, but the server was compiled with CATALOG_VERSION_NO\n> > 2021XXXXX.\" which will prevent the above issue.\n> >\n>\n> Right, but we normally do that just before commit. We might want to\n> mention it in the commit message just as a Note so that we don't\n> forget to bump it before commit but otherwise, we don't need to change\n> it in the patch.\n>\n\nYes, that is fine with me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 21 Apr 2021 10:13:21 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> I've attached the patch. In addition to the test Vignesh prepared, I\n> added one test for the message for creating a slot that checks if the\n> statistics are initialized after re-creating the same name slot.\n>\n\nI am not sure how useful your new test is because you are testing\nit for a slot name for which we have removed the slot file. It is not\nrelated to stat messages this patch is sending. I think we can leave\nthat for now. One other minor comment:\n\n- * create the statistics for the replication slot.\n+ * create the statistics for the replication slot. 
In the cases where the\n+ * message for dropping the old slot gets lost and a slot with the same\n+ * name is created, since the stats will be initialized by the message\n+ * for creating the slot the statistics are not accumulated into the\n+ * old slot unless the messages for both creating and dropping slots with\n+ * the same name got lost. Just in case it happens, the user can reset\n+ * the particular stats by pg_stat_reset_replication_slot().\n\nI think we can change it to something like: \" XXX In case, the\nmessages for creation and drop slot of the same name get lost and\ncreate happens before (auto)vacuum cleans up the dead slot, the stats\nwill be accumulated into the old slot. One can imagine having OIDs for\neach slot to avoid the accumulation of stats but that doesn't seem\nworth doing as in practice this won't happen frequently.\". Also, I am\nnot sure after your recent change whether it is a good idea to mention\nsomething in docs. What do you think?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 11:39:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n\n>\n> I've attached the patch. In addition to the test Vignesh prepared, I\n> added one test for the message for creating a slot that checks if the\n> statistics are initialized after re-creating the same name slot.\n> Please review it.\n\nOverall the patch looks good to me. However, I have one question, I\ndid not understand the reason behind moving the below code from\n\"pgstat_reset_replslot_counter\" to \"pg_stat_reset_replication_slot\"?\n\n+ /*\n+ * Check if the slot exists with the given name. 
It is possible that by\n+ * the time this message is executed the slot is dropped but at least\n+ * this check will ensure that the given name is for a valid slot.\n+ */\n+ slot = SearchNamedReplicationSlot(target, true);\n+\n+ if (!slot)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"replication slot \\\"%s\\\" does not exist\",\n+ target)));\n+\n+ /*\n+ * Nothing to do for physical slots as we collect stats only for\n+ * logical slots.\n+ */\n+ if (SlotIsPhysical(slot))\n+ PG_RETURN_VOID();\n+ }\n+\n pgstat_reset_replslot_counter(target);\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 21 Apr 2021 13:13:53 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> I have one question:\n>\n> + /*\n> + * Create the replication slot stats hash table if we don't have\n> + * it already.\n> + */\n> + if (replSlotStats == NULL)\n> {\n> - if (namestrcmp(&replSlotStats[i].slotname, name) == 0)\n> - return i; /* found */\n> + HASHCTL hash_ctl;\n> +\n> + hash_ctl.keysize = sizeof(NameData);\n> + hash_ctl.entrysize = sizeof(PgStat_StatReplSlotEntry);\n> + hash_ctl.hcxt = pgStatLocalContext;\n> +\n> + replSlotStats = hash_create(\"Replication slots hash\",\n> + PGSTAT_REPLSLOT_HASH_SIZE,\n> + &hash_ctl,\n> + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\n> }\n>\n> It seems to me that the patch is always creating a hash table in\n> pgStatLocalContext? AFAIU, we need to create it in pgStatLocalContext\n> when we read stats via backend_read_statsfile so that we can clear it\n> at the end of the transaction. The db/function stats seems to be doing\n> the same. 
Is there a reason why here we need to always create it in\n> pgStatLocalContext?\n\nI wanted to avoid creating the hash table if there is no replication\nslot. But as you pointed out, we create the hash table even on\nlookup (i.e., create_it is false), which is bad. So I think we can\nhave pgstat_get_replslot_entry() return NULL without creating the hash\ntable if the hash table is NULL and create_it is false so that backend\nprocesses don’t create the hash table, not via\nbackend_read_statsfile(). Or another idea would be to always create\nthe hash table in pgstat_read_statsfiles(). That way, it would\nsimplify the code but could waste memory if there is no\nreplication slot. I slightly prefer the former but what do you think?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 18:06:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > I've attached the patch. In addition to the test Vignesh prepared, I\n> > added one test for the message for creating a slot that checks if the\n> > statistics are initialized after re-creating the same name slot.\n> >\n>\n> I am not sure how much useful your new test is because you are testing\n> it for slot name for which we have removed the slot file. It is not\n> related to stat messages this patch is sending. I think we can leave\n> that for now.\n\nI might be missing something but I think the test is related to the\nmessage for creating a slot that initializes all counters. No? 
Without\nthat message, we will end up getting old stats if a\nmessage for dropping a slot gets lost (simulated by dropping the slot\nfile) and a slot with the same name is created.\n\n> One other minor comment:\n>\n> - * create the statistics for the replication slot.\n> + * create the statistics for the replication slot. In the cases where the\n> + * message for dropping the old slot gets lost and a slot with the same\n> + * name is created, since the stats will be initialized by the message\n> + * for creating the slot the statistics are not accumulated into the\n> + * old slot unless the messages for both creating and dropping slots with\n> + * the same name got lost. Just in case it happens, the user can reset\n> + * the particular stats by pg_stat_reset_replication_slot().\n>\n> I think we can change it to something like: \" XXX In case, the\n> messages for creation and drop slot of the same name get lost and\n> create happens before (auto)vacuum cleans up the dead slot, the stats\n> will be accumulated into the old slot. One can imagine having OIDs for\n> each slot to avoid the accumulation of stats but that doesn't seem\n> worth doing as in practice this won't happen frequently.\". Also, I am\n> not sure after your recent change whether it is a good idea to mention\n> something in docs. What do you think?\n\nBoth points make sense to me. 
I'll update the comment and remove the\nmention in the doc in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 18:15:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 2:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> >\n> > I have one question:\n> >\n> > + /*\n> > + * Create the replication slot stats hash table if we don't have\n> > + * it already.\n> > + */\n> > + if (replSlotStats == NULL)\n> > {\n> > - if (namestrcmp(&replSlotStats[i].slotname, name) == 0)\n> > - return i; /* found */\n> > + HASHCTL hash_ctl;\n> > +\n> > + hash_ctl.keysize = sizeof(NameData);\n> > + hash_ctl.entrysize = sizeof(PgStat_StatReplSlotEntry);\n> > + hash_ctl.hcxt = pgStatLocalContext;\n> > +\n> > + replSlotStats = hash_create(\"Replication slots hash\",\n> > + PGSTAT_REPLSLOT_HASH_SIZE,\n> > + &hash_ctl,\n> > + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\n> > }\n> >\n> > It seems to me that the patch is always creating a hash table in\n> > pgStatLocalContext? AFAIU, we need to create it in pgStatLocalContext\n> > when we read stats via backend_read_statsfile so that we can clear it\n> > at the end of the transaction. The db/function stats seems to be doing\n> > the same. Is there a reason why here we need to always create it in\n> > pgStatLocalContext?\n>\n> I wanted to avoid creating the hash table if there is no replication\n> slot. But as you pointed out, we create the hash table even when\n> lookup (i.g., create_it is false), which is bad. 
So I think we can\n> have pgstat_get_replslot_entry() return NULL without creating the hash\n> table if the hash table is NULL and create_it is false so that backend\n> processes don’t create the hash table, not via\n> backend_read_statsfile(). Or another idea would be to always create\n> the hash table in pgstat_read_statsfiles(). That way, it would\n> simplify the code but could waste the memory if there is no\n> replication slot.\n>\n\nIf you create it after reading 'R' message as we do in the case of 'D'\nmessage then it won't waste any memory. So probably creating in\npgstat_read_statsfiles() would be better unless you see some other\nproblem with that.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 14:49:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > >\n> > > I've attached the patch. In addition to the test Vignesh prepared, I\n> > > added one test for the message for creating a slot that checks if the\n> > > statistics are initialized after re-creating the same name slot.\n> > >\n> >\n> > I am not sure how much useful your new test is because you are testing\n> > it for slot name for which we have removed the slot file. It is not\n> > related to stat messages this patch is sending. I think we can leave\n> > that for now.\n>\n> I might be missing something but I think the test is related to the\n> message for creating a slot that initializes all counters. No? 
If\n> there is no that message, we will end up getting old stats if a\n> message for dropping slot gets lost (simulated by dropping slot file)\n> and the same name slot is created.\n> >\n\nThe test is not waiting for a new slot creation message to reach the\nstats collector. So, if the old slot data still exists in the file and\nnow when we read stats via backend, then won't there exist a chance\nthat old slot stats data still exists?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 15:06:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 6:20 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 2:37 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 21, 2021 at 12:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > >\n> > > I have one question:\n> > >\n> > > + /*\n> > > + * Create the replication slot stats hash table if we don't have\n> > > + * it already.\n> > > + */\n> > > + if (replSlotStats == NULL)\n> > > {\n> > > - if (namestrcmp(&replSlotStats[i].slotname, name) == 0)\n> > > - return i; /* found */\n> > > + HASHCTL hash_ctl;\n> > > +\n> > > + hash_ctl.keysize = sizeof(NameData);\n> > > + hash_ctl.entrysize = sizeof(PgStat_StatReplSlotEntry);\n> > > + hash_ctl.hcxt = pgStatLocalContext;\n> > > +\n> > > + replSlotStats = hash_create(\"Replication slots hash\",\n> > > + PGSTAT_REPLSLOT_HASH_SIZE,\n> > > + &hash_ctl,\n> > > + HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);\n> > > }\n> > >\n> > > It seems to me that the patch is always creating a hash table in\n> > > pgStatLocalContext? AFAIU, we need to create it in pgStatLocalContext\n> > > when we read stats via backend_read_statsfile so that we can clear it\n> > > at the end of the transaction. 
The db/function stats seems to be doing\n> > > the same. Is there a reason why here we need to always create it in\n> > > pgStatLocalContext?\n> >\n> > I wanted to avoid creating the hash table if there is no replication\n> > slot. But as you pointed out, we create the hash table even when\n> > lookup (i.g., create_it is false), which is bad. So I think we can\n> > have pgstat_get_replslot_entry() return NULL without creating the hash\n> > table if the hash table is NULL and create_it is false so that backend\n> > processes don’t create the hash table, not via\n> > backend_read_statsfile(). Or another idea would be to always create\n> > the hash table in pgstat_read_statsfiles(). That way, it would\n> > simplify the code but could waste the memory if there is no\n> > replication slot.\n> >\n>\n> If you create it after reading 'R' message as we do in the case of 'D'\n> message then it won't waste any memory. So probably creating in\n> pgstat_read_statsfiles() would be better unless you see some other\n> problem with that.\n\nYeah, I think that's the approach I mentioned as the former. I’ll\nchange in the next version patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 19:06:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 6:36 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 2:46 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 21, 2021 at 3:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > >\n> > > > I've attached the patch. 
In addition to the test Vignesh prepared, I\n> > > > added one test for the message for creating a slot that checks if the\n> > > > statistics are initialized after re-creating the same name slot.\n> > > >\n> > >\n> > > I am not sure how much useful your new test is because you are testing\n> > > it for slot name for which we have removed the slot file. It is not\n> > > related to stat messages this patch is sending. I think we can leave\n> > > that for now.\n> >\n> > I might be missing something but I think the test is related to the\n> > message for creating a slot that initializes all counters. No? If\n> > there is no that message, we will end up getting old stats if a\n> > message for dropping slot gets lost (simulated by dropping slot file)\n> > and the same name slot is created.\n> >\n>\n> The test is not waiting for a new slot creation message to reach the\n> stats collector. So, if the old slot data still exists in the file and\n> now when we read stats via backend, then won't there exists a chance\n> that old slot stats data still exists?\n\nYou're right. We should wait for the message to reach the collector.\nOr should we remove that test case?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 21 Apr 2021 19:08:26 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 3:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > The test is not waiting for a new slot creation message to reach the\n> > stats collector. So, if the old slot data still exists in the file and\n> > now when we read stats via backend, then won't there exists a chance\n> > that old slot stats data still exists?\n>\n> You're right. We should wait for the message to reach the collector.\n> Or should we remove that test case?\n>\n\nI feel we can remove it. 
I am not sure how much value this additional\ntest case is adding.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 21 Apr 2021 15:41:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 4:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > I've attached the patch. In addition to the test Vignesh prepared, I\n> > added one test for the message for creating a slot that checks if the\n> > statistics are initialized after re-creating the same name slot.\n> > Please review it.\n>\n> Overall the patch looks good to me. However, I have one question, I\n> did not understand the reason behind moving the below code from\n> \"pgstat_reset_replslot_counter\" to \"pg_stat_reset_replication_slot\"?\n\nAndres pointed out that pgstat_reset_replslot_counter() acquires lwlock[1]:\n\n---\n- pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. 
I\nthink pgstat.c has absolutely no business doing things on that level.\n---\n\nI changed the code so that pgstat_reset_replslot_counter() doesn't\ndirectly acquire the lwlock, but I think it's appropriate to do the\nexistence check for slots in pgstatfunc.c rather than in pgstat.c.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20210319185247.ldebgpdaxsowiflw%40alap3.anarazel.de\n\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:21:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 21, 2021 at 7:11 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 3:39 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > The test is not waiting for a new slot creation message to reach the\n> > > stats collector. So, if the old slot data still exists in the file and\n> > > now when we read stats via the backend, won't there exist a chance\n> > > that old slot stats data still exists?\n> >\n> > You're right. We should wait for the message to reach the collector.\n> > Or should we remove that test case?\n> >\n>\n> I feel we can remove it. I am not sure how much value this additional\n> test case is adding.\n\nOkay, removed.\n\nI’ve attached the updated patch. Please review it.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 22 Apr 2021 11:55:33 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 7:52 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 21, 2021 at 4:44 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Apr 20, 2021 at 7:54 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > I've attached the patch. 
In addition to the test Vignesh prepared, I\n> > > added one test for the message for creating a slot that checks if the\n> > > statistics are initialized after re-creating the same name slot.\n> > > Please review it.\n> >\n> > Overall the patch looks good to me. However, I have one question, I\n> > did not understand the reason behind moving the below code from\n> > \"pgstat_reset_replslot_counter\" to \"pg_stat_reset_replication_slot\"?\n>\n> Andres pointed out that pgstat_reset_replslot_counter() acquires lwlock[1]:\n>\n> ---\n> - pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. I\n> think pgstat.c has absolutely no business doing things on that level.\n> ---\n>\n> I changed the code so that pgstat_reset_replslot_counter() doesn't\n> acquire directly lwlock but I think that it's appropriate to do the\n> existence check for slots in pgstatfunc.c rather than pgstat.c.\n\nThanks for pointing that out. It makes sense to me.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 22 Apr 2021 09:43:23 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n\nFew comments:\n1.\nI think we want stats collector to not use pgStatLocalContext unless\nit has read the stats file similar to other cases. So probably, we\nshould allocate it in pgStatLocalContext when we read 'R' message in\npgstat_read_statsfiles. Also, the function pgstat_get_replslot_entry\nshould not use pgStatLocalContext to allocate the hash table.\n2.\n+ if (replSlotStatHash != NULL)\n+ (void) hash_search(replSlotStatHash,\n+ (void *) &(msg->m_slotname),\n+ HASH_REMOVE,\n+ NULL);\n\nWhy have you changed this part from using NameStr?\n3.\n+# Check that replicatoin slot stats are expected.\n\nTypo. 
replicatoin/replication\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Apr 2021 10:19:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 1:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n>\n> Few comments:\n> 1.\n> I think we want stats collector to not use pgStatLocalContext unless\n> it has read the stats file similar to other cases. So probably, we\n> should allocate it in pgStatLocalContext when we read 'R' message in\n> pgstat_read_statsfiles. Also, the function pgstat_get_replslot_entry\n> should not use pgStatLocalContext to allocate the hash table.\n\nAgreed.\n\n> 2.\n> + if (replSlotStatHash != NULL)\n> + (void) hash_search(replSlotStatHash,\n> + (void *) &(msg->m_slotname),\n> + HASH_REMOVE,\n> + NULL);\n>\n> Why have you changed this part from using NameStr?\n\nI thought that since the hash table is created with the key size\nsizeof(NameData) it's better to use NameData for searching as well.\n\n> 3.\n> +# Check that replicatoin slot stats are expected.\n>\n> Typo. 
replicatoin/replication\n\nWill fix in the next version.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:09:15 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 1:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 22, 2021 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n>\n> > 2.\n> > + if (replSlotStatHash != NULL)\n> > + (void) hash_search(replSlotStatHash,\n> > + (void *) &(msg->m_slotname),\n> > + HASH_REMOVE,\n> > + NULL);\n> >\n> > Why have you changed this part from using NameStr?\n>\n> I thought that since the hash table is created with the key size\n> sizeof(NameData) it's better to use NameData for searching as well.\n>\n\nFair enough. I think this will give the same result either way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:33:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 3:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 22, 2021 at 10:39 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 22, 2021 at 1:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 22, 2021 at 8:26 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> >\n> > > 2.\n> > > + if (replSlotStatHash != NULL)\n> > > + (void) hash_search(replSlotStatHash,\n> > > + (void *) &(msg->m_slotname),\n> > > + HASH_REMOVE,\n> > > + NULL);\n> > >\n> > > Why have you changed this part from using NameStr?\n> >\n> > I thought that since the hash table is created with the key size\n> > sizeof(NameData) it's better to use NameData for 
searching as well.\n> >\n>\n> Fair enough. I think this will give the same result either way.\n\nI've attached the updated version of the patch.\n\nBesides the review comment from Amit, I changed\npgstat_read_statsfiles() so that it doesn't use\npgstat_get_replslot_entry(). That’s because it was slightly unclear\nwhy we create the hash table beforehand even though we call\npgstat_read_statsfiles() with ‘create’ = true. By this change, the only\ncaller of pgstat_get_replslot_entry() with ‘create’ = true is\npgstat_recv_replslot(), which makes the code clear and safe since\npgstat_recv_replslot() is used only by the collector. Also, I ran\npgindent on the modified files.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 22 Apr 2021 16:31:24 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 22, 2021 at 1:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n\nThanks, it looks good to me now. I'll review/test some more before\ncommitting, but at this stage, I would like to know from Andres or\nothers whether they see any problem with this approach to fixing a few\nof the problems reported in this thread. Basically, it will fix the\ncases where the drop message is lost, where we were not able to record\nstats for new slots, and where we wrote beyond the end of the array when,\nafter a restart, the number of slots whose stats are stored in the stats\nfile exceeds max_replication_slots.\n\nIt uses HTAB instead of an array to record slot stats and also teaches\npgstat_vacuum_stat() to search for all the dead replication slots in the\nstats hashtable and tell the collector to remove them. 
This still uses\nslot_name as the key because we were not able to find a better way to\nuse slot's idx.\n\nAndres, unless you see any problems with this approach, I would like\nto move forward with this early next week?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 22 Apr 2021 16:24:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I have made the changes to update the replication statistics at\n> replication slot release. Please find the patch attached for the same.\n> Thoughts?\n>\n\nThanks, the changes look mostly good to me. The slot stats need to be\ninitialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\nStartupDecodingContext. Apart from that, I have moved the declaration\nof UpdateDecodingStats from slot.h back to logical.h. I have also\nadded/edited a few comments. Please check and let me know what do you\nthink of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 23 Apr 2021 14:45:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I have made the changes to update the replication statistics at\n> > replication slot release. Please find the patch attached for the same.\n> > Thoughts?\n> >\n>\n> Thanks, the changes look mostly good to me. The slot stats need to be\n> initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> StartupDecodingContext. Apart from that, I have moved the declaration\n> of UpdateDecodingStats from slot.h back to logical.h. I have also\n> added/edited a few comments. 
Please check and let me know what do you\n> think of the attached?\n\nThe patch moves slot stats to the ReplicationSlot data that is on the\nshared memory. If we have a space to store the statistics in the\nshared memory can we simply accumulate the stats there and make them\npersistent without using the stats collector? And I think there is\nalso a risk to increase shared memory when we want to add other\nstatistics in the future.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 26 Apr 2021 11:30:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 26, 2021 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > I have made the changes to update the replication statistics at\n> > > replication slot release. Please find the patch attached for the same.\n> > > Thoughts?\n> > >\n> >\n> > Thanks, the changes look mostly good to me. The slot stats need to be\n> > initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> > StartupDecodingContext. Apart from that, I have moved the declaration\n> > of UpdateDecodingStats from slot.h back to logical.h. I have also\n> > added/edited a few comments. Please check and let me know what do you\n> > think of the attached?\n>\n> The patch moves slot stats to the ReplicationSlot data that is on the\n> shared memory. 
If we have a space to store the statistics in the\n> shared memory can we simply accumulate the stats there and make them\n> persistent without using the stats collector?\n>\n\nBut for that, we need to write to file at every commit/abort/prepare\n(decode of commit) which I think will incur significant overhead.\nAlso, we try to write after few commits then there is a danger of\nlosing them and still there could be a visible overhead for small\ntransactions.\n\n> And I think there is\n> also a risk to increase shared memory when we want to add other\n> statistics in the future.\n>\n\nYeah, so do you think it is not a good idea to store stats in\nReplicationSlot? Actually storing them in a slot makes it easier to\nsend them during ReplicationSlotRelease which is quite helpful if the\nreplication is interrupted due to some reason. Or the other idea was\nthat we send stats every time we stream or spill changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Apr 2021 08:42:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, Apr 26, 2021 at 8:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > I have made the changes to update the replication statistics at\n> > > > replication slot release. Please find the patch attached for the same.\n> > > > Thoughts?\n> > > >\n> > >\n> > > Thanks, the changes look mostly good to me. The slot stats need to be\n> > > initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> > > StartupDecodingContext. Apart from that, I have moved the declaration\n> > > of UpdateDecodingStats from slot.h back to logical.h. 
I have also\n> > > added/edited a few comments. Please check and let me know what do you\n> > > think of the attached?\n> >\n> > The patch moves slot stats to the ReplicationSlot data that is on the\n> > shared memory. If we have a space to store the statistics in the\n> > shared memory can we simply accumulate the stats there and make them\n> > persistent without using the stats collector?\n> >\n>\n> But for that, we need to write to file at every commit/abort/prepare\n> (decode of commit) which I think will incur significant overhead.\n> Also, we try to write after few commits then there is a danger of\n> losing them and still there could be a visible overhead for small\n> transactions.\n>\n\nI preferred not to persist this information to file, let's have stats\ncollector handle the stats persisting.\n\n> > And I think there is\n> > also a risk to increase shared memory when we want to add other\n> > statistics in the future.\n> >\n>\n> Yeah, so do you think it is not a good idea to store stats in\n> ReplicationSlot? Actually storing them in a slot makes it easier to\n> send them during ReplicationSlotRelease which is quite helpful if the\n> replication is interrupted due to some reason. Or the other idea was\n> that we send stats every time we stream or spill changes.\n\nWe use around 64 bytes of shared memory to store the statistics\ninformation per slot, I'm not sure if this is a lot of memory. If this\nmemory is fine, then I felt the approach to store stats seems fine. 
If\nthat memory is too much then we could use the other approach to update\nstats when we stream or spill the changes as suggested by Amit.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 27 Apr 2021 08:01:22 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 8:01 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 8:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > I have made the changes to update the replication statistics at\n> > > > > replication slot release. Please find the patch attached for the same.\n> > > > > Thoughts?\n> > > > >\n> > > >\n> > > > Thanks, the changes look mostly good to me. The slot stats need to be\n> > > > initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> > > > StartupDecodingContext. Apart from that, I have moved the declaration\n> > > > of UpdateDecodingStats from slot.h back to logical.h. I have also\n> > > > added/edited a few comments. Please check and let me know what do you\n> > > > think of the attached?\n> > >\n> > > The patch moves slot stats to the ReplicationSlot data that is on the\n> > > shared memory. 
If we have a space to store the statistics in the\n> > > shared memory can we simply accumulate the stats there and make them\n> > > persistent without using the stats collector?\n> > >\n> >\n> > But for that, we need to write to file at every commit/abort/prepare\n> > (decode of commit) which I think will incur significant overhead.\n> > Also, we try to write after few commits then there is a danger of\n> > losing them and still there could be a visible overhead for small\n> > transactions.\n> >\n>\n> I preferred not to persist this information to file, let's have stats\n> collector handle the stats persisting.\n>\n\nSawada-San, I would like to go ahead with your\n\"Use-HTAB-for-replication-slot-statistics\" unless you think otherwise?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 08:14:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 8:01 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 8:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, Apr 26, 2021 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > > >\n> > > > > > I have made the changes to update the replication statistics at\n> > > > > > replication slot release. Please find the patch attached for the same.\n> > > > > > Thoughts?\n> > > > > >\n> > > > >\n> > > > > Thanks, the changes look mostly good to me. The slot stats need to be\n> > > > > initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> > > > > StartupDecodingContext. 
Apart from that, I have moved the declaration\n> > > > > of UpdateDecodingStats from slot.h back to logical.h. I have also\n> > > > > added/edited a few comments. Please check and let me know what do you\n> > > > > think of the attached?\n> > > >\n> > > > The patch moves slot stats to the ReplicationSlot data that is on the\n> > > > shared memory. If we have a space to store the statistics in the\n> > > > shared memory can we simply accumulate the stats there and make them\n> > > > persistent without using the stats collector?\n> > > >\n> > >\n> > > But for that, we need to write to file at every commit/abort/prepare\n> > > (decode of commit) which I think will incur significant overhead.\n> > > Also, we try to write after few commits then there is a danger of\n> > > losing them and still there could be a visible overhead for small\n> > > transactions.\n> > >\n> >\n> > I preferred not to persist this information to file, let's have stats\n> > collector handle the stats persisting.\n> >\n>\n> Sawada-San, I would like to go ahead with your\n> \"Use-HTAB-for-replication-slot-statistics\" unless you think otherwise?\n\nI agree that it's better to use the stats collector. 
So please go ahead.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:28:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Apr 26, 2021 at 8:42 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, Apr 26, 2021 at 8:01 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Fri, Apr 23, 2021 at 6:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, Apr 19, 2021 at 4:28 PM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > I have made the changes to update the replication statistics at\n> > > > > replication slot release. Please find the patch attached for the same.\n> > > > > Thoughts?\n> > > > >\n> > > >\n> > > > Thanks, the changes look mostly good to me. The slot stats need to be\n> > > > initialized in RestoreSlotFromDisk and ReplicationSlotCreate, not in\n> > > > StartupDecodingContext. Apart from that, I have moved the declaration\n> > > > of UpdateDecodingStats from slot.h back to logical.h. I have also\n> > > > added/edited a few comments. Please check and let me know what do you\n> > > > think of the attached?\n> > >\n> > > The patch moves slot stats to the ReplicationSlot data that is on the\n> > > shared memory. 
If we have a space to store the statistics in the\n> > > shared memory can we simply accumulate the stats there and make them\n> > > persistent without using the stats collector?\n> > >\n> >\n> > But for that, we need to write to file at every commit/abort/prepare\n> > (decode of commit) which I think will incur significant overhead.\n> > Also, we try to write after few commits then there is a danger of\n> > losing them and still there could be a visible overhead for small\n> > transactions.\n> >\n>\n> I preferred not to persist this information to file, let's have stats\n> collector handle the stats persisting.\n>\n> > > And I think there is\n> > > also a risk to increase shared memory when we want to add other\n> > > statistics in the future.\n> > >\n> >\n> > Yeah, so do you think it is not a good idea to store stats in\n> > ReplicationSlot? Actually storing them in a slot makes it easier to\n> > send them during ReplicationSlotRelease which is quite helpful if the\n> > replication is interrupted due to some reason. Or the other idea was\n> > that we send stats every time we stream or spill changes.\n>\n> We use around 64 bytes of shared memory to store the statistics\n> information per slot, I'm not sure if this is a lot of memory. If this\n> memory is fine, then I felt the approach to store stats seems fine. If\n> that memory is too much then we could use the other approach to update\n> stats when we stream or spill the changes as suggested by Amit.\n\nI agree that makes it easier to send slot stats during\nReplicationSlotRelease() but I'd prefer to avoid storing data that\ndoesn't need to be shared in the shared buffer if possible. And those\ncounters are not used by physical slots at all. 
If sending slot stats\nevery time we stream or spill changes doesn't affect the system much,\nI think it's better than having slot stats in the shared memory.\n\nAlso, not sure it’s better but another idea would be to make the slot\nstats a global variable like pgBufferUsage and use it during decoding.\nOr we can set a proc-exit callback? But to be honest, I'm not sure\nwhich approach we should go with. Those approaches have pros and cons.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 27 Apr 2021 12:47:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 9:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > > And I think there is\n> > > > also a risk to increase shared memory when we want to add other\n> > > > statistics in the future.\n> > > >\n> > >\n> > > Yeah, so do you think it is not a good idea to store stats in\n> > > ReplicationSlot? Actually storing them in a slot makes it easier to\n> > > send them during ReplicationSlotRelease which is quite helpful if the\n> > > replication is interrupted due to some reason. Or the other idea was\n> > > that we send stats every time we stream or spill changes.\n> >\n> > We use around 64 bytes of shared memory to store the statistics\n> > information per slot, I'm not sure if this is a lot of memory. If this\n> > memory is fine, then I felt the approach to store stats seems fine. 
If\n> > that memory is too much then we could use the other approach to update\n> > stats when we stream or spill the changes as suggested by Amit.\n>\n> I agree that makes it easier to send slot stats during\n> ReplicationSlotRelease() but I'd prefer to avoid storing data that\n> doesn't need to be shared in the shared buffer if possible.\n>\n\nSounds reasonable and we might add some stats in the future so that\nwill further increase the usage of shared memory.\n\n> And those\n> counters are not used by physical slots at all. If sending slot stats\n> every time we stream or spill changes doesn't affect the system much,\n> I think it's better than having slot stats in the shared memory.\n>\n\nAs the minimum size of logical_decoding_work_mem is 64KB, so in the\nworst case, we will send stats after decoding that many changes. I\ndon't think it would impact too much considering that we need to spill\nor stream those many changes. If it concerns any users they can\nalways increase logical_decoding_work_mem. The default value is 64MB\nat which point, I don't think it will matter sending the stats.\n\n> Also, not sure it’s better but another idea would be to make the slot\n> stats a global variable like pgBufferUsage and use it during decoding.\n>\n\nHmm, I think it is better to avoid global variables if possible.\n\n> Or we can set a proc-exit callback? But to be honest, I'm not sure\n> which approach we should go with. 
Those approaches have pros and cons.\n>\n\nI think we can try the first approach listed here which is to send\nstats each time we spill or stream.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 09:43:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 9:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 9:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > > > And I think there is\n> > > > > also a risk to increase shared memory when we want to add other\n> > > > > statistics in the future.\n> > > > >\n> > > >\n> > > > Yeah, so do you think it is not a good idea to store stats in\n> > > > ReplicationSlot? Actually storing them in a slot makes it easier to\n> > > > send them during ReplicationSlotRelease which is quite helpful if the\n> > > > replication is interrupted due to some reason. Or the other idea was\n> > > > that we send stats every time we stream or spill changes.\n> > >\n> > > We use around 64 bytes of shared memory to store the statistics\n> > > information per slot, I'm not sure if this is a lot of memory. If this\n> > > memory is fine, then I felt the approach to store stats seems fine. If\n> > > that memory is too much then we could use the other approach to update\n> > > stats when we stream or spill the changes as suggested by Amit.\n> >\n> > I agree that makes it easier to send slot stats during\n> > ReplicationSlotRelease() but I'd prefer to avoid storing data that\n> > doesn't need to be shared in the shared buffer if possible.\n> >\n>\n> Sounds reasonable and we might add some stats in the future so that\n> will further increase the usage of shared memory.\n>\n> > > And those\n> > > counters are not used by physical slots at all. 
If sending slot stats\n> > every time we stream or spill changes doesn't affect the system much,\n> > I think it's better than having slot stats in the shared memory.\n> >\n>\n> As the minimum size of logical_decoding_work_mem is 64KB, so in the\n> worst case, we will send stats after decoding that many changes. I\n> don't think it would impact too much considering that we need to spill\n> or stream those many changes. If it concerns any users they can\n> always increase logical_decoding_work_mem. The default value is 64MB\n> at which point, I don't think it will matter sending the stats.\n\nSounds good to me, I will rebase my previous patch and send a patch for this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 27 Apr 2021 09:48:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 1:18 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 9:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 9:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > > > And I think there is\n> > > > > > also a risk to increase shared memory when we want to add other\n> > > > > > statistics in the future.\n> > > > > >\n> > > > >\n> > > > > Yeah, so do you think it is not a good idea to store stats in\n> > > > > ReplicationSlot? Actually storing them in a slot makes it easier to\n> > > > > send them during ReplicationSlotRelease which is quite helpful if the\n> > > > > replication is interrupted due to some reason. Or the other idea was\n> > > > > that we send stats every time we stream or spill changes.\n> > > >\n> > > > We use around 64 bytes of shared memory to store the statistics\n> > > > information per slot, I'm not sure if this is a lot of memory. 
If this\n> > > > memory is fine, then I felt the approach to store stats seems fine. If\n> > > > that memory is too much then we could use the other approach to update\n> > > > stats when we stream or spill the changes as suggested by Amit.\n> > >\n> > > I agree that makes it easier to send slot stats during\n> > > ReplicationSlotRelease() but I'd prefer to avoid storing data that\n> > > doesn't need to be shared in the shared buffer if possible.\n> > >\n> >\n> > Sounds reasonable and we might add some stats in the future so that\n> > will further increase the usage of shared memory.\n> >\n> > > And those\n> > > counters are not used by physical slots at all. If sending slot stats\n> > > every time we stream or spill changes doesn't affect the system much,\n> > > I think it's better than having slot stats in the shared memory.\n> > >\n> >\n> > As the minimum size of logical_decoding_work_mem is 64KB, so in the\n> > worst case, we will send stats after decoding that many changes. I\n> > don't think it would impact too much considering that we need to spill\n> > or stream those many changes. If it concerns any users they can\n> > always increase logical_decoding_work_mem. The default value is 64MB\n> > at which point, I don't think it will matter sending the stats.\n>\n> Sounds good to me, I will rebase my previous patch and send a patch for this.\n\n+1. 
Thanks!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 27 Apr 2021 13:27:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 9:43 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 9:17 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 11:31 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > > > And I think there is\n> > > > > > also a risk to increase shared memory when we want to add other\n> > > > > > statistics in the future.\n> > > > > >\n> > > > >\n> > > > > Yeah, so do you think it is not a good idea to store stats in\n> > > > > ReplicationSlot? Actually storing them in a slot makes it easier to\n> > > > > send them during ReplicationSlotRelease which is quite helpful if the\n> > > > > replication is interrupted due to some reason. Or the other idea was\n> > > > > that we send stats every time we stream or spill changes.\n> > > >\n> > > > We use around 64 bytes of shared memory to store the statistics\n> > > > information per slot, I'm not sure if this is a lot of memory. If this\n> > > > memory is fine, then I felt the approach to store stats seems fine. 
If\n> > > > that memory is too much then we could use the other approach to update\n> > > > stats when we stream or spill the changes as suggested by Amit.\n> > >\n> > > I agree that makes it easier to send slot stats during\n> > > ReplicationSlotRelease() but I'd prefer to avoid storing data that\n> > > doesn't need to be shared in the shared buffer if possible.\n> > >\n> >\n> > Sounds reasonable and we might add some stats in the future so that\n> > will further increase the usage of shared memory.\n> >\n> > > And those\n> > > counters are not used by physical slots at all. If sending slot stats\n> > > every time we stream or spill changes doesn't affect the system much,\n> > > I think it's better than having slot stats in the shared memory.\n> > >\n> >\n> > As the minimum size of logical_decoding_work_mem is 64KB, so in the\n> > worst case, we will send stats after decoding that many changes. I\n> > don't think it would impact too much considering that we need to spill\n> > or stream those many changes. If it concerns any users they can\n> > always increase logical_decoding_work_mem. 
The default value is 64MB\n> > at which point, I don't think it will matter sending the stats.\n>\n> Sounds good to me, I will rebase my previous patch and send a patch for this.\n>\n\nAttached patch has the changes to update statistics during\nspill/stream which prevents the statistics from being lost during\ninterrupt.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Tue, 27 Apr 2021 11:02:12 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 8:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:45 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > Sawada-San, I would like to go ahead with your\n> > \"Use-HTAB-for-replication-slot-statistics\" unless you think otherwise?\n>\n> I agree that it's better to use the stats collector. So please go ahead.\n>\n\nI have pushed this patch and seeing one buildfarm failure:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-04-27%2009%3A23%3A14\n\n starting permutation: s1_init s1_begin s1_insert_tbl1 s1_insert_tbl2\ns2_alter_tbl1_char s1_commit s2_get_changes\n+ isolationtester: canceling step s1_init after 314 seconds\n step s1_init: SELECT 'init' FROM\npg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n ?column?\n\nI am analyzing this. 
Do let me know if you have any thoughts on the same?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 17:40:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 8:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> I have pushed this patch and seeing one buildfarm failure:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-04-27%2009%3A23%3A14\n>\n> starting permutation: s1_init s1_begin s1_insert_tbl1 s1_insert_tbl2\n> s2_alter_tbl1_char s1_commit s2_get_changes\n> + isolationtester: canceling step s1_init after 314 seconds\n> step s1_init: SELECT 'init' FROM\n> pg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n> ?column?\n>\n> I am analyzing this.\n>\n\nAfter checking below logs corresponding to this test, it seems test\nhas been executed and create_slot was successful:\n2021-04-27 11:06:43.770 UTC [17694956:52] isolation/concurrent_ddl_dml\nSTATEMENT: SELECT 'init' FROM\npg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n2021-04-27 11:07:11.748 UTC [5243096:9] LOG: checkpoint starting: time\n2021-04-27 11:09:24.332 UTC [5243096:10] LOG: checkpoint complete:\nwrote 14 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled;\nwrite=0.716 s, sync=0.001 s, total=132.584 s; sync files=0,\nlongest=0.000 s, average=0.000 s; distance=198 kB, estimate=406 kB\n2021-04-27 11:09:40.116 UTC [6226046:1] [unknown] LOG: connection\nreceived: host=[local]\n2021-04-27 11:09:40.117 UTC [17694956:53] isolation/concurrent_ddl_dml\nLOG: statement: BEGIN;\n2021-04-27 11:09:40.117 UTC [17694956:54] isolation/concurrent_ddl_dml\nLOG: statement: INSERT INTO tbl1 (val1, val2) VALUES (1, 1);\n2021-04-27 11:09:40.118 UTC [17694956:55] isolation/concurrent_ddl_dml\nLOG: 
statement: INSERT INTO tbl2 (val1, val2) VALUES (1, 1);\n2021-04-27 11:09:40.119 UTC [10944636:49] isolation/concurrent_ddl_dml\nLOG: statement: ALTER TABLE tbl1 ALTER COLUMN val2 TYPE character\nvarying;\n\nI am not sure but there is some possibility that even though create\nslot is successful, the isolation tester got successful in canceling\nit, maybe because create_slot is just finished at the same time. As we\ncan see from logs, during this test checkpoint also happened which\ncould also lead to the slowness of this particular command.\n\nAlso, I see a lot of messages like below which indicate stats\ncollector is also quite slow:\n2021-04-27 10:57:59.385 UTC [18743536:1] LOG: using stale statistics\ninstead of current ones because stats collector is not responding\n\nI am not sure if the timeout happened because the machine is slow or\nis it in any way related to code. I am seeing some previous failures\ndue to timeout on this machine [1][2]. In those failures, I see the\n\"using stale stats....\" message. 
Also, I am not able to see why it can\nfail due to this patch?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-02-23%2004%3A23%3A56\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2020-12-24%2005%3A31%3A43\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 27 Apr 2021 19:59:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 8:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > I have pushed this patch and seeing one buildfarm failure:\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-04-27%2009%3A23%3A14\n> >\n> > starting permutation: s1_init s1_begin s1_insert_tbl1 s1_insert_tbl2\n> > s2_alter_tbl1_char s1_commit s2_get_changes\n> > + isolationtester: canceling step s1_init after 314 seconds\n> > step s1_init: SELECT 'init' FROM\n> > pg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n> > ?column?\n> >\n> > I am analyzing this.\n> >\n>\n> After checking below logs corresponding to this test, it seems test\n> has been executed and create_slot was successful:\n\nThe pg_create_logical_replication_slot() was executed at 11:04:25:\n\n2021-04-27 11:04:25.494 UTC [17694956:49] isolation/concurrent_ddl_dml\nLOG: statement: SELECT 'init' FROM\npg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n\nTherefore this command took 314 sec that matches the number the\nisolation test reported. 
And the following logs follow:\n\n2021-04-27 11:06:43.770 UTC [17694956:50] isolation/concurrent_ddl_dml\nLOG: logical decoding found consistent point at 0/17F9078\n2021-04-27 11:06:43.770 UTC [17694956:51] isolation/concurrent_ddl_dml\nDETAIL: There are no running transactions.\n\n> 2021-04-27 11:06:43.770 UTC [17694956:52] isolation/concurrent_ddl_dml\n> STATEMENT: SELECT 'init' FROM\n> pg_create_logical_replication_slot('isolation_slot', 'test_decoding');\n> 2021-04-27 11:07:11.748 UTC [5243096:9] LOG: checkpoint starting: time\n> 2021-04-27 11:09:24.332 UTC [5243096:10] LOG: checkpoint complete:\n> wrote 14 buffers (0.1%); 0 WAL file(s) added, 0 removed, 0 recycled;\n> write=0.716 s, sync=0.001 s, total=132.584 s; sync files=0,\n> longest=0.000 s, average=0.000 s; distance=198 kB, estimate=406 kB\n> 2021-04-27 11:09:40.116 UTC [6226046:1] [unknown] LOG: connection\n> received: host=[local]\n> 2021-04-27 11:09:40.117 UTC [17694956:53] isolation/concurrent_ddl_dml\n> LOG: statement: BEGIN;\n> 2021-04-27 11:09:40.117 UTC [17694956:54] isolation/concurrent_ddl_dml\n> LOG: statement: INSERT INTO tbl1 (val1, val2) VALUES (1, 1);\n> 2021-04-27 11:09:40.118 UTC [17694956:55] isolation/concurrent_ddl_dml\n> LOG: statement: INSERT INTO tbl2 (val1, val2) VALUES (1, 1);\n> 2021-04-27 11:09:40.119 UTC [10944636:49] isolation/concurrent_ddl_dml\n> LOG: statement: ALTER TABLE tbl1 ALTER COLUMN val2 TYPE character\n> varying;\n>\n> I am not sure but there is some possibility that even though create\n> slot is successful, the isolation tester got successful in canceling\n> it, maybe because create_slot is just finished at the same time.\n\nYeah, we see the test log \"canceling step s1_init after 314 seconds\"\nbut don't see any log indicating canceling query.\n\n> As we\n> can see from logs, during this test checkpoint also happened which\n> could also lead to the slowness of this particular command.\n\nYes. 
I also think the checkpoint could somewhat lead to the slowness.\nAnd since create_slot() took 2min to find a consistent snapshot the\nsystem might have already been busy.\n\n>\n> Also, I see a lot of messages like below which indicate stats\n> collector is also quite slow:\n> 2021-04-27 10:57:59.385 UTC [18743536:1] LOG: using stale statistics\n> instead of current ones because stats collector is not responding\n>\n> I am not sure if the timeout happened because the machine is slow or\n> is it in any way related to code. I am seeing some previous failures\n> due to timeout on this machine [1][2]. In those failures, I see the\n> \"using stale stats....\" message.\n\nIt seems like a time-dependent issue but I'm wondering why the logical\ndecoding test failed at this time.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 27 Apr 2021 23:58:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 8:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > I am not sure if the timeout happened because the machine is slow or\n> > is it in any way related to code. I am seeing some previous failures\n> > due to timeout on this machine [1][2]. In those failures, I see the\n> > \"using stale stats....\" message.\n>\n> It seems like a time-dependent issue but I'm wondering why the logical\n> decoding test failed at this time.\n>\n\nAs per the analysis done till now, it appears to be due to the reason\nthat the machine is slow which leads to timeout and there appear to be\nsome prior failures related to timeout as well. 
I think it is better\nto wait for another run (or few runs) to see if this occurs again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Apr 2021 08:28:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 8:28 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 8:28 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 11:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 5:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > I am not sure if the timeout happened because the machine is slow or\n> > > is it in any way related to code. I am seeing some previous failures\n> > > due to timeout on this machine [1][2]. In those failures, I see the\n> > > \"using stale stats....\" message.\n> >\n> > It seems like a time-dependent issue but I'm wondering why the logical\n> > decoding test failed at this time.\n> >\n>\n> As per the analysis done till now, it appears to be due to the reason\n> that the machine is slow which leads to timeout and there appear to be\n> some prior failures related to timeout as well. I think it is better\n> to wait for another run (or few runs) to see if this occurs again.\n>\n\nYes, checkpoint seems to take a lot of time, could be because the\nmachine is slow. 
Let's wait for the next run and see.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 28 Apr 2021 08:32:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Attached patch has the changes to update statistics during\n> spill/stream which prevents the statistics from being lost during\n> interrupt.\n>\n\n void\n-UpdateDecodingStats(LogicalDecodingContext *ctx)\n+UpdateDecodingStats(ReorderBuffer *rb)\n\nI don't think you need to change this interface because\nreorderbuffer->private_data points to LogicalDecodingContext. See\nStartupDecodingContext. Other than that there is a comment in the code\n\"Update the decoding stats at transaction prepare/commit/abort...\".\nThis patch should extend that comment by saying something like\n\"Additionally we send the stats when we spill or stream the changes to\navoid losing them in case the decoding is interrupted.\"\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Apr 2021 08:59:45 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 8:59 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Attached patch has the changes to update statistics during\n> > spill/stream which prevents the statistics from being lost during\n> > interrupt.\n> >\n>\n> void\n> -UpdateDecodingStats(LogicalDecodingContext *ctx)\n> +UpdateDecodingStats(ReorderBuffer *rb)\n>\n> I don't think you need to change this interface because\n> reorderbuffer->private_data points to 
LogicalDecodingContext. See\n> StartupDecodingContext. Other than that there is a comment in the code\n> \"Update the decoding stats at transaction prepare/commit/abort...\".\n> This patch should extend that comment by saying something like\n> \"Additionally we send the stats when we spill or stream the changes to\n> avoid losing them in case the decoding is interrupted.\"\n\nThanks for the comments, Please find the attached v4 patch having the\nfixes for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 28 Apr 2021 09:36:56 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Attached patch has the changes to update statistics during\n> > spill/stream which prevents the statistics from being lost during\n> > interrupt.\n> >\n>\n> void\n> -UpdateDecodingStats(LogicalDecodingContext *ctx)\n> +UpdateDecodingStats(ReorderBuffer *rb)\n>\n> I don't think you need to change this interface because\n> reorderbuffer->private_data points to LogicalDecodingContext. 
See\n> StartupDecodingContext.\n\n+1\n\nWith this approach, we could still miss the totalTxns and totalBytes\nupdates if the decoding a large but less than\nlogical_decoding_work_mem is interrupted, right?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 28 Apr 2021 13:07:11 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 9:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > Attached patch has the changes to update statistics during\n> > > spill/stream which prevents the statistics from being lost during\n> > > interrupt.\n> > >\n> >\n> > void\n> > -UpdateDecodingStats(LogicalDecodingContext *ctx)\n> > +UpdateDecodingStats(ReorderBuffer *rb)\n> >\n> > I don't think you need to change this interface because\n> > reorderbuffer->private_data points to LogicalDecodingContext. 
See\n> > StartupDecodingContext.\n>\n> +1\n>\n> With this approach, we could still miss the totalTxns and totalBytes\n> updates if the decoding a large but less than\n> logical_decoding_work_mem is interrupted, right?\n\nYes you are right, I felt that is reasonable and that way it reduces\nfrequent calls to the stats collector to update the stats.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 28 Apr 2021 09:44:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 9:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > >\n> > > Attached patch has the changes to update statistics during\n> > > spill/stream which prevents the statistics from being lost during\n> > > interrupt.\n> > >\n> >\n> > void\n> > -UpdateDecodingStats(LogicalDecodingContext *ctx)\n> > +UpdateDecodingStats(ReorderBuffer *rb)\n> >\n> > I don't think you need to change this interface because\n> > reorderbuffer->private_data points to LogicalDecodingContext. See\n> > StartupDecodingContext.\n>\n> +1\n>\n> With this approach, we could still miss the totalTxns and totalBytes\n> updates if the decoding a large but less than\n> logical_decoding_work_mem is interrupted, right?\n>\n\nRight, but is there some simple way to avoid that? 
I see two\npossibilities (a) store stats in ReplicationSlot and then send them at\nReplicationSlotRelease but that will lead to an increase in shared\nmemory usage and as per the discussion above, we don't want that, (b)\nsend intermediate stats after decoding say N changes but for that, we\nneed to additionally compute the size of each change which might add\nsome overhead.\n\nI am not sure if any of these alternatives are a good idea. What do\nyou think? Do you have any other ideas for this?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Apr 2021 11:55:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 16, 2021 at 2:58 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 15, 2021 at 4:35 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Thank you for the update! The patch looks good to me.\n> >\n\nBTW regarding the commit f5fc2f5b23 that added total_txns and\ntotal_bytes, we add the reorder buffer size (i.g., rb->size) to\nrb->totalBytes but I think we should use the transaction size (i.g.,\ntxn->size) instead:\n\n@@ -1363,6 +1365,11 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\nReorderBufferIterTXNState *state)\n dlist_delete(&change->node);\n dlist_push_tail(&state->old_change, &change->node);\n\n+ /*\n+ * Update the total bytes processed before releasing the current set\n+ * of changes and restoring the new set of changes.\n+ */\n+ rb->totalBytes += rb->size;\n if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n &state->entries[off].segno))\n {\n@@ -2363,6 +2370,20 @@ ReorderBufferProcessTXN(ReorderBuffer *rb,\nReorderBufferTXN *txn,\n ReorderBufferIterTXNFinish(rb, iterstate);\n iterstate = NULL;\n\n+ /*\n+ * Update total transaction count and total transaction bytes\n+ * processed. 
Ensure to not count the streamed transaction multiple\n+ * times.\n+ *\n+ * Note that the statistics computation has to be done after\n+ * ReorderBufferIterTXNFinish as it releases the serialized change\n+ * which we have already accounted in ReorderBufferIterTXNNext.\n+ */\n+ if (!rbtxn_is_streamed(txn))\n+ rb->totalTxns++;\n+\n+ rb->totalBytes += rb->size;\n+\n\nIIUC rb->size could include multiple decoded transactions. So it's not\nappropriate to add that value to the counter as the transaction size\npassed to the logical decoding plugin. If the reorder buffer process a\ntransaction while having a large transaction that is being decoded, we\ncould end up more increasing txn_bytes than necessary.\n\nPlease review the attached patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 28 Apr 2021 16:19:18 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 12:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n>\n> BTW regarding the commit f5fc2f5b23 that added total_txns and\n> total_bytes, we add the reorder buffer size (i.g., rb->size) to\n> rb->totalBytes but I think we should use the transaction size (i.g.,\n> txn->size) instead:\n>\n\nYou are right about the problem but I think your proposed fix also\nwon't work because txn->size always has current transaction size which\nwill be top-transaction in the case when a transaction has multiple\nsubtransactions. It won't include the subtxn->size. 
For example, you\ncan try to decode with below kind of transaction:\nBegin;\ninsert into t1 values(1);\nsavepoint s1;\ninsert into t1 values(2);\nsavepoint s2;\ninsert into t1 values(3);\ncommit;\n\nI think we can fix it by keeping track of total_size in toptxn as we\nare doing for the streaming case in ReorderBufferChangeMemoryUpdate.\nWe can probably do it for non-streaming cases as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Apr 2021 15:09:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 12:49 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> >\n> > BTW regarding the commit f5fc2f5b23 that added total_txns and\n> > total_bytes, we add the reorder buffer size (i.g., rb->size) to\n> > rb->totalBytes but I think we should use the transaction size (i.g.,\n> > txn->size) instead:\n> >\n>\n> You are right about the problem but I think your proposed fix also\n> won't work because txn->size always has current transaction size which\n> will be top-transaction in the case when a transaction has multiple\n> subtransactions. It won't include the subtxn->size.\n\nRight. I missed the point that ReorderBufferProcessTXN() processes\nalso subtransactions.\n\n> I think we can fix it by keeping track of total_size in toptxn as we\n> are doing for the streaming case in ReorderBufferChangeMemoryUpdate.\n> We can probably do it for non-streaming cases as well.\n\nAgreed.\n\nI've updated the patch. 
What do you think?\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 28 Apr 2021 20:21:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 4:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I think we can fix it by keeping track of total_size in toptxn as we\n> > are doing for the streaming case in ReorderBufferChangeMemoryUpdate.\n> > We can probably do it for non-streaming cases as well.\n>\n> Agreed.\n>\n> I've updated the patch. What do you think?\n>\n\n@@ -1369,7 +1369,7 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\nReorderBufferIterTXNState *state)\n * Update the total bytes processed before releasing the current set\n * of changes and restoring the new set of changes.\n */\n- rb->totalBytes += rb->size;\n+ rb->totalBytes += entry->txn->total_size;\n if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n &state->entries[off].segno))\n\nI have not tested this but won't in the above change you need to check\ntxn->toptxn for subtxns?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 28 Apr 2021 17:01:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 9:37 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 28, 2021 at 12:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Apr 27, 2021 at 11:02 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Tue, Apr 27, 2021 at 9:48 AM vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > Attached patch has the changes to update statistics 
during\n> > > > spill/stream which prevents the statistics from being lost during\n> > > > interrupt.\n> > > >\n> > >\n> > > void\n> > > -UpdateDecodingStats(LogicalDecodingContext *ctx)\n> > > +UpdateDecodingStats(ReorderBuffer *rb)\n> > >\n> > > I don't think you need to change this interface because\n> > > reorderbuffer->private_data points to LogicalDecodingContext. See\n> > > StartupDecodingContext.\n> >\n> > +1\n> >\n> > With this approach, we could still miss the totalTxns and totalBytes\n> > updates if the decoding a large but less than\n> > logical_decoding_work_mem is interrupted, right?\n> >\n>\n> Right, but is there some simple way to avoid that? I see two\n> possibilities (a) store stats in ReplicationSlot and then send them at\n> ReplicationSlotRelease but that will lead to an increase in shared\n> memory usage and as per the discussion above, we don't want that, (b)\n> send intermediate stats after decoding say N changes but for that, we\n> need to additionally compute the size of each change which might add\n> some overhead.\n\nRight.\n\n> I am not sure if any of these alternatives are a good idea. What do\n> you think? Do you have any other ideas for this?\n\nI've been considering some ideas but haven't come up with a good one\nyet. It’s just an idea and not tested but how about having\nCreateDecodingContext() register a before_shmem_exit() callback with the\ndecoding context to ensure that we send slot stats even on\ninterruption. 
And FreeDecodingContext() cancels the callback.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 28 Apr 2021 23:12:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "It seems that the test case added by f5fc2f5b2 is still a bit\nunstable, even after c64dcc7fe:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2021-04-23%2006%3A20%3A12\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2021-04-24%2018%3A20%3A10\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2021-04-28%2017%3A53%3A14\n\n(The snapper run fails to show regression.diffs, so it's not certain\nthat it's the same failure as peripatus, but ...)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 16:41:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 5:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> It seems that the test case added by f5fc2f5b2 is still a bit\n> unstable, even after c64dcc7fe:\n\nHmm, I don't see the exact cause yet but there are two possibilities:\nsome transactions were really spilled, and it showed the old stats due\nto losing the drop (and create) slot messages. For the former case, it\nseems to better to create the slot just before the insertion and\nsetting logical_decoding_work_mem to the default (64MB). 
For the\nlatter case, maybe we can use a different name slot than the name used\nin other tests?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 29 Apr 2021 08:28:03 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 4:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 5:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > It seems that the test case added by f5fc2f5b2 is still a bit\n> > unstable, even after c64dcc7fe:\n>\n> Hmm, I don't see the exact cause yet but there are two possibilities:\n> some transactions were really spilled,\n>\n\nThis is the first test and inserts just one small record, so how it\ncan lead to spill of data. Do you mean to say that may be some\nbackground process has written some transaction which leads to a spill\nof data?\n\n> and it showed the old stats due\n> to losing the drop (and create) slot messages.\n>\n\nYeah, something like this could happen. Another possibility here could\nbe that before the stats collector has processed drop and create\nmessages, we have enquired about the stats which lead to it giving us\nthe old stats. Note, that we don't wait for 'drop' or 'create' message\nto be delivered. So, there is a possibility of the same. What do you\nthink?\n\n> For the former case, it\n> seems to better to create the slot just before the insertion and\n> setting logical_decoding_work_mem to the default (64MB). For the\n> latter case, maybe we can use a different name slot than the name used\n> in other tests?\n>\n\nHow about doing both of the above suggestions? 
Alternatively, we can\nwait for both 'drop' and 'create' message to be delivered but that\nmight be overkill.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Apr 2021 08:24:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> This is the first test and inserts just one small record, so how it\n> can lead to spill of data. Do you mean to say that may be some\n> background process has written some transaction which leads to a spill\n> of data?\n\nautovacuum, say?\n\n> Yeah, something like this could happen. Another possibility here could\n> be that before the stats collector has processed drop and create\n> messages, we have enquired about the stats which lead to it giving us\n> the old stats. Note, that we don't wait for 'drop' or 'create' message\n> to be delivered. So, there is a possibility of the same. What do you\n> think?\n\nYou should take a close look at the stats test in the main regression\ntests. We had to jump through *high* hoops to get that to be stable,\nand yet it still fails semi-regularly. This looks like pretty much the\nsame thing, and so I'm pessimistically inclined to guess that it will\nnever be entirely stable.\n\n(At least not before the fabled stats collector rewrite, which may well\nintroduce some entirely new set of failure modes.)\n\nDo we really need this test in this form? 
Perhaps it could be converted\nto a TAP test that's a bit more forgiving.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 28 Apr 2021 23:20:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On 2021-04-28 23:20:00 -0400, Tom Lane wrote:\n> (At least not before the fabled stats collector rewrite, which may well\n> introduce some entirely new set of failure modes.)\n\nFWIW, I added a function that forces a flush there. That can be done\nsynchronously and the underlying functionality needs to exist anyway to\ndeal with backend exit. Makes it a *lot* easier to write tests for stats\nrelated things...\n\n\n", "msg_date": "Wed, 28 Apr 2021 20:51:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 8:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > This is the first test and inserts just one small record, so how it\n> > can lead to spill of data. Do you mean to say that may be some\n> > background process has written some transaction which leads to a spill\n> > of data?\n>\n> autovacuum, say?\n>\n> > Yeah, something like this could happen. Another possibility here could\n> > be that before the stats collector has processed drop and create\n> > messages, we have enquired about the stats which lead to it giving us\n> > the old stats. Note, that we don't wait for 'drop' or 'create' message\n> > to be delivered. So, there is a possibility of the same. What do you\n> > think?\n>\n> You should take a close look at the stats test in the main regression\n> tests. We had to jump through *high* hoops to get that to be stable,\n> and yet it still fails semi-regularly. 
This looks like pretty much the\n> same thing, and so I'm pessimistically inclined to guess that it will\n> never be entirely stable.\n>\n\nTrue, it is possible that we can't make it entirely stable, but I would\nlike to try some more before giving up on this. Otherwise, I guess the\nother possibility is to remove some of the recently added tests or\nchange them to be more forgiving. For example, we can change\nthe currently failing test to not check the 'spill*' counts and rely on\njust the 'total*' counts, which will work even in the scenarios we\ndiscussed for this failure, but it will reduce the\nefficiency/completeness of the test case.\n\n> (At least not before the fabled stats collector rewrite, which may well\n> introduce some entirely new set of failure modes.)\n>\n> Do we really need this test in this form? Perhaps it could be converted\n> to a TAP test that's a bit more forgiving.\n>\n\nWe have a TAP test for slot stats, but there we are checking some\nscenarios across a restart. We can surely move these tests there\nas well, but it is not apparent to me how that would make a difference.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Apr 2021 09:55:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > I am not sure if any of these alternatives are a good idea. What do\n> > you think? Do you have any other ideas for this?\n>\n> I've been considering some ideas but don't come up with a good one\n> yet. It’s just an idea and not tested but how about having\n> CreateDecodingContext() register before_shmem_exit() callback with the\n> decoding context to ensure that we send slot stats even on\n> interruption. 
And FreeDecodingContext() cancels the callback.\n>\n\nIs it a good idea to send stats while exiting and rely on that? I\nthink before_shmem_exit is mostly used for cleanup purposes, so I'm not\nsure we can rely on it for this. I think we can't be sure\nthat in all cases we will send all the stats, so maybe Vignesh's patch\nis sufficient, as it avoids losing the stats in cases where\nwe would have sent a large amount of data.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Apr 2021 10:37:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 11:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 4:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, Apr 29, 2021 at 5:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > It seems that the test case added by f5fc2f5b2 is still a bit\n> > > unstable, even after c64dcc7fe:\n> >\n> > Hmm, I don't see the exact cause yet but there are two possibilities:\n> > some transactions were really spilled,\n> >\n>\n> This is the first test and inserts just one small record, so how it\n> can lead to spill of data. Do you mean to say that may be some\n> background process has written some transaction which leads to a spill\n> of data?\n\nNot sure, but I thought that the logical decoding started to decode\nfrom a relatively old point for some reason and decoded incomplete\ntransactions that weren’t shown in the result.\n\n>\n> > and it showed the old stats due\n> > to losing the drop (and create) slot messages.\n> >\n>\n> Yeah, something like this could happen. Another possibility here could\n> be that before the stats collector has processed drop and create\n> messages, we have enquired about the stats which lead to it giving us\n> the old stats. 
Note, that we don't wait for 'drop' or 'create' message\n> to be delivered. So, there is a possibility of the same. What do you\n> think?\n\nYeah, that could happen even if any message didn't get dropped.\n\n>\n> > For the former case, it\n> > seems to better to create the slot just before the insertion and\n> > setting logical_decoding_work_mem to the default (64MB). For the\n> > latter case, maybe we can use a different name slot than the name used\n> > in other tests?\n> >\n>\n> How about doing both of the above suggestions? Alternatively, we can\n> wait for both 'drop' and 'create' message to be delivered but that\n> might be overkill.\n\nAgreed. Attached the patch doing both things.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 29 Apr 2021 14:43:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> >\n> > How about doing both of the above suggestions? Alternatively, we can\n> > wait for both 'drop' and 'create' message to be delivered but that\n> > might be overkill.\n>\n> Agreed. Attached the patch doing both things.\n>\n\nThanks, the patch LGTM. 
I'll wait for a day before committing to see\nif anyone has better ideas.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 29 Apr 2021 12:07:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 11:55 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 29, 2021 at 4:58 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 29, 2021 at 5:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > It seems that the test case added by f5fc2f5b2 is still a bit\n> > > > unstable, even after c64dcc7fe:\n> > >\n> > > Hmm, I don't see the exact cause yet but there are two possibilities:\n> > > some transactions were really spilled,\n> > >\n> >\n> > This is the first test and inserts just one small record, so how it\n> > can lead to spill of data. Do you mean to say that may be some\n> > background process has written some transaction which leads to a spill\n> > of data?\n>\n> Not sure but I thought that the logical decoding started to decodes\n> from a relatively old point for some reason and decoded incomplete\n> transactions that weren’t shown in the result.\n>\n> >\n> > > and it showed the old stats due\n> > > to losing the drop (and create) slot messages.\n> > >\n> >\n> > Yeah, something like this could happen. Another possibility here could\n> > be that before the stats collector has processed drop and create\n> > messages, we have enquired about the stats which lead to it giving us\n> > the old stats. Note, that we don't wait for 'drop' or 'create' message\n> > to be delivered. So, there is a possibility of the same. 
What do you\n> > think?\n>\n> Yeah, that could happen even if any message didn't get dropped.\n>\n> >\n> > > For the former case, it\n> > > seems to better to create the slot just before the insertion and\n> > > setting logical_decoding_work_mem to the default (64MB). For the\n> > > latter case, maybe we can use a different name slot than the name used\n> > > in other tests?\n> > >\n> >\n> > How about doing both of the above suggestions? Alternatively, we can\n> > wait for both 'drop' and 'create' message to be delivered but that\n> > might be overkill.\n>\n> Agreed. Attached the patch doing both things.\n\nHaving a different slot name should solve the problem. The patch looks\ngood to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 29 Apr 2021 13:56:43 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, Apr 28, 2021 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 4:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> @@ -1369,7 +1369,7 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\n> ReorderBufferIterTXNState *state)\n> * Update the total bytes processed before releasing the current set\n> * of changes and restoring the new set of changes.\n> */\n> - rb->totalBytes += rb->size;\n> + rb->totalBytes += entry->txn->total_size;\n> if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n> &state->entries[off].segno))\n>\n> I have not tested this but won't in the above change you need to check\n> txn->toptxn for subtxns?\n>\n\nNow, I am able to reproduce this issue:\nCreate table t1(c1 int);\nselect pg_create_logical_replication_slot('s1', 'test_decoding');\nBegin;\ninsert into t1 values(1);\nsavepoint s1;\ninsert into t1 select generate_series(1, 100000);\ncommit;\n\npostgres=# select count(*) from 
pg_logical_slot_peek_changes('s1', NULL, NULL);\n count\n--------\n 100005\n(1 row)\n\npostgres=# select * from pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\nstream_count | stream_bytes | total_txns | total_bytes |\nstats_reset\n-----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n s1 | 0 | 0 | 0 | 0 |\n 0 | 0 | 2 | 13200672 | 2021-04-29\n14:33:55.156566+05:30\n(1 row)\n\nselect * from pg_stat_reset_replication_slot('s1');\n\nNow reduce the logical decoding work mem to allow spilling.\npostgres=# set logical_decoding_work_mem='64kB';\nSET\npostgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL, NULL);\n count\n--------\n 100005\n(1 row)\n\npostgres=# select * from pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\nstream_count | stream_bytes | total_txns | total_bytes |\nstats_reset\n-----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n s1 | 1 | 202 | 13200000 | 0 |\n 0 | 0 | 2 | 672 | 2021-04-29\n14:35:21.836613+05:30\n(1 row)\n\nYou can notice that after we have allowed spilling the 'total_bytes'\nstats is showing a different value. The attached patch fixes the issue\nfor me. 
Let me know what do you think about this?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 29 Apr 2021 15:06:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 3:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> >\n> > On Wed, Apr 28, 2021 at 4:51 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n> > >\n> > > On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com>\nwrote:\n> >\n> > @@ -1369,7 +1369,7 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\n> > ReorderBufferIterTXNState *state)\n> > * Update the total bytes processed before releasing the current set\n> > * of changes and restoring the new set of changes.\n> > */\n> > - rb->totalBytes += rb->size;\n> > + rb->totalBytes += entry->txn->total_size;\n> > if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n> > &state->entries[off].segno))\n> >\n> > I have not tested this but won't in the above change you need to check\n> > txn->toptxn for subtxns?\n> >\n>\n> Now, I am able to reproduce this issue:\n> Create table t1(c1 int);\n> select pg_create_logical_replication_slot('s', 'test_decoding');\n> Begin;\n> insert into t1 values(1);\n> savepoint s1;\n> insert into t1 select generate_series(1, 100000);\n> commit;\n>\n> postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL,\nNULL);\n> count\n> --------\n> 100005\n> (1 row)\n>\n> postgres=# select * from pg_stat_replication_slots;\n> slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> stream_count | stream_bytes | total_txns | total_bytes |\n> stats_reset\n>\n-----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> s1 | 0 | 0 | 0 | 0 |\n> 0 | 0 | 2 | 13200672 | 2021-04-29\n> 
14:33:55.156566+05:30\n> (1 row)\n>\n> select * from pg_stat_reset_replication_slot('s1');\n>\n> Now reduce the logical decoding work mem to allow spilling.\n> postgres=# set logical_decoding_work_mem='64kB';\n> SET\n> postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL,\nNULL);\n> count\n> --------\n> 100005\n> (1 row)\n>\n> postgres=# select * from pg_stat_replication_slots;\n> slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> stream_count | stream_bytes | total_txns | total_bytes |\n> stats_reset\n>\n-----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> s1 | 1 | 202 | 13200000 | 0 |\n> 0 | 0 | 2 | 672 | 2021-04-29\n> 14:35:21.836613+05:30\n> (1 row)\n>\n> You can notice that after we have allowed spilling the 'total_bytes'\n> stats is showing a different value. The attached patch fixes the issue\n> for me. Let me know what do you think about this?\n\nI found one issue with the following scenario when testing with\nlogical_decoding_work_mem as 64kB:\n\nBEGIN;\nINSERT INTO t1 values(generate_series(1,10000));\nSAVEPOINT s1;\nINSERT INTO t1 values(generate_series(1,10000));\nCOMMIT;\nSELECT count(*) FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\nselect * from pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\nstream_count | stream_bytes | total_txns | total_bytes |\nstats_reset\n------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n regression_slot1 | 6 | 154 | 9130176 | 0 |\n 0 | 0 | 1 | *4262016* | 2021-04-29\n17:50:00.080663+05:30\n(1 row)\n\nSame thing works fine with logical_decoding_work_mem as 64MB:\nselect * from pg_stat_replication_slots;\n slot_name | spill_txns | spill_count | 
spill_bytes | stream_txns |\nstream_count | stream_bytes | total_txns | total_bytes |\nstats_reset\n------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n regression_slot1 | 6 | 154 | 9130176 | 0 |\n 0 | 0 | 1 | *2640000* | 2021-04-29\n17:50:00.080663+05:30\n(1 row)\n\nThe patch required one change:\n- rb->totalBytes += rb->size;\n+ if (entry->txn->toptxn)\n+ rb->totalBytes += entry->txn->toptxn->total_size;\n+ else\n+ rb->totalBytes += entry->txn->*total_size*;\n\nThe above should be changed to:\n- rb->totalBytes += rb->size;\n+ if (entry->txn->toptxn)\n+ rb->totalBytes += entry->txn->toptxn->total_size;\n+ else\n+ rb->totalBytes += entry->txn->*size*;\n\nAttached patch fixes the issue.\nThoughts?\n\nRegards,\nVignesh", "msg_date": "Thu, 29 Apr 2021 18:14:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 9:44 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n>\n>\n> On Thu, Apr 29, 2021 at 3:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 28, 2021 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 28, 2021 at 4:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > @@ -1369,7 +1369,7 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\n> > > ReorderBufferIterTXNState *state)\n> > > * Update the total bytes processed before releasing the current set\n> > > * of changes and restoring the new set of changes.\n> > > */\n> > > - rb->totalBytes += rb->size;\n> > > + rb->totalBytes += entry->txn->total_size;\n> > > if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n> > > &state->entries[off].segno))\n> > >\n> > > I have not tested this but won't in the above change you 
need to check\n> > > txn->toptxn for subtxns?\n> > >\n> >\n> > Now, I am able to reproduce this issue:\n> > Create table t1(c1 int);\n> > select pg_create_logical_replication_slot('s', 'test_decoding');\n> > Begin;\n> > insert into t1 values(1);\n> > savepoint s1;\n> > insert into t1 select generate_series(1, 100000);\n> > commit;\n> >\n> > postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL, NULL);\n> > count\n> > --------\n> > 100005\n> > (1 row)\n> >\n> > postgres=# select * from pg_stat_replication_slots;\n> > slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> > stream_count | stream_bytes | total_txns | total_bytes |\n> > stats_reset\n> > -----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > s1 | 0 | 0 | 0 | 0 |\n> > 0 | 0 | 2 | 13200672 | 2021-04-29\n> > 14:33:55.156566+05:30\n> > (1 row)\n> >\n> > select * from pg_stat_reset_replication_slot('s1');\n> >\n> > Now reduce the logical decoding work mem to allow spilling.\n> > postgres=# set logical_decoding_work_mem='64kB';\n> > SET\n> > postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL, NULL);\n> > count\n> > --------\n> > 100005\n> > (1 row)\n> >\n> > postgres=# select * from pg_stat_replication_slots;\n> > slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> > stream_count | stream_bytes | total_txns | total_bytes |\n> > stats_reset\n> > -----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > s1 | 1 | 202 | 13200000 | 0 |\n> > 0 | 0 | 2 | 672 | 2021-04-29\n> > 14:35:21.836613+05:30\n> > (1 row)\n> >\n> > You can notice that after we have allowed spilling the 'total_bytes'\n> > stats is showing a different value. The attached patch fixes the issue\n> > for me. 
Let me know what do you think about this?\n>\n> I found one issue with the following scenario when testing with logical_decoding_work_mem as 64kB:\n>\n> BEGIN;\n> INSERT INTO t1 values(generate_series(1,10000));\n> SAVEPOINT s1;\n> INSERT INTO t1 values(generate_series(1,10000));\n> COMMIT;\n> SELECT count(*) FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n> NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> select * from pg_stat_replication_slots;\n> slot_name | spill_txns | spill_count | spill_bytes | stream_txns | stream_count | stream_bytes | total_txns | total_bytes | stats_reset\n> ------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> regression_slot1 | 6 | 154 | 9130176 | 0 | 0 | 0 | 1 | 4262016 | 2021-04-29 17:50:00.080663+05:30\n> (1 row)\n>\n> Same thing works fine with logical_decoding_work_mem as 64MB:\n> select * from pg_stat_replication_slots;\n> slot_name | spill_txns | spill_count | spill_bytes | stream_txns | stream_count | stream_bytes | total_txns | total_bytes | stats_reset\n> ------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> regression_slot1 | 6 | 154 | 9130176 | 0 | 0 | 0 | 1 | 2640000 | 2021-04-29 17:50:00.080663+05:30\n> (1 row)\n>\n> The patch required one change:\n> - rb->totalBytes += rb->size;\n> + if (entry->txn->toptxn)\n> + rb->totalBytes += entry->txn->toptxn->total_size;\n> + else\n> + rb->totalBytes += entry->txn->total_size;\n>\n> The above should be changed to:\n> - rb->totalBytes += rb->size;\n> + if (entry->txn->toptxn)\n> + rb->totalBytes += entry->txn->toptxn->total_size;\n> + else\n> + rb->totalBytes += entry->txn->size;\n>\n> Attached patch fixes the issue.\n> Thoughts?\n\nAfter more thought, it seems to me that we should use txn->size here\nregardless of the top transaction or 
subtransaction since we're\niterating changes associated with a transaction that is either the top\ntransaction or a subtransaction. Otherwise, I think if some\nsubtransactions are not serialized, we will end up adding bytes\nincluding those subtransactions while iterating over other serialized\nsubtransactions. Whereas in ReorderBufferProcessTXN() we should use\ntxn->total_size since txn is always the top transaction. I've attached\nanother patch to do this.\n\nBTW, to check how many bytes of changes are passed to the decoder\nplugin I wrote and attached a simple decoder plugin that calculates\nthe total amount of bytes for each change on the decoding plugin side.\nI think what we expect is that the amounts of change bytes shown on\nboth sides match. You can build it in the same way as other\nthird-party modules; you then need to create the decoder_stats extension.\n\nThe basic usage is to execute pg_logical_slot_get/peek_changes() and\nmystats('slot_name') in the same process. While decoding the changes,\nthe decoder_stats plugin accumulates the change bytes in local memory,\nand the mystats() SQL function, defined in the decoder_stats extension,\nshows those stats.\n\nI've done some tests with the v4 patch. For instance, with the following\nworkload the output is as expected:\n\nBEGIN;\nINSERT INTO t1 values(generate_series(1,10000));\nSAVEPOINT s1;\nINSERT INTO t1 values(generate_series(1,10000));\nCOMMIT;\n\nThe mystats() function shows:\n\n=# select pg_logical_slot_get_changes('test_slot', null, null);\n=# select change_type, change_bytes, total_bytes from mystats('test_slot');\n change_type | change_bytes | total_bytes\n-------------+--------------+-------------\n INSERT | 2578 kB | 2578 kB\n(1 row)\n\n'change_bytes' and 'total_bytes' are the total amount of changes\ncalculated on the plugin side and core side, respectively. Those\nmatch, as expected. 
On the other hand, with the following\nworkload those are not matched:\n\nBEGIN;\nINSERT INTO t1 values(generate_series(1,10000));\nSAVEPOINT s1;\nINSERT INTO t1 values(generate_series(1,10000));\nSAVEPOINT s2;\nINSERT INTO t1 values(generate_series(1,10000));\nCOMMIT;\n\n=# select pg_logical_slot_get_changes('test_slot', null, null);\n=# select change_type, change_bytes, total_bytes from mystats('test_slot');\n change_type | change_bytes | total_bytes\n-------------+--------------+-------------\n INSERT | 3867 kB | 5451 kB\n(1 row)\n\nThis is fixed by the attached v5 patch.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Fri, 30 Apr 2021 09:24:39 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 12:07 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 11:14 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > >\n> > > How about doing both of the above suggestions? Alternatively, we can\n> > > wait for both 'drop' and 'create' message to be delivered but that\n> > > might be overkill.\n> >\n> > Agreed. Attached the patch doing both things.\n> >\n>\n> Thanks, the patch LGTM. 
I'll wait for a day before committing to see\n> if anyone has better ideas.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 30 Apr 2021 08:42:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 30, 2021 at 5:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 9:44 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> >\n> >\n> > On Thu, Apr 29, 2021 at 3:06 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 28, 2021 at 5:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 28, 2021 at 4:51 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Apr 28, 2021 at 6:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > @@ -1369,7 +1369,7 @@ ReorderBufferIterTXNNext(ReorderBuffer *rb,\n> > > > ReorderBufferIterTXNState *state)\n> > > > * Update the total bytes processed before releasing the current set\n> > > > * of changes and restoring the new set of changes.\n> > > > */\n> > > > - rb->totalBytes += rb->size;\n> > > > + rb->totalBytes += entry->txn->total_size;\n> > > > if (ReorderBufferRestoreChanges(rb, entry->txn, &entry->file,\n> > > > &state->entries[off].segno))\n> > > >\n> > > > I have not tested this but won't in the above change you need to check\n> > > > txn->toptxn for subtxns?\n> > > >\n> > >\n> > > Now, I am able to reproduce this issue:\n> > > Create table t1(c1 int);\n> > > select pg_create_logical_replication_slot('s', 'test_decoding');\n> > > Begin;\n> > > insert into t1 values(1);\n> > > savepoint s1;\n> > > insert into t1 select generate_series(1, 100000);\n> > > commit;\n> > >\n> > > postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL, NULL);\n> > > count\n> > > --------\n> > > 100005\n> > > (1 row)\n> > >\n> > > postgres=# select * from 
pg_stat_replication_slots;\n> > > slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> > > stream_count | stream_bytes | total_txns | total_bytes |\n> > > stats_reset\n> > > -----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > > s1 | 0 | 0 | 0 | 0 |\n> > > 0 | 0 | 2 | 13200672 | 2021-04-29\n> > > 14:33:55.156566+05:30\n> > > (1 row)\n> > >\n> > > select * from pg_stat_reset_replication_slot('s1');\n> > >\n> > > Now reduce the logical decoding work mem to allow spilling.\n> > > postgres=# set logical_decoding_work_mem='64kB';\n> > > SET\n> > > postgres=# select count(*) from pg_logical_slot_peek_changes('s1', NULL, NULL);\n> > > count\n> > > --------\n> > > 100005\n> > > (1 row)\n> > >\n> > > postgres=# select * from pg_stat_replication_slots;\n> > > slot_name | spill_txns | spill_count | spill_bytes | stream_txns |\n> > > stream_count | stream_bytes | total_txns | total_bytes |\n> > > stats_reset\n> > > -----------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > > s1 | 1 | 202 | 13200000 | 0 |\n> > > 0 | 0 | 2 | 672 | 2021-04-29\n> > > 14:35:21.836613+05:30\n> > > (1 row)\n> > >\n> > > You can notice that after we have allowed spilling the 'total_bytes'\n> > > stats is showing a different value. The attached patch fixes the issue\n> > > for me. 
Let me know what do you think about this?\n> >\n> > I found one issue with the following scenario when testing with logical_decoding_work_mem as 64kB:\n> >\n> > BEGIN;\n> > INSERT INTO t1 values(generate_series(1,10000));\n> > SAVEPOINT s1;\n> > INSERT INTO t1 values(generate_series(1,10000));\n> > COMMIT;\n> > SELECT count(*) FROM pg_logical_slot_get_changes('regression_slot1', NULL,\n> > NULL, 'include-xids', '0', 'skip-empty-xacts', '1');\n> > select * from pg_stat_replication_slots;\n> > slot_name | spill_txns | spill_count | spill_bytes | stream_txns | stream_count | stream_bytes | total_txns | total_bytes | stats_reset\n> > ------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > regression_slot1 | 6 | 154 | 9130176 | 0 | 0 | 0 | 1 | 4262016 | 2021-04-29 17:50:00.080663+05:30\n> > (1 row)\n> >\n> > Same thing works fine with logical_decoding_work_mem as 64MB:\n> > select * from pg_stat_replication_slots;\n> > slot_name | spill_txns | spill_count | spill_bytes | stream_txns | stream_count | stream_bytes | total_txns | total_bytes | stats_reset\n> > ------------------+------------+-------------+-------------+-------------+--------------+--------------+------------+-------------+----------------------------------\n> > regression_slot1 | 6 | 154 | 9130176 | 0 | 0 | 0 | 1 | 2640000 | 2021-04-29 17:50:00.080663+05:30\n> > (1 row)\n> >\n> > The patch required one change:\n> > - rb->totalBytes += rb->size;\n> > + if (entry->txn->toptxn)\n> > + rb->totalBytes += entry->txn->toptxn->total_size;\n> > + else\n> > + rb->totalBytes += entry->txn->total_size;\n> >\n> > The above should be changed to:\n> > - rb->totalBytes += rb->size;\n> > + if (entry->txn->toptxn)\n> > + rb->totalBytes += entry->txn->toptxn->total_size;\n> > + else\n> > + rb->totalBytes += entry->txn->size;\n> >\n> > Attached patch fixes the issue.\n> > Thoughts?\n>\n> After more thought, it 
seems to me that we should use txn->size here\n> regardless of the top transaction or subtransaction since we're\n> iterating changes associated with a transaction that is either the top\n> transaction or a subtransaction. Otherwise, I think if some\n> subtransactions are not serialized, we will end up adding bytes\n> including those subtransactions during iterating other serialized\n> subtransactions. Whereas in ReorderBufferProcessTXN() we should use\n> txn->total_size since txn is always the top transaction. I've attached\n> another patch to do this.\n>\n> BTW, to check how many bytes of changes are passed to the decoder\n> plugin I wrote and attached a simple decoder plugin that calculates\n> the total amount of bytes for each change on the decoding plugin side.\n> I think what we expect is that the amounts of change bytes shown on\n> both sides are matched. You can build it in the same way as other\n> third-party modules and need to create decoder_stats extension.\n>\n> The basic usage is to execute pg_logical_slot_get/peek_changes() and\n> mystats('slot_name') in the same process. During decoding the changes,\n> decoder_stats plugin accumulates the change bytes in the local memory\n> and mystats() SQL function, defined in decoder_stats extension, shows\n> those stats.\n>\n> I've done some tests with the v4 patch. For instance, with the following\n> workload the output is expected:\n>\n> BEGIN;\n> INSERT INTO t1 values(generate_series(1,10000));\n> SAVEPOINT s1;\n> INSERT INTO t1 values(generate_series(1,10000));\n> COMMIT;\n>\n> mystats() function shows:\n>\n> =# select pg_logical_slot_get_changes('test_slot', null, null);\n> =# select change_type, change_bytes, total_bytes from mystats('test_slot');\n> change_type | change_bytes | total_bytes\n> -------------+--------------+-------------\n> INSERT | 2578 kB | 2578 kB\n> (1 row)\n>\n> 'change_bytes' and 'total_bytes' are the total amount of changes\n> calculated on the plugin side and core side, respectively. 
Those are\n> matched, which is expected. On the other hand, with the following\n> workload those are not matched:\n>\n> BEGIN;\n> INSERT INTO t1 values(generate_series(1,10000));\n> SAVEPOINT s1;\n> INSERT INTO t1 values(generate_series(1,10000));\n> SAVEPOINT s2;\n> INSERT INTO t1 values(generate_series(1,10000));\n> COMMIT;\n>\n> =# select pg_logical_slot_get_changes('test_slot', null, null);\n> =# select change_type, change_bytes, total_bytes from mystats('test_slot');\n> change_type | change_bytes | total_bytes\n> -------------+--------------+-------------\n> INSERT | 3867 kB | 5451 kB\n> (1 row)\n>\n> This is fixed by the attached v5 patch.\n\nThe changes look good to me, I don't have any comments.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 30 Apr 2021 11:36:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 30, 2021 at 5:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> After more thought, it seems to me that we should use txn->size here\n> regardless of the top transaction or subtransaction since we're\n> iterating changes associated with a transaction that is either the top\n> transaction or a subtransaction. Otherwise, I think if some\n> subtransactions are not serialized, we will end up adding bytes\n> including those subtransactions during iterating other serialized\n> subtransactions. Whereas in ReorderBufferProcessTXN() we should use\n> txn->total_txn since txn is always the top transaction. I've attached\n> another patch to do this.\n>\n\nLGTM. I have slightly edited the comments in the attached. 
I'll push\nthis early next week unless there are more comments.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Fri, 30 Apr 2021 13:47:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > I am not sure if any of these alternatives are a good idea. What do\n> > > you think? Do you have any other ideas for this?\n> >\n> > I've been considering some ideas but don't come up with a good one\n> > yet. It’s just an idea and not tested but how about having\n> > CreateDecodingContext() register before_shmem_exit() callback with the\n> > decoding context to ensure that we send slot stats even on\n> > interruption. And FreeDecodingContext() cancels the callback.\n> >\n>\n> Is it a good idea to send stats while exiting and rely on the same? I\n> think before_shmem_exit is mostly used for the cleanup purpose so not\n> sure if we can rely on it for this purpose. I think we can't be sure\n> that in all cases we will send all stats, so maybe Vignesh's patch is\n> sufficient to cover the cases where we avoid losing it in cases where\n> we would have sent a large amount of data.\n>\n\nSawada-San, any thoughts on this point? Apart from this, I think you\nhave suggested somewhere in this thread to slightly update the\ndescription of stream_bytes. I would like to update the description of\nstream_bytes and total_bytes as below:\n\nstream_bytes\nAmount of transaction data decoded for streaming in-progress\ntransactions to the decoding output plugin while decoding changes from\nWAL for this slot. 
This and other streaming counters for this slot can\nbe used to tune logical_decoding_work_mem.\n\ntotal_bytes\nAmount of transaction data decoded for sending transactions to the\ndecoding output plugin while decoding changes from WAL for this slot.\nNote that this includes data that is streamed and/or spilled.\n\nThis update considers two points:\na. we don't send this data across the network because plugin might\ndecide to filter this data, ex. based on publications.\nb. not all of the decoded changes are sent to plugin, consider\nREORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID,\nREORDER_BUFFER_CHANGE_INTERNAL_SNAPSHOT, etc.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 May 2021 10:56:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, Apr 30, 2021 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> LGTM. I have slightly edited the comments in the attached. I'll push\n> this early next week unless there are more comments.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 May 2021 10:59:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > > I am not sure if any of these alternatives are a good idea. What do\n> > > > you think? Do you have any other ideas for this?\n> > >\n> > > I've been considering some ideas but don't come up with a good one\n> > > yet. 
It’s just an idea and not tested but how about having\n> > > CreateDecodingContext() register before_shmem_exit() callback with the\n> > > decoding context to ensure that we send slot stats even on\n> > > interruption. And FreeDecodingContext() cancels the callback.\n> > >\n> >\n> > Is it a good idea to send stats while exiting and rely on the same? I\n> > think before_shmem_exit is mostly used for the cleanup purpose so not\n> > sure if we can rely on it for this purpose. I think we can't be sure\n> > that in all cases we will send all stats, so maybe Vignesh's patch is\n> > sufficient to cover the cases where we avoid losing it in cases where\n> > we would have sent a large amount of data.\n> >\n>\n> Sawada-San, any thoughts on this point?\n\nbefore_shmem_exit is mostly used to the cleanup purpose but how about\non_shmem_exit()? pgstats relies on that to send stats at the\ninterruption. See pgstat_shutdown_hook().\n\nThat being said, I agree Vignesh' patch would cover most cases. If we\ndon't find any better solution, I think we can go with Vignesh's\npatch.\n\n> Apart from this, I think you\n> have suggested somewhere in this thread to slightly update the\n> description of stream_bytes. I would like to update the description of\n> stream_bytes and total_bytes as below:\n>\n> stream_bytes\n> Amount of transaction data decoded for streaming in-progress\n> transactions to the decoding output plugin while decoding changes from\n> WAL for this slot. This and other streaming counters for this slot can\n> be used to tune logical_decoding_work_mem.\n>\n> total_bytes\n> Amount of transaction data decoded for sending transactions to the\n> decoding output plugin while decoding changes from WAL for this slot.\n> Note that this includes data that is streamed and/or spilled.\n>\n> This update considers two points:\n> a. we don't send this data across the network because plugin might\n> decide to filter this data, ex. based on publications.\n> b. 
not all of the decoded changes are sent to plugin, consider\n> REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID,\n> REORDER_BUFFER_CHANGE_INTERNAL_SNAPSHOT, etc.\n\nLooks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 3 May 2021 21:18:14 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 3, 2021 at 2:29 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Apr 30, 2021 at 1:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > LGTM. I have slightly edited the comments in the attached. I'll push\n> > this early next week unless there are more comments.\n> >\n>\n> Pushed.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 3 May 2021 21:18:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 3, 2021 at 5:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > >\n> > > > > I am not sure if any of these alternatives are a good idea. What do\n> > > > > you think? Do you have any other ideas for this?\n> > > >\n> > > > I've been considering some ideas but don't come up with a good one\n> > > > yet. It’s just an idea and not tested but how about having\n> > > > CreateDecodingContext() register before_shmem_exit() callback with the\n> > > > decoding context to ensure that we send slot stats even on\n> > > > interruption. 
And FreeDecodingContext() cancels the callback.\n> > > >\n> > >\n> > > Is it a good idea to send stats while exiting and rely on the same? I\n> > > think before_shmem_exit is mostly used for the cleanup purpose so not\n> > > sure if we can rely on it for this purpose. I think we can't be sure\n> > > that in all cases we will send all stats, so maybe Vignesh's patch is\n> > > sufficient to cover the cases where we avoid losing it in cases where\n> > > we would have sent a large amount of data.\n> > >\n> >\n> > Sawada-San, any thoughts on this point?\n>\n> before_shmem_exit is mostly used to the cleanup purpose but how about\n> on_shmem_exit()? pgstats relies on that to send stats at the\n> interruption. See pgstat_shutdown_hook().\n>\n\nYeah, that is worth trying. Would you like to give it a try? I think\nit still might not cover the cases where we error out in the backend\nwhile decoding via APIs because at that time we won't exit, maybe for\nthat we can consider Vignesh's patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 3 May 2021 18:50:54 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 3, 2021 at 10:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 5:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > >\n> > > > > > I am not sure if any of these alternatives are a good idea. What do\n> > > > > > you think? 
Do you have any other ideas for this?\n> > > > >\n> > > > > I've been considering some ideas but don't come up with a good one\n> > > > > yet. It’s just an idea and not tested but how about having\n> > > > > CreateDecodingContext() register before_shmem_exit() callback with the\n> > > > > decoding context to ensure that we send slot stats even on\n> > > > > interruption. And FreeDecodingContext() cancels the callback.\n> > > > >\n> > > >\n> > > > Is it a good idea to send stats while exiting and rely on the same? I\n> > > > think before_shmem_exit is mostly used for the cleanup purpose so not\n> > > > sure if we can rely on it for this purpose. I think we can't be sure\n> > > > that in all cases we will send all stats, so maybe Vignesh's patch is\n> > > > sufficient to cover the cases where we avoid losing it in cases where\n> > > > we would have sent a large amount of data.\n> > > >\n> > >\n> > > Sawada-San, any thoughts on this point?\n> >\n> > before_shmem_exit is mostly used to the cleanup purpose but how about\n> > on_shmem_exit()? pgstats relies on that to send stats at the\n> > interruption. See pgstat_shutdown_hook().\n> >\n>\n> Yeah, that is worth trying. Would you like to give it a try?\n\nYes.\n\nIn this approach, I think we will need to have a static pointer in\nlogical.c pointing to LogicalDecodingContext that we’re using. At\nStartupDecodingContext(), we set the pointer to the just created\nLogicalDecodingContext and register the callback so that we can refer\nto the LogicalDecodingContext on that callback. And at\nFreeDecodingContext(), we reset the pointer to NULL (however, since\nFreeDecodingContext() is not called when an error happens we would\nneed to ensure resetting it somehow). But, after more thought, if we\nhave the static pointer in logical.c it would rather be better to have\na global function that sends slot stats based on the\nLogicalDecodingContext pointed by the static pointer and can be called\nby ReplicationSlotRelease(). 
That way, we don’t need to worry about\nerroring out cases as well as interruption cases, although we need to\nhave a new static pointer.\n\nI've attached a quick-hacked patch. I also incorporated the change\nthat calls UpdateDecodingStats() at FreeDecodingContext() so that we\ncan send slot stats also in the case where we spilled/streamed changes\nbut finished without commit/abort/prepare record.\n\n> I think\n> it still might not cover the cases where we error out in the backend\n> while decoding via APIs because at that time we won't exit, maybe for\n> that we can consider Vignesh's patch.\n\nAgreed. It seems to me that the approach of the attached patch is\nbetter than the approach using on_shmem_exit(). So if we want to avoid\nhaving the new static pointer and function for this purpose we can\nconsider Vignesh’s patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 4 May 2021 13:18:03 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, May 4, 2021 at 9:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 10:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 5:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > >\n> > > > > >\n> > > > > > > I am not sure if any of these alternatives are a good idea. What do\n> > > > > > > you think? 
Do you have any other ideas for this?\n> > > > > >\n> > > > > > I've been considering some ideas but don't come up with a good one\n> > > > > > yet. It’s just an idea and not tested but how about having\n> > > > > > CreateDecodingContext() register before_shmem_exit() callback with the\n> > > > > > decoding context to ensure that we send slot stats even on\n> > > > > > interruption. And FreeDecodingContext() cancels the callback.\n> > > > > >\n> > > > >\n> > > > > Is it a good idea to send stats while exiting and rely on the same? I\n> > > > > think before_shmem_exit is mostly used for the cleanup purpose so not\n> > > > > sure if we can rely on it for this purpose. I think we can't be sure\n> > > > > that in all cases we will send all stats, so maybe Vignesh's patch is\n> > > > > sufficient to cover the cases where we avoid losing it in cases where\n> > > > > we would have sent a large amount of data.\n> > > > >\n> > > >\n> > > > Sawada-San, any thoughts on this point?\n> > >\n> > > before_shmem_exit is mostly used to the cleanup purpose but how about\n> > > on_shmem_exit()? pgstats relies on that to send stats at the\n> > > interruption. See pgstat_shutdown_hook().\n> > >\n> >\n> > Yeah, that is worth trying. Would you like to give it a try?\n>\n> Yes.\n>\n> In this approach, I think we will need to have a static pointer in\n> logical.c pointing to LogicalDecodingContext that we’re using. At\n> StartupDecodingContext(), we set the pointer to the just created\n> LogicalDecodingContext and register the callback so that we can refer\n> to the LogicalDecodingContext on that callback. And at\n> FreeDecodingContext(), we reset the pointer to NULL (however, since\n> FreeDecodingContext() is not called when an error happens we would\n> need to ensure resetting it somehow). 
But, after more thought, if we\n> have the static pointer in logical.c it would rather be better to have\n> a global function that sends slot stats based on the\n> LogicalDecodingContext pointed by the static pointer and can be called\n> by ReplicationSlotRelease(). That way, we don’t need to worry about\n> erroring out cases as well as interruption cases, although we need to\n> have a new static pointer.\n>\n> I've attached a quick-hacked patch. I also incorporated the change\n> that calls UpdateDecodingStats() at FreeDecodingContext() so that we\n> can send slot stats also in the case where we spilled/streamed changes\n> but finished without commit/abort/prepare record.\n>\n> > I think\n> > it still might not cover the cases where we error out in the backend\n> > while decoding via APIs because at that time we won't exit, maybe for\n> > that we can consider Vignesh's patch.\n>\n> Agreed. It seems to me that the approach of the attached patch is\n> better than the approach using on_shmem_exit(). So if we want to avoid\n> having the new static pointer and function for this purpose we can\n> consider Vignesh’s patch.\n>\n\nI'm ok with using either my patch or Sawada san's patch. Even I had\nthe same thought of whether we should have a static variable, as\npointed out by Sawada san. 
Apart from that I had one minor comment:\nThis comment needs to be corrected \"andu sed to sent\"\n+/*\n+ * Pointing to the currently-used logical decoding context andu sed to sent\n+ * slot statistics on releasing slots.\n+ */\n+static LogicalDecodingContext *MyLogicalDecodingContext = NULL;\n+\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 4 May 2021 11:04:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Tue, May 4, 2021 at 2:34 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, May 4, 2021 at 9:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Mon, May 3, 2021 at 10:21 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 3, 2021 at 5:48 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > >\n> > > > > On Thu, Apr 29, 2021 at 10:37 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > >\n> > > > > > On Wed, Apr 28, 2021 at 7:43 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > > > > >\n> > > > > > > On Wed, Apr 28, 2021 at 3:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > > > > > >\n> > > > > > >\n> > > > > > > > I am not sure if any of these alternatives are a good idea. What do\n> > > > > > > > you think? Do you have any other ideas for this?\n> > > > > > >\n> > > > > > > I've been considering some ideas but don't come up with a good one\n> > > > > > > yet. It’s just an idea and not tested but how about having\n> > > > > > > CreateDecodingContext() register before_shmem_exit() callback with the\n> > > > > > > decoding context to ensure that we send slot stats even on\n> > > > > > > interruption. And FreeDecodingContext() cancels the callback.\n> > > > > > >\n> > > > > >\n> > > > > > Is it a good idea to send stats while exiting and rely on the same? 
I\n> > > > > > think before_shmem_exit is mostly used for the cleanup purpose so not\n> > > > > > sure if we can rely on it for this purpose. I think we can't be sure\n> > > > > > that in all cases we will send all stats, so maybe Vignesh's patch is\n> > > > > > sufficient to cover the cases where we avoid losing it in cases where\n> > > > > > we would have sent a large amount of data.\n> > > > > >\n> > > > >\n> > > > > Sawada-San, any thoughts on this point?\n> > > >\n> > > > before_shmem_exit is mostly used to the cleanup purpose but how about\n> > > > on_shmem_exit()? pgstats relies on that to send stats at the\n> > > > interruption. See pgstat_shutdown_hook().\n> > > >\n> > >\n> > > Yeah, that is worth trying. Would you like to give it a try?\n> >\n> > Yes.\n> >\n> > In this approach, I think we will need to have a static pointer in\n> > logical.c pointing to LogicalDecodingContext that we’re using. At\n> > StartupDecodingContext(), we set the pointer to the just created\n> > LogicalDecodingContext and register the callback so that we can refer\n> > to the LogicalDecodingContext on that callback. And at\n> > FreeDecodingContext(), we reset the pointer to NULL (however, since\n> > FreeDecodingContext() is not called when an error happens we would\n> > need to ensure resetting it somehow). But, after more thought, if we\n> > have the static pointer in logical.c it would rather be better to have\n> > a global function that sends slot stats based on the\n> > LogicalDecodingContext pointed by the static pointer and can be called\n> > by ReplicationSlotRelease(). That way, we don’t need to worry about\n> > erroring out cases as well as interruption cases, although we need to\n> > have a new static pointer.\n> >\n> > I've attached a quick-hacked patch. 
I also incorporated the change\n> > that calls UpdateDecodingStats() at FreeDecodingContext() so that we\n> > can send slot stats also in the case where we spilled/streamed changes\n> > but finished without commit/abort/prepare record.\n> >\n> > > I think\n> > > it still might not cover the cases where we error out in the backend\n> > > while decoding via APIs because at that time we won't exit, maybe for\n> > > that we can consider Vignesh's patch.\n> >\n> > Agreed. It seems to me that the approach of the attached patch is\n> > better than the approach using on_shmem_exit(). So if we want to avoid\n> > having the new static pointer and function for this purpose we can\n> > consider Vignesh’s patch.\n> >\n>\n> I'm ok with using either my patch or Sawada san's patch, Even I had\n> the same thought of whether we should have a static variable thought\n> pointed out by Sawada san. Apart from that I had one minor comment:\n> This comment needs to be corrected \"andu sed to sent\"\n> +/*\n> + * Pointing to the currently-used logical decoding context andu sed to sent\n> + * slot statistics on releasing slots.\n> + */\n> +static LogicalDecodingContext *MyLogicalDecodingContext = NULL;\n> +\n\nRight, that needs to be fixed.\n\nAfter more thought, I'm concerned that my patch's approach might be\ninvasive for PG14. Given that Vignesh’s patch would cover most cases,\nI think we can live with a small downside that could miss some slot\nstats. If we want to ensure sending slot stats at releasing slot, we\ncan develop it as an improvement. 
My patch would be better to get\nreviewed by more people, including the design, during PG15 development.\nThoughts?\n\nRegards,\n\n---\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 09:45:12 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 3, 2021 at 9:18 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Mon, May 3, 2021 at 2:27 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > Apart from this, I think you\n> > have suggested somewhere in this thread to slightly update the\n> > description of stream_bytes. I would like to update the description of\n> > stream_bytes and total_bytes as below:\n> >\n> > stream_bytes\n> > Amount of transaction data decoded for streaming in-progress\n> > transactions to the decoding output plugin while decoding changes from\n> > WAL for this slot. This and other streaming counters for this slot can\n> > be used to tune logical_decoding_work_mem.\n> >\n> > total_bytes\n> > Amount of transaction data decoded for sending transactions to the\n> > decoding output plugin while decoding changes from WAL for this slot.\n> > Note that this includes data that is streamed and/or spilled.\n> >\n> > This update considers two points:\n> > a. we don't send this data across the network because plugin might\n> > decide to filter this data, ex. based on publications.\n> > b. 
not all of the decoded changes are sent to plugin, consider\n> > REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID,\n> > REORDER_BUFFER_CHANGE_INTERNAL_SNAPSHOT, etc.\n>\n> Looks good to me.\n\nAttached the doc update patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 6 May 2021 12:43:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 6:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> After more thought, I'm concerned that my patch's approach might be\n> invasive for PG14. Given that Vignesh’s patch would cover most cases,\n>\n\nI am not sure if your patch is too invasive but OTOH I am also\nconvinced that Vignesh's patch covers most cases and is much simpler\nso we can go ahead with that. In the attached, I have combined\nVignesh's patch and your doc fix patch. Additionally, I have changed\nsome comments and some other cosmetic stuff. Let me know what you\nthink of the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 6 May 2021 09:39:19 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 1:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 6:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > After more thought, I'm concerned that my patch's approach might be\n> > invasive for PG14. Given that Vignesh’s patch would cover most cases,\n> >\n>\n> I am not sure if your patch is too invasive but OTOH I am also\n> convinced that Vignesh's patch covers most cases and is much simpler\n> so we can go ahead with that.\n\nI think that my patch affects also other codes including logical\ndecoding and decoding context. 
We will need to write code while\nworrying about MyLogicalDecodingContext.\n\n> In the attached, I have combined\n> Vignesh's patch and your doc fix patch. Additionally, I have changed\n> some comments and some other cosmetic stuff. Let me know what you\n> think of the attached?\n\nThank you for updating the patch. The patch looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 14:24:58 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 9:39 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 6:15 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > After more thought, I'm concerned that my patch's approach might be\n> > invasive for PG14. Given that Vignesh’s patch would cover most cases,\n> >\n>\n> I am not sure if your patch is too invasive but OTOH I am also\n> convinced that Vignesh's patch covers most cases and is much simpler\n> so we can go ahead with that. In the attached, I have combined\n> Vignesh's patch and your doc fix patch. Additionally, I have changed\n> some comments and some other cosmetic stuff. Let me know what you\n> think of the attached?\n\nThe updated patch looks good to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 6 May 2021 11:22:09 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 1:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> > In the attached, I have combined\n> > Vignesh's patch and your doc fix patch. Additionally, I have changed\n> > some comments and some other cosmetic stuff. Let me know what you\n> > think of the attached?\n>\n> Thank you for updating the patch. 
The patch looks good to me.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 12:32:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 4:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 1:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > > In the attached, I have combined\n> > > Vignesh's patch and your doc fix patch. Additionally, I have changed\n> > > some comments and some other cosmetic stuff. Let me know what you\n> > > think of the attached?\n> >\n> > Thank you for updating the patch. The patch looks good to me.\n> >\n>\n> Pushed!\n\nThanks!\n\nAll issues pointed out in this thread are resolved and we can remove\nthis item from the open items?\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 16:59:27 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> All issues pointed out in this thread are resolved and we can remove\n> this item from the open items?\n>\n\nI think so. 
Do you think we should reply to Andres's original email\nstating the commits that fixed the individual review comments to avoid\nany confusion later?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 13:58:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 4:03 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 10:55 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > On Thu, May 6, 2021 at 1:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > >\n> > > > In the attached, I have combined\n> > > > Vignesh's patch and your doc fix patch. Additionally, I have changed\n> > > > some comments and some other cosmetic stuff. Let me know what you\n> > > > think of the attached?\n> > >\n> > > Thank you for updating the patch. The patch looks good to me.\n> > >\n> >\n> > Pushed!\n\nThanks for committing.\n\n>\n> All issues pointed out in this thread are resolved and we can remove\n> this item from the open items?\n\nI felt all the comments listed have been addressed.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 6 May 2021 14:04:04 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > All issues pointed out in this thread are resolved and we can remove\n> > this item from the open items?\n> >\n>\n> I think so. Do you think we should reply to Andres's original email\n> stating the commits that fixed the individual review comments to avoid\n> any confusion later?\n\nGood idea. 
That's also helpful for confirming that all comments are\naddressed. Would you like to gather those commits? or shall I?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 6 May 2021 17:35:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 6, 2021 at 2:05 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Thu, May 6, 2021 at 5:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 6, 2021 at 1:30 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > All issues pointed out in this thread are resolved and we can remove\n> > > this item from the open items?\n> > >\n> >\n> > I think so. Do you think we should reply to Andres's original email\n> > stating the commits that fixed the individual review comments to avoid\n> > any confusion later?\n>\n> Good idea. That's also helpful for confirming that all comments are\n> addressed. Would you like to gather those commits? or shall I?\n>\n\nI am fine either way. I will do it tomorrow unless you have responded\nbefore that.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 6 May 2021 14:55:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I started to write this as a reply to\n> https://postgr.es/m/20210318015105.dcfa4ceybdjubf2i%40alap3.anarazel.de\n> but I think it doesn't really fit under that header anymore.\n>\n> On 2021-03-17 18:51:05 -0700, Andres Freund wrote:\n> > It does make it easier for the shared memory stats patch, because if\n> > there's a fixed number + location, the relevant stats reporting doesn't\n> > need to go through a hashtable with the associated locking. 
I guess\n> > that may have colored my perception that it's better to just have a\n> > statically sized memory allocation for this. Noteworthy that SLRU stats\n> > are done in a fixed size allocation as well...\n\nThrough a long discussion, all review comments pointed out here have\nbeen addressed. I summarized that individual review comments are fixed\nby which commit to avoid any confusion later.\n\n>\n> As part of reviewing the replication slot stats patch I looked at\n> replication slot stats a fair bit, and I've a few misgivings. First,\n> about the pgstat.c side of things:\n>\n> - If somehow slot stat drop messages got lost (remember pgstat\n> communication is lossy!), we'll just stop maintaining stats for slots\n> created later, because there'll eventually be no space for keeping\n> stats for another slot.\n>\n> - If max_replication_slots was lowered between a restart,\n> pgstat_read_statfile() will happily write beyond the end of\n> replSlotStats.\n>\n> - pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. I\n> think pgstat.c has absolutely no business doing things on that level.\n>\n> - We do a linear search through all replication slots whenever receiving\n> stats for a slot. Even though there'd be a perfectly good index to\n> just use all throughout - the slots index itself. It looks to me like\n> slots stat reports can be fairly frequent in some workloads, so that\n> doesn't seem great.\n\nFixed by 3fa17d37716.\n\n>\n> - PgStat_ReplSlotStats etc use slotname[NAMEDATALEN]. Why not just NameData?\n>\n> - pgstat_report_replslot() already has a lot of stats parameters, it\n> seems likely that we'll get more. 
Seems like we should just use a\n> struct of stats updates.\n\nFixed by cca57c1d9bf.\n\n>\n> And then more generally about the feature:\n> - If a slot was used to stream out a large amount of changes (say an\n> initial data load), but then replication is interrupted before the\n> transaction is committed/aborted, stream_bytes will not reflect the\n> many gigabytes of data we may have sent.\n\nFixed by 592f00f8d.\n\n> - I seems weird that we went to the trouble of inventing replication\n> slot stats, but then limit them to logical slots, and even there don't\n> record the obvious things like the total amount of data sent.\n\nFixed by f5fc2f5b23d.\n\n>\n> I think the best way to address the more fundamental \"pgstat related\"\n> complaints is to change how replication slot stats are\n> \"addressed\". Instead of using the slots name, report stats using the\n> index in ReplicationSlotCtl->replication_slots.\n>\n> That removes the risk of running out of \"replication slot stat slots\":\n> If we loose a drop message, the index eventually will be reused and we\n> likely can detect that the stats were for a different slot by comparing\n> the slot name.\n>\n> It also makes it easy to handle the issue of max_replication_slots being\n> lowered and there still being stats for a slot - we simply can skip\n> restoring that slots data, because we know the relevant slot can't exist\n> anymore. And we can make the initial pgstat_report_replslot() during\n> slot creation use a\n>\n> I'm wondering if we should just remove the slot name entirely from the\n> pgstat.c side of things, and have pg_stat_get_replication_slots()\n> inquire about slots by index as well and get the list of slots to report\n> stats for from slot.c infrastructure.\n\nWe fixed the problem of \"running out of replication slot stat slots\"\nby using HTAB to store slot stats, see 3fa17d37716. The slot stats\ncould be orphaned if a slot drop message gets lost. 
But we constantly\ncheck and remove them in pgstat_vacuum_stat().\n\nFor the record, there are two known issues that are unlikely to happen\nin practice or don't affect users much: (1) if the messages for\ncreation and drop slot of the same name get lost and create happens\nbefore (auto)vacuum cleans up the dead slot, the stats will be\naccumulated into the old slot, and (2) we could miss the total_txn and\ntotal_bytes updates if logical decoding is interrupted.\n\nFor (1), there is an idea of having OIDs for each slot to avoid the\naccumulation of stats but that doesn't seem worth doing as in practice\nthis won't happen frequently. For (2), there are some ideas of\nreporting slot stats at releasing slot (by keeping stats in\nReplicationSlot or by using callback) but we decided to go with\nreporting slot stats after every stream/spill. Because it covers most\ncases in practice and is much simpler than other approaches.\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 7 May 2021 09:39:56 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, May 7, 2021 at 6:10 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 3:52 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I started to write this as a reply to\n> > https://postgr.es/m/20210318015105.dcfa4ceybdjubf2i%40alap3.anarazel.de\n> > but I think it doesn't really fit under that header anymore.\n> >\n> > On 2021-03-17 18:51:05 -0700, Andres Freund wrote:\n> > > It does make it easier for the shared memory stats patch, because if\n> > > there's a fixed number + location, the relevant stats reporting doesn't\n> > > need to go through a hashtable with the associated locking. I guess\n> > > that may have colored my perception that it's better to just have a\n> > > statically sized memory allocation for this. 
Noteworthy that SLRU stats\n> > > are done in a fixed size allocation as well...\n>\n> Through a long discussion, all review comments pointed out here have\n> been addressed. I summarized that individual review comments are fixed\n> by which commit to avoid any confusion later.\n>\n> >\n> > As part of reviewing the replication slot stats patch I looked at\n> > replication slot stats a fair bit, and I've a few misgivings. First,\n> > about the pgstat.c side of things:\n> >\n> > - If somehow slot stat drop messages got lost (remember pgstat\n> > communication is lossy!), we'll just stop maintaining stats for slots\n> > created later, because there'll eventually be no space for keeping\n> > stats for another slot.\n> >\n> > - If max_replication_slots was lowered between a restart,\n> > pgstat_read_statfile() will happily write beyond the end of\n> > replSlotStats.\n> >\n> > - pgstat_reset_replslot_counter() acquires ReplicationSlotControlLock. I\n> > think pgstat.c has absolutely no business doing things on that level.\n> >\n> > - We do a linear search through all replication slots whenever receiving\n> > stats for a slot. Even though there'd be a perfectly good index to\n> > just use all throughout - the slots index itself. It looks to me like\n> > slots stat reports can be fairly frequent in some workloads, so that\n> > doesn't seem great.\n>\n> Fixed by 3fa17d37716.\n>\n> >\n> > - PgStat_ReplSlotStats etc use slotname[NAMEDATALEN]. Why not just NameData?\n> >\n> > - pgstat_report_replslot() already has a lot of stats parameters, it\n> > seems likely that we'll get more. 
Seems like we should just use a\n> > struct of stats updates.\n>\n> Fixed by cca57c1d9bf.\n>\n> >\n> > And then more generally about the feature:\n> > - If a slot was used to stream out a large amount of changes (say an\n> > initial data load), but then replication is interrupted before the\n> > transaction is committed/aborted, stream_bytes will not reflect the\n> > many gigabytes of data we may have sent.\n>\n> Fixed by 592f00f8d.\n>\n> > - I seems weird that we went to the trouble of inventing replication\n> > slot stats, but then limit them to logical slots, and even there don't\n> > record the obvious things like the total amount of data sent.\n>\n> Fixed by f5fc2f5b23d.\n>\n> >\n> > I think the best way to address the more fundamental \"pgstat related\"\n> > complaints is to change how replication slot stats are\n> > \"addressed\". Instead of using the slots name, report stats using the\n> > index in ReplicationSlotCtl->replication_slots.\n> >\n> > That removes the risk of running out of \"replication slot stat slots\":\n> > If we loose a drop message, the index eventually will be reused and we\n> > likely can detect that the stats were for a different slot by comparing\n> > the slot name.\n> >\n> > It also makes it easy to handle the issue of max_replication_slots being\n> > lowered and there still being stats for a slot - we simply can skip\n> > restoring that slots data, because we know the relevant slot can't exist\n> > anymore. And we can make the initial pgstat_report_replslot() during\n> > slot creation use a\n> >\n> > I'm wondering if we should just remove the slot name entirely from the\n> > pgstat.c side of things, and have pg_stat_get_replication_slots()\n> > inquire about slots by index as well and get the list of slots to report\n> > stats for from slot.c infrastructure.\n>\n> We fixed the problem of \"running out of replication slot stat slots\"\n> by using HTAB to store slot stats, see 3fa17d37716. 
The slot stats\n> could be orphaned if a slot drop message gets lost. But we constantly\n> check and remove them in pgstat_vacuum_stat().\n>\n\nThanks for the summarization. I don't find anything that is left\nunaddressed. I think we can wait for a day or two to see if Andres or\nanyone else sees anything that is left unaddressed and then we can\nclose the open item.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 7 May 2021 08:03:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Fri, May 7, 2021 at 8:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Thanks for the summarization. I don't find anything that is left\n> unaddressed. I think we can wait for a day or two to see if Andres or\n> anyone else sees anything that is left unaddressed and then we can\n> close the open item.\n>\n\nI have closed this open item.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 11 May 2021 07:29:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I have closed this open item.\n\nThat seems a little premature, considering that the\ncontrib/test_decoding/sql/stats.sql test case is still failing regularly.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2021-05-11%2019%3A14%3A53\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=peripatus&dt=2021-05-07%2010%3A20%3A21\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 May 2021 17:32:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 6:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > I have closed this open item.\n>\n> That 
seems a little premature, considering that the\n> contrib/test_decoding/sql/stats.sql test case is still failing regularly.\n\nThank you for reporting.\n\nUgh, since by commit 592f00f8de we send slot stats after every\nspill/stream, it’s possible that we report slot stats that have non-zero\ncounters for spill_bytes/txns and zeroes for total_bytes/txns. It\nseems to me it’s legitimate that the slot stats view shows non-zero\nvalues for spill_bytes/txns and zero values for total_bytes/txns\nduring decoding a large transaction. So I think we can fix the test\nscript so that it checks only spill_bytes/txns when checking spilled\ntransactions.\n\nFor the record, during streaming transactions, IIUC this kind of thing\ndoesn’t happen since we update both total_bytes/txns and\nstream_bytes/txns before reporting slot stats.\n\nI've attached a patch to fix it.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Wed, 12 May 2021 07:29:59 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 4:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 6:32 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > I have closed this open item.\n> >\n> > That seems a little premature, considering that the\n> > contrib/test_decoding/sql/stats.sql test case is still failing regularly.\n>\n> Thank you for reporting.\n>\n> Ugh, since by commit 592f00f8de we send slot stats every after\n> spil/stream it’s possible that we report slot stats that have non-zero\n> counters for spill_bytes/txns and zeroes for total_bytes/txns. It\n> seems to me it’s legitimate that the slot stats view shows non-zero\n> values for spill_bytes/txns and zero values for total_bytes/txns\n> during decoding a large transaction. 
So I think we can fix the test\n> script so that it checks only spill_bytes/txns when checking spilled\n> transactions.\n>\n\nYour analysis and fix look correct to me. I'll test and push your\npatch if I don't see any problem with it.\n\n> For the record, during streaming transactions, IIUC this kind of thing\n> doesn’t happen since we update both total_bytes/txns and\n> stream_bytes/txns before reporting slot stats.\n>\n\nRight, because during streaming, we send the data to the decoding plugin.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 07:53:04 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 7:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 4:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Ugh, since by commit 592f00f8de we send slot stats every after\n> > spil/stream it’s possible that we report slot stats that have non-zero\n> > counters for spill_bytes/txns and zeroes for total_bytes/txns. It\n> > seems to me it’s legitimate that the slot stats view shows non-zero\n> > values for spill_bytes/txns and zero values for total_bytes/txns\n> > during decoding a large transaction. So I think we can fix the test\n> > script so that it checks only spill_bytes/txns when checking spilled\n> > transactions.\n> >\n>\n> Your analysis and fix look correct to me.\n>\n\nI think the part of the test that tests the stats after resetting it\nmight give different results. This can happen because in the previous\ntest we spill multiple times (spill_count is 12 in my testing) and it\nis possible that some of the spill stats messages is received by stats\ncollector after the reset message. If this theory is correct then it\nbetter that we remove the test for reset stats and the test after it\n\"decode and check stats again.\". 
This is not directly related to your\npatch or buildfarm failure but I guess this can happen and we might\nsee such a failure in future.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 12 May 2021 09:07:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 9:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 7:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 4:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Ugh, since by commit 592f00f8de we send slot stats every after\n> > > spil/stream it’s possible that we report slot stats that have non-zero\n> > > counters for spill_bytes/txns and zeroes for total_bytes/txns. It\n> > > seems to me it’s legitimate that the slot stats view shows non-zero\n> > > values for spill_bytes/txns and zero values for total_bytes/txns\n> > > during decoding a large transaction. So I think we can fix the test\n> > > script so that it checks only spill_bytes/txns when checking spilled\n> > > transactions.\n> > >\n> >\n> > Your analysis and fix look correct to me.\n> >\n>\n> I think the part of the test that tests the stats after resetting it\n> might give different results. This can happen because in the previous\n> test we spill multiple times (spill_count is 12 in my testing) and it\n> is possible that some of the spill stats messages is received by stats\n> collector after the reset message. If this theory is correct then it\n> better that we remove the test for reset stats and the test after it\n> \"decode and check stats again.\". This is not directly related to your\n> patch or buildfarm failure but I guess this can happen and we might\n> see such a failure in future.\n\nI agree with your analysis to remove that test. 
Attached patch has the\nchanges for the same.\n\nRegards,\nVignesh", "msg_date": "Wed, 12 May 2021 09:49:34 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> I agree with your analysis to remove that test. Attached patch has the\n> changes for the same.\n\nIs there any value in converting the test case into a TAP test that\ncould be more flexible about the expected output? I'm mainly wondering\nwhether there are any code paths that this test forces the server through,\nwhich would otherwise lack coverage.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 00:29:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > I agree with your analysis to remove that test. Attached patch has the\n> > changes for the same.\n>\n> Is there any value in converting the test case into a TAP test that\n> could be more flexible about the expected output? I'm mainly wondering\n> whether there are any code paths that this test forces the server through,\n> which would otherwise lack coverage.\n\nRemoving this test does not reduce code coverage. This test is\nbasically to decode and check the stats again, it is kind of a\nrepetitive test. The problem with this test here is that when a\ntransaction is spilled, the statistics for the spill transaction will\nbe sent to the statistics collector as and when the transaction is\nspilled. This test sends spill stats around 12 times. The test expects\nto reset the stats and check the stats gets updated when we get the\nchanges. 
We cannot validate reset slot stats results here, as it could\nbe possible that in some machines the stats collector receives the\nspilled transaction stats after getting reset slots. This same problem\nwill exist with tap tests too. So I feel it is better to remove this\ntest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 12 May 2021 10:12:20 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Wed, May 12, 2021 at 9:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Is there any value in converting the test case into a TAP test that\n>> could be more flexible about the expected output? I'm mainly wondering\n>> whether there are any code paths that this test forces the server through,\n>> which would otherwise lack coverage.\n\n> Removing this test does not reduce code coverage. This test is\n> basically to decode and check the stats again, it is kind of a\n> repetitive test. The problem with this test here is that when a\n> transaction is spilled, the statistics for the spill transaction will\n> be sent to the statistics collector as and when the transaction is\n> spilled. This test sends spill stats around 12 times. The test expects\n> to reset the stats and check the stats gets updated when we get the\n> changes. We cannot validate reset slot stats results here, as it could\n> be possible that in some machines the stats collector receives the\n> spilled transaction stats after getting reset slots. This same problem\n> will exist with tap tests too. 
So I feel it is better to remove this\n> test.\n\nOK, I'm satisfied as long as we've considered the code-coverage angle.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 May 2021 00:49:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 1:19 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 9:08 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 7:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Wed, May 12, 2021 at 4:00 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > > >\n> > > > Ugh, since by commit 592f00f8de we send slot stats every after\n> > > > spil/stream it’s possible that we report slot stats that have non-zero\n> > > > counters for spill_bytes/txns and zeroes for total_bytes/txns. It\n> > > > seems to me it’s legitimate that the slot stats view shows non-zero\n> > > > values for spill_bytes/txns and zero values for total_bytes/txns\n> > > > during decoding a large transaction. So I think we can fix the test\n> > > > script so that it checks only spill_bytes/txns when checking spilled\n> > > > transactions.\n> > > >\n> > >\n> > > Your analysis and fix look correct to me.\n> > >\n> >\n> > I think the part of the test that tests the stats after resetting it\n> > might give different results. This can happen because in the previous\n> > test we spill multiple times (spill_count is 12 in my testing) and it\n> > is possible that some of the spill stats messages is received by stats\n> > collector after the reset message. If this theory is correct then it\n> > better that we remove the test for reset stats and the test after it\n> > \"decode and check stats again.\". This is not directly related to your\n> > patch or buildfarm failure but I guess this can happen and we might\n> > see such a failure in future.\n\nGood point. 
I agree to remove this test.\n\n>\n> I agree with your analysis to remove that test. Attached patch has the\n> changes for the same.\n\nThank you for the patch. The patch looks good to me. I also agree that\nremoving the test doesn't reduce the test coverage.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 12 May 2021 19:32:10 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Wed, May 12, 2021 at 4:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 1:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > > I think the part of the test that tests the stats after resetting it\n> > > might give different results. This can happen because in the previous\n> > > test we spill multiple times (spill_count is 12 in my testing) and it\n> > > is possible that some of the spill stats messages is received by stats\n> > > collector after the reset message. If this theory is correct then it\n> > > better that we remove the test for reset stats and the test after it\n> > > \"decode and check stats again.\". This is not directly related to your\n> > > patch or buildfarm failure but I guess this can happen and we might\n> > > see such a failure in future.\n>\n> Good point. I agree to remove this test.\n>\n> >\n> > I agree with your analysis to remove that test. Attached patch has the\n> > changes for the same.\n>\n> Thank you for the patch. The patch looks good to me. 
I also agree that\n> removing the test doesn't reduce the test coverage.\n>\n\nThanks, I have pushed the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 13 May 2021 11:21:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 13, 2021 at 11:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, May 12, 2021 at 4:02 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Wed, May 12, 2021 at 1:19 PM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > > I think the part of the test that tests the stats after resetting it\n> > > > might give different results. This can happen because in the previous\n> > > > test we spill multiple times (spill_count is 12 in my testing) and it\n> > > > is possible that some of the spill stats messages is received by stats\n> > > > collector after the reset message. If this theory is correct then it\n> > > > better that we remove the test for reset stats and the test after it\n> > > > \"decode and check stats again.\". This is not directly related to your\n> > > > patch or buildfarm failure but I guess this can happen and we might\n> > > > see such a failure in future.\n> >\n> > Good point. I agree to remove this test.\n> >\n> > >\n> > > I agree with your analysis to remove that test. Attached patch has the\n> > > changes for the same.\n> >\n> > Thank you for the patch. The patch looks good to me. 
I also agree that\n> > removing the test doesn't reduce the test coverage.\n> >\n>\n> Thanks, I have pushed the patch.\n>\n\nThanks for pushing the patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 13 May 2021 11:30:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Thu, May 13, 2021 at 11:30 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n\nDo we want to update the information about pg_stat_replication_slots\nat the following place in docs\nhttps://www.postgresql.org/docs/devel/logicaldecoding-catalogs.html?\n\nIf so, feel free to submit the patch for it?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 24 May 2021 09:37:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 24, 2021 at 9:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 13, 2021 at 11:30 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n>\n> Do we want to update the information about pg_stat_replication_slots\n> at the following place in docs\n> https://www.postgresql.org/docs/devel/logicaldecoding-catalogs.html?\n>\n> If so, feel free to submit the patch for it?\n\nAdding it will be useful, the attached patch has the changes for the same.\n\nRegards,\nVignesh", "msg_date": "Mon, 24 May 2021 10:08:53 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" }, { "msg_contents": "On Mon, May 24, 2021 at 10:09 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, May 24, 2021 at 9:38 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, May 13, 2021 at 11:30 AM vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> >\n> > Do we want to update the information about pg_stat_replication_slots\n> > at the following place in docs\n> > 
https://www.postgresql.org/docs/devel/logicaldecoding-catalogs.html?\n> >\n> > If so, feel free to submit the patch for it?\n>\n> Adding it will be useful, the attached patch has the changes for the same.\n>\n\nThanks for the patch, pushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 25 May 2021 15:00:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replication slot stats misgivings" } ]
[ { "msg_contents": "Hi,\n\nI am working on Kyotaro Horiguchi's shared memory stats patch [1] with\nthe goal of getting it into a shape that I'd be happy to commit. That\nthread is quite long and most are probably skipping over new messages in\nit.\n\nThere are two high-level design decisions / questions that I think\nwarrant a wider audience (I'll keep lower-level discussion in the other\nthread).\n\nIn case it is not obvious, the goal of the shared memory stats patch is\nto replace the existing statistics collector, to which new stats are\nreported via a UDP socket, and where clients read data from the stats\ncollector by reading the entire database's stats from disk.\n\nThe replacement is to put the statistics into a shared memory\nsegment. Fixed-size stats (e.g. bgwriter, checkpointer, wal activity,\netc) are stored directly in a struct in memory. Stats for objects\nwhere a variable number exists, e.g. tables, are addressed via a dshash\ntable that points to the stats that are in turn allocated using dsa.h.\n\n\n1) What kind of consistency do we want from the pg_stats_* views?\n\nRight now the first access to stats in a transaction will trigger a read\nof both the global and per-database stats from disk. If the on-disk\nstate is too old, we'll ask the stats collector to write out a new file\na couple times.\n\nFor the rest of the transaction that in-memory state is used unless\npg_stat_clear_snapshot() is called. Which means that multiple reads from\ne.g. pg_stat_user_tables will show the same results as before [2].\n\nThat makes stats accesses quite expensive if there are lots of\nobjects.\n\nBut it also means that separate stats accesses - which happen all the\ntime - return something repeatable and kind of consistent.\n\nNow, the stats aren't really consistent in the sense of being really\naccurate: UDP messages can be lost, some of the stats generated by a TX\nmight not yet have been received, and other transactions haven't yet sent\ntheirs. 
Etc.\n\n\nWith the shared memory patch the concept of copying all stats for the\ncurrent database into local memory at the time of the first stats access\ndoesn't make sense to me. Horiguchi-san had actually implemented that,\nbut I argued that that would be cargo-culting an efficiency hack\nrequired by the old storage model forward.\n\nThe cost of doing this is substantial. On master, with a database that\ncontains 1 million empty tables, any stats access takes ~0.4s and\nincreases memory usage by 170MB.\n\n\n1.1)\n\nI hope everybody agrees that stats don't need to be\nthe way they were at the time of the first stats access in a transaction,\neven if that first access was to a different stat object than the\ncurrently accessed stat?\n\n\n1.2)\n\nDo we expect repeated accesses to the same stat to stay the same through\nthe transaction? The easiest way to implement stats accesses is to\nsimply fetch the stats from shared memory ([3]). That would obviously\nresult in repeated accesses to the same stat potentially returning\nchanging results over time.\n\nI think that's perfectly fine, desirable even, for pg_stat_*.\n\n\n1.3)\n\nWhat kind of consistency do we expect between columns of views like\npg_stat_all_tables?\n\nSeveral of the stats views aren't based on SRFs or composite-type\nreturning functions, but instead fetch each stat separately:\n\nE.g. 
pg_stat_all_tables:\n SELECT c.oid AS relid,\n n.nspname AS schemaname,\n c.relname,\n pg_stat_get_numscans(c.oid) AS seq_scan,\n pg_stat_get_tuples_returned(c.oid) AS seq_tup_read,\n sum(pg_stat_get_numscans(i.indexrelid))::bigint AS idx_scan,\n sum(pg_stat_get_tuples_fetched(i.indexrelid))::bigint + pg_stat_get_tuples_fetched(c.oid) AS idx_tup_fetch,\n pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins,\n...\n pg_stat_get_autoanalyze_count(c.oid) AS autoanalyze_count\n FROM pg_class c\n LEFT JOIN pg_index i ON c.oid = i.indrelid\n...\n\nWhich means that if we do not cache stats, additional stats updates\ncould have been applied between two stats accessors. E.g. the seq_scan\nfrom before some pgstat_report_stat() but the seq_tup_read from after.\n\nIf we instead fetch all of a table's stats in one go, we would get\nconsistency between the columns. But obviously that'd require changing\nall the stats views.\n\nHoriguchi-san, in later iterations of the patch, attempted to address\nthis issue by adding one-entry caches below\npgstat_fetch_stat_tabentry(), pgstat_fetch_stat_dbentry() etc, which is\nwhat pg_stat_get_numscans(), pg_stat_get_db_tuples_updated() etc use.\n\n\nBut I think that leads to very confusing results. Access stats for the\nsame relation multiple times in a row? Do not see updates. Switch\nbetween e.g. a table and its indexes? See updates.\n\n\nI personally think it's fine to have short-term divergences between the\ncolumns. The stats aren't that accurate anyway, as we don't submit them\nall the time. 
And that if we want consistency between columns, we\ninstead should replace the current view definitions with [set of] record\nreturning functions - everything else seems to lead to weird tradeoffs.\n\n\n\n2) How to remove stats of dropped objects?\n\nIn the stats collector world stats for dropped objects (tables, indexes,\nfunctions, etc) are dropped after the fact, using a pretty expensive\nprocess:\n\nEach autovacuum worker cycle and each manual VACUUM does\npgstat_vacuum_stat() to detect since-dropped objects. It does that by\nbuilding hash-tables for all databases, tables and functions, and then\ncomparing that against a freshly loaded stats snapshot. All stats objects\nnot in pg_class etc are dropped.\n\nThe patch currently copies that approach, although that adds some\ncomplications, mainly around [3].\n\n\nAccessing all database objects after each VACUUM, even if the table was\ntiny, isn't great, performance-wise. In a fresh master database with 1\nmillion functions, a VACUUM of an empty table takes ~0.5s, with 1\nmillion tables it's ~1s. Due to partitioning, tables with many database\nobjects are of course getting more common.\n\n\nThere isn't really a better approach in the stats collector world. As\nmessages to the stats collector can get lost, we need to be able to\nre-do dropping of dead stats objects.\n\n\nBut now we could instead schedule stats to be removed at commit\ntime. That's not trivial of course, as we'd need to handle cases where\nthe commit fails after the commit record, but before processing the\ndropped stats.\n\nBut it seems that integrating the stats that need to be dropped into the\ncommit record would make a lot of sense. With benefits beyond the\n[auto-]vacuum efficiency gains, e.g. neatly integrating into streaming\nreplication and even opening the door to keeping stats across crashes.\n\n\nMy gut feeling here is to try to fix the remaining issues in the\n\"collect oids\" approach for 14 and to try to change the approach in\n15. 
And, if that proves too hard, try to see how hard it'd be to\n\"accurately\" drop. But I'm also not sure - it might be smarter to go\nfull in, to avoid introducing a system that we'll just rip out again.\n\n\nComments?\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://www.postgresql.org/message-id/20180629.173418.190173462.horiguchi.kyotaro%40lab.ntt.co.jp\n\n[2] Except that new tables will show up with lots of 0s\n\n[3] There is a cache to avoid repeated dshash lookups for\n    previously-accessed stats, to avoid contention. But that just points\n    to the shared memory area with the stats.\n\n\n", "msg_date": "Fri, 19 Mar 2021 16:51:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "> But now we could instead schedule stats to be removed at commit\ntime. That's not trivial of course, as we'd need to handle cases where\nthe commit fails after the commit record, but before processing the\ndropped stats.\n\nWe likely cannot remove them at commit time, but only after the\noldest open snapshot moves past that commit?\n\nWould an approach where we keep stats in a structure logically similar\nto the MVCC we use for normal tables be completely unfeasible?\n\nWe would only need to keep one version per backend in a transaction.\n\n---\nHannu\n\n\n\n\nOn Sat, Mar 20, 2021 at 12:51 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I am working on Kyotaro Horiguchi's shared memory stats patch [1] with\n> the goal of getting it into a shape that I'd be happy to commit. 
That\n> thread is quite long and most are probably skipping over new messages in\n> it.\n>\n> There are two high-level design decisions / questions that I think\n> warrant a wider audience (I'll keep lower level discussion in the other\n> thread).\n>\n> In case it is not obvious, the goal of the shared memory stats patch is\n> to replace the existing statistics collector, to which new stats are\n> reported via an UDP socket, and where clients read data from the stats\n> collector by reading the entire database's stats from disk.\n>\n> The replacement is to put the statistics into a shared memory\n> segment. Fixed-size stats (e.g. bgwriter, checkpointer, wal activity,\n> etc) being stored directly in a struct in memory. Stats for objects\n> where a variable number exists, e.g. tables, are addressed via a dshash.\n> table that points to the stats that are in turn allocated using dsa.h.\n>\n>\n> 1) What kind of consistency do we want from the pg_stats_* views?\n>\n> Right now the first access to stats in a transaction will trigger a read\n> of both the global and per-database stats from disk. If the on-disk\n> state is too old, we'll ask the stats collector to write out a new file\n> a couple times.\n>\n> For the rest of the transaction that in-memory state is used unless\n> pg_stat_clear_snapshot() is called. Which means that multiple reads from\n> e.g. pg_stat_user_tables will show the same results as before [2].\n>\n> That makes stats accesses quite expensive if there are lots of\n> objects.\n>\n> But it also means that separate stats accesses - which happen all the\n> time - return something repeatable and kind of consistent.\n>\n> Now, the stats aren't really consistent in the sense that they are\n> really accurate, UDP messages can be lost, or only some of the stats\n> generated by a TX might not yet have been received, other transactions\n> haven't yet sent them. 
Etc.\n>\n>\n> With the shared memory patch the concept of copying all stats for the\n> current database into local memory at the time of the first stats access\n> doesn't make sense to me. Horiguchi-san had actually implemented that,\n> but I argued that that would be cargo-culting an efficiency hack\n> required by the old storage model forward.\n>\n> The cost of doing this is substantial. On master, with a database that\n> contains 1 million empty tables, any stats access takes ~0.4s and\n> increases memory usage by 170MB.\n>\n>\n> 1.1)\n>\n> I hope everybody agrees with not requiring that stats don't need to be\n> the way they were at the time of first stat access in a transaction,\n> even if that first access was to a different stat object than the\n> currently accessed stat?\n>\n>\n> 1.2)\n>\n> Do we expect repeated accesses to the same stat to stay the same through\n> the transaction? The easiest way to implement stats accesses is to\n> simply fetch the stats from shared memory ([3]). That would obviously\n> result in repeated accesses to the same stat potentially returning\n> changing results over time.\n>\n> I think that's perfectly fine, desirable even, for pg_stat_*.\n>\n>\n> 1.3)\n>\n> What kind of consistency do we expect between columns of views like\n> pg_stat_all_tables?\n>\n> Several of the stats views aren't based on SRFs or composite-type\n> returning functions, but instead fetch each stat separately:\n>\n> E.g. 
pg_stat_all_tables:\n> SELECT c.oid AS relid,\n> n.nspname AS schemaname,\n> c.relname,\n> pg_stat_get_numscans(c.oid) AS seq_scan,\n> pg_stat_get_tuples_returned(c.oid) AS seq_tup_read,\n> sum(pg_stat_get_numscans(i.indexrelid))::bigint AS idx_scan,\n> sum(pg_stat_get_tuples_fetched(i.indexrelid))::bigint + pg_stat_get_tuples_fetched(c.oid) AS idx_tup_fetch,\n> pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins,\n> ...\n> pg_stat_get_autoanalyze_count(c.oid) AS autoanalyze_count\n> FROM pg_class c\n> LEFT JOIN pg_index i ON c.oid = i.indrelid\n> ...\n>\n> Which means that if we do not cache stats, additional stats updates\n> could have been applied between two stats accessors. E.g the seq_scan\n> from before some pgstat_report_stat() but the seq_tup_read from after.\n>\n> If we instead fetch all of a table's stats in one go, we would get\n> consistency between the columns. But obviously that'd require changing\n> all the stats views.\n>\n> Horiguchi-san, in later iterations of the patch, attempted to address\n> this issue by adding a one-entry caches below\n> pgstat_fetch_stat_tabentry(), pgstat_fetch_stat_dbentry() etc, which is\n> what pg_stat_get_numscans(), pg_stat_get_db_tuples_updated() etc use.\n>\n>\n> But I think that leads to very confusing results. Access stats for the\n> same relation multiple times in a row? Do not see updates. Switch\n> between e.g. a table and its indexes? See updates.\n>\n>\n> I personally think it's fine to have short-term divergences between the\n> columns. The stats aren't that accurate anyway, as we don't submit them\n> all the time. 
And that if we want consistency between columns, we\n> instead should replace the current view definitions with[set of] record\n> returning function - everything else seems to lead to weird tradeoffs.\n>\n>\n>\n> 2) How to remove stats of dropped objects?\n>\n> In the stats collector world stats for dropped objects (tables, indexes,\n> functions, etc) are dropped after the fact, using a pretty expensive\n> process:\n>\n> Each autovacuum worker cycle and each manual VACUUM does\n> pgstat_vacuum_stat() to detect since-dropped objects. It does that by\n> building hash-tables for all databases, tables and functions, and then\n> comparing that against a freshly loaded stats snapshot. All stats object\n> not in pg_class etc are dropped.\n>\n> The patch currently copies that approach, although that adds some\n> complications, mainly around [3].\n>\n>\n> Accessing all database objects after each VACUUM, even if the table was\n> tiny, isn't great, performance wise. In a fresh master database with 1\n> million functions, a VACUUM of an empty table takes ~0.5s, with 1\n> million tables it's ~1s. Due to partitioning tables with many database\n> objects are of course getting more common.\n>\n>\n> There isn't really a better approach in the stats collector world. As\n> messages to the stats collector can get lost, we need to to be able to\n> re-do dropping of dead stats objects.\n>\n>\n> But now we could instead schedule stats to be removed at commit\n> time. That's not trivial of course, as we'd need to handle cases where\n> the commit fails after the commit record, but before processing the\n> dropped stats.\n>\n> But it seems that integrating the stats that need to be dropped into the\n> commit message would make a lot of sense. With benefits beyond the\n> [auto-]vacuum efficiency gains, e.g. 
neatly integrate into streaming\n> replication and even opening the door to keep stats across crashes.\n>\n>\n> My gut feeling here is to try to to fix the remaining issues in the\n> \"collect oids\" approach for 14 and to try to change the approach in\n> 15. And, if that proves too hard, try to see how hard it'd be to\n> \"accurately\" drop. But I'm also not sure - it might be smarter to go\n> full in, to avoid introducing a system that we'll just rip out again.\n>\n>\n> Comments?\n>\n>\n> Greetings,\n>\n> Andres Freund\n>\n> [1] https://www.postgresql.org/message-id/20180629.173418.190173462.horiguchi.kyotaro%40lab.ntt.co.jp\n>\n> [2] Except that new tables with show up with lots of 0s\n>\n> [3] There is a cache to avoid repeated dshash lookups for\n> previously-accessed stats, to avoid contention. But that just points\n> to the shared memory area with the stats.\n>\n>\n\n\n", "msg_date": "Sat, 20 Mar 2021 01:16:31 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nOn 2021-03-20 01:16:31 +0100, Hannu Krosing wrote:\n> > But now we could instead schedule stats to be removed at commit\n> time. That's not trivial of course, as we'd need to handle cases where\n> the commit fails after the commit record, but before processing the\n> dropped stats.\n> \n> We likely can not remove them at commit time, but only after the\n> oldest open snapshot moves parts that commit ?\n\nI don't see why? A dropped table is dropped, and cannot be accessed\nanymore. Snapshots don't play a role here - the underlying data is gone\n(minus a placeholder file to avoid reusing the oid, until the next\ncommit). 
If you run a vacuum on some unrelated table in the same\ndatabase, the stats for a dropped table will already be removed long\nbefore there's no relation that could theoretically open the table.\n\nNote that table level locking would prevent a table from being dropped\nif a long-running transaction has already accessed it.\n\n\n> Would an approach where we keep stats in a structure logically similar\n> to MVCC we use for normal tables be completely unfeasible ?\n\nYes, pretty unfeasible. Stats should work on standbys too...\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Fri, 19 Mar 2021 17:21:52 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Sat, Mar 20, 2021 at 1:21 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-20 01:16:31 +0100, Hannu Krosing wrote:\n> > > But now we could instead schedule stats to be removed at commit\n> > time. That's not trivial of course, as we'd need to handle cases where\n> > the commit fails after the commit record, but before processing the\n> > dropped stats.\n> >\n> > We likely can not remove them at commit time, but only after the\n> > oldest open snapshot moves parts that commit ?\n>\n> I don't see why? A dropped table is dropped, and cannot be accessed\n> anymore. Snapshots don't play a role here - the underlying data is gone\n> (minus a placeholder file to avoid reusing the oid, until the next\n> commit). If you run a vacuum on some unrelated table in the same\n> database, the stats for a dropped table will already be removed long\n> before there's no relation that could theoretically open the table.\n>\n> Note that table level locking would prevent a table from being dropped\n> if a long-running transaction has already accessed it.\n\nYeah, just checked. 
DROP TABLE waits until the reading transaction finishes.\n\n>\n> > > Would an approach where we keep stats in a structure logically similar\n> > > to MVCC we use for normal tables be completely unfeasible ?\n>\n> Yes, pretty unfeasible. Stats should work on standbys too...\n\nI did not mean actually using MVCC and real transaction ids but rather a\nsimilar approach, where (potentially) different stats rows are kept\nfor each backend.\n\nThis of course only is a win in case multiple backends can use the\nsame stats row. Else it is easier to copy the backend's version into\nbackend local memory.\n\nBut I myself do not see any problem with stats rows changing all the time.\n\nThe only worry would be parts of the same row being out of sync. This\ncan of course be solved by locking, but for a large number of backends\nwith tiny transactions this locking itself could potentially become a\nproblem. Here alternating between two or more versions could help and\nthen it also starts to make sense to keep the copies in shared memory.\n\n\n\n> Regards,\n>\n> Andres\n\n\n", "msg_date": "Sat, 20 Mar 2021 01:31:13 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> I am working on Kyotaro Horiguchi's shared memory stats patch [1] with\n> the goal of getting it into a shape that I'd be happy to commit. That\n> thread is quite long and most are probably skipping over new messages in\n> it.\n\nAwesome, +1.\n\n> 1) What kind of consistency do we want from the pg_stats_* views?\n> \n> Right now the first access to stats in a transaction will trigger a read\n> of both the global and per-database stats from disk. 
If the on-disk\n> state is too old, we'll ask the stats collector to write out a new file\n> a couple times.\n> \n> For the rest of the transaction that in-memory state is used unless\n> pg_stat_clear_snapshot() is called. Which means that multiple reads from\n> e.g. pg_stat_user_tables will show the same results as before [2].\n> \n> That makes stats accesses quite expensive if there are lots of\n> objects.\n> \n> But it also means that separate stats accesses - which happen all the\n> time - return something repeatable and kind of consistent.\n> \n> Now, the stats aren't really consistent in the sense that they are\n> really accurate, UDP messages can be lost, or only some of the stats\n> generated by a TX might not yet have been received, other transactions\n> haven't yet sent them. Etc.\n> \n> \n> With the shared memory patch the concept of copying all stats for the\n> current database into local memory at the time of the first stats access\n> doesn't make sense to me. Horiguchi-san had actually implemented that,\n> but I argued that that would be cargo-culting an efficiency hack\n> required by the old storage model forward.\n> \n> The cost of doing this is substantial. On master, with a database that\n> contains 1 million empty tables, any stats access takes ~0.4s and\n> increases memory usage by 170MB.\n> \n> \n> 1.1)\n> \n> I hope everybody agrees with not requiring that stats don't need to be\n> the way they were at the time of first stat access in a transaction,\n> even if that first access was to a different stat object than the\n> currently accessed stat?\n\nAgreed, that doesn't seem necessary and blowing up backend memory usage\nby copying all the stats into local memory seems pretty terrible.\n\n> 1.2)\n> \n> Do we expect repeated accesses to the same stat to stay the same through\n> the transaction? The easiest way to implement stats accesses is to\n> simply fetch the stats from shared memory ([3]). 
That would obviously\n> result in repeated accesses to the same stat potentially returning\n> changing results over time.\n> \n> I think that's perfectly fine, desirable even, for pg_stat_*.\n\nThis seems alright to me.\n\n> 1.3)\n> \n> What kind of consistency do we expect between columns of views like\n> pg_stat_all_tables?\n> \n> Several of the stats views aren't based on SRFs or composite-type\n> returning functions, but instead fetch each stat separately:\n> \n> E.g. pg_stat_all_tables:\n> SELECT c.oid AS relid,\n> n.nspname AS schemaname,\n> c.relname,\n> pg_stat_get_numscans(c.oid) AS seq_scan,\n> pg_stat_get_tuples_returned(c.oid) AS seq_tup_read,\n> sum(pg_stat_get_numscans(i.indexrelid))::bigint AS idx_scan,\n> sum(pg_stat_get_tuples_fetched(i.indexrelid))::bigint + pg_stat_get_tuples_fetched(c.oid) AS idx_tup_fetch,\n> pg_stat_get_tuples_inserted(c.oid) AS n_tup_ins,\n> ...\n> pg_stat_get_autoanalyze_count(c.oid) AS autoanalyze_count\n> FROM pg_class c\n> LEFT JOIN pg_index i ON c.oid = i.indrelid\n> ...\n> \n> Which means that if we do not cache stats, additional stats updates\n> could have been applied between two stats accessors. E.g the seq_scan\n> from before some pgstat_report_stat() but the seq_tup_read from after.\n> \n> If we instead fetch all of a table's stats in one go, we would get\n> consistency between the columns. But obviously that'd require changing\n> all the stats views.\n> \n> Horiguchi-san, in later iterations of the patch, attempted to address\n> this issue by adding a one-entry caches below\n> pgstat_fetch_stat_tabentry(), pgstat_fetch_stat_dbentry() etc, which is\n> what pg_stat_get_numscans(), pg_stat_get_db_tuples_updated() etc use.\n> \n> \n> But I think that leads to very confusing results. Access stats for the\n> same relation multiple times in a row? Do not see updates. Switch\n> between e.g. a table and its indexes? See updates.\n> \n> \n> I personally think it's fine to have short-term divergences between the\n> columns. 
The stats aren't that accurate anyway, as we don't submit them\n> all the time. And that if we want consistency between columns, we\n> instead should replace the current view definitions with[set of] record\n> returning function - everything else seems to lead to weird tradeoffs.\n\nAgreed, doesn't seem like a huge issue to have short-term divergences\nbut if we want to fix them then flipping those all to SRFs would make\nthe most sense.\n\n> 2) How to remove stats of dropped objects?\n> \n> In the stats collector world stats for dropped objects (tables, indexes,\n> functions, etc) are dropped after the fact, using a pretty expensive\n> process:\n> \n> Each autovacuum worker cycle and each manual VACUUM does\n> pgstat_vacuum_stat() to detect since-dropped objects. It does that by\n> building hash-tables for all databases, tables and functions, and then\n> comparing that against a freshly loaded stats snapshot. All stats object\n> not in pg_class etc are dropped.\n> \n> The patch currently copies that approach, although that adds some\n> complications, mainly around [3].\n> \n> \n> Accessing all database objects after each VACUUM, even if the table was\n> tiny, isn't great, performance wise. In a fresh master database with 1\n> million functions, a VACUUM of an empty table takes ~0.5s, with 1\n> million tables it's ~1s. Due to partitioning tables with many database\n> objects are of course getting more common.\n> \n> \n> There isn't really a better approach in the stats collector world. As\n> messages to the stats collector can get lost, we need to to be able to\n> re-do dropping of dead stats objects.\n> \n> \n> But now we could instead schedule stats to be removed at commit\n> time. That's not trivial of course, as we'd need to handle cases where\n> the commit fails after the commit record, but before processing the\n> dropped stats.\n> \n> But it seems that integrating the stats that need to be dropped into the\n> commit message would make a lot of sense. 
With benefits beyond the\n> [auto-]vacuum efficiency gains, e.g. neatly integrate into streaming\n> replication and even opening the door to keep stats across crashes.\n> \n> \n> My gut feeling here is to try to to fix the remaining issues in the\n> \"collect oids\" approach for 14 and to try to change the approach in\n> 15. And, if that proves too hard, try to see how hard it'd be to\n> \"accurately\" drop. But I'm also not sure - it might be smarter to go\n> full in, to avoid introducing a system that we'll just rip out again.\n> \n> Comments?\n\nThe current approach sounds pretty terrible and propagating that forward\ndoesn't seem great. Guess here I'd disagree with your gut feeling and\nencourage trying to go 'full in', as you put it, or at least put enough\neffort into it to get a feeling of if it's going to require a *lot* more\nwork or not and then reconsider if necessary.\n\nThanks,\n\nStephen", "msg_date": "Sun, 21 Mar 2021 11:41:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> 1) What kind of consistency do we want from the pg_stats_* views?\n\nThat's a hard choice to make. But let me set the record straight:\nwhen we did the initial implementation, the stats snapshotting behavior\nwas considered a FEATURE, not an \"efficiency hack required by the old\nstorage model\".\n\nIf I understand what you are proposing, all stats views would become\ncompletely volatile, without even within-query consistency. That really\nis not gonna work. 
As an example, you could get not-even-self-consistent\nresults from a join to a stats view if the planner decides to implement\nit as a nestloop with the view on the inside.\n\nI also believe that the snapshotting behavior has advantages in terms\nof being able to perform multiple successive queries and get consistent\nresults from them. Only the most trivial sorts of analysis don't need\nthat.\n\nIn short, what you are proposing sounds absolutely disastrous for\nusability of the stats views, and I for one will not sign off on it\nbeing acceptable.\n\nI do think we could relax the consistency guarantees a little bit,\nperhaps along the lines of only caching view rows that have already\nbeen read, rather than grabbing everything up front. But we can't\njust toss the snapshot concept out the window. It'd be like deciding\nthat nobody needs MVCC, or even any sort of repeatable read.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 21 Mar 2021 12:14:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nOn 2021-03-21 11:41:30 -0400, Stephen Frost wrote:\n> > 1.1)\n> >\n> > I hope everybody agrees with not requiring that stats don't need to be\n> > the way they were at the time of first stat access in a transaction,\n> > even if that first access was to a different stat object than the\n> > currently accessed stat?\n>\n> Agreed, that doesn't seem necessary and blowing up backend memory usage\n> by copying all the stats into local memory seems pretty terrible.\n\nYea. I've seen instances where most backends had several hundred MB of\nstats loaded :(. Even leaving the timing overhead aside, that's really\nnot fun. Of course that application may not have had exactly the\ngreatest design, but ...\n\n\n> > 1.2)\n> >\n> > Do we expect repeated accesses to the same stat to stay the same through\n> > the transaction? 
The easiest way to implement stats accesses is to\n> > simply fetch the stats from shared memory ([3]). That would obviously\n> > result in repeated accesses to the same stat potentially returning\n> > changing results over time.\n> >\n> > I think that's perfectly fine, desirable even, for pg_stat_*.\n>\n> This seems alright to me.\n\nSeems Tom disagrees :(\n\n\n> > 1.3)\n> >\n> > What kind of consistency do we expect between columns of views like\n> > pg_stat_all_tables?\n> > [...]\n> > I personally think it's fine to have short-term divergences between the\n> > columns. The stats aren't that accurate anyway, as we don't submit them\n> > all the time. And that if we want consistency between columns, we\n> > instead should replace the current view definitions with[set of] record\n> > returning function - everything else seems to lead to weird tradeoffs.\n>\n> Agreed, doesn't seem like a huge issue to have short-term divergences\n> but if we want to fix them then flipping those all to SRFs would make\n> the most sense.\n\nThere's also a pretty good efficiency argument for going to SRFs. Doing\n18 hashtable lookups + function calls just to return one row of\npg_stat_all_tables surely is a higher overhead than unnecessarily\nreturning columns that weren't needed by the user.\n\nI do think it makes sense to get idx_scan/idx_tup_fetch via a join\nthough.\n\n\n> > 2) How to remove stats of dropped objects?\n> >\n> > [...]\n>\n> The current approach sounds pretty terrible and propagating that forward\n> doesn't seem great. Guess here I'd disagree with your gut feeling and\n> encourage trying to go 'full in', as you put it, or at least put enough\n> effort into it to get a feeling of if it's going to require a *lot* more\n> work or not and then reconsider if necessary.\n\nI think my gut's argument is that it's already a huge patch, and that\nit's better to have the very substantial memory and disk IO savings\nwith the crappy vacuum approach, than neither. 
And given the timeframe\nthere does seem to be a substantial danger of \"neither\" being the\noutcome... Anyway, I'm mentally sketching out what it'd take.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Mar 2021 14:53:50 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> If I understand what you are proposing, all stats views would become\n> completely volatile, without even within-query consistency. That really\n> is not gonna work. As an example, you could get not-even-self-consistent\n> results from a join to a stats view if the planner decides to implement\n> it as a nestloop with the view on the inside.\n> \n> I also believe that the snapshotting behavior has advantages in terms\n> of being able to perform multiple successive queries and get consistent\n> results from them. Only the most trivial sorts of analysis don't need\n> that.\n> \n> In short, what you are proposing sounds absolutely disastrous for\n> usability of the stats views, and I for one will not sign off on it\n> being acceptable.\n> \n> I do think we could relax the consistency guarantees a little bit,\n> perhaps along the lines of only caching view rows that have already\n> been read, rather than grabbing everything up front. But we can't\n> just toss the snapshot concept out the window. It'd be like deciding\n> that nobody needs MVCC, or even any sort of repeatable read.\n\nThis isn't the same use-case as traditional tables or relational\nconcepts in general- there aren't any foreign keys for the fields that\nwould actually be changing across these accesses to the shared memory\nstats- we're talking about gross stats numbers like the number of\ninserts into a table, not an employee_id column. 
In short, I don't\nagree that this is a fair comparison.\n\nPerhaps there's a good argument to try and cache all this info per\nbackend, but saying that it's because we need MVCC-like semantics for\nthis data because other things need MVCC isn't it, and I don't know that\nsaying this is a justifiable case for requiring repeatable read is\nreasonable either.\n\nWhat specific, reasonable, analysis of the values that we're actually\ntalking about, which are already aggregates themselves, is going to\nend up being utterly confused?\n\nThanks,\n\nStephen", "msg_date": "Sun, 21 Mar 2021 18:16:06 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nOn 2021-03-21 12:14:35 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > 1) What kind of consistency do we want from the pg_stats_* views?\n>\n> That's a hard choice to make.  But let me set the record straight:\n> when we did the initial implementation, the stats snapshotting behavior\n> was considered a FEATURE, not an \"efficiency hack required by the old\n> storage model\".\n\nOh - sorry for misstating that then. I did try to look for the origins of the\napproach, and all that I found was that it'd be too expensive to do multiple\nstats file reads.\n\n\n> If I understand what you are proposing, all stats views would become\n> completely volatile, without even within-query consistency.  That really\n> is not gonna work.  As an example, you could get not-even-self-consistent\n> results from a join to a stats view if the planner decides to implement\n> it as a nestloop with the view on the inside.\n\nI don't really think it's a problem that's worth incurring that much cost to\nprevent. We already have that behaviour for a number of the pg_stat_* views,\ne.g. 
pg_stat_xact_all_tables, pg_stat_replication.\n\nIf the cost were low - or we can find a reasonable way to get to low costs - I\nthink it'd be worth preserving for backward compatibility's sake alone. From\nan application perspective, I actually rarely want that behaviour for stats\nviews - I'm querying them to get the most recent information, not an older\nsnapshot. And in the cases I do want snapshots, I'd want them for longer than a\ntransaction.\n\nThere's just a huge difference between being able to access a table's stats in\nO(1) time, and having a single stats access be O(database-objects).\n\nAnd that includes accesses to things like pg_stat_bgwriter, pg_stat_database\n(for IO over time stats etc) that often are monitored at a somewhat high\nfrequency - they also pay the price of reading in all object stats. On my\nexample database with 1M tables it takes 0.4s to read pg_stat_database.\n\n\nWe currently also fetch the full stats in places like autovacuum.c, where we\ndon't need repeated access to be consistent - we even explicitly force the\nstats to be re-read for every single table that's getting vacuumed.\n\nEven if we were to just cache already accessed stats, places like do_autovacuum()\nwould end up with a completely unnecessary cache of all tables, blowing up\nmemory usage by a large amount on systems with lots of relations.\n\n\n> I also believe that the snapshotting behavior has advantages in terms\n> of being able to perform multiple successive queries and get consistent\n> results from them.  Only the most trivial sorts of analysis don't need\n> that.\n\nIn most cases you'd not do that in a transaction tho, and you'd need to create\ntemporary tables with a snapshot of the stats anyway.\n\n\n> In short, what you are proposing sounds absolutely disastrous for\n> usability of the stats views, and I for one will not sign off on it\n> being acceptable.\n\n:(\n\nThat's why I thought it'd be important to bring this up to a wider\naudience. 
This has been discussed several times in the thread, and nobody\nreally chimed up wanting the \"snapshot\" behaviour...\n\n\n> I do think we could relax the consistency guarantees a little bit,\n> perhaps along the lines of only caching view rows that have already\n> been read, rather than grabbing everything up front. But we can't\n> just toss the snapshot concept out the window. It'd be like deciding\n> that nobody needs MVCC, or even any sort of repeatable read.\n\nI think that'd still a huge win - caching only what's been accessed rather than\neverything will save a lot of memory in very common cases. I did bring it up as\none approach for that reason.\n\nI do think it has a few usability quirks though. The time-skew between stats\nobjects accessed at different times seems like it could be quite confusing?\nE.g. imagine looking at table stats and then later join to index stats and see\ntable / index stats not matching up at all.\n\n\nI wonder if a reasonable way out could be to have pg_stat_make_snapshot()\n(accompanying the existing pg_stat_clear_snapshot()) that'd do the full eager\ndata load. But not use any snapshot / caching behaviour without that?\n\nIt's be a fair bit of code to have that, but I think can see a way to have it\nnot be too bad?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:34:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Sun, 21 Mar 2021 at 18:16, Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > I also believe that the snapshotting behavior has advantages in terms\n> > of being able to perform multiple successive queries and get consistent\n> > results from them. 
Only the most trivial sorts of analysis don't need\n> > that.\n> >\n> > In short, what you are proposing sounds absolutely disastrous for\n> > usability of the stats views, and I for one will not sign off on it\n> > being acceptable.\n> >\n> > I do think we could relax the consistency guarantees a little bit,\n> > perhaps along the lines of only caching view rows that have already\n> > been read, rather than grabbing everything up front.  But we can't\n> > just toss the snapshot concept out the window.  It'd be like deciding\n> > that nobody needs MVCC, or even any sort of repeatable read.\n>\n> This isn't the same use-case as traditional tables or relational\n> concepts in general- there aren't any foreign keys for the fields that\n> would actually be changing across these accesses to the shared memory\n> stats- we're talking about gross stats numbers like the number of\n> inserts into a table, not an employee_id column.  In short, I don't\n> agree that this is a fair comparison.\n\nI use these stats quite a bit and do lots of slicing and dicing with\nthem. I don't think it's as bad as Tom says but I also don't think we\ncan be quite as loosey-goosey as I think Andres or Stephen might be\nproposing either (though I note that they haven't said they don't want any\nconsistency at all).\n\nThe cases where the consistency really matters for me are when I'm doing\nmath involving more than one statistic.\n\nTypically that's ratios. E.g. with pg_stat_*_tables I routinely divide\nseq_tup_read by seq_scan or idx_tup_* by idx_scans. I also often look\nat the ratio between n_tup_upd and n_tup_hot_upd.\n\nAnd no, it doesn't help that these are often large numbers after a\nlong time because I'm actually working with the first derivative of\nthese numbers using snapshots or a time series database. 
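Concretely, that snapshot-and-first-derivative style of analysis can be sketched as follows - the column names come from the stock pg_stat_user_tables view, while the temp tables and the sampling interval are illustrative:

```sql
-- Take two point-in-time copies of the counters, some interval apart.
CREATE TEMP TABLE stats_snap_1 AS
  SELECT now() AS ts, relid, seq_scan, seq_tup_read
  FROM pg_stat_user_tables;
-- ... let the workload run for a while, then ...
CREATE TEMP TABLE stats_snap_2 AS
  SELECT now() AS ts, relid, seq_scan, seq_tup_read
  FROM pg_stat_user_tables;

-- First derivative over the interval: tuples read per seq scan.
SELECT s2.relid::regclass AS table_name,
       (s2.seq_tup_read - s1.seq_tup_read)::numeric
         / NULLIF(s2.seq_scan - s1.seq_scan, 0) AS tup_read_per_seq_scan
FROM stats_snap_1 s1
JOIN stats_snap_2 s2 USING (relid);
```

If the two counters in a single row were updated inconsistently, the numerator and denominator of that division would come from different points in time, which is exactly the failure mode described next.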
So if you\nhave the seq_tup_read incremented but not seq_scan incremented you\ncould get a wildly incorrect calculation of \"tup read per seq scan\"\nwhich actually matters.\n\nI don't think I've ever done math across stats for different objects.\nI mean, I've plotted them together and looked at which was higher but\nI don't think that's affected by some plots having peaks slightly out\nof sync with the other. I suppose you could look at the ratio of\naccess patterns between two tables and know that they're only ever\naccessed by a single code path at the same time and therefore the\nratios would be meaningful. But I don't think users would be surprised\nto find they're not consistent that way either.\n\nSo I think we need to ensure that at least all the values for a single\nrow representing a single object are consistent. Or at least that\nthere's *some* correct way to retrieve a consistent row and that the\nstandard views use that. I don't think we need to guarantee that every\npossible plan will always be consistent even if you access the row\nmultiple times in a self-join or use the lookup function on individual\ncolumns separately.\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 22 Mar 2021 23:20:46 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On 2021-Mar-21, Andres Freund wrote:\n\n> We currently also fetch the full stats in places like autovacuum.c. Where we\n> don't need repeated access to be consistent - we even explicitly force the\n> stats to be re-read for every single table that's getting vacuumed.\n> \n> Even if we to just cache already accessed stats, places like do_autovacuum()\n> would end up with a completely unnecessary cache of all tables, blowing up\n> memory usage by a large amount on systems with lots of relations.\n\nIt's certainly not the case that autovacuum needs to keep fully\nconsistent stats. 
That's just the way that seemed easier (?) to do at\nthe time.  Unless I misunderstand things severely, we could just have\nautovacuum grab all necessary numbers for one database at the start of a\nrun, not cache anything, then re-read the numbers for one table as it\nrechecks that table.\n\nResetting before re-reading was obviously necessary because the\nbuilt-in snapshotting made it impossible to freshen up the numbers at\nthe recheck step.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 24 Mar 2021 05:51:14 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Tue, Mar 23, 2021 at 4:21 AM Greg Stark <stark@mit.edu> wrote:\n>\n> On Sun, 21 Mar 2021 at 18:16, Stephen Frost <sfrost@snowman.net> wrote:\n> >\n> > Greetings,\n> >\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> > > I also believe that the snapshotting behavior has advantages in terms\n> > > of being able to perform multiple successive queries and get consistent\n> > > results from them. Only the most trivial sorts of analysis don't need\n> > > that.\n> > >\n> > > In short, what you are proposing sounds absolutely disastrous for\n> > > usability of the stats views, and I for one will not sign off on it\n> > > being acceptable.\n> > >\n> > > I do think we could relax the consistency guarantees a little bit,\n> > > perhaps along the lines of only caching view rows that have already\n> > > been read, rather than grabbing everything up front. But we can't\n> > > just toss the snapshot concept out the window. 
It'd be like deciding\n> > > that nobody needs MVCC, or even any sort of repeatable read.\n> >\n> > This isn't the same use-case as traditional tables or relational\n> > concepts in general- there aren't any foreign keys for the fields that\n> > would actually be changing across these accesses to the shared memory\n> > stats- we're talking about gross stats numbers like the number of\n> > inserts into a table, not an employee_id column. In short, I don't\n> > agree that this is a fair comparison.\n>\n> I use these stats quite a bit and do lots of slicing and dicing with\n> them. I don't think it's as bad as Tom says but I also don't think we\n> can be quite as loosy-goosy as I think Andres or Stephen might be\n> proposing either (though I note that haven't said they don't want any\n> consistency at all).\n>\n> The cases where the consistency really matter for me is when I'm doing\n> math involving more than one statistic.\n>\n> Typically that's ratios. E.g. with pg_stat_*_tables I routinely divide\n> seq_tup_read by seq_scan or idx_tup_* by idx_scans. I also often look\n> at the ratio between n_tup_upd and n_tup_hot_upd.\n>\n> And no, it doesn't help that these are often large numbers after a\n> long time because I'm actually working with the first derivative of\n> these numbers using snapshots or a time series database. So if you\n> have the seq_tup_read incremented but not seq_scan incremented you\n> could get a wildly incorrect calculation of \"tup read per seq scan\"\n> which actually matters.\n>\n> I don't think I've ever done math across stats for different objects.\n> I mean, I've plotted them together and looked at which was higher but\n> I don't think that's affected by some plots having peaks slightly out\n> of sync with the other. I suppose you could look at the ratio of\n> access patterns between two tables and know that they're only ever\n> accessed by a single code path at the same time and therefore the\n> ratios would be meaningful. 
But I don't think users would be surprised\n> to find they're not consistent that way either.\n\nYeah, it's important to differentiate if things can be inconsistent\nwithin a single object, or just between objects. And I agree that in a\nlot of cases, just having per-object consistent data is probably\nenough.\n\nNormally when you graph things for example, your peaks will look\nacross >1 sample point anyway, and in that case it doesn't much matter\ndoes it?\n\nBut if we said we try to offer per-object consistency only, then for\nexample the idx_scans value in the tables view may see changes to some\nbut not all indexes on that table. Would that be acceptable?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 24 Mar 2021 14:26:12 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Sun, Mar 21, 2021 at 11:34 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-21 12:14:35 -0400, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > 1) What kind of consistency do we want from the pg_stats_* views?\n> >\n> > That's a hard choice to make. But let me set the record straight:\n> > when we did the initial implementation, the stats snapshotting behavior\n> > was considered a FEATURE, not an \"efficiency hack required by the old\n> > storage model\".\n>\n> Oh - sorry for misstating that then. I did try to look for the origins of the\n> approach, and all that I found was that it'd be too expensive to do multiple\n> stats file reads.\n>\n>\n> > If I understand what you are proposing, all stats views would become\n> > completely volatile, without even within-query consistency. That really\n> > is not gonna work. 
As an example, you could get not-even-self-consistent\n> > results from a join to a stats view if the planner decides to implement\n> > it as a nestloop with the view on the inside.\n>\n> I don't really think it's a problem that's worth incurring that much cost to\n> prevent. We already have that behaviour for a number of the pg_stat_* views,\n> e.g. pg_stat_xact_all_tables, pg_stat_replication.\n\nAren't those both pretty bad examples though?\n\npg_stat_xact_all_tables surely is within-query consistent, and would\nbe pretty useless if it was within-transaction consistent?\n\npg_stat_replication is a snapshot of what things are right now (like\npg_stat_activity), and not collected statistics.\n\nMaybe there's inconsistency in that they should've had a different\nname to separate it out, but fundamentally having xact consistent\nviews there would be a bad thing, no?\n\n\n> If the cost were low - or we can find a reasonable way to get to low costs - I\n> think it'd be worth preserving for backward compatibility's sake alone. From\n> an application perspective, I actually rarely want that behaviour for stats\n> views - I'm querying them to get the most recent information, not an older\n> snapshot. And in the cases I do want snapshots, I'd want them for longer than a\n> transaction.\n\nI agree in general, but I'd want them to be *query-consistent*, not\n*transaction-consistent*. But the question is as you say, am I willing\nto pay for that. Less certain of that.\n\n\n> There's just a huge difference between being able to access a table's stats in\n> O(1) time, or having a single stats access be O(database-objects).\n>\n> And that includes accesses to things like pg_stat_bgwriter, pg_stat_database\n> (for IO over time stats etc) that often are monitored at a somewhat high\n> frequency - they also pay the price of reading in all object stats. 
On my\n> example database with 1M tables it takes 0.4s to read pg_stat_database.\n\nIMV, singling things out into \"larger groups\" would be one perfectly\nacceptable compromise. That is, say that pg_stat_user_tables can be\ninconsistent with pg_stat_bgwriter, but it cannot be inconsistent\nwith itself.\n\nBasically anything that's \"global\" seems like it could be treated that\nway, independent of each other.\n\nFor relations and such, having a way to get just a single relation's\nstats or a number of them that will be consistent with each other\nwithout getting all of them, could also be a reasonable optimization.\nMaybe an SRF that takes an array of oids as a parameter and returns\nconsistent data across those, without having to copy/mess with the\nrest?\n\n\n> We currently also fetch the full stats in places like autovacuum.c. Where we\n> don't need repeated access to be consistent - we even explicitly force the\n> stats to be re-read for every single table that's getting vacuumed.\n>\n> Even if we were to just cache already accessed stats, places like do_autovacuum()\n> would end up with a completely unnecessary cache of all tables, blowing up\n> memory usage by a large amount on systems with lots of relations.\n\nautovacuum is already dealing with things being pretty fuzzy though,\nso it shouldn't matter much there?\n\nBut autovacuum might also deserve its own interface to access the\ndata directly and doesn't have to follow the same one as the stats\nviews in this new scheme, perhaps?\n\n\n> > > I also believe that the snapshotting behavior has advantages in terms\n> > > of being able to perform multiple successive queries and get consistent\n> > > results from them. 
Only the most trivial sorts of analysis don't need\n> > that.\n>\n> In most cases you'd not do that in a transaction tho, and you'd need to create\n> temporary tables with a snapshot of the stats anyway.\n\nI'd say in most cases this analysis happens in snapshots anyway, and\nthose are snapshots unrelated to what we do in pg_stat. It's either\nsnapshotted to tables, or to storage in a completely separate\ndatabase.\n\n\n> > In short, what you are proposing sounds absolutely disastrous for\n> > usability of the stats views, and I for one will not sign off on it\n> > being acceptable.\n>\n> :(\n>\n> That's why I thought it'd be important to bring this up to a wider\n> audience. This has been discussed several times in the thread, and nobody\n> really chimed up wanting the \"snapshot\" behaviour...\n\nI can chime in with the ones saying I don't think I need that kind of\nsnapshot behaviour.\n\nI would *like* to have query-level consistent views. But I may be able\nto compromise on that one for the sake of performance as well.\n\nI definitely need there to be object-level consistent views.\n\n\n> > I do think we could relax the consistency guarantees a little bit,\n> > perhaps along the lines of only caching view rows that have already\n> > been read, rather than grabbing everything up front. But we can't\n> > just toss the snapshot concept out the window. It'd be like deciding\n> > that nobody needs MVCC, or even any sort of repeatable read.\n>\n> I think that'd still a huge win - caching only what's been accessed rather than\n> everything will save a lot of memory in very common cases. I did bring it up as\n> one approach for that reason.\n>\n> I do think it has a few usability quirks though. The time-skew between stats\n> objects accessed at different times seems like it could be quite confusing?\n> E.g. 
imagine looking at table stats and then later join to index stats and see\n> table / index stats not matching up at all.\n>\n>\n> I wonder if a reasonable way out could be to have pg_stat_make_snapshot()\n> (accompanying the existing pg_stat_clear_snapshot()) that'd do the full eager\n> data load. But not use any snapshot / caching behaviour without that?\n\nI think that's a pretty good idea.\n\nI bet the vast majority of all queries against the pg_stat views are\ndone by automated tools, and they don't care about the snapshot\nbehaviour, and thus wouldn't have to pay the overhead. In the more\nrare cases when you do the live-analysis, you can explicitly request\nit.\n\nAnother idea could be a per-user GUC of \"stats_snapshots\" or so, and\nthen if it's on force the snapshots all times. That way a DBA who\nwants the snapshots could set it on their own user but keep it off for\nthe automated jobs for example. (It'd basically be the same except\nautomatically calling pg_stat_make_snapshot() the first time stats are\nqueried)\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 24 Mar 2021 14:42:11 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nOn 2021-03-24 14:42:11 +0100, Magnus Hagander wrote:\n> On Sun, Mar 21, 2021 at 11:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > If I understand what you are proposing, all stats views would become\n> > > completely volatile, without even within-query consistency. That really\n> > > is not gonna work. As an example, you could get not-even-self-consistent\n> > > results from a join to a stats view if the planner decides to implement\n> > > it as a nestloop with the view on the inside.\n> >\n> > I don't really think it's a problem that's worth incuring that much cost to\n> > prevent. 
We already have that behaviour for a number of of the pg_stat_* views,\n> > e.g. pg_stat_xact_all_tables, pg_stat_replication.\n> \n> Aren't those both pretty bad examples though?\n> \n> pg_stat_xact_all_tables surely is within-query consistent, and would\n> be pretty useless if it wwas within-transaction consistent?\n\nIt's not within-query consistent:\n\npostgres[1182102][1]=# SELECT pg_stat_get_xact_numscans('pg_class'::regclass) UNION ALL SELECT count(*) FROM pg_class UNION ALL SELECT pg_stat_get_xact_numscans('pg_class'::regclass);\n┌───────────────────────────┐\n│ pg_stat_get_xact_numscans │\n├───────────────────────────┤\n│ 0 │\n│ 397 │\n│ 1 │\n└───────────────────────────┘\n(3 rows)\n\n\n> pg_stat_replication is a snapshot of what things are right now (like\n> pg_stat_activity), and not collected statistics.\n\nHowever, pg_stat_activity does have snapshot semantics...\n\n\n> Maybe there's inconsistency in that they should've had a different\n> name to separate it out, but fundamentally having xact consistent\n> views there would be a bad thing, no?\n\nTrue. One weird thing around the _xact_ versions that we, at best,\n*hint* at in the docs, but also contradict, is that _xact_ views are\nactually not tied to the transaction. It's really about unsubmitted\nstats. E.g. if executing the following via copy-paste\n\nSELECT count(*) FROM pg_class;\nSELECT pg_stat_get_xact_numscans('pg_class'::regclass);\nSELECT count(*) FROM pg_class;\nSELECT pg_stat_get_xact_numscans('pg_class'::regclass);\n\nwill most of the time return\n<count>\n0\n<count>\n1\n\nbecause after the transaction for the first count(*) commits, we'll not\nhave submitted stats for more than PGSTAT_STAT_INTERVAL. 
But after the\nsecond count(*) it'll be shorter, therefore the stats won't be\nsubmitted...\n\n\n> > There's just a huge difference between being able to access a table's stats in\n> > O(1) time, or having a single stats access be O(database-objects).\n> >\n> > And that includes accesses to things like pg_stat_bgwriter, pg_stat_database\n> > (for IO over time stats etc) that often are monitored at a somewhat high\n> > frequency - they also pay the price of reading in all object stats. On my\n> > example database with 1M tables it takes 0.4s to read pg_stat_database.\n> \n> IMV, singling things out into \"larger groups\" would be one perfectly\n> acceptable compromise. That is, say that pg_stat_user_tables can be\n> inconsistent with pg_stat_bgwriter, but it cannot be inconsistent\n> with itself.\n\nI don't think that buys us all that much though. It still is a huge\nissue that we need to cache the stats for all relations even though we\nonly access the stats for one table.\n\n\n> > We currently also fetch the full stats in places like autovacuum.c. Where we\n> > don't need repeated access to be consistent - we even explicitly force the\n> > stats to be re-read for every single table that's getting vacuumed.\n> >\n> > Even if we were to just cache already accessed stats, places like do_autovacuum()\n> > would end up with a completely unnecessary cache of all tables, blowing up\n> > memory usage by a large amount on systems with lots of relations.\n> \n> autovacuum is already dealing with things being pretty fuzzy though,\n> so it shouldn't matter much there?\n> \n> But autovacuum might also deserve its own interface to access the\n> data directly and doesn't have to follow the same one as the stats\n> views in this new scheme, perhaps?\n\nYes, we can do that now.\n\n\n> > > I also believe that the snapshotting behavior has advantages in terms\n> > > of being able to perform multiple successive queries and get consistent\n> > > results from them. 
Only the most trivial sorts of analysis don't need\n> > > that.\n> >\n> > In most cases you'd not do that in a transaction tho, and you'd need to create\n> > temporary tables with a snapshot of the stats anyway.\n> \n> I'd say in most cases this analysis happens in snapshots anyway, and\n> those are snapshots unrelated to what we do in pg_stat. It's either\n> snapshotted to tables, or to storage in a completely separate\n> database.\n\nAgreed. I wonder if some of that work would be made easier if we added a\nfunction to export all the data in the current snapshot as a json\ndocument or such? If we add configurable caching (see below) that'd\nreally not be a lot of additional work.\n\n\n> I would *like* to have query-level consistent views. But I may be able\n> to compromise on that one for the sake of performance as well.\n> \n> I definitely need there to be object-level consistent views.\n\nThat'd be free if we didn't use all those separate function calls for\neach row in pg_stat_all_tables etc...\n\n\n> > I wonder if a reasonable way out could be to have pg_stat_make_snapshot()\n> > (accompanying the existing pg_stat_clear_snapshot()) that'd do the full eager\n> > data load. But not use any snapshot / caching behaviour without that?\n> \n> I think that's a pretty good idea.\n\nIt's what I am leaning towards right now.\n\n\n> Another idea could be a per-user GUC of \"stats_snapshots\" or so, and\n> then if it's on force the snapshots at all times. That way a DBA who\n> wants the snapshots could set it on their own user but keep it off for\n> the automated jobs for example. (It'd basically be the same except\n> automatically calling pg_stat_make_snapshot() the first time stats are\n> queried)\n\nYea, I was thinking similar. We could e.g. 
have\nstats_snapshot_consistency, an enum of 'none', 'query', 'snapshot'.\n\nWith 'none' there's no consistency beyond that a single pg_stat_get_*\nfunction call will give consistent results.\n\nWith 'query' we cache each object on first access (i.e. there can be\ninconsistency between different objects if their accesses are further\napart, but not within a stats object).\n\nWith 'snapshot' we cache all the stats at the first access, for the\nduration of the transaction.\n\n\nHowever: I think 'query' is surprisingly hard to implement. There can be\nmultiple overlapping queries ongoing at the same time. And even\nif there aren't multiple ongoing queries, there's really nothing great\nto hook a reset into.\n\nSo I'm inclined to instead have 'access', which caches individual stats\nobjects on first access, but lasts longer than the query.\n\n\nTom, if we defaulted to 'access' would that satisfy your concerns? Or\nwould even 'none' as a default do? I'd rather not have SELECT * FROM\npg_stat_all_tables; accumulate an unnecessary copy of most of the\ncluster's stats in memory just to immediately throw it away again.\n\n\n> I bet the vast majority of all queries against the pg_stat views are\n> done by automated tools, and they don't care about the snapshot\n> behaviour, and thus wouldn't have to pay the overhead. 
In the more\n> rare cases when you do the live-analysis, you can explicitly request\n> it.\n\nI don't think I'd often want it in that case either, but ymmv.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:20:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nOn 2021-03-21 14:53:50 -0700, Andres Freund wrote:\n> On 2021-03-21 11:41:30 -0400, Stephen Frost wrote:\n> > > 2) How to remove stats of dropped objects?\n> > >\n> > > [...]\n> >\n> > The current approach sounds pretty terrible and propagating that forward\n> > doesn't seem great.  Guess here I'd disagree with your gut feeling and\n> > encourage trying to go 'full in', as you put it, or at least put enough\n> > effort into it to get a feeling of if it's going to require a *lot* more\n> > work or not and then reconsider if necessary.\n> \n> I think my gut's argument is that it's already a huge patch, and that\n> it's better to have the very substantial memory and disk IO savings\n> with the crappy vacuum approach, than neither. And given the timeframe\n> there does seem to be a substantial danger of \"neither\" being the\n> outcome... Anyway, I'm mentally sketching out what it'd take.\n\nI implemented this. Far from polished, but it does survive the\nregression tests, including new tests in stats.sql. Function stats,\n2PC and lots of naming issues (xl_xact_dropped_stats with members\nndropped and 'dropped_stats') are yet to be addressed - but not\narchitecturally relevant.\n\nI think there are three hairy corner-cases that I haven't thought\nsufficiently about:\n\n- It seems likely that there's a relatively narrow window where a crash\n  could end up not dropping stats.\n\n  The xact.c integration is basically parallel to smgrGetPendingDeletes\n  etc. 
I don't see what prevents a checkpoint from happening after\n  RecordTransactionCommit() (which does smgrGetPendingDeletes), but\n  before smgrDoPendingDeletes(). Which means that if we crash in that\n  window, nothing would clean up those files (and thus also not the\n  dropped stats that I implemented very similarly).\n\n  Closing that window seems doable (put the pending file/stat deletion\n  list in shared memory after logging the WAL record, but before\n  MyProc->delayChkpt = false, do something with that in\n  CheckPointGuts()), but not trivial.\n\n  If we had a good way to run pgstat_vacuum_stat() on a very\n  *occasional* basis via autovac, that'd be ok. But it'd be a lot nicer\n  if it were bulletproof.\n\n- Does temporary table cleanup after a crash properly deal with their\n  stats?\n\n- I suspect there are a few cases where pending stats in one connection\n  could \"revive\" an already dropped stat. It'd not be hard to address in\n  an inefficient way in the shmstats patch, but it'd come with some\n  efficiency penalty - but it might be an irrelevant efficiency difference.\n\n\nDefinitely for later, but if we got this ironed out, we probably could\nstop throwing stats away after crashes. Instead storing stats snapshots\nalongside redo LSNs or such. It'd be nice if immediate shutdowns etc\nwouldn't lead to autovacuum not doing any vacuuming for quite a while.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:49:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Sun, Mar 21, 2021 at 12:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If I understand what you are proposing, all stats views would become\n> completely volatile, without even within-query consistency.  That really\n> is not gonna work. 
As an example, you could get not-even-self-consistent\n> results from a join to a stats view if the planner decides to implement\n> it as a nestloop with the view on the inside.\n>\n> I also believe that the snapshotting behavior has advantages in terms\n> of being able to perform multiple successive queries and get consistent\n> results from them. Only the most trivial sorts of analysis don't need\n> that.\n>\n> In short, what you are proposing sounds absolutely disastrous for\n> usability of the stats views, and I for one will not sign off on it\n> being acceptable.\n>\n> I do think we could relax the consistency guarantees a little bit,\n> perhaps along the lines of only caching view rows that have already\n> been read, rather than grabbing everything up front. But we can't\n> just toss the snapshot concept out the window. It'd be like deciding\n> that nobody needs MVCC, or even any sort of repeatable read.\n\nSo, just as a data point, the output of pg_locks is not stable within\na transaction. In fact, the pg_locks output could technically include\nthe same exact lock more than once, if it's being moved from the\nfast-path table to the main table just as you are reading all the\ndata. In theory, that could produce the same kinds of problems that\nyou're concerned about here, and I suspect sometimes it does. But I\nhaven't seen a lot of people yelling and screaming about it. The\nsituation isn't ideal, but it's not disastrous either.\n\nI think it's really hard for us as developers to predict what kinds of\neffects of these kinds of decisions will have in real-world\ndeployments. All of us have probably had the experience of making some\nbehavioral change that we thought would not be too big a deal and it\nactually pissed off a bunch of users who were relying on the old\nbehavior. I know I have. Conversely, I've reluctantly made changes\nthat seemed rather dangerous to me and heard nary a peep. 
If somebody\ntakes the position that changing this behavior is scary because we\ndon't know how many users will be inconvenienced or how badly, I can\nonly agree. But saying that it's tantamount to deciding that nobody\nneeds MVCC is completely over the top. This is statistical data, not\nuser data, and there are good reasons to think that people don't have\nthe same expectations in both cases, starting with the fact that we\nhave some stuff that works like that already.\n\nMore than that, there's a huge problem with the way this works today\nthat can't be fixed without making some compromises. In the test case\nAndres mentioned upthread, the stats snapshot burned through 170MB of\nRAM. Now, you might dismiss that as not much memory in 2021, but if\nyou have a lot of backends accessing the stats, that value could be\nmultiplied by a two digit or even three digit number, and that is\n*definitely* a lot of memory, even in 2021. But even if it's not\nmultiplied by anything, we're shipping with a default work_mem of just\n4MB. So, the position we're implicitly taking today is: if you join\ntwo 5MB tables, it's too risky to put the entire contents of one of\nthem into a single in-memory hash table, because we might run the\nmachine out of RAM. But if you have millions of objects in your\ndatabase and touch the statistics for one of those objects, once, it's\nabsolutely OK to slurp tens or even hundreds of megabytes of data into\nbackend-private memory to avoid the possibility that you might later\naccess another one of those counters and expect snapshot semantics.\n\nTo be honest, I don't find either of those positions very believable.\nI do not think it likely that the typical user really wants a 5MB hash\njoin to be done in batches to save memory, and I think it equally\nunlikely that everybody wants to read and cache tens or hundreds of\nmegabytes of data to get MVCC semantics for volatile statistics. 
I\nthink there are probably some cases where having that information be\nstable across a transaction lifetime is really useful, so if we can\nprovide that as an optional behavior, I think that would be a pretty\ngood idea. But I don't think it's reasonable for that to be the only\nbehavior, and I'm doubtful about whether it should even be the\ndefault. I bet there are a lot of cases where somebody just wants to\ntake a quick glance at some of the values as they exist right now, and\nhas no intention of running any more queries that might examine the\nsame data again later. Not only does caching all the data use a lot of\nmemory, but having to read all the data in order to cache it is\npotentially a lot slower than just reading the data actually\nrequested. I'm unwilling to dismiss that as a negligible problem.\n\nIn short, I agree that there's stuff to worry about here, but I don't\nagree that a zero-consistency model is a crazy idea, even though I\nalso think it would be nice if we can make stronger consistency\navailable upon request.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 25 Mar 2021 15:05:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "On Thu, Mar 25, 2021 at 6:20 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2021-03-24 14:42:11 +0100, Magnus Hagander wrote:\n> > On Sun, Mar 21, 2021 at 11:34 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > If I understand what you are proposing, all stats views would become\n> > > > completely volatile, without even within-query consistency. That really\n> > > > is not gonna work. 
As an example, you could get not-even-self-consistent\n> > > > results from a join to a stats view if the planner decides to implement\n> > > > it as a nestloop with the view on the inside.\n> > >\n> > > I don't really think it's a problem that's worth incurring that much cost to\n> > > prevent. We already have that behaviour for a number of the pg_stat_* views,\n> > > e.g. pg_stat_xact_all_tables, pg_stat_replication.\n> >\n> > Aren't those both pretty bad examples though?\n> >\n> > pg_stat_xact_all_tables surely is within-query consistent, and would\n> > be pretty useless if it was within-transaction consistent?\n>\n> It's not within-query consistent:\n>\n> postgres[1182102][1]=# SELECT pg_stat_get_xact_numscans('pg_class'::regclass) UNION ALL SELECT count(*) FROM pg_class UNION ALL SELECT pg_stat_get_xact_numscans('pg_class'::regclass);\n> ┌───────────────────────────┐\n> │ pg_stat_get_xact_numscans │\n> ├───────────────────────────┤\n> │                         0 │\n> │                       397 │\n> │                         1 │\n> └───────────────────────────┘\n> (3 rows)\n\nHeh. OK, I admit I didn't consider a UNION query like that -- I only\nconsidered it being present *once* in a query :)\n\nThat said, if wanted that can be dealt with using a WITH MATERIALIZED as\nlong as it's in the same query, no?\n\n\n> > pg_stat_replication is a snapshot of what things are right now (like\n> > pg_stat_activity), and not collected statistics.\n>\n> However, pg_stat_activity does have snapshot semantics...\n\nYeah, yay consistency.\n\n\n> > Maybe there's inconsistency in that they should've had a different\n> > name to separate it out, but fundamentally having xact consistent\n> > views there would be a bad thing, no?\n>\n> True. One weird thing around the _xact_ versions that we, at best,\n> *hint* at in the docs, but also contradict, is that _xact_ views are\n> actually not tied to the transaction. It's really about unsubmitted\n> stats. E.g. 
if executing the following via copy-paste\n>\n> SELECT count(*) FROM pg_class;\n> SELECT pg_stat_get_xact_numscans('pg_class'::regclass);\n> SELECT count(*) FROM pg_class;\n> SELECT pg_stat_get_xact_numscans('pg_class'::regclass);\n>\n> will most of the time return\n> <count>\n> 0\n> <count>\n> 1\n>\n> because after the transaction for the first count(*) commits, we'll not\n> have submitted stats for more than PGSTAT_STAT_INTERVAL. But after the\n> second count(*) it'll be shorter, therefore the stats won't be\n> submitted...\n\nThat's... cute. I hadn't realized that part, but then I've never\nactually had use for the _xact_ views.\n\n\n> > > There's just a huge difference between being able to access a table's stats in\n> > > O(1) time, or having a single stats access be O(database-objects).\n> > >\n> > > And that includes accesses to things like pg_stat_bgwriter, pg_stat_database\n> > > (for IO over time stats etc) that often are monitored at a somewhat high\n> > > frequency - they also pay the price of reading in all object stats. On my\n> > > example database with 1M tables it takes 0.4s to read pg_stat_database.\n> >\n> > IMV, singling things out into \"larger groups\" would be one perfectly\n> > acceptable compromise. That is, say that pg_stat_user_tables can be\n> > inconsistent with pg_stat_bgwriter, but it cannot be inconsistent\n> > with itself.\n>\n> I don't think that buys us all that much though. It still is a huge\n> issue that we need to cache the stats for all relations even though we\n> only access the stats for one table.\n\nWell, you yourself just mentioned that access to bgwriter and db stats\nare often sampled at a higher frequency.\n\nThat said, this can often include *individual* tables as well, but\nmaybe not all at once.\n\n\n> > > > I also believe that the snapshotting behavior has advantages in terms\n> > > > of being able to perform multiple successive queries and get consistent\n> > > > results from them. 
Only the most trivial sorts of analysis don't need\n> > > > that.\n> > >\n> > > In most cases you'd not do that in a transaction tho, and you'd need to create\n> > > temporary tables with a snapshot of the stats anyway.\n> >\n> > I'd say in most cases this analysis happens in snapshots anyway, and\n> > those are snapshots unrelated to what we do in pg_stat. It's either\n> > snapshotted to tables, or to storage in a completely separate\n> > database.\n>\n> Agreed. I wonder if some of that work would be made easier if we added a\n> function to export all the data in the current snapshot as a json\n> document or such? If we add configurable caching (see below) that'd\n> really not be a lot of additional work.\n\nI'd assume if you want the snapshot in the database, you'd want it in\na database format. That is, if you want the snapshot in the db, you\nactually *want* it in a table or similar. I'm not sure json format\nwould really help that much?\n\nWhat would probably be interesting to more people in that case is if we could\nbuild ourselves, either builtin or as an extension, a background worker\nthat would listen and export openmetrics format talking directly to\nthe stats and bypassing the need to have an exporter running that\nconnects as a regular user and ends up converting the things between\nmany different formats on the way.\n\n\n> > I would *like* to have query-level consistent views. But I may be able\n> > to compromise on that one for the sake of performance as well.\n> >\n> > I definitely need there to be object-level consistent views.\n>\n> That'd be free if we didn't use all those separate function calls for\n> each row in pg_stat_all_tables etc...\n\nSo.. 
We should fix that?\n\nWe could still keep those separate functions for backwards\ncompatibility if we wanted of course, but move the view to use an SRF\nand make that SRF also directly callable with useful parameters.\n\nI mean, it's been over 10 years since we did that for pg_stat_activity\nin 9.1, so it's perhaps time to do another view? :)\n\n\n> > > I wonder if a reasonable way out could be to have pg_stat_make_snapshot()\n> > > (accompanying the existing pg_stat_clear_snapshot()) that'd do the full eager\n> > > data load. But not use any snapshot / caching behaviour without that?\n> >\n> > I think that's a pretty good idea.\n>\n> It's what I am leaning towards right now.\n>\n>\n> > Another idea could be a per-user GUC of \"stats_snapshots\" or so, and\n> > then if it's on force the snapshots at all times. That way a DBA who\n> > wants the snapshots could set it on their own user but keep it off for\n> > the automated jobs for example. (It'd basically be the same except\n> > automatically calling pg_stat_make_snapshot() the first time stats are\n> > queried)\n>\n> Yea, I was thinking similar. We could e.g. have\n> stats_snapshot_consistency, an enum of 'none', 'query', 'snapshot'.\n>\n> With 'none' there's no consistency beyond that a single pg_stat_get_*\n> function call will give consistent results.\n>\n> With 'query' we cache each object on first access (i.e. there can be\n> inconsistency between different objects if their accesses are further\n> apart, but not within a stats object).\n>\n> With 'snapshot' we cache all the stats at the first access, for the\n> duration of the transaction.\n>\n>\n> However: I think 'query' is surprisingly hard to implement. There can be\n> multiple overlapping queries ongoing at the same time. And even\n> if there aren't multiple ongoing queries, there's really nothing great\n> to hook a reset into.\n>\n> So I'm inclined to instead have 'access', which caches individual stats\n> object on first access. 
But lasts longer than the query.\n\nI guess the naive way to do query would be to just, ahem, lock the\nstats while a query is ongoing. But that would probably be *really*\nterrible in the case of big stats, yes :)\n\n\n> Tom, if we defaulted to 'access' would that satisfy your concerns? Or\n> would even 'none' as a default do? I'd rather not have SELECT * FROM\n> pg_stat_all_tables; accumulate an unnecessary copy of most of the\n> cluster's stats in memory just to immediately throw it away again.\n\nBut in your \"cache individual stats object on first access\", it would\ncopy most of the stats over when you did a \"SELECT *\", no? It would\nonly be able to avoid that if you had a WHERE clause on it limiting\nwhat you got?\n\n--\n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 25 Mar 2021 23:52:58 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" }, { "msg_contents": "Hi,\n\nI did end up implementing the configurable fetch consistency. Seems to\nwork well, and it's not that much code after a bunch of other\ncleanups. See below the quoted part at the bottom.\n\nI just posted the latest iteration of that code at\nhttps://www.postgresql.org/message-id/20210405092914.mmxqe7j56lsjfsej%40alap3.anarazel.de\n\nCopying from that mail:\n\nI've spent most of the last 2 1/2 weeks on this now. Unfortunately I think\nthat, while it has gotten a lot closer, it's still about a week's worth of\nwork away from being committable.\n\n\nMy main concerns are:\n\n\n- Test Coverage:\n\n\n I've added a fair bit of tests, but it's still pretty bad. 
There were a lot\n of easy-to-hit bugs in earlier versions that nevertheless passed the test\n just fine.\n\n\n Due to the addition of pg_stat_force_next_flush(), and that there's no need\n to wait for the stats collector to write out files, it's now a lot more\n realistic to have proper testing of a lot of the pgstat.c code.\n\n\n- Architectural Review\n\n\n I rejiggered the patchset pretty significantly, and I think it needs more\n review than I see as realistic in the next two days. In particular I don't\n think\n\n\n- Performance Testing\n\n\n I did a small amount, but given that this touches just about every query\n etc, I think that's not enough. My changes unfortunately are substantial\n enough to invalidate Horiguchi-san's earlier tests.\n\n\n- Currently there's a corner case in which function (but not table!) stats\n for a dropped function may not be removed. That possibly is not too bad,\n\n\n- Too many FIXMEs still open\n\n\nIt is quite disappointing to not have the patch go into v14 :(. But I just\ndon't quite see the path right now. But maybe I am just too tired right now,\nand it'll look better tomorrow (err today, in a few hours).\n\n\nOne aspect making this particularly annoying is that there's a number of\nstats additions in v14 that'd be easier to make robust with the shared\nmemory based approach. But I don't think I can get it into a committable\nshape in 2 days. Nor is it clear to me that it'd be a good idea to\ncommit it, even if I could just about make it, given that pretty much\neverything involves pgstat somewhere.\n\n\nOn 2021-03-25 23:52:58 +0100, Magnus Hagander wrote:\n> > Yea, I was thinking similar. We could e.g. have\n> > stats_snapshot_consistency, an enum of 'none', 'query', 'snapshot'.\n> >\n> > With 'none' there's no consistency beyond that a single pg_stat_get_*\n> > function call will give consistent results.\n> >\n> > With 'query' we cache each object on first access (i.e. 
there can be\n> > inconsistency between different objects if their accesses are further\n> > apart, but not within a stats object).\n> >\n> > With 'snapshot' we cache all the stats at the first access, for the\n> > duration of the transaction.\n> >\n> >\n> > However: I think 'query' is surprisingly hard to implement. There can be\n> > multiple overlapping queries ongoing at the same time. And even\n> > if there aren't multiple ongoing queries, there's really nothing great\n> > to hook a reset into.\n> >\n> > So I'm inclined to instead have 'access', which caches individual stats\n> > object on first access. But lasts longer than the query.\n\nI went with stats_fetch_consistency = {snapshot, cache, none}, that\nseems a bit more obvious than 'access'. I haven't yet changed [auto]vacuum\nso it uses 'none' unconditionally - but that shouldn't be hard.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 5 Apr 2021 02:38:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: shared memory stats: high level design decisions: consistency,\n dropping" } ]
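For readers following the exchange above, the fetch-consistency modes it settles on ('none', per-object 'cache', and full 'snapshot') can be modeled in a few lines. This is only a toy Python illustration with invented names (`StatsReader`, `clear`, the `shared` dict); it is not PostgreSQL code, just a sketch of the trade-off being discussed:

```python
# Toy model of the stats_fetch_consistency modes discussed in the thread.
# "shared" stands in for the shared-memory stats; all names are invented.
class StatsReader:
    def __init__(self, shared, mode):
        assert mode in ("none", "cache", "snapshot")
        self.shared = shared
        self.mode = mode
        self.cache = None  # per-transaction cache, None until first access

    def get(self, key):
        if self.mode == "none":
            return self.shared[key]  # always read the latest shared value
        if self.cache is None:  # first stats access in this transaction
            # 'snapshot' eagerly copies everything; 'cache' starts empty
            self.cache = dict(self.shared) if self.mode == "snapshot" else {}
        if self.mode == "cache" and key not in self.cache:
            self.cache[key] = self.shared[key]  # lazily pin just this object
        return self.cache[key]

    def clear(self):
        # analogous to pg_stat_clear_snapshot(): forget all cached values
        self.cache = None


shared = {"t1": 0, "t2": 0}
readers = {m: StatsReader(shared, m) for m in ("none", "cache", "snapshot")}
readers["cache"].get("t1")     # pins t1 only
readers["snapshot"].get("t1")  # pins t1 and t2 in one eager copy
shared.update(t1=5, t2=7)      # concurrent stats updates arrive
```

Under these rules the 'none' reader now sees t1 = 5 immediately, the 'cache' reader keeps its pinned t1 = 0 but picks up t2 = 7 on first access, and the 'snapshot' reader sees 0 for both until it clears its snapshot; the memory cost grows in the same order, which is exactly the consistency-versus-memory trade-off debated in the thread.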
[ { "msg_contents": "Hello hackers,\n\nI'm starting a new thread and CF entry for the material for r15 from\nthe earlier thread[1] that introduced the recovery_init_sync_method\nGUC for r14. I wrote a summary of this topic as I see it, while it's\nstill fresh on my mind from working on commit 61752afb, starting from\nwhat problem this solves.\n\nTL;DR: Here's a patch that adds a less pessimistic, faster starting\ncrash recovery mode based on first principles.\n\n=== Background ===\n\nWhy do we synchronise the data directory before we run crash recovery?\n\n1. WAL: It's not safe for changes to data pages to reach the disk\nbefore the WAL. This is the basic log-before-data rule. Suppose we\ndidn't do that. If our online cluster crashed after calling\npwrite(<some WAL data>) but before calling fdatasync(), the WAL data\nwe later read in crash recovery may differ from what's really on disk,\nand it'd be dangerous to replay its contents, because its effects to\ndata pages might then be written to disk at any time and break the\nrule. If that happens and you lose power and then run crash recovery\na second time, now you have some phantom partial changes already\napplied but no WAL left to redo them, leading to hazards including\nxids being recycled, effects of committed transactions being partially\nlost, multi-page changes being half done, and other such corruption.\n\n2. Data files: We can't skip changes to a data page based on the page\nheader's LSN if the page is not known to be on disk (that is, it is\nclean in PostgreSQL's buffer pool, but possibly dirty in the kernel's\npage cache). Otherwise, the end-of-recovery checkpoint will do\nnothing for the page (assuming nothing else marks the page dirty in\nour buffer pool before that), so we'll complete the checkpoint, and\nallow the WAL to be discarded. 
Then we might lose power before the\nkernel gets a chance to write the data page to disk, and when the\nlights come back on we'll run crash recovery but we don't replay that\nforgotten change from before the bogus checkpoint, and we have lost\ncommitted changes. (I don't think this can happen with\nfull_page_writes=on, because in that mode we never skip pages and\nalways do a full replay, which has various tradeoffs.)\n\nI believe those are the fundamental reasons. If you know of more\nreasons, I'd love to hear them.\n\nWhy don't we synchronise the data directory for a clean startup?\n\nWhen we start up the database from a shutdown checkpoint, we take the\ncheckpoint at face value. A checkpoint is a promise that all changes\nup to a given LSN have been durably stored on disk. There are a\ncouple of cases where that isn't true:\n\n1. You were previously running with fsync=off. That's OK, we told\nyou not to do that. Checkpoints created without fsync barriers to\nenforce the strict WAL-then-data-then-checkpoint protocol are\nforgeries.\n\n2. You made a file system-level copy of a cluster that you shut down\ncleanly first, using cp, tar, scp, rsync, xmodem etc. Now you start\nup the copy. Its checkpoint is a forgery. (Maybe our manual should\nmention this problem under \"25.2. File System Level Backup\" where it\nteaches you to rsync your cluster.)\n\nHow do the existing recovery_init_sync_method modes work?\n\nYou can think of recovery_init_sync_method as different \"query plans\"\nfor finding dirty buffers in the kernel's page cache to sync.\n\n1. fsync: Go through the directory tree and call fsync() on each\nfile, just in case that file had some dirty pages. This is a terrible\nplan if the files aren't currently in the kernel's VFS cache, because\nit could take up to a few milliseconds to get each one in there\n(random read to slower SSDs or network storage or IOPS-capped cloud\nstorage). 
If there really is a lot of dirty data, that's a good bet,\nbecause the files must have been in the VFS cache already. But if\nthere are one million mostly read-only tables, it could take ages just\nto *open* all the files, even though there's not much to actually\nwrite out.\n\n2. syncfs: Go through the kernel page cache instead, looking for\ndirty data in the small number of file systems that contain our\ndirectories. This is driven by data that is already in the kernel's\ncache, so we avoid the need to perform I/O to search for dirty data.\nThat's great if your cluster is running mostly alone on the file\nsystem in question, but it's not great if you're running another\nPostgreSQL cluster on the same file system, because now we generate\nextra write I/O when it finds incidental other stuff to write out.\n\nThese are both scatter gun approaches that can sometimes do a lot of\nuseless work, and I'd like to find a precise version that uses\ninformation we already have about what might be dirty according to the\nmeaning of a checkpoint and a transaction log. The attached patch\ndoes that as follows:\n\n1. Sync the WAL using fsync(), to enforce the log-before-data rule.\nThat's moved into the existing loop that scans the WAL files looking\nfor temporary files to unlink. (I suppose it should perhaps do the\n\"presync\" trick too. Not done yet.)\n\n2. While replaying the WAL, if we ever decide to skip a page because\nof its LSN, remember to fsync() the file in the next checkpoint anyway\n(because the data might be dirty in the kernel). This way we sync\nall files that changed since the last checkpoint (even if we don't\nhave to apply the change again). (A more paranoid mode would mark the\npage dirty instead, so that we'll not only fsync() it, but we'll also\nwrite it out again. This would defend against kernels that have\nwriteback failure modes that include keeping changes but dropping\ntheir own dirty flag. 
Not done here.)\n\nOne thing about this approach is that it takes the checkpoint it\nrecovers from at face value. This is similar to the current situation\nwith startup from a clean shutdown checkpoint. If you're starting up\na database that was previously running with fsync=off, it won't fix\nthe problems that it might have created, and if you beamed a copy of your\ncrashed cluster to another machine with rsync and took no steps to\nsync it, then it won't fix the problems caused by random files that\nare not yet flushed to disk, and that don't happen to be dirtied (or\nskipped with BLK_DONE) by WAL replay.\nrecovery_init_sync_method=fsync,syncfs will fix at least that second\nproblem for you.\n\nNow, what holes are there in this scheme?\n\n[1] https://postgr.es/m/11bc2bb7-ecb5-3ad0-b39f-df632734cd81%40discourse.org", "msg_date": "Sat, 20 Mar 2021 15:35:29 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "recovery_init_sync_method=wal" }, { "msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> 2. You made a file system-level copy of a cluster that you shut down\n> cleanly first, using cp, tar, scp, rsync, xmodem etc. Now you start\n> up the copy. Its checkpoint is a forgery. (Maybe our manual should\n> mention this problem under \"25.2. File System Level Backup\" where it\n> teaches you to rsync your cluster.)\n\nYes, it'd be good to get some updates to the backup documentation around\nthis which stresses in all cases that your backup utility should make\nsure to fsync everything it restores.\n\n> These are both scatter gun approaches that can sometimes do a lot of\n> useless work, and I'd like to find a precise version that uses\n> information we already have about what might be dirty according to the\n> meaning of a checkpoint and a transaction log. The attached patch\n> does that as follows:\n> \n> 1. 
Sync the WAL using fsync(), to enforce the log-before-data rule.\n> That's moved into the existing loop that scans the WAL files looking\n> for temporary files to unlink. (I suppose it should perhaps do the\n> \"presync\" trick too. Not done yet.)\n> \n> 2. While replaying the WAL, if we ever decide to skip a page because\n> of its LSN, remember to fsync() the file in the next checkpoint anyway\n> (because the data might be dirty in the kernel). This way we sync\n> all files that changed since the last checkpoint (even if we don't\n> have to apply the change again). (A more paranoid mode would mark the\n> page dirty instead, so that we'll not only fsync() it, but we'll also\n> write it out again. This would defend against kernels that have\n> writeback failure modes that include keeping changes but dropping\n> their own dirty flag. Not done here.)\n\nPresuming that we do add to the documentation the language to document\nwhat's assumed (and already done by modern backup tools) that they're\nfsync'ing everything they're restoring, do we/can we have an option\nwhich those tools could set that explicitly tells PG \"everything in the\ncluster has been fsync'd already, you don't need to do anything extra\"?\nPerhaps also/separately one for WAL that's restored with restore command\nif we think that's necessary?\n\nOtherwise, just in general, agree with doing this to address the risks\ndiscussed around regular crash recovery. We have some pretty clear \"if\nthe DB was doing recovery and was interrupted, you need to restore from\nbackup\" messages today in xlog.c, and this patch didn't seem to change\nthat? Am I missing something or isn't the idea here that these changes\nwould make it so you aren't going to end up with corruption in those\ncases? 
Specifically looking at-\n\nxlog.c:6509-\n case DB_IN_CRASH_RECOVERY:\n ereport(LOG,\n (errmsg(\"database system was interrupted while in recovery at %s\",\n str_time(ControlFile->time)),\n errhint(\"This probably means that some data is corrupted and\"\n \" you will have to use the last backup for recovery.\")));\n break;\n\n case DB_IN_ARCHIVE_RECOVERY:\n ereport(LOG,\n (errmsg(\"database system was interrupted while in recovery at log time %s\",\n str_time(ControlFile->checkPointCopy.time)),\n errhint(\"If this has occurred more than once some data might be corrupted\"\n \" and you might need to choose an earlier recovery target.\")));\n break;\n\nThanks!\n\nStephen", "msg_date": "Sun, 21 Mar 2021 11:31:43 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: recovery_init_sync_method=wal" }, { "msg_contents": "On Mon, Mar 22, 2021 at 4:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n> Presuming that we do add to the documentation the language to document\n> what's assumed (and already done by modern backup tools) that they're\n> fsync'ing everything they're restoring, do we/can we have an option\n> which those tools could set that explicitly tells PG \"everything in the\n> cluster has been fsync'd already, you don't need to do anything extra\"?\n> Perhaps also/separately one for WAL that's restored with restore command\n> if we think that's necessary?\n\nIn the earlier thread, we did contemplate\nrecovery_init_sync_method=none, but it has the problem that after\nrecovery completes you have a cluster running with a setting that is\nreally bad if you eventually crash again and run crash recovery again,\nthis time with a dirty kernel page cache. 
I was one of the people\nvoting against that feature, but I also wrote a strawman patch just\nfor the visceral experience of every cell in my body suddenly\nwhispering \"yeah, nah, I'm not committing that\" as I wrote the\nweaselwordage for the manual.\n\nIn that thread we also contemplated safe ways for a basebackup-type\ntool to promise that data has been sync'd, to avoid that problem with\nthe GUC. Maybe something like: the backup label file could contain a\n\"SYNC_DONE\" message. But then what if someone copies the whole\ndirectory to a new location, how can you invalidate the promise? This\nis another version of the question of whether it's our problem or the\nuser's to worry about buffering of pgdata files that they copy around\nwith unknown tools. If it's our problem, maybe something like:\n\"SYNC_DONE for pgdata_inode=1234, hostname=x, ...\" is enough to detect\ncases where we still believe the claim. But there's probably a long\ntail of weird ways for whatever you come up with to be deceived.\n\nIn the case of the recovery_init_sync_method=wal patch proposed in\n*this* thread, here's the thing: there's not much to gain by trying to\nskip the sync, anyway! For the WAL, you'll be opening those files\nsoon anyway to replay them, and if they're already sync'd then fsync()\nwill return quickly. For the relfile data, you'll be opening all\nrelfiles that are referenced by the WAL soon anyway, and syncing them\nif required. So that just leaves relfiles that are not referenced in\nthe WAL you're about to replay. Whether we have a duty to sync those\ntoo is that central question again, and one of the things I'm trying\nto get an answer to with this thread. The\nrecovery_init_sync_method=wal patch only makes sense if the answer is\n\"no\".\n\n> Otherwise, just in general, agree with doing this to address the risks\n> discussed around regular crash recovery. 
We have some pretty clear \"if\n> the DB was doing recovery and was interrupted, you need to restore from\n> backup\" messages today in xlog.c, and this patch didn't seem to change\n> that? Am I missing something or isn't the idea here that these changes\n> would make it so you aren't going to end up with corruption in those\n> cases? Specifically looking at-\n>\n> xlog.c:6509-\n> case DB_IN_CRASH_RECOVERY:\n> ereport(LOG,\n> (errmsg(\"database system was interrupted while in recovery at %s\",\n> str_time(ControlFile->time)),\n> errhint(\"This probably means that some data is corrupted and\"\n> \" you will have to use the last backup for recovery.\")));\n> break;\n>\n> case DB_IN_ARCHIVE_RECOVERY:\n> ereport(LOG,\n> (errmsg(\"database system was interrupted while in recovery at log time %s\",\n> str_time(ControlFile->checkPointCopy.time)),\n> errhint(\"If this has occurred more than once some data might be corrupted\"\n> \" and you might need to choose an earlier recovery target.\")));\n> break;\n\nMaybe I missed your point... 
but I don't think anything changes here?\nIf recovery is crashing in some deterministic way (not just because\nyou lost power the first time, but rather because a particular log\nrecord hits the same bug or gets confused by the same corruption and\nimplodes every time) it'll probably keep doing so, and our sync\nalgorithm doesn't seem to make a difference to that.\n\n\n", "msg_date": "Mon, 22 Mar 2021 10:56:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: recovery_init_sync_method=wal" }, { "msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Mon, Mar 22, 2021 at 4:31 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > Presuming that we do add to the documentation the language to document\n> > what's assumed (and already done by modern backup tools) that they're\n> > fsync'ing everything they're restoring, do we/can we have an option\n> > which those tools could set that explicitly tells PG \"everything in the\n> > cluster has been fsync'd already, you don't need to do anything extra\"?\n> > Perhaps also/seperately one for WAL that's restored with restore command\n> > if we think that's necessary?\n> \n> In the earlier thread, we did contemplate\n> recovery_init_sync_method=none, but it has the problem that after\n> recovery completes you have a cluster running with a setting that is\n> really bad if you eventually crash again and run crash recovery again,\n> this time with a dirty kernel page cache. I was one of the people\n> voting against that feature, but I also wrote a strawman patch just\n> for the visceral experience of every cell in my body suddenly\n> whispering \"yeah, nah, I'm not committing that\" as I wrote the\n> weaselwordage for the manual.\n\nWhy not have a 'recovery_init_sync_method=backup'? 
It's not like\nthere's a question about if we're doing recovery from a backup or not.\n\n> In that thread we also contemplated safe ways for a basebackup-type\n> tool to promise that data has been sync'd, to avoid that problem with\n> the GUC. Maybe something like: the backup label file could contain a\n> \"SYNC_DONE\" message. But then what if someone copies the whole\n> directory to a new location, how can you invalidate the promise? This\n> is another version of the question of whether it's our problem or the\n> user's to worry about buffering of pgdata files that they copy around\n> with unknown tools. If it's our problem, maybe something like:\n> \"SYNC_DONE for pgdata_inode=1234, hostname=x, ...\" is enough to detect\n> cases where we still believe the claim. But there's probably a long\n> tail of weird ways for whatever you come up with to be deceived.\n\nI don't really care for the idea of backup tools modifying the backup\nlabel... they're explicitly told not to do that and that seems like the\nbest move. I also don't particularly care about silly \"what ifs\" like\nif someone randomly copies the data directory after the restore- yes,\nthere's a lot of ways that people can screw things up by doing things\nthat aren't sane, but that doesn't mean we should try to cater to such\ncases.\n\n> In the case of the recovery_init_sync_method=wal patch proposed in\n> *this* thread, here's the thing: there's not much to gain by trying to\n> skip the sync, anyway! For the WAL, you'll be opening those files\n> soon anyway to replay them, and if they're already sync'd then fsync()\n> will return quickly. For the relfile data, you'll be opening all\n> relfiles that are referenced by the WAL soon anyway, and syncing them\n> if required. So that just leaves relfiles that are not referenced in\n> the WAL you're about to replay. Whether we have a duty to sync those\n> too is that central question again, and one of the things I'm trying\n> to get an answer to with this thread. 
The\n> recovery_init_sync_method=wal patch only makes sense if the answer is\n> \"no\".\n\nI'm not too bothered by an extra fsync() for WAL files, just to be\nclear, it's running around fsync'ing everything else that seems\nobjectionable to me.\n\n> > Otherwise, just in general, agree with doing this to address the risks\n> > discussed around regular crash recovery. We have some pretty clear \"if\n> > the DB was doing recovery and was interrupted, you need to restore from\n> > backup\" messages today in xlog.c, and this patch didn't seem to change\n> > that? Am I missing something or isn't the idea here that these changes\n> > would make it so you aren't going to end up with corruption in those\n> > cases? Specifically looking at-\n> >\n> > xlog.c:6509-\n> > case DB_IN_CRASH_RECOVERY:\n> > ereport(LOG,\n> > (errmsg(\"database system was interrupted while in recovery at %s\",\n> > str_time(ControlFile->time)),\n> > errhint(\"This probably means that some data is corrupted and\"\n> > \" you will have to use the last backup for recovery.\")));\n> > break;\n> >\n> > case DB_IN_ARCHIVE_RECOVERY:\n> > ereport(LOG,\n> > (errmsg(\"database system was interrupted while in recovery at log time %s\",\n> > str_time(ControlFile->checkPointCopy.time)),\n> > errhint(\"If this has occurred more than once some data might be corrupted\"\n> > \" and you might need to choose an earlier recovery target.\")));\n> > break;\n> \n> Maybe I missed your point... but I don't think anything changes here?\n> If recovery is crashing in some deterministic way (not just because\n> you lost power the first time, but rather because a particular log\n> record hits the same bug or gets confused by the same corruption and\n> implodes every time) it'll probably keep doing so, and our sync\n> algorithm doesn't seem to make a difference to that.\n\nThese errors aren't thrown only in the case where we hit a bad XLOG\nrecord though, are they..? 
Maybe I missed that somehow but it seems\nlike these get thrown in the simple case that we, say, lost power,\nstarted recovery and didn't finish recovery and lost power again, even\nthough with your patches hopefully that wouldn't actually result in a\nfailure case or in corruption..?\n\nIn the 'bad XLOG' or 'confused by corruption' cases, I wonder if it'd be\nhelpful to write that out more explicitly somehow.. essentially\nsegregating these cases.\n\nThanks,\n\nStephen", "msg_date": "Sun, 21 Mar 2021 18:07:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: recovery_init_sync_method=wal" } ]
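The "SYNC_DONE for pgdata_inode=1234, hostname=x, ..." idea contemplated above is only a thought experiment, but the invalidation check it implies is easy to picture. The sketch below is a hypothetical, standalone illustration — the label format, the function name, and the fixed-size host buffers are all assumptions for this example, not anything PostgreSQL implements. It stops believing the promise as soon as the data directory has a different inode (it was copied or recreated) or the claim was recorded on a different host:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Hypothetical check for a backup-label sync promise of the form
 *     SYNC_DONE for pgdata_inode=<ino>, hostname=<host>
 * Illustration only; no such label line exists in PostgreSQL.
 */
static int
sync_promise_still_valid(const char *label_line, const char *pgdata)
{
    unsigned long long claimed_ino;
    char        claimed_host[256];
    char        host[256];
    struct stat st;

    if (sscanf(label_line, "SYNC_DONE for pgdata_inode=%llu, hostname=%255s",
               &claimed_ino, claimed_host) != 2)
        return 0;               /* malformed promise: don't believe it */
    if (stat(pgdata, &st) != 0)
        return 0;
    if ((unsigned long long) st.st_ino != claimed_ino)
        return 0;               /* directory was copied or recreated */
    if (gethostname(host, sizeof(host)) != 0)
        return 0;
    return strcmp(host, claimed_host) == 0;     /* restored elsewhere? */
}
```

As the thread notes, there is still a long tail of ways for any such claim to be deceived (hard-linked copies, cloned machines with identical hostnames), which is part of why the idea was not pursued.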
[ { "msg_contents": "Hi\n\nI finished work on pspg.\n\nhttps://github.com/okbob/pspg\n\nNow it has special features like rows or block selection by mouse, and\nexport related data to file or to clipboard in csv or tsv or insert\nformats. Some basic features like sorting data per selected columns are\npossible too.\n\nI hope this tool will serve well, and so work with Postgres (or other\nsupported databases) in the terminal will be more comfortable and more\nefficient.\n\nRegards\n\nPavel Stehule\n\nHiI finished work on pspg.https://github.com/okbob/pspgNow it has special features like rows or block selection by mouse, and export related data to file or to clipboard in csv or tsv or insert formats. Some basic features like sorting data per selected columns are possible too. I hope this tool will serve well, and so work with Postgres (or other supported databases) in the terminal will be more comfortable and more efficient.RegardsPavel Stehule", "msg_date": "Sat, 20 Mar 2021 04:34:30 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "pspg pager is finished" }, { "msg_contents": "On Sat, Mar 20, 2021 at 04:34:30AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> I finished work on pspg.\n> \n> https://github.com/okbob/pspg\n> \n> Now it has special features like rows or block selection by mouse, and\n> export related data to file or to clipboard in csv or tsv or insert\n> formats. Some basic features like sorting data per selected columns are\n> possible too.\n> \n> I hope this tool will serve well, and so work with Postgres (or other\n> supported databases) in the terminal will be more comfortable and more\n> efficient.\n\nThanks a lot for that tool Pavel. It has been my favorite psql pager for\nyears!\n\n\n", "msg_date": "Sat, 20 Mar 2021 11:46:19 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "so 20. 3. 
2021 v 4:45 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Sat, Mar 20, 2021 at 04:34:30AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > I finished work on pspg.\n> >\n> > https://github.com/okbob/pspg\n> >\n> > Now it has special features like rows or block selection by mouse, and\n> > export related data to file or to clipboard in csv or tsv or insert\n> > formats. Some basic features like sorting data per selected columns are\n> > possible too.\n> >\n> > I hope this tool will serve well, and so work with Postgres (or other\n> > supported databases) in the terminal will be more comfortable and more\n> > efficient.\n>\n> Thanks a lot for that tool Pavel. It has been my favorite psql pager for\n> years!\n>\n\nThank you\n\nPavel\n\nso 20. 3. 2021 v 4:45 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:On Sat, Mar 20, 2021 at 04:34:30AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> I finished work on pspg.\n> \n> https://github.com/okbob/pspg\n> \n> Now it has special features like rows or block selection by mouse, and\n> export related data to file or to clipboard in csv or tsv or insert\n> formats. Some basic features like sorting data per selected columns are\n> possible too.\n> \n> I hope this tool will serve well, and so work with Postgres (or other\n> supported databases) in the terminal will be more comfortable and more\n> efficient.\n\nThanks a lot for that tool Pavel.  It has been my favorite psql pager for\nyears!Thank youPavel", "msg_date": "Sat, 20 Mar 2021 07:13:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "On 3/20/21 4:34 AM, Pavel Stehule wrote:\n> Hi\n> \n> I finished work on pspg.\n> \n> https://github.com/okbob/pspg\n> \n> Now it has special features like rows or block selection by mouse, and\n> export related data to file or to clipboard in csv or tsv or insert\n> formats. 
Some basic features like sorting data per selected columns are\n> possible too.\n> \n> I hope this tool will serve well, and so work with Postgres (or other\n> supported databases) in the terminal will be more comfortable and more\n> efficient.\n\nIf this means active development on it is finished, I would like to see\nthis integrated into the tree, perhaps even directly into psql itself\n(unless the user chooses a different pager). It is that useful.\n\nThank you, Pavel, for this work.\n-- \nVik Fearing\n\n\n", "msg_date": "Sat, 20 Mar 2021 07:51:36 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "so 20. 3. 2021 v 7:51 odesílatel Vik Fearing <vik@postgresfriends.org>\nnapsal:\n\n> On 3/20/21 4:34 AM, Pavel Stehule wrote:\n> > Hi\n> >\n> > I finished work on pspg.\n> >\n> > https://github.com/okbob/pspg\n> >\n> > Now it has special features like rows or block selection by mouse, and\n> > export related data to file or to clipboard in csv or tsv or insert\n> > formats. Some basic features like sorting data per selected columns are\n> > possible too.\n> >\n> > I hope this tool will serve well, and so work with Postgres (or other\n> > supported databases) in the terminal will be more comfortable and more\n> > efficient.\n>\n> If this means active development on it is finished, I would like to see\n> this integrated into the tree, perhaps even directly into psql itself\n> (unless the user chooses a different pager). It is that useful.\n>\n\nyes, - almost all my ideas are implemented - and I have no plans to write\npspg as a light spreadsheet. It is just a pager.\n\nUnfortunately, the source code of pspg has not required postgres quality. I\nwrote it mostly alone without reviews and without initial experience with\nthis kind of application. Some implemented interactive features are very\ncomplex (for terminal applications), and not too simply understandable. 
The\ncode is not too bad, if I compare it with source code of \"more\" or \"less\",\nbut again, there was not any check of other eyes. There are not any regress\ntests, so I don't think so integration too core can be a good idea. Review\nof about 35000 lines can be terrible work, but this project needs it to\nmove forward . On second hand, it uses a lot of Postgres C patterns. And\nany new development can be more concentrated on quality and less to\nresearch.\n\nAlthough pspg has not Postgres quality, it is a good tool that is used by a\nlot of people. Can be nice to be propagated inside Postgres documentation,\nor some Postgres demos.\n\npspg is now in my private repository (and although it uses BSD licence), I\nwill be proud if it can be moved to some community repository, and if the\ncommunity takes more control and all rights to this project.\n\nNow, I would work more on other projects than pspg - and then pspg will be\nin maintenance mode. I'll fix all reported errors.\n\n\n> Thank you, Pavel, for this work.\n>\n\nThank you :)\n\nPavel\n\n-- \n> Vik Fearing\n>\n\nso 20. 3. 2021 v 7:51 odesílatel Vik Fearing <vik@postgresfriends.org> napsal:On 3/20/21 4:34 AM, Pavel Stehule wrote:\n> Hi\n> \n> I finished work on pspg.\n> \n> https://github.com/okbob/pspg\n> \n> Now it has special features like rows or block selection by mouse, and\n> export related data to file or to clipboard in csv or tsv or insert\n> formats. Some basic features like sorting data per selected columns are\n> possible too.\n> \n> I hope this tool will serve well, and so work with Postgres (or other\n> supported databases) in the terminal will be more comfortable and more\n> efficient.\n\nIf this means active development on it is finished, I would like to see\nthis integrated into the tree, perhaps even directly into psql itself\n(unless the user chooses a different pager).  It is that useful.yes, - almost all my ideas are implemented - and I have no plans to write pspg as a light spreadsheet. 
It is just a pager. Unfortunately, the source code of pspg has not required postgres quality. I wrote it mostly alone without reviews and without initial experience with this kind of application. Some implemented interactive features are very complex (for terminal applications), and not too simply understandable. The code is not too bad, if I compare it with source code of \"more\" or \"less\", but again, there was not any check of other eyes. There are not any regress tests, so I don't think so integration too core can be a good idea. Review of about 35000 lines can be terrible work, but this project needs it  to move forward . On second hand, it uses a lot of Postgres C patterns. And any new development can be more concentrated on quality and less to research.Although pspg has not Postgres quality, it is a good tool that is used by a lot of people. Can be nice to be propagated inside Postgres documentation, or some Postgres demos. pspg is now in my private repository (and although it uses BSD licence), I will be proud if it can be moved to some community repository, and if the community takes more control and all rights to this project. Now, I would work more on other projects than pspg - and then pspg will be in maintenance mode. I'll fix all reported errors.\n\nThank you, Pavel, for this work.Thank you :)Pavel\n-- \nVik Fearing", "msg_date": "Sat, 20 Mar 2021 11:51:42 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "On Fri, Mar 19, 2021 at 20:35 Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n> Hi\n>\n> I finished work on pspg.\n>\n> https://github.com/okbob/pspg\n>\n\nThank you, Pavel. I use it always when possible, and highly recommend it to\nothers.\n\n> <https://github.com/okbob/pspg>\n>\nNik\n\nOn Fri, Mar 19, 2021 at 20:35 Pavel Stehule <pavel.stehule@gmail.com> wrote:HiI finished work on pspg.https://github.com/okbob/pspgThank you, Pavel. 
I use it always when possible, and highly recommend it to others.\nNik", "msg_date": "Sat, 20 Mar 2021 09:01:32 -0700", "msg_from": "Nikolay Samokhvalov <samokhvalov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "so 20. 3. 2021 v 17:01 odesílatel Nikolay Samokhvalov <samokhvalov@gmail.com>\nnapsal:\n\n> On Fri, Mar 19, 2021 at 20:35 Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>> Hi\n>>\n>> I finished work on pspg.\n>>\n>> https://github.com/okbob/pspg\n>>\n>\n> Thank you, Pavel. I use it always when possible, and highly recommend it\n> to others.\n>\n>> <https://github.com/okbob/pspg>\n>>\n> Nik\n>\n\nThank you\n\nPavel\n\nso 20. 3. 2021 v 17:01 odesílatel Nikolay Samokhvalov <samokhvalov@gmail.com> napsal:On Fri, Mar 19, 2021 at 20:35 Pavel Stehule <pavel.stehule@gmail.com> wrote:HiI finished work on pspg.https://github.com/okbob/pspgThank you, Pavel. I use it always when possible, and highly recommend it to others.\nNikThank youPavel", "msg_date": "Sat, 20 Mar 2021 17:13:03 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "On Sat, Mar 20, 2021 at 9:05 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n> Hi\n>\n> I finished work on pspg.\n>\n> https://github.com/okbob/pspg\n>\n> Now it has special features like rows or block selection by mouse, and\n> export related data to file or to clipboard in csv or tsv or insert\n> formats. 
Some basic features like sorting data per selected columns are\n> possible too.\n>\n> I hope this tool will serve well, and so work with Postgres (or other\n> supported databases) in the terminal will be more comfortable and more\n> efficient.\n>\n> Regards\n>\n> Pavel Stehule\n>\n\nIt's awesome Pavel,\n\nBuilding UI at console level is a serious stuff, and I love it.\nThank you so much for all your efforts.\n\n-- \n\nRegards,\nDinesh\nmanojadinesh.blogspot.com\n\nOn Sat, Mar 20, 2021 at 9:05 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:HiI finished work on pspg.https://github.com/okbob/pspgNow it has special features like rows or block selection by mouse, and export related data to file or to clipboard in csv or tsv or insert formats. Some basic features like sorting data per selected columns are possible too. I hope this tool will serve well, and so work with Postgres (or other supported databases) in the terminal will be more comfortable and more efficient.RegardsPavel Stehule\n\nIt's awesome Pavel,Building UI at console level is a serious stuff, and I love it.Thank you so much for all your efforts.-- Regards,Dineshmanojadinesh.blogspot.com", "msg_date": "Sat, 20 Mar 2021 23:08:18 +0530", "msg_from": "dinesh kumar <dineshkumar02@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "On Sat, Mar 20, 2021 at 4:35 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I finished work on pspg.\n\ntmunro@x1:~/projects/postgresql$ pspg --stream --graph\npspg: unrecognized option '--graph'\nTry pspg --help\n\nHmm, seems to be not finished :-)\n\n\n", "msg_date": "Sun, 21 Mar 2021 12:47:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pspg pager is finished" }, { "msg_contents": "ne 21. 3. 
2021 v 0:48 odesílatel Thomas Munro <thomas.munro@gmail.com>\nnapsal:\n\n> On Sat, Mar 20, 2021 at 4:35 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> > I finished work on pspg.\n>\n> tmunro@x1:~/projects/postgresql$ pspg --stream --graph\n> pspg: unrecognized option '--graph'\n> Try pspg --help\n>\n> Hmm, seems to be not finished :-)\n>\n\n:-)\n\nne 21. 3. 2021 v 0:48 odesílatel Thomas Munro <thomas.munro@gmail.com> napsal:On Sat, Mar 20, 2021 at 4:35 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> I finished work on pspg.\n\ntmunro@x1:~/projects/postgresql$ pspg --stream --graph\npspg: unrecognized option '--graph'\nTry pspg --help\n\nHmm, seems to be not finished :-):-)", "msg_date": "Sun, 21 Mar 2021 06:22:00 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pspg pager is finished" } ]
[ { "msg_contents": "Hi,\n\nI found some dubious looking HTAB cleanup code for replication streams\n(see file:worker.c, function:stream_cleanup_files).\n\nviz.\n\n----------\nstatic void\nstream_cleanup_files(Oid subid, TransactionId xid)\n{\n char path[MAXPGPATH];\n StreamXidHash *ent;\n\n /* Remove the xid entry from the stream xid hash */\n ent = (StreamXidHash *) hash_search(xidhash,\n (void *) &xid,\n HASH_REMOVE,\n NULL);\n /* By this time we must have created the transaction entry */\n Assert(ent != NULL);\n\n /* Delete the change file and release the stream fileset memory */\n changes_filename(path, subid, xid);\n SharedFileSetDeleteAll(ent->stream_fileset);\n pfree(ent->stream_fileset);\n ent->stream_fileset = NULL;\n\n /* Delete the subxact file and release the memory, if it exist */\n if (ent->subxact_fileset)\n {\n subxact_filename(path, subid, xid);\n SharedFileSetDeleteAll(ent->subxact_fileset);\n pfree(ent->subxact_fileset);\n ent->subxact_fileset = NULL;\n }\n}\n----------\n\nNotice how the code calls hash_search(... HASH_REMOVE ...), but then\nit deferences the same ent that was returned from that function.\n\nIIUC that is a violation of the hash_search API, whose function\ncomment (dynahash.c) clearly says not to use the return value in such\na way:\n\n----------\n * Return value is a pointer to the element found/entered/removed if any,\n * or NULL if no match was found. (NB: in the case of the REMOVE action,\n * the result is a dangling pointer that shouldn't be dereferenced!)\n----------\n\n~~\n\nPSA my patch to correct this by firstly doing a HASH_FIND, then only\nHASH_REMOVE after we've finished using the ent.\n\n----\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Sat, 20 Mar 2021 18:24:20 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "replication cleanup code incorrect way to use of HTAB HASH_REMOVE ?" 
}, { "msg_contents": "On Sat, Mar 20, 2021 at 12:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> PSA my patch to correct this by firstly doing a HASH_FIND, then only\n> HASH_REMOVE after we've finished using the ent.\n>\n\nWhy can't we keep using HASH_REMOVE as it is but get the output (entry\nfound or not) in the last parameter of hash_search API and then\nperform Assert based on that? See similar usage in reorderbuffer.c and\nrewriteheap.c.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:24:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" }, { "msg_contents": "On Sun, Mar 21, 2021 at 8:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sat, Mar 20, 2021 at 12:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > PSA my patch to correct this by firstly doing a HASH_FIND, then only\n> > HASH_REMOVE after we've finished using the ent.\n> >\n>\n> Why can't we keep using HASH_REMOVE as it is but get the output (entry\n> found or not) in the last parameter of hash_search API and then\n> perform Assert based on that? See similar usage in reorderbuffer.c and\n> rewriteheap.c.\n>\n\nChanging the Assert doesn't do anything to fix the problem as\ndescribed, i.e. dereferencing of ent after the HASH_REMOVE.\n\nThe real problem isn't the Assert. 
It's all those other usages of ent\ndisobeying the API rule: \"(NB: in the case of the REMOVE action, the\nresult is a dangling pointer that shouldn't be dereferenced!)\"\n\ne.g.\n- SharedFileSetDeleteAll(ent->stream_fileset);\n- pfree(ent->stream_fileset);\n- ent->stream_fileset = NULL;\n- if (ent->subxact_fileset)\n- SharedFileSetDeleteAll(ent->subxact_fileset);\n- pfree(ent->subxact_fileset);\n- ent->subxact_fileset = NULL;\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 22 Mar 2021 08:49:50 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> The real problem isn't the Assert. It's all those other usages of ent\n> disobeying the API rule: \"(NB: in the case of the REMOVE action, the\n> result is a dangling pointer that shouldn't be dereferenced!)\"\n\nI suppose the HASH_REMOVE case could clobber the object with 0x7f if\nCLOBBER_FREED_MEMORY is defined (typically assertion builds), or\nalternatively return some other non-NULL but poisoned pointer, so that\nproblems of this ilk blow up in early testing.\n\n\n", "msg_date": "Mon, 22 Mar 2021 11:20:37 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 9:21 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > The real problem isn't the Assert. 
It's all those other usages of ent\n> > disobeying the API rule: \"(NB: in the case of the REMOVE action, the\n> > result is a dangling pointer that shouldn't be dereferenced!)\"\n>\n> I suppose the HASH_REMOVE case could clobber the object with 0x7f if\n> CLOBBER_FREED_MEMORY is defined (typically assertion builds), or\n> alternatively return some other non-NULL but poisoned pointer, so that\n> problems of this ilk blow up in early testing.\n\n+1, but not sure if the poisoned ptr alternative can work because some\ncode (e.g see RemoveTargetIfNoLongerUsed function) is asserting the\nreturn ptr actual value, not just its NULL-ness.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Mon, 22 Mar 2021 09:44:15 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" }, { "msg_contents": "Hi,\n\nOn 2021-03-22 11:20:37 +1300, Thomas Munro wrote:\n> On Mon, Mar 22, 2021 at 10:50 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > The real problem isn't the Assert. It's all those other usages of ent\n> > disobeying the API rule: \"(NB: in the case of the REMOVE action, the\n> > result is a dangling pointer that shouldn't be dereferenced!)\"\n\nRight that's clearly not ok.\n\n\n> I suppose the HASH_REMOVE case could clobber the object with 0x7f if\n> CLOBBER_FREED_MEMORY is defined (typically assertion builds)\n\nYea. Plus VALGRIND_MAKE_MEM_NOACCESS() (requiring a\nVALGRIND_MAKE_MEM_UNDEFINED() when reusing) or at least a\nVALGRIND_MAKE_MEM_UNDEFINED().\n\n\n>or alternatively return some other non-NULL but poisoned pointer, so\n>that problems of this ilk blow up in early testing.\n\nIMO it's just a bad API to combine the different use cases into\nhash_search(). It's kinda defensible to have FIND/ENTER/ENTER_NULL go\nthrough the same function (although I do think it makes code harder to\nread), but having HASH_REMOVE is just wrong. 
The only reason for\nreturning a dangling pointer is that that's the obvious way to check if\nsomething was found.\n\nIf they weren't combined we could tell newer compilers that the memory\nshouldn't be accessed after the HASH_REMOVE anymore. And it'd remove\nsome unnecessary branches in performance critical code...\n\nBut I guess making dynahash not terrible from that POV basically means\nreplacing all of dynahash. Having all those branches for partitioned\nhashes, actions are really not great.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 21 Mar 2021 15:56:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB\n HASH_REMOVE ?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 3:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Sun, Mar 21, 2021 at 8:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sat, Mar 20, 2021 at 12:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > PSA my patch to correct this by firstly doing a HASH_FIND, then only\n> > > HASH_REMOVE after we've finished using the ent.\n> > >\n> >\n> > Why can't we keep using HASH_REMOVE as it is but get the output (entry\n> > found or not) in the last parameter of hash_search API and then\n> > perform Assert based on that? See similar usage in reorderbuffer.c and\n> > rewriteheap.c.\n> >\n>\n> Changing the Assert doesn't do anything to fix the problem as\n> described, i.e. dereferencing of ent after the HASH_REMOVE.\n>\n> The real problem isn't the Assert. It's all those other usages of ent\n> disobeying the API rule: \"(NB: in the case of the REMOVE action, the\n> result is a dangling pointer that shouldn't be dereferenced!)\"\n>\n\nRight, that is a problem. I see that your patch will fix it. 
Thanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 22 Mar 2021 07:57:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 7:57 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 3:20 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Sun, Mar 21, 2021 at 8:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Sat, Mar 20, 2021 at 12:54 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > > >\n> > > > PSA my patch to correct this by firstly doing a HASH_FIND, then only\n> > > > HASH_REMOVE after we've finished using the ent.\n> > > >\n> > >\n> > > Why can't we keep using HASH_REMOVE as it is but get the output (entry\n> > > found or not) in the last parameter of hash_search API and then\n> > > perform Assert based on that? See similar usage in reorderbuffer.c and\n> > > rewriteheap.c.\n> > >\n> >\n> > Changing the Assert doesn't do anything to fix the problem as\n> > described, i.e. dereferencing of ent after the HASH_REMOVE.\n> >\n> > The real problem isn't the Assert. It's all those other usages of ent\n> > disobeying the API rule: \"(NB: in the case of the REMOVE action, the\n> > result is a dangling pointer that shouldn't be dereferenced!)\"\n> >\n>\n> Right, that is a problem. I see that your patch will fix it. Thanks.\n>\n\nPushed your patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:52:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: replication cleanup code incorrect way to use of HTAB HASH_REMOVE\n ?" } ]
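The fix that was pushed follows one rule: fetch the entry with HASH_FIND, release everything it owns, and only then call HASH_REMOVE — never dereferencing what HASH_REMOVE returns. The toy model below illustrates that ordering outside the server; `hash_search`, `StreamXidHash`, and `run_demo` here are deliberately simplified stand-ins (a single-bucket list, plain `char *` filesets), not the real dynahash API from src/backend/utils/hash/dynahash.c:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for dynahash's action codes (the real ones live in
 * src/include/utils/hsearch.h). */
typedef enum { HASH_FIND, HASH_ENTER, HASH_REMOVE } HASHACTION;

typedef struct StreamXidHash
{
    unsigned    xid;                /* hash key */
    char       *stream_fileset;     /* stands in for SharedFileSet * */
    char       *subxact_fileset;
    struct StreamXidHash *next;
} StreamXidHash;

static StreamXidHash *table;        /* toy single-bucket "hash table" */

/* Simplified model of hash_search(): on HASH_REMOVE the entry is freed,
 * so this model returns NULL rather than dynahash's dangling pointer. */
static StreamXidHash *
hash_search(unsigned xid, HASHACTION action, int *found)
{
    StreamXidHash **p = &table;

    while (*p && (*p)->xid != xid)
        p = &(*p)->next;
    if (found)
        *found = (*p != NULL);

    if (action == HASH_ENTER && *p == NULL)
    {
        *p = calloc(1, sizeof(StreamXidHash));
        (*p)->xid = xid;
    }
    else if (action == HASH_REMOVE && *p != NULL)
    {
        StreamXidHash *e = *p;

        *p = e->next;
        free(e);                    /* entry is gone: do not touch it again */
        return NULL;
    }
    return *p;
}

/* The safe ordering from the committed fix: FIND, release owned resources,
 * then REMOVE -- and never use the HASH_REMOVE result. */
static void
stream_cleanup(unsigned xid)
{
    int         found;
    StreamXidHash *ent = hash_search(xid, HASH_FIND, NULL);

    assert(ent != NULL);            /* the entry must already exist */
    free(ent->stream_fileset);
    ent->stream_fileset = NULL;
    if (ent->subxact_fileset)
    {
        free(ent->subxact_fileset);
        ent->subxact_fileset = NULL;
    }
    hash_search(xid, HASH_REMOVE, &found);
    assert(found);
}

static int
run_demo(void)
{
    StreamXidHash *ent = hash_search(42, HASH_ENTER, NULL);

    ent->stream_fileset = malloc(32);
    snprintf(ent->stream_fileset, 32, "changes-%u", ent->xid);
    ent->subxact_fileset = malloc(32);
    snprintf(ent->subxact_fileset, 32, "subxact-%u", ent->xid);

    stream_cleanup(42);
    return hash_search(42, HASH_FIND, NULL) == NULL;
}
```

If the entry could legitimately be missing, the same shape works with a check on the FIND result instead of the first assert.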
[ { "msg_contents": "Hi,\n\nUser inoas on IRC channel #postgresql noted that \\? does not describe \\do as supporting the + option. It does however support this option, as do \\dAp and \\dy.\n\nThis patch adds the annotation to the description of these commands in \\?.\n\nWhile adding it to the translation files I noticed some obvious errors, so this patch fixes the following as well:\n\n* correct inconsistent alignment. This was especially egregious in fr.po. Differing alignment across sections in es.po is preserved as this may have been a deliberate choice by the translator.\n* cs.po: remove extraneous newline\n* es.po: replace incorrect mention of \\do with the correct \\dAc, \\dAf and \\dAo\n* fr.po: merge some small lines that still fit within 78 chars\n* fr.po: remove [brackets] denoting optional parameters, when these aren't present in the English text\n* fr.po: \\sv was shown as accepting \"FONCTION\". This is replaced with \"VUE\".\n* fr.po: \\t was missing its optional [on|off] parameter.\n* tr.po: fix typo at \\d\n* uk.po: add missing newline at \\c\n\nRegards,\nMatthijs", "msg_date": "Sun, 21 Mar 2021 20:41:47 +0100", "msg_from": "\"Matthijs van der Vleuten\" <postgresql@zr40.nl>", "msg_from_op": true, "msg_subject": "[PATCH] In psql \\?, add [+] annotation where appropriate" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nHi, thank you for your work. I think this is a meaningful patch that should be merged.", "msg_date": "Tue, 25 May 2021 06:10:15 +0000", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] In psql \\?, add [+] annotation where appropriate" }, { "msg_contents": "On Tue, May 25, 2021 at 06:10:15AM +0000, Neil Chen wrote:\n> Hi, thank you for your work. 
I think this is a meaningful patch that\n> should be merged.\n\nMerged, then. I have scanned the rest of the area and did not notice\nany other inconsistencies.\n--\nMichael", "msg_date": "Wed, 9 Jun 2021 16:29:09 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] In psql \\?, add [+] annotation where appropriate" } ]
[ { "msg_contents": "I noticed when I execute \"pg_ctl stop\".\n\n11799 2021-03-22 07:28:19 JST LOG: received fast shutdown request\n11799 2021-03-22 07:28:19 JST LOG: aborting any active transactions\n11799 2021-03-22 07:28:20 JST LOG: background worker \"logical replication launcher\" (PID 11807) exited with exit code 1\n11802 2021-03-22 07:28:20 JST LOG: shutting down\n11799 2021-03-22 07:28:20 JST LOG: database system is shut down\n\nIt seems only logical replication launcher exited with exit code 1\nwhen it received shutdown request. Why?\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 22 Mar 2021 09:11:16 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Why logical replication lancher exits 1?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 1:11 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n> It seems only logical replication launcher exited with exit code 1\n> when it received shutdown request. Why?\n\nFWIW here's an earlier discussion of that topic:\n\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D1c3hG1g3iKYwfa_PDsT49RBaBJsaot_qNhPSCXBm9rzA%40mail.gmail.com\n\n\n", "msg_date": "Mon, 22 Mar 2021 14:53:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why logical replication lancher exits 1?" }, { "msg_contents": "> On Mon, Mar 22, 2021 at 1:11 PM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>> It seems only logical replication launcher exited with exit code 1\n>> when it received shutdown request. Why?\n> \n> FWIW here's an earlier discussion of that topic:\n> \n> https://www.postgresql.org/message-id/flat/CAEepm%3D1c3hG1g3iKYwfa_PDsT49RBaBJsaot_qNhPSCXBm9rzA%40mail.gmail.com\n\nThank you for pointing it out. I will look into the discussion.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. 
Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Mon, 22 Mar 2021 11:36:27 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Why logical replication lancher exits 1?" } ]
[ { "msg_contents": "Hi,\n\nThe Release Management Team (RMT) for the PostgreSQL 14 is assembled\nand has determined that the feature freeze date for the PostgreSQL 11\nrelease will be April 7, 2021. This means that any feature for the\nPostgreSQL 14 release *must be committed by April 7, 2021 AoE*\n(\"anywhere on earth\", see [1]). In other words, by April 8, it is too\nlate.\n\nThis naturally extends the March 2021 Commit Fest to April 7, 2021.\nAfter the freeze is in effect, any open feature in the current Commit\nFest will be moved into the future one.\n\nOpen items for the PostgreSQL 14 release will be tracked here:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\nFor the PostgreSQL 14 release, the release management team is composed\nof:\nPeter Geoghegan <pg(at)bowt(dot)ie>\nAndrew Dunstan <andrew(at)dunslane(dot)net>\nMichael Paquier <michael(at)paquier(dot)xyz>\n\nFor the time being, if you have any questions about the process,\nplease feel free to email any member of the RMT. We will send out\nnotes with updates and additional guidance in the near future.\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth\n\nOn behalf of the RMT, thanks,\n--\nMichael", "msg_date": "Mon, 22 Mar 2021 10:17:56 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "PostgreSQL 14 Feature Freeze + Release Management Team (RMT)" }, { "msg_contents": "On Mon, Mar 22, 2021 at 10:17:56AM +0900, Michael Paquier wrote:\n> The Release Management Team (RMT) for the PostgreSQL 14 is assembled\n> and has determined that the feature freeze date for the PostgreSQL 11\n> release will be April 7, 2021. This means that any feature for the\n> PostgreSQL 14 release *must be committed by April 7, 2021 AoE*\n> (\"anywhere on earth\", see [1]). 
In other words, by April 8, it is too\n> late.\n\nAnd so, here we are.\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 21:05:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 14 Feature Freeze + Release Management Team (RMT)" } ]
[ { "msg_contents": "Hi,\n\nWe are memset-ting the special space page that's already set to zeros\nby PageInit in BloomInitPage, GinInitPage and SpGistInitPage. We have\nalready removed the memset after PageInit in gistinitpage (see the\ncomment there). Unless I'm missing something, IMO they are redundant.\nI'm attaching a small patch that gets rid of the extra memset calls.\n\nWhile on it, I removed MAXALIGN(sizeof(SpGistPageOpaqueData)) in\nSpGistInitPage because the PageInit will anyways align the\nspecialSize. This change is inline with other places (such as\nBloomInitPage, brin_page_init GinInitPage, gistinitpage,\n_hash_pageinit and so on) where we just pass the size of special space\ndata structure.\n\nI didn't see any regression test failure on my dev system with the\nattached patch.\n\nThoughts?\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Mar 2021 10:16:33 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Mon, 22 Mar 2021 at 10:16, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> We are memset-ting the special space page that's already set to zeros\n> by PageInit in BloomInitPage, GinInitPage and SpGistInitPage. We have\n> already removed the memset after PageInit in gistinitpage (see the\n> comment there). Unless I'm missing something, IMO they are redundant.\n> I'm attaching a small patch that gets rid of the extra memset calls.\n>\n>\n> While on it, I removed MAXALIGN(sizeof(SpGistPageOpaqueData)) in\n> SpGistInitPage because the PageInit will anyways align the\n> specialSize. 
This change is inline with other places (such as\n> BloomInitPage, brin_page_init GinInitPage, gistinitpage,\n> _hash_pageinit and so on) where we just pass the size of special space\n> data structure.\n>\n> I didn't see any regression test failure on my dev system with the\n> attached patch.\n>\n>\n> Thoughts?\n\nYour changes look to fine me and I am also not getting any failure. I\nthink we should back-patch all the branches.\n\nPatch is applying to all the branches(till v95) and there is no failure.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Mar 2021 10:58:17 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 10:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> We are memset-ting the special space page that's already set to zeros\n> by PageInit in BloomInitPage, GinInitPage and SpGistInitPage. We have\n> already removed the memset after PageInit in gistinitpage (see the\n> comment there). Unless I'm missing something, IMO they are redundant.\n> I'm attaching a small patch that gets rid of the extra memset calls.\n>\n> While on it, I removed MAXALIGN(sizeof(SpGistPageOpaqueData)) in\n> SpGistInitPage because the PageInit will anyways align the\n> specialSize. 
This change is inline with other places (such as\n> BloomInitPage, brin_page_init GinInitPage, gistinitpage,\n> _hash_pageinit and so on) where we just pass the size of special space\n> data structure.\n>\n> I didn't see any regression test failure on my dev system with the\n> attached patch.\n>\n> Thoughts?\n\nThe changes look fine to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 3 Apr 2021 15:09:06 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Sat, Apr 3, 2021 at 3:09 PM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 10:16 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > We are memset-ting the special space page that's already set to zeros\n> > by PageInit in BloomInitPage, GinInitPage and SpGistInitPage. We have\n> > already removed the memset after PageInit in gistinitpage (see the\n> > comment there). Unless I'm missing something, IMO they are redundant.\n> > I'm attaching a small patch that gets rid of the extra memset calls.\n> >\n> > While on it, I removed MAXALIGN(sizeof(SpGistPageOpaqueData)) in\n> > SpGistInitPage because the PageInit will anyways align the\n> > specialSize. 
This change is inline with other places (such as\n> > BloomInitPage, brin_page_init GinInitPage, gistinitpage,\n> > _hash_pageinit and so on) where we just pass the size of special space\n> > data structure.\n> >\n> > I didn't see any regression test failure on my dev system with the\n> > attached patch.\n> >\n> > Thoughts?\n>\n> The changes look fine to me.\n\nThanks!\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 5 Apr 2021 09:41:12 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Mon, Mar 22, 2021 at 10:58:17AM +0530, Mahendra Singh Thalor wrote:\n> Your changes look to fine me and I am also not getting any failure. I\n> think we should back-patch all the branches.\n> \n> Patch is applying to all the branches(till v95) and there is no failure.\n\nEr, no. This is just some duplicated code with no extra effect. I\nhave no objection to simplify a bit the whole on readability and\nconsistency grounds (will do so tomorrow), including the removal of\nthe commented-out memset call in gistinitpage, but this is not\nsomething that should be backpatched.\n--\nMichael", "msg_date": "Tue, 6 Apr 2021 21:39:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Tue, Apr 6, 2021 at 6:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 22, 2021 at 10:58:17AM +0530, Mahendra Singh Thalor wrote:\n> > Your changes look to fine me and I am also not getting any failure. I\n> > think we should back-patch all the branches.\n> >\n> > Patch is applying to all the branches(till v95) and there is no failure.\n>\n> Er, no. 
This is just some duplicated code with no extra effect. I\n> have no objection to simplify a bit the whole on readability and\n> consistency grounds (will do so tomorrow), including the removal of\n> the commented-out memset call in gistinitpage, but this is not\n> something that should be backpatched.\n\n+1 to not backport this patch because it's not a bug or not even a\ncritical issue. Having said that removal of these unnecessary memsets\nwould not only be better for readability and consistency but also can\nreduce few extra function call costs(although minimal) while adding\nnew index pages.\n\nPlease find the v3 patch that removed the commented-out memset call in\ngistinitpage.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 6 Apr 2021 19:13:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Tue, 6 Apr 2021 at 19:14, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Apr 6, 2021 at 6:09 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Mar 22, 2021 at 10:58:17AM +0530, Mahendra Singh Thalor wrote:\n> > > Your changes look to fine me and I am also not getting any failure. I\n> > > think we should back-patch all the branches.\n> > >\n> > > Patch is applying to all the branches(till v95) and there is no failure.\n> >\n> > Er, no. This is just some duplicated code with no extra effect. I\n> > have no objection to simplify a bit the whole on readability and\n> > consistency grounds (will do so tomorrow), including the removal of\n> > the commented-out memset call in gistinitpage, but this is not\n> > something that should be backpatched.\n>\n> +1 to not backport this patch because it's not a bug or not even a\n> critical issue. 
Having said that removal of these unnecessary memsets\n> would not only be better for readability and consistency but also can\n> reduce few extra function call costs(although minimal) while adding\n> new index pages.\n>\n> Please find the v3 patch that removed the commented-out memset call in\n> gistinitpage.\n\nThanks Bharath for updated patch.\n\n+++ b/src/backend/storage/page/bufpage.c\n@@ -51,7 +51,7 @@ PageInit(Page page, Size pageSize, Size specialSize)\n /* Make sure all fields of page are zero, as well as unused space */\n MemSet(p, 0, pageSize);\n\n- p->pd_flags = 0;\n+ /* p->pd_flags = 0; done by above MemSet */\n\nI think, for readability we can keep old code here or we can remove\nnew added comment also.\n\nApart from this, all other changes looks good to me.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 00:07:08 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Wed, Apr 7, 2021 at 12:07 AM Mahendra Singh Thalor\n<mahi6run@gmail.com> wrote:\n> +++ b/src/backend/storage/page/bufpage.c\n> @@ -51,7 +51,7 @@ PageInit(Page page, Size pageSize, Size specialSize)\n> /* Make sure all fields of page are zero, as well as unused space */\n> MemSet(p, 0, pageSize);\n>\n> - p->pd_flags = 0;\n> + /* p->pd_flags = 0; done by above MemSet */\n>\n> I think, for readability we can keep old code here or we can remove\n> new added comment also.\n\nSetting p->pd_flags = 0; is unnecessary and redundant after memsetting\nthe page to zeros. Also, see the existing code for pd_prune_xid,\nsimilarly I've done that for pd_flags. 
I think it's okay with /*\np->pd_flags = 0; done by above MemSet */.\n\n> Apart from this, all other changes looks good to me.\n\nThanks for taking a look at the patch.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 06:31:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Wed, Apr 07, 2021 at 06:31:19AM +0530, Bharath Rupireddy wrote:\n> Setting p->pd_flags = 0; is unnecessary and redundant after memsetting\n> the page to zeros. Also, see the existing code for pd_prune_xid,\n> similarly I've done that for pd_flags. I think it's okay with /*\n> p->pd_flags = 0; done by above MemSet */.\n\nSure, but this one does not hurt much either as-is, so I have left it\nout, and applied the rest.\n--\nMichael", "msg_date": "Wed, 7 Apr 2021 15:13:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "On Wed, Apr 7, 2021 at 11:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 07, 2021 at 06:31:19AM +0530, Bharath Rupireddy wrote:\n> > Setting p->pd_flags = 0; is unnecessary and redundant after memsetting\n> > the page to zeros. Also, see the existing code for pd_prune_xid,\n> > similarly I've done that for pd_flags. 
I think it's okay with /*\n> > p->pd_flags = 0; done by above MemSet */.\n>\n> Sure, but this one does not hurt much either as-is, so I have left it\n> out, and applied the rest.\n\nThanks for pushing the patch.\n\nI wanted to comment out p->pd_flags = 0; in PageInit similar to the\npd_prune_xid just for consistency.\n /* p->pd_prune_xid = InvalidTransactionId; done by above MemSet */\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 7 Apr 2021 11:47:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" }, { "msg_contents": "ср, 7 апр. 2021 г. в 10:18, Bharath Rupireddy <\nbharath.rupireddyforpostgres@gmail.com>:\n\n> On Wed, Apr 7, 2021 at 11:44 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n> >\n> > On Wed, Apr 07, 2021 at 06:31:19AM +0530, Bharath Rupireddy wrote:\n> > > Setting p->pd_flags = 0; is unnecessary and redundant after memsetting\n> > > the page to zeros. Also, see the existing code for pd_prune_xid,\n> > > similarly I've done that for pd_flags. I think it's okay with /*\n> > > p->pd_flags = 0; done by above MemSet */.\n> >\n> > Sure, but this one does not hurt much either as-is, so I have left it\n> > out, and applied the rest.\n>\n> Thanks for pushing the patch.\n>\n> I wanted to comment out p->pd_flags = 0; in PageInit similar to the\n> pd_prune_xid just for consistency.\n> /* p->pd_prune_xid = InvalidTransactionId; done by above MemSet\n> */\n>\n> With Regards,\n> Bharath Rupireddy.\n> EnterpriseDB: http://www.enterprisedb.com\n>\n>\n>\nI've investigated the commit, and I think there is just one more thing that\ncan make the page init more even. 
I propose my very small patch on this in\nanother discussion branch:\nhttps://www.postgresql.org/message-id/CALT9ZEFFq2-n5Lmfg59L6Hm3ZrgCexyhR9eqme7v1jodtXGg1A@mail.gmail.com\n\nIf you want, feel free to discuss it and push if consider the change\nrelevant.\n\n-- \nBest regards,\nPavel Borisov\n\nPostgres Professional: http://postgrespro.com <http://www.postgrespro.com>\n\nср, 7 апр. 2021 г. в 10:18, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>:On Wed, Apr 7, 2021 at 11:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Apr 07, 2021 at 06:31:19AM +0530, Bharath Rupireddy wrote:\n> > Setting p->pd_flags = 0; is unnecessary and redundant after memsetting\n> > the page to zeros. Also, see the existing code for pd_prune_xid,\n> > similarly I've done that for pd_flags. I think it's okay with /*\n> > p->pd_flags = 0;        done by above MemSet */.\n>\n> Sure, but this one does not hurt much either as-is, so I have left it\n> out, and applied the rest.\n\nThanks for pushing the patch.\n\nI wanted to comment out p->pd_flags = 0; in PageInit similar to the\npd_prune_xid just for consistency.\n    /* p->pd_prune_xid = InvalidTransactionId;        done by above MemSet */\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\nI've investigated the commit, and I think there is just one more thing that can make the page init more even. I propose my very small patch on this in another discussion branch: https://www.postgresql.org/message-id/CALT9ZEFFq2-n5Lmfg59L6Hm3ZrgCexyhR9eqme7v1jodtXGg1A@mail.gmail.comIf you want, feel free to discuss it and push if consider the change relevant.-- Best regards,Pavel BorisovPostgres Professional: http://postgrespro.com", "msg_date": "Wed, 7 Apr 2021 16:08:51 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" 
}, { "msg_contents": "On Wed, Apr 7, 2021 at 11:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Apr 7, 2021 at 11:44 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Wed, Apr 07, 2021 at 06:31:19AM +0530, Bharath Rupireddy wrote:\n> > > Setting p->pd_flags = 0; is unnecessary and redundant after memsetting\n> > > the page to zeros. Also, see the existing code for pd_prune_xid,\n> > > similarly I've done that for pd_flags. I think it's okay with /*\n> > > p->pd_flags = 0; done by above MemSet */.\n> >\n> > Sure, but this one does not hurt much either as-is, so I have left it\n> > out, and applied the rest.\n>\n> Thanks for pushing the patch.\n>\n> I wanted to comment out p->pd_flags = 0; in PageInit similar to the\n> pd_prune_xid just for consistency.\n> /* p->pd_prune_xid = InvalidTransactionId; done by above MemSet */\n\nAs I said above, just for consistency, I would like to see if the\nattached one line patch can be taken, even though it doesn't have any\nimpact.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 8 Apr 2021 07:45:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" 
}, { "msg_contents": "On Thu, Apr 08, 2021 at 07:45:25AM +0530, Bharath Rupireddy wrote:\n> On Wed, Apr 7, 2021 at 11:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> I wanted to comment out p->pd_flags = 0; in PageInit similar to the\n>> pd_prune_xid just for consistency.\n>> /* p->pd_prune_xid = InvalidTransactionId; done by above MemSet */\n> \n> As I said above, just for consistency, I would like to see if the\n> attached one line patch can be taken, even though it doesn't have any\n> impact.\n\nFWIW, I tend to prefer the existing style to keep around this code\nrather than commenting it out, as one could think to remove it, but I\nthink that it can be important in terms of code comprehension when\nreading the area. So I quite like what 96ef3b8 has undone for\npd_flags, but not much what cc59049 did back in 2007. That's a matter\nof taste, really.\n--\nMichael", "msg_date": "Thu, 8 Apr 2021 16:52:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" 
}, { "msg_contents": "On Thu, Apr 8, 2021 at 1:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 08, 2021 at 07:45:25AM +0530, Bharath Rupireddy wrote:\n> > On Wed, Apr 7, 2021 at 11:47 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> I wanted to comment out p->pd_flags = 0; in PageInit similar to the\n> >> pd_prune_xid just for consistency.\n> >> /* p->pd_prune_xid = InvalidTransactionId; done by above MemSet */\n> >\n> > As I said above, just for consistency, I would like to see if the\n> > attached one line patch can be taken, even though it doesn't have any\n> > impact.\n>\n> FWIW, I tend to prefer the existing style to keep around this code\n> rather than commenting it out, as one could think to remove it, but I\n> think that it can be important in terms of code comprehension when\n> reading the area. So I quite like what 96ef3b8 has undone for\n> pd_flags, but not much what cc59049 did back in 2007. That's a matter\n> of taste, really.\n\nThanks! Since the main patch is committed I will go ahead and close\nthe CF entry.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 8 Apr 2021 13:30:58 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Can we remove extra memset in BloomInitPage, GinInitPage and\n SpGistInitPage when we have it in PageInit?" } ]
[ { "msg_contents": "Hello,\n\nWhile executing the below test case server crashed with Segfault 11 on\nmaster branch.\nI have enabled the CLOBBER_CACHE_ALWAYS in src/include/pg_config_manual.h\n\nIssue is only reproducing on master branch.\n\n*Test Case:*\nCREATE TABLE sm_5_323_table (col1 numeric);\nCREATE INDEX sm_5_323_idx ON sm_5_323_table(col1);\n\nCLUSTER sm_5_323_table USING sm_5_323_idx;\n\n\\! /PGClobber_build/postgresql/inst/bin/clusterdb -t sm_5_323_table -U edb\n-h localhost -p 5432 -d postgres\n\n*Test case output:*\nedb@edb:~/PGClobber_build/postgresql/inst/bin$ ./psql postgres\npsql (14devel)\nType \"help\" for help.\n\npostgres=# CREATE TABLE sm_5_323_table (col1 numeric);\nCREATE TABLE\npostgres=# CREATE INDEX sm_5_323_idx ON sm_5_323_table(col1);\nCREATE INDEX\npostgres=# CLUSTER sm_5_323_table USING sm_5_323_idx;\nCLUSTER\npostgres=# \\! /PGClobber_build/postgresql/inst/bin/clusterdb -t\nsm_5_323_table -U edb -h localhost -p 5432 -d postgres\nclusterdb: error: clustering of table \"sm_5_323_table\" in database\n\"postgres\" failed: server closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\n\n*Stack Trace:*\nCore was generated by `postgres: edb postgres 127.0.0.1(50978) CLUSTER\n '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x000055e5c85ea0b4 in mdopenfork (reln=0x0, forknum=MAIN_FORKNUM,\nbehavior=1) at md.c:485\n485 if (reln->md_num_open_segs[forknum] > 0)\n(gdb) bt\n#0 0x000055e5c85ea0b4 in mdopenfork (reln=0x0, forknum=MAIN_FORKNUM,\nbehavior=1) at md.c:485\n#1 0x000055e5c85eb2f0 in mdnblocks (reln=0x0, forknum=MAIN_FORKNUM) at\nmd.c:768\n#2 0x000055e5c85eb61b in mdimmedsync (reln=0x0,\nforknum=forknum@entry=MAIN_FORKNUM)\nat md.c:930\n#3 0x000055e5c85ec6e5 in smgrimmedsync (reln=<optimized out>,\nforknum=forknum@entry=MAIN_FORKNUM) at smgr.c:662\n#4 0x000055e5c81ae28b in end_heap_rewrite (state=state@entry=0x55e5ca5d1d70)\nat 
rewriteheap.c:342\n#5 0x000055e5c81a32ea in heapam_relation_copy_for_cluster\n(OldHeap=0x7f212ce41ba0, NewHeap=0x7f212ce41058, OldIndex=<optimized out>,\nuse_sort=<optimized out>, OldestXmin=<optimized out>,\n xid_cutoff=<optimized out>, multi_cutoff=0x7ffcba6ebe64,\nnum_tuples=0x7ffcba6ebe68, tups_vacuumed=0x7ffcba6ebe70,\ntups_recently_dead=0x7ffcba6ebe78) at heapam_handler.c:984\n#6 0x000055e5c82f218a in table_relation_copy_for_cluster\n(tups_recently_dead=0x7ffcba6ebe78, tups_vacuumed=0x7ffcba6ebe70,\nnum_tuples=0x7ffcba6ebe68, multi_cutoff=0x7ffcba6ebe64,\n xid_cutoff=0x7ffcba6ebe60, OldestXmin=<optimized out>,\nuse_sort=<optimized out>, OldIndex=0x7f212ce40670, NewTable=0x7f212ce41058,\nOldTable=0x7f212ce41ba0)\n at ../../../src/include/access/tableam.h:1656\n#7 copy_table_data (pCutoffMulti=<synthetic pointer>,\npFreezeXid=<synthetic pointer>, pSwapToastByContent=<synthetic pointer>,\nverbose=<optimized out>, OIDOldIndex=<optimized out>,\n OIDOldHeap=16384, OIDNewHeap=<optimized out>) at cluster.c:908\n#8 rebuild_relation (verbose=<optimized out>, indexOid=<optimized out>,\nOldHeap=<optimized out>) at cluster.c:604\n#9 cluster_rel (tableOid=<optimized out>, indexOid=<optimized out>,\nparams=<optimized out>) at cluster.c:427\n#10 0x000055e5c82f2b7f in cluster (pstate=pstate@entry=0x55e5ca5315c0,\nstmt=stmt@entry=0x55e5ca510368, isTopLevel=isTopLevel@entry=true) at\ncluster.c:195\n#11 0x000055e5c85fcbc6 in standard_ProcessUtility (pstmt=0x55e5ca510430,\nqueryString=0x55e5ca50f850 \"CLUSTER public.sm_5_323_table;\",\ncontext=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n queryEnv=0x0, dest=0x55e5ca510710, qc=0x7ffcba6ec340) at utility.c:822\n#12 0x000055e5c85fd436 in ProcessUtility (pstmt=pstmt@entry=0x55e5ca510430,\nqueryString=<optimized out>, context=context@entry=PROCESS_UTILITY_TOPLEVEL,\nparams=<optimized out>,\n queryEnv=<optimized out>, dest=dest@entry=0x55e5ca510710,\nqc=0x7ffcba6ec340) at utility.c:525\n#13 0x000055e5c85f6148 in PortalRunUtility 
(portal=portal@entry=0x55e5ca570d70,\npstmt=pstmt@entry=0x55e5ca510430, isTopLevel=isTopLevel@entry=true,\n setHoldSnapshot=setHoldSnapshot@entry=false,\ndest=dest@entry=0x55e5ca510710,\nqc=qc@entry=0x7ffcba6ec340) at pquery.c:1159\n#14 0x000055e5c85f71a4 in PortalRunMulti (portal=portal@entry=0x55e5ca570d70,\nisTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\n\n dest=dest@entry=0x55e5ca510710, altdest=altdest@entry=0x55e5ca510710,\nqc=qc@entry=0x7ffcba6ec340) at pquery.c:1305\n#15 0x000055e5c85f8823 in PortalRun (portal=portal@entry=0x55e5ca570d70,\ncount=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true,\nrun_once=run_once@entry=true,\n dest=dest@entry=0x55e5ca510710, altdest=altdest@entry=0x55e5ca510710,\nqc=0x7ffcba6ec340) at pquery.c:779\n#16 0x000055e5c85f389e in exec_simple_query (query_string=0x55e5ca50f850\n\"CLUSTER public.sm_5_323_table;\") at postgres.c:1185\n#17 0x000055e5c85f51cf in PostgresMain (argc=argc@entry=1,\nargv=argv@entry=0x7ffcba6ec670,\ndbname=<optimized out>, username=<optimized out>) at postgres.c:4415\n#18 0x000055e5c8522240 in BackendRun (port=<optimized out>, port=<optimized\nout>) at postmaster.c:4470\n#19 BackendStartup (port=<optimized out>) at postmaster.c:4192\n#20 ServerLoop () at postmaster.c:1737\n#21 0x000055e5c85237ec in PostmasterMain (argc=<optimized out>,\nargv=0x55e5ca508fe0) at postmaster.c:1409\n#22 0x000055e5c811a2cf in main (argc=3, argv=0x55e5ca508fe0) at main.c:209\n\nThanks.\n--\nRegards,\nNeha Sharma", "msg_date": "Mon, 22 Mar 2021 11:53:13 +0530", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "In heapam_relation_copy_for_cluster(), begin_heap_rewrite() sets\nrwstate->rs_new_rel->rd_smgr correctly but next line tuplesort_begin_cluster()\nget called which cause the system cache invalidation and due to CCA setting,\nwipe out rwstate->rs_new_rel->rd_smgr which wasn't restored for the subsequent\noperations and causes segmentation fault.\n\nBy calling RelationOpenSmgr() before calling smgrimmedsync() in\nend_heap_rewrite() would fix the failure. 
Did the same in the attached patch.\n\nRegards,\nAmul\n\n\n\nOn Mon, Mar 22, 2021 at 11:53 AM Neha Sharma\n<neha.sharma@enterprisedb.com> wrote:\n>\n> Hello,\n>\n> While executing the below test case server crashed with Segfault 11 on master branch.\n> I have enabled the CLOBBER_CACHE_ALWAYS in src/include/pg_config_manual.h\n>\n> Issue is only reproducing on master branch.\n>\n> Test Case:\n> CREATE TABLE sm_5_323_table (col1 numeric);\n> CREATE INDEX sm_5_323_idx ON sm_5_323_table(col1);\n>\n> CLUSTER sm_5_323_table USING sm_5_323_idx;\n>\n> \\! /PGClobber_build/postgresql/inst/bin/clusterdb -t sm_5_323_table -U edb -h localhost -p 5432 -d postgres\n>\n> Test case output:\n> edb@edb:~/PGClobber_build/postgresql/inst/bin$ ./psql postgres\n> psql (14devel)\n> Type \"help\" for help.\n>\n> postgres=# CREATE TABLE sm_5_323_table (col1 numeric);\n> CREATE TABLE\n> postgres=# CREATE INDEX sm_5_323_idx ON sm_5_323_table(col1);\n> CREATE INDEX\n> postgres=# CLUSTER sm_5_323_table USING sm_5_323_idx;\n> CLUSTER\n> postgres=# \\! 
/PGClobber_build/postgresql/inst/bin/clusterdb -t sm_5_323_table -U edb -h localhost -p 5432 -d postgres\n> clusterdb: error: clustering of table \"sm_5_323_table\" in database \"postgres\" failed: server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n>\n> Stack Trace:\n> Core was generated by `postgres: edb postgres 127.0.0.1(50978) CLUSTER '.\n> Program terminated with signal SIGSEGV, Segmentation fault.\n> #0 0x000055e5c85ea0b4 in mdopenfork (reln=0x0, forknum=MAIN_FORKNUM, behavior=1) at md.c:485\n> 485 if (reln->md_num_open_segs[forknum] > 0)\n> (gdb) bt\n> #0 0x000055e5c85ea0b4 in mdopenfork (reln=0x0, forknum=MAIN_FORKNUM, behavior=1) at md.c:485\n> #1 0x000055e5c85eb2f0 in mdnblocks (reln=0x0, forknum=MAIN_FORKNUM) at md.c:768\n> #2 0x000055e5c85eb61b in mdimmedsync (reln=0x0, forknum=forknum@entry=MAIN_FORKNUM) at md.c:930\n> #3 0x000055e5c85ec6e5 in smgrimmedsync (reln=<optimized out>, forknum=forknum@entry=MAIN_FORKNUM) at smgr.c:662\n> #4 0x000055e5c81ae28b in end_heap_rewrite (state=state@entry=0x55e5ca5d1d70) at rewriteheap.c:342\n> #5 0x000055e5c81a32ea in heapam_relation_copy_for_cluster (OldHeap=0x7f212ce41ba0, NewHeap=0x7f212ce41058, OldIndex=<optimized out>, use_sort=<optimized out>, OldestXmin=<optimized out>,\n> xid_cutoff=<optimized out>, multi_cutoff=0x7ffcba6ebe64, num_tuples=0x7ffcba6ebe68, tups_vacuumed=0x7ffcba6ebe70, tups_recently_dead=0x7ffcba6ebe78) at heapam_handler.c:984\n> #6 0x000055e5c82f218a in table_relation_copy_for_cluster (tups_recently_dead=0x7ffcba6ebe78, tups_vacuumed=0x7ffcba6ebe70, num_tuples=0x7ffcba6ebe68, multi_cutoff=0x7ffcba6ebe64,\n> xid_cutoff=0x7ffcba6ebe60, OldestXmin=<optimized out>, use_sort=<optimized out>, OldIndex=0x7f212ce40670, NewTable=0x7f212ce41058, OldTable=0x7f212ce41ba0)\n> at ../../../src/include/access/tableam.h:1656\n> #7 copy_table_data (pCutoffMulti=<synthetic pointer>, pFreezeXid=<synthetic pointer>, 
pSwapToastByContent=<synthetic pointer>, verbose=<optimized out>, OIDOldIndex=<optimized out>,\n> OIDOldHeap=16384, OIDNewHeap=<optimized out>) at cluster.c:908\n> #8 rebuild_relation (verbose=<optimized out>, indexOid=<optimized out>, OldHeap=<optimized out>) at cluster.c:604\n> #9 cluster_rel (tableOid=<optimized out>, indexOid=<optimized out>, params=<optimized out>) at cluster.c:427\n> #10 0x000055e5c82f2b7f in cluster (pstate=pstate@entry=0x55e5ca5315c0, stmt=stmt@entry=0x55e5ca510368, isTopLevel=isTopLevel@entry=true) at cluster.c:195\n> #11 0x000055e5c85fcbc6 in standard_ProcessUtility (pstmt=0x55e5ca510430, queryString=0x55e5ca50f850 \"CLUSTER public.sm_5_323_table;\", context=PROCESS_UTILITY_TOPLEVEL, params=0x0,\n> queryEnv=0x0, dest=0x55e5ca510710, qc=0x7ffcba6ec340) at utility.c:822\n> #12 0x000055e5c85fd436 in ProcessUtility (pstmt=pstmt@entry=0x55e5ca510430, queryString=<optimized out>, context=context@entry=PROCESS_UTILITY_TOPLEVEL, params=<optimized out>,\n> queryEnv=<optimized out>, dest=dest@entry=0x55e5ca510710, qc=0x7ffcba6ec340) at utility.c:525\n> #13 0x000055e5c85f6148 in PortalRunUtility (portal=portal@entry=0x55e5ca570d70, pstmt=pstmt@entry=0x55e5ca510430, isTopLevel=isTopLevel@entry=true,\n> setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x55e5ca510710, qc=qc@entry=0x7ffcba6ec340) at pquery.c:1159\n> #14 0x000055e5c85f71a4 in PortalRunMulti (portal=portal@entry=0x55e5ca570d70, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false,\n> dest=dest@entry=0x55e5ca510710, altdest=altdest@entry=0x55e5ca510710, qc=qc@entry=0x7ffcba6ec340) at pquery.c:1305\n> #15 0x000055e5c85f8823 in PortalRun (portal=portal@entry=0x55e5ca570d70, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true,\n> dest=dest@entry=0x55e5ca510710, altdest=altdest@entry=0x55e5ca510710, qc=0x7ffcba6ec340) at pquery.c:779\n> #16 0x000055e5c85f389e in exec_simple_query (query_string=0x55e5ca50f850 
\"CLUSTER public.sm_5_323_table;\") at postgres.c:1185\n> #17 0x000055e5c85f51cf in PostgresMain (argc=argc@entry=1, argv=argv@entry=0x7ffcba6ec670, dbname=<optimized out>, username=<optimized out>) at postgres.c:4415\n> #18 0x000055e5c8522240 in BackendRun (port=<optimized out>, port=<optimized out>) at postmaster.c:4470\n> #19 BackendStartup (port=<optimized out>) at postmaster.c:4192\n> #20 ServerLoop () at postmaster.c:1737\n> #21 0x000055e5c85237ec in PostmasterMain (argc=<optimized out>, argv=0x55e5ca508fe0) at postmaster.c:1409\n> #22 0x000055e5c811a2cf in main (argc=3, argv=0x55e5ca508fe0) at main.c:209\n>\n> Thanks.\n> --\n> Regards,\n> Neha Sharma", "msg_date": "Mon, 22 Mar 2021 13:55:23 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Mar 22, 2021 at 5:26 PM Amul Sul <sulamul@gmail.com> wrote:\n> In heapam_relation_copy_for_cluster(), begin_heap_rewrite() sets\n> rwstate->rs_new_rel->rd_smgr correctly but next line tuplesort_begin_cluster()\n> get called which cause the system cache invalidation and due to CCA setting,\n> wipe out rwstate->rs_new_rel->rd_smgr which wasn't restored for the subsequent\n> operations and causes segmentation fault.\n>\n> By calling RelationOpenSmgr() before calling smgrimmedsync() in\n> end_heap_rewrite() would fix the failure. Did the same in the attached patch.\n\nThat makes sense. I see a few commits in the git history adding\nRelationOpenSmgr() before a smgr* operation, whenever such a problem\nwould have been discovered: 4942ee656ac, afa8f1971ae, bf347c60bdd7,\nfor example.\n\nI do wonder if there are still other smgr* operations in the source\ncode that are preceded by operations that would invalidate the\nSMgrRelation that those smgr* operations would be called with. 
For\nexample, the smgrnblocks() in gistBuildCallback() may get done too\nlate than a corresponding RelationOpenSmgr() on the index relation.\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 22 Mar 2021 18:32:52 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Mar 22, 2021 at 3:03 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 5:26 PM Amul Sul <sulamul@gmail.com> wrote:\n> > In heapam_relation_copy_for_cluster(), begin_heap_rewrite() sets\n> > rwstate->rs_new_rel->rd_smgr correctly but next line tuplesort_begin_cluster()\n> > get called which cause the system cache invalidation and due to CCA setting,\n> > wipe out rwstate->rs_new_rel->rd_smgr which wasn't restored for the subsequent\n> > operations and causes segmentation fault.\n> >\n> > By calling RelationOpenSmgr() before calling smgrimmedsync() in\n> > end_heap_rewrite() would fix the failure. Did the same in the attached patch.\n>\n> That makes sense. I see a few commits in the git history adding\n> RelationOpenSmgr() before a smgr* operation, whenever such a problem\n> would have been discovered: 4942ee656ac, afa8f1971ae, bf347c60bdd7,\n> for example.\n>\n\nThanks for the confirmation.\n\n> I do wonder if there are still other smgr* operations in the source\n> code that are preceded by operations that would invalidate the\n> SMgrRelation that those smgr* operations would be called with. 
For\n> example, the smgrnblocks() in gistBuildCallback() may get done too\n> late than a corresponding RelationOpenSmgr() on the index relation.\n>\n\nI did the check for gistBuildCallback() by adding Assert(index->rd_smgr) before\nsmgrnblocks() with CCA setting and didn't see any problem there.\n\nI think the easiest way to find that is to run a regression suite with CCA\nbuild, perhaps, there is no guarantee that regression will hit all smgr*\noperations, but that might hit most of them.\n\nRegards,\nAmul\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:08:05 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Mar 23, 2021 at 10:08 AM Amul Sul <sulamul@gmail.com> wrote:\n\n> On Mon, Mar 22, 2021 at 3:03 PM Amit Langote <amitlangote09@gmail.com>\n> wrote:\n> >\n> > On Mon, Mar 22, 2021 at 5:26 PM Amul Sul <sulamul@gmail.com> wrote:\n> > > In heapam_relation_copy_for_cluster(), begin_heap_rewrite() sets\n> > > rwstate->rs_new_rel->rd_smgr correctly but next line\n> tuplesort_begin_cluster()\n> > > get called which cause the system cache invalidation and due to CCA\n> setting,\n> > > wipe out rwstate->rs_new_rel->rd_smgr which wasn't restored for the\n> subsequent\n> > > operations and causes segmentation fault.\n> > >\n> > > By calling RelationOpenSmgr() before calling smgrimmedsync() in\n> > > end_heap_rewrite() would fix the failure. Did the same in the attached\n> patch.\n> >\n> > That makes sense. 
I see a few commits in the git history adding\n> > RelationOpenSmgr() before a smgr* operation, whenever such a problem\n> > would have been discovered: 4942ee656ac, afa8f1971ae, bf347c60bdd7,\n> > for example.\n> >\n>\n> Thanks for the confirmation.\n>\n> > I do wonder if there are still other smgr* operations in the source\n> > code that are preceded by operations that would invalidate the\n> > SMgrRelation that those smgr* operations would be called with. For\n> > example, the smgrnblocks() in gistBuildCallback() may get done too\n> > late than a corresponding RelationOpenSmgr() on the index relation.\n> >\n>\n> I did the check for gistBuildCallback() by adding Assert(index->rd_smgr)\n> before\n> smgrnblocks() with CCA setting and didn't see any problem there.\n>\n> I think the easiest way to find that is to run a regression suite with CCA\n> build, perhaps, there is no guarantee that regression will hit all smgr*\n> operations, but that might hit most of them.\n\nSure, will give a regression run with CCA enabled.\n\n>\n> Regards,\n> Amul\n>\n\nRegards,\nNeha Sharma", "msg_date": "Tue, 23 Mar 2021 10:52:09 +0530", "msg_from": "Neha Sharma <neha.sharma@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Mar 23, 2021 at 10:52:09AM +0530, Neha Sharma wrote:\n> Sure, will give a regression run with CCA enabled.\n\nI can confirm the regression between 13 and HEAD, so I have added an\nopen item. It would be good to figure out the root issue here, and I\nam ready to bet that the problem is deeper than it looks and that more\ncode paths could be involved.\n\nIt takes some time to initialize a cluster under CLOBBER_CACHE_ALWAYS,\nbut the test is quick enough to reproduce. 
It would be good to bisect\nthe origin point here as a first step.\n--\nMichael", "msg_date": "Tue, 23 Mar 2021 16:12:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Mar 23, 2021 at 04:12:01PM +0900, Michael Paquier wrote:\n> It takes some time to initialize a cluster under CLOBBER_CACHE_ALWAYS,\n> but the test is quick enough to reproduce. It would be good to bisect\n> the origin point here as a first step.\n\nOne bisect later, the winner is:\ncommit: 3d351d916b20534f973eda760cde17d96545d4c4\nauthor: Tom Lane <tgl@sss.pgh.pa.us>\ndate: Sun, 30 Aug 2020 12:21:51 -0400\nRedefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n\nI am too tired to poke at that today, so I'll try tomorrow. Tom may\nbeat me at that though.\n--\nMichael", "msg_date": "Tue, 23 Mar 2021 19:09:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Tue, Mar 23, 2021 at 04:12:01PM +0900, Michael Paquier wrote:\n>> It takes some time to initialize a cluster under CLOBBER_CACHE_ALWAYS,\n>> but the test is quick enough to reproduce. It would be good to bisect\n>> the origin point here as a first step.\n\n> One bisect later, the winner is:\n> commit: 3d351d916b20534f973eda760cde17d96545d4c4\n> author: Tom Lane <tgl@sss.pgh.pa.us>\n> date: Sun, 30 Aug 2020 12:21:51 -0400\n> Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n\n> I am too tired to poke at that today, so I'll try tomorrow. Tom may\n> beat me at that though.\n\nI think that's an artifact. That commit didn't touch anything related to\nrelation opening or closing. 
What it could have done, though, is change\nCLUSTER's behavior on this empty table from use-an-index to use-a-seqscan,\nthus causing us to follow the buggy code path where before we didn't.\n\nThe interesting question here seems to be \"why didn't the existing\nCLOBBER_CACHE_ALWAYS buildfarm testing catch this?\". It looks to me like\nthe answer is that it only happens for an empty table (or at least one\nwhere the data pattern is such that we skip the RelationOpenSmgr call\nearlier in end_heap_rewrite) and we don't happen to be exercising that\nexact scenario in the regression tests.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:44:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "I wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> One bisect later, the winner is:\n>> commit: 3d351d916b20534f973eda760cde17d96545d4c4\n>> author: Tom Lane <tgl@sss.pgh.pa.us>\n>> date: Sun, 30 Aug 2020 12:21:51 -0400\n>> Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n\n> I think that's an artifact. That commit didn't touch anything related to\n> relation opening or closing. 
What it could have done, though, is change\n> CLUSTER's behavior on this empty table from use-an-index to use-a-seqscan,\n> thus causing us to follow the buggy code path where before we didn't.\n\nOn closer inspection, I believe the true culprit is c6b92041d,\nwhich did this:\n\n \t */\n \tif (RelationNeedsWAL(state->rs_new_rel))\n-\t\theap_sync(state->rs_new_rel);\n+\t\tsmgrimmedsync(state->rs_new_rel->rd_smgr, MAIN_FORKNUM);\n \n \tlogical_end_heap_rewrite(state);\n\nheap_sync was careful about opening rd_smgr, the new code not so much.\n\nI read the rest of that commit and didn't see any other equivalent\nbugs, but I might've missed something.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 11:29:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Mar 23, 2021 at 8:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Michael Paquier <michael@paquier.xyz> writes:\n> >> One bisect later, the winner is:\n> >> commit: 3d351d916b20534f973eda760cde17d96545d4c4\n> >> author: Tom Lane <tgl@sss.pgh.pa.us>\n> >> date: Sun, 30 Aug 2020 12:21:51 -0400\n> >> Redefine pg_class.reltuples to be -1 before the first VACUUM or ANALYZE.\n>\n> > I think that's an artifact. That commit didn't touch anything related to\n> > relation opening or closing. 
What it could have done, though, is change\n> > CLUSTER's behavior on this empty table from use-an-index to use-a-seqscan,\n> > thus causing us to follow the buggy code path where before we didn't.\n>\n> On closer inspection, I believe the true culprit is c6b92041d,\n> which did this:\n>\n> */\n> if (RelationNeedsWAL(state->rs_new_rel))\n> - heap_sync(state->rs_new_rel);\n> + smgrimmedsync(state->rs_new_rel->rd_smgr, MAIN_FORKNUM);\n>\n> logical_end_heap_rewrite(state);\n>\n> heap_sync was careful about opening rd_smgr, the new code not so much.\n>\n> I read the rest of that commit and didn't see any other equivalent\n> bugs, but I might've missed something.\n>\n\nI too didn't find any other place replacing heap_sync() or equivalent place from\nthis commit where smgr* operation reaches without necessary precautions call.\nheap_sync() was calling RelationOpenSmgr() through FlushRelationBuffers() before\nit reached smgrimmedsync(). So we also need to make sure of the\nRelationOpenSmgr() call before smgrimmedsync() as proposed previously.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 24 Mar 2021 11:10:08 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> On Tue, Mar 23, 2021 at 8:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> On closer inspection, I believe the true culprit is c6b92041d,\n>> which did this:\n>> - heap_sync(state->rs_new_rel);\n>> + smgrimmedsync(state->rs_new_rel->rd_smgr, MAIN_FORKNUM);\n>> heap_sync was careful about opening rd_smgr, the new code not so much.\n\n> So we also need to make sure of the\n> RelationOpenSmgr() call before smgrimmedsync() as proposed previously.\n\nI wonder if we should try to get rid of this sort of bug by banning\ndirect references to rd_smgr? 
That is, write the above and all\nsimilar code like\n\n\tsmgrimmedsync(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM);\n\nwhere we provide something like\n\nstatic inline struct SMgrRelationData *\nRelationGetSmgr(Relation rel)\n{\n\tif (unlikely(rel->rd_smgr == NULL))\n\t\tRelationOpenSmgr(rel);\n\treturn rel->rd_smgr;\n}\n\nand then we could get rid of most or all other RelationOpenSmgr calls.\n\nThis might create more code bloat than it's really worth, but\nit'd be a simple and mechanically-checkable scheme.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 24 Mar 2021 10:39:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Wed, Mar 24, 2021 at 8:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amul Sul <sulamul@gmail.com> writes:\n> > On Tue, Mar 23, 2021 at 8:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> On closer inspection, I believe the true culprit is c6b92041d,\n> >> which did this:\n> >> - heap_sync(state->rs_new_rel);\n> >> + smgrimmedsync(state->rs_new_rel->rd_smgr, MAIN_FORKNUM);\n> >> heap_sync was careful about opening rd_smgr, the new code not so much.\n>\n> > So we also need to make sure of the\n> > RelationOpenSmgr() call before smgrimmedsync() as proposed previously.\n>\n> I wonder if we should try to get rid of this sort of bug by banning\n> direct references to rd_smgr? 
That is, write the above and all\n> similar code like\n>\n> smgrimmedsync(RelationGetSmgr(state->rs_new_rel), MAIN_FORKNUM);\n>\n> where we provide something like\n>\n> static inline struct SMgrRelationData *\n> RelationGetSmgr(Relation rel)\n> {\n> if (unlikely(rel->rd_smgr == NULL))\n> RelationOpenSmgr(rel);\n> return rel->rd_smgr;\n> }\n>\n> and then we could get rid of most or all other RelationOpenSmgr calls.\n>\n\n+1\n\n> This might create more code bloat than it's really worth, but\n> it'd be a simple and mechanically-checkable scheme.\n\nI think that will be fine, one-time pain. If you want I will do those changes.\n\nA quick question: Can't it be a macro instead of an inline function\nlike other macros we have in rel.h?\n\nRegards,\nAmul\n\n\n", "msg_date": "Thu, 25 Mar 2021 09:19:24 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> On Wed, Mar 24, 2021 at 8:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> static inline struct SMgrRelationData *\n>> RelationGetSmgr(Relation rel)\n>> {\n>> if (unlikely(rel->rd_smgr == NULL))\n>> RelationOpenSmgr(rel);\n>> return rel->rd_smgr;\n>> }\n\n> A quick question: Can't it be a macro instead of an inline function\n> like other macros we have in rel.h?\n\nThe multiple-evaluation hazard seems like an issue. 
We've tolerated\nsuch hazards in the past, but mostly just because we weren't relying\non static inlines being available, so there wasn't a good way around\nit.\n\nAlso, the conditional evaluation here would look rather ugly\nin a macro, I think, if indeed you could do it at all without\nprovoking compiler warnings.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 25 Mar 2021 01:50:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Sorry for the bug.\n\nAt Thu, 25 Mar 2021 01:50:29 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Amul Sul <sulamul@gmail.com> writes:\n> > On Wed, Mar 24, 2021 at 8:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> static inline struct SMgrRelationData *\n> >> RelationGetSmgr(Relation rel)\n> >> {\n> >> if (unlikely(rel->rd_smgr == NULL))\n> >> RelationOpenSmgr(rel);\n> >> return rel->rd_smgr;\n> >> }\n> \n> > A quick question: Can't it be a macro instead of an inline function\n> > like other macros we have in rel.h?\n> \n> The multiple-evaluation hazard seems like an issue. 
We've tolerated\n> such hazards in the past, but mostly just because we weren't relying\n> on static inlines being available, so there wasn't a good way around\n> it.\n> \n> Also, the conditional evaluation here would look rather ugly\n> in a macro, I think, if indeed you could do it at all without\n> provoking compiler warnings.\n\nFWIW, +1 for the function as is.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 25 Mar 2021 15:40:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Thu, Mar 25, 2021 at 12:10 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> Sorry for the bug.\n>\n> At Thu, 25 Mar 2021 01:50:29 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Amul Sul <sulamul@gmail.com> writes:\n> > > On Wed, Mar 24, 2021 at 8:09 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> static inline struct SMgrRelationData *\n> > >> RelationGetSmgr(Relation rel)\n> > >> {\n> > >> if (unlikely(rel->rd_smgr == NULL))\n> > >> RelationOpenSmgr(rel);\n> > >> return rel->rd_smgr;\n> > >> }\n> >\n> > > A quick question: Can't it be a macro instead of an inline function\n> > > like other macros we have in rel.h?\n> >\n> > The multiple-evaluation hazard seems like an issue. We've tolerated\n> > such hazards in the past, but mostly just because we weren't relying\n> > on static inlines being available, so there wasn't a good way around\n> > it.\n> >\n> > Also, the conditional evaluation here would look rather ugly\n> > in a macro, I think, if indeed you could do it at all without\n> > provoking compiler warnings.\n>\n> FWIW, +1 for the function as is.\n>\n\nOk, in the attached patch, I have added the inline function to rel.h, and for\nthat, I end up including smgr.h to rel.h. 
I tried to replace all rel->rd_smgr\nby RelationGetSmgr() function and removed the RelationOpenSmgr() call from\nthe nearby to it which I don't think needed at all.\n\nRegards,\nAmul", "msg_date": "Thu, 25 Mar 2021 16:18:45 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On 2021-Mar-25, Amul Sul wrote:\n\n> Ok, in the attached patch, I have added the inline function to rel.h, and for\n> that, I end up including smgr.h to rel.h. I tried to replace all rel->rd_smgr\n> by RelationGetSmgr() function and removed the RelationOpenSmgr() call from\n> the nearby to it which I don't think needed at all.\n\nWe forgot this patch earlier in the commitfest. Do people think we\nshould still get it in on this cycle? I'm +1 on that, since it's a\nsafety feature poised to prevent more bugs than it's likely to\nintroduce.\n\n-- \nÁlvaro Herrera Valdivia, Chile", "msg_date": "Fri, 9 Apr 2021 18:45:45 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Fri, Apr 09, 2021 at 06:45:45PM -0400, Alvaro Herrera wrote:\n> We forgot this patch earlier in the commitfest. Do people think we\n> should still get it in on this cycle? I'm +1 on that, since it's a\n> safety feature poised to prevent more bugs than it's likely to\n> introduce.\n\nNo objections from here to do that now even after feature freeze. I\nalso wonder, while looking at that, why you don't just remove the last\ncall within src/backend/catalog/heap.c. 
This way, nobody is tempted\nto use RelationOpenSmgr() anymore, and it could just be removed from\nrel.h.\n--\nMichael", "msg_date": "Mon, 19 Apr 2021 15:55:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 12:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Apr 09, 2021 at 06:45:45PM -0400, Alvaro Herrera wrote:\n> > We forgot this patch earlier in the commitfest. Do people think we\n> > should still get it in on this cycle? I'm +1 on that, since it's a\n> > safety feature poised to prevent more bugs than it's likely to\n> > introduce.\n>\n> No objections from here to do that now even after feature freeze. I\n> also wonder, while looking at that, why you don't just remove the last\n> call within src/backend/catalog/heap.c. This way, nobody is tempted\n> to use RelationOpenSmgr() anymore, and it could just be removed from\n> rel.h.\n\nAgree, did the same in the attached version, thanks.\n\nRegards,\nAmul\n\nP.S. commitfest entry https://commitfest.postgresql.org/33/3084/", "msg_date": "Mon, 19 Apr 2021 12:56:18 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "At Mon, 19 Apr 2021 12:56:18 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> On Mon, Apr 19, 2021 at 12:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Fri, Apr 09, 2021 at 06:45:45PM -0400, Alvaro Herrera wrote:\n> > > We forgot this patch earlier in the commitfest. Do people think we\n> > > should still get it in on this cycle? I'm +1 on that, since it's a\n> > > safety feature poised to prevent more bugs than it's likely to\n> > > introduce.\n> >\n> > No objections from here to do that now even after feature freeze. 
I\n> > also wonder, while looking at that, why you don't just remove the last\n> > call within src/backend/catalog/heap.c. This way, nobody is tempted\n> > to use RelationOpenSmgr() anymore, and it could just be removed from\n> > rel.h.\n> \n> Agree, did the same in the attached version, thanks.\n\n+\tsmgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n \t\t\t (char *) metapage, true);\n-\tlog_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,\n+\tlog_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n\nAt the log_newpage, index is guaranteed to have rd_smgr. So I prefer\nto leave the line alone.. I don't mind other sccessive calls if any\nsince what I don't like is the notation there.\n\n> P.S. commitfest entry https://commitfest.postgresql.org/33/3084/\n\nIsn't this a kind of open item?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 19 Apr 2021 17:35:52 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 2:05 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 19 Apr 2021 12:56:18 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > On Mon, Apr 19, 2021 at 12:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Fri, Apr 09, 2021 at 06:45:45PM -0400, Alvaro Herrera wrote:\n> > > > We forgot this patch earlier in the commitfest. Do people think we\n> > > > should still get it in on this cycle? I'm +1 on that, since it's a\n> > > > safety feature poised to prevent more bugs than it's likely to\n> > > > introduce.\n> > >\n> > > No objections from here to do that now even after feature freeze. I\n> > > also wonder, while looking at that, why you don't just remove the last\n> > > call within src/backend/catalog/heap.c. 
This way, nobody is tempted\n> > > to use RelationOpenSmgr() anymore, and it could just be removed from\n> > > rel.h.\n> >\n> > Agree, did the same in the attached version, thanks.\n>\n> + smgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n> (char *) metapage, true);\n> - log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,\n> + log_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n>\n> At the log_newpage, index is guaranteed to have rd_smgr. So I prefer\n> to leave the line alone.. I don't mind other sccessive calls if any\n> since what I don't like is the notation there.\n>\n\nPerhaps, isn't that bad. It is good to follow the practice of using\nRelationGetSmgr() for rd_smgr access, IMHO.\n\n> > P.S. commitfest entry https://commitfest.postgresql.org/33/3084/\n>\n> Isn't this a kind of open item?\n>\n\nSorry, I didn't get you. Do I need to move this to some other bucket?\n\nRegards,\nAmul\n\n\n", "msg_date": "Mon, 19 Apr 2021 16:27:25 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 04:27:25PM +0530, Amul Sul wrote:\n> On Mon, Apr 19, 2021 at 2:05 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Mon, 19 Apr 2021 12:56:18 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > > On Mon, Apr 19, 2021 at 12:25 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >\n> > > > On Fri, Apr 09, 2021 at 06:45:45PM -0400, Alvaro Herrera wrote:\n> > > > > We forgot this patch earlier in the commitfest. Do people think we\n> > > > > should still get it in on this cycle? I'm +1 on that, since it's a\n> > > > > safety feature poised to prevent more bugs than it's likely to\n> > > > > introduce.\n> > > >\n> > > > No objections from here to do that now even after feature freeze. 
I\n> > > > also wonder, while looking at that, why you don't just remove the last\n> > > > call within src/backend/catalog/heap.c. This way, nobody is tempted\n> > > > to use RelationOpenSmgr() anymore, and it could just be removed from\n> > > > rel.h.\n> > >\n> > > Agree, did the same in the attached version, thanks.\n> >\n> > + smgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n> > (char *) metapage, true);\n> > - log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,\n> > + log_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n> >\n> > At the log_newpage, index is guaranteed to have rd_smgr. So I prefer\n> > to leave the line alone.. I don't mind other sccessive calls if any\n> > since what I don't like is the notation there.\n> >\n> \n> Perhaps, isn't that bad. It is good to follow the practice of using\n> RelationGetSmgr() for rd_smgr access, IMHO.\n> \n> > > P.S. commitfest entry https://commitfest.postgresql.org/33/3084/\n> >\n> > Isn't this a kind of open item?\n> >\n> \n> Sorry, I didn't get you. 
Do I need to move this to some other bucket?\n\nIt's not a new feature, and shouldn't wait for July's CF since it's targeting\nv14.\n\nThe original crash was fixed by Tom by commit 9d523119f.\n\nSo it's not exactly an \"open item\" for v14, but there's probably no better\nplace for it, so you could add it if you think it's at risk of being forgotten\n(again).\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_14_Open_Items\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 19 Apr 2021 06:55:04 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Mon, Apr 19, 2021 at 05:35:52PM +0900, Kyotaro Horiguchi wrote:\n> Isn't this a kind of open item?\n\nThis does not qualify as an open item because it is not an actual bug\nIMO, neither is it a defect of the existing code, so it seems\nappropriate to me to not list it.\n--\nMichael", "msg_date": "Mon, 19 Apr 2021 21:02:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Mon, Apr 19, 2021 at 05:35:52PM +0900, Kyotaro Horiguchi wrote:\n>> Isn't this a kind of open item?\n\n> This does not qualify as an open item because it is not an actual bug\n> IMO, neither is it a defect of the existing code, so it seems\n> appropriate to me to not list it.\n\nAgreed, but by the same token, rushing it into v14 doesn't have any\nclear benefit.
I'd be inclined to leave it for v15 at this point,\nespecially since we don't seem to have 100% consensus on the details.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:19:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "At Mon, 19 Apr 2021 09:19:36 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Michael Paquier <michael@paquier.xyz> writes:\n> > On Mon, Apr 19, 2021 at 05:35:52PM +0900, Kyotaro Horiguchi wrote:\n> >> Isn't this a kind of open item?\n> \n> > This does not qualify as an open item because it is not an actual bug\n> > IMO, neither is it a defect of the existing code, so it seems\n> > appropriate to me to not list it.\n> \n> Agreed, but by the same token, rushing it into v14 doesn't have any\n> clear benefit. I'd be inclined to leave it for v15 at this point,\n> especially since we don't seem to have 100% consensus on the details.\n\nThanks. Seems reasonable.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 20 Apr 2021 10:19:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "At Mon, 19 Apr 2021 16:27:25 +0530, Amul Sul <sulamul@gmail.com> wrote in \n> On Mon, Apr 19, 2021 at 2:05 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > + smgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n> > (char *) metapage, true);\n> > - log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,\n> > + log_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n> >\n> > At the log_newpage, index is guaranteed to have rd_smgr. So I prefer\n> > to leave the line alone.. 
I don't mind other successive calls if any\n> > since what I don't like is the notation there.\n> >\n> \n> Perhaps, isn't that bad. It is good to follow the practice of using\n> RelationGetSmgr() for rd_smgr access, IMHO.\n\nI don't mind RelationGetSmgr(index)->smgr_rnode alone or\n&variable->member alone and there's not the previous call to\nRelationGetSmgr just above. How about using a temporary variable?\n\n SMgrRelation srel = RelationGetSmgr(index);\n smgrwrite(srel, ...);\n log_newpage(srel->..);\n\n\n> > > P.S. commitfest entry https://commitfest.postgresql.org/33/3084/\n> >\n> > Isn't this a kind of open item?\n> >\n> \n> Sorry, I didn't get you. Do I need to move this to some other bucket?\n\nAs discussed in the other branch, I agree that it is registered to the\nnext CF, not registered as an open item of this cycle.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 20 Apr 2021 10:29:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Apr 20, 2021 at 6:59 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 19 Apr 2021 16:27:25 +0530, Amul Sul <sulamul@gmail.com> wrote in\n> > On Mon, Apr 19, 2021 at 2:05 PM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> > > + smgrwrite(RelationGetSmgr(index), INIT_FORKNUM, BLOOM_METAPAGE_BLKNO,\n> > > (char *) metapage, true);\n> > > - log_newpage(&index->rd_smgr->smgr_rnode.node, INIT_FORKNUM,\n> > > + log_newpage(&(RelationGetSmgr(index))->smgr_rnode.node, INIT_FORKNUM,\n> > >\n> > > At the log_newpage, index is guaranteed to have rd_smgr. So I prefer\n> > > to leave the line alone.. I don't mind other successive calls if any\n> > > since what I don't like is the notation there.\n> > >\n> >\n> > Perhaps, isn't that bad.
It is good to follow the practice of using\n> > RelationGetSmgr() for rd_smgr access, IMHO.\n>\n> I don't mind RelationGetSmgr(index)->smgr_rnode alone or\n> &variable->member alone and there's not the previous call to\n> RelationGetSmgr just above. How about using a temporary variable?\n>\n> SMgrRelation srel = RelationGetSmgr(index);\n> smgrwrite(srel, ...);\n> log_newpage(srel->..);\n>\n\nUnderstood. Used a temporary variable for the place where\nRelationGetSmgr() calls are placed too close or in a loop.\n\nPlease have a look at the attached version, thanks for the review.\n\nRegards,\nAmul", "msg_date": "Tue, 20 Apr 2021 11:18:26 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nI looked through the patch. Looks good to me. \r\n\r\nCFbot tests are passing and, as I got it from the thread, nobody opposes this refactoring, so, move it to RFC status.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Mon, 05 Jul 2021 12:49:09 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> On Tue, Apr 20, 2021 at 6:59 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> I don't mind RelationGetSmgr(index)->smgr_rnode alone or\n>> &variable->member alone and there's not the previous call to\n>> RelationGetSmgr just above. How about using a temporary variable?\n>> \n>> SMgrRelation srel = RelationGetSmgr(index);\n>> smgrwrite(srel, ...);\n>> log_newpage(srel->..);\n\n> Understood. 
Used a temporary variable for the place where\n> RelationGetSmgr() calls are placed too close or in a loop.\n\n[ squint... ] Doesn't this risk introducing exactly the sort of\ncache-clobber hazard we're trying to prevent? That is, the above is\nnot safe unless you are *entirely* certain that there is not and never\nwill be any possibility of a relcache flush before you are done using\nthe temporary variable. Otherwise it can become a dangling pointer.\n\nThe point of the static-inline function idea was to be cheap enough\nthat it isn't worth worrying about this sort of risky optimization.\nGiven that an smgr function is sure to involve some kernel calls,\nI doubt it's worth sweating over an extra test-and-branch beforehand.\nSo where I was hoping to get to is that smgr objects are *only*\nreferenced by RelationGetSmgr() calls and nobody ever keeps any\nother pointers to them across any non-smgr operations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 13:36:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Tue, Jul 6, 2021 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amul Sul <sulamul@gmail.com> writes:\n> > On Tue, Apr 20, 2021 at 6:59 AM Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote:\n> >> I don't mind RelationGetSmgr(index)->smgr_rnode alone or\n> >> &variable->member alone and there's not the previous call to\n> >> RelationGetSmgr just above. How about using a temporary variable?\n> >>\n> >> SMgrRelation srel = RelationGetSmgr(index);\n> >> smgrwrite(srel, ...);\n> >> log_newpage(srel->..);\n>\n> > Understood. Used a temporary variable for the place where\n> > RelationGetSmgr() calls are placed too close or in a loop.\n>\n> [ squint... ] Doesn't this risk introducing exactly the sort of\n> cache-clobber hazard we're trying to prevent? 
That is, the above is\n> not safe unless you are *entirely* certain that there is not and never\n> will be any possibility of a relcache flush before you are done using\n> the temporary variable. Otherwise it can become a dangling pointer.\n>\n\nYeah, there will be a hazard: even if we are sure it is right now, we cannot guarantee future\nchanges in any subroutine that could get called in between.\n\n> The point of the static-inline function idea was to be cheap enough\n> that it isn't worth worrying about this sort of risky optimization.\n> Given that an smgr function is sure to involve some kernel calls,\n> I doubt it's worth sweating over an extra test-and-branch beforehand.\n> So where I was hoping to get to is that smgr objects are *only*\n> referenced by RelationGetSmgr() calls and nobody ever keeps any\n> other pointers to them across any non-smgr operations.\n>\n\nOk, will revert changes added in the previous version, thanks.\n\nRegards,\nAmul\n\n\n", "msg_date": "Wed, 7 Jul 2021 09:44:16 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Wed, Jul 7, 2021 at 9:44 AM Amul Sul <sulamul@gmail.com> wrote:\n>\n> On Tue, Jul 6, 2021 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amul Sul <sulamul@gmail.com> writes:\n> > > On Tue, Apr 20, 2021 at 6:59 AM Kyotaro Horiguchi\n> > > <horikyota.ntt@gmail.com> wrote:\n> > >> I don't mind RelationGetSmgr(index)->smgr_rnode alone or\n> > >> &variable->member alone and there's not the previous call to\n> > >> RelationGetSmgr just above. How about using a temporary variable?\n> > >>\n> > >> SMgrRelation srel = RelationGetSmgr(index);\n> > >> smgrwrite(srel, ...);\n> > >> log_newpage(srel->..);\n> >\n> > > Understood. Used a temporary variable for the place where\n> > > RelationGetSmgr() calls are placed too close or in a loop.\n> >\n> > [ squint...
] Doesn't this risk introducing exactly the sort of\n> > cache-clobber hazard we're trying to prevent? That is, the above is\n> > not safe unless you are *entirely* certain that there is not and never\n> > will be any possibility of a relcache flush before you are done using\n> > the temporary variable. Otherwise it can become a dangling pointer.\n> >\n>\n> Yeah, there will a hazard, even if we sure right but cannot guarantee future\n> changes in any subroutine that could get call in between.\n>\n> > The point of the static-inline function idea was to be cheap enough\n> > that it isn't worth worrying about this sort of risky optimization.\n> > Given that an smgr function is sure to involve some kernel calls,\n> > I doubt it's worth sweating over an extra test-and-branch beforehand.\n> > So where I was hoping to get to is that smgr objects are *only*\n> > referenced by RelationGetSmgr() calls and nobody ever keeps any\n> > other pointers to them across any non-smgr operations.\n> >\n>\n> Ok, will revert changes added in the previous version, thanks.\n>\n\nHerewith attached version did the same, thanks.\n\nRegards,\nAmul", "msg_date": "Fri, 9 Jul 2021 19:22:53 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On 2021-Jul-09, Amul Sul wrote:\n\n> > On Tue, Jul 6, 2021 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > > The point of the static-inline function idea was to be cheap enough\n> > > that it isn't worth worrying about this sort of risky optimization.\n> > > Given that an smgr function is sure to involve some kernel calls,\n> > > I doubt it's worth sweating over an extra test-and-branch beforehand.\n> > > So where I was hoping to get to is that smgr objects are *only*\n> > > referenced by RelationGetSmgr() calls and nobody ever keeps any\n> > > other pointers to them across any non-smgr operations.\n\n> Herewith 
attached version did the same, thanks.\n\nI think it would be valuable to have a comment in that function to point\nout what is the function there for.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Industry suffers from the managerial dogma that for the sake of stability\nand continuity, the company should be independent of the competence of\nindividual employees.\" (E. Dijkstra)\n\n\n", "msg_date": "Fri, 9 Jul 2021 10:00:34 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "On Fri, Jul 9, 2021 at 7:30 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2021-Jul-09, Amul Sul wrote:\n>\n> > > On Tue, Jul 6, 2021 at 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> > > > The point of the static-inline function idea was to be cheap enough\n> > > > that it isn't worth worrying about this sort of risky optimization.\n> > > > Given that an smgr function is sure to involve some kernel calls,\n> > > > I doubt it's worth sweating over an extra test-and-branch beforehand.\n> > > > So where I was hoping to get to is that smgr objects are *only*\n> > > > referenced by RelationGetSmgr() calls and nobody ever keeps any\n> > > > other pointers to them across any non-smgr operations.\n>\n> > Herewith attached version did the same, thanks.\n>\n> I think it would be valuable to have a comment in that function to point\n> out what is the function there for.\n\nThanks for the suggestion, added the same in the attached version.\n\nRegards,\nAmul", "msg_date": "Mon, 12 Jul 2021 10:30:36 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Amul Sul <sulamul@gmail.com> writes:\n> [ v5_Add-RelationGetSmgr-inline-function.patch ]\n\nPushed with minor cosmetic 
adjustments.\n\nRelationCopyStorage() kind of gives me the willies.\nIt's not really an smgr-level function, but we call it\neverywhere with smgr pointers that belong to relcache entries:\n\n \t/* copy main fork */\n-\tRelationCopyStorage(rel->rd_smgr, dstrel, MAIN_FORKNUM,\n+\tRelationCopyStorage(RelationGetSmgr(rel), dstrel, MAIN_FORKNUM,\n \t\t\t\t\t\trel->rd_rel->relpersistence);\n\nSo that would fail hard if a relcache flush could occur inside\nthat function. It seems impossible today, so I settled for\njust annotating the function to that effect. But it won't\nsurprise me a bit if somebody breaks it in future due to not\nhaving read/understood the comment.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 12 Jul 2021 17:07:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" }, { "msg_contents": "Thanks a lot Tom.\n\nOn Tue, Jul 13, 2021 at 2:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amul Sul <sulamul@gmail.com> writes:\n> > [ v5_Add-RelationGetSmgr-inline-function.patch ]\n>\n> Pushed with minor cosmetic adjustments.\n>\n> RelationCopyStorage() kind of gives me the willies.\n> It's not really an smgr-level function, but we call it\n> everywhere with smgr pointers that belong to relcache entries:\n>\n> /* copy main fork */\n> - RelationCopyStorage(rel->rd_smgr, dstrel, MAIN_FORKNUM,\n> + RelationCopyStorage(RelationGetSmgr(rel), dstrel, MAIN_FORKNUM,\n> rel->rd_rel->relpersistence);\n>\n> So that would fail hard if a relcache flush could occur inside\n> that function. It seems impossible today, so I settled for\n> just annotating the function to that effect. 
But it won't\n> surprise me a bit if somebody breaks it in future due to not\n> having read/understood the comment.\n>\n> regards, tom lane\n\n\n", "msg_date": "Tue, 13 Jul 2021 09:18:05 +0530", "msg_from": "Amul Sul <sulamul@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [CLOBBER_CACHE]Server crashed with segfault 11 while executing\n clusterdb" } ]
[ { "msg_contents": "Hi Hackers,\n\nCommit 906bfcad7ba7c has improved handling for \"UPDATE ... SET\n(column_list) = row_constructor\", but it has broken in some cases where it\nwas working prior to this commit.\nAfter this commit query “DO UPDATE SET (t1_col)” is giving an error which\nwas working fine earlier.\n\ncommit 906bfcad7ba7cb3863fe0e2a7810be8e3cd84fbd\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Tue Nov 22 15:19:57 2016 -0500\n>\n> Improve handling of \"UPDATE ... SET (column_list) = row_constructor\".\n>\n> Previously, the right-hand side of a multiple-column assignment, if it\n>\n\n\n*Test case:*\nCREATE TABLE t1 (id1 int NOT NULL primary key, t1_col text NOT NULL);\nINSERT INTO t1(id1, t1_col) VALUES (88,'test1') ON CONFLICT ( id1 )\nDO UPDATE SET (t1_col) = ('test2');\nERROR: source for a multiple-column UPDATE item must be a sub-SELECT or\nROW() expression\nLINE 1: ...') ON CONFLICT ( id1 )DO UPDATE SET (t1_col) = ('test2');\n ^\nI am getting above error from v10 to master but it is passing in v96 and\nv95.\n\nIf we change \"SET (t1_col)\" to \"SET t1_col\", then the above test case is\npassing in all the branches.\n\nThis looks like a bug.\n\nThoughts?\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 22 Mar 2021 14:10:49 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "UPDATE ... SET (single_column) = row_constructor is a bit broken from\n V10 906bfcad7ba7c" }, { "msg_contents": "On Mon, Mar 22, 2021 at 02:10:49PM +0530, Mahendra Singh Thalor wrote:\n> Hi Hackers,\n> \n> Commit 906bfcad7ba7c has improved handling for \"UPDATE ... SET\n> (column_list) = row_constructor\", but it has broken in some cases where it\n> was working prior to this commit.\n> After this commit query “DO UPDATE SET (t1_col)” is giving an error which\n> was working fine earlier.\n\nSee prior discussions:\n\nhttps://www.postgresql.org/message-id/flat/20170719174507.GA19616%40telsasoft.com\nhttps://www.postgresql.org/message-id/flat/CAMjNa7cDLzPcs0xnRpkvqmJ6Vb6G3EH8CYGp9ZBjXdpFfTz6dg@mail.gmail.com\nhttps://www.postgresql.org/message-id/flat/87sh5rs74y.fsf@news-spur.riddles.org.uk\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commit&h=86182b18957b8f9e8045d55b137aeef7c9af9916\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 22 Mar 2021 16:13:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE ...
SET (single_column) = row_constructor is a bit broken\n from V10 906bfcad7ba7c" }, { "msg_contents": "On Tue, 23 Mar 2021 at 02:43, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Mon, Mar 22, 2021 at 02:10:49PM +0530, Mahendra Singh Thalor wrote:\n> > Hi Hackers,\n> >\n> > Commit 906bfcad7ba7c has improved handling for \"UPDATE ... SET\n> > (column_list) = row_constructor\", but it has broken in some cases where it\n> > was working prior to this commit.\n> > After this commit query “DO UPDATE SET (t1_col)” is giving an error which\n> > was working fine earlier.\n>\n> See prior discussions:\n>\n> https://www.postgresql.org/message-id/flat/20170719174507.GA19616%40telsasoft.com\n> https://www.postgresql.org/message-id/flat/CAMjNa7cDLzPcs0xnRpkvqmJ6Vb6G3EH8CYGp9ZBjXdpFfTz6dg@mail.gmail.com\n> https://www.postgresql.org/message-id/flat/87sh5rs74y.fsf@news-spur.riddles.org.uk\n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commit&h=86182b18957b8f9e8045d55b137aeef7c9af9916\n>\n\nThanks Justin.\n\nSorry, my mistake is that without checking prior discussion, i opened\na new thread. From next time, I will take care of this.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 23 Mar 2021 09:21:44 +0530", "msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": true, "msg_subject": "Re: UPDATE ... SET (single_column) = row_constructor is a bit broken\n from V10 906bfcad7ba7c" } ]
[ { "msg_contents": "This is a proposal for a new feature in pg_stat_statements extension.\n\nAs a statistical extension providing counters, the pg_stat_statements\nextension is a target for various sampling solutions. All of them are\ninterested in calculating statement statistics increments between\ntwo samples. But we face a problem here - observing one statement with\nits statistics right now, we can't be sure that the statistics increment for\nthis statement is continuous from the previous sample. This statement could\nbe deallocated after the previous sample and come back soon. Also it could\nhappen that statement executions after that increased statistics to\nabove the values we observed in the previous sample, making it impossible to\ndetect deallocation on the statement level.\nMy proposition here is to store the statement entry timestamp. In this case\nany sampling solution can easily detect a returning statement by its\nchanged timestamp value. And for every statement we will know the\nexact time interval for its statistics.", "msg_date": "Mon, 22 Mar 2021 12:07:48 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "[PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Dear Andrei,\r\n\r\nI think the idea is good because the pg_stat_statements_info view cannot distinguish\r\nwhether the specific statement is deallocated or not.\r\nBut multiple calling of GetCurrentTimestamp() may cause some performance issues.\r\nHow about adding a configuration parameter for controlling this feature?\r\nOr do we not have to worry about that?\r\n\r\n\r\n> +\t\tif (api_version >= PGSS_V1_9)\r\n> +\t\t{\r\n> +\t\t\tvalues[i++] = TimestampTzGetDatum(first_seen);\r\n> +\t\t}\r\n\r\nI think {} is not needed here.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 23 Mar 2021 02:13:25 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false,
"msg_subject": "RE: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Kuroda,\n\nThank you for your attention!\n\nOn Tue, 2021-03-23 at 02:13 +0000, kuroda.hayato@fujitsu.com wrote:\n> But multiple calling of GetCurrentTimestamp() may cause some\n> performance issues.\n> How about adding a configuration parameter for controlling this\n> feature?\n> Or do we not have to worry about that?\nCertainly I was thinking about this. And I've taken the advice of Teodor\nSigaev - a much more experienced developer than me. It seems that\nGetCurrentTimestamp() is fast enough for our purpose and we won't call\nit too often - only on new statement entry allocation.\n\nBy the way right now in my workload tracing tool pg_profile I have to\nreset pg_stat_statements on every sample (which is about 30-60 minutes)\nto make sure that all workload between samples is captured. This causes\nmuch more overhead. The introduced first_seen column can eliminate the need\nfor resets.\n\nHowever, there is another way - we can store the current value\nof pg_stat_statements_info.dealloc field when allocating a new\nstatement entry instead of timestamping it.
Probably, it would be little\nfaster, but timestamp seems much more valuable here.\n> \n> > +\t\tif (api_version >= PGSS_V1_9)\n> > +\t\t{\n> > +\t\t\tvalues[i++] = TimestampTzGetDatum(first_seen);\n> > +\t\t}\n> \n> I think {} is not needed here.\nOf course, thank you!\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Tue, 23 Mar 2021 09:50:16 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Tue, Mar 23, 2021 at 09:50:16AM +0300, Andrei Zubkov wrote:\n> \n> By the way right now in my workload tracing tool pg_profile I have to\n> reset pg_stat_statements on every sample (wich is about 30-60 minutes)\n> to make sure that all workload between samples is captured. This causes\n> much more overhead. Introduced first_seen column can eliminate the need\n> of resets.\n\nNote that you could also detect entries for which some counters decreased (e.g.\nthe execution count), and in that case only use the current values. It should\ngive the exact same results as what you will get with the first_seen column,\nexcept of course if some entry is almost never used and is suddenly used a lot\nafter an explicit reset or an eviction and only until you perform your\nsnapshot. I'm not sure that it's a very likely scenario though.\n\nFTR that's how powa currently deals with reset/eviction.\n\n\n", "msg_date": "Tue, 23 Mar 2021 15:03:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien,\n\nOn Tue, 2021-03-23 at 15:03 +0800, Julien Rouhaud wrote:\n> Note that you could also detect entries for which some counters\n> decreased (e.g.\n> the execution count), and in that case only use the current values. 
\n\nYes, but checking a condition on several counters seems complicated\ncompared to checking only one field.\n\n> It should\n> give the exact same results as what you will get with the first_seen\n> column,\n> except of course if some entry is almost never used and is suddenly\n> used a lot\n> after an explicit reset or an eviction and only until you perform\n> your\n> snapshot. I'm not sure that it's a very likely scenario though.\nBut it is possible, and we are guessing here. Storing a timestamp does\nnot seem too expensive to me, but it totally eliminates guessing, and\nprovides a clear view of the time interval we are watching for this\nspecific statement.\n\n> FTR that's how powa currently deals with reset/eviction.\nPoWA sampling is much more frequent than pg_profile. For PoWA it is, of\ncourse, a very unlikely scenario, but still possible.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:55:05 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Dear Andrei,\r\n\r\n> Certainly I was thinking about this. And I've taken the advice of Teodor\r\n> Sigaev - a much more experienced developer than me. It seems that\r\n> GetCurrentTimestamp() is fast enough for our purpose and we won't call\r\n> it too often - only on new statement entry allocation.\r\n\r\nOK.\r\n\r\n> However, there is another way - we can store the current value\r\n> of the pg_stat_statements_info.dealloc field when allocating a new\r\n> statement entry instead of timestamping it. 
Probably, it would be a little\r\n> faster, but a timestamp seems much more valuable here.\r\n\r\nI don't like the idea because such a column has no meaning for the specific row.\r\nI prefer storing a timestamp if GetCurrentTimestamp() is cheap.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Tue, 23 Mar 2021 08:09:07 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Dear Kuroda,\n\n> I don't like the idea because such a column has no meaning for the\n> specific row.\n> I prefer storing a timestamp if GetCurrentTimestamp() is cheap.\nI agree. New version attached.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 23 Mar 2021 17:08:32 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 2021-03-23 23:08, Andrei Zubkov wrote:\n> Dear Kuroda,\n> \n>> I don't like the idea because such a column has no meaning for the\n>> specific row.\n>> I prefer storing a timestamp if GetCurrentTimestamp() is cheap.\n> I agree. 
New version attached.\n\nThanks for posting the patch.\nI agree with its content.\n\nIs it necessary to update the version of pg_stat_statements now that the \nrelease is targeted for PG15?\nWe should take into account the risk that users will misunderstand.\n\nRegards,\n\n-- \nYuki Seino\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 07 Apr 2021 17:26:21 +0900", "msg_from": "Seino Yuki <seinoyu@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Wed, 2021-04-07 at 17:26 +0900, Seino Yuki wrote:\n\n\n> Is it necessary to update the version of pg_stat_statements now that\n> the \n> release is targeted for PG15?\nI think, yes, the version of pg_stat_statements needs to be updated. Will\nit be 1.10 in PG15?\n\nRegards,\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Wed, 07 Apr 2021 13:37:07 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Dear Andrei,\r\n\r\n> I think, yes, the version of pg_stat_statements needs to be updated. Will\r\n> it be 1.10 in PG15?\r\n\r\nI think you are right. 
\r\nAccording to [1] we can bump the version once per PG major version,\r\nand no features have been committed for 15 yet.\r\n\r\n[1]: https://www.postgresql.org/message-id/20201202040516.GA43757@nol\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n", "msg_date": "Fri, 9 Apr 2021 00:23:07 +0000", "msg_from": "\"kuroda.hayato@fujitsu.com\" <kuroda.hayato@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi, Kuroda!\n\nI intended to change the pg_stat_statements version while rebasing\nthis patch onto the current state of the master branch. That is now commit\n07e5e66.\n\nBut I'm unable to test the patch - it seems that pg_stat_statements is\nreceiving queryId = 0 for every statement in every hook now and\nstatements are not tracked at all.\n\nAm I mistaken somewhere? Maybe you know why this is happening?\n\nThank you!\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\nOn Fri, 2021-04-09 at 00:23 +0000, kuroda.hayato@fujitsu.com wrote:\n> Dear Andrei,\n> \n> > I think, yes, the version of pg_stat_statements needs to be updated.\n> > Will\n> > it be 1.10 in PG15?\n> \n> I think you are right. \n> According to [1] we can bump the version once per PG major version,\n> and no features have been committed for 15 yet.\n> \n> [1]: https://www.postgresql.org/message-id/20201202040516.GA43757@nol\n> \n> \n> Best Regards,\n> Hayato Kuroda\n> FUJITSU LIMITED\n> \n\n\n\n", "msg_date": "Wed, 14 Apr 2021 12:22:03 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Wed, 14 Apr 
2021, at 17:22, Andrei Zubkov <zubkov@moonset.ru> wrote:\n\n>\n> But I'm unable to test the patch - it seems that pg_stat_statements is\n> receiving queryId = 0 for every statement in every hook now and\n> statements are not tracked at all.\n>\n> Am I mistaken somewhere? Maybe you know why this is happening?\n>\n\ndid you enable the new compute_query_id parameter?\n", "msg_date": "Wed, 14 Apr 2021 17:32:14 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Wed, 2021-04-14 at 17:32 +0800, Julien Rouhaud wrote:\n> \n> did you enable the new compute_query_id parameter? \n\nHi, Julien!\nThank you very much! I've missed it.\n", "msg_date": "Wed, 14 Apr 2021 12:38:42 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hello, Kuroda!\n\nOn Fri, 2021-04-09 at 00:23 +0000, kuroda.hayato@fujitsu.com wrote:\n> I think you are right. \n> According to [1] we can bump the version once per PG major version,\n> and no features have been committed for 15 yet.\n> \n> [1]: https://www.postgresql.org/message-id/20201202040516.GA43757@nol\n\nVersion 2 of the patch is attached. 
\npg_stat_statements version is now 1.10 and the patch is based on 0f61727.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 14 Apr 2021 17:21:55 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi, Andrei\r\n\r\nI tested your patch, and it works well. I also prefer a timestamp instead of the dealloc num.\r\nI think it can provide more useful details about query statements.\r\n\r\nRegards,\r\nMartin Sun", "msg_date": "Mon, 19 Apr 2021 11:39:44 +0000", "msg_from": "Chengxi Sun <sunchengxi@highgo.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi, Martin\n\nOn Mon, 2021-04-19 at 11:39 +0000, Chengxi Sun wrote:\n> I tested your patch, and it works well. I also prefer a timestamp\n> instead of the dealloc num.\n> I think it can provide more useful details about query statements.\n> \nThank you for your review.\nCertainly, a timestamp is valuable here. The deallocation number is only a\nworkaround for the unlikely case when timestamping costs too much. It\nseems that this can happen only when a significant amount of statements\ncauses new entries in the pg_stat_statements hashtable. 
However, in such a\ncase using the pg_stat_statements extension might be quite difficult.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 15:03:27 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi, Andrey!\n \nI’ve tried to apply your patch v2-0001 on current master, but I failed.\nThere were git apply errors at:\npg_stat_statements.out:941\npg_stat_statements.sql:385\n \nBest Regards,\n\nAnton Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 06 Oct 2021 18:13:56 +0300", "msg_from": "Мельников Антон Андреевич <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re[2]: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi, Anton!\n\nI've corrected the patch and attached a new version.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\nOn Wed, 2021-10-06 at 18:13 +0300, Мельников Антон Андреевич wrote:\n> Hi, Andrey!\n>  \n> I’ve tried to apply your patch v2-0001 on current master, but I\n> failed.\n> There were git apply errors at:\n> pg_stat_statements.out:941\n> pg_stat_statements.sql:385\n>  \n> Best Regards,\n> \n> Anton Melnikov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>", "msg_date": "Thu, 07 Oct 2021 14:22:36 +0300", "msg_from": 
"Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "There is an issue with this patch. Its main purpose is the ability to\ncalculate values of the pg_stat_statements view for a time period between\ntwo samples without resetting pg_stat_statements, while being absolutely sure\nthat the statement was not evicted.\nSuch an approach solves the problem for all metrics except the min and max\ntime values. It seems that a partial reset is needed here, resetting the\nmin/max values during a sample. But the overall min/max values will be lost\nin this case. Does the addition of resettable min/max metrics to the\npg_stat_statements view seem reasonable here?\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Thu, 07 Oct 2021 15:31:51 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 07.10.2021 15:31, Andrei Zubkov wrote:\n > There is an issue with this patch. Its main purpose is the ability to\n > calculate values of the pg_stat_statements view\n >  [...]\n > Does the addition of resettable min/max metrics to the\n > pg_stat_statements view seem reasonable here?\n\nHello, Andrey!\n\nI think it depends on what the slow top-level sampler wants.\nLet's define the current values in pg_stat_statements for some query as gmin and gmax.\nThere seem to be two main variants:\n1) If the top-level sampler wants to know, for some query, the min and max values for\nthe entire watch time, then the existing gmin and gmax in pg_stat_statements are sufficient.\nThe top-level sampler can update top_min and top_max at every slow sample as follows:\ntop_max = gmax > top_max ? gmax : top_max;\ntop_min = gmin < top_min ? 
gmin : top_min;\nThis should work regardless of whether there was a reset between samples or not.\n2) The second case happens when the top-level sampler wants to know the min and max\nvalues for the sampling period.\nIn that case I think one shouldn't use gmin and gmax, and especially shouldn't reset\nthem asynchronously from outside, because their lifetime and the sampling period are\nindependent values, and moreover someone else might need gmin and gmax as is.\nSo I suppose that additional vars loc_min and loc_max are the right way to do it.\nIn that case, at every top sample one needs to replace (not update)\nthe top values as follows:\ntop_max = loc_max; loc_max = 0;\ntop_min = loc_min; loc_min = INT_MAX;\nAnd one more thing: if there was a reset of stats between two samples,\nthen I think it is best to ignore the new values,\nsince they were obtained for an incomplete period.\nThis patch, thanks to the saved timestamp, makes it possible\nto determine the presence of a reset between samples, and\nthere should not be a problem implementing such an algorithm.\n\nThe patch now applies normally, and all my tests were successful.\nThe only thing I could suggest to your notice\nis a small cosmetic edit to replace\nthe numeric value in the #define on line 1429 of pg_stat_statements.c\nby one of the constants defined above.\n\nBest regards,\nAnton Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Mon, 18 Oct 2021 22:11:12 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi, Anton!\n\nThank you for your review!\n\nOn Mon, 2021-10-18 at 22:11 +0300, Anton A. Melnikov wrote:\n> So I suppose that additional vars loc_min and loc_max are the right way\n> to do it.\n\nI've added the following fields to the pg_stat_statements view:\n\n    min_plan_time_local float8,\n    max_plan_time_local float8,\n    min_exec_time_local float8,\n    max_exec_time_local float8\n\nand a function that is able to reset those fields:\n\nCREATE FUNCTION pg_stat_statements_reset_local(IN userid Oid DEFAULT 0,\n    IN dbid Oid DEFAULT 0,\n    IN queryid bigint DEFAULT 0\n)\n\nIt resets the local fields mentioned above and updates the new field\n\n    local_stats_since timestamp with time zone\n\nwith the current timestamp. All other statement statistics remain\nunchanged. After the reset, the _local fields will have NULL values until the\nnext statement execution.\n\n> And one more thing: if there was a reset of stats between two\n> samples,\n> then I think it is best to ignore the new values, \n> since they were obtained for an incomplete period.\n> This patch, thanks to the saved timestamp, makes it possible \n> to determine the presence of a reset between samples, and\n> there should not be a problem implementing such an algorithm.\nYes, it seems this is up to the sampling solution. Maybe in some cases\nincomplete information will be better than nothing... 
Anyway, we have\nall the necessary data now.\n\n\n> The only thing I could suggest to your notice\n> is a small cosmetic edit to replace\n> the numeric value in the #define on line 1429 of pg_stat_statements.c\n> by one of the constants defined above. \nHmm. I've left it just like it was before me. But it seems you are\nright.\n\nI've attached a new version of the patch. The first_seen column was\nrenamed to stats_since - it seems more self-explanatory to me. But\nI'm not sure about the current naming at all.\n\nThe tests are not ready yet, but any thoughts about the patch are\nwelcome right now.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 03 Dec 2021 17:03:46 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Fri, 2021-12-03 at 17:03 +0300, Andrei Zubkov wrote:\n> I've added the following fields to the pg_stat_statements view:\n> \n>     min_plan_time_local float8,\n>     max_plan_time_local float8,\n>     min_exec_time_local float8,\n>     max_exec_time_local float8\n> \n> and a function that is able to reset those fields:\n> \n> CREATE FUNCTION pg_stat_statements_reset_local(IN userid Oid DEFAULT\n> 0,\n>         IN dbid Oid DEFAULT 0,\n>         IN queryid bigint DEFAULT 0\n> )\n> \n> It resets the local fields mentioned above and updates the new field\n> \n>     local_stats_since timestamp with time zone\n> \n> with the current timestamp. All other statement statistics remain\n> unchanged. \n\nAfter adding the new fields, the pg_stat_statements view looks a little\nbit overloaded. Furthermore, the fields in this view have different\nbehavior.\n\nWhat if we create a new view for such resettable fields? 
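For illustration, such a separate view could be built on top of the same underlying function, exposing only the resettable fields (a rough sketch using the column names from this thread, not the patch's actual definition):

```sql
-- Sketch only: a separate view for the resettable (local) statistics,
-- leaving the main pg_stat_statements view unchanged.
CREATE VIEW pg_stat_statements_local AS
SELECT userid, dbid, queryid,
       min_plan_time_local, max_plan_time_local,
       min_exec_time_local, max_exec_time_local,
       local_stats_since
  FROM pg_stat_statements(true);
```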
It will\nmake the description of the views and reset functions in the docs much\nclearer.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Fri, 03 Dec 2021 19:55:53 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "\n\nOn 03.12.2021 19:55, Andrei Zubkov wrote:\n> On Fri, 2021-12-03 at 17:03 +0300, Andrei Zubkov wrote:\n...\n> What if we create a new view for such resettable fields? It will\n> make the description of the views and reset functions in the docs much\n> clearer.\n> \n\nHi, Andrey!\n\nI completely agree that creating a separate view for these new fields is\nthe most correct thing to do.\n\nAs for variable names, the term global is already used for global \nstatistics, in particular in the struct pgssGlobalStats.\nThe considered timestamps refer to the per-statement level,\nas pointed out in the struct pgssEntry's comment. 
Therefore, I think \nit's better to rename gstats_since to just stats_reset in the same way.\nAlso it might be better to use the term 'auxiliary' and use the same \napproach as for existing similar vars.\nSo internally it might look something like this:\n\ndouble\taux_min_time[PGSS_NUMKIND];\ndouble\taux_max_time[PGSS_NUMKIND];\nTimestampTz\taux_stats_reset;\n\nAnd at the view level:\n aux_min_plan_time float8,\n aux_max_plan_time float8,\n aux_min_exec_time float8,\n aux_max_exec_time float8,\n aux_stats_reset timestamp with time zone\n\nFunction names might be pg_stat_statements_aux() and \npg_stat_statements_aux_reset().\n\nThe top-level application may find out the period over which the aux extrema were \ncollected by determining which reset was closer, as follows:\ndata_collect_period = aux_stats_reset > stats_reset ?\nnow - aux_stats_reset : now - stats_reset;\nand decide not to trust this data if the period was too small.\nFor correct operation, aux_stats_reset must be updated and the aux extrema values \nmust be reset simultaneously with the update of stats_reset, therefore some \nsynchronization is needed to avoid a race with a possible external reset.\n\nI've tested the patch v4 and didn't find any evident problems.\nContrib installcheck said:\ntest pg_stat_statements ... ok 163 ms\ntest oldextversions ... ok 46 ms\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Wed, 22 Dec 2021 04:25:33 +0300", "msg_from": "\"Anton A. 
Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn 2021-12-03 17:03:46 +0300, Andrei Zubkov wrote:\n> I've attached a new version of the patch.\n\nThis fails with an assertion failure:\nhttps://cirrus-ci.com/task/5567540742062080?logs=cores#L55\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 2 Jan 2022 13:28:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hello,\n\nOn Sun, 2022-01-02 at 13:28 -0800, Andres Freund wrote:\n> Hi,\n> \n> This fails with an assertion failure:\n> https://cirrus-ci.com/task/5567540742062080?logs=cores#L55\n> \n> \nAndres, thank you for your test! I've missed it. Fixed in the attached\npatch v5.\n\nOn Wed, 2021-12-22 at 04:25 +0300, Anton A. Melnikov wrote:\n> \n> \n> I completely agree that creating a separate view for these new fields\n> is\n> the most correct thing to do.\n\nAnton,\n\nI've created a new view named pg_stat_statements_aux. But for now both\nviews are using the same function pg_stat_statements, which returns all\nfields. 
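As an illustration of this arrangement (a sketch with names as discussed in this thread, not the actual patch code), the auxiliary view can simply be a projection of the one set-returning function:

```sql
-- Sketch: the auxiliary view selects from the same set-returning function
-- as the main view, so nothing is computed twice.
CREATE VIEW pg_stat_statements_aux AS
SELECT userid, dbid, queryid,
       aux_min_exec_time, aux_max_exec_time,
       aux_stats_since
  FROM pg_stat_statements(true);

-- A sampling solution that needs every column can bypass the views:
SELECT * FROM pg_stat_statements(true);
```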
It seems reasonable to me - if a sampling solution needs all\nvalues, it can query the function.\n\n> Also it might be better to use the term 'auxiliary' and use the same \n> approach as for existing similar vars.\n\nAgreed, renamed using the auxiliary term.\n\n> So internally it might look something like this:\n> \n> double\taux_min_time[PGSS_NUMKIND];\n> double\taux_max_time[PGSS_NUMKIND];\n> TimestampTz\taux_stats_reset;\n> \n> And at the view level:\n>  aux_min_plan_time float8,\n>  aux_max_plan_time float8,\n>  aux_min_exec_time float8,\n>  aux_max_exec_time float8,\n>  aux_stats_reset timestamp with time zone\n> \n> Function names might be pg_stat_statements_aux() and \n> pg_stat_statements_aux_reset().\n\nBut it seems the \"stats_reset\" term is not quite correct in this case. The\n\"stats_since\" field holds the timestamp of the hashtable entry, but not the\nreset time. The same applies to aux_stats_since - for a new statement it\nholds its entry time, but in case of a reset it will hold the reset time.\n\nThe \"stats_reset\" name seems a little bit confusing to me.\n\nAttached patch v5\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 14 Jan 2022 18:15:42 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi, Andrey!\n\nI've checked the 5th version of the patch and there are some remarks.\n\n >I've created a new view named pg_stat_statements_aux. But for now both\n >views are using the same function pg_stat_statements, which returns all\n >fields. It seems reasonable to me - if a sampling solution needs all\n >values, it can query the function.\n\nAgreed, it might be useful in some cases.\n\n >But it seems the \"stats_reset\" term is not quite correct in this case. The\n >\"stats_since\" field holds the timestamp of the hashtable entry, but not the\n >reset time. 
The same applies to aux_stats_since - for a new statement it\n >holds its entry time, but in case of a reset it will hold the reset time.\n\nThanks for the clarification. Indeed, if we mean the word 'reset' as the \nremoval of all the hashtable entries during pg_stat_statements_reset(), \nthen we shouldn't use it for the first occurrence timestamp in the \nstruct pgssEntry.\nSo with the stats_since field everything is clear.\nOn the other hand, the aux_stats_since field can be updated for two reasons:\n1) The same as for stats_since, due to the first occurrence of the entry in the \nhashtable. It will be equal to the stats_since timestamp in that case.\n2) Due to an external reset from an upper-level sampler.\nI think it's not very important how to name this field, but it would be \nbetter to mention both these reasons in the comment.\n\nAs for more important things: if the aux_min_time initial value is zero, \nlike now, then the if condition on line 1385 of pg_stat_statements.c will \nnever be true and aux_min_time will remain zero. Initializing aux_min_time with \nINT_MAX can solve this problem.\n\nIt is possible to reduce the size of the entry_reset_aux() function by \nremoving the if condition on line 2606 and the entire third branch from line \n2626. Thanks to the check on line 2612 this will work in all cases.\n\nAlso it would be nice to move lines 2582-2588, which repeat several times, \ninto a separate function. I think this can make \nentry_reset_aux() shorter and clearer.\n\nIn general, the 5th patch applies with no problems, make check-world and \nCI give no errors, and the patch seems to be close to becoming RFC.\n\nWith best regards,\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Mon, 24 Jan 2022 20:16:06 +0300", "msg_from": "\"Anton A. 
Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 24, 2022 at 08:16:06PM +0300, Anton A. Melnikov wrote:\n> Hi, Andrey!\n> \n> I've checked the 5th version of the patch and there are some remarks.\n> \n> >I've created a new view named pg_stat_statements_aux. But for now both\n> >views are using the same function pg_stat_statements, which returns all\n> >fields. It seems reasonable to me - if a sampling solution needs all\n> >values, it can query the function.\n> \n> Agreed, it might be useful in some cases.\n> \n> >But it seems the \"stats_reset\" term is not quite correct in this case. The\n> >\"stats_since\" field holds the timestamp of the hashtable entry, but not the\n> >reset time. The same applies to aux_stats_since - for a new statement it\n> >holds its entry time, but in case of a reset it will hold the reset time.\n> \n> Thanks for the clarification. Indeed, if we mean the word 'reset' as the\n> removal of all the hashtable entries during pg_stat_statements_reset(), then\n> we shouldn't use it for the first occurrence timestamp in the struct\n> pgssEntry.\n> So with the stats_since field everything is clear.\n> On the other hand, the aux_stats_since field can be updated for two reasons:\n> 1) The same as for stats_since, due to the first occurrence of the entry in the\n> hashtable. It will be equal to the stats_since timestamp in that case.\n> 2) Due to an external reset from an upper-level sampler.\n> I think it's not very important how to name this field, but it would be\n> better to mention both these reasons in the comment.\n\nAre those 4 new counters really worth it?\n\nThe min/max were added to make pg_stat_statements a bit more useful if you\nhave nothing else using that extension. But once you set up a tool that\nsnapshots the metrics regularly, do you really need to know e.g. 
\"what was the\nmaximum execution time in the last 3 years\", which will typically be what\npeople will retrieve from the non-aux min/max counters, rather than simply\nusing your additional tool for better and more precise information?\n\n\n", "msg_date": "Tue, 25 Jan 2022 18:08:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien,\nOn Tue, 2022-01-25 at 18:08 +0800, Julien Rouhaud wrote:\n> \n> Are those 4 new counters really worth it?\n> \n> The min/max were added to make pg_stat_statements a bit more useful\n> if you\n> have nothing else using that extension.  But once you set up a tool\n> that\n> snapshots the metrics regularly, do you really need to know e.g.\n> \"what was the\n> maximum execution time in the last 3 years\", which will typically be\n> what\n> people will retrieve from the non-aux min/max counters, rather than\n> simply\n> using your additional tool for better and more precise information?\n\nOf course we can replace the old min/max metrics with the new \"aux\" min/max\nmetrics. It seems reasonable to me because they will have the same\nbehavior until we touch reset_aux. I think we can assume that min/max\ndata is saved somewhere if reset_aux was performed, but what about\nbackward compatibility?\nThere may be some monitoring solutions that don't expect the min/max\nstats to be reset independently of other statement statistics.\nIt seems highly unlikely to me, because the min/max stats for \"the last\n3 years\" are obviously unusable, but maybe someone uses them as a sign of\nsomething?\nDo we need to worry about that?\n\nAlso, there is one more dramatic consequence of such a decision...\nWhat min/max values should be returned after the auxiliary reset but\nbefore the next statement execution?\nThe NULL values seem reasonable, but there were no NULLs before and\nbackward compatibility seems broken. 
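For example, a consumer that cannot handle NULLs could fall back to the overall extrema (the column names here follow the discussion above and are illustrative only):

```sql
-- Sketch: tolerate NULL auxiliary extrema right after an aux-only reset
-- by falling back to the overall min/max values.
SELECT queryid,
       coalesce(aux_min_exec_time, min_exec_time) AS min_exec,
       coalesce(aux_max_exec_time, max_exec_time) AS max_exec
  FROM pg_stat_statements;
```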
Another approach is to return the\nold values of min/max stats and the old aux_stats_since value until the\nnext statement execution, but it seems strange when the reset was\nperformed and it didn't affect any stats instantly.\n\nregards,\nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 14:58:17 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi Andrei,\n\nOn Tue, Jan 25, 2022 at 02:58:17PM +0300, Andrei Zubkov wrote:\n> \n> Of course we can replace old min/max metrics with the new \"aux\" min/max\n> metrics. It seems reasonable to me because they will have the same\n> behavior until we touch reset_aux. I think we can assume that min/max\n> data is saved somewhere if reset_aux was performed, but how about the\n> backward compatibility?\n> There may be some monitoring solutions that doesn't expect min/max\n> stats reset independently of other statement statistics.\n> It seems highly unlikely to me, because the min/max stats for \"the last\n> 3 years\" is obvious unusable but maybe someone uses them as a sign of\n> something?\n\nTo be honest I don't see how any monitoring solution could make any use of\nthose fields as-is. For that reason in PoWA we unfortunately have to entirely\nignore them. There was a previous attempt to provide a way to reset those\ncounters only (see [1]), but it was returned with feedback due to lack of TLC\nfrom the author.\n\n> Are we need to worry about that?\n\nI don't think it's a problem, as once you have a solution on top of\npg_stat_statements, you get information orders of magnitude better from that\nsolution than from pg_stat_statements. 
And if that's a problem, well either\ndon't reset those counters, or don't use the external solution if it does it\nautomatically and you're not ok with it.\n\n> Also, there is one more dramatic consequence of such decision...\n> What min/max values should be returned after the auxiliary reset but\n> before the next statement execution?\n> The NULL values seems reasonable but there was not any NULLs before and\n> backward compatibility seems broken. Another approach is to return the\n> old values of min/max stats and the old aux_stats_since value until the\n> next statement execution but it seems strange when the reset was\n> performed and it doesn't affected any stats instantly.\n\nIf you're worried about some external table having a NOT NULL constraint for\nthose fields, how about returning NaN instead? That's a valid value for a\ndouble precision data type.\n\n[1] https://www.postgresql.org/message-id/1762890.8ARNpCrDLI@peanuts2\n\n\n", "msg_date": "Tue, 25 Jan 2022 20:22:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien,\n\nOn Tue, 2022-01-25 at 20:22 +0800, Julien Rouhaud wrote:\n> To be honest I don't see how any monitoring solution could make any\n> use of\n> those fields as-is.  For that reason in PoWA we unfortunately have to\n> entirely\n> ignore them.  There was a previous attempt to provide a way to reset\n> those\n> counters only (see [1]), but it was returned with feedback due to\n> lack of TLC\n> from the author.\n\nThank you, I've just seen a thread in [1], it was of course a weak\nattempt and I think it could be done better.\n> \n> \n> I don't think it's a problem, as once you have a solution on top of\n> pg_stat_statements, you get information order of magnitude better\n> from that solution instead of pg_stat_statements.\n\nAgreed. 
I'm worried about having different solutions running\nsimultaneously. All of them may want to get accurate min/max values,\nbut if they all reset min/max statistics, they will interfere with each\nother. It seems that min/max resetting should be configurable in each\nsampler as a partial problem solution. At least, every sampling\nsolution can make a decision based on reset timestamps provided by\npg_stat_statements now.\n> \n> \n> If you're worried about some external table having a NOT NULL\n> constraint for\n> those fields, how about returning NaN instead?  That's a valid value\n> for a\n> double precision data type.\n\nI don't know for sure what we can expect to be wrong here. My opinion\nis to use NULLs, as they seem more suitable here. But this can be done\nonly if we are not expecting significant side effects.\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Wed, 26 Jan 2022 16:43:04 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hello!\n\nOn 26.01.2022 16:43, Andrei Zubkov wrote:\n\n >>\n >> If you're worried about some external table having a NOT NULL\n >> constraint for\n >> those fields, how about returning NaN instead? That's a valid value\n >> for a\n >> double precision data type.\n >\n > I don't know for sure what we can expect to be wrong here. My opinion\n > is to use NULLs, as they seems more suitable here. But this can be done\n > only if we are not expecting significant side effects.\n\nLet me suggest for your consideration an additional reset request flag \nthat can be used to synchronize resets in a way similar to interrupt \nhandling.\nAn external reset can set this flag immediately. 
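In outline, the flag-based scheme could look like this (an illustrative C sketch with invented names, not code from any posted patch):

```c
#include <stdbool.h>

/* Invented names, illustrating the reset-request flag idea. */
typedef struct
{
    double min_time;
    double max_time;
    long   aux_stats_since;   /* stamped when the reset is applied */
    bool   reset_requested;   /* set immediately by an external reset */
} AuxEntry;

/* External sampler side: request a reset; stats are untouched for now. */
static void
request_aux_reset(AuxEntry *e)
{
    e->reset_requested = true;
}

/* Statement-completion side: apply any pending reset by seeding min/max
 * from this execution, stamping the time and clearing the flag. */
static void
on_statement_complete(AuxEntry *e, double total_time, long now)
{
    if (e->reset_requested)
    {
        e->min_time = e->max_time = total_time;
        e->aux_stats_since = now;
        e->reset_requested = false;
    }
    else
    {
        if (total_time < e->min_time)
            e->min_time = total_time;
        if (total_time > e->max_time)
            e->max_time = total_time;
    }
}
```

Until on_statement_complete() fires, the entry still carries the old, valid values, which is the point of the scheme.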
Then pg_stat_statements \nwill wait for the moment when the required query hits into the \nstatistics and only at this moment really reset the aux statistics,\nwrite a new timestamp and clear the flag. At the time of real reset, \ntotal_time will be determined, and pg_stat_statements can immediately \ninitialize min and max correctly.\n From reset to the next query execution the aux view will give old \ncorrect values so neither NaNs nor NULLs will be required.\nAlso we can put the value of reset request flag into the aux view to \ngive feedback to the external application that reset was requested.\n\nWith best regards,\n--\nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Tue, 8 Feb 2022 10:47:18 +0300", "msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "This patch seems to have gotten some feedback but development has\nstalled. It's marked \"waiting on author\" but I'm not clear exactly\nwhat is expected from the authors here. It seems there isn't really\nconsensus on the design at the moment. 
There have been no emails in over\na month.\n\nFwiw I find the idea of having a separate \"aux\" table kind of awkward.\nIt'll seem strange to users not familiar with the history and without\nany clear idea why the fields are split.\n\n\n", "msg_date": "Fri, 25 Mar 2022 00:37:57 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi\n\nOn Fri, 2022-03-25 at 00:37 -0400, Greg Stark wrote:\n> Fwiw I find the idea of having a separate \"aux\" table kind of\n> awkward.\n> It'll seem strange to users not familiar with the history and without\n> any clear idea why the fields are split.\n\nGreg, thank you for your attention and for your thought.\n\nI've just completed the 6th version of the patch, implementing the idea\nproposed by Julien Rouhaud, i.e. without auxiliary statistics. The 6th\nversion will reset the current min/max fields to zeros until the first plan\nor execute. I've decided to use zeros here because planning statistics\nare zero in case of disabled tracking. I think a sampling solution could\neasily handle this.\n\n-- \nRegards, Andrei Zubkov", "msg_date": "Fri, 25 Mar 2022 13:25:23 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Fri, Mar 25, 2022 at 01:25:23PM +0300, Andrei Zubkov wrote:\n> Greg, thank you for your attention and for your thought.\n> \n> I've just completed the 6th version of a patch implementing idea\n> proposed by Julien Rouhaud, i.e. without auxiliary statistics. 6th\n> version will reset current min/max fields to zeros until the first plan\n> or execute.\n\nThanks!\n\n> I've decided to use zeros here because planning statistics\n> is zero in case of disabled tracking. I think sampling solution could\n> easily handle this.\n\nI'm fine with it. 
It's also consistent with the planning counters when\ntrack_planning is disabled. And even if the sampling solution doesn't handle\nit, you will simply get consistent values, like \"0 calls with minmax timing of\n0 msec\", so it's not really a problem.\n\nFeature wise, I'm happy with the patch. I just have a few comments.\n\nTests:\n\n- it's missing some test in sql/oldextversions.sql to validate that the code\n works with the extension in version 1.9\n- the last test removed the minmax_plan_zero field, why?\n\nCode:\n\n+\tTimestampTz\tstats_since;\t\t/* timestamp of entry allocation moment */\n\nI think \"timestamp of entry allocation\" is enough?\n\n+\t\t\t * Calculate min and max time. min = 0 and max = 0\n+\t\t\t * means that min/max statistics reset was happen\n\nmaybe \"means that the min/max statistics were reset\"\n\n+/*\n+ * Reset min/max values of specified entries\n+ */\n+static void\n+entry_minmax_reset(Oid userid, Oid dbid, uint64 queryid)\n+{\n[...]\n\nThere's a lot of duplicated logic with entry_reset().\nWould it be possible to merge at least the C reset function to handle either\nall-metrics or minmax-only? Also, maybe it would be better to have a single SQL\nreset function, something like:\n\npg_stat_statements_reset(IN userid Oid DEFAULT 0,\n\tIN dbid Oid DEFAULT 0,\n\tIN queryid bigint DEFAULT 0,\n IN minmax_only DEFAULT false\n)\n\nDoc:\n\n+ <structfield>stats_since</structfield> <type>timestamp with time zone</type>\n+ </para>\n+ <para>\n+ Timestamp of statistics gathering start for the statement\n\nThe description is a bit weird. Maybe like \"Time at which statistics gathering\nstarted for this statement\"? 
Same for the minmax version.\n\n\n", "msg_date": "Wed, 30 Mar 2022 17:31:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien!\n\nThank you for such a detailed review!\n\nOn Wed, 2022-03-30 at 17:31 +0800, Julien Rouhaud wrote:\n> Feature wise, I'm happy with the patch.  I just have a few comments.\n> \n> Tests:\n> \n> - it's missing some test in sql/oldextversions.sql to validate that the\n> code\n>   works with the extension in version 1.9\n\nYes, I've just added some tests there, but it seems they are not quite\nsufficient. Maybe we should try to run some queries against the views and\nfunctions in old versions? At least when a new C function version\nappears...\n\nWhile developing the tests I've noted that the current test of the\npg_stat_statements_info view actually tests only view access. However\nwe can at least test the functionality of the stats_reset field like this:\n\nSELECT now() AS ref_ts \\gset\nSELECT dealloc, stats_reset >= :'ref_ts' FROM pg_stat_statements_info;\nSELECT pg_stat_statements_reset();\nSELECT dealloc, stats_reset >= :'ref_ts' FROM pg_stat_statements_info;\n\nDoes it seem reasonable? \n\n> - the last test removed the minmax_plan_zero field, why?\n\nMy thought was as follows... Reexecution of the same query will\ndefinitely cause execution. However, most likely it wouldn't be\nplanned, but if it were (maybe this is possible, or maybe it will be\npossible in the future in some cases) the test shouldn't fail. Checking\nonly execution stats seems enough to me - in most cases we can't\ncheck planning stats with such a test anyway.\nWhat do you think about it?\n\n> \n> Code:\n> \n> +       TimestampTz     stats_since;            /* timestamp of entry\n> allocation moment */\n> \n> I think \"timestamp of entry allocation\" is enough?\n\nYes\n\n> \n> +                        * Calculate min and max time. 
min = 0 and max\n> = 0\n> +                        * means that min/max statistics reset was\n> happen\n> \n> maybe \"means that the min/max statistics were reset\"\n\nAgreed\n\n> \n> +/*\n> + * Reset min/max values of specified entries\n> + */\n> +static void\n> +entry_minmax_reset(Oid userid, Oid dbid, uint64 queryid)\n> +{\n> [...]\n> \n> There's a lot of duplicated logic with entry_reset().\n> Would it be possible to merge at least the C reset function to handle\n> either\n> all-metrics or minmax-only? \n\nGreat point! I've merged minmax reset functionality in the entry_reset\nfunction.\n\n> Also, maybe it would be better to have a single SQL\n> reset function, something like:\n> \n> pg_stat_statements_reset(IN userid Oid DEFAULT 0,\n>         IN dbid Oid DEFAULT 0,\n>         IN queryid bigint DEFAULT 0,\n>     IN minmax_only DEFAULT false\n> )\n\nOf course!\n\n> \n> Doc:\n> \n> +       <structfield>stats_since</structfield> <type>timestamp with\n> time zone</type>\n> +      </para>\n> +      <para>\n> +       Timestamp of statistics gathering start for the statement\n> \n> The description is a bit weird.  Maybe like \"Time at which statistics\n> gathering\n> started for this statement\"?  
Same for the minmax version.\n\nAgreed.\n\nI've attached 7th patch version with fixes mentioned above.\n-- \nBest regards, Andrei Zubkov", "msg_date": "Thu, 31 Mar 2022 13:06:10 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "FYI this has a compiler warning showing up on the cfbot:\n\n\n[13:19:51.544] pg_stat_statements.c: In function ‘entry_reset’:\n[13:19:51.544] pg_stat_statements.c:2598:32: error:\n‘minmax_stats_reset’ may be used uninitialized in this function\n[-Werror=maybe-uninitialized]\n[13:19:51.544] 2598 | entry->minmax_stats_since = minmax_stats_reset;\n[13:19:51.544] | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~\n\nIf the patch is otherwise ready to commit then this is an issue that\nshould be fixed before marking it ready to commit.\n\nGiven that this is the last week before feature freeze it'll probably\nget moved to a future commitfest unless it's ready to commit.\n\n\n", "msg_date": "Fri, 1 Apr 2022 11:38:52 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 01, 2022 at 11:38:52AM -0400, Greg Stark wrote:\n> FYI this has a compiler warning showing up on the cfbot:\n>\n> [13:19:51.544] pg_stat_statements.c: In function ‘entry_reset’:\n> [13:19:51.544] pg_stat_statements.c:2598:32: error:\n> ‘minmax_stats_reset’ may be used uninitialized in this function\n> [-Werror=maybe-uninitialized]\n> [13:19:51.544] 2598 | entry->minmax_stats_since = minmax_stats_reset;\n> [13:19:51.544] | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~\n>\n> If the patch is otherwise ready to commit then this is an issue that\n> should be fixed before marking it ready to commit.\n>\n> Given that this is the last week before feature freeze it'll probably\n> get moved to a future commitfest 
unless it's ready to commit.\n\nAs I mentioned in my last review, I think feature wise the patch is ok, it just\nneeded a few minor changes. It's a small patch but can help *a lot* of tools on\ntop of pg_stat_statements and give users a better overview of their workload, so\nit would be nice to commit it in v15.\n\nI was busy looking at the prefetch patch today (not done yet), but I plan to\nreview the last version over the weekend. After a quick look at the patch it\nseems like a compiler bug. I'm not sure which clang version is used, but I can't\nreproduce it locally using clang 13. I've already seen similar false positives,\nwhen a variable is initialized in a branch (here minmax_only == true), and only\nthen used in similar branches. I guess that pg_stat_statement_reset() is so\nexpensive that an extra gettimeofday() wouldn't change much. Otherwise\ninitializing to NULL should be enough.\n\n\n", "msg_date": "Sat, 2 Apr 2022 00:13:21 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nThank you, Greg\n\nOn Fri, 2022-04-01 at 11:38 -0400, Greg Stark wrote:\n> [13:19:51.544] pg_stat_statements.c: In function ‘entry_reset’:\n> [13:19:51.544] pg_stat_statements.c:2598:32: error:\n> ‘minmax_stats_reset’ may be used uninitialized in this function\n> [-Werror=maybe-uninitialized]\n> [13:19:51.544] 2598 | entry->minmax_stats_since = minmax_stats_reset;\n> [13:19:51.544] | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~\n>\n\nI was afraid such a warning could appear..\n\nOn Sat, 2022-04-02 at 00:13 +0800, Julien Rouhaud wrote:\n> I guess that pg_stat_statement_reset() is so\n> expensive that an extra gettimeofday() wouldn't change much. \n\nAgreed\n\n> Otherwise\n> initializing to NULL should be enough.\n\nJulien, I would prefer an extra GetCurrentTimestamp(). 
So, I've opted\nto use the common unconditional\n\nstats_reset = GetCurrentTimestamp();\n\nfor an entire entry_reset function due to the following:\n\n1. It will be uniform for stats_reset and minmax_stats_reset\n2. As you mentioned, it wouldn't change a much\n3. The most common way to use this function is to reset all statements\nmeaning that GetCurrentTimestamp() will be called anyway to update the\nvalue of stats_reset field in pg_stat_statements_info view\n4. Actually I would like that pg_stat_statements_reset function was\nable to return the value of stats_reset as its result. This could give\nto the sampling solutions the ability to check if the last reset (of\nany type) was performed by this solution or any other reset was\nperformed by someone else. It seems valuable to me, but it changes the\nresult type of the pg_stat_statements_reset() function, so I don't know\nif we can do that right now.\n\nv8 attached\n--\nregards, Andrei", "msg_date": "Fri, 01 Apr 2022 22:47:02 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn 2022-04-01 22:47:02 +0300, Andrei Zubkov wrote:\n> +\t\tentry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);\n> +\n> +\t\tif (entry) {\n> +\t\t\t/* Found */\n> +\t\t\tif (minmax_only) {\n> +\t\t\t\t/* When requested reset only min/max statistics of an entry */\n> +\t\t\t\tentry_counters = &entry->counters;\n> +\t\t\t\tfor (int kind = 0; kind < PGSS_NUMKIND; kind++)\n> +\t\t\t\t{\n> +\t\t\t\t\tentry_counters->max_time[kind] = 0;\n> +\t\t\t\t\tentry_counters->min_time[kind] = 0;\n> +\t\t\t\t}\n> +\t\t\t\tentry->minmax_stats_since = stats_reset;\n> +\t\t\t}\n> +\t\t\telse\n> +\t\t\t{\n> +\t\t\t\t/* Remove the key otherwise */\n> +\t\t\t\thash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);\n> +\t\t\t\tnum_remove++;\n> +\t\t\t}\n> +\t\t}\n\nIt seems decidedly not great to have four 
copies of this code. It was already\nnot great before, but this patch makes the duplicated section go from four\nlines to 20 or so.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 1 Apr 2022 13:01:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Thu, Mar 31, 2022 at 01:06:10PM +0300, Andrei Zubkov wrote:\n>\n> On Wed, 2022-03-30 at 17:31 +0800, Julien Rouhaud wrote:\n> > Feature wise, I'm happy with the patch.� I just have a few comments.\n> >\n> > Tests:\n> >\n> > - it's missing some test in sql/oldextversions.sql to validate that the\n> > code\n> > � works with the extension in version 1.9\n>\n> Yes, I've just added some tests there, but it seems they are not quite\n> suficient. Maybe we should try to do some queries to views and\n> functions in old versions? A least when new C function version\n> appears...\n\nI'm not sure if that's really helpful. If you have new C functions and old\nSQL-version, you won't be able to reach them anyway. Similarly, if you have\nthe new SQL but the old .so (which we can't test), it will just fail as the\nsymbol doesn't exist. The real problem that has to be explicitly handled by\nthe C code is different signatures for C functions.\n>\n> During tests developing I've noted that current test of\n> pg_stat_statements_info view actually tests only view access. However\n> we can test at least functionality of stats_reset field like this:\n>\n> SELECT now() AS ref_ts \\gset\n> SELECT dealloc, stats_reset >= :'ref_ts' FROM pg_stat_statements_info;\n> SELECT pg_stat_statements_reset();\n> SELECT dealloc, stats_reset >= :'ref_ts' FROM pg_stat_statements_info;\n>\n> Does it seems reasonable?\n\nIt looks reasonable, especially if the patch adds a new mode for the reset\nfunction.\n\n> > - the last test removed the minmax_plan_zero field, why?\n>\n> My thaught was as follows... 
Reexecution of the same query will\n> definitely cause execution. However, most likely it wouldn't be\n> planned, but if it would (maybe this is possible, or maybe it will be\n> possible in the future in some cases) the test shouldn't fail. Checking\n> of only execution stats seems enough to me - in most cases we can't\n> check planning stats with such test anyway.\n> What do you think about it?\n\nAh I see. I guess we could set plan_cache_mode to force_generic_plan to make\nsure we go through planning. But otherwise just adding a comment saying that\nthe test has to be compatible with different plan caching approaches would be\nfine with me.\n\nThanks for the work on merging the functions! I will reply to the other parts\nof the thread where some discussion started.\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:10:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Fri, Apr 01, 2022 at 10:47:02PM +0300, Andrei Zubkov wrote:\n>\n> On Fri, 2022-04-01 at 11:38 -0400, Greg Stark wrote:\n> > [13:19:51.544] pg_stat_statements.c: In function ‘entry_reset’:\n> > [13:19:51.544] pg_stat_statements.c:2598:32: error:\n> > ‘minmax_stats_reset’ may be used uninitialized in this function\n> > [-Werror=maybe-uninitialized]\n> > [13:19:51.544] 2598 | entry->minmax_stats_since = minmax_stats_reset;\n> > [13:19:51.544] | ~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~\n> >\n>\n> I was afraid of such warning can appear..\n>\n> On Sat, 2022-04-02 at 00:13 +0800, Julien Rouhaud wrote:\n> > I guess that pg_stat_statement_reset() is so\n> > expensive that an extra gettimeofday() wouldn't change much. \n>\n> Agreed\n>\n> > Otherwise\n> > initializing to NULL should be enough.\n>\n> Julien, I would prefer an extra GetCurrentTimestamp(). 
So, I've opted\n> to use the common unconditional\n>\n> stats_reset = GetCurrentTimestamp();\n>\n> for an entire entry_reset function due to the following:\n>\n> 1. It will be uniform for stats_reset and minmax_stats_reset\n> 2. As you mentioned, it wouldn't change a much\n> 3. The most common way to use this function is to reset all statements\n> meaning that GetCurrentTimestamp() will be called anyway to update the\n> value of stats_reset field in pg_stat_statements_info view\n> 4. Actually I would like that pg_stat_statements_reset function was\n> able to return the value of stats_reset as its result. This could give\n> to the sampling solutions the ability to check if the last reset (of\n> any type) was performed by this solution or any other reset was\n> performed by someone else. It seems valuable to me, but it changes the\n> result type of the pg_stat_statements_reset() function, so I don't know\n> if we can do that right now.\n\nI'm fine with always getting the current timestamp when calling the function.\n\nI'm not sure about returning the ts. If you need it you could call SELECT\nnow() FROM pg_stat_statements_reset() (or clock_timestamp()). It won't be\nentirely accurate but since the function will have an exclusive lock during the\nwhole execution that shouldn't be a problem. 
Now you're already adding a new\nversion of the C function so I guess that it wouldn't require any additional\neffort so why not.\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:21:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Fri, Apr 01, 2022 at 01:01:53PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-04-01 22:47:02 +0300, Andrei Zubkov wrote:\n> > +\t\tentry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);\n> > +\n> > +\t\tif (entry) {\n> > +\t\t\t/* Found */\n> > +\t\t\tif (minmax_only) {\n> > +\t\t\t\t/* When requested reset only min/max statistics of an entry */\n> > +\t\t\t\tentry_counters = &entry->counters;\n> > +\t\t\t\tfor (int kind = 0; kind < PGSS_NUMKIND; kind++)\n> > +\t\t\t\t{\n> > +\t\t\t\t\tentry_counters->max_time[kind] = 0;\n> > +\t\t\t\t\tentry_counters->min_time[kind] = 0;\n> > +\t\t\t\t}\n> > +\t\t\t\tentry->minmax_stats_since = stats_reset;\n> > +\t\t\t}\n> > +\t\t\telse\n> > +\t\t\t{\n> > +\t\t\t\t/* Remove the key otherwise */\n> > +\t\t\t\thash_search(pgss_hash, &entry->key, HASH_REMOVE, NULL);\n> > +\t\t\t\tnum_remove++;\n> > +\t\t\t}\n> > +\t\t}\n> \n> It seems decidedly not great to have four copies of this code. It was already\n> not great before, but this patch makes the duplicated section go from four\n> lines to 20 or so.\n\n+1\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:24:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Fri, 2022-04-01 at 13:01 -0700, Andres Freund wrote:\n> It seems decidedly not great to have four copies of this code. It was\n> already\n> not great before, but this patch makes the duplicated section go from\n> four\n> lines to 20 or so.\n\nAgreed. 
I've created the single_entry_reset() function wrapping this\ncode. I wonder if it should be declared as inline to speed it up a little.\n\nOn Sat, 2022-04-02 at 15:10 +0800, Julien Rouhaud wrote:\n> > However\n> > we can test at least functionality of stats_reset field like this:\n> > \n> > SELECT now() AS ref_ts \\gset\n> > SELECT dealloc, stats_reset >= :'ref_ts' FROM\n> > pg_stat_statements_info;\n> > SELECT pg_stat_statements_reset();\n> > SELECT dealloc, stats_reset >= :'ref_ts' FROM\n> > pg_stat_statements_info;\n> > \n> > Does it seems reasonable?\n> \n> It looks reasonable, especially if the patch adds a new mode for the\n> reset\n> function.\n\nI've implemented this test.\n\n> > Checking\n> > of only execution stats seems enough to me - in most cases we can't\n> > check planning stats with such test anyway.\n> > What do you think about it?\n> \n> Ah I see. I guess we could set plan_cache_mode to force_generic_plan\n> to make\n> sure we go though planning. But otherwise just adding a comment\n> saying that\n> the test has to be compatible with different plan caching approach\n> would be\n> fine with me.\n\nSetting plan_cache_mode seems a little bit excessive to me. And maybe in the\nfuture some other plan caching strategies will be implemented with\ncorresponding settings. So I've just left a comment there.\n\nOn Sat, 2022-04-02 at 15:21 +0800, Julien Rouhaud wrote:\n> I'm not sure about returning the ts. If you need it you could call\n> SELECT\n> now() FROM pg_stat_statements_reset() (or clock_timestamp()). It\n> won't be\n> entirely accurate but since the function will have an exclusive lock\n> during the\n> whole execution that shouldn't be a problem. 
Now you're already\n> adding a new\n> version of the C function so I guess that it wouldn't require any\n> additional\n> effort so why not.\n\nI think that if we can do it in an accurate way and there are no obvious\nside effects, why not try it...\nChanging the pg_stat_statements_reset function's result caused a\nconsiderable test update. Also, I'm not sure that my description of\nthis feature in the docs is blameless..\n\nAfter all, I'm a little bit in doubt about this feature, so I'm ready\nto roll it back.\n\nv9 attached\n--\nregards, Andrei", "msg_date": "Sat, 02 Apr 2022 13:12:54 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Sat, Apr 02, 2022 at 01:12:54PM +0300, Andrei Zubkov wrote:\n> On Fri, 2022-04-01 at 13:01 -0700, Andres Freund wrote:\n> > It seems decidedly not great to have four copies of this code. It was\n> > already\n> > not great before, but this patch makes the duplicated section go from\n> > four\n> > lines to 20 or so.\n> \n> Agreed. I've created the single_entry_reset() function wrapping this\n> code. I wonder if it should be declared as inline to speedup a little.\n\nMaybe a macro would be better here? I don't know if that's generally ok or\njust old and not-that-great code, but there are other places relying on macros\nwhen a plain function call isn't that convenient (like here returning 0 or 1 as\na hack for incrementing num_remove), for instance in hba.c.\n\n> On Sat, 2022-04-02 at 15:21 +0800, Julien Rouhaud wrote:\n> > I'm not sure about returning the ts. If you need it you could call\n> > SELECT\n> > now() FROM pg_stat_statements_reset() (or clock_timestamp()). It\n> > won't be\n> > entirely accurate but since the function will have an exclusive lock\n> > during the\n> > whole execution that shouldn't be a problem. 
Now you're already\n> > adding a new\n> > version of the C function so I guess that it wouldn't require any\n> > additional\n> > effort so why not.\n> \n> I think that if we can do it in accurate way and there is no obvious\n> side effects, why not to try it...\n> Changing of pg_stat_statements_reset function result caused a\n> confiderable tests update. Also, I'm not sure that my description of\n> this feature in the docs is blameless..\n> \n> After all, I'm a little bit in doubt about this feature, so I'm ready\n> to rollback it.\n\nWell, I personally don't think that I would need it for powa as it's designed\nto have very frequent snapshot. I know you have a different approach in\npg_profile, but I'm not sure it will be that useful for you either?\n\n\n", "msg_date": "Sat, 2 Apr 2022 18:56:53 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Sat, 2022-04-02 at 18:56 +0800, Julien Rouhaud wrote:\n> Maybe a macro would be better here?  I don't know if that's generally\n> ok or\n> just old and not-that-great code, but there are other places relying\n> on macros\n> when a plain function call isn't that convenient (like here returning\n> 0 or 1 as\n> a hack for incrementing num_remove), for instance in hba.c.\n\nYes, it is not very convenient and not looks pretty, so I'll try a\nmacro here soon.\n\n> > I think that if we can do it in accurate way and there is no\n> > obvious\n> > side effects, why not to try it...\n> > Changing of pg_stat_statements_reset function result caused a\n> > confiderable tests update. Also, I'm not sure that my description\n> > of\n> > this feature in the docs is blameless..\n> > \n> > After all, I'm a little bit in doubt about this feature, so I'm\n> > ready\n> > to rollback it.\n> \n> Well, I personally don't think that I would need it for powa as it's\n> designed\n> to have very frequent snapshot. 
 I know you have a different approach\n> in\n> pg_profile, but I'm not sure it will be that useful for you either?\n\nOf course I can do some workaround if the accurate reset time is\nunavailable. I just want to do the whole thing if it doesn't hurt. If\nwe have plenty of timestamps saved now, I think it is a good idea to\nhave them bound to some milestones. At least it is a pretty equal join\ncondition between samples.\nBut if you think we should avoid returning ts here I won't insist on\nthat.\n\n\n\n", "msg_date": "Sat, 02 Apr 2022 14:11:52 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Sat, 2022-04-02 at 14:11 +0300, Andrei Zubkov wrote:\n> On Sat, 2022-04-02 at 18:56 +0800, Julien Rouhaud wrote:\n> > Maybe a macro would be better here?  I don't know if that's\n> > generally\n> > ok or\n> > just old and not-that-great code, but there are other places\n> > relying\n> > on macros\n> > when a plain function call isn't that convenient (like here\n> > returning\n> > 0 or 1 as\n> > a hack for incrementing num_remove), for instance in hba.c.\n> \n> Yes, it is not very convenient and doesn't look pretty, so I'll try a\n> macro here soon.\n\nImplemented SINGLE_ENTRY_RESET as a macro.\nv10 attached\n--\nregards, Andrei", "msg_date": "Sat, 02 Apr 2022 15:02:22 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "The tests for this seem to need adjustments.\n\n[12:41:09.403] test pg_stat_statements ... 
FAILED 180 ms\n\ndiff -U3 /tmp/cirrus-ci-build/contrib/pg_stat_statements/expected/pg_stat_statements.out\n/tmp/cirrus-ci-build/contrib/pg_stat_statements/results/pg_stat_statements.out\n--- /tmp/cirrus-ci-build/contrib/pg_stat_statements/expected/pg_stat_statements.out\n2022-04-02 12:37:42.201823383 +0000\n+++ /tmp/cirrus-ci-build/contrib/pg_stat_statements/results/pg_stat_statements.out\n2022-04-02 12:41:09.219563204 +0000\n@@ -1174,8 +1174,8 @@\n ORDER BY query;\n query | reset_ts_match\n ---------------------------+----------------\n- SELECT $1,$2 AS \"STMTTS2\" | f\n SELECT $1 AS \"STMTTS1\" | t\n+ SELECT $1,$2 AS \"STMTTS2\" | f\n (2 rows)\n\n -- check that minmax reset does not set stats_reset\n\n\nHm. Is this a collation problem?\n\n\n", "msg_date": "Sat, 2 Apr 2022 17:38:35 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> The tests for this seem to need adjustments.\n> [12:41:09.403] test pg_stat_statements ... FAILED 180 ms\n\n> diff -U3 /tmp/cirrus-ci-build/contrib/pg_stat_statements/expected/pg_stat_statements.out\n> /tmp/cirrus-ci-build/contrib/pg_stat_statements/results/pg_stat_statements.out\n> --- /tmp/cirrus-ci-build/contrib/pg_stat_statements/expected/pg_stat_statements.out\n> 2022-04-02 12:37:42.201823383 +0000\n> +++ /tmp/cirrus-ci-build/contrib/pg_stat_statements/results/pg_stat_statements.out\n> 2022-04-02 12:41:09.219563204 +0000\n> @@ -1174,8 +1174,8 @@\n> ORDER BY query;\n> query | reset_ts_match\n> ---------------------------+----------------\n> - SELECT $1,$2 AS \"STMTTS2\" | f\n> SELECT $1 AS \"STMTTS1\" | t\n> + SELECT $1,$2 AS \"STMTTS2\" | f\n> (2 rows)\n\n> -- check that minmax reset does not set stats_reset\n\n> Hm. Is this a collation problem?\n\nYeah, looks like it. 
ORDER BY query COLLATE \"C\" might work better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Apr 2022 20:30:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Greg,\n\nOn Sat, 2022-04-02 at 17:38 -0400, Greg Stark wrote:\n> The tests for this seem to need adjustments.\n> \n> [12:41:09.403] test pg_stat_statements ... FAILED 180 ms\n>             query           | reset_ts_match\n>  ---------------------------+----------------\n> - SELECT $1,$2 AS \"STMTTS2\" | f\n>   SELECT $1 AS \"STMTTS1\"    | t\n> + SELECT $1,$2 AS \"STMTTS2\" | f\n>  (2 rows)\n> \n>  -- check that minmax reset does not set stats_reset\n> \n> \n> Hm. Is this a collation problem?\n\nOf course, thank you! I've forgot to set collation here.\n\nv11 attached\n--\nregards, Andrei", "msg_date": "Sun, 03 Apr 2022 07:32:47 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Sun, Apr 03, 2022 at 07:32:47AM +0300, Andrei Zubkov wrote:\n> v11 attached\n\n+ /* When requested reset only min/max statistics of an entry */ \\\n+ entry_counters = &entry->counters; \\\n+ for (int kind = 0; kind < PGSS_NUMKIND; kind++) \\\n+ { \\\n+ entry_counters->max_time[kind] = 0; \\\n+ entry_counters->min_time[kind] = 0; \\\n+ } \\\n[...]\n+static TimestampTz\n+entry_reset(Oid userid, Oid dbid, uint64 queryid, bool minmax_only)\n {\n HASH_SEQ_STATUS hash_seq;\n pgssEntry *entry;\n+ Counters *entry_counters;\n\nDo we really need an extra variable? 
Why not simply using\nentry->counters.xxx_time[kind]?\n\nAlso, I think it's better to make the macro more like function looking, so\nSINGLE_ENTRY_RESET().\n\nindex f2e822acd3..c2af29866b 100644\n--- a/contrib/pg_stat_statements/sql/oldextversions.sql\n+++ b/contrib/pg_stat_statements/sql/oldextversions.sql\n@@ -36,4 +36,12 @@ AlTER EXTENSION pg_stat_statements UPDATE TO '1.8';\n \\d pg_stat_statements\n SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n\n+ALTER EXTENSION pg_stat_statements UPDATE TO '1.9';\n+\\d pg_stat_statements\n+\\d pg_stat_statements_info\n+SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n\nI don't think this bring any useful coverage.\n\n Minimum time spent planning the statement, in milliseconds\n (if <varname>pg_stat_statements.track_planning</varname> is enabled,\n- otherwise zero)\n+ otherwise zero), this field will contain zero until this statement\n+ is planned fist time after reset performed by the\n+ <function>pg_stat_statements_reset</function> function with the\n+ <structfield>minmax_only</structfield> parameter set to <literal>true</literal>\n\nI think this need some rewording (and s/fist/first). 
Maybe:\n\nMinimum time spent planning the statement, in milliseconds.\n\nThis field will be zero if <varname>pg_stat_statements.track_planning</varname>\nis disabled, or if the counter has been reset using the the\n<function>pg_stat_statements_reset</function> function with the\n<structfield>minmax_only</structfield> parameter set to <literal>true</literal>\nand never been planned since.\n\n <primary>pg_stat_statements_reset</primary>\n </indexterm>\n@@ -589,6 +623,20 @@\n If all statistics in the <filename>pg_stat_statements</filename>\n view are discarded, it will also reset the statistics in the\n <structname>pg_stat_statements_info</structname> view.\n+ When <structfield>minmax_only</structfield> is <literal>true</literal> only the\n+ values of minimun and maximum execution and planning time will be reset (i.e.\n\nNitpicking: I would say planning and execution time, as the fields are in this\norder in the view/function.\n\n\n", "msg_date": "Sun, 3 Apr 2022 15:07:25 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien,\n\nOn Sun, 2022-04-03 at 15:07 +0800, Julien Rouhaud wrote:\n> +static TimestampTz\n> +entry_reset(Oid userid, Oid dbid, uint64 queryid, bool minmax_only)\n>  {\n>     HASH_SEQ_STATUS hash_seq;\n>     pgssEntry  *entry;\n> +   Counters   *entry_counters;\n> \n> Do we really need an extra variable?  
Why not simply using\n> entry->counters.xxx_time[kind]?\n> \n> Also, I think it's better to make the macro more like function\n> looking, so\n> SINGLE_ENTRY_RESET().\n\nAgreed with both, I'll fix it.\n\n> \n> index f2e822acd3..c2af29866b 100644\n> --- a/contrib/pg_stat_statements/sql/oldextversions.sql\n> +++ b/contrib/pg_stat_statements/sql/oldextversions.sql\n> @@ -36,4 +36,12 @@ AlTER EXTENSION pg_stat_statements UPDATE TO\n> '1.8';\n>  \\d pg_stat_statements\n>  SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n> \n> +ALTER EXTENSION pg_stat_statements UPDATE TO '1.9';\n> +\\d pg_stat_statements\n> +\\d pg_stat_statements_info\n> +SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n> \n> I don't think this bring any useful coverage.\n\nI feel the same, but I've done it like the previous tests (versions 1.7 and\n1.8). Am I missing something here? Do you think we should remove these\ntests completely?\n\n> \n> I think this need some rewording (and s/fist/first).  
Maybe:\n> \n> Minimum time spent planning the statement, in milliseconds.\n> \n> This field will be zero if\n> <varname>pg_stat_statements.track_planning</varname>\n> is disabled, or if the counter has been reset using the the\n> <function>pg_stat_statements_reset</function> function with the\n> <structfield>minmax_only</structfield> parameter set to\n> <literal>true</literal>\n> and never been planned since.\n\nThanks a lot!\n\n> \n>        <primary>pg_stat_statements_reset</primary>\n>       </indexterm>\n> @@ -589,6 +623,20 @@\n>        If all statistics in the\n> <filename>pg_stat_statements</filename>\n>        view are discarded, it will also reset the statistics in the\n>        <structname>pg_stat_statements_info</structname> view.\n> +      When <structfield>minmax_only</structfield> is\n> <literal>true</literal> only the\n> +      values of minimun and maximum execution and planning time will\n> be reset (i.e.\n> \n> Nitpicking: I would say planning and execution time, as the fields\n> are in this\n> order in the view/function.\n\nAgreed.\n--\nregards, Andrei\n\n\n\n", "msg_date": "Sun, 03 Apr 2022 11:34:05 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "I've attached v12 of a patch. 
The only unsolved issue now is the\nfollowing:\n\nOn Sun, 2022-04-03 at 15:07 +0800, Julien Rouhaud wrote:\n> +ALTER EXTENSION pg_stat_statements UPDATE TO '1.9';\n> +\\d pg_stat_statements\n> +\\d pg_stat_statements_info\n> +SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n> \n> I don't think this bring any useful coverage.\n\nIt is a little bit unclear to me what is the best solution here.\n--\nregards, Andrei", "msg_date": "Sun, 03 Apr 2022 12:29:43 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Sun, Apr 03, 2022 at 12:29:43PM +0300, Andrei Zubkov wrote:\n> I've attached v12 of a patch. The only unsolved issue now is the\n> following:\n> \n> On Sun, 2022-04-03 at 15:07 +0800, Julien Rouhaud wrote:\n> > +ALTER EXTENSION pg_stat_statements UPDATE TO '1.9';\n> > +\\d pg_stat_statements\n> > +\\d pg_stat_statements_info\n> > +SELECT pg_get_functiondef('pg_stat_statements_reset'::regproc);\n> > \n> > I don't think this bring any useful coverage.\n> \n> It is a little bit unclear to me what is the best solution here.\n\nSorry, I missed that there were some similar tests already for previous\nversions. This was probably discussed and agreed before, so +1 to be\nconsistent with the new versions.\n\nThe patch looks good to me, although I will do a full review to make sure I\ndidn't miss anything.\n\nJust another minor nitpicking after a quick look:\n\n+ This field will be zero if ...\n[...]\n+ this field will contain zero until this statement ...\n\nThe wording should be consistent, so either \"will be zero\" or \"will contain\nzero\" everywhere. 
I'm personally fine with any, but maybe a native English speaker\nwill think one is better.\n\n\n", "msg_date": "Sun, 3 Apr 2022 17:56:16 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Julien,\n\nOn Sun, 2022-04-03 at 17:56 +0800, Julien Rouhaud wrote:\n> Just another minor nitpicking after a quick look:\n> \n> + This field will be zero if ...\n> [...]\n> + this field will contain zero until this statement ...\n> \n> The wording should be consistent, so either \"will be zero\" or \"will\n> contain\n> zero\" everywhere.  I'm personally fine with any, but maybe a native\n> English speaker\n> will think one is better.\nAgreed.\n\nSearching the docs I've found out that \"will contain\" is usually used with\nthe description of contained structure rather than a simple value. So\nI'll use a \"will be zero\" in the next version after your review.\n--\nregards, Andrei\n\n\n\n", "msg_date": "Sun, 03 Apr 2022 13:24:40 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Sun, Apr 03, 2022 at 01:24:40PM +0300, Andrei Zubkov wrote:\n>\n> On Sun, 2022-04-03 at 17:56 +0800, Julien Rouhaud wrote:\n> > Just another minor nitpicking after a quick look:\n> >\n> > + This field will be zero if ...\n> > [...]\n> > + this field will contain zero until this statement ...\n> >\n> > The wording should be consistent, so either \"will be zero\" or \"will\n> > contain\n> > zero\" everywhere.  I'm personally fine with any, but maybe a native\n> > English speaker\n> > will think one is better.\n> Agreed.\n>\n> Searching the docs I've found out that \"will contain\" is usually used with\n> the description of contained structure rather than a simple value. 
So\n> I'll use a \"will be zero\" in the next version after your review.\n\nOk!\n\nSo last round of review.\n\n- the commit message:\n\nIt should probably mention the minmax_stats_since at the beginning. Also, both\nthe view and the function contain those new fields.\n\nMinor rephrasing:\n\ns/evicted and returned back/evicted and stored again/?\ns/with except of all/with the exception of all/\ns/is now returns/now returns/\n\n- code:\n\n+#define SINGLE_ENTRY_RESET() \\\n+if (entry) { \\\n[...]\n\nIt's not great to rely on caller context too much. I think it would be better\nto pass at least the entry as a parameter (maybe e?) to the macro for more\nclarity. I'm fine with keeping minmax_only, stats_reset and num_remove as is.\n\nApart from that I think this is ready!\n\n\n", "msg_date": "Mon, 4 Apr 2022 10:31:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Julien,\n\nThank you very much for your work on this patch!\n\nOn Mon, 2022-04-04 at 10:31 +0800, Julien Rouhaud wrote:\n> - the commit message:\n> \n> It should probably mention the minmax_stats_since at the beginning. \n> Also, both\n> the view and the function contain those new fields.\n> \n> Minor rephrasing:\n> \n> s/evicted and returned back/evicted and stored again/?\n> s/with except of all/with the exception of all/\n> s/is now returns/now returns/\n\nAgreed, commit message updated.\n\n> - code:\n> \n> +#define SINGLE_ENTRY_RESET() \\\n> +if (entry) { \\\n> [...]\n> \n> It's not great to rely on caller context too much.\n\nYes, I was thinking about it. But using 4 parameters seemed strange to\nme.\n\n>   I think it would be better\n> to pass at least the entry as a parameter (maybe e?) to the macro for\n> more\n> clarity. 
 I'm fine with keeping minmax_only, stats_reset and\n> num_remove as is.\n\nUsing an entry as a macro parameter looks good, I'm fine with \"e\". \n\n> Apart from that I think this is ready!\n\nv13 attached\n--\nregards, Andrei", "msg_date": "Mon, 04 Apr 2022 09:59:04 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nOn Mon, Apr 04, 2022 at 09:59:04AM +0300, Andrei Zubkov wrote:\n> > Minor rephrasing:\n> >\n> > s/evicted and returned back/evicted and stored again/?\n> > s/with except of all/with the exception of all/\n> > s/is now returns/now returns/\n>\n> Agreed, commit message updated.\n>\n> > - code:\n> >\n> > +#define SINGLE_ENTRY_RESET() \\\n> > +if (entry) { \\\n> > [...]\n> >\n> > It's not great to rely on caller context too much.\n>\n> Yes, I was thinking about it. But using 4 parameters seemed strange to\n> me.\n>\n> >   I think it would be better\n> > to pass at least the entry as a parameter (maybe e?) to the macro for\n> > more\n> > clarity.  I'm fine with keeping minmax_only, stats_reset and\n> > num_remove as is.\n>\n> Using an entry as a macro parameter looks good, I'm fine with \"e\".\n>\n> > Apart from that I think this is ready!\n>\n> v13 attached\n\nThanks a lot! 
I'm happy with this version, so I'm marking it as Ready for\nCommitter.\n\n\n", "msg_date": "Mon, 4 Apr 2022 16:08:34 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi,\n\nI've rebased this patch so that it can be applied after 57d6aea00fc.\n\nv14 attached\n--\nregards, Andrei", "msg_date": "Fri, 08 Apr 2022 23:25:18 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nI took a quick look at this patch, to see if there's something we\nwant/can get into v16. The last version was submitted about 9 months\nago, and it doesn't apply cleanly anymore, but the bitrot is fairly\nminor. Not sure there's still interest, though.\n\nAs for the patch, I wonder if it's unnecessarily complex. It adds *two*\ntimestamps for each pg_stat_statements entry - one for reset of the\nwhole entry, one for reset of \"min/max\" times only.\n\nI can see why the first timestamp (essentially tracking creating of the\nentry) is useful. I'd probably call it \"created_at\" or something like\nthat, but that's a minor detail. Or maybe stats_reset, which is what we\nuse in pgstat?\n\nBut is the second timestamp for the min/max fields really useful? AFAIK\nto perform analysis, people take regular pg_stat_statements snapshots,\nwhich works fine for counters (calculating deltas) but not for gauges\n(which need a reset, to track fresh values). 
But people analyzing this\nare already resetting the whole entry, and so the snapshots already are\ntracking deltas.\n\nSo I'm not convinced we actually need the second timestamp.\n\nA couple more comments:\n\n1) I'm not sure why the patch is adding tests of permissions on the\npg_stat_statements_reset function?\n\n2) If we want the second timestamp, shouldn't it also cover resets of\nthe mean values, not just min/max?\n\n3) I don't understand why the patch is adding \"IS NOT NULL AS t\" to\nvarious places in the regression tests.\n\n4) I rather dislike the \"minmax\" naming, because that's often used in\nother contexts (for BRIN indexes), and as I mentioned maybe it should\nalso cover the \"mean\" fields.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 18 Jan 2023 17:29:43 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Tomas,\n\nOn Wed, 2023-01-18 at 17:29 +0100, Tomas Vondra wrote:\n> I took a quick look at this patch, to see if there's something we\n> want/can get into v16. The last version was submitted about 9 months\n> ago, and it doesn't apply cleanly anymore, but the bitrot is fairly\n> minor. Not sure there's still interest, though.\n\nThank you for your attention to this patch!\n\nI'm still very interested in this patch. And I think I'll try to rebase\nthis patch during a week or two if it seems possible to get it in 16..\n> \n> I'd probably call it \"created_at\" or something like\n> that, but that's a minor detail. Or maybe stats_reset, which is what\n> we\n> use in pgstat?\n\nYes, there is some naming issue. My thought was the following:\n - \"stats_reset\" is not quite correct here, because the statement entry\nmoment is definitely not a reset. The field is named just as it means -\nthis is the time of the moment from which statistics are collected for this\nparticular entry.\n - \"created_at\" perfectly matches the purpose of the field, but seems\nnot as self-explanatory to me.\n\n> \n> But is the second timestamp for the min/max fields really useful?\n> AFAIK\n> to perform analysis, people take regular pg_stat_statements\n> snapshots,\n> which works fine for counters (calculating deltas) but not for gauges\n> (which need a reset, to track fresh values). But people analyzing\n> this\n> are already resetting the whole entry, and so the snapshots already\n> are\n> tracking deltas.\n>\n> So I'm not convinced we actually need the second timestamp.\n\nThe main purpose of the patch is to give collecting\nsolutions a way to avoid resetting pgss at all. Just like it happens for\nthe pg_stat_ views. The only real need for a reset is that we can't be\nsure that an observed statement was not evicted and re-created since the last\nsample. Right now we can only do a whole reset on each sample and see\nhow many entries are in the pgss hashtable on the next sample - how\nclose this value is to the max. If there is plenty of space in the hashtable we\ncan hope that there were no evictions since the last sample. However, a\nreset could be performed by someone else and we would know nothing about\nit.\nHaving a timestamp in the stats_since field, we know for sure how long the\nstatistics of a statement have been tracked. That said, a sampling solution can\ntotally avoid pgss resets. Avoiding such resets means avoiding\ninterference between monitoring solutions.\nBut if no resets are done at all we can't track min/max values, because\nthey still need a reset and we can do nothing about such resets - they\nare necessary. However, I still want to know when a min/max reset was\nperformed. This will help to detect possible interference from such\nresets.\n> \n> \n> A couple more comments:\n> \n> 1) I'm not sure why the patch is adding tests of permissions on the\n> pg_stat_statements_reset function?\n> \n> 2) If we want the second timestamp, shouldn't it also cover resets of\n> the mean values, not just min/max?\n\nI think that mean values shouldn't be a target for a partial reset\nbecause the mean values can be easily reconstructed by the\nsampling solution without a reset.\n\n> \n> 3) I don't understand why the patch is adding \"IS NOT NULL AS t\" to\n> various places in the regression tests.\n\nMost of the tests were copied from the previous version. I'll recheck\nthem.\n\n> \n> 4) I rather dislike the \"minmax\" naming, because that's often used in\n> other contexts (for BRIN indexes), and as I mentioned maybe it should\n> also cover the \"mean\" fields.\n\nAgreed, but I couldn't make it better. Other versions seemed worse to\nme...\n> \n> \nRegards, Andrei Zubkov\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 22:04:56 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nI've updated this patch for the current master. 
Also, I have some\nadditional explanations.\n\nOn Wed, 2023-01-18 at 17:29 +0100, Tomas Vondra wrote:\n> 1) I'm not sure why the patch is adding tests of permissions on the\n> pg_stat_statements_reset function?\n\nI've fixed that\n\n> \n> 2) If we want the second timestamp, shouldn't it also cover resets of\n> the mean values, not just min/max?\n\nI think that mean values are not targets for auxiliary resets because\nany sampling solution can easily calculate the mean values between\nsamples without a reset.\n\n> \n> 3) I don't understand why the patch is adding \"IS NOT NULL AS t\" to\n> various places in the regression tests.\n\nThis change is necessary in the current version because the\npg_stat_statements_reset() function will return a timestamp of a reset,\nneeded for sampling solutions to detect resets performed by someone\nelse.\n\n\nRegards\n-- \nAndrei Zubkov", "msg_date": "Wed, 25 Jan 2023 18:46:40 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nThe final version of this patch should fix the meson build and tests.\n-- \nAndrei Zubkov", "msg_date": "Thu, 26 Jan 2023 14:02:43 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi!\n\nI've attached a new version of a patch - rebase to the current 
master.\n\nThe CFBot (http://cfbot.cputube.org/) doesn't seem to like this. It\nlooks like all the Meson builds are failing, perhaps there's something\nparticular about the test environment that is either not set up right\nor is exposing a bug?\n\nPlease check if this is a real failure or a cfbot failure.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 14:24:41 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Gregory,\n\nOn Wed, 2023-03-01 at 14:24 -0500, Gregory Stark (as CFM) wrote:\n> The CFBot (http://cfbot.cputube.org/) doesn't seem to like this. It\n> looks like all the Meson builds are failing, perhaps there's\n> something\n> particular about the test environment that is either not set up right\n> or is exposing a bug?\n\nThank you, I've missed it.\n> \n> Please check if this is a real failure or a cfbot failure.\n> \nIt is my failure. Just forgot to update meson.build\nI think CFBot should be happy now.\n\nRegards,\n-- \nAndrei Zubkov", "msg_date": "Wed, 01 Mar 2023 23:15:26 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "I'm sorry, It seems this is failing again? It's Makefile and\nmeson.build again :(\n\nThough there are also\n\npatching file contrib/pg_stat_statements/sql/oldextversions.sql\ncan't find file to patch at input line 1833\n\n\nand\n\n|--- a/contrib/pg_stat_statements/sql/pg_stat_statements.sql\n|+++ b/contrib/pg_stat_statements/sql/pg_stat_statements.sql\n--------------------------\nNo file to patch. 
Skipping patch.\n\n\n\n\n--\nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 6 Mar 2023 15:04:35 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Gregory,\n\n> patching file contrib/pg_stat_statements/sql/oldextversions.sql\n> can't find file to patch at input line 1833\n> \n> \n> and\n> \n> > --- a/contrib/pg_stat_statements/sql/pg_stat_statements.sql\n> > +++ b/contrib/pg_stat_statements/sql/pg_stat_statements.sql\n> --------------------------\n> No file to patch.  Skipping patch.\n> \nThank you for your attention.\n\nYes, it is due to parallel work on \"Normalization of utility queries in\npg_stat_statements\" patch\n(https://postgr.es/m/Y/7Y9U/y/keAW3qH@paquier.xyz)\n\nIt seems I've found something strange in new test files - I've\nmentioned this in a thread of a patch. Once there will be any solution\nI'll do a rebase again as soon as possible.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n", "msg_date": "Mon, 06 Mar 2023 23:16:52 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nI've done a rebase of a patch to the current master.\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Sat, 11 Mar 2023 14:49:50 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Sat, Mar 11, 2023 at 02:49:50PM +0300, Andrei Zubkov wrote:\n> Hi,\n>\n> I've done a rebase of a patch to the current master.\n\n+/* First we have to remove them from the extension */\n+ALTER EXTENSION pg_stat_statements DROP 
VIEW pg_stat_statements;\n+ALTER EXTENSION pg_stat_statements DROP FUNCTION pg_stat_statements(boolean);\n+ALTER EXTENSION pg_stat_statements DROP FUNCTION\n+ pg_stat_statements_reset(Oid, Oid, bigint);\n\nThe upgrade script of an extension is launched by the backend in the\ncontext of an extension, so these three queries should not be needed,\nas far as I recall.\n\n-SELECT pg_stat_statements_reset();\n- pg_stat_statements_reset\n---------------------------\n-\n+SELECT pg_stat_statements_reset() IS NOT NULL AS t;\n+ t\n+---\n+ t\n (1 row)\n\nWouldn't it be better to do this kind of refactoring in its own patch\nto make the follow-up changes more readable? This function is changed\nto return a timestamp rather than void, but IS NOT NULL applied on the\nexisting queries would also return true. This would clean up quite a\nfew diffs in the main patch..\n--\nMichael", "msg_date": "Thu, 16 Mar 2023 16:13:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Michael,\n\nThank you for your attention.\n\nOn Thu, 2023-03-16 at 16:13 +0900, Michael Paquier wrote:\n> +/* First we have to remove them from the extension */\n> +ALTER EXTENSION pg_stat_statements DROP VIEW pg_stat_statements;\n> +ALTER EXTENSION pg_stat_statements DROP FUNCTION\n> pg_stat_statements(boolean);\n> +ALTER EXTENSION pg_stat_statements DROP FUNCTION\n> +  pg_stat_statements_reset(Oid, Oid, bigint);\n> \n> The upgrade script of an extension is launched by the backend in the\n> context of an extension, so these three queries should not be needed,\n> as far as I recall.\n\nAgreed. I've done it as it was in previous versions. But I'm sure those\nare unnecessary.\n\n> Wouldn't it be better to do this kind of refactoring in its own patch\n> to make the follow-up changes more readable?  
This function is\n> changed\n> to return a timestamp rather than void, but IS NOT NULL applied on\n> the\n> existing queries would also return true.  This would clean up quite a\n> few diffs in the main patch..\nSplitting this commit seems reasonable to me.\n\nNew version is attached.", "msg_date": "Thu, 16 Mar 2023 14:02:51 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "A little comment fix in update script of a patch\n-- \nAndrei Zubkov", "msg_date": "Thu, 16 Mar 2023 15:39:24 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hello!\n\nThe documentation still describes the function pg_stat_statements_reset like this\n\n> By default, this function can only be executed by superusers.\n\nBut unfortunately, this part was lost somewhere.\n\n-- Don't want this to be available to non-superusers.\nREVOKE ALL ON FUNCTION pg_stat_statements_reset(Oid, Oid, bigint, boolean) FROM PUBLIC;\n\nshould be added to the upgrade script\n\nAlso, shouldn't we first do:\n\n/* First we have to remove them from the extension */\nALTER EXTENSION pg_stat_statements DROP VIEW ..\nALTER EXTENSION pg_stat_statements DROP FUNCTION ..\n\nlike in previous upgrade scripts?\n\n> + Time at which min/max statistics gathering started for this\n> + statement\n\nI think it would be better to explicitly mention in the documentation all 4 fields for which minmax_stats_since displays the time.\n\nregards, Sergei\n\n\n", "msg_date": "Tue, 21 Mar 2023 23:18:35 +0300", "msg_from": "Sergei Kornilov <sk@zsrv.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Sergei!\n\nThank you for your review.\n\nOn Tue, 2023-03-21 at 23:18 +0300, Sergei 
Kornilov wrote:\n> -- Don't want this to be available to non-superusers.\n> REVOKE ALL ON FUNCTION pg_stat_statements_reset(Oid, Oid, bigint,\n> boolean) FROM PUBLIC;\n> \n> should be added to the upgrade script\n\nIndeed.\n\n> Also, shouldn't we first do:\n> \n> /* First we have to remove them from the extension */\n> ALTER EXTENSION pg_stat_statements DROP VIEW ..\n> ALTER EXTENSION pg_stat_statements DROP FUNCTION ..\n> \n> like in previous upgrade scripts?\n\nIt was discussed few messages earlier in this thread. We've decided\nthat those are unnecessary in upgrade script.\n\n> > +       Time at which min/max statistics gathering started for this\n> > +       statement\n> \n> I think it would be better to explicitly mention in the documentation\n> all 4 fields for which minmax_stats_since displays the time.\n\nAgreed.\n\nNew version is attached.\n\nregards, Andrei", "msg_date": "Wed, 22 Mar 2023 11:17:00 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "> On 22 Mar 2023, at 09:17, Andrei Zubkov <zubkov@moonset.ru> wrote:\n\n> New version is attached.\n\nThis patch is marked RfC but didn't get reviewed/committed during this CF so\nI'm moving it to the next, the patch no longer applies though so please submit\nan updated version.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 10:13:32 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi hackers,\n\nNew version 23 attached. It contains rebase to the current master.\nNoted that v1.11 adds new fields to the pg_stat_sstatements view, but\nleaves the PGSS_FILE_HEADER constant unchanged. 
It this correct?\n\n-- \nAndrei Zubkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Thu, 19 Oct 2023 15:40:24 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "Hi,\n\nDuring last moving to the current commitfest this patch have lost its\nreviewers list. With respect to reviewers contribution in this patch, I\nthink reviewers list should be fixed.\n\nregards,\n\nAndrei Zubkov\nPostgres Professional\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 10:58:48 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 24.10.23 09:58, Andrei Zubkov wrote:\n> During last moving to the current commitfest this patch have lost its\n> reviewers list. With respect to reviewers contribution in this patch, I\n> think reviewers list should be fixed.\n\nI don't think it's the purpose of the commitfest app to track how *has* \nreviewed a patch. The purpose is to plan and allocate *current* work. \nIf we keep a bunch of reviewers listed on a patch who are not actually \nreviewing the patch, then that effectively blocks new reviewers from \nsigning up and the patch will not make progress.\n\nPast reviewers should of course be listed in the commit message, the \nrelease notes, etc. 
as appropriate.\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 13:54:28 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Tue, Oct 24, 2023 at 6:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> On 24.10.23 09:58, Andrei Zubkov wrote:\n> > During last moving to the current commitfest this patch have lost its\n> > reviewers list. With respect to reviewers contribution in this patch, I\n> > think reviewers list should be fixed.\n>\n> I don't think it's the purpose of the commitfest app to track how *has*\n> reviewed a patch. The purpose is to plan and allocate *current* work.\n> If we keep a bunch of reviewers listed on a patch who are not actually\n> reviewing the patch, then that effectively blocks new reviewers from\n> signing up and the patch will not make progress.\n>\n> Past reviewers should of course be listed in the commit message, the\n> release notes, etc. as appropriate.\n\nReally? Last time this topic showed up at least one committer said\nthat they tend to believe the CF app more than digging the thread [1],\nand some other hackers mentioned other usage for being kept in the\nreviewer list. Since no progress has been made on the CF app since\nI'm not sure it's the best idea to drop reviewer names from patch\nentries generally.\n\n[1] https://www.postgresql.org/message-id/552155.1648737431@sss.pgh.pa.us\n\n\n", "msg_date": "Tue, 24 Oct 2023 19:40:22 +0700", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On 24.10.23 14:40, Julien Rouhaud wrote:\n> On Tue, Oct 24, 2023 at 6:57 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> On 24.10.23 09:58, Andrei Zubkov wrote:\n>>> During last moving to the current commitfest this patch have lost its\n>>> reviewers list. 
With respect to reviewers contribution in this patch, I\n>>> think reviewers list should be fixed.\n>>\n>> I don't think it's the purpose of the commitfest app to track how *has*\n>> reviewed a patch. The purpose is to plan and allocate *current* work.\n>> If we keep a bunch of reviewers listed on a patch who are not actually\n>> reviewing the patch, then that effectively blocks new reviewers from\n>> signing up and the patch will not make progress.\n>>\n>> Past reviewers should of course be listed in the commit message, the\n>> release notes, etc. as appropriate.\n> \n> Really? Last time this topic showed up at least one committer said\n> that they tend to believe the CF app more than digging the thread [1],\n> and some other hackers mentioned other usage for being kept in the\n> reviewer list. Since no progress has been made on the CF app since\n> I'm not sure it's the best idea to drop reviewer names from patch\n> entries generally.\n\nThere is a conflict between the two purposes. But it is clearly the \ncase that reviewers will more likely pick up patches that have no \nreviewers assigned. So if you keep stale reviewer entries around, then \na patch that stays around for a while will never get reviewed again. I \nthink this is a significant problem at the moment, and I made it part of \nmy mission during the last commitfest to clean it up. If people want to \nput the stale reviewer entries back in, that is possible, but I would \ncaution against that, because that would just self-sabotage those patches.\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 17:03:04 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On 19/10/2023 19:40, Andrei Zubkov wrote:\n> Hi hackers,\n> \n> New version 23 attached. It contains rebase to the current master.\n\nI discovered the patch and parameters you've proposed. 
In my opinion, \nthe stats_since parameter adds valuable information and should \ndefinitely be included in the stats data because the statement can be \nnoteless removed from the list and inserted again.\nBut minmax_stats_since and changes in the UI of the reset routine look \nlike syntactic sugar here.\nI can't convince myself that it is really needed. Do you have some set \nof cases that can enforce the changes proposed? Maybe we should \nintensively work on adding the 'stats_since' parameter and continue the \ndiscussion of the necessity of another one?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Wed, 25 Oct 2023 13:59:09 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hi Andrei,\n\nOn Wed, 2023-10-25 at 13:59 +0700, Andrei Lepikhov wrote:\n> But minmax_stats_since and changes in the UI of the reset routine\n> look like syntactic sugar here.\n> I can't convince myself that it is really needed. Do you have some\n> set of cases that can enforce the changes proposed?\n\nYes, it looks strange, but it is needed in some way.\nThe main purpose of this patch is to provide means for sampling\nsolutions for collecting statistics data in series of samples. The\nfirst goal here - is to be sure that the statement was not evicted and\ncome back between samples (especially between rare samples). This is\nthe target of the stats_since field. The second goal - is the ability\nof getting all statistic increments for the interval between samples.\nAll statistics provided by pg_stat_statements with except of min/max\nvalues can be calculated for the interval since the last sample knowing\nthe values in the last and current samples. We need a way to get\nmin/max values too. This is achieved by min/max reset performed by the\nsampling solution. 
The minmax_stats_since field is here for the same\npurpose - the sampling solution is need to be sure that no one have\ndone a min/max reset between samples. And if such reset was performed\nat least we know when it was performed.\n\nregards,\nAndrei Zubkov\nPostgres Professional\n\n\n", "msg_date": "Wed, 25 Oct 2023 16:00:23 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 19.10.2023 15:40, Andrei Zubkov wrote:\n> Hi hackers,\n>\n> New version 23 attached. It contains rebase to the current master.\n> Noted that v1.11 adds new fields to the pg_stat_sstatements view, but\n> leaves the PGSS_FILE_HEADER constant unchanged. It this correct?\nHi! Thank you for your work on the subject.\n\n1. I didn't understand why we first try to find pgssEntry with a false \ntop_level value, and later find it with a true top_level value.\n\n/*\n  * Remove the entry if it exists, starting with the non-top-level entry.\n  */\n*key.toplevel = false;*\nentry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);\n\nSINGLE_ENTRY_RESET(entry);\n\n*/* Also reset the top-level entry if it exists. */\nkey.toplevel = true;*\nentry = (pgssEntry *) hash_search(pgss_hash, &key, HASH_FIND, NULL);\n\nSINGLE_ENTRY_RESET(entry);\n\nI looked through this topic and found some explanation in this email \n[0], but I didn't understand it. Can you explain it to me?\n\n2. 
And honestly, I think you should change\n\"Remove the entry if it exists, starting with the non-top-level entry.\" on\n\"Remove the entry or reset min/max time statistic information and the \ntimestamp if it exists, starting with the non-top-level entry.\"\n\nAnd the same with the comment bellow:\n\n\"Also reset the top-level entry if it exists.\"\n\"Also remove the entry or reset min/max time statistic information and \nthe timestamp if it exists.\"\n\nIn my opinion, this is necessary because the minmax_only parameter is \nset by the user, so both ways are possible.\n\n\n0 - \nhttps://www.postgresql.org/message-id/62d16845-e74e-a6f9-9661-022e44f48922%40inbox.ru\n\n-- \nRegards,\nAlena Rybakina", "msg_date": "Wed, 25 Oct 2023 16:25:08 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" },
{ "msg_contents": "Hi Alena,\n\nOn Wed, 2023-10-25 at 16:25 +0300, Alena Rybakina wrote:\n>  Hi! Thank you for your work on the subject.\n> 1. I didn't understand why we first try to find pgssEntry with a\n> false top_level value, and later find it with a true top_level value.\n\nIn case of pg_stat_statements the top_level field is the bool field of\nthe pgssHashKey struct used as the key for pgss_hash hashtable. When we\nare performing a reset only userid, dbid and queryid values are\nprovided. We need to reset both top-level and non-top level entries. We\nhave only one way to get them all from a hashtable - search for entries\nhaving top_level=true and with top_level=false in their keys.\n\n> 2. 
And honestly, I think you should change \n>  \"Remove the entry if it exists, starting with the non-top-level\n> entry.\" on \n>  \"Remove the entry or reset min/max time statistic information and\n> the timestamp if it exists, starting with the non-top-level entry.\"\n> And the same with the comment bellow:\n> \"Also reset the top-level entry if it exists.\"\n>  \"Also remove the entry or reset min/max time statistic information\n> and the timestamp if it exists.\"\n\nThere are four such places actually - every time when the\nSINGLE_ENTRY_RESET macro is used. The logic of reset performed is\ndescribed a bit in this macro definition. It seems quite redundant to\nrepeat this description four times. But I've noted that \"remove\" terms\nshould be replaced by \"reset\".\n\nI've attached v24 with commits updated.\n\nregards, Andrei Zubkov\nPostgres Professional", "msg_date": "Wed, 25 Oct 2023 18:35:04 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 25/10/2023 20:00, Andrei Zubkov wrote:\n> Hi Andrei,\n> \n> On Wed, 2023-10-25 at 13:59 +0700, Andrei Lepikhov wrote:\n>> But minmax_stats_since and changes in the UI of the reset routine\n>> look like syntactic sugar here.\n>> I can't convince myself that it is really needed. Do you have some\n>> set of cases that can enforce the changes proposed?\n> \n> Yes, it looks strange, but it is needed in some way.\n> The main purpose of this patch is to provide means for sampling\n> solutions for collecting statistics data in series of samples. The\n> first goal here - is to be sure that the statement was not evicted and\n> come back between samples (especially between rare samples). This is\n> the target of the stats_since field. 
The second goal - is the ability\n> of getting all statistic increments for the interval between samples.\n> All statistics provided by pg_stat_statements with except of min/max\n> values can be calculated for the interval since the last sample knowing\n> the values in the last and current samples. We need a way to get\n> min/max values too. This is achieved by min/max reset performed by the\n> sampling solution. The minmax_stats_since field is here for the same\n> purpose - the sampling solution is need to be sure that no one have\n> done a min/max reset between samples. And if such reset was performed\n> at least we know when it was performed.\n\nIt is the gist of my question. If needed, You can remove the record by \n(userid, dbOid, queryId). As I understand, this extension is usually \nused by an administrator. Who can reset these parameters except you and \nthe DBMS?\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 26 Oct 2023 15:49:02 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "On Thu, 2023-10-26 at 15:49 +0700, Andrei Lepikhov wrote:\n> It is the gist of my question. If needed, You can remove the record\n> by \n> (userid, dbOid, queryId). As I understand, this extension is usually \n> used by an administrator. Who can reset these parameters except you\n> and \n> the DBMS?\nThis extension is used by administrator but indirectly through some\nkind of sampling solution providing information about statistics change\nover time. The only kind of statistics unavailable to sampling\nsolutions without a periodic reset is a min/max statistics. This patch\nprovides a way for resetting those statistics without entry eviction.\nSuppose the DBA will use several sampling solutions. Every such\nsolution can perform its own resets of min/max statistics. 
Other\nsampling solutions need a way to detect such resets to avoid undetected\ninterference. Timestamping of min/max reset can be used for that\npurpose.\n\n-- \nregards, Andrei Zubkov\nPostgres Professional\n\n\n", "msg_date": "Thu, 26 Oct 2023 13:41:47 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On 25.10.2023 18:35, Andrei Zubkov wrote:\n> Hi Alena,\n>\n> On Wed, 2023-10-25 at 16:25 +0300, Alena Rybakina wrote:\n>>  Hi! Thank you for your work on the subject.\n>> 1. I didn't understand why we first try to find pgssEntry with a\n>> false top_level value, and later find it with a true top_level value.\n> In case of pg_stat_statements the top_level field is the bool field of\n> the pgssHashKey struct used as the key for pgss_hash hashtable. When we\n> are performing a reset only userid, dbid and queryid values are\n> provided. We need to reset both top-level and non-top level entries. We\n> have only one way to get them all from a hashtable - search for entries\n> having top_level=true and with top_level=false in their keys.\nThank you for explanation, I got it.\n>> 2. And honestly, I think you should change\n>>  \"Remove the entry if it exists, starting with the non-top-level\n>> entry.\" on\n>>  \"Remove the entry or reset min/max time statistic information and\n>> the timestamp if it exists, starting with the non-top-level entry.\"\n>> And the same with the comment bellow:\n>> \"Also reset the top-level entry if it exists.\"\n>>  \"Also remove the entry or reset min/max time statistic information\n>> and the timestamp if it exists.\"\n> There are four such places actually - every time when the\n> SINGLE_ENTRY_RESET macro is used. The logic of reset performed is\n> described a bit in this macro definition. It seems quite redundant to\n> repeat this description four times. 
But I've noted that \"remove\" terms\n> should be replaced by \"reset\".\n>\n> I've attached v24 with commits updated.\nYes, I agree with the changes.\n\n-- \nRegards,\nAlena Rybakina", "msg_date": "Fri, 27 Oct 2023 00:16:28 +0300", "msg_from": "Alena Rybakina <lena.ribackina@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" },
{ "msg_contents": "Hi hackers,\n\nPatch rebased to the current master\n-- \nregards, Andrei Zubkov\nPostgres Professional", "msg_date": "Fri, 17 Nov 2023 00:35:46 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" },
{ "msg_contents": "A little fix in \"level_tracking\" tests after merge.\n\n-- \nregards, Andrei Zubkov\nPostgres Professional", "msg_date": "Fri, 17 Nov 2023 11:40:19 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" },
{ "msg_contents": "Hi!\n\nOn Fri, Nov 17, 2023 at 10:40 AM Andrei Zubkov <zubkov@moonset.ru> wrote:\n>\n> A little fix in \"level_tracking\" tests after merge.\n\nI've reviewed this patch. I think this is the feature of high demand.\nNew columns (stats_since and minmax_stats_since) to the\npg_stat_statements view, enhancing the granularity and precision of\nperformance monitoring. This addition allows database administrators\nto have a clearer understanding of the time intervals for statistics\ncollection on each statement. Such detailed tracking is invaluable for\nperformance tuning and identifying bottlenecks in database operations.\n\nI think the design was well-discussed in this thread. Implementation\nalso looks good to me. 
I've just slightly revised the commit\nmessages.\n\nI'm going to push this patchset if no objections.\n\n------\nRegards,\nAlexander Korotkov", "msg_date": "Sat, 25 Nov 2023 02:45:07 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" },
{ "msg_contents": "Hi,\n\nOn Sat, Nov 25, 2023 at 02:45:07AM +0200, Alexander Korotkov wrote:\n>\n> I've reviewed this patch. I think this is the feature of high demand.\n> New columns (stats_since and minmax_stats_since) to the\n> pg_stat_statements view, enhancing the granularity and precision of\n> performance monitoring. This addition allows database administrators\n> to have a clearer understanding of the time intervals for statistics\n> collection on each statement. Such detailed tracking is invaluable for\n> performance tuning and identifying bottlenecks in database operations.\n\nYes, it will greatly improve performance analysis tools, and as the maintainer\nof one of them I've been waiting for this feature for a very long time!\n>\n> I think the design was well-discussed in this thread. Implementation\n> also looks good to me. I've just slightly revised the commit\n> messages.\n>\n> I'm going to push this patchset if no objections.\n\nThanks! No objection from me, it all looks good.\n\n\n", "msg_date": "Sat, 25 Nov 2023 11:04:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" },
{ "msg_contents": "Hi Alexander!\n\nOn Sat, 2023-11-25 at 02:45 +0200, Alexander Korotkov wrote:\n\n> I've reviewed this patch.\n\nThank you very much for your review.\n\n> I think the design was well-discussed in this thread.  Implementation\n> also looks good to me.  
I've just slightly revised the commit\n> messages.\n\nI've noted a strange space in a commit message of 0001 patch: \n\"I t is needed for upcoming patch...\" \nIt looks like a typo.\n\n-- \nregards, Andrei Zubkov \nPostgres Professional\n\n\n\n", "msg_date": "Sat, 25 Nov 2023 23:45:41 +0300", "msg_from": "Andrei Zubkov <zubkov@moonset.ru>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in\n pg_stat_statements" }, { "msg_contents": "On Sat, Nov 25, 2023 at 10:45 PM Andrei Zubkov <zubkov@moonset.ru> wrote:\n> On Sat, 2023-11-25 at 02:45 +0200, Alexander Korotkov wrote:\n>\n> > I've reviewed this patch.\n>\n> Thank you very much for your review.\n>\n> > I think the design was well-discussed in this thread. Implementation\n> > also looks good to me. I've just slightly revised the commit\n> > messages.\n>\n> I've noted a strange space in a commit message of 0001 patch:\n> \"I t is needed for upcoming patch...\"\n> It looks like a typo.\n\nThank you for catching it. I'll fix this before commit.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Sat, 25 Nov 2023 22:46:47 +0200", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" }, { "msg_contents": "Hello Alexander,\n\n25.11.2023 23:46, Alexander Korotkov wrote:\n> On Sat, Nov 25, 2023 at 10:45 PM Andrei Zubkov<zubkov@moonset.ru> wrote:\n>>\n>> I've noted a strange space in a commit message of 0001 patch:\n>> \"I t is needed for upcoming patch...\"\n>> It looks like a typo.\n> Thank you for catching it. I'll fix this before commit.\n\nI've found one more typo in that commit: minimun.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Wed, 29 Nov 2023 10:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Tracking statements entry timestamp in pg_stat_statements" } ]
[ { "msg_contents": "Added the ability to specify IF EXISTS when renaming a column of an object\n(table, view, etc.).\nFor example: ALTER TABLE distributors RENAME COLUMN IF EXISTS address TO\ncity;\nIf the column does not exist, a notice is issued instead of throwing an\nerror.", "msg_date": "Mon, 22 Mar 2021 21:40:09 +0200", "msg_from": "David Oksman <oksman.dav@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] rename column if exists" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: tested, passed\n\nThank you for your contribution. \r\n\r\nThis is a useful feature. Although, there are so many places we alter a column which don't support IF EXISTS. For example: ALTER COLUMN IF EXISTS. Why don't we include the necessary changes across different use cases to this patch?\r\n\r\n> + | ALTER TABLE IF_P EXISTS relation_expr RENAME opt_column IF_P EXISTS name TO name\r\n\r\nSince this is my first review patch, can you help me understand why some keywords are written with \"_P\" suffix?\r\n\r\n> + | ALTER TABLE relation_expr RENAME opt_column IF_P EXISTS name TO name\r\n> + {\r\n> + RenameStmt *n = makeNode(RenameStmt);\r\n> + n->renameType = OBJECT_COLUMN;\r\n> + n->relationType = OBJECT_TABLE;\r\n> + n->relation = $3;\r\n> + n->subname = $8;\r\n> + n->newname = $10;\r\n> + n->missing_ok = false;\r\n> + n->sub_missing_ok = true;\r\n> + $$ = (Node *)n;\r\n> + }\r\n\r\nCopying alter table combinations (with and without IF EXISTS statements) makes this patch hard to review and bloats the gram. 
Instead of copying, perhaps we can use an optional syntax, like opt_if_not_exists of ALTER TYPE.\r\n\r\n> + if (attnum == InvalidAttrNumber)\r\n> + {\r\n> + if (!stmt->sub_missing_ok)\r\n> + ereport(ERROR,\r\n> + (errcode(ERRCODE_UNDEFINED_COLUMN),\r\n> + errmsg(\"column \\\"%s\\\" does not exist\",\r\n> + stmt->subname)));\r\n> + else\r\n> + {\r\n> + ereport(NOTICE,\r\n> + (errmsg(\"column \\\"%s\\\" does not exist, skipping\",\r\n> + stmt->subname)));\r\n> + return InvalidObjectAddress;\r\n> + }\r\n> + }\r\n> +\r\n\r\nOther statements in gram.y includes sub_missing_ok = true and missing_ok = false. Why don't we add sub_missing_ok = false to existing declarations where IF EXISTS is not used?\r\n\r\n> - <term><literal>RENAME ATTRIBUTE</literal></term>\r\n> + <term><literal>RENAME ATTRIBUTE [ IF EXISTS ]</literal></term>\r\n\r\nIt seems that ALTER VIEW, ALTER TYPE, and ALTER MATERIALIZED VIEW does not have any tests for this feature.", "msg_date": "Wed, 16 Jun 2021 10:48:02 +0000", "msg_from": "Yagiz Nizipli <yagiz@nizipli.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "> On 22 Mar 2021, at 20:40, David Oksman <oksman.dav@gmail.com> wrote:\n> \n> Added the ability to specify IF EXISTS when renaming a column of an object (table, view, etc.).\n> For example: ALTER TABLE distributors RENAME COLUMN IF EXISTS address TO city;\n> If the column does not exist, a notice is issued instead of throwing an error.\n\nWhat is the intended use-case for RENAME COLUMN IF EXISTS? 
I'm struggling to\nsee when that would be helpful to users but I might not be imaginative enough.\n\n--\nDaniel Gustafsson\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 11:46:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" },
{ "msg_contents": "On Thursday, November 4, 2021, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 22 Mar 2021, at 20:40, David Oksman <oksman.dav@gmail.com> wrote:\n> >\n> > Added the ability to specify IF EXISTS when renaming a column of an\n> object (table, view, etc.).\n> > For example: ALTER TABLE distributors RENAME COLUMN IF EXISTS address TO\n> city;\n> > If the column does not exist, a notice is issued instead of throwing an\n> error.\n>\n> What is the intended use-case for RENAME COLUMN IF EXISTS? I'm struggling\n> to\n> see when that would be helpful to users but I might not be imaginative\n> enough.\n>\n>\nSame reasoning as for all the other if exists we have, idempotence. Being\nable to run the command on an object that is already in the desired state\nwithout provoking an error.\n\nDavid J.", "msg_date": "Thu, 4 Nov 2021 06:26:35 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "> On 4 Nov 2021, at 14:26, David G. Johnston <david.g.johnston@gmail.com> wrote:\n> \n> On Thursday, November 4, 2021, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> wrote:\n> > On 22 Mar 2021, at 20:40, David Oksman <oksman.dav@gmail.com <mailto:oksman.dav@gmail.com>> wrote:\n> > \n> > Added the ability to specify IF EXISTS when renaming a column of an object (table, view, etc.).\n> > For example: ALTER TABLE distributors RENAME COLUMN IF EXISTS address TO city;\n> > If the column does not exist, a notice is issued instead of throwing an error.\n> \n> What is the intended use-case for RENAME COLUMN IF EXISTS? I'm struggling to\n> see when that would be helpful to users but I might not be imaginative enough.\n> \n> Same reasoning as for all the other if exists we have, idempotence. Being able to run the command on an object that is already in the desired state without provoking an error.\n\nIf the object is known to be in the desired state, there is no need to use IF\nEXISTS. Personally I think IF EXISTS commands are useful when they provide a\ntransition to a known end state, but in this case it's an unknown end state.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 5 Nov 2021 10:21:42 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Fri, 5 Nov 2021 at 05:21, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n\n> > Same reasoning as for all the other if exists we have, idempotence.\n> Being able to run the command on an object that is already in the desired\n> state without provoking an error.\n>\n> If the object is known to be in the desired state, there is no need to use\n> IF\n> EXISTS. 
Personally I think IF EXISTS commands are useful when they\n> provide a\n> transition to a known end state, but in this case it's an unknown end\n> state.\n>\n\nThe whole point of IF EXISTS, not to mention IF NOT EXISTS and OR REPLACE,\nis that the same script can run without error on a variety of existing\nschemas. They aren't (primarily) for typing directly at the psql prompt. At\nthe time the script is written, the state of the object when the script is\nrun is unknown.\n", "msg_date": "Fri, 5 Nov 2021 08:03:58 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "> On 5 Nov 2021, at 13:03, Isaac Morland <isaac.morland@gmail.com> wrote:\n> \n> On Fri, 5 Nov 2021 at 05:21, Daniel Gustafsson <daniel@yesql.se <mailto:daniel@yesql.se>> wrote:\n> \n> > Same reasoning as for all the other if exists we have, idempotence. Being able to run the command on an object that is already in the desired state without provoking an error.\n> \n> If the object is known to be in the desired state, there is no need to use IF\n> EXISTS. 
Personally I think IF EXISTS commands are useful when they provide a\n> transition to a known end state, but in this case it's an unknown end state.\n> \n> The whole point of IF EXISTS, not to mention IF NOT EXISTS and OR REPLACE, is that the same script can run without error on a variety of existing schemas. They aren't (primarily) for typing directly at the psql prompt. At the time the script is written, the state of the object when the script is run is unknown.\n\nI know that, I'm just not convinced that it's a feature (in the case at hand).\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 5 Nov 2021 13:07:21 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Friday, November 5, 2021, Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> I know that, I'm just not convinced that it's a feature (in the case at\n> hand)\n>\n\nI don’t see how this one should be expected to meet a higher bar than drop\ntable or other existing commands. I get why in the nearby discussion\ncreate role if not exists is treated differently based upon its unique\nsecurity concerns. Does column renaming have a hidden concern I’m not\nthinking of?\n\nDavid J.\n", "msg_date": "Fri, 5 Nov 2021 07:22:29 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Friday, November 5, 2021, Daniel Gustafsson <daniel@yesql.se> wrote:\n>> I know that, I'm just not convinced that it's a feature (in the case at\n>> hand)\n\n> I don’t see how this one should be expected to meet a higher bar than drop\n> table or other existing commands. I get why in the nearby discussion\n> create role if not exists is treated differently based upon its unique\n> security concerns. Does column renaming have a hidden concern I’m not\n> thinking of?\n\nIMV, the best forms of these options are the ones that produce a known\nend state of the object. DROP IF EXISTS meets that test: upon successful\ncompletion, the object doesn't exist. CREATE OR REPLACE meets that test:\nupon successful completion, the object exists and has exactly the\nproperties stated in the command. CREATE IF NOT EXISTS fails that test,\nbecause while you know that the object will exist afterwards, you've got\nnext to no information about its properties. We've put up with C.I.N.E.\nsemantics in some limited cases like CREATE TABLE, where C.O.R.\nsemantics would be too drastic; but IMO it's generally best avoided.\n\nIn this framework, RENAME IF EXISTS is in sort of a weird place.\nYou'd know that afterwards there is no longer any column with the\nsource name. But you are not entitled to draw any conclusions\nabout whether a column exists with the target name, nor what its\nproperties are. 
So that makes me feel like the semantics are more\non the poorly-defined than well-defined side of the fence.\n\nI'd be more willing to overlook that if a clear use-case had been\ngiven, but AFAICS no concrete case has been offered.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Nov 2021 10:40:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Friday, November 5, 2021, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I'd be more willing to overlook that if a clear use-case had been\n> given, but AFAICS no concrete case has been offered.\n>\n>\nThe use case is the exact same one for all of these - indempotence,\nespecially in the face of being able to run migration scripts starting at a\npoint in the past and still having the final outcome be the same (or error\nif there is a fundamental conflict in the history). It’s the same argument\nused for why our current implementation of create [type] if not exists is\nbroken in how it deals with schemas.\n\nDavid J.\n", "msg_date": "Fri, 5 Nov 2021 08:00:49 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Friday, November 5, 2021, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'd be more willing to overlook that if a clear use-case had been\n>> given, but AFAICS no concrete case has been offered.\n\n> The use case is the exact same one for all of these - indempotence,\n\n... except that, as I explained, it's NOT really idempotent.\nIt's a sort of half-baked idempotence, which is exactly the kind\nof underspecification you complain about in your next sentence.\nDo we really want to go there?\n\nThe perspective I'm coming from is that it's not terribly hard\nto write whatever sort of conditional DDL you want using plpgsql\nDO blocks, so it's not like we lack the capability. I think we\nshould only provide pre-fab conditional DDL for the most basic,\nsolidly-defined cases; and it seems to me that RENAME IF EXISTS\nisn't solid enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 05 Nov 2021 11:08:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Fri, Nov 5, 2021 at 10:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> In this framework, RENAME IF EXISTS is in sort of a weird place.\n> You'd know that afterwards there is no longer any column with the\n> source name. But you are not entitled to draw any conclusions\n> about whether a column exists with the target name, nor what its\n> properties are. So that makes me feel like the semantics are more\n> on the poorly-defined than well-defined side of the fence.\n\nI have mixed feelings about this proposal. As you may recall, I was a\nbig fan of CREATE IF NOT EXISTS because it's a super-useful thing in\nDDL upgrade scripts. You have an application and every time you deploy\nit you CREATE IF NOT EXISTS all the tables that are supposed to be\nthere. 
As long as the application is careful not to modify any table\ndefinitions, and nothing else is changing the database, this works\ngreat. You can add functionality by adding new tables, so the schema\ncan be upgraded before the app. Life is good.\n\nMaking renaming work in the same kind of context is harder. You're\ndefinitely going to have to upgrade the application and the schema in\nlock step, unless the application is smart enough to work with the\ncolumn having either name. You're also going to end up with some\ntrouble if you ever reuse a column name, because then the next time\nyou run the script it might rename the successor of the original\ncolumn by that name rather than the column you intended to rename. So\nit seems more finnicky to use.\n\nBut I'm also not sure that it's useless. People don't usually ask for\na feature unless they have a use in mind for it. Also, adding an\noption to skip failures when the object is missing is a popular kind\nof thing. The backend is full of functions that have a missingOK or\nnoError flag, for example. Maybe the fact that I don't quite see how\nto use this effectively is just lack of imagination on my part....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 5 Nov 2021 11:37:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Fri, Nov 5, 2021 at 8:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Friday, November 5, 2021, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I'd be more willing to overlook that if a clear use-case had been\n> >> given, but AFAICS no concrete case has been offered.\n>\n> > The use case is the exact same one for all of these - indempotence,\n>\n> ... 
except that, as I explained, it's NOT really idempotent.\n> It's a sort of half-baked idempotence, which is exactly the kind\n> of underspecification you complain about in your next sentence.\n> Do we really want to go there?\n>\n>\nIt may not be self-contained idempotence but so long as the user is using\nthe command in the expected manner the end result will appear that way.\n\nI disagree with the premise that we have to meet the known end state\nrequirement.  In the imagined use case either, but not both, the original\ncolumn or the result column are going to exist.  A RIE will behave as\nexpected and desired in that case.  If someone executes RIE in a case where\nneither column exists the end result is no error and neither column still\nexists.  This is exactly what the command RIE promises will happen in that\ncase.  It works reliably and as one would expect and from there it is up to\nthe user, not us, to decide when it is appropriate to use or not.\n\nThe perspective I'm coming from is that it's not terribly hard\n> to write whatever sort of conditional DDL you want using plpgsql\n> DO blocks, so it's not like we lack the capability.  I think we\n> should only provide pre-fab conditional DDL for the most basic,\n> solidly-defined cases; and it seems to me that RENAME IF EXISTS\n> isn't solid enough.\n>\n>\nIOW, it doesn't actually matter what the use case is.  And the definition\nof solid basically precludes anything except CREATE and DROP commands\n(including the create version written ALTER TABLE IF EXISTS <do something>).\n\nIf this is indeed the agreed upon standard for this kind of thing an FAQ or\ndocumentation entry formalizing it would help, because lots of people are\njust going to see that we meet this migration use case only partially and\nwill continue to request and even develop the missing pieces.\n\nDavid J.\n", "msg_date": "Fri, 5 Nov 2021 08:47:48 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "On Fri, Nov 5, 2021 at 8:37 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> Making renaming work in the same kind of context is harder. You're\n> definitely going to have to upgrade the application and the schema in\n> lock step, unless the application is smart enough to work with the\n> column having either name.  You're also going to end up with some\n> trouble if you ever reuse a column name, because then the next time\n> you run the script it might rename the successor of the original\n> column by that name rather than the column you intended to rename. So\n> it seems more finnicky to use.\n>\n\nThis I understand fully, and am fine with leaving it to the user to\nhandle.  They can choose whether rewriting the table (column add with\nnon-null values) in order to have an easier application migration is better\nor worse than doing a rename and just ensuring that the old name is fully\nretired (which seems likely).\n\nIt can be used profitably and that is good enough for me given that we've\nalready set a precedent of having IF EXISTS conditionals.  That people need\nto understand exactly what the command will do, and test their scripts when\nusing it, is a reasonable expectation. The possibility that some may not\nand might have issues shouldn't prevent us from providing a useful feature\nto others who will use it appropriately and with care.\n\nDavid J.\n", "msg_date": "Fri, 5 Nov 2021 08:58:40 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" }, { "msg_contents": "Hi,\n\nThis patch has been around for about 10 months. 
There seems to be some support\nfor the feature but 3 committers raised concerns about the patch, and the OP\nnever answered, or clarified the intended use case until now.\n\nAt that point I don't see this patch getting committed at all so I'm marking it\nas Rejected.\n\n\n", "msg_date": "Mon, 17 Jan 2022 08:29:36 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] rename column if exists" } ]
[ { "msg_contents": "Hi,\nI'm looking for a software to migrate database between versions with\nminimum downtime.\n\nwhich one can be used to do this job ?\n\nthanks\n\nIsabelle\n", "msg_date": "Mon, 22 Mar 2021 22:47:07 +0100", "msg_from": "isabelle Ross <mktg.ide@gmail.com>", "msg_from_op": true, "msg_subject": "tool to migrate database" }, { "msg_contents": "On Mon, Mar 22, 2021, at 22:47, isabelle Ross wrote:\n>Hi,\n>I'm looking for a software to migrate database between versions with minimum downtime. \n>\n>which one can be used to do this job ?\n\nHi Isabelle,\n\nThere are multiple ways to do it.\nThe fastest way is probably pg_upgrade.\n\nThere are some difference ways to do it depending on what requirements you have on redundancy while upgrading.\n\nI recently read an interesting real-life story from a very big company, Adyen, and how they upgraded their 50 terrabyte PostgreSQL database. The article is from 2018 but I still think it's relevant:\n\nhttps://medium.com/adyen/updating-a-50-terabyte-postgresql-database-f64384b799e7\n\nThere might be other good tools I don't know of, I'm not an expert on upgrades.\nHopefully other pghackers can fill in.\n\nBest regards,\n\nJoel\n", "msg_date": "Tue, 23 Mar 2021 09:49:57 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": false, "msg_subject": "Re: tool to migrate database" }, { "msg_contents": "On Tue, Mar 23, 2021 at 09:49:57AM +0100, Joel Jacobson wrote:\n> I recently read an interesting real-life story from a very big company, Adyen,\n> and how they upgraded their 50 terrabyte PostgreSQL database. The article is\n> from 2018 but I still think it's relevant:\n> \n> https://medium.com/adyen/\n> updating-a-50-terabyte-postgresql-database-f64384b799e7\n> \n> There might be other good tools I don't know of, I'm not an expert on upgrades.\n> Hopefully other pghackers can fill in.\n\nThis is not an appropriate topic for the hackers email list, which is\nfor internal server development discussion. 
The 'general' or 'admin'\nlists would be better.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Tue, 23 Mar 2021 10:50:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: tool to migrate database" }, { "msg_contents": "concepts similar and related to migration/replication are applied with\nthe software tcapture , please give a look at https://www.tcapture.net/\n\nIl giorno gio 29 apr 2021 alle ore 16:34 Bruce Momjian <bruce@momjian.us>\nha scritto:\n\n> On Tue, Mar 23, 2021 at 09:49:57AM +0100, Joel Jacobson wrote:\n> > I recently read an interesting real-life story from a very big company,\n> Adyen,\n> > and how they upgraded their 50 terrabyte PostgreSQL database. The\n> article is\n> > from 2018 but I still think it's relevant:\n> >\n> > https://medium.com/adyen/\n> > updating-a-50-terabyte-postgresql-database-f64384b799e7\n> >\n> > There might be other good tools I don't know of, I'm not an expert on\n> upgrades.\n> > Hopefully other pghackers can fill in.\n>\n> This is not an appropriate topic for the hackers email list, which is\n> for internal server development discussion. The 'general' or 'admin'\n> lists would be better.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> If only the physical world exists, free will is an illusion.\n>\n>\n>\n>\n>\n>\n", "msg_date": "Thu, 29 Apr 2021 16:37:24 +0200", "msg_from": "silvio brandani <sbrandans@gmail.com>", "msg_from_op": false, "msg_subject": "Re: tool to migrate database" } ]
[ { "msg_contents": "Hi,\n\nwhile working on the new BRIN opclasses in [1], I stumbled on something\nI think is actually a bug in how CREATE OPERATOR CLASS handles the\nstorage type. If you look at built-in opclasses, there's a bunch of\ncases where (opcintype == opckeytype), for example the BRIN opclasses\nfor inet data type:\n\ntest=# select oid, opcname, opcfamily, opcintype, opckeytype from\npg_opclass where opcintype = 869 order by oid;\n\n oid | opcname | opcfamily | opcintype | opckeytype\n-------+-----------------------+-----------+-----------+------------\n 10105 | inet_minmax_ops | 4075 | 869 | 869\n 10106 | inet_inclusion_ops | 4102 | 869 | 869\n\nThe fact that opckeytype is set is important, because this allows the\nopclass to handle data with cidr data type - there's no opclass for\ncidr, but we can do this:\n\ncreate table t (a cidr);\ninsert into t values ('127.0.0.1');\ninsert into t values ('127.0.0.2');\ninsert into t values ('127.0.0.3');\ncreate index on t using brin (a inet_minmax_ops);\n\nThis seems to work fine. Now, if you manually update the opckeytype to\n0, you'll get this:\n\ntest=# update pg_opclass set opckeytype = 0 where oid = 10105;\nUPDATE 1\ntest=# create index on t using brin (a inet_minmax_ops);\nERROR: missing operator 1(650,650) in opfamily 4075\n\n\nUnfortunately, it turns out it's impossible to re-create this opclass\nusing CREATE OPERATOR CLASS, because the opclasscmds.c does this:\n\n /* Just drop the spec if same as column datatype */\n if (storageoid == typeoid && false)\n storageoid = InvalidOid;\n\nSo the storageoid is reset to 0. 
This only works for the built-in\nopclasses because those are injected directly into the catalogs.\n\nEither the CREATE OPERATOR CLASS is not considering something before\nresetting the storageoid, or maybe it should be reset for all opclasses\n(including the built-in ones) and the code that's using it should\nrestore it in those cases (AFAICS opckeytype=0 means it's the same as\nopcintkey).\n\nAttached is a PoC patch doing the first thing - this does the trick for\nme, but I'm not 100% sure it's the right fix.\n\n\n[1]\nhttps://www.postgresql.org/message-id/54b47e66-bd8a-d44a-2090-fd4b2f49abe6%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 23 Mar 2021 02:34:57 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> while working on the new BRIN opclasses in [1], I stumbled on something\n> I think is actually a bug in how CREATE OPERATOR CLASS handles the\n> storage type.\n\nHm. Both catalogs.sgml and pg_opclass.h say specifically that\nopckeytype should be zero if it's to be the same as the input\ncolumn type. I don't think just dropping the enforcement of that\nis the right answer.\n\nI don't recall for sure what-all might depend on that. 
I suspect\nthat it's mostly for the benefit of polymorphic opclasses, so\nmaybe the right thing is to say that the opckeytype can be\npolymorphic if opcintype is, and then we resolve it as per\nthe usual polymorphism rules.\n\nIn any case, it's fairly suspicious that the only opclasses\nviolating the existing rule are johnny-come-lately BRIN opclasses.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 22 Mar 2021 22:13:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" }, { "msg_contents": "\n\nOn 3/23/21 3:13 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> while working on the new BRIN opclasses in [1], I stumbled on something\n>> I think is actually a bug in how CREATE OPERATOR CLASS handles the\n>> storage type.\n> \n> Hm. Both catalogs.sgml and pg_opclass.h say specifically that\n> opckeytype should be zero if it's to be the same as the input\n> column type. I don't think just dropping the enforcement of that\n> is the right answer.\n> \n\nYeah, that's possible. I was mostly just demonstrating the difference in\nbehavior. Maybe the right fix is to fix the catalog contents and then\ntweak the AM code, or something.\n\n> I don't recall for sure what-all might depend on that. I suspect\n> that it's mostly for the benefit of polymorphic opclasses, so\n> maybe the right thing is to say that the opckeytype can be\n> polymorphic if opcintype is, and then we resolve it as per\n> the usual polymorphism rules.\n> \n\nI did an experiment - fixed all the opclasses violating the rule by\nremoving the opckeytype, and ran make checke. The only cases causing\nissues were cidr and int4range. 
Not that it proves anything.\n\n> In any case, it's fairly suspicious that the only opclasses\n> violating the existing rule are johnny-come-lately BRIN opclasses.\n> \n\nRight, that seems suspicious.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 23 Mar 2021 04:43:18 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 3/23/21 3:13 AM, Tom Lane wrote:\n>> Hm. Both catalogs.sgml and pg_opclass.h say specifically that\n>> opckeytype should be zero if it's to be the same as the input\n>> column type. I don't think just dropping the enforcement of that\n>> is the right answer.\n\n> Yeah, that's possible. I was mostly just demonstrating the difference in\n> behavior. Maybe the right fix is to fix the catalog contents and then\n> tweak the AM code, or something.\n\nDigging in our git history, the rule about zero opckeytype dates to\n2001 (f933766ba), which precedes our invention of polymorphic types\nin 2003 (somewhere around 730840c9b). So I'm pretty sure that that\nwas a poor man's substitute for polymorphic opclasses, which we\nfailed to clean up entirely after we got real polymorphic opclasses.\n\nNow, I'd be in favor of cleaning that up and just using standard\npolymorphism rules throughout. But (without having studied the code)\nit looks like the immediate issue is that something in the BRIN code is\nunfamiliar with the rule for zero opckeytype. 
It might be a noticeably\nsmaller patch to fix that than to get rid of the convention about zero.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 01:15:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" }, { "msg_contents": "\n\nOn 3/23/21 6:15 AM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 3/23/21 3:13 AM, Tom Lane wrote:\n>>> Hm. Both catalogs.sgml and pg_opclass.h say specifically that\n>>> opckeytype should be zero if it's to be the same as the input\n>>> column type. I don't think just dropping the enforcement of that\n>>> is the right answer.\n> \n>> Yeah, that's possible. I was mostly just demonstrating the difference in\n>> behavior. Maybe the right fix is to fix the catalog contents and then\n>> tweak the AM code, or something.\n> \n> Digging in our git history, the rule about zero opckeytype dates to\n> 2001 (f933766ba), which precedes our invention of polymorphic types\n> in 2003 (somewhere around 730840c9b). So I'm pretty sure that that\n> was a poor man's substitute for polymorphic opclasses, which we\n> failed to clean up entirely after we got real polymorphic opclasses.\n> \n> Now, I'd be in favor of cleaning that up and just using standard\n> polymorphism rules throughout. But (without having studied the code)\n> it looks like the immediate issue is that something in the BRIN code is\n> unfamiliar with the rule for zero opckeytype. It might be a noticeably\n> smaller patch to fix that than to get rid of the convention about zero.\n> \n\nThat's possible. I'm not familiar with how we deal with polymorphic\nopclasses etc. but I tried to look for places dealing with opckeytype,\nso that I can compare BRIN vs. the other AMs, but the only references\nseem to be in amvalidate() functions.\n\nSo either the difference is not very obvious, or maybe the other AMs\ndon't trigger this for some reason. 
For example btree has a separate\nopclass for cidr, so it does not have to use \"inet_ops\" as polymorphic.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 23 Mar 2021 16:33:17 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 3/23/21 6:15 AM, Tom Lane wrote:\n>> Digging in our git history, the rule about zero opckeytype dates to\n>> 2001 (f933766ba), which precedes our invention of polymorphic types\n>> in 2003 (somewhere around 730840c9b). So I'm pretty sure that that\n>> was a poor man's substitute for polymorphic opclasses, which we\n>> failed to clean up entirely after we got real polymorphic opclasses.\n\n> That's possible. I'm not familiar with how we deal with polymorphic\n> opclasses etc. but I tried to look for places dealing with opckeytype,\n> so that I can compare BRIN vs. the other AMs, but the only references\n> seem to be in amvalidate() functions.\n> So either the difference is not very obvious, or maybe the other AMs\n> don't trigger this for some reason. For example btree has a separate\n> opclass for cidr, so it does not have to use \"inet_ops\" as polymorphic.\n\nI think the difference is that brin is trying to look up opclass members\nbased on the recorded type of the index's column (not the underlying\ntable column). I don't recall that anyplace else does that. btree\nfor instance does some lookups based on opcintype, but I don't think\nit looks at the index column type anywhere.\n\nAfter poking at it a bit more, the convention for zero does allow us\nto do some things that regular polymorphism won't. As an example:\n\ntest=# create table vc (id varchar(9) primary key);\nCREATE TABLE\ntest=# \\d+ vc_pkey\n Index \"public.vc_pkey\"\n Column | Type | Key? 
| Definition | Storage | Stats target \n--------+----------------------+------+------------+----------+--------------\n id | character varying(9) | yes | id | extended | \nprimary key, btree, for table \"public.vc\"\n\nIf btree text_ops had opckeytype = 'text' then this index column\nwould show as just \"text\", which while not fatal seems like a loss\nof information.\n\nSo I'm coming around to the idea that opckeytype = opcintype and\nopckeytype = 0 are valid but distinct situations, and CREATE OPCLASS\nindeed ought not smash one to the other. But we'd better poke around\nat the documentation, pg_dump, etc and make sure everything plays\nnice with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 12:30:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Handling of opckeytype / CREATE OPERATOR CLASS (bug?)" } ]
[ { "msg_contents": "Hey everyone,\n\nAs you know, Postgres currently supports SQL:1999 recursive common table\nexpressions, using WITH RECURSIVE. However, Postgres does not allow more than\none recursive self-reference in the recursive term. This restriction seems to be\nunnecessary.\n\nIn this mail, I'd like to propose a patch that removes this restriction, and \ntherefore allows the use of multiple self-references in the recursive term.\nAfter the patch:\n\nWITH RECURSIVE t(n) AS (\n VALUES(1)\n UNION ALL\n SELECT t.n+f.n\n FROM t, t AS f\n WHERE t.n < 100\n) SELECT * FROM t;\n\n n\n-----\n 1\n 2\n 4\n 8\n 16\n 32\n 64\n 128\n(8 rows)\n\nThis feature deviates only slightly from the current WITH RECURSIVE, and \nrequires very little changes (~10 loc). Any thoughts on this?\n\n--\nDenis Hirn", "msg_date": "Tue, 23 Mar 2021 14:03:44 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "[PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On Tue, Mar 23, 2021 at 1:03 PM Denis Hirn <denis.hirn@uni-tuebingen.de>\nwrote:\n\n>\n> Hey everyone,\n>\n> As you know, Postgres currently supports SQL:1999 recursive common table\n> expressions, using WITH RECURSIVE. However, Postgres does not allow more\n> than\n> one recursive self-reference in the recursive term. This restriction seems\n> to be\n> unnecessary.\n>\n> In this mail, I'd like to propose a patch that removes this restriction,\n> and\n> therefore allows the use of multiple self-references in the recursive term.\n> After the patch:\n>\n> WITH RECURSIVE t(n) AS (\n> VALUES(1)\n> UNION ALL\n> SELECT t.n+f.n\n> FROM t, t AS f\n> WHERE t.n < 100\n> ) SELECT * FROM t;\n>\n> n\n> -----\n> 1\n> 2\n> 4\n> 8\n> 16\n> 32\n> 64\n> 128\n> (8 rows)\n>\n> This feature deviates only slightly from the current WITH RECURSIVE, and\n> requires very little changes (~10 loc). 
Any thoughts on this?\n>\n> --\n> Denis Hirn\n>\n\nI am not at all sure what the standard says about such recursion but it\nlooks like the two t's are treated in your patch as the same incarnation of\nthe table, not as a cross join of two incarnations. The natural result I\nwould expect from a this query would be all numbers from 1 to 198 (assuming\nthat the query is modified to restrict f.n and that UNION ALL is converted\nto UNION to avoid infinite recursion).\n\nI don't think any other DBMS has implemented this, except MariaDB. Tested\nhere:\nhttps://dbfiddle.uk/?rdbms=mariadb_10.5&fiddle=565c22771fdfc746e05808a7da7a205f\n\nSET @@standard_compliant_cte=0;\nWITH RECURSIVE t(n) AS (\n SELECT 1\n UNION -- ALL\n SELECT t.n + f.n\n FROM t, t AS f\n WHERE t.n < 4 AND f.n < 4\n) SELECT * FROM t;\n\nResult:\n\n> | n |\n> | -: |\n> | 1 |\n> | 2 |\n> | 3 |\n> | 4 |\n> | 5 |\n> | 6 |\n\nBest regards\nPantelis Theodosiou", "msg_date": "Tue, 23 Mar 2021 16:33:37 +0000", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Hey Pantelis,\n\n> I am not at all sure what the standard says about such recursion [...]\n\nas far as I know, the standard does not constraint the number of self-references\nof recursive common table expressions. However, I could be wrong here.\n\n> [...] but it looks like the two t's are treated in your patch as the same incarnation of the table, not as a cross join of two incarnations.\n\n\nThat's right and – as far as I'm concerned – it's expected behaviour. The patch only allows the recursive\nunion operator's working table to be read more than once. All self-references read exactly the same rows\nin each iteration. 
You could basically accomplish the same thing with another CTE like this:\n\nWITH RECURSIVE t(n) AS (\n VALUES(1)\n UNION ALL\n (WITH wt AS (SELECT * FROM t)\n SELECT wt.n+f.n\n FROM wt, wt AS f\n WHERE wt.n < 100)\n) SELECT * FROM t;\n\nBut honestly, this feels more like a hack than a solution to me. The entire working table is\nmaterialized by the (non recursive) common table expression wt, effectively doubling the\nmemory consumption of the query. This patch eliminates this intermediate materialization.\n\n> I don't think any other DBMS has implemented this, except MariaDB. Tested here:\n\nThere are a few recent DBMSs that I know of that support this: HyPer, Umbra, DuckDB, and NoisePage.\nI'm sure there are some more examples. Still, you are right, many other DBMSs do not support this – yet.\n\n--\nDenis Hirn", "msg_date": "Tue, 23 Mar 2021 19:09:08 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Denis Hirn <denis.hirn@uni-tuebingen.de> writes:\n>> I am not at all sure what the standard says about such recursion [...]\n\n> as far as I know, the standard does not constraint the number of self-references\n> of recursive common table expressions. However, I could be wrong here.\n\nAs far as I can see, the spec flat-out forbids this. In SQL:2021,\nit's discussed in 7.17 <query expression> syntax rule 3) j) ix), which\ndefines\n\n ix) If WLEi is recursive, then:\n 1) Let S be the stratum that contains WQNi.\n 2) If WQEi does not contain a <query specification> that contains\n more than one <query name> referencing members of S, then WLEi is\n linearly recursive.\n\n(\"stratum\" means a set of mutually-recursive WITH items), and they\nhelpfully explain\n\n NOTE 308 — For example, if WLEi contains the <query specification>\n SELECT * FROM A AS A1, A AS A2\n where A is a <query name> in S, then WLEi is not linearly recursive. The\n point is that this <query specification> contains two references to\n members of S. 
It is irrelevant that they reference the same member of S.\n\nand then the next rule says\n\n x) If WLEi is recursive, then WLEi shall be linearly recursive.\n\n\nSo the problem with extending the spec here is that someday they might\nextend it with some other semantics, and then we're between a rock and\na hard place.\n\nIf there were really compelling reasons why (a) we have to have this\nfeature and (b) there's only one reasonable way for it to act, hence\nno likelihood that the SQL committee will choose something else, then\nI'd be willing to think about getting out front of the committee.\nBut you haven't made either case.\n\n>> I don't think any other DBMS has implemented this, except MariaDB. Tested here:\n\n> There are a few recent DBMSs that I know of that support this: HyPer, Umbra, DuckDB, and NoisePage.\n\nDo they all act the same? Has anybody that sits on the SQL committee\ndone it? (If you could point to DB2, in particular, I'd have some\nconfidence that the committee might standardize on their approach.)\n\n\nA totally independent question is whether you've even defined a\nwell-defined behavior. There's an interesting comment in the spec:\n\n NOTE 310 — The restrictions insure that each WLEi, viewed as a\n transformation of the query names of the stratum, is monotonically\n increasing. According to Tarski’s fixed point theorem, this insures that\n there is a fixed point. 
The General Rules use Kleene’s fixed point\n theorem to define a sequence that converges to the minimal fixed\n point.\n\nI don't know any of the underlying math here, but if you've got a\njoin of multiple recursion references then the number of output\nrows could certainly be nonlinear, which very possibly breaks the\nargument that there's a well-defined interpretation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 23 Mar 2021 15:29:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Thanks for the feedback, Tom.\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> [...]\n> As far as I can see, the spec flat-out forbids this. In SQL:2021,\n> it's discussed in 7.17 <query expression> syntax rule 3) j) ix), which\n> defines [linear recursion]\n\n(Aside: We don't have a copy of the SQL:2021 specification here (all\nwe've got here is the SQL:2016 variant). We've browsed the ISO site\nand didn't find SQL:2021 there. Is that a generally available\ndraft document?)\n\n> So the problem with extending the spec here is that someday they might\n> extend it with some other semantics, and then we're between a rock and\n> a hard place.\n\nThere are two issues, say [LIN] and [NON-LIN], here. Re [LIN]:\n\n[LIN] PostgreSQL does not allow multiple references to the recursive\n common table, even if the recursion is LINEAR. 
Plenty of examples\n of such queries are found in the documentation of established RDBMSs,\n among these IBM Db2 and MS SQL Server:\n\n - https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/sqlp/rbafyrecursivequeries.htm#d60691e2455\n (example \"Two tables used for recursion using recursive common\n table expressions\")\n\n - https://docs.microsoft.com/en-us/sql/t-sql/queries/with-common-table-expression-transact-sql?view=sql-server-ver15#h-using-multiple-anchor-and-recursive-members\n (example \"Using multiple anchor and recursive members\")\n\nDb2 and SQL Server process such linear queries with multiple recursive\nCTE references just fine. As does SQLite3. With the proposed patch,\nPostgreSQL would be able to process these, too (and deliver the same\nexpected result).\n\n> If there were really compelling reasons why (a) we have to have this\n> feature and (b) there's only one reasonable way for it to act, hence\n> no likelihood that the SQL committee will choose something else, then\n> I'd be willing to think about getting out front of the committee.\n> But you haven't made either case.\n> [...]\n> Do they all act the same? Has anybody that sits on the SQL committee\n> done it? (If you could point to DB2, in particular, I'd have some\n> confidence that the committee might standardize on their approach.)\n\nWe'd classify the ability of established RDBMSs to cope with the just\nmentioned class of queries (and PostgreSQL's inability to do the same)\nas one compelling reason. Also, the existing functionality in Db2 and\nSQL Server would be a yardstick for the expected behavior.\n\n> A totally independent question is whether you've even defined a\n> well-defined behavior. There's an interesting comment in the spec:\n>\n>\n> NOTE 310 — The restrictions insure that each WLEi, viewed as a\n> transformation of the query names of the stratum, is monotonically\n> increasing. According to Tarski’s fixed point theorem, this insures that\n> there is a fixed point. 
The General Rules use Kleene’s fixed point\n> theorem to define a sequence that converges to the minimal fixed\n> point.\n>\n>\n> I don't know any of the underlying math here, but if you've got a\n> join of multiple recursion references then the number of output\n> rows could certainly be nonlinear, which very possibly breaks the\n> argument that there's a well-defined interpretation.\n\nThis brings us to [NON-LIN]:\n\n[NON-LIN] NOTE 310 refers to Tarski's fixed point theorem and the\n prerequisite that the recursive CTE defines a MONOTONIC query (this\n guarantees the existence of a least fixed point which defines\n the meaning of a recursive CTE in the first place). MONOTONICITY\n and NON-LINEAR recursion, however, are two separate/orthogonal issues.\n\nA linear recursive query can be monotonic or non-monotonic (and PostgreSQL\ncurrently has a system of safeguards that aim to identify the latter\nkind of problematic queries, ruling out the use of EXCEPT, aggregation,\nand so forth), just like a non-linear query can. A mononotic non-linear\nrecursive query approaches a least fixed point which makes the\nbehavior of a non-linear recursive CTE as well-defined as a linear\nCTE.\n\nIn fact, the document that led to the inclusion of WITH RECURSIVE into\nthe SQL standard (see reference [1] below) mentions that \"Linear\nrecursion is a special case of non-linear recursion\" (Section 3.9) in\nthe fixpoint-based semantics. (It appears that the authors aimed to\nintroduce non-linear WITH RECURSIVE from the start but the existing\nRECURSIVE UNION proposal at the time was restricted to linear recursion;\nso they paddled back and suggested to include non-linear recursion in a\nfuture SQL standard update, coined SQL4 back then).\n\nWe do agree, though, that the absence of non-linear recursion in the SQL\nstandard has openened the field for varied interpretations of the\nsemantics. 
MariaDB, for example, admits non-linear recursion once you\nset the configuration option standard_compliant_cte to 0. However, a\nrecursive table reference then returns *all* rows collected so far (not\njust those rows produced by the last iteration). This implicit change of\nbehavior makes sense for use cases of non-linear recursion, yet may come\nas a surprise for query authors. Since this implicit change could\nalternatively be made explicit (and thus, arguably, clearer) in terms of\na UNION with the recursive table reference, we'd argue to keep the\nsemantics of recursive table references as is. But before you know, you\nend up with diverging semantics for a single SQL construct, just as you said\nabove.\n\nGiven this, we would propose the patch as a means to allow multiple\nrecursive references for linear queries (see [LIN] above). This allows\nPostgreSQL to catch up with Db2, SQL Server, or SQLite3. The semantics\nin the [LIN] case are clear.\n\nBest wishes,\n --Denis and Torsten\n\n[1] S.J. Finkelstein, N. Mattos, I.S. Mumick, H. Pirahesh,\n \"Expressing Recursive Queries in SQL,\"\" ANSI Document X3H2-96-075r1, 1996,\n https://www.kirusa.com/mumick/pspapers/ansiRevisedRecProp96-075r1.ps.Z", "msg_date": "Fri, 26 Mar 2021 10:22:06 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Based on Toms feedback, and due to the fact that SQL:2021 forbids\nnon-linear recursion, version 2 of the patch allows only linear\nrecursion. Therefore, later SQL committee decisions on non-linear\nrecursion should not be problematic.\n\n> [LIN] PostgreSQL does not allow multiple references to the recursive\n> common table, even if the recursion is LINEAR. Plenty of examples\n> of such queries are found in the documentation of established RDBMSs,\n> among these IBM Db2 and MS SQL Server:\n>\n> - https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/sqlp/rbafyrecursivequeries.htm#d60691e2455 <https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/sqlp/rbafyrecursivequeries.htm#d60691e2455>\n> (example \"Two tables used for recursion using recursive common\n> table expressions\")\n\nSee the gist at [1] below for SQL code that features multiple (yet linear)\nrecursive references. 
A patched PostgreSQL runs this query as expected.\n\nBest wishes,\n --Denis and Torsten\n\n[1] https://gist.github.com/kryonix/73d77d3eaa5a15b3a4bdb7d590fa1253 <https://gist.github.com/kryonix/73d77d3eaa5a15b3a4bdb7d590fa1253>", "msg_date": "Wed, 31 Mar 2021 15:31:29 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Sorry, I didn't append the patch properly.\n\nBest wishes,\n --Denis", "msg_date": "Wed, 31 Mar 2021 15:57:52 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On Wed, Mar 31, 2021 at 7:28 PM Denis Hirn <denis.hirn@uni-tuebingen.de> wrote:\n>\n> Sorry, I didn't append the patch properly.\n\nThe patch does not apply on Head anymore, could you rebase and post a\npatch. I'm changing the status to \"Waiting for Author\".\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 14 Jul 2021 21:15:49 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> The patch does not apply on Head anymore, could you rebase and post a patch.\n\nSure thing. Here's the new patch.\n\nBest wishes,\n -- Denis", "msg_date": "Thu, 15 Jul 2021 09:18:00 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On Thu, Jul 15, 2021 at 09:18:00AM +0200, Denis Hirn wrote:\n> > The patch does not apply on Head anymore, could you rebase and post a patch.\n> \n> Sure thing. 
Here's the new patch.\n> \n> Best wishes,\n\nThanks for updating this.\n\nIn the next version of the patch, would you be so kind as to include\nupdated documentation of the feature and at least one regression test\nof same?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Sun, 18 Jul 2021 23:42:37 +0000", "msg_from": "David Fetter <david@fetter.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> In the next version of the patch, would you be so kind as to include\n> updated documentation of the feature and at least one regression test\n> of same?\n\nAs requested, this new version of the patch contains regression tests and documentation.\n\nBest wishes,\n -- Denis", "msg_date": "Tue, 20 Jul 2021 13:15:30 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On 20.07.21 13:15, Denis Hirn wrote:\n>> In the next version of the patch, would you be so kind as to include\n>> updated documentation of the feature and at least one regression test\n>> of same?\n> \n> As requested, this new version of the patch contains regression tests and documentation.\n\nThe tests fail when you build with assertions enabled (configure \n--enable-cassert).\n\nSome quick style comments:\n\nDocBook content should use 1-space indentation (not 2 spaces).\n\nTypo?: /* we'll see this later */ -> \"set\"\n\nOpening brace after \"if\" should be on new line. 
(See existing code.)\n\nAlso, there should be a space after \"if\".\n\nThese casts appear to be unnecessary:\n\n if((Node *) stmt->larg != NULL && (Node *) stmt->rarg != NULL)\n\n\n", "msg_date": "Tue, 17 Aug 2021 11:07:19 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> The tests fail when you build with assertions enabled (configure --enable-cassert).\n\nThank you for pointing that out. The new version of this patch fixes that.\nThe tests are working properly now. All style related issues are fixed as well.\n\nBest wishes,\n -- Denis", "msg_date": "Tue, 17 Aug 2021 14:58:05 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On Tue, Aug 17, 2021 at 5:58 AM Denis Hirn <denis.hirn@uni-tuebingen.de>\nwrote:\n\n> > The tests fail when you build with assertions enabled (configure\n> --enable-cassert).\n>\n> Thank you for pointing that out. The new version of this patch fixes that.\n> The tests are working properly now. All style related issues are fixed as\n> well.\n>\n> Best wishes,\n> -- Denis\n>\n>\n> Hi,\n+                   selfrefcountL = cstate->selfrefcount;\n+                   cstate->selfrefcount = selfrefcount;\n\nMaybe the variable  selfrefcountL can be renamed slightly (e.g.\ncurr_selfrefcount) so that the code is easier to read.\n\nCheers", "msg_date": "Tue, 17 Aug 2021 08:12:21 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> Maybe the variable selfrefcountL can be renamed slightly (e.g. curr_selfrefcount) so that the code is easier to read.\n\nYes, you are absolutely right. Thanks for the suggestion.\nThe new version renames this variable.\n\nBest wishes,\n -- Denis", "msg_date": "Wed, 18 Aug 2021 10:28:06 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "I tested the following query (from SQLite documentation):\n\nCREATE TABLE edge(aa INT, bb INT);\n\nWITH RECURSIVE nodes(x) AS (\n SELECT 59\n UNION\n SELECT aa FROM edge JOIN nodes ON bb=x\n UNION\n SELECT bb FROM edge JOIN nodes ON aa=x\n)\nSELECT x FROM nodes;\n\nERROR: 42P19: recursive reference to query \"nodes\" must not appear \nwithin its non-recursive term\nLINE 4: SELECT aa FROM edge JOIN nodes ON bb=x\n ^\nLOCATION: checkWellFormedRecursionWalker, parse_cte.c:960\n\nThis well-formedness check apparently needs to be enhanced to allow for \nmore than two branches in the union.\n\n\n", "msg_date": "Fri, 27 Aug 2021 16:17:35 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> This well-formedness check apparently needs to be enhanced to allow for more than two branches in the union. \n\nThe UNION operators' left associativity causes this problem.
Currently, the recursive term\nmust be enclosed in parentheses to make this example work:\n\n> WITH RECURSIVE nodes(x) AS ( \n> SELECT 59 \n> UNION \n> (SELECT aa FROM edge JOIN nodes ON bb=x \n> UNION \n> SELECT bb FROM edge JOIN nodes ON aa=x)\n> ) \n> SELECT x FROM nodes;\n\nThe current well-formedness check assumes the left argument of the top most UNION [ALL]\nto be the non-recursive term. This allows for arbitrarily many non-recursive terms, and\nexactly one recursive term. This should not be changed because later stages expect this\nstructure. But this left-deep UNION [ALL] tree does not suffice anymore. Therefore, the\nctequery field of the CommonTableExpr must be rewritten –– I think.\n\nLet's take a look at another example:\n\n> WITH RECURSIVE nodes(x) AS ( \n> SELECT 59 \n> UNION\n> SELECT 42\n> UNION \n> SELECT aa FROM edge JOIN nodes ON bb=x \n> UNION -- Top most UNION operator\n> SELECT bb FROM edge JOIN nodes ON aa=x\n> ) \n> SELECT x FROM nodes;\n\nThis would be parsed left-deep as:\n((SELECT 59 UNION SELECT 42) UNION SELECT aa ...) UNION SELECT bb ...\n\nThe proposed rewriting should be able to detect that (SELECT 59 UNION SELECT 42) does\nnot contain any recursive references and therefore pull the term upwards in the tree,\nleaving us with:\n\n(SELECT 59 UNION SELECT 42) UNION (SELECT aa ... 
UNION SELECT bb ...)\n\nWhich would be the equivalent of:\n\n> WITH RECURSIVE nodes(x) AS ( \n> (SELECT 59 \n> UNION\n> SELECT 42)\n> UNION -- Top most UNION operator\n> (SELECT aa FROM edge JOIN nodes ON bb=x \n> UNION \n> SELECT bb FROM edge JOIN nodes ON aa=x)\n> ) \n> SELECT x FROM nodes;\n\nI am not sure if this patch should introduce such a rewriting.\nAny thoughts on this?\n\nBest,\n –– Denis\n\n", "msg_date": "Mon, 30 Aug 2021 12:52:31 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> I am not sure if this patch should introduce such a rewriting.\n\nI have thought about this again. I think it is reasonable that this patch introduces\nsuch a rewriting.\n\n> This well-formedness check apparently needs to be enhanced to allow for more than two branches in the union.\n\nThe new version of this patch contains the necessary rewriting. The example from the SQLite documentation\ncan now be executed without explicit parentheses.\n\nBest,\n –– Denis", "msg_date": "Tue, 31 Aug 2021 10:16:47 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "The documentation was not up to date anymore with the most recent changes.\nThis version of the patch fixes that.\n\nBest,\n –– Denis", "msg_date": "Tue, 31 Aug 2021 16:48:30 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On 31.08.21 16:48, Denis Hirn wrote:\n> The documentation was not up to date anymore with the most recent changes.\n> This version of the patch fixes that.\n\nI studied up on the theory and terminology being discussed here. 
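As a side note for readers following along: the pull-up/grouping rewriting discussed in the preceding messages can be sketched in a few lines. This is a toy Python model with invented names (group_union_branches, plain tuples instead of SelectStmt nodes) — not the patch's actual C implementation in parse_cte.c:

```python
# Toy model of the branch-grouping rewrite under discussion.
# Branches of a left-deep UNION tree are given as (label, is_recursive)
# pairs in syntactic order; the representation is illustrative only.

def group_union_branches(branches):
    """Split UNION branches into (non_recursive, recursive) groups,
    refusing to reorder branches (which could change semantics)."""
    first_rec = next((i for i, (_, rec) in enumerate(branches) if rec),
                     len(branches))
    prefix, suffix = branches[:first_rec], branches[first_rec:]
    if any(not rec for _, rec in suffix):
        # a non-recursive branch follows a recursive one: pure
        # re-parenthesizing (tree rotation) cannot fix such a query
        raise ValueError("non-recursive term after recursive term")
    return [b for b, _ in prefix], [b for b, _ in suffix]

# ((59 UNION 42) UNION aa-step) UNION bb-step  -- the SQLite example
nonrec, rec = group_union_branches(
    [("SELECT 59", False), ("SELECT 42", False),
     ("SELECT aa ...", True), ("SELECT bb ...", True)])
print(nonrec, rec)
```

The ValueError branch captures the constraint that comes up later in the thread: branches are never reordered, so the rewrite only applies when all non-recursive branches form a syntactic prefix.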
I \nconclude that what the latest version of this patch is doing (allowing \nmultiple recursive references, but only in a linear-recursion way) is \nsound and useful.\n\nI haven't looked closely at the implementation changes yet. I'm not \nvery familiar with that part of the code, so it will take a bit longer. \nPerhaps you could explain what you are doing there, either in email or \n(even better) in the commit message.\n\nWhat I think this patch needs is a lot more test cases. I would like to \nsee more variants of different nestings of the UNION branches, some \nmixtures of UNION ALL and UNION DISTINCT, joins of the recursive CTE \nwith regular tables (like in the flights-and-trains examples), as well \nas various cases of what is not allowed, namely showing that it can \ncarefully prohibit non-linear recursion. Also, in one instance you \nmodify an existing test case. I'm not sure why. Perhaps it would be \nbetter to leave the existing test case alone and write a new one.\n\nYou also introduce this concept of reshuffling the UNION branches to \ncollect all the recursive terms under one subtree. That could use more \ntesting, like combinations like nonrec UNION rec UNION nonrec UNION rec \netc. Also, currently a query like this works\n\nWITH RECURSIVE t(n) AS (\n VALUES (1)\nUNION ALL\n SELECT n+1 FROM t WHERE n < 100\n)\nSELECT sum(n) FROM t;\n\nbut this doesn't:\n\nWITH RECURSIVE t(n) AS (\n SELECT n+1 FROM t WHERE n < 100\nUNION ALL\n VALUES (1)\n)\nSELECT sum(n) FROM t;\n\nWith your patch, the second should also work, so let's show some tests \nfor that as well.\n\n\n", "msg_date": "Mon, 13 Sep 2021 13:32:29 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> I studied up on the theory and terminology being discussed here. 
I conclude that what the latest version of this patch is doing (allowing multiple recursive references, but only in a linear-recursion way) is sound and useful.\n\nThat's great to hear!\n\n> I haven't looked closely at the implementation changes yet. I'm not very familiar with that part of the code, so it will take a bit longer. Perhaps you could explain what you are doing there, either in email or (even better) in the commit message.\n\nI have extended the commit message. Some more discussion (regarding tree rotation etc.) can be found\nin this mail, but also in the commit message.\n\n> What I think this patch needs is a lot more test cases. I would like to see more variants of different nestings of the UNION branches, some mixtures of UNION ALL and UNION DISTINCT, joins of the recursive CTE with regular tables (like in the flights-and-trains examples)\n\nYou are right, the testing is a bit sparse at the moment. I've added some more\ntest cases with the new version of this patch. Also I've improved the comments.\n\n> as well as various cases of what is not allowed, namely showing that it can carefully prohibit non-linear recursion.\n\nThe regression tests already feature tests against non-linear recursion.\nTherefore I did not add more myself.\n\n> Also, in one instance you modify an existing test case. I'm not sure why. Perhaps it would be better to leave the existing test case alone and write a new one.\n\nI agree that it would be better not to modify the existing test case, but the\nmodification is unavoidable. 
Here are the original queries:\n\n> -- non-linear recursion is not allowed\n> WITH RECURSIVE foo(i) AS\n> (values (1)\n> UNION ALL\n> (SELECT i+1 FROM foo WHERE i < 10\n> UNION ALL\n> SELECT i+1 FROM foo WHERE i < 5)\n> ) SELECT * FROM foo;\n\n> WITH RECURSIVE foo(i) AS\n> (values (1)\n> UNION ALL\n> SELECT * FROM\n> (SELECT i+1 FROM foo WHERE i < 10\n> UNION ALL\n> SELECT i+1 FROM foo WHERE i < 5) AS t\n> ) SELECT * FROM foo;\n\nThese two test cases are supposed to trigger the non-linear recursion check,\nand they do without this patch. However, with the modifications this patch\nintroduces, both queries are now valid input. That's because each recursive\nreference is placed inside a separate recursive UNION branch. This means that\nboth queries are linear recursive, and not non-linear recursive as they should be.\n\nTo make these tests check for non-linear recursion again, at least one\nUNION branch must contain more than one recursive reference. That's the\nmodification I did.\n\n\n\n> You also introduce this concept of reshuffling the UNION branches to collect all the recursive terms under one subtree.\n\nYes, but the reshuffling is only applied in a very specific situation. Example:\n\n> UNION ---> UNION\n> / \\ / \\\n> UNION C A UNION\n> / \\ / \\\n> A B B C\n\nA, B, and C are arbitrary SelectStmt nodes and can themselves be deeper nested\nUNION nodes.\n\nA is a non-recursive term in the WITH RECURSIVE query. B and C both contain a\nrecursive self-reference. The planner expects the top UNION node to contain\nthe non-recursive term in the larg, and the recursive term in the rarg.\nTherefore, the tree shape on the left is invalid and would result in an error\nmessage at the parsing stage.
However, by rotating the tree to the right, this\nproblem can be solved so that the valid tree shape on the right side is created.\n\nAll this does, really, is to re-parenthesize the expression:\n(A UNION B) UNION C ---> A UNION (B UNION C)\n\n\n\n> Also, currently a query like this works [...] but this doesn't:\n> \n> WITH RECURSIVE t(n) AS (\n> SELECT n+1 FROM t WHERE n < 100\n> UNION ALL\n> VALUES (1)\n> )\n> SELECT sum(n) FROM t;\n> \n> With your patch, the second should also work, so let's show some tests for that as well.\n\nWith just the tree rotation, the second query can not be fixed. The order of two\nnodes is never changed. And I think that this is a good thing. Consider the\nfollowing query:\n\n> WITH RECURSIVE t(n) AS (\n> VALUES (1)\n> UNION ALL\n> SELECT n+1 FROM t WHERE n < 100\n> UNION ALL\n> VALUES (2)\n> ) SELECT * FROM t LIMIT 100;\n\nIf we'd just collect all non-recursive UNION branches, the semantics of the\nsecond query would change. But changing the semantics of a query (or preventing\ncertain queries to be formulated at all) is not something I think this patch\nshould do. Therefore – I think – it's appropriate that the second query fails.\n\nBest,\n -- Denis", "msg_date": "Tue, 21 Sep 2021 13:35:13 +0200", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "\nOn 21.09.21 13:35, Denis Hirn wrote:\n>> Also, currently a query like this works [...] but this doesn't:\n>>\n>> WITH RECURSIVE t(n) AS (\n>> SELECT n+1 FROM t WHERE n < 100\n>> UNION ALL\n>> VALUES (1)\n>> )\n>> SELECT sum(n) FROM t;\n>>\n>> With your patch, the second should also work, so let's show some tests for that as well.\n> With just the tree rotation, the second query can not be fixed. The order of two\n> nodes is never changed. And I think that this is a good thing.
Consider the\n> following query:\n> \n>> WITH RECURSIVE t(n) AS (\n>> VALUES (1)\n>> UNION ALL\n>> SELECT n+1 FROM t WHERE n < 100\n>> UNION ALL\n>> VALUES (2)\n>> ) SELECT * FROM t LIMIT 100;\n> If we'd just collect all non-recursive UNION branches, the semantics of the\n> second query would change. But changing the semantics of a query (or preventing\n> certain queries to be formulated at all) is not something I think this patch\n> should do. Therfore – I think – it's appropriate that the second query fails.\n\nI have been studying this a bit more. I don't understand your argument \nhere. Why would this query have different semantics than, say\n\nWITH RECURSIVE t(n) AS (\n VALUES (1)\n UNION ALL\n VALUES (2)\n UNION ALL\n SELECT n+1 FROM t WHERE n < 100\n) SELECT * FROM t LIMIT 100;\n\nThe order of UNION branches shouldn't be semantically relevant.\n\nI suppose you put the LIMIT clause in there to make some point, but I \ndidn't get it. ;-)\n\nI also considered this example:\n\nWITH RECURSIVE t(n) AS (\n (VALUES (1) UNION ALL VALUES (2))\n UNION ALL\n SELECT n+1 FROM t WHERE n < 100\n)\nSELECT sum(n) FROM t;\n\nThis works fine without and with your patch.\n\nThis should be equivalent:\n\nWITH RECURSIVE t(n) AS (\n VALUES (1) UNION ALL (VALUES (2)\n UNION ALL\n SELECT n+1 FROM t WHERE n < 100)\n)\nSELECT sum(n) FROM t;\n\nBut this runs forever in current PostgreSQL 14 and 15. I'd have \nexpected your patch to convert this form to the previous form, but it \ndoesn't.\n\nI'm having difficulties understanding which subset of cases your patch \nwants to address.\n\n\n", "msg_date": "Tue, 4 Jan 2022 15:18:30 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "I have some separate questions on the executor changes. 
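An aside on the semantics debate above: the SQL-standard-style fixed-point evaluation — re-run the whole body against the result so far until nothing new appears — can be simulated in a few lines under UNION DISTINCT (set) semantics. This is a toy Python model with an invented branch representation, not PostgreSQL's executor:

```python
# Toy fixed-point evaluation of a recursive query body made of UNION
# DISTINCT branches. Each branch is a callable mapping the current
# result set to a set of rows; this is an illustration only.

def fixpoint(branches):
    """Iterate the whole body against the accumulated result until
    no new rows appear, then return the fixed point."""
    result = set()
    while True:
        new = set().union(*(b(result) for b in branches))
        if new <= result:
            return result
        result |= new

base1 = lambda t: {1}                          # VALUES (1)
base2 = lambda t: {2}                          # VALUES (2)
step = lambda t: {n + 1 for n in t if n < 5}   # SELECT n+1 FROM t WHERE n < 5

a = fixpoint([base1, base2, step])
b = fixpoint([base1, step, base2])  # branches in a different order
print(a == b, sorted(a))
```

Under set semantics the fixed point comes out the same for either branch order, which is the substance of Peter's point; with UNION ALL (bag) semantics, multiplicities and the order in which rows are produced can differ between groupings, which is closer to Denis's concern.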
Basically, this \nseems the right direction, but I wonder if some things could be clarified.\n\nI wonder why in ExecWorkTableScan() and ExecReScanWorkTableScan() you \ncall tuplestore_copy_read_pointer() instead of just \ntuplestore_select_read_pointer(). What is the special role of read \npointer 0 that you are copying. Before your changes, it was just the \nimplicit read pointer, but now that we have several, it would be good to \nexplain their relationship.\n\nAlso, the comment you deleted says \"Therefore, we don't need a private \nread pointer for the tuplestore, nor do we need to tell \ntuplestore_gettupleslot to copy.\" You addressed the first part with the \nread pointer handling, but what about the second part? The \ntuplestore_gettupleslot() call in WorkTableScanNext() still has \ncopy=false. Is this an oversight, or did you determine that copying is \nstill not necessary?\n\n\n", "msg_date": "Tue, 4 Jan 2022 15:24:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "> I have been studying this a bit more. I don't understand your argument here.\n> Why would this query have different semantics than, say\n>\n> WITH RECURSIVE t(n) AS (\n> VALUES (1)\n> UNION ALL\n> VALUES (2)\n> UNION ALL\n> SELECT n+1 FROM t WHERE n < 100\n> ) SELECT * FROM t LIMIT 100;\n>\n> The order of UNION branches shouldn't be semantically relevant.\n\nWITH RECURSIVE (ab)uses the (typically associative and commutative) UNION [ALL] clause,\nand fundamentally changes the semantics – associativity and commutativity no longer apply.\nI think your confusion stems from this ambiguity.\n\nLet me briefly reiterate WITH RECURSIVE. Basically, you always have a query like this:\n\n> WITH RECURSIVE w(c1,...) 
AS (\n> <non-recursive>\n> UNION [ALL]\n> <recursive>\n> ) q;\n\nThere must be a non-recursive part that does not refer to w, and -- without\nthe patch -- exactly one recursive part that refers to w. The non-recursive part,\nand the recursive part are combined using UNION [ALL].\n\nHowever, let's assume a different, unambiguous syntax just to make my point:\n\n> WITH RECURSIVE w(c1,...) AS (\n> <non-recursive>\n> RUNION [ALL]\n> <recursive>\n> ) q;\n\nEverything remains the same except that the non-recursive part and the recursive\npart are combined using RUNION [ALL] instead of UNION [ALL].\n\nNow let me rephrase your examples using this syntax:\n\n> WITH RECURSIVE t(n) AS (\n> (VALUES (1) UNION ALL VALUES (2))\n> RUNION ALL\n> SELECT n+1 FROM t WHERE n < 100\n> )\n> SELECT sum(n) FROM t;\n\n> WITH RECURSIVE t(n) AS (\n> VALUES (1) RUNION ALL (VALUES (2)\n> UNION ALL\n> SELECT n+1 FROM t WHERE n < 100)\n> )\n> SELECT sum(n) FROM t;\n\nI hope this shows that this is not the same. The first query has two non-recursive\ncases and one recursive case. The second query has two recursive cases instead.\n\nThe rewrites of those examples:\n\n> RUNION RUNION\n> / \\ good / \\\n> VALUES(1) UNION --> VALUES(1) UNION\n> / \\ / \\\n> SELECT n+1... VALUES(2) VALUES(2) SELECT n+1...\n\nThis rewrite would be valid. The patch would not do that, however.\n\n> RUNION RUNION\n> / \\ bad / \\\n> VALUES(1) UNION --> UNION SELECT n+1...\n> / \\ / \\\n> SELECT n+1... VALUES(2) VALUES(1) VALUES(2)\n\nThis is the rewrite you would expect. But it is not valid, because it changes semantics.\nTherefore the patch does not do that.\n\n> I'm having difficulties understanding which subset of cases your patch wants to address.\n\nWith this patch an extended WITH RECURSIVE syntax is enabled:\n\n> WITH RECURSIVE w(c1,...) AS (\n> <non-recursive>\n> UNION [ALL]\n> <recursive 1>\n> ...\n> UNION [ALL]\n> <recursive n>\n> ) q;\n\nBut really, it is:\n\n> WITH RECURSIVE w(c1,...) 
AS (\n> <non-recursive 1>\n> ...\n> <non-recursive n>\n> UNION [ALL]\n> <recursive n+1>\n> ...\n> UNION [ALL]\n> <recursive m>\n> ) q;\n\nWe can have arbitrarily many non-recursive branches (that is possible without the patch),\nas well as arbitrarily many recursive UNION [ALL] branches. Problem here: UNION [ALL]\nassociates to the left. This means that we end up with a left-deep parse tree, that looks\nsomething like this:\n\n> RUNION\n> / \\\n> ... m\n> /\n> UNION\n> / \\\n> n n+1\n> /\n> ...\n> /\n> UNION\n> / \\\n> 1 2\n\nThat is not correct, because all non-recursive branches must be part of the left\nRUNION branch, and all recursive branches must be part of the right one. Postgres\nperforms this check in parse_cte.c using the checkWellFormedRecursion() function.\n\nHaving said the above, let me once again define the rewrite case the patch implements:\n\n> RUNION RUNION\n> / \\ rotate / \\\n> ... n+m ---> UNION UNION\n> / / \\ / \\\n> UNION ... n n+1 UNION\n> / \\ / / \\\n> n n+1 UNION ... m\n> / / \\\n> ... 1 2\n> / \n> UNION\n> / \\\n> 1 2\n\nBy using tree rotation, the patch transforms the parse tree on the left\ninto the one on the right. All non-recursive terms 1..n are found in the\nleft branch of RUNION, all recursive terms n+1..m in the right branch.\nThis tree now passes the checkWellFormedRecursion() check.\nI hope this clarifies things.\n\n> The order of UNION branches shouldn't be semantically relevant.\n\nI agree. However, a strict distinction must be made between RUNION and UNION clauses.\nThese must not be interchanged.\n\n\n> I wonder why in ExecWorkTableScan() and ExecReScanWorkTableScan() you call\n> tuplestore_copy_read_pointer() instead of just tuplestore_select_read_pointer().\n\nThe WorkTableScan reads the working_table tuplestore of the parent RecursiveUnion\noperator. But after each iteration of the RecursiveUnion operator, the following\noperations are performed:\n\n> 122 /* done with old working table ... 
*/\n> 123 tuplestore_end(node->working_table); -- (1)\n> 124\n> 125 /* intermediate table becomes working table */\n> 126 node->working_table = node->intermediate_table; -- (2)\n> 127\n> 128 /* create new empty intermediate table */\n> 129 node->intermediate_table = tuplestore_begin_heap(false, false,\n> 130 work_mem); -- (3)\n\nhttps://doxygen.postgresql.org/nodeRecursiveunion_8c_source.html#l00122\n\nIn step (1), the current working_table is released. Therefore, all read pointers\nthat we had additionally allocated are gone, too. The intermediate table becomes\nthe working table in step (2), and finally a new empty intermediate table is\ncreated (3).\n\nBecause of step (1), we have to allocate new unique read pointers for each worktable\nscan again. Just using tuplestore_select_read_pointer() would be incorrect.\n\n> What is the special role of read pointer 0 that you are copying. Before your\n> changes, it was just the implicit read pointer, but now that we have several,\n> it would be good to explain their relationship.\n\nTo allocate a new read pointer, tuplestore_alloc_read_pointer() could also be used.\nHowever, we would have to know about the eflags parameter – which the worktable\nscan has no information about.\n\nThe important thing about read pointer 0 is that it always exists, and it is initialized correctly.\nTherefore, it is safe to copy read pointer 0 instead of creating a new one from scratch.\n\n\n> Also, the comment you deleted says \"Therefore, we don't need a private read pointer\n> for the tuplestore, nor do we need to tell tuplestore_gettupleslot to copy.\"\n> You addressed the first part with the read pointer handling, but what about the\n> second part? The tuplestore_gettupleslot() call in WorkTableScanNext() still\n> has copy=false. Is this an oversight, or did you determine that copying is\n> still not necessary?\n\nThat's an oversight. Copying is still not necessary.
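To make the read-pointer arrangement above concrete, here is a deliberately simplified model. ToyTuplestore and its methods are invented for illustration; the real implementation lives in src/backend/utils/sort/tuplestore.c and tracks more per-pointer state than a plain offset:

```python
# A toy tuplestore with multiple read pointers, illustrating why each
# worktable scan needs its own read position over the shared tuples.

class ToyTuplestore:
    def __init__(self, tuples):
        self.tuples = list(tuples)
        self.read_ptrs = [0]  # read pointer 0 always exists

    def copy_read_pointer(self, src=0):
        """Allocate a new read pointer as a copy of an existing one."""
        self.read_ptrs.append(self.read_ptrs[src])
        return len(self.read_ptrs) - 1

    def get_tuple(self, ptr):
        pos = self.read_ptrs[ptr]
        if pos >= len(self.tuples):
            return None  # this reader has exhausted the store
        self.read_ptrs[ptr] += 1
        return self.tuples[pos]

store = ToyTuplestore(["t1", "t2"])
scan_a = store.copy_read_pointer()
scan_b = store.copy_read_pointer()
a_rows = [store.get_tuple(scan_a), store.get_tuple(scan_a)]
b_rows = [store.get_tuple(scan_b), store.get_tuple(scan_b)]
print(a_rows, b_rows)
```

With one shared read position, the second scan would find the working table already exhausted; copying pointer 0 gives each scan an independent position over the same stored tuples.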
Copying would only be required,\nif additional writes to the tuplestore occur. But that can not happen here.\n\nBest,\n -- Denis\n\n\n\n", "msg_date": "Tue, 11 Jan 2022 12:33:17 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On 11.01.22 12:33, Denis Hirn wrote:\n>> I have been studying this a bit more. I don't understand your argument here.\n>> Why would this query have different semantics than, say\n>>\n>> WITH RECURSIVE t(n) AS (\n>> VALUES (1)\n>> UNION ALL\n>> VALUES (2)\n>> UNION ALL\n>> SELECT n+1 FROM t WHERE n < 100\n>> ) SELECT * FROM t LIMIT 100;\n>>\n>> The order of UNION branches shouldn't be semantically relevant.\n> \n> WITH RECURSIVE (ab)uses the (typically associative and commutative) UNION [ALL] clause,\n> and fundamentally changes the semantics – associativity and commutativity no longer apply.\n> I think your confusion stems from this ambiguity.\n\nThe language in the SQL standard does not support this statement. There \nis nothing in there that says that certain branches of the UNION in a \nrecursive query mean certain things. In fact, it doesn't even require \nthe query to contain a UNION at all. It just says to iterate on \nevaluating the query until a fixed point is reached. I think this \nsupports my claim that the associativity and commutativity of a UNION in \na recursive query still apply.\n\nThis is all very complicated, so I don't claim this to be authoritative, \nbut I just don't see anything in the spec that supports what you are saying.\n\n\n", "msg_date": "Fri, 14 Jan 2022 13:21:13 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "\n> On 14. 
Jan 2022, at 13:21, Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> \n> There is nothing in there that says that certain branches of the UNION in a recursive query mean certain things. In fact, it doesn't even require the query to contain a UNION at all. It just says to iterate on evaluating the query until a fixed point is reached. I think this supports my claim that the associativity and commutativity of a UNION in a recursive query still apply.\n> \n> This is all very complicated, so I don't claim this to be authoritative, but I just don't see anything in the spec that supports what you are saying.\n\n\nI disagree. In SQL:2016, it's discussed in 7.16 <query expression> syntax rule 3) j) i), which defines:\n\n> 3) Among the WQEi, ... WQEk of a given stratum, there shall be at least one <query expres-\n> sion>, say WQEj, such that:\n> A) WQEj is a <query expression body> that immediately contains UNION.\n> B) WQEj has one operand that does not contain a <query name> referencing any of WQNi,\n> ..., WQNk. This operand is said to be the non-recursive operand of WQEj.\n> C) WQEj is said to be an anchor expression, and WQNj an anchor name.\n\nWhere <query expression body> is defined as:\n> <query expression body> ::=\n> <query term>\n> | <query expression body> UNION [ALL | DISTINCT]\n> [ <corresponding spec> ] <query term>\n>\n> <query term> ::=\n> <query primary>\n> | ...\n>\n> <query primary> ::= ...\n> | <left paren> <query expression body> ... <right paren>\n\nThis definition pretty much sums up what I have called RUNION.\n\nThe SQL standard might not impose a strict order on the UNION branches.\nBut you have to be able to uniquely identify the anchor expression.\n\nBest,\n -- Denis\n\n", "msg_date": "Fri, 14 Jan 2022 15:55:48 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": " > On 14. 
Jan 2022, at 13:21, Peter Eisentraut \n<peter.eisentraut@enterprisedb.com> wrote:\n >\n > There is nothing in there that says that certain branches of the \nUNION in a recursive query mean certain things. In fact, it doesn't even \nrequire the query to contain a UNION at all.  It just says to iterate on \nevaluating the query until a fixed point is reached.  I think this \nsupports my claim that the associativity and commutativity of a UNION in \na recursive query still apply.\n >\n > This is all very complicated, so I don't claim this to be \nauthoritative, but I just don't see anything in the spec that supports \nwhat you are saying.\n\nPlease also have a look at SQL:2016, 7.16 <query expression> General \nRules 2) c),\nwhich defines the evaluation semantics of recursive queries. I think \nthat this part\nof the SQL standard refutes your argument.\n\nBest,\n   -- Denis\n\n\n", "msg_date": "Sat, 15 Jan 2022 11:01:09 +0100", "msg_from": "Denis Hirn <denis.hirn@uni-tuebingen.de>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "On 14.01.22 15:55, Denis Hirn wrote:\n>> There is nothing in there that says that certain branches of the UNION in a recursive query mean certain things. In fact, it doesn't even require the query to contain a UNION at all. It just says to iterate on evaluating the query until a fixed point is reached. I think this supports my claim that the associativity and commutativity of a UNION in a recursive query still apply.\n>>\n>> This is all very complicated, so I don't claim this to be authoritative, but I just don't see anything in the spec that supports what you are saying.\n> \n> I disagree. In SQL:2016, it's discussed in 7.16 <query expression> syntax rule 3) j) i), which defines:\n[actually 7.17]\n> \n>> 3) Among the WQEi, ... 
WQEk of a given stratum, there shall be at least one <query expres-\n>> sion>, say WQEj, such that:\n>> A) WQEj is a <query expression body> that immediately contains UNION.\n>> B) WQEj has one operand that does not contain a <query name> referencing any of WQNi,\n>> ..., WQNk. This operand is said to be the non-recursive operand of WQEj.\n>> C) WQEj is said to be an anchor expression, and WQNj an anchor name.\n> \n> Where <query expression body> is defined as:\n>> <query expression body> ::=\n>> <query term>\n>> | <query expression body> UNION [ALL | DISTINCT]\n>> [ <corresponding spec> ] <query term>\n>>\n>> <query term> ::=\n>> <query primary>\n>> | ...\n>>\n>> <query primary> ::= ...\n>> | <left paren> <query expression body> ... <right paren>\n> \n> This definition pretty much sums up what I have called RUNION.\n> \n> The SQL standard might not impose a strict order on the UNION branches.\n> But you have to be able to uniquely identify the anchor expression.\n\nRight, the above text does not impose any ordering of the UNION. It \njust means that it has to have an operand that it not recursive. I \nthink that is what I was trying to say.\n\nI don't understand what your RUNION examples are meant to show.\n\n\n", "msg_date": "Tue, 25 Jan 2022 11:03:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "The explanations below are satisfactory to me. I think the executor \nchanges in this patch are ok. 
But I would like someone else who has \nmore experience in this particular area to check it too; I'm not going \nto take committer responsibility for this by myself without additional \nreview.\n\nAs I said earlier, I think semantically/mathematically, the changes \nproposed by this patch set are okay.\n\nThe rewriting in the parse analysis is still being debated, but it can \nbe tackled in separate patches/commits.\n\n\nOn 11.01.22 12:33, Denis Hirn wrote:\n>> I wonder why in ExecWorkTableScan() and ExecReScanWorkTableScan() you call\n>> tuplestore_copy_read_pointer() instead of just tuplestore_select_read_pointer().\n> The WorkTableScan reads the working_table tuplestore of the parent RecursiveUnion\n> operator. But after each iteration of the RecursiveUnion operator, the following\n> operations are performed:\n> \n>> 122 /* done with old working table ... */\n>> 123 tuplestore_end(node->working_table); -- (1)\n>> 124\n>> 125 /* intermediate table becomes working table */\n>> 126 node->working_table = node->intermediate_table; -- (2)\n>> 127\n>> 128 /* create new empty intermediate table */\n>> 129 node->intermediate_table = tuplestore_begin_heap(false, false,\n>> 130 work_mem); -- (3)\n> https://doxygen.postgresql.org/nodeRecursiveunion_8c_source.html#l00122\n> \n> In step (1), the current working_table is released. Therefore, all read pointers\n> that we had additionally allocated are gone, too. The intermediate table becomes\n> the working table in step (2), and finally a new empty intermediate table is\n> created (3).\n> \n> Because of step (1), we have to allocate new unique read pointers for each worktable\n> scan again. Just using tuplestore_select_read_pointer() would be incorrect.\n> \n>> What is the special role of read pointer 0 that you are copying. 
Before your\n>> changes, it was just the implicit read pointer, but now that we have several,\n>> it would be good to explain their relationship.\n> To allocate a new read pointer, tuplestore_alloc_read_pointer() could also be used.\n> However, we would have to know about the eflags parameter – which the worktable\n> scan has no information about.\n> \n> The important thing about read pointer 0 is that it always exists, and it is initialized correctly.\n> Therefore, it is safe to copy read pointer 0 instead of creating a new one from scratch.\n> \n> \n>> Also, the comment you deleted says \"Therefore, we don't need a private read pointer\n>> for the tuplestore, nor do we need to tell tuplestore_gettupleslot to copy.\"\n>> You addressed the first part with the read pointer handling, but what about the\n>> second part? The tuplestore_gettupleslot() call in WorkTableScanNext() still\n>> has copy=false. Is this an oversight, or did you determine that copying is\n>> still not necessary?\n> That's an oversight. Copying is still not necessary. Copying would only be required\n> if additional writes to the tuplestore occur. But that cannot happen here.\n\n\n\n", "msg_date": "Tue, 25 Jan 2022 11:19:58 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> As I said earlier, I think semantically/mathematically, the changes \n> proposed by this patch set are okay.\n\nI took a quick look at this patch because I wondered how it would\naffect the SEARCH/CYCLE bug discussed at [1]. Doesn't it break\nrewriteSearchAndCycle() completely? 
That code assumes (without a\nlot of checking) that a recursive query is a UNION [ALL] of exactly\ntwo SELECTs.\n\nSome other random thoughts while I'm looking at it (not a full review):\n\n* The patch removes this comment in WorkTableScanNext:\n\n-\t * Note: we are also assuming that this node is the only reader of the\n-\t * worktable. Therefore, we don't need a private read pointer for the\n-\t * tuplestore, nor do we need to tell tuplestore_gettupleslot to copy.\n\nI see that it deals with the private-read-pointer question, but I do not\nsee any changes addressing the point about copying tuples fetched from the\ntuplestore. Perhaps there's no issue, but if not, a comment explaining\nwhy not would be appropriate.\n\n* The long comment added to checkWellFormedRecursion will be completely\ndestroyed by pgindent, but that's the least of its problems: it does\nnot explain either why we need a tree rotation or why that doesn't\nbreak SQL semantics. The shorter comment in front of it needs\nrewritten, too. And I'm not really convinced that the new loop\nis certain to terminate.\n\n* The chunk added to checkWellFormedSelectStmt is undercommented,\nbecause of which I'm not convinced that it's right at all. 
Since\nthat's really the meat of this patch, it needs more attention.\nAnd the new variable names are still impossibly confusing.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/17320-70e37868182512ab%40postgresql.org\n\n\n", "msg_date": "Sat, 23 Apr 2022 11:14:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3046/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 14:01:55 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow multiple recursive self-references" } ]
[ { "msg_contents": "Dear hacker:\r\n    I am an undergraduate from Nanjing University. I use the pgsql source code for my own development. While processing each SQL query in the function 'exec_simple_query', I am going to add some extra functions, such as index recommendation, which should be asynchronous with respect to the SQL query in order to keep query processing effective. So my question is whether pg has any mechanism supporting asynchronous processing while each SQL query is being processed?\r\n    I would also be pleased to dig into any detailed blogs or docs you can recommend. Looking forward to your reply!\r\n\r\n    
Yours, sincerely.", "msg_date": "Tue, 23 Mar 2021 23:38:05 +0800", "msg_from": "\"=?gb18030?B?0e7S3bTm?=\" <1057206466@qq.com>", "msg_from_op": true, "msg_subject": "Query about pg asynchronous processing support" }, { "msg_contents": "Hi,\n\nYou should search for information about Postgres hooks, like those used in\nthe extension pg_stat_statements\nhttps://github.com/postgres/postgres/blob/master/contrib/pg_stat_statements/pg_stat_statements.c\n\nAlso look at the background-worker capabilities, like those used in the extension pg_background\nhttps://github.com/vibhorkum/pg_background\n\nThere are also extensions working with indexes, such as HypoPG\nhttps://github.com/HypoPG/hypopg\n\nGood luck \nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Fri, 26 Mar 2021 11:27:02 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: Query about pg asynchronous processing support" } ]
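Neither reply in the thread above spells out what "asynchronous with respect to the query" looks like: a hook by itself runs synchronously inside the query path, so the usual trick is to have the hook merely enqueue work for a separate process (which is what a background worker provides). The following is a language-agnostic sketch of that decoupling in Python rather than C; all names here (`exec_simple_query_hook`, `analysis_worker`) are invented for illustration and are not PostgreSQL APIs.

```python
import queue
import threading

analysis_queue = queue.Queue()
recommendations = []

def analysis_worker():
    # Stands in for a background worker process: it runs outside the
    # query path and may take as long as it likes.
    while True:
        sql = analysis_queue.get()
        if sql is None:                    # shutdown sentinel
            analysis_queue.task_done()
            break
        # Placeholder for expensive analysis such as index recommendation.
        if "where" in sql.lower():
            recommendations.append("consider an index for: " + sql)
        analysis_queue.task_done()

threading.Thread(target=analysis_worker, daemon=True).start()

def exec_simple_query_hook(sql):
    # The hot path only enqueues and returns immediately; query
    # execution is never delayed by the analysis.
    analysis_queue.put(sql)
    return "executed: " + sql              # stand-in for normal execution
```

The design point is that the queue is the only contact between the two sides, so the query-processing latency cost is a single enqueue; in real PostgreSQL the analogous handoff would be a shared-memory queue between a hook and a background worker.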
[ { "msg_contents": "psql seems to never call clearerr() on its output file. So if it gets\nan error while printing a result, it'll show\n\ncould not print result table: Success\n\nafter each and every result, even though the output file isn't in error\nstate anymore.\n\nIt seems that the simplest fix is just to do clearerr() at the start of\nprintTable(), as in the attached.\n\nI haven't been able to find a good reproducer. Sometimes doing C-s C-c\ndoes it, but I'm not sure it is fully reproducible.\n\n-- \nÁlvaro Herrera Valdivia, Chile", "msg_date": "Wed, 24 Mar 2021 11:11:41 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "psql lacking clearerr()" }, { "msg_contents": "At Wed, 24 Mar 2021 11:11:41 -0300, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote in \n> psql seems to never call clearerr() on its output file. So if it gets\n> an error while printing a result, it'll show\n> \n> could not print result table: Success\n> \n> after each and every result, even though the output file isn't in error\n> state anymore.\n> \n> It seems that the simplest fix is just to do clearerr() at the start of\n> printTable(), as in the attached.\n> \n> I haven't been able to find a good reproducer. 
Sometimes doing C-s C-c\ndoes it, but I'm not sure it is fully reproducible.\n\nThat worked for me:p And the following steps always raise that error.\n\npostgres=# select 1; (just to let it into history).\npostgres=# C-s -> C-p -> C-m -> C-c\npostgres=# select 1;\n...\ncould not print result table: Success\n\nAnd actually the patch works, and the location looks appropriate.\n\nBy the way, I think errno is not set when f* functions fail, so anyway\nisn't %m in the messages useless?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 25 Mar 2021 15:31:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql lacking clearerr()" }, { "msg_contents": "On 2021-Mar-25, Kyotaro Horiguchi wrote:\n\n> That worked for me:p And the following steps always raise that error.\n> \n> postgres=# select 1; (just to let it into history).\n> postgres=# C-s -> C-p -> C-m -> C-c\n> postgres=# select 1;\n> ...\n> could not print result table: Success\n\nAh, thanks! Indeed this reliably reproduces the issue. I was very\nsurprised to find out that this bug is new in pg13. But then I bisected\nit down to this commit:\n\ncommit b03436994bcc4909dd644fd5ae6d9a9acdf30da5\nAuthor: Peter Eisentraut <peter@eisentraut.org>\nAuthorDate: Fri Mar 20 16:04:15 2020 +0100\nCommitDate: Fri Mar 20 16:04:15 2020 +0100\n\n psql: Catch and report errors while printing result table\n \n Errors (for example I/O errors or disk full) while printing out result\n tables were completely ignored, which could result in silently\n truncated output in scripts, for example. Fix by adding some basic\n error checking and reporting.\n \n Author: Daniel Verite <daniel@manitou-mail.org>\n Author: David Zhang <david.zhang@highgo.ca>\n Discussion: https://www.postgresql.org/message-id/flat/9a0b3c8d-ee14-4b1d-9d0a-2c993bdabacc@manitou-mail.org\n\n\nwhich is where this message was added. 
So it turns out that this has\n*always* been a problem ... we just didn't know.\n\nDue to lack of complaints, I'm inclined to apply this only back to pg13.\n\n(And, yes, I'm to remove the %m too, because clearly that was a mistake.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Right now the sectors on the hard disk run clockwise, but I heard a rumor that\nyou can squeeze 0.2% more throughput by running them counterclockwise.\nIt's worth the effort. Recommended.\" (Gerry Pourwelle)\n\n\n", "msg_date": "Mon, 29 Mar 2021 18:05:46 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: psql lacking clearerr()" }, { "msg_contents": "On 2021-Mar-29, Alvaro Herrera wrote:\n\n> (And, yes, I'm to remove the %m too, because clearly that was a mistake.)\n\nRe-reading the other thread, I think the %m should stay.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living f*ck out of me.\" (JWZ)\n\n\n", "msg_date": "Mon, 29 Mar 2021 18:21:39 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "Re: psql lacking clearerr()" } ]
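For readers unfamiliar with the stdio behavior behind the bug above: a stream's error indicator is sticky, in that once a failed write sets it, ferror() keeps reporting it until clearerr() is called, while errno can meanwhile be reset to 0 by any later successful call, which is exactly how "could not print result table: Success" arises. Below is a hypothetical Python simulation of that sticky flag, not psql code; the class and function names are invented for illustration.

```python
class StickyStream:
    """Mimics a C stdio stream's sticky error indicator."""
    def __init__(self):
        self.error = False       # plays the role of the ferror() flag
        self.lines = []

    def write(self, text, fail=False):
        if fail:
            self.error = True    # an I/O error sets the flag ...
        else:
            self.lines.append(text)
        # ... and nothing ever resets it implicitly

    def clearerr(self):
        self.error = False

def print_table(stream, rows):
    stream.clearerr()            # the fix: start each table with a clean slate
    for row in rows:
        stream.write(row)
    return not stream.error      # False would mean "could not print result table"
```

Without the `clearerr()` call at the top, one failed write from an earlier query would make every later `print_table()` report a stale failure, which is the behavior the patch in this thread removes.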
[ { "msg_contents": "Hi,\n\nI got a few questions about the wal stats while working on the shmem\nstats patch:\n\n1) What is the motivation to have both prevWalUsage and pgWalUsage,\n instead of just accumulating the stats since the last submission?\n There doesn't seem to be any comment explaining it? Computing\n differences to previous values, and copying to prev*, isn't free. I\n assume this is out of a fear that the stats could get reset before\n they're used for instrument.c purposes - but who knows?\n\n2) Why is there both pgstat_send_wal() and pgstat_report_wal()? With the\n former being used by wal writer, the latter by most other processes?\n There again don't seem to be comments explaining this.\n\n3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n just to figure out if there's been any changes isn't all that\n cheap. This is regularly exercised in read-only workloads too. Seems\n adding a boolean WalStats.have_pending = true or such would be\n better.\n\n4) For plain backends pgstat_report_wal() is called by\n pgstat_report_stat() - but it is not checked as part of the \"Don't\n expend a clock check if nothing to do\" check at the top. 
It's\n probably rare to have pending wal stats without also passing one of\n the other conditions, but ...\n\nGenerally the various patches seem to have a lot of the boilerplate\nstyle comments (like \"Prepare and send the message\"), but very little in\nthe way of explaining the design.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Mar 2021 16:22:24 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "wal stats questions" }, { "msg_contents": "\n\nOn 2021/03/25 8:22, Andres Freund wrote:\n> Hi,\n> \n> I got a few questions about the wal stats while working on the shmem\n> stats patch:\n\nThanks for your reviews.\n\n\n> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n> instead of just accumulating the stats since the last submission?\n> There doesn't seem to be any comment explaining it? Computing\n> differences to previous values, and copying to prev*, isn't free. I\n> assume this is out of a fear that the stats could get reset before\n> they're used for instrument.c purposes - but who knows?\n\nIs your point that it's better to call pgWalUsage = 0; after sending the\nstats? My understanding is the same as your assumption. For example,\npg_stat_statements.c uses pgWalUsage and calculates the diff.\n\nBut, because the stats may be sent after the transaction is finished, it\ndoesn't seem to lead to wrong stats if pgWalUsage = 0 is called. So, I agree with your\nsuggestion.\n\nIf the extension wants to know the walusage diff across two transactions,\nit may lead to wrong stats, but I think that won't happen.\n\n\n> 2) Why is there both pgstat_send_wal() and pgstat_report_wal()? 
With the\n> former being used by wal writer, the latter by most other processes?\n> There again don't seem to be comments explaining this.\n\nTo control the transmission interval for the wal writer, because it may send\nthe stats too frequently, and to omit calculating the generated wal stats\nbecause it doesn't generate wal records. But, now I think it's better to merge\nthem.\n\n\n\n> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> just to figure out if there's been any changes isn't all that\n> cheap. This is regularly exercised in read-only workloads too. Seems\n> adding a boolean WalStats.have_pending = true or such would be\n> better.\n\nI understood that for backends, this may lead to performance degradation, and\nthis problem is not only for the WalStats but also the SLRUStats.\n\nI wonder whether the degradation is really so big, because pgstat_report_stat() checks if at\nleast PGSTAT_STAT_INTERVAL msec has passed since it last sent stats, before checking the\ndiff. If my understanding is correct, getting a timestamp is more expensive.\n\n\n\n> 4) For plain backends pgstat_report_wal() is called by\n> pgstat_report_stat() - but it is not checked as part of the \"Don't\n> expend a clock check if nothing to do\" check at the top. It's\n> probably rare to have pending wal stats without also passing one of\n> the other conditions, but ...\n\n(I'm not confident my understanding of your comment is right.)\nYou mean that we need to expend a clock check in pgstat_report_wal()?\nI think it's unnecessary because pgstat_report_stat() has already checked it.\n\n\n\n> Generally the various patches seem to have a lot of the boilerplate\n> style comments (like \"Prepare and send the message\"), but very little in\n> the way of explaining the design.\n\nSorry for that. 
I'll be careful.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 10:51:56 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Hi,\n\nOn 2021-03-25 10:51:56 +0900, Masahiro Ikeda wrote:\n> On 2021/03/25 8:22, Andres Freund wrote:\n> > 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n> > instead of just accumulating the stats since the last submission?\n> > There doesn't seem to be any comment explaining it? Computing\n> > differences to previous values, and copying to prev*, isn't free. I\n> > assume this is out of a fear that the stats could get reset before\n> > they're used for instrument.c purposes - but who knows?\n>\n> Is your point that it's better to call pgWalUsage = 0; after sending the\n> stats?\n\nYes. At least there should be a comment explaining why it's done the way\nit is.\n\n\n\n> > 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> > just to figure out if there's been any changes isn't all that\n> > cheap. This is regularly exercised in read-only workloads too. Seems\n> > adding a boolean WalStats.have_pending = true or such would be\n> > better.\n>\n> I understood that for backends, this may leads performance degradation and\n> this problem is not only for the WalStats but also SLRUStats.\n>\n> I wondered the degradation is so big because pgstat_report_stat() checks if at\n> least PGSTAT_STAT_INTERVAL msec is passed since it last sent before check the\n> diff. If my understanding is correct, to get timestamp is more expensive.\n\nGetting a timestamp is expensive, yes. But I think we need to check for\nthe no-pending-wal-stats *before* the clock check. 
See below:\n\n\n> > 4) For plain backends pgstat_report_wal() is called by\n> > pgstat_report_stat() - but it is not checked as part of the \"Don't\n> > expend a clock check if nothing to do\" check at the top. It's\n> > probably rare to have pending wal stats without also passing one of\n> > the other conditions, but ...\n>\n> (I'm not confident my understanding of your comment is right.)\n> You mean that we need to expend a clock check in pgstat_report_wal()?\n> I think it's unnecessary because pgstat_report_stat() has already checked it.\n\nNo, I mean that right now we might erroneously return early from\npgstat_report_wal() for normal backends in some workloads:\n\nvoid\npgstat_report_stat(bool disconnect)\n...\n\t/* Don't expend a clock check if nothing to do */\n\tif ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n\t\tpgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n\t\t!have_function_stats && !disconnect)\n\t\treturn;\n\nmight return if there only is pending WAL activity. This needs to check\nwhether there are pending WAL stats. Which in turn means that the check\nshould be fast.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 24 Mar 2021 21:07:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "At Wed, 24 Mar 2021 21:07:26 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2021-03-25 10:51:56 +0900, Masahiro Ikeda wrote:\n> > On 2021/03/25 8:22, Andres Freund wrote:\n> > > 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n> > > instead of just accumulating the stats since the last submission?\n> > > There doesn't seem to be any comment explaining it? Computing\n> > > differences to previous values, and copying to prev*, isn't free. 
I\n> > > assume this is out of a fear that the stats could get reset before\n> > > they're used for instrument.c purposes - but who knows?\n> >\n> > Is your point that it's better to call pgWalUsage = 0; after sending the\n> > stats?\n> \n> Yes. At least there should be a comment explaining why it's done the way\n> it is.\n\npgWalUsage was used without resetting and we (I) thought that that\nbehavior should be preserved. On second thought, as Andres suggested,\nwe can just reset pgWalUsage at sending since AFAICS no one takes\ndifference crossing pgstat_report_stat() calls.\n\n> > > 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> > > just to figure out if there's been any changes isn't all that\n> > > cheap. This is regularly exercised in read-only workloads too. Seems\n> > > adding a boolean WalStats.have_pending = true or such would be\n> > > better.\n> >\n> > I understood that for backends, this may leads performance degradation and\n> > this problem is not only for the WalStats but also SLRUStats.\n> >\n> > I wondered the degradation is so big because pgstat_report_stat() checks if at\n> > least PGSTAT_STAT_INTERVAL msec is passed since it last sent before check the\n> > diff. If my understanding is correct, to get timestamp is more expensive.\n> \n> Getting a timestamp is expensive, yes. But I think we need to check for\n> the no-pending-wal-stats *before* the clock check. See the below:\n> \n> \n> > > 4) For plain backends pgstat_report_wal() is called by\n> > > pgstat_report_stat() - but it is not checked as part of the \"Don't\n> > > expend a clock check if nothing to do\" check at the top. 
It's\n> > > probably rare to have pending wal stats without also passing one of\n> > > the other conditions, but ...\n> >\n> > (I'm not confidence my understanding of your comment is right.)\n> > You mean that we need to expend a clock check in pgstat_report_wal()?\n> > I think it's unnecessary because pgstat_report_stat() is already checked it.\n> \n> No, I mean that right now we might can erroneously return early\n> pgstat_report_wal() for normal backends in some workloads:\n> \n> void\n> pgstat_report_stat(bool disconnect)\n> ...\n> \t/* Don't expend a clock check if nothing to do */\n> \tif ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n> \t\tpgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n> \t\t!have_function_stats && !disconnect)\n> \t\treturn;\n> \n> might return if there only is pending WAL activity. This needs to check\n> whether there are pending WAL stats. Which in turn means that the check\n> should be fast.\n\nAgreed that the condition is wrong. On the other hand, the counters\nare incremented in XLogInsertRecord() and I think we don't want add\ninstructions there.\n\nIf any wal activity has been recorded, wal_records is always positive\nthus we can check for wal activity just by \"pgWalUsage.wal_records >\n0, which should be fast enough.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 25 Mar 2021 16:37:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/03/25 16:37, Kyotaro Horiguchi wrote:\n> At Wed, 24 Mar 2021 21:07:26 -0700, Andres Freund <andres@anarazel.de> wrote in\n>> Hi,\n>>\n>> On 2021-03-25 10:51:56 +0900, Masahiro Ikeda wrote:\n>>> On 2021/03/25 8:22, Andres Freund wrote:\n>>>> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n>>>> instead of just accumulating the stats since the last submission?\n>>>> There doesn't seem to be any 
comment explaining it? Computing\n>>>> differences to previous values, and copying to prev*, isn't free. I\n>>>> assume this is out of a fear that the stats could get reset before\n>>>> they're used for instrument.c purposes - but who knows?\n>>>\n>>> Is your point that it's better to call pgWalUsage = 0; after sending the\n>>> stats?\n>>\n>> Yes. At least there should be a comment explaining why it's done the way\n>> it is.\n> \n> pgWalUsage was used without resetting and we (I) thought that that\n> behavior should be preserved. On second thought, as Andres suggested,\n> we can just reset pgWalUsage at sending since AFAICS no one takes\n> difference crossing pgstat_report_stat() calls.\n\nYes, I agree that we can do that since there seems no such code for now.\nAlso if we do that, we can check, for example \"pgWalUsage.wal_records > 0\"\nas you suggested, to easily determine whether there is pending WAL stats or not.\nAnyway I agree it's better to add comments about the design more.\n\n\n>>>> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n>>>> just to figure out if there's been any changes isn't all that\n>>>> cheap. This is regularly exercised in read-only workloads too. Seems\n>>>> adding a boolean WalStats.have_pending = true or such would be\n>>>> better.\n>>>\n>>> I understood that for backends, this may leads performance degradation and\n>>> this problem is not only for the WalStats but also SLRUStats.\n>>>\n>>> I wondered the degradation is so big because pgstat_report_stat() checks if at\n>>> least PGSTAT_STAT_INTERVAL msec is passed since it last sent before check the\n>>> diff. If my understanding is correct, to get timestamp is more expensive.\n>>\n>> Getting a timestamp is expensive, yes. But I think we need to check for\n>> the no-pending-wal-stats *before* the clock check. 
See below:\n>>\n>>\n>>>> 4) For plain backends pgstat_report_wal() is called by\n>>>> pgstat_report_stat() - but it is not checked as part of the \"Don't\n>>>> expend a clock check if nothing to do\" check at the top. It's\n>>>> probably rare to have pending wal stats without also passing one of\n>>>> the other conditions, but ...\n>>>\n>>> (I'm not confident my understanding of your comment is right.)\n>>> You mean that we need to expend a clock check in pgstat_report_wal()?\n>>> I think it's unnecessary because pgstat_report_stat() has already checked it.\n>>\n>> No, I mean that right now we might erroneously return early from\n>> pgstat_report_wal() for normal backends in some workloads:\n>>\n>> void\n>> pgstat_report_stat(bool disconnect)\n>> ...\n>> \t/* Don't expend a clock check if nothing to do */\n>> \tif ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n>> \t\tpgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n>> \t\t!have_function_stats && !disconnect)\n>> \t\treturn;\n>>\n>> might return if there only is pending WAL activity. This needs to check\n>> whether there are pending WAL stats. Which in turn means that the check\n>> should be fast.\n> \n> Agreed that the condition is wrong. On the other hand, the counters\n> are incremented in XLogInsertRecord() and I think we don't want to add\n> instructions there.\n\nBasically yes. We should avoid that especially while WALInsertLock is being held.\nBut it's not so harmful to do that after the lock is released?\n\n> If any wal activity has been recorded, wal_records is always positive,\n> thus we can check for wal activity just by \"pgWalUsage.wal_records >\n> 0\", which should be fast enough.\n\nMaybe there is the case where a backend generates no WAL records,\nbut just writes them because it needs to do write-ahead-logging\nwhen flushing the table data? 
If yes, \"pgWalUsage.wal_records > 0\" is not enough.\nProbably other fields also need to be checked.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 25 Mar 2021 19:01:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "At Thu, 25 Mar 2021 19:01:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2021/03/25 16:37, Kyotaro Horiguchi wrote:\n> > pgWalUsage was used without resetting and we (I) thought that that\n> > behavior should be preserved. On second thought, as Andres suggested,\n> > we can just reset pgWalUsage at sending since AFAICS no one takes\n> > difference crossing pgstat_report_stat() calls.\n> \n> Yes, I agree that we can do that since there seems no such code for\n> now.\n> Also if we do that, we can check, for example \"pgWalUsage.wal_records\n> > 0\"\n> as you suggested, to easily determine whether there is pending WAL\n> stats or not.\n> Anyway I agree it's better to add comments about the design more.\n...\n> > If any wal activity has been recorded, wal_records is always positive,\n> > thus we can check for wal activity just by \"pgWalUsage.wal_records >\n> > 0\", which should be fast enough.\n> \n> Maybe there is the case where a backend generates no WAL records,\n> but just writes them because it needs to do write-ahead-logging\n> when flushing the table data? If yes, \"pgWalUsage.wal_records > 0\" is not\n> enough.\n> Probably other fields also need to be checked.\n\n(I noticed I made the discussion above unconsciously premised on\npgWalUsage being reset.)\n\nI may be misunderstanding or missing something, but the only place\nwhere pgWalUsage counters are increased is XLogInsertRecord(). That is,\npgWalUsage counts wal insertions, not writes nor flushes. 
AFAICS\npgWalUsage.wal_records is always incremented when other counters are\nincreased. Looking from another side, we should refrain from adding\ncounters that increase at a different time than\npgWalUsage.wal_records. (That should be written as a comment there.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:08:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/03/26 10:08, Kyotaro Horiguchi wrote:\n> At Thu, 25 Mar 2021 19:01:23 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2021/03/25 16:37, Kyotaro Horiguchi wrote:\n>>> pgWalUsage was used without resetting and we (I) thought that that\n>>> behavior should be preserved. On second thought, as Andres suggested,\n>>> we can just reset pgWalUsage at sending since AFAICS no one takes\n>>> difference crossing pgstat_report_stat() calls.\n>>\n>> Yes, I agree that we can do that since there seems no such code for\n>> now.\n>> Also if we do that, we can check, for example \"pgWalUsage.wal_records\n>>> 0\"\n>> as you suggested, to easily determine whether there is pending WAL\n>> stats or not.\n>> Anyway I agree it's better to add comments about the design more.\n> ...\n>>> If any wal activity has been recorded, wal_records is always positive,\n>>> thus we can check for wal activity just by \"pgWalUsage.wal_records >\n>>> 0\", which should be fast enough.\n>>\n>> Maybe there is the case where a backend generates no WAL records,\n>> but just writes them because it needs to do write-ahead-logging\n>> when flushing the table data? 
If yes, \"pgWalUsage.wal_records > 0\" is not\n>> enough.\n>> Probably other fields also need to be checked.\n> \n> (I noticed I made the discussion above unconsciously premised on\n> pgWalUsage being reset.)\n> \n> I may be misunderstanding or missing something, but the only place\n> where pgWalUsage counters are increased is XLogInsertRecord(). That is,\n> pgWalUsage counts wal insertions, not writes nor flushes. AFAICS\n\nYes. And WalStats, unlike pgWalUsage, includes the stats about\nnot only WAL insertions, but also writes and flushes.\npgstat_send_wal() checks WalStats to determine whether there are\n
That is,\n> the counters of not only WAL insertions but also writes and flushes\n> should be checked to determine whether there are pending stats or not,\n> I think..\n\nI think we may have an additional flag to notify about io-stat part,\nin constrast to wal-insertion part . Anyway we do additional\nINSTR_TIME_SET_CURRENT when track_wal_io_timinge.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 26 Mar 2021 12:47:49 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Thanks for many your suggestions!\nI made the patch to handle the issues.\n\n> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n> instead of just accumulating the stats since the last submission?\n> There doesn't seem to be any comment explaining it? Computing\n> differences to previous values, and copying to prev*, isn't free. I\n> assume this is out of a fear that the stats could get reset before\n> they're used for instrument.c purposes - but who knows?\n\nI removed the unnecessary code copying pgWalUsage and just reset the\npgWalUsage after reporting the stats in pgstat_report_wal().\n\n\n> 2) Why is there both pgstat_send_wal() and pgstat_report_wal()? With the\n> former being used by wal writer, the latter by most other processes?\n> There again don't seem to be comments explaining this.\n\nI added the comments why two functions are separated.\n(But is it better to merge them?)\n\n\n> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> just to figure out if there's been any changes isn't all that\n> cheap. This is regularly exercised in read-only workloads too. 
Seems\n> adding a boolean WalStats.have_pending = true or such would be\n> better.\n> 4) For plain backends pgstat_report_wal() is called by\n> pgstat_report_stat() - but it is not checked as part of the \"Don't\n> expend a clock check if nothing to do\" check at the top. It's\n> probably rare to have pending wal stats without also passing one of\n> the other conditions, but ...\n\nI added the logic to check if the stats counters are updated or not in\npgstat_report_stat() using not only generated wal record but also write/sync\ncounters, and it can skip to call reporting function.\n\nAlthough I added the condition which the write/sync counters are updated or\nnot, I haven't understood the following case yet...Sorry. I checked related\ncode and tested to insert large object, but I couldn't. I'll investigate more\ndeeply, but if you already know the function which leads the following case,\nplease let me know.\n\n> Maybe there is the case where a backend generates no WAL records,\n> but just writes them because it needs to do write-ahead-logging\n> when flush the table data?\n\n> Ugh! I was missing a very large blob.. Ok, we need additional check\n> for non-pgWalUsage part. Sorry.\n\n\nRegards,\n\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 26 Mar 2021 16:20:04 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Hi,\n\nOn 2021-03-25 16:37:10 +0900, Kyotaro Horiguchi wrote:\n> On the other hand, the counters are incremented in XLogInsertRecord()\n> and I think we don't want add instructions there.\n\nI don't really buy this. Setting a boolean to true, in a cacheline\nyou're already touching, isn't that much compared to all the other stuff\nin there. The branch to check if wal stats timing etc is enabled is much\nmore expensive. 
I think we should just set a boolean to true and leave\nit at that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:07:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "At Fri, 26 Mar 2021 10:07:45 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2021-03-25 16:37:10 +0900, Kyotaro Horiguchi wrote:\n> > On the other hand, the counters are incremented in XLogInsertRecord()\n> > and I think we don't want add instructions there.\n> \n> I don't really buy this. Setting a boolean to true, in a cacheline\n> you're already touching, isn't that much compared to all the other stuff\n> in there. The branch to check if wal stats timing etc is enabled is much\n> more expensive. I think we should just set a boolean to true and leave\n> it at that.\n\nHmm. Yes, I agree to you in that opinion. I (remember I) was told not\nto add even a cycle to the hot path as far as we can avoid when I\ntried something like that.\n\nSo I'm happy to +1 for that if it is the consensus here, since it is\ncleaner.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 29 Mar 2021 11:09:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "At Mon, 29 Mar 2021 11:09:00 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 26 Mar 2021 10:07:45 -0700, Andres Freund <andres@anarazel.de> wrote in \n> > Hi,\n> > \n> > On 2021-03-25 16:37:10 +0900, Kyotaro Horiguchi wrote:\n> > > On the other hand, the counters are incremented in XLogInsertRecord()\n> > > and I think we don't want add instructions there.\n> > \n> > I don't really buy this. Setting a boolean to true, in a cacheline\n> > you're already touching, isn't that much compared to all the other stuff\n> > in there. 
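For illustration only (this sketch is not part of the thread): the boolean-flag approach Andres argues for can be outlined in standalone C. The struct and function names below are invented stand-ins, not the actual pgstat definitions; the point is that the hot path pays one extra store into a cacheline it already touches, while the "anything pending?" check becomes a single load instead of a memcmp against an all-zeroes struct.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Invented stand-in for PgStat_MsgWal -- not the real PostgreSQL struct. */
typedef struct SketchWalStats
{
    uint64_t wal_records;
    uint64_t wal_write;
    uint64_t wal_sync;
    bool     have_pending;      /* set whenever any counter above is bumped */
} SketchWalStats;

static SketchWalStats WalStatsSketch;

/* Hot path: one extra store into a cacheline we already touch. */
static inline void
sketch_count_wal_record(void)
{
    WalStatsSketch.wal_records++;
    WalStatsSketch.have_pending = true;
}

/* Cheap pending check: no memcmp against an all-zeroes struct. */
static inline bool
sketch_wal_stats_pending(void)
{
    return WalStatsSketch.have_pending;
}

/* After a successful send, clear the counters and the flag together. */
static inline void
sketch_wal_stats_clear(void)
{
    memset(&WalStatsSketch, 0, sizeof(WalStatsSketch));
}
```

Because the flag lives in the same struct as the counters, the existing reset (a memset of the whole message struct) clears it for free after each send.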
The branch to check if wal stats timing etc is enabled is much\n> > more expensive. I think we should just set a boolean to true and leave\n> > it at that.\n> \n> Hmm. Yes, I agree to you in that opinion. I (remember I) was told not\n\nIt might sound differently.. To be precise, \"I had the same opinion\nwith you\".\n\n> to add even a cycle to the hot path as far as we can avoid when I\n> tried something like that.\n> \n> So I'm happy to +1 for that if it is the consensus here, since it is\n> cleaner.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 29 Mar 2021 11:11:00 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "I update the patch since there were my misunderstanding points.\n\nOn 2021/03/26 16:20, Masahiro Ikeda wrote:\n> Thanks for many your suggestions!\n> I made the patch to handle the issues.\n> \n>> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n>> instead of just accumulating the stats since the last submission?\n>> There doesn't seem to be any comment explaining it? Computing\n>> differences to previous values, and copying to prev*, isn't free. I\n>> assume this is out of a fear that the stats could get reset before\n>> they're used for instrument.c purposes - but who knows?\n> \n> I removed the unnecessary code copying pgWalUsage and just reset the\n> pgWalUsage after reporting the stats in pgstat_report_wal().\n\nI didn't change this.\n\n>> 2) Why is there both pgstat_send_wal() and pgstat_report_wal()? 
With the\n>> former being used by wal writer, the latter by most other processes?\n>> There again don't seem to be comments explaining this.\n> \n> I added the comments why two functions are separated.\n> (But is it better to merge them?)\n\nI updated the comments for following reasons.\n\n\n>> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n>> just to figure out if there's been any changes isn't all that\n>> cheap. This is regularly exercised in read-only workloads too. Seems\n>> adding a boolean WalStats.have_pending = true or such would be\n>> better.\n>> 4) For plain backends pgstat_report_wal() is called by\n>> pgstat_report_stat() - but it is not checked as part of the \"Don't\n>> expend a clock check if nothing to do\" check at the top. It's\n>> probably rare to have pending wal stats without also passing one of\n>> the other conditions, but ...\n> \n> I added the logic to check if the stats counters are updated or not in\n> pgstat_report_stat() using not only generated wal record but also write/sync\n> counters, and it can skip to call reporting function.\n\nI removed the checking code whether the wal stats counters were updated or not\nin pgstat_report_stat() since I couldn't understand the valid case yet.\npgstat_report_stat() is called by backends when the transaction is finished.\nThis means that the condition seems always pass.\n\nI thought to implement if the WAL stats counters were not updated, skip to\nsend all statistics including the counters for databases and so on. But I\nthink it's not good because it may take more time to be reflected the\ngenerated stats by read-only transaction.\n\n\n> Although I added the condition which the write/sync counters are updated or\n> not, I haven't understood the following case yet...Sorry. I checked related\n> code and tested to insert large object, but I couldn't. 
I'll investigate more\n> deeply, but if you already know the function which leads the following case,\n> please let me know.\n\nI understood the above case (Fujii-san, thanks for your advice in person).\nWhen to flush buffers, for example, to select buffer replacement victim,\nit requires a WAL flush if the buffer is dirty. So, to check the WAL stats\ncounters are updated or not, I check the number of generated wal record and\nthe counter of syncing in pgstat_report_wal().\n\nThe reason why not to check the counter of writing is that if to write is\nhappened, to sync is happened too in the above case. I added the comments in\nthe patch.\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 30 Mar 2021 09:41:24 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "At Tue, 30 Mar 2021 09:41:24 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n> I update the patch since there were my misunderstanding points.\n> \n> On 2021/03/26 16:20, Masahiro Ikeda wrote:\n> > Thanks for many your suggestions!\n> > I made the patch to handle the issues.\n> > \n> >> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n> >> instead of just accumulating the stats since the last submission?\n> >> There doesn't seem to be any comment explaining it? Computing\n> >> differences to previous values, and copying to prev*, isn't free. I\n> >> assume this is out of a fear that the stats could get reset before\n> >> they're used for instrument.c purposes - but who knows?\n> > \n> > I removed the unnecessary code copying pgWalUsage and just reset the\n> > pgWalUsage after reporting the stats in pgstat_report_wal().\n> \n> I didn't change this.\n> \n> >> 2) Why is there both pgstat_send_wal() and pgstat_report_wal()? 
With the\n> >> former being used by wal writer, the latter by most other processes?\n> >> There again don't seem to be comments explaining this.\n> > \n> > I added the comments why two functions are separated.\n> > (But is it better to merge them?)\n> \n> I updated the comments for following reasons.\n> \n> \n> >> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n> >> just to figure out if there's been any changes isn't all that\n> >> cheap. This is regularly exercised in read-only workloads too. Seems\n> >> adding a boolean WalStats.have_pending = true or such would be\n> >> better.\n> >> 4) For plain backends pgstat_report_wal() is called by\n> >> pgstat_report_stat() - but it is not checked as part of the \"Don't\n> >> expend a clock check if nothing to do\" check at the top. It's\n> >> probably rare to have pending wal stats without also passing one of\n> >> the other conditions, but ...\n> > \n> > I added the logic to check if the stats counters are updated or not in\n> > pgstat_report_stat() using not only generated wal record but also write/sync\n> > counters, and it can skip to call reporting function.\n> \n> I removed the checking code whether the wal stats counters were updated or not\n> in pgstat_report_stat() since I couldn't understand the valid case yet.\n> pgstat_report_stat() is called by backends when the transaction is finished.\n> This means that the condition seems always pass.\n\nDoesn't the same holds for all other counters? If you are saying that\n\"wal counters should be zero if all other stats counters are zero\", we\nneed an assertion to check that and a comment to explain that.\n\nI personally find it safer to add the WAL-stats condition to the\nfast-return check, rather than addin such assertion.\n\npgstat_send_wal() is called mainly from pgstat_report_wal() which\nconsumes pgWalStats counters and WalWriterMain() which\ndoesn't. 
Checking on pgWalStats counters isn't so complex that we need\nto avoid that in wal writer, thus *I* think pgstat_send_wal() and\npgstat_report_wal() can be consolidated. Even if you have a strong\nopinion that wal writer should call a separate function, the name\nshould be something other than pgstat_send_wal() since it ignores\npgWalUsage counters, which are supposed to be included in \"WAL stats\".\n\n\n> I thought to implement if the WAL stats counters were not updated, skip to\n> send all statistics including the counters for databases and so on. But I\n> think it's not good because it may take more time to be reflected the\n> generated stats by read-only transaction.\n\nUr, yep.\n\n> > Although I added the condition which the write/sync counters are updated or\n> > not, I haven't understood the following case yet...Sorry. I checked related\n> > code and tested to insert large object, but I couldn't. I'll investigate more\n> > deeply, but if you already know the function which leads the following case,\n> > please let me know.\n> \n> I understood the above case (Fujii-san, thanks for your advice in person).\n> When to flush buffers, for example, to select buffer replacement victim,\n> it requires a WAL flush if the buffer is dirty. So, to check the WAL stats\n> counters are updated or not, I check the number of generated wal record and\n> the counter of syncing in pgstat_report_wal().\n> \n> The reason why not to check the counter of writing is that if to write is\n> happened, to sync is happened too in the above case. I added the comments in\n> the patch.\n\nMmm.. Although I couldn't read you clearly, The fact that a flush may\nhappen without a write means the reverse at the same time, a write may\nhappen without a flush. For asynchronous commits, WAL-write happens\nalone unaccompanied by a flush. 
And the corresponding flush would\nhappen later without writes.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 30 Mar 2021 17:28:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/03/30 17:28, Kyotaro Horiguchi wrote:\n> At Tue, 30 Mar 2021 09:41:24 +0900, Masahiro Ikeda <ikedamsh@oss.nttdata.com> wrote in \n>> I update the patch since there were my misunderstanding points.\n>>\n>> On 2021/03/26 16:20, Masahiro Ikeda wrote:\n>>> Thanks for many your suggestions!\n>>> I made the patch to handle the issues.\n>>>\n>>>> 1) What is the motivation to have both prevWalUsage and pgWalUsage,\n>>>> instead of just accumulating the stats since the last submission?\n>>>> There doesn't seem to be any comment explaining it? Computing\n>>>> differences to previous values, and copying to prev*, isn't free. I\n>>>> assume this is out of a fear that the stats could get reset before\n>>>> they're used for instrument.c purposes - but who knows?\n>>>\n>>> I removed the unnecessary code copying pgWalUsage and just reset the\n>>> pgWalUsage after reporting the stats in pgstat_report_wal().\n>>\n>> I didn't change this.\n>>\n>>>> 2) Why is there both pgstat_send_wal() and pgstat_report_wal()? With the\n>>>> former being used by wal writer, the latter by most other processes?\n>>>> There again don't seem to be comments explaining this.\n>>>\n>>> I added the comments why two functions are separated.\n>>> (But is it better to merge them?)\n>>\n>> I updated the comments for following reasons.\n>>\n>>\n>>>> 3) Doing if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)\n>>>> just to figure out if there's been any changes isn't all that\n>>>> cheap. This is regularly exercised in read-only workloads too. 
Seems\n>>>> adding a boolean WalStats.have_pending = true or such would be\n>>>> better.\n>>>> 4) For plain backends pgstat_report_wal() is called by\n>>>> pgstat_report_stat() - but it is not checked as part of the \"Don't\n>>>> expend a clock check if nothing to do\" check at the top. It's\n>>>> probably rare to have pending wal stats without also passing one of\n>>>> the other conditions, but ...\n>>>\n>>> I added the logic to check if the stats counters are updated or not in\n>>> pgstat_report_stat() using not only generated wal record but also write/sync\n>>> counters, and it can skip to call reporting function.\n>>\n>> I removed the checking code whether the wal stats counters were updated or not\n>> in pgstat_report_stat() since I couldn't understand the valid case yet.\n>> pgstat_report_stat() is called by backends when the transaction is finished.\n>> This means that the condition seems always pass.\n> \n> Doesn't the same holds for all other counters? If you are saying that\n> \"wal counters should be zero if all other stats counters are zero\", we\n> need an assertion to check that and a comment to explain that.\n> \n> I personally find it safer to add the WAL-stats condition to the\n> fast-return check, rather than addin such assertion.\nThanks for your comments.\n\nOK, I added the condition to the fast-return check. I noticed that I\nmisunderstood that the purpose is to avoid expanding a clock check using WAL\nstats counters. But, the purpose is to make the conditions stricter, right?\n\n\n> pgstat_send_wal() is called mainly from pgstat_report_wal() which\n> consumes pgWalStats counters and WalWriterMain() which\n> doesn't. Checking on pgWalStats counters isn't so complex that we need\n> to avoid that in wal writer, thus *I* think pgstat_send_wal() and\n> pgstat_report_wal() can be consolidated. 
Even if you have a strong\n> opinion that wal writer should call a separate function, the name\n> should be something other than pgstat_send_wal() since it ignores\n> pgWalUsage counters, which are supposed to be included in \"WAL stats\".\n\nOK, I consolidated them.\n\n\n\n>> I thought to implement if the WAL stats counters were not updated, skip to\n>> send all statistics including the counters for databases and so on. But I\n>> think it's not good because it may take more time to be reflected the\n>> generated stats by read-only transaction.\n> \n> Ur, yep.\n> \n>>> Although I added the condition which the write/sync counters are updated or\n>>> not, I haven't understood the following case yet...Sorry. I checked related\n>>> code and tested to insert large object, but I couldn't. I'll investigate more\n>>> deeply, but if you already know the function which leads the following case,\n>>> please let me know.\n>>\n>> I understood the above case (Fujii-san, thanks for your advice in person).\n>> When to flush buffers, for example, to select buffer replacement victim,\n>> it requires a WAL flush if the buffer is dirty. So, to check the WAL stats\n>> counters are updated or not, I check the number of generated wal record and\n>> the counter of syncing in pgstat_report_wal().\n>>\n>> The reason why not to check the counter of writing is that if to write is\n>> happened, to sync is happened too in the above case. I added the comments in\n>> the patch.\n> \n> Mmm.. Although I couldn't read you clearly, The fact that a flush may\n> happen without a write means the reverse at the same time, a write may\n> happen without a flush. For asynchronous commits, WAL-write happens\n> alone unaccompanied by a flush. 
And the corresponding flush would\n> happen later without writes.\n\nSorry, I didn't explain it enough.\n\nFor processes which may generate WAL records like backends, I thought it's\nenough to check (1) the number of generated WAL records and (2) the counters of\nsyncing (flushing) the WAL. This is checked in pgstat_report_wal(). Sorry\nthat I didn't mention (1) in the above thread.\n\nIf a backend executes a write transaction, some WAL records must be generated.\nSo, it's ok to check (1) only, regardless of whether asynchronous commit is\nenabled or not.\n\nOTOH, if a backend executes a read-only transaction, WAL records won't be\ngenerated (although, strictly speaking, HOT can make WAL records...). But WAL\nwrites and flushes may happen when flushing buffers via XLogFlush(). In this\ncase, if a WAL write happened, a flush must happen later. But, if my\nunderstanding is correct, there is no case where a write happens but a flush\ndoesn't. So, I thought (2) is needed and it's enough to check the counter of\nsyncing (flushing).\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Tue, 30 Mar 2021 20:37:36 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/03/30 20:37, Masahiro Ikeda wrote:\n> OK, I added the condition to the fast-return check. I noticed that I\n> misunderstood that the purpose is to avoid expanding a clock check using WAL\n> stats counters. But, the purpose is to make the conditions stricter, right?\n\nYes. Currently if the following condition is false even when the WAL counters\nare updated, nothing is sent to the stats collector. But with your patch,\nin this case the WAL stats are sent.\n\n\tif ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n\t\tpgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n\t\t!have_function_stats && !disconnect)\n\nThanks for the patch! 
It now fails to be applied to the master cleanly.\nSo could you rebase the patch?\n\npgstat_initialize() should initialize pgWalUsage with zero?\n\n+\t/*\n+\t * set the counters related to generated WAL data.\n+\t */\n+\tWalStats.m_wal_records = pgWalUsage.wal_records;\n+\tWalStats.m_wal_fpi = pgWalUsage.wal_fpi;\n+\tWalStats.m_wal_bytes = pgWalUsage.wal_bytes;\n\nThis should be skipped if pgWalUsage.wal_records is zero?\n\n+ * Be careful that the counters are cleared after reporting them to\n+ * the stats collector although you can use WalUsageAccumDiff()\n+ * to computing differences to previous values. For backends,\n+ * the counters may be reset after a transaction is finished and\n+ * pgstat_send_wal() is invoked, so you can compute the difference\n+ * in the same transaction only.\n\nOn the second thought, I'm afraid that this can be likely to be a foot-gun\nin the future. So I'm now wondering if the current approach (i.e., calculate\nthe diff between the current and previous pgWalUsage and don't reset it\nto zero) is better. Thought? Since the similar data structure pgBufferUsage\ndoesn't have this kind of restriction, I'm afraid that the developer can\neasily miss that only pgWalUsage has the restriction.\n\nBut currently the diff is calculated (i.e., WalUsageAccumDiff() is called)\neven when the WAL counters are not updated. Then if that calculated diff is\nzero, we skip sending the WAL stats. This logic should be improved. That is,\nprobably we should be able to check whether the WAL counters are updated\nor not, without calculating the diff, because the calculation is not free.\nWe can implement this by introducing new flag variable that we discussed\nupthread. This flag is set to true whenever the WAL counters are incremented\nand used to determine whether the WAL stats need to be sent.\n\nIf we do this, another issue is that the data types for wal_records and wal_fpi\nin pgWalUsage are long. Which may lead to overflow? 
If yes, it should be\nreplaced with uint64.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 13 Apr 2021 09:33:12 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/04/13 9:33, Fujii Masao wrote:\n> \n> \n> On 2021/03/30 20:37, Masahiro Ikeda wrote:\n>> OK, I added the condition to the fast-return check. I noticed that I\n>> misunderstood that the purpose is to avoid expanding a clock check using WAL\n>> stats counters. But, the purpose is to make the conditions stricter, right?\n> \n> Yes. Currently if the following condition is false even when the WAL counters\n> are updated, nothing is sent to the stats collector. But with your patch,\n> in this case the WAL stats are sent.\n> \n>     if ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n>         pgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n>         !have_function_stats && !disconnect)\n> \n> Thanks for the patch! It now fails to be applied to the master cleanly.\n> So could you rebase the patch?\n\nThanks for your comments!\nI rebased it.\n\n\n> pgstat_initialize() should initialize pgWalUsage with zero?\n\nThanks. But I didn't handle it, because I undid the logic that calculates the diff,\nas you mentioned below.\n\n\n> +    /*\n> +     * set the counters related to generated WAL data.\n> +     */\n> +    WalStats.m_wal_records = pgWalUsage.wal_records;\n> +    WalStats.m_wal_fpi = pgWalUsage.wal_fpi;\n> +    WalStats.m_wal_bytes = pgWalUsage.wal_bytes;\n> \n> This should be skipped if pgWalUsage.wal_records is zero?\n\nYes, fixed it.\n\n\n> + * Be careful that the counters are cleared after reporting them to\n> + * the stats collector although you can use WalUsageAccumDiff()\n> + * to computing differences to previous values. 
For backends,\n> + * the counters may be reset after a transaction is finished and\n> + * pgstat_send_wal() is invoked, so you can compute the difference\n> + * in the same transaction only.\n> \n> On the second thought, I'm afraid that this can be likely to be a foot-gun\n> in the future. So I'm now wondering if the current approach (i.e., calculate\n> the diff between the current and previous pgWalUsage and don't reset it\n> to zero) is better. Thought? Since the similar data structure pgBufferUsage\n> doesn't have this kind of restriction, I'm afraid that the developer can\n> easily miss that only pgWalUsage has the restriction.\n> \n> But currently the diff is calculated (i.e., WalUsageAccumDiff() is called)\n> even when the WAL counters are not updated. Then if that calculated diff is\n> zero, we skip sending the WAL stats. This logic should be improved. That is,\n> probably we should be able to check whether the WAL counters are updated\n> or not, without calculating the diff, because the calculation is not free.\n> We can implement this by introducing new flag variable that we discussed\n> upthread. This flag is set to true whenever the WAL counters are incremented\n> and used to determine whether the WAL stats need to be sent.\n\nSound reasonable. I agreed that the restriction has a risk to lead mistakes.\nI made the patch introducing a new flag.\n\n- v4-0001-performance-improvements-of-reporting-wal-stats.patch\n\n\nI think introducing a new flag is not necessary because we can know if the WAL\nstats are updated or not using the counters of the generated wal record, wal\nwrites and wal syncs. It's fast compared to get timestamp. The attached patch\nis to check if the counters are updated or not using them.\n\n-\nv4-0001-performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch\n\n\n> If we do this, another issue is that the data types for wal_records and wal_fpi\n> in pgWalUsage are long. Which may lead to overflow? 
If yes, it's should be\n> replaced with uint64.\n\nYes, I fixed. BufferUsage's counters have same issue, so I fixed them too.\n\nBTW, is it better to change PgStat_Counter from int64 to uint64 because there\naren't any counters with negative number?\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Fri, 16 Apr 2021 10:27:11 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021-04-16 10:27, Masahiro Ikeda wrote:\n> On 2021/04/13 9:33, Fujii Masao wrote:\n>> \n>> \n>> On 2021/03/30 20:37, Masahiro Ikeda wrote:\n>>> OK, I added the condition to the fast-return check. I noticed that I\n>>> misunderstood that the purpose is to avoid expanding a clock check \n>>> using WAL\n>>> stats counters. But, the purpose is to make the conditions stricter, \n>>> right?\n>> \n>> Yes. Currently if the following condition is false even when the WAL \n>> counters\n>> are updated, nothing is sent to the stats collector. But with your \n>> patch,\n>> in this case the WAL stats are sent.\n>> \n>>     if ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) &&\n>>         pgStatXactCommit == 0 && pgStatXactRollback == 0 &&\n>>         !have_function_stats && !disconnect)\n>> \n>> Thanks for the patch! It now fails to be applied to the master \n>> cleanly.\n>> So could you rebase the patch?\n> \n> Thanks for your comments!\n> I rebased it.\n\nThanks for working on this!\n\nI have some minor comments on \nperformance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch.\n\n\n177 @@ -3094,20 +3066,33 @@ pgstat_report_wal(void)\n178 * Return true if the message is sent, and false otherwise.\n\nSince you changed the return value to void, it seems the description is\nnot necessary anymore.\n\n208 + * generate wal records. 
'wal_writes' and 'wal_sync' are \nzero means the\n\nIt may be better to change 'wal_writes' to 'wal_write' since single\nquotation seems to mean variable name.\n\n234 + * set the counters related to generated WAL data if the \ncounters are\n\n\nset -> Set?\n\n\nRegards,\n\n\n", "msg_date": "Wed, 21 Apr 2021 15:08:48 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "On 2021/04/21 15:08, torikoshia wrote:\n> On 2021-04-16 10:27, Masahiro Ikeda wrote:\n>> On 2021/04/13 9:33, Fujii Masao wrote:\n>>> \n>>> \n>>> On 2021/03/30 20:37, Masahiro Ikeda wrote:\n>>>> OK, I added the condition to the fast-return check. I noticed that I \n>>>> misunderstood that the purpose is to avoid expanding a clock check\n>>>> using WAL stats counters. But, the purpose is to make the conditions\n>>>> stricter, right?\n>>> \n>>> Yes. Currently if the following condition is false even when the WAL\n>>> counters are updated, nothing is sent to the stats collector. But with\n>>> your patch, in this case the WAL stats are sent.\n>>> \n>>> if ((pgStatTabList == NULL || pgStatTabList->tsa_used == 0) && \n>>> pgStatXactCommit == 0 && pgStatXactRollback == 0 && \n>>> !have_function_stats && !disconnect)\n>>> \n>>> Thanks for the patch! It now fails to be applied to the master\n>>> cleanly. So could you rebase the patch?\n>> \n>> Thanks for your comments! I rebased it.\n> \n> Thanks for working on this!\n\nHi, thanks for your comments!\n\n\n> I have some minor comments on \n> performance-improvements-of-reporting-wal-stats-without-introducing-a-new-variable.patch.\n>\n> \n> \n> \n> 177 @@ -3094,20 +3066,33 @@ pgstat_report_wal(void) 178 * Return true if\n> the message is sent, and false otherwise.\n> \n> Since you changed the return value to void, it seems the description is not\n> necessary anymore.\n\nRight, I fixed it.\n\n\n> 208 + * generate wal records. 
'wal_writes' and 'wal_sync' are zero \n> means the\n> \n> It may be better to change 'wal_writes' to 'wal_write' since single \n> quotation seems to mean variable name.\n\nAgreed.\n\n\n> 234 + * set the counters related to generated WAL data if the\n> counters are\n> \n> \n> set -> Set?\n\nYes, I fixed.\n\n> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n\nAlthough this is not related to torikoshi-san's comment, the above my\nunderstanding is not right. Some counters like delta_live_tuples,\ndelta_dead_tuples and changed_tuples can be negative.\n\n\nRegards,\n-- \nMasahiro Ikeda\nNTT DATA CORPORATION", "msg_date": "Wed, 21 Apr 2021 18:31:24 +0900", "msg_from": "Masahiro Ikeda <ikedamsh@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "\n\nOn 2021/04/21 18:31, Masahiro Ikeda wrote:\n>> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n\nOn second thought, it's ok even if the counters like wal_records get overflowed.\nBecause they are always used to calculate the diff between the previous and\ncurrent counters. Even when the current counters are overflowed and\nthe previous ones are not, WalUsageAccumDiff() seems to return the right\ndiff of them. If this understanding is right, I'd withdraw my comment and\nit's ok to use \"long\" type for those counters. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 22 Apr 2021 22:42:14 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" }, { "msg_contents": "Hi\n\nOn Thu, Apr 22, 2021, at 06:42, Fujii Masao wrote:\n> \n> \n> On 2021/04/21 18:31, Masahiro Ikeda wrote:\n> >> BTW, is it better to change PgStat_Counter from int64 to uint64 because> there aren't any counters with negative number?\n> \n> On second thought, it's ok even if the counters like wal_records get overflowed.\n> Because they are always used to calculate the diff between the previous and\n> current counters. Even when the current counters are overflowed and\n> the previous ones are not, WalUsageAccumDiff() seems to return the right\n> diff of them. If this understanding is right, I'd withdraw my comment and\n> it's ok to use \"long\" type for those counters. Thought?\n\nWhy long? It's of a platform dependent size, which doesn't seem useful here?\n\nAndres\n\n\n", "msg_date": "Thu, 22 Apr 2021 08:36:59 -0700", "msg_from": "\"Andres Freund\" <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: wal stats questions" } ]
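As an aside that can be checked outside PostgreSQL: Fujii's reasoning above — that the diff of monotonically increasing counters stays correct even across a wraparound — holds mechanically for unsigned 64-bit arithmetic, since subtraction is defined modulo 2^64 (which is also one reason the earlier suggestion to move the counters to uint64 sidesteps signed-overflow undefined behavior). This tiny sketch is not PostgreSQL code; the real WalUsageAccumDiff() applies the same subtraction field by field over the PgWalUsage members.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Delta of a monotonically increasing counter between two samples.
 * Correct even when `cur` has wrapped past `prev`, because unsigned
 * subtraction is defined modulo 2^64.
 */
static inline uint64_t
counter_delta(uint64_t cur, uint64_t prev)
{
    return cur - prev;
}
```

For example, if `prev` was sampled just before the counter wrapped and `cur` just after, the modular subtraction still yields the small number of increments that actually happened in between.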
[ { "msg_contents": "Thanks to the get_index_clause_from_support, we can use index for WHERE a\nlike\n'abc%' case. However this currently only works on custom plan. Think about\nthe\ncase where the planning takes lots of time, custom plan will not give a good\nresult. so I want to see if we should support this for a generic plan as\nwell.\n\nThe first step of this is we need to find an operator to present prefix is\njust\nliteral. which means '%' is just '%', not match any characters. After trying\n'like' and '~' operator, I find none of them can be used. for example:\n\nPREPARE s AS SELECT * FROM t WHERE a LIKE ('^' || $1);\nEXECUTE s('%abc');\n\n'%' is still a special character to match any characters. So '~' is. So I\nthink\nwe need to define an new operator like text(a) ~^ text(b), which means a\nis prefixed with b literally. For example:\n\n'abc' ~^ 'ab` -> true\n'abc' ~^ 'ab%' -> false\n\nso the above case can be written as:\n\nPREPARE s AS SELECT * FROM t WHERE a ~^ $1;\n\nThe second step is we need to define greater string for $1 just like\nmake_greater_string. Looks we have to know the exact value to make the\ngreater\nstring, so we can define a new FuncExpr like this:\n\nIndex Cond: ((md5 >= $1::text) AND (md5 < make_greater_string_fn($1)).\n\nThis may not be able to fix handle the current make_greater_string return\nNULL\ncase. so we may define another FuncExpr like text_less_than_or_null\n\nIndex Cond: ((md5 >= $1::text) AND (text_less_than_or_null(md5,\nmake_greater_string_fn($1)))\n\nIs this a right thing to do and a right method?\n\nThanks\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Thu, 25 Mar 2021 10:15:36 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Proposal for col LIKE $1 with generic Plan" }, { "msg_contents": "On Thu, Mar 25, 2021 at 10:15 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Thanks to the get_index_clause_from_support, we can use index for WHERE a\n> like\n> 'abc%' case. However this currently only works on custom plan. Think about\n> the\n> case where the planning takes lots of time, custom plan will not give a\n> good\n> result. 
so I want to see if we should support this for a generic plan as\n> well.\n>\n> The first step of this is we need to find an operator to present prefix is\n> just\n> literal. which means '%' is just '%', not match any characters. After\n> trying\n> 'like' and '~' operator, I find none of them can be used. for example:\n>\n> PREPARE s AS SELECT * FROM t WHERE a LIKE ('^' || $1);\n> EXECUTE s('%abc');\n>\n> '%' is still a special character to match any characters. So '~' is. So\n> I think\n> we need to define an new operator like text(a) ~^ text(b), which means a\n> is prefixed with b literally. For example:\n>\n> 'abc' ~^ 'ab` -> true\n> 'abc' ~^ 'ab%' -> false\n>\n> so the above case can be written as:\n>\n> PREPARE s AS SELECT * FROM t WHERE a ~^ $1;\n>\n>\nDuring the PoC coding, I found we already have ^@ operator for this [1], but\nwe don't implement that for BTree index so far. So I will try gist index\nfor my\ncurrent user case and come back to this thread later. Thanks!\n\n\n[1]\nhttps://www.postgresql.org/message-id/20180416155036.36070396%40wp.localdomain\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Fri, 26 Mar 2021 09:36:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Proposal for col LIKE $1 with generic Plan" } ]
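The "second step" described in the thread above, turning a literal prefix test into a btree range with a make_greater_string-style upper bound, can be sketched outside the server. `make_successor()` and `in_prefix_range()` below are hypothetical helpers, not PostgreSQL code: they assume plain bytewise (C-collation) ordering, whereas the real make_greater_string() must also cope with multi-byte encodings and non-C collations, and returns NULL when no greater string exists (the sketch returns false for that case).

```c
#include <stdbool.h>
#include <string.h>

/*
 * Simplified bytewise stand-in for make_greater_string(): compute the
 * smallest string that is greater than every string starting with
 * `prefix`, by incrementing the last incrementable byte and truncating
 * any trailing 0xFF bytes.  Returns false when no successor exists
 * (empty prefix, buffer too small, or all bytes are 0xFF), mirroring
 * make_greater_string() returning NULL.
 */
static bool
make_successor(const char *prefix, char *out, size_t outlen)
{
    size_t len = strlen(prefix);

    if (len == 0 || len >= outlen)
        return false;
    memcpy(out, prefix, len + 1);

    for (size_t i = len; i-- > 0;)
    {
        if ((unsigned char) out[i] != 0xFF)
        {
            out[i]++;
            out[i + 1] = '\0';  /* drop any trailing 0xFF bytes */
            return true;
        }
    }
    return false;
}

/* The rewrite under discussion: col matches prefix  ==>  lo <= col < hi */
static bool
in_prefix_range(const char *col, const char *lo, const char *hi)
{
    return strcmp(col, lo) >= 0 && strcmp(col, hi) < 0;
}
```

With such a successor in hand, a parameterized literal-prefix test (the proposed `a ~^ $1`, or the existing `^@`) could in principle be answered by the range `a >= $1 AND a < successor($1)` without knowing the parameter value at plan time.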
[ { "msg_contents": "Hi All,\nWe saw OOM in a system where WAL sender consumed Gigabttes of memory\nwhich was never released. Upon investigation, we found out that there\nwere many ReorderBufferToastHash memory contexts linked to\nReorderBuffer context, together consuming gigs of memory. They were\nrunning INSERT ... ON CONFLICT .. among other things. A similar report\nat [1]\n\nI could reproduce a memory leak in wal sender using following steps\nSession 1\npostgres=# create table t_toast (a int primary key, b text);\npostgres=# CREATE PUBLICATION dbz_minimal_publication FOR TABLE public.t_toast;\n\nTerminal 4\n$ pg_recvlogical -d postgres --slot pgoutput_minimal_test_slot\n--create-slot -P pgoutput\n$ pg_recvlogical -d postgres --slot pgoutput_minimal_test_slot --start\n-o proto_version=1 -o publication_names='dbz_minimal_publication' -f\n/dev/null\n\nSession 1\npostgres=# select * from pg_replication_slots ;\n slot_name | plugin | slot_type | datoid | database\n| temporary | active | active_pid | xmin | catalog_xmin | restart_lsn\n| confirmed_flush_lsn\n----------------------------+----------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------\n pgoutput_minimal_test_slot | pgoutput | logical | 12402 | postgres\n| f | f | | | 570 | 0/15FFFD0\n| 0/1600020\n\npostgres=# begin;\npostgres=# insert into t_toast values (500, repeat('something' ||\ntxid_current()::text, 100000)) ON CONFLICT (a) DO NOTHING;\nINSERT 0 1\n\nSession 2 (xid = 571)\npostgres=# begin;\npostgres=# insert into t_toast values (500, repeat('something' ||\ntxid_current()::text, 100000)) ON CONFLICT (a) DO NOTHING;\n\nSession 3 (xid = 572)\npostgres=# begin;\npostgres=# insert into t_toast values (500, repeat('something' ||\ntxid_current()::text, 100000)) ON CONFLICT (a) DO NOTHING;\n\nSession 1 (this session doesn't modify the table but is essential for\nspeculative insert to happen.)\npostgres=# rollback;\n\nSession 2 and 3 in 
the order in which you get control back (in my case\nsession 2 with xid = 571)\nINSERT 0 1\npostgres=# commit;\nCOMMIT\n\nother session (in my case session 3 with xid = 572)\nINSERT 0 0\npostgres=# commit;\nCOMMIT\n\nWith the attached patch, we see following in the server logs\n2021-03-25 09:57:20.469 IST [12424] LOG: starting logical decoding\nfor slot \"pgoutput_minimal_test_slot\"\n2021-03-25 09:57:20.469 IST [12424] DETAIL: Streaming transactions\ncommitting after 0/1600020, reading WAL from 0/15FFFD0.\n2021-03-25 09:57:20.469 IST [12424] LOG: logical decoding found\nconsistent point at 0/15FFFD0\n2021-03-25 09:57:20.469 IST [12424] DETAIL: There are no running transactions.\n2021-03-25 09:59:45.494 IST [12424] LOG: initializing hash table for\ntransaction 571\n2021-03-25 09:59:45.494 IST [12424] LOG: speculative insert\nencountered in transaction 571\n2021-03-25 09:59:45.494 IST [12424] LOG: speculative insert confirmed\nin transaction 571\n2021-03-25 09:59:45.504 IST [12424] LOG: destroying toast hash for\ntransaction 571\n2021-03-25 09:59:50.806 IST [12424] LOG: initializing hash table for\ntransaction 572\n2021-03-25 09:59:50.806 IST [12424] LOG: speculative insert\nencountered in transaction 572\n2021-03-25 09:59:50.806 IST [12424] LOG: toast hash for transaction\n572 is not cleared\n\nObserve that the toast_hash was cleaned for the transaction 571 which\nsuccessfully inserted the row but was not cleaned for the transaction\n572 which performed DO NOTHING instead of INSERT.\n\nHere's the sequence of events which leads to memory leak in wal sender\n1. Transaction 571 performs a speculative INSERT which is decoded as\ntoast insert followed by speculative insert of row\n2. decoding toast tuple, causes the toast hash to be created\n3. Speculative insert is ignored while decoding\n4. Speculative insert is confimed and decoded as a normal INSERT, also\ndestroying the toast hash\n5. 
Transaction 572 performs speculative insert which is decoded as\ntoast insert followed by speculative insert of row\n6. decoding toast tuple causes the toast hash to be created\n7. speculative insert is ignored while decoding\n... Speculative INSERT is never confirmed and thus toast hash is never\ndestroyed leaking memory\n\nIn memory context dump we see as many ReorderBufferToastHash as the\nnumber of times the above sequence is repeated.\nTopMemoryContext: 1279640 total in 7 blocks; 23304 free (17 chunks);\n1256336 used\n...\n Replication command context: 32768 total in 3 blocks; 10952 free\n(9 chunks); 21816 used\n ...\n ReorderBuffer: 8192 total in 1 blocks; 7656 free (7 chunks); 536 used\n ReorderBufferToastHash: 8192 total in 1 blocks; 2056 free (0\nchunks); 6136 used\n ReorderBufferToastHash: 8192 total in 1 blocks; 2056 free (0\nchunks); 6136 used\n ReorderBufferToastHash: 8192 total in 1 blocks; 2056 free (0\nchunks); 6136 used\n\n\nThe relevant code is all in ReoderBufferCommit() in cases\nREORDER_BUFFER_CHANGE_INSERT,\nREORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT and\nREORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM.\n\nAbout the solution: The speculative insert needs to be ignored since\nit can be rolled back later. 
If speculative insert is not confirmed,\nthere is no way to know that the speculative insert change required a\ntoast_hash table and destroy it before the next change starts.\nReorderBufferCommit seems to notice a speculative insert that was\nnever confirmed in the following code\n1624 change_done:\n1625\n1626 /*\n1627 * Either speculative insertion was\nconfirmed, or it was\n1628 * unsuccessful and the record isn't needed anymore.\n1629 */\n1630 if (specinsert != NULL)\n1631 {\n1632 ReorderBufferReturnChange(rb, specinsert);\n1633 specinsert = NULL;\n1634 }\n1635\n1636 if (relation != NULL)\n1637 {\n1638 RelationClose(relation);\n1639 relation = NULL;\n1640 }\n1641 break;\n\nbut by then we might have reused the toast_hash and thus can not be\ndestroyed. But that isn't the problem since the reused toast_hash will\nbe destroyed eventually.\n\nIt's only when the change next to speculative insert is something\nother than INSERT/UPDATE/DELETE that we have to worry about a\nspeculative insert that was never confirmed. So may be for those\ncases, we check whether specinsert != null and destroy toast_hash if\nit exists.\n\n[1] https://www.postgresql-archive.org/Diagnose-memory-leak-in-logical-replication-td6161318.html\n\n-- \nBest Wishes,\nAshutosh Bapat", "msg_date": "Thu, 25 Mar 2021 11:03:58 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Mar 24, 2021 at 10:34 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> Hi All,\n> We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> which was never released. Upon investigation, we found out that there\n> were many ReorderBufferToastHash memory contexts linked to\n> ReorderBuffer context, together consuming gigs of memory. They were\n> running INSERT ... ON CONFLICT .. among other things. 
A similar report\n> at [1]\n\nWhat is the relationship between this bug and commit 7259736a6e5,\ndealt specifically with TOAST and speculative insertion resource\nmanagement issues within reorderbuffer.c? Amit?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 26 May 2021 19:56:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> Hi All,\n> We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> which was never released. Upon investigation, we found out that there\n> were many ReorderBufferToastHash memory contexts linked to\n> ReorderBuffer context, together consuming gigs of memory. They were\n> running INSERT ... ON CONFLICT .. among other things. A similar report\n> at [1]\n>\n..\n>\n> but by then we might have reused the toast_hash and thus can not be\n> destroyed. But that isn't the problem since the reused toast_hash will\n> be destroyed eventually.\n>\n> It's only when the change next to speculative insert is something\n> other than INSERT/UPDATE/DELETE that we have to worry about a\n> speculative insert that was never confirmed. So may be for those\n> cases, we check whether specinsert != null and destroy toast_hash if\n> it exists.\n>\n\nCan we consider the possibility to destroy the toast_hash in\nReorderBufferCleanupTXN/ReorderBufferTruncateTXN? 
It will delay the\nclean up of memory till the end of stream or txn but there won't be\nany memory leak.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 May 2021 09:02:18 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 8:27 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Mar 24, 2021 at 10:34 PM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> > Hi All,\n> > We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> > which was never released. Upon investigation, we found out that there\n> > were many ReorderBufferToastHash memory contexts linked to\n> > ReorderBuffer context, together consuming gigs of memory. They were\n> > running INSERT ... ON CONFLICT .. among other things. A similar report\n> > at [1]\n>\n> What is the relationship between this bug and commit 7259736a6e5,\n> dealt specifically with TOAST and speculative insertion resource\n> management issues within reorderbuffer.c? Amit?\n>\n\nThis seems to be a pre-existing bug. This should be reproduced in\nPG-13 and or prior to that commit. Ashutosh can confirm?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 May 2021 09:03:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi All,\n> > We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> > which was never released. Upon investigation, we found out that there\n> > were many ReorderBufferToastHash memory contexts linked to\n> > ReorderBuffer context, together consuming gigs of memory. They were\n> > running INSERT ... 
ON CONFLICT .. among other things. A similar report\n> > at [1]\n> >\n> ..\n> >\n> > but by then we might have reused the toast_hash and thus can not be\n> > destroyed. But that isn't the problem since the reused toast_hash will\n> > be destroyed eventually.\n> >\n> > It's only when the change next to speculative insert is something\n> > other than INSERT/UPDATE/DELETE that we have to worry about a\n> > speculative insert that was never confirmed. So may be for those\n> > cases, we check whether specinsert != null and destroy toast_hash if\n> > it exists.\n> >\n>\n> Can we consider the possibility to destroy the toast_hash in\n> ReorderBufferCleanupTXN/ReorderBufferTruncateTXN? It will delay the\n> clean up of memory till the end of stream or txn but there won't be\n> any memory leak.\n>\n\nThe other possibility could be to clean it up when we clean the spec\ninsert change in the below code:\n/*\n* There's a speculative insertion remaining, just clean in up, it\n* can't have been successful, otherwise we'd gotten a confirmation\n* record.\n*/\nif (specinsert)\n{\nReorderBufferReturnChange(rb, specinsert, true);\nspecinsert = NULL;\n}\n\nBut I guess we might miss cleaning it up in case of an error. A\nsimilar problem could be there in the idea where we will try to tie\nthe clean up with the next change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 May 2021 09:26:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 9:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n> >\n> > Hi All,\n> > We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> > which was never released. 
Upon investigation, we found out that there\n> > were many ReorderBufferToastHash memory contexts linked to\n> > ReorderBuffer context, together consuming gigs of memory. They were\n> > running INSERT ... ON CONFLICT .. among other things. A similar report\n> > at [1]\n> >\n> ..\n> >\n> > but by then we might have reused the toast_hash and thus can not be\n> > destroyed. But that isn't the problem since the reused toast_hash will\n> > be destroyed eventually.\n> >\n> > It's only when the change next to speculative insert is something\n> > other than INSERT/UPDATE/DELETE that we have to worry about a\n> > speculative insert that was never confirmed. So may be for those\n> > cases, we check whether specinsert != null and destroy toast_hash if\n> > it exists.\n> >\n>\n> Can we consider the possibility to destroy the toast_hash in\n> ReorderBufferCleanupTXN/ReorderBufferTruncateTXN? It will delay the\n> clean up of memory till the end of stream or txn but there won't be\n> any memory leak.\n\nCurrently, we are ignoring XLH_DELETE_IS_SUPER, so maybe we can do\nsomething based on this flag?\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 09:35:37 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 9:02 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Thu, Mar 25, 2021 at 11:04 AM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >\n> > > Hi All,\n> > > We saw OOM in a system where WAL sender consumed Gigabttes of memory\n> > > which was never released. Upon investigation, we found out that there\n> > > were many ReorderBufferToastHash memory contexts linked to\n> > > ReorderBuffer context, together consuming gigs of memory. 
They were\n> > > running INSERT ... ON CONFLICT .. among other things. A similar report\n> > > at [1]\n> > >\n> > ..\n> > >\n> > > but by then we might have reused the toast_hash and thus can not be\n> > > destroyed. But that isn't the problem since the reused toast_hash will\n> > > be destroyed eventually.\n> > >\n> > > It's only when the change next to speculative insert is something\n> > > other than INSERT/UPDATE/DELETE that we have to worry about a\n> > > speculative insert that was never confirmed. So may be for those\n> > > cases, we check whether specinsert != null and destroy toast_hash if\n> > > it exists.\n> > >\n> >\n> > Can we consider the possibility to destroy the toast_hash in\n> > ReorderBufferCleanupTXN/ReorderBufferTruncateTXN? It will delay the\n> > clean up of memory till the end of stream or txn but there won't be\n> > any memory leak.\n> >\n>\n> The other possibility could be to clean it up when we clean the spec\n> insert change in the below code:\n\nYeah that could be done.\n\n> /*\n> * There's a speculative insertion remaining, just clean in up, it\n> * can't have been successful, otherwise we'd gotten a confirmation\n> * record.\n> */\n> if (specinsert)\n> {\n> ReorderBufferReturnChange(rb, specinsert, true);\n> specinsert = NULL;\n> }\n>\n> But I guess we might miss cleaning it up in case of an error. 
A\n> similar problem could be there in the idea where we will try to tie\n> the clean up with the next change.\n\nIn error case also we can handle it in the CATCH block no?\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 May 2021 09:40:08 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 9:26 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Can we consider the possibility to destroy the toast_hash in\n> > > ReorderBufferCleanupTXN/ReorderBufferTruncateTXN? It will delay the\n> > > clean up of memory till the end of stream or txn but there won't be\n> > > any memory leak.\n> > >\n> >\n> > The other possibility could be to clean it up when we clean the spec\n> > insert change in the below code:\n>\n> Yeah that could be done.\n>\n> > /*\n> > * There's a speculative insertion remaining, just clean in up, it\n> > * can't have been successful, otherwise we'd gotten a confirmation\n> > * record.\n> > */\n> > if (specinsert)\n> > {\n> > ReorderBufferReturnChange(rb, specinsert, true);\n> > specinsert = NULL;\n> > }\n> >\n> > But I guess we might miss cleaning it up in case of an error. A\n> > similar problem could be there in the idea where we will try to tie\n> > the clean up with the next change.\n>\n> In error case also we can handle it in the CATCH block no?\n>\n\nTrue, but if you do this clean-up in ReorderBufferCleanupTXN then you\ndon't need to take care at separate places. 
Also, toast_hash is stored\nin txn so it appears natural to clean it up while releasing TXN.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 27 May 2021 09:47:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, May 27, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> True, but if you do this clean-up in ReorderBufferCleanupTXN then you\n> don't need to take care at separate places. Also, toast_hash is stored\n> in txn so it appears natural to clean it up while releasing TXN.\n\nMake sense, basically, IMHO we will have to do in TruncateTXN and\nReturnTXN as attached?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 27 May 2021 10:06:15 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "\n\nOn 5/27/21 6:36 AM, Dilip Kumar wrote:\n> On Thu, May 27, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> True, but if you do this clean-up in ReorderBufferCleanupTXN then you\n>> don't need to take care at separate places. 
Also, toast_hash is stored\n>> in txn so it appears natural to clean it up in while releasing TXN.\n> \n> Make sense, basically, IMHO we will have to do in TruncateTXN and\n> ReturnTXN as attached?\n> \n\nYeah, I've been working on a fix over the last couple days (because of a\ncustomer issue), and I ended up with the reset in ReorderBufferReturnTXN\ntoo - it does solve the issue in the case I've been investigating.\n\nI'm not sure the reset in ReorderBufferTruncateTXN is correct, though.\nIsn't it possible that we'll need the TOAST data after streaming part of\nthe transaction? After all, we're not resetting invalidations, tuplecids\nand snapshot either ... And we'll eventually clean it after the streamed\ntransaction gets commited (ReorderBufferStreamCommit ends up calling\nReorderBufferReturnTXN too).\n\nI wonder if there's a way to free the TOASTed data earlier, instead of\nwaiting until the end of the transaction (as this patch does). But I\nsuspect it'd be way more complex, harder to backpatch, and destroying\nthe hash table is a good idea anyway.\n\nAlso, I think the \"if (txn->toast_hash != NULL)\" checks are not needed,\nbecause it's the first thing ReorderBufferToastReset does.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 May 2021 13:46:47 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 5/27/21 6:36 AM, Dilip Kumar wrote:\n> > On Thu, May 27, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>\n> >> True, but if you do this clean-up in ReorderBufferCleanupTXN then you\n> >> don't need to take care at separate places. 
Also, toast_hash is stored\n> >> in txn so it appears natural to clean it up in while releasing TXN.\n> >\n> > Make sense, basically, IMHO we will have to do in TruncateTXN and\n> > ReturnTXN as attached?\n> >\n>\n> Yeah, I've been working on a fix over the last couple days (because of a\n> customer issue), and I ended up with the reset in ReorderBufferReturnTXN\n> too - it does solve the issue in the case I've been investigating.\n>\n> I'm not sure the reset in ReorderBufferTruncateTXN is correct, though.\n> Isn't it possible that we'll need the TOAST data after streaming part of\n> the transaction? After all, we're not resetting invalidations, tuplecids\n> and snapshot either\n\nActually, as per the current design, we don't need the toast data\nafter the streaming. Because currently, we don't allow to stream the\ntransaction if we need to keep the toast across stream e.g. if we only\nhave toast insert without the main insert we assure this as partial\nchanges, another case is if we have multi-insert with toast then we\nkeep the transaction as mark partial until we get the last insert of\nthe multi-insert. So with the current design we don't have any case\nwhere we need to keep toast data across streams.\n\n ... And we'll eventually clean it after the streamed\n> transaction gets commited (ReorderBufferStreamCommit ends up calling\n> ReorderBufferReturnTXN too).\n\nRight, but generally after streaming we assert that txn->size should\nbe 0. That could be changed if we change the above design but this is\nwhat it is today.\n\n> I wonder if there's a way to free the TOASTed data earlier, instead of\n> waiting until the end of the transaction (as this patch does). But I\n> suspect it'd be way more complex, harder to backpatch, and destroying\n> the hash table is a good idea anyway.\n\nRight.\n\n> Also, I think the \"if (txn->toast_hash != NULL)\" checks are not needed,\n> because it's the first thing ReorderBufferToastReset does.\n\nI see, I will change this. 
If we all agree with this idea.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 May 2021 17:47:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On 5/28/21 2:17 PM, Dilip Kumar wrote:\n> On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 5/27/21 6:36 AM, Dilip Kumar wrote:\n>>> On Thu, May 27, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>>>\n>>>> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>>>\n>>>> True, but if you do this clean-up in ReorderBufferCleanupTXN then you\n>>>> don't need to take care at separate places. Also, toast_hash is stored\n>>>> in txn so it appears natural to clean it up in while releasing TXN.\n>>>\n>>> Make sense, basically, IMHO we will have to do in TruncateTXN and\n>>> ReturnTXN as attached?\n>>>\n>>\n>> Yeah, I've been working on a fix over the last couple days (because of a\n>> customer issue), and I ended up with the reset in ReorderBufferReturnTXN\n>> too - it does solve the issue in the case I've been investigating.\n>>\n>> I'm not sure the reset in ReorderBufferTruncateTXN is correct, though.\n>> Isn't it possible that we'll need the TOAST data after streaming part of\n>> the transaction? After all, we're not resetting invalidations, tuplecids\n>> and snapshot either\n> \n> Actually, as per the current design, we don't need the toast data\n> after the streaming. Because currently, we don't allow to stream the\n> transaction if we need to keep the toast across stream e.g. if we only\n> have toast insert without the main insert we assure this as partial\n> changes, another case is if we have multi-insert with toast then we\n> keep the transaction as mark partial until we get the last insert of\n> the multi-insert. 
So with the current design we don't have any case\n> where we need to keep toast data across streams.\n> \n> ... And we'll eventually clean it after the streamed\n>> transaction gets commited (ReorderBufferStreamCommit ends up calling\n>> ReorderBufferReturnTXN too).\n> \n> Right, but generally after streaming we assert that txn->size should\n> be 0. That could be changed if we change the above design but this is\n> what it is today.\n> \n\nCan we add some assert to enforce this?\n\n>> I wonder if there's a way to free the TOASTed data earlier, instead of\n>> waiting until the end of the transaction (as this patch does). But I\n>> suspect it'd be way more complex, harder to backpatch, and destroying\n>> the hash table is a good idea anyway.\n> \n> Right.\n> \n>> Also, I think the \"if (txn->toast_hash != NULL)\" checks are not needed,\n>> because it's the first thing ReorderBufferToastReset does.\n> \n> I see, I will change this. If we all agree with this idea.\n> \n\n+1 from me. I think it's good enough to do the cleanup at the end, and\nit's an improvement compared to current state. 
There might be cases of\ntransactions doing many such speculative inserts and accumulating a lot\nof data in the TOAST hash, but I find it very unlikely.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 28 May 2021 14:31:33 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Fri, May 28, 2021 at 6:01 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/28/21 2:17 PM, Dilip Kumar wrote:\n> > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >> On 5/27/21 6:36 AM, Dilip Kumar wrote:\n> >>> On Thu, May 27, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>>>\n> >>>> On Thu, May 27, 2021 at 9:40 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >>>>\n> >>>> True, but if you do this clean-up in ReorderBufferCleanupTXN then you\n> >>>> don't need to take care at separate places. Also, toast_hash is stored\n> >>>> in txn so it appears natural to clean it up in while releasing TXN.\n> >>>\n> >>> Make sense, basically, IMHO we will have to do in TruncateTXN and\n> >>> ReturnTXN as attached?\n> >>>\n> >>\n> >> Yeah, I've been working on a fix over the last couple days (because of a\n> >> customer issue), and I ended up with the reset in ReorderBufferReturnTXN\n> >> too - it does solve the issue in the case I've been investigating.\n> >>\n> >> I'm not sure the reset in ReorderBufferTruncateTXN is correct, though.\n> >> Isn't it possible that we'll need the TOAST data after streaming part of\n> >> the transaction? After all, we're not resetting invalidations, tuplecids\n> >> and snapshot either\n> >\n> > Actually, as per the current design, we don't need the toast data\n> > after the streaming. 
Because currently, we don't allow to stream the\n> > transaction if we need to keep the toast across stream e.g. if we only\n> > have toast insert without the main insert we assure this as partial\n> > changes, another case is if we have multi-insert with toast then we\n> > keep the transaction as mark partial until we get the last insert of\n> > the multi-insert. So with the current design we don't have any case\n> > where we need to keep toast data across streams.\n> >\n> > ... And we'll eventually clean it after the streamed\n> >> transaction gets commited (ReorderBufferStreamCommit ends up calling\n> >> ReorderBufferReturnTXN too).\n> >\n> > Right, but generally after streaming we assert that txn->size should\n> > be 0. That could be changed if we change the above design but this is\n> > what it is today.\n> >\n>\n> Can we add some assert to enforce this?\n>\n\nThere is already an Assert for this. See ReorderBufferCheckMemoryLimit.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 28 May 2021 18:49:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> I wonder if there's a way to free the TOASTed data earlier, instead of\n> waiting until the end of the transaction (as this patch does).\n>\n\nIIUC we are anyway freeing the toasted data at the next\ninsert/update/delete. 
We can try to free at other change message types\nlike REORDER_BUFFER_CHANGE_MESSAGE but as you said that may make the\npatch more complex, so it seems better to do the fix on the lines of\nwhat is proposed in the patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 29 May 2021 09:59:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On 5/29/21 6:29 AM, Amit Kapila wrote:\n> On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> I wonder if there's a way to free the TOASTed data earlier, instead of\n>> waiting until the end of the transaction (as this patch does).\n>>\n> \n> IIUC we are anyway freeing the toasted data at the next\n> insert/update/delete. We can try to free at other change message types\n> like REORDER_BUFFER_CHANGE_MESSAGE but as you said that may make the\n> patch more complex, so it seems better to do the fix on the lines of\n> what is proposed in the patch.\n> \n\n+1\n\nEven if we started doing what you mention (freeing the hash for other\nchange types), we'd still need to do what the patch proposes because the\nspeculative insert may be the last change in the transaction. 
For the\nother cases it works as a mitigation, so that we don't leak the memory\nforever.\n\nSo let's get this committed, perhaps with a comment explaining that it\nmight be possible to reset earlier if needed.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 29 May 2021 14:15:10 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Sat, May 29, 2021 at 5:45 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 5/29/21 6:29 AM, Amit Kapila wrote:\n> > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> >>\n> >> I wonder if there's a way to free the TOASTed data earlier, instead of\n> >> waiting until the end of the transaction (as this patch does).\n> >>\n> >\n> > IIUC we are anyway freeing the toasted data at the next\n> > insert/update/delete. We can try to free at other change message types\n> > like REORDER_BUFFER_CHANGE_MESSAGE but as you said that may make the\n> > patch more complex, so it seems better to do the fix on the lines of\n> > what is proposed in the patch.\n> >\n>\n> +1\n>\n> Even if we started doing what you mention (freeing the hash for other\n> change types), we'd still need to do what the patch proposes because the\n> speculative insert may be the last change in the transaction. For the\n> other cases it works as a mitigation, so that we don't leak the memory\n> forever.\n>\n\nRight.\n\n> So let's get this committed, perhaps with a comment explaining that it\n> might be possible to reset earlier if needed.\n>\n\nOkay, I think it would be better if we can test this once for the\nstreaming case as well. 
Dilip, would you like to do that and send the\nupdated patch as per one of the comments by Tomas?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 31 May 2021 08:21:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, 31 May 2021 at 8:21 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Sat, May 29, 2021 at 5:45 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > On 5/29/21 6:29 AM, Amit Kapila wrote:\n> > > On Fri, May 28, 2021 at 5:16 PM Tomas Vondra\n> > > <tomas.vondra@enterprisedb.com> wrote:\n> > >>\n> > >> I wonder if there's a way to free the TOASTed data earlier, instead of\n> > >> waiting until the end of the transaction (as this patch does).\n> > >>\n> > >\n> > > IIUC we are anyway freeing the toasted data at the next\n> > > insert/update/delete. We can try to free at other change message types\n> > > like REORDER_BUFFER_CHANGE_MESSAGE but as you said that may make the\n> > > patch more complex, so it seems better to do the fix on the lines of\n> > > what is proposed in the patch.\n> > >\n> >\n> > +1\n> >\n> > Even if we started doing what you mention (freeing the hash for other\n> > change types), we'd still need to do what the patch proposes because the\n> > speculative insert may be the last change in the transaction. For the\n> > other cases it works as a mitigation, so that we don't leak the memory\n> > forever.\n> >\n>\n> Right.\n>\n> > So let's get this committed, perhaps with a comment explaining that it\n> > might be possible to reset earlier if needed.\n> >\n>\n> Okay, I think it would be better if we can test this once for the\n> streaming case as well. 
Dilip, would you like to do that and send the\n> updated patch as per one of the comments by Tomas?\n\n\nI will do that in sometime.\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 31 May 2021 08:50:16 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, 
{ "msg_contents": "On Mon, May 31, 2021 at 8:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, 31 May 2021 at 8:21 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> Okay, I think it would be better if we can test this once for the\n>> streaming case as well. Dilip, would you like to do that and send the\n>> updated patch as per one of the comments by Tomas?\n>\n>\n> I will do that sometime.\n\n\nI have changed patches as Tomas suggested and also created back patches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 31 May 2021 16:29:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, 31 May 2021 at 4:29 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, May 31, 2021 at 8:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, 31 May 2021 at 8:21 AM, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> Okay, I think it would be better if we can test this once for the\n> >> streaming case as well. Dilip, would you like to do that and send the\n> >> updated patch as per one of the comments by Tomas?\n> >\n> >\n> > I will do that sometime.\n>\n>\n> I have changed patches as Tomas suggested and also created back patches.\n\n\nI missed to do the test for streaming. 
I will do that tomorrow and reply\nback.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 31 May 2021 18:32:14 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, 31 May 2021 at 4:29 PM, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>>\n>> On Mon, May 31, 2021 at 8:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> >\n>> > On Mon, 31 May 2021 at 8:21 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> >>\n>> >> Okay, I think it would be better if we can test this once for the\n>> >> streaming case as well. Dilip, would you like to do that and send the\n>> >> updated patch as per one of the comments by Tomas?\n>> >\n>> >\n>> > I will do that sometime.\n>>\n>>\n>> I have changed patches as Tomas suggested and also created back patches.\n>\n>\n> I missed to do the test for streaming. I will do that tomorrow and reply back.\n\nFor streaming transactions this issue is not there. 
Because this\nproblem will only occur if the last change is *SPEC INSERT * and after\nthat there is no other UPDATE/INSERT change because on that change we\nare resetting the toast table. Now, if the transaction has only *SPEC\nINSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\nnot stream it. And if we get any next INSERT/UPDATE then only we can\nselect this for stream but in that case toast will be reset. So as of\ntoday for streaming mode we don't have this problem.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 31 May 2021 20:12:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, May 31, 2021 at 8:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I missed to do the test for streaming. I will to that tomorrow and reply back.\n>\n> For streaming transactions this issue is not there. Because this\n> problem will only occur if the last change is *SPEC INSERT * and after\n> that there is no other UPDATE/INSERT change because on that change we\n> are resetting the toast table. Now, if the transaction has only *SPEC\n> INSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\n> not stream it. And if we get any next INSERT/UPDATE then only we can\n> select this for stream but in that case toast will be reset. So as of\n> today for streaming mode we don't have this problem.\n>\n\nWhat if the next change is a different SPEC_INSERT\n(REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)? 
I think in that case we\nwill stream but won't free the toast memory.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Jun 2021 09:53:40 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Mon, May 31, 2021 at 8:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > I missed to do the test for streaming. I will to that tomorrow and reply back.\n> >\n> > For streaming transactions this issue is not there. Because this\n> > problem will only occur if the last change is *SPEC INSERT * and after\n> > that there is no other UPDATE/INSERT change because on that change we\n> > are resetting the toast table. Now, if the transaction has only *SPEC\n> > INSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\n> > not stream it. And if we get any next INSERT/UPDATE then only we can\n> > select this for stream but in that case toast will be reset. So as of\n> > today for streaming mode we don't have this problem.\n> >\n>\n> What if the next change is a different SPEC_INSERT\n> (REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)? 
I think in that case we\n> will stream but won't free the toast memory.\n\nBut if the next change is again the SPEC INSERT then we will keep the\nPARTIAL change flag set and we will not select this transaction for\nstream right?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 09:59:41 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > I missed to do the test for streaming. I will to that tomorrow and reply back.\n> > >\n> > > For streaming transactions this issue is not there. Because this\n> > > problem will only occur if the last change is *SPEC INSERT * and after\n> > > that there is no other UPDATE/INSERT change because on that change we\n> > > are resetting the toast table. Now, if the transaction has only *SPEC\n> > > INSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\n> > > not stream it. And if we get any next INSERT/UPDATE then only we can\n> > > select this for stream but in that case toast will be reset. So as of\n> > > today for streaming mode we don't have this problem.\n> > >\n> >\n> > What if the next change is a different SPEC_INSERT\n> > (REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)? 
I think in that case we\n> > will stream but won't free the toast memory.\n>\n> But if the next change is again the SPEC INSERT then we will keep the\n> PARTIAL change flag set and we will not select this transaction for\n> stream right?\n>\n\nRight, I think you can remove the change related to stream xact and\nprobably write some comments on why we don't need it for streamed\ntransactions. But, now I have another question related to fixing the\nnon-streamed case. What if after the missing spec_confirm we get the\ndelete operation in the transaction? It seems\nReorderBufferToastReplace always expects Insert/Update if we have\ntoast hash active in the transaction.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Jun 2021 10:21:35 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > I missed to do the test for streaming. I will to that tomorrow and reply back.\n> > > >\n> > > > For streaming transactions this issue is not there. Because this\n> > > > problem will only occur if the last change is *SPEC INSERT * and after\n> > > > that there is no other UPDATE/INSERT change because on that change we\n> > > > are resetting the toast table. Now, if the transaction has only *SPEC\n> > > > INSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\n> > > > not stream it. 
And if we get any next INSERT/UPDATE then only we can\n> > > > select this for stream but in that case toast will be reset. So as of\n> > > > today for streaming mode we don't have this problem.\n> > > >\n> > >\n> > > What if the next change is a different SPEC_INSERT\n> > > (REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)? I think in that case we\n> > > will stream but won't free the toast memory.\n> >\n> > But if the next change is again the SPEC INSERT then we will keep the\n> > PARTIAL change flag set and we will not select this transaction for\n> > stream right?\n> >\n>\n> Right, I think you can remove the change related to stream xact and\n> probably write some comments on why we don't need it for streamed\n> transactions. But, now I have another question related to fixing the\n> non-streamed case. What if after the missing spec_confirm we get the\n> delete operation in the transaction? It seems\n> ReorderBufferToastReplace always expects Insert/Update if we have\n> toast hash active in the transaction.\n\nYeah, that looks like a problem, I will test this case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 11:00:49 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 9:59 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 1, 2021 at 9:53 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > On Mon, May 31, 2021 at 8:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > >\n> > > > > On Mon, May 31, 2021 at 6:32 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > > > > >\n> > > > > > I missed to do the test for streaming. 
I will to that tomorrow and reply back.\n> > > > >\n> > > > > For streaming transactions this issue is not there. Because this\n> > > > > problem will only occur if the last change is *SPEC INSERT * and after\n> > > > > that there is no other UPDATE/INSERT change because on that change we\n> > > > > are resetting the toast table. Now, if the transaction has only *SPEC\n> > > > > INSERT* without SPEC CONFIRM or any other INSERT/UPDATE then we will\n> > > > > not stream it. And if we get any next INSERT/UPDATE then only we can\n> > > > > select this for stream but in that case toast will be reset. So as of\n> > > > > today for streaming mode we don't have this problem.\n> > > > >\n> > > >\n> > > > What if the next change is a different SPEC_INSERT\n> > > > (REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT)? I think in that case we\n> > > > will stream but won't free the toast memory.\n> > >\n> > > But if the next change is again the SPEC INSERT then we will keep the\n> > > PARTIAL change flag set and we will not select this transaction for\n> > > stream right?\n> > >\n> >\n> > Right, I think you can remove the change related to stream xact and\n> > probably write some comments on why we don't need it for streamed\n> > transactions. But, now I have another question related to fixing the\n> > non-streamed case. What if after the missing spec_confirm we get the\n> > delete operation in the transaction? 
It seems\n> > ReorderBufferToastReplace always expects Insert/Update if we have\n> > toast hash active in the transaction.\n>\n> Yeah, that looks like a problem, I will test this case.\n\nI am able to hit that Assert after slight modification in the original\ntest case, basically, I added an extra delete in the spec abort\ntransaction and I got this assertion.\n\n#0 0x00007f7b8cc3a387 in raise () from /lib64/libc.so.6\n#1 0x00007f7b8cc3ba78 in abort () from /lib64/libc.so.6\n#2 0x0000000000b027c7 in ExceptionalCondition (conditionName=0xcc11df\n\"change->data.tp.newtuple\", errorType=0xcc0244 \"FailedAssertion\",\n fileName=0xcc0290 \"reorderbuffer.c\", lineNumber=4601) at assert.c:69\n#3 0x00000000008dfeaf in ReorderBufferToastReplace (rb=0x1a73e40,\ntxn=0x1b5d6e8, relation=0x7f7b8dab4d78, change=0x1b5fb68) at\nreorderbuffer.c:4601\n#4 0x00000000008db769 in ReorderBufferProcessTXN (rb=0x1a73e40,\ntxn=0x1b5d6e8, commit_lsn=24329048, snapshot_now=0x1b4b8d0,\ncommand_id=0, streaming=false)\n at reorderbuffer.c:2187\n#5 0x00000000008dc1df in ReorderBufferReplay (txn=0x1b5d6e8,\nrb=0x1a73e40, xid=748, commit_lsn=24329048, end_lsn=24329096,\ncommit_time=675842700629597,\n origin_id=0, origin_lsn=0) at reorderbuffer.c:2601\n#6 0x00000000008dc267 in ReorderBufferCommit (rb=0x1a73e40, xid=748,\ncommit_lsn=24329048, end_lsn=24329096, commit_time=675842700629597,\norigin_id=0, origin_lsn=0)\n at reorderbuffer.c:2625\n#7 0x00000000008cc144 in DecodeCommit (ctx=0x1b319b0,\nbuf=0x7ffdf15fb0a0, parsed=0x7ffdf15faf00, xid=748, two_phase=false)\nat decode.c:744\n\nIMHO, as I stated earlier one way to fix this problem is that we add\nthe spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\nqueue, maybe with action name\n\"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\nthat just cleans up the toast and specinsert tuple and nothing else.\nIf we think that makes sense then I can work on that patch?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: 
http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 11:43:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 11:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 11:00 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 10:21 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > >\n> > > Right, I think you can remove the change related to stream xact and\n> > > probably write some comments on why we don't need it for streamed\n> > > transactions. But, now I have another question related to fixing the\n> > > non-streamed case. What if after the missing spec_confirm we get the\n> > > delete operation in the transaction? It seems\n> > > ReorderBufferToastReplace always expects Insert/Update if we have\n> > > toast hash active in the transaction.\n> >\n> > Yeah, that looks like a problem, I will test this case.\n>\n> I am able to hit that Assert after slight modification in the original\n> test case, basically, I added an extra delete in the spec abort\n> transaction and I got this assertion.\n>\n\nCan we try with other Insert/Update after spec abort to check if there\ncan be other problems due to active toast_hash?\n\n>\n> IMHO, as I stated earlier one way to fix this problem is that we add\n> the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> queue, maybe with action name\n> \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> that just cleans up the toast and specinsert tuple and nothing else.\n> If we think that makes sense then I can work on that patch?\n>\n\nI think this should solve the problem but let's first try to see if we\nhave any other problems. 
Because, say, if we don't have any other\nproblem, then maybe removing the Assert might work, but I guess we would\nstill need to process the tuple to find that we don't need to assemble\ntoast for it, which again seems like overkill.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Jun 2021 12:25:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> >\n> > IMHO, as I stated earlier one way to fix this problem is that we add\n> > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> > queue, maybe with action name\n> > \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> > that just cleans up the toast and specinsert tuple and nothing else.\n> > If we think that makes sense then I can work on that patch?\n> >\n>\n> I think this should solve the problem but let's first try to see if we\n> have any other problems. Because, say, if we don't have any other\n> problem, then maybe removing the Assert might work, but I guess we would\n> still need to process the tuple to find that we don't need to assemble\n> toast for it, which again seems like overkill.\n\nYeah, other operations will also fail. Basically, if txn->toast_hash is\nnot NULL then we assume that we need to assemble the tuple using\ntoast, but if the next insert is into another relation and that\nrelation doesn't have a toast table then it will fail with the error\nbelow. 
Also, if the next change is for the same relation, then the toast\nchunks of the previous tuple will be used for constructing this new\ntuple. I think we must clean up the toast before processing the next\ntuple, so I think we can go with the solution I mentioned above.\n\nstatic void\nReorderBufferToastReplace\n{\n...\n toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\n if (!RelationIsValid(toast_rel))\n elog(ERROR, \"could not open relation with OID %u\",\n relation->rd_rel->reltoastrelid);\n\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Jun 2021 17:22:55 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 5:22 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > IMHO, as I stated earlier one way to fix this problem is that we add\n> > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> > > queue, maybe with action name\n> > > \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> > > that just cleans up the toast and specinsert tuple and nothing else.\n> > > If we think that makes sense then I can work on that patch?\n> > >\n> >\n> > I think this should solve the problem but let's first try to see if we\n> > have any other problems. 
Because, say, if we don't have any other\n> > problem, then maybe removing the Assert might work, but I guess we would\n> > still need to process the tuple to find that we don't need to assemble\n> > toast for it, which again seems like overkill.\n>\n> Yeah, other operations will also fail. Basically, if txn->toast_hash is\n> not NULL then we assume that we need to assemble the tuple using\n> toast, but if the next insert is into another relation and that\n> relation doesn't have a toast table then it will fail with the error\n> below. Also, if the next change is for the same relation, then the toast\n> chunks of the previous tuple will be used for constructing this new\n> tuple. I think we must clean up the toast before processing the next\n> tuple, so I think we can go with the solution I mentioned above.\n>\n> static void\n> ReorderBufferToastReplace\n> {\n> ...\n> toast_rel = RelationIdGetRelation(relation->rd_rel->reltoastrelid);\n> if (!RelationIsValid(toast_rel))\n> elog(ERROR, \"could not open relation with OID %u\",\n> relation->rd_rel->reltoastrelid);\n\nThe attached patch fixes this by queuing the spec abort change and\ncleaning up the toast hash on spec abort. Currently, in this patch I am\nqueuing up all the spec abort changes, but as an optimization we could\navoid queuing the spec abort for toast tables; for that we would need to\nlog a flag in WAL indicating that this XLH_DELETE_IS_SUPER is for a\ntoast relation.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 1 Jun 2021 20:00:42 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 8:01 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n>\n> The attached patch fixes this by queuing the spec abort change and\n> cleaning up the toast hash on spec abort. 
Currently, in this patch I am\n> queuing up all the spec abort changes, but as an optimization we can\n> avoid\n> queuing the spec abort for toast tables but for that we need to log\n> that as a flag in WAL. that this XLH_DELETE_IS_SUPER is for a toast\n> relation.\n>\n\nI don't think that is required especially because we intend to\nbackpatch this, so I would like to keep such optimization for another\nday. Few comments:\n\nComments:\n------------\n/*\n* Super deletions are irrelevant for logical decoding, it's driven by the\n* confirmation records.\n*/\n1. The above comment is not required after your other changes.\n\n/*\n* Either speculative insertion was confirmed, or it was\n* unsuccessful and the record isn't needed anymore.\n*/\nif (specinsert != NULL)\n2. The above comment needs some adjustment.\n\n/*\n* There's a speculative insertion remaining, just clean in up, it\n* can't have been successful, otherwise we'd gotten a confirmation\n* record.\n*/\nif (specinsert)\n{\nReorderBufferReturnChange(rb, specinsert, true);\nspecinsert = NULL;\n}\n\n3. Ideally, we should have an Assert here because we shouldn't reach\nwithout cleaning up specinsert. If there is still a chance then we\nshould mention that in the comments.\n\n4.\n+ case REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT:\n+\n+ /*\n+ * Abort for speculative insertion arrived.\n\nI think here we should explain why we can't piggyback cleanup on next\ninsert/update/delete.\n\n5. Can we write a test case for it? I guess we don't need to use\nmultiple sessions if the conflicting record is already present.\n\nPlease see if the same patch works on back-branches? 
I guess this\nmakes the change bit tricky as it involves decoding a new message but\nnot sure if there is a better way.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Jun 2021 09:42:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > >\n> > > IMHO, as I stated earlier one way to fix this problem is that we add\n> > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> > > queue, maybe with action name\n> > > \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> > > that just cleans up the toast and specinsert tuple and nothing else.\n> > > If we think that makes sense then I can work on that patch?\n> > >\n> >\n> > I think this should solve the problem but let's first try to see if we\n> > have any other problems. Because, say, if we don't have any other\n> > problem, then maybe removing Assert might work but I guess we still\n> > need to process the tuple to find that we don't need to assemble toast\n> > for it which again seems like overkill.\n>\n> Yeah, other operation will also fail, basically, if txn->toast_hash is\n> not NULL then we assume that we need to assemble the tuple using\n> toast, but if there is next insert in another relation and if that\n> relation doesn't have a toast table then it will fail with below\n> error. And otherwise also, if it is the same relation, then the toast\n> chunks of previous tuple will be used for constructing this new tuple.\n>\n\nI think the same relation case might not create a problem because it\nwon't find the entry for it in the toast_hash, so it will return from\nthere but the other two problems will be there. 
So, one idea could be\nto just avoid those two cases (by simply adding return for those\ncases) and still we can rely on toast clean up on the next\ninsert/update/delete. However, your approach looks more natural to me\nas that will allow us to clean up memory in all cases instead of\nwaiting till the transaction end. So, I still vote for going with your\npatch's idea of cleaning at spec_abort but I am fine if you and others\ndecide not to process spec_abort message. What do you think? Tomas, do\nyou have any opinion on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Jun 2021 11:24:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > >\n> > > > IMHO, as I stated earlier one way to fix this problem is that we add\n> > > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> > > > queue, maybe with action name\n> > > > \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> > > > that just cleans up the toast and specinsert tuple and nothing else.\n> > > > If we think that makes sense then I can work on that patch?\n> > > >\n> > >\n> > > I think this should solve the problem but let's first try to see if we\n> > > have any other problems. 
Because, say, if we don't have any other\n> > > problem, then maybe removing Assert might work but I guess we still\n> > > need to process the tuple to find that we don't need to assemble toast\n> > > for it which again seems like overkill.\n> >\n> > Yeah, other operation will also fail, basically, if txn->toast_hash is\n> > not NULL then we assume that we need to assemble the tuple using\n> > toast, but if there is next insert in another relation and if that\n> > relation doesn't have a toast table then it will fail with below\n> > error. And otherwise also, if it is the same relation, then the toast\n> > chunks of previous tuple will be used for constructing this new tuple.\n> >\n>\n> I think the same relation case might not create a problem because it\n> won't find the entry for it in the toast_hash, so it will return from\n> there but the other two problems will be there.\n\nRight\n\nSo, one idea could be\n> to just avoid those two cases (by simply adding return for those\n> cases) and still we can rely on toast clean up on the next\n> insert/update/delete. However, your approach looks more natural to me\n> as that will allow us to clean up memory in all cases instead of\n> waiting till the transaction end. So, I still vote for going with your\n> patch's idea of cleaning at spec_abort but I am fine if you and others\n> decide not to process spec_abort message. What do you think? Tomas, do\n> you have any opinion on this matter?\n\nI agree that processing with spec abort looks more natural and ideally\nthe current code expects it to be getting cleaned after the change,\nthat's the reason we have those assertions and errors. OTOH I agree\nthat we can just return from those conditions because now we know that\nwith the current code those situations are possible. 
My vote is with\nhandling the spec abort option (Option1) because that looks more\nnatural way of handling these issues and we also don't have to clean\nup the hash in \"ReorderBufferReturnTXN\" if no followup change after\nspec abort. I am attaching the patches with both the approaches for\nthe reference.\n\nOnce we finalize on the approach then I will work on pending review\ncomments and also prepare the back branch patches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Jun 2021 11:37:56 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jun 1, 2021 at 5:23 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n> > > On Tue, Jun 1, 2021 at 12:25 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > >\n> > > > > IMHO, as I stated earlier one way to fix this problem is that we add\n> > > > > the spec abort operation (DELETE + XLH_DELETE_IS_SUPER flag) to the\n> > > > > queue, maybe with action name\n> > > > > \"REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT\" and as part of processing\n> > > > > that just cleans up the toast and specinsert tuple and nothing else.\n> > > > > If we think that makes sense then I can work on that patch?\n> > > > >\n> > > >\n> > > > I think this should solve the problem but let's first try to see if we\n> > > > have any other problems. 
Because, say, if we don't have any other\n> > > > problem, then maybe removing Assert might work but I guess we still\n> > > > need to process the tuple to find that we don't need to assemble toast\n> > > > for it which again seems like overkill.\n> > >\n> > > Yeah, other operation will also fail, basically, if txn->toast_hash is\n> > > not NULL then we assume that we need to assemble the tuple using\n> > > toast, but if there is next insert in another relation and if that\n> > > relation doesn't have a toast table then it will fail with below\n> > > error. And otherwise also, if it is the same relation, then the toast\n> > > chunks of previous tuple will be used for constructing this new tuple.\n> > >\n> >\n> > I think the same relation case might not create a problem because it\n> > won't find the entry for it in the toast_hash, so it will return from\n> > there but the other two problems will be there.\n>\n> Right\n>\n> So, one idea could be\n> > to just avoid those two cases (by simply adding return for those\n> > cases) and still we can rely on toast clean up on the next\n> > insert/update/delete. However, your approach looks more natural to me\n> > as that will allow us to clean up memory in all cases instead of\n> > waiting till the transaction end. So, I still vote for going with your\n> > patch's idea of cleaning at spec_abort but I am fine if you and others\n> > decide not to process spec_abort message. What do you think? Tomas, do\n> > you have any opinion on this matter?\n>\n> I agree that processing with spec abort looks more natural and ideally\n> the current code expects it to be getting cleaned after the change,\n> that's the reason we have those assertions and errors. OTOH I agree\n> that we can just return from those conditions because now we know that\n> with the current code those situations are possible. 
My vote is with\n> handling the spec abort option (Option1) because that looks more\n> natural way of handling these issues and we also don't have to clean\n> up the hash in \"ReorderBufferReturnTXN\" if no followup change after\n> spec abort.\n>\n\nEven, if we decide to go with spec_abort approach, it might be better\nto still keep the toastreset call in ReorderBufferReturnTXN so that it\ncan be freed in case of error.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Jun 2021 11:52:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I think the same relation case might not create a problem because it\n> > > won't find the entry for it in the toast_hash, so it will return from\n> > > there but the other two problems will be there.\n> >\n> > Right\n> >\n> > So, one idea could be\n> > > to just avoid those two cases (by simply adding return for those\n> > > cases) and still we can rely on toast clean up on the next\n> > > insert/update/delete. However, your approach looks more natural to me\n> > > as that will allow us to clean up memory in all cases instead of\n> > > waiting till the transaction end. So, I still vote for going with your\n> > > patch's idea of cleaning at spec_abort but I am fine if you and others\n> > > decide not to process spec_abort message. What do you think? 
Tomas, do\n> > > you have any opinion on this matter?\n> >\n> > I agree that processing with spec abort looks more natural and ideally\n> > the current code expects it to be getting cleaned after the change,\n> > that's the reason we have those assertions and errors.\n> >\n\nOkay, so, let's go with that approach. I have thought about whether it\ncreates any problem in back-branches but couldn't think of any\nprimarily because we are not going to send anything additional to\nplugin/subscriber. Do you see any problems with back branches if we go\nwith this approach?\n\n> > OTOH I agree\n> > that we can just return from those conditions because now we know that\n> > with the current code those situations are possible. My vote is with\n> > handling the spec abort option (Option1) because that looks more\n> > natural way of handling these issues and we also don't have to clean\n> > up the hash in \"ReorderBufferReturnTXN\" if no followup change after\n> > spec abort.\n> >\n>\n> Even, if we decide to go with spec_abort approach, it might be better\n> to still keep the toastreset call in ReorderBufferReturnTXN so that it\n> can be freed in case of error.\n>\n\nPlease take care of this as well.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Jun 2021 08:30:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, 7 Jun 2021 at 8:30 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >\n> > On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com>\n> wrote:\n> > >\n> > > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > > >\n> > > > I think the same relation case might not create a problem because it\n> > > > won't find the entry for it in the toast_hash, so it will return from\n> > > > there but the 
other two problems will be there.\n> > >\n> > > Right\n> > >\n> > > So, one idea could be\n> > > > to just avoid those two cases (by simply adding return for those\n> > > > cases) and still we can rely on toast clean up on the next\n> > > > insert/update/delete. However, your approach looks more natural to me\n> > > > as that will allow us to clean up memory in all cases instead of\n> > > > waiting till the transaction end. So, I still vote for going with\n> your\n> > > > patch's idea of cleaning at spec_abort but I am fine if you and\n> others\n> > > > decide not to process spec_abort message. What do you think? Tomas,\n> do\n> > > > you have any opinion on this matter?\n> > >\n> > > I agree that processing with spec abort looks more natural and ideally\n> > > the current code expects it to be getting cleaned after the change,\n> > > that's the reason we have those assertions and errors.\n> > >\n>\n> Okay, so, let's go with that approach. I have thought about whether it\n> creates any problem in back-branches but couldn't think of any\n> primarily because we are not going to send anything additional to\n> plugin/subscriber. Do you see any problems with back branches if we go\n> with this approach?\n\n\nI will check this and let you know.\n\n\n> > > OTOH I agree\n> > > that we can just return from those conditions because now we know that\n> > > with the current code those situations are possible. 
My vote is with\n> > > handling the spec abort option (Option1) because that looks more\n> > > natural way of handling these issues and we also don't have to clean\n> > > up the hash in \"ReorderBufferReturnTXN\" if no followup change after\n> > > spec abort.\n> > >\n> >\n> > Even, if we decide to go with spec_abort approach, it might be better\n> > to still keep the toastreset call in ReorderBufferReturnTXN so that it\n> > can be freed in case of error.\n> >\n>\n> Please take care of this as well.\n\n\nOk\n\n> --\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 7 Jun 2021 08:46:33 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 7, 2021 at 8:46 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n> On Mon, 7 Jun 2021 at 8:30 AM, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n>\n>> On Wed, Jun 2, 2021 at 11:52 AM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> >\n>> > On Wed, Jun 2, 2021 at 11:38 AM Dilip Kumar <dilipbalaut@gmail.com>\n>> wrote:\n>> > >\n>> > > On Wed, Jun 2, 2021 at 11:25 AM Amit Kapila <amit.kapila16@gmail.com>\n>> wrote:\n>> > > >\n>> > > > I think the same relation case might not create a problem because it\n>> > > > won't find the entry for it in the toast_hash, so it will return\n>> from\n>> > > > there but the other two problems will be there.\n>> > >\n>> > > Right\n>> > >\n>> > > So, one idea could be\n>> > > > to just avoid those two cases (by simply adding return 
for those\n>> > > > cases) and still we can rely on toast clean up on the next\n>> > > > insert/update/delete. However, your approach looks more natural to\n>> me\n>> > > > as that will allow us to clean up memory in all cases instead of\n>> > > > waiting till the transaction end. So, I still vote for going with\n>> your\n>> > > > patch's idea of cleaning at spec_abort but I am fine if you and\n>> others\n>> > > > decide not to process spec_abort message. What do you think? Tomas,\n>> do\n>> > > > you have any opinion on this matter?\n>> > >\n>> > > I agree that processing with spec abort looks more natural and ideally\n>> > > the current code expects it to be getting cleaned after the change,\n>> > > that's the reason we have those assertions and errors.\n>> > >\n>>\n>> Okay, so, let's go with that approach. I have thought about whether it\n>> creates any problem in back-branches but couldn't think of any\n>> primarily because we are not going to send anything additional to\n>> plugin/subscriber. Do you see any problems with back branches if we go\n>> with this approach?\n>\n>\n> I will check this and let you know.\n>\n>\n>> > > OTOH I agree\n>> > > that we can just return from those conditions because now we know that\n>> > > with the current code those situations are possible. My vote is with\n>> > > handling the spec abort option (Option1) because that looks more\n>> > > natural way of handling these issues and we also don't have to clean\n>> > > up the hash in \"ReorderBufferReturnTXN\" if no followup change after\n>> > > spec abort.\n>> > >\n>> >\n>> > Even, if we decide to go with spec_abort approach, it might be better\n>> > to still keep the toastreset call in ReorderBufferReturnTXN so that it\n>> > can be freed in case of error.\n>> >\n>>\n>> Please take care of this as well.\n>\n>\n> Ok\n>\n\nI have fixed all pending review comments and also added a test case which\nis working fine. I haven't yet checked on the back branches. 
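For reference, the scenario the test needs to exercise can be sketched roughly as below (illustrative only, not the exact attached spec; the advisory-lock choreography that forces the speculative insertion to be super-deleted by a concurrent conflicting insert is omitted here):

```
# Illustrative sketch of the isolation spec, simplified.
setup
{
    CREATE TABLE tbl1 (a int PRIMARY KEY, b text);
    CREATE TABLE tbl2 (a int);
    SELECT 'init' FROM pg_create_logical_replication_slot('isolation_slot', 'test_decoding');
}

teardown
{
    DROP TABLE tbl1;
    DROP TABLE tbl2;
    SELECT 'stop' FROM pg_drop_replication_slot('isolation_slot');
}

session "s2"
step "s2_begin"       { BEGIN; }
# toasted value; ends up as a speculative insert that is spec-aborted
step "s2_insert_tbl1" { INSERT INTO tbl1 VALUES (1, repeat('a', 4000)) ON CONFLICT DO NOTHING; }
# tbl2 has no toast table: decoding must not try to assemble toast here
step "s2_insert_tbl2" { INSERT INTO tbl2 VALUES (1); }
step "s2_commit"      { COMMIT; }

session "s1"
step "s1_get_changes" { SELECT data FROM pg_logical_slot_get_changes('isolation_slot', NULL, NULL, 'skip-empty-xacts', '1', 'include-xids', '0'); }
```

The interesting part is that a toasted speculative insert is aborted and the same transaction then inserts into a table without a toast table; without the fix, decoding that change either errors out or assembles the tuple from the leftover toast chunks.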
Let's\ndiscuss if we think this patch looks fine then I can apply and test on the\nback branches.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 7 Jun 2021 18:04:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 7, 2021 at 6:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> I have fixed all pending review comments and also added a test case which is working fine.\n>\n\nFew observations and questions on testcase:\n1.\n+step \"s1_lock_s2\" { SELECT pg_advisory_lock(2); }\n+step \"s1_lock_s3\" { SELECT pg_advisory_lock(2); }\n+step \"s1_session\" { SET spec.session = 1; }\n+step \"s1_begin\" { BEGIN; }\n+step \"s1_insert_tbl1\" { INSERT INTO tbl1 VALUES(1, repeat('a', 4000))\nON CONFLICT DO NOTHING; }\n+step \"s1_abort\" { ROLLBACK; }\n+step \"s1_unlock_s2\" { SELECT pg_advisory_unlock(2); }\n+step \"s1_unlock_s3\" { SELECT pg_advisory_unlock(2); }\n\nHere, s1_lock_s3 and s1_unlock_s3 uses 2 as identifier. Don't you need\nto use 3 in that part of the test?\n\n2. In the test, there seems to be an assumption that we can unlock s2\nand s3 one after another, and then both will start waiting on s-1 but\nisn't it possible that before s2 start waiting on s1, s3 completes its\ninsertion and then s2 will never proceed for speculative insertion?\n\n> I haven't yet checked on the back branches. 
Let's discuss if we think this patch looks fine then I can apply and test on the back branches.\n>\n\nSure, that makes sense.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 7 Jun 2021 18:34:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 7, 2021 at 6:34 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Jun 7, 2021 at 6:04 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > I have fixed all pending review comments and also added a test case\n> which is working fine.\n> >\n>\n> Few observations and questions on testcase:\n> 1.\n> +step \"s1_lock_s2\" { SELECT pg_advisory_lock(2); }\n> +step \"s1_lock_s3\" { SELECT pg_advisory_lock(2); }\n> +step \"s1_session\" { SET spec.session = 1; }\n> +step \"s1_begin\" { BEGIN; }\n> +step \"s1_insert_tbl1\" { INSERT INTO tbl1 VALUES(1, repeat('a', 4000))\n> ON CONFLICT DO NOTHING; }\n> +step \"s1_abort\" { ROLLBACK; }\n> +step \"s1_unlock_s2\" { SELECT pg_advisory_unlock(2); }\n> +step \"s1_unlock_s3\" { SELECT pg_advisory_unlock(2); }\n>\n> Here, s1_lock_s3 and s1_unlock_s3 uses 2 as identifier. Don't you need\n> to use 3 in that part of the test?\n>\n\nYeah this should be 3.\n\n\n>\n> 2. In the test, there seems to be an assumption that we can unlock s2\n> and s3 one after another, and then both will start waiting on s-1 but\n> isn't it possible that before s2 start waiting on s1, s3 completes its\n> insertion and then s2 will never proceed for speculative insertion?\n>\n\nI agree, such race conditions are possible. 
Currently, I am not able to\nthink what we can do here, but I will think more on this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Mon, 7 Jun 2021 18:45:04 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 7, 2021 at 6:45 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n\n>>\n>>\n>> 2. 
In the test, there seems to be an assumption that we can unlock s2\n>> and s3 one after another, and then both will start waiting on s-1 but\n>> isn't it possible that before s2 start waiting on s1, s3 completes its\n>> insertion and then s2 will never proceed for speculative insertion?\n>\n>\n> I agree, such race conditions are possible. Currently, I am not able to\n> think what we can do here, but I will think more on this.\n>\n\nBased on the off-list discussion, I have modified the test based on\nthe idea shown in\n\"isolation/specs/insert-conflict-specconflict.spec\". Regarding the other\nopen point, how to ensure that a session makes progress once we unlock\nit: IMHO the isolation tester is designed in such a way that all the\nwaiting sessions either return with their output or block again on a\nheavy-weight lock before performing the next step.\n\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Jun 2021 17:16:26 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Based on the off list discussion, I have modified the test based on\n> the idea showed in\n> \"isolation/specs/insert-conflict-specconflict.spec\", other open point\n> we had about the race condition that how to ensure that when we unlock\n> any session it make progress, IMHO the isolation tested is designed in\n> a way that either all the waiting session returns with the output or\n> again block on a heavy weight lock before performing the next step.\n>\n\nFew comments:\n1. The test has a lot of similarities and test duplication with what\nwe are doing in insert-conflict-specconflict.spec. Can we move it to\ninsert-conflict-specconflict.spec? 
I understand that having it in\ntest_decoding has the advantage that we can have all decoding tests in\none place but OTOH, we can avoid a lot of test-code duplication if we\nadd it in insert-conflict-specconflict.spec.\n\n2.\n+#permutation \"s1_session\" \"s1_lock_s2\" \"s1_lock_s3\" \"s1_begin\"\n\"s1_insert_tbl1\" \"s2_session\" \"s2_begin\" \"s2_insert_tbl1\" \"s3_session\"\n\"s3_begin\" \"s3_insert_tbl1\" \"s1_unlock_s2\" \"s1_unlock_s3\" \"s1_lock_s2\"\n\"s1_abort\" \"s3_commit\" \"s1_unlock_s2\" \"s2_insert_tbl2\" \"s2_commit\"\n\"s1_get_changes\"\n\nThis permutation is not matching with what we are actually doing.\n\n3.\n+# Test that speculative locks are correctly acquired and released, s2\n+# inserts, s1 updates.\n\nThis test description doesn't seem to be correct. Can we change it to\nsomething like: \"Test logical decoding of speculative aborts for toast\ninsertion followed by insertion into a different table which doesn't\nhave a toast\"?\n\nAlso, let's prepare and test the patches for back-branches. It would\nbe better if you can prepare separate patches for code and test-case\nfor each branch then I can merge them before commit. 
This helps with\ntesting on back-branches.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:03:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 9, 2021 at 11:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > Based on the off list discussion, I have modified the test based on\n> > the idea showed in\n> > \"isolation/specs/insert-conflict-specconflict.spec\", other open point\n> > we had about the race condition that how to ensure that when we unlock\n> > any session it make progress, IMHO the isolation tested is designed in\n> > a way that either all the waiting session returns with the output or\n> > again block on a heavy weight lock before performing the next step.\n> >\n>\n> Few comments:\n> 1. The test has a lot of similarities and test duplication with what\n> we are doing in insert-conflict-specconflict.spec. Can we move it to\n> insert-conflict-specconflict.spec? 
I understand that having it in\n> test_decoding has the advantage that we can have all decoding tests in\n> one place but OTOH, we can avoid a lot of test-code duplication if we\n> add it in insert-conflict-specconflict.spec.\n>\n>\nIt seems the isolation test runs on the default configuration, will it be a\ngood idea to change the wal_level to logical for the whole isolation tester\nfolder?\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 9 Jun 2021 16:12:34 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 9, 2021 at 4:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 11:03 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Tue, Jun 8, 2021 at 5:16 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>> >\n>> > Based on the off list discussion, I have modified the test based on\n>> > the idea showed in\n>> > \"isolation/specs/insert-conflict-specconflict.spec\", other open point\n>> > we had about the race condition that how to ensure that when we unlock\n>> > any session it make progress, IMHO the isolation tested is designed in\n>> > a way that either all the waiting session returns with the output or\n>> > again block on a heavy weight lock before performing the next step.\n>> >\n>>\n>> Few comments:\n>> 1. The test has a lot of similarities and test duplication with what\n>> we are doing in insert-conflict-specconflict.spec. Can we move it to\n>> insert-conflict-specconflict.spec? I understand that having it in\n>> test_decoding has the advantage that we can have all decoding tests in\n>> one place but OTOH, we can avoid a lot of test-code duplication if we\n>> add it in insert-conflict-specconflict.spec.\n>>\n>\n> It seems the isolation test runs on the default configuration, will it be a good idea to change the wal_level to logical for the whole isolation tester folder?\n>\n\nNo, that doesn't sound like a good idea to me. 
Let's keep it in\ntest_decoding then.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 9 Jun 2021 16:21:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 9, 2021 at 4:22 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jun 9, 2021 at 4:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >> Few comments:\n> >> 1. The test has a lot of similarities and test duplication with what\n> >> we are doing in insert-conflict-specconflict.spec. Can we move it to\n> >> insert-conflict-specconflict.spec? I understand that having it in\n> >> test_decoding has the advantage that we can have all decoding tests in\n> >> one place but OTOH, we can avoid a lot of test-code duplication if we\n> >> add it in insert-conflict-specconflict.spec.\n> >>\n> >\n> > It seems the isolation test runs on the default configuration, will it\n> be a good idea to change the wal_level to logical for the whole isolation\n> tester folder?\n> >\n>\n> No, that doesn't sound like a good idea to me. Let's keep it in\n> test_decoding then.\n>\n>\nOkay, I will work on the remaining comments and back patches and send it by\ntomorrow.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Wed, 9 Jun 2021 19:16:39 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "May I suggest to use a different name in the blurt_and_lock_123()\nfunction, so that it doesn't conflict with the one in\ninsert-conflict-specconflict? Thanks\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Wed, 9 Jun 2021 11:29:17 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 9, 2021 at 8:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> May I suggest to use a different name in the blurt_and_lock_123()\n> function, so that it doesn't conflict with the one in\n> insert-conflict-specconflict? 
Thanks\n\nRenamed to blurt_and_lock(), is that fine?\n\nI have fixed other comments and also prepared patches for the back branches.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 10 Jun 2021 14:12:25 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 10, 2021 at 2:12 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Wed, Jun 9, 2021 at 8:59 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > May I suggest to use a different name in the blurt_and_lock_123()\n> > function, so that it doesn't conflict with the one in\n> > insert-conflict-specconflict? Thanks\n>\n> Renamed to blurt_and_lock(), is that fine?\n>\n\nI think a non-conflicting name should be fine.\n\n> I have fixed other comments and also prepared patches for the back branches.\n>\n\nOkay, I have verified the fix on all branches and the newly added test\nwas giving error without patch and passes with code change patch. Few\nminor things:\n1. You forgot to make the change in ReorderBufferChangeSize for v13 patch.\n2. I have made a few changes in the HEAD patch, (a) There was an\nunnecessary cleanup of spec insert at one place. I have replaced that\nwith Assert. (b) I have added and edited few comments both in the code\nand test patch.\n\nPlease find the patch for HEAD attached. 
Can you please prepare the\n> patch for back-branches by doing all the changes I have done in the\n> patch for HEAD?\n\nDone\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 11 Jun 2021 11:37:07 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Fri, Jun 11, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jun 10, 2021 at 7:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n>\n> >\n> > Please find the patch for HEAD attached. Can you please prepare the\n> > patch for back-branches by doing all the changes I have done in the\n> > patch for HEAD?\n>\n> Done\n>\n\nThanks, the patch looks good to me. I'll push these early next week\n(Tuesday) unless someone has any other comments or suggestions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 11 Jun 2021 19:23:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Fri, Jun 11, 2021 at 7:23 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Jun 11, 2021 at 11:37 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Thu, Jun 10, 2021 at 7:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> >\n> > >\n> > > Please find the patch for HEAD attached. Can you please prepare the\n> > > patch for back-branches by doing all the changes I have done in the\n> > > patch for HEAD?\n> >\n> > Done\n> >\n>\n> Thanks, the patch looks good to me. I'll push these early next week\n> (Tuesday) unless someone has any other comments or suggestions.\n>\n\nI think the test in this patch is quite similar to what Noah has\npointed in the nearby thread [1] to be failing at some intervals. 
Can\nyou also please once verify the same and if we can expect similar\nfailures here then we might want to consider dropping the test in this\npatch for now? We can always come back to it once we find a good\nsolution to make it pass consistently.\n\n\n[1] - https://www.postgresql.org/message-id/20210613073407.GA768908%40rfd.leadboat.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 14 Jun 2021 08:34:23 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> I think the test in this patch is quite similar to what Noah has\n> pointed in the nearby thread [1] to be failing at some intervals. Can\n> you also please once verify the same and if we can expect similar\n> failures here then we might want to consider dropping the test in this\n> patch for now? We can always come back to it once we find a good\n> solution to make it pass consistently.\n\ntest insert-conflict-do-nothing ... ok 646 ms\ntest insert-conflict-do-nothing-2 ... ok 1994 ms\ntest insert-conflict-do-update ... ok 1786 ms\ntest insert-conflict-do-update-2 ... ok 2689 ms\ntest insert-conflict-do-update-3 ... ok 851 ms\ntest insert-conflict-specconflict ... FAILED 3695 ms\ntest delete-abort-savept ... 
ok 1238 ms\n\nYeah, this is the same test that we have used base for our test so\nlet's not push this test until it becomes stable.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Jun 2021 09:44:01 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 14, 2021 at 9:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > I think the test in this patch is quite similar to what Noah has\n> > pointed in the nearby thread [1] to be failing at some intervals. Can\n> > you also please once verify the same and if we can expect similar\n> > failures here then we might want to consider dropping the test in this\n> > patch for now? We can always come back to it once we find a good\n> > solution to make it pass consistently.\n>\n> test insert-conflict-do-nothing ... ok 646 ms\n> test insert-conflict-do-nothing-2 ... ok 1994 ms\n> test insert-conflict-do-update ... ok 1786 ms\n> test insert-conflict-do-update-2 ... ok 2689 ms\n> test insert-conflict-do-update-3 ... ok 851 ms\n> test insert-conflict-specconflict ... FAILED 3695 ms\n> test delete-abort-savept ... 
ok 1238 ms\n>\n> Yeah, this is the same test that we have used base for our test so\n> let's not push this test until it becomes stable.\n\nPatches without test case.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 14 Jun 2021 12:05:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Mon, Jun 14, 2021 at 12:06 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Mon, Jun 14, 2021 at 9:44 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n> > On Mon, Jun 14, 2021 at 8:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > I think the test in this patch is quite similar to what Noah has\n> > > pointed in the nearby thread [1] to be failing at some intervals. Can\n> > > you also please once verify the same and if we can expect similar\n> > > failures here then we might want to consider dropping the test in this\n> > > patch for now? We can always come back to it once we find a good\n> > > solution to make it pass consistently.\n> >\n> > test insert-conflict-do-nothing ... ok 646 ms\n> > test insert-conflict-do-nothing-2 ... ok 1994 ms\n> > test insert-conflict-do-update ... ok 1786 ms\n> > test insert-conflict-do-update-2 ... ok 2689 ms\n> > test insert-conflict-do-update-3 ... ok 851 ms\n> > test insert-conflict-specconflict ... FAILED 3695 ms\n> > test delete-abort-savept ... 
ok 1238 ms\n> >\n> > Yeah, this is the same test that we have used base for our test so\n> > let's not push this test until it becomes stable.\n>\n> Patches without test case.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 15 Jun 2021 16:48:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> Pushed!\n\nskink reports that this has valgrind issues:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26\n\n2021-06-16 01:20:13.344 UTC [2198271][4/0:0] LOG: received replication command: IDENTIFY_SYSTEM\n2021-06-16 01:20:13.384 UTC [2198271][4/0:0] LOG: received replication command: START_REPLICATION SLOT \"sub2\" LOGICAL 0/0 (proto_version '1', publication_names '\"pub2\"')\n2021-06-16 01:20:13.454 UTC [2198271][4/0:0] LOG: starting logical decoding for slot \"sub2\"\n2021-06-16 01:20:13.454 UTC [2198271][4/0:0] DETAIL: Streaming transactions committing after 0/157C828, reading WAL from 0/157C7F0.\n2021-06-16 01:20:13.488 UTC [2198271][4/0:0] LOG: logical decoding found consistent point at 0/157C7F0\n2021-06-16 01:20:13.488 UTC [2198271][4/0:0] DETAIL: There are no running transactions.\n...\n==2198271== VALGRINDERROR-BEGIN\n==2198271== Conditional jump or move depends on uninitialised value(s)\n==2198271== at 0x80EF890: rel_sync_cache_relation_cb (pgoutput.c:833)\n==2198271== by 0x666AEB: LocalExecuteInvalidationMessage (inval.c:595)\n==2198271== by 0x53A423: ReceiveSharedInvalidMessages (sinval.c:90)\n==2198271== by 0x666026: AcceptInvalidationMessages (inval.c:683)\n==2198271== by 0x53FBDD: LockRelationOid (lmgr.c:136)\n==2198271== by 0x1D3943: relation_open (relation.c:56)\n==2198271== by 0x26F21F: table_open (table.c:43)\n==2198271== by 0x66D97F: ScanPgRelation (relcache.c:346)\n==2198271== by 0x674644: RelationBuildDesc 
(relcache.c:1059)\n==2198271== by 0x674BE8: RelationClearRelation (relcache.c:2568)\n==2198271== by 0x675064: RelationFlushRelation (relcache.c:2736)\n==2198271== by 0x6750A6: RelationCacheInvalidateEntry (relcache.c:2797)\n==2198271== Uninitialised value was created by a heap allocation\n==2198271== at 0x6AC308: MemoryContextAlloc (mcxt.c:826)\n==2198271== by 0x68A8D9: DynaHashAlloc (dynahash.c:283)\n==2198271== by 0x68A94B: element_alloc (dynahash.c:1675)\n==2198271== by 0x68AA58: get_hash_entry (dynahash.c:1284)\n==2198271== by 0x68B23E: hash_search_with_hash_value (dynahash.c:1057)\n==2198271== by 0x68B3D4: hash_search (dynahash.c:913)\n==2198271== by 0x80EE855: get_rel_sync_entry (pgoutput.c:681)\n==2198271== by 0x80EEDA5: pgoutput_truncate (pgoutput.c:530)\n==2198271== by 0x4E48A2: truncate_cb_wrapper (logical.c:797)\n==2198271== by 0x4EFDDB: ReorderBufferCommit (reorderbuffer.c:1777)\n==2198271== by 0x4E1DBE: DecodeCommit (decode.c:637)\n==2198271== by 0x4E1F31: DecodeXactOp (decode.c:245)\n==2198271== \n==2198271== VALGRINDERROR-END\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:48:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 16, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > Pushed!\n>\n> skink reports that this has valgrind issues:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26\n>\n\nThe problem happens at line:\nrel_sync_cache_relation_cb()\n{\n..\nif (entry->map)\n..\n\nI think the reason is that before we initialize 'entry->map' in\nget_rel_sync_entry(), the invalidation is processed as part of which\nwhen we try to clean up the entry, it tries to access uninitialized\nvalue. Note, this won't happen in HEAD as we initialize 'entry->map'\nbefore we get to process any invalidation. 
We have fixed a similar\nissue in HEAD sometime back as part of the commit 69bd60672a, so we\nneed to make a similar change in PG-13 as well.\n\nThis problem is introduced by commit d250568121 (Fix memory leak due\nto RelationSyncEntry.map.) not by the patch in this thread, so keeping\nAmit L and Osumi-San in the loop.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 09:26:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Wed, Jun 16, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > Pushed!\n> >\n> > skink reports that this has valgrind issues:\n> >\n> > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26\n> >\n>\n> The problem happens at line:\n> rel_sync_cache_relation_cb()\n> {\n> ..\n> if (entry->map)\n> ..\n>\n> I think the reason is that before we initialize 'entry->map' in\n> get_rel_sync_entry(), the invalidation is processed as part of which\n> when we try to clean up the entry, it tries to access uninitialized\n> value. Note, this won't happen in HEAD as we initialize 'entry->map'\n> before we get to process any invalidation. We have fixed a similar\n> issue in HEAD sometime back as part of the commit 69bd60672a, so we\n> need to make a similar change in PG-13 as well.\n>\n> This problem is introduced by commit d250568121 (Fix memory leak due\n> to RelationSyncEntry.map.) 
not by the patch in this thread, so keeping\n> Amit L and Osumi-San in the loop.\n\nThanks.\n\nMaybe not sufficient as a fix, but I wonder if\nrel_sync_cache_relation_cb() should really also check that\nreplicate_valid is true in the following condition:\n\n /*\n * Reset schema sent status as the relation definition may have changed.\n * Also free any objects that depended on the earlier definition.\n */\n if (entry != NULL)\n {\n\nIf the problem is with HEAD, I don't quite understand why the last\nstatement of the following code block doesn't suffice:\n\n /* Not found means schema wasn't sent */\n if (!found)\n {\n /* immediately make a new entry valid enough to satisfy callbacks */\n entry->schema_sent = false;\n entry->streamed_txns = NIL;\n entry->replicate_valid = false;\n entry->pubactions.pubinsert = entry->pubactions.pubupdate =\n entry->pubactions.pubdelete = entry->pubactions.pubtruncate = false;\n entry->publish_as_relid = InvalidOid;\n entry->map = NULL; /* will be set by maybe_send_schema() if needed */\n }\n\nDo we need the same statement at the end of the following block?\n\n /* Validate the entry */\n if (!entry->replicate_valid)\n {\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:09:27 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 10:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Jun 16, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > Pushed!\n> > >\n> > > skink reports that this has valgrind issues:\n> > >\n> > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26\n> > >\n> >\n> > The problem happens at line:\n> > 
rel_sync_cache_relation_cb()\n> > {\n> > ..\n> > if (entry->map)\n> > ..\n> >\n> > I think the reason is that before we initialize 'entry->map' in\n> > get_rel_sync_entry(), the invalidation is processed as part of which\n> > when we try to clean up the entry, it tries to access uninitialized\n> > value. Note, this won't happen in HEAD as we initialize 'entry->map'\n> > before we get to process any invalidation. We have fixed a similar\n> > issue in HEAD sometime back as part of the commit 69bd60672a, so we\n> > need to make a similar change in PG-13 as well.\n> >\n> > This problem is introduced by commit d250568121 (Fix memory leak due\n> > to RelationSyncEntry.map.) not by the patch in this thread, so keeping\n> > Amit L and Osumi-San in the loop.\n>\n> Thanks.\n>\n> Maybe not sufficient as a fix, but I wonder if\n> rel_sync_cache_relation_cb() should really also check that\n> replicate_valid is true in the following condition:\n>\n\nI don't think that is required because we initialize the entry in \"if\n(!found)\" case in the HEAD.\n\n> /*\n> * Reset schema sent status as the relation definition may have changed.\n> * Also free any objects that depended on the earlier definition.\n> */\n> if (entry != NULL)\n> {\n>\n> If the problem is with HEAD,\n>\n\nThe problem occurs only in PG-13. 
So, we need to make PG-13 code\nsimilar to HEAD as far as initialization of entry is concerned.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 12:11:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 3:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Thu, Jun 17, 2021 at 10:39 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Thu, Jun 17, 2021 at 12:56 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Wed, Jun 16, 2021 at 8:18 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > > Pushed!\n> > > >\n> > > > skink reports that this has valgrind issues:\n> > > >\n> > > > https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2021-06-15%2020%3A49%3A26\n> > > >\n> > >\n> > > The problem happens at line:\n> > > rel_sync_cache_relation_cb()\n> > > {\n> > > ..\n> > > if (entry->map)\n> > > ..\n> > >\n> > > I think the reason is that before we initialize 'entry->map' in\n> > > get_rel_sync_entry(), the invalidation is processed as part of which\n> > > when we try to clean up the entry, it tries to access uninitialized\n> > > value. Note, this won't happen in HEAD as we initialize 'entry->map'\n> > > before we get to process any invalidation. We have fixed a similar\n> > > issue in HEAD sometime back as part of the commit 69bd60672a, so we\n> > > need to make a similar change in PG-13 as well.\n> > >\n> > > This problem is introduced by commit d250568121 (Fix memory leak due\n> > > to RelationSyncEntry.map.) 
not by the patch in this thread, so keeping\n> > > Amit L and Osumi-San in the loop.\n> >\n> > Thanks.\n> >\n> > Maybe not sufficient as a fix, but I wonder if\n> > rel_sync_cache_relation_cb() should really also check that\n> > replicate_valid is true in the following condition:\n>\n> I don't think that is required because we initialize the entry in \"if\n> (!found)\" case in the HEAD.\n\nYeah, I see that. If we can be sure that the callback can't get\ncalled between hash_search() allocating the entry and the above code\nblock making the entry look valid, which appears to be the case, then\nI guess we don't need to worry.\n\n> > /*\n> > * Reset schema sent status as the relation definition may have changed.\n> > * Also free any objects that depended on the earlier definition.\n> > */\n> > if (entry != NULL)\n> > {\n> >\n> > If the problem is with HEAD,\n> >\n>\n> The problem occurs only in PG-13. So, we need to make PG-13 code\n> similar to HEAD as far as initialization of entry is concerned.\n\nOh I missed that the problem report is for the PG13 branch.\n\nHow about the attached patch then?\n\n\n--\nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Jun 2021 16:22:06 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 12:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n\n> Oh I missed that the problem report is for the PG13 branch.\n>\n> How about the attached patch then?\n>\nLooks good, one minor comment, how about making the below comment,\nsame as on the head?\n\n- if (!found || !entry->replicate_valid)\n+ if (!found)\n+ {\n+ /*\n+ * Make the new entry valid enough for the callbacks to look at, in\n+ * case any of them get invoked during the more complicated\n+ * initialization steps below.\n+ */\n\nOn head:\nif (!found)\n{\n/* immediately make a new entry valid enough to satisfy callbacks 
*/\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Jun 2021 13:14:46 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "Hi Dilip,\n\nOn Thu, Jun 17, 2021 at 4:45 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> On Thu, Jun 17, 2021 at 12:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> > Oh I missed that the problem report is for the PG13 branch.\n> >\n> > How about the attached patch then?\n> >\n> Looks good,\n\nThanks for checking.\n\n> one minor comment, how about making the below comment,\n> same as on the head?\n>\n> - if (!found || !entry->replicate_valid)\n> + if (!found)\n> + {\n> + /*\n> + * Make the new entry valid enough for the callbacks to look at, in\n> + * case any of them get invoked during the more complicated\n> + * initialization steps below.\n> + */\n>\n> On head:\n> if (!found)\n> {\n> /* immediately make a new entry valid enough to satisfy callbacks */\n\nAgree it's better to have the same comment in both branches.\n\nThough, I think it should be \"the new entry\", not \"a new entry\". I\nfind the sentence I wrote a bit more enlightening, but I am fine with\njust fixing the aforementioned problem with the existing comment.\n\nI've updated the patch. Also, attaching a patch for HEAD for the\ns/a/the change. 
While at it, I also capitalized \"immediately\".\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 17 Jun 2021 17:05:30 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 1:35 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi Dilip,\n>\n> On Thu, Jun 17, 2021 at 4:45 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > On Thu, Jun 17, 2021 at 12:52 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > > Oh I missed that the problem report is for the PG13 branch.\n> > >\n> > > How about the attached patch then?\n> > >\n> > Looks good,\n>\n> Thanks for checking.\n>\n> > one minor comment, how about making the below comment,\n> > same as on the head?\n> >\n> > - if (!found || !entry->replicate_valid)\n> > + if (!found)\n> > + {\n> > + /*\n> > + * Make the new entry valid enough for the callbacks to look at, in\n> > + * case any of them get invoked during the more complicated\n> > + * initialization steps below.\n> > + */\n> >\n> > On head:\n> > if (!found)\n> > {\n> > /* immediately make a new entry valid enough to satisfy callbacks */\n>\n> Agree it's better to have the same comment in both branches.\n>\n> Though, I think it should be \"the new entry\", not \"a new entry\". I\n> find the sentence I wrote a bit more enlightening, but I am fine with\n> just fixing the aforementioned problem with the existing comment.\n>\n> I've updated the patch. Also, attaching a patch for HEAD for the\n> s/a/the change. While at it, I also capitalized \"immediately\".\n>\n\nYour patch looks good to me as well. I would like to retain the\ncomment as it is from master for now. 
I'll do some testing and push it\ntomorrow unless there are additional comments.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 17 Jun 2021 14:55:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 17, 2021 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Your patch looks good to me as well. I would like to retain the\n> comment as it is from master for now. I'll do some testing and push it\n> tomorrow unless there are additional comments.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 18 Jun 2021 09:20:50 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "Hi,\n\nOn 6/18/21 5:50 AM, Amit Kapila wrote:\n> On Thu, Jun 17, 2021 at 2:55 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> Your patch looks good to me as well. I would like to retain the\n>> comment as it is from master for now. I'll do some testing and push it\n>> tomorrow unless there are additional comments.\n>>\n> \n> Pushed!\n> \n\nWhile rebasing a patch broken by 4daa140a2f5, I noticed that the patch\ndoes this:\n\n@@ -63,6 +63,7 @@ enum ReorderBufferChangeType\n REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,\n REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,\n REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,\n+ REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,\n REORDER_BUFFER_CHANGE_TRUNCATE\n };\n\n\nI understand adding the ABORT right after CONFIRM\n\nIsn't that an undesirable ABI break for extensions? It changes the value\nfor the TRUNCATE item, so if an extension references to that somehow\nit'd suddenly start failing (until it gets rebuilt). 
And the failures\nwould be pretty confusing and seemingly contradicting the code.\n\nFWIW I don't know how likely it is for an extension to depend on the\nTRUNCATE value (it'd be far worse for INSERT/UPDATE/DELETE), but seems\nmoving the new element at the end would solve this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 23 Jun 2021 16:43:44 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> While rebasing a patch broken by 4daa140a2f5, I noticed that the patch\n> does this:\n\n> @@ -63,6 +63,7 @@ enum ReorderBufferChangeType\n> REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,\n> REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,\n> REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,\n> + REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,\n> REORDER_BUFFER_CHANGE_TRUNCATE\n> };\n\n> Isn't that an undesirable ABI break for extensions?\n\nI think it's OK in HEAD. 
I agree we shouldn't do it like that\nin the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Jun 2021 10:51:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Wed, Jun 23, 2021 at 8:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > While rebasing a patch broken by 4daa140a2f5, I noticed that the patch\n> > does this:\n>\n> > @@ -63,6 +63,7 @@ enum ReorderBufferChangeType\n> > REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,\n> > REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,\n> > REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,\n> > + REORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT,\n> > REORDER_BUFFER_CHANGE_TRUNCATE\n> > };\n>\n> > Isn't that an undesirable ABI break for extensions?\n>\n> I think it's OK in HEAD. I agree we shouldn't do it like that\n> in the back branches.\n>\n\nOkay, I'll change this in back branches and HEAD to keep the code\nconsistent, or do you think it is better to retain the order in HEAD\nas it is and just change it for back-branches?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 24 Jun 2021 08:45:48 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n>> I think it's OK in HEAD. 
I agree we shouldn't do it like that\n>> in the back branches.\n\n> Okay, I'll change this in back branches and HEAD to keep the code\n> consistent, or do you think it is better to retain the order in HEAD\n> as it is and just change it for back-branches?\n\nAs I said, I'd keep the natural ordering in HEAD.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Jun 2021 00:25:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 24, 2021 at 12:25:15AM -0400, Tom Lane wrote:\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n>> Okay, I'll change this in back branches and HEAD to keep the code\n>> consistent, or do you think it is better to retain the order in HEAD\n>> as it is and just change it for back-branches?\n> \n> As I said, I'd keep the natural ordering in HEAD.\n\nYes, please keep the items in an alphabetical order on HEAD, and just\nhave the new item at the bottom of the enum in the back-branches.\nThat's the usual practice.\n--\nMichael", "msg_date": "Thu, 24 Jun 2021 14:33:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" }, { "msg_contents": "On Thu, Jun 24, 2021 at 11:04 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jun 24, 2021 at 12:25:15AM -0400, Tom Lane wrote:\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> >> Okay, I'll change this in back branches and HEAD to keep the code\n> >> consistent, or do you think it is better to retain the order in HEAD\n> >> as it is and just change it for back-branches?\n> >\n> > As I said, I'd keep the natural ordering in HEAD.\n>\n> Yes, please keep the items in an alphabetical order on HEAD, and just\n> have the new item at the bottom of the enum in the back-branches.\n> That's the usual practice.\n>\n\nOkay, I have back-patched the change till v11 because
before that\nREORDER_BUFFER_CHANGE_INTERNAL_SPEC_ABORT is already at the end.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 25 Jun 2021 08:50:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Decoding speculative insert with toast leaks memory" } ]
[ { "msg_contents": "Hi,\n\nhere is another tidbit from our experience with using logical decoding. \nThe attached patch adds a callback to notify the output plugin of a \nconcurrent abort. I'll continue to describe the problem in more detail \nand how this additional callback solves it.\n\nStreamed transactions as well as two-phase commit transactions may get \ndecoded before they finish. At the point the begin_cb is invoked and \nfirst changes are delivered to the output plugin, it is not necessarily \nknown whether the transaction will commit or abort.\n\nThis leads to the possibility of the transaction getting aborted \nconcurrent to logical decoding. In that case, it is likely for the \ndecoder to error on a catalog scan that conflicts with the abort of the \ntransaction. The reorderbuffer sports a PG_CATCH block to cleanup. \nHowever, it does not currently inform the output plugin. From its point \nof view, the transaction is left dangling until another one comes along \nor until the final ROLLBACK or ROLLBACK PREPARED record from WAL gets \ndecoded. Therefore, what the output plugin might see in this case is:\n\n* filter_prepare_cb (txn A)\n* begin_prepare_cb (txn A)\n* apply_change (txn A)\n* apply_change (txn A)\n* apply_change (txn A)\n* begin_cb (txn B)\n\nIn other words, in this example, only the begin_cb of the following \ntransaction implicitly tells the output plugin that txn A could not be \nfully decoded. And there's no upper time boundary on when that may \nhappen. (It could also be another filter_prepare_cb, if the subsequent \ntransaction happens to be a two-phase transaction as well. Or an \nexplicit rollback_prepared_cb or stream_abort if there's no other \ntransaction in between.)\n\nAn alternative and arguably cleaner approach for streamed transactions \nmay be to directly invoke stream_abort. However, the lsn argument \npassed could not be that of the abort record, as that's not known at the \npoint in time of the concurrent abort. 
Plus, this seems like a bad fit \nfor two-phase commit transactions.\n\nAgain, this callback is especially important for output plugins that \ninvoke further actions on downstream nodes that delay the COMMIT \nPREPARED of a transaction upstream, e.g. until prepared on other nodes. \nUp until now, the output plugin has no way to learn about a concurrent \nabort of the currently decoded (2PC or streamed) transaction (perhaps \nshort of continued polling on the transaction status).\n\nI also think it generally improves the API by allowing the output plugin \nto rely on such a callback, rather than having to implicitly deduce this \nfrom other callbacks.\n\nThoughts or comments? If this is agreed on, I can look into adding \ntests (concurrent aborts are not currently covered, it seems).\n\nRegards\n\nMarkus", "msg_date": "Thu, 25 Mar 2021 10:07:28 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "[PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "Hi,\n\nOn 2021-03-25 10:07:28 +0100, Markus Wanner wrote:\n> This leads to the possibility of the transaction getting aborted concurrent\n> to logical decoding. In that case, it is likely for the decoder to error on\n> a catalog scan that conflicts with the abort of the transaction. The\n> reorderbuffer sports a PG_CATCH block to cleanup.\n\nFWIW, I am seriously suspicuous of the code added as part of\n7259736a6e5b7c75 and plan to look at it after the code freeze. I can't\nreally see this code surviving as is. The tableam.h changes, the bsyscan\nstuff, ... Leaving correctness aside, the code bloat and performance\naffects alone seems problematic.\n\n\n> Again, this callback is especially important for output plugins that invoke\n> further actions on downstream nodes that delay the COMMIT PREPARED of a\n> transaction upstream, e.g. until prepared on other nodes. 
Up until now, the\n> output plugin has no way to learn about a concurrent abort of the currently\n> decoded (2PC or streamed) transaction (perhaps short of continued polling on\n> the transaction status).\n\nYou may have only meant it as a shorthand: But imo output plugins have\nabsolutely no business \"invoking actions downstream\".\n\n\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index c291b05a423..a6d044b870b 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -2488,6 +2488,12 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> \t\t\terrdata = NULL;\n> \t\t\tcurtxn->concurrent_abort = true;\n> \n> +\t\t\t/*\n> +\t\t\t * Call the cleanup hook to inform the output plugin that the\n> +\t\t\t * transaction just started had to be aborted.\n> +\t\t\t */\n> +\t\t\trb->concurrent_abort(rb, txn, streaming, commit_lsn);\n> +\n> \t\t\t/* Reset the TXN so that it is allowed to stream remaining data. */\n> \t\t\tReorderBufferResetTXN(rb, txn, snapshot_now,\n> \t\t\t\t\t\t\t\t command_id, prev_lsn,\n\nI don't think this would be ok, errors thrown in the callback wouldn't\nbe handled as they would be in other callbacks.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 25 Mar 2021 13:21:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 25.03.21 21:21, Andres Freund wrote:\n> ... the code added as part of 7259736a6e5b7c75 ...\n\nThat's the streaming part, which can be thought of as a more general \nvariant of the two-phase decoding in that it allows multiple \"flush \npoints\" (invoking ReorderBufferProcessTXN). Unlike the PREPARE of a \ntwo-phase commit, where the reorderbuffer can be sure there's no further \nchange to be processed after the PREPARE. 
Nor is there any invocation \nof ReorderBufferProcessTXN before that first one at PREPARE time. With \nthat in mind, I'm surprised support for streaming got committed before \n2PC. It clearly has different use cases, though.\n\nHowever, I'm sure your inputs on how to improve and cleanup the \nimplementation will be appreciated. The single tiny problem this patch \naddresses is the same for 2PC and streaming decoding: the output plugin \ncurrently has no way to learn about a concurrent abort of a transaction \nstill being decoded, at the time this happens.\n\nBoth, 2PC and streaming do require the reorderbuffer to forward changes \n(possibly) prior to the transaction's commit. That's the whole point of \nthese two features. Therefore, I don't think we can get around \nconcurrent aborts.\n\n> You may have only meant it as a shorthand: But imo output plugins have\n> absolutely no business \"invoking actions downstream\".\n\n From my point of view, that's the raison d'être for an output plugin. \nEven if it does so merely by forwarding messages.
But yeah, of course a \nwhole bunch of other components and changes are needed to implement the \nkind of global two-phase commit system I tried to describe.\n\nI'm open to suggestions on how to reference that use case.\n\n>> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n>> index c291b05a423..a6d044b870b 100644\n>> --- a/src/backend/replication/logical/reorderbuffer.c\n>> +++ b/src/backend/replication/logical/reorderbuffer.c\n>> @@ -2488,6 +2488,12 @@ ReorderBufferProcessTXN(ReorderBuffer *rb, ReorderBufferTXN *txn,\n>> \t\t\terrdata = NULL;\n>> \t\t\tcurtxn->concurrent_abort = true;\n>> \n>> +\t\t\t/*\n>> +\t\t\t * Call the cleanup hook to inform the output plugin that the\n>> +\t\t\t * transaction just started had to be aborted.\n>> +\t\t\t */\n>> +\t\t\trb->concurrent_abort(rb, txn, streaming, commit_lsn);\n>> +\n>> \t\t\t/* Reset the TXN so that it is allowed to stream remaining data. */\n>> \t\t\tReorderBufferResetTXN(rb, txn, snapshot_now,\n>> \t\t\t\t\t\t\t\t command_id, prev_lsn,\n> \n> I don't think this would be ok, errors thrown in the callback wouldn't\n> be handled as they would be in other callbacks.\n\nThat's a good point. Maybe the CATCH block should only set a flag, \nallowing for the callback to be invoked outside of it.\n\nRegards\n\nMarkus my-callbacks-do-not-throw-error Wanner\n\n\n", "msg_date": "Thu, 25 Mar 2021 23:15:50 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Thu, Mar 25, 2021 at 2:37 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> here is another tidbit from our experience with using logical decoding.\n> The attached patch adds a callback to notify the output plugin of a\n> concurrent abort. 
I'll continue to describe the problem in more detail\n> and how this additional callback solves it.\n>\n> Streamed transactions as well as two-phase commit transactions may get\n> decoded before they finish. At the point the begin_cb is invoked and\n> first changes are delivered to the output plugin, it is not necessarily\n> known whether the transaction will commit or abort.\n>\n> This leads to the possibility of the transaction getting aborted\n> concurrent to logical decoding. In that case, it is likely for the\n> decoder to error on a catalog scan that conflicts with the abort of the\n> transaction. The reorderbuffer sports a PG_CATCH block to cleanup.\n> However, it does not currently inform the output plugin. From its point\n> of view, the transaction is left dangling until another one comes along\n> or until the final ROLLBACK or ROLLBACK PREPARED record from WAL gets\n> decoded. Therefore, what the output plugin might see in this case is:\n>\n> * filter_prepare_cb (txn A)\n> * begin_prepare_cb (txn A)\n> * apply_change (txn A)\n> * apply_change (txn A)\n> * apply_change (txn A)\n> * begin_cb (txn B)\n>\n> In other words, in this example, only the begin_cb of the following\n> transaction implicitly tells the output plugin that txn A could not be\n> fully decoded. And there's no upper time boundary on when that may\n> happen. (It could also be another filter_prepare_cb, if the subsequent\n> transaction happens to be a two-phase transaction as well. Or an\n> explicit rollback_prepared_cb or stream_abort if there's no other\n> transaction in between.)\n>\n> An alternative and arguably cleaner approach for streamed transactions\n> may be to directly invoke stream_abort. However, the lsn argument\n> passed could not be that of the abort record, as that's not known at the\n> point in time of the concurrent abort. 
Plus, this seems like a bad fit\n> for two-phase commit transactions.\n>\n> Again, this callback is especially important for output plugins that\n> invoke further actions on downstream nodes that delay the COMMIT\n> PREPARED of a transaction upstream, e.g. until prepared on other nodes.\n> Up until now, the output plugin has no way to learn about a concurrent\n> abort of the currently decoded (2PC or streamed) transaction (perhaps\n> short of continued polling on the transaction status).\n>\n\nI think as you have noted that stream_abort or rollback_prepared will\nbe sent (the remaining changes in-between will be skipped) as we\ndecode them from WAL so it is not clear to me how it causes any delays\nas opposed to where we don't detect concurrent abort say because after\nthat we didn't access catalog table.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Mar 2021 08:58:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 26.03.21 04:28, Amit Kapila wrote:\n> I think as you have noted that stream_abort or rollback_prepared will\n> be sent (the remaining changes in-between will be skipped) as we\n> decode them from WAL\n\nYes, but as outlined, too late. Multiple other transactions may get \ndecoded until the decoder reaches the ROLLBACK PREPARED. Thus, \neffectively, the output plugin currently needs to deduce that a \ntransaction got aborted concurrently from one out of half a dozen other \ncallbacks that may trigger right after that transaction, because it will \nonly get closed properly much later.\n\n> so it is not clear to me how it causes any delays\n> as opposed to where we don't detect concurrent abort say because after\n> that we didn't access catalog table.\n\nYou're assuming very little traffic, where the ROLLBACK ABORT follows \nthe PREPARE immediately in WAL. 
On a busy system, chances for that to \nhappen are rather low.\n\n(I think the same is true for streaming and stream_abort being sent only \nat the time the decoder reaches the ROLLBACK record in WAL. However, I \ndid not try. Unlike 2PC, where this actually bit me.)\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:12:54 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Fri, Mar 26, 2021 at 2:42 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 26.03.21 04:28, Amit Kapila wrote:\n> > I think as you have noted that stream_abort or rollback_prepared will\n> > be sent (the remaining changes in-between will be skipped) as we\n> > decode them from WAL\n>\n> Yes, but as outlined, too late. Multiple other transactions may get\n> decoded until the decoder reaches the ROLLBACK PREPARED. Thus,\n> effectively, the output plugin currently needs to deduce that a\n> transaction got aborted concurrently from one out of half a dozen other\n> callbacks that may trigger right after that transaction, because it will\n> only get closed properly much later.\n>\n> > so it is not clear to me how it causes any delays\n> > as opposed to where we don't detect concurrent abort say because after\n> > that we didn't access catalog table.\n>\n> You're assuming very little traffic, where the ROLLBACK ABORT follows\n> the PREPARE immediately in WAL. On a busy system, chances for that to\n> happen are rather low.\n>\n\nNo, I am not assuming that. I am just trying to describe you that it\nis not necessary that we will be able to detect concurrent abort in\neach and every case. Say if any transaction operates on one relation\nand concurrent abort happens after first access of relation then we\nwon't access catalog and hence won't detect abort. In such cases, you\nwill get the abort only when it happens in WAL. 
So, why try to get\nearlier in some cases when it is not guaranteed in every case. Also,\nwhat will you do when you receive actual Rollback, may be the plugin\ncan throw it by checking in some way that it has already aborted the\ntransaction, if so, that sounds a bit awkward to me.\n\nThe other related thing is it may not be a good idea to finish the\ntransaction before we see its actual WAL record because after the\nclient (say subscriber) finishes xact, it sends the updated LSN\nlocation based on which we update the slot LSNs from where it will\nstart decoding next time after restart, so by bad timing it might try\nto decode the contents of same transaction but may be for\nconcurrent_aborts the plugin might arrange such that client won't send\nupdated LSN.\n\n> (I think the same is true for streaming and stream_abort being sent only\n> at the time the decoder reaches the ROLLBACK record in WAL. However, I\n> did not try.\n>\n\nYes, both streaming and 2PC behaves in a similar way in this regard.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 26 Mar 2021 15:49:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 26.03.21 11:19, Amit Kapila wrote:\n> No, I am not assuming that. I am just trying to describe you that it\n> is not necessary that we will be able to detect concurrent abort in\n> each and every case.\n\nSure.
Nor am I claiming that would be necessary or that the patch \nchanged anything about it.\n\nAs it stands, assuming the the output plugin basically just forwards the \nevents and the subscriber tries to replicate them as is, the following \nwould happen on the subscriber for a concurrently aborted two-phase \ntransaction:\n\n * start a transaction (begin_prepare_cb)\n * apply changes for it (change_cb)\n * digress to other, unrelated transactions (leaving unspecified what\n exactly happens to the opened transaction)\n * attempt to rollback a transaction that has not ever been prepared\n (rollback_prepared_cb)\n\nThe point of the patch is for the output plugin to get proper \ntransaction entry and exit callbacks. Even in the unfortunate case of a \nconcurrent abort. It offers the output plugin a clean way to learn that \nthe decoder stopped decoding for the current transaction and it won't \npossibly see a prepare_cb for it (despite the decoder having passed the \nPREPARE record in WAL).\n\n> The other related thing is it may not be a good idea to finish the\n> transaction\n\nYou're speaking subscriber side here. And yes, I agree, the subscriber \nshould not abort the transaction at a concurrent_abort. I never claimed \nit should.\n\nIf you are curious, in our case I made the subscriber PREPARE the \ntransaction at its end when receiving a concurrent_abort notification, \nso that the subscriber:\n\n * can hop out of that started transaction and safely proceed\n to process events for other transactions, and\n * has the transaction in the appropriate state for processing the\n subsequent rollback_prepared_cb, once that gets through\n\nThat's probably not ideal in the sense that subscribers do unnecessary \nwork. 
However, it pretty closely replicates the transaction's state as \nit was on the origin at any given point in time (by LSN).\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Fri, 26 Mar 2021 13:20:24 +0100", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Fri, Mar 26, 2021 at 5:50 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 26.03.21 11:19, Amit Kapila wrote:\n> > No, I am not assuming that. I am just trying to describe you that it\n> > is not necessary that we will be able to detect concurrent abort in\n> > each and every case.\n>\n> Sure. Nor am I claiming that would be necessary or that the patch\n> changed anything about it.\n>\n> As it stands, assuming the the output plugin basically just forwards the\n> events and the subscriber tries to replicate them as is, the following\n> would happen on the subscriber for a concurrently aborted two-phase\n> transaction:\n>\n> * start a transaction (begin_prepare_cb)\n> * apply changes for it (change_cb)\n> * digress to other, unrelated transactions (leaving unspecified what\n> exactly happens to the opened transaction)\n> * attempt to rollback a transaction that has not ever been prepared\n> (rollback_prepared_cb)\n>\n> The point of the patch is for the output plugin to get proper\n> transaction entry and exit callbacks. Even in the unfortunate case of a\n> concurrent abort. It offers the output plugin a clean way to learn that\n> the decoder stopped decoding for the current transaction and it won't\n> possibly see a prepare_cb for it (despite the decoder having passed the\n> PREPARE record in WAL).\n>\n> > The other related thing is it may not be a good idea to finish the\n> > transaction\n>\n> You're speaking subscriber side here. And yes, I agree, the subscriber\n> should not abort the transaction at a concurrent_abort. 
I never claimed\n> it should.\n>\n> If you are curious, in our case I made the subscriber PREPARE the\n> transaction at its end when receiving a concurrent_abort notification,\n> so that the subscriber:\n>\n> * can hop out of that started transaction and safely proceed\n> to process events for other transactions, and\n> * has the transaction in the appropriate state for processing the\n> subsequent rollback_prepared_cb, once that gets through\n>\n> That's probably not ideal in the sense that subscribers do unnecessary\n> work.\n>\n\nIsn't it better to send prepare from the publisher in such a case so\nthat subscribers can know about it when rollback prepared arrives? I\nthink we have already done the same (sent prepare, exactly to handle\nthe case you have described above) for *streamed* transactions.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 27 Mar 2021 12:07:36 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 27.03.21 07:37, Amit Kapila wrote:\n> Isn't it better to send prepare from the publisher in such a case so\n> that subscribers can know about it when rollback prepared arrives?\n\nThat's exactly what this callback allows (among other options). It \nprovides a way for the output plugin to react to a transaction aborting \nwhile it is being decoded. This would not be possible without this \nadditional callback.\n\nAlso note that I would like to retain the option to do some basic \nprotocol validity checks. Certain messages only make sense within a \ntransaction ('U'pdate, 'C'ommit). Others are only valid outside of a \ntransaction ('B'egin, begin_prepare_cb). This is only possible if the \noutput plugin has a callback for every entry into and exit out of a \ntransaction (being decoded). 
This used to be the case prior to 2PC \ndecoding and this patch re-establishes that.\n\n> I think we have already done the same (sent prepare, exactly to handle\n> the case you have described above) for *streamed* transactions.\n\nWhere can I find that?
ISTM streaming transactions have the same issue:\n> the output plugin does not (or only implicitly) learn about a concurrent\n> abort of the transaction.\n>\n\nOne is you can try to test it, otherwise, there are comments atop\nDecodePrepare() (\"Note that we don't skip prepare even if have\ndetected concurrent abort because it is quite possible that ....\")\nwhich explains this.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 29 Mar 2021 15:03:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 29.03.21 11:33, Amit Kapila wrote:\n> You don't need an additional callback for that if we do what I am\n> suggesting above.\n\nAh, are you suggesting a different change, then? To make two-phase \ntransactions always send PREPARE even if concurrently aborted? In that \ncase, sorry, I misunderstood.\n\nI'm perfectly fine with that approach as well (even though it removes \nflexibility compared to the concurrent abort callback, as the comment \nabove DecodePrepare indicates, i.e. 
\"not impossible to optimize the \nconcurrent abort case\").\n\n> One is you can try to test it, otherwise, there are comments atop\n> DecodePrepare() (\"Note that we don't skip prepare even if have\n> detected concurrent abort because it is quite possible that ....\")\n> which explains this.\n\nThanks for this pointer, very helpful.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 29 Mar 2021 11:53:20 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Mon, Mar 29, 2021 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Mon, Mar 29, 2021 at 12:36 PM Markus Wanner\n> <markus.wanner@enterprisedb.com> wrote:\n> >\n> > On 27.03.21 07:37, Amit Kapila wrote:\n> > > Isn't it better to send prepare from the publisher in such a case so\n> > > that subscribers can know about it when rollback prepared arrives?\n>\n> Nice catch, Markus.\nInteresting suggestion Amit. Let me try and code this.\n\nregards,\nAjin Cherian\nFujitsu Australia\n\nOn Mon, Mar 29, 2021 at 8:33 PM Amit Kapila <amit.kapila16@gmail.com> wrote:On Mon, Mar 29, 2021 at 12:36 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 27.03.21 07:37, Amit Kapila wrote:\n> > Isn't it better to send prepare from the publisher in such a case so\n> > that subscribers can know about it when rollback prepared arrives?Nice catch, Markus.Interesting suggestion Amit. Let me try and code this.regards,Ajin CherianFujitsu Australia", "msg_date": "Mon, 29 Mar 2021 22:02:34 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 29.03.21 13:02, Ajin Cherian wrote:\n> Nice catch, Markus.\n> Interesting suggestion Amit. Let me try and code this.\n\nThanks, Ajin. Please consider this concurrent_abort callback as well. 
\nI think it provides more flexibility for the output plugin and I would \ntherefore prefer it over a solution that hides this. It clearly makes \nall potential optimizations impossible, as it means the output plugin \ncannot distinguish between a proper PREPARE and a bail-out PREPARE (that \ndoes not fully replicate the PREPARE as on the origin node, either, \nwhich I think is dangerous).\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Mon, 29 Mar 2021 13:09:45 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Mon, Mar 29, 2021 at 10:09 PM Markus Wanner <\nmarkus.wanner@enterprisedb.com> wrote:\n\n> On 29.03.21 13:02, Ajin Cherian wrote:\n> > Nice catch, Markus.\n> > Interesting suggestion Amit. Let me try and code this.\n>\n> Thanks, Ajin. Please consider this concurrent_abort callback as well.\n> I think it provides more flexibility for the output plugin and I would\n> therefore prefer it over a solution that hides this.
It clearly makes\n> all potential optimizations impossible, as it means the output plugin\n> cannot distinguish between a proper PREAPRE and a bail-out PREPARE (that\n> does not fully replicate the PREPARE as on the origin node, either,\n> which I think is dangerous).\n>\n>\nI understand your concern Markus, but I will leave it to one of the\ncommitters to decide on the new callback.\nFor now, I've created a patch that addresses the problem reported using the\nexisting callbacks.\nDo have a look if this fixes the problem reported.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 15:48:30 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "Hello Ajin,\n\nOn 30.03.21 06:48, Ajin Cherian wrote:\n> For now, I've created a patch that addresses the problem reported using \n> the existing callbacks.\n\nThanks.\n\n> Do have a look if this fixes the problem reported.\n\nYes, this replaces the PREPARE I would do from the concurrent_abort \ncallback in a direct call to rb->prepare. However, it misses the most \nimportant part: documentation. 
Because this clearly is a surprising \nbehavior for a transaction that's not fully decoded and guaranteed to \nget aborted.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Tue, 30 Mar 2021 08:30:34 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Tue, Mar 30, 2021 at 5:30 PM Markus Wanner <\nmarkus.wanner@enterprisedb.com> wrote:\n\n> Hello Ajin,\n>\n> On 30.03.21 06:48, Ajin Cherian wrote:\n> > For now, I've created a patch that addresses the problem reported using\n> > the existing callbacks.\n>\n> Thanks.\n>\n> > Do have a look if this fixes the problem reported.\n>\n> Yes, this replaces the PREPARE I would do from the concurrent_abort\n> callback in a direct call to rb->prepare. However, it misses the most\n> important part: documentation. Because this clearly is a surprising\n> behavior for a transaction that's not fully decoded and guaranteed to\n> get aborted.\n>\n>\nWhere do you suggest this be documented? From an externally visible point\nof view, I don't see much of a surprise.\nA transaction that was prepared and rolled back can be decoded to be\nprepared and rolled back with incomplete changes.\nAre you suggesting more comments in code?\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 18:39:31 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 30.03.21 09:39, Ajin Cherian wrote:\n> Where do you suggest this be documented? From an externally visible \n> point of view, I dont see much of a surprise.\n\nIf you start to think about the option of committing a prepared \ntransaction from a different node, the danger becomes immediately \napparent: A subscriber doesn't even know that the transaction is not \ncomplete. How could it possibly know it's futile to COMMIT PREPARED it? \nI think it's not just surprising, but outright dangerous to pretend \nhaving prepared the transaction, but potentially miss some of the changes.\n\n(Essentially: do not assume the ROLLBACK PREPARED will make it to the \nsubscriber. There's no such guarantee. 
The provider may crash, burn, \nand vanish before that happens.)\n\nSo I suggest to document this as a caveat for the prepare callback, \nbecause with this patch that's the callback which may be invoked for an \nincomplete transaction without the output plugin knowing.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Tue, 30 Mar 2021 10:10:54 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Tue, Mar 30, 2021 at 12:00 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> Hello Ajin,\n>\n> On 30.03.21 06:48, Ajin Cherian wrote:\n> > For now, I've created a patch that addresses the problem reported using\n> > the existing callbacks.\n>\n> Thanks.\n>\n> > Do have a look if this fixes the problem reported.\n>\n> Yes, this replaces the PREPARE I would do from the concurrent_abort\n> callback in a direct call to rb->prepare.\n>\n\nThat clearly sounds like a better choice. Because concurrent_abort()\ninternally trying to prepare transaction seems a bit ugly and not only\nthat if we want to go via that route, it needs to distinguish between\nrollback to savepoint and rollback cases as well.\n\nNow, we can try to find a way where for such cases we don't send\nprepare/rollback prepare, but somehow detect it and send rollback\ninstead. And also some more checks will be required so that if we have\nstreamed the transaction then send stream_abort. I am not saying that\nall this is impossible, but I don't find it worth making all such\nchecks.\n\n> However, it misses the most\n> important part: documentation. Because this clearly is a surprising\n> behavior for a transaction that's not fully decoded and guaranteed to\n> get aborted.\n>\n\nYeah, I guess that makes sense to me. I think we can document it in\nthe docs [1] where we explained two-phase decoding. 
I think we can add\na point about concurrent aborts at the end of points mentioned in the\nfollowing paragraph: \"The users that want to decode prepared\ntransactions need to be careful .....\"\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-two-phase-commits.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Mar 2021 14:32:55 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Tue, Mar 30, 2021 at 7:10 PM Markus Wanner <\nmarkus.wanner@enterprisedb.com> wrote:\n\n> On 30.03.21 09:39, Ajin Cherian wrote:\n> > Where do you suggest this be documented? From an externally visible\n> > point of view, I dont see much of a surprise.\n>\n>\n>\n> So I suggest to document this as a caveat for the prepare callback,\n> because with this patch that's the callback which may be invoked for an\n> incomplete transaction without the output plugin knowing.\n>\n\nI found some documentation that already was talking about concurrent aborts\nand updated that.\nPatch attached.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 20:12:57 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 30.03.21 11:02, Amit Kapila wrote:\n> On Tue, Mar 30, 2021 at 12:00 PM Markus Wanner\n>> Yes, this replaces the PREPARE I would do from the concurrent_abort\n>> callback in a direct call to rb->prepare.\n> \n> Because concurrent_abort()\n> internally trying to prepare transaction seems a bit ugly and not only\n> that if we want to go via that route, it needs to distinguish between\n> rollback to savepoint and rollback cases as well.\n\nJust to clarify: of course, the concurrent_abort callback only sends a \nmessage to the subscriber, which then (in our current implementation) \nupon 
reception of the concurrent_abort message opts to prepare the \ntransaction. Different implementations would be possible.\n\nI would recommend this more explicit API and communication over hiding \nthe concurrent abort in a prepare callback.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Tue, 30 Mar 2021 11:54:31 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 30.03.21 11:12, Ajin Cherian wrote:\n> I found some documentation that already was talking about concurrent \n> aborts and updated that.\n\nThanks.\n\nI just noticed as of PG13, concurrent_abort is part of ReorderBufferTXN, \nso it seems the prepare_cb (or stream_prepare_cb) can actually figure a \nconcurrent abort happened (and the transaction may be incomplete). \nThat's good and indeed makes an additional callback unnecessary.\n\nI recommend giving a hint to that field in the documentation as well.\n\n> diff --git a/doc/src/sgml/logicaldecoding.sgml b/doc/src/sgml/logicaldecoding.sgml\n> index 80eb96d..d2f8d39 100644\n> --- a/doc/src/sgml/logicaldecoding.sgml\n> +++ b/doc/src/sgml/logicaldecoding.sgml\n> @@ -545,12 +545,14 @@ CREATE TABLE another_catalog_table(data text) WITH (user_catalog_table = true);\n> executed within that transaction. A transaction that is prepared for\n> a two-phase commit using <command>PREPARE TRANSACTION</command> will\n> also be decoded if the output plugin callbacks needed for decoding\n> - them are provided. It is possible that the current transaction which\n> + them are provided. It is possible that the current prepared transaction which\n> is being decoded is aborted concurrently via a <command>ROLLBACK PREPARED</command>\n> command. In that case, the logical decoding of this transaction will\n> - be aborted too. 
We will skip all the changes of such a transaction once\n> - the abort is detected and abort the transaction when we read WAL for\n> - <command>ROLLBACK PREPARED</command>.\n> + be aborted too. All the changes of such a transaction is skipped once\n\ntypo: changes [..] *are* skipped, plural.\n\n> + the abort is detected and the <function>prepare_cb</function> callback is invoked.\n> + This could result in a prepared transaction with incomplete changes.\n\n... \"in which case the <literal>concurrent_abort</literal> field of the \npassed <literal>ReorderBufferTXN</literal> struct is set.\", as a proposal?\n\n> + This is done so that eventually when the <command>ROLLBACK PREPARED</command>\n> + is decoded, there is a corresponding prepared transaction with a matching gid.\n> </para>\n> \n> <note>\n\nEverything else sounds good to me.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Tue, 30 Mar 2021 13:29:43 +0200", "msg_from": "Markus Wanner <markus@bluegap.ch>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 30.03.21 11:54, Markus Wanner wrote:\n> I would recommend this more explicit API and communication over hiding \n> the concurrent abort in a prepare callback.\n\nI figured we already have the ReorderBufferTXN's concurrent_abort flag, \nthus I agree the prepare_cb is sufficient and revoke this recommendation \n(and the concurrent_abort callback patch).\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Tue, 30 Mar 2021 13:31:36 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Tue, Mar 30, 2021 at 10:29 PM Markus Wanner <markus@bluegap.ch> wrote:\n\n>\n> I just noticed as of PG13, concurrent_abort is part of ReorderBufferTXN,\n> so it seems the prepare_cb (or stream_prepare_cb) can actually figure a\n> concurrent abort happened (and the transaction may be 
incomplete).\n> That's good and indeed makes an additional callback unnecessary.\n>\n> I recommend giving a hint to that field in the documentation as well.\n>\n> > diff --git a/doc/src/sgml/logicaldecoding.sgml\n> b/doc/src/sgml/logicaldecoding.sgml\n> > index 80eb96d..d2f8d39 100644\n> > --- a/doc/src/sgml/logicaldecoding.sgml\n> > +++ b/doc/src/sgml/logicaldecoding.sgml\n> > @@ -545,12 +545,14 @@ CREATE TABLE another_catalog_table(data text) WITH\n> (user_catalog_table = true);\n> > executed within that transaction. A transaction that is prepared\n> for\n> > a two-phase commit using <command>PREPARE TRANSACTION</command>\n> will\n> > also be decoded if the output plugin callbacks needed for decoding\n> > - them are provided. It is possible that the current transaction\n> which\n> > + them are provided. It is possible that the current prepared\n> transaction which\n> > is being decoded is aborted concurrently via a <command>ROLLBACK\n> PREPARED</command>\n> > command. In that case, the logical decoding of this transaction\n> will\n> > - be aborted too. We will skip all the changes of such a transaction\n> once\n> > - the abort is detected and abort the transaction when we read WAL\n> for\n> > - <command>ROLLBACK PREPARED</command>.\n> > + be aborted too. All the changes of such a transaction is skipped\n> once\n>\n> typo: changes [..] *are* skipped, plural.\n>\n\nUpdated.\n\n\n>\n> > + the abort is detected and the <function>prepare_cb</function>\n> callback is invoked.\n> > + This could result in a prepared transaction with incomplete\n> changes.\n>\n> ... 
\"in which case the <literal>concurrent_abort</literal> field of the\n> passed <literal>ReorderBufferTXN</literal> struct is set.\", as a proposal?\n>\n> > + This is done so that eventually when the <command>ROLLBACK\n> PREPARED</command>\n> > + is decoded, there is a corresponding prepared transaction with a\n> matching gid.\n> > </para>\n> >\n> > <note>\n>\n> Everything else sounds good to me.\n>\n\nUpdated.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 31 Mar 2021 12:12:46 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Wed, Mar 31, 2021 at 6:42 AM Ajin Cherian <itsajin@gmail.com> wrote:\n>\n> Updated.\n>\n\nI have slightly adjusted the comments, docs, and commit message. What\ndo you think about the attached?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 31 Mar 2021 10:09:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 31.03.21 06:39, Amit Kapila wrote:\n> I have slightly adjusted the comments, docs, and commit message. What\n> do you think about the attached?\n\nThank you both, Amit and Ajin. This looks good to me.\n\nOnly one minor gripe:\n\n> + a prepared transaction with incomplete changes, in which case the\n> + <literal>concurrent_abort</literal> field of the passed\n> + <literal>ReorderBufferTXN</literal> struct is set. This is done so that\n> + eventually when the <command>ROLLBACK PREPARED</command> is decoded, there\n> + is a corresponding prepared transaction with a matching gid.\n\nThe last sentences there now seems to relate to just the setting of \n\"concurrent_abort\", rather than the whole reason to invoke the \nprepare_cb. And the reference to the \"gid\" is a bit lost. 
Maybe:\n\n    \"Thus even in case of a concurrent abort, enough information is\n     provided to the output plugin for it to properly deal with the\n     <command>ROLLBACK PREPARED</command> once that is decoded.\"\n\nAlternatively, state that the gid is otherwise missing earlier in the \ndocs (similar to how the commit message describes it).\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Wed, 31 Mar 2021 08:25:36 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Wed, Mar 31, 2021 at 5:25 PM Markus Wanner <\nmarkus.wanner@enterprisedb.com> wrote:\n\n>\n> The last sentences there now seems to relate to just the setting of\n> \"concurrent_abort\", rather than the whole reason to invoke the\n> prepare_cb. And the reference to the \"gid\" is a bit lost. Maybe:\n>\n>     \"Thus even in case of a concurrent abort, enough information is\n>      provided to the output plugin for it to properly deal with the\n>      <command>ROLLBACK PREPARED</command> once that is decoded.\"\n>\n> Alternatively, state that the gid is otherwise missing earlier in the\n> docs (similar to how the commit message describes it).\n>\n>\n> I'm fine with Amit's changes and like Markus's last suggestion as well.\n\nregards,\nAjin Cherian\nFujitsu Australia", "msg_date": "Wed, 31 Mar 2021 20:09:58 +1100", "msg_from": "Ajin Cherian <itsajin@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Wed, Mar 31, 2021 at 11:55 AM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 31.03.21 06:39, Amit Kapila wrote:\n> > I have slightly adjusted the comments, docs, and commit message. What\n> > do you think about the attached?\n>\n> Thank you both, Amit and Ajin. This looks good to me.\n>\n> Only one minor gripe:\n>\n> > + a prepared transaction with incomplete changes, in which case the\n> > + <literal>concurrent_abort</literal> field of the passed\n> > + <literal>ReorderBufferTXN</literal> struct is set. This is done so that\n> > + eventually when the <command>ROLLBACK PREPARED</command> is decoded, there\n> > + is a corresponding prepared transaction with a matching gid.\n>\n> The last sentences there now seems to relate to just the setting of\n> \"concurrent_abort\", rather than the whole reason to invoke the\n> prepare_cb. And the reference to the \"gid\" is a bit lost. 
Maybe:\n>\n> \"Thus even in case of a concurrent abort, enough information is\n> provided to the output plugin for it to properly deal with the\n> <command>ROLLBACK PREPARED</command> once that is decoded.\"\n>\n\nOkay, Changed the patch accordingly.\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Wed, 31 Mar 2021 18:48:49 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On 31.03.21 15:18, Amit Kapila wrote:\n> On Wed, Mar 31, 2021 at 11:55 AM Markus Wanner\n>> The last sentences there now seems to relate to just the setting of\n>> \"concurrent_abort\", rather than the whole reason to invoke the\n>> prepare_cb. And the reference to the \"gid\" is a bit lost. Maybe:\n>>\n>> \"Thus even in case of a concurrent abort, enough information is\n>> provided to the output plugin for it to properly deal with the\n>> <command>ROLLBACK PREPARED</command> once that is decoded.\"\n> \n> Okay, Changed the patch accordingly.\n\nThat's fine with me. I didn't necessarily mean to eliminate the hint to \nthe concurrent_abort field, but it's more concise that way. Thank you.\n\nRegards\n\nMarkus\n\n\n", "msg_date": "Wed, 31 Mar 2021 15:50:53 +0200", "msg_from": "Markus Wanner <markus.wanner@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Wed, Mar 31, 2021 at 7:20 PM Markus Wanner\n<markus.wanner@enterprisedb.com> wrote:\n>\n> On 31.03.21 15:18, Amit Kapila wrote:\n> > On Wed, Mar 31, 2021 at 11:55 AM Markus Wanner\n> >> The last sentences there now seems to relate to just the setting of\n> >> \"concurrent_abort\", rather than the whole reason to invoke the\n> >> prepare_cb. And the reference to the \"gid\" is a bit lost. 
Maybe:\n> >>\n> >> \"Thus even in case of a concurrent abort, enough information is\n> >> provided to the output plugin for it to properly deal with the\n> >> <command>ROLLBACK PREPARED</command> once that is decoded.\"\n> >\n> > Okay, Changed the patch accordingly.\n>\n> That's fine with me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 1 Apr 2021 09:00:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "Hi,\n\nWhile testing another WIP patch [1] a clashing GID problem was found,\nwhich gives us apply worker errors like:\n\n2021-04-26 10:07:12.883 AEST [22055] ERROR: transaction identifier\n\"pg_gid_16403_608\" is already in use\n2021-04-26 10:08:05.149 AEST [22124] ERROR: transaction identifier\n\"pg_gid_16403_757\" is already in use\n\nThese GID clashes were traced back to a problem of the\nconcurrent-abort logic: when \"streaming\" is enabled the\nconcurrent-abort logic was always sending \"prepare\" even though a\n\"stream_prepare\" had already been sent.\n\nPSA a patch to correct this.\n\n------\n[1] https://www.postgresql.org/message-id/CAHut%2BPuB07xOgJLnDhvbtp0t_qMDhjDD%2BkO%2B2yB%2Br6tgfaR-5Q%40mail.gmail.com\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Mon, 26 Apr 2021 11:35:30 +1000", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" }, { "msg_contents": "On Mon, Apr 26, 2021 at 7:05 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> While testing another WIP patch [1] a clashing GID problem was found,\n> which gives us apply worker errors like:\n>\n> 2021-04-26 10:07:12.883 AEST [22055] ERROR: transaction identifier\n> \"pg_gid_16403_608\" is already in use\n> 2021-04-26 10:08:05.149 AEST [22124] ERROR: transaction identifier\n> \"pg_gid_16403_757\" is already in 
use\n>\n> These GID clashes were traced back to a problem of the\n> concurrent-abort logic: when \"streaming\" is enabled the\n> concurrent-abort logic was always sending \"prepare\" even though a\n> \"stream_prepare\" had already been sent.\n>\n> PSA a patch to correct this.\n>\n\nYour patch looks good to me, so pushed! Thanks for the report and patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 26 Apr 2021 13:42:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] add concurrent_abort callback for output plugin" } ]
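The two-phase decoding scenario discussed in the thread above can be sketched as a psql session. This sketch is illustrative only and is not part of the thread: the table `t`, the gid `gx1`, and the slot `slot1` are hypothetical names, and it assumes a server new enough to support two-phase decoding through the `test_decoding` plugin's `two-phase-commit` option (PostgreSQL 14 or later).

```sql
-- Assumed setup (hypothetical names): a table and a logical slot.
-- CREATE TABLE t (a int);
-- SELECT pg_create_logical_replication_slot('slot1', 'test_decoding');

-- Session 1: prepare a two-phase transaction.
BEGIN;
INSERT INTO t VALUES (1);
PREPARE TRANSACTION 'gx1';

-- Session 2: decoding with two-phase output enabled invokes the
-- prepare callback for 'gx1'.  If 'gx1' is rolled back while its
-- changes are still being decoded, the transaction reaches the
-- prepare callback with concurrent_abort set and possibly
-- incomplete changes -- the case the thread documents.
SELECT data FROM pg_logical_slot_get_changes('slot1', NULL, NULL,
                                             'two-phase-commit', '1');

-- Any session: the concurrent abort; decoding this record later
-- produces the matching ROLLBACK PREPARED for gid 'gx1'.
ROLLBACK PREPARED 'gx1';
```

Because the prepare callback fires even for the concurrently aborted transaction, the later `ROLLBACK PREPARED` decoded from WAL finds a prepared transaction with a matching gid on the subscriber side.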
[ { "msg_contents": "I failed to document the lock acquired on tables that reference the\npartitioned table that we're detaching a partition from. This patch\n(proposed for backpatch to 12, where that feature was added) fixes that\nproblem.\n\nNoticed the omission a week ago while working out some details of\nconcurrent detach.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html", "msg_date": "Thu, 25 Mar 2021 15:02:44 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": true, "msg_subject": "document lock obtained for FKs during detach" } ]
[ { "msg_contents": "Hello, I happened to see a dubious behavior of walsender.\n\nOn a replication set with wal_keep_size/(segments) = 0, running the\nfollowing command on the primary causes walsender to fail to send up\nto the final shutdown checkpoint record to the standby.\n\n(create table t in advance)\n\npsql -c 'insert into t values(0); select pg_switch_wal();'; pg_ctl stop\n\nThe primary complains like this:\n\n2021-03-26 17:59:29.324 JST [checkpointer][140697] LOG: shutting down\n2021-03-26 17:59:29.387 JST [walsender][140816] ERROR: requested WAL segment 000000010000000000000032 has already been removed\n2021-03-26 17:59:29.387 JST [walsender][140816] STATEMENT: START_REPLICATION 0/32000000 TIMELINE 1\n2021-03-26 17:59:29.394 JST [postmaster][140695] LOG: database system is shut down\n\nThis is because XLogSendPhysical detects removal of the wal segment\ncurrently reading by shutdown checkpoint. However, there's no fear of\noverwriting of WAL segments at the time.\n\nSo I think we can omit the call to CheckXLogRemoved() while\nMyWalSnd->state is WALSNDSTATE_STOPPING because the state comes after\nthe shutdown checkpoint completes.\n\nOf course that doesn't help if walsender was running two segments\nbehind. There still could be a small window for the failure. But it's\na great help to save the case of just 1 segment behind.\n\nIs it worth fixing?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 26 Mar 2021 18:20:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Walsender may fail to send wal to the end." }, { "msg_contents": "Hi,\n\nOn 2021-03-26 18:20:14 +0900, Kyotaro Horiguchi wrote:\n> This is because XLogSendPhysical detects removal of the wal segment\n> currently reading by shutdown checkpoint. 
However, there' no fear of\n> overwriting of WAL segments at the time.\n>\n> So I think we can omit the call to CheckXLogRemoved() while\n> MyWalSnd->state is WALSNDSTTE_STOPPING because the state comes after\n> the shutdown checkpoint completes.\n>\n> Of course that doesn't help if walsender was running two segments\n> behind. There still could be a small window for the failure. But it's\n> a great help to save the case of just 1 segment behind.\n\n-1. This seems like a bandaid to make a broken configuration work a tiny\nbit better, without actually being meaningfully better.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 26 Mar 2021 10:16:40 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Walsender may fail to send wal to the end." }, { "msg_contents": "On Fri, Mar 26, 2021 at 10:16:40AM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2021-03-26 18:20:14 +0900, Kyotaro Horiguchi wrote:\n> > This is because XLogSendPhysical detects removal of the wal segment\n> > currently reading by shutdown checkpoint. However, there' no fear of\n> > overwriting of WAL segments at the time.\n> >\n> > So I think we can omit the call to CheckXLogRemoved() while\n> > MyWalSnd->state is WALSNDSTTE_STOPPING because the state comes after\n> > the shutdown checkpoint completes.\n> >\n> > Of course that doesn't help if walsender was running two segments\n> > behind. There still could be a small window for the failure. But it's\n> > a great help to save the case of just 1 segment behind.\n> \n> -1. This seems like a bandaid to make a broken configuration work a tiny\n> bit better, without actually being meaningfully better.\n\nAgreed. 
Still, wouldn't it be better to avoid such configurations and\nprotect things a bit with a check on the new value?\n--\nMichael", "msg_date": "Mon, 29 Mar 2021 14:47:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Walsender may fail to send wal to the end." }, { "msg_contents": "At Mon, 29 Mar 2021 14:47:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Mar 26, 2021 at 10:16:40AM -0700, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2021-03-26 18:20:14 +0900, Kyotaro Horiguchi wrote:\n> > > This is because XLogSendPhysical detects removal of the wal segment\n> > > currently reading by shutdown checkpoint. 
However, since\nmax_wal_senders already premises wal_level > minimal, we can accept\nthat restriction?\n\n\n<start serer>\n\nFATAL: WAL streaming (max_wal_senders > 0) requires wal_level \"replica\" or \"logical\"\n\n<Mmm. wal_level, fixed, then retry starting server>\n\nFATAL: WAL streaming (max_wal_senders > 0) requires wal_keep_size to be at least 1MB\n\n<Oops!>\n\nOf couse we can list all incompatible parameters at once.\n\nFATAL: WAL streaming (max_wal_senders > 0) requires wal_level \"replica\" or \"logical\" and wal_keep_size to be at least 1MB\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 29 Mar 2021 18:07:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Walsender may fail to send wal to the end." }, { "msg_contents": "Greetings,\n\n* Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> At Mon, 29 Mar 2021 14:47:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Fri, Mar 26, 2021 at 10:16:40AM -0700, Andres Freund wrote:\n> > > On 2021-03-26 18:20:14 +0900, Kyotaro Horiguchi wrote:\n> > > > This is because XLogSendPhysical detects removal of the wal segment\n> > > > currently reading by shutdown checkpoint. However, there' no fear of\n> > > > overwriting of WAL segments at the time.\n> > > >\n> > > > So I think we can omit the call to CheckXLogRemoved() while\n> > > > MyWalSnd->state is WALSNDSTTE_STOPPING because the state comes after\n> > > > the shutdown checkpoint completes.\n> > > >\n> > > > Of course that doesn't help if walsender was running two segments\n> > > > behind. There still could be a small window for the failure. But it's\n> > > > a great help to save the case of just 1 segment behind.\n> > > \n> > > -1. This seems like a bandaid to make a broken configuration work a tiny\n> > > bit better, without actually being meaningfully better.\n> > \n> > Agreed. 
Still, wouldn't it be better to avoid such configurations and\n> > protect a bit things with a check on the new value?\n\nI have a hard time agreeing that this is somehow a 'broken'\nconfiguration, instead it looks like a race condition that wasn't\nconsidered and should be addressed. If there's zero lag then we really\nshould allow the final WAL to get sent to the replica.\n\n> The repro was a bit artificial but the symptom happened without\n> pg_switch_wal() and no load. It caused just by shutting down of\n> primary. If it is normal behavior for walsenders to fail to send the\n> last shutdown record to standby while fast shutdown, we should refuse\n> to startup at least wal sender if wal_keep_size = 0.\n> \n> I can guess two ways to do that.\n\nBoth of which will break things for people, so this certainly isn't a\ngreat approach, and besides, if archiving is happening with\narchive_command and the replica has a restore command then it should be\nable to follow that just fine, no? So we'd have to also check if\narchive_command has been set up and hope the admin has a restore\ncommand. Having to go through that dance instead of just making sure to\npush out the last WAL to the replica seems a bit silly though.\n\nThanks,\n\nStephen", "msg_date": "Mon, 29 Mar 2021 11:41:32 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Walsender may fail to send wal to the end." }, { "msg_contents": "At Mon, 29 Mar 2021 11:41:32 -0400, Stephen Frost <sfrost@snowman.net> wrote in \n> Greetings,\n> \n> * Kyotaro Horiguchi (horikyota.ntt@gmail.com) wrote:\n> > At Mon, 29 Mar 2021 14:47:33 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > > On Fri, Mar 26, 2021 at 10:16:40AM -0700, Andres Freund wrote:\n> > > > On 2021-03-26 18:20:14 +0900, Kyotaro Horiguchi wrote:\n> > > > > This is because XLogSendPhysical detects removal of the wal segment\n> > > > > currently reading by shutdown checkpoint. 
However, there' no fear of\n> > > > > overwriting of WAL segments at the time.\n> > > > >\n> > > > > So I think we can omit the call to CheckXLogRemoved() while\n> > > > > MyWalSnd->state is WALSNDSTTE_STOPPING because the state comes after\n> > > > > the shutdown checkpoint completes.\n> > > > >\n> > > > > Of course that doesn't help if walsender was running two segments\n> > > > > behind. There still could be a small window for the failure. But it's\n> > > > > a great help to save the case of just 1 segment behind.\n> > > > \n> > > > -1. This seems like a bandaid to make a broken configuration work a tiny\n> > > > bit better, without actually being meaningfully better.\n> > > \n> > > Agreed. Still, wouldn't it be better to avoid such configurations and\n> > > protect a bit things with a check on the new value?\n> \n> I have a hard time agreeing that this is somehow a 'broken'\n> configuration, instead it looks like a race condition that wasn't\n> considered and should be addressed. If there's zero lag then we really\n> should allow the final WAL to get sent to the replica.\n\nMy unstated point was switching primary/secondary roles in a\nreplication set where both host have separate archives, by the steps\n\"fast shutdown primary\"->\"promote standby\"->\"attach the old primary as\nnew standby\", wihtout a need of synchronizing old primary's archive to\nthat of the new standby before starting the new standby. I thought\nthat should work even if wal_keep_size = 0.\n\n> > The repro was a bit artificial but the symptom happened without\n> > pg_switch_wal() and no load. It caused just by shutting down of\n> > primary. 
If it is normal behavior for walsenders to fail to send the\n> > last shutdown record to the standby during a fast shutdown, we should refuse\n> > to start up at least the walsender if wal_keep_size = 0.\n> > \n> > I can guess two ways to do that.\n> \n> Both of which will break things for people, so this certainly isn't a\n> great approach, and besides, if archiving is happening with\n> archive_command and the replica has a restore command then it should be\n\nRight. \n\n> able to follow that just fine, no? So we'd have to also check if\n> archive_command has been set up and hope the admin has a restore\n\nYeah, that sounds stupid (or kind of impossible).\n\n> command. Having to go through that dance instead of just making sure to\n> push out the last WAL to the replica seems a bit silly though.\n\nSounds reasonable to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 30 Mar 2021 15:42:05 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Walsender may fail to send wal to the end." } ]
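The zero-lag question Stephen raises above comes down to WAL-segment arithmetic: whether the record a walsender still has to send lives in a segment that the shutdown checkpoint may already have recycled. As a rough illustration of that arithmetic (a Python sketch, not PostgreSQL source; the helper names are ours and the 16 MB default `wal_segment_size` is assumed), an LSN maps to a segment file name like this:

```python
# Sketch: map an LSN to its WAL segment file name, mirroring the
# XLByteToSeg/XLogFileName logic for the default 16 MB segment size,
# and count how many segment boundaries a standby is behind.

WAL_SEGMENT_SIZE = 16 * 1024 * 1024                      # default wal_segment_size
SEGMENTS_PER_XLOG_ID = 0x100000000 // WAL_SEGMENT_SIZE   # 256 for 16 MB segments

def parse_lsn(lsn):
    """Parse the textual 'XXXXXXXX/XXXXXXXX' LSN form into a 64-bit integer."""
    hi, lo = lsn.split('/')
    return (int(hi, 16) << 32) | int(lo, 16)

def wal_file_name(tli, lsn):
    """Name of the segment file holding the byte at `lsn` on timeline `tli`."""
    segno = parse_lsn(lsn) // WAL_SEGMENT_SIZE
    return '%08X%08X%08X' % (tli,
                             segno // SEGMENTS_PER_XLOG_ID,
                             segno % SEGMENTS_PER_XLOG_ID)

def segments_behind(primary_lsn, standby_lsn):
    """Segment boundaries between a standby's flush point and the primary's."""
    return (parse_lsn(primary_lsn) // WAL_SEGMENT_SIZE
            - parse_lsn(standby_lsn) // WAL_SEGMENT_SIZE)

print(wal_file_name(1, '0/16D68A8'))           # -> 000000010000000000000001
print(segments_behind('0/3000000', '0/16D68A8'))   # -> 2
```

On a real server with default settings and timeline 1, `SELECT pg_walfile_name('0/16D68A8')` yields the same name, which makes it easy to compare a standby's `pg_last_wal_receive_lsn()` against the primary's shutdown position.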
[ { "msg_contents": "Hi, all\n\nRecently, I found a bug in the update timing of the walrcv->flushedUpto variable. Consider the following scenario: there is one Primary node and one Standby node which is streaming from the Primary:\nThere are a large number of SQL statements running on the Primary, and the length of an xlog record generated by them may be greater than the space left on the current page, so that it needs to be written across pages. As shown below, the length of the last_xlog of wal_1 is greater than the space left on last_page, so it has to be written into wal_2. If the Primary crashed after flushing the last_page of wal_1 to disk, the remaining content of last_xlog hasn't been flushed in time, so the last_xlog in wal_1 will be incomplete. And the Standby also received wal_1 by wal-streaming in this case.\n[日志1.png]\n\nThe Primary restarts after the crash; during crash recovery, the Primary will find that the last_xlog of wal_1 is invalid, and it will cover the space of last_xlog by inserting a new xlog record. However, the Standby won't do this, and there will be an xlog inconsistency between Primary and Standby at this time.\n\n\nWhen the Standby restarts and replays the last_xlog, it will first get the content of the XLogRecord structure (the header of last_xlog is completely flushed), and find that it has to reassemble the last_xlog; the next page of last_xlog is within wal_2, which does not exist in the pg_wal of the Standby. So it requests xlog streaming from the Primary to get wal_2, and updates walrcv->flushedUpto when it has received new xlog and flushed it to disk; now the value of walrcv->flushedUpto is some LSN within wal_2.\n\n\nThe Standby gets wal_2 from the Primary, but the content of the first page of wal_2 is not the remaining content of last_xlog, which has already been covered by new xlog on the Primary. 
Standby checked and found that the record is invalid, it will read the last_xlog again, and call the WaitForWALToBecomeAvailable function, in this function it will shutdown the wal-streaming and read the record from pg_wal.\n\n\nAgain, the record read from pg_wal is also invalid, so Standby will request wal-streaming again, and it is worth noting that the value of walrcv->flushedUpto has already been set to wal_2 before, which is greater than the LSN Standby needs, so the variable havedata in WaitForWALToBecomeAvailable is always true, and Standby considers that it received the xlog, it will read the content from wal_2.\n\n\nNext is the endless loop: Standby found the xlog is invalid -> read the last_xlog again -> shutdown wal-streaming and read xlog from pg_wal -> found the xlog is invalid -> request wal-streaming, expect to get the correct xlog, but it will return from WaitForWALToBecomeAvailable immediately because the walrcv->flushedUpto is always greater than the LSN it needs ->read and found the xlog is invalid -> read the last_xlog again ->......\n\n\nIn this case, Standby will never get the correct xlog record until it restarts\n\n\nThe confusing point is: why only updates the walrcv->flushedUpto at the first startup of walreceiver on a specific timeline, not each time when request xlog streaming? In above case, it is also reasonable to update walrcv->flushedUpto to wal_1 when Standby re-receive wal_1. 
So I changed to update the walrcv->flushedUpto each time when request xlog streaming, which is the patch I want to share with you, based on postgresql-13.2, what do you think of this change?\n\nBy the way, I also want to know why call pgstat_reset_all function during recovery process?\n\nThanks & Best Regard", "msg_date": "Fri, 26 Mar 2021 23:44:21 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?QnVnIG9uIHVwZGF0ZSB0aW1pbmcgb2Ygd2FscmN2LT5mbHVzaGVkVXB0byB2YXJpYWJsZQ==?=" }, { "msg_contents": "Hi.\r\n\r\n(Added Nathan, Andrey and Heikki in Cc:)\r\n\r\nAt Fri, 26 Mar 2021 23:44:21 +0800, \"蔡梦娟(玊于)\" <mengjuan.cmj@alibaba-inc.com> wrote in \r\n> Hi, all\r\n> \r\n> Recently, I found a bug on update timing of walrcv->flushedUpto variable, consider the following scenario, there is one Primary node, one Standby node which streaming from Primary:\r\n> There are a large number of SQL running in the Primary, and the length of the xlog record generated by these SQL maybe greater than the left space of current page so that it needs to be written cross pages. As shown below, the length of the last_xlog of wal_1 is greater than the left space of last_page, so it has to be written in wal_2. If Primary crashed after flused the last_page of wal_1 to disk, the remian content of last_xlog hasn't been flushed in time, then the last_xlog in wal_1 will be incomplete. And Standby also received the wal_1 by wal-streaming in this case.\r\n\r\nIt seems like the same with the issue discussed in [1].\r\n\r\nThere are two symptom of the issue, one is that archive ends with a\r\nsegment that ends with a immature WAL record, which causes\r\ninconsistency between archive and pg_wal directory. 
Another is, as\r\nyou saw, that walreceiver receives an immature record at the end of a\r\nsegment, which prevents recovery from proceeding.\r\n\r\nIn that thread, we are trying to solve this by preventing such immature\r\nrecords at a segment boundary from being archived and inhibiting them from being\r\nsent to the standby.\r\n\r\n> [日志1.png]\r\n\r\nIt doesn't seem attached..\r\n\r\n> The confusing point is: why only updates the walrcv->flushedUpto at the first startup of walreceiver on a specific timeline, not each time when request xlog streaming? In above case, it is also reasonable to update walrcv->flushedUpto to wal_1 when Standby re-receive wal_1. So I changed to update the walrcv->flushedUpto each time when request xlog streaming, which is the patch I want to share with you, based on postgresql-13.2, what do you think of this change?\r\n> \r\n> By the way, I also want to know why call pgstat_reset_all function during recovery process?\r\n\r\nWe shouldn't rewind flushedUpto backward. The variable notifies how\r\nfar recovery (or the startup process) can read WAL content safely. Once\r\nthe startup process reads the beginning of a record, XLogReadRecord tries\r\nto continue fetching *only the rest* of the record, which is\r\ninconsistent with the first part in this scenario. So this fix\r\nalone doesn't work correctly. 
And we also need to fix the archive\r\ninconsistency, maybe as part of a fix for this issue.\r\n\r\nWe are trying to fix this by refraining from archiving (or streaming)\r\nuntil a record crossing a segment boundary is completely flushed.\r\n\r\nregards.\r\n\r\n\r\n[1] https://www.postgresql.org/message-id/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\r\n\r\n-- \r\nKyotaro Horiguchi\r\nNTT Open Source Software Center\r\n", "msg_date": "Mon, 29 Mar 2021 10:54:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Bug on update timing of walrcv->flushedUpto variable" }, { "msg_contents": "Hi.\n\n I still feel confused about some points, and hope to get your answers: \n 1) You said that \"We shouldn't rewind flushedUpto to backward. The variable notifies how far recovery (or startup process) can read WAL content safely. \"\n This fix only rewinds flushedUpto when requesting wal streaming, and rewinding flushedUpto means it is re-receiving one wal segment file, which indicates that there may be some wrong data in the previous one. flushedUpto notifies how far the startup process can read WAL content safely because WAL before that has been flushed to disk. However, this doesn't mean it is correct to read the WAL, because the content may be invalid. And if we rewind flushedUpto, the WaitForWALToBecomeAvailable function returns true only when the WAL re-received is greater than what XLogReadRecord needs, and at this time it is correct and also safe to read the content. 
\n By the way, what do you think of updating flushedUpto to LogstreamResult.Flush when starting streaming rather than when requesting streaming? The value of LogstreamResult.Flush is set to replayPtr when streaming starts.\n\n 2) You said that \"Once startup process reads the beginning of a record, XLogReadRecord tries to continue fetching *only the rest* of the record, which is inconsistent from the first part in this scenario.\"\n If XLogReadRecord finds the record is invalid, it will read the whole record again, including the beginning of the record, not only the rest of the record. Am I missing something?\n\n 3) I wonder how you are going to achieve preventing immature records at a segment boundary from being archived and inhibiting them from being sent to the standby.\n\n Thanks & Best Regards\n\n------------------------------------------------------------------\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Monday, March 29, 2021 09:54\nTo: 蔡梦娟(玊于) <mengjuan.cmj@alibaba-inc.com>\nCc: bossartn <bossartn@amazon.com>; x4mmm <x4mmm@yandex-team.ru>; hlinnaka <hlinnaka@iki.fi>; pgsql-hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Bug on update timing of walrcv->flushedUpto variable\n\nHi.\n\n(Added Nathan, Andrey and Heikki in Cc:)\n\nAt Fri, 26 Mar 2021 23:44:21 +0800, \"蔡梦娟(玊于)\" <mengjuan.cmj@alibaba-inc.com> wrote in \n> Hi, all\n> \n> Recently, I found a bug on update timing of walrcv->flushedUpto variable, consider the following scenario, there is one Primary node, one Standby node which streaming from Primary:\n> There are a large number of SQL running in the Primary, and the length of the xlog record generated by these SQL maybe greater than the left space of current page so that it needs to be written cross pages. As shown below, the length of the last_xlog of wal_1 is greater than the left space of last_page, so it has to be written in wal_2. 
If Primary crashed after flused the last_page of wal_1 to disk, the remian content of last_xlog hasn't been flushed in time, then the last_xlog in wal_1 will be incomplete. And Standby also received the wal_1 by wal-streaming in this case.\n\nIt seems like the same with the issue discussed in [1].\n\nThere are two symptom of the issue, one is that archive ends with a\nsegment that ends with a immature WAL record, which causes\ninconsistency between archive and pg_wal directory. Another is , as\nyou saw, walreceiver receives an immature record at the end of a\nsegment, which prevents recovery from proceeding.\n\nIn the thread, trying to solve that by preventing such an immature\nrecords at a segment boundary from being archived and inhibiting being\nsent to standby.\n\n> [日志1.png]\n\nIt doesn't seem attached..\n\n> The confusing point is: why only updates the walrcv->flushedUpto at the first startup of walreceiver on a specific timeline, not each time when request xlog streaming? In above case, it is also reasonable to update walrcv->flushedUpto to wal_1 when Standby re-receive wal_1. So I changed to update the walrcv->flushedUpto each time when request xlog streaming, which is the patch I want to share with you, based on postgresql-13.2, what do you think of this change?\n> \n> By the way, I also want to know why call pgstat_reset_all function during recovery process?\n\nWe shouldn't rewind flushedUpto to backward. The variable notifies how\nfar recovery (or startup process) can read WAL content safely. Once\nstartup process reads the beginning of a record, XLogReadRecord tries\nto continue fetching *only the rest* of the record, which is\ninconsistent from the first part in this scenario. So at least only\nthis fix doesn't work fine. 
And we also need to fix the archive\ninconsistency, maybe as a part of a fix for this issue.\n\nWe are trying to fix this by refraining from archiving (or streaming)\nuntil a record crossing a segment boundary is completely flushed.\n\nregards.\n\n\n[1] https://www.postgresql.org/message-id/CBDDFA01-6E40-46BB-9F98-9340F4379505%40amazon.com\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 12 Apr 2021 10:01:43 +0800", "msg_from": "\"=?UTF-8?B?6JSh5qKm5aifKOeOiuS6jik=?=\" <mengjuan.cmj@alibaba-inc.com>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?5Zue5aSN77yaQnVnIG9uIHVwZGF0ZSB0aW1pbmcgb2Ygd2FscmN2LT5mbHVzaGVkVXB0byB2?=\n =?UTF-8?B?YXJpYWJsZQ==?=" } ]
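The endless loop described in this thread hinges on one check: WaitForWALToBecomeAvailable treats WAL as available whenever the requested LSN is not beyond walrcv->flushedUpto, even when what was flushed there is stale. A toy model can make the failure mode and the proposed rewind concrete (illustrative Python, not the real xlog.c logic; all names here are ours):

```python
# Toy model of the "havedata" check that makes the reported loop spin:
# once walrcv.flushed_upto has been advanced past the record being
# re-fetched, the check is satisfied immediately and the standby never
# actually waits for fresh WAL from the primary.

class WalRcv:
    """Minimal stand-in for the walreceiver shared state."""
    def __init__(self, flushed_upto):
        self.flushed_upto = flushed_upto   # highest LSN flushed by walreceiver

def wait_for_wal(walrcv, rec_ptr, rewind_on_request):
    """Return True as soon as WAL covering rec_ptr is believed available."""
    if rewind_on_request:
        # The proposed fix: restart from the LSN being requested, so that
        # stale state from a previous (bad) segment cannot satisfy later
        # checks before the segment has really been re-received.
        walrcv.flushed_upto = rec_ptr
    return walrcv.flushed_upto >= rec_ptr   # the "havedata" condition

BAD_RECORD_LSN = 100

# Without the fix: flushed_upto already points into the next segment, so
# the (invalid) on-disk bytes are considered available again and again.
rcv = WalRcv(flushed_upto=200)
assert wait_for_wal(rcv, BAD_RECORD_LSN, rewind_on_request=False)

# With the fix: the bookkeeping is reset to the record actually needed,
# and only genuinely re-received WAL can advance it past that point.
rcv = WalRcv(flushed_upto=200)
wait_for_wal(rcv, BAD_RECORD_LSN, rewind_on_request=True)
assert rcv.flushed_upto == BAD_RECORD_LSN
```

This deliberately ignores timelines and the archive-consistency side of the problem; it only shows why a never-rewound flushedUpto keeps the loop alive.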
[ { "msg_contents": "I hope you still have a chance to get your review in this cycle; no reviewer had been assigned since January.\r\n\r\nIt compiles, builds, and runs make check-world cleanly. \r\nPreferably apply the patches in the order below (or I can consolidate them into a single patch if you wish)\r\n\r\n64bitGUCs_INT64\r\nXID_FMT\r\nClogPageNumber\r\nFixMergeConflicts_26Mar2021", "msg_date": "Fri, 26 Mar 2021 18:02:31 +0000", "msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>", "msg_from_op": true, "msg_subject": "Re: Challenges preventing us moving to 64 bit transaction id (XID)?" } ]
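For readers following the 64-bit XID work: the machinery this patch set aims to retire is the modulo-2^32 "circular" XID comparison, in which one normal XID precedes another if their signed 32-bit difference is negative. A sketch of that comparison (Python rather than the C macro form; the handling of the reserved special XIDs is simplified here):

```python
# Sketch of the circular transaction-id comparison that 32-bit XIDs
# require (and that a 64-bit XID space would make unnecessary):
# normal XIDs compare by signed 32-bit difference, modulo 2^32.

FIRST_NORMAL_XID = 3   # 0 = Invalid, 1 = Bootstrap, 2 = Frozen are reserved

def xid_precedes(id1, id2):
    """True if normal XID id1 is logically older than id2 (mod 2^32)."""
    assert id1 >= FIRST_NORMAL_XID and id2 >= FIRST_NORMAL_XID
    diff = (id1 - id2) & 0xFFFFFFFF
    if diff >= 0x80000000:          # reinterpret as a signed 32-bit value
        diff -= 0x100000000
    return diff < 0

# Without wraparound the comparison behaves like plain "<":
assert xid_precedes(100, 200)
# After wraparound, a tiny XID is logically *newer* than one near 2^32:
assert xid_precedes(0xFFFFFF00, 100)
```

The flip side of this scheme is that only XIDs less than 2^31 apart are comparable at all, which is what forces anti-wraparound freezing and is the motivation for the 64-bit proposal above.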
[ { "msg_contents": "Hi,\n\nThe attached patch allows CustomScan nodes to signal whether they\nsupport projection.\nCurrently all CustomScan nodes are treated as supporting projection.\nBut it would be nice\nfor custom nodes to opt out of this to prevent postgres from modifying\nthe targetlist of\nthe custom node.\n\nFor features similar to set-returning functions, the function call\nneeds to be a top-level expression in the custom node that implements it,\nbut any targetlist adjustments\nmight be modified by postgres because it is not aware of the special\nmeaning of those function calls, and it might push down a different\ntargetlist into the node because it assumes\nthe node can project.\n\nI named the flag CUSTOMPATH_SUPPORT_PROJECTION similar to the other\ncustom node flags, but this would revert the current logic, and nodes\nwould have to opt into\nprojection. I thought about naming it CUSTOMPATH_CANNOT_PROJECT to keep\nthe current default and make it an opt-out. But that would make the\nflag name notably different from the other flag names. Any opinions on\nthe flag name and whether it should be opt-in or opt-out?\n\n-- \nRegards, Sven Klemm", "msg_date": "Fri, 26 Mar 2021 19:56:54 +0100", "msg_from": "Sven Klemm <sven@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Allow CustomScan nodes to signal projection support" }, { "msg_contents": "Hi Sven,\n\n> The attached patch allows CustomScan nodes to signal whether they\n> support projection.\n\nI noticed that you didn't change custom-scan.sgml accordingly. The\nupdated patch is attached. Otherwise, it seems to be fine in terms of\ncompiling, passing tests etc.\n\n> I named the flag CUSTOMPATH_SUPPORT_PROJECTION similar to the other\n> custom node flags, but this would revert the current logic\n\nThis seems to be a typical Kobayashi Maru situation, i.e. any choice is\na bad one. 
I suggest keeping the patch as is and hoping that the\ndevelopers of existing extensions read the release notes.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 3 May 2021 16:18:19 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow CustomScan nodes to signal projection support" }, { "msg_contents": "Aleksander Alekseev <aleksander@timescale.com> writes:\n>> I named the flag CUSTOMPATH_SUPPORT_PROJECTION similar to the other\n>> custom node flags, but this would revert the current logic\n\n> This seems to be a typical Kobayashi Maru situation, i.e any choice is\n> a bad one. I suggest keeping the patch as is and hoping that the\n> developers of existing extensions read the release notes.\n\nYeah, I concur that's the least bad choice.\n\nI got annoyed by the fact that the existing checks of CustomPath flags\nhad several randomly different styles for basically identical tests,\nand this patch wanted to introduce yet another. I cleaned that up and\npushed this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 06 Jul 2021 18:14:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Allow CustomScan nodes to signal projection support" } ]
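The behavior committed above makes projection support an opt-in bit in the CustomPath flags bitmask. A small illustrative sketch of the flag test (Python; the flag values mirror what nodes/extensible.h is believed to define after this patch, and the helper function is ours, not a PostgreSQL API):

```python
# Illustrative sketch of the opt-in flag semantics discussed in this
# thread: a CustomPath provider sets bits in a flags word, and the
# planner only pushes a modified targetlist into the custom node when
# the projection bit is present.

CUSTOMPATH_SUPPORT_BACKWARD_SCAN = 0x0001
CUSTOMPATH_SUPPORT_MARK_RESTORE  = 0x0002
CUSTOMPATH_SUPPORT_PROJECTION    = 0x0004   # the bit this patch adds

def custom_path_supports_projection(flags):
    """True only when the provider explicitly opted into projection."""
    return bool(flags & CUSTOMPATH_SUPPORT_PROJECTION)

# A node that only opted into backward scans keeps its own targetlist
# (e.g. so top-level set-returning calls are not rearranged):
assert not custom_path_supports_projection(CUSTOMPATH_SUPPORT_BACKWARD_SCAN)

# A node that opted in may have the planner project for it:
assert custom_path_supports_projection(
    CUSTOMPATH_SUPPORT_MARK_RESTORE | CUSTOMPATH_SUPPORT_PROJECTION)
```

This is why the opt-in direction Tom and Aleksander settled on is a behavior change for existing extensions: a provider that never sets the new bit now keeps its targetlist untouched.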
[ { "msg_contents": "Hi,\n\nWith extended statistics it may not be immediately obvious if they were\napplied and to which clauses. If you have multiple extended statistics,\nwe may also apply them in different order, etc. And with expressions,\nthere's also the question of matching expressions to the statistics.\n\nSo it seems useful to include this into in the explain plan - show which\nstatistics were applied, in which order. Attached is an early PoC patch\ndoing that in VERBOSE mode. I'll add it to the next CF.\n\n\nA simple example demonstrating the idea:\n\n======================================================================\n\n create table t (a int, b int);\n insert into t select mod(i,10), mod(i,10)\n from generate_series(1,100000) s(i);\n\n create statistics s on a, b from t;\n analyze t;\n\ntest=# explain (verbose) select * from t where a = 1 and b = 1;\n QUERY PLAN\n---------------------------------------------------------------\n Seq Scan on public.t (cost=0.00..1943.00 rows=10040 width=8)\n Output: a, b\n Filter: ((t.a = 1) AND (t.b = 1))\n Statistics: public.s Clauses: ((a = 1) AND (b = 1))\n(4 rows)\n\ntest=# explain (verbose) select 1 from t group by a, b;\n QUERY PLAN\n----------------------------------------------------------------------\n HashAggregate (cost=1943.00..1943.10 rows=10 width=12)\n Output: 1, a, b\n Group Key: t.a, t.b\n -> Seq Scan on public.t (cost=0.00..1443.00 rows=100000 width=8)\n Output: a, b\n Statistics: public.s Clauses: (a AND b)\n(6 rows)\n\n======================================================================\n\nThe current implementation is a bit ugly PoC, with a couple annoying\nissues that need to be solved:\n\n1) The information is stashed in multiple lists added to a Plan. Maybe\nthere's a better place, and maybe we need to invent a better way to\ntrack the info (a new node stashed in a single List).\n\n2) The deparsing is modeled (i.e. 
copied) from how we deal with index\nquals, but it's having issues with nested OR clauses, because there are\nnested RestrictInfo nodes and the deparsing does not expect that.\n\n3) It does not work for functional dependencies, because we effectively\n\"merge\" all functional dependencies and apply the entries. Not sure how\nto display this, but I think it should show the individual dependencies\nactually applied.\n\n4) The info is collected always, but I guess we should do that only when\nin explain mode. Not sure how expensive it is.\n\n5) It includes just statistics name + clauses, but maybe we should\ninclude additional info (e.g estimate for that combination of clauses).\n\n6) The clauses in the grouping query are transformed to AND list, which\nis wrong. This is easy to fix, I was lazy to do that in a PoC patch.\n\n7) It does not show statistics for individual expressions. I suppose\nexamine_variable could add it to the rel somehow, and maybe we could do\nthat with index expressions too?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 27 Mar 2021 01:50:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Showing applied extended statistics in explain" }, { "msg_contents": "Le samedi 27 mars 2021, 01:50:54 CEST Tomas Vondra a écrit :\n> The current implementation is a bit ugly PoC, with a couple annoying\n> issues that need to be solved:\n> \nHello Thomas,\n\nI haven't looked at the implementation at all but I think it's an interesting \nidea.\n\n\n> 1) The information is stashed in multiple lists added to a Plan. Maybe\n> there's a better place, and maybe we need to invent a better way to\n> track the info (a new node stashed in a single List).\n\nYes this would probably be cleaner.\n\n> \n> 2) The deparsing is modeled (i.e. 
copied) from how we deal with index\n> quals, but it's having issues with nested OR clauses, because there are\n> nested RestrictInfo nodes and the deparsing does not expect that.\n> \n> 3) It does not work for functional dependencies, because we effectively\n> \"merge\" all functional dependencies and apply the entries. Not sure how\n> to display this, but I think it should show the individual dependencies\n> actually applied.\n\nYes that would be useful when trying to understand where an estimation comes \nfrom. \n\n> \n> 4) The info is collected always, but I guess we should do that only when\n> in explain mode. Not sure how expensive it is.\n\nThat would probably be better yes. \n\n> \n> 5) It includes just statistics name + clauses, but maybe we should\n> include additional info (e.g estimate for that combination of clauses).\n\nI'm not sure the estimate for the combination is that useful, as you have an \nassociated estimated number of rows attached to the node. I think to be able \nto really make sense of it, a GUC disabling the extended statistics could be \nuseful for the curious DBA to compare the selectivity estimation for a plan \nwith the statistics and a plan without. \n\n> \n> 6) The clauses in the grouping query are transformed to AND list, which\n> is wrong. This is easy to fix, I was lazy to do that in a PoC patch.\n> \n> 7) It does not show statistics for individual expressions. I suppose\n> examine_variable could add it to the rel somehow, and maybe we could do\n> that with index expressions too?\n\nYes this would be useful for single-expression extended statistics as well as \nstatistics collected from a functional index.\n\nI don't know if it's doable, but if we want to provide insights into how \nstatistics are used, it could be nice to also display the statistics target \nused. Since the values at the time of the last analyze and the current value \nmight be different, it could be nice to store it along with the stats. 
I \nremember having to troubleshoot queries where the problem was an ALTER <TABLE/\nINDEX> ... SET STATISTICS had not been run as expected, and having that \ninformation available in the plan for a complex query might help in diagnosing \nthe problem quicker.\n\nRegards,\n\n-- \nRonan Dunklau\n\n\n\n\n", "msg_date": "Fri, 23 Jul 2021 17:12:25 +0200", "msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "> On Sat, Mar 27, 2021 at 01:50:54AM +0100, Tomas Vondra wrote:\n> Hi,\n>\n> With extended statistics it may not be immediately obvious if they were\n> applied and to which clauses. If you have multiple extended statistics,\n> we may also apply them in different order, etc. And with expressions,\n> there's also the question of matching expressions to the statistics.\n>\n> So it seems useful to include this into in the explain plan - show which\n> statistics were applied, in which order. Attached is an early PoC patch\n> doing that in VERBOSE mode. I'll add it to the next CF.\n\nHi,\n\nsounds like a useful improvement indeed, thanks for the patch. Do you\nplan to invest more time in it?\n\n> 1) The information is stashed in multiple lists added to a Plan. Maybe\n> there's a better place, and maybe we need to invent a better way to\n> track the info (a new node stashed in a single List).\n>\n> ...\n>\n> 3) It does not work for functional dependencies, because we effectively\n> \"merge\" all functional dependencies and apply the entries. Not sure how\n> to display this, but I think it should show the individual dependencies\n> actually applied.\n>\n> ...\n>\n> 5) It includes just statistics name + clauses, but maybe we should\n> include additional info (e.g estimate for that combination of clauses).\n\nYes, a new node would be nice to have. 
The other questions above are\nsomewhat related to what it should contain, and I guess it depends on\nthe use case this patch targets. E.g. for the case \"help to figure out\nif an extended statistics was applied\" even \"merged\" functional\ndependencies would be fine I believe. More advanced plan troubleshooting\nmay benefit from estimates. What exactly use case do you have in mind?\n\n> 4) The info is collected always, but I guess we should do that only when\n> in explain mode. Not sure how expensive it is.\n\nMaybe it's in fact not that expensive to always collect the info? Adding\nit as it is now do not increase number of cache lines Plan structure\noccupies (although it comes close to the boundary), and a couple of\nsimple tests with CPU bounded load of various types doesn't show\nsignificant slowdown. I haven't tried the worst case scenario with a lot\nof extended stats involved, but in such situations I can imagine the\noverhead also could be rather small in comparison with other expenses.\n\nI've got few more questions after reading the patch:\n\n* Is there any particular reason behind choosing only some scan nodes to\n display extended stats? E.g. BitmapHeapScan is missing.\n\n* StatisticExtInfo should have a _copy etc. node functionalty now,\n right? I've hit \"unrecognized node type\" with a prepared statement\n because it's missing.\n\n\n", "msg_date": "Tue, 27 Jul 2021 12:21:12 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "Hi,\n\nOn 7/27/21 12:21 PM, Dmitry Dolgov wrote:\n>> On Sat, Mar 27, 2021 at 01:50:54AM +0100, Tomas Vondra wrote:\n>> Hi,\n>>\n>> With extended statistics it may not be immediately obvious if they were\n>> applied and to which clauses. If you have multiple extended statistics,\n>> we may also apply them in different order, etc. 
And with expressions,\n>> there's also the question of matching expressions to the statistics.\n>>\n>> So it seems useful to include this into in the explain plan - show which\n>> statistics were applied, in which order. Attached is an early PoC patch\n>> doing that in VERBOSE mode. I'll add it to the next CF.\n> \n> Hi,\n> \n> sounds like a useful improvement indeed, thanks for the patch. Do you\n> plan to invest more time in it?\n> \n\nYes. I think providing more insight into which statistics were applied,\nin which order and to which clauses, is quite desirable.\n\n>> 1) The information is stashed in multiple lists added to a Plan. Maybe\n>> there's a better place, and maybe we need to invent a better way to\n>> track the info (a new node stashed in a single List).\n>>\n>> ...\n>>\n>> 3) It does not work for functional dependencies, because we effectively\n>> \"merge\" all functional dependencies and apply the entries. Not sure how\n>> to display this, but I think it should show the individual dependencies\n>> actually applied.\n>>\n>> ...\n>>\n>> 5) It includes just statistics name + clauses, but maybe we should\n>> include additional info (e.g estimate for that combination of clauses).\n> \n> Yes, a new node would be nice to have. The other questions above are\n> somewhat related to what it should contain, and I guess it depends on\n> the use case this patch targets. E.g. for the case \"help to figure out\n> if an extended statistics was applied\" even \"merged\" functional\n> dependencies would be fine I believe.\n\nWhat do you mean by \"merged\" functional dependencies? I guess we could\nsay \"these clauses were estimated using dependencies\" without listing\nwhich exact dependencies were applied.\n\n> More advanced plan troubleshooting may benefit from estimates.\n\nI'm sorry, I don't know what you mean by this. 
Can you elaborate?\n\n> What exactly use case do you have in mind?\nWell, my goal was to help users investigating the plan/estimates,\nbecause once you create multiple \"candidate\" statistics objects it may\nget quite tricky to determine which of them were applied, in what order,\netc. It's not all that straightforward, given the various heuristics we\nuse to pick the \"best\" statistics, apply dependencies last, etc. And I\ndon't quite want to teach the users those rules, because I consider them\nmostly implementation details that wee may want to tweak in the future.\n\n>> 4) The info is collected always, but I guess we should do that only when\n>> in explain mode. Not sure how expensive it is.\n> \n> Maybe it's in fact not that expensive to always collect the info? Adding\n> it as it is now do not increase number of cache lines Plan structure\n> occupies (although it comes close to the boundary), and a couple of\n> simple tests with CPU bounded load of various types doesn't show\n> significant slowdown. I haven't tried the worst case scenario with a lot\n> of extended stats involved, but in such situations I can imagine the\n> overhead also could be rather small in comparison with other expenses.\n> \n\nYeah, once there are many statistics it's probably not an issue - the\noverhead from processing them is likely way higher than copying this\nextra info. Plus people tend to create statistics when there are issues\nwith planning the queries.\n\n> I've got few more questions after reading the patch:\n> \n> * Is there any particular reason behind choosing only some scan nodes to\n> display extended stats? E.g. BitmapHeapScan is missing.\n> \n\nAll nodes that may apply extended stats to estimate quals should include\nthis info. I guess BitmapHeapScan may do that for the filter, right?\n\n> * StatisticExtInfo should have a _copy etc. node functionalty now,\n> right? 
I've hit \"unrecognized node type\" with a prepared statement\n> because it's missing.\n> \n\nYeah, probably.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 27 Jul 2021 22:20:34 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 7/27/21 12:21 PM, Dmitry Dolgov wrote:\n>>> So it seems useful to include this into in the explain plan - show which\n>>> statistics were applied, in which order. Attached is an early PoC patch\n>>> doing that in VERBOSE mode. I'll add it to the next CF.\n\n> Yes. I think providing more insight into which statistics were applied,\n> in which order and to which clauses, is quite desirable.\n\nTBH I do not agree that this is a great idea. I think it's inevitably\nexposing a lot of unstable internal planner details. I like even less\nthe aspect that this means a lot of information has to be added to the\nfinished plan in case it's needed for EXPLAIN. Aside from the sheer\ncost of copying that data around, what happens if for example somebody\ndrops a statistic object between the time of planning and the EXPLAIN?\nAre we going to start keeping locks on those objects for the lifetime\nof the plans?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 27 Jul 2021 16:50:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "\nOn 7/27/21 10:50 PM, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 7/27/21 12:21 PM, Dmitry Dolgov wrote:\n>>>> So it seems useful to include this into in the explain plan - show which\n>>>> statistics were applied, in which order. Attached is an early PoC patch\n>>>> doing that in VERBOSE mode. 
I'll add it to the next CF.\n> \n>> Yes. I think providing more insight into which statistics were applied,\n>> in which order and to which clauses, is quite desirable.\n> \n> TBH I do not agree that this is a great idea. I think it's inevitably\n> exposing a lot of unstable internal planner details.\n\nWhich unstable planner details? IMHO it's not all that different from\ninfo about which indexes were picked for the query.\n\n> I like even less the aspect that this means a lot of information has\n> to be added to the finished plan in case it's needed for EXPLAIN.\nYes, that's true. I mentioned that in my initial post, and I suggested\nwe might collect it only when in explain mode.\n\n> Aside from the sheer cost of copying that data around, what happens\n> if for example somebody drops a statistic object between the time of\n> planning and the EXPLAIN? Are we going to start keeping locks on\n> those objects for the lifetime of the plans?\n> \n\nYes, that'd be an issue. I'm not sure what to do about it, short of\neither locking the (applied) statistics objects, or maybe just simply\ncopying the bits we need for explain (pretty much just name).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 27 Jul 2021 23:38:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "On Tue, Jul 27, 2021 at 4:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> TBH I do not agree that this is a great idea. I think it's inevitably\n> exposing a lot of unstable internal planner details.\n\nWell, that is a risk, but I think I like the alternative even less.\nImagine if we had a CREATE INDEX command but no way -- other than\nrunning queries and noticing how long they seem to take -- to tell\nwhether it was being used. That would suck, a lot, and this seems to\nme to be exactly the same. 
A user who creates a statistics object has\na legitimate interest in finding out whether that object is doing\nanything to a given query that they happen to care about.\n\n> I like even less\n> the aspect that this means a lot of information has to be added to the\n> finished plan in case it's needed for EXPLAIN. Aside from the sheer\n> cost of copying that data around, what happens if for example somebody\n> drops a statistic object between the time of planning and the EXPLAIN?\n> Are we going to start keeping locks on those objects for the lifetime\n> of the plans?\n\nI don't understand the premise of this question. We don't keep locks\non tables or indexes involved in a plan for the lifetime of a plan, or\non functions or any other kind of object either. We instead arrange to\ninvalidate the plan if those objects are modified or dropped. Why\nwould we not use the same approach here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 28 Jul 2021 09:23:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "> On Tue, Jul 27, 2021 at 10:20:34PM +0200, Tomas Vondra wrote:\n>\n> >> 1) The information is stashed in multiple lists added to a Plan. Maybe\n> >> there's a better place, and maybe we need to invent a better way to\n> >> track the info (a new node stashed in a single List).\n> >>\n> >> ...\n> >>\n> >> 3) It does not work for functional dependencies, because we effectively\n> >> \"merge\" all functional dependencies and apply the entries. Not sure how\n> >> to display this, but I think it should show the individual dependencies\n> >> actually applied.\n> >>\n> >> ...\n> >>\n> >> 5) It includes just statistics name + clauses, but maybe we should\n> >> include additional info (e.g estimate for that combination of clauses).\n> >\n> > Yes, a new node would be nice to have. 
The other questions above are\n> > somewhat related to what it should contain, and I guess it depends on\n> > the use case this patch targets. E.g. for the case \"help to figure out\n> > if an extended statistics was applied\" even \"merged\" functional\n> > dependencies would be fine I believe.\n>\n> What do you mean by \"merged\" functional dependencies? I guess we could\n> say \"these clauses were estimated using dependencies\" without listing\n> which exact dependencies were applied.\n\nYes, that's exactly what I was thinking. From the original email I've\ngot an impression that in case of functional dependencies you plan to\ndisplay the info only with the individual dependencies (when\nimplemented) or nothing at all. By \"merged\" I wanted to refer to the\nstatement about \"merge\" all functional dependencies and apply the\nentries.\n\n> > More advanced plan troubleshooting may benefit from estimates.\n>\n> I'm sorry, I don't know what you mean by this. Can you elaborate?\n\nYeah, sorry for not being clear. The idea was that the question about including\n\"additional info\" strongly depends on which use cases the patch tries to\naddress, and I follow up on that further. There is no more hidden detailed\nmeaning here :)\n\n> > I've got few more questions after reading the patch:\n> >\n> > * Is there any particular reason behind choosing only some scan nodes to\n> > display extended stats? E.g. BitmapHeapScan is missing.\n> >\n>\n> All nodes that may apply extended stats to estimate quals should include\n> this info. 
I guess BitmapHeapScan may do that for the filter, right?\n>\n\nYes, something like this (stats output added by me):\n\n\t Bitmap Heap Scan on public.tenk1\n\t Output: unique1\n\t Recheck Cond: (tenk1.unique1 < 1000)\n\t Filter: (tenk1.stringu1 = 'xxx'::name)\n\t Statistics: public.s Clauses: ((unique1 < 1000) AND (stringu1 = 'xxx'::name))\n\t -> Bitmap Index Scan on tenk1_unique1\n\t\t\t Index Cond: (tenk1.unique1 < 1000)\n\n\n", "msg_date": "Thu, 29 Jul 2021 16:09:54 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "Hi Tomas!\n\n>> What exactly use case do you have in mind?\n>Well, my goal was to help users investigating the plan/estimates,\n>because once you create multiple \"candidate\" statistics objects it may\n>get quite tricky to determine which of them were applied, in what order,\n>etc. It's not all that straightforward, given the various heuristics we\n>use to pick the \"best\" statistics, apply dependencies last, etc. And I\n>don't quite want to teach the users those rules, because I consider them\n>mostly implementation details that wee may want to tweak in the future.\n\nYou are right. This feature will be very useful for plan tuning with\nextended statistics. Therefore, I would like to help develop it. :-D\n\n\n>5) It includes just statistics name + clauses, but maybe we should\n>include additional info (e.g estimate for that combination of clauses).\n\nI thought the above sentence was about what level to aim for in the initial\nversion. Ideally, we would like to include additional information. However,\nit is clear that the more things we have, the longer it will take to\ndevelop.\nSo, I think it is better to commit the initial version at a minimal level\nto\nprovide it to users quickly.\n\nAs long as an Extended stats name is displayed in EXPLAIN result, it is\npossible to figure out which extended statistics are used. 
That information\nalone is valuable to the user.\n\n\n> 4) The info is collected always, but I guess we should do that only when\n>in explain mode. Not sure how expensive it is.\n\nIn the case of pg_plan_advsr that I created, I used ProcessUtility_hook to\ndetect EXPLAIN command. It searches all queries to find T_ExplainStmt, and\nset a flag. I guess we can't use this method in Postgres core, right?\n\nIf we have to collect the extra info for all query executions, we need to\nreduce overhead. I mean collecting the minimum necessary.\nTo do that, I think it would be better to display less Extended stats info\nin EXPLAIN results. For example, displaying only extended stats names is\nfine,\nI guess. (I haven't understood your patch yet, so If I say wrong, sorry)\n\n\n>6) The clauses in the grouping query are transformed to AND list, which\n>is wrong. This is easy to fix, I was lazy to do that in a PoC patch.\n\n6) is related 5).\nIf we agree to remove showing quals of extended stats in EXPLAIN result,\nWe can solve this problem by removing the code. Is it okay?\n\n\n>2) The deparsing is modeled (i.e. copied) from how we deal with index\n>quals, but it's having issues with nested OR clauses, because there are\n>nested RestrictInfo nodes and the deparsing does not expect that.\n>\n>3) It does not work for functional dependencies, because we effectively\n>\"merge\" all functional dependencies and apply the entries. Not sure how\n>to display this, but I think it should show the individual dependencies\n>actually applied.\n>\n>7) It does not show statistics for individual expressions. I suppose\n>examine_variable could add it to the rel somehow, and maybe we could do\n>that with index expressions too?\n\nI'm not sure about 2) 3) and 7) above, so I'd like to see some concrete\nexamples\nof queries. I would like to include it in the test pattern for regression\ntesting.\n\n\nTo be fixed:\n\n* StatisticExtInfo should have a _copy etc. node functionality now,\n right? 
I've hit \"unrecognized node type\" with a prepared statement\n because it's missing.\n\n* Is there any particular reason behind choosing only some scan nodes to\n display extended stats? E.g. BitmapHeapScan is missing.\n\n* (New) In the case of Group by, we should put Extended stats info under the\n Hash/Group Aggregate node (not under Scan node).\n\n* (New) We have to create a regression test including the above test\npatterns.\n\n\nAttached patch:\n\nIt is a rebased version on the head of the master because there were many\nHunks\nwhen I applied the previous patch on master.\n\nI'll create regression tests firstly.\n\nRegards,\nTatsuro Yamada", "msg_date": "Tue, 18 Jan 2022 20:31:30 +0900", "msg_from": "Tatsuro Yamada <yamatattsu@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" }, { "msg_contents": "As discussed in [1], we're taking this opportunity to return some\npatchsets that don't appear to be getting enough reviewer interest.\n\nThis is not a rejection, since we don't necessarily think there's\nanything unacceptable about the entry, but it differs from a standard\n\"Returned with Feedback\" in that there's probably not much actionable\nfeedback at all. Rather than code changes, what this patch needs is more\ncommunity interest. 
You might\n\n- ask people for help with your approach,\n- see if there are similar patches that your code could supplement,\n- get interested parties to agree to review your patch in a CF, or\n- possibly present the functionality in a way that's easier to review\n overall.\n\n(Doing these things is no guarantee that there will be interest, but\nit's hopefully better than endlessly rebasing a patchset that is not\nreceiving any feedback from the community.)\n\nOnce you think you've built up some community support and the patchset\nis ready for review, you (or any interested party) can resurrect the\npatch entry by visiting\n\n https://commitfest.postgresql.org/38/3050/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n[1] https://postgr.es/m/86140760-8ba5-6f3a-3e6e-5ca6c060bd24@timescale.com\n\n\n\n", "msg_date": "Mon, 1 Aug 2022 13:38:34 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Showing applied extended statistics in explain" } ]
[ { "msg_contents": "Hi,\n\nI am interested in \"Add monitoring of pg_stat_statements to pg_systat\". I\nhave read some code of pg_systat and enabled the pg_stat_statements\nfunction.\n\nI noticed that pg_stat_statements has many columns, so they can't all be shown in a\nsingle view. Should I divide these columns into different views or choose\nsome of them to show?\n\nBest regards,\n\nLu\n", "msg_date": "Sat, 27 Mar 2021 15:36:25 +0800", "msg_from": "Trafalgar Ricardo Lu <trafalgarricardolu@gmail.com>", "msg_from_op": true, "msg_subject": "[GSoC] Question about Add functionality to pg_top and supporting\n tools" }, { "msg_contents": "Hi Lu,\n\nOn Sat, Mar 27, 2021 at 03:36:25PM +0800, Trafalgar Ricardo Lu wrote:\n> Hi,\n> \n> I am interested in \"Add monitoring of pg_stat_statements to pg_systat\". I\n> have read some code of pg_systat and enabled the pg_stat_statements\n> function.\n\nThanks for your interest!\n\n> I noticed that pg_stat_statements has many columns, so they can't all be shown in a\n> single view. Should I divide these columns into different views or choose\n> some of them to show?\n\nDividing the columns up into different views is ok. Some of the views\nare like that now. For example, the pg_stat_database data is split up\nbetween dbblk.c, dbtup.c, and dbxact.c. \n\nRegards,\nMark\n\n\n", "msg_date": "Sat, 27 Mar 2021 20:23:10 -0700", "msg_from": "Mark Wong <markwkm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [GSoC] Question about Add functionality to pg_top and supporting\n tools" } ]
[ { "msg_contents": "Hi,\n\nThe database Neo4j has a language called \"Cypher\" where one of the key selling points is they \"don’t need join tables\".\n\nHere is an example from https://neo4j.com/developer/cypher/guide-sql-to-cypher/\n\nSQL:\n\nSELECT DISTINCT c.company_name\nFROM customers AS c\nJOIN orders AS o ON c.customer_id = o.customer_id\nJOIN order_details AS od ON o.order_id = od.order_id\nJOIN products AS p ON od.product_id = p.product_id\nWHERE p.product_name = 'Chocolade';\n\nNeo4j's Cypher:\n\nMATCH (p:product {product_name:\"Chocolade\"})<-[:PRODUCT]-(:order)<-[:PURCHASED]-(c:customer)\nRETURN distinct c.company_name;\n\nImagine if we could simply write the SQL query like this:\n\nSELECT DISTINCT od.order_id.customer_id.company_name\nFROM order_details AS od\nWHERE od.product_id.product_name = 'Chocolade';\n\nI took the inspiration for this syntax from SQL/JSON path expressions.\n\nSince there is only a single foreign key on the order_details.order_id column,\nwe would know how to resolve it, i.e. 
to the orders table,\nand from there we would follow the customer_id column to the customers table,\nwhere we would finally get the company_name value.\n\nIn the where clause, we would follow the order_details's product_id column\nto the products table, to filter on product_name.\n\nIf there would be multiple foreign keys on a column we try to follow,\nthe query planner would throw an error forcing the user to use explicit joins instead.\n\nI think this syntactic sugar could save a lot of unnecessary typing,\nand as long as the column names are chosen wisely,\nthe path expression will be just as readable as the manual JOINs would be.\n\nThoughts?\n\n/Joel\n", "msg_date": "Sat, 27 Mar 2021 21:27:51 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sat, Mar 27, 2021 at 8:28 PM Joel Jacobson <joel@compiler.org> wrote:\n\n> Hi,\n>\n> The database Neo4j has a language called \"Cypher\" where one of the key\n> selling points is they \"don’t need join tables\".\n>\n> Here is an example from\n> https://neo4j.com/developer/cypher/guide-sql-to-cypher/\n>\n> SQL:\n>\n> SELECT DISTINCT c.company_name\n> FROM customers AS c\n> JOIN orders AS o ON c.customer_id = o.customer_id\n> JOIN order_details AS od ON o.order_id = od.order_id\n> JOIN products AS p ON od.product_id = p.product_id\n> WHERE p.product_name = 'Chocolade';\n>\n> Neo4j's Cypher:\n>\n> MATCH (p:product\n> {product_name:\"Chocolade\"})<-[:PRODUCT]-(:order)<-[:PURCHASED]-(c:customer)\n> RETURN distinct c.company_name;\n>\n> Imagine if we could simply write the SQL query like this:\n>\n> SELECT DISTINCT od.order_id.customer_id.company_name\n> FROM order_details AS od\n> WHERE od.product_id.product_name = 'Chocolade';\n>\n> I took the inspiration for this syntax from SQL/JSON path expressions.\n>\n> Since there is only a single foreign key on the order_details.order_id\n> column,\n> we would know how to resolve it, i.e.
to the orders table,\n> and from there we would follow the customer_id column to the customers\n> table,\n> where we would finally get the company_name value.\n>\n> In the where clause, we would follow the order_details's product_id column\n> to the products table, to filter on product_name.\n>\n> If there would be multiple foreign keys on a column we try to follow,\n> the query planner would throw an error forcing the user to use explicit\n> joins instead.\n>\n> I think this syntactic sugar could save a lot of unnecessary typing,\n> and as long as the column names are chosen wisely,\n> the path expression will be just as readable as the manual JOINs would be.\n>\n> Thoughts?\n>\n> /Joel\n>\n\nJust my 2c. The idea is nice but:\n\n1. It is changing the FROM clause and the (size of the) intermediate result\nset. While in your example query there is no difference, you'd get\ndifferent results if it was something like\n\nSELECT p.product_name, COUNT(*)\nFROM ... (same joins)\nGROUP BY p.product_name\n\n2. If you want many columns in the SELECT list, possibly from many tables,\nyou'll need to repeat the expressions, i.e. how do you propose to write\nthis without repeating the link expressions?\n\nSELECT p.product_name, p.price, p.category, c.company_name, c.address\n...\n\n3.
SQL already provides methods to remove the join \"noise\", with JOIN USING\n(columns) when joining columns have matching names and with NATURAL JOIN\n(with extreme care).\n\nFinally, extending the specs in this novel way might put Postgres in a\ndifferent path from the SQL specs in the future, especially if they have\nplans to add functionality for graph queries.\n\nBest regards\nPantelis Theodosiou\n
", "msg_date": "Sat, 27 Mar 2021 21:01:06 +0000", "msg_from": "Pantelis Theodosiou <ypercube@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 2021-Mar-27, Joel Jacobson wrote:\n\n> If there would be multiple foreign keys on a column we try to follow,\n> the query planner would throw an error forcing the user to use explicit joins instead.\n\nThis seems pretty dangerous -- you just have to create one more FK, and\nsuddenly a query that worked perfectly fine, now starts throwing errors\nbecause it's now ambiguous. Feels a bit like JOIN NATURAL, which many\npeople discourage because of this problem.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nIf you don't know where you're going, it's very likely you'll end up somewhere else.\n\n\n", "msg_date": "Sat, 27 Mar 2021 18:11:06 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 3/27/21 9:27 PM, Joel Jacobson wrote:\n> Hi,\n> \n> The database Neo4j has a language called \"Cypher\" where one of the key selling points is they \"don’t need join tables\".\n> \n> Here is an example from https://neo4j.com/developer/cypher/guide-sql-to-cypher/\n> \n> SQL:\n> \n> SELECT DISTINCT c.company_name\n> FROM customers AS c\n> JOIN orders AS o ON c.customer_id = o.customer_id\n> JOIN order_details AS od ON o.order_id = od.order_id\n> JOIN products AS p ON od.product_id = p.product_id\n> WHERE p.product_name = 'Chocolade';\n> \n> Neo4j's Cypher:\n> \n> 
MATCH (p:product {product_name:\"Chocolade\"})<-[:PRODUCT]-(:order)<-[:PURCHASED]-(c:customer)\n> RETURN distinct c.company_name;\n> \n> Imagine if we could simply write the SQL query like this:\n> \n> SELECT DISTINCT od.order_id.customer_id.company_name\n> FROM order_details AS od\n> WHERE od.product_id.product_name = 'Chocolade';\n> \n> I took the inspiration for this syntax from SQL/JSON path expressions.\n\nThis is a terrible idea, but let me explain why.\n\nFirst of all, neo4j claims they don't have joins because they actually\ndon't have joins. The nodes point directly to the other nodes. Your\nproposal is syntactic sugar over a real join. The equivalent to neo4j\nwould be somehow storing the foreign ctid in the column or something.\n\nSecondly, and more importantly IMO, graph queries are coming to SQL.\nThey are mostly based on Cypher but not entirely because they amalgamate\nconcepts from other graph query languages like Oracle's PGQ. The\n\"common\" syntax is called GQL (https://www.gqlstandards.org/) and it's\nin the process of becoming Part 16 of the SQL standard. 
The timeline on\nthat website says August 2022 (it started in September 2019).\n\nIf that timeline holds and ambitious people work on it (I already know\none who is; not me), I would expect this to be in PostgreSQL 16 or 17.\nThe earliest your proposal could get in is 15, so it's not that much of\na wait.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 28 Mar 2021 12:25:46 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, Mar 28, 2021, at 12:25, Vik Fearing wrote:\n> On 3/27/21 9:27 PM, Joel Jacobson wrote:\n> > Imagine if we could simply write the SQL query like this:\n> > \n> > SELECT DISTINCT od.order_id.customer_id.company_name\n> > FROM order_details AS od\n> > WHERE od.product_id.product_name = 'Chocolade';\n> > \n> > I took the inspiration for this syntax from SQL/JSON path expressions.\n> \n> This is a terrible idea, but let me explain why.\n> \n> First of all, neo4j claims they don't have joins because they actually\n> don't have joins. The nodes point directly to the other nodes. Your\n> proposal is syntactic sugar over a real join. The equivalent to neo4j\n> would be somehow storing the foreign ctid in the column or something.\n> \n> Secondly, and more importantly IMO, graph queries are coming to SQL.\n> They are mostly based on Cypher but not entirely because they amalgamate\n> concepts from other graph query languages like Oracle's PGQ. The\n> \"common\" syntax is called GQL (https://www.gqlstandards.org/) and it's\n> in the process of becoming Part 16 of the SQL standard. 
The timeline on\n> that website says August 2022 (it started in September 2019).\n> \n> If that timeline holds and ambitious people work on it (I already know\n> one who is; not me), I would expect this to be in PostgreSQL 16 or 17.\n> The earliest your proposal could get in is 15, so it's not that much of\n> a wait.\n\nI think you misunderstood my idea entirely.\n\nIt has absolutely nothing to do with graph query languages,\nexcept the one similarity I mentioned, not having joins.\n\nWhat I propose is a way to do implicit joins by following foreign keys,\nthus improving the SQL query language, making it more concise for\nthe simple case when what you want to do is an INNER JOIN on a\nsingle column on which there is a single FOREIGN KEY.\n\n/Joel\n
", "msg_date": "Sun, 28 Mar 2021 13:26:54 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, 28 Mar 2021 at 13:27, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Mar 28, 2021, at 12:25, Vik Fearing wrote:\n>\n> On 3/27/21 9:27 PM, Joel Jacobson wrote:\n> > Imagine if we could simply write the SQL query like this:\n> >\n> > SELECT DISTINCT od.order_id.customer_id.company_name\n> > FROM order_details AS od\n> > WHERE od.product_id.product_name = 'Chocolade';\n> >\n> > I took the inspiration for this syntax from SQL/JSON path expressions.\n>\n> This is a terrible idea, but let me explain why.\n>\n> First of all, neo4j claims they don't have joins because they actually\n> don't have joins.  The nodes point directly to the other nodes.  Your\n> proposal is syntactic sugar over a real join.  The equivalent to neo4j\n> would be somehow storing the foreign ctid in the column or something.\n>\n> Secondly, and more importantly IMO, graph queries are coming to SQL.\n> They are mostly based on Cypher but not entirely because they amalgamate\n> concepts from other graph query languages like Oracle's PGQ. 
The\n> \"common\" syntax is called GQL (https://www.gqlstandards.org/) and it's\n> in the process of becoming Part 16 of the SQL standard. The timeline on\n> that website says August 2022 (it started in September 2019).\n>\n> If that timeline holds and ambitious people work on it (I already know\n> one who is; not me), I would expect this to be in PostgreSQL 16 or 17.\n> The earliest your proposal could get in is 15, so it's not that much of\n> a wait.\n>\n>\n> I think you misunderstood my idea entirely.\n>\n> It has absolutely nothing to do with graph query languages,\n> except the one similarity I mentioned, not having joins.\n>\n> What I propose is a way to do implicit joins by following foreign keys,\n> thus improving the SQL query language, making it more concise for\n> the simple case when what you want to do is an INNER JOIN on a\n> single column on which there is a single FOREIGN KEY.\n>\n\nThere were some similar tools already. Personally I like the current\nstate, where tables should be explicitly defined, and join should be\nexplicitly defined. The joining of tables is not cheap - and I like the\nvisibility of this. On the other hand, this is very frustrable for a lot of\npeople, and I can understand. I don't want to see this feature inside\nPostgres, because it can reduce the possibility to detect badly written\nquery. But it can be a great feature of some outer tool. I worked for\nGoodData and this tool knows the model, and it generates necessary joins\nimplicitly, and people like it (this tool uses Postgres too).\n\nhttps://www.gooddata.com/\n\nRegards\n\nPavel\n\n\n> /Joel\n>\n\nne 28. 3. 
", "msg_date": "Sun, 28 Mar 2021 13:51:19 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sat, Mar 27, 2021, at 22:11, Alvaro Herrera wrote:\n> On 2021-Mar-27, Joel Jacobson wrote:\n> \n> > If there would be multiple foreign keys on a column we try to follow,\n> > the query planner would throw an error forcing the user to use explicit joins instead.\n> \n> This seems pretty dangerous -- you just have to create one more FK, and\n> suddenly a query that worked perfectly fine, now starts throwing errors\n> because it's now ambiguous. 
\n\nCreating one more FK referencing some other column\nwould break queries in the same way USING breaks\nif a column is added which causes ambiguity.\n\nIn my experience, it's extremely rare to have multiple different FKs on the same set of columns.\nMaybe I'm missing something here, can we think of a realistic use-case?\n\nIf such a FK is created, it would break in the same way as USING breaks\nif a column is added which causes ambiguity, except this is much less likely to happen than the equivalent use case.\n\nI think this problem is hypothetical compared to the actual problem with USING,\nsince adding a column with the same name as some existing column actually happens sometimes.\n \n> Feels a bit like JOIN NATURAL, which many\n> people discourage because of this problem.\n\nThe problem with NATURAL is due to matching based on column names.\nMy proposal doesn't match on column names at all.\nIt merely follows the foreign key for a column.\nWith NATURAL you can also suddenly get a different join,\nwhereas my proposal at worst will generate an error due to multiple FKs on the same column;\nthere can never be any ambiguity.\n\n/Joel\n
", "msg_date": "Sun, 28 Mar 2021 14:15:42 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, Mar 28, 2021, at 13:51, Pavel Stehule wrote:\n> There were some similar tools already.  Personally I like the current state, where tables should be explicitly defined, and join should be explicitly defined. The joining of tables is not cheap - and I like the visibility of this. On the other hand, this is very frustrable for a lot of people, and I can understand. I don't want to see this feature inside Postgres, because it can reduce the possibility to detect badly written query. But it can be a great feature of some outer tool. 
I worked for GoodData and this tool knows the model, and it generates necessary joins implicitly, and people like it (this tool uses Postgres too). \n> \n> https://www.gooddata.com/\n\nVery good points.\n\nAs a counter-argument, I could argue that you don't need to use this feature.\nBut that would be a poor argument, since you might have to work with code\nwritten by other developers.\n\nI'm also fearful of newbies misusing features, not understanding what they are doing, producing inefficient code.\nHowever, this is a general problem; complex queries are hard to reason about,\nand I'm not sure making some INNER JOINs implicit would worsen the situation.\nYou could also make the counter-argument that the remaining explicit JOINs become more visible,\nand will stand out, exposing what is really complex in the query.\n\nAlso, this proposed syntax would surely appeal to the NoSQL crowd,\nand should reduce their cravings for MongoDB.\n\nSo ask yourself the following question: Ten years from now, would you rather be forced to\nwork with code using MongoDB or a more concise SQL?\n\nLastly, let me reiterate I think you made a very good point;\nyour argument is the heaviest weight on the negative side of my own scale.\n\n/Joel\n
", "msg_date": "Sun, 28 Mar 2021 14:38:44 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "\nOn 3/27/21 5:11 PM, Alvaro Herrera wrote:\n> On 2021-Mar-27, Joel Jacobson wrote:\n>\n>> If there would be multiple foreign keys on a column we try to follow,\n>> the query planner would throw an error forcing the user to use explicit joins instead.\n> This seems pretty dangerous -- you just have to create one more FK, and\n> suddenly a query that worked perfectly fine, now starts throwing errors\n> because it's now ambiguous. Feels a bit like JOIN NATURAL, which many\n> people discourage because of this problem.\n>\n\n\nMaybe. I don't recall ever having seen a column with more than one FK.\nIs that a common thing? In itself it seems like a bad idea.\n\nNot saying I think this suggestion is a good idea, though. 
We've seen\nmany frameworks that hide joins, and the results are ... less than\nuniversally good. If your application programmers don't like using\njoins, then either a) you should have the DBA create some views for them\nthat contain the joins, or b) you have the wrong application programmers -:)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 28 Mar 2021 09:23:13 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, 28 Mar 2021 at 14:39, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Mar 28, 2021, at 13:51, Pavel Stehule wrote:\n>\n> There were some similar tools already.  Personally I like the current\n> state, where tables should be explicitly defined, and join should be\n> explicitly defined. The joining of tables is not cheap - and I like the\n> visibility of this. On the other hand, this is very frustrable for a lot of\n> people, and I can understand. I don't want to see this feature inside\n> Postgres, because it can reduce the possibility to detect badly written\n> query. But it can be a great feature of some outer tool. 
I worked for\n> GoodData and this tool knows the model, and it generates necessary joins\n> implicitly, and people like it (this tool uses Postgres too).\n>\n> https://www.gooddata.com/\n>\n>\n> Very good points.\n>\n> As a counter-argument, I could argue that you don't need to use this\n> feature.\n> But that would be a poor argument, since you might have to work with code\n> written by other developers.\n>\n> I'm also fearful of newbies misusing features, not understanding what they\n> are doing, producing inefficient code.\n> However, this is a general problem, complex queries are hard to reason\n> about,\n> and I'm not sure making some INNER JOINs implicit would worsen the\n> situation,\n> you could also make the counter-argument that the remaining explicit JOINs\n> become more visible,\n> and will stand-out, exposing what is really complex in the query.\n>\n\nIt is not a problem only for newbies - yesterday a very experienced user\n(I know him personally) reported an issue related to misunderstanding some\nbehaviour and just a typo. I like some mandatory redundancy in syntax,\nbecause it allows detecting some typo errors. SQL is not consistent in this\n- the query itself is relatively safe, but subqueries are not\nsafe, because you can use an outer identifier without qualification, and\nwhat is worse, the identifiers are prioritized - there is no ambiguous\ncolumn check. 
So SQL has enough traps already, and I am afraid of introducing\nsome new implicit features.\n\nTheoretically you can introduce your own procedural language:\n\nCREATE OR REPLACE FUNCTION foo(a int)\nRETURNS TABLE (x int, y int) AS $$\nSELECT t1.x, t2.y WHERE t3.a = a;\n$$ LANGUAGE mylanguage;\n\nIt is well wrapped, and well isolated.\n\n\n> Also, this proposed syntax would surely appeal to the NoSQL-crowd,\n> and should reduce their cravings for MongoDB.\n>\n> So ask yourself the following question: Ten years from now, would you\n> rather be forced to\n> work with code using MongoDB or a more concise SQL?\n>\n\nI am a man who likes SQL - for me, it is a human readable language with a\ngood level of verbosity and abstraction, and has been ever since I learned SQL. But\nI see that SQL is not a fully \"safe\" language. It allows bad joins, or\ndoesn't detect all human errors. That can be a good reason for a new layer\nover SQL - some more abstract language. And it can work - I have really\ngood experience with the GoodData query language. This is a transpiler from\na domain language to SQL.\n\nI think every tool, every layer should have a similar level of\nabstraction to be well usable.\n\n\n> Lastly, let me reiterate I think you made a very good point,\n> your argument is the heaviest weigh on the negative side of my own scale.\n>\n\n:)\n\n\n\n> /Joel\n>\n
", "msg_date": "Sun, 28 Mar 2021 15:36:28 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 3/27/21 5:11 PM, Alvaro Herrera wrote:\n>> This seems pretty dangerous -- you just have to create one more FK, and\n>> suddenly a query that worked perfectly fine, now starts throwing errors\n>> because it's now ambiguous. Feels a bit like JOIN NATURAL, which many\n>> people discourage because of this problem.\n\n> Maybe. I don't recall ever having seen a column with more than one FK.\n> Is that a common thing? In itself it seems like a bad idea.\n\nYeah, that aspect seems like a complete show-stopper.  We have a way\nto enforce that you can't *drop* a constraint that some stored view\ndepends on for semantic validity.  We don't have a way to say that\nyou can't *add* a constraint-with-certain-properties. 
And I don't\nthink it'd be very practical to do (consider race conditions, if\nnothing more).\n\nHowever, that stumbling block is just dependent on the assumption\nthat the foreign key constraint being used is implicit. If the\nsyntax names it explicitly then you just have a normal constraint\ndependency and all's well.\n\nYou might be able to have a shorthand notation in which the constraint\nisn't named and the system will accept it as long as there's just one\ncandidate (but then, when dumping a stored view, the constraint name\nwould always be shown explicitly). However I'm not sure that the\n\"shorthand\" would be any shorter. I'm imagining a syntax in which\nyou give the constraint name instead of the column name. Thought\nexperiment: how could the original syntax proposal make any use of\na multi-column foreign key?\n\n> Not saying I think this suggestion is a good idea, though. We've seen\n> many frameworks that hide joins, and the results are ... less than\n> universally good.\n\nYeah, I'm pretty much not sold on this idea either. I think it would\nlead to the same problems we see with ORMs, namely that people write\nqueries that are impossible to execute efficiently and then blame\nthe database for their poor choice of schema.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 28 Mar 2021 10:04:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, Mar 28, 2021, at 16:04, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net <mailto:andrew%40dunslane.net>> writes:\n> > Maybe. I don't recall ever having seen a column with more than one FK.\n> > Is that a common thing? In itself it seems like a bad idea.\n> \n> Yeah, that aspect seems like a complete show-stopper. We have a way\n> to enforce that you can't *drop* a constraint that some stored view\n> depends on for semantic validity. 
We don't have a way to say that\n> you can't *add* a constraint-with-certain-properties.  And I don't\n> think it'd be very practical to do (consider race conditions, if\n> nothing more).\n\nThanks for the valuable insights, I didn't think about these things.\n\nWhat if the path expressions are just syntactic sugar for an INNER JOIN on the referencing -> referenced column?\nIf a VIEW is created using this syntax, it would be stored as INNER JOINs, similar to how SELECT * is expanded.\n\n/Joel\n", "msg_date": "Sun, 28 Mar 2021 16:39:39 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "\nOn 3/28/21 10:04 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 3/27/21 5:11 PM, Alvaro Herrera wrote:\n>>> This seems pretty dangerous -- you just have to create one more FK, and\n>>> suddenly a query that worked perfectly fine, now starts throwing errors\n>>> because it's now ambiguous. 
Feels a bit like JOIN NATURAL, which many\n>>> people discourage because of this problem.\n>> Maybe. I don't recall ever having seen a column with more than one FK.\n>> Is that a common thing? In itself it seems like a bad idea.\n> Yeah, that aspect seems like a complete show-stopper. We have a way\n> to enforce that you can't *drop* a constraint that some stored view\n> depends on for semantic validity. We don't have a way to say that\n> you can't *add* a constraint-with-certain-properties. And I don't\n> think it'd be very practical to do (consider race conditions, if\n> nothing more).\n>\n> However, that stumbling block is just dependent on the assumption\n> that the foreign key constraint being used is implicit. If the\n> syntax names it explicitly then you just have a normal constraint\n> dependency and all's well.\n>\n> You might be able to have a shorthand notation in which the constraint\n> isn't named and the system will accept it as long as there's just one\n> candidate (but then, when dumping a stored view, the constraint name\n> would always be shown explicitly). However I'm not sure that the\n> \"shorthand\" would be any shorter. I'm imagining a syntax in which\n> you give the constraint name instead of the column name. Thought\n> experiment: how could the original syntax proposal make any use of\n> a multi-column foreign key?\n\n\nI guess we could have a special operator, which allows the LHS to be\neither a column (in which case it must have only one single-valued FK\nconstraint) or a constraint name in which case it would match the\ncorresponding columns on both sides.\n\n\nIt gets kinda tricky though, as there are FKs going both ways:\n\n\n    customers <- orders <- order_details -> products\n\n\nand in fact this could make composing the query LESS clear. 
The natural\nplace to start this query (show me the name of every customer who\nordered chocolate) is with orders ISTM, but the example given starts\nwith order_details which seems somewhat unnatural.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 28 Mar 2021 10:40:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 3/28/21 1:26 PM, Joel Jacobson wrote:\n> On Sun, Mar 28, 2021, at 12:25, Vik Fearing wrote:\n>> On 3/27/21 9:27 PM, Joel Jacobson wrote:\n>>> Imagine if we could simply write the SQL query like this:\n>>>\n>>> SELECT DISTINCT od.order_id.customer_id.company_name\n>>> FROM order_details AS od\n>>> WHERE od.product_id.product_name = 'Chocolade';\n>>>\n>>> I took the inspiration for this syntax from SQL/JSON path expressions.\n>>\n>> This is a terrible idea, but let me explain why.\n>>\n>> First of all, neo4j claims they don't have joins because they actually\n>> don't have joins. The nodes point directly to the other nodes. Your\n>> proposal is syntactic sugar over a real join. The equivalent to neo4j\n>> would be somehow storing the foreign ctid in the column or something.\n>>\n>> Secondly, and more importantly IMO, graph queries are coming to SQL.\n>> They are mostly based on Cypher but not entirely because they amalgamate\n>> concepts from other graph query languages like Oracle's PGQ. The\n>> \"common\" syntax is called GQL (https://www.gqlstandards.org/) and it's\n>> in the process of becoming Part 16 of the SQL standard. 
The timeline on\n>> that website says August 2022 (it started in September 2019).\n>>\n>> If that timeline holds and ambitious people work on it (I already know\n>> one who is; not me), I would expect this to be in PostgreSQL 16 or 17.\n>> The earliest your proposal could get in is 15, so it's not that much of\n>> a wait.\n> \n> I think you misunderstood my idea entirely.\n> \n> It has absolutely nothing to do with graph query languages,\n> except the one similarity I mentioned, not having joins.\n\nIn that case, I oppose this suggestion.\n\n> What I propose is a way to do implicit joins by following foreign keys,\n> thus improving the SQL query language, making it more concise for\n> the simple case when what you want to do is an INNER JOIN on a\n> single column on which there is a single FOREIGN KEY.\n\nSQL is not an implicit language, though.\n\nI wouldn't mind something like\n\n FROM a JOIN b WITH a_b_fk\n\nor something. That would be really nice when the keys are multicolumn.\nBut I don't want an implicit join like you're suggesting.\n-- \nVik Fearing\n\n\n", "msg_date": "Sun, 28 Mar 2021 22:43:30 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sun, Mar 28, 2021, at 16:04, Tom Lane wrote:\n> I'm imagining a syntax in which\n> you give the constraint name instead of the column name. 
Thought\n> experiment: how could the original syntax proposal make any use of\n> a multi-column foreign key?\n\nThanks for coming up with this genius idea.\n\nAt first I didn't see the beauty of it; I wrongly thought the constraint name needed to be\nunique per schema, but I realize we could just use the foreign table's name\nas the constraint name, which will allow a nice syntax:\n\nSELECT DISTINCT order_details.orders.customers.company_name\nFROM order_details\nWHERE order_details.products.product_name = 'Chocolade';\n\nGiven this data model:\n\nCREATE TABLE customers (\ncustomer_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\ncompany_name text,\nPRIMARY KEY (customer_id)\n);\n\nCREATE TABLE orders (\norder_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\ncustomer_id bigint NOT NULL,\nPRIMARY KEY (order_id),\nCONSTRAINT customers FOREIGN KEY (customer_id) REFERENCES customers\n);\n\nCREATE TABLE products (\nproduct_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\nproduct_name text NOT NULL,\nPRIMARY KEY (product_id)\n);\n\nCREATE TABLE order_details (\norder_id bigint NOT NULL,\nproduct_id bigint NOT NULL,\nPRIMARY KEY (order_id, product_id),\nCONSTRAINT orders FOREIGN KEY (order_id) REFERENCES orders,\nCONSTRAINT products FOREIGN KEY (product_id) REFERENCES products\n);\n\n> > Not saying I think this suggestion is a good idea, though. We've seen\n> > many frameworks that hide joins, and the results are ... less than\n> > universally good.\n> \n> Yeah, I'm pretty much not sold on this idea either. 
I think it would\n> lead to the same problems we see with ORMs, namely that people write\n> queries that are impossible to execute efficiently and then blame\n> the database for their poor choice of schema.\n\nI think this concern is valid for the original syntax,\nbut I actually think the idea of using foreign key constraint names\neffectively solves an entire class of query-writing bugs.\n\nUsers writing queries using this syntax are guaranteed to be aware\nof the existence of the foreign keys, otherwise they couldn't write\nthe query this way, since they must use the foreign key\nconstraint names in the path expression.\n\nThis ensures it's not possible to produce a nonsensical JOIN\non the wrong columns, a problem which traditional JOINs\nhave no means to protect against.\n\nEven with foreign keys, indexes could of course be missing,\ncausing an inefficient query anyway, but at least the class\nof potential problems is reduced by one.\n\nI think what's neat is how this syntax works excellently in combination\nwith traditional JOINs, allowing the one which feels most natural for\neach part of the query to be used.\n\nLet's also remember foreign keys first appeared in SQL-89,\nso they couldn't have been taken into account when SQL-86\nwas designed. Maybe they would have come up with the idea\nof making more use of foreign key constraints,\nhad they been invented from the very beginning.\n\nHowever, it's not too late to fix this; it seems doable without\nbreaking any backwards compatibility. I think there is a risk\nour personal preferences are biased due to being experienced\nSQL users. 
I think it's likely newcomers to SQL would really\nfancy this proposed syntax, and cause them to prefer PostgreSQL\nover some other NoSQL product.\n\nIf we can provide such newcomers with a built-in solution,\nI think that's better than telling them they should\nuse some ORM/tool/macro to simplify their query writing.\n\n/Joel\n", "msg_date": "Mon, 29 Mar 2021 11:59:48 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021 at 12:01, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Sun, Mar 28, 2021, at 16:04, Tom Lane wrote:\n>\n> I'm imagining a syntax in which\n> you give the constraint name instead of the column name. Thought\n> experiment: how could the original syntax proposal make any use of\n> a multi-column foreign key?\n>\n>\n> Thanks for coming up with this genius idea.\n>\n> At first I didn't see the beauty of it; I wrongly thought the constraint\n> name needed to be\n> unique per schema, but I realize we could just use the foreign table's name\n> as the constraint name, which will allow a nice syntax:\n>\n> SELECT DISTINCT order_details.orders.customers.company_name\n> FROM order_details\n> WHERE order_details.products.product_name = 'Chocolade';\n>\n\nThis syntax is similar to Oracle's object references (this is an example\nfrom a thread on the Czech Postgres list last week)\n\nSelect e.last_name employee,\n e.department_ref.department_name department,\n e.department_ref.manager_ref.last_name dept_manager\n From employees_obj e\nwhere e.initials() like 'K_';\n\nI see a few limitations: a) there is no support for outer joins, b) there is\nno support for aliasing - and it probably doesn't look too nice when you want\nto return more (but not all) columns\n\nRegards\n\nPavel\n\n\n\n\n> Given this data model:\n>\n> CREATE TABLE customers (\n> customer_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> company_name text,\n> PRIMARY KEY 
(customer_id)\n> );\n>\n> CREATE TABLE orders (\n> order_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> customer_id bigint NOT NULL,\n> PRIMARY KEY (order_id),\n> CONSTRAINT customers FOREIGN KEY (customer_id) REFERENCES customers\n> );\n>\n> CREATE TABLE products (\n> product_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> product_name text NOT NULL,\n> PRIMARY KEY (product_id)\n> );\n>\n> CREATE TABLE order_details (\n> order_id bigint NOT NULL,\n> product_id bigint NOT NULL,\n> PRIMARY KEY (order_id, product_id),\n> CONSTRAINT orders FOREIGN KEY (order_id) REFERENCES orders,\n> CONSTRAINT products FOREIGN KEY (product_id) REFERENCES products\n> );\n>\n> > Not saying I think this suggestion is a good idea, though. We've seen\n> > many frameworks that hide joins, and the results are ... less than\n> > universally good.\n>\n> Yeah, I'm pretty much not sold on this idea either. I think it would\n> lead to the same problems we see with ORMs, namely that people write\n> queries that are impossible to execute efficiently and then blame\n> the database for their poor choice of schema.\n>\n>\n> I think this concern is valid for the original syntax,\n> but I actually think the idea on using foreign key constraint names\n> effectively solves an entire class of query writing bugs.\n>\n> Users writing queries using this syntax are guaranteed to be aware\n> of the existence of the foreign keys, otherwise they couldn't write\n> the query this way, since they must use the foreign key\n> constraint names in the path expression.\n>\n> This ensures it's not possible to produce a nonsensical JOIN\n> on the wrong columns, a problem for which traditional JOINs\n> have no means to protect against.\n>\n> Even with foreign keys, indexes could of course be missing,\n> causing an inefficient query anyway, but at least the classes\n> of potential problems is reduced by one.\n>\n> I think what's neat is how this syntax works excellent in combination\n> with traditional JOINs, 
allowing the one which feels most natural for\n> each part of the query to be used.\n>\n> Let's also remember foreign keys did first appear in SQL-89,\n> so they couldn't have been taken into account when SQL-86\n> was designed. Maybe they would have came up with the idea\n> of making more use of foreign key constraints,\n> if they would have been invented from the very beginning.\n>\n> However, it's not too late to fix this, it seems doable without\n> breaking any backwards compatibility. I think there is a risk\n> our personal preferences are biased due to being experienced\n> SQL users. I think it's likely newcomers to SQL would really\n> fancy this proposed syntax, and cause them to prefer PostgreSQL\n> over some other NoSQL product.\n>\n> If we can provide such newcomers with a built-in solution,\n> I think that better than telling them they should\n> use some ORM/tool/macro to simplify their query writing.\n>\n> /Joel\n>\n", "msg_date": "Mon, 29 Mar 2021 12:48:05 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 3/29/21 11:59 AM, Joel Jacobson wrote:\n> On Sun, Mar 28, 2021, at 16:04, Tom Lane wrote:\n>> I'm imagining a syntax in which\n>> you give the constraint name instead of the column name. Thought\n>> experiment: how could the original syntax proposal make any use of\n>> a multi-column foreign key?\n> \n> Thanks for coming up with this genius idea.\n> \n> At first I didn't see the beauty of it; I wrongly thought the constraint name needed to be\n> unique per schema, but I realize we could just use the foreign table's name\n> as the constraint name, which will allow a nice syntax:\n> \n> SELECT DISTINCT order_details.orders.customers.company_name\n> FROM order_details\n> WHERE order_details.products.product_name = 'Chocolade';\n> \n> Given this data model:\n> \n> CREATE TABLE customers (\n> customer_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> company_name text,\n> PRIMARY KEY (customer_id)\n> );\n> \n> CREATE TABLE orders (\n> order_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> customer_id bigint NOT NULL,\n> PRIMARY KEY (order_id),\n> CONSTRAINT customers FOREIGN KEY (customer_id) REFERENCES customers\n> );\n> \n> CREATE TABLE products (\n> product_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\n> product_name text NOT NULL,\n> PRIMARY KEY (product_id)\n> );\n> \n> CREATE TABLE order_details (\n> order_id bigint NOT NULL,\n> product_id bigint NOT NULL,\n> PRIMARY KEY (order_id, product_id),\n> CONSTRAINT orders FOREIGN 
KEY (order_id) REFERENCES orders,\n> CONSTRAINT products FOREIGN KEY (product_id) REFERENCES products\n> );\n\n\nIf you write your schema like this, then it becomes standards compliant:\n\nCREATE TYPE customers AS (\n company_name text\n);\nCREATE TABLE customers OF customers (\n REF IS customer_id SYSTEM GENERATED\n);\n\nCREATE TYPE orders AS (\n customer REF(customers) NOT NULL\n);\nCREATE TABLE orders OF orders (\n REF IS order_id SYSTEM GENERATED\n);\n\nCREATE TYPE products AS (\n product_name text\n);\nCREATE TABLE products OF products (\n REF IS product_id SYSTEM GENERATED\n);\n\nCREATE TABLE order_details (\n \"order\" REF(orders),\n product REF(products),\n quantity integer,\n PRIMARY KEY (\"order\", product)\n);\n\n\nAnd the query would be:\n\nSELECT DISTINCT order_details.\"order\"->customer->company_name\nFROM order_details\nWHERE order_details.product->product_name = 'Chocolade';\n\n\nPostgres already supports most of that, but not all of it.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 29 Mar 2021 16:17:50 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021, at 16:17, Vik Fearing wrote:\n> If you write your schema like this, then it becomes standards compliant:\n> ...\n> CREATE TABLE order_details (\n> \"order\" REF(orders),\n> product REF(products),\n> quantity integer,\n> PRIMARY KEY (\"order\", product)\n> );\n> \n> \n> And the query would be:\n> \n> SELECT DISTINCT order_details.\"order\"->customer->company_name\n> FROM order_details\n> WHERE order_details.product->product_name = 'Chocolade';\n> \n> \n> Postgres already supports most of that, but not all of it.\n\nThanks for making me aware of this.\nI can see this is \"4.9 Reference types\" in ISO/IEC 9075-2:2016(E).\n\nThis looks awesome.\n\n/Joel\n\n", "msg_date": "Mon, 29 Mar 2021 17:50:20 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021, at 16:17, Vik Fearing wrote:\n> CREATE TABLE order_details (\n> \"order\" REF(orders),\n> product REF(products),\n> quantity integer,\n> PRIMARY KEY (\"order\", product)\n> );\n> \n> \n> And the query would be:\n> \n> SELECT DISTINCT order_details.\"order\"->customer->company_name\n> FROM order_details\n> WHERE order_details.product->product_name = 'Chocolade';\n> \n> \n> Postgres already supports most of that, but not all of it.\n\nDo you know if REF is meant to be a replacement for foreign keys?\n\nAre they a different thing meant to co-exist with foreign keys,\nor are they actually foreign keys \"under the hood\"\nor something else entirely?\n\n/Joel\n", "msg_date": "Mon, 29 Mar 2021 19:55:49 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 3/29/21 7:55 PM, Joel Jacobson wrote:\n> On Mon, Mar 29, 2021, at 16:17, Vik Fearing wrote:\n>> CREATE TABLE order_details (\n>> \"order\" REF(orders),\n>> product REF(products),\n>> quantity integer,\n>> PRIMARY KEY (\"order\", product)\n>> );\n>>\n>>\n>> And the query would be:\n>>\n>> SELECT DISTINCT order_details.\"order\"->customer->company_name\n>> FROM order_details\n>> WHERE order_details.product->product_name = 'Chocolade';\n>>\n>>\n>> Postgres already supports most of that, but not all of it.\n> \n> Do you know if REF is meant to be a replacement for foreign keys?\n> \n> Are they a different thing meant to co-exist with foreign keys,\n> or are they actually foreign keys \"under the hood\"\n> or something else entirely?\n\nThey're supposed to be OOP where each row in the typed table is an\ninstance of the object. Types can also have methods associated with\nthem, and the instance tables can have subtables similar to our table\ninheritance. 
The dereference operator is replaced by a subquery.\n\nThere is a whole slew of things in this area of the standard that\napparently never caught on.\n-- \nVik Fearing\n\n\n", "msg_date": "Mon, 29 Mar 2021 20:53:55 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021, at 20:53, Vik Fearing wrote:\n> On 3/29/21 7:55 PM, Joel Jacobson wrote:\n> > Do you know if REF is meant to be a replacement for foreign keys?\n> > \n> > Are they a different thing meant to co-exist with foreign keys,\n> > or are they actually foreign keys \"under the hood\"\n> > or something else entirely?\n> \n> They're supposed to be OOP where each row in the typed table is an\n> instance of the object. Types can also have methods associated with\n> them, and the instance tables can have subtables similar to our table\n> inheritance. The dereference operator is replaced by a subquery.\n> \n> There is a whole slew of things in this area of the standard that\n> apparently never caught on.\n\nHmm. Since it never caught on, maybe it was partly due to too much complexity, and maybe we can invent a simpler solution?\n\nI would also be against this idea if the complexity cost were too high,\nbut I think Tom's foreign key constraint name idea looks fruitful since it's simple and non-invasive.\n\n/Joel\n", "msg_date": "Mon, 29 Mar 2021 22:59:38 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021 at 23:00, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Mon, Mar 29, 2021, at 20:53, Vik Fearing wrote:\n>\n> On 3/29/21 7:55 PM, Joel Jacobson wrote:\n> > Do you know if REF is meant to be a replacement for foreign keys?\n> >\n> > Are they a different thing meant to co-exist with foreign keys,\n> > or are they actually foreign keys \"under the hood\"\n> > or something else entirely?\n>\n> They're supposed to be OOP where each row in the typed table is an\n> instance of the object. Types can also have methods associated with\n> them, and the instance tables can have subtables similar to our table\n> inheritance. The dereference operator is replaced by a subquery.\n>\n> There is a whole slew of things in this area of the standard that\n> apparently never caught on.\n>\n>\n> Hmm. Since it never caught on, maybe it was partly due to too much\n> complexity, and maybe can invent a simpler solution?\n>\n> I would also be against this idea if the complexity cost would be too high,\n> but I think Tom's foreign key constraint name idea looks fruitful since\n> it's simple and non-invasive.\n>\n\nMaybe there were no technical problems. Just this technology was coming at\na bad time. 
The people who needed (wanted) OOP access to data got\nHibernate, and there was no necessity to do this work at the SQL level. At that\ntime, it was possible to use GUIs for databases, and there\nwere a lot of graphical query designers. I don't like the idea of\nforeign key constraint names - it doesn't look comfortable to me. I don't\nsay it is a bad idea, but it is not SQL, and I am not sure if it needs more\nor less work than explicitly writing PK=FK.\n\nOn the other hand, it could be very nice to have some special strict mode in\nPostgres - maybe slower, not compatible - that disallows some dangerous or\nunsafe queries. It would be possible to solve this in extensions, but nobody has done\nit. Something like plpgsql_check for SQL - who will write sql_check?\n\nRegards\n\nPavel\n\n\n> /Joel\n>\n", "msg_date": "Tue, 30 Mar 2021 08:03:09 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021 at 08:03:09AM +0200, Pavel Stehule wrote:\n> \n> On second hand, it can be very nice to have some special strict mode in\n> Postgres - maybe slower, not compatible, that disallow some dangerous or\n> unsafe queries. But it is possible to solve in extensions, but nobody did\n> it. Something like plpgsql_check for SQL - who will write sql_check?\n\nThe #1 cause of problems is probably unqualified outer references, and\nunfortunately I don't think it's really possible to detect that in an\nextension, as the required information is only available in the raw parsetree.\n\n\n", "msg_date": "Tue, 30 Mar 2021 14:52:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "út 30. 3. 
2021 v 8:52 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Tue, Mar 30, 2021 at 08:03:09AM +0200, Pavel Stehule wrote:\n> >\n> > On second hand, it can be very nice to have some special strict mode in\n> > Postgres - maybe slower, not compatible, that disallow some dangerous or\n> > unsafe queries. But it is possible to solve in extensions, but nobody did\n> > it. Something like plpgsql_check for SQL - who will write sql_check?\n>\n> The #1 cause of problems is probably unqualified outer references, and\n> unfortunately I don't think it's really possible to detect that in an\n> extension, as the required information is only available in the raw\n> parsetree.\n>\n\nthe raw parsetree is available, I think. I didn't check it. But it could be\neasy to attach it, or a copy of it, to the Query structure. Maybe there is no\nsuitable hook. But it could be a good reason for implementing a post-parsing\nhook.\n\nIt should be easy to check if all joins are related to foreign key\nconstraints.\n\n", "msg_date": "Tue, 30 Mar 2021 09:02:39 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021 at 09:02:39AM +0200, Pavel Stehule wrote:\n> út 30. 3. 2021 v 8:52 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n> \n> > On Tue, Mar 30, 2021 at 08:03:09AM +0200, Pavel Stehule wrote:\n> > >\n> > > On second hand, it can be very nice to have some special strict mode in\n> > > Postgres - maybe slower, not compatible, that disallow some dangerous or\n> > > unsafe queries. But it is possible to solve in extensions, but nobody did\n> > > it. Something like plpgsql_check for SQL - who will write sql_check?\n> >\n> > The #1 cause of problems is probably unqualified outer references, and\n> > unfortunately I don't think it's really possible to detect that in an\n> > extension, as the required information is only available in the raw\n> > parsetree.\n> >\n> \n> the raw parsetree is available I think. I didn't check it. But it can be\n> easy to attach or attach a copy to Query structure. Maybe there is no\n> necessary hook. But it can be a good reason for implementing a post parsing\n> hook.\n\nIt's not available in any existing hook. And even if it were you would have to\nimport most of transformTopLevelStmt() and all underlying functions to be able\nto detect this case I think. 
This should be best done in core postgres.\n\n> It should be easy to check if all joins are related to foreign key\n> constraints.\n\nYes, and also if the referenced columns are covered by indexes for instance.\nMy concern is mostly that you won't be able to cover the unqualified outer\nreferences, which can lead to wrong query results rather than just slow\nqueries.\n\n\n", "msg_date": "Tue, 30 Mar 2021 15:28:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "út 30. 3. 2021 v 9:28 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> On Tue, Mar 30, 2021 at 09:02:39AM +0200, Pavel Stehule wrote:\n> > út 30. 3. 2021 v 8:52 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n> >\n> > > On Tue, Mar 30, 2021 at 08:03:09AM +0200, Pavel Stehule wrote:\n> > > >\n> > > > On second hand, it can be very nice to have some special strict mode\n> in\n> > > > Postgres - maybe slower, not compatible, that disallow some\n> dangerous or\n> > > > unsafe queries. But it is possible to solve in extensions, but\n> nobody did\n> > > > it. Something like plpgsql_check for SQL - who will write sql_check?\n> > >\n> > > The #1 cause of problems is probably unqualified outer references, and\n> > > unfortunately I don't think it's really possible to detect that in an\n> > > extension, as the required information is only available in the raw\n> > > parsetree.\n> > >\n> >\n> > the raw parsetree is available I think. I didn't check it. But it can be\n> > easy to attach or attach a copy to Query structure. Maybe there is no\n> > necessary hook. But it can be a good reason for implementing a post\n> parsing\n> > hook.\n>\n> It's not available in any existing hook. And even if it was you would\n> have to\n> import most of transformTopLevelStmt() and all underlying functions to be\n> able\n> to detect this case I think. 
This should be best done in core postgres.\n>\n> > It should be easy to check if all joins are related to foreign key\n> > constraints.\n>\n> Yes, and also if the referenced columns are covered by indexes for\n> instance.\n> My concern is mostly that you won't be able to cover the unqualified outer\n> references, which can lead to wrong query results rather than just slow\n> queries.\n>\n\nit can be fixed\n\nPavel\n", "msg_date": "Tue, 30 Mar 2021 09:32:14 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 08:03, Pavel Stehule wrote:\n> Maybe there were no technical problems. Just this technology was coming at a bad time. The people who needed (wanted) OOP access to data got the Hibernate, and there was no necessity to do this work on SQL level. In this time, there was possibility to use GUI for databases, and in this time there were a lot of graphic query designers.\n\nThanks for giving this perspective. It seems like a likely explanation. In the ORM camp, SQL is merely a low-level language compilation target, not a language humans primarily write code in.\n\n> I don't like the idea of foreign key constraint names - it doesn't look comfortable to me. I don't say it is a bad idea, but it is not SQL, and I am not sure if it needs more or less work than explicitly to write PK=FK.\n\nI agree, it's not very comfortable.
Maybe we can think of ways to improve the comfort?\n\nHere are two such ideas:\n \nIdea #1\n=======\n\nInitial semi-automated script-assisted renaming of existing foreign keys.\n\nIn my experiences, multiple foreign keys per primary table is quite common,\nbut not multiple foreign keys referencing the same foreign table from the same primary table.\n\nIf so, then a script can be written to rename most existing foreign keys:\n\n--\n-- Script to rename foreign keys to the name of the foreign table.\n-- Tables with multiple foreign keys referencing the same foreign table are skipped.\n--\nDO\n$_$\nDECLARE\nsql_cmd text;\nBEGIN\nFOR sql_cmd IN\n SELECT * FROM\n (\n SELECT\n format\n (\n 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n conrel_nsp.nspname,\n conrel.relname,\n pg_constraint.conname,\n confrel.relname\n ) AS sql_cmd,\n COUNT(*) OVER (PARTITION BY pg_constraint.conrelid, pg_constraint.confrelid)\n AS count_foreign_keys_to_same_table\n FROM pg_constraint\n JOIN pg_class AS conrel\n ON conrel.oid = pg_constraint.conrelid\n JOIN pg_class AS confrel\n ON confrel.oid = pg_constraint.confrelid\n JOIN pg_namespace AS conrel_nsp\n ON conrel_nsp.oid = conrel.relnamespace\n WHERE pg_constraint.contype = 'f'\n ) AS x\n WHERE count_foreign_keys_to_same_table = 1\nLOOP\n RAISE NOTICE '%', sql_cmd;\n EXECUTE sql_cmd;\nEND LOOP;\nEND\n$_$;\n\nFor our example data model, this would produce:\n\nALTER TABLE public.orders RENAME CONSTRAINT orders_customer_id_fkey TO customers;\nALTER TABLE public.order_details RENAME CONSTRAINT order_details_order_id_fkey TO orders;\nALTER TABLE public.order_details RENAME CONSTRAINT order_details_product_id_fkey TO products;\n\nTo clarify what I mean with multiple foreign keys to the same table, here is an example:\n\nCREATE TABLE p (\na int,\nb int,\nPRIMARY KEY (a),\nUNIQUE (a,b)\n);\n\nCREATE TABLE f1 (\na int,\nb int,\nFOREIGN KEY (a) REFERENCES p\n);\n\nCREATE TABLE f2 (\na int,\nb int,\nFOREIGN KEY (a) REFERENCES p,\nFOREIGN KEY 
(a,b) REFERENCES p(a,b)\n);\n\nFor this example, only f1's foreign key constraint would be renamed:\n\nALTER TABLE public.f1 RENAME CONSTRAINT f1_a_fkey TO p;\n\nIdea #2\n=======\n\nAllow user to define the default format for new foreign key constraint name.\n\nThe format could use template patterns similar to how e.g. to_char() works.\nIf a conflict is found, it would do the same as today, try appending an increasing integer.\n\nUsers could then decide on a company-wide consistent naming convention\non how foreign keys are usually named, which would reduce the need to manually name them\nusing the CONSTRAINT keyword.\n\nFinally, just for fun, here is an example of how we could write the query above,\nif we would have real foreign keys on the catalogs:\n\n SELECT\n format\n (\n 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n pg_constraint.conrel.pg_namespace.nspname,\n pg_constraint.conrel.relname,\n pg_constraint.conname,\n pg_constraint.confrel.relname,\n ) AS sql_cmd,\n COUNT(*) OVER (PARTITION BY pg_constraint.conrelid, pg_constraint.confrelid)\n AS count_foreign_keys_to_same_table\n FROM pg_constraint\n WHERE pg_constraint.contype = 'f'\n\nIn this example the foreign key constraint names have been\nderived from the column names since both conrelid and confrelid,\nreference pg_class.\n\nI think this is a good example of where this improves the situation the most,\nwhen you have multiple joins of the same table, forcing you to come up with multiple aliases\nfor the same table, keeping them all in memory while writing and reading such queries.\n \n> On second hand, it can be very nice to have some special strict mode in Postgres - maybe slower, not compatible, that disallow some dangerous or unsafe queries. But it is possible to solve in extensions, but nobody did it. 
Something like plpgsql_check for SQL - who will write sql_check?\n\nNot a bad idea, this is a real problem, such a tool would be useful even with this proposed new syntax, as normal JOINs would continue to co-exist, for which nonsensical joins would still be possible.\n\n/Joel\n", "msg_date": "Tue, 30 Mar 2021 09:47:07 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": ">\n> I agree, it's not very comfortable. Maybe we can think of ways to improve\n> the comfort?\n>\n> Here are two such ideas:\n>\n> Idea #1\n> =======\n>\n> Initial semi-automated script-assisted renaming of existing foreign keys.\n>\n> In my experiences, multiple foreign keys per primary table is quite common,\n> but not multiple foreign keys referencing the same foreign table from the\n> same primary table.\n>\n> If so, then a script can be written to rename most existing foreign keys:\n>\n> --\n> -- Script to rename foreign keys to the name of the foreign table.\n> -- Tables with multiple foreign keys referencing the same foreign table\n> are skipped.\n> --\n> DO\n> $_$\n> DECLARE\n> sql_cmd text;\n> BEGIN\n> FOR sql_cmd IN\n> SELECT * FROM\n> (\n> SELECT\n> format\n> (\n> 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n> conrel_nsp.nspname,\n> conrel.relname,\n> pg_constraint.conname,\n> confrel.relname\n> ) AS sql_cmd,\n> COUNT(*) OVER (PARTITION BY pg_constraint.conrelid,\n> pg_constraint.confrelid)\n> AS count_foreign_keys_to_same_table\n> FROM pg_constraint\n> JOIN pg_class AS conrel\n> ON conrel.oid = pg_constraint.conrelid\n> JOIN pg_class AS confrel\n> ON confrel.oid = pg_constraint.confrelid\n> JOIN pg_namespace AS conrel_nsp\n> ON conrel_nsp.oid = conrel.relnamespace\n> WHERE pg_constraint.contype = 'f'\n> ) AS x\n> WHERE count_foreign_keys_to_same_table = 1\n> LOOP\n> RAISE NOTICE '%', sql_cmd;\n> EXECUTE sql_cmd;\n> END LOOP;\n> END\n> $_$;\n>\n> For our example
data model, this would produce:\n>\n> ALTER TABLE public.orders RENAME CONSTRAINT orders_customer_id_fkey TO\n> customers;\n> ALTER TABLE public.order_details RENAME CONSTRAINT\n> order_details_order_id_fkey TO orders;\n> ALTER TABLE public.order_details RENAME CONSTRAINT\n> order_details_product_id_fkey TO products;\n>\n\nyou fix one issue, but you lost interesting informations\n\n\n> To clarify what I mean with multiple foreign keys to the same table, here\n> is an example:\n>\n> CREATE TABLE p (\n> a int,\n> b int,\n> PRIMARY KEY (a),\n> UNIQUE (a,b)\n> );\n>\n> CREATE TABLE f1 (\n> a int,\n> b int,\n> FOREIGN KEY (a) REFERENCES p\n> );\n>\n> CREATE TABLE f2 (\n> a int,\n> b int,\n> FOREIGN KEY (a) REFERENCES p,\n> FOREIGN KEY (a,b) REFERENCES p(a,b)\n> );\n>\n> For this example, only f1's foreign key constraint would be renamed:\n>\n> ALTER TABLE public.f1 RENAME CONSTRAINT f1_a_fkey TO p;\n>\n> Idea #2\n> =======\n>\n> Allow user to define the default format for new foreign key constraint\n> name.\n>\n> The format could use template patterns similar to how e.g. 
to_char() works.\n> If a conflict is found, it would do the same as today, try appending an\n> increasing integer.\n>\n> Users could then decide on a company-wide consistent naming convention\n> on how foreign keys are usually named, which would reduce the need to\n> manually name them\n> using the CONSTRAINT keyword.\n>\n> Finally, just for fun, here is an example of how we could write the query\n> above,\n> if we would have real foreign keys on the catalogs:\n>\n> SELECT\n> format\n> (\n> 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n> pg_constraint.conrel.pg_namespace.nspname,\n> pg_constraint.conrel.relname,\n> pg_constraint.conname,\n> pg_constraint.confrel.relname,\n> ) AS sql_cmd,\n> COUNT(*) OVER (PARTITION BY pg_constraint.conrelid,\n> pg_constraint.confrelid)\n> AS count_foreign_keys_to_same_table\n> FROM pg_constraint\n> WHERE pg_constraint.contype = 'f'\n>\n> In this example the foreign key constraint names have been\n> derived from the column names since both conrelid and confrelid,\n> reference pg_class.\n>\n> I think this is a good example of where this improves the situation the\n> most,\n> when you have multiple joins of the same table, forcing you to come up\n> with multiple aliases\n> for the same table, keeping them all in memory while writing and reading\n> such queries.\n>\n\nI do not have an opinion about this, I am sorry. I cannot imagine so this\ncan work. In some complex cases, the graphic query designer can work\nbetter. The invention of new syntax, or new tool should be better just than\nchecking correct usage of foreign constraints. I have worked with SQL for\nover 25 years, and there were a lot of tools, and people don't use it too\nmuch. So I am not good at dialog in this area, because I am a little bit\ntoo sceptical :).\n\nI remember multiple self joins only when developers used an EAV model. 
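For concreteness, here is a hedged sketch of the kind of EAV query meant here; the `data` table and its columns are hypothetical, not taken from this thread. Reassembling a single entity from an EAV table costs one self-join per attribute:

```sql
-- Hypothetical EAV storage: one row per (object, attribute, value).
CREATE TABLE data (
  objid    bigint,
  attrname text,
  value    text
);

-- Rebuilding a two-attribute row already needs one self-join:
SELECT n.objid,
       n.value AS name,
       s.value AS surname
FROM data AS n
JOIN data AS s
  ON s.objid = n.objid
 AND s.attrname = 'surname'
WHERE n.attrname = 'name';
```

Each additional attribute adds another join of `data` to itself, which is the scaling problem alluded to here.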
This\nis an antipattern, and today we have better tools, and we don't need it.\nIt is scary, because it is completely against the relational model. If I\nwant to fix it, then I will invent a new different syntax type that can be\nused for optimization of this case. But I have no idea how to do it well.\nMaybe:\n\nSELECT * FROM EAVTOENTITY( FROM data GROUP BY objid COLUMN name varchar\nWHEN attrname = 'name', surname varchar WHEN attrname = 'surname', ...)\n\n\n>\n> On second hand, it can be very nice to have some special strict mode in\n> Postgres - maybe slower, not compatible, that disallow some dangerous or\n> unsafe queries. But it is possible to solve in extensions, but nobody did\n> it. Something like plpgsql_check for SQL - who will write sql_check?\n>\n>\n> Not a bad idea, this is a real problem, such a tool would be useful even\n> with this proposed new syntax, as normal JOINs would continue to co-exist,\n> for which nonsensical joins would still be possible.\n>\n\nMaybe some similar what we have in plpgsql - extra checks - with three\nlevels, off, warnings, errors.\n\n\n\n> /Joel\n>\n", "msg_date": "Tue, 30 Mar 2021 10:24:40 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 10:24, Pavel Stehule wrote:\n> \n>> \n>> I think this is a good example of where this improves the situation the most,\n>> when you have multiple joins of the same table, forcing you to come up with multiple aliases\n>> for the same table, keeping them all in memory while writing and reading such queries.\n> \n> ...\n> I remember multiple self joins only when developers used an EAV model. This is an antipattern, and today we have better tools, and we don't need it. It is scary, because it is completely against the relational model.\n\nNo, you are mistaken.
There are no self-joins in any of the examples I presented.\nI merely joined in the same table multiple times, but not with itself, so it's not a self join.\n\nHere is the query again, it doesn't contain any self-joins:\n\n SELECT\n format\n (\n 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n conrel_nsp.nspname,\n conrel.relname,\n pg_constraint.conname,\n confrel.relname\n ) AS sql_cmd,\n COUNT(*) OVER (PARTITION BY pg_constraint.conrelid, pg_constraint.confrelid)\n AS count_foreign_keys_to_same_table\n FROM pg_constraint\n JOIN pg_class AS conrel\n ON conrel.oid = pg_constraint.conrelid\n JOIN pg_class AS confrel\n ON confrel.oid = pg_constraint.confrelid\n JOIN pg_namespace AS conrel_nsp\n ON conrel_nsp.oid = conrel.relnamespace\n WHERE pg_constraint.contype = 'f'\n\nWhere would the antipattern be here?\n\n/Joel\n", "msg_date": "Tue, 30 Mar 2021 10:49:24 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, 30 Mar 2021 at 10:49, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Tue, Mar 30, 2021, at 10:24, Pavel Stehule wrote:\n>\n>\n>\n> I think this is a good example of where this improves the situation the\n> most,\n> when you have multiple joins of the same table, forcing you to come up\n> with multiple aliases\n> for the same table, keeping them all in memory while writing and reading\n> such queries.\n>\n>\n> ...\n> I remember multiple self joins only when developers used an EAV model.\n> This is an antipattern, and today we have better tools, and we don't need\n> it. It is scary, because it is completely against the relational model.\n>\n>\n> No, you are mistaken.
There are no self-joins in any of the examples I\n> presented.\n> I merely joined in the same table multiple times, but not with itself, so\n> it's not a self join.\n>\n> Here is the query again, it doesn't contain any self-joins:\n>\n> SELECT\n> format\n> (\n> 'ALTER TABLE %I.%I RENAME CONSTRAINT %I TO %I;',\n> conrel_nsp.nspname,\n> conrel.relname,\n> pg_constraint.conname,\n> confrel.relname\n> ) AS sql_cmd,\n> COUNT(*) OVER (PARTITION BY pg_constraint.conrelid,\n> pg_constraint.confrelid)\n> AS count_foreign_keys_to_same_table\n> FROM pg_constraint\n> JOIN pg_class AS conrel\n> ON conrel.oid = pg_constraint.conrelid\n> JOIN pg_class AS confrel\n> ON confrel.oid = pg_constraint.confrelid\n> JOIN pg_namespace AS conrel_nsp\n> ON conrel_nsp.oid = conrel.relnamespace\n> WHERE pg_constraint.contype = 'f'\n>\n> Where would the antipattern be here?\n>\n>\nok, this is not EAV.\n\n\n\n/Joel\n>\n", "msg_date": "Tue, 30 Mar 2021 10:51:52 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 10:24, Pavel Stehule wrote:\n> For our example data model, this would produce:\n>> \n>> ALTER TABLE public.orders RENAME CONSTRAINT orders_customer_id_fkey TO customers;\n>> ALTER TABLE public.order_details RENAME CONSTRAINT order_details_order_id_fkey TO orders;\n>> ALTER TABLE public.order_details RENAME CONSTRAINT order_details_product_id_fkey TO products;\n> \n> you fix one issue, but you lost interesting informations \n\nNo, it's not lost.
It's still there:\n\n# \\d order_details\nForeign-key constraints:\n \"orders\" FOREIGN KEY (order_id) REFERENCES orders(order_id)\n \"products\" FOREIGN KEY (product_id) REFERENCES products(product_id)\n\nYou can still easily find out what tables/columns are referencing/referenced,\nby using \\d or look in the information_schema.\n\nThe primarily reason why this information is duplicated in the default name,\nis AFAIK due to avoid hypothetical name conflicts,\nwhich is only a real problem for users who would need to export the schema\nto some other SQL database, or use apps that depend on the names to be\nunique within the namespace, and not just within the table.\n\nThe comment in pg_constraint.c explains this:\n\n/* Select a nonconflicting name for a new constraint.\n*\n* The objective here is to choose a name that is unique within the\n* specified namespace. Postgres does not require this, but the SQL\n* spec does, and some apps depend on it. Therefore we avoid choosing\n* default names that so conflict.\n\n/Joel\n", "msg_date": "Tue, 30 Mar 2021 11:21:57 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 11:21, Joel Jacobson wrote:\n> On Tue, Mar 30, 2021, at 10:24, Pavel Stehule wrote:\n>> For our example data model, this would produce:\n>>> \n>>> ALTER TABLE public.orders RENAME CONSTRAINT orders_customer_id_fkey TO customers;\n>>> ALTER TABLE public.order_details RENAME CONSTRAINT order_details_order_id_fkey TO orders;\n>>> ALTER TABLE public.order_details RENAME CONSTRAINT order_details_product_id_fkey TO products;\n>> \n>> you fix one issue, but you lost interesting informations \n> \n> No, it's not lost.
It's still there:\n> \n> # \d order_details\n> Foreign-key constraints:\n> \"orders\" FOREIGN KEY (order_id) REFERENCES orders(order_id)\n> \"products\" FOREIGN KEY (product_id) REFERENCES products(product_id)\n> \n> You can still easily find out what tables/columns are referencing/referenced,\n> by using \d or look in the information_schema.\n> \n> The primarily reason why this information is duplicated in the default name,\n> is AFAIK due to avoid hypothetical name conflicts,\n> which is only a real problem for users who would need to export the schema\n> to some other SQL database, or use apps that depend on the names to be\n> unique within the namespace, and not just within the table.\n> \n> The comment in pg_constraint.c explains this:\n> \n> /* Select a nonconflicting name for a new constraint.\n> *\n> * The objective here is to choose a name that is unique within the\n> * specified namespace. Postgres does not require this, but the SQL\n> * spec does, and some apps depend on it. Therefore we avoid choosing\n> * default names that so conflict.\n> \n> /Joel\n\nUsers who have decided to stick to PostgreSQL forever,\nand who don't have any apps that depend on the (IMHO stupid) decision by the SQL standard\nto require constraint names to be unique per namespace, can and should happily ignore this restriction.\n\n/Joel
", "msg_date": "Tue, 30 Mar 2021 11:24:10 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Sat, 27 Mar 2021 at 16:28, Joel Jacobson <joel@compiler.org> wrote:\n\n> Hi,\n>\n> The database Neo4j has a language called \"Cypher\" where one of the key\n> selling points is they \"don’t need join tables\".\n>\n> Here is an example from\n> https://neo4j.com/developer/cypher/guide-sql-to-cypher/\n>\n> SQL:\n>\n> SELECT DISTINCT c.company_name\n> FROM customers AS c\n> JOIN orders AS o ON c.customer_id = o.customer_id\n> JOIN order_details AS od ON o.order_id = od.order_id\n> JOIN products AS p ON od.product_id = p.product_id\n>
WHERE p.product_name = 'Chocolade';\n>\n> Neo4j's Cypher:\n>\n> MATCH (p:product\n> {product_name:\"Chocolade\"})<-[:PRODUCT]-(:order)<-[:PURCHASED]-(c:customer)\n> RETURN distinct c.company_name;\n>\n> Imagine if we could simply write the SQL query like this:\n>\n> SELECT DISTINCT od.order_id.customer_id.company_name\n> FROM order_details AS od\n> WHERE od.product_id.product_name = 'Chocolade';\n>\n\nI regularly do this type of thing via views. It's a bit confusing as writes\ngo to one set of tables while selects often go through the view with all\nthe details readily available.\n\nI think I'd want these shortcuts to be well defined and obvious to someone\nexploring via psql. I can also see uses where a foreign key might not be\navailable (left join rather than join).\n\nI wonder if GENERATED ... VIRTUAL might be a way of defining this type of\nadded record.\n\nALTER TABLE order ADD customer record GENERATED JOIN customer USING\n(customer_id) VIRTUAL;\nALTER TABLE order_detail ADD order record GENERATED JOIN order USING\n(order_id) VIRTUAL;\n\nSELECT order.customer.company_name FROM order_detail;\n\nOf course, if they don't reference the GENERATED column then the join isn't\nadded to the query.\n\n--\nRod Taylor
", "msg_date": "Tue, 30 Mar 2021 10:25:23 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 16:25, Rod Taylor wrote:\n> On Sat, 27 Mar 2021 at 16:28, Joel Jacobson <joel@compiler.org> wrote:\n>> Imagine if we could simply write the SQL query like this:\n>> \n>> SELECT DISTINCT od.order_id.customer_id.company_name\n>> FROM order_details AS od\n>> WHERE od.product_id.product_name = 'Chocolade';\n> \n> I regularly do this type of thing via views. It's a bit confusing as writes go to one set of tables while selects often go through the view with all the details readily available.\n> \n> I think I'd want these shortcuts to be well defined and obvious to someone exploring via psql. I can also see uses where a foreign key might not be available (left join rather than join).\n> \n> I wonder if GENERATED ...
VIRTUAL might be a way of defining this type of added record.\n> \n> ALTER TABLE order ADD customer record GENERATED JOIN customer USING (customer_id) VIRTUAL;\n> ALTER TABLE order_detail ADD order record GENERATED JOIN order USING (order_id) VIRTUAL;\n> \n> SELECT order.customer.company_name FROM order_detail;\n> \n> Of course, if they don't reference the GENERATED column then the join isn't added to the query.\n\nInteresting idea, but not sure I like it, since you would need twice as many columns,\nand you would still need the foreign keys, right?\n\n/Joel
", "msg_date": "Tue, 30 Mar 2021 20:15:00 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Mon, Mar 29, 2021, at 16:17, Vik Fearing wrote:\n> SELECT DISTINCT order_details.\"order\"->customer->company_name\n> FROM order_details\n> WHERE order_details.product->product_name = 'Chocolade';\n\nI like the idea of using -> instead of . (dot),\nsince name resolution is already complicated,\nso overloading the dot operator feels like a bad idea.\n\nI therefore propose the following syntax:\n\n{ table_name | alias } -> constraint_name [ [ -> constraint_name ...
] -> column_name ]\n\nIt's necessary to start with the table name or its alias,\nsince two tables/aliases used in the same query\nmight have different constraints with the same name.\n\nIf the expression ends with a column_name,\nyou get the value for the column.\n\nIf the expression ends with a constraint_name,\nyou get the referenced table as a record.\n\nI also have a new idea on how we can use\nthe nullability of the foreign key's column(s)\nas a rule to determine whether you would get\na LEFT JOIN or an INNER JOIN:\n\nIf ALL of the foreign key column(s) are declared NOT NULL,\nthen you would get an INNER JOIN.\n\nIf ANY of the foreign key column(s) are declared nullable,\nthen you would get a LEFT JOIN.\n\nThoughts?\n\n/Joel
", "msg_date": "Tue, 30 Mar 2021 20:30:20 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, 30 Mar 2021 at 14:30, Joel Jacobson <joel@compiler.org> wrote:\n\nIf the expression ends with a column_name,\n> you get the value for the column.\n>\n> If the expression ends with a constraint_name,\n> you get the referenced table as a record.\n>\n\nCan’t you just leave off the “ends with a column_name” part? If you want\none of its columns, just put .column_name:\n\ntable -> constraint -> ... -> constraint . column_name\n\nThen you know that -> expects a constraint_name and only that to its right.\n\nAlso, should the join be a left join, which would therefore return a NULL\nwhen there is no matching record? Or could we have a variation such as ->?\nto give a left join (NULL when no matching record) with -> using an inner\njoin (record is not included in result when no matching record).\n\nFor the record I would find something like this quite useful.
I constantly\nfind myself joining in code lookup tables and the like, and while from a\nmathematical view it’s just another join, explicitly listing the table in\nthe FROM clause of a large query does not assist with readability to say\nthe least.", "msg_date": "Tue, 30 Mar 2021 15:02:17 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 21:02, Isaac Morland wrote:\n> On Tue, 30 Mar 2021 at 15:33, Joel Jacobson <joel@compiler.org> wrote:\n> \n>> If the expression ends with a column_name,\n>> you get the value for the column.\n>> \n>> If the expression ends with a constraint_name,\n>> you get the referenced table as a record.\n> \n> Can’t you just leave off the “ends with a column_name” part?
If you want one of its columns, just put .column_name:\n> \n> table -> constraint -> ... -> constraint . column_name\n> \n> Then you know that -> expects a constraint_name and only that to its right.\n\n+1\n\nOf course! Much simpler. Thanks.\n\n> \n> Also, should the join be a left join, which would therefore return a NULL when there is no matching record? Or could we have a variation such as ->? to give a left join (NULL when no matching record) with -> using an inner join (record is not included in result when no matching record).\n\nInteresting idea, but I think we can keep it simple, and still support the case you mention:\n\nIf we only have -> and you want to exclude records where the column is NULL (i.e. INNER JOIN),\nI think we should just use the WHERE clause and filter on such condition.\n\n> \n> For the record I would find something like this quite useful. I constantly find myself joining in code lookup tables and the like, and while from a mathematical view it’s just another join, explicitly listing the table in the FROM clause of a large query does not assist with readability to say the least.\n\nThanks for the encouraging words. I have exactly the same experience myself and share your view.\n\nI look forward to continued discussion on this matter.\n\n/Joel
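The desugaring being discussed can be sketched concretely. The `->` operator is only a proposal in this thread, so the snippet below shows the explicit LEFT JOIN such a path expression would expand to; Python's stdlib sqlite3 stands in for PostgreSQL, and the table and constraint names are borrowed from the thread's example schema (assumptions, not an actual implementation):

```python
# Sketch: the proposed  o -> orders_customer_id_fkey . company_name
# would desugar into an ordinary LEFT JOIN along the foreign key,
# so a row whose FK column is NULL still appears (with a NULL result).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, company_name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(customer_id)  -- nullable FK
    );
    INSERT INTO customers VALUES (1, 'Chocolade Inc');
    INSERT INTO orders VALUES (10, 1), (11, NULL);  -- order 11 has no customer
""")

# Proposed:  SELECT o.order_id, o->orders_customer_id_fkey.company_name FROM orders AS o
# Desugared form:
rows = conn.execute("""
    SELECT o.order_id, c.company_name
    FROM orders AS o
    LEFT JOIN customers AS c ON c.customer_id = o.customer_id
    ORDER BY o.order_id
""").fetchall()
print(rows)  # [(10, 'Chocolade Inc'), (11, None)]
```

Filtering with `WHERE c.company_name IS NOT NULL` on the desugared query then gives the inner-join behaviour discussed above.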
", "msg_date": "Tue, 30 Mar 2021 21:32:59 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, 30 Mar 2021 at 15:33, Joel Jacobson <joel@compiler.org> wrote:\n\nAlso, should the join be a left join, which would therefore return a NULL\n> when there is no matching record? Or could we have a variation such as ->?\n> to give a left join (NULL when no matching record) with -> using an inner\n> join (record is not included in result when no matching record).\n>\n>\n> Interesting idea, but I think we can keep it simple, and still support the\n> case you mention:\n>\n> If we only have -> and you want to exclude records where the column is\n> NULL (i.e. INNER JOIN),\n> I think we should just use the WHERE clause and filter on such condition.\n>\n\nJust to be clear, it will always be a left join? Agreed that getting the\ninner join behaviour can be done in the WHERE clause. I think this is a\ncase where simple is good.
As long as the left join case is supported I'm\nhappy.\n\n\n> Thanks for the encouraging words. I have exactly the same experience\n> myself and share your view.\n>\n> I look forward to continued discussion on this matter.\n>\n\nI had another idea: maybe the default name of a foreign key constraint to a\nprimary key should simply be the name of the target table? That is, if I\nsay:\n\nFOREIGN KEY (...) REFERENCES t\n\n... then unless the table name t is already in use as a constraint name, it\nwill be used as the constraint name. It would be nice not to have to keep\nrepeating, like this:\n\nCONSTRAINT t FOREIGN KEY (...) REFERENCES t
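The nullability rule floated in the thread (all FK columns NOT NULL means the join can never drop or null out rows) can be checked mechanically. A minimal sketch, using Python's stdlib sqlite3 as a stand-in engine and the thread's orders/customers names as assumptions: with the FK column declared NOT NULL and every value present, LEFT JOIN and INNER JOIN return identical rows, so generating either join for `->` would give the same answer.

```python
# With a NOT NULL foreign key column (and the FK actually satisfied),
# every orders row has exactly one matching customers row, so the
# LEFT JOIN and INNER JOIN results coincide.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, company_name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(customer_id)
    );
    INSERT INTO customers VALUES (1, 'A'), (2, 'B');
    INSERT INTO orders VALUES (10, 1), (11, 2);
""")

left = conn.execute("""
    SELECT o.order_id, c.company_name
    FROM orders AS o LEFT JOIN customers AS c USING (customer_id)
    ORDER BY o.order_id""").fetchall()
inner = conn.execute("""
    SELECT o.order_id, c.company_name
    FROM orders AS o JOIN customers AS c USING (customer_id)
    ORDER BY o.order_id""").fetchall()
print(left == inner)  # True: the two join types are equivalent here
```

Whether the planner produces a cheaper plan for the INNER JOIN form is a separate question, as noted below.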
", "msg_date": "Tue, 30 Mar 2021 16:01:16 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Tue, Mar 30, 2021, at 22:01, Isaac Morland wrote:\n> On Tue, 30 Mar 2021 at 15:33, Joel Jacobson <joel@compiler.org> wrote:\n>>> Also, should the join be a left join, which would therefore return a NULL when there is no matching record? Or could we have a variation such as ->? to give a left join (NULL when no matching record) with -> using an inner join (record is not included in result when no matching record).\n>> \n>> Interesting idea, but I think we can keep it simple, and still support the case you mention:\n>> \n>> If we only have -> and you want to exclude records where the column is NULL (i.e. INNER JOIN),\n>> I think we should just use the WHERE clause and filter on such condition.\n>> \n> \n> Just to be clear, it will always be a left join? Agreed that getting the inner join behaviour can be done in the WHERE clause. I think this is a case where simple is good. As long as the left join case is supported I'm happy.\n\nHmm, I guess, since technically, if all foreign key column(s) are declared as NOT NULL, we would know for sure such values exist, so a LEFT JOIN and INNER JOIN would always produce the same result.\nI'm not sure if the query planner could produce different plans though, and if an INNER JOIN could be more efficient. If it matters, then I think we should generate an INNER JOIN for the \"all column(s) NOT NULL\" case.\n\n> \n>> Thanks for the encouraging words.
I have exactly the same experience myself and share your view.\n>> \n>> I look forward to continued discussion on this matter.\n> \n> I had another idea: maybe the default name of a foreign key constraint to a primary key should simply be the name of the target table? That is, if I say:\n> \n> FOREIGN KEY (...) REFERENCES t\n> \n> ... then unless the table name t is already in use as a constraint name, it will be used as the constraint name. It would be nice not to have to keep repeating, like this:\n> \n> CONSTRAINT t FOREIGN KEY (...) REFERENCES t\n> \n\nI suggested earlier in the thread allowing the default name format to be user-definable,\nsince, according to the comment in pg_constraint.c, some users might depend on apps that rely on the name\nbeing unique within the namespace and not just within the table.\n\nHere is the commit that implemented this:\n\ncommit 45616f5bbbb87745e0e82b00e77562d6502aa042\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jun 10 17:56:03 2004 +0000\n\n Clean up generation of default names for constraints, indexes, and serial\n sequences, as per recent discussion. All these names are now of the\n form table_column_type, with digits added if needed to make them unique.\n Default constraint names are chosen to be unique across their whole schema,\n not just within the parent object, so as to be more SQL-spec-compatible\n and make the information schema views more useful.\n\nSo if nothing has changed since then, I don't think we should change the default name for all users.\nBut like I said earlier, I think it would be good if users who know what they are doing could override the default name format.\n\n/Joel
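The naming scheme described in that commit can be sketched as a small function. This is an illustrative approximation, not PostgreSQL's actual C implementation: build a name of the form table_column_type and append digits until it is unique within the schema.

```python
# Illustrative approximation of the default constraint-naming scheme:
# names have the form table_column_type, with digits appended as needed
# to keep the name unique across the whole schema (not just the table).
def choose_constraint_name(table, column, contype, used_names):
    base = f"{table}_{column}_{contype}"
    name, n = base, 0
    while name in used_names:   # conflict: try base1, base2, ...
        n += 1
        name = f"{base}{n}"
    used_names.add(name)
    return name

used = set()
print(choose_constraint_name("order_details", "order_id", "fkey", used))
# order_details_order_id_fkey
print(choose_constraint_name("order_details", "order_id", "fkey", used))
# order_details_order_id_fkey1
```

Making only the `base` format user-definable, while keeping the conflict-resolution loop, is essentially the override being proposed here.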
", "msg_date": "Wed, 31 Mar 2021 00:50:19 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, Mar 31, 2021 at 12:50:19AM +0200, Joel Jacobson wrote:\n> On Tue, Mar 30, 2021, at 22:01, Isaac Morland wrote:\n> > On Tue, 30 Mar 2021 at 15:33, Joel Jacobson <joel@compiler.org> wrote:\n> >>> Also, should the join be a left join, which would therefore return a NULL when there is no matching record? Or could we have a variation such as ->?
to give a left join (NULL when no matching record) with -> using an inner join (record is not included in result when no matching record).\n> >> \n> >> Interesting idea, but I think we can keep it simple, and still support the case you mention:\n> >> \n> >> If we only have -> and you want to exclude records where the column is NULL (i.e. INNER JOIN),\n> >> I think we should just use the WHERE clause and filter on such condition.\n> >> \n> > \n> > Just to be clear, it will always be a left join? Agreed that getting the inner join behaviour can be done in the WHERE clause. I think this is a case where simple is good. As long as the left join case is supported I'm happy.\n> \n> Hmm, I guess, since technically, if all foreign key column(s) are declared as NOT NULL, we would know for sure such values exist, so a LEFT JOIN and INNER JOIN would always produce the same result.\n> I'm not sure if the query planner could produce different plans though, and if an INNER JOIN could be more efficient. If it matters, then I think we should generate an INNER JOIN for the \"all column(s) NOT NULL\" case.\n\nI'm not sure who is supposed to be the target audience for this proposal.\n\nAs far as I understand, this won't change the fact that users will still have to\nunderstand the \"relational\" part of an RDBMS, understand what JOIN\ncardinality is and everything that comes with it. So you think that people who\nare too lazy to learn the proper JOIN syntax will still bother to learn about\nrelational algebra and understand what they're doing, and I'm very doubtful\nabout that.\n\nYou also think that writing a proper JOIN is complex, but somehow writing a\nproper WHERE clause to subtly change the query behavior is not a problem, or\nthat if users want to use aggregates or anything more complex then they'll\nhappily open the documentation and learn how to do that.
In my experience what\nwill happen instead is that users will keep using that limited subset of SQL\nfeatures and build creative and incredibly inefficient systems to avoid using\nanything else, and will then complain that postgres is too slow.\n\nAs an example, just yesterday some user complained that it's not possible to\nwrite a trigger on a table that could intercept inserting a textual value on an\ninteger field and replace it with the referenced value. And he rejected our\nsuggested solution to replace the \"INSERT INTO sometable VALUES...\" with\n\"INSERT INTO sometable SELECT ...\". And no, this proposal would not have\nchanged anything, because changing the python script doing the import to add\nsome minimal SQL knowledge was apparently too problematic. Instead he will\ninsert the data in a temporary table and dispatch everything on a per-row\nbasis, using triggers. So here again the problem wasn't the syntax but having\nto deal with a relational rather than an imperative approach.\n\nEven if I'm totally wrong about that, I still think your proposal will lead to\nproblematic or ambiguous situations unless you come up with a syntax that can\nfully handle the JOIN grammar. For instance, what should happen if the query\nalso contains an explicit JOIN for the same relation? I can see many reasons\nwhy this would happen with this proposal, given the set of features it can\nhandle. For instance:\n\n- you want multiple JOINs.
Like one OUTER JOIN and one INNER JOIN for the same\n relation\n- you want to push predicates on an OUTER JOIN\n\n\n", "msg_date": "Wed, 31 Mar 2021 14:18:24 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, Mar 31, 2021, at 08:18, Julien Rouhaud wrote:\n> On Wed, Mar 31, 2021 at 12:50:19AM +0200, Joel Jacobson wrote:\n> > On Tue, Mar 30, 2021, at 22:01, Isaac Morland wrote:\n> > > On Tue, 30 Mar 2021 at 15:33, Joel Jacobson <joel@compiler.org> wrote:\n> > >>> Also, should the join be a left join, which would therefore return a NULL when there is no matching record? Or could we have a variation such as ->? to give a left join (NULL when no matching record) with -> using an inner join (record is not included in result when no matching record).\n> > >> \n> > >> Interesting idea, but I think we can keep it simple, and still support the case you mention:\n> > >> \n> > >> If we only have -> and you want to exclude records where the column is NULL (i.e. INNER JOIN),\n> > >> I think we should just use the WHERE clause and filter on such condition.\n> > >> \n> > > \n> > > Just to be clear, it will always be a left join? Agreed that getting the inner join behaviour can be done in the WHERE clause. I think this is a case where simple is good. As long as the left join case is supported I'm happy.\n> > \n> > Hmm, I guess, since technically, if all foreign key column(s) are declared as NOT NULL, we would know for sure such values exist, so a LEFT JOIN and INNER JOIN would always produce the same result.\n> > I'm not sure if the query planner could produce different plans though, and if an INNER JOIN could be more efficient.
If it matters, then I think we should generate an INNER JOIN for the \"all column(s) NOT NULL\" case.\n> \n> I'm not sure who is supposed to be the target for this proposal.\n> \n> As far as I understand this won't change the fact that users will still have to\n> understand the \"relational\" part of RDBMS, understand what is a JOIN\n> cardinality and everything that comes with it. So you think that people who\n> are too lazy to learn the proper JOIN syntax will still bother to learn about\n> relational algebra and understand what they're doing, and I'm very doubtful\n> about that.\n> \n> You also think that writing a proper JOIN is complex, but somehow writing a\n> proper WHERE clause to subtly change the query behavior is not a problem, or\n> that if users want to use aggregate or anything more complex then they'll\n> happily open the documentation and learn how to do that. In my experience what\n> will happen is that instead users will keep using that limited subset of SQL\n> features and build creative and incredibly inefficient systems to avoid using\n> anything else and will then complain that postgres is too slow.\n\nThanks for interesting new insights and questions.\n\nTraditional SQL JOINs reveals less information about the data model,\ncompared to this new proposed foreign key based syntax.\n\nTraditional SQL JOINs => undirected graph can be inferred\nForeign key joins => directed graph can be inferred\n\nWhen looking at a traditional join, you might be able to guess the direction,\nbased on the name of tables and columns, but you cannot know for sure without\nlooking at the table definitions.\n\nI'm thinking the target is both expert as well as beginner users,\nwho prefer a more concise syntax and reduced cognitive load:\n \nImagine a company with two types of SQL users:\n1) Tech core team, responsible for schema changes (DDL), such as adding new tables/columns\nand adding proper foreign keys.\n2) Normal users, responsible for writing SQL queries using 
the existing schema.\n\nIn such a scenario, (2) would use the foreign keys added by (1),\nletting them focus on *what* to join and less on *how* to join,\nall in line with the objectives of the declarative paradigm.\n\nBy using the foreign keys, it is guaranteed you cannot get an\naccidental one-to-many join that would multiply the result set.\n\nHow many rows a certain big query with lots of joins returns\ncan be difficult to reason about, you need to carefully inspect each\ntable to understand what column(s) there are unique constraints on,\nthat cannot multiply the result set.\n\nIf using the -> notation, you would only need to manually\ninspect the tables involved in the remaining JOINs;\nsince you could be confident all uses of -> cannot affect cardinality.\n\nI think this would be a win also for an expert SQL consultant working\nwith a new complex data model never seen before.\n\n> \n> As an example just yesterday some user complained that it's not possible to\n> write a trigger on a table that could intercept inserting a textual value on an\n> integer field and replace it with the referenced value. And he rejected our\n> suggested solution to replace the \"INSERT INTO sometable VALUES...\" with\n> \"INSERT INTO sometable SELECT ...\". And no this proposal would not have\n> changed anything because changing the python script doing the import to add\n> some minimal SQL knowledge was apparently too problematic. Instead he will\n> insert the data in a temporary table and dispatch everything on a per-row\n> basis, using triggers. So here again the problem wasn't the syntax but having\n> to deal with a relational rather than an imperative approach.\n\nSad but a bit funny story. 
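The cardinality guarantee claimed here for foreign-key-based joins can be sketched concretely. The following snippet is an illustration only, using an invented two-table schema in SQLite (via Python), not anything from this thread: a join that follows a foreign key to a primary key (the only kind the proposed -> operator would generate) is many-to-one and preserves the left-hand row count, while the reverse direction can fan out.

```python
import sqlite3

# Illustration (invented schema): why following a FK to a PK can
# never multiply the left-hand row count, unlike the reverse join.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, company_name TEXT);
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers
    );
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (10, 1), (11, 1), (12, 2);
""")

n_orders = con.execute("SELECT count(*) FROM orders").fetchone()[0]
n_customers = con.execute("SELECT count(*) FROM customers").fetchone()[0]

# FK direction (what orders->customers would desugar to): many-to-one,
# so the row count of orders is preserved.
fk_rows = con.execute("""
    SELECT count(*) FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
""").fetchone()[0]

# Reverse direction: one-to-many, 2 customers fan out to 3 rows.
rev_rows = con.execute("""
    SELECT count(*) FROM customers c
    JOIN orders o ON o.customer_id = c.customer_id
""").fetchone()[0]

print(n_orders, fk_rows, n_customers, rev_rows)  # 3 3 2 3
```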
I guess some people cannot learn from others mistake,\nbut insist on shooting themselves in the foot first.\n\nI understand it must feel wasteful and hopeless trying to educate such users.\nMaybe we could recycle the invested energy into such conversations,\nby creating a wiki-page for each such anti-pattern, so that each new attempt\nat explaining hopefully eventually leads to sufficient information for anyone\nto understand why X is a bad idea.\n\n/Joel\n", "msg_date": "Wed, 31 Mar 2021 11:18:52 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": ">\n>\n>\n> If using the -> notation, you would only need to manually\n> inspect the tables involved in the remaining JOINs;\n> since you could be confident all uses of -> cannot affect cardinality.\n>\n> I think this would be a win also for an expert SQL consultant working\n> with a new complex data model never seen before.\n>\n>\nI did not feel comfortable when I read about this proprietary extension of\nSQL. I can accept and it can be nice to support ANSI/SQL object's\nreferentions, but implementing own syntax for JOIN looks too strange. I\ndon't see too strong benefit in inventing new syntax and increasing the\ncomplexity and possible disorientation of users about correct syntax. Some\nusers didn't adopt a difference between old joins and modern joins, and you\nare inventing a third syntax.\n\nPavel\n\n
", "msg_date": "Wed, 31 Mar 2021 18:54:00 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, Mar 31, 2021 at 5:19 PM Joel Jacobson <joel@compiler.org> wrote:\n>\n> If using the -> notation, you would only need to manually\n> inspect the tables involved in the remaining JOINs;\n> since you could be confident all uses of -> cannot affect cardinality.\n\nTalking about that, do you have some answers to the points raised in\nmy previous mail, which is how it's supposed to behave when a table is\nboth join using your \"->\" syntax and a plain JOIN, how to join the\nsame table multiple time using this new syntax, and how to add\npredicates to the join clause using this new syntax.\n\n> I think this would be a win also for an expert SQL consultant working\n> with a new complex data model never seen before.\n\nBy experience if the queries are written with ANSI JOIN it's not\nreally a problem. And if it's a new complex data model that was never\nseen, I would need to inspect the data model first anyway to\nunderstand what the query is (or should be) doing.\n\n\n", "msg_date": "Thu, 1 Apr 2021 01:16:44 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "SAP has implemented something similar all across their stack. In their HANA database, application platform ABAP and also their cloud. So clearly they find it very popular:-) It is called CDS (Core Data Services) views. 
Here is a quick overview:\n- Superset of SQL to declare views and associations between views. They are views with sort of named joins. The code is parsed and stored as a normal SQL view as well as metadata. Note this metadata is not technically part of the database SQL layer but rather the database application layer. The user normally sees no difference.\n- Superset of SQL to query in a very similar way as described above with paths. This is parsed to normal SQL with joins, taking into consideration above metadata. Can only work on the above views.\nThis has obvious limitations, most mentioned earlier in this thread. Specifically, join types are limited. Still it considerably eases the pain of writing queries. The SAP system I work on now has 400K tables and over 1 million fields. Most keys are composite. One needs to be a superhero to keep that data model in memory...\nThis might be an extreme case but I'm sure there are other use cases. SAP technical users are actually quite happy to work with it since, in my humble opinion, it is in a way SQL light. The nice data model parts without the pesky complicated stuff.\nIt is not really an ORM but it makes ORM work significantly simpler by keeping the metadata on the database. The hierarchical CRUD stuff of ORM legend is squarely out of scope.\nI've been seeing this type of question appearing regularly in this forum and maybe this SAP way holds water as a solution? In that case, the work should probably be in user land but close to the database. Maybe as an extension with a SQL preprocessor? Much of the grammar and parsing work is already available, at least as inspiration.\nMartin\n\nOn Monday, 29 March 2021 at 12:48:58 UTC+2, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\nOn Mon, 29 Mar 2021 at 12:01, Joel Jacobson <joel@compiler.org> wrote:\n\nOn Sun, Mar 28, 2021, at 16:04, Tom Lane wrote:\n\nI'm imagining a syntax in which\nyou give the constraint name instead of the column name.  
Thought\nexperiment: how could the original syntax proposal make any use of\na multi-column foreign key?\n\n\nThanks for coming up with this genius idea.\n\nAt first I didn't see the beauty of it; I wrongly thought the constraint name needed to be\nunique per schema, but I realize we could just use the foreign table's name\nas the constraint name, which will allow a nice syntax:\n\nSELECT DISTINCT order_details.orders.customers.company_name\nFROM order_details\nWHERE order_details.products.product_name = 'Chocolade';\n\n\nThis syntax is similar to Oracle's object references (this is example from thread from Czech Postgres list last week)\n\nSelect e.last_name employee,\n       e.department_ref.department_name department,\n       e.department_ref.manager_ref.last_name dept_manager\n From employees_obj e\nwhere e.initials() like 'K_';\nI see few limitations: a) there is not support for outer join, b) there is not support for aliasing - and it probably doesn't too nice, when you want to returns more (but not all) columns\nRegards\nPavel\n\n\n\n\n\nGiven this data model:\n\nCREATE TABLE customers (\ncustomer_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\ncompany_name text,\nPRIMARY KEY (customer_id)\n);\n\nCREATE TABLE orders (\norder_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\ncustomer_id bigint NOT NULL,\nPRIMARY KEY (order_id),\nCONSTRAINT customers FOREIGN KEY (customer_id) REFERENCES customers\n);\n\nCREATE TABLE products (\nproduct_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,\nproduct_name text NOT NULL,\nPRIMARY KEY (product_id)\n);\n\nCREATE TABLE order_details (\norder_id bigint NOT NULL,\nproduct_id bigint NOT NULL,\nPRIMARY KEY (order_id, product_id),\nCONSTRAINT orders FOREIGN KEY (order_id) REFERENCES orders,\nCONSTRAINT products FOREIGN KEY (product_id) REFERENCES products\n);\n\n\n> Not saying I think this suggestion is a good idea, though. We've seen\n> many frameworks that hide joins, and the results are ... 
less than\n> universally good.\n\nYeah, I'm pretty much not sold on this idea either.  I think it would\nlead to the same problems we see with ORMs, namely that people write\nqueries that are impossible to execute efficiently and then blame\nthe database for their poor choice of schema.\n\n\nI think this concern is valid for the original syntax,\nbut I actually think the idea on using foreign key constraint names\neffectively solves an entire class of query writing bugs.\n\nUsers writing queries using this syntax are guaranteed to be aware\nof the existence of the foreign keys, otherwise they couldn't write\nthe query this way, since they must use the foreign key\nconstraint names in the path expression.\n\nThis ensures it's not possible to produce a nonsensical JOIN\non the wrong columns, a problem for which traditional JOINs\nhave no means to protect against.\n\nEven with foreign keys, indexes could of course be missing,\ncausing an inefficient query anyway, but at least the classes\nof potential problems is reduced by one.\n\nI think what's neat is how this syntax works excellent in combination\nwith traditional JOINs, allowing the one which feels most natural for\neach part of the query to be used.\n\nLet's also remember foreign keys did first appear in SQL-89,\nso they couldn't have been taken into account when SQL-86\nwas designed. Maybe they would have came up with the idea\nof making more use of foreign key constraints,\nif they would have been invented from the very beginning.\n\nHowever, it's not too late to fix this, it seems doable without\nbreaking any backwards compatibility. I think there is a risk\nour personal preferences are biased due to being experienced\nSQL users. 
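The desugaring implied by the constraint-name path syntax discussed here can be sketched mechanically. The snippet below is a hypothetical illustration only: the FKS catalog is hand-written for the thread's example schema (a real implementation would consult pg_constraint), and desugar is an invented helper, not proposed PostgreSQL behavior. LEFT JOIN is used to match the left-join semantics favored earlier in the thread.

```python
# Hypothetical sketch: turn a constraint-name path such as
# order_details.orders.customers.company_name into explicit joins.
# Catalog entry: (from_table, constraint_name) -> (fk_col, ref_table, pk_col)
FKS = {
    ("order_details", "orders"): ("order_id", "orders", "order_id"),
    ("order_details", "products"): ("product_id", "products", "product_id"),
    ("orders", "customers"): ("customer_id", "customers", "customer_id"),
}

def desugar(base_table, path, column):
    """Expand base.fk1.fk2....column into FROM base LEFT JOIN ... SQL."""
    joins, current = [], base_table
    for fk_name in path:
        fk_col, ref_table, pk_col = FKS[(current, fk_name)]
        joins.append(
            f"LEFT JOIN {ref_table} ON {ref_table}.{pk_col} = {current}.{fk_col}"
        )
        current = ref_table  # the path now continues from the referenced table
    return f"SELECT {current}.{column}\nFROM {base_table}\n" + "\n".join(joins)

print(desugar("order_details", ["orders", "customers"], "company_name"))
```

For the Chocolade example this emits a SELECT over customers with one LEFT JOIN per path step, each keyed on the foreign key's column pair.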
I think it's likely newcomers to SQL would really\nfancy this proposed syntax, and cause them to prefer PostgreSQL\nover some other NoSQL product.\n\nIf we can provide such newcomers with a built-in solution,\nI think that better than telling them they should\nuse some ORM/tool/macro to simplify their query writing.\n\n/Joel\n
", "msg_date": "Wed, 31 Mar 2021 19:24:23 +0000 (UTC)", "msg_from": "Martin Jonsson <martinerikjonsson@yahoo.fr>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, Mar 31, 2021, at 19:16, Julien Rouhaud wrote:\n> On Wed, Mar 31, 2021 at 5:19 PM Joel Jacobson <joel@compiler.org <mailto:joel%40compiler.org>> wrote:\n> >\n> > If using the -> notation, you would only need to manually\n> > inspect the tables involved in the remaining JOINs;\n> > since you could be confident all uses of -> cannot affect cardinality.\n> \n> Talking about that, do you have some answers to the points raised in\n> my previous mail, which is how it's supposed to behave when a table is\n> both join using your \"->\" syntax and a plain JOIN, how to join the\n> same table multiple time using this new syntax, and how to add\n> predicates to the join clause using this new syntax.\n\nIt's tricky, I don't see a good solution.\n\nMy original proposal aimed to improve syntax conciseness.\nWhile this would be nice, I see much more potential value in Tom's idea\nof somehow making use of foreign key constrain names.\n\nInstead of trying to hack it into the <select list> part of a query,\nmaybe it's more fruitful to see if we can find a way to integrate it into the <from clause>.\n\nPerhaps something along the lines of what Vik suggested earlier:\n> FROM a JOIN b WITH a_b_fk\n\nThe problem I have with the above is \"b\" is redundant information,\nsince the foreign key is always between two specific tables,\nand given \"a\" and 
\"a_b_fk\" we know we are joining \"b\".\n\nI would prefer a new chainable binary operator.\n\nPavel raised some concerns about using \"->\" since used by the standard already,\nbut perhaps it is less of a problem when only used in the <from clause>?\nOtherwise we could use something else entirely.\n\nHere comes some ideas on <from clause> syntax.\n\nWith default foreign key constraint names:\n\nSELECT DISTINCT customers.company_name\nFROM order_details->order_details_product_id_fkey AS products\nJOIN order_details->order_details_order_id_fkey->orders_customer_id_fkey AS customers\nWHERE products.product_name = 'Chocolade';\n\nIn a PostgreSQL-only environment, foreign keys could be renamed:\n\nALTER TABLE orders RENAME CONSTRAINT orders_customer_id_fkey TO customers;\nALTER TABLE order_details RENAME CONSTRAINT order_details_order_id_fkey TO orders;\nALTER TABLE order_details RENAME CONSTRAINT order_details_product_id_fkey TO products;\n\nThen we would get:\n\nSELECT DISTINCT customers.company_name\nFROM order_details->products\nJOIN order_details->orders->customers\nWHERE products.product_name = 'Chocolade';\n\nWhich would be the same thing as:\n\nSELECT DISTINCT customers.company_name\nFROM order_details\nJOIN order_details->products\nJOIN order_details->orders\nJOIN orders->customers\nWHERE products.product_name = 'Chocolade';\n\nType of join can be specified as well as aliases, just like normal:\n\nSELECT DISTINCT c.company_name\nFROM order_details AS od\nJOIN od->products AS p\nFULL JOIN od->orders AS o\nLEFT JOIN o->customers AS c\nWHERE p.product_name = 'Chocolade';\n\n(FULL and LEFT join makes no sense in this example, but just to illustrate join types works just like normal)\n\nI don't know how challenging this would be to integrate into the grammar though.\nHere are some other ideas which might be easier to parse:\n\nSELECT DISTINCT customers.company_name\nFROM order_details->products\nJOIN ON order_details->orders->customers\nWHERE products.product_name 
= 'Chocolade';\n\nSELECT DISTINCT customers.company_name\nFROM order_details->products\nJOIN USING order_details->orders->customers\nWHERE products.product_name = 'Chocolade';\n\nSELECT DISTINCT customers.company_name\nFROM order_details->products\nJOIN WITH order_details->orders->customers\nWHERE products.product_name = 'Chocolade';\n\nMore syntax ideas?\n\nSemantic ideas:\n\n* When chaining, all joins on the chain would be made of the same type.\n* To use different join types, you would write a separate join.\n* All tables joined in the chain, would be accessible in the <select list>, via the names of the foreign key constraints.\n* Only the last link on the chain can be given an alias. If you want to alias something in the middle, split the chain into two separate joins (, so that the one in the middle becomes the last one, which can then be given an alias.)\n \nThoughts?\n\n/Joel\n", "msg_date": "Wed, 31 Mar 2021 21:32:09 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, Mar 31, 2021, at 21:32, Joel Jacobson wrote:\n> SELECT DISTINCT customers.company_name\n> FROM order_details->products\n> JOIN order_details->orders->customers\n> WHERE products.product_name = 'Chocolade';\n\nHm, maybe the operator shouldn't be allowed directly after FROM, but only used with a join:\n\nSELECT DISTINCT customers.company_name\nFROM order_details\nJOIN order_details->orders->customers\nJOIN order_details->products\nWHERE products.product_name = 'Chocolade';\n\n/Joel\n
order_details->orders->customersJOIN order_details->productsWHERE products.product_name = 'Chocolade';/Joel", "msg_date": "Wed, 31 Mar 2021 21:49:52 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On Wed, 31 Mar 2021 at 15:32, Joel Jacobson <joel@compiler.org> wrote:\n\n> On Wed, Mar 31, 2021, at 19:16, Julien Rouhaud wrote:\n>\n> On Wed, Mar 31, 2021 at 5:19 PM Joel Jacobson <joel@compiler.org> wrote:\n> >\n> > If using the -> notation, you would only need to manually\n> > inspect the tables involved in the remaining JOINs;\n> > since you could be confident all uses of -> cannot affect cardinality.\n>\n> Talking about that, do you have some answers to the points raised in\n> my previous mail, which is how it's supposed to behave when a table is\n> both join using your \"->\" syntax and a plain JOIN, how to join the\n> same table multiple time using this new syntax, and how to add\n> predicates to the join clause using this new syntax.\n>\n>\n> It's tricky, I don't see a good solution.\n>\n> My original proposal aimed to improve syntax conciseness.\n> While this would be nice, I see much more potential value in Tom's idea\n> of somehow making use of foreign key constrain names.\n>\n\nMaybe I have a different proposal in mind than anybody else, but I don't\nthink there is a problem with multiple joins to the same table. If the\njoins are via the same constraint, then a single join is enough, and if\nthey are via different constraints, the constraints have unique names.\n\nI think if TA is a table with a foreign key constraint CB to another table\nTB, then the hypothetical expression:\n\nTA -> CB\n\nreally just means:\n\n(select TB from TB where (TB.[primary key columns) = (TA.[source columns of\nconstraint CB]))\n\nYou can then add .fieldname to get the required fieldname. 
The issue is\nthat writing it this way is hopelessly verbose, but the short form is fine.\nThe query planner also needs to be guaranteed to collapse multiple\nreferences through the same constraint to a single actual join (and then\ntake all the multiple fields requested).\n\nIf TA is a table with a foreign key constraint CB to TB, which has a\nforeign key constraint CC to TC, then this expression:\n\nTA -> CB -> CC\n\njust means, by the same definition (except I won't expand it fully, only\none level):\n\n(select TC from TC where (TC.[primary key columns) = ((TA -> CB).[source\ncolumns of constraint CC]))\n\nWhich reminds me, I often find myself wanting to write something like\na.(f1, f2, f3) = b.(f1, f2, f3) rather than (a.f1, a.f2, a.f3) = (b.f1,\nb.f2, b.f3). But that's another story.\n", "msg_date": "Wed, 31 Mar 2021 16:25:15 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" },
{ "msg_contents": "On Wed, Mar 31, 2021, at 22:25, Isaac Morland wrote:\n> \n> Maybe I have a different proposal in mind than anybody else, but I don't think there is a problem with multiple joins to the same table. 
If the joins are via the same constraint, then a single join is enough, and if they are via different constraints, the constraints have unique names.\n> \n> I think if TA is a table with a foreign key constraint CB to another table TB, then the hypothetical expression:\n> \n> TA -> CB\n> \n> really just means:\n> \n> (select TB from TB where (TB.[primary key columns) = (TA.[source columns of constraint CB]))\n> \n> You can then add .fieldname to get the required fieldname. The issue is that writing it this way is hopelessly verbose, but the short form is fine. The query planner also needs to be guaranteed to collapse multiple references through the same constraint to a single actual join (and then take all the multiple fields requested).\n> \n> If TA is a table with a foreign key constraint CB to TB, which has a foreign key constraint CC to TC, then this expression:\n> \n> TA -> CB -> CC\n> \n> just means, by the same definition (except I won't expand it fully, only one level):\n> \n> (select TC from TC where (TC.[primary key columns) = ((TA -> CB).[source columns of constraint CC]))\n> \n> Which reminds me, I often find myself wanting to write something like a.(f1, f2, f3) = b.(f1, f2, f3) rather than (a.f1, a.f2, a.f3) = (b.f1, b.f2, b.f3). But that's another story\n\nMaybe “anonymous join” would be a good name for this, similar to anonymous functions. The joined table(s) would not pollute the namespace.\n\n/Joel\nOn Wed, Mar 31, 2021, at 22:25, Isaac Morland wrote:Maybe I have a different proposal in mind than anybody else, but I don't think there is a problem with multiple joins to the same table. 
If the joins are via the same constraint, then a single join is enough, and if they are via different constraints, the constraints have unique names.I think if TA is a table with a foreign key constraint CB to another table TB, then the hypothetical expression:TA -> CBreally just means:(select TB from TB where (TB.[primary key columns) = (TA.[source columns of constraint CB]))You can then add .fieldname to get the required fieldname. The issue is that writing it this way is hopelessly verbose, but the short form is fine. The query planner also needs to be guaranteed to collapse multiple references through the same constraint to a single actual join (and then take all the multiple fields requested).If TA is a table with a foreign key constraint CB to TB, which has a foreign key constraint CC to TC, then this expression:TA -> CB -> CCjust means, by the same definition (except I won't expand it fully, only one level):(select TC from TC where (TC.[primary key columns) = ((TA -> CB).[source columns of constraint CC]))Which reminds me, I often find myself wanting to write something like a.(f1, f2, f3) = b.(f1, f2, f3) rather than (a.f1, a.f2, a.f3) = (b.f1, b.f2, b.f3). But that's another storyMaybe “anonymous join” would be a good name for this, similar to anonymous functions. 
The joined table(s) would not pollute the namespace./Joel", "msg_date": "Wed, 31 Mar 2021 23:16:29 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" }, { "msg_contents": "On 3/31/21 6:54 PM, Pavel Stehule wrote:\n>>\n>>\n>>\n>> If using the -> notation, you would only need to manually\n>> inspect the tables involved in the remaining JOINs;\n>> since you could be confident all uses of -> cannot affect cardinality.\n>>\n>> I think this would be a win also for an expert SQL consultant working\n>> with a new complex data model never seen before.\n>>\n>>\n> I did not feel comfortable when I read about this proprietary extension of\n> SQL. I can accept and it can be nice to support ANSI/SQL object's\n> referentions, but implementing own syntax for JOIN looks too strange. I\n> don't see too strong benefit in inventing new syntax and increasing the\n> complexity and possible disorientation of users about correct syntax. 
Some\n> users didn't adopt a difference between old joins and modern joins, and you\n> are inventing a third syntax.\n\nI'm with you on this: let's do it the Standard way, or not do it at all.\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 31 Mar 2021 23:18:14 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" },
{ "msg_contents": "In a Hacker News discussion [2] on using foreign keys for joins,\nthe author of PostgREST, Steve Chavez, mentioned they are actually already\nusing this idea in PostgREST:\n\nSteve Chavez wrote:\n>The idea about using FK as a JOIN target is interesting.\n>While developing a syntax for PostgREST resource embedding[1],\n>I also reached the conclusion that FKs would be a convenient way to join tables\n>(also suggested renaming them as you do here).\n>\n>IIRC, self joins are still an issue with FK joining.\n>\n>[1]: https://postgrest.org/en/v7.0.0/api.html#embedding-disambiguation\n\nI think this idea looks very promising and fruitful.\n\nMaybe we can think of some other existing/new operator which would be acceptable,\ninstead of using \"->\" (which is potentially in conflict with the SQL standard's \"REF\" thing)?\n\nNot saying we should move forward on our own with this idea,\nbut if we can come up with a complete proposal,\nmaybe it can be presented as an idea to the SQL committee?\n\n[2] https://news.ycombinator.com/item?id=26693705\n\n/Joel\n", "msg_date": "Tue, 06 Apr 2021 10:11:03 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: Idea: Avoid JOINs by using path expressions to follow FKs" } ]
[ { "msg_contents": "Hi -hackers,\n\nThis is my first patch here in the mailing list, so I tried to explain the\n\"why\" and the \"how\" of this enhancement.\n\n\n*Needs :*\nFormerly, Oracle DBA, I used to query v$sql to know the latest queries\nissued by each session with their timestamp. I found postgresql\npg_stat_statements very useful for this need, but the aggregation did not\nalways permitted me to analyze correctly the queries issued (at least for a\nbuffer stats per query overview). So, I enhanced the existing\npg_stat_statements.\n\n*Changes overview :*\n - new configuration pg_stat_statements.track_every = (TRUE|FALSE)\n -> generating per query data in a new view : pg_stat_sql\n -> can be resetted by using : pg_stat_sql_reset(userid,dbid,queryid)\n - added the timestamp per query to the view (replacing the number of calls\nof pg_stat_statements)\n -> the view column itself :\n\n*userid oid,*\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n*dbid oid,queryid bigint,query text,start timestamp,\n*total_time float8,rows int8,shared_blks_hit int8,\nshared_blks_read int8, shared_blks_dirtied int8,\nshared_blks_written int8, local_blks_hit int8, local_blks_read\nint8, local_blks_dirtied int8, local_blks_written int8,\ntemp_blks_read int8, temp_blks_written int8, blk_read_time\nfloat8, blk_write_time float8, wal_records int8, wal_fpi\nint8, wal_bytes numeric*\n\n\n - data are stored in a hash in a new shared memory area.\n - query texts are still stored in the same file.\n\nThe goal was to avoid generating too much data with track_every option\nenabled.\n\n - added some tests to the sql/pg_stat_statements.sql\n - added views to pg_stat_statements--1.7--1.8.sql\n\n*Bug fix :*\n - with UTF8 encoding, the \"\\0\" to delimit the end of the query text was\nbuggy; modified to query[query_len]=0;\n\n*Note :*\nI didn't want to change version number by myself, the attached files are\nstill pointing to 1.8\nThis is my first code for pgsql.\n\n\nI wanted to share with you this 
enhancement, hope you'll find it useful.\n\n-- \nRegards,\nYoan SULTAN", "msg_date": "Sun, 28 Mar 2021 09:05:10 +0200", "msg_from": "Yoan SULTAN <sultanyoan@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Enhancements to pg_stat_statements contrib extension" }, { "msg_contents": "Hi,\n\nOn Sun, Mar 28, 2021 at 09:05:10AM +0200, Yoan SULTAN wrote:\n> \n> *Needs :*\n> Formerly, Oracle DBA, I used to query v$sql to know the latest queries\n> issued by each session with their timestamp. I found postgresql\n> pg_stat_statements very useful for this need, but the aggregation did not\n> always permitted me to analyze correctly the queries issued (at least for a\n> buffer stats per query overview). So, I enhanced the existing\n> pg_stat_statements.\n\nI'm not an Oracle DBA so I may be lacking some background, but I don't really\nunderstand what this feature is about. Is it about having the statistics for\nthe last query execution for each backend, or simply the last N queries\nexecuted even if they're all from the same backend? 
More details of what this\nfeature should do and how it should interact with existing version would be\nwelcome.\n\n> *Changes overview :*\n> - new configuration pg_stat_statements.track_every = (TRUE|FALSE)\n> -> generating per query data in a new view : pg_stat_sql\n> -> can be resetted by using : pg_stat_sql_reset(userid,dbid,queryid)\n> - added the timestamp per query to the view (replacing the number of calls\n> of pg_stat_statements)\n> -> the view column itself :\n> \n> *userid oid,*\n> *dbid oid,queryid bigint,query text,start timestamp,\n> *total_time float8,rows int8,shared_blks_hit int8,\n> shared_blks_read int8, shared_blks_dirtied int8,\n> shared_blks_written int8, local_blks_hit int8, local_blks_read\n> int8, local_blks_dirtied int8, local_blks_written int8,\n> temp_blks_read int8, temp_blks_written int8, blk_read_time\n> float8, blk_write_time float8, wal_records int8, wal_fpi\n> int8, wal_bytes numeric*\n> \n> \n> - data are stored in a hash in a new shared memory area.\n> - query texts are still stored in the same file.*\n\nIf this is about storing the last query (or N queries), shouldn't it be stored\non some kind of ring buffer rather than a hash table? I also don't understand\nwhat is the EVERY_FACTOR supposed to do, given that pgss_every_max seems to be\nused to request more memory but then never used to actually store additional\nentries in the hash table.\n\n> *Bug fix :*\n> - with UTF8 encoding, the \"\\0\" to delimit the end of the query text was\n> buggy; modified to query[query_len]=0;\n\nDo you have an example on how to reproduce that?\n\nAlso, if there's a bug it should be fixed outside of a new feature proposal.\n\n> *Note :*\n> I didn't want to change version number by myself, the attached files are\n> still pointing to 1.8\n> This is my first code for pgsql.\n\nI tried to have a look at the patch but it's unfortunately a bit hard to do\nwith the current format. You should instead submit a diff (e.g. 
with \"git\ndiff\") or a patch (git format-patch) to ease review, ideally with a version\nnumber. If you're looking for pointers you can look at\nhttps://wiki.postgresql.org/wiki/Submitting_a_Patch, and for larger\ndocumentation maybe\nhttps://wiki.postgresql.org/wiki/So,_you_want_to_be_a_developer%3F and\nhttps://wiki.postgresql.org/wiki/Developer_FAQ.\n\n\n", "msg_date": "Sun, 28 Mar 2021 21:58:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Enhancements to pg_stat_statements contrib extension" } ]
[ { "msg_contents": "Dear mentors,\n\nI am a third-year undergrad student at University of Toronto. I am very interested to do a project in the context of Google Summer of Code with mentors from PostgreSQL. I am mostly interested in working on improving pgeu-system for Conference Management. I would like to gain valuable skills in web development and contribute to the open-source PostgreSQL community under the guidance of mentors. I have attached my resume in this email.\n\nI wonder if I could have the opportunity to have a 15 minute chat with you to discuss the details of the project with you. I have had the experience of writing a 100 pages guide for immigrants in Quebec by myself. Although this project is not related to computer science, I have learnt how to carry out planning to complete a large-scale project.\n\nThank you very much,\nJenny Zi Yi Xu", "msg_date": "Sun, 28 Mar 2021 22:52:28 +0000", "msg_from": "Zi Yi Xu <jennyziyi.xu@mail.utoronto.ca>", "msg_from_op": true, "msg_subject": "Pgsql Google Summer of Code " }, { "msg_contents": "Greetings!\n\n* Zi Yi Xu (jennyziyi.xu@mail.utoronto.ca) wrote:\n> I am a third-year undergrad student at University of Toronto. I am very interested to do a project in the context of Google Summer of Code with mentors from PostgreSQL. I am mostly interested in working on improving pgeu-system for Conference Management. I would like to gain valuable skills in web development and contribute to the open-source PostgreSQL community under the guidance of mentors. I have attached my resume in this email.\n\nI'd suggest you review the GitHub issues here:\n\nhttps://github.com/pgeu/pgeu-system/issues\n\n> I wonder if I could have the opportunity to have a 15 minute chat with you to discuss the details of the project with you. I have had the experience of writing a 100 pages guide for immigrants in Quebec by myself. 
Although this project is not related to computer science, I have learnt how to carry out planning to complete a large-scale project.\n\nFeel free to reach out to me directly (off-list) and we can chat.\n\nThanks!\n\nStephen\n(PG GSoC Admin)", "msg_date": "Mon, 29 Mar 2021 12:30:34 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Pgsql Google Summer of Code" } ]
[ { "msg_contents": "Hi,\nI was looking at:\n Cache if PathTarget and RestrictInfos contain volatile functions\n\nVOLATILITY_NOVOLATILE caught my attention. Since the enum values don't\nstart with HAS, I think VOLATILITY_NO*N*VOLATILE would be easier to read.\nActually I think since the enums are defined in VolatileFunctionStatus,\nthey can be simply called (the prefix should be redundant):\n\nUNKNOWN\nNONVOLATILE\nVOLATILE\n\nThanks\n\nHi,I was looking at:  Cache if PathTarget and RestrictInfos contain volatile functionsVOLATILITY_NOVOLATILE caught my attention. Since the enum values don't start with HAS, I think VOLATILITY_NONVOLATILE would be easier to read.Actually I think since the enums are defined in VolatileFunctionStatus, they can be simply called (the prefix should be redundant):UNKNOWNNONVOLATILEVOLATILEThanks", "msg_date": "Sun, 28 Mar 2021 19:18:17 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": true, "msg_subject": "name of enum used in 'Cache if PathTarget and RestrictInfos contain\n volatile functions'" }, { "msg_contents": "On Mon, Mar 29, 2021 at 1:15 PM Zhihong Yu <zyu@yugabyte.com> wrote:\n>\n> Hi,\n> I was looking at:\n> Cache if PathTarget and RestrictInfos contain volatile functions\n>\n> VOLATILITY_NOVOLATILE caught my attention. 
Since the enum values don't start with HAS, I think VOLATILITY_NONVOLATILE would be easier to read.\n> Actually I think since the enums are defined in VolatileFunctionStatus, they can be simply called (the prefix should be redundant):\n>\n> UNKNOWN\n> NONVOLATILE\n> VOLATILE\n>\n\nAlthough it seems like a good idea to remove prefixes, a name as\ncommon as UNKNOWN is going to clash [1] with something else, which\nIIUC is why the enums all have prefixes in the first place.\n\n------\n[1] https://stackoverflow.com/questions/35380279/avoid-name-collisions-with-enum-in-c-c99\n\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Mon, 29 Mar 2021 16:16:40 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: name of enum used in 'Cache if PathTarget and RestrictInfos\n contain volatile functions'" } ]
[ { "msg_contents": "While working on async execution, I noticed $subject. Attached is a\nsmall patch for that.\n\nBest regards,\nEtsuro Fujita", "msg_date": "Mon, 29 Mar 2021 19:15:55 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Obsolete comment in postgres_fdw.c" }, { "msg_contents": "On Mon, Mar 29, 2021 at 7:15 PM Etsuro Fujita <etsuro.fujita@gmail.com> wrote:\n> While working on async execution, I noticed $subject. Attached is a\n> small patch for that.\n\nApplied to all supported branches.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Tue, 30 Mar 2021 13:10:31 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Obsolete comment in postgres_fdw.c" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing the thread about issues with auto-analyze on partitioned\ntables [1] I remembered that the way we build statistics on the parent\nis somewhat expensive, because it samples rows from the children again.\n\nIt's true we sample much smaller amounts of rows from each partition\n(proportional to it's size), so it's not as expensive as simply running\nANALYZE on all partitions individually. But it's not free either, and in\nsome cases (e.g. with partitioning by time) it may touch old data that\nis not touched by regular queries, so likely not cached etc. That's a\nbit against the idea to use partitioning to \"segregate\" old data.\n\nOne reason why the ANALYZE costs are not a huge issue in practice is\nthat we're not actually triggering that - changes_since_analyze is not\nupdated for the parent, so autoanalyze does not realize it needs to do\nanything, and we never rebuild the statistics :-( But as shown in [2],\nthat may lead to poor estimates (and bad plans) in cases when we rely on\nthe parent's stats (probably due to not expanding the list of children\nearly enough).\n\n(Note: I wonder if we might simply not rely on the parent stats at all,\nand always consult directly the children stats - but with many children\nthat's likely problematic/expensive, and it does have the same issues as\nthe merging.)\n\nThe other thread [1] attempts to fix that by incrementing the counters\nfor the parent too, but that'll make this much worse. firstly, with\nmulti-level partitioning we'll have to sample the children repeatedly,\nessentially once for each level. Secondly, it's tricky to control when\nexactly are the counters propagated to the parent, making the parent\nanalyzes more frequent. 
Not great.\n\nEven if we do a great job in [1] and come up with smart heuristics,\nthere will always be cases where it does not work too well and we either\nanalyze too late or too often.\n\nNote: Propagating changes_since_analyze is only part of the story, as it\ndoes not do anything about stats after attach/detach of a partition.\n\n\nThis attempts to approach the problem from the other end - instead of\ntightly controlling when to analyze the parent, it makes the analyze\nmuch cheaper. That means we don't need to worry too much about doing the\nanalyze too often, and we can update the stats more aggressively.\n\nSo, how does it work? Well, it simply fetches the statistics for all\nchildren, and merges them together. For most statistics that's fairly\nsimple for most statistics types.\n\n1) stawidth, stanullfrac - Those are trivial to merge.\n\n2) stacorrelation - Not sure, I've used weighted average for now.\nPerhaps we could store the \"internal\" counters (sumX, sumX2) which would\nallow us to calculate regular estimate for the parent.\n\n3) stadistinct - This is quite problematic. We only have the per-child\nestimates, and it's not clear if there's any overlap. For now I've just\nsummed it up, because that's safer / similar to what we do for gather\nmerge paths etc. Maybe we could improve this by estimating the overlap\nsomehow (e.g. from MCV lists / histograms). But honestly, I doubt the\nestimates based on tiny sample of each child are any better. 
I suppose\nwe could introduce a column option, determining how to combine ndistinct\n(similar to how we can override n_distinct itself).\n\n4) MCV - It's trivial to build a new \"parent\" MCV list, although it may\nbe too large (in which case we cut it at statistics target, and copy the\nremaining bits to the histogram)\n\n5) histograms - A bit more complicated, because it requires dealing with\noverlapping bins, so we may have to \"cut\" and combine them in some way.\nIf we assume that cutting a bin in K parts means each part has 1/K\ntuples (no matter where exactly we cut it), then it's trivial and it\nworks just fine in practice. That's because with N children, each bin\nactually represents 1.0/(target*N) of the tuples, so the errors are\nquite limited.\n\n\nThe attached patch is a PoC - it should work, but there's plenty of room\nfor improvement. It only deals with \"regular\" per-column statistics, not\nwith extended stats (but I don't see why it wouldn't work for them too).\n\nIt adds a new analyze option \"MERGE\" which does not sample the children\nbut instead just combines the statistics. 
So the example from [2] looks\nlike this:\n\n======================================================================\ncreate table p (i integer, j integer) partition by list (i);\n\ncreate table p0 partition of p for values in (0);\ncreate table p1 partition of p for values in (1);\n\ninsert into p select 0,generate_series(1,1000);\ninsert into p select 1,generate_series(1,1000);\n\nanalyze p0;\nanalyze p1;\n\ncreate table q (i integer);\ninsert into q values (0);\nanalyze q;\n\ntest=# explain select * from q join p using (i) where j\nbetween 1 and 500;\n QUERY PLAN\n---------------------------------------------------------------------\n Hash Join (cost=1.02..49.82 rows=5 width=8)\n Hash Cond: (p.i = q.i)\n -> Append (cost=0.00..45.00 rows=1000 width=8)\n -> Seq Scan on p0 p_1 (cost=0.00..20.00 rows=500 width=8)\n Filter: ((j >= 1) AND (j <= 500))\n -> Seq Scan on p1 p_2 (cost=0.00..20.00 rows=500 width=8)\n Filter: ((j >= 1) AND (j <= 500))\n -> Hash (cost=1.01..1.01 rows=1 width=4)\n -> Seq Scan on q (cost=0.00..1.01 rows=1 width=4)\n(9 rows)\n\ntest=# analyze (merge) p;\nANALYZE\ntest=# explain select * from q join p using (i) where j\nbetween 1 and 500;\n QUERY PLAN\n---------------------------------------------------------------------\n Hash Join (cost=1.02..54.77 rows=500 width=8)\n Hash Cond: (p.i = q.i)\n -> Append (cost=0.00..45.00 rows=1000 width=8)\n -> Seq Scan on p0 p_1 (cost=0.00..20.00 rows=500 width=8)\n Filter: ((j >= 1) AND (j <= 500))\n -> Seq Scan on p1 p_2 (cost=0.00..20.00 rows=500 width=8)\n Filter: ((j >= 1) AND (j <= 500))\n -> Hash (cost=1.01..1.01 rows=1 width=4)\n -> Seq Scan on q (cost=0.00..1.01 rows=1 width=4)\n(9 rows)\n\n======================================================================\n\nFWIW I'm not sure we need the separate MERGE mode, but it's an easy way\nto allow both the regular and \"merge\" approach, so that it's possible to\nexperiment and compare the behavior.\n\nOne issue is that this would require some coordination 
between the\nparent/child analyzes. Essentially, any time a child is analyzed, the\nparent should rebuild the stats (to merge from the new child stats).\nThis is similar to the issue of analyzing the parent too often because\nwe don't know when exactly the counters get updated, but it's much\ncheaper to merge the stats, so it's much less problematic.\n\n\nregards\n\n\n[1] https://commitfest.postgresql.org/32/2492/\n\n[2]\nhttps://www.postgresql.org/message-id/CAM-w4HO9hUHvJDVwQ8%3DFgm-znF9WNvQiWsfyBjCr-5FD7gWKGA%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 29 Mar 2021 17:54:25 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Merging statistics from children instead of re-sampling everything" }, { "msg_contents": "Thanks for taking a fresh look at this.\n\nAs you've written it, this can apply to either/both partitioned or inheritence.\nI imagine when \"MERGE\" goes away, this should apply only to partitioned tables.\n(Actually, personally I would advocate to consider applying it to *both*, but I\ndon't think that's been the tendency over the last 4 years. 
I wrote here about\nsome arguably-gratuitous differences between inheritence and partitioning.\nhttps://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com)\n\n> + if (*mcv_items > default_statistics_target)\n> + n = default_statistics_target;\n\nIt should use any non-default stats target of the parent's column\n\n> +\t\t * ignore anything but valid leaf relatiins with data, but release\n\nsp: relatiins.\n\n> +\t\t\t\telog(WARNING, \"stats for %d %d not found\",\n> + RelationGetRelid(rels[j]), vacattrstats[i]->attr->attnum);\n\nshould be %u %d\n\nThis code duplication is undesirable:\n> +\t/* Log the action if appropriate */\n> +\t * Determine which columns to analyze\n\n\n", "msg_date": "Mon, 29 Mar 2021 13:36:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "\n\nOn 3/29/21 8:36 PM, Justin Pryzby wrote:\n> Thanks for taking a fresh look at this.\n> \n> As you've written it, this can apply to either/both partitioned or inheritence.\n> I imagine when \"MERGE\" goes away, this should apply only to partitioned tables.\n> (Actually, personally I would advocate to consider applying it to *both*, but I\n> don't think that's been the tendency over the last 4 years. I wrote here about\n> some arguably-gratuitous differences between inheritence and partitioning.\n> https://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com)\n> \n\nHaven't thought about that too much at this point, but I don't see any\nreason to not apply it only to both cases. I might be missing something,\nbut the fact that with declarative partitioning we analyze the children,\nwhile with inheritance we don't, seems a bit strange. Not sure.\n\n>> + if (*mcv_items > default_statistics_target)\n>> + n = default_statistics_target;\n> \n> It should use any non-default stats target of the parent's column\n> \n\nYeah. 
That's a simplification, the non-PoC code would have to look at\nper-column statistics target for the target / all the children, and make\nsome calculation based on that. I ignored that in the PoC.\n\n>> +\t\t * ignore anything but valid leaf relatiins with data, but release\n> \n> sp: relatiins.\n> \n>> +\t\t\t\telog(WARNING, \"stats for %d %d not found\",\n>> + RelationGetRelid(rels[j]), vacattrstats[i]->attr->attnum);\n> \n> should be %u %d\n> \n> This code duplication is undesirable:\n>> +\t/* Log the action if appropriate */\n>> +\t * Determine which columns to analyze\nYeah. Fine for PoC, but needs to be cleaned up in future patch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 29 Mar 2021 21:24:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "\n\nOn 3/29/21 9:24 PM, Tomas Vondra wrote:\n> \n> \n> On 3/29/21 8:36 PM, Justin Pryzby wrote:\n>> Thanks for taking a fresh look at this.\n>>\n>> As you've written it, this can apply to either/both partitioned or inheritence.\n>> I imagine when \"MERGE\" goes away, this should apply only to partitioned tables.\n>> (Actually, personally I would advocate to consider applying it to *both*, but I\n>> don't think that's been the tendency over the last 4 years. I wrote here about\n>> some arguably-gratuitous differences between inheritence and partitioning.\n>> https://www.postgresql.org/message-id/20180601221428.GU5164@telsasoft.com)\n>>\n> \n> Haven't thought about that too much at this point, but I don't see any\n> reason to not apply it only to both cases. I might be missing something,\n> but the fact that with declarative partitioning we analyze the children,\n> while with inheritance we don't, seems a bit strange. 
Not sure.\n> \n\nBTW I'm not sure the \"merge\" will / should go away. What I'd expect is\nthat we'll keep it, and you can do either \"ANALYZE\" or \"ANALYZE\n(MERGE)\". The former does regular sampling, while the latter does the\nproposed merging of stats.\n\nFor the autovacuum, I think a new autovacuum_analyze_merge GUC and\nreloption would work - chances are people will want to set the default\nand then maybe enable merging only for some cases. Not sure.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:32:53 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "Hi,\n\nI'd like to point out two less obvious things, about how this relates to\nTom's proposal [1] and patch [2] from 2015. Tom approached the problem\nfrom a different direction, essentially allowing Var to be associated\nwith a list of statistics instead of just one.\n\nSo it's a somewhat orthogonal solution, and it has pros and cons. The\npro is that it can ignore statistics for eliminated partitions, thus\nproducing better estimates. The con is that it requires all the places\ndealing with VariableStatData to assume there's a list, not just one,\nmaking the code more complex and more CPU expensive (with sufficiently\nmany partitions).\n\nHowever, it seems to me we could easily combine those two things - we\ncan merge the statistics (the way I proposed here), so that each Var has\nstill just one VariableStatData. That'd mean the various places don't\nneed to start dealing with a list, and it'd still allow ignoring stats\nfor eliminated partitions.\n\nOf course, that assumes the merge is cheaper than processing the list of\nstatistics, but I find that plausible, especially the list needs to be\nprocessed multiple (e.g. 
when considering different join orders, filters\nand so on).\n\nHaven't tried, but might be worth exploring in the future.\n\n\nregards\n\n\n[1] https://www.postgresql.org/message-id/7363.1426537103@sss.pgh.pa.us\n\n[2] https://www.postgresql.org/message-id/22598.1425686096@sss.pgh.pa.us\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:51:37 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "Sorry, I forgot to send CC into pgsql-hackers.\nOn 29/6/21 13:23, Tomas Vondra wrote:\n> Because sampling is fairly expensive, especially if you have to do it \n> for large number of child relations. And you'd have to do that every \n> time *any* child triggers autovacuum, pretty much. Merging the stats is \n> way cheaper.\n> \n> See the other thread linked from the first message.\nMaybe i couldn't describe my idea clearly.\nThe most commonly partitioning is used for large tables.\nI suppose to store a sampling reservoir for each partition, replace on \nupdate of statistics and merge to build statistics for parent table.\nIt can be spilled into tuplestore on a disk, or stored in a parent table.\nIn the case of complex inheritance we can store sampling reservoirs only \nfor leafs.\nYou can consider this idea as an imagination, but the merging statistics \napproach has an extensibility problem on another types of statistics.\n> \n> \n> On 6/29/21 9:01 AM, Andrey Lepikhov wrote:\n>> On 30/3/21 03:51, Tomas Vondra wrote:\n>>> Of course, that assumes the merge is cheaper than processing the list of\n>>> statistics, but I find that plausible, especially the list needs to be\n>>> processed multiple (e.g. when considering different join orders, filters\n>>> and so on).\n>> I think your approach have a chance. 
But I didn't understand: why do \n>> you merge statistics? I think we could merge only samples of each \n>> children and build statistics as usual.\n>> Error of a sample merging procedure would be quite limited.\n>>\n> \n\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Wed, 30 Jun 2021 15:55:38 +0300", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 6/30/21 2:55 PM, Andrey Lepikhov wrote:\n> Sorry, I forgot to send CC into pgsql-hackers.\n> On 29/6/21 13:23, Tomas Vondra wrote:\n>> Because sampling is fairly expensive, especially if you have to do it \n>> for large number of child relations. And you'd have to do that every \n>> time *any* child triggers autovacuum, pretty much. Merging the stats \n>> is way cheaper.\n>>\n>> See the other thread linked from the first message.\n> Maybe i couldn't describe my idea clearly.\n> The most commonly partitioning is used for large tables.\n> I suppose to store a sampling reservoir for each partition, replace on \n> update of statistics and merge to build statistics for parent table.\n> It can be spilled into tuplestore on a disk, or stored in a parent table.\n> In the case of complex inheritance we can store sampling reservoirs only \n> for leafs.\n> You can consider this idea as an imagination, but the merging statistics \n> approach has an extensibility problem on another types of statistics.\n >\n\nWell, yeah - we might try that too, of course. This is simply exploring \nthe \"merge statistics\" idea from [1], which is why it does not even \nattempt to do what you suggested. We may explore the approach with \nkeeping per-partition samples, of course.\n\nYou're right maintaining a per-partition samples and merging those might \nsolve (or at least reduce) some of the problems, e.g. eliminating most \nof the I/O that'd be needed for sampling. 
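[Editor's note: to make the "merge the stats" idea concrete, here is a rough sketch of how the simple per-child column statistics can be combined, weighting each child by its row count. The dict layout and function name are made up for illustration — this is not the actual pg_statistic representation or the PoC patch's code.]

```python
# Sketch: merge per-child column statistics into parent-level statistics,
# weighting each child by its row count. Hypothetical structures only.

def merge_basic_stats(children):
    """children: list of {"reltuples", "null_frac", "avg_width"} dicts."""
    total = sum(c["reltuples"] for c in children)
    if total == 0:
        return {"reltuples": 0, "null_frac": 0.0, "avg_width": 0.0}
    return {
        "reltuples": total,
        # fraction of NULLs: row-count-weighted average of child fractions
        "null_frac": sum(c["null_frac"] * c["reltuples"]
                         for c in children) / total,
        # average width: likewise a row-count-weighted average
        "avg_width": sum(c["avg_width"] * c["reltuples"]
                         for c in children) / total,
    }
```

The same weighting extends naturally to per-value MCV frequencies before cutting the combined list at the statistics target; ndistinct is the part with no such simple weighted combination.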
And yeah, it's not entirely \nclear how to merge some of the statistics types (like ndistinct). But \nfor a lot of the basic stats it works quite nicely, I think.\n\nI'm sure there'll be some complexity due to handling large / toasted \nvalues, etc. And we probably need to design this for large hierarchies \n(IMHO it should work with 10k partitions, not just 100), in which case \nit may still be quite a bit more expensive than merging the stats.\n\nSo maybe we should really support both, and combine them somehow?\n\nregards\n\n\nhttps://www.postgresql.org/message-id/CAM-w4HO9hUHvJDVwQ8%3DFgm-znF9WNvQiWsfyBjCr-5FD7gWKGA%40mail.gmail.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 30 Jun 2021 17:15:11 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 6/30/21 17:15, Tomas Vondra wrote:\n> On 6/30/21 2:55 PM, Andrey Lepikhov wrote:\n>> Sorry, I forgot to send CC into pgsql-hackers.\n>> On 29/6/21 13:23, Tomas Vondra wrote:\n>>> Because sampling is fairly expensive, especially if you have to do it\n>>> for large number of child relations. And you'd have to do that every\n>>> time *any* child triggers autovacuum, pretty much. 
Merging the stats\n>>> is way cheaper.\n>>>\n>>> See the other thread linked from the first message.\n>> Maybe i couldn't describe my idea clearly.\n>> The most commonly partitioning is used for large tables.\n>> I suppose to store a sampling reservoir for each partition, replace on\n>> update of statistics and merge to build statistics for parent table.\n>> It can be spilled into tuplestore on a disk, or stored in a parent table.\n>> In the case of complex inheritance we can store sampling reservoirs\n>> only for leafs.\n>> You can consider this idea as an imagination, but the merging\n>> statistics approach has an extensibility problem on another types of\n>> statistics.\n>>\n> \n> Well, yeah - we might try that too, of course. This is simply exploring\n> the \"merge statistics\" idea from [1], which is why it does not even\n> attempt to do what you suggested. We may explore the approach with\n> keeping per-partition samples, of course.\n> \n> You're right maintaining a per-partition samples and merging those might\n> solve (or at least reduce) some of the problems, e.g. eliminating most\n> of the I/O that'd be needed for sampling. And yeah, it's not entirely\n> clear how to merge some of the statistics types (like ndistinct). But\n> for a lot of the basic stats it works quite nicely, I think.\n> \n> I'm sure there'll be some complexity due to handling large / toasted\n> values, etc. And we probably need to design this for large hierarchies\n> (IMHO it should work with 10k partitions, not just 100), in which case\n> it may still be quite a bit more expensive than merging the stats.\n> \n> So maybe we should really support both, and combine them somehow?\n> \n\nI've been thinking about this PoC patch regularly since I submitted it a\nyear ago, and I still think merging the statistics is an interesting\nidea. 
It may be a huge win in various scenarios, like:\n\n1) Multi-level partitioning hierarchies, where analyze of each level has\nto re-sample all the leaf relations, causing a lot of I/O.\n\n2) Partitioning with foreign/remote partitions, where analyze has to\nretrieve significant amounts of data from the remote node over network\n(so a different kind of I/O).\n\nThese issues will only get worse as the number of partitions used by\nsystems in the wild grows - we continuously reduce the per-partition\noverhead, so people are likely to leverage that by using more of them.\n\nBut I don't have a very good idea what to do about statistics that we\ncan't really merge. For some types of statistics it's rather tricky to\nreasonably merge the results - ndistinct is a simple example, although\nwe could work around that by building and merging hyperloglog counters.\n\nWhat's trickier are extended statistics - I can't quite imagine merging\nof functional dependencies, mvdistinct, etc. So if there are extended\nstats we'd have to do the sampling anyway. (Some of the extended\nstatistics can also be defined only on the parent relation, in which\ncase we have nothing to merge. But that seems like a rare case.)\n\nIn any case, I don't have a very good plan for how to move this patch\nforward, so unless people have some interesting ideas I'll mark it as\nreturned with feedback. It's been lurking in the CF for ~1 year ...\n\n\nHowever, it'd be a mistake to also discard the approach proposed by\nAndrey Lepikhov - storing samples for individual relations, and then\njust using those while analyzing the parent relations. That is more\nexpensive, but it does not have the issues with merging some types of\nstatistics, and so on.\n\nIt seems interesting also for the dynamic sampling PoC patch [1],\nwhich does sampling as part of the query planning. In that context the\ncost of collecting the sample is clearly a major issue, and storing the\nsample somewhere would help a lot. 
But the question is how/where to\nstore it. Joins are another issue, because we can't just build two\nrandom samples. But let's discuss that in the other thread [1].\n\n\n[1] https://commitfest.postgresql.org/36/3211/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 20 Jan 2022 21:25:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 21/1/2022 01:25, Tomas Vondra wrote:\n> But I don't have a very good idea what to do about statistics that we\n> can't really merge. For some types of statistics it's rather tricky to\n> reasonably merge the results - ndistinct is a simple example, although\n> we could work around that by building and merging hyperloglog counters.\nI think, as a first step on this way we can reduce a number of pulled \ntuples. We don't really needed to pull all tuples from a remote server. \nTo construct a reservoir, we can pull only a tuple sample. Reservoir \nmethod needs only a few arguments to return a sample like you read \ntuples locally. Also, to get such parts of samples asynchronously, we \ncan get size of each partition on a preliminary step of analysis.\nIn my opinion, even this solution can reduce heaviness of a problem \ndrastically.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Thu, 10 Feb 2022 16:50:31 +0500", "msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "\n\nOn 2/10/22 12:50, Andrey Lepikhov wrote:\n> On 21/1/2022 01:25, Tomas Vondra wrote:\n>> But I don't have a very good idea what to do about statistics that we\n>> can't really merge. 
For some types of statistics it's rather tricky to\n>> reasonably merge the results - ndistinct is a simple example, although\n>> we could work around that by building and merging hyperloglog counters.\n>\n> I think, as a first step on this way we can reduce a number of pulled\n> tuples. We don't really needed to pull all tuples from a remote server.\n> To construct a reservoir, we can pull only a tuple sample. Reservoir\n> method needs only a few arguments to return a sample like you read\n> tuples locally. Also, to get such parts of samples asynchronously, we\n> can get size of each partition on a preliminary step of analysis.\n> In my opinion, even this solution can reduce heaviness of a problem\n> drastically.\n> \n\nOh, wow! I haven't realized we're fetching all the rows from foreign\n(postgres_fdw) partitions. For local partitions we already do that,\nbecause that uses the usual acquire function, with a reservoir\nproportional to partition size. I have assumed we use tablesample to\nfetch just a small fraction of rows from FDW partitions, and I agree\ndoing that would be a pretty huge benefit.\n\nI actually tried hacking that together - there's a couple problems with\nthat (e.g. determining what fraction to sample using bernoulli/system),\nbut in principle it seems quite doable. Some minor changes to the FDW\nAPI may be necessary, not sure.\n\nNot sure about the async execution - that seems way more complicated,\nand the sampling reduces the total cost, async just parallelizes it.\n\n\nThat being said, this thread was not really about foreign partitions,\nbut about re-analyzing inheritance trees in general. 
And sampling\nforeign partitions doesn't really solve that - we'll still do the\nsampling over and over.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 10 Feb 2022 23:37:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 2/11/22 03:37, Tomas Vondra wrote:\n> That being said, this thread was not really about foreign partitions,\n> but about re-analyzing inheritance trees in general. And sampling\n> foreign partitions doesn't really solve that - we'll still do the\n> sampling over and over.\nIMO, to solve the problem we should do two things:\n1. Avoid repeatable partition scans in the case inheritance tree.\n2. Avoid to re-analyze everything in the case of active changes in small \nsubset of partitions.\n\nFor (1) i can imagine a solution like multiplexing: on the stage of \ndefining which relations to scan, group them and prepare parameters of \nscanning to make multiple samples in one shot.\nIt looks like we need a separate logic for analysis of partitioned \ntables - we should form and cache samples on each partition before an \nanalysis.\nIt requires a prototype to understand complexity of such solution and \ncan be done separately from (2).\n\nTask (2) is more difficult to solve. Here we can store samples from each \npartition in values[] field of pg_statistic or in specific table which \nstores a 'most probable values' snapshot of each table.\nMost difficult problem here, as you mentioned, is ndistinct value. Is it \npossible to store not exactly calculated value of ndistinct, but an \n'expected value', based on analysis of samples and histograms on \npartitions? 
Such value can solve also a problem of estimation of a SETOP \nresult grouping (joining of them, etc), where we have statistics only on \nsources of the union.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:29:38 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "\n\nOn 2/11/22 05:29, Andrey V. Lepikhov wrote:\n> On 2/11/22 03:37, Tomas Vondra wrote:\n>> That being said, this thread was not really about foreign partitions,\n>> but about re-analyzing inheritance trees in general. And sampling\n>> foreign partitions doesn't really solve that - we'll still do the\n>> sampling over and over.\n> IMO, to solve the problem we should do two things:\n> 1. Avoid repeatable partition scans in the case inheritance tree.\n> 2. Avoid to re-analyze everything in the case of active changes in small \n> subset of partitions.\n> \n> For (1) i can imagine a solution like multiplexing: on the stage of \n> defining which relations to scan, group them and prepare parameters of \n> scanning to make multiple samples in one shot.\n >> It looks like we need a separate logic for analysis of partitioned\n> tables - we should form and cache samples on each partition before an \n> analysis.\n> It requires a prototype to understand complexity of such solution and \n> can be done separately from (2).\n> \n\nI'm not sure I understand what you mean by multiplexing. The term \nusually means \"sending multiple signals at once\" but I'm not sure how \nthat applies to this issue. Can you elaborate?\n\nI assume you mean something like collecting a sample for a partition \nonce, and then keeping and reusing the sample for future ANALYZE runs, \nuntil invalidated in some sense.\n\nYeah, I agree that'd be useful - and not just for partitions, actually. 
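[Editor's note: the idea of building a reservoir per partition once, caching it, and then drawing a parent-level sample from the cached reservoirs can be sketched roughly as follows. This is a toy illustration with invented names; PostgreSQL's actual ANALYZE sampling is considerably more elaborate.]

```python
import random

def reservoir_sample(rows, k, rng):
    """One-pass uniform k-sample over an iterable (classic Algorithm R)."""
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            sample.append(row)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = row
    return sample

def parent_sample_from_cached(partitions, k, rng):
    """Build a parent-level sample of ~k rows from cached per-partition
    reservoirs, drawing from each in proportion to the partition's size.
    partitions: list of (cached_reservoir, partition_row_count)."""
    total = sum(nrows for _, nrows in partitions)
    merged = []
    for reservoir, nrows in partitions:
        want = round(k * nrows / total)
        merged.extend(rng.sample(reservoir, min(want, len(reservoir))))
    return merged
```

The cached reservoirs would of course have to be refreshed or invalidated when a partition changes enough, which is exactly the coordination problem discussed earlier in the thread.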
\nI've been pondering something like that for regular tables, because the \nsample might be used for estimation of clauses directly.\n\nBut it requires storing the sample somewhere, and I haven't found a good \nand simple way to do that. We could serialize that into bytea, or we \ncould create a new fork, or something, but what should that do with \noversized attributes (how would TOAST work for a fork) and/or large \nsamples (which might not fit into 1GB bytea)?\n\n\n> Task (2) is more difficult to solve. Here we can store samples from each \n> partition in values[] field of pg_statistic or in specific table which \n> stores a 'most probable values' snapshot of each table.\n\nI think storing samples in pg_statistic is problematic, because values[] \nis subject to 1GB limit etc. Not an issue for small MCV list of a single \nattribute, certainly an issue for larger samples. Even if the data fit, \nthe size of pg_statistic would explode.\n\n> Most difficult problem here, as you mentioned, is ndistinct value. Is it \n> possible to store not exactly calculated value of ndistinct, but an \n> 'expected value', based on analysis of samples and histograms on \n> partitions? Such value can solve also a problem of estimation of a SETOP \n> result grouping (joining of them, etc), where we have statistics only on \n> sources of the union.\n> \n\nI think ndistinct is problem only when merging final estimates. 
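[Editor's note: the mergeable intermediate result for ndistinct — the hyperloglog counters mentioned earlier in the thread — could look roughly like this toy sketch. The register count is deliberately tiny and the code is not tuned for accuracy; it only demonstrates why the sketch merges cleanly.]

```python
import hashlib

M = 64          # number of registers (2**6); more registers -> less error

def _hash64(value):
    digest = hashlib.blake2b(repr(value).encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big")

def hll_add(registers, value):
    x = _hash64(value)
    idx = x & (M - 1)                  # low 6 bits pick a register
    rest = x >> 6                      # remaining 58 bits
    rank = 58 - rest.bit_length() + 1  # leading zeros in the suffix, plus 1
    registers[idx] = max(registers[idx], rank)

def hll_merge(a, b):
    # merging is just an element-wise max, which is what makes the sketch
    # suitable for combining independently-built per-child results
    return [max(x, y) for x, y in zip(a, b)]

def hll_estimate(registers):
    alpha = 0.709                      # bias constant for m = 64
    return alpha * M * M / sum(2.0 ** -r for r in registers)
```

With m = 64 the standard error is around 13%, so a real implementation would use far more registers plus the usual small- and large-range corrections.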
But if \nwe're able to calculate and store some intermediate results, we can \neasily use HLL and merge that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:12:16 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On Wed, Jun 30, 2021 at 11:15 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> You're right maintaining a per-partition samples and merging those might\n> solve (or at least reduce) some of the problems, e.g. eliminating most\n> of the I/O that'd be needed for sampling. And yeah, it's not entirely\n> clear how to merge some of the statistics types (like ndistinct). But\n> for a lot of the basic stats it works quite nicely, I think.\n\nIt feels like you might in some cases get very different answers.\nLet's say you have 1000 partitions. In each of those partitions, there\nis a particular value that appears in column X in 50% of the rows.\nThis value differs for every partition. So you can imagine for example\nthat in partition 1, X = 1 with probability 50%; in partition 2, X = 2\nwith probability 50%, etc. There is also a value, let's say 0, which\nappears in 0.5% of the rows in every partition. It seems possible that\n0 is not an MCV in any partition, or in only some of them, but it\nmight be more common overall than the #1 MCV of any single partition.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:17:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 2/11/22 20:12, Tomas Vondra wrote:\n> \n> \n> On 2/11/22 05:29, Andrey V. 
Lepikhov wrote:\n>> On 2/11/22 03:37, Tomas Vondra wrote:\n>>> That being said, this thread was not really about foreign partitions,\n>>> but about re-analyzing inheritance trees in general. And sampling\n>>> foreign partitions doesn't really solve that - we'll still do the\n>>> sampling over and over.\n>> IMO, to solve the problem we should do two things:\n>> 1. Avoid repeatable partition scans in the case inheritance tree.\n>> 2. Avoid to re-analyze everything in the case of active changes in \n>> small subset of partitions.\n>>\n>> For (1) i can imagine a solution like multiplexing: on the stage of \n>> defining which relations to scan, group them and prepare parameters of \n>> scanning to make multiple samples in one shot.\n> I'm not sure I understand what you mean by multiplexing. The term \n> usually means \"sending multiple signals at once\" but I'm not sure how \n> that applies to this issue. Can you elaborate?\n\nI suppose to make a set of samples in one scan: one sample for plane \ntable, another - for a parent and so on, according to the inheritance \ntree. And cache these samples in memory. We can calculate all parameters \nof reservoir method to do it.\n\n> sample might be used for estimation of clauses directly.\nYou mean, to use them in difficult cases, such of estimation of grouping \nover APPEND ?\n> \n> But it requires storing the sample somewhere, and I haven't found a good \n> and simple way to do that. We could serialize that into bytea, or we \n> could create a new fork, or something, but what should that do with \n> oversized attributes (how would TOAST work for a fork) and/or large \n> samples (which might not fit into 1GB bytea)? \nThis feature looks like meta-info over a database. It can be stored in \nseparate relation. It is not obvious that we need to use it for each \nrelation, for example, with large samples. 
I think, it can be controlled \nby a table parameter.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:22:30 +0500", "msg_from": "\"Andrey V. Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "\n\nOn 2/14/22 11:22, Andrey V. Lepikhov wrote:\n> On 2/11/22 20:12, Tomas Vondra wrote:\n>>\n>>\n>> On 2/11/22 05:29, Andrey V. Lepikhov wrote:\n>>> On 2/11/22 03:37, Tomas Vondra wrote:\n>>>> That being said, this thread was not really about foreign partitions,\n>>>> but about re-analyzing inheritance trees in general. And sampling\n>>>> foreign partitions doesn't really solve that - we'll still do the\n>>>> sampling over and over.\n>>> IMO, to solve the problem we should do two things:\n>>> 1. Avoid repeatable partition scans in the case inheritance tree.\n>>> 2. Avoid to re-analyze everything in the case of active changes in \n>>> small subset of partitions.\n>>>\n>>> For (1) i can imagine a solution like multiplexing: on the stage of \n>>> defining which relations to scan, group them and prepare parameters \n>>> of scanning to make multiple samples in one shot.\n>> I'm not sure I understand what you mean by multiplexing. The term \n>> usually means \"sending multiple signals at once\" but I'm not sure how \n>> that applies to this issue. Can you elaborate?\n> \n> I suppose to make a set of samples in one scan: one sample for plane \n> table, another - for a parent and so on, according to the inheritance \n> tree. And cache these samples in memory. We can calculate all parameters \n> of reservoir method to do it.\n> \n\nI doubt keeping the samples just in memory is a good solution. Firstly, \nthere's the question of memory consumption. Imagine a large partitioned \ntable with 1-10k partitions. If we keep a \"regular\" sample (30k rows) \nper partition, that's 30M-300M rows. 
If each row needs 100B, that's \n3-30GB of data.\n\nSure, maybe we could keep smaller per-partition samples, large enough to \nget the merged sample of 30k row. But then you can also have higher \nstatistics target values, the rows can be larger, etc.\n\nSo a couple of GB per inheritance tree can easily happen. And this data \nmay not be used all that often, so keeping it in memory may be wasteful.\n\nBut maybe you have an idea how to optimize sizes per-partition samples? \nIn principle we need\n\n 30k * size(partition) / size(total)\n\nfor each partition, but the trouble is partitions may be detached, data \nmay be deleted from some partitions, etc.\n\nAlso, what would happen after a restart? If we lose the samples, we'll \nhave to resample everything anyway - and after a restart the system is \nusually fairly busy, so that's not a great timing.\n\nSo IMHO the samples need to be serialized, in some way.\n\n>> sample might be used for estimation of clauses directly.\n> You mean, to use them in difficult cases, such of estimation of grouping \n> over APPEND ?\n\nThat's one example, yes. But the sample might be used even to estimate \ncomplex conditions on a single partition (there's plenty of stuff we \ncan't estimate from MCV/histogram).\n\n>> But it requires storing the sample somewhere, and I haven't found a \n>> good and simple way to do that. We could serialize that into bytea, or \n>> we could create a new fork, or something, but what should that do with \n>> oversized attributes (how would TOAST work for a fork) and/or large \n>> samples (which might not fit into 1GB bytea)? \n> This feature looks like meta-info over a database. It can be stored in \n> separate relation. It is not obvious that we need to use it for each \n> relation, for example, with large samples. I think, it can be controlled \n> by a table parameter.\n> \n\nWell, a separate catalog is one of the options. 
But I don't see how that \ndeals with large samples, etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 14 Feb 2022 16:16:19 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": "On 2/14/22 20:16, Tomas Vondra wrote:\n> \n> \n> On 2/14/22 11:22, Andrey V. Lepikhov wrote:\n>> On 2/11/22 20:12, Tomas Vondra wrote:\n>>>\n>>>\n>>> On 2/11/22 05:29, Andrey V. Lepikhov wrote:\n>>>> On 2/11/22 03:37, Tomas Vondra wrote:\n>>>>> That being said, this thread was not really about foreign partitions,\n>>>>> but about re-analyzing inheritance trees in general. And sampling\n>>>>> foreign partitions doesn't really solve that - we'll still do the\n>>>>> sampling over and over.\n>>>> IMO, to solve the problem we should do two things:\n>>>> 1. Avoid repeatable partition scans in the case inheritance tree.\n>>>> 2. Avoid to re-analyze everything in the case of active changes in \n>>>> small subset of partitions.\n>>>>\n>>>> For (1) i can imagine a solution like multiplexing: on the stage of \n>>>> defining which relations to scan, group them and prepare parameters \n>>>> of scanning to make multiple samples in one shot.\n>>> I'm not sure I understand what you mean by multiplexing. The term \n>>> usually means \"sending multiple signals at once\" but I'm not sure how \n>>> that applies to this issue. Can you elaborate?\n>>\n>> I suppose to make a set of samples in one scan: one sample for plane \n>> table, another - for a parent and so on, according to the inheritance \n>> tree. And cache these samples in memory. We can calculate all \n>> parameters of reservoir method to do it.\n>>\n> \n> I doubt keeping the samples just in memory is a good solution. Firstly, \n> there's the question of memory consumption. 
Imagine a large partitioned \n> table with 1-10k partitions. If we keep a \"regular\" sample (30k rows) \n> per partition, that's 30M-300M rows. If each row needs 100B, that's \n> 3-30GB of data.\nI'm talking about caching a sample only for the time that it is needed in \nthis ANALYZE operation. Imagine 3 levels of a partitioned table. On each \npartition you should create and keep three different samples (we can do \nit in one scan). The sample for a plain table we can use immediately and \nthen destroy.\nThe sample for a partition on the second level of the hierarchy: we can save a \ncopy of the sample to disk for future use (maybe a repeated analyze). \nThe in-memory data is used to form a reservoir, which has a limited size and can \nbe destroyed immediately. At the third level we can use the same logic.\nSo, at any one moment we only use as many samples as there are levels of \nhierarchy. IMO, that isn't a large number.\n\n > the trouble is partitions may be detached, data may be deleted from\n > some partitions, etc.\nBecause statistics don't have a strong relation to the data, we can use two \nstrategies: In the case of an explicit 'ANALYZE <table>' we can recalculate \nall samples for all partitions, but in the autovacuum case or implicit \nanalysis we can use not-so-old versions of samples and samples of \ndetached (but not destroyed) partitions, in the optimistic assumption that it \ndoesn't change the statistics drastically.\n\n> So IMHO the samples need to be serialized, in some way.\nAgreed\n\n> Well, a separate catalog is one of the options. But I don't see how that \n> deals with large samples, etc.\nI think we can design a fallback to the previous approach in the case of \nvery large tuples, like a switch from HashJoin to NestedLoop if we \nestimate that we don't have enough memory.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n", "msg_date": "Fri, 18 Feb 2022 16:50:54 +0500", "msg_from": "\"Andrey V. 
Lepikhov\" <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Merging statistics from children instead of re-sampling\n everything" }, { "msg_contents": ">\n> 3) stadistinct - This is quite problematic. We only have the per-child\n> estimates, and it's not clear if there's any overlap. For now I've just\n> summed it up, because that's safer / similar to what we do for gather\n> merge paths etc. Maybe we could improve this by estimating the overlap\n> somehow (e.g. from MCV lists / histograms). But honestly, I doubt the\n> estimates based on tiny sample of each child are any better. I suppose\n> we could introduce a column option, determining how to combine ndistinct\n> (similar to how we can override n_distinct itself).\n>\n> 4) MCV - It's trivial to build a new \"parent\" MCV list, although it may\n> be too large (in which case we cut it at statistics target, and copy the\n> remaining bits to the histogram)\n>\n\nI think there is one approach to solve the problem with calculating mcv and\ndistinct statistics.\nTo do this, you need to calculate the density of the sample distribution\nand store it, for example, in some slot.\nThen, when merging statistics, we will sum up the densities of all\npartitions as functions and get a new density.\nAccording to the new density, you can find out which values are most common\nand which are distinct.\n\nTo calculate the partition densities, you can use the \"Kernel density\nEstimation\" -\nhttps://www.statsmodels.org/dev/examples/notebooks/generated/kernel_density\nhtml\n\nThe approach may be very inaccurate and difficult to implement, but solves\nthe problem.\n\nRegards,\nDamir Belyalov\nPostgres Professional\n\n", "msg_date": "Mon, 15 Aug 2022 14:10:44 +0300", "msg_from": "Damir Belyalov <dam.bel07@gmail.com>", "msg_from_op": false, "msg_subject": "Fwd: Merging statistics from children instead of re-sampling\n everything" } ]
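[Editor's note] The proportional sizing discussed in this thread — each partition contributing roughly `30k * size(partition) / size(total)` rows to the merged parent sample — can be sketched in a few lines. This is an illustrative Python sketch written for this archive, not PostgreSQL code; the function name and data layout are assumptions made for the example.

```python
import random

def merge_partition_samples(samples, part_rows, target=30000, seed=0):
    """Merge per-partition samples into one parent-level sample.

    Each partition contributes about target * rows_i / total_rows entries,
    mirroring the proportional formula discussed above.  `samples` maps
    partition name -> list of sampled rows; `part_rows` maps partition
    name -> that partition's total row count (hypothetical inputs).
    """
    rng = random.Random(seed)
    total = sum(part_rows.values())
    merged = []
    for part, sample in samples.items():
        share = round(target * part_rows[part] / total)
        # A partition's stored sample may be smaller than its share.
        merged.extend(rng.sample(sample, min(share, len(sample))))
    rng.shuffle(merged)
    return merged
```

Such a merge only stays valid while the per-partition samples remain representative; as the thread notes, detached partitions, deletions, and restarts are exactly the cases where cached samples go stale.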
[ { "msg_contents": "This fixes an issue where preadv and pwritev detection does not\nproperly respect the OSX deployment target version symbol\navailability.\n\nJames", "msg_date": "Mon, 29 Mar 2021 11:49:58 -0600", "msg_from": "James Hilliard <james.hilliard1@gmail.com>", "msg_from_op": true, "msg_subject": "Fix detection of preadv/pwritev support for OSX." } ]
[ { "msg_contents": "On a newly set up system there are 7 types with a unary minus operator\ndefined, but only 6 of them have an abs function:\n\npostgres=# \\df abs\n List of functions\n Schema | Name | Result data type | Argument data types | Type\n------------+------+------------------+---------------------+------\n pg_catalog | abs | bigint | bigint | func\n pg_catalog | abs | double precision | double precision | func\n pg_catalog | abs | integer | integer | func\n pg_catalog | abs | numeric | numeric | func\n pg_catalog | abs | real | real | func\n pg_catalog | abs | smallint | smallint | func\n(6 rows)\n\nI now have the following definition in my database:\n\nCREATE OR REPLACE FUNCTION abs (\n p interval\n) RETURNS interval\n LANGUAGE SQL IMMUTABLE STRICT\n SET search_path FROM CURRENT\nAS $$\nSELECT GREATEST (p, -p)\n$$;\nCOMMENT ON FUNCTION abs (interval) IS 'absolute value';\n\nWould a patch to add a function with this behaviour to the initial database\nbe welcome?\n\nIf so, should I implement it essentially like the above, or as an internal\nfunction? 
I've noticed that even when it seems like it might be reasonable\nto implement a built-in function as an SQL function they tend to be\ninternal.", "msg_date": "Mon, 29 Mar 2021 15:32:56 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Add missing function abs (interval)" }, { "msg_contents": "On Mon, Mar 29, 2021 at 3:33 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n>\n> On a newly set up system there are 7 types with a unary minus operator\ndefined, but only 6 of them have an abs function:\n>\n...\n> Would a patch to add a function with this behaviour to the initial\ndatabase be welcome?\n\nLooking in the archives, I see this attempt that you can build upon:\n\nhttps://www.postgresql.org/message-id/flat/CAHE3wggpj%2Bk-zXLUdcBDRe3oahkb21pSMPDm-HzPjZxJn4vMMw%40mail.gmail.com\n\n> If so, should I implement it essentially like the above, or as an\ninternal function? I've noticed that even when it seems like it might be\nreasonable to implement a built-in function as an SQL function they tend to\nbe internal.\n\nBy default it should be internal.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Mon, 29 Mar 2021 19:15:19 -0400", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "On Mon, Mar 29, 2021 at 07:15:19PM -0400, John Naylor wrote:\n> Looking in the archives, I see this attempt that you can build upon:\n> https://www.postgresql.org/message-id/flat/CAHE3wggpj%2Bk-zXLUdcBDRe3oahkb21pSMPDm-HzPjZxJn4vMMw%40mail.gmail.com\n\nI see no problem with doing something more here. If you can get a\npatch, please feel free to add it to the next commit fest, for\nPostgres 15:\nhttps://commitfest.postgresql.org/33/\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 10:06:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "I've attached a patch for this. Turns out there was a comment in the source\nexplaining that there is no interval_abs because it's not clear what to\nreturn; but I think it's clear that if i is an interval the larger of i and\n-i should be considered to be the absolute value, the same as would be done\nfor any type; essentially, if the type is orderable and has a meaningful\ndefinition of unary minus, the definition of abs follows from those.\n\nThis does have some odd effects, as was observed in the previous discussion\npointed at by John Naylor above (for which thanks!). But those odd effects\nare not due to abs(interval) itself but rather due to the odd behaviour of\ninterval, where values which compare equal to '0'::interval can change a\ntimestamp when added to it. 
This in turn comes from what the interval data\ntype is trying to do combined with the inherently complicated nature of our\ntimekeeping system.\n\nI have included in the test case some testing of what happens with '1 month\n-30 days'::interval, which is \"equal\" to '0'::interval.\n\nAt least one thing concerns me about my code: Given an interval i, I\npalloc() space to calculate -i; then either return that or the original\ninput depending on the result of a comparison. Will I leak space as a\nresult? Should I free the value if I don't return it?\n\nIn addition to adding abs(interval) and related @ operator, I would like to\nupdate interval_smaller and interval_larger to change < and > to <= and >=\nrespectively. This is to make the min(interval) and max(interval)\naggregates return the first of multiple distinct \"equal\" intervals,\ncontrary to the current behaviour:\n\nodyssey=> select max (i) from (values ('1 month -30 days'::interval), ('-1\nmonth 30 days'))t(i);\n max\n------------------\n -1 mons +30 days\n(1 row)\n\nodyssey=> select min (i) from (values ('1 month -30 days'::interval), ('-1\nmonth 30 days'))t(i);\n min\n------------------\n -1 mons +30 days\n(1 row)\n\nodyssey=>\n\nGREATEST and LEAST already take the first value:\n\nodyssey=> select greatest ('1 month -30 days'::interval, '-1 month 30\ndays');\n greatest\n----------------\n 1 mon -30 days\n(1 row)\n\nodyssey=> select least ('1 month -30 days'::interval, '-1 month 30 days');\n least\n----------------\n 1 mon -30 days\n(1 row)\n\nodyssey=>\n\n\nOn Mon, 29 Mar 2021 at 21:06, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Mar 29, 2021 at 07:15:19PM -0400, John Naylor wrote:\n> > Looking in the archives, I see this attempt that you can build upon:\n> >\n> https://www.postgresql.org/message-id/flat/CAHE3wggpj%2Bk-zXLUdcBDRe3oahkb21pSMPDm-HzPjZxJn4vMMw%40mail.gmail.com\n>\n> I see no problem with doing something more here. 
If you can get a\n> patch, please feel free to add it to the next commit fest, for\n> Postgres 15:\n> https://commitfest.postgresql.org/33/\n> --\n> Michael\n>", "msg_date": "Tue, 30 Mar 2021 23:18:30 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> I've attached a patch for this. Turns out there was a comment in the source\n> explaining that there is no interval_abs because it's not clear what to\n> return; but I think it's clear that if i is an interval the larger of i and\n> -i should be considered to be the absolute value, the same as would be done\n> for any type; essentially, if the type is orderable and has a meaningful\n> definition of unary minus, the definition of abs follows from those.\n\nThe problem with that blithe summary is the hidden assumption that\nvalues that compare \"equal\" aren't interesting to distinguish. As\nthe discussion back in 2009 pointed out, this doesn't help you decide\nwhat to do with cases like '1 month -30 days'::interval. Either answer\nyou might choose seems pretty arbitrary --- and we've got more than\nenough arbitrariness in type interval :-(\n\nFor similar reasons, I find myself mighty suspicious of your proposal\nto change how max(interval) and min(interval) work. That cannot make\nthings any better overall --- it will only move the undesirable results\nfrom one set of cases to some other set. Moreover your argument for\nit seems based on a false assumption, that the input values can be\nexpected to arrive in a particular order. So I'm inclined to think\nthat backwards compatibility is sufficient reason to leave that alone.\n\nIf we wanted to make some actual progress here, maybe we should\nreconsider the definition of equality/ordering for interval, with\nan eye to not allowing two intervals to be considered \"equal\" unless\nthey really are identical. 
That is, for two values that are currently\nreported as \"equal\", apply some more-or-less-arbitrary tiebreak rule,\nsay by sorting on the individual fields left-to-right. This would be\nvery similar to type text's rule that only bitwise-equal strings are\nreally equal, even if strcoll() claims otherwise. I am not sure how\nfeasible this idea is from a compatibility standpoint, but it's\nsomething to think about.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 14 Jul 2021 19:29:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "I wrote:\n> Isaac Morland <isaac.morland@gmail.com> writes:\n>> I've attached a patch for this. Turns out there was a comment in the source\n>> explaining that there is no interval_abs because it's not clear what to\n>> return; but I think it's clear that if i is an interval the larger of i and\n>> -i should be considered to be the absolute value, the same as would be done\n>> for any type; essentially, if the type is orderable and has a meaningful\n>> definition of unary minus, the definition of abs follows from those.\n\n> The problem with that blithe summary is the hidden assumption that\n> values that compare \"equal\" aren't interesting to distinguish.\n\nAfter thinking about this some more, it seems to me that it's a lot\nclearer that the definition of abs(interval) is forced by our comparison\nrules if you define it as\n\n\tCASE WHEN x < '0'::interval THEN -x ELSE x END\n\nIn particular, this makes it clear what happens and why for values\nthat compare equal to zero. 
The thing that is bothering me about\nthe formulation GREATEST(x, -x) is exactly that whether you get x\nor -x in such a case depends on a probably-unspecified implementation\ndetail inside GREATEST().\n\nBTW, you could implement this by something along the lines of\n(cf generate_series_timestamp()):\n\n MemSet(&interval_zero, 0, sizeof(Interval));\n if (interval_cmp_internal(interval, &interval_zero) < 0)\n return interval_um(fcinfo);\n else\n PG_RETURN_INTERVAL_P(interval);\n\nwhich would avoid the need to refactor interval_um().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 26 Sep 2021 13:42:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "On Sun, 26 Sept 2021 at 13:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Isaac Morland <isaac.morland@gmail.com> writes:\n> >> I've attached a patch for this. Turns out there was a comment in the\n> source\n> >> explaining that there is no interval_abs because it's not clear what to\n> >> return; but I think it's clear that if i is an interval the larger of i\n> and\n> >> -i should be considered to be the absolute value, the same as would be\n> done\n> >> for any type; essentially, if the type is orderable and has a meaningful\n> >> definition of unary minus, the definition of abs follows from those.\n>\n> > The problem with that blithe summary is the hidden assumption that\n> > values that compare \"equal\" aren't interesting to distinguish.\n>\n> After thinking about this some more, it seems to me that it's a lot\n> clearer that the definition of abs(interval) is forced by our comparison\n> rules if you define it as\n>\n> CASE WHEN x < '0'::interval THEN -x ELSE x END\n>\n> In particular, this makes it clear what happens and why for values\n> that compare equal to zero. 
The thing that is bothering me about\n> the formulation GREATEST(x, -x) is exactly that whether you get x\n> or -x in such a case depends on a probably-unspecified implementation\n> detail inside GREATEST().\n>\n\nThanks very much for continuing to think about this. It really reinforces\nmy impression that this community takes seriously input and suggestions,\neven when it takes some thought to work out how (or whether) to proceed\nwith a proposed change.\n\nSo I think I will prepare a revised patch that uses this formulation; and\nif I still have any suggestions that aren't directly related to adding\nabs(interval) I will split them off into a separate discussion.\n\nBTW, you could implement this by something along the lines of\n> (cf generate_series_timestamp()):\n>\n> MemSet(&interval_zero, 0, sizeof(Interval));\n> if (interval_cmp_internal(interval, &interval_zero) < 0)\n> return interval_um(fcinfo);\n> else\n> PG_RETURN_INTERVAL_P(interval);\n>\n> which would avoid the need to refactor interval_um().\n>\n> regards, tom lane\n>\n
", "msg_date": "Sun, 26 Sep 2021 13:58:07 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "> On 26 Sep 2021, at 19:58, Isaac Morland <isaac.morland@gmail.com> wrote:\n\n> So I think I will prepare a revised patch that uses this formulation; and if I still have any suggestions that aren't directly related to adding abs(interval) I will split them off into a separate discussion.\n\nThis CF entry is marked Waiting on Author, have you had the chance to prepare\nan updated version of this patch?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 4 Nov 2021 13:08:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "On Thu, 4 Nov 2021 at 08:08, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 26 Sep 2021, at 19:58, Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> > So I think I will prepare a revised patch that uses this formulation;\n> and if I still have any suggestions that aren't directly related to adding\n> 
abs(interval) I will split them off into a separate discussion.\n>\n> This CF entry is marked Waiting on Author, have you had the chance to\n> prepare\n> an updated version of this patch?\n>\n\nNot yet, but thanks for the reminder. I will try to get this done on the\nweekend.\n\n", "msg_date": "Fri, 5 Nov 2021 00:06:05 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add missing function abs (interval)" }, { "msg_contents": "On Fri, Nov 05, 2021 at 12:06:05AM -0400, Isaac Morland wrote:\n> Not yet, but thanks for the reminder. I will try to get this done on the\n> weekend.\n\nSeeing no updates, this has been switched to returned with feedback in\nthe CF app.\n--\nMichael", "msg_date": "Fri, 3 Dec 2021 16:41:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add missing function abs (interval)" } ]
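[Editor's note] The semantics the thread converged on — `CASE WHEN x < '0'::interval THEN -x ELSE x END` — can be modelled compactly. This Python sketch is an illustration written for this archive, not the backend code: the comparison key simply flattens months to 30 days and days to 24 hours, which is enough to show why `'1 month -30 days'` compares equal to zero and is therefore returned unchanged.

```python
from dataclasses import dataclass

USECS_PER_DAY = 24 * 60 * 60 * 1_000_000

@dataclass(frozen=True)
class Interval:
    months: int = 0
    days: int = 0
    usecs: int = 0

    def _key(self):
        # Comparison flattens months to 30 days and days to 24 hours,
        # so '1 month -30 days' gets the same key as '0'.
        return self.usecs + (self.days + 30 * self.months) * USECS_PER_DAY

    def __neg__(self):
        return Interval(-self.months, -self.days, -self.usecs)

    def __lt__(self, other):
        return self._key() < other._key()

def interval_abs(x: Interval) -> Interval:
    # CASE WHEN x < '0'::interval THEN -x ELSE x END
    return -x if x < Interval() else x
```

`interval_abs(Interval(months=1, days=-30))` leaves the value untouched, because it is not less than zero under the comparison rule — exactly the tie case that made `GREATEST(x, -x)` look underspecified.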
[ { "msg_contents": "I believe the \"box\" type description is slightly incorrect:\n\n# \\dT box\n Liste der Datentypen\n Schema │ Name │ Beschreibung\n────────────┼──────┼──────────────────────────────────────────\n pg_catalog │ box │ geometric box '(lower left,upper right)'\n\nWhile the syntax '((3,4),(1,2))'::box works, the canonical spelling is\n'(3,4),(1,2)' and hence the description should be:\n\ngeometric box '(lower left),(upper right)'\n\nChristoph\n\n\n", "msg_date": "Mon, 29 Mar 2021 22:44:29 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "\"box\" type description" }, { "msg_contents": "At Mon, 29 Mar 2021 22:44:29 +0200, Christoph Berg <myon@debian.org> wrote in \n> I believe the \"box\" type description is slightly incorrect:\n> \n> # \\dT box\n> Liste der Datentypen\n> Schema │ Name │ Beschreibung\n> ────────────┼──────┼──────────────────────────────────────────\n> pg_catalog │ box │ geometric box '(lower left,upper right)'\n> \n> While the syntax '((3,4),(1,2))'::box works, the canonical spelling is\n> '(3,4),(1,2)' and hence the description should be:\n> geometric box '(lower left),(upper right)'\n\nMaybe the reason you think so is that a box is printed in that format.\n\npostgres=# select '((1,1),(2,2))'::box;\n box \n-------------\n (2,2),(1,1)\n(1 row)\n\nIt doesn't use the word \"canonical\", but the documentation is saying\nthat it is the output format. So I think you're right in that point.\n\n\nhttps://www.postgresql.org/docs/13/datatype-geometric.html\n\nTable 8.20. 
Geometric Types\n\npoint\t16 bytes\t\tPoint on a plane\t(x,y)\nline\t32 bytes\t\tInfinite line\t\t{A,B,C}\nlseg\t32 bytes\t\tFinite line segment\t((x1,y1),(x2,y2))\nbox\t\t32 bytes\t\tRectangular box\t\t((x1,y1),(x2,y2))\npath\t16+16n bytes\tClosed path (similar to polygon)\t((x1,y1),...)\npath\t16+16n bytes\tOpen path\t\t\t[(x1,y1),...]\npolygon\t40+16n bytes\tPolygon (similar to closed path)\t((x1,y1),...)\ncircle\t24 bytes\t\tCircle\t\t\t\t<(x,y),r> (center point and radius)\n\nSimilary, lseg seems inconsistent... (It is correctly described in\nlater sections.)\n\nselect '(1,1),(2,2)'::lseg; => [(1,1),(2,2)]\n\nSurely it would be better that the documentation is consistent with\nthe output function. Perhaps we prefer to fix documentation rather\nthan to fix implementation to give impacts on applications that may\nexist. (I don't like the notation since the representation of box\ndoesn't look like one object, though..)\n\n\nReturing to the description of pg_types, it should be changed like\nthis following the discussion here.\n\n- pg_catalog | box | geometric box '(lower left,upper right)'\n+ pg_catalog | box | geometric box 'lower left,upper right'\n\nBut I find it hard to read. I fixed it instead as the following in the\nattached. 
However, it might rather be better not changing it..\n\n+ pg_catalog | box | geometric box 'pt-lower-left,pt-upper-right'\n\nI added a space after commas, since point has it and (I think) it is\neasier to read having the ones.\n\nIs there any opinions?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 31 Mar 2021 11:32:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"box\" type description" }, { "msg_contents": "Re: Kyotaro Horiguchi\n> Returing to the description of pg_types, it should be changed like\n> this following the discussion here.\n> \n> - pg_catalog | box | geometric box '(lower left,upper right)'\n> + pg_catalog | box | geometric box 'lower left,upper right'\n> \n> But I find it hard to read. I fixed it instead as the following in the\n> attached. However, it might rather be better not changing it..\n> \n> + pg_catalog | box | geometric box 'pt-lower-left,pt-upper-right'\n\nI like that because it points to the \"point\" syntax so users can\nfigure out how to spell a box.\n\nChristoph\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:43:47 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": true, "msg_subject": "Re: \"box\" type description" }, { "msg_contents": "On Wed, Mar 31, 2021 at 01:43:47PM +0200, Christoph Berg wrote:\n> Re: Kyotaro Horiguchi\n> > Returing to the description of pg_types, it should be changed like\n> > this following the discussion here.\n> > \n> > - pg_catalog | box | geometric box '(lower left,upper right)'\n> > + pg_catalog | box | geometric box 'lower left,upper right'\n> > \n> > But I find it hard to read. I fixed it instead as the following in the\n> > attached. 
However, it might rather be better not changing it..\n> > \n> > + pg_catalog | box | geometric box 'pt-lower-left,pt-upper-right'\n> \n> I like that because it points to the \"point\" syntax so users can\n> figure out how to spell a box.\n\nI liked Horiguchi-san's patch from 2021, but once I started looking\nfurther, I found a number of improvements that can be made in the \\dTS\noutput beyond Horiguchi-san's changes:\n\n* boolean outputs 't'/'f', not 'true'/'false'\n* Added \"format\" ... for output\n* tid output format was at the start, not the end\n* I didn't add space between point x,y because the output has no space\n* I spelled out \"point\" instead of \"pt\"\n* \"line\" has two very different input formats so I listed both\n (more different than others like boolean)\n* I didn't see the need to say \"datatype\" for LSN and UUID\n* I improved the txid_snapshot description\n* There was no description for table_am_handler\n\nPatch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Wed, 1 Nov 2023 11:36:01 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"box\" type description" }, { "msg_contents": "At Wed, 1 Nov 2023 11:36:01 -0400, Bruce Momjian <bruce@momjian.us> wrote in \n> On Wed, Mar 31, 2021 at 01:43:47PM +0200, Christoph Berg wrote:\n> > Re: Kyotaro Horiguchi\n> > I like that because it points to the \"point\" syntax so users can\n> > figure out how to spell a box.\n> \n> I liked Horiguchi-san's patch from 2021, but once I started looking\n> further, I found a number of improvements that can be made in the \\dTS\n> output beyond Horiguchi-san's changes:\n> \n> * boolean outputs 't'/'f', not 'true'/'false'\n> * Added \"format\" ... 
for output\n> * tid output format was at the start, not the end\n> * I didn't add space between point x,y because the output has no space\n> * I spelled out \"point\" instead of \"pt\"\n> * \"line\" has two very different input formats so I listed both\n> (more different than others like boolean)\n> * I didn't see the need to say \"datatype\" for LSN and UUID\n> * I improved the txid_snapshot description\n> * There was no description for table_am_handler\n> \n> Patch attached.\n\nThank you for continuing this. The additional changes looks\nfine.\n\nUpon reviewing the table again in this line, further potential\nimprovements and issues have been found. For example:\n\ncharacter, varchar: don't follow the rule.\n- 'char(length)' blank-padded string, fixed storage length\n+ blank-padded string, fixed storage length, format 'char(length)'\n\ninterval: doesn't follow the rule.\n- @ <number> <units>, time interval\n+ time interval, format '[@] <number> <units>'\n(I think '@' is not necessary here..)\n\npg_snapshot:\n\n The description given is just \"snapshot\", which seems overly simplistic.\n\ntxid_snapshot:\n\n The description reads \"transaction snapshot\". Is this really\n accurate, especially in contrast with pg_snapshot?\n\npg_brin_bloom_summary, pg_brin_minmax_multi_summary, pg_mcv_list and many:\n\nI'm uncertain whether these types, which lack an input syntax (but\nhave an output format), qualify as pseudo-types. Nevertheless, I\nbelieve it would be beneficial to describe that those types differ\nfrom ordinary types.\n\n\nShould we consider refining these descriptions in the table?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Nov 2023 17:28:20 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: \"box\" type description" }, { "msg_contents": "On Thu, Nov 2, 2023 at 05:28:20PM +0900, Kyotaro Horiguchi wrote:\n> Thank you for continuing this. 
The additional changes looks\n> fine.\n> \n> Upon reviewing the table again in this line, further potential\n> improvements and issues have been found. For example:\n> \n> character, varchar: don't follow the rule.\n> - 'char(length)' blank-padded string, fixed storage length\n> + blank-padded string, fixed storage length, format 'char(length)'\n\nSo, char() and varchar() are _definition_ synonyms for characater and\ncharacter varying, so I put the way you define them at the _front_ of\nthe text. The \"format\" is the _output_ format and I put that at the end\nfor other types. I put numeric() at the front too since its definition\nis complex. (I now see numeric should be \"precision, scale\" so I fixed\nthat.)\n\n> interval: doesn't follow the rule.\n> - @ <number> <units>, time interval\n> + time interval, format '[@] <number> <units>'\n> (I think '@' is not necessary here..)\n\nAgreed, '@' is optional so removed, and I added \"...\".\n\n> pg_snapshot:\n> \n> The description given is just \"snapshot\", which seems overly simplistic.\n> \n> txid_snapshot:\n> \n> The description reads \"transaction snapshot\". Is this really\n> accurate, especially in contrast with pg_snapshot?\n\nUh, the docs have for txid_snapshot:\n\n\tuser-level transaction ID snapshot (deprecated; see\n\t<type>pg_snapshot</type>)<\n\nDo we want to add \"deprecated\" in the output.\n\n> pg_brin_bloom_summary, pg_brin_minmax_multi_summary, pg_mcv_list and many:\n> \n> I'm uncertain whether these types, which lack an input syntax (but\n> have an output format), qualify as pseudo-types. 
Nevertheless, I\n> believe it would be beneficial to describe that those types differ\n> from ordinary types.\n\nGood point, now labeled as pseudo-types.\n\nUpdated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Thu, 2 Nov 2023 16:12:57 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"box\" type description" }, { "msg_contents": "On Thu, Nov 2, 2023 at 04:12:57PM -0400, Bruce Momjian wrote:\n> > pg_brin_bloom_summary, pg_brin_minmax_multi_summary, pg_mcv_list and many:\n> > \n> > I'm uncertain whether these types, which lack an input syntax (but\n> > have an output format), qualify as pseudo-types. Nevertheless, I\n> > believe it would be beneficial to describe that those types differ\n> > from ordinary types.\n> \n> Good point, now labeled as pseudo-types.\n> \n> Updated patch attached.\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Mon, 13 Nov 2023 16:27:23 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: \"box\" type description" } ]
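The descriptions being edited in this thread are ordinary catalog comments attached to the pg_type rows, which psql's \dTS then renders. As a rough sketch (not the exact query psql issues, which varies across versions), they can be inspected directly:

```sql
-- Look up the built-in type descriptions that \dTS displays.
-- obj_description() returns the comment stored for a catalog object.
SELECT t.typname,
       pg_catalog.obj_description(t.oid, 'pg_type') AS description
FROM pg_catalog.pg_type t
WHERE t.typname IN ('box', 'point', 'line', 'interval', 'txid_snapshot')
ORDER BY t.typname;
```

Running this before and after applying the patch shows the changed wording for each affected type.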
[ { "msg_contents": "Hi,\n\nI rarely observe failure of vacuum with truncation test in\nreloptions.sql, i.e. the truncation doesn't happen:\n\n--- ../../src/test/regress/expected/reloptions.out 2020-04-16 12:37:17.749547401 +0300\n+++ ../../src/test/regress/results/reloptions.out 2020-04-17 00:14:58.999211750 +0300\n@@ -131,7 +131,7 @@\n SELECT pg_relation_size('reloptions_test') = 0;\n ?column?\n ----------\n- t\n+ f\n (1 row)\n\nIntimate reading of lazy_scan_heap says that the failure indeed might\nhappen; if ConditionalLockBufferForCleanup couldn't lock the buffer and\neither the buffer doesn't need freezing or vacuum is not aggressive, we\ndon't insist on close inspection of the page contents and count it as\nnonempty according to lazy_check_needs_freeze. It means the page is\nregarded as such even if it contains only garbage (but occupied) ItemIds,\nwhich is the case of the test. And of course this allegedly nonempty\npage prevents the truncation. Obvious competitors for the page are\nbgwriter/checkpointer; the chances of a simultaneous attack are small\nbut they exist.\n\nA simple fix is to perform aggressive VACUUM FREEZE, as attached.\n\nI'm a bit puzzled that I've ever seen this only when running regression\ntests under our multimaster. While multimaster contains a fair amount of\nC code, I don't see how any of it can interfere with the vacuuming\nbusiness here. 
I can't say I did my best to create the repoduction\nthough -- the explanation above seems to be enough.\n\n\n--\nArseny Sher\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 30 Mar 2021 01:58:50 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Tue, Mar 30, 2021 at 01:58:50AM +0300, Arseny Sher wrote:\n> Intimate reading of lazy_scan_heap says that the failure indeed might\n> happen; if ConditionalLockBufferForCleanup couldn't lock the buffer and\n> either the buffer doesn't need freezing or vacuum is not aggressive, we\n> don't insist on close inspection of the page contents and count it as\n> nonempty according to lazy_check_needs_freeze. It means the page is\n> regarded as such even if it contains only garbage (but occupied) ItemIds,\n> which is the case of the test. And of course this allegedly nonempty\n> page prevents the truncation. Obvious competitors for the page are\n> bgwriter/checkpointer; the chances of a simultaneous attack are small\n> but they exist.\n\nYep, this is the same problem as the one discussed for c2dc1a7, where\na concurrent checkpoint may cause a page to be skipped, breaking the\ntest.\n\n> I'm a bit puzzled that I've ever seen this only when running regression\n> tests under our multimaster. While multimaster contains a fair amount of\n> C code, I don't see how any of it can interfere with the vacuuming\n> business here. 
I can't say I did my best to create the repoduction\n> though -- the explanation above seems to be enough.\n\nWhy not just using DISABLE_PAGE_SKIPPING instead here?\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 16:12:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On 3/30/21 10:12 AM, Michael Paquier wrote:\n\n > Yep, this is the same problem as the one discussed for c2dc1a7, where\n > a concurrent checkpoint may cause a page to be skipped, breaking the\n > test.\n\nIndeed, Alexander Lakhin pointed me to that commit after I wrote the \nmessage.\n\n > Why not just using DISABLE_PAGE_SKIPPING instead here?\n\nI think this is not enough. DISABLE_PAGE_SKIPPING disables vm consulting \n(sets\naggressive=true in the routine); however, if the page is locked and\nlazy_check_needs_freeze says there is nothing to freeze on it, we again \ndon't\nlook at its contents closely.\n\n\n-- cheers, arseny\n\n\n", "msg_date": "Tue, 30 Mar 2021 16:22:18 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Tue, Mar 30, 2021 at 10:22 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n> On 3/30/21 10:12 AM, Michael Paquier wrote:\n>\n> > Yep, this is the same problem as the one discussed for c2dc1a7, where\n> > a concurrent checkpoint may cause a page to be skipped, breaking the\n> > test.\n>\n> Indeed, Alexander Lakhin pointed me to that commit after I wrote the\n> message.\n>\n> > Why not just using DISABLE_PAGE_SKIPPING instead here?\n>\n> I think this is not enough. 
DISABLE_PAGE_SKIPPING disables vm consulting\n> (sets\n> aggressive=true in the routine); however, if the page is locked and\n> lazy_check_needs_freeze says there is nothing to freeze on it, we again\n> don't\n> look at its contents closely.\n\nRight.\n\nIs it better to add FREEZE to the first \"VACUUM reloptions_test;\" as well?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Wed, 31 Mar 2021 22:17:09 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "\nOn 3/31/21 4:17 PM, Masahiko Sawada wrote:\n\n > Is it better to add FREEZE to the first \"VACUUM reloptions_test;\" as \nwell?\n\nI don't think this matters much, as it tests the contrary and the \nprobability of\nsuccessful test passing (in case of theoretical bug making vacuum to \ntruncate\nnon-empty relation) becomes stunningly small. But adding it wouldn't hurt\neither.\n\n-- cheers, arseny\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:39:39 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Wed, Mar 31, 2021 at 10:39 PM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> On 3/31/21 4:17 PM, Masahiko Sawada wrote:\n>\n> > Is it better to add FREEZE to the first \"VACUUM reloptions_test;\" as\n> well?\n>\n> I don't think this matters much, as it tests the contrary and the\n> probability of\n> successful test passing (in case of theoretical bug making vacuum to\n> truncate\n> non-empty relation) becomes stunningly small. 
But adding it wouldn't hurt\n> either.\n\nI was concerned a bit that without FREEZE in the first VACUUM we could\nnot test it properly because the table could not be truncated because\neither vacuum_truncate is off or the page is skipped.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 1 Apr 2021 11:33:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "\nMasahiko Sawada <sawada.mshk@gmail.com> writes:\n\n>> I don't think this matters much, as it tests the contrary and the\n>> probability of\n>> successful test passing (in case of theoretical bug making vacuum to\n>> truncate\n>> non-empty relation) becomes stunningly small. But adding it wouldn't hurt\n>> either.\n>\n> I was concerned a bit that without FREEZE in the first VACUUM we could\n> not test it properly because the table could not be truncated because\n> either vacuum_truncate is off\n\nFREEZE won't help us there.\n\n> or the page is skipped.\n\nYou mean at the same time there is a potential bug in vacuum which would\nforce the truncation of non-empy relation if the page wasn't locked?\nThat would mean the chance of test getting passed even single time is\nclose to 0, as currently the chance of its failure is close to 1.\n\n\n-- cheers, arseny\n\n\n", "msg_date": "Thu, 01 Apr 2021 06:08:28 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "\nArseny Sher <a.sher@postgrespro.ru> writes:\n\n> as currently the chance of its failure is close to 1.\n\nA typo, to 0 too, of course.\n\n\n", "msg_date": "Thu, 01 Apr 2021 06:11:50 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Thu, Apr 1, 2021 at 12:08 PM 
Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n>\n> >> I don't think this matters much, as it tests the contrary and the\n> >> probability of\n> >> successful test passing (in case of theoretical bug making vacuum to\n> >> truncate\n> >> non-empty relation) becomes stunningly small. But adding it wouldn't hurt\n> >> either.\n> >\n> > I was concerned a bit that without FREEZE in the first VACUUM we could\n> > not test it properly because the table could not be truncated because\n> > either vacuum_truncate is off\n>\n> FREEZE won't help us there.\n>\n> > or the page is skipped.\n>\n> You mean at the same time there is a potential bug in vacuum which would\n> force the truncation of non-empy relation if the page wasn't locked?\n\nJust to be clear the context, I’m mentioning the following test case:\n\nCREATE TABLE reloptions_test(i INT NOT NULL, j text)\n WITH (vacuum_truncate=false,\n toast.vacuum_truncate=false,\n autovacuum_enabled=false);\nSELECT reloptions FROM pg_class WHERE oid = 'reloptions_test'::regclass;\nINSERT INTO reloptions_test VALUES (1, NULL), (NULL, NULL);\nVACUUM reloptions_test;\nSELECT pg_relation_size('reloptions_test') > 0;\n\nWhat I meant is that without FREEZE option, there are two possible\ncases where the table is not truncated (i.g.,\npg_relation_size('reloptions_test') > 0 is true): the page got empty\nby vacuum but is not truncated because of vacuum_truncate = false, and\nthe page could not be vacuumed (i.g., tuples remain in the page)\nbecause the page is skipped due to conflict on cleanup lock on the\npage. This test is intended to test the former case. 
I guess adding\nFREEZE will prevent the latter case.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 1 Apr 2021 12:52:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Thu, Apr 01, 2021 at 12:52:21PM +0900, Masahiko Sawada wrote:\n> Just to be clear the context, I’m mentioning the following test case:\n\n(Coming back a couple of emails later, where indeed I forgot about the\nbusiness with lazy_check_needs_freeze() that could cause a page to be\nskipped even if DISABLE_PAGE_SKIPPING is used.)\n\n> What I meant is that without FREEZE option, there are two possible\n> cases where the table is not truncated (i.g.,\n> pg_relation_size('reloptions_test') > 0 is true): the page got empty\n> by vacuum but is not truncated because of vacuum_truncate = false, and\n> the page could not be vacuumed (i.g., tuples remain in the page)\n> because the page is skipped due to conflict on cleanup lock on the\n> page. This test is intended to test the former case. I guess adding\n> FREEZE will prevent the latter case.\n\nWhat you are writing here makes sense to me. 
Looking at the test, it\nis designed to test vacuum_truncate, aka that the behavior we want to\nstress (your former case here) gets stressed all the time, so adding\nthe options to avoid the latter case all the time is an improvement.\nAnd this, even if the latter case does not actually cause a diff and\nit has a small chance to happen in practice.\n\nIt would be good to add a comment explaining why the options are\nadded (aka just don't skip any pages).\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 13:28:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n\n> On Thu, Apr 01, 2021 at 12:52:21PM +0900, Masahiko Sawada wrote:\n>> Just to be clear the context, I’m mentioning the following test case:\n\nSorry, I misremembered the test and assumed the table is non-empty there\nwhile it is empty but vacuum_truncate is disabled. Still, this doesn't\nchange my conclusion of freezing being not a big deal there due to small\nchance of locked page. Anyway, let's finish with this.\n\n> What you are writing here makes sense to me. 
Looking at the test, it\n> is designed to test vacuum_truncate, aka that the behavior we want to\n> stress (your former case here) gets stressed all the time, so adding\n> the options to avoid the latter case all the time is an improvement.\n> And this, even if the latter case does not actually cause a diff and\n> it has a small chance to happen in practice.\n>\n> It would be good to add a comment explaining why the options are\n> added (aka just don't skip any pages).\n\nHow about the attached?\n\n\n-- cheers, arseny", "msg_date": "Thu, 01 Apr 2021 10:58:25 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Thu, Apr 01, 2021 at 10:58:25AM +0300, Arseny Sher wrote:\n> How about the attached?\n\nSounds fine to me. Sawada-san?\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 17:49:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Thu, Apr 1, 2021 at 5:49 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 01, 2021 at 10:58:25AM +0300, Arseny Sher wrote:\n> > How about the attached?\n\nThank you for updating the patch!\n\n> Sounds fine to me. 
Sawada-san?\n\nLooks good to me too.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 1 Apr 2021 22:54:29 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Thu, Apr 01, 2021 at 10:54:29PM +0900, Masahiko Sawada wrote:\n> Looks good to me too.\n\nOkay, applied and back-patched down to 12 then.\n--\nMichael", "msg_date": "Fri, 2 Apr 2021 09:46:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Fri, Apr 2, 2021 at 9:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Apr 01, 2021 at 10:54:29PM +0900, Masahiko Sawada wrote:\n> > Looks good to me too.\n>\n> Okay, applied and back-patched down to 12 then.\n\nThank you!\n\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 2 Apr 2021 11:57:00 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Fri, Apr 2, 2021 at 9:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Okay, applied and back-patched down to 12 then.\n\nThank you both. Unfortunately and surprisingly, the test still fails\n(perhaps even rarer, once in several hundred runs) under\nmultimaster. After scratching the head for some more time, it seems to\nme the following happens: not only vacuum encounters locked page, but\nalso there exist a concurrent backend (as the parallel schedule is run)\nwho holds back oldestxmin keeping it less than xid of transaction which\ndid the insertion\n\nINSERT INTO reloptions_test VALUES (1, NULL), (NULL, NULL);\n\nFreezeLimit can't be higher than oldestxmin, so lazy_check_needs_freeze\ndecides there is nothing to freeze on the page. 
multimaster commits are\nquite heavy, which apparently shifts the timings making the issue more\nlikely.\n\nCurrently we are testing the rather funny attached patch which forces\nall such old-snapshot-holders to finish. It is crutchy, but I doubt we\nwant to change vacuum logic (e.g. checking tuple liveness in\nlazy_check_needs_freeze) due to this issue. (it is especially crutchy in\nxid::bigint casts, but wraparound is hardly expected in regression tests\nrun).\n\n\n-- cheers, arseny", "msg_date": "Sun, 04 Apr 2021 23:00:25 +0300", "msg_from": "Arseny Sher <a.sher@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" }, { "msg_contents": "On Mon, Apr 5, 2021 at 5:00 AM Arseny Sher <a.sher@postgrespro.ru> wrote:\n>\n>\n> On Fri, Apr 2, 2021 at 9:46 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > Okay, applied and back-patched down to 12 then.\n>\n> Thank you both. Unfortunately and surprisingly, the test still fails\n> (perhaps even rarer, once in several hundred runs) under\n> multimaster. After scratching the head for some more time, it seems to\n> me the following happens: not only vacuum encounters locked page, but\n> also there exist a concurrent backend (as the parallel schedule is run)\n> who holds back oldestxmin keeping it less than xid of transaction which\n> did the insertion\n>\n> INSERT INTO reloptions_test VALUES (1, NULL), (NULL, NULL);\n>\n> FreezeLimit can't be higher than oldestxmin, so lazy_check_needs_freeze\n> decides there is nothing to freeze on the page. multimaster commits are\n> quite heavy, which apparently shifts the timings making the issue more\n> likely.\n>\n> Currently we are testing the rather funny attached patch which forces\n> all such old-snapshot-holders to finish. It is crutchy, but I doubt we\n> want to change vacuum logic (e.g. checking tuple liveness in\n> lazy_check_needs_freeze) due to this issue. 
(it is especially crutchy in\n> xid::bigint casts, but wraparound is hardly expected in regression tests\n> run).\n\nOr maybe we can remove reloptions.sql test from the parallel group.\nBTW I wonder if the following tests in vacuum.sql test also have the\nsame issue (page skipping and oldestxmin):\n\n-- TRUNCATE option\nCREATE TABLE vac_truncate_test(i INT NOT NULL, j text)\n WITH (vacuum_truncate=true, autovacuum_enabled=false);\nINSERT INTO vac_truncate_test VALUES (1, NULL), (NULL, NULL);\nVACUUM (TRUNCATE FALSE) vac_truncate_test;\nSELECT pg_relation_size('vac_truncate_test') > 0;\nVACUUM vac_truncate_test;\nSELECT pg_relation_size('vac_truncate_test') = 0;\nVACUUM (TRUNCATE FALSE, FULL TRUE) vac_truncate_test;\nDROP TABLE vac_truncate_test;\n\nShould we add FREEZE to those tests as well?\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Mon, 5 Apr 2021 20:55:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flaky vacuum truncate test in reloptions.sql" } ]
[ { "msg_contents": "Hi\n\nFound one code committed at 2021.01.13 with copyright 2020.\nFix it in the attached patch. \n\nRegards,\nTang", "msg_date": "Tue, 30 Mar 2021 02:51:26 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Copyright update for nbtsearch.c" }, { "msg_contents": "On Tue, Mar 30, 2021 at 02:51:26AM +0000, tanghy.fnst@fujitsu.com wrote:\n> Found one code committed at 2021.01.13 with copyright 2020.\n> Fix it in the attached patch. \n\nThanks. If I look at the top of HEAD, it is not the only place. I am\nto blame for some of them like src/common/sha1. Will fix in a couple\nof minutes the whole set.\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 12:08:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Copyright update for nbtsearch.c" }, { "msg_contents": "Hi,\n\nOn Tue, Mar 30, 2021 at 10:51 AM tanghy.fnst@fujitsu.com\n<tanghy.fnst@fujitsu.com> wrote:\n>\n> Hi\n>\n> Found one code committed at 2021.01.13 with copyright 2020.\n> Fix it in the attached patch.\n\nThis is actually gistfuncs.c. There are other files impacted\n(jsonbsubs, hex, sha1, pg_iovec and rewriteSearchCycle) see\nhttps://www.postgresql.org/message-id/20210323143438.GB579@momjian.us.\n\n\n", "msg_date": "Tue, 30 Mar 2021 11:09:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Copyright update for nbtsearch.c" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:09:47AM +0800, Julien Rouhaud wrote:\n> This is actually gistfuncs.c. There are other files impacted\n> (jsonbsubs, hex, sha1, pg_iovec and rewriteSearchCycle) see\n> https://www.postgresql.org/message-id/20210323143438.GB579@momjian.us.\n\nAh, thanks. I did not notice this one. 
Let's leave that alone then,\nexcept if there are more people in favor of fixing the whole set.\n--\nMichael", "msg_date": "Tue, 30 Mar 2021 12:35:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Copyright update for nbtsearch.c" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:09:47AM +0800, Julien Rouhaud wrote:\n> This is actually gistfuncs.c.\n\nThanks, you are right. There's typo in the mail title. \nSorry for your confusion.\n\nOn Tuesday, March 30, 2021 12:08 PM, Michael Paquier wrote:\n>Thanks. If I look at the top of HEAD, it is not the only place. I am\n>to blame for some of them like src/common/sha1. Will fix in a couple\n>of minutes the whole set.\n\nThanks. Please feel free to fix this issue.\n\nRegards,\nTang\n\n\n", "msg_date": "Tue, 30 Mar 2021 03:44:15 +0000", "msg_from": "\"tanghy.fnst@fujitsu.com\" <tanghy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Copyright update for nbtsearch.c" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Ah, thanks. I did not notice this one. Let's leave that alone then,\n> except if there are more people in favor of fixing the whole set.\n\nMight be worth running Bruce's copyright-update script again, though\nI'd suggest waiting till after the CF closes. We might be seeing a\nfew more of these land in the next few days.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:03:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Copyright update for nbtsearch.c" } ]
[ { "msg_contents": "There are a couple of error messages within the logical replication\ncode where the errdetail text includes a prefix of \"The error was:\"\n\nI could not find any other examples in all the PG src which have a\nsimilar errdetail prefix like this.\n\nThe text seems not only redundant, but \"The error was: ERROR: \" also\nlooks a bit peculiar.\n\ne.g.\n------\n2021-03-30 14:17:37.567 AEDT [22317] ERROR: could not drop the\nreplication slot \"tap_sub\" on publisher\n2021-03-30 14:17:37.567 AEDT [22317] DETAIL: The error was: ERROR:\nreplication slot \"tap_sub\" does not exist\n2021-03-30 14:17:37.567 AEDT [22317] STATEMENT: DROP SUBSCRIPTION tap_sub;\n------\n\nPSA a small patch which simply removes this prefix \"The error was:\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 15:29:35 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Redundant errdetail prefix \"The error was:\" in some logical\n replication messages" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> There are a couple of error messages within the logical replication\n> code where the errdetail text includes a prefix of \"The error was:\"\n\nHmm, isn't project style more usually to include the error reason\nin the primary message? That is,\n\n ereport(LOG,\n- (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n- slotname),\n- errdetail(\"The error was: %s\", res->err)));\n+ (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher: %s\",\n+ slotname, res->err)));\n\nand so on. 
If we had reason to think that res->err would be extremely\nlong, maybe pushing it to errdetail would be sensible, but I'm not\nseeing that that is likely.\n\n(I think the \"the\" before \"replication slot\" could go away, too.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 02:10:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Redundant errdetail prefix \"The error was:\" in some logical\n replication messages" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > There are a couple of error messages within the logical replication\n> > code where the errdetail text includes a prefix of \"The error was:\"\n>\n> Hmm, isn't project style more usually to include the error reason\n> in the primary message? That is,\n>\n> ereport(LOG,\n> - (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n> - slotname),\n> - errdetail(\"The error was: %s\", res->err)));\n> + (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher: %s\",\n> + slotname, res->err)));\n>\n> and so on. If we had reason to think that res->err would be extremely\n> long, maybe pushing it to errdetail would be sensible, but I'm not\n> seeing that that is likely.\n>\n> (I think the \"the\" before \"replication slot\" could go away, too.)\n\n+1 to have the res->err in the primary message itself and get rid of\nerrdetail. 
Currently the error \"could not fetch table info for table\"\ndoes that.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Mar 2021 11:51:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Redundant errdetail prefix \"The error was:\" in some logical\n replication messages" }, { "msg_contents": "On Tue, Mar 30, 2021 at 5:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > There are a couple of error messages within the logical replication\n> > code where the errdetail text includes a prefix of \"The error was:\"\n>\n> Hmm, isn't project style more usually to include the error reason\n> in the primary message? That is,\n>\n> ereport(LOG,\n> - (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher\",\n> - slotname),\n> - errdetail(\"The error was: %s\", res->err)));\n> + (errmsg(\"could not drop the replication slot \\\"%s\\\" on publisher: %s\",\n> + slotname, res->err)));\n>\n> and so on. 
If we had reason to think that res->err would be extremely\n> long, maybe pushing it to errdetail would be sensible, but I'm not\n> seeing that that is likely.\n>\n> (I think the \"the\" before \"replication slot\" could go away, too.)\n\nThanks for the review and advice.\n\nPSA version 2 of this patch which adopts your suggestions.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 19:30:02 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Redundant errdetail prefix \"The error was:\" in some logical\n replication messages" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> PSA version 2 of this patch which adopts your suggestions.\n\nLGTM, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 15:30:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Redundant errdetail prefix \"The error was:\" in some logical\n replication messages" } ]
[ { "msg_contents": "Hi,\n\nThe logical replication tablesync worker creates tablesync slots.\n\nPreviously some PG docs pages were referring to these as \"tablesync\nslots\", but other pages called them as \"table synchronization slots\".\n\nPSA a trivial patch which (for consistency) now calls them all the\nsame - \"tablesync slots\"\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Tue, 30 Mar 2021 19:51:14 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Use consistent terminology for tablesync slots." }, { "msg_contents": "On Tue, Mar 30, 2021 at 2:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi,\n>\n> The logical replication tablesync worker creates tablesync slots.\n>\n> Previously some PG docs pages were referring to these as \"tablesync\n> slots\", but other pages called them as \"table synchronization slots\".\n>\n> PSA a trivial patch which (for consistency) now calls them all the\n> same - \"tablesync slots\"\n>\n\n+1 for the consistency. But I think it better to use \"table\nsynchronization slots\" in the user-facing docs as that makes it easier\nfor users to understand.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Mar 2021 14:44:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use consistent terminology for tablesync slots." }, { "msg_contents": "On Tue, Mar 30, 2021 at 2:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 2:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > The logical replication tablesync worker creates tablesync slots.\n> >\n> > Previously some PG docs pages were referring to these as \"tablesync\n> > slots\", but other pages called them as \"table synchronization slots\".\n> >\n> > PSA a trivial patch which (for consistency) now calls them all the\n> > same - \"tablesync slots\"\n> >\n>\n> +1 for the consistency. 
But I think it better to use \"table\n> synchronization slots\" in the user-facing docs as that makes it easier\n> for users to understand.\n\n+1 for the phrasing \"tablesync slots\" to \"table synchronization slots\"\nas it is more readable. And also the user facing error message and guc\ndescription i.e. \"logical replication table synchronization worker for\nsubscription\" and max_sync_workers_per_subscription respectively are\nshowing it that way.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Mar 2021 14:56:42 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use consistent terminology for tablesync slots." }, { "msg_contents": "On Tue, Mar 30, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 2:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > The logical replication tablesync worker creates tablesync slots.\n> >\n> > Previously some PG docs pages were referring to these as \"tablesync\n> > slots\", but other pages called them as \"table synchronization slots\".\n> >\n> > PSA a trivial patch which (for consistency) now calls them all the\n> > same - \"tablesync slots\"\n> >\n>\n> +1 for the consistency. But I think it better to use \"table\n> synchronization slots\" in the user-facing docs as that makes it easier\n> for users to understand.\n>\n\nPSA patch version 2 updated to use \"table synchronization slots\" as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 31 Mar 2021 12:09:43 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use consistent terminology for tablesync slots." 
}, { "msg_contents": "On Wed, Mar 31, 2021 at 6:39 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 8:14 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 30, 2021 at 2:21 PM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > The logical replication tablesync worker creates tablesync slots.\n> > >\n> > > Previously some PG docs pages were referring to these as \"tablesync\n> > > slots\", but other pages called them as \"table synchronization slots\".\n> > >\n> > > PSA a trivial patch which (for consistency) now calls them all the\n> > > same - \"tablesync slots\"\n> > >\n> >\n> > +1 for the consistency. But I think it better to use \"table\n> > synchronization slots\" in the user-facing docs as that makes it easier\n> > for users to understand.\n> >\n>\n> PSA patch version 2 updated to use \"table synchronization slots\" as suggested.\n>\n\nThanks, Pushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 31 Mar 2021 10:40:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use consistent terminology for tablesync slots." } ]
[ { "msg_contents": "Hi,\n\nNoticed that an extra semicolon in a couple of test cases related to\npostgres_fdw. Removed that in the attached patch. It can be backported till\nv11 where we added those test cases.\n\n-- \n--\n\nThanks & Regards,\nSuraj kharage,\n\n\n\nedbpostgres.com", "msg_date": "Tue, 30 Mar 2021 15:21:24 +0530", "msg_from": "Suraj Kharage <suraj.kharage@enterprisedb.com>", "msg_from_op": true, "msg_subject": "extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Tue, Mar 30, 2021 at 3:21 PM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Noticed that an extra semicolon in a couple of test cases related to postgres_fdw. Removed that in the attached patch. It can be backported till v11 where we added those test cases.\n\n+1 for the change. It looks like a typo and can be backported.\n\nWith Regards,\nBharath Rupireddy.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Mar 2021 16:49:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Tue, Mar 30, 2021 at 4:50 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Tue, Mar 30, 2021 at 3:21 PM Suraj Kharage\n> <suraj.kharage@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > Noticed that an extra semicolon in a couple of test cases related to postgres_fdw. Removed that in the attached patch. It can be backported till v11 where we added those test cases.\n>\n> +1 for the change. 
It looks like a typo and can be backported.\n>\n\nLooks good to me as well but I think one can choose not to backpatch\nas there is no functional impact but OTOH, there is some value in\nkeeping tests/code consistent.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 30 Mar 2021 17:00:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Tue, Mar 30, 2021 at 3:21 PM Suraj Kharage\n<suraj.kharage@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> Noticed that an extra semicolon in a couple of test cases related to postgres_fdw. Removed that in the attached patch. It can be backported till v11 where we added those test cases.\n>\n\nThanks for identifying this, the changes look fine to me.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Wed, 31 Mar 2021 09:08:25 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Tue, Mar 30, 2021 at 05:00:53PM +0530, Amit Kapila wrote:\n> Looks good to me as well but I think one can choose not to backpatch\n> as there is no functional impact but OTOH, there is some value in\n> keeping tests/code consistent.\n\nFWIW, I would not bother with the back branches for just that, but if\nyou feel that this is better, of course feel free.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 13:05:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Wed, Mar 31, 2021 at 9:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 30, 2021 at 05:00:53PM +0530, Amit Kapila wrote:\n> > Looks good to me as well but I think one can choose not to backpatch\n> > as there is no functional impact but OTOH, there is some value in\n> > keeping tests/code consistent.\n>\n> FWIW, I would not bother with the 
back branches for just that, but if\n> you feel that this is better, of course feel free.\n>\n\nFair enough. I'll push this just for HEAD.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 31 Mar 2021 09:47:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" }, { "msg_contents": "On Wed, Mar 31, 2021 at 9:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 31, 2021 at 9:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Mar 30, 2021 at 05:00:53PM +0530, Amit Kapila wrote:\n> > > Looks good to me as well but I think one can choose not to backpatch\n> > > as there is no functional impact but OTOH, there is some value in\n> > > keeping tests/code consistent.\n> >\n> > FWIW, I would not bother with the back branches for just that, but if\n> > you feel that this is better, of course feel free.\n> >\n>\n> Fair enough. I'll push this just for HEAD.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 31 Mar 2021 11:27:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: extra semicolon in postgres_fdw test cases" } ]
[ { "msg_contents": "Hi,\n\nWhile running some sanity checks on the regression tests, I found one test that\nreturns different results depending on whether an index or a sequential scan is\nused.\n\nMinimal reproducer:\n\n=# CREATE TABLE point_tbl AS select '(nan,nan)'::point f1;\n=# CREATE INDEX ON point_tbl USING gist(f1);\n\n=# EXPLAIN SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n QUERY PLAN\n------------------------------------------------------------------------------\n Seq Scan on point_tbl (cost=0.00..1.01 rows=1 width=16)\n Filter: (f1 <@ '((0,0),(0,100),(100,100),(50,50),(100,0),(0,0))'::polygon)\n(2 rows)\n\n=# SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n f1\n-----------\n (NaN,NaN)\n(1 row)\n\nSET enable_seqscan = 0;\n\n\n=# EXPLAIN SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n QUERY PLAN \n----------------------------------------------------------------------------------------\n Index Only Scan using point_tbl_f1_idx on point_tbl (cost=0.12..8.14 rows=1 width=16)\n Index Cond: (f1 <@ '((0,0),(0,100),(100,100),(50,50),(100,0),(0,0))'::polygon)\n(2 rows)\n\n=# SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n f1 \n----\n(0 rows)\n\nThe discrepancy comes from the fact that the sequential scan checks the\ncondition using point_inside() / lseg_crossing(), while the gist index will\ncheck the condition using box_overlap() / box_ov(), which have different\nopinions on how to handle NaN.\n\nGetting a consistent behavior shouldn't be hard, but I'm unsure which behavior\nis actually correct.\n\n\n", "msg_date": "Tue, 30 Mar 2021 17:57:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Issue with point_ops and NaN" }, { "msg_contents": "On Tue, 2021-03-30 at 17:57 +0800, Julien Rouhaud wrote:\n> While running some sanity checks 
on the regression tests, I found one test that\n> returns different results depending on whether an index or a sequential scan is\n> used.\n> \n> Minimal reproducer:\n> \n> =# CREATE TABLE point_tbl AS select '(nan,nan)'::point f1;\n> =# CREATE INDEX ON point_tbl USING gist(f1);\n> \n> =# EXPLAIN SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> Seq Scan on point_tbl (cost=0.00..1.01 rows=1 width=16)\n> Filter: (f1 <@ '((0,0),(0,100),(100,100),(50,50),(100,0),(0,0))'::polygon)\n> (2 rows)\n> \n> =# SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n> f1\n> -----------\n> (NaN,NaN)\n> (1 row)\n> \n> SET enable_seqscan = 0;\n> \n> \n> =# EXPLAIN SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------\n> Index Only Scan using point_tbl_f1_idx on point_tbl (cost=0.12..8.14 rows=1 width=16)\n> Index Cond: (f1 <@ '((0,0),(0,100),(100,100),(50,50),(100,0),(0,0))'::polygon)\n> (2 rows)\n> \n> =# SELECT * FROM point_tbl WHERE f1 <@ polygon '(0,0),(0,100),(100,100),(50,50),(100,0),(0,0)';\n> f1 \n> ----\n> (0 rows)\n> \n> The discrepancy comes from the fact that the sequential scan checks the\n> condition using point_inside() / lseg_crossing(), while the gist index will\n> check the condition using box_overlap() / box_ov(), which have different\n> opinions on how to handle NaN.\n> \n> Getting a consistent behavior shouldn't be hard, but I'm unsure which behavior\n> is actually correct.\n\nI'd say that this is certainly wrong:\n\nSELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n\n ?column? 
\n----------\n t\n(1 row)\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 14:47:05 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Tue, Mar 30, 2021 at 02:47:05PM +0200, Laurenz Albe wrote:\n> On Tue, 2021-03-30 at 17:57 +0800, Julien Rouhaud wrote:\n> > \n> > Getting a consistent behavior shouldn't be hard, but I'm unsure which behavior\n> > is actually correct.\n> \n> I'd say that this is certainly wrong:\n> \n> SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> \n> ?column? \n> ----------\n> t\n> (1 row)\n\nYeah that's what I think too, but I wanted to have confirmation.\n\n\n", "msg_date": "Tue, 30 Mar 2021 20:54:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Tue, Mar 30, 2021 at 02:47:05PM +0200, Laurenz Albe wrote:\n>> I'd say that this is certainly wrong:\n>> SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n>> \n>> ?column? 
\n>> ----------\n>> t\n>> (1 row)\n\n> Yeah that's what I think too, but I wanted to have confirmation.\n\nAgreed --- one could make an argument for either 'false' or NULL\nresult, but surely not 'true'.\n\nI wonder if Horiguchi-san's patch [1] improves this case.\n\n\t\t\tregards, tom lane\n\n[1] https://commitfest.postgresql.org/32/2710/\n\n\n", "msg_date": "Tue, 30 Mar 2021 11:02:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Tue, Mar 30, 2021 at 02:47:05PM +0200, Laurenz Albe wrote:\n> >> I'd say that this is certainly wrong:\n> >> SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> >> \n> >> ?column?
I'll check tomorrow.\n\n\n", "msg_date": "Tue, 30 Mar 2021 23:39:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:39:40PM +0800, Julien Rouhaud wrote:\n> On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n>> Agreed --- one could make an argument for either 'false' or NULL\n>> result, but surely not 'true'.\n> \n> I would think that it should return NULL since it's not inside nor outside the\n> polygon, but I'm fine with false.\n\nYeah, this is trying to make an undefined point fit into a box that\nhas a definition, so \"false\" does not make sense to me either here as\nit implies that the point exists? NULL seems adapted here.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 09:26:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:39:40PM +0800, Julien Rouhaud wrote:\n> On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n> > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > On Tue, Mar 30, 2021 at 02:47:05PM +0200, Laurenz Albe wrote:\n> > >> I'd say that this is certainly wrong:\n> > >> SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> > >> \n> > >> ?column? 
\n> > >> ----------\n> > >> t\n> > >> (1 row)\n> > \n> > > Yeah that's what I think too, but I wanted to have confirmation.\n> > \n> > Agreed --- one could make an argument for either 'false' or NULL\n> > result, but surely not 'true'.\n> \n> I would think that it should return NULL since it's not inside nor outside the\n> polygon, but I'm fine with false.\n> \n> > I wonder if Horiguchi-san's patch [1] improves this case.\n> \n> Oh I totally missed that patch :(\n> \n> After a quick look I see this addition in point_inside():\n> \n> +\t\t/* NaN makes the point cannot be inside the polygon */\n> +\t\tif (unlikely(isnan(x) || isnan(y)))\n> +\t\t\treturn 0;\n> \n> So I would assume that it should fix this case too. I'll check tomorrow.\n\nI confirm that this patch fixes the issue, and after looking a bit more at the\nthread it's unsurprising since Jesse initially reported the exact same problem.\n\nI'll try to review it as soon as I'll be done with my work duties.\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:04:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Wed, 31 Mar 2021 09:26:00 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Mar 30, 2021 at 11:39:40PM +0800, Julien Rouhaud wrote:\n> > On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n> >> Agreed --- one could make an argument for either 'false' or NULL\n> >> result, but surely not 'true'.\n> > \n> > I would think that it should return NULL since it's not inside nor outside the\n> > polygon, but I'm fine with false.\n> \n> Yeah, this is trying to make an undefined point fit into a box that\n> has a definition, so \"false\" does not make sense to me either here as\n> it implies that the point exists? NULL seems adapted here.\n\nSounds reasonable. 
The function may return NULL for other cases so\nit's easily changed to NULL.\n\n# But it's bothersome to cover all parallels..\n\nDoes anyone oppose to make the case NULL? If no one objects, I'll do\nthat.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 31 Mar 2021 15:46:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Wed, 31 Mar 2021 12:04:26 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Tue, Mar 30, 2021 at 11:39:40PM +0800, Julien Rouhaud wrote:\n> > On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n> > > Julien Rouhaud <rjuju123@gmail.com> writes:\n> > > > On Tue, Mar 30, 2021 at 02:47:05PM +0200, Laurenz Albe wrote:\n> > > >> I'd say that this is certainly wrong:\n> > > >> SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> > > >> \n> > > >> ?column? \n> > > >> ----------\n> > > >> t\n> > > >> (1 row)\n> > > \n> > > > Yeah that's what I think too, but I wanted to have confirmation.\n> > > \n> > > Agreed --- one could make an argument for either 'false' or NULL\n> > > result, but surely not 'true'.\n> > \n> > I would think that it should return NULL since it's not inside nor outside the\n> > polygon, but I'm fine with false.\n> > \n> > > I wonder if Horiguchi-san's patch [1] improves this case.\n> > \n> > Oh I totally missed that patch :(\n> > \n> > After a quick look I see this addition in point_inside():\n> > \n> > +\t\t/* NaN makes the point cannot be inside the polygon */\n> > +\t\tif (unlikely(isnan(x) || isnan(y)))\n> > +\t\t\treturn 0;\n> > \n> > So I would assume that it should fix this case too. 
I'll check tomorrow.\n> \n> I confirm that this patch fixes the issue, and after looking a bit more at the\n> thread it's unsurprising since Jesse initially reported the exact same problem.\n> \n> I'll try to review it as soon as I'll be done with my work duties.\n\nThanks! However, Michael's suggestion is worth considering. What do\nyou think about makeing NaN-involved comparison return NULL? If you\nagree to that, I'll make a further change to the patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 31 Mar 2021 15:48:16 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Wed, 31 Mar 2021 15:46:16 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 31 Mar 2021 09:26:00 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> > On Tue, Mar 30, 2021 at 11:39:40PM +0800, Julien Rouhaud wrote:\n> > > On Tue, Mar 30, 2021 at 11:02:32AM -0400, Tom Lane wrote:\n> > >> Agreed --- one could make an argument for either 'false' or NULL\n> > >> result, but surely not 'true'.\n> > > \n> > > I would think that it should return NULL since it's not inside nor outside the\n> > > polygon, but I'm fine with false.\n> > \n> > Yeah, this is trying to make an undefined point fit into a box that\n> > has a definition, so \"false\" does not make sense to me either here as\n> > it implies that the point exists? NULL seems adapted here.\n> \n> Sounds reasonable. The function may return NULL for other cases so\n> it's easily changed to NULL.\n> \n> # But it's bothersome to cover all parallels..\n\nHmm. Many internal functions handles bool, which cannot handle the\ncase of NaN naturally. In short, it's more invasive than expected.\n\n> Does anyone oppose to make the case NULL? If no one objects, I'll do\n> that.\n\nMmm. 
I'd like to reduce from +1 to +0.7 or so, considering the amount\nof needed work...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:10:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Wed, Mar 31, 2021 at 03:48:16PM +0900, Kyotaro Horiguchi wrote:\n> \n> Thanks! However, Michael's suggestion is worth considering. What do\n> you think about makeing NaN-involved comparison return NULL? If you\n> agree to that, I'll make a further change to the patch.\n\nAs I mentioned in [1] I think that returning NULL would the right thing to do.\nBut you mentioned elsewhere that it would need a lot more work to make the code\nwork that way, so given that we're 7 days away from the feature freeze maybe\nreturning false would be a better option. One important thing to consider is\nthat we should consistently return NULL for similar cases, and having some\ndiscrepancy there would be way worse than returning false everywhere.\n\n[1] https://www.postgresql.org/message-id/20210330153940.vmncwnmuw3qnpkfa@nol\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:30:41 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Wed, 2021-03-31 at 15:48 +0900, Kyotaro Horiguchi wrote:\n> > > > > > SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> > > > > > ?column? \n> > > > > > ----------\n> > > > > > t\n> > > > > > (1 row)\n> > > > \n> > > > Agreed --- one could make an argument for either 'false' or NULL\n> > > > result, but surely not 'true'.\n> \n> Thanks! However, Michael's suggestion is worth considering. What do\n> you think about makeing NaN-involved comparison return NULL? 
If you\n> agree to that, I'll make a further change to the patch.\n\nIf you think of \"NaN\" literally as \"not a number\", then FALSE would\nmake sense, since \"not a number\" implies \"not a number between 0 and 1\".\n\nBut since NaN is the result of operations like 0/0 or infinity - infinity,\nNULL might be better.\n\nSo I'd opt for NULL too.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:01:08 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Wed, 31 Mar 2021 16:30:41 +0800, Julien Rouhaud <rjuju123@gmail.com> wrote in \n> On Wed, Mar 31, 2021 at 03:48:16PM +0900, Kyotaro Horiguchi wrote:\n> > \n> > Thanks! However, Michael's suggestion is worth considering. What do\n> > you think about makeing NaN-involved comparison return NULL? If you\n> > agree to that, I'll make a further change to the patch.\n> \n> As I mentioned in [1] I think that returning NULL would the right thing to do.\n> But you mentioned elsewhere that it would need a lot more work to make the code\n> work that way, so given that we're 7 days away from the feature freeze maybe\n> returning false would be a better option. One important thing to consider is\n\nAgreed that it's a better option.\n\nI have to change almost all boolean-returning functions to\ntri-state-boolean ones. 
I'll give it try a bit futther.\n\n> that we should consistently return NULL for similar cases, and having some\n> discrepancy there would be way worse than returning false everywhere.\n\nSure.\n\n> [1] https://www.postgresql.org/message-id/20210330153940.vmncwnmuw3qnpkfa@nol\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 01 Apr 2021 09:34:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Wed, 31 Mar 2021 12:01:08 +0200, Laurenz Albe <laurenz.albe@cybertec.at> wrote in \n> On Wed, 2021-03-31 at 15:48 +0900, Kyotaro Horiguchi wrote:\n> > > > > > > SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> > > > > > > ?column? \n> > > > > > > ----------\n> > > > > > > t\n> > > > > > > (1 row)\n> > > > > \n> > > > > Agreed --- one could make an argument for either 'false' or NULL\n> > > > > result, but surely not 'true'.\n> > \n> > Thanks! However, Michael's suggestion is worth considering. What do\n> > you think about makeing NaN-involved comparison return NULL? If you\n> > agree to that, I'll make a further change to the patch.\n> \n> If you think of \"NaN\" literally as \"not a number\", then FALSE would\n> make sense, since \"not a number\" implies \"not a number between 0 and 1\".\n> \n> But since NaN is the result of operations like 0/0 or infinity - infinity,\n> NULL might be better.\n> \n> So I'd opt for NULL too.\n\nThanks. 
Do you think it's acceptable that returning false instead of\nNULL as an alternative behavior?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 01 Apr 2021 09:35:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "At Thu, 01 Apr 2021 09:34:40 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I have to change almost all boolean-returning functions to\n> tri-state-boolean ones. I'll give it try a bit futther.\n\nThe attached is a rush work of that, on top of the (rebased version of\nthe) base patch. Disregarding its uneffectiveness, it gives a rough\nestimate of how large that would be and how that affects other parts.\n\nMaybe one of the largest issue with that would be that GiST doesn't\nseem to like NULL to be returned from comparison functions.\n\n\nregression=# set enable_seqscan to off;\nregression=# set enable_indexscan to on;\nregression=# SELECT * FROM circle_tbl WHERE f1 && circle(point(1,-2), 1) ORDER BY area(f1);\nERROR: function 0x9d7bf6 returned NULL\n(function 0x9d7bf6 is box_overlap())\n\nThat seems like the reason not to make the functions tri-state.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/utils/adt/geo_ops.c b/src/backend/utils/adt/geo_ops.c\nindex a2e798ff95..b9ff60f56b 100644\n--- a/src/backend/utils/adt/geo_ops.c\n+++ b/src/backend/utils/adt/geo_ops.c\n@@ -79,7 +79,7 @@ static inline void point_add_point(Point *result, Point *pt1, Point *pt2);\n static inline void point_sub_point(Point *result, Point *pt1, Point *pt2);\n static inline void point_mul_point(Point *result, Point *pt1, Point *pt2);\n static inline void point_div_point(Point *result, Point *pt1, Point *pt2);\n-static inline bool point_eq_point(Point *pt1, Point *pt2);\n+static inline tsbool point_eq_point(Point *pt1, Point *pt2);\n static 
inline float8 point_dt(Point *pt1, Point *pt2);\n static inline float8 point_sl(Point *pt1, Point *pt2);\n static int\tpoint_inside(Point *p, int npts, Point *plist);\n@@ -88,18 +88,18 @@ static int\tpoint_inside(Point *p, int npts, Point *plist);\n static inline void line_construct(LINE *result, Point *pt, float8 m);\n static inline float8 line_sl(LINE *line);\n static inline float8 line_invsl(LINE *line);\n-static bool line_interpt_line(Point *result, LINE *l1, LINE *l2);\n-static bool line_contain_point(LINE *line, Point *point);\n+static tsbool line_interpt_line(Point *result, LINE *l1, LINE *l2);\n+static tsbool line_contain_point(LINE *line, Point *point);\n static float8 line_closept_point(Point *result, LINE *line, Point *pt);\n \n /* Routines for line segments */\n static inline void statlseg_construct(LSEG *lseg, Point *pt1, Point *pt2);\n static inline float8 lseg_sl(LSEG *lseg);\n static inline float8 lseg_invsl(LSEG *lseg);\n-static bool lseg_interpt_line(Point *result, LSEG *lseg, LINE *line);\n-static bool lseg_interpt_lseg(Point *result, LSEG *l1, LSEG *l2);\n+static tsbool lseg_interpt_line(Point *result, LSEG *lseg, LINE *line);\n+static tsbool lseg_interpt_lseg(Point *result, LSEG *l1, LSEG *l2);\n static int\tlseg_crossing(float8 x, float8 y, float8 px, float8 py);\n-static bool lseg_contain_point(LSEG *lseg, Point *point);\n+static tsbool lseg_contain_point(LSEG *lseg, Point *point);\n static float8 lseg_closept_point(Point *result, LSEG *lseg, Point *pt);\n static float8 lseg_closept_line(Point *result, LSEG *lseg, LINE *line);\n static float8 lseg_closept_lseg(Point *result, LSEG *on_lseg, LSEG *to_lseg);\n@@ -107,14 +107,14 @@ static float8 lseg_closept_lseg(Point *result, LSEG *on_lseg, LSEG *to_lseg);\n /* Routines for boxes */\n static inline void box_construct(BOX *result, Point *pt1, Point *pt2);\n static void box_cn(Point *center, BOX *box);\n-static bool box_ov(BOX *box1, BOX *box2);\n+static tsbool box_ov(BOX *box1, BOX *box2);\n 
static float8 box_ar(BOX *box);\n static float8 box_ht(BOX *box);\n static float8 box_wd(BOX *box);\n-static bool box_contain_point(BOX *box, Point *point);\n-static bool box_contain_box(BOX *contains_box, BOX *contained_box);\n-static bool box_contain_lseg(BOX *box, LSEG *lseg);\n-static bool box_interpt_lseg(Point *result, BOX *box, LSEG *lseg);\n+static tsbool box_contain_point(BOX *box, Point *point);\n+static tsbool box_contain_box(BOX *contains_box, BOX *contained_box);\n+static tsbool box_contain_lseg(BOX *box, LSEG *lseg);\n+static tsbool box_interpt_lseg(Point *result, BOX *box, LSEG *lseg);\n static float8 box_closept_point(Point *result, BOX *box, Point *point);\n static float8 box_closept_lseg(Point *result, BOX *box, LSEG *lseg);\n \n@@ -124,9 +124,9 @@ static float8 circle_ar(CIRCLE *circle);\n /* Routines for polygons */\n static void make_bound_box(POLYGON *poly);\n static void poly_to_circle(CIRCLE *result, POLYGON *poly);\n-static bool lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start);\n-static bool poly_contain_poly(POLYGON *contains_poly, POLYGON *contained_poly);\n-static bool plist_same(int npts, Point *p1, Point *p2);\n+static tsbool lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start);\n+static tsbool poly_contain_poly(POLYGON *contains_poly, POLYGON *contained_poly);\n+static tsbool plist_same(int npts, Point *p1, Point *p2);\n static float8 dist_ppoly_internal(Point *pt, POLYGON *poly);\n \n /* Routines for encoding and decoding */\n@@ -540,9 +540,13 @@ box_same(PG_FUNCTION_ARGS)\n {\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n+\ttsbool\t\tres1 = point_eq_point(&box1->high, &box2->high);\n+\ttsbool\t\tres2 = point_eq_point(&box1->low, &box2->low);\n \n-\tPG_RETURN_BOOL(point_eq_point(&box1->high, &box2->high) &&\n-\t\t\t\t point_eq_point(&box1->low, &box2->low));\n+\tif (res1 == TS_NULL || res2 == TS_NULL)\n+\t\tPG_RETURN_NULL();\n+\n+\tPG_RETURN_BOOL(res1 && res2);\n }\n \n 
/*\t\tbox_overlap\t\t-\t\tdoes box1 overlap box2?\n@@ -553,16 +557,16 @@ box_overlap(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_ov(box1, box2));\n+\tPG_RETURN_TSBOOL(box_ov(box1, box2));\n }\n \n-static bool\n+static tsbool\n box_ov(BOX *box1, BOX *box2)\n {\n-\treturn (FPle(box1->low.x, box2->high.x) &&\n-\t\t\tFPle(box2->low.x, box1->high.x) &&\n-\t\t\tFPle(box1->low.y, box2->high.y) &&\n-\t\t\tFPle(box2->low.y, box1->high.y));\n+\treturn (TS_AND4(FPTle(box1->low.x, box2->high.x),\n+\t\t\t\t\tFPTle(box2->low.x, box1->high.x),\n+\t\t\t\t\tFPTle(box1->low.y, box2->high.y),\n+\t\t\t\t\tFPTle(box2->low.y, box1->high.y)));\n }\n \n /*\t\tbox_left\t\t-\t\tis box1 strictly left of box2?\n@@ -573,7 +577,7 @@ box_left(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(box1->high.x, box2->low.x));\n+\tPG_RETURN_TSBOOL(FPTlt(box1->high.x, box2->low.x));\n }\n \n /*\t\tbox_overleft\t-\t\tis the right edge of box1 at or left of\n@@ -588,7 +592,7 @@ box_overleft(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPle(box1->high.x, box2->high.x));\n+\tPG_RETURN_TSBOOL(FPTle(box1->high.x, box2->high.x));\n }\n \n /*\t\tbox_right\t\t-\t\tis box1 strictly right of box2?\n@@ -599,7 +603,7 @@ box_right(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(box1->low.x, box2->high.x));\n+\tPG_RETURN_TSBOOL(FPTgt(box1->low.x, box2->high.x));\n }\n \n /*\t\tbox_overright\t-\t\tis the left edge of box1 at or right of\n@@ -614,7 +618,7 @@ box_overright(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPge(box1->low.x, box2->low.x));\n+\tPG_RETURN_TSBOOL(FPTge(box1->low.x, box2->low.x));\n }\n \n /*\t\tbox_below\t\t-\t\tis box1 
strictly below box2?\n@@ -625,7 +629,7 @@ box_below(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(box1->high.y, box2->low.y));\n+\tPG_RETURN_TSBOOL(FPTlt(box1->high.y, box2->low.y));\n }\n \n /*\t\tbox_overbelow\t-\t\tis the upper edge of box1 at or below\n@@ -637,7 +641,7 @@ box_overbelow(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPle(box1->high.y, box2->high.y));\n+\tPG_RETURN_TSBOOL(FPTle(box1->high.y, box2->high.y));\n }\n \n /*\t\tbox_above\t\t-\t\tis box1 strictly above box2?\n@@ -648,7 +652,7 @@ box_above(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(box1->low.y, box2->high.y));\n+\tPG_RETURN_TSBOOL(FPTgt(box1->low.y, box2->high.y));\n }\n \n /*\t\tbox_overabove\t-\t\tis the lower edge of box1 at or above\n@@ -660,7 +664,7 @@ box_overabove(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPge(box1->low.y, box2->low.y));\n+\tPG_RETURN_TSBOOL(FPTge(box1->low.y, box2->low.y));\n }\n \n /*\t\tbox_contained\t-\t\tis box1 contained by box2?\n@@ -671,7 +675,7 @@ box_contained(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_contain_box(box2, box1));\n+\tPG_RETURN_TSBOOL(box_contain_box(box2, box1));\n }\n \n /*\t\tbox_contain\t\t-\t\tdoes box1 contain box2?\n@@ -682,19 +686,19 @@ box_contain(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_contain_box(box1, box2));\n+\tPG_RETURN_TSBOOL(box_contain_box(box1, box2));\n }\n \n /*\n * Check whether the second box is in the first box or on its border\n */\n-static bool\n+static tsbool\n box_contain_box(BOX *contains_box, BOX *contained_box)\n {\n-\treturn 
FPge(contains_box->high.x, contained_box->high.x) &&\n-\t\tFPle(contains_box->low.x, contained_box->low.x) &&\n-\t\tFPge(contains_box->high.y, contained_box->high.y) &&\n-\t\tFPle(contains_box->low.y, contained_box->low.y);\n+\treturn TS_AND4(FPTge(contains_box->high.x, contained_box->high.x),\n+\t\t\t\t FPTle(contains_box->low.x, contained_box->low.x),\n+\t\t\t\t FPTge(contains_box->high.y, contained_box->high.y),\n+\t\t\t\t FPTle(contains_box->low.y, contained_box->low.y));\n }\n \n \n@@ -712,7 +716,7 @@ box_below_eq(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPle(box1->high.y, box2->low.y));\n+\tPG_RETURN_TSBOOL(FPTle(box1->high.y, box2->low.y));\n }\n \n Datum\n@@ -721,7 +725,7 @@ box_above_eq(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPge(box1->low.y, box2->high.y));\n+\tPG_RETURN_TSBOOL(FPTge(box1->low.y, box2->high.y));\n }\n \n \n@@ -734,7 +738,7 @@ box_lt(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(box_ar(box1), box_ar(box2)));\n+\tPG_RETURN_TSBOOL(FPTlt(box_ar(box1), box_ar(box2)));\n }\n \n Datum\n@@ -743,7 +747,7 @@ box_gt(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(box_ar(box1), box_ar(box2)));\n+\tPG_RETURN_TSBOOL(FPTgt(box_ar(box1), box_ar(box2)));\n }\n \n Datum\n@@ -752,7 +756,7 @@ box_eq(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPeq(box_ar(box1), box_ar(box2)));\n+\tPG_RETURN_TSBOOL(FPTeq(box_ar(box1), box_ar(box2)));\n }\n \n Datum\n@@ -761,7 +765,7 @@ box_le(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPle(box_ar(box1), box_ar(box2)));\n+\tPG_RETURN_TSBOOL(FPTle(box_ar(box1), 
box_ar(box2)));\n }\n \n Datum\n@@ -770,7 +774,7 @@ box_ge(PG_FUNCTION_ARGS)\n \tBOX\t\t *box1 = PG_GETARG_BOX_P(0);\n \tBOX\t\t *box2 = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(FPge(box_ar(box1), box_ar(box2)));\n+\tPG_RETURN_TSBOOL(FPTge(box_ar(box1), box_ar(box2)));\n }\n \n \n@@ -981,7 +985,7 @@ line_in(PG_FUNCTION_ARGS)\n \telse\n \t{\n \t\tpath_decode(s, true, 2, &lseg.p[0], &isopen, NULL, \"line\", str);\n-\t\tif (point_eq_point(&lseg.p[0], &lseg.p[1]))\n+\t\tif (point_eq_point(&lseg.p[0], &lseg.p[1]) == TS_TRUE)\n \t\t\tereport(ERROR,\n \t\t\t\t\t(errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),\n \t\t\t\t\t errmsg(\"invalid line specification: must be two distinct points\")));\n@@ -1105,7 +1109,7 @@ line_construct_pp(PG_FUNCTION_ARGS)\n \tLINE\t *result = (LINE *) palloc(sizeof(LINE));\n \n \t/* NaNs are considered to be equal by point_eq_point */\n-\tif (point_eq_point(pt1, pt2))\n+\tif (point_eq_point(pt1, pt2) == TS_TRUE)\n \t\tereport(ERROR,\n \t\t\t\t(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n \t\t\t\t errmsg(\"invalid line specification: must be two distinct points\")));\n@@ -1126,11 +1130,16 @@ line_intersect(PG_FUNCTION_ARGS)\n \tLINE\t *l1 = PG_GETARG_LINE_P(0);\n \tLINE\t *l2 = PG_GETARG_LINE_P(1);\n \tPoint\t\txp;\n+\ttsbool\t\tr;\n \n-\tif (line_interpt_line(&xp, l1, l2) && !isnan(xp.x) && !isnan(xp.y))\n-\t\tPG_RETURN_BOOL(true);\n+\tif ((r = line_interpt_line(&xp, l1, l2)) == TS_TRUE)\n+\t{\n+\t\tif (!isnan(xp.x) && !isnan(xp.y))\n+\t\t\tPG_RETURN_BOOL(true);\n+\t\tPG_RETURN_NULL();\n+\t}\n \telse\n-\t\tPG_RETURN_BOOL(false);\n+\t\tPG_RETURN_TSBOOL(r);\n }\n \n Datum\n@@ -1139,7 +1148,7 @@ line_parallel(PG_FUNCTION_ARGS)\n \tLINE\t *l1 = PG_GETARG_LINE_P(0);\n \tLINE\t *l2 = PG_GETARG_LINE_P(1);\n \n-\tPG_RETURN_BOOL(!line_interpt_line(NULL, l1, l2));\n+\tPG_RETURN_TSBOOL(TS_NOT(line_interpt_line(NULL, l1, l2)));\n }\n \n Datum\n@@ -1148,17 +1157,18 @@ line_perp(PG_FUNCTION_ARGS)\n \tLINE\t *l1 = PG_GETARG_LINE_P(0);\n \tLINE\t *l2 = 
PG_GETARG_LINE_P(1);\n \n-\tif (unlikely(isnan(l1->C) || isnan(l2->C)))\n-\t\treturn false;\n+\tif (unlikely(isnan(l1->A) || isnan(l1->B) || isnan(l1->C) ||\n+\t\t\t\t isnan(l2->A) || isnan(l2->B) || isnan(l2->C)))\n+\t\tPG_RETURN_NULL();\n \n \tif (FPzero(l1->A))\n-\t\tPG_RETURN_BOOL(FPzero(l2->B) && !isnan(l1->B) && !isnan(l2->A));\n+\t\tPG_RETURN_BOOL(FPzero(l2->B));\n \tif (FPzero(l2->A))\n-\t\tPG_RETURN_BOOL(FPzero(l1->B) && !isnan(l2->B) && !isnan(l1->A));\n+\t\tPG_RETURN_BOOL(FPzero(l1->B));\n \tif (FPzero(l1->B))\n-\t\tPG_RETURN_BOOL(FPzero(l2->A) && !isnan(l1->A) && !isnan(l2->B));\n+\t\tPG_RETURN_BOOL(FPzero(l2->A));\n \tif (FPzero(l2->B))\n-\t\tPG_RETURN_BOOL(FPzero(l1->A) && !isnan(l2->A) && !isnan(l1->B));\n+\t\tPG_RETURN_BOOL(FPzero(l1->A));\n \n \tPG_RETURN_BOOL(FPeq(float8_div(float8_mul(l1->A, l2->A),\n \t\t\t\t\t\t\t\t float8_mul(l1->B, l2->B)), -1.0));\n@@ -1177,7 +1187,10 @@ line_horizontal(PG_FUNCTION_ARGS)\n {\n \tLINE\t *line = PG_GETARG_LINE_P(0);\n \n-\tPG_RETURN_BOOL(FPzero(line->A) && !isnan(line->B) && !isnan(line->C));\n+\tif (!isnan(line->B) && !isnan(line->C))\n+\t\tPG_RETURN_TSBOOL(FPzero(line->A));\n+\n+\tPG_RETURN_NULL();\n }\n \n \n@@ -1191,14 +1204,10 @@ line_eq(PG_FUNCTION_ARGS)\n \tLINE\t *l2 = PG_GETARG_LINE_P(1);\n \tfloat8\t\tratio;\n \n-\t/* If any NaNs are involved, insist on exact equality */\n+\t/* If any NaNs are involved, the result is unknown */\n \tif (unlikely(isnan(l1->A) || isnan(l1->B) || isnan(l1->C) ||\n \t\t\t\t isnan(l2->A) || isnan(l2->B) || isnan(l2->C)))\n-\t{\n-\t\tPG_RETURN_BOOL(float8_eq(l1->A, l2->A) &&\n-\t\t\t\t\t float8_eq(l1->B, l2->B) &&\n-\t\t\t\t\t float8_eq(l1->C, l2->C));\n-\t}\n+\t\tPG_RETURN_NULL();\n \n \t/* Otherwise, lines whose parameters are proportional are the same */\n \tif (!FPzero(l2->A))\n@@ -1281,7 +1290,7 @@ line_distance(PG_FUNCTION_ARGS)\n \tPoint\t\txp;\n \tfloat8\t\tratio;\n \n-\tif (line_interpt_line(&xp, l1, l2)) /* intersecting? 
*/\n+\tif (line_interpt_line(&xp, l1, l2) == TS_TRUE) /* intersecting? */\n \t{\n \t\t/* return NaN if NaN is involved */\n \t\tif (isnan(xp.x) || isnan(xp.y))\n@@ -1316,7 +1325,7 @@ line_interpt(PG_FUNCTION_ARGS)\n \n \tresult = (Point *) palloc(sizeof(Point));\n \n-\tif (!line_interpt_line(result, l1, l2) ||\n+\tif (line_interpt_line(result, l1, l2) != TS_TRUE ||\n \t\tisnan(result->x) || isnan(result->y))\n \t{\n \t\tpfree(result);\n@@ -1340,17 +1349,24 @@ line_interpt(PG_FUNCTION_ARGS)\n * point would have NaN coordinates. We shouldn't return false in this case\n * because that would mean the lines are parallel.\n */\n-static bool\n+static tsbool\n line_interpt_line(Point *result, LINE *l1, LINE *l2)\n {\n \tfloat8\t\tx,\n \t\t\t\ty;\n+\ttsbool\t\tr;\n \n-\tif (!FPzero(l1->B))\n+\tif ((r = FPTzero(l1->B)) == TS_FALSE)\n \t{\n \t\t/* l1 is not vertical */\n-\t\tif (FPeq(l2->A, float8_mul(l1->A, float8_div(l2->B, l1->B))))\n-\t\t\treturn false;\n+\t\tif ((r = FPTeq(l2->A, float8_mul(l1->A, float8_div(l2->B, l1->B))))\n+\t\t\t!= TS_FALSE)\n+\t\t{\n+\t\t\tif (r == TS_TRUE)\n+\t\t\t\treturn TS_FALSE;\n+\t\t\telse\n+\t\t\t\treturn TS_NULL;\n+\t\t}\n \n \t\tx = float8_div(float8_mi(float8_mul(l1->B, l2->C),\n \t\t\t\t\t\t\t\t float8_mul(l2->B, l1->C)),\n@@ -1367,7 +1383,7 @@ line_interpt_line(Point *result, LINE *l1, LINE *l2)\n \t\t\t\t\t float8_mi(float8_mul(l2->A, l1->B),\n \t\t\t\t\t\t\t\t float8_mul(l1->A, l2->B)));\n \t}\n-\telse if (!FPzero(l2->B))\n+\telse if ((r = FPTzero(l2->B)) == TS_FALSE)\n \t{\n \t\t/* l2 is not vertical */\n \t\t/*\n@@ -1380,13 +1396,17 @@ line_interpt_line(Point *result, LINE *l1, LINE *l2)\n \t\t * When l2->A is zero, y is determined independently from x even if it\n \t\t * is Inf.\n \t\t */\n-\t\tif (FPzero(l2->A))\n+\t\tif ((r = FPTzero(l2->A)) == TS_TRUE)\n \t\t\ty = -float8_div(l2->C, l2->B);\n-\t\telse\n+\t\telse if (r == TS_FALSE)\n \t\t\ty = float8_div(-float8_pl(float8_mul(l2->A, x), l2->C), l2->B);\n+\t\telse\n+\t\t\treturn 
TS_NULL;\n \t}\n+\telse if (r != TS_NULL)\n+\t\treturn TS_FALSE;\n \telse\n-\t\treturn false;\n+\t\treturn TS_NULL;\n \n \t/* On some platforms, the preceding expressions tend to produce -0. */\n \tif (x == 0.0)\n@@ -1397,7 +1417,7 @@ line_interpt_line(Point *result, LINE *l1, LINE *l2)\n \tif (result != NULL)\n \t\tpoint_construct(result, x, y);\n \n-\treturn true;\n+\treturn TS_TRUE;\n }\n \n \n@@ -1704,6 +1724,7 @@ path_inter(PG_FUNCTION_ARGS)\n \t\t\t\tj;\n \tLSEG\t\tseg1,\n \t\t\t\tseg2;\n+\ttsbool\t\tr;\n \n \tAssert(p1->npts > 0 && p2->npts > 0);\n \n@@ -1716,6 +1737,7 @@ path_inter(PG_FUNCTION_ARGS)\n \t\tb1.low.x = float8_min_nan(p1->p[i].x, b1.low.x);\n \t\tb1.low.y = float8_min_nan(p1->p[i].y, b1.low.y);\n \t}\n+\n \tb2.high.x = b2.low.x = p2->p[0].x;\n \tb2.high.y = b2.low.y = p2->p[0].y;\n \tfor (i = 1; i < p2->npts; i++)\n@@ -1725,8 +1747,8 @@ path_inter(PG_FUNCTION_ARGS)\n \t\tb2.low.x = float8_min_nan(p2->p[i].x, b2.low.x);\n \t\tb2.low.y = float8_min_nan(p2->p[i].y, b2.low.y);\n \t}\n-\tif (!box_ov(&b1, &b2))\n-\t\tPG_RETURN_BOOL(false);\n+\tif ((r = box_ov(&b1, &b2)) != TS_TRUE)\n+\t\tPG_RETURN_TSBOOL(r);\n \n \t/* pairwise check lseg intersections */\n \tfor (i = 0; i < p1->npts; i++)\n@@ -2004,8 +2026,12 @@ point_eq(PG_FUNCTION_ARGS)\n {\n \tPoint\t *pt1 = PG_GETARG_POINT_P(0);\n \tPoint\t *pt2 = PG_GETARG_POINT_P(1);\n+\ttsbool\t\tres = point_eq_point(pt1, pt2);\n \n-\tPG_RETURN_BOOL(point_eq_point(pt1, pt2));\n+\tif (res == TS_NULL)\n+\t\tPG_RETURN_NULL();\n+\n+\tPG_RETURN_BOOL(res);\n }\n \n Datum\n@@ -2013,23 +2039,22 @@ point_ne(PG_FUNCTION_ARGS)\n {\n \tPoint\t *pt1 = PG_GETARG_POINT_P(0);\n \tPoint\t *pt2 = PG_GETARG_POINT_P(1);\n+\ttsbool\t\tres = point_eq_point(pt1, pt2);\n \n-\tPG_RETURN_BOOL(!point_eq_point(pt1, pt2));\n+\tif (res == TS_NULL)\n+\t\tPG_RETURN_NULL();\n+\n+\tPG_RETURN_BOOL(!res);\n }\n \n \n /*\n * Check whether the two points are the same\n */\n-static inline bool\n+static inline tsbool\n point_eq_point(Point *pt1, 
Point *pt2)\n {\n-\t/* If any NaNs are involved, insist on exact equality */\n-\tif (unlikely(isnan(pt1->x) || isnan(pt1->y) ||\n-\t\t\t\t isnan(pt2->x) || isnan(pt2->y)))\n-\t\treturn (float8_eq(pt1->x, pt2->x) && float8_eq(pt1->y, pt2->y));\n-\n-\treturn (FPeq(pt1->x, pt2->x) && FPeq(pt1->y, pt2->y));\n+\treturn TS_AND2(FPTeq(pt1->x, pt2->x), FPTeq(pt1->y, pt2->y));\n }\n \n \n@@ -2070,6 +2095,7 @@ point_slope(PG_FUNCTION_ARGS)\n static inline float8\n point_sl(Point *pt1, Point *pt2)\n {\n+\t/* NaN doesn't equal NaN, so don't bother using a tri-state value */\n \tif (FPeq(pt1->x, pt2->x))\n \t{\n \t\tif (unlikely(isnan(pt1->y) || isnan(pt2->y)))\n@@ -2096,6 +2122,7 @@ point_sl(Point *pt1, Point *pt2)\n static inline float8\n point_invsl(Point *pt1, Point *pt2)\n {\n+\t/* NaN doesn't equal NaN, so don't bother using a tri-state value */\n \tif (FPeq(pt1->x, pt2->x))\n \t{\n \t\tif (unlikely(isnan(pt1->y) || isnan(pt2->y)))\n@@ -2254,7 +2281,7 @@ lseg_intersect(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(lseg_interpt_lseg(NULL, l1, l2));\n+\tPG_RETURN_TSBOOL(lseg_interpt_lseg(NULL, l1, l2));\n }\n \n \n@@ -2264,7 +2291,7 @@ lseg_parallel(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPeq(lseg_sl(l1), lseg_sl(l2)));\n+\tPG_RETURN_TSBOOL(FPTeq(lseg_sl(l1), lseg_sl(l2)));\n }\n \n /*\n@@ -2276,7 +2303,7 @@ lseg_perp(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPeq(lseg_sl(l1), lseg_invsl(l2)));\n+\tPG_RETURN_TSBOOL(FPTeq(lseg_sl(l1), lseg_invsl(l2)));\n }\n \n Datum\n@@ -2284,7 +2311,7 @@ lseg_vertical(PG_FUNCTION_ARGS)\n {\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \n-\tPG_RETURN_BOOL(FPeq(lseg->p[0].x, lseg->p[1].x));\n+\tPG_RETURN_TSBOOL(FPTeq(lseg->p[0].x, lseg->p[1].x));\n }\n \n Datum\n@@ -2292,7 +2319,7 @@ lseg_horizontal(PG_FUNCTION_ARGS)\n {\n 
\tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \n-\tPG_RETURN_BOOL(FPeq(lseg->p[0].y, lseg->p[1].y));\n+\tPG_RETURN_TSBOOL(FPTeq(lseg->p[0].y, lseg->p[1].y));\n }\n \n \n@@ -2302,8 +2329,8 @@ lseg_eq(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(point_eq_point(&l1->p[0], &l2->p[0]) &&\n-\t\t\t\t point_eq_point(&l1->p[1], &l2->p[1]));\n+\tPG_RETURN_TSBOOL(TS_AND2(point_eq_point(&l1->p[0], &l2->p[0]),\n+\t\t\t\t\t\t\t point_eq_point(&l1->p[1], &l2->p[1])));\n }\n \n Datum\n@@ -2312,8 +2339,9 @@ lseg_ne(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(!point_eq_point(&l1->p[0], &l2->p[0]) ||\n-\t\t\t\t !point_eq_point(&l1->p[1], &l2->p[1]));\n+\tPG_RETURN_TSBOOL(TS_OR2(\n+\t\t\t\t\t\t TS_NOT(point_eq_point(&l1->p[0], &l2->p[0])),\n+\t\t\t\t\t\t TS_NOT(point_eq_point(&l1->p[1], &l2->p[1]))));\n }\n \n Datum\n@@ -2322,8 +2350,8 @@ lseg_lt(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(point_dt(&l1->p[0], &l1->p[1]),\n-\t\t\t\t\t\tpoint_dt(&l2->p[0], &l2->p[1])));\n+\tPG_RETURN_TSBOOL(FPTlt(point_dt(&l1->p[0], &l1->p[1]),\n+\t\t\t\t\t\t point_dt(&l2->p[0], &l2->p[1])));\n }\n \n Datum\n@@ -2332,8 +2360,8 @@ lseg_le(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPle(point_dt(&l1->p[0], &l1->p[1]),\n-\t\t\t\t\t\tpoint_dt(&l2->p[0], &l2->p[1])));\n+\tPG_RETURN_TSBOOL(FPTle(point_dt(&l1->p[0], &l1->p[1]),\n+\t\t\t\t\t\t point_dt(&l2->p[0], &l2->p[1])));\n }\n \n Datum\n@@ -2342,8 +2370,8 @@ lseg_gt(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(point_dt(&l1->p[0], &l1->p[1]),\n-\t\t\t\t\t\tpoint_dt(&l2->p[0], &l2->p[1])));\n+\tPG_RETURN_TSBOOL(FPTgt(point_dt(&l1->p[0], &l1->p[1]),\n+\t\t\t\t\t\t point_dt(&l2->p[0], &l2->p[1])));\n 
}\n \n Datum\n@@ -2352,8 +2380,8 @@ lseg_ge(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(FPge(point_dt(&l1->p[0], &l1->p[1]),\n-\t\t\t\t\t\tpoint_dt(&l2->p[0], &l2->p[1])));\n+\tPG_RETURN_TSBOOL(FPTge(point_dt(&l1->p[0], &l1->p[1]),\n+\t\t\t\t\t\t point_dt(&l2->p[0], &l2->p[1])));\n }\n \n \n@@ -2398,27 +2426,28 @@ lseg_center(PG_FUNCTION_ARGS)\n * This function is almost perfectly symmetric, even though it doesn't look\n * like it. See lseg_interpt_line() for the other half of it.\n */\n-static bool\n+static tsbool\n lseg_interpt_lseg(Point *result, LSEG *l1, LSEG *l2)\n {\n \tPoint\t\tinterpt;\n \tLINE\t\ttmp;\n+\ttsbool\t\tr;\n \n \tline_construct(&tmp, &l2->p[0], lseg_sl(l2));\n-\tif (!lseg_interpt_line(&interpt, l1, &tmp))\n-\t\treturn false;\n+\tif ((r = lseg_interpt_line(&interpt, l1, &tmp)) != TS_TRUE)\n+\t\treturn r;\n \n \t/*\n \t * If the line intersection point isn't within l2, there is no valid\n \t * segment intersection point at all.\n \t */\n-\tif (!lseg_contain_point(l2, &interpt))\n-\t\treturn false;\n+\tif ((r = lseg_contain_point(l2, &interpt)) != TS_TRUE)\n+\t\treturn r;\n \n \tif (result != NULL)\n \t\t*result = interpt;\n \n-\treturn true;\n+\treturn TS_TRUE;\n }\n \n Datum\n@@ -2427,10 +2456,11 @@ lseg_interpt(PG_FUNCTION_ARGS)\n \tLSEG\t *l1 = PG_GETARG_LSEG_P(0);\n \tLSEG\t *l2 = PG_GETARG_LSEG_P(1);\n \tPoint\t *result;\n+\ttsbool\t\tr;\n \n \tresult = (Point *) palloc(sizeof(Point));\n \n-\tif (!lseg_interpt_lseg(result, l1, l2))\n+\tif ((r = lseg_interpt_lseg(result, l1, l2)) != TS_TRUE)\n \t\tPG_RETURN_NULL();\n \tPG_RETURN_POINT_P(result);\n }\n@@ -2740,8 +2770,13 @@ dist_ppoly_internal(Point *pt, POLYGON *poly)\n \tfloat8\t\td;\n \tint\t\t\ti;\n \tLSEG\t\tseg;\n+\tint\t\t\tinside;\n \n-\tif (point_inside(pt, poly->npts, poly->p) != 0)\n+\tinside = point_inside(pt, poly->npts, poly->p);\n+\tif (inside == -1)\n+\t\treturn get_float8_nan();\n+\n+\tif (inside != 0)\n 
\t\treturn 0.0;\n \n \t/* initialize distance with segment between first and last points */\n@@ -2780,11 +2815,12 @@ dist_ppoly_internal(Point *pt, POLYGON *poly)\n * Return whether the line segment intersect with the line. If *result is not\n * NULL, it is set to the intersection point.\n */\n-static bool\n+static tsbool\n lseg_interpt_line(Point *result, LSEG *lseg, LINE *line)\n {\n \tPoint\t\tinterpt;\n \tLINE\t\ttmp;\n+\ttsbool\t\tr;\n \n \t/*\n \t * First, we promote the line segment to a line, because we know how to\n@@ -2792,32 +2828,40 @@ lseg_interpt_line(Point *result, LSEG *lseg, LINE *line)\n \t * intersection point, we are done.\n \t */\n \tline_construct(&tmp, &lseg->p[0], lseg_sl(lseg));\n-\tif (!line_interpt_line(&interpt, &tmp, line) ||\n-\t\tunlikely(isnan(interpt.x) || isnan(interpt.y)))\n-\t\treturn false;\n+\tif ((r = line_interpt_line(&interpt, &tmp, line)) != TS_TRUE)\n+\t\treturn r;\n+\n+\tif (unlikely(isnan(interpt.x) || isnan(interpt.y)))\n+\t\treturn TS_FALSE;\n \n \t/*\n \t * Then, we check whether the intersection point is actually on the line\n \t * segment.\n \t */\n-\tif (!lseg_contain_point(lseg, &interpt))\n-\t\treturn false;\n+\tif ((r = lseg_contain_point(lseg, &interpt)) != TS_TRUE)\n+\t\treturn r;\n+\n \tif (result != NULL)\n \t{\n+\t\ttsbool\tr;\n+\n \t\t/*\n \t\t * If there is an intersection, then check explicitly for matching\n \t\t * endpoints since there may be rounding effects with annoying LSB\n \t\t * residue.\n \t\t */\n-\t\tif (point_eq_point(&lseg->p[0], &interpt))\n+\t\tif ((r = point_eq_point(&lseg->p[0], &interpt)) == TS_TRUE)\n \t\t\t*result = lseg->p[0];\n-\t\telse if (point_eq_point(&lseg->p[1], &interpt))\n+\t\telse if (r == TS_FALSE &&\n+\t\t\t\t (r = point_eq_point(&lseg->p[1], &interpt)) == TS_TRUE)\n \t\t\t*result = lseg->p[1];\n-\t\telse\n+\t\telse if (r == TS_FALSE)\n \t\t\t*result = interpt;\n+\t\telse\n+\t\t\treturn r;\n \t}\n \n-\treturn true;\n+\treturn TS_TRUE;\n }\n \n 
/*---------------------------------------------------------------------\n@@ -3178,11 +3222,14 @@ close_sl(PG_FUNCTION_ARGS)\n \tPoint\t *result;\n \tfloat8\t\td1,\n \t\t\t\td2;\n+\ttsbool\t\tr;\n \n \tresult = (Point *) palloc(sizeof(Point));\n \n-\tif (lseg_interpt_line(result, lseg, line))\n+\tif ((r = lseg_interpt_line(result, lseg, line)) == TS_TRUE)\n \t\tPG_RETURN_POINT_P(result);\n+\telse if (r == TS_NULL)\n+\t\tPG_RETURN_NULL();\n \n \td1 = line_closept_point(NULL, line, &lseg->p[0]);\n \td2 = line_closept_point(NULL, line, &lseg->p[1]);\n@@ -3219,9 +3266,12 @@ lseg_closept_line(Point *result, LSEG *lseg, LINE *line)\n {\n \tfloat8\t\tdist1,\n \t\t\t\tdist2;\n+\ttsbool\t\tr;\n \n-\tif (lseg_interpt_line(result, lseg, line))\n+\tif ((r = lseg_interpt_line(result, lseg, line)) == TS_TRUE)\n \t\treturn 0.0;\n+\telse if (r == TS_NULL)\n+\t\treturn get_float8_nan();\n \n \tdist1 = line_closept_point(NULL, line, &lseg->p[0]);\n \tdist2 = line_closept_point(NULL, line, &lseg->p[1]);\n@@ -3365,7 +3415,7 @@ close_lb(PG_FUNCTION_ARGS)\n /*\n *\t\tDoes the point satisfy the equation?\n */\n-static bool\n+static tsbool\n line_contain_point(LINE *line, Point *point)\n {\n \t/*\n@@ -3379,7 +3429,7 @@ line_contain_point(LINE *line, Point *point)\n \t\tAssert(line->B != 0.0);\n \n \t\t/* inf == inf here */\n-\t\treturn FPeq(point->y, -line->C / line->B);\n+\t\treturn FPTeq(point->y, -line->C / line->B);\n \t}\n \telse if (line->B == 0.0)\n \t{\n@@ -3387,13 +3437,13 @@ line_contain_point(LINE *line, Point *point)\n \t\tAssert(line->A != 0.0);\n \n \t\t/* inf == inf here */\n-\t\treturn FPeq(point->x, -line->C / line->A);\n+\t\treturn FPTeq(point->x, -line->C / line->A);\n \t}\n \n-\treturn FPzero(float8_pl(\n-\t\t\t\t\t float8_pl(float8_mul(line->A, point->x),\n-\t\t\t\t\t\t\t\tfloat8_mul(line->B, point->y)),\n-\t\t\t\t\t line->C));\n+\treturn FPTzero(float8_pl(\n+\t\t\t\t\t float8_pl(float8_mul(line->A, point->x),\n+\t\t\t\t\t\t\t\t float8_mul(line->B, 
point->y)),\n+\t\t\t\t\t line->C));\n }\n \n Datum\n@@ -3402,7 +3452,7 @@ on_pl(PG_FUNCTION_ARGS)\n \tPoint\t *pt = PG_GETARG_POINT_P(0);\n \tLINE\t *line = PG_GETARG_LINE_P(1);\n \n-\tPG_RETURN_BOOL(line_contain_point(line, pt));\n+\tPG_RETURN_TSBOOL(line_contain_point(line, pt));\n }\n \n \n@@ -3410,12 +3460,12 @@ on_pl(PG_FUNCTION_ARGS)\n *\t\tDetermine colinearity by detecting a triangle inequality.\n * This algorithm seems to behave nicely even with lsb residues - tgl 1997-07-09\n */\n-static bool\n+static tsbool\n lseg_contain_point(LSEG *lseg, Point *pt)\n {\n-\treturn FPeq(point_dt(pt, &lseg->p[0]) +\n-\t\t\t\tpoint_dt(pt, &lseg->p[1]),\n-\t\t\t\tpoint_dt(&lseg->p[0], &lseg->p[1]));\n+\treturn FPTeq(point_dt(pt, &lseg->p[0]) +\n+\t\t\t\t point_dt(pt, &lseg->p[1]),\n+\t\t\t\t point_dt(&lseg->p[0], &lseg->p[1]));\n }\n \n Datum\n@@ -3424,18 +3474,25 @@ on_ps(PG_FUNCTION_ARGS)\n \tPoint\t *pt = PG_GETARG_POINT_P(0);\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(1);\n \n-\tPG_RETURN_BOOL(lseg_contain_point(lseg, pt));\n+\tPG_RETURN_TSBOOL(lseg_contain_point(lseg, pt));\n }\n \n \n /*\n * Check whether the point is in the box or on its border\n */\n-static bool\n+static tsbool\n box_contain_point(BOX *box, Point *point)\n {\n-\treturn box->high.x >= point->x && box->low.x <= point->x &&\n-\t\tbox->high.y >= point->y && box->low.y <= point->y;\n+\tif (box->high.x >= point->x && box->low.x <= point->x &&\n+\t\tbox->high.y >= point->y && box->low.y <= point->y)\n+\t\treturn TS_TRUE;\n+\telse if (!isnan(box->high.x) && !isnan(box->high.y) &&\n+\t\t\t !isnan(box->low.x) && !isnan(box->low.y) &&\n+\t\t\t !isnan(point->x) && !isnan(point->y))\n+\t\treturn TS_FALSE;\n+\n+\treturn TS_NULL;\n }\n \n Datum\n@@ -3444,7 +3501,7 @@ on_pb(PG_FUNCTION_ARGS)\n \tPoint\t *pt = PG_GETARG_POINT_P(0);\n \tBOX\t\t *box = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_contain_point(box, pt));\n+\tPG_RETURN_TSBOOL(box_contain_point(box, pt));\n }\n \n Datum\n@@ -3453,7 +3510,7 @@ 
box_contain_pt(PG_FUNCTION_ARGS)\n \tBOX\t\t *box = PG_GETARG_BOX_P(0);\n \tPoint\t *pt = PG_GETARG_POINT_P(1);\n \n-\tPG_RETURN_BOOL(box_contain_point(box, pt));\n+\tPG_RETURN_TSBOOL(box_contain_point(box, pt));\n }\n \n /* on_ppath -\n@@ -3476,24 +3533,38 @@ on_ppath(PG_FUNCTION_ARGS)\n \t\t\t\tn;\n \tfloat8\t\ta,\n \t\t\t\tb;\n+\tint\t\t\tinside;\n \n \t/*-- OPEN --*/\n \tif (!path->closed)\n \t{\n+\t\ttsbool r;\n+\n \t\tn = path->npts - 1;\n \t\ta = point_dt(pt, &path->p[0]);\n \t\tfor (i = 0; i < n; i++)\n \t\t{\n \t\t\tb = point_dt(pt, &path->p[i + 1]);\n-\t\t\tif (FPeq(float8_pl(a, b), point_dt(&path->p[i], &path->p[i + 1])))\n-\t\t\t\tPG_RETURN_BOOL(true);\n+\t\t\tr = FPTeq(float8_pl(a, b), point_dt(&path->p[i], &path->p[i + 1]));\n+\t\t\tif (r != TS_FALSE)\n+\t\t\t\tPG_RETURN_TSBOOL(r);\n \t\t\ta = b;\n \t\t}\n+\t\t/* See the PG_RETURN_BOOL at the end of this function */\n \t\tPG_RETURN_BOOL(false);\n \t}\n \n \t/*-- CLOSED --*/\n-\tPG_RETURN_BOOL(point_inside(pt, path->npts, path->p) != 0);\n+\tinside = point_inside(pt, path->npts, path->p);\n+\n+\tif (inside < 0)\n+\t\tPG_RETURN_NULL();\n+\n+\t/*\n+\t * PG_RETURN_BOOL is compatible with, and faster than, PG_RETURN_TSBOOL\n+\t * when the value is known to be a plain bool.\n+\t */\n+\tPG_RETURN_BOOL(inside != 0);\n }\n \n \n@@ -3508,8 +3579,8 @@ on_sl(PG_FUNCTION_ARGS)\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \tLINE\t *line = PG_GETARG_LINE_P(1);\n \n-\tPG_RETURN_BOOL(line_contain_point(line, &lseg->p[0]) &&\n-\t\t\t\t line_contain_point(line, &lseg->p[1]));\n+\tPG_RETURN_TSBOOL(TS_AND2(line_contain_point(line, &lseg->p[0]),\n+\t\t\t\t\t\t\t line_contain_point(line, &lseg->p[1])));\n }\n \n \n@@ -3518,11 +3589,11 @@ on_sl(PG_FUNCTION_ARGS)\n *\n * It is, if both of its points are in the box or on its border.\n */\n-static bool\n+static tsbool\n box_contain_lseg(BOX *box, LSEG *lseg)\n {\n-\treturn box_contain_point(box, &lseg->p[0]) &&\n-\t\tbox_contain_point(box, &lseg->p[1]);\n+\treturn 
TS_AND2(box_contain_point(box, &lseg->p[0]),\n+\t\t\t\t box_contain_point(box, &lseg->p[1]));\n }\n \n Datum\n@@ -3531,7 +3602,7 @@ on_sb(PG_FUNCTION_ARGS)\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \tBOX\t\t *box = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_contain_lseg(box, lseg));\n+\tPG_RETURN_TSBOOL(box_contain_lseg(box, lseg));\n }\n \n /*---------------------------------------------------------------------\n@@ -3545,7 +3616,7 @@ inter_sl(PG_FUNCTION_ARGS)\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \tLINE\t *line = PG_GETARG_LINE_P(1);\n \n-\tPG_RETURN_BOOL(lseg_interpt_line(NULL, lseg, line));\n+\tPG_RETURN_TSBOOL(lseg_interpt_line(NULL, lseg, line));\n }\n \n \n@@ -3564,12 +3635,13 @@ inter_sl(PG_FUNCTION_ARGS)\n * Optimize for non-intersection by checking for box intersection first.\n * - thomas 1998-01-30\n */\n-static bool\n+static tsbool\n box_interpt_lseg(Point *result, BOX *box, LSEG *lseg)\n {\n \tBOX\t\t\tlbox;\n \tLSEG\t\tbseg;\n \tPoint\t\tpoint;\n+\ttsbool\t\tr;\n \n \tlbox.low.x = float8_min(lseg->p[0].x, lseg->p[1].x);\n \tlbox.low.y = float8_min(lseg->p[0].y, lseg->p[1].y);\n@@ -3577,8 +3649,8 @@ box_interpt_lseg(Point *result, BOX *box, LSEG *lseg)\n \tlbox.high.y = float8_max(lseg->p[0].y, lseg->p[1].y);\n \n \t/* nothing close to overlap? then not going to intersect */\n-\tif (!box_ov(&lbox, box))\n-\t\treturn false;\n+\tif ((r = box_ov(&lbox, box)) != TS_TRUE)\n+\t\treturn r;\n \n \tif (result != NULL)\n \t{\n@@ -3587,30 +3659,30 @@ box_interpt_lseg(Point *result, BOX *box, LSEG *lseg)\n \t}\n \n \t/* an endpoint of segment is inside box? 
then clearly intersects */\n-\tif (box_contain_point(box, &lseg->p[0]) ||\n-\t\tbox_contain_point(box, &lseg->p[1]))\n-\t\treturn true;\n+\tif ((r = TS_OR2(box_contain_point(box, &lseg->p[0]),\n+\t\t\t\t\tbox_contain_point(box, &lseg->p[1]))) != TS_FALSE)\n+\t\treturn r;\n \n \t/* pairwise check lseg intersections */\n \tpoint.x = box->low.x;\n \tpoint.y = box->high.y;\n \tstatlseg_construct(&bseg, &box->low, &point);\n-\tif (lseg_interpt_lseg(NULL, &bseg, lseg))\n-\t\treturn true;\n+\tif ((r = lseg_interpt_lseg(NULL, &bseg, lseg)) != TS_FALSE)\n+\t\treturn r;\n \n \tstatlseg_construct(&bseg, &box->high, &point);\n-\tif (lseg_interpt_lseg(NULL, &bseg, lseg))\n-\t\treturn true;\n+\tif ((r = lseg_interpt_lseg(NULL, &bseg, lseg)) != TS_FALSE)\n+\t\treturn r;\n \n \tpoint.x = box->high.x;\n \tpoint.y = box->low.y;\n \tstatlseg_construct(&bseg, &box->low, &point);\n-\tif (lseg_interpt_lseg(NULL, &bseg, lseg))\n-\t\treturn true;\n+\tif ((r = lseg_interpt_lseg(NULL, &bseg, lseg)) != TS_FALSE)\n+\t\treturn r;\n \n \tstatlseg_construct(&bseg, &box->high, &point);\n-\tif (lseg_interpt_lseg(NULL, &bseg, lseg))\n-\t\treturn true;\n+\tif ((r = lseg_interpt_lseg(NULL, &bseg, lseg)) != TS_FALSE)\n+\t\treturn r;\n \n \t/* if we dropped through, no two segs intersected */\n-\treturn false;\n+\treturn TS_FALSE;\n@@ -3622,7 +3694,7 @@ inter_sb(PG_FUNCTION_ARGS)\n \tLSEG\t *lseg = PG_GETARG_LSEG_P(0);\n \tBOX\t\t *box = PG_GETARG_BOX_P(1);\n \n-\tPG_RETURN_BOOL(box_interpt_lseg(NULL, box, lseg));\n+\tPG_RETURN_TSBOOL(box_interpt_lseg(NULL, box, lseg));\n }\n \n \n@@ -3637,6 +3709,7 @@ inter_lb(PG_FUNCTION_ARGS)\n \tLSEG\t\tbseg;\n \tPoint\t\tp1,\n \t\t\t\tp2;\n+\ttsbool\t\tr;\n \n \t/* pairwise check lseg intersections */\n \tp1.x = box->low.x;\n@@ -3644,25 +3717,28 @@ inter_lb(PG_FUNCTION_ARGS)\n \tp2.x = box->low.x;\n \tp2.y = box->high.y;\n \tstatlseg_construct(&bseg, &p1, &p2);\n-\tif (lseg_interpt_line(NULL, &bseg, line))\n-\t\tPG_RETURN_BOOL(true);\n+\tif ((r = lseg_interpt_line(NULL, &bseg, line)) 
!= TS_FALSE)\n+\t\tPG_RETURN_TSBOOL(r);\n \tp1.x = box->high.x;\n \tp1.y = box->high.y;\n \tstatlseg_construct(&bseg, &p1, &p2);\n-\tif (lseg_interpt_line(NULL, &bseg, line))\n-\t\tPG_RETURN_BOOL(true);\n+\tif ((r = lseg_interpt_line(NULL, &bseg, line)) != TS_FALSE)\n+\t\tPG_RETURN_TSBOOL(r);\n \tp2.x = box->high.x;\n \tp2.y = box->low.y;\n \tstatlseg_construct(&bseg, &p1, &p2);\n-\tif (lseg_interpt_line(NULL, &bseg, line))\n-\t\tPG_RETURN_BOOL(true);\n+\tif ((r = lseg_interpt_line(NULL, &bseg, line)) != TS_FALSE)\n+\t\tPG_RETURN_TSBOOL(r);\n \tp1.x = box->low.x;\n \tp1.y = box->low.y;\n \tstatlseg_construct(&bseg, &p1, &p2);\n-\tif (lseg_interpt_line(NULL, &bseg, line))\n-\t\tPG_RETURN_BOOL(true);\n+\tif ((r = lseg_interpt_line(NULL, &bseg, line)) != TS_FALSE)\n+\t\tPG_RETURN_TSBOOL(r);\n \n-\t/* if we dropped through, no intersection */\n+\t/*\n+\t * if we dropped through, no intersection; plain \"false\" doesn't need\n+\t * PG_RETURN_TSBOOL()\n+\t */\n \tPG_RETURN_BOOL(false);\n }\n \n@@ -3844,6 +3920,9 @@ poly_left(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.high.x < polyb->boundbox.low.x;\n \n \t/*\n@@ -3867,6 +3946,9 @@ poly_overleft(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.high.x <= polyb->boundbox.high.x;\n \n \t/*\n@@ -3890,6 +3972,9 @@ poly_right(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.low.x > polyb->boundbox.high.x;\n \n \t/*\n@@ -3913,6 +3998,9 @@ poly_overright(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif 
(isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.low.x >= polyb->boundbox.low.x;\n \n \t/*\n@@ -3936,6 +4024,9 @@ poly_below(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.high.y < polyb->boundbox.low.y;\n \n \t/*\n@@ -3959,6 +4050,9 @@ poly_overbelow(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.high.y <= polyb->boundbox.high.y;\n \n \t/*\n@@ -3982,6 +4076,9 @@ poly_above(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.low.y > polyb->boundbox.high.y;\n \n \t/*\n@@ -4005,6 +4102,9 @@ poly_overabove(PG_FUNCTION_ARGS)\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n \tbool\t\tresult;\n \n+\tif (isnan(polya->boundbox.high.x) || isnan(polyb->boundbox.high.x))\n+\t\tPG_RETURN_NULL();\n+\n \tresult = polya->boundbox.low.y >= polyb->boundbox.low.y;\n \n \t/*\n@@ -4023,16 +4123,20 @@ poly_overabove(PG_FUNCTION_ARGS)\n * Check all points for matches in both forward and reverse\n *\tdirection since polygons are non-directional and are\n *\tclosed shapes.\n+ *\n+ * XXX: returns TS_FALSE when the two polygons consist of\n+ * different numbers of points even if any of the points were\n+ * NaN. 
It might be the wrong definition.\n *-------------------------------------------------------*/\n Datum\n poly_same(PG_FUNCTION_ARGS)\n {\n \tPOLYGON *polya = PG_GETARG_POLYGON_P(0);\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n-\tbool\t\tresult;\n+\ttsbool\t\tresult;\n \n \tif (polya->npts != polyb->npts)\n-\t\tresult = false;\n+\t\tresult = TS_FALSE;\n \telse\n \t\tresult = plist_same(polya->npts, polya->p, polyb->p);\n \n@@ -4053,13 +4157,16 @@ poly_overlap(PG_FUNCTION_ARGS)\n {\n \tPOLYGON *polya = PG_GETARG_POLYGON_P(0);\n \tPOLYGON *polyb = PG_GETARG_POLYGON_P(1);\n-\tbool\t\tresult;\n+\ttsbool\t\tresult;\n \n \tAssert(polya->npts > 0 && polyb->npts > 0);\n \n \t/* Quick check by bounding box */\n \tresult = box_ov(&polya->boundbox, &polyb->boundbox);\n \n+\tif (result == TS_NULL)\n+\t\tPG_RETURN_NULL();\n+\n \t/*\n \t * Brute-force algorithm - try to find intersected edges, if so then\n \t * polygons are overlapped else check is one polygon inside other or not\n@@ -4074,9 +4181,9 @@ poly_overlap(PG_FUNCTION_ARGS)\n \n \t\t/* Init first of polya's edge with last point */\n \t\tsa.p[0] = polya->p[polya->npts - 1];\n-\t\tresult = false;\n+\t\tresult = TS_FALSE;\n \n-\t\tfor (ia = 0; ia < polya->npts && !result; ia++)\n+\t\tfor (ia = 0; ia < polya->npts && result != TS_TRUE; ia++)\n \t\t{\n \t\t\t/* Second point of polya's edge is a current one */\n \t\t\tsa.p[1] = polya->p[ia];\n@@ -4097,8 +4204,12 @@ poly_overlap(PG_FUNCTION_ARGS)\n \t\t\tsa.p[0] = sa.p[1];\n \t\t}\n \n-\t\tif (!result)\n+\t\tif (result == TS_NULL)\n+\t\t\tPG_RETURN_NULL();\n+\n+\t\tif (result == TS_FALSE)\n \t\t{\n+\t\t\t/* the case of NaN is handled earlier */\n \t\t\tresult = (point_inside(polya->p, polyb->npts, polyb->p) ||\n \t\t\t\t\t point_inside(polyb->p, polya->npts, polya->p));\n \t\t}\n@@ -4133,6 +4244,12 @@ touched_lseg_inside_poly(Point *a, Point *b, LSEG *s, POLYGON *poly, int start)\n \tt.p[0] = *a;\n \tt.p[1] = *b;\n \n+\t/*\n+\t * assume no parameters have NaN, so the tsbool 
functions shouldn't return\n+\t * TS_NULL.\n+\t */\n+\tAssert(!isnan(a->x) && !isnan(a->y) && !isnan(b->x) && !isnan(b->y) &&\n+\t\t !isnan(poly->boundbox.high.x));\n \tif (point_eq_point(a, s->p))\n \t{\n \t\tif (lseg_contain_point(&t, s->p + 1))\n@@ -4160,7 +4277,7 @@ touched_lseg_inside_poly(Point *a, Point *b, LSEG *s, POLYGON *poly, int start)\n * start is used for optimization - function checks\n * polygon's edges starting from start\n */\n-static bool\n+static tsbool\n lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start)\n {\n \tLSEG\t\ts,\n@@ -4168,14 +4285,21 @@ lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start)\n \tint\t\t\ti;\n \tbool\t\tres = true,\n \t\t\t\tintersection = false;\n+\ttsbool\t\tres1;\n+\ttsbool\t\tres2;\n \n \tt.p[0] = *a;\n \tt.p[1] = *b;\n \ts.p[0] = poly->p[(start == 0) ? (poly->npts - 1) : (start - 1)];\n \n \t/* Fast path. Check against boundbox. Also checks NaNs. */\n-\tif (!box_contain_point(&poly->boundbox, a) ||\n-\t\t!box_contain_point(&poly->boundbox, b))\n+\tres1 = box_contain_point(&poly->boundbox, a);\n+\tres2 = box_contain_point(&poly->boundbox, b);\n+\n+\tif (res1 == TS_NULL || res2 == TS_NULL)\n+\t\treturn TS_NULL;\n+\n+\tif (!res1 || !res2)\n \t\treturn false;\n \n \tfor (i = start; i < poly->npts && res; i++)\n@@ -4206,6 +4330,8 @@ lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start)\n \t\t\t */\n \n \t\t\tintersection = true;\n+\n+\t\t\t/* the calls below won't return TS_NULL */\n \t\t\tres = lseg_inside_poly(t.p, &interpt, poly, i + 1);\n \t\t\tif (res)\n \t\t\t\tres = lseg_inside_poly(t.p + 1, &interpt, poly, i + 1);\n@@ -4234,11 +4360,12 @@ lseg_inside_poly(Point *a, Point *b, POLYGON *poly, int start)\n /*\n * Check whether the first polygon contains the second\n */\n-static bool\n+static tsbool\n poly_contain_poly(POLYGON *contains_poly, POLYGON *contained_poly)\n {\n \tint\t\t\ti;\n \tLSEG\t\ts;\n+\ttsbool\t\tr;\n \n \tAssert(contains_poly->npts > 0 && contained_poly->npts > 0);\n 
\n@@ -4246,20 +4373,22 @@ poly_contain_poly(POLYGON *contains_poly, POLYGON *contained_poly)\n \t * Quick check to see if contained's bounding box is contained in\n \t * contains' bb.\n \t */\n-\tif (!box_contain_box(&contains_poly->boundbox, &contained_poly->boundbox))\n-\t\treturn false;\n+\tr = box_contain_box(&contains_poly->boundbox, &contained_poly->boundbox);\n+\tif (r != TS_TRUE)\n+\t\treturn r;\n \n \ts.p[0] = contained_poly->p[contained_poly->npts - 1];\n \n \tfor (i = 0; i < contained_poly->npts; i++)\n \t{\n \t\ts.p[1] = contained_poly->p[i];\n-\t\tif (!lseg_inside_poly(s.p, s.p + 1, contains_poly, 0))\n-\t\t\treturn false;\n+\t\tr = lseg_inside_poly(s.p, s.p + 1, contains_poly, 0);\n+\t\tif (r != TS_TRUE)\n+\t\t\treturn r;\n \t\ts.p[0] = s.p[1];\n \t}\n \n-\treturn true;\n+\treturn TS_TRUE;\n }\n \n Datum\n@@ -4277,7 +4406,7 @@ poly_contain(PG_FUNCTION_ARGS)\n \tPG_FREE_IF_COPY(polya, 0);\n \tPG_FREE_IF_COPY(polyb, 1);\n \n-\tPG_RETURN_BOOL(result);\n+\tPG_RETURN_TSBOOL(result);\n }\n \n \n@@ -4300,7 +4429,7 @@ poly_contained(PG_FUNCTION_ARGS)\n \tPG_FREE_IF_COPY(polya, 0);\n \tPG_FREE_IF_COPY(polyb, 1);\n \n-\tPG_RETURN_BOOL(result);\n+\tPG_RETURN_TSBOOL(result);\n }\n \n \n@@ -4310,7 +4439,7 @@ poly_contain_pt(PG_FUNCTION_ARGS)\n \tPOLYGON *poly = PG_GETARG_POLYGON_P(0);\n \tPoint\t *p = PG_GETARG_POINT_P(1);\n \n-\tPG_RETURN_BOOL(point_inside(p, poly->npts, poly->p) != 0);\n+\tPG_RETURN_TSBOOL(point_inside(p, poly->npts, poly->p) != 0);\n }\n \n Datum\n@@ -4319,7 +4448,7 @@ pt_contained_poly(PG_FUNCTION_ARGS)\n \tPoint\t *p = PG_GETARG_POINT_P(0);\n \tPOLYGON *poly = PG_GETARG_POLYGON_P(1);\n \n-\tPG_RETURN_BOOL(point_inside(p, poly->npts, poly->p) != 0);\n+\tPG_RETURN_TSBOOL(point_inside(p, poly->npts, poly->p) != 0);\n }\n \n \n@@ -5015,9 +5144,9 @@ circle_same(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(((isnan(circle1->radius) && isnan(circle1->radius)) 
||\n-\t\t\t\t\tFPeq(circle1->radius, circle2->radius)) &&\n-\t\t\t\t point_eq_point(&circle1->center, &circle2->center));\n+\tPG_RETURN_TSBOOL(\n+\t\tTS_AND2(FPTeq(circle1->radius, circle2->radius),\n+\t\t\t\tpoint_eq_point(&circle1->center, &circle2->center)));\n }\n \n /*\t\tcircle_overlap\t-\t\tdoes circle1 overlap circle2?\n@@ -5028,8 +5157,8 @@ circle_overlap(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPle(point_dt(&circle1->center, &circle2->center),\n-\t\t\t\t\t\tfloat8_pl(circle1->radius, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTle(point_dt(&circle1->center, &circle2->center),\n+\t\t\t\t\t\t float8_pl(circle1->radius, circle2->radius)));\n }\n \n /*\t\tcircle_overleft -\t\tis the right edge of circle1 at or left of\n@@ -5041,8 +5170,8 @@ circle_overleft(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPle(float8_pl(circle1->center.x, circle1->radius),\n-\t\t\t\t\t\tfloat8_pl(circle2->center.x, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTle(float8_pl(circle1->center.x, circle1->radius),\n+\t\t\t\t\t\t float8_pl(circle2->center.x, circle2->radius)));\n }\n \n /*\t\tcircle_left\t\t-\t\tis circle1 strictly left of circle2?\n@@ -5053,8 +5182,8 @@ circle_left(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(float8_pl(circle1->center.x, circle1->radius),\n-\t\t\t\t\t\tfloat8_mi(circle2->center.x, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTlt(float8_pl(circle1->center.x, circle1->radius),\n+\t\t\t\t\t\t float8_mi(circle2->center.x, circle2->radius)));\n }\n \n /*\t\tcircle_right\t-\t\tis circle1 strictly right of circle2?\n@@ -5065,8 +5194,8 @@ circle_right(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n 
\n-\tPG_RETURN_BOOL(FPgt(float8_mi(circle1->center.x, circle1->radius),\n-\t\t\t\t\t\tfloat8_pl(circle2->center.x, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTgt(float8_mi(circle1->center.x, circle1->radius),\n+\t\t\t\t\t\t float8_pl(circle2->center.x, circle2->radius)));\n }\n \n /*\t\tcircle_overright\t-\tis the left edge of circle1 at or right of\n@@ -5078,8 +5207,8 @@ circle_overright(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPge(float8_mi(circle1->center.x, circle1->radius),\n-\t\t\t\t\t\tfloat8_mi(circle2->center.x, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTge(float8_mi(circle1->center.x, circle1->radius),\n+\t\t\t\t\t\t float8_mi(circle2->center.x, circle2->radius)));\n }\n \n /*\t\tcircle_contained\t\t-\t\tis circle1 contained by circle2?\n@@ -5102,8 +5231,8 @@ circle_contain(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPle(point_dt(&circle1->center, &circle2->center),\n-\t\t\t\t\t\tfloat8_mi(circle1->radius, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTle(point_dt(&circle1->center, &circle2->center),\n+\t\t\t\t\t\t float8_mi(circle1->radius, circle2->radius)));\n }\n \n \n@@ -5115,8 +5244,8 @@ circle_below(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(float8_pl(circle1->center.y, circle1->radius),\n-\t\t\t\t\t\tfloat8_mi(circle2->center.y, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTlt(float8_pl(circle1->center.y, circle1->radius),\n+\t\t\t\t\t\t float8_mi(circle2->center.y, circle2->radius)));\n }\n \n /*\t\tcircle_above\t-\t\tis circle1 strictly above circle2?\n@@ -5127,8 +5256,8 @@ circle_above(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(float8_mi(circle1->center.y, 
circle1->radius),\n-\t\t\t\t\t\tfloat8_pl(circle2->center.y, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTgt(float8_mi(circle1->center.y, circle1->radius),\n+\t\t\t\t\t\t float8_pl(circle2->center.y, circle2->radius)));\n }\n \n /*\t\tcircle_overbelow -\t\tis the upper edge of circle1 at or below\n@@ -5140,8 +5269,8 @@ circle_overbelow(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPle(float8_pl(circle1->center.y, circle1->radius),\n-\t\t\t\t\t\tfloat8_pl(circle2->center.y, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTle(float8_pl(circle1->center.y, circle1->radius),\n+\t\t\t\t\t\t float8_pl(circle2->center.y, circle2->radius)));\n }\n \n /*\t\tcircle_overabove\t-\tis the lower edge of circle1 at or above\n@@ -5153,13 +5282,15 @@ circle_overabove(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPge(float8_mi(circle1->center.y, circle1->radius),\n-\t\t\t\t\t\tfloat8_mi(circle2->center.y, circle2->radius)));\n+\tPG_RETURN_TSBOOL(FPTge(float8_mi(circle1->center.y, circle1->radius),\n+\t\t\t\t\t\t float8_mi(circle2->center.y, circle2->radius)));\n }\n \n \n /*\t\tcircle_relop\t-\t\tis area(circle1) relop area(circle2), within\n *\t\t\t\t\t\t\t\tour accuracy constraint?\n+ *\n+ * XXX: area comparison doesn't consider the NaN-ness of the center location.\n */\n Datum\n circle_eq(PG_FUNCTION_ARGS)\n@@ -5167,7 +5298,7 @@ circle_eq(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPeq(circle_ar(circle1), circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTeq(circle_ar(circle1), circle_ar(circle2)));\n }\n \n Datum\n@@ -5176,7 +5307,7 @@ circle_ne(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPne(circle_ar(circle1), 
circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTne(circle_ar(circle1), circle_ar(circle2)));\n }\n \n Datum\n@@ -5185,7 +5316,7 @@ circle_lt(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPlt(circle_ar(circle1), circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTlt(circle_ar(circle1), circle_ar(circle2)));\n }\n \n Datum\n@@ -5194,7 +5325,7 @@ circle_gt(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPgt(circle_ar(circle1), circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTgt(circle_ar(circle1), circle_ar(circle2)));\n }\n \n Datum\n@@ -5203,7 +5334,7 @@ circle_le(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPle(circle_ar(circle1), circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTle(circle_ar(circle1), circle_ar(circle2)));\n }\n \n Datum\n@@ -5212,7 +5343,7 @@ circle_ge(PG_FUNCTION_ARGS)\n \tCIRCLE\t *circle1 = PG_GETARG_CIRCLE_P(0);\n \tCIRCLE\t *circle2 = PG_GETARG_CIRCLE_P(1);\n \n-\tPG_RETURN_BOOL(FPge(circle_ar(circle1), circle_ar(circle2)));\n+\tPG_RETURN_TSBOOL(FPTge(circle_ar(circle1), circle_ar(circle2)));\n }\n \n \n@@ -5348,6 +5479,11 @@ circle_contain_pt(PG_FUNCTION_ARGS)\n \tfloat8\t\td;\n \n \td = point_dt(&circle->center, point);\n+\n+\tif (isnan(d) || isnan(circle->radius))\n+\t\tPG_RETURN_NULL();\n+\n+\t/* XXX: why don't we use FP(T)le? */\n \tPG_RETURN_BOOL(d <= circle->radius);\n }\n \n@@ -5360,6 +5496,10 @@ pt_contained_circle(PG_FUNCTION_ARGS)\n \tfloat8\t\td;\n \n \td = point_dt(&circle->center, point);\n+\tif (isnan(d) || isnan(circle->radius))\n+\t\tPG_RETURN_NULL();\n+\n+\t/* XXX: why don't we use FP(T)le? 
*/\n \tPG_RETURN_BOOL(d <= circle->radius);\n }\n \n@@ -5586,8 +5726,9 @@ poly_circle(PG_FUNCTION_ARGS)\n ***********************************************************************/\n \n /*\n- *\tTest to see if the point is inside the polygon, returns 1/0, or 2 if\n- *\tthe point is on the polygon.\n+ *\tTest to see if the point is inside the polygon, returns 1/0, or 2 if the\n+ *\tpoint is on the polygon. -1 means undetermined, in case any operand is an\n+ *\tinvalid object. (-1, 0 and 1 are compatible with tsbool type)\n *\tCode adapted but not copied from integer-based routines in WN: A\n *\tServer for the HTTP\n *\tversion 1.15.1, file wn/image.c\n@@ -5619,7 +5760,7 @@ point_inside(Point *p, int npts, Point *plist)\n \n \t/* NaN makes the point cannot be inside the polygon */\n \tif (unlikely(isnan(x0) || isnan(y0) || isnan(p->x) || isnan(p->y)))\n-\t\treturn 0;\n+\t\treturn -1;\n \n \tprev_x = x0;\n \tprev_y = y0;\n@@ -5632,7 +5773,7 @@ point_inside(Point *p, int npts, Point *plist)\n \n \t\t/* NaN makes the point cannot be inside the polygon */\n \t\tif (unlikely(isnan(x) || isnan(y)))\n-\t\t\treturn 0;\n+\t\t\treturn -1;\n \n \t\t/* compute previous to current point crossing */\n \t\tif ((cross = lseg_crossing(x, y, prev_x, prev_y)) == POINT_ON_POLYGON)\n@@ -5659,6 +5800,8 @@ point_inside(Point *p, int npts, Point *plist)\n * Returns +/-1 if one point is on the positive X-axis.\n * Returns 0 if both points are on the positive X-axis, or there is no crossing.\n * Returns POINT_ON_POLYGON if the segment contains (0,0).\n+ * This function doesn't check if the parameters contain NaNs, it's the\n+ * responsibility of the callers.\n * Wow, that is one confusing API, but it is used above, and when summed,\n * can tell is if a point is in a polygon.\n */\n@@ -5723,17 +5866,18 @@ lseg_crossing(float8 x, float8 y, float8 prev_x, float8 prev_y)\n }\n \n \n-static bool\n+static tsbool\n plist_same(int npts, Point *p1, Point *p2)\n {\n \tint\t\t\ti,\n \t\t\t\tii,\n 
\t\t\t\tj;\n+\ttsbool\t\tr;\n \n \t/* find match for first point */\n \tfor (i = 0; i < npts; i++)\n \t{\n-\t\tif (point_eq_point(&p2[i], &p1[0]))\n+\t\tif ((r = point_eq_point(&p2[i], &p1[0])) == TS_TRUE)\n \t\t{\n \n \t\t\t/* match found? then look forward through remaining points */\n@@ -5741,26 +5885,29 @@ plist_same(int npts, Point *p1, Point *p2)\n \t\t\t{\n \t\t\t\tif (j >= npts)\n \t\t\t\t\tj = 0;\n-\t\t\t\tif (!point_eq_point(&p2[j], &p1[ii]))\n+\t\t\t\tif ((r = point_eq_point(&p2[j], &p1[ii])) != TS_TRUE)\n \t\t\t\t\tbreak;\n \t\t\t}\n \t\t\tif (ii == npts)\n-\t\t\t\treturn true;\n+\t\t\t\treturn TS_TRUE;\n \n \t\t\t/* match not found forwards? then look backwards */\n \t\t\tfor (ii = 1, j = i - 1; ii < npts; ii++, j--)\n \t\t\t{\n \t\t\t\tif (j < 0)\n \t\t\t\t\tj = (npts - 1);\n-\t\t\t\tif (!point_eq_point(&p2[j], &p1[ii]))\n+\t\t\t\tif ((r = point_eq_point(&p2[j], &p1[ii])) != TS_TRUE)\n \t\t\t\t\tbreak;\n \t\t\t}\n \t\t\tif (ii == npts)\n-\t\t\t\treturn true;\n+\t\t\t\treturn TS_TRUE;\n \t\t}\n+\n+\t\tif (r == TS_NULL)\n+\t\t\treturn TS_NULL;\n \t}\n \n-\treturn false;\n+\treturn TS_FALSE;\n }\n \n \ndiff --git a/src/include/c.h b/src/include/c.h\nindex c8ede08273..03e541936e 100644\n--- a/src/include/c.h\n+++ b/src/include/c.h\n@@ -402,6 +402,13 @@ typedef unsigned char bool;\n #endif\t\t\t\t\t\t\t/* not PG_USE_STDBOOL */\n #endif\t\t\t\t\t\t\t/* not C++ */\n \n+/* tri-state boolean, false/true are compatible with bool */\n+typedef enum tsbool\n+{\n+\tTS_NULL = -1,\n+\tTS_FALSE = false,\n+\tTS_TRUE = true\n+} tsbool;\n \n /* ----------------------------------------------------------------\n *\t\t\t\tSection 3:\tstandard system types\ndiff --git a/src/include/utils/float.h b/src/include/utils/float.h\nindex 4ab3f9d8ef..a983d6d8d0 100644\n--- a/src/include/utils/float.h\n+++ b/src/include/utils/float.h\n@@ -353,6 +353,115 @@ float8_max(const float8 val1, const float8 val2)\n \treturn float8_gt(val1, val2) ? 
val1 : val2;\n }\n \n+/* tri-state equivalents */\n+static inline tsbool\n+float4_teq(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 == val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_teq(const float8 val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 == val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float4_tne(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 != val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_tne(const float8 val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 != val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float4_tlt(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 < val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_tlt(const float8 val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 < val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float4_tle(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 <= val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_tle(const float8 val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 <= val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float4_tgt(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 > val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_tgt(const float8 val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 > val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float4_tge(const float4 val1, const float4 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 >= val2;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+float8_tge(const float8 
val1, const float8 val2)\n+{\n+\tif (!isnan(val1) && !isnan(val2))\n+\t\treturn val1 >= val2;\n+\n+\treturn TS_NULL;\n+}\n+\n /*\n * These two functions return NaN if either input is NaN, else the smaller\n * of the two inputs. This does NOT follow our usual sort rule, but it is\ndiff --git a/src/include/utils/geo_decls.h b/src/include/utils/geo_decls.h\nindex 0b87437d83..7b8a0dbdf7 100644\n--- a/src/include/utils/geo_decls.h\n+++ b/src/include/utils/geo_decls.h\n@@ -40,6 +40,19 @@\n \n #define EPSILON\t\t\t\t\t1.0E-06\n \n+/* helper function for tri-state checking */\n+static inline tsbool\n+FP_TRICHECK(double A, double B, bool cond)\n+{\n+\tif (cond)\n+\t\treturn TS_TRUE;\n+\tif (!isnan(A) && !isnan(B))\n+\t\treturn TS_FALSE;\n+\n+\treturn TS_NULL;\n+}\n+\n+\n #ifdef EPSILON\n #define FPzero(A)\t\t\t\t(fabs(A) <= EPSILON)\n \n@@ -78,6 +91,53 @@ FPge(double A, double B)\n {\n \treturn A + EPSILON >= B;\n }\n+\n+/* tri-state comparisons, don't use define to avoid duplicate evaluation */\n+static inline tsbool\n+FPTzero(double A)\n+{\n+\tif (fabs(A) <= EPSILON)\n+\t\treturn TS_TRUE;\n+\tif (isnan(A))\n+\t\treturn TS_NULL;\n+\treturn TS_FALSE;\n+}\n+\n+static inline tsbool\n+FPTeq(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A == B || fabs(A - B) <= EPSILON);\n+}\n+\n+static inline tsbool\n+FPTne(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A != B && fabs(A - B) > EPSILON);\n+}\n+\n+static inline tsbool\n+FPTlt(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A + EPSILON < B);\n+}\n+\n+static inline tsbool\n+FPTle(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A <= B + EPSILON);\n+}\n+\n+static inline tsbool\n+FPTgt(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A > B + EPSILON);\n+}\n+\n+static inline tsbool\n+FPTge(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A + EPSILON >= B);\n+}\n #else\n #define FPzero(A)\t\t\t\t((A) == 0)\n #define FPeq(A,B)\t\t\t\t((A) == (B))\n@@ -86,10 +146,109 @@ FPge(double A, double B)\n #define 
FPle(A,B)\t\t\t\t((A) <= (B))\n #define FPgt(A,B)\t\t\t\t((A) > (B))\n #define FPge(A,B)\t\t\t\t((A) >= (B))\n+\n+/* define as inline functions to avoid duplicate evaluation */\n+static inline tsbool\n+FPTzero(double A)\n+{\n+\tif (fabs(A) <= EPSILON)\n+\t\treturn TS_TRUE;\n+\tif (isnan(A))\n+\t\treturn TS_NULL;\n+\treturn TS_FALSE;\n+}\n+\n+static inline tsbool\n+FPTeq(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A == B);\n+}\n+\n+static inline tsbool\n+FPTne(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A != B);\n+}\n+\n+static inline tsbool\n+FPTlt(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A < B);\n+}\n+\n+static inline tsbool\n+FPTle(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A <= B);\n+}\n+\n+static inline tsbool\n+FPTgt(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A > B);\n+}\n+\n+static inline tsbool\n+FPTge(double A, double B)\n+{\n+\treturn FP_TRICHECK(A, B, A >= B);\n+}\n #endif\n \n #define HYPOT(A, B)\t\t\t\tpg_hypot(A, B)\n \n+static inline tsbool\n+TS_NOT(tsbool a)\n+{\n+\tif (a != TS_NULL)\n+\t\treturn !a;\n+\n+\treturn TS_NULL;\n+}\n+\n+static inline tsbool\n+TS_OR2(tsbool p1, tsbool p2)\n+{\n+\tif (p1 == TS_TRUE || p2 == TS_TRUE)\n+\t\treturn TS_TRUE;\n+\tif (p1 == TS_NULL || p2 == TS_NULL)\n+\t\treturn TS_NULL;\n+\telse\n+\t\treturn TS_FALSE;\n+}\n+\n+static inline tsbool\n+TS_AND2(tsbool p1, tsbool p2)\n+{\n+\tif (p1 == TS_TRUE && p2 == TS_TRUE)\n+\t\treturn TS_TRUE;\n+\tif (p1 == TS_NULL || p2 == TS_NULL)\n+\t\treturn TS_NULL;\n+\telse\n+\t\treturn TS_FALSE;\n+}\n+\n+static inline tsbool\n+TS_AND4(tsbool p1, tsbool p2, tsbool p3, tsbool p4)\n+{\n+\tif (p1 == TS_TRUE && p2 == TS_TRUE && p3 == TS_TRUE && p4 == TS_TRUE)\n+\t\treturn TS_TRUE;\n+\tif (p1 == TS_NULL || p2 == TS_NULL || p3 == TS_NULL || p4 == TS_NULL)\n+\t\treturn TS_NULL;\n+\telse\n+\t\treturn TS_FALSE;\n+}\n+\n+#define PG_RETURN_TSBOOL(e)\t\t\t\\\n+\tdo\t\t\t\t\t\t\t\t\\\n+\t{\t\t\t\t\t\t\t\t\\\n+\t\ttsbool _tmpsb = (e);\t\t\\\n+\t\tif 
(_tmpsb != TS_NULL)\t\t\\\n+\t\t\tPG_RETURN_BOOL(_tmpsb);\t\\\n+\t\telse\t\t\t\t\t\t\\\n+\t\t\tPG_RETURN_NULL();\t\t\\\n+\t} while (0)\n+\n /*---------------------------------------------------------------------\n * Point - (x,y)\n *-------------------------------------------------------------------*/", "msg_date": "Thu, 01 Apr 2021 15:46:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "On Thu, 2021-04-01 at 09:35 +0900, Kyotaro Horiguchi wrote:\n> > > > > > > > SELECT point('NaN','NaN') <@ polygon('(0,0),(1,0),(1,1),(0,0)');\n> > > > > > > > ?column? \n> > > > > > > > ----------\n> > > > > > > > t\n> > > > > > > > (1 row)\n> > \n> > If you think of \"NaN\" literally as \"not a number\", then FALSE would\n> > make sense, since \"not a number\" implies \"not a number between 0 and 1\".\n> > But since NaN is the result of operations like 0/0 or infinity - infinity,\n> > NULL might be better.\n> > So I'd opt for NULL too.\n> \n> Thanks. Do you think it's acceptable that returning false instead of\n> NULL as an alternative behavior?\n\nYes, I think that is acceptable.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Thu, 01 Apr 2021 09:54:20 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Issue with point_ops and NaN" }, { "msg_contents": "Le jeu. 1 avr. 
2021 à 15:54, Laurenz Albe <laurenz.albe@cybertec.at> a\nécrit :\n\n> On Thu, 2021-04-01 at 09:35 +0900, Kyotaro Horiguchi wrote:\n> > > > > > > > > SELECT point('NaN','NaN') <@\n> polygon('(0,0),(1,0),(1,1),(0,0)');\n> > > > > > > > > ?column?\n> > > > > > > > > ----------\n> > > > > > > > > t\n> > > > > > > > > (1 row)\n> > >\n> > > If you think of \"NaN\" literally as \"not a number\", then FALSE would\n> > > make sense, since \"not a number\" implies \"not a number between 0 and\n> 1\".\n> > > But since NaN is the result of operations like 0/0 or infinity -\n> infinity,\n> > > NULL might be better.\n> > > So I'd opt for NULL too.\n> >\n> > Thanks. Do you think it's acceptable that returning false instead of\n> > NULL as an alternative behavior?\n>\n> Yes, I think that is acceptable.\n>\n\n+1 especially after looking at the poc patch you sent to handle NULLs.\n", "msg_date": "Sat, 3 Apr 2021 20:22:06 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Issue with point_ops and NaN" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that the comment for CreateStmt.inhRelations says that it's a\nList of inhRelation, which hasn't been the case for a very long time.\n\nTrivial patch attached.", "msg_date": "Tue, 30 Mar 2021 20:30:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Outdated comment for CreateStmt.inhRelations" }, { "msg_contents": "On Tue, Mar 30, 2021 at 08:30:15PM +0800, Julien Rouhaud wrote:\n> I just noticed that the comment for CreateStmt.inhRelations says that it's a\n> List of inhRelation, which hasn't been the case for a very long time.\n\nThanks, applied.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 09:36:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Outdated comment for CreateStmt.inhRelations" }, { "msg_contents": "On Wed, Mar 31, 2021 at 09:36:51AM +0900, Michael Paquier wrote:\n> On Tue, Mar 30, 2021 at 08:30:15PM +0800, Julien Rouhaud wrote:\n> > I just noticed that the comment for CreateStmt.inhRelations says that it's a\n> > List of inhRelation, which hasn't been the case for a very long time.\n> \n> Thanks, applied.\n\nThanks!\n\n\n", "msg_date": "Wed, 31 Mar 2021 11:44:45 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Outdated comment for CreateStmt.inhRelations" } ]
[ { "msg_contents": "Hi,\n\nWhile reading the documentation for DROP INDEX[1], I noticed the lock was\ndescribed colloquially as an \"exclusive\" lock, which made me pause for a\nsecond because it's the same name as the EXCLUSIVE table lock.\n\nThe attached patch explicitly states that an ACCESS EXCLUSIVE lock is\nacquired.\n\n[1]\nhttps://www.postgresql.org/docs/current/sql-dropindex.html", "msg_date": "Tue, 30 Mar 2021 10:33:46 -0400", "msg_from": "Greg Rychlewski <greg.rychlewski@gmail.com>", "msg_from_op": true, "msg_subject": "DROP INDEX docs - explicit lock naming" }, { "msg_contents": "On Tue, Mar 30, 2021 at 10:33:46AM -0400, Greg Rychlewski wrote:\n> While reading the documentation for DROP INDEX[1], I noticed the lock was\n> described colloquially as an \"exclusive\" lock, which made me pause for a\n> second because it's the same name as the EXCLUSIVE table lock.\n> \n> The attached patch explicitly states that an ACCESS EXCLUSIVE lock is\n> acquired.\n\nIndeed, this could be read as ACCESS SHARE being allowed, but that's\nnever the case for any of the index code paths, except if CONCURRENTLY\nis involved. It is not the only place in the docs where we could do\nmore clarification. For instance, reindex.sgml mentions twice an\nexclusive lock but that should be an access exclusive lock. To be\nexact, I can spot 27 places under doc/ that could be improved. Such\nchanges depend on the surrounding context, of course.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 09:47:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: DROP INDEX docs - explicit lock naming" }, { "msg_contents": "Thanks for pointing that out. 
I've attached a new patch with several other\nupdates where I felt confident the docs were referring to an ACCESS\nEXCLUSIVE lock.\n\nOn Tue, Mar 30, 2021 at 8:47 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 30, 2021 at 10:33:46AM -0400, Greg Rychlewski wrote:\n> > While reading the documentation for DROP INDEX[1], I noticed the lock was\n> > described colloquially as an \"exclusive\" lock, which made me pause for a\n> > second because it's the same name as the EXCLUSIVE table lock.\n> >\n> > The attached patch explicitly states that an ACCESS EXCLUSIVE lock is\n> > acquired.\n>\n> Indeed, this could be read as ACCESS SHARE being allowed, but that's\n> never the case for any of the index code paths, except if CONCURRENTLY\n> is involved. It is not the only place in the docs where we could do\n> more clarification. For instance, reindex.sgml mentions twice an\n> exclusive lock but that should be an access exclusive lock. To be\n> exact, I can spot 27 places under doc/ that could be improved. Such\n> changes depend on the surrounding context, of course.\n> --\n> Michael\n>", "msg_date": "Tue, 30 Mar 2021 23:29:17 -0400", "msg_from": "Greg Rychlewski <greg.rychlewski@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP INDEX docs - explicit lock naming" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:29:17PM -0400, Greg Rychlewski wrote:\n> Thanks for pointing that out. I've attached a new patch with several other\n> updates where I felt confident the docs were referring to an ACCESS\n> EXCLUSIVE lock.\n\nThanks, applied! I have reviewed the whole and there is one place in\nvacuum.sgml that could switch \"exclusive lock\" to \"SHARE UPDATE\nEXCLUSIVE lock\" but I have left that out as it does not bring more\nclarity in the text. 
The change in indexam.sgml was partially wrong\nas REINDEX CONCURRENTLY does not take an access exclusive lock, and I\nhave tweaked a bit the wording of pgrowlocks.sgml.\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 15:32:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: DROP INDEX docs - explicit lock naming" }, { "msg_contents": "Thanks! I apologize, I added a commitfest entry for this and failed to add\nit to my message: https://commitfest.postgresql.org/33/3053/.\n\nThis is my first time submitting a patch and I'm not sure if it needs to be\ndeleted now or if you are supposed to add yourself as a committer.\n\nOn Thu, Apr 1, 2021 at 2:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Mar 30, 2021 at 11:29:17PM -0400, Greg Rychlewski wrote:\n> > Thanks for pointing that out. I've attached a new patch with several\n> other\n> > updates where I felt confident the docs were referring to an ACCESS\n> > EXCLUSIVE lock.\n>\n> Thanks, applied! I have reviewed the whole and there is one place in\n> vacuum.sgml that could switch \"exclusive lock\" to \"SHARE UPDATE\n> EXCLUSIVE lock\" but I have left that out as it does not bring more\n> clarity in the text. The change in indexam.sgml was partially wrong\n> as REINDEX CONCURRENTLY does not take an access exclusive lock, and I\n> have tweaked a bit the wording of pgrowlocks.sgml.\n> --\n> Michael\n>\n\nThanks! I apologize, I added a commitfest entry for this and failed to add it to my message: https://commitfest.postgresql.org/33/3053/.This is my first time submitting a patch and I'm not sure if it needs to be deleted now or if you are supposed to add yourself as a committer. On Thu, Apr 1, 2021 at 2:32 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Mar 30, 2021 at 11:29:17PM -0400, Greg Rychlewski wrote:\n> Thanks for pointing that out. 
I've attached a new patch with several other\n> updates where I felt confident the docs were referring to an ACCESS\n> EXCLUSIVE lock.\n\nThanks, applied!  I have reviewed the whole and there is one place in\nvacuum.sgml that could switch \"exclusive lock\" to \"SHARE UPDATE\nEXCLUSIVE lock\" but I have left that out as it does not bring more\nclarity in the text.  The change in indexam.sgml was partially wrong\nas REINDEX CONCURRENTLY does not take an access exclusive lock, and I\nhave tweaked a bit the wording of pgrowlocks.sgml.\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 08:26:50 -0400", "msg_from": "Greg Rychlewski <greg.rychlewski@gmail.com>", "msg_from_op": true, "msg_subject": "Re: DROP INDEX docs - explicit lock naming" }, { "msg_contents": "On Thu, Apr 01, 2021 at 08:26:50AM -0400, Greg Rychlewski wrote:\n> Thanks! I apologize, I added a commitfest entry for this and failed to add\n> it to my message: https://commitfest.postgresql.org/33/3053/.\n> \n> This is my first time submitting a patch and I'm not sure if it needs to be\n> deleted now or if you are supposed to add yourself as a committer.\n\nThanks, I did not notice that. The entry has been updated as it\nshould.\n--\nMichael", "msg_date": "Fri, 2 Apr 2021 08:54:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: DROP INDEX docs - explicit lock naming" } ]
[ { "msg_contents": "Andrew,\n\nWhile developing some cross version tests, I noticed that PostgresNode::init fails for postgres versions older than 9.3, like so:\n\n# Checking port 52814\n# Found port 52814\nName: 9.2.24\nData directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/pgdata\nBackup directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/backup\nArchive directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/archives\nConnection string: port=52814 host=/var/folders/6n/3f4vwbnn7fz5qk0xqhgbjrkw0000gp/T/L_A2w1x7qb\nLog file: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/log/001_verify_paths_9.2.24.log\n# Running: initdb -D /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/pgdata -A trust -N\ninitdb: invalid option -- N\nTry \"initdb --help\" for more information.\nBail out! system initdb failed\n\nThe problem is clear enough; -N/--nosync was added in 9.3, and PostgresNode::init is passing -N to initdb unconditionally. I wonder if during PostgresNode::new a call should be made to pg_config and the version information grep'd out so that version specific options to various functions (init, backup, etc) could hinge on the version of postgres being used?\n\nYou could also just remove the -N option, but that would slow down all tests for everybody, so I'm not keen to do that. Or you could remove -N in cases where $node->{_install_path} is defined, which would be far more acceptable. 
I'm leaning towards using the output of pg_config, though, since this problem is likely to come up again with other options/commands.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 10:39:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "> On Mar 30, 2021, at 10:39 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Andrew,\n> \n> While developing some cross version tests, I noticed that PostgresNode::init fails for postgres versions older than 9.3, like so:\n> \n> # Checking port 52814\n> # Found port 52814\n> Name: 9.2.24\n> Data directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/pgdata\n> Backup directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/backup\n> Archive directory: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/archives\n> Connection string: port=52814 host=/var/folders/6n/3f4vwbnn7fz5qk0xqhgbjrkw0000gp/T/L_A2w1x7qb\n> Log file: /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/log/001_verify_paths_9.2.24.log\n> # Running: initdb -D /Users/mark.dilger/hydra/postgresnode.review/src/test/modules/test_cross_version/tmp_check/t_001_verify_paths_9.2.24_data/pgdata -A trust -N\n> initdb: invalid option -- N\n> Try \"initdb --help\" for more information.\n> Bail out! system initdb failed\n> \n> The problem is clear enough; -N/--nosync was added in 9.3, and PostgresNode::init is passing -N to initdb unconditionally. 
I wonder if during PostgresNode::new a call should be made to pg_config and the version information grep'd out so that version specific options to various functions (init, backup, etc) could hinge on the version of postgres being used?\n> \n> You could also just remove the -N option, but that would slow down all tests for everybody, so I'm not keen to do that. Or you could remove -N in cases where $node->{_install_path} is defined, which would be far more acceptable. I'm leaning towards using the output of pg_config, though, since this problem is likely to come up again with other options/commands.\n> \n> Thoughts?\n\nI fixed this largely as outlined above, introducing a few new functions which ease test development and using one of them to condition the behavior of init() on the postgres version.\n\nIn the tests I have been developing (not included), the developer (or some buildfarm script) has to list all postgresql installations in a configuration file, like so:\n\n/Users/mark.dilger/install/8.4\n/Users/mark.dilger/install/9.0.23\n/Users/mark.dilger/install/9.1.24\n/Users/mark.dilger/install/9.2.24\n/Users/mark.dilger/install/9.3.25\n/Users/mark.dilger/install/9.4.26\n/Users/mark.dilger/install/9.5.25\n/Users/mark.dilger/install/9.6\n/Users/mark.dilger/install/10\n/Users/mark.dilger/install/11\n/Users/mark.dilger/install/12\n/Users/mark.dilger/install/13\n\nThe tests can't be hardcoded to know anything about which specific postgres versions will be installed, or what version of postgres exists in any particular install directory. It makes the tests easier to maintain if they can do stuff like:\n\n $node{$_} = PostgresNode->get_new_node(...) 
for (@installed_versions);\n \n if ($node{a}->newer_than_version($node{b}))\n {\n # check that newer version A's whatever can connect to and work with older server B\n ....\n }\n\nI therefore included functions of that sort in the patch along with the $node->at_least_version(version) function that the fix uses.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 30 Mar 2021 14:33:36 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Mar-30, Mark Dilger wrote:\n\n> The problem is clear enough; -N/--nosync was added in 9.3, and\n> PostgresNode::init is passing -N to initdb unconditionally. I wonder\n> if during PostgresNode::new a call should be made to pg_config and the\n> version information grep'd out so that version specific options to\n> various functions (init, backup, etc) could hinge on the version of\n> postgres being used?\n\nYeah, I think making it backwards-compatible would be good. Is running\npg_config to obtain the version the best way to do it? I'm not sure --\nwhat other ways are there? I can't of anything. (Asking the user seems\nright out.)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n\n", "msg_date": "Tue, 30 Mar 2021 19:12:48 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Mar 30, 2021, at 3:12 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Mar-30, Mark Dilger wrote:\n> \n>> The problem is clear enough; -N/--nosync was added in 9.3, and\n>> PostgresNode::init is passing -N to initdb unconditionally. 
I wonder\n>> if during PostgresNode::new a call should be made to pg_config and the\n>> version information grep'd out so that version specific options to\n>> various functions (init, backup, etc) could hinge on the version of\n>> postgres being used?\n> \n> Yeah, I think making it backwards-compatible would be good. Is running\n> pg_config to obtain the version the best way to do it? I'm not sure --\n> what other ways are there? I can't of anything. (Asking the user seems\n> right out.)\n\nOnce you have a node running, you can query the version using safe_psql, but that clearly doesn't work soon enough, since we need the information prior to running initdb.\n\nOne of the things I noticed while playing with this new toy (thanks, Andrew!) is that if you pass a completely insane install_path, you don't get any errors. In fact, you get executables and libraries from whatever PATH=\"/no/such/postgres:$PATH\" gets you, probably the executables and libraries of your latest development branch. By forcing get_new_node to call the pg_config of the path you pass in, you'd fix that problem. I didn't do that, mind you, but you could. 
I just executed pg_config, which means you'll still get the wrong version owing to the PATH confusion.\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 15:17:26 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Mar-30, Mark Dilger wrote:\n\n> Once you have a node running, you can query the version using\n> safe_psql, but that clearly doesn't work soon enough, since we need\n> the information prior to running initdb.\n\nI was thinking something like examining some file in the install dir --\nsay, include/server/pg_config.h, but that seems messier than just\nrunning pg_config.\n\n> One of the things I noticed while playing with this new toy (thanks,\n> Andrew!) is that if you pass a completely insane install_path, you\n> don't get any errors. In fact, you get executables and libraries from\n> whatever PATH=\"/no/such/postgres:$PATH\" gets you, probably the\n> executables and libraries of your latest development branch. By\n> forcing get_new_node to call the pg_config of the path you pass in,\n> you'd fix that problem. I didn't do that, mind you, but you could. 
I\n> just executed pg_config, which means you'll still get the wrong\n> version owing to the PATH confusion.\n\nHmm, I think it should complain if you give it a path that doesn't\nactually contain a valid installation.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n", "msg_date": "Tue, 30 Mar 2021 19:22:01 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "> On Mar 30, 2021, at 3:22 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n>> One of the things I noticed while playing with this new toy (thanks,\n>> Andrew!) is that if you pass a completely insane install_path, you\n>> don't get any errors. In fact, you get executables and libraries from\n>> whatever PATH=\"/no/such/postgres:$PATH\" gets you, probably the\n>> executables and libraries of your latest development branch. By\n>> forcing get_new_node to call the pg_config of the path you pass in,\n>> you'd fix that problem. I didn't do that, mind you, but you could. I\n>> just executed pg_config, which means you'll still get the wrong\n>> version owing to the PATH confusion.\n> \n> Hmm, I think it should complain if you give it a path that doesn't\n> actually contain a valid installation.\n\nI felt the same way, but wondered if Andrew had set path variables without sanity checking the install_path argument for some specific reason, and didn't want to break something he did intentionally. If that wasn't intentional, then there are two open bugs/infelicities against master:\n\n1) PostgresNode::init() doesn't work for older server versions\n\n2) PostgresNode::get_new_node() doesn't reject invalid paths, resulting in confusion about which binaries subsequently get executed\n\nI think this next version of the patch addresses both issues. 
The first issue was already fixed in the previous patch. The second issue is also now fixed by forcing the usage of the install_path qualified pg_config executable, rather than using whatever pg_config happens to be found in the path.\n\nThere is an existing issue that if you configure with --bindir=$SOMEWHERE_UNEXPECTED, PostgresNode won't work. It inserts ${install_path}/bin and ${install_path}/lib into the environment without regard for whether \"bin\" and \"lib\" are correct. That's a pre-existing limitation, and I'm not complaining, but just commenting that I didn't do anything to fix it.\n\nKeeping the WIP marking on the patch until we hear Andrew's opinion on all this.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 30 Mar 2021 17:41:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 3/30/21 6:22 PM, Alvaro Herrera wrote:\n> On 2021-Mar-30, Mark Dilger wrote:\n>\n>> Once you have a node running, you can query the version using\n>> safe_psql, but that clearly doesn't work soon enough, since we need\n>> the information prior to running initdb.\n> I was thinking something like examining some file in the install dir --\n> say, include/server/pg_config.h, but that seems messier than just\n> running pg_config.\n>\n>> One of the things I noticed while playing with this new toy (thanks,\n>> Andrew!) \n\n\n(I'm really happy someone is playing with it so soon.)\n\n\n\n>> is that if you pass a completely insane install_path, you\n>> don't get any errors. In fact, you get executables and libraries from\n>> whatever PATH=\"/no/such/postgres:$PATH\" gets you, probably the\n>> executables and libraries of your latest development branch. By\n>> forcing get_new_node to call the pg_config of the path you pass in,\n>> you'd fix that problem. 
I didn't do that, mind you, but you could. I\n>> just executed pg_config, which means you'll still get the wrong\n>> version owing to the PATH confusion.\n> Hmm, I think it should complain if you give it a path that doesn't\n> actually contain a valid installation.\n>\n\n\nYeah, it should be validated. All things considered I think just calling\n'pg_config --version' is probably the simplest validation, and likely to\nbe sufficient.\n\n\nI'll try to come up with something tomorrow.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 20:44:26 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Mar 30, 2021, at 5:44 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> I'll try to come up with something tomorrow.\n\nI hope the patch I sent is useful, at least as a starting point.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 17:50:14 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Tue, Mar 30, 2021 at 08:44:26PM -0400, Andrew Dunstan wrote:\n> Yeah, it should be validated. All things considered I think just calling\n> 'pg_config --version' is probably the simplest validation, and likely to\n> be sufficient.\n> \n> I'll try to come up with something tomorrow.\n\nThere is already TestLib::check_pg_config(). 
Shouldn't you leverage\nthat with PG_VERSION_NUM or equivalent?\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 09:52:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Mar 30, 2021, at 5:52 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Tue, Mar 30, 2021 at 08:44:26PM -0400, Andrew Dunstan wrote:\n>> Yeah, it should be validated. All things considered I think just calling\n>> 'pg_config --version' is probably the simplest validation, and likely to\n>> be sufficient.\n>> \n>> I'll try to come up with something tomorrow.\n> \n> There is already TestLib::check_pg_config(). Shouldn't you leverage\n> that with PG_VERSION_NUM or equivalent?\n\nOnly if you change that function. It doesn't currently do anything special to run the *right* pg_config.\n\nThe patch I sent takes the view that once the install_path has been sanity checked and the *right* pg_config executed, relying on the environment's path variables thereafter is safe. But that means the initial pg_config execution is unique in not being able to rely on the path. There really isn't enough motivation for changing TestLib, I don't think, because subsequent calls to pg_config don't need to be paranoid, just the first call.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 17:59:08 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Mar-31, Michael Paquier wrote:\n\n> There is already TestLib::check_pg_config(). Shouldn't you leverage\n> that with PG_VERSION_NUM or equivalent?\n\nhmm, I wonder if we shouldn't take the stance that it is not TestLib's\nbusiness to be calling any Pg binaries. 
So that routine should be moved\nto PostgresNode.\n\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Tue, 30 Mar 2021 23:06:55 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Tue, Mar 30, 2021 at 11:06:55PM -0300, Alvaro Herrera wrote:\n> On 2021-Mar-31, Michael Paquier wrote:\n>> There is already TestLib::check_pg_config(). Shouldn't you leverage\n>> that with PG_VERSION_NUM or equivalent?\n> \n> hmm, I wonder if we shouldn't take the stance that it is not TestLib's\n> business to be calling any Pg binaries. So that routine should be moved\n> to PostgresNode.\n\nHmm. I can see the intention, but I am not sure that this is\ncompletely correct either to assume that we require a backend node\nwhen tests just do tests with frontends in standalone, aka with\nTestLib::program_*.\n--\nMichael", "msg_date": "Wed, 31 Mar 2021 12:08:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Mar 30, 2021, at 5:41 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> 1) PostgresNode::init() doesn't work for older server versions\n\n\nPostgresNode::start() doesn't work for servers older than version 10, either. If I hack that function to sleep until the postmaster.pid file exists, it works, but that is really ugly and is just to prove to myself that it is a timing issue. There were a few commits in the version 10 development cycle (cf, commit f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl works, though I haven't figured out yet exactly what the interaction with PostgresNode would be. 
I'll keep looking.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:34:41 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Mar-31, Mark Dilger wrote:\n\n> PostgresNode::start() doesn't work for servers older than version 10,\n> either. If I hack that function to sleep until the postmaster.pid\n> file exists, it works, but that is really ugly and is just to prove to\n> myself that it is a timing issue. There were a few commits in the\n> version 10 development cycle (cf, commit\n> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl\n> works, though I haven't figured out yet exactly what the interaction\n> with PostgresNode would be. I'll keep looking.\n\nDo you need to do \"pg_ctl -w\" perhaps?\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:48:17 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 3/31/21 3:48 PM, Alvaro Herrera wrote:\n> On 2021-Mar-31, Mark Dilger wrote:\n>\n>> PostgresNode::start() doesn't work for servers older than version 10,\n>> either. If I hack that function to sleep until the postmaster.pid\n>> file exists, it works, but that is really ugly and is just to prove to\n>> myself that it is a timing issue. There were a few commits in the\n>> version 10 development cycle (cf, commit\n>> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl\n>> works, though I haven't figured out yet exactly what the interaction\n>> with PostgresNode would be. I'll keep looking.\n> Do you need to do \"pg_ctl -w\" perhaps?\n\n\n\nProbably. 
The buildfarm does this unconditionally and has done for a\nvery long time, so maybe we don't need a version test for it.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:05:23 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Mar 31, 2021, at 1:05 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n> On 3/31/21 3:48 PM, Alvaro Herrera wrote:\n>> On 2021-Mar-31, Mark Dilger wrote:\n>> \n>>> PostgresNode::start() doesn't work for servers older than version 10,\n>>> either. If I hack that function to sleep until the postmaster.pid\n>>> file exists, it works, but that is really ugly and is just to prove to\n>>> myself that it is a timing issue. There were a few commits in the\n>>> version 10 development cycle (cf, commit\n>>> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl\n>>> works, though I haven't figured out yet exactly what the interaction\n>>> with PostgresNode would be. I'll keep looking.\n>> Do you need to do \"pg_ctl -w\" perhaps?\n> \n> \n> \n> Probably. The buildfarm does this unconditionally and has done for a\n> very long time, so maybe we don't need a version test for it.\n\nI put a version test for this and it works for me. I guess you could do it unconditionally, if you want, but the condition is just:\n\n- TestLib::system_or_bail('pg_ctl', '-D', $pgdata, '-l', $logfile,\n+ TestLib::system_or_bail('pg_ctl',\n+ $self->older_than_version('10') ? 
'-w' : (),\n+ '-D', $pgdata, '-l', $logfile,\n 'restart');\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:07:48 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 3/30/21 8:52 PM, Michael Paquier wrote:\n> On Tue, Mar 30, 2021 at 08:44:26PM -0400, Andrew Dunstan wrote:\n>> Yeah, it should be validated. All things considered I think just calling\n>> 'pg_config --version' is probably the simplest validation, and likely to\n>> be sufficient.\n>>\n>> I'll try to come up with something tomorrow.\n> There is already TestLib::check_pg_config(). Shouldn't you leverage\n> that with PG_VERSION_NUM or equivalent?\n\n\n\nTBH, TestLib::check_pg_config looks like a bit of a wart, and I would be\ntempted to remove it. It's the only Postgres-specific thing in\nTestLib.pm I think. It's only used in one place AFAICT\n(src/test/ssl/t/002_scram.pl) so we could just remove it and inline the\ncode.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:45:32 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "> On Mar 31, 2021, at 1:07 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> \n> \n>> On Mar 31, 2021, at 1:05 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> \n>> \n>> On 3/31/21 3:48 PM, Alvaro Herrera wrote:\n>>> On 2021-Mar-31, Mark Dilger wrote:\n>>> \n>>>> PostgresNode::start() doesn't work for servers older than version 10,\n>>>> either. If I hack that function to sleep until the postmaster.pid\n>>>> file exists, it works, but that is really ugly and is just to prove to\n>>>> myself that it is a timing issue. 
There were a few commits in the\n>>>> version 10 development cycle (cf, commit\n>>>> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl\n>>>> works, though I haven't figured out yet exactly what the interaction\n>>>> with PostgresNode would be. I'll keep looking.\n>>> Do you need to do \"pg_ctl -w\" perhaps?\n>> \n>> \n>> \n>> Probably. The buildfarm does this unconditionally and has done for a\n>> very long time, so maybe we don't need a version test for it.\n> \n> I put a version test for this and it works for me. I guess you could do it unconditionally, if you want, but the condition is just:\n> \n> - TestLib::system_or_bail('pg_ctl', '-D', $pgdata, '-l', $logfile,\n> + TestLib::system_or_bail('pg_ctl',\n> + $self->older_than_version('10') ? '-w' : (),\n> + '-D', $pgdata, '-l', $logfile,\n> 'restart');\n\nI have needed to do a number of these version checks to get PostgresNode working across a variety of versions. Attached is a WIP patch set with those changes, and with a framework that exercises PostgresNode and can be extended to check other things. For now, it just checks that init(), start(), safe_psql(), and teardown_node() work.\n\nWith the existing changes to PostgresNode in 0001, the framework in 0002 works for server versions back to 9.3. Versions 9.1 and 9.2 fail on the safe_psql(), and I haven't dug into that far enough yet to explain why. Versions 8.4 and 9.0 fail on the start(). I had trouble getting versions of postgres older than 8.4 to compile on my laptop. I haven't dug far enough into that yet, either.\n\nTo get this running, you need to install the versions you care about and edit src/test/modules/test_cross_version/version.dat with the names and locations of those installations. (I committed the patch with my local settings, so you can easily compare and edit.) 
That should get you to the point where you can run 'make check' in the test_cross_version directory.\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 31 Mar 2021 19:28:56 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 3/31/21 10:28 PM, Mark Dilger wrote:\n>\n>> On Mar 31, 2021, at 1:07 PM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>\n>>\n>>\n>>> On Mar 31, 2021, at 1:05 PM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>\n>>>\n>>> On 3/31/21 3:48 PM, Alvaro Herrera wrote:\n>>>> On 2021-Mar-31, Mark Dilger wrote:\n>>>>\n>>>>> PostgresNode::start() doesn't work for servers older than version 10,\n>>>>> either. If I hack that function to sleep until the postmaster.pid\n>>>>> file exists, it works, but that is really ugly and is just to prove to\n>>>>> myself that it is a timing issue. There were a few commits in the\n>>>>> version 10 development cycle (cf, commit\n>>>>> f13ea95f9e473a43ee4e1baeb94daaf83535d37c) which changed how pg_ctl\n>>>>> works, though I haven't figured out yet exactly what the interaction\n>>>>> with PostgresNode would be. I'll keep looking.\n>>>> Do you need to do \"pg_ctl -w\" perhaps?\n>>>\n>>>\n>>> Probably. The buildfarm does this unconditionally and has done for a\n>>> very long time, so maybe we don't need a version test for it.\n>> I put a version test for this and it works for me. I guess you could do it unconditionally, if you want, but the condition is just:\n>>\n>> - TestLib::system_or_bail('pg_ctl', '-D', $pgdata, '-l', $logfile,\n>> + TestLib::system_or_bail('pg_ctl',\n>> + $self->older_than_version('10') ? '-w' : (),\n>> + '-D', $pgdata, '-l', $logfile,\n>> 'restart');\n> I have needed to do a number of these version checks to get PostgresNode working across a variety of versions. 
Attached is a WIP patch set with those changes, and with a framework that exercises PostgresNode and can be extended to check other things. For now, it just checks that init(), start(), safe_psql(), and teardown_node() work.\n>\n> With the existing changes to PostgresNode in 0001, the framework in 0002 works for server versions back to 9.3. Versions 9.1 and 9.2 fail on the safe_psql(), and I haven't dug into that far enough yet to explain why. Versions 8.4 and 9.0 fail on the start(). I had trouble getting versions of postgres older than 8.4 to compile on my laptop. I haven't dug far enough into that yet, either.\n>\n> To get this running, you need to install the versions you care about and edit src/test/modules/test_cross_version/version.dat with the names and locations of those installations. (I committed the patch with my local settings, so you can easily compare and edit.) That should get you to the point where you can run 'make check' in the test_cross_version directory.\n\n\nI've had a look at the first of these patches. I think it's generally\nok, but:\n\n\n-    TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust', '-N',\n+    TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust',\n+        $self->at_least_version(\"9.3\") ? '-N' : (),\n         @{ $params{extra} });\n\n\nI'd rather do this in two steps to make it clearer.\n\n\nI still think just doing pg_ctl -w unconditionally would be simpler.\n\n\nPrior to 9.3 \"unix_socket_directories\" was spelled\n\"unix_socket_directory\". We should just set a variable appropriately and\nuse it. That should make the changes around that a whole lot simpler.\n(c.f. 
buildfarm code)\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 3 Apr 2021 14:01:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "> On Apr 3, 2021, at 11:01 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> I've had a look at the first of these patches. I think it's generally\n> ok, but:\n> \n> \n> - TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust', '-N',\n> + TestLib::system_or_bail('initdb', '-D', $pgdata, '-A', 'trust',\n> + $self->at_least_version(\"9.3\") ? '-N' : (),\n> @{ $params{extra} });\n> \n> \n> I'd rather do this in two steps to make it clearer.\n\nChanged.\n\n> I still think just doing pg_ctl -w unconditionally would be simpler.\n\nChanged.\n\n> Prior to 9.3 \"unix_socket_directories\" was spelled\n> \"unix_socket_directory\". We should just set a variable appropriately and\n> use it. That should make the changes around that a whole lot simpler.\n> (c.f. buildfarm code)\n\nAh, good to know. Changed.\n\n\nOther changes:\n\nThe v1 patch supported postgres versions back to 8.4, but v2 pushes that back to 8.1.\n\nThe version of PostgresNode currently committed relies on IPC::Run in a way that is subtly wrong. The first time IPC::Run::run(X, ...) is called, it uses the PATH as it exists at that time, resolves the path for X, and caches it. Subsequent calls to IPC::Run::run(X, ...) use the cached path, without respecting changes to $ENV{PATH}. In practice, this means that:\n\n use PostgresNode;\n\n my $a = PostgresNode->get_new_node('a', install_path => '/my/install/8.4');\n my $b = PostgresNode->get_new_node('b', install_path => '/my/install/9.0');\n\n $a->safe_psql(...) # <=== Resolves and caches 'psql' as /my/install/8.4/bin/psql\n\n $b->safe_psql(...) 
# <=== Executes /my/install/8.4/bin/psql, not /my/install/9.0/bin/psql as one might expect\n\nPostgresNode::safe_psql() and PostgresNode::psql() both suffer from this, and similarly PostgresNode::pg_recvlogical_upto() because the path to pg_recvlogical gets cached. Calls to initdb and pg_ctl do not appear to suffer this problem, as they are ultimately handled by perl's system() call, not by IPC::Run::run.\n\nSince postgres commands work fairly similarly from one release to another, this can cause subtle and hard to diagnose bugs in regression tests. The fix in v2-0001 works for me, as demonstrated by v2-0002, but whether the fix in the attached v2 patch set gets used or not, I think something needs to be done to fix this.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Tue, 6 Apr 2021 22:03:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "Hi all,\n\nFirst, sorry to step in this discussion this late. I didn't noticed it before :(\n\nI did some work about these compatibility issues in late 2020 to use\nPostgresNode in the check_pgactivity TAP tests.\n\nSee https://github.com/ioguix/check_pgactivity/tree/tests/t/lib\n\nPostgresNode.pm, TestLib.pm, SimpleTee.pm and RecursiveCopy.pm comes unchanged\nfrom PostgreSQL source file (see headers and COPYRIGHT.pgsql).\n\nThen, I'm using the facet class PostgresNodeFacet to extend it with some more\nmethods. Finaly, I created one class per majpr version, each inheriting from the\nnext version. That means 13 inherits from PostgresNodeFacet.pm, 12 inherits from\n13, 11 inherits from 12, etc.\n\nWhen I'm creating a new node, I'm using the \"pgaTester\" factory class. 
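Mark's IPC::Run observation above suggests one possible workaround: resolve the command against the *current* $ENV{PATH} before IPC::Run ever sees it, so the module's first-call cache can only ever hold absolute paths. The sketch below is illustrative only — `_resolve_in_path` is a hypothetical helper name, not code from the attached patch.

```perl
use strict;
use warnings;
use File::Spec;

# Hypothetical helper: resolve a bare command name against the current
# $ENV{PATH} on every call.  Handing IPC::Run::run() an absolute path
# means its internal first-call cache can never pin the binary from the
# wrong installation after $ENV{PATH} has changed.
sub _resolve_in_path
{
	my ($cmd) = @_;
	return $cmd if File::Spec->file_name_is_absolute($cmd);
	foreach my $dir (File::Spec->path)
	{
		my $candidate = File::Spec->catfile($dir, $cmd);
		return $candidate if -x $candidate;
	}
	return $cmd;    # unresolvable; let IPC::Run report the failure
}

# A call site such as PostgresNode::psql() would then do, roughly:
#   IPC::Run::run([ _resolve_in_path('psql'), @args ], ...);
```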
It\nrelies on PATH to check the major version using pg_config, then loads the\nappropriate class.\n\nThat means some class overrides almost no methods but version(), which returns\nthe major version. Eg.:\nhttps://github.com/ioguix/check_pgactivity/blob/tests/t/lib/PostgresNode12.pm\n\nFrom tests, I can check the node version using this method, eg.:\n\n skip \"skip non-compatible test on PostgreSQL 8.0 and before\", 3\n unless $node->version <= 8.0;\n\nOf course, there's a lot of duplicate code between classes, but my main goal\nwas to keep PostgresNode.pm unchanged from upstream so I can easily update it.\n\nAnd here is a demo test file:\nhttps://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n\nMy limited set of tests are working with versions back to 9.0 so far.\n\nMy 2¢\n\n\n", "msg_date": "Wed, 7 Apr 2021 16:37:12 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 1:03 AM, Mark Dilger wrote:\n> The v1 patch supported postgres versions back to 8.4, but v2 pushes that back to 8.1.\n>\n> The version of PostgresNode currently committed relies on IPC::Run in a way that is subtly wrong. The first time IPC::Run::run(X, ...) is called, it uses the PATH as it exists at that time, resolves the path for X, and caches it. Subsequent calls to IPC::Run::run(X, ...) use the cached path, without respecting changes to $ENV{PATH}. In practice, this means that:\n>\n> use PostgresNode;\n>\n> my $a = PostgresNode->get_new_node('a', install_path => '/my/install/8.4');\n> my $b = PostgresNode->get_new_node('b', install_path => '/my/install/9.0');\n>\n> $a->safe_psql(...) # <=== Resolves and caches 'psql' as /my/install/8.4/bin/psql\n>\n> $b->safe_psql(...) 
# <=== Executes /my/install/8.4/bin/psql, not /my/install/9.0/bin/psql as one might expect\n>\n> PostgresNode::safe_psql() and PostgresNode::psql() both suffer from this, and similarly PostgresNode::pg_recvlogical_upto() because the path to pg_recvlogical gets cached. Calls to initdb and pg_ctl do not appear to suffer this problem, as they are ultimately handled by perl's system() call, not by IPC::Run::run.\n>\n> Since postgres commands work fairly similarly from one release to another, this can cause subtle and hard to diagnose bugs in regression tests. The fix in v2-0001 works for me, as demonstrated by v2-0002, but whether the fix in the attached v2 patch set gets used or not, I think something needs to be done to fix this.\n>\n>\n\nAwesome work. The IPC::Run behaviour is darned unfriendly, and AFAICS\ncompletely undocumented. It can't even be easily modified by a client\nbecause the cache is stashed in a lexical variable. Your fix looks good.\n\n\nother notes:\n\n\n. needs a perltidy run, some lines are too long (see\nsrc/tools/pgindent/perltidyrc)\n\n\n. Please use an explicit return here:\n\n\n+    # Return an array reference\n+    [ @result ];\n\n\n. I'm not sure the computation in _pg_version_cmp is right. What if the\nnumber of elements differ? As I read it you would return 0 for a\ncomparison of '1.2' and '1.2.3'. Is that what's intended?\n\n\n. The second patch has a bunch of stuff it doesn't need. The control\nfile should be unnecessary as should all the lines above 'ifdef\nUSE_PGXS' in the Makefile except 'TAP_TESTS = 1'\n\n\n. the test script should have a way of passing a non-default version\nfile to CrossVersion::nodes(). Possibly get it from an environment variable?\n\n\n. I'm somewhat inclined to say that CrossVersion should just return a\n{name => path} map, and let the client script do the node creation. Then\nCrossVersion doesn't need to know anything much about the\ninfrastructure. But I could possibly be persuaded otherwise. 
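For the `_pg_version_cmp` question in the review above, one way to make version strings of different lengths compare sanely is to treat missing trailing elements as zero, so '1.2' sorts below '1.2.3'. A sketch under that assumption — not the patch's actual implementation, which may pick another convention:

```perl
use strict;
use warnings;

# Compare two dotted version strings numerically, element by element.
# Missing trailing elements are treated as 0, so '1.2' < '1.2.3' and
# '10' == '10.0'.  Illustrative only.
sub _pg_version_cmp
{
	my ($x, $y) = @_;
	my @x = split /\./, $x;
	my @y = split /\./, $y;
	my $len = @x > @y ? scalar(@x) : scalar(@y);
	for my $i (0 .. $len - 1)
	{
		my $cmp = ($x[$i] // 0) <=> ($y[$i] // 0);
		return $cmp if $cmp;
	}
	return 0;
}
```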
Also, maybe\nit belongs in src/test/perl.\n\n\n. This line appears redundant, the variable is not referenced:\n\n\n+    my $path = $ENV{PATH};\n\n\nAlso these lines at the bottom of CrossVersion.pm are redundant:\n\n\n+use strict;\n+use warnings;\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 11:43:41 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 10:37 AM, Jehan-Guillaume de Rorthais wrote:\n> Hi all,\n>\n> First, sorry to step in this discussion this late. I didn't noticed it before :(\n>\n> I did some work about these compatibility issues in late 2020 to use\n> PostgresNode in the check_pgactivity TAP tests.\n>\n> See https://github.com/ioguix/check_pgactivity/tree/tests/t/lib\n>\n> PostgresNode.pm, TestLib.pm, SimpleTee.pm and RecursiveCopy.pm comes unchanged\n> from PostgreSQL source file (see headers and COPYRIGHT.pgsql).\n>\n> Then, I'm using the facet class PostgresNodeFacet to extend it with some more\n> methods. Finaly, I created one class per majpr version, each inheriting from the\n> next version. That means 13 inherits from PostgresNodeFacet.pm, 12 inherits from\n> 13, 11 inherits from 12, etc.\n>\n> When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n> relies on PATH to check the major version using pg_config, then loads the\n> appropriate class.\n>\n> That means some class overrides almost no methods but version(), which returns\n> the major version. 
Eg.:\n> https://github.com/ioguix/check_pgactivity/blob/tests/t/lib/PostgresNode12.pm\n>\n> From tests, I can check the node version using this method, eg.:\n>\n> skip \"skip non-compatible test on PostgreSQL 8.0 and before\", 3\n> unless $node->version <= 8.0;\n>\n> Of course, there's a lot of duplicate code between classes, but my main goal\n> was to keep PostgresNode.pm unchanged from upstream so I can easily update it.\n>\n> And here is a demo test file:\n> https://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n>\n> My limited set of tests are working with versions back to 9.0 so far.\n>\n> My 2¢\n\n\n\nI don't really want to create a multitude of classes. I think Mark is\nbasically on the right track. I started off using a subclass of\nPostgresNode but was persuaded not to go down that route, and instead we\nhave made some fairly minimal changes to PostgresNode itself. I think\nthat was a good decision. If you take out the versioning subroutines,\nthe actual version-aware changes Mark is proposing to PostgresNode are\nquite small - less that 200 lines supporting versions all the way back\nto 8.1. That's pretty awesome.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 11:54:36 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 7:37 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> Hi all,\n> \n> First, sorry to step in this discussion this late. 
I didn't noticed it before :(\n\nNot a problem.\n\n> I did some work about these compatibility issues in late 2020 to use\n> PostgresNode in the check_pgactivity TAP tests.\n> \n> See https://github.com/ioguix/check_pgactivity/tree/tests/t/lib\n> \n> PostgresNode.pm, TestLib.pm, SimpleTee.pm and RecursiveCopy.pm comes unchanged\n> from PostgreSQL source file (see headers and COPYRIGHT.pgsql).\n> \n> Then, I'm using the facet class PostgresNodeFacet to extend it with some more\n> methods. Finaly, I created one class per majpr version, each inheriting from the\n> next version. That means 13 inherits from PostgresNodeFacet.pm, 12 inherits from\n> 13, 11 inherits from 12, etc.\n> \n> When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n> relies on PATH to check the major version using pg_config, then loads the\n> appropriate class.\n> \n> That means some class overrides almost no methods but version(), which returns\n> the major version. Eg.:\n> https://github.com/ioguix/check_pgactivity/blob/tests/t/lib/PostgresNode12.pm\n> \n> From tests, I can check the node version using this method, eg.:\n> \n> skip \"skip non-compatible test on PostgreSQL 8.0 and before\", 3\n> unless $node->version <= 8.0;\n> \n> Of course, there's a lot of duplicate code between classes, but my main goal\n> was to keep PostgresNode.pm unchanged from upstream so I can easily update it.\n\nI see that.\n\n> And here is a demo test file:\n> https://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n> \n> My limited set of tests are working with versions back to 9.0 so far.\n> \n> My 2¢\n\nHmm, I took a look. 
I'm not sure that we're working on the same problem, but I might have missed something.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 09:08:31 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 11:54:36 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 4/7/21 10:37 AM, Jehan-Guillaume de Rorthais wrote:\n> > Hi all,\n> >\n> > First, sorry to step in this discussion this late. I didn't noticed it\n> > before :(\n> >\n> > I did some work about these compatibility issues in late 2020 to use\n> > PostgresNode in the check_pgactivity TAP tests.\n> >\n> > See https://github.com/ioguix/check_pgactivity/tree/tests/t/lib\n> >\n> > PostgresNode.pm, TestLib.pm, SimpleTee.pm and RecursiveCopy.pm comes\n> > unchanged from PostgreSQL source file (see headers and COPYRIGHT.pgsql).\n> >\n> > Then, I'm using the facet class PostgresNodeFacet to extend it with some\n> > more methods. Finaly, I created one class per majpr version, each\n> > inheriting from the next version. That means 13 inherits from\n> > PostgresNodeFacet.pm, 12 inherits from 13, 11 inherits from 12, etc.\n> >\n> > When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n> > relies on PATH to check the major version using pg_config, then loads the\n> > appropriate class.\n> >\n> > That means some class overrides almost no methods but version(), which\n> > returns the major version. 
Eg.:\n> > https://github.com/ioguix/check_pgactivity/blob/tests/t/lib/PostgresNode12.pm\n> >\n> > From tests, I can check the node version using this method, eg.:\n> >\n> > skip \"skip non-compatible test on PostgreSQL 8.0 and before\", 3\n> > unless $node->version <= 8.0;\n> >\n> > Of course, there's a lot of duplicate code between classes, but my main goal\n> > was to keep PostgresNode.pm unchanged from upstream so I can easily update\n> > it.\n> >\n> > And here is a demo test file:\n> > https://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n> >\n> > My limited set of tests are working with versions back to 9.0 so far.\n> >\n> > My 2¢ \n> \n> \n> \n> I don't really want to create a multitude of classes. I think Mark is\n> basically on the right track. I started off using a subclass of\n> PostgresNode but was persuaded not to go down that route, and instead we\n> have made some fairly minimal changes to PostgresNode itself. I think\n> that was a good decision. If you take out the versioning subroutines,\n> the actual version-aware changes Mark is proposing to PostgresNode are\n> quite small - less that 200 lines supporting versions all the way back\n> to 8.1. That's pretty awesome.\n\n\nI took this road because as soon as you want to use some other methods like\nenable_streaming, enable_archiving, etc, you find much more incompatibilities\non your way. My current stack of classes is backward compatible with much\nmore methods than just init(). 
But I admit it creates a multitude of class and\nsome duplicate code...\n\nIt's still possible to patch each methods in PostgresNode, but you'll end up\nwith a forest of conditional blocks depending on how far you want to go with old\nPostgreSQL versions.\n\nRegards,\n\n\n", "msg_date": "Wed, 7 Apr 2021 18:21:06 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 09:08:31 -0700\nMark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Apr 7, 2021, at 7:37 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n\n> > And here is a demo test file:\n> > https://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n> > \n> > My limited set of tests are working with versions back to 9.0 so far.\n> > \n> > My 2¢ \n> \n> Hmm, I took a look. I'm not sure that we're working on the same problem, but\n> I might have missed something.\n\nI understood you were working on making PostgresNode compatible with older\nversions of PostgreSQL. So ou can create and interact with older versions,\neg. 9.0. Did I misunderstood?\n\nMy set of class had the exact same goal: creating and managing PostgreSQL nodes\nfrom various major versions. It's going a bit further than just init() though,\nas it supports some more existing methods (eg. enable_streaming) and adds some\nothers (version, switch_wal, wait_for_archive).\n\nRegards,\n\n\n", "msg_date": "Wed, 7 Apr 2021 18:26:57 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n\n> When I'm creating a new node, I'm using the \"pgaTester\" factory class. 
It\n> relies on PATH to check the major version using pg_config, then loads the\n> appropriate class.\n\n From a code cleanliness point of view, I agree that having separate\nclasses for each version is neater than what you call a forest of\nconditionals. I'm not sure I like the way you instantiate the classes\nin pgaTester though -- wouldn't it be saner to have PostgresNode::new\nitself be in charge of deciding which class to bless the object as?\nSince we're talking about modifying PostgresNode itself in order to\nsupport this, it would make sense to do that.\n\n(I understand that one of your decisions was to avoid modifying\nPostgresNode, so that you could ingest whatever came out of PGDG without\nhaving to patch it each time.)\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 7 Apr 2021 12:51:55 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 9:26 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> On Wed, 7 Apr 2021 09:08:31 -0700\n> Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n>>> On Apr 7, 2021, at 7:37 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> \n>>> And here is a demo test file:\n>>> https://github.com/ioguix/check_pgactivity/blob/tests/t/01-streaming_delta.t\n>>> \n>>> My limited set of tests are working with versions back to 9.0 so far.\n>>> \n>>> My 2¢ \n>> \n>> Hmm, I took a look. I'm not sure that we're working on the same problem, but\n>> I might have missed something.\n> \n> I understood you were working on making PostgresNode compatible with older\n> versions of PostgreSQL. So ou can create and interact with older versions,\n> eg. 9.0. Did I misunderstood?\n\nWe're both working on compatibility with older versions, that is true. I may have misunderstood your code somewhat. 
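Alvaro's suggestion above — letting the constructor itself decide which class to bless the node into — could look roughly like the sketch below: probe the installation's own pg_config for its version, then map that to a per-major class. The helper names (`_version_from_pg_config`, `_class_for_version`) and the `PostgresNode96` naming scheme are assumptions for illustration, not committed API.

```perl
package PostgresNode;
use strict;
use warnings;

# Probe the version of a given installation by running that
# installation's own pg_config (falling back to whatever is in PATH).
sub _version_from_pg_config
{
	my ($install_path) = @_;
	my $pg_config =
	  defined $install_path ? "$install_path/bin/pg_config" : 'pg_config';
	my ($line) = qx{"$pg_config" --version};    # e.g. "PostgreSQL 9.6.21"
	my ($version) = $line =~ /(\d+(?:\.\d+)*)/;
	return $version;
}

# Map a full version to a per-major class name: two-part majors before
# 10 ("9.6.21" -> PostgresNode96), one-part majors from 10 on
# ("12.6" -> PostgresNode12).
sub _class_for_version
{
	my ($version) = @_;
	my ($maj, $min) = split /\./, $version;
	return $maj >= 10 ? "PostgresNode$maj" : "PostgresNode$maj$min";
}

# get_new_node() could then re-bless the freshly built object:
#   bless $node, _class_for_version(_version_from_pg_config($install_path));
```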
It is hard to quickly review your changes, as they are all mixed together with a mass of copied code from the main project. I'll look some more for parts that I could reuse.\n\nThanks\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 10:07:07 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 12:51:55 -0400\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n> \n> > When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n> > relies on PATH to check the major version using pg_config, then loads the\n> > appropriate class. \n> \n> From a code cleanliness point of view, I agree that having separate\n> classes for each version is neater than what you call a forest of\n> conditionals. I'm not sure I like the way you instantiate the classes\n> in pgaTester though -- wouldn't it be saner to have PostgresNode::new\n> itself be in charge of deciding which class to bless the object as?\n> Since we're talking about modifying PostgresNode itself in order to\n> support this, it would make sense to do that.\n\nYes, it would be much saner to make PostgresNode the factory class. Plus, some\nmore logic could be injected there to either auto-detect the version (current\nbehavior) or eg. 
use a given path to the binaries as Mark did in its patch.\n\n> (I understand that one of your decisions was to avoid modifying\n> PostgresNode, so that you could ingest whatever came out of PGDG without\n> having to patch it each time.)\n\nIndeed, that's why I created this class with a not-very-fortunate name :) \n\nLet me know if it worth that I work on an official patch.\n\nRegards,\n\n\n", "msg_date": "Wed, 7 Apr 2021 19:19:21 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n\n> Yes, it would be much saner to make PostgresNode the factory class. Plus, some\n> more logic could be injected there to either auto-detect the version (current\n> behavior) or eg. use a given path to the binaries as Mark did in its patch.\n\nI'm not sure what you mean about auto-detecting the version -- I assume\nwe would auto-detect the version by calling pg_config from the\nconfigured path and parsing the binary, which is what Mark's patch is\nsupposed to do already. 
So I don't see what the distinction between\nthose two things is.\n\nIn order to avoid having an ever-growing plethora of 100-byte .pm files,\nwe can put the version-specific classes in the same PostgresNode.pm\nfile, at the bottom, \"class PostgresNode96; use parent PostgresNode10;\"\nfollowed by the routines that are overridden for each version.\n\n> Let me know if it worth that I work on an official patch.\n\nLet's give it a try ...\n\n-- \n�lvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 7 Apr 2021 13:36:31 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 1:19 PM, Jehan-Guillaume de Rorthais wrote:\n> On Wed, 7 Apr 2021 12:51:55 -0400\n> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n>> On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n>>\n>>> When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n>>> relies on PATH to check the major version using pg_config, then loads the\n>>> appropriate class. \n>> From a code cleanliness point of view, I agree that having separate\n>> classes for each version is neater than what you call a forest of\n>> conditionals. I'm not sure I like the way you instantiate the classes\n>> in pgaTester though -- wouldn't it be saner to have PostgresNode::new\n>> itself be in charge of deciding which class to bless the object as?\n>> Since we're talking about modifying PostgresNode itself in order to\n>> support this, it would make sense to do that.\n> Yes, it would be much saner to make PostgresNode the factory class. Plus, some\n> more logic could be injected there to either auto-detect the version (current\n> behavior) or eg. use a given path to the binaries as Mark did in its patch.\n\n\nAren't you likely to end up duplicating substantial amounts of code,\nthough? 
I'm certainly not at the stage where I think the version-aware\ncode is creating too much clutter. The \"forest of conditionals\" seems\nmore like a small thicket.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 13:38:39 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 10:36 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n>> Yes, it would be much saner to make PostgresNode the factory class. Plus, some\n>> more logic could be injected there to either auto-detect the version (current\n>> behavior) or eg. use a given path to the binaries as Mark did in its patch.\n> \n> I'm not sure what you mean about auto-detecting the version -- I assume\n> we would auto-detect the version by calling pg_config from the\n> configured path and parsing the binary, which is what Mark's patch is\n> supposed to do already. So I don't see what the distinction between\n> those two things is.\n> \n> In order to avoid having an ever-growing plethora of 100-byte .pm files,\n> we can put the version-specific classes in the same PostgresNode.pm\n> file, at the bottom, \"class PostgresNode96; use parent PostgresNode10;\"\n> followed by the routines that are overridden for each version.\n\nIt's not sufficient to think about postgres versions as \"10\", \"11\", etc. You have to be able to spin up nodes of any build, like \"9.0.7\". There are specific versions of postgres with specific bugs that cause specific problems, and later versions of postgres on that same development branch have been patched. If you only ever spin up the latest version, you can't reproduce the problems and test how they impact cross version compatibility.\n\nI don't think it works to have a class per micro release. 
So you'd have a PostgresNode of type \"10\" or such, but how does that help? If you have ten different versions of \"10\" in your test, they all look the same?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 10:50:26 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 13:36:31 -0400\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n> \n> > Yes, it would be much saner to make PostgresNode the factory class. Plus,\n> > some more logic could be injected there to either auto-detect the version\n> > (current behavior) or eg. use a given path to the binaries as Mark did in\n> > its patch. \n> \n> I'm not sure what you mean about auto-detecting the version -- I assume\n> we would auto-detect the version by calling pg_config from the\n> configured path and parsing the binary, which is what Mark's patch is\n> supposed to do already. So I don't see what the distinction between\n> those two things is.\n\nMy version is currently calling pg_config without any knowledge about its\nabsolute path.\n\nMark's patch is able to take an explicit binary path:\n\n my $a = PostgresNode->get_new_node('a', install_path => '/my/install/8.4');\n\n> In order to avoid having an ever-growing plethora of 100-byte .pm files,\n> we can put the version-specific classes in the same PostgresNode.pm\n> file, at the bottom, \"class PostgresNode96; use parent PostgresNode10;\"\n> followed by the routines that are overridden for each version.\n\nSure.\n\n> > Let me know if it worth that I work on an official patch. 
\n> \n> Let's give it a try ...\n\nOK\n\nRegards,\n\n\n", "msg_date": "Wed, 7 Apr 2021 20:07:41 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 10:50:26 -0700\nMark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Apr 7, 2021, at 10:36 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > \n> >> Yes, it would be much saner to make PostgresNode the factory class. Plus,\n> >> some more logic could be injected there to either auto-detect the version\n> >> (current behavior) or eg. use a given path to the binaries as Mark did in\n> >> its patch. \n> > \n> > I'm not sure what you mean about auto-detecting the version -- I assume\n> > we would auto-detect the version by calling pg_config from the\n> > configured path and parsing the binary, which is what Mark's patch is\n> > supposed to do already. So I don't see what the distinction between\n> > those two things is.\n> > \n> > In order to avoid having an ever-growing plethora of 100-byte .pm files,\n> > we can put the version-specific classes in the same PostgresNode.pm\n> > file, at the bottom, \"class PostgresNode96; use parent PostgresNode10;\"\n> > followed by the routines that are overridden for each version. \n> \n> It's not sufficient to think about postgres versions as \"10\", \"11\", etc. You\n> have to be able to spin up nodes of any build, like \"9.0.7\". There are\n> specific versions of postgres with specific bugs that cause specific\n> problems, and later versions of postgres on that same development branch have\n> been patched. If you only ever spin up the latest version, you can't\n> reproduce the problems and test how they impact cross version compatibility.\n\n\nAgree.\n\n> I don't think it works to have a class per micro release. So you'd have a\n> PostgresNode of type \"10\" or such, but how does that help? 
If you have ten\n> different versions of \"10\" in your test, they all look the same?\n\nAs PostgresNode factory already checked pg_config version, it can give it as\nargument to the specific class constructor. We can then have eg.\nmajor_version() and version() to return the major version and full versions.\n\nRegards,\n\n\n", "msg_date": "Wed, 7 Apr 2021 20:15:40 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, 7 Apr 2021 13:38:39 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 4/7/21 1:19 PM, Jehan-Guillaume de Rorthais wrote:\n> > On Wed, 7 Apr 2021 12:51:55 -0400\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > \n> >> On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n> >> \n> >>> When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n> >>> relies on PATH to check the major version using pg_config, then loads the\n> >>> appropriate class. \n> >> From a code cleanliness point of view, I agree that having separate\n> >> classes for each version is neater than what you call a forest of\n> >> conditionals. I'm not sure I like the way you instantiate the classes\n> >> in pgaTester though -- wouldn't it be saner to have PostgresNode::new\n> >> itself be in charge of deciding which class to bless the object as?\n> >> Since we're talking about modifying PostgresNode itself in order to\n> >> support this, it would make sense to do that. \n> > Yes, it would be much saner to make PostgresNode the factory class. Plus,\n> > some more logic could be injected there to either auto-detect the version\n> > (current behavior) or eg. use a given path to the binaries as Mark did in\n> > its patch. \n> \n> \n> Aren't you likely to end up duplicating substantial amounts of code,\n> though? I'm certainly not at the stage where I think the version-aware\n> code is creating too much clutter. 
The \"forest of conditionals\" seems\n> more like a small thicket.\n\nI started with a patched PostgresNode. Then I had to support backups and\nreplication for my tests. Then it become hard to follow the logic in\nconditional blocks, moreover some conditionals needed to appear in multiple\nplaces in the same methods, depending on the enabled features, the conditions,\nwhat GUC was enabled by default or not, etc. So I end up with this design.\n\nI really don't want to waste community brain cycles in discussions and useless\nreviews. But as far as someone agree to review it, I already have the material\nand I can create a patch with a limited amount of work to compare and review.\nThe one-class approach would need to support replication down to 9.0 to be fair\nthough :/\n\nThanks,\n\n\n", "msg_date": "Wed, 7 Apr 2021 20:35:40 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 11:35 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> On Wed, 7 Apr 2021 13:38:39 -0400\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n>> On 4/7/21 1:19 PM, Jehan-Guillaume de Rorthais wrote:\n>>> On Wed, 7 Apr 2021 12:51:55 -0400\n>>> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>> \n>>>> On 2021-Apr-07, Jehan-Guillaume de Rorthais wrote:\n>>>> \n>>>>> When I'm creating a new node, I'm using the \"pgaTester\" factory class. It\n>>>>> relies on PATH to check the major version using pg_config, then loads the\n>>>>> appropriate class. \n>>>> From a code cleanliness point of view, I agree that having separate\n>>>> classes for each version is neater than what you call a forest of\n>>>> conditionals. 
I'm not sure I like the way you instantiate the classes\n>>>> in pgaTester though -- wouldn't it be saner to have PostgresNode::new\n>>>> itself be in charge of deciding which class to bless the object as?\n>>>> Since we're talking about modifying PostgresNode itself in order to\n>>>> support this, it would make sense to do that. \n>>> Yes, it would be much saner to make PostgresNode the factory class. Plus,\n>>> some more logic could be injected there to either auto-detect the version\n>>> (current behavior) or eg. use a given path to the binaries as Mark did in\n>>> its patch. \n>> \n>> \n>> Aren't you likely to end up duplicating substantial amounts of code,\n>> though? I'm certainly not at the stage where I think the version-aware\n>> code is creating too much clutter. The \"forest of conditionals\" seems\n>> more like a small thicket.\n> \n> I started with a patched PostgresNode. Then I had to support backups and\n> replication for my tests. Then it become hard to follow the logic in\n> conditional blocks, moreover some conditionals needed to appear in multiple\n> places in the same methods, depending on the enabled features, the conditions,\n> what GUC was enabled by default or not, etc. So I end up with this design.\n> \n> I really don't want to waste community brain cycles in discussions and useless\n> reviews. 
But as far as someone agree to review it, I already have the material\n> and I can create a patch with a limited amount of work to compare and review.\n> The one-class approach would need to support replication down to 9.0 to be fair\n> though :/\n\nIf you can create a patch that integrates your ideas into PostgresNode.pm, I'd be interested in reviewing it.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 11:40:05 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Andrew Dunstan wrote:\n\n> Aren't you likely to end up duplicating substantial amounts of code,\n> though?\n\nNo — did you look at his code? Each version is child of the one just\nabove, so you only need to override things where behavior changes from\none version to the next.\n\n> I'm certainly not at the stage where I think the version-aware\n> code is creating too much clutter. The \"forest of conditionals\" seems\n> more like a small thicket.\n\nAfter comparing both approaches, I think ioguix's is superior in\ncleanliness.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Wed, 7 Apr 2021 15:09:24 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Mark Dilger wrote:\n\n> It's not sufficient to think about postgres versions as \"10\", \"11\",\n> etc. 
You have to be able to spin up nodes of any build, like \"9.0.7\".\n> There are specific versions of postgres with specific bugs that cause\n> specific problems, and later versions of postgres on that same\n> development branch have been patched. If you only ever spin up the\n> latest version, you can't reproduce the problems and test how they\n> impact cross version compatibility.\n\nI don't get it. Surely if you need 10.0.7 you just compile that version\nand give its path as install path? You can have two 1.0.x as long as\ninstall them separately, right?\n\n> I don't think it works to have a class per micro release.\n\nI don't understand why you would do that.\n\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)\n\n\n", "msg_date": "Wed, 7 Apr 2021 15:13:14 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 12:13 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Apr-07, Mark Dilger wrote:\n> \n>> It's not sufficient to think about postgres versions as \"10\", \"11\",\n>> etc. You have to be able to spin up nodes of any build, like \"9.0.7\".\n>> There are specific versions of postgres with specific bugs that cause\n>> specific problems, and later versions of postgres on that same\n>> development branch have been patched. If you only ever spin up the\n>> latest version, you can't reproduce the problems and test how they\n>> impact cross version compatibility.\n> \n> I don't get it. Surely if you need 10.0.7 you just compile that version\n> and give its path as install path?
You can have two 1.0.x as long as\n> install them separately, right?\n\nI was commenting on the design to have the PostgresNode derived subclass hard-coded to return \"10\" as the version:\n\n sub version { return 10 }\n\n\nIt's hard to think about how this other system works when you have lots of separate micro releases all compiled and used as the basis of the $node's, since this other system does not support that at all. So maybe it can be done properly, but I don't want to have different microversions of 10 and then find that $a->version eq $b->version when $a is 10.1 and $b is 10.2.\n\n>> I don't think it works to have a class per micro release.\n> \n> I don't understand why you would do that.\n\nIf you need a \"version\" subroutine per derived class, then the only way to solve the problem is to have a profusion of derived classes. It would make more sense to me if the version method worked the way I implemented it, where it just returns the version obtained from pg_config\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 12:20:50 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 3:09 PM, Alvaro Herrera wrote:\n> On 2021-Apr-07, Andrew Dunstan wrote:\n>\n>> Aren't you likely to end up duplicating substantial amounts of code,\n>> though?\n> No — did you look at his code? Each version is child of the one just\n> above, so you only need to override things where behavior changes from\n> one version to the next.\n>\n\nyes\n\n\n>> I'm certainly not at the stage where I think the version-aware\n>> code is creating too much clutter. 
The \"forest of conditionals\" seems\n>> more like a small thicket.\n> After comparing both approaches, I think ioguix's is superior in\n> cleanliness.\n>\n\na) I'm not mad keen on having oodles of little classes. I should point\nout that this will have to traverse possibly the whole hierarchy of\nclasses at each call to get the actual method, which is not very\nefficient. But to some extent this is a matter of taste. OTOH\n\nb) as it stands pgaTester.pm can't be used for multiple versions in a\nsingle program, which is a design goal here - it sets the single class\nto invoke in its BEGIN block. At the very least we would need to replace\nthat with code which would require the relevant class as needed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 16:13:21 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Andrew Dunstan wrote:\n\n> b) as it stands pgaTester.pm can't be used for multiple versions in a\n> single program, which is a design goal here - it sets the single class\n> to invoke in its BEGIN block. At the very least we would need to replace\n> that with code which would require the relevant class as needed.\n\nI'm not suggesting that we adopt pgaTester.pm! I think a real patch for\nthis approach involves moving that stuff into PostgresNode::new itself,\nas I said upthread: if install_path is given, call pg_config --version\nand then parse the version number into a class name $versionclass, then\n\"bless $versionclass, $self\". So the object returned by\nPostgresNode::new already has the correct class. We don't need to\nrequire anything, since all classes are in the same PostgresNode.pm\nfile.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\"This is what I like so much about PostgreSQL. Most of the surprises\nare of the \"oh wow!
That's cool\" Not the \"oh shit!\" kind. :)\"\nScott Marlowe, http://archives.postgresql.org/pgsql-admin/2008-10/msg00152.php\n\n\n", "msg_date": "Wed, 7 Apr 2021 16:19:35 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Mark Dilger wrote:\n\n> I was commenting on the design to have the PostgresNode derived\n> subclass hard-coded to return \"10\" as the version:\n> \n> sub version { return 10 }\n\nThat seems a minor bug rather than a showstopper design deficiency.\nI agree that hardcoding the version in the source code is not very\nusable; it should store the version number when it runs pg_config\n--version in an instance variable that can be returned.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Wed, 7 Apr 2021 16:28:50 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 4:19 PM, Alvaro Herrera wrote:\n> On 2021-Apr-07, Andrew Dunstan wrote:\n>\n>> b) as it stands pgaTester.pm can't be used for multiple versions in a\n>> single program, which is a design goal here - it sets the single class\n>> to invoke in its BEGIN block. At the very least we would need to replace\n>> that with code which would require the relevant class as needed.\n> I'm not suggesting that we adopt pgaTester.pm! I think a real patch for\n> this approach involves moving that stuff into PostgresNode::new itself,\n> as I said upthread: if install_path is given, call pg_config --version\n> and then parse the version number into a class name $versionclass, then\n> \"bless $versionclass, $self\".
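As a rough sketch of that re-blessing scheme (every name below — the banner parsing, the V_* class names — is invented for illustration and not taken from any posted patch; a real new() would run "$install_path/bin/pg_config --version" to obtain the banner):

```perl
package PostgresNode;
use strict;
use warnings;

# Hypothetical sketch only: derive the major version from a pg_config
# --version banner ("9.6" for pre-10 releases, "13" afterwards).
sub _major_from_banner
{
	my ($banner) = @_;    # e.g. "PostgreSQL 9.6.24" or "PostgreSQL 13.2"
	my ($first, $second) = $banner =~ /(\d+)(?:\.(\d+))?/;
	return $first >= 10 ? $first : "$first.$second";
}

sub new
{
	my ($class, %params) = @_;
	my $self = bless {%params}, $class;
	if (defined $params{version_banner})
	{
		(my $tag = _major_from_banner($params{version_banner})) =~ tr/./_/;
		bless $self, "PostgresNode::V_$tag";    # re-bless into the subclass
	}
	return $self;
}

# All subclasses live in this same file; each older version inherits from
# the one just above it and overrides only what changed in that release.
package PostgresNode::V_13;
our @ISA = ('PostgresNode');

package PostgresNode::V_12;
our @ISA = ('PostgresNode::V_13');

package PostgresNode::V_9_6;
our @ISA = ('PostgresNode::V_12');

package main;
my $node = PostgresNode->new(version_banner => "PostgreSQL 13.2");
print ref($node), "\n";    # PostgresNode::V_13
```

Chaining each older class off the next-newer one matches the layout described upthread ("each version is child of the one just above"), so a subclass only overrides what its release changed.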
So the object returned by\n> PostgresNode::new already has the correct class. We don't need to\n> require anything, since all classes are in the same PostgresNode.pm\n> file.\n>\n\n\nOh, you want to roll them all up into one file? That could work. It's a\nbit frowned on by perl purists, but I've done similar (see PGBuild/SCM.pm).\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 16:36:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 1:28 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Apr-07, Mark Dilger wrote:\n> \n>> I was commenting on the design to have the PostgresNode derived\n>> subclass hard-coded to return \"10\" as the version:\n>> \n>> sub version { return 10 }\n> \n> That seems a minor bug rather than a showstopper design deficiency.\n> I agree that hardcoding the version in the source code is not very\n> usable; it should store the version number when it runs pg_config\n> --version in an instance variable that can be returned.\n\nIt seems we're debating between two designs. In the first, each PostgresNode function knows about version limitations and has code like:\n\n\tDoSomething() if $self->at_least_version(\"11\")\n\nand in the second design we're subclassing for each postgres release where something changed, so that DoSomething is implemented differently in one class than another. I think the subclassing solution is cleaner if the number of version tests is large, but not so much otherwise.\n\n\nThere is a much bigger design decision to be made that I have delayed making. The PostgresNode implementation has functions that work a certain way, but cannot work that same way with older versions of postgres that don't have the necessary support. 
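In the subclassing style, the same call dispatches to a per-version override instead of testing at_least_version() inline — a toy illustration (class and method names are invented for the example):

```perl
use strict;
use warnings;

# Toy illustration of the two styles being debated; nothing here is
# taken from the actual patches.
package Node;
sub new { my ($class) = @_; return bless {}, $class; }

# Current-version behaviour lives in the base class ...
sub guc_for_crash_restart { return "restart_after_crash = off\n"; }

# ... and a release that predates the GUC (restart_after_crash appeared
# in 9.1) simply overrides the method, instead of the base class
# carrying an inline version test.
package Node::V_9_0;
our @ISA = ('Node');
sub guc_for_crash_restart { return ""; }

package main;
print Node->new->guc_for_crash_restart;           # restart_after_crash = off
print Node::V_9_0->new->guc_for_crash_restart;    # (nothing)
```

The caller writes one method call either way; only where the version knowledge lives differs.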
This means that\n\n\t$my_node->do_something(...)\n\nworks differently based on which version of postgres $my_node is based upon, even though PostgresNode could have avoided it. To wit:\n\n # \"restart_after_crash\" was introduced in version 9.1. Older versions\n # always restart after crash.\n print $conf \"restart_after_crash = off\\n\"\n if $self->at_least_version(\"9.1\");\n\nPostgresNode is mostly designed around supporting regression tests for the current postgres version under development. Prior to Andrew's recent introduction of support for alternate installation paths, it made sense to have restart_after_crash be off. But now, if you spin up a postgres node for version 9.0 or before, you get different behavior, because the prior behavior is to implicitly have this \"on\", not \"off\".\n\nAgain:\n\n # \"log_replication_commands\" was introduced in 9.5. Older versions do\n # not log replication commands.\n print $conf \"log_replication_commands = on\\n\"\n if $self->at_least_version(\"9.5\");\n\nShould we have \"log_replication_commands\" be off by default so that nodes of varying postgres version behave more similarly?\n\nAgain:\n\n # \"wal_retrieve_retry_interval\" was introduced in 9.5. Older versions\n # always wait 5 seconds.\n print $conf \"wal_retrieve_retry_interval = '500ms'\\n\"\n if $self->at_least_version(\"9.5\");\n\n\nShould we have \"wal_retrieve_retry_interval\" be 5 seconds for consistency?\n\nI didn't do these things, as I didn't want to break the majority of tests which don't care about cross version compatibility, but if we're going to debate this thing, subclassing is a distraction. 
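For reference, checks like $self->at_least_version("9.5") amount to a prefix-wise numeric comparison. A standalone sketch of the idea — not the patch's code, but following the semantics described later in the thread, where '1.2' deliberately compares equal to '1.2.3':

```perl
use strict;
use warnings;

# Versions compare element by element; elements missing from the
# shorter version are simply not compared, so "9.6" counts as at least
# any "9.6.x".
sub _version_cmp
{
	my ($x, $y) = @_;
	my @a = split /\./, $x;
	my @b = split /\./, $y;
	my $n = @a < @b ? scalar @a : scalar @b;
	for my $i (0 .. $n - 1)
	{
		my $c = $a[$i] <=> $b[$i];
		return $c if $c;
	}
	return 0;    # equal over the parts both sides supply
}

sub at_least_version
{
	my ($have, $want) = @_;
	return _version_cmp($have, $want) >= 0;
}

print at_least_version("9.6", "9.5") ? "yes" : "no", "\n";      # yes
print at_least_version("9.4.2", "9.5") ? "yes" : "no", "\n";    # no
```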
The real question is, *what do we want it to do*?\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 13:48:44 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/7/21 4:48 PM, Mark Dilger wrote:\n>\n>> On Apr 7, 2021, at 1:28 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>> On 2021-Apr-07, Mark Dilger wrote:\n>>\n>>> I was commenting on the design to have the PostgresNode derived\n>>> subclass hard-coded to return \"10\" as the version:\n>>>\n>>> sub version { return 10 }\n>> That seems a minor bug rather than a showstopper design deficiency.\n>> I agree that hardcoding the version in the source code is not very\n>> usable; it should store the version number when it runs pg_config\n>> --version in an instance variable that can be returned.\n> It seems we're debating between two designs. In the first, each PostgresNode function knows about version limitations and has code like:\n>\n> \tDoSomething() if $self->at_least_version(\"11\")\n>\n> and in the second design we're subclassing for each postgres release where something changed, so that DoSomething is implemented differently in one class than another. I think the subclassing solution is cleaner if the number of version tests is large, but not so much otherwise.\n\n\n\nI think you and I are of one mind here.\n\n\n>\n>\n> There is a much bigger design decision to be made that I have delayed making. The PostgresNode implementation has functions that work a certain way, but cannot work that same way with older versions of postgres that don't have the necessary support. This means that\n>\n> \t$my_node->do_something(...)\n>\n> works differently based on which version of postgres $my_node is based upon, even though PostgresNode could have avoided it. 
To wit:\n>\n> # \"restart_after_crash\" was introduced in version 9.1. Older versions\n> # always restart after crash.\n> print $conf \"restart_after_crash = off\\n\"\n> if $self->at_least_version(\"9.1\");\n>\n> PostgresNode is mostly designed around supporting regression tests for the current postgres version under development. Prior to Andrew's recent introduction of support for alternate installation paths, it made sense to have restart_after_crash be off. But now, if you spin up a postgres node for version 9.0 or before, you get different behavior, because the prior behavior is to implicitly have this \"on\", not \"off\".\n>\n> Again:\n>\n> # \"log_replication_commands\" was introduced in 9.5. Older versions do\n> # not log replication commands.\n> print $conf \"log_replication_commands = on\\n\"\n> if $self->at_least_version(\"9.5\");\n>\n> Should we have \"log_replication_commands\" be off by default so that nodes of varying postgres version behave more similarly?\n>\n> Again:\n>\n> # \"wal_retrieve_retry_interval\" was introduced in 9.5. Older versions\n> # always wait 5 seconds.\n> print $conf \"wal_retrieve_retry_interval = '500ms'\\n\"\n> if $self->at_least_version(\"9.5\");\n>\n>\n> Should we have \"wal_retrieve_retry_interval\" be 5 seconds for consistency?\n>\n> I didn't do these things, as I didn't want to break the majority of tests which don't care about cross version compatibility, but if we're going to debate this thing, subclassing is a distraction. The real question is, *what do we want it to do*?\n>\n>\n\n\nYeah, much more important. I think I would say that anything that's\nsimply not possible in an older version should cause an error and for\nthe rest the old version should probably behave by default as much like\nits default as possible. I don't think we should try to make different\nversions behave identically (or as close to as possible). In some\nparticular cases we should be able to override default behavior (e.g. 
by\nsetting the config explicitly).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 17:02:20 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Mark Dilger wrote:\n\n> It seems we're debating between two designs. In the first, each\n> PostgresNode function knows about version limitations and has code\n> like:\n> \n> \tDoSomething() if $self->at_least_version(\"11\")\n\nYeah, I didn't like this approach -- it is quite messy.\n\n> and in the second design we're subclassing for each postgres release\n> where something changed, so that DoSomething is implemented\n> differently in one class than another.\n\nSo DoSomething still does Something, regardless of what it has to do in\norder to get it done.\n\n> I think the subclassing solution is cleaner if the number of version\n> tests is large, but not so much otherwise.\n\nWell, your patch has rather a lot of at_least_version() tests.\n\n> To wit:\n> \n> # \"restart_after_crash\" was introduced in version 9.1. Older versions\n> # always restart after crash.\n> print $conf \"restart_after_crash = off\\n\"\n> if $self->at_least_version(\"9.1\");\n> \n> PostgresNode is mostly designed around supporting regression tests for\n> the current postgres version under development.\n\nI think we should make PostgresNode do what makes the most sense for the\ncurrent branch, and go to whatever contortions are necessary to do the\nsame thing in older versions as long as it is sensible. If we were\nrobots, then we would care to preserve behavior down to the very last\nbyte, but I think we can make judgement calls to ignore the changes that\nare not relevant. 
Whenever we introduce a behavior that is not\nsupportable by the older version, then the function would throw an error\nif that behavior is requested from that older version.\n\n> Prior to Andrew's recent introduction of support for alternate\n> installation paths, it made sense to have restart_after_crash be off.\n> But now, if you spin up a postgres node for version 9.0 or before, you\n> get different behavior, because the prior behavior is to implicitly\n> have this \"on\", not \"off\".\n\nThat seems mostly okay, except in a very few narrow cases where the node\nstaying down is critical. So we can let things be.\n\n> Again:\n> \n> # \"log_replication_commands\" was introduced in 9.5. Older versions do\n> # not log replication commands.\n> print $conf \"log_replication_commands = on\\n\"\n> if $self->at_least_version(\"9.5\");\n> \n> Should we have \"log_replication_commands\" be off by default so that\n> nodes of varying postgres version behave more similarly?\n\nIf it's important for the tests, then let's have a method to change it\nif necessary. Otherwise, we don't care.\n\n-- \nÁlvaro Herrera Valdivia, Chile\n\n\n", "msg_date": "Wed, 7 Apr 2021 17:04:21 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-07, Andrew Dunstan wrote:\n\n> Oh, you want to roll them all up into one file? That could work. It's a\n> bit frowned on by perl purists, but I've done similar (see PGBuild/SCM.pm).\n\nAh!
Yeah, pretty much exactly like that, including the \"no critic\" flag ...\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\"El sudor es la mejor cura para un pensamiento enfermo\" (Bardia)\n\n\n", "msg_date": "Wed, 7 Apr 2021 17:06:19 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 7, 2021, at 2:04 PM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2021-Apr-07, Mark Dilger wrote:\n> \n>> It seems we're debating between two designs. In the first, each\n>> PostgresNode function knows about version limitations and has code\n>> like:\n>> \n>> \tDoSomething() if $self->at_least_version(\"11\")\n> \n> Yeah, I didn't like this approach -- it is quite messy.\n> \n>> and in the second design we're subclassing for each postgres release\n>> where something changed, so that DoSomething is implemented\n>> differently in one class than another.\n> \n> So DoSomething still does Something, regardless of what it has to do in\n> order to get it done.\n> \n>> I think the subclassing solution is cleaner if the number of version\n>> tests is large, but not so much otherwise.\n> \n> Well, your patch has rather a lot of at_least_version() tests.\n\nI don't really care about this part, and you do, so you win. I'm happy enough to have this be done with subclassing. My problems upthread never had anything to do with whether we used subclassing, but rather which behaviors were supported. And that appears not to be controversial, so that's all for the good....\n\n\n>> To wit:\n>> \n>> # \"restart_after_crash\" was introduced in version 9.1.
Older versions\n>> # always restart after crash.\n>> print $conf \"restart_after_crash = off\\n\"\n>> if $self->at_least_version(\"9.1\");\n>> \n>> PostgresNode is mostly designed around supporting regression tests for\n>> the current postgres version under development.\n> \n> I think we should make PostgresNode do what makes the most sense for the\n> current branch, and go to whatever contortions are necessary to do the\n> same thing in older versions as long as it is sensible. If we were\n> robots, then we would care to preserve behavior down to the very last\n> byte, but I think we can make judgement calls to ignore the changes that\n> are not relevant. Whenever we introduce a behavior that is not\n> supportable by the older version, then the function would throw an error\n> if that behavior is requested from that older version.\n\nBeep bop boop beeb bop, danger Will Robinson:\n\nfor my $i (@all_postgres_versions)\n{\n\tfor my $j (grep { $_ > $i } @all_postgres_versions)\n\t{\n\t\tfor my $k (grep { $_ > $j } @all_postgres_versions)\n\t\t{\n\t\t\tmy $node = node_of_version($i);\n\t\t\t$node->do_stuff();\n\t\t\t$node->pg_upgrade_to($j);\n\t\t\t$node->do_more_stuff();\n\t\t\t$node->pg_upgrade_to($k);\n\t\t\t$node->do_yet_more_stuff();\n\n\t\t\t# verify $node isn't broken\n\t\t}\n\t}\n}\n\t\t\t\nI think it is harder to write simple tests like this when how $node behaves changes as the values of (i,j,k) change. Of course the behaviors change to the extent that postgres itself changed between versions. I mean changes because PostgresNode behaves differently.\n\nWe don't need to debate this now, though. 
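(For what it's worth, the triple loop sketched above — node_of_version() and pg_upgrade_to() being hypothetical helpers — reduces to enumerating strictly increasing version triples; a self-contained distillation:)

```perl
use strict;
use warnings;

# Enumerate every strictly increasing (i, j, k) triple, i.e. every
# two-hop upgrade path the sketch above would exercise.
sub upgrade_paths
{
	my @versions = @_;
	my @paths;
	for my $i (@versions)
	{
		for my $j (grep { $_ > $i } @versions)
		{
			for my $k (grep { $_ > $j } @versions)
			{
				push @paths, [ $i, $j, $k ];
			}
		}
	}
	return @paths;
}

my @p = upgrade_paths(9.6, 10, 11, 12);
print scalar(@p), "\n";    # 4, i.e. C(4,3) combinations
```

The combination count grows quickly with the number of versions under test, which is part of why the behaviour of each node needs to be predictable across (i, j, k).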
It will be better to discuss individual issues as they come up.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Wed, 7 Apr 2021 14:34:41 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "> On Apr 7, 2021, at 8:43 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> \n> On 4/7/21 1:03 AM, Mark Dilger wrote:\n>> The v1 patch supported postgres versions back to 8.4, but v2 pushes that back to 8.1.\n>> \n>> The version of PostgresNode currently committed relies on IPC::Run in a way that is subtly wrong. The first time IPC::Run::run(X, ...) is called, it uses the PATH as it exists at that time, resolves the path for X, and caches it. Subsequent calls to IPC::Run::run(X, ...) use the cached path, without respecting changes to $ENV{PATH}. In practice, this means that:\n>> \n>> use PostgresNode;\n>> \n>> my $a = PostgresNode->get_new_node('a', install_path => '/my/install/8.4');\n>> my $b = PostgresNode->get_new_node('b', install_path => '/my/install/9.0');\n>> \n>> $a->safe_psql(...) # <=== Resolves and caches 'psql' as /my/install/8.4/bin/psql\n>> \n>> $b->safe_psql(...) # <=== Executes /my/install/8.4/bin/psql, not /my/install/9.0/bin/psql as one might expect\n>> \n>> PostgresNode::safe_psql() and PostgresNode::psql() both suffer from this, and similarly PostgresNode::pg_recvlogical_upto() because the path to pg_recvlogical gets cached. Calls to initdb and pg_ctl do not appear to suffer this problem, as they are ultimately handled by perl's system() call, not by IPC::Run::run.\n>> \n>> Since postgres commands work fairly similarly from one release to another, this can cause subtle and hard to diagnose bugs in regression tests. 
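One way to sidestep that caching — a sketch of the general idea, not the committed fix — is to resolve each command to an absolute path against the node's own bin directory before IPC::Run ever sees a bare name, so whatever IPC::Run caches is already the right binary for the node at hand:

```perl
use strict;
use warnings;
use File::Spec;

# Sketch: search the given directories for an executable and return its
# absolute path, falling back to the bare name (normal PATH lookup) if
# nothing is found.
sub resolve_command
{
	my ($cmd, @dirs) = @_;
	for my $dir (@dirs)
	{
		my $candidate = File::Spec->catfile($dir, $cmd);
		return $candidate if -x $candidate;
	}
	return $cmd;
}

# e.g.  IPC::Run::run([ resolve_command('psql', "$self->{install_path}/bin"), ... ]);
print resolve_command('sh', '/bin'), "\n";    # /bin/sh on most Unix systems
```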
The fix in v2-0001 works for me, as demonstrated by v2-0002, but whether the fix in the attached v2 patch set gets used or not, I think something needs to be done to fix this.\n>> \n>> \n> \n> Awesome work. The IPC::Run behaviour is darned unfriendly, and AFAICS\n> completely undocumented. It can't even be easily modified by a client\n> because the cache is stashed in a lexical variable.\n\nYes, I noticed that, too. Even if we could get a patch accepted into IPC::Run, we'd need to be compatible with older versions. So there doesn't seem to be any option but to work around the issue.\n\n> You fix looks good.\n\nThanks for reviewing!\n\n\n> other notes:\n> \n> \n> . needs a perltidy run, some lines are too long (see\n> src/tools/pgindent/perltidyrc)\n> \n> \n> . Please use an explicit return here:\n> \n> \n> + # Return an array reference\n> + [ @result ];\n\nDone.\n\n\n> . I'm not sure the computation in _pg_version_cmp is right. What if the\n> number of elements differ? As I read it you would return 0 for a\n> comparison of '1.2' and '1.2.3'. Is that what's intended?\n\nYes, that is intended. '1.2' and '1.2.0' are not the same. '1.2' means \"some unspecified micro release or development version of 1.2\", whereas '1.2.0' does not.\n\nThis is useful for things like $node->at_least_version(\"13\") returning true for development versions of version 13, which are otherwise less than (not equal to) 13.0\n\n> . The second patch has a bunch of stuff it doesn't need. The control\n> file should be unnecessary as should all the lines above 'ifdef\n> USE_PGXS' in the Makefile except 'TAP_TESTS = 1'\n\nDone.\n\nYeah, I started the second patch as simply a means of testing the first and didn't clean it up after copying boilerplate from elsewhere. The second patch has turned into something possibly worth keeping in its own right, and having the build farm execute on a regular basis.\n\n> . 
the test script should have a way of passing a non-default version\n> file to CrossVersion::nodes(). Possible get it from an environment variable?\n\nGood idea. I changed it to use $ENV{PG_TEST_VERSIONS_FILE}. I'm open to other names for this variable.\n\n> . I'm somewhat inclined to say that CrossVersion should just return a\n> {name => path} map, and let the client script do the node creation. Then\n> crossVersion doesn't need to know anything much about the\n> infrastructure. But I could possibly be persuaded otherwise. Also, maybe\n> it belongs in src/test/perl.\n\nHmm. That's a good thought. I've moved it to src/test/perl with the change you suggest.\n\n> . This line appears deundant, the variable is not referenced:\n> \n> \n> + my $path = $ENV{PATH};\n\nRemoved.\n\n> Also these lines at the bottom of CrossVersion.pm are redundant:\n> \n> \n> +use strict;\n> +use warnings;\n\nYeah, those are silly. Removed.\n\nThis patch has none of the Object Oriented changes Alvaro and Jehan-Guillaume requested, but that should not be construed as an argument against their request. It's just not handled in this particular patch.\n\n\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 8 Apr 2021 12:07:27 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/7/21 5:06 PM, Alvaro Herrera wrote:\n> On 2021-Apr-07, Andrew Dunstan wrote:\n>\n>> Oh, you want to roll them all up into one file? That could work. It's a\n>> bit frowned on by perl purists, but I've done similar (see PGBuild/SCM.pm).\n> Ah! Yeah, pretty much exactly like that, including the \"no critic\" flag ...\n>\n\n\nOK, here's an attempt at that. 
There is almost certainly more work to\ndo, but it does pass my basic test (set up a node, start it, talk to it,\nshut it down) on some very old versions down as low as 7.2.\n\n\nIs this more to your liking?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 11 Apr 2021 13:01:18 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "Hi,\n\nOn Wed, 7 Apr 2021 20:07:41 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> > > Let me know if it worth that I work on an official patch. \n> > \n> > Let's give it a try ... \n> \n> OK\n\nSo, as promised, here is my take to port my previous work on PostgreSQL\nsource tree.\n\nMake check passes with no errors. The new class probably deserves some TAP\ntests of its own.\n\nNote that I added a PostgresVersion class for easier and cleaner version\ncomparisons.
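Such a class might look roughly like this — a sketch only, with overloaded comparison operators; the real class may well differ in details such as whether '9.6' should compare equal to '9.6.0' (here missing parts are treated as zero):

```perl
package PostgresVersion;
use strict;
use warnings;

# Overloading '<=>' lets callers write $a < $b, $a >= $b, etc.
# directly; Perl autogenerates the other comparison operators from it.
use overload
  '<=>' => \&_cmp,
  '""'  => sub { return $_[0]->{string} };

sub new
{
	my ($class, $string) = @_;
	my ($core) = $string =~ /(\d+(?:\.\d+)*)/;
	return bless { string => $string, parts => [ split /\./, $core ] },
	  $class;
}

sub _cmp
{
	my ($x, $y, $swapped) = @_;
	$y = PostgresVersion->new($y) unless ref $y;
	my @a = @{ $x->{parts} };
	my @b = @{ $y->{parts} };
	my $n = @a > @b ? scalar @a : scalar @b;
	for my $i (0 .. $n - 1)
	{
		my $c = ($a[$i] // 0) <=> ($b[$i] // 0);
		return $swapped ? -$c : $c if $c;
	}
	return 0;
}

package main;
print PostgresVersion->new("9.6.24") < PostgresVersion->new("10.1")
  ? "older\n" : "newer\n";    # older
```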
This could be an interesting take away no matter what.\n>\n> I still have some more ideas to cleanup, revamp and extend the base class, but\n> I prefer to go incremental to keep things review-ability.\n>\n\nThanks for this. We have been working somewhat on parallel lines. With\nyour permission I'm going to take some of what you've done and\nincorporate it in the patch I'm working on.\n\n\nA PostgresVersion class is a good idea - I was already contemplating\nsomething of the kind.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 12 Apr 2021 09:52:24 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Mon, 12 Apr 2021 09:52:24 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> On 4/12/21 8:59 AM, Jehan-Guillaume de Rorthais wrote:\n> > Hi,\n> >\n> > On Wed, 7 Apr 2021 20:07:41 +0200\n> > Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> > [...] \n> >>>> Let me know if it worth that I work on an official patch. \n> >>> Let's give it a try ... \n> >> OK \n> > So, as promised, here is my take to port my previous work on PostgreSQL\n> > source tree.\n> >\n> > Make check pass with no errors. The new class probably deserve some own TAP\n> > tests.\n> >\n> > Note that I added a PostgresVersion class for easier and cleaner version\n> > comparisons. This could be an interesting take away no matter what.\n> >\n> > I still have some more ideas to cleanup, revamp and extend the base class,\n> > but I prefer to go incremental to keep things review-ability.\n> > \n> \n> Thanks for this. We have been working somewhat on parallel lines. 
With\n> your permission I'm going to take some of what you've done and\n> incorporate it in the patch I'm working on.\n\nThe current context makes my weeks difficult to plan and quite chaotic, which\nis why it took me some days to produce the patch I promised.\n\nI'm fine with working from a common code base, thanks. Feel free to merge the\ntwo code bases; we'll trade patches during review. However, I'm not sure what\nthe status of your patch is, so I cannot judge what would be the easiest way\nto incorporate it. Mine is tested down to 9.1 (9.0 was meaningless because it\nlacks pg_stat_replication) with:\n\n* get_new_node\n* init(allows_streaming => 1)\n* start\n* stop\n* backup\n* init_from_backup\n* wait_for_catchup\n* command_checks_all\n\nNote the various changes in my init() and new method allow_streaming(), etc.\n\nFYI (to avoid duplicate work), the next step on my todo list was to produce\nsome meaningful TAP tests to exercise the class.\n\n> A PostgresVersion class is a good idea - I was already contemplating\n> something of the kind.\n\nThanks!\n\nRegards,\n\n\n", "msg_date": "Mon, 12 Apr 2021 16:57:12 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/12/21 10:57 AM, Jehan-Guillaume de Rorthais wrote:\n> On Mon, 12 Apr 2021 09:52:24 -0400\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n>> On 4/12/21 8:59 AM, Jehan-Guillaume de Rorthais wrote:\n>>> Hi,\n>>>\n>>> On Wed, 7 Apr 2021 20:07:41 +0200\n>>> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>>> [...] \n>>>>>> Let me know if it worth that I work on an official patch. \n>>>>> Let's give it a try ... \n>>>> OK \n>>> So, as promised, here is my take to port my previous work on PostgreSQL\n>>> source tree.\n>>>\n>>> Make check pass with no errors. 
The new class probably deserve some own TAP\n>>> tests.\n>>>\n>>> Note that I added a PostgresVersion class for easier and cleaner version\n>>> comparisons. This could be an interesting take away no matter what.\n>>>\n>>> I still have some more ideas to cleanup, revamp and extend the base class,\n>>> but I prefer to go incremental to keep things review-ability.\n>>> \n>> Thanks for this. We have been working somewhat on parallel lines. With\n>> your permission I'm going to take some of what you've done and\n>> incorporate it in the patch I'm working on.\n> The current context makes my weeks difficult to plan and quite chaotic, that's\n> why it took me some days to produce the patch I promised.\n>\n> I'm fine with working with a common base code, thanks. Feel free to merge both\n> code, we'll trade patches during review. However, I'm not sure what is the\n> status of your patch, so I can not judge what would be the easier way to\n> incorporate. Mine is tested down to 9.1 (9.0 was meaningless because of lack of\n> pg_stat_replication) with:\n>\n> * get_new_node\n> * init(allows_streaming => 1)\n> * start\n> * stop\n> * backup\n> * init_from_backup\n> * wait_for_catchup\n> * command_checks_all\n>\n> Note the various changes in my init() and new method allow_streaming(), etc.\n>\n> FYI (to avoid duplicate work), the next step on my todo was to produce some\n> meaningful tap tests to prove the class.\n>\n>> A PostgresVersion class is a good idea - I was already contemplating\n>> something of the kind.\n> Thanks!\n>\n\n\nOK, here is more WIP on this item. I have drawn substantially on Mark's\nand Jehan-Guillaume's work, but it doesn't really resemble either, and I\ntake full responsibility for it.\n\nThe guiding principles have been:\n\n. avoid doing version tests, or capability tests which are the moral\nequivalent, and rely instead on pure overloading.\n\n. 
avoid overriding large pieces of code.\n\n\nThe last has involved breaking up some large subroutines (like init)\ninto pieces which can more sensibly be overridden. Breaking them up\nisn't a bad thing to do anyway.\n\nThere is a new PostgresVersion object, but it's deliberately very\nminimal. Since we're not doing version tests we don't need more complex\nroutines.\n\n\ncheers\n\n\nandrew\n\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 17 Apr 2021 12:31:11 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/17/21 12:31 PM, Andrew Dunstan wrote:\n> On 4/12/21 10:57 AM, Jehan-Guillaume de Rorthais wrote:\n>> On Mon, 12 Apr 2021 09:52:24 -0400\n>> Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>>> On 4/12/21 8:59 AM, Jehan-Guillaume de Rorthais wrote:\n>>>> Hi,\n>>>>\n>>>> On Wed, 7 Apr 2021 20:07:41 +0200\n>>>> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n>>>> [...] \n>>>>>>> Let me know if it worth that I work on an official patch. \n>>>>>> Let's give it a try ... \n>>>>> OK \n>>>> So, as promised, here is my take to port my previous work on PostgreSQL\n>>>> source tree.\n>>>>\n>>>> Make check pass with no errors. The new class probably deserve some own TAP\n>>>> tests.\n>>>>\n>>>> Note that I added a PostgresVersion class for easier and cleaner version\n>>>> comparisons. This could be an interesting take away no matter what.\n>>>>\n>>>> I still have some more ideas to cleanup, revamp and extend the base class,\n>>>> but I prefer to go incremental to keep things review-ability.\n>>>> \n>>> Thanks for this. We have been working somewhat on parallel lines. 
With\n>>> your permission I'm going to take some of what you've done and\n>>> incorporate it in the patch I'm working on.\n>> The current context makes my weeks difficult to plan and quite chaotic, that's\n>> why it took me some days to produce the patch I promised.\n>>\n>> I'm fine with working with a common base code, thanks. Feel free to merge both\n>> code, we'll trade patches during review. However, I'm not sure what is the\n>> status of your patch, so I can not judge what would be the easier way to\n>> incorporate. Mine is tested down to 9.1 (9.0 was meaningless because of lack of\n>> pg_stat_replication) with:\n>>\n>> * get_new_node\n>> * init(allows_streaming => 1)\n>> * start\n>> * stop\n>> * backup\n>> * init_from_backup\n>> * wait_for_catchup\n>> * command_checks_all\n>>\n>> Note the various changes in my init() and new method allow_streaming(), etc.\n>>\n>> FYI (to avoid duplicate work), the next step on my todo was to produce some\n>> meaningful tap tests to prove the class.\n>>\n>>> A PostgresVersion class is a good idea - I was already contemplating\n>>> something of the kind.\n>> Thanks!\n>>\n>\n> OK, here is more WIP on this item. I have drawn substantially on Mark's\n> and Jehan-Guillaime's work, but it doesn't really resemble either, and I\n> take full responsibility for it.\n>\n> The guiding principles have been:\n>\n> . avoid doing version tests, or capability tests which are the moral\n> equivalent, and rely instead on pure overloading.\n>\n> . avoid overriding large pieces of code.\n>\n>\n> The last has involved breaking up some large subroutines (like init)\n> into pieces which can more sensibly be overridden. Breaking them up\n> isn't a bad thing to do anyway.\n>\n> There is a new PostgresVersion object, but it's deliberately very\n> minimal. Since we're not doing version tests we don't need more complex\n> routines.\n>\n>\n\n\nI should have also attached my test program - here it is. 
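In outline it does little more than this per installation (a simplified sketch; the real script has more checks, and the install paths here are assumed):

```perl
# Simplified sketch: for each saved installation, set up a node,
# start it, talk to it, shut it down.
use strict;
use warnings;
use PostgresNode;

my $i = 0;
foreach my $path (@ARGV)    # e.g. /path/to/inst/9.3 /path/to/inst/12 ...
{
	my $node = get_new_node('node_' . $i++, install_path => $path);
	$node->init;
	$node->start;
	print $node->safe_psql('postgres', 'SELECT version()'), "\n";
	$node->stop;
}
```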
Also, I have\nbeen testing with the binaries which I've published here:\n<https://gitlab.com/adunstan/pg-old-bin> along with some saved by my\nbuildfarm animal.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sat, 17 Apr 2021 12:35:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/17/21 12:35 PM, Andrew Dunstan wrote:\n>\n>> OK, here is more WIP on this item. I have drawn substantially on Mark's\n>> and Jehan-Guillaime's work, but it doesn't really resemble either, and I\n>> take full responsibility for it.\n>>\n>> The guiding principles have been:\n>>\n>> . avoid doing version tests, or capability tests which are the moral\n>> equivalent, and rely instead on pure overloading.\n>>\n>> . avoid overriding large pieces of code.\n>>\n>>\n>> The last has involved breaking up some large subroutines (like init)\n>> into pieces which can more sensibly be overridden. Breaking them up\n>> isn't a bad thing to do anyway.\n>>\n>> There is a new PostgresVersion object, but it's deliberately very\n>> minimal. Since we're not doing version tests we don't need more complex\n>> routines.\n>>\n>>\n>\n> I should have also attached my test program - here it is. Also, I have\n> been testing with the binaries which I've published here:\n> <https://gitlab.com/adunstan/pg-old-bin> along with some saved by my\n> buildfarm animal.\n>\n>\n\nI've been thinking about this some more over the weekend. 
I'm not really\nhappy with any of the three approaches to this problem:\n\n\na) Use version or capability tests in the main package\n\nb) No changes to main package, use overrides\n\nc) Changes to main package to allow smaller overrides\n\n\nThe problem is that a) and c) involve substantial changes to the main\nPostgresNode package, while b) involves overriding large functions (like\ninit) sometimes for quite trivial changes.\n\nI think therefore I'm inclined for now to do nothing for old version\ncompatibility. I would commit the fix for the IPC::Run caching glitch,\nand version detection. I would add a warning if the module is used with\na version <= 11.\n\nThe original goal of these changes was to allow testing of combinations\nof different builds with openssl and nss, which doesn't involve old\nversion compatibility.\n\nAs far as I know, without any compatibility changes the module is fully\ncompatible with releases 13 and 12, and with releases 11 and 10 so long\nas you don't want a standby, and with releases 9.6 and 9.5 if you also\ndon't want a backup. That makes it suitable for a lot of testing without\nany attempt at version compatibility.\n\nWe can revisit compatibility further in the next release.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 08:11:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Mon, Apr 19, 2021 at 08:11:01AM -0400, Andrew Dunstan wrote:\n> As far as I know, without any compatibility changes the module is fully\n> compatible with releases 13 and 12, and with releases 11 and 10 so long\n> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n> don't want a backup. 
That makes it suitable for a lot of testing without\n> any attempt at version compatibility.\n> \n> We can revisit compatibility further in the next release.\n\nAgreed, and I am not convinced that there is any strong need for any\nof that in the close future either, as long as there are no\nground-breaking compatibility changes.\n\nHow far does the buildfarm test pg_upgrade? One thing that I\npersonally care about here is the possibility to make pg_upgrade's\ntest.sh become a TAP test. However, I am also pretty sure that we\ncould apply some local changes to the TAP test of pg_upgrade itself to\nnot require any wide changes to PostgresNode.pm either to make the\ncentral logic as simple as possible with all the stable branches still\nsupported or even older ones. Having compatibility for free down to\n12 is nice enough IMO for most of the core logic, and pg_upgrade would\nalso work just fine down to 9.5 without any extra changes because we\ndon't care there about standbys or backups.\n--\nMichael", "msg_date": "Mon, 19 Apr 2021 21:32:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/19/21 8:32 AM, Michael Paquier wrote:\n> On Mon, Apr 19, 2021 at 08:11:01AM -0400, Andrew Dunstan wrote:\n>> As far as I know, without any compatibility changes the module is fully\n>> compatible with releases 13 and 12, and with releases 11 and 10 so long\n>> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n>> don't want a backup. That makes it suitable for a lot of testing without\n>> any attempt at version compatibility.\n>>\n>> We can revisit compatibility further in the next release.\n> Agreed, and I am not convinced that there is any strong need for any\n> of that in the close future either, as long as there are no\n> ground-breaking compatibility changes.\n>\n> How far does the buildfarm test pg_upgrade? 
One thing that I\n> personally care about here is the possibility to make pg_upgrade's\n> test.sh become a TAP test. However, I am also pretty sure that we\n> could apply some local changes to the TAP test of pg_upgrade itself to\n> not require any wide changes to PostgresNode.pm either to make the\n> central logic as simple as possible with all the stable branches still\n> supported or even older ones. Having compatibility for free down to\n> 12 is nice enough IMO for most of the core logic, and pg_upgrade would\n> also work just fine down to 9.5 without any extra changes because we\n> don't care there about standbys or backups.\n\n\nThe buildfarm tests self-targetted pg_upgrade by calling the builtin\ntests (make check / vcregress.pl upgradecheck).\n\n\nHowever, for cross version testing the regime is quite different. The\ncross version module doesn't ever construct a repo. Rather, it tries to\nupgrade a repo saved from a prior run. So all it does is some\nadjustments for things that have changed between releases and then calls\npg_upgrade. See\n<https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm>\n\n\nNote that we currently test upgrades down to 9.2 on crake. However, now\nI have some working binaries for really old releases I might extend that\nall the way back to 8.4 at some stage. pg_upgrade and pg_dump/pg_restore\ntesting are the major use cases I can see for backwards compatibility -\npg_dump is still supposed to be able to go back into the dim dark ages,\nwhich is why I built the old binaries all the way back to 7.2.\n\n\n....\n\n\nIt's just occurred to me that a much nicer way of doing this\nPostgresNode stuff would be to have a function that instead of appending\nto the config file would adjust it. Then we wouldn't need all those\nlittle settings functions to be overridden - the subclasses could just\npost-process the� config files. I'm going to try that and see what I can\ncome up with. 
I think it will look heaps nicer.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n-- \n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 09:54:47 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n> I think therefore I'm inclined for now to do nothing for old version\n> compatibility.\n\nI agree with waiting until the v15 development cycle.\n\n> I would commit the fix for the IPC::Run caching glitch,\n> and version detection\n\nThank you.\n\n> I would add a warning if the module is used with\n> a version <= 11.\n\nSounds fine for now.\n\n> The original goal of these changes was to allow testing of combinations\n> of different builds with openssl and nss, which doesn't involve old\n> version compatibility.\n\nHmm. I think different folks had different goals. My personal interest is to write automated tests which spin up older servers, create data that cannot be created on newer servers (such as heap tuples with HEAP_MOVED_IN or HEAP_MOVED_OFF bits set), upgrade, and test that new code handles the old data correctly. I think this is not only useful for our test suites as a community, but is also useful for companies providing support services who need to reproduce problems that customers are having on clusters that have been pg_upgraded across large numbers of postgres versions.\n\n> As far as I know, without any compatibility changes the module is fully\n> compatible with releases 13 and 12, and with releases 11 and 10 so long\n> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n> don't want a backup. 
That makes it suitable for a lot of testing without\n> any attempt at version compatibility.\n> \n> We can revisit compatibility further in the next release.\n\nSounds good.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 07:43:58 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/19/21 10:43 AM, Mark Dilger wrote:\n>\n>> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>\n>> I think therefore I'm inclined for now to do nothing for old version\n>> compatibility.\n> I agree with waiting until the v15 development cycle.\n>\n>> I would commit the fix for the IPC::Run caching glitch,\n>> and version detection\n> Thank you.\n>\n>> I would add a warning if the module is used with\n>> a version <= 11.\n> Sounds fine for now.\n>\n>> The original goal of these changes was to allow testing of combinations\n>> of different builds with openssl and nss, which doesn't involve old\n>> version compatibility.\n> Hmm. I think different folks had different goals. My personal interest is to write automated tests which spin up older servers, create data that cannot be created on newer servers (such as heap tuples with HEAP_MOVED_IN or HEAP_MOVED_OFF bits set), upgrade, and test that new code handles the old data correctly. 
I think this is not only useful for our test suites as a community, but is also useful for companies providing support services who need to reproduce problems that customers are having on clusters that have been pg_upgraded across large numbers of postgres versions.\n>\n>> As far as I know, without any compatibility changes the module is fully\n>> compatible with releases 13 and 12, and with releases 11 and 10 so long\n>> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n>> don't want a backup. That makes it suitable for a lot of testing without\n>> any attempt at version compatibility.\n>>\n>> We can revisit compatibility further in the next release.\n> Sounds good.\n\n\nI'll work on this. Meanwhile FTR here's my latest revision - it's a lot\nless invasive of the main module, so it seems much more palatable to me,\nand still passes my test down to 7.2.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 19 Apr 2021 12:37:08 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Mon, 19 Apr 2021 07:43:58 -0700\nMark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> > I think therefore I'm inclined for now to do nothing for old version\n> > compatibility. 
\n> \n> I agree with waiting until the v15 development cycle.\n\nAgree.\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:02:14 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Mon, 19 Apr 2021 12:37:08 -0400\nAndrew Dunstan <andrew@dunslane.net> wrote:\n\n> \n> On 4/19/21 10:43 AM, Mark Dilger wrote:\n> >\n> >> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>\n> >> I think therefore I'm inclined for now to do nothing for old version\n> >> compatibility.\n> > I agree with waiting until the v15 development cycle.\n> >\n> >> I would commit the fix for the IPC::Run caching glitch,\n> >> and version detection\n> > Thank you.\n> >\n> >> I would add a warning if the module is used with\n> >> a version <= 11.\n> > Sounds fine for now.\n> >\n> >> The original goal of these changes was to allow testing of combinations\n> >> of different builds with openssl and nss, which doesn't involve old\n> >> version compatibility.\n> > Hmm. I think different folks had different goals. My personal interest is\n> > to write automated tests which spin up older servers, create data that\n> > cannot be created on newer servers (such as heap tuples with HEAP_MOVED_IN\n> > or HEAP_MOVED_OFF bits set), upgrade, and test that new code handles the\n> > old data correctly. I think this is not only useful for our test suites as\n> > a community, but is also useful for companies providing support services\n> > who need to reproduce problems that customers are having on clusters that\n> > have been pg_upgraded across large numbers of postgres versions.\n> >\n> >> As far as I know, without any compatibility changes the module is fully\n> >> compatible with releases 13 and 12, and with releases 11 and 10 so long\n> >> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n> >> don't want a backup. 
That makes it suitable for a lot of testing without\n> >> any attempt at version compatibility.\n> >>\n> >> We can revisit compatibility further in the next release. \n> > Sounds good. \n> \n> \n> I'll work on this. Meanwhile FTR here's my latest revision - it's a lot\n> less invasive of the main module, so it seems much more palatable to me,\n> and still passes my test down to 7.2.\n\nI spent a fair bit of time wondering how useful it could be to either maintain\nsuch a module in core, including for external needs, or to create a separate\nexternal project with a different release/distribution/packaging policy.\n\nWherever the module is maintained, the goal would be to address broader\nneeds, e.g. adding a switch_wal() method or wait_for_archive(), supporting\nreplication, backups, etc. for many old, deprecated PostgreSQL versions.\n\nTo be honest I have mixed feelings. I feel this burden shouldn't be carried\nby the core, which has restricted needs compared to external projects. On the\nother hand, maintaining an external project which shares 90% of the code seems\nto be a wasteful duplication and backporting effort. 
Moreover Craig Ringer already opened\nthe door for external use of PostgresNode with his effort to install/package it,\nsee:\nhttps://www.postgresql.org/message-id/CAGRY4nxxKSFJEgVAv5YAk%3DbqULtFmNw7gEJef0CCgzpNy6O%3D-w%40mail.gmail.com\n\nThoughts?\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:25:44 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 10:25 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n> On Mon, 19 Apr 2021 12:37:08 -0400\n> Andrew Dunstan <andrew@dunslane.net> wrote:\n> \n>> \n>> On 4/19/21 10:43 AM, Mark Dilger wrote:\n>>> \n>>>> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> \n>>>> I think therefore I'm inclined for now to do nothing for old version\n>>>> compatibility.\n>>> I agree with waiting until the v15 development cycle.\n>>> \n>>>> I would commit the fix for the IPC::Run caching glitch,\n>>>> and version detection\n>>> Thank you.\n>>> \n>>>> I would add a warning if the module is used with\n>>>> a version <= 11.\n>>> Sounds fine for now.\n>>> \n>>>> The original goal of these changes was to allow testing of combinations\n>>>> of different builds with openssl and nss, which doesn't involve old\n>>>> version compatibility.\n>>> Hmm. I think different folks had different goals. My personal interest is\n>>> to write automated tests which spin up older servers, create data that\n>>> cannot be created on newer servers (such as heap tuples with HEAP_MOVED_IN\n>>> or HEAP_MOVED_OFF bits set), upgrade, and test that new code handles the\n>>> old data correctly. 
I think this is not only useful for our test suites as\n>>> a community, but is also useful for companies providing support services\n>>> who need to reproduce problems that customers are having on clusters that\n>>> have been pg_upgraded across large numbers of postgres versions.\n>>> \n>>>> As far as I know, without any compatibility changes the module is fully\n>>>> compatible with releases 13 and 12, and with releases 11 and 10 so long\n>>>> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n>>>> don't want a backup. That makes it suitable for a lot of testing without\n>>>> any attempt at version compatibility.\n>>>> \n>>>> We can revisit compatibility further in the next release.\n>>> Sounds good.\n>> \n>> \n>> I'll work on this. Meanwhile FTR here's my latest revision - it's a lot\n>> less invasive of the main module, so it seems much more palatable to me,\n>> and still passes my test down to 7.2.\n> \n> I spend a fair bit of time to wonder how useful it could be to either maintain\n> such a module in core, including for external needs, or creating a separate\n> external project with a different release/distribution/packaging policy.\n> \n> Wherever the module is maintained, the goal would be to address broader\n> needs, eg. adding a switch_wal() method or wait_for_archive(), supporting\n> replication, backups, etc for multi-old-deprecated-PostgreSQL versions.\n> \n> To be honest I have mixed feelings. I feel this burden shouldn't be carried\n> by the core, which has restricted needs compared to external projects. In the\n> opposite, maintaining an external project which shares 90% of the code seems to\n> be a useless duplicate and backport effort. 
Moreover Craig Ringer already opened\n> the door for external use of PostgresNode with his effort to install/package it,\n> see:\n> https://www.postgresql.org/message-id/CAGRY4nxxKSFJEgVAv5YAk%3DbqULtFmNw7gEJef0CCgzpNy6O%3D-w%40mail.gmail.com\n> \n> Thoughts?\n\nThe community needs a single shared PostgresNode implementation that can be used by scripts which reproduce bugs. For bugs that can only be triggered by cross version upgrades, the scripts need a PostgresNode implementation which can work across versions. Likewise for bugs that can only be triggered when client applications connect to servers of a different version.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 10:35:39 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Mon, 19 Apr 2021 10:35:39 -0700\nMark Dilger <mark.dilger@enterprisedb.com> wrote:\n\n> > On Apr 19, 2021, at 10:25 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com>\n> > wrote:\n> > \n> > On Mon, 19 Apr 2021 12:37:08 -0400\n> > Andrew Dunstan <andrew@dunslane.net> wrote:\n> > \n> >> \n> >> On 4/19/21 10:43 AM, Mark Dilger wrote: \n> >>> \n> >>>> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n> >>>> \n> >>>> I think therefore I'm inclined for now to do nothing for old version\n> >>>> compatibility. \n> >>> I agree with waiting until the v15 development cycle.\n> >>> \n> >>>> I would commit the fix for the IPC::Run caching glitch,\n> >>>> and version detection \n> >>> Thank you.\n> >>> \n> >>>> I would add a warning if the module is used with\n> >>>> a version <= 11. 
\n> >>> Sounds fine for now.\n> >>> \n> >>>> The original goal of these changes was to allow testing of combinations\n> >>>> of different builds with openssl and nss, which doesn't involve old\n> >>>> version compatibility. \n> >>> Hmm. I think different folks had different goals. My personal interest\n> >>> is to write automated tests which spin up older servers, create data that\n> >>> cannot be created on newer servers (such as heap tuples with HEAP_MOVED_IN\n> >>> or HEAP_MOVED_OFF bits set), upgrade, and test that new code handles the\n> >>> old data correctly. I think this is not only useful for our test suites\n> >>> as a community, but is also useful for companies providing support\n> >>> services who need to reproduce problems that customers are having on\n> >>> clusters that have been pg_upgraded across large numbers of postgres\n> >>> versions. \n> >>>> As far as I know, without any compatibility changes the module is fully\n> >>>> compatible with releases 13 and 12, and with releases 11 and 10 so long\n> >>>> as you don't want a standby, and with releases 9.6 and 9.5 if you also\n> >>>> don't want a backup. That makes it suitable for a lot of testing without\n> >>>> any attempt at version compatibility.\n> >>>> \n> >>>> We can revisit compatibility further in the next release. \n> >>> Sounds good. \n> >> \n> >> \n> >> I'll work on this. Meanwhile FTR here's my latest revision - it's a lot\n> >> less invasive of the main module, so it seems much more palatable to me,\n> >> and still passes my test down to 7.2. \n> > \n> > I spend a fair bit of time to wonder how useful it could be to either\n> > maintain such a module in core, including for external needs, or creating a\n> > separate external project with a different release/distribution/packaging\n> > policy.\n> > \n> > Wherever the module is maintained, the goal would be to address broader\n> > needs, eg. 
adding a switch_wal() method or wait_for_archive(), supporting\n> > replication, backups, etc for multi-old-deprecated-PostgreSQL versions.\n> >\n> > To be honest I have mixed feelings. I feel this burden shouldn't be carried\n> > by the core, which has restricted needs compared to external projects. In\n> > the opposite, maintaining an external project which shares 90% of the code\n> > seems to be a useless duplicate and backport effort. Moreover Craig Ringer\n> > already opened the door for external use of PostgresNode with his effort to\n> > install/package it, see:\n> > https://www.postgresql.org/message-id/CAGRY4nxxKSFJEgVAv5YAk%3DbqULtFmNw7gEJef0CCgzpNy6O%3D-w%40mail.gmail.com\n> > \n> > Thoughts? \n> \n> The community needs a single shared PostgresNode implementation that can be\n> used by scripts which reproduce bugs.\n\nWhich means it could be OK to have a PostgresNode implementation, leaving in\ncore source-tree, which supports broader needs than the core ones (older\nversions and some more methods)? Did I understood correctly?\n\nIf this is correct, I suppose this effort could be committed early in v15 cycle?\n\nDoes it deserve some effort to build some dedicated TAP tests for these\nmodules? I already have a small patch for this waiting on my disk for some more\ntests and review...\n\nRegards\n\n\n", "msg_date": "Mon, 19 Apr 2021 19:50:43 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 19, 2021, at 10:50 AM, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> \n>> The community needs a single shared PostgresNode implementation that can be\n>> used by scripts which reproduce bugs.\n> \n> Which means it could be OK to have a PostgresNode implementation, leaving in\n> core source-tree, which supports broader needs than the core ones (older\n> versions and some more methods)? 
Did I understood correctly?\n\nYes, I believe it should be in core.\n\nI don't know about \"some more methods\", as it depends which methods you are proposing.\n\n> If this is correct, I suppose this effort could be committed early in v15 cycle?\n\nI don't care to speculate on that yet.\n\n> Does it deserve some effort to build some dedicated TAP tests for these\n> modules? I already have a small patch for this waiting on my disk for some more\n> tests and review...\n\nI did that, too, in the 0002 version of my patch. Perhaps we need to merge your work and mine.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 19 Apr 2021 11:17:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/19/21 12:37 PM, Andrew Dunstan wrote:\n> On 4/19/21 10:43 AM, Mark Dilger wrote:\n>>> On Apr 19, 2021, at 5:11 AM, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>\n>>> I think therefore I'm inclined for now to do nothing for old version\n>>> compatibility.\n>> I agree with waiting until the v15 development cycle.\n>>\n>>> I would commit the fix for the IPC::Run caching glitch,\n>>> and version detection\n>> Thank you.\n\n\nI've committed this piece.\n\n\n\n>>\n>>> I would add a warning if the module is used with\n>>> a version <= 11.\n>> Sounds fine for now.\n\n\nHere's the patch for that.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Tue, 20 Apr 2021 13:11:59 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Tue, Apr 20, 2021 at 01:11:59PM -0400, Andrew Dunstan wrote:\n> Here's the patch for that.\n\nThanks.\n\n> +\t# Accept standard formats, in case caller has handed us the 
output of a\n> +\t# postgres command line tool\n> +\t$arg = $1\n> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n\nInteresting. This would work even if using --with-extra-version,\nwhich is a good thing.\n\n> +# render the version number in the standard \"joined by dots\" notation if\n> +# interpolated into a string\n> +sub _stringify\n> +{\n> + my $self = shift;\n> + return join('.', @$self);\n> +}\n\nThis comes out a bit strangely when using a devel build as this\nappends -1 as sub-version number, becoming say 14.-1. It may be\nclearer to add back \"devel\" in this case?\n\nWouldn't it be better to add some perldoc to PostgresVersion.pm?\n--\nMichael", "msg_date": "Wed, 21 Apr 2021 14:13:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/21/21 1:13 AM, Michael Paquier wrote:\n> On Tue, Apr 20, 2021 at 01:11:59PM -0400, Andrew Dunstan wrote:\n>> Here's the patch for that.\n> Thanks.\n>\n>> +\t# Accept standard formats, in case caller has handed us the output of a\n>> +\t# postgres command line tool\n>> +\t$arg = $1\n>> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n> Interesting. This would work even if using --with-extra-version,\n> which is a good thing.\n>\n>> +# render the version number in the standard \"joined by dots\" notation if\n>> +# interpolated into a string\n>> +sub _stringify\n>> +{\n>> + my $self = shift;\n>> + return join('.', @$self);\n>> +}\n> This comes out a bit strangely when using a devel build as this\n> appends -1 as sub-version number, becoming say 14.-1. 
It may be\n> clearer to add back \"devel\" in this case?\n>\n> Wouldn't it be better to add some perldoc to PostgresVersion.pm?\n\n\n\n\nHere's a patch with these things attended to.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 21 Apr 2021 10:04:40 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n> Here's a patch with these things attended to.\n\nThanks. Reading through it, that seems pretty much fine to me. I\nhave not spent time checking _version_cmp in details though :)\n--\nMichael", "msg_date": "Thu, 22 Apr 2021 15:52:28 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/22/21 2:52 AM, Michael Paquier wrote:\n> On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n>> Here's a patch with these things attended to.\n> Thanks. Reading through it, that seems pretty much fine to me. 
I\n> have not spent time checking _version_cmp in details though :)\n\n\nOk, Thanks.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 08:53:52 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-21, Andrew Dunstan wrote:\n\n> +=head1 DESCRIPTION\n> +\n> +PostgresVersion encapsulated Postgres version numbers, providing parsing\n> +of common version formats and comparison operations.\n\nSmall typo here: should be \"encapsulates\"\n\n> +\t# Accept standard formats, in case caller has handed us the output of a\n> +\t# postgres command line tool\n> +\t$arg = $1\n> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n> +\n> +\t# Split into an array\n> +\tmy @result = split(/\\./, $arg);\n> +\n> +\t# Treat development versions as having a minor/micro version one less than\n> +\t# the first released version of that branch.\n> +\tif ($result[$#result] =~ m/^(\\d+)devel$/)\n> +\t{\n> +\t\tpop(@result);\n> +\t\tpush(@result, $1, -1);\n> +\t}\n\nIt's a bit weird to parse the \"devel\" bit twice. Would it work to leave\n(?:devel)? out of the capturing parens that becomes $1 in the first\nregex and make it capturing itself, so you get \"devel\" in $2, and decide\nbased on its presence/absence? Then you don't have to pop and push a -1.\n\n> +\tmy $res = [ @result ];\n\nHmm, isn't this just \\@result? 
So you could do\n\treturn bless \\@result, $class;\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:09:54 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/22/21 2:52 AM, Michael Paquier wrote:\n> On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n>> Here's a patch with these things attended to.\n> Thanks. Reading through it, that seems pretty much fine to me. I\n> have not spent time checking _version_cmp in details though :)\n\n\n\n\npushed with a couple of fixes.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 11:13:20 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\n\n> On Apr 22, 2021, at 8:09 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n>> \n>> +\t# Accept standard formats, in case caller has handed us the output of a\n>> +\t# postgres command line tool\n>> +\t$arg = $1\n>> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n>> +\n>> +\t# Split into an array\n>> +\tmy @result = split(/\\./, $arg);\n>> +\n>> +\t# Treat development versions as having a minor/micro version one less than\n>> +\t# the first released version of that branch.\n>> +\tif ($result[$#result] =~ m/^(\\d+)devel$/)\n>> +\t{\n>> +\t\tpop(@result);\n>> +\t\tpush(@result, $1, -1);\n>> +\t}\n> \n> It's a bit weird to parse the \"devel\" bit twice. Would it work to leave\n> (?:devel)? out of the capturing parens that becomes $1 in the first\n> regex and make it capturing itself, so you get \"devel\" in $2, and decide\n> based on its presence/absence? 
Then you don't have to pop and push a -1.\n\nThe first regex should match things like \"12\", \"12.1\", \"14devel\", or those same things prefixed with \"(PostgreSQL) \", and strip off the \"(PostgreSQL)\" part if it exists. But the code should also BAIL_OUT if the regex completely fails to match.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 08:46:09 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/22/21 11:46 AM, Mark Dilger wrote:\n>\n>> On Apr 22, 2021, at 8:09 AM, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>\n>>> +\t# Accept standard formats, in case caller has handed us the output of a\n>>> +\t# postgres command line tool\n>>> +\t$arg = $1\n>>> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n>>> +\n>>> +\t# Split into an array\n>>> +\tmy @result = split(/\\./, $arg);\n>>> +\n>>> +\t# Treat development versions as having a minor/micro version one less than\n>>> +\t# the first released version of that branch.\n>>> +\tif ($result[$#result] =~ m/^(\\d+)devel$/)\n>>> +\t{\n>>> +\t\tpop(@result);\n>>> +\t\tpush(@result, $1, -1);\n>>> +\t}\n>> It's a bit weird to parse the \"devel\" bit twice. Would it work to leave\n>> (?:devel)? out of the capturing parens that becomes $1 in the first\n>> regex and make it capturing itself, so you get \"devel\" in $2, and decide\n>> based on its presence/absence? Then you don't have to pop and push a -1.\n> The first regex should match things like \"12\", \"12.1\", \"14devel\", or those same things prefixed with \"(PostgreSQL) \", and strip off the \"(PostgreSQL)\" part if it exists. But the code should also BAIL_OUT if the regex completely fails to match.\n>\n\n\nNot quite. PostgresVersion doesn't know about Test::More. 
It could die\n(or croak) and we could catch it in an eval.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 12:42:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/22/21 11:09 AM, Alvaro Herrera wrote:\n> On 2021-Apr-21, Andrew Dunstan wrote:\n>\n>> +=head1 DESCRIPTION\n>> +\n>> +PostgresVersion encapsulated Postgres version numbers, providing parsing\n>> +of common version formats and comparison operations.\n> Small typo here: should be \"encapsulates\"\n>\n>> +\t# Accept standard formats, in case caller has handed us the output of a\n>> +\t# postgres command line tool\n>> +\t$arg = $1\n>> +\t\tif ($arg =~ m/\\(?PostgreSQL\\)? (\\d+(?:\\.\\d+)*(?:devel)?)/);\n>> +\n>> +\t# Split into an array\n>> +\tmy @result = split(/\\./, $arg);\n>> +\n>> +\t# Treat development versions as having a minor/micro version one less than\n>> +\t# the first released version of that branch.\n>> +\tif ($result[$#result] =~ m/^(\\d+)devel$/)\n>> +\t{\n>> +\t\tpop(@result);\n>> +\t\tpush(@result, $1, -1);\n>> +\t}\n> It's a bit weird to parse the \"devel\" bit twice. Would it work to leave\n> (?:devel)? out of the capturing parens that becomes $1 in the first\n> regex and make it capturing itself, so you get \"devel\" in $2, and decide\n> based on its presence/absence? Then you don't have to pop and push a -1.\n\n\nHow about this?\n\n\n     # Accept standard formats, in case caller has handed us the\n output of a\n     # postgres command line tool\n     my $devel;\n     ($arg,$devel) = ($1, $2)\n       if ($arg =~  m/^(?:\\(?PostgreSQL\\)? 
)?(\\d+(?:\\.\\d+)*)(devel)?/);\n\n     # Split into an array\n     my @result = split(/\\./, $arg);\n\n     # Treat development versions as having a minor/micro version one\n less than\n     # the first released version of that branch.\n     push @result, -1 if ($devel);\n\n     return bless \\@result, $class;\n\n\ncheers\n\n\nandrew\n\n\n\n\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 13:58:18 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 2021-Apr-22, Andrew Dunstan wrote:\n\n>     # Accept standard formats, in case caller has handed us the\n> output of a\n>     # postgres command line tool\n>     my $devel;\n>     ($arg,$devel) = ($1, $2)\n>       if ($arg =~  m/^(?:\\(?PostgreSQL\\)? )?(\\d+(?:\\.\\d+)*)(devel)?/);\n> \n>     # Split into an array\n>     my @result = split(/\\./, $arg);\n> \n>     # Treat development versions as having a minor/micro version one\n> less than\n>     # the first released version of that branch.\n>     push @result, -1 if ($devel);\n> \n>     return bless \\@result, $class;\n\nWFM, thanks :-)\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W\n\n\n", "msg_date": "Thu, 22 Apr 2021 14:35:13 -0400", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, Apr 22, 2021 at 02:35:13PM -0400, Alvaro Herrera wrote:\n> WFM, thanks :-)\n\nAlso, do we need to worry about beta releases? 
Just recalled that\nnow.\n--\nMichael", "msg_date": "Fri, 23 Apr 2021 06:08:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/22/21 5:08 PM, Michael Paquier wrote:\n> On Thu, Apr 22, 2021 at 02:35:13PM -0400, Alvaro Herrera wrote:\n>> WFM, thanks :-)\n> Also, do we need to worry about beta releases? Just recalled that\n> now.\n\n\nInteresting point. Maybe we need to do something like devel = -4, alpha\n= -3, beta = -2, rc = -1. Or maybe that's overkill.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 22 Apr 2021 20:43:10 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, Apr 22, 2021 at 08:43:10PM -0400, Andrew Dunstan wrote:\n> Interesting point. Maybe we need to do something like devel = -4, alpha\n> = -3, beta = -2, rc = -1. Or maybe that's overkill.\n\nAnd after that it would come to how many betas, alphas or RCs you\nhave, but you can never be sure of how many of each you may finish\nwith. I think that you have the right answer with just marking all\nof them with -1 for the minor number, keeping the code a maximum\nsimple.\n--\nMichael", "msg_date": "Fri, 23 Apr 2021 13:41:26 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On 4/23/21 12:41 AM, Michael Paquier wrote:\n> On Thu, Apr 22, 2021 at 08:43:10PM -0400, Andrew Dunstan wrote:\n>> Interesting point. Maybe we need to do something like devel = -4, alpha\n>> = -3, beta = -2, rc = -1. 
Or maybe that's overkill.\n> And after that it would come to how many betas, alphas or RCs you\n> have, but you can never be sure of how many of each you may finish\n> with. I think that you have the right answer with just marking all\n> of them with -1 for the minor number, keeping the code a maximum\n> simple.\n\n\nYeah, I think it's ok for comparison purposes just to lump them all\ntogether. Here's a patch that does that and some consequent cleanup.\nNote we now cache the string rather than trying to reconstruct it.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 23 Apr 2021 08:10:01 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Fri, Apr 23, 2021 at 08:10:01AM -0400, Andrew Dunstan wrote:\n> Yeah, I think it's ok for comparison purposes just to lump them all\n> together. Here's a patch that does that and some consequent cleanup.\n> Note we now cache the string rather than trying to reconstruct it.\n\nNo objections from here to build the version string beforehand. \n\n> + (devel|(?:alpha|beta|rc)\\d+)? # dev marker - see version_stamp.pl\n> +\t\t !x);\n\nI have been playing with patch and version_stamp.pl, and that does the\njob. Nice.\n--\nMichael", "msg_date": "Sat, 24 Apr 2021 14:54:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 4/24/21 1:54 AM, Michael Paquier wrote:\n> On Fri, Apr 23, 2021 at 08:10:01AM -0400, Andrew Dunstan wrote:\n>> Yeah, I think it's ok for comparison purposes just to lump them all\n>> together. Here's a patch that does that and some consequent cleanup.\n>> Note we now cache the string rather than trying to reconstruct it.\n> No objections from here to build the version string beforehand. 
\n>\n>> + (devel|(?:alpha|beta|rc)\\d+)? # dev marker - see version_stamp.pl\n>> +\t\t !x);\n> I have been playing with patch and version_stamp.pl, and that does the\n> job. Nice.\n\n\nThanks, pushed.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 24 Apr 2021 09:49:07 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, Apr 22, 2021 at 8:43 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 4/22/21 2:52 AM, Michael Paquier wrote:\n> > On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n> >> Here's a patch with these things attended to.\n> > Thanks. Reading through it, that seems pretty much fine to me. I\n> > have not spent time checking _version_cmp in details though :)\n>\n>\n>\n>\n> pushed with a couple of fixes.\n>\n\nIn my windows environment (Windows 10), I am not able to successfully\nexecute taptests and the failure indicates the line by this commit\n(4c4eaf3d Make PostgresNode version aware). 
I am trying to execute\ntests with command: vcregress.bat taptest src/test/subscription\n\nI am seeing below in the log file:\nLog file: D:/WorkSpace/postgresql/src/test/subscription/tmp_check/log/001_rep_changes_publisher.log\nList form of pipe open not implemented at\nD:/WorkSpace/postgresql/src/test/perl/PostgresNode.pm line 1251.\n# Looks like your test exited with 255 before it could output anything.\n\nCan you please let me know if I need to do something additional here?\n\n\n\n--\nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 May 2021 14:06:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 4:36 AM, Amit Kapila wrote:\n> On Thu, Apr 22, 2021 at 8:43 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 4/22/21 2:52 AM, Michael Paquier wrote:\n>>> On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n>>>> Here's a patch with these things attended to.\n>>> Thanks. Reading through it, that seems pretty much fine to me. I\n>>> have not spent time checking _version_cmp in details though :)\n>>\n>>\n>>\n>> pushed with a couple of fixes.\n>>\n> In my windows environment (Windows 10), I am not able to successfully\n> execute taptests and the failure indicates the line by this commit\n> (4c4eaf3d Make PostgresNode version aware). I am trying to execute\n> tests with command: vcregress.bat taptest src/test/subscription\n>\n> I am seeing below in the log file:\n> Log file: D:/WorkSpace/postgresql/src/test/subscription/tmp_check/log/001_rep_changes_publisher.log\n> List form of pipe open not implemented at\n> D:/WorkSpace/postgresql/src/test/perl/PostgresNode.pm line 1251.\n> # Looks like your test exited with 255 before it could output anything.\n>\n> Can you please let me know if I need to do something additional here?\n>\n>\n>\n\nYour version of perl is apparently too old for this. 
Looks like that\nneeds to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 May 2021 07:05:05 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 7:05 AM, Andrew Dunstan wrote:\n> On 5/20/21 4:36 AM, Amit Kapila wrote:\n>> On Thu, Apr 22, 2021 at 8:43 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 4/22/21 2:52 AM, Michael Paquier wrote:\n>>>> On Wed, Apr 21, 2021 at 10:04:40AM -0400, Andrew Dunstan wrote:\n>>>>> Here's a patch with these things attended to.\n>>>> Thanks. Reading through it, that seems pretty much fine to me. I\n>>>> have not spent time checking _version_cmp in details though :)\n>>>\n>>>\n>>> pushed with a couple of fixes.\n>>>\n>> In my windows environment (Windows 10), I am not able to successfully\n>> execute taptests and the failure indicates the line by this commit\n>> (4c4eaf3d Make PostgresNode version aware). I am trying to execute\n>> tests with command: vcregress.bat taptest src/test/subscription\n>>\n>> I am seeing below in the log file:\n>> Log file: D:/WorkSpace/postgresql/src/test/subscription/tmp_check/log/001_rep_changes_publisher.log\n>> List form of pipe open not implemented at\n>> D:/WorkSpace/postgresql/src/test/perl/PostgresNode.pm line 1251.\n>> # Looks like your test exited with 255 before it could output anything.\n>>\n>> Can you please let me know if I need to do something additional here?\n>>\n>>\n>>\n> Your version of perl is apparently too old for this. 
Looks like that\n> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n>\n>\n\nHowever, we could probably rewrite this in a way that would work with\nyour older perl and at the same time not offend perlcritic, using qx{}\ninstead of an explicit open.\n\n\nWill test.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 May 2021 07:13:18 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, May 20, 2021 at 07:05:05AM -0400, Andrew Dunstan wrote:\n> Your version of perl is apparently too old for this. Looks like that\n> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n\nHmm. src/test/perl/README tells about 5.8.0. That's quite a jump.\n--\nMichael", "msg_date": "Thu, 20 May 2021 20:15:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, May 20, 2021 at 4:43 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 5/20/21 7:05 AM, Andrew Dunstan wrote:\n> >>\n> > Your version of perl is apparently too old for this. Looks like that\n> > needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n> >\n> >\n>\n> However, we could probably rewrite this in a way that would work with\n> your older perl and at the same time not offend perlcritic, using qx{}\n> instead of an explicit open.\n>\n>\n> Will test.\n>\n\nOkay, thanks. BTW, our docs don't seem to reflect that we need a newer\nversion of Perl. 
See [1] (version 5.8.3 or later is required).\n\n[1] - https://www.postgresql.org/docs/devel/install-windows-full.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 20 May 2021 16:46:44 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 7:15 AM, Michael Paquier wrote:\n> On Thu, May 20, 2021 at 07:05:05AM -0400, Andrew Dunstan wrote:\n>> Your version of perl is apparently too old for this. Looks like that\n>> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n> Hmm. src/test/perl/README tells about 5.8.0. That's quite a jump.\n\n\n\nYes. I've pushed a fix that should take care of the issue.\n\n\n5.8 is ancient. Yes I know it's what's in the Msys1 DTK, but the DTK\nperl seems happy with the list form of pipe open - it's only native\nwindows perl's that aren't.\n\n\nMaybe it's time to update the requirement a bit, at least for running\nTAP tests.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 May 2021 08:25:45 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Thu, May 20, 2021 at 07:05:05AM -0400, Andrew Dunstan wrote:\n>> Your version of perl is apparently too old for this. Looks like that\n>> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n\n> Hmm. src/test/perl/README tells about 5.8.0. 
That's quite a jump.\n\nSomething odd about that, because my dinosaurs aren't complaining;\nprairiedog for example uses perl 5.8.3.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 May 2021 09:53:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 9:53 AM, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Thu, May 20, 2021 at 07:05:05AM -0400, Andrew Dunstan wrote:\n>>> Your version of perl is apparently too old for this. Looks like that\n>>> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n>> Hmm. src/test/perl/README tells about 5.8.0. That's quite a jump.\n> Something odd about that, because my dinosaurs aren't complaining;\n> prairiedog for example uses perl 5.8.3.\n>\n> \t\t\t\n\n\n\nIt was only on Windows that this form of pipe open was not supported.\n5.22 fixed that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 May 2021 10:51:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, May 20, 2021 at 08:25:45AM -0400, Andrew Dunstan wrote:\n> 5.8 is ancient. Yes I know it's what's in the Msys1 DTK, but the DTK\n> perl seems happy with the list form of pipe open - it's only native\n> windows perl's that aren't.\n\nRespect to the Msys1 DTK for that.\n\n> Maybe it's time to update the requirement a bit, at least for running\n> TAP tests.\n\nAre older versions of the perl MSI that activestate provides hard to\ncome by? 
FWIW, I would not mind if this README and the docs are\nupdated to mention that on Windows we require a newer version for this\nset of MSIs.\n--\nMichael", "msg_date": "Fri, 21 May 2021 10:49:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 9:49 PM, Michael Paquier wrote:\n> On Thu, May 20, 2021 at 08:25:45AM -0400, Andrew Dunstan wrote:\n>> 5.8 is ancient. Yes I know it's what's in the Msys1 DTK, but the DTK\n>> perl seems happy with the list form of pipe open - it's only native\n>> windows perl's that aren't.\n> Respect to the Msys1 DTK for that.\n>\n>> Maybe it's time to update the requirement a bit, at least for running\n>> TAP tests.\n> Are older versions of the perl MSI that activestate provides hard to\n> come by? FWIW, I would not mind if this README and the docs are\n> updated to mention that on Windows we require a newer version for this\n> set of MSIs.\n\n\nI've fixed the coding that led to this particular problem. So for now\nlet's let sleeping dogs lie.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 20 May 2021 21:59:33 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 5/20/21 9:49 PM, Michael Paquier wrote:\n>> Are older versions of the perl MSI that activestate provides hard to\n>> come by? FWIW, I would not mind if this README and the docs are\n>> updated to mention that on Windows we require a newer version for this\n>> set of MSIs.\n\n> I've fixed the coding that led to this particular problem. 
So for now\n> let's let sleeping dogs lie.\n\nSeems like the right solution is for somebody to be running a\nbuildfarm animal on Windows with an old perl version.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 20 May 2021 23:04:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "On Thu, May 20, 2021 at 5:55 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 5/20/21 7:15 AM, Michael Paquier wrote:\n> > On Thu, May 20, 2021 at 07:05:05AM -0400, Andrew Dunstan wrote:\n> >> Your version of perl is apparently too old for this. Looks like that\n> >> needs to be 5.22 or later: <https://perldoc.perl.org/perl5220delta>\n> > Hmm. src/test/perl/README tells about 5.8.0. That's quite a jump.\n>\n> Yes. I've pushed a fix that should take care of the issue.\n>\n\nThanks. It is working now.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 21 May 2021 09:03:13 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" }, { "msg_contents": "\nOn 5/20/21 11:04 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 5/20/21 9:49 PM, Michael Paquier wrote:\n>>> Are older versions of the perl MSI that activestate provides hard to\n>>> come by? FWIW, I would not mind if this README and the docs are\n>>> updated to mention that on Windows we require a newer version for this\n>>> set of MSIs.\n>> I've fixed the coding that led to this particular problem. 
So for now\n>> let's let sleeping dogs lie.\n> Seems like the right solution is for somebody to be running a\n> buildfarm animal on Windows with an old perl version.\n>\n> \t\t\t\n\n\nMaybe Amit can :-) Getting hold of old builds isn't always easy.\nStrawberry's downloads page has versions back to 5.14, ActiveState only\nto 5.26.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 21 May 2021 08:29:14 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: multi-install PostgresNode fails with older postgres versions" } ]
[ { "msg_contents": "Hi,\n\nSome catalog oid values originate from other catalogs,\nsuch as pg_aggregate.aggfnoid -> pg_proc.oid\nor pg_attribute.attrelid -> pg_class.oid.\n\nFor such oid values, the foreign catalog is the regclass\nwhich should be passed as the first argument to\nall the functions taking (classid oid, objid oid, objsubid integer)\nas input, i.e. pg_describe_object(), pg_identify_object() and\npg_identify_object_as_address().\n\nAll oids values in all catalogs,\ncan be used with these functions,\nas long as the correct regclass is passed as the first argument,\n*except* pg_enum.oid.\n\n(This is not a problem for pg_enum.enumtypid,\nits regclass is 'pg_type' and works fine.)\n\nI would have expected the regclass to be 'pg_enum'::regclass,\nsince there is no foreign key on pg_enum.oid.\n\nIn a way, pg_enum is similar to pg_attribute,\n pg_enum.enumtypid -> pg_type.oid\nreminds me of\n pg_attribute.attrelid -> pg_class.oid\n\nBut pg_enum has its own oid column as primary key,\nwhereas in pg_attribute we only have a multi-column primary key (attrelid, attnum).\n\nIs this a bug? I.e. should we add support to deal with pg_enum.oid?\n\nOr is this by design?\nIf so, wouldn't it be good to mention this corner-case\nsomewhere in the documentation for pg_identify_object_as_address() et al?\nThat is, to explain these functions works for almost all oid values, except pg_enum.oid.\n\n/Joel\n\n\n\n", "msg_date": "Tue, 30 Mar 2021 21:08:07 +0200", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?Q?Bug=3F_pg=5Fidentify=5Fobject=5Fas=5Faddress()_et_al_doesn't_w?=\n =?UTF-8?Q?ork_with_pg=5Fenum.oid?=" }, { "msg_contents": "\nOn 3/30/21 3:08 PM, Joel Jacobson wrote:\n> Hi,\n>\n> Some catalog oid values originate from other catalogs,\n> such as pg_aggregate.aggfnoid -> pg_proc.oid\n> or pg_attribute.attrelid -> pg_class.oid.\n>\n> For such oid values, the foreign catalog is the regclass\n> which should be passed as the first argument to\n> all the functions taking (classid oid, objid oid, objsubid integer)\n> as input, i.e. 
pg_describe_object(), pg_identify_object() and\n> pg_identify_object_as_address().\n>\n> All oids values in all catalogs,\n> can be used with these functions,\n> as long as the correct regclass is passed as the first argument,\n> *except* pg_enum.oid.\n>\n> (This is not a problem for pg_enum.enumtypid,\n> its regclass is 'pg_type' and works fine.)\n>\n> I would have expected the regclass to be 'pg_enum'::regclass,\n> since there is no foreign key on pg_enum.oid.\n>\n> In a way, pg_enum is similar to pg_attribute,\n>    pg_enum.enumtypid -> pg_type.oid\n> reminds me of\n>    pg_attribute.attrelid -> pg_class.oid\n>\n> But pg_enum has its own oid column as primary key,\n> whereas in pg_attribute we only have a multi-column primary\n> key (attrelid, attnum).\n>\n> Is this a bug? I.e. should we add support to deal with pg_enum.oid?\n>\n> Or is this by design?\n> If so, wouldn't it be good to mention this corner-case\n> somewhere in the documentation for pg_identify_object_as_address() et al?\n> That is, to explain these functions works for almost all oid values,\n> except pg_enum.oid.\n>\n>\n\n\nI think the short answer is it's not a bug. In theory we could provide\nsupport for\n\n\n  pg_describe_object('pg_enum'::regclass, myenum, 0)\n\n\nbut what would it return? There is no sane description other than the\nenum's label, which you can get far more simply.\n\n\nMaybe this small break on orthogonality should be noted, if enough\npeople care, but I doubt we should do anything else.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 2 Apr 2021 17:08:46 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Bug? pg_identify_object_as_address() et al doesn't work with\n pg_enum.oid" } ]
[ { "msg_contents": "Hello all,\n\nWe just upgraded from postgres 11 to 12.6 and our server is running\nout of memory and rebooting about 1-2 times a day. Application\narchitecture is a single threaded stored procedure, executed with CALL\nthat loops and never terminates. With postgres 11 we had no memory\nissues. Ultimately the crash looks like this:\n\nterminate called after throwing an instance of 'std::bad_alloc'\n what(): std::bad_alloc\n2021-03-29 04:34:31.262 CDT [1413] LOG: server process (PID 9792) was\nterminated by signal 6: Aborted\n2021-03-29 04:34:31.262 CDT [1413] DETAIL: Failed process was\nrunning: CALL Main()\n2021-03-29 04:34:31.262 CDT [1413] LOG: terminating any other active\nserver processes\n2021-03-29 04:34:31.264 CDT [9741] WARNING: terminating connection\nbecause of crash of another server process\n2021-03-29 04:34:31.264 CDT [9741] DETAIL: The postmaster has\ncommanded this server process to roll back the current transaction and\nexit, because another server process exited abnormally and possibly\ncorrupted shared memory.\n2021-03-29 04:34:31.264 CDT [9741] HINT: In a moment you should be\nable to reconnect to the database and repeat your command.\n2021-03-29 04:34:31.267 CDT [1413] LOG: archiver process (PID 9742)\nexited with exit code 1\n2021-03-29 04:34:31.267 CDT [1413] LOG: all server processes\nterminated; reinitializing\n\nAttached is a self contained test case which reproduces the problem.\n\nInstructions:\n1. run the attached script in psql, pgtask_test.sql. It will create a\ndatabase, initialize it, and run the main procedure. dblink must be\navailable\n2. in another window, run SELECT CreateTaskChain('test', 'DEV');\n\nIn the console that ran main(), you should see output that the\nprocedure began to do work. Once it does, a 'top' should show resident\nmemory growth immediately. 
It's about a gigabyte an hour in my test.\nSorry for the large-ish attachment.\n\nmerlin", "msg_date": "Tue, 30 Mar 2021 16:17:03 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "unconstrained memory growth in long running procedure stored\n procedure after upgrading 11-12" }, { "msg_contents": "On Tue, Mar 30, 2021 at 04:17:03PM -0500, Merlin Moncure wrote:\n> Hello all,\n> \n> We just upgraded from postgres 11 to 12.6 and our server is running\n> out of memory and rebooting about 1-2 times a day. Application\n> architecture is a single threaded stored procedure, executed with CALL\n> that loops and never terminates. With postgres 11 we had no memory\n> issues. Ultimately the crash looks like this:\n> \n> terminate called after throwing an instance of 'std::bad_alloc'\n> what(): std::bad_alloc\n> 2021-03-29 04:34:31.262 CDT [1413] LOG: server process (PID 9792) was\n> terminated by signal 6: Aborted\n\nI haven't tried your test, but this sounds a lot like the issue I reported with\nJIT, which is enabled by default in v12.\n\nhttps://www.postgresql.org/docs/12/release-12.html\nEnable Just-in-Time (JIT) compilation by default, if the server has been built with support for it (Andres Freund)\nNote that this support is not built by default, but has to be selected explicitly while configuring the build.\n\nhttps://www.postgresql.org/message-id/20201001021609.GC8476%40telsasoft.com\nterminate called after throwing an instance of 'std::bad_alloc'\n\nI suggest to try ALTER SYSTEM SET jit_inline_above_cost=-1; SELECT pg_reload_conf();\n\n> memory growth immediately. 
It's about a gigabyte an hour in my test.\n> Sorry for the large-ish attachment.\n\nYour reproducer is probably much better than mine was.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 30 Mar 2021 18:46:12 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unconstrained memory growth in long running procedure stored\n procedure after upgrading 11-12" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Tue, Mar 30, 2021 at 04:17:03PM -0500, Merlin Moncure wrote:\n>> We just upgraded from postgres 11 to 12.6 and our server is running\n>> out of memory and rebooting about 1-2 times a day.\n\n> I haven't tried your test, but this sounds a lot like the issue I reported with\n> JIT, which is enabled by default in v12.\n\nFWIW, I just finished failing to reproduce any problem with that\ntest case ... but I was using a non-JIT-enabled build.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 20:14:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: unconstrained memory growth in long running procedure stored\n procedure after upgrading 11-12" }, { "msg_contents": "On Tue, Mar 30, 2021 at 04:17:03PM -0500, Merlin Moncure wrote:\n> Instructions:\n> 1. run the attached script in psql, pgtask_test.sql. It will create a\n> database, initialize it, and run the main procedure. dblink must be\n> available\n> 2. 
in another window, run SELECT CreateTaskChain('test', 'DEV');\n\nFor your reproducer, I needed to: \n1.1) comment this:\n|INSERT INTO Task SELECT\n| -- 'test',\n1.2) then run: CALL MAIN();\n\nAnyway I reproduced this without an extension this time:\n\nCREATE OR REPLACE FUNCTION cfn() RETURNS void LANGUAGE PLPGSQL AS $$ declare a record; begin FOR a IN SELECT generate_series(1,99) LOOP PERFORM format('select 1'); END LOOP; END $$;\n$ yes 'SET jit_above_cost=0; SET jit_inline_above_cost=0; SET jit=on; SET client_min_messages=debug; SET log_executor_stats=on; SELECT cfn();' |head -11 |psql 2>&1 |grep 'max resident'\n! 33708 kB max resident size\n! 35956 kB max resident size\n! 37800 kB max resident size\n! 40300 kB max resident size\n! 41928 kB max resident size\n! 43928 kB max resident size\n! 48496 kB max resident size\n! 48964 kB max resident size\n! 50460 kB max resident size\n! 52272 kB max resident size\n! 53740 kB max resident size\n\nThere's also a relatively microscopic leak even if inline is off. It may be\nthat this is what I reproduced last time - I couldn't see how a few hundred kB\nleak was causing a our process to be GB sized. 
It may or may not be a separate\nissue, though.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 30 Mar 2021 23:07:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: unconstrained memory growth in long running procedure stored\n procedure after upgrading 11-12" }, { "msg_contents": "On Tue, Mar 30, 2021 at 7:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Tue, Mar 30, 2021 at 04:17:03PM -0500, Merlin Moncure wrote:\n> >> We just upgraded from postgres 11 to 12.6 and our server is running\n> >> out of memory and rebooting about 1-2 times a day.\n>\n> > I haven't tried your test, but this sounds a lot like the issue I reported with\n> > JIT, which is enabled by default in v12.\n>\n> FWIW, I just finished failing to reproduce any problem with that\n> test case ... but I was using a non-JIT-enabled build.\n\nYep. Disabling jit (fully, fia jit=off, not what was suggested\nupthread) eliminated the issue, or at least highly mitigated the leak.\nI was using pgdg rpm packaging, which enables jit by default. Thanks\neveryone for looking at this, and the workaround is quick and easy.\n\nmerlin\n\n\n", "msg_date": "Wed, 31 Mar 2021 10:01:02 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "Re: unconstrained memory growth in long running procedure stored\n procedure after upgrading 11-12" } ]
[ { "msg_contents": "I've built Postgres inside a Ubuntu Vagrant VM. When I try to \"make check\",\nI get a complaint about the permissions on the data directory:\n\n[....]\npg_regress: initdb failed\nExamine /vagrant/src/test/regress/log/initdb.log for the reason.\nCommand was: \"initdb\" -D \"/vagrant/src/test/regress/./tmp_check/data\"\n--no-clean --no-sync > \"/vagrant/src/test/regress/log/initdb.log\" 2>&1\nmake[1]: *** [GNUmakefile:125: check] Error 2\nmake[1]: Leaving directory '/vagrant/src/test/regress'\nmake: *** [GNUmakefile:69: check] Error 2\nvagrant@ubuntu-focal:/vagrant$ tail /vagrant/src/test/regress/log/initdb.log\ncreating subdirectories ... ok\nselecting dynamic shared memory implementation ... posix\nselecting default max_connections ... 20\nselecting default shared_buffers ... 400kB\nselecting default time zone ... Etc/UTC\ncreating configuration files ... ok\nrunning bootstrap script ... 2021-03-30 21:38:32.746 UTC [23154] FATAL:\n data directory \"/vagrant/src/test/regress/./tmp_check/data\" has invalid\npermissions\n2021-03-30 21:38:32.746 UTC [23154] DETAIL: Permissions should be u=rwx\n(0700) or u=rwx,g=rx (0750).\nchild process exited with exit code 1\ninitdb: data directory \"/vagrant/src/test/regress/./tmp_check/data\" not\nremoved at user's request\nvagrant@ubuntu-focal:/vagrant$\n\nHas anybody had this problem? The directory in question is created by the\nmake check activities so I would have thought that it would set the\npermissions; and if not, then everybody trying to run regression tests\nwould bump into this.\n\n", "msg_date": "Tue, 30 Mar 2021 17:42:30 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Trouble with initdb trying to run regression tests" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> I've built Postgres inside a Ubuntu Vagrant VM. When I try to \"make check\",\n> I get a complaint about the permissions on the data directory:\n\n> vagrant@ubuntu-focal:/vagrant$ tail /vagrant/src/test/regress/log/initdb.log\n> creating subdirectories ... 
ok\n> selecting dynamic shared memory implementation ... posix\n> selecting default max_connections ... 20\n> selecting default shared_buffers ... 400kB\n> selecting default time zone ... Etc/UTC\n> creating configuration files ... ok\n> running bootstrap script ... 2021-03-30 21:38:32.746 UTC [23154] FATAL:\n> data directory \"/vagrant/src/test/regress/./tmp_check/data\" has invalid\n> permissions\n> 2021-03-30 21:38:32.746 UTC [23154] DETAIL: Permissions should be u=rwx\n> (0700) or u=rwx,g=rx (0750).\n> child process exited with exit code 1\n\nFurther up in initdb.log, there was probably some useful information\nabout whether it found an existing directory there or not.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 30 Mar 2021 18:38:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trouble with initdb trying to run regression tests" }, { "msg_contents": "On Tue, 30 Mar 2021 at 18:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > I've built Postgres inside a Ubuntu Vagrant VM. When I try to \"make\n> check\",\n> > I get a complaint about the permissions on the data directory:\n>\n[....]\n\n> Further up in initdb.log, there was probably some useful information\n> about whether it found an existing directory there or not.\n>\n\nSorry for the noise. Turns out that directory creation in /vagrant does not\nrespect umask:\n\nvagrant@ubuntu-focal:/vagrant$ umask\n0027\nvagrant@ubuntu-focal:/vagrant$ mkdir test-umask\nvagrant@ubuntu-focal:/vagrant$ ls -ld test-umask/\ndrwxr-xr-x 1 vagrant vagrant 64 Mar 31 01:12 test-umask/\nvagrant@ubuntu-focal:/vagrant$\n\nI knew that file ownership changes are not processed in /vagrant; and\nbecause I remembered that I checked whether permission mode changes were\naccepted, but didn't think to check whether umask worked. 
When I tried\nagain (git clone, build, make check) in another directory it worked fine.\n\nI was then able to get the tests to run (and pass) in /vagrant by changing\nthe --temp-instance setting in src/Makefile.global (and looks like I should\nbe able to edit src/Makefile.global.in and re-run configure) to a location\noutside of /vagrant. Is there a way to tell configure to override the\nsetting? I ask mostly because src/Makefile.global.in says users shouldn't\nneed to edit it. Otherwise my fix will most likely be to maintain a\none-line update to this file in my checkout.\n\n", "msg_date": "Tue, 30 Mar 2021 21:57:03 -0400", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Trouble with initdb trying to run regression tests" } ]
[ { "msg_contents": "Hackers,\n\nOver on [1] I've been working on adding a new type of executor node\nwhich caches tuples in a hash table belonging to a given cache key.\n\nThe current sole use of this node type is to go between a\nparameterized nested loop and the inner node in order to cache\npreviously seen sets of parameters so that we can skip scanning the\ninner scan for parameter values that we've already cached. The node\ncould also be used to cache results from correlated subqueries,\nalthough that's not done yet.\n\nThe cache limits itself to not use more than hash_mem by evicting the\nleast recently used entries whenever more space is needed for new\nentries.\n\nCurrently, in the patch, the node is named \"Result Cache\". That name\nwas not carefully thought out. I just needed to pick something when\nwriting the code.\n\nHere's an EXPLAIN output with the current name:\n\npostgres=# explain (costs off) select relkind,c from pg_class c1,\nlateral (select count(*) c from pg_class c2 where c1.relkind =\nc2.relkind) c2;\n QUERY PLAN\n----------------------------------------------------\n Nested Loop\n -> Seq Scan on pg_class c1\n -> Result Cache\n Cache Key: c1.relkind\n -> Aggregate\n -> Seq Scan on pg_class c2\n Filter: (c1.relkind = relkind)\n(7 rows)\n\nI just got off a team call with Andres, Thomas and Melanie. During the\ncall I mentioned that I didn't like the name \"Result Cache\". Many name\nsuggestions followed:\n\nHere's a list of a few that were mentioned:\n\nProbe Cache\nTuple Cache\nKeyed Materialize\nHash Materialize\nResult Cache\nCache\nHash Cache\nLazy Hash\nReactive Hash\nParameterized Hash\nParameterized Cache\nKeyed Inner Cache\nMRU Cache\nMRU Hash\n\nI was hoping to commit the final patch pretty soon, but thought I'd\nhave another go at seeing if we can get some consensus on a name\nbefore doing that. 
Otherwise, I'd sort of assumed that we'd just reach\nsome consensus after everyone complained about the current name after\nthe feature is committed.\n\nMy personal preference is \"Lazy Hash\", but I feel it might be better\nto use the word \"Reactive\" instead of \"Lazy\".\n\nThere was some previous discussion on the name in [2]. I suggested\nsome other names in [3]. Andy voted for \"Tuple Cache\" in [4]\n\nVotes? Other suggestions?\n\n(I've included all the people who have shown some previous interest in\nnaming this node.)\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CA%2BTgmoZMxLeanqrS00_p3xNsU3g1v3EKjNZ4dM02ShRxxLiDBw%40mail.gmail.com\n[3] https://www.postgresql.org/message-id/CAApHDvoj_sH1H3JVXgHuwnxf1FQbjRVOqqgxzOgJX13NiA9-cg%40mail.gmail.com\n[4] https://www.postgresql.org/message-id/CAKU4AWoshM0JoymwBK6PKOFDMKg-OO6qtSVU_Piqb0dynxeL5w%40mail.gmail.com\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:29:36 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "What to call an executor node which lazily caches tuples in a hash\n table?" 
}, { "msg_contents": "Hi,\nI was reading this part of the description:\n\nthe Result Cache's\nhash table is much smaller than the hash join's due to result cache only\ncaching useful values rather than all tuples from the inner side of the\njoin.\n\nI think the word 'Result' should be part of the cache name considering the\nabove.\n\nCheers\n\nOn Tue, Mar 30, 2021 at 4:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Hackers,\n>\n> Over on [1] I've been working on adding a new type of executor node\n> which caches tuples in a hash table belonging to a given cache key.\n>\n> The current sole use of this node type is to go between a\n> parameterized nested loop and the inner node in order to cache\n> previously seen sets of parameters so that we can skip scanning the\n> inner scan for parameter values that we've already cached. The node\n> could also be used to cache results from correlated subqueries,\n> although that's not done yet.\n>\n> The cache limits itself to not use more than hash_mem by evicting the\n> least recently used entries whenever more space is needed for new\n> entries.\n>\n> Currently, in the patch, the node is named \"Result Cache\". That name\n> was not carefully thought out. I just needed to pick something when\n> writing the code.\n>\n> Here's an EXPLAIN output with the current name:\n>\n> postgres=# explain (costs off) select relkind,c from pg_class c1,\n> lateral (select count(*) c from pg_class c2 where c1.relkind =\n> c2.relkind) c2;\n> QUERY PLAN\n> ----------------------------------------------------\n> Nested Loop\n> -> Seq Scan on pg_class c1\n> -> Result Cache\n> Cache Key: c1.relkind\n> -> Aggregate\n> -> Seq Scan on pg_class c2\n> Filter: (c1.relkind = relkind)\n> (7 rows)\n>\n> I just got off a team call with Andres, Thomas and Melanie. During the\n> call I mentioned that I didn't like the name \"Result Cache\". 
Many name\n> suggestions followed:\n>\n> Here's a list of a few that were mentioned:\n>\n> Probe Cache\n> Tuple Cache\n> Keyed Materialize\n> Hash Materialize\n> Result Cache\n> Cache\n> Hash Cache\n> Lazy Hash\n> Reactive Hash\n> Parameterized Hash\n> Parameterized Cache\n> Keyed Inner Cache\n> MRU Cache\n> MRU Hash\n>\n> I was hoping to commit the final patch pretty soon, but thought I'd\n> have another go at seeing if we can get some consensus on a name\n> before doing that. Otherwise, I'd sort of assumed that we'd just reach\n> some consensus after everyone complained about the current name after\n> the feature is committed.\n>\n> My personal preference is \"Lazy Hash\", but I feel it might be better\n> to use the word \"Reactive\" instead of \"Lazy\".\n>\n> There was some previous discussion on the name in [2]. I suggested\n> some other names in [3]. Andy voted for \"Tuple Cache\" in [4]\n>\n> Votes? Other suggestions?\n>\n> (I've included all the people who have shown some previous interest in\n> naming this node.)\n>\n> David\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com\n> [2]\n> https://www.postgresql.org/message-id/CA%2BTgmoZMxLeanqrS00_p3xNsU3g1v3EKjNZ4dM02ShRxxLiDBw%40mail.gmail.com\n> [3]\n> https://www.postgresql.org/message-id/CAApHDvoj_sH1H3JVXgHuwnxf1FQbjRVOqqgxzOgJX13NiA9-cg%40mail.gmail.com\n> [4]\n> https://www.postgresql.org/message-id/CAKU4AWoshM0JoymwBK6PKOFDMKg-OO6qtSVU_Piqb0dynxeL5w%40mail.gmail.com\n>\n>\n>\n", "msg_date": "Tue, 30 Mar 2021 16:48:17 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" 
}, { "msg_contents": "On Wed, Mar 31, 2021 at 7:45 AM Zhihong Yu <zyu@yugabyte.com> wrote:\n\n> Hi,\n> I was reading this part of the description:\n>\n> the Result Cache's\n> hash table is much smaller than the hash join's due to result cache only\n> caching useful values rather than all tuples from the inner side of the\n> join.\n>\n> I think the word 'Result' should be part of the cache name considering the\n> above.\n>\n> Cheers\n>\n> On Tue, Mar 30, 2021 at 4:30 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> Hackers,\n>>\n>> Over on [1] I've been working on adding a new type of executor node\n>> which caches tuples in a hash table belonging to a given cache key.\n>>\n>> The current sole use of this node type is to go between a\n>> parameterized nested loop and the inner node in order to cache\n>> previously seen sets of parameters so that we can skip scanning the\n>> inner scan for parameter values that we've already cached. The node\n>> could also be used to cache results from correlated subqueries,\n>> although that's not done yet.\n>>\n>> The cache limits itself to not use more than hash_mem by evicting the\n>> least recently used entries whenever more space is needed for new\n>> entries.\n>>\n>> Currently, in the patch, the node is named \"Result Cache\". That name\n>> was not carefully thought out. I just needed to pick something when\n>> writing the code.\n>>\n>> Here's an EXPLAIN output with the current name:\n>>\n>> postgres=# explain (costs off) select relkind,c from pg_class c1,\n>> lateral (select count(*) c from pg_class c2 where c1.relkind =\n>> c2.relkind) c2;\n>> QUERY PLAN\n>> ----------------------------------------------------\n>> Nested Loop\n>> -> Seq Scan on pg_class c1\n>> -> Result Cache\n>> Cache Key: c1.relkind\n>> -> Aggregate\n>> -> Seq Scan on pg_class c2\n>> Filter: (c1.relkind = relkind)\n>> (7 rows)\n>>\n>> I just got off a team call with Andres, Thomas and Melanie. 
During the\n>> call I mentioned that I didn't like the name \"Result Cache\". Many name\n>> suggestions followed:\n>>\n>> Here's a list of a few that were mentioned:\n>>\n>> Probe Cache\n>> Tuple Cache\n>> Keyed Materialize\n>> Hash Materialize\n>> Result Cache\n>> Cache\n>> Hash Cache\n>> Lazy Hash\n>> Reactive Hash\n>> Parameterized Hash\n>> Parameterized Cache\n>> Keyed Inner Cache\n>> MRU Cache\n>> MRU Hash\n>>\n>> I was hoping to commit the final patch pretty soon, but thought I'd\n>> have another go at seeing if we can get some consensus on a name\n>> before doing that. Otherwise, I'd sort of assumed that we'd just reach\n>> some consensus after everyone complained about the current name after\n>> the feature is committed.\n>>\n>> My personal preference is \"Lazy Hash\", but I feel it might be better\n>> to use the word \"Reactive\" instead of \"Lazy\".\n>>\n>> There was some previous discussion on the name in [2]. I suggested\n>> some other names in [3]. Andy voted for \"Tuple Cache\" in [4]\n>>\n>> Votes? Other suggestions?\n>>\n>> (I've included all the people who have shown some previous interest in\n>> naming this node.)\n>>\n>> David\n>>\n>> [1]\n>> https://www.postgresql.org/message-id/flat/CAApHDvrPcQyQdWERGYWx8J%2B2DLUNgXu%2BfOSbQ1UscxrunyXyrQ%40mail.gmail.com\n>> [2]\n>> https://www.postgresql.org/message-id/CA%2BTgmoZMxLeanqrS00_p3xNsU3g1v3EKjNZ4dM02ShRxxLiDBw%40mail.gmail.com\n>> [3]\n>> https://www.postgresql.org/message-id/CAApHDvoj_sH1H3JVXgHuwnxf1FQbjRVOqqgxzOgJX13NiA9-cg%40mail.gmail.com\n>> [4]\n>> https://www.postgresql.org/message-id/CAKU4AWoshM0JoymwBK6PKOFDMKg-OO6qtSVU_Piqb0dynxeL5w%40mail.gmail.com\n>>\n>>\n>>\nI want to share some feelings about other keywords. 
Materialize are used\nin\nMaterialize node in executor node, which would write data to disk when\nmemory\nis not enough, and it is used in \"Materialized View\", where it stores all\nthe data to disk\nThis gives me some feeling that \"Materialize\" usually has something with\ndisk,\nbut our result cache node doesn't.\n\nAnd I think DBA checks plans more than the PostgreSQL developer. So\nsome MRU might be too internal for them. As for developers, if they want to\nknow such details, they can just read the source code.\n\nWhen naming it, we may also think about some non native English speakers,\nso\nsome too advanced words may make them uncomfortable. Actually when I read\n\"Reactive\", I googled to find what its meaning is. I knew reactive\nprogramming, but I\ndo not truly understand \"reactive hash\". And Compared with HashJoin, Hash\nmay\nmislead people the result may be spilled into disk as well. so I prefer\n\"Cache\"\nover \"Hash\".\n\n At last, I still want to vote for \"Tuple(s) Cache\", which sounds simple\nand enough.\nI was thinking if we need to put \"Lazy\" in the node name since we do build\ncache\nlazily, then I found we didn't call \"Materialize\" as \"Lazy Materialize\",\nso I think we\ncan keep consistent.\n\n> I was hoping to commit the final patch pretty soon\n\nVery glad to see it, thanks for the great feature.\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)", "msg_date": "Wed, 31 Mar 2021 09:43:05 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" }, { "msg_contents": "On Wed, 31 Mar 2021 at 14:43, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> When naming it, we may also think about some non native English speakers, so\n> some too advanced words may make them uncomfortable. Actually when I read\n> \"Reactive\", I googled to find what its meaning is. I knew reactive programming, but I\n> do not truly understand \"reactive hash\".\n\nThe origin of that idea came from \"reactive\" being the opposite of\n\"proactive\". If that's not clear then it's likely a bad choice for a\nname.\n\nI had thought proactive would mean \"do things beforehand\" i.e not on\ndemand. Basically, just fill the hash table with records that we need\nto put in it rather than all records that we might need, the latter\nbeing what Hash Join does, and the former is what the new node does.\n\nDavid\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:01:55 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" 
}, { "msg_contents": "On Wed, 31 Mar 2021 at 14:43, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> At last, I still want to vote for \"Tuple(s) Cache\", which sounds simple and enough.\n> I was thinking if we need to put \"Lazy\" in the node name since we do build cache\n> lazily, then I found we didn't call \"Materialize\" as \"Lazy Materialize\", so I think we\n> can keep consistent.\n\nI thought about this a little more and I can see now why I put the\nword \"Cache\" in the original name. I do now agree we really need to\nkeep the word \"Cache\" in the name.\n\nThe EXPLAIN ANALYZE talks about \"hits\", \"misses\" and \"evictions\", all\nof those are things that caches do.\n\n -> Result Cache (actual rows=1 loops=403)\n Cache Key: c1.relkind\n Hits: 398 Misses: 5 Evictions: 0 Overflows: 0 Memory Usage: 1kB\n -> Aggregate (actual rows=1 loops=5)\n\nI don't think there's any need to put the word \"Lazy\" in the name as\nif we're keeping \"Cache\", then most caches do only cache results of\nvalues that have been looked for.\n\nI'm just not sure if \"Tuple\" is the best word or not. I primarily\nthink of \"tuple\" as the word we use internally, but a quick grep of\nthe docs reminds me that's not the case. The word is used all over the\ndocuments. We have GUCs like parallel_tuple_cost and cpu_tuple_cost.\nSo it does seem like the sort of thing anyone who is interested in\nlooking at the EXPLAIN output should know about. I'm just not\nmassively keen on using that word in the name. The only other options\nthat come to mind are \"Result\" and \"Parameterized\". However, I think\n\"Parameterized\" does not add much meaning. I think most people would\nexpect a cache to have a key. 
I sort of see why I went with \"Result\nCache\" now.\n\nDoes anyone else like the name \"Tuple Cache\"?\n\nDavid\n\n\n", "msg_date": "Wed, 31 Mar 2021 17:06:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" }, { "msg_contents": "Hi,\n\nOn 2021-03-31 12:29:36 +1300, David Rowley wrote:\n> Here's a list of a few that were mentioned:\n> \n> Probe Cache\n> Tuple Cache\n> Keyed Materialize\n> Hash Materialize\n> Result Cache\n> Cache\n> Hash Cache\n> Lazy Hash\n> Reactive Hash\n> Parameterized Hash\n> Parameterized Cache\n> Keyed Inner Cache\n> MRU Cache\n> MRU Hash\n\nSuggestion: ParamKeyedCache.\n\nI don't like Parameterized because it makes it sound like the cache is\nparameterized (e.g. size), rather than using Param values as the keys\nfor the cache. ParamKeyed indicates that Params are the key, rather\nthan configuring the cache...\n\nI don't like \"result cache\" all that much, because it does sound like\nwe'd be caching query results etc, or that it might be referring to\nResult nodes. Neither of which is the case...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Mar 2021 14:19:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" }, { "msg_contents": "> Does anyone else like the name \"Tuple Cache\"?\nI personally like that name best.\n\nIt makes sense to me when thinking about looking at an EXPLAIN and trying\nto understand why this node may be there. 
The way we look up the value\nstored in the cache doesn't really matter to me as a user, I'm more\nthinking about the reason the node is there in the first place, which Tuple\nCache conveys...at least for me.\n\n", "msg_date": "Wed, 31 Mar 2021 18:22:34 -0400", "msg_from": "Adam Brusselback <adambrusselback@gmail.com>", "msg_from_op": false, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table?" }, { "msg_contents": "You started the thread about what to call the node, but what about its GUC?\n\nShould it be enable_result_cache instead of enable_resultcache?\n\nSee also Robert's opinion last year about enable_incrementalsort and \"economizing on underscores\".\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoazhPwebpOwNbHt1vJoos1eLYhJVQPka%2BpptSLgS685aA%40mail.gmail.com#e6d5ea0a10384d540b924688c6357b21\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 2 Jun 2021 21:36:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table? 
(GUC)" }, { "msg_contents": "On Thu, 3 Jun 2021 at 14:36, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> You started the thread about what to call the node, but what about its GUC?\n>\n> Should it be enable_result_cache instead of enable_resultcache?\n>\n> See also Robert's opinion last year about enable_incrementalsort and \"economizing on underscores\".\n> https://www.postgresql.org/message-id/flat/CA%2BTgmoazhPwebpOwNbHt1vJoos1eLYhJVQPka%2BpptSLgS685aA%40mail.gmail.com#e6d5ea0a10384d540b924688c6357b21\n\nIf starting fresh, either is ok for me. We already have a mix and we\nended up adding the underscore to enable_incremental_sort as people\nthought it was more clear. I did vote to keep it as it was without the\nunderscore as it seemed to follow the other enable* GUCs more closely.\n\nI get the idea that we changed enable_incrementalsort because the word\nincremental was fairly long when you compare it to things like hash,\nnest and merge. \"result\" is not too much longer so I don't really\nfeel like the name enable_resultcache is wrong. Since it's already\ncalled that, I'd rather not confuse things.\n\nTo be honest, I'd rather the discussion was about whether we want to\nactually call the entire node \"Result Cache\" in the first place. I\ndid do an internet search a few weeks ago on \"Result Cache\" just to\nsee if anyone was talking about it anywhere that I might miss. I\ndiscovered that Oracle has something of the same name, but going by\n[1] it appears that function is for caching results between queries.\nThat's obviously not what PostgreSQL's Result Cache does. After\nlearning that, I dislike the name \"Result Cache\" even more.\n\nFWIW, I'm still willing to go and do the renaming work for the node,\nbut the time we have to do that is ticking away fast and so far\nthere's no consensus [2] on any other name. 
Just a few people\nsuggesting other names.\n\nDavid\n\n[1] https://logicalread.com/huge-performance-gains-using-oracle-11g-result-cache-dr01/#.YLhEjLczb9Q\n[2] https://www.postgresql.org/message-id/CAApHDvq=yQXr5kqhRviT2RhNKwToaWr9JAN5t+5_PzhuRJ3wvg@mail.gmail.com\n\n\n", "msg_date": "Thu, 3 Jun 2021 15:04:33 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: What to call an executor node which lazily caches tuples in a\n hash table? (GUC)" } ]
[ { "msg_contents": "Hi,\n\nJust noted an interesting behaviour when using a cursor in a function\nin an UPDATE RETURNING (note that INSERT RETURNING has no problem).\n\nI have seen this problem in all versions I tested (9.4 thru master).\nSteps to reproduce:\n\nprepare the test\n```\ncreate table t1 as select random() * foo i from generate_series(1, 100) foo;\ncreate table t2 as select random() * foo i from generate_series(1, 100) foo;\n\nCREATE OR REPLACE FUNCTION cursor_bug()\n RETURNS integer\n LANGUAGE plpgsql\nAS $function$\ndeclare\n c1 cursor (p1 int) for select count(*) from t1 where i = p1;\n n int4;\nbegin\n open c1 (77);\n fetch c1 into n;\n return n;\nend $function$\n;\n```\n\n-- this ends fine\ninsert into t2 values(5) returning cursor_bug() as c1;\n c1\n----\n 0\n(1 row)\n\n-- this fails\nupdate t2 set i = 5 returning cursor_bug() as c1;\nERROR: cursor \"c1\" already in use\nCONTEXT: PL/pgSQL function cursor_bug() line 6 at OPEN\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n", "msg_date": "Tue, 30 Mar 2021 19:39:14 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "cursor already in use, UPDATE RETURNING bug?" 
}, { "msg_contents": "On Wed, Mar 31, 2021 at 6:09 AM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> Hi,\n>\n> Just noted an interesting behaviour when using a cursor in a function\n> in an UPDATE RETURNING (note that INSERT RETURNING has no problem).\n>\n> I have seen this problem in all versions I tested (9.4 thru master).\n> Steps to reproduce:\n>\n> prepare the test\n> ```\n> create table t1 as select random() * foo i from generate_series(1, 100) foo;\n> create table t2 as select random() * foo i from generate_series(1, 100) foo;\n>\n> CREATE OR REPLACE FUNCTION cursor_bug()\n> RETURNS integer\n> LANGUAGE plpgsql\n> AS $function$\n> declare\n> c1 cursor (p1 int) for select count(*) from t1 where i = p1;\n> n int4;\n> begin\n> open c1 (77);\n> fetch c1 into n;\n> return n;\n> end $function$\n> ;\n> ```\n>\n> -- this ends fine\n> insert into t2 values(5) returning cursor_bug() as c1;\n> c1\n> ----\n> 0\n> (1 row)\n\ncursor_bug() is called only once here.\n\n>\n> -- this fails\n> update t2 set i = 5 returning cursor_bug() as c1;\n> ERROR: cursor \"c1\" already in use\n> CONTEXT: PL/pgSQL function cursor_bug() line 6 at OPEN\n\nbut that's called as many time as the number of rows in t2 in the same\ntransaction. The first row will go fine. For the second row it will\nfind c1 is already open. Shouldn't cursor_bug() close c1 at the end?\nIs it intended to be kept open when the function finishes? May be you\nare expecting it to be closed automatically when the function\nfinishes. But that's not what is documented at\nhttps://www.postgresql.org/docs/13/plpgsql-cursors.html.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 31 Mar 2021 18:20:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: cursor already in use, UPDATE RETURNING bug?" 
}, { "msg_contents": "On Wed, Mar 31, 2021 at 7:50 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Mar 31, 2021 at 6:09 AM Jaime Casanova\n>\n> >\n> > -- this fails\n> > update t2 set i = 5 returning cursor_bug() as c1;\n> > ERROR: cursor \"c1\" already in use\n> > CONTEXT: PL/pgSQL function cursor_bug() line 6 at OPEN\n>\n> but that's called as many time as the number of rows in t2 in the same\n> transaction. The first row will go fine. For the second row it will\n> find c1 is already open. Shouldn't cursor_bug() close c1 at the end?\n> Is it intended to be kept open when the function finishes? May be you\n> are expecting it to be closed automatically when the function\n> finishes. But that's not what is documented at\n> https://www.postgresql.org/docs/13/plpgsql-cursors.html.\n>\n\nNow that I see it again, after sleeping, I can see you're right! sorry\nfor the noise\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 31 Mar 2021 09:47:08 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: cursor already in use, UPDATE RETURNING bug?" } ]
[ { "msg_contents": "Greetings,\n\nI did some research on this bug and found that the reason for the problem\nis that the pg_dump misjudged the non-global default access privileges when\nexporting. The details are as follows:\n\n> The default for a global entry is the hard-wired default ACL for the\n> particular object type. The default for non-global entries is an empty\n> ACL. This must be so because global entries replace the hard-wired\n> defaults, while others are added on.\n>\nWe can find this description in code\ncomments(src/backend/catalog/aclchk.c:1162). For example, if we log as user\npostgres, for global entire our default ACL is\n\"{=X/postgres,postgres=X/postgres}\", for non-global entire it's \"NULL\".\n\nNow let's look at a part of the SQL statement used when pg_dump exports the\ndefault ACL(it can be found in src/bin/pg_dump/dumputils.c:762):\n\n> (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM\n> (SELECT acl, row_n FROM\n> pg_catalog.unnest(coalesce(defaclacl,pg_catalog.acldefault(CASE WHEN\n> defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))\n> WITH ORDINALITY AS perm(acl,row_n)\n> WHERE NOT EXISTS (\n> SELECT 1 FROM\n> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN\n> defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))\n> AS init(init_acl) WHERE acl = init_acl)) as foo)\n\nIt can be seen that when comparing the changes of default ACL, it does not\ndistinguish between global and non-global default ACL. It uses\n{=X/postgres,postgres=X/postgres} as the non-global default ACL by mistake,\nresulting in the export error.\n\nCombined with the above research, I gave this patch to fix the bug. Hackers\ncan help to see if this modification is correct. 
I'm studying how to write\ntest scripts for it...\n\nThanks.\n\n-- \nThere is no royal road to learning.\nHighGo Software Co.", "msg_date": "Wed, 31 Mar 2021 11:02:00 +0800", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Sorry I used the wrong way to send the email. The email about the bug is\nhere:\nhttps://www.postgresql.org/message-id/111621616618184%40mail.yandex.ru\n\nOn Wed, Mar 31, 2021 at 11:02 AM Neil Chen <carpenter.nail.cz@gmail.com>\nwrote:\n\n> Greetings,\n>\n> I did some research on this bug and found that the reason for the problem\n> is that the pg_dump misjudged the non-global default access privileges when\n> exporting. The details are as follows:\n>\n>> The default for a global entry is the hard-wired default ACL for the\n>> particular object type. The default for non-global entries is an empty\n>> ACL. This must be so because global entries replace the hard-wired\n>> defaults, while others are added on.\n>>\n> We can find this description in code\n> comments(src/backend/catalog/aclchk.c:1162). 
For example, if we log as user\n> postgres, for global entire our default ACL is\n> \"{=X/postgres,postgres=X/postgres}\", for non-global entire it's \"NULL\".\n>\n> Now let's look at a part of the SQL statement used when pg_dump exports\n> the default ACL(it can be found in src/bin/pg_dump/dumputils.c:762):\n>\n>> (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM\n>> (SELECT acl, row_n FROM\n>> pg_catalog.unnest(coalesce(defaclacl,pg_catalog.acldefault(CASE WHEN\n>> defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))\n>> WITH ORDINALITY AS perm(acl,row_n)\n>> WHERE NOT EXISTS (\n>> SELECT 1 FROM\n>> pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN\n>> defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))\n>> AS init(init_acl) WHERE acl = init_acl)) as foo)\n>\n> It can be seen that when comparing the changes of default ACL, it does not\n> distinguish between global and non-global default ACL. It uses\n> {=X/postgres,postgres=X/postgres} as the non-global default ACL by mistake,\n> resulting in the export error.\n>\n> Combined with the above research, I gave this patch to fix the\n> bug. Hackers can help to see if this modification is correct. I'm studying\n> how to write test scripts for it...\n>\n> Thanks.\n>\n> --\n> There is no royal road to learning.\n> HighGo Software Co.\n>\n\n\n-- \nThere is no royal road to learning.\nHighGo Software Co.", "msg_date": "Wed, 31 Mar 2021 11:30:54 +0800", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Hi Neil,\n\n> Combined with the above research, I gave this patch to fix the \n> bug. 
I'm \n> studying how to write test scripts for it...\n\nit works. Thx.\n\nWBR,\nBoris\n\n\n", "msg_date": "Fri, 2 Apr 2021 17:09:14 +0300", "msg_from": "\"Boris P. Korzun\" <drtr0jan@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Hi Neil, what about the commit to the upstream? 31.03.2021, 06:02, \"Neil Chen\" <carpenter.nail.cz@gmail.com>:Greetings, I did some research on this bug and found that the reason for the problem is that the pg_dump misjudged the non-global default access privileges when exporting. The details are as follows:The default for a global entry is the hard-wired default ACL for theparticular object type.  The default for non-global entries is an emptyACL.  This must be so because global entries replace the hard-wireddefaults, while others are added on.We can find this description in code comments(src/backend/catalog/aclchk.c:1162). For example, if we log as user postgres, for global entire our default ACL is \"{=X/postgres,postgres=X/postgres}\", for non-global entire it's \"NULL\". Now let's look at a part of the SQL statement used when pg_dump exports the default ACL(it can be found in src/bin/pg_dump/dumputils.c:762):(SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM(SELECT acl, row_n FROMpg_catalog.unnest(coalesce(defaclacl,pg_catalog.acldefault(CASE WHEN defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))WITH ORDINALITY AS perm(acl,row_n)WHERE NOT EXISTS (SELECT 1 FROMpg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\"char\",defaclrole)))AS init(init_acl) WHERE acl = init_acl)) as foo) It can be seen that when comparing the changes of default ACL, it does not distinguish between global and non-global default ACL. It uses {=X/postgres,postgres=X/postgres} as the non-global default ACL by mistake, resulting in the export error. 
Combined with the above research, I gave this patch to fix the bug. Hackers can help to see if this modification is correct. I'm studying how to write test scripts for it... Thanks. --There is no royal road to learning.HighGo Software Co.---WBRBoris", "msg_date": "Tue, 21 Sep 2021 08:04:58 +0300", "msg_from": "Boris P. Korzun <drtr0jan@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Hi Boris,\n\nActually, because I am a PG beginner, I am not familiar with the rules of\nthe community. What extra work do I need to do to submit to the upstream?\nThis bug discussion doesn't seem to see the concern of others.\n\nOn Tue, Sep 21, 2021 at 1:05 PM Boris P. Korzun <drtr0jan@yandex.ru> wrote:\n\n> Hi Neil,\n>\n> what about the commit to the upstream?\n>\n> ---\n> WBR\n> Boris\n>\n\n\n-- \nThere is no royal road to learning.\nHighGo Software Co.", "msg_date": "Wed, 22 Sep 2021 09:30:50 +0800", "msg_from": "Neil Chen <carpenter.nail.cz@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Wed, Sep 22, 2021 at 10:31 AM Neil Chen <carpenter.nail.cz@gmail.com> wrote:\n>\n> Hi Boris,\n>\n> Actually, because I am a PG beginner, I am not familiar with the rules of the community. What extra work do I need to do to submit to the upstream? 
This bug discussion doesn't seem to see the concern of others.\n\nAs far as I checked this bug still exists in all supported branches\n(from 10 to 14, and HEAD). I'd recommend adding this patch to the next\ncommit fest so as not to forget, if not yet.\n\nI agree with your analysis on this bug. For non-default\n(defaclnamespace != 0) entries, their acl should be compared to NULL.\n\nThe fix also looks good to me. But I think it'd be better to add tests for this.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 1 Oct 2021 23:13:45 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Hi Neil,\n\nyou should send the patch via e-mail to the pgsql-hackers ( \nhttp://www.postgresql.org/list/pgsql-hackers/ ) mailing list for adding \nto the next commit fest as Masahiko Sawada said.\n\nI can help you if you have any questions.\n\nOn 22/09/2021 04:30, Neil Chen wrote:\n> Hi Boris,\n>\n> Actually, because I am a PG beginner, I am not familiar with the rules \n> of the community. What extra work do I need to do to submit to the \n> upstream? This bug discussion doesn't seem to see the concern of others.\n>\n> On Tue, Sep 21, 2021 at 1:05 PM Boris P. 
Korzun <drtr0jan@yandex.ru> \n> wrote:\n>\n> Hi Neil,\n> what about the commit to the upstream?\n> ---\n> WBR\n> Boris\n>\n>\n>\n> -- \n> There is no royal road to learning.\n> HighGo Software Co.\n\n---\n\nWBR\n\nBoris\n\n\n\n", "msg_date": "Wed, 13 Oct 2021 11:22:49 +0300", "msg_from": "Boris Korzun <drtr0jan@yandex.ru>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Hi,\n\nOn Fri, Oct 1, 2021 at 11:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 22, 2021 at 10:31 AM Neil Chen <carpenter.nail.cz@gmail.com> wrote:\n> >\n> > Hi Boris,\n> >\n> > Actually, because I am a PG beginner, I am not familiar with the rules of the community. What extra work do I need to do to submit to the upstream? This bug discussion doesn't seem to see the concern of others.\n>\n> As far as I checked this bug still exists in all supported branches\n> (from 10 to 14, and HEAD). I'd recommend adding this patch to the next\n> commit fest so as not to forget, if not yet.\n>\n> I agree with your analysis on this bug. For non-default\n> (defaclnamespace != 0) entries, their acl should be compared to NULL.\n>\n> The fix also looks good to me. 
But I think it'd be better to add tests for this.\n\nSince the patch conflicts with the current HEAD, I've rebased and\nslightly updated the patch, adding the regression tests.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 14 Oct 2021 09:36:55 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Fri, Oct 1, 2021 at 11:13 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Sep 22, 2021 at 10:31 AM Neil Chen <carpenter.nail.cz@gmail.com> wrote:\n> >\n> > Hi Boris,\n> >\n> > Actually, because I am a PG beginner, I am not familiar with the rules of the community. What extra work do I need to do to submit to the upstream? This bug discussion doesn't seem to see the concern of others.\n>\n> As far as I checked this bug still exists in all supported branches\n> (from 10 to 14, and HEAD). I'd recommend adding this patch to the next\n> commit fest so as not to forget, if not yet.\n>\n> I agree with your analysis on this bug. For non-default\n> (defaclnamespace != 0) entries, their acl should be compared to NULL.\n>\n> The fix also looks good to me. But I think it'd be better to add tests for this.\n>\n\nSince the patch conflicts with the current HEAD, I've rebased and\nslightly updated the patch, adding the regression tests.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 14 Oct 2021 09:59:23 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n>> I agree with your analysis on this bug. For non-default\n>> (defaclnamespace != 0) entries, their acl should be compared to NULL.\n>> \n>> The fix also looks good to me. 
But I think it'd be better to add tests for this.\n\n> Since the patch conflicts with the current HEAD, I've rebased and\n> slightly updated the patch, adding the regression tests.\n\nHmmm ... if we're adding a comment about the defaclnamespace check,\nseems like it would also be a nice idea to explain the S-to-s\ntransformation, because the reason for that is surely not apparent.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 13 Oct 2021 22:55:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Thu, Oct 14, 2021 at 11:55 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> >> I agree with your analysis on this bug. For non-default\n> >> (defaclnamespace != 0) entries, their acl should be compared to NULL.\n> >>\n> >> The fix also looks good to me. But I think it'd be better to add tests for this.\n>\n> > Since the patch conflicts with the current HEAD, I've rebased and\n> > slightly updated the patch, adding the regression tests.\n>\n> Hmmm ... if we're adding a comment about the defaclnamespace check,\n> seems like it would also be a nice idea to explain the S-to-s\n> transformation, because the reason for that is surely not apparent.\n>\n\nAgreed. Please find an attached new patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Thu, 14 Oct 2021 14:22:21 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Thu, Oct 14, 2021 at 02:22:21PM +0900, Masahiko Sawada wrote:\n> Agreed. 
Please find an attached new patch.\n\nI have not dived into the details of the patch yet, but I can see the\nfollowing diffs in some of the dumps dropped by the new test added\nbetween HEAD and the patch:\n1) For DEFAULT PRIVILEGES FOR FUNCTIONS:\n-ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n dump_test REVOKE ALL ON FUNCTIONS FROM PUBLIC;\n+ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n dump_test GRANT ALL ON FUNCTIONS TO regress_dump_test_role;\n2) For DEFAULT PRIVILEGES FOR TABLES:\n-ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n dump_test REVOKE ALL ON TABLES FROM regress_dump_test_role;\n ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n dump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\n\nSo the patch removes a REVOKE ALL ON TABLES on\nregress_dump_test_role after the addition of only the GRANT EXECUTE ON\nFUNCTIONS. That seems off. Am I missing something?\n--\nMichael", "msg_date": "Thu, 14 Oct 2021 16:53:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/14/21, 12:55 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> So the patch removes a REVOKE ALL ON TABLES on\r\n> regress_dump_test_role after the addition of only the GRANT EXECUTE ON\r\n> FUNCTIONS. That seems off. Am I missing something?\r\n\r\nThis issue is also tracked here:\r\n\r\n https://commitfest.postgresql.org/35/3288/\r\n\r\nI had attempted to fix this by replacing acldefault() with NULL when\r\ndefaclnamespace was 0. 
From a quick glance, the patch in this thread\r\nseems to be adjusting obj_kind based on whether defaclnamespace is 0.\r\nI think this has the same effect because acldefault() is STRICT.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 14 Oct 2021 16:02:15 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/14/21, 12:55 AM, \"Michael Paquier\" <michael@paquier.xyz> wrote:\r\n> 1) For DEFAULT PRIVILEGES FOR FUNCTIONS:\r\n> -ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\r\n> dump_test REVOKE ALL ON FUNCTIONS FROM PUBLIC;\r\n> +ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\r\n> dump_test GRANT ALL ON FUNCTIONS TO regress_dump_test_role;\r\n\r\nThis one looks correct to me.\r\n\r\n> 2) For DEFAULT PRIVILEGES FOR TABLES:\r\n> -ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\r\n> dump_test REVOKE ALL ON TABLES FROM regress_dump_test_role;\r\n> ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\r\n> dump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\r\n>\r\n> So the patch removes a REVOKE ALL ON TABLES on\r\n> regress_dump_test_role after the addition of only the GRANT EXECUTE ON\r\n> FUNCTIONS. That seems off. Am I missing something?\r\n\r\nI might be missing something as well, but this one looks correct to\r\nme, too. 
I suspect that REVOKE statement was generated by comparing\r\nagainst the wrong default ACL and that it actually has no effect.\r\n\r\nNathan\r\n\r\n", "msg_date": "Thu, 14 Oct 2021 16:13:52 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Thu, Oct 14, 2021 at 4:53 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Oct 14, 2021 at 02:22:21PM +0900, Masahiko Sawada wrote:\n> > Agreed. Please find an attached new patch.\n>\n> I have not dived into the details of the patch yet, but I can see the\n> following diffs in some of the dumps dropped by the new test added\n> between HEAD and the patch:\n\nI've checked where these differences come from:\n\n> 1) For DEFAULT PRIVILEGES FOR FUNCTIONS:\n> -ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n> dump_test REVOKE ALL ON FUNCTIONS FROM PUBLIC;\n> +ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n> dump_test GRANT ALL ON FUNCTIONS TO regress_dump_test_role;\n\nThe test query and the default privileges I got are:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test GRANT EXECUTE ON FUNCTIONS TO regress_dump_test_role;\n\n Default access privileges\n Owner | Schema | Type | Access\nprivileges\n------------------------+-----------+----------+-------------------------------------------------\n regress_dump_test_role | dump_test | function |\nregress_dump_test_role=X/regress_dump_test_role\n(1 row)\n\nThe query dumped by the current pg_dump (i.g., HEAD, w/o patch) is:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test REVOKE ALL ON FUNCTIONS FROM PUBLIC;\n\nThe query dumped by pg_dump with the patch is:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test GRANT ALL ON FUNCTIONS TO regress_dump_test_role;\n\nThe query dumped by the current pg_dump 
is wrong and the patch fixes\nit. This difference looks good to me.\n\n> 2) For DEFAULT PRIVILEGES FOR TABLES:\n> -ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n> dump_test REVOKE ALL ON TABLES FROM regress_dump_test_role;\n> ALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\n> dump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\n\nThe test query and the default privileges I got are:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\n\n Default access privileges\n Owner | Schema | Type | Access privileges\n------------------------+-----------+-------+-------------------------------------------------\n regress_dump_test_role | dump_test | table |\nregress_dump_test_role=r/regress_dump_test_role\n(1 row)\n\nThe query dumped by the current pg_dump (i.g., HEAD, w/o patch) is:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test REVOKE ALL ON TABLES FROM regress_dump_test_role;\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\n\nThe query dumped by pg_dump with the patch is:\n\nALTER DEFAULT PRIVILEGES FOR ROLE regress_dump_test_role IN SCHEMA\ndump_test GRANT SELECT ON TABLES TO regress_dump_test_role;\n\n\nThe current pg_dump produced a REVOKE ALL ON TABLES FROM\nregress_dump_test_role but it seems unnecessary. 
The patch removes it\nso looks good to me too.\n\nRegards,\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 15 Oct 2021 09:05:07 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/14/21, 5:06 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\r\n> The current pg_dump produced a REVOKE ALL ON TABLES FROM\r\n> regress_dump_test_role but it seems unnecessary. The patch removes it\r\n> so looks good to me too.\r\n\r\n+1\r\n\r\nIf we are going to proceed with the patch in this thread, I think we\r\nshould also mention in the comment that we are depending on\r\nacldefault() being STRICT. This patch is quite a bit smaller than\r\nwhat I had proposed, but AFAICT it produces the same result.\r\n\r\nNathan\r\n\r\n", "msg_date": "Mon, 18 Oct 2021 23:46:56 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Tue, Oct 19, 2021 at 8:47 AM Bossart, Nathan <bossartn@amazon.com> wrote:\n>\n> On 10/14/21, 5:06 PM, \"Masahiko Sawada\" <sawada.mshk@gmail.com> wrote:\n> > The current pg_dump produced a REVOKE ALL ON TABLES FROM\n> > regress_dump_test_role but it seems unnecessary. The patch removes it\n> > so looks good to me too.\n>\n> +1\n>\n\nI've looked at the patch you proposed. If we can depend on\nacldefault() being STRICT (which is legitimate to me), I think we\ndon't need to build an expression depending on the caller (e.g.,\nis_default_acl). If acldefault() were to become not STRICT, we could\ndetect it by regression tests. 
What do you think?\n\n> If we are going to proceed with the patch in this thread, I think we\n> should also mention in the comment that we are depending on\n> acldefault() being STRICT.\n\nI've updated the patch.\n\n> This patch is quite a bit smaller than\n> what I had proposed, but AFAICT it produces the same result.\n\nYes. I've also confirmed both produce the same result.\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/", "msg_date": "Tue, 19 Oct 2021 11:04:32 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> I've looked at the patch proposed you proposed. If we can depend on\n> acldefault() being STRICT (which is legitimate to me), I think we\n> don't need to build an expression depending on the caller (i.g.,\n> is_default_acl). If acldefault() were to become not STRICT, we could\n> detect it by regression tests. What do you think?\n\nFWIW, I'm working on a refactoring of this logic that will bring the\nacldefault() call into the getDefaultACLs code, which would mean that\nwe won't need that assumption anymore anyway. The code as I have it\nproduces SQL like\n\n acldefault(CASE WHEN defaclobjtype = 'S'\n THEN 's'::\"char\" ELSE defaclobjtype END, defaclrole) AS acldefault\n\nand we could wrap the test-for-zero around that:\n\n CASE WHEN defaclnamespace = 0 THEN\n acldefault(CASE WHEN defaclobjtype = 'S'\n THEN 's'::\"char\" ELSE defaclobjtype END, defaclrole)\n ELSE NULL END AS acldefault\n\n(although I think it might be better to write ELSE '{}' not ELSE NULL).\n\nSo I think we don't need to worry about whether acldefault() will stay\nstrict. 
This patch will only need to work in the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 18 Oct 2021 23:19:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/18/21, 8:20 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\r\n>> I've looked at the patch proposed you proposed. If we can depend on\r\n>> acldefault() being STRICT (which is legitimate to me), I think we\r\n>> don't need to build an expression depending on the caller (i.g.,\r\n>> is_default_acl). If acldefault() were to become not STRICT, we could\r\n>> detect it by regression tests. What do you think?\r\n>\r\n> FWIW, I'm working on a refactoring of this logic that will bring the\r\n> acldefault() call into the getDefaultACLs code, which would mean that\r\n> we won't need that assumption anymore anyway. The code as I have it\r\n> produces SQL like\r\n\r\nNice!\r\n\r\n> So I think we don't need to worry about whether acldefault() will stay\r\n> strict. This patch will only need to work in the back branches.\r\n\r\nThis seems fine to me, too. I don't think relying on STRICT is any\r\nbetter or worse than adding a flag for this one special case.\r\n\r\n+\t\t/*\r\n+\t\t * Since the default for a global entry is the hard-wired default\r\n+\t\t * ACL for the particular object type, we pass defaclobjtype except\r\n+\t\t * for the case of 'S' (DEFACLOBJ_SEQUENCE) where we need to\r\n+\t\t * transform it to 's' since acldefault() SQL-callable function\r\n+\t\t * handles 's' as a sequence. On the other hand, since the default\r\n+\t\t * for non-global entries is an empty ACL we pass NULL. This works\r\n+\t\t * because acldefault() is STRICT.\r\n+\t\t */\r\n\r\nI'd split out the two special cases in the comment. 
What do you think\r\nabout something like the following?\r\n\r\n /*\r\n * Build the expression for determining the object type.\r\n *\r\n * While pg_default_acl uses 'S' for sequences, acldefault()\r\n * uses 's', so we must transform 'S' to 's'.\r\n *\r\n * The default for a schema-local default ACL (i.e., entries\r\n * in pg_default_acl with defaclnamespace != 0) is an empty\r\n * ACL. We use NULL as the object type for those entries,\r\n * which forces acldefault() to also return NULL because it is\r\n * STRICT.\r\n */\r\n\r\n+\t\tcreate_sql => 'ALTER DEFAULT PRIVILEGES\r\n+\t\t\t\t\t FOR ROLE regress_dump_test_role IN SCHEMA dump_test\r\n+\t\t\t\t\t GRANT EXECUTE ON FUNCTIONS TO regress_dump_test_role;',\r\n+\t\tregexp => qr/^\r\n+\t\t\t\\QALTER DEFAULT PRIVILEGES \\E\r\n+\t\t\t\\QFOR ROLE regress_dump_test_role IN SCHEMA dump_test \\E\r\n+\t\t\t\\QGRANT ALL ON FUNCTIONS TO regress_dump_test_role;\\E\r\n+\t\t\t/xm,\r\n\r\nIt could be a bit confusing that create_sql uses \"GRANT EXECUTE\" but\r\nwe expect to see \"GRANT ALL.\" IIUC this is correct, but maybe we\r\nshould use \"GRANT ALL\" in create_sql so that they match.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 03:54:37 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On Tue, Oct 19, 2021 at 12:19 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > I've looked at the patch proposed you proposed. If we can depend on\n> > acldefault() being STRICT (which is legitimate to me), I think we\n> > don't need to build an expression depending on the caller (i.g.,\n> > is_default_acl). If acldefault() were to become not STRICT, we could\n> > detect it by regression tests. 
What do you think?\n>\n> FWIW, I'm working on a refactoring of this logic that will bring the\n> acldefault() call into the getDefaultACLs code, which would mean that\n> we won't need that assumption anymore anyway. The code as I have it\n> produces SQL like\n>\n> acldefault(CASE WHEN defaclobjtype = 'S'\n> THEN 's'::\"char\" ELSE defaclobjtype END, defaclrole) AS acldefault\n>\n> and we could wrap the test-for-zero around that:\n>\n> CASE WHEN defaclnamespace = 0 THEN\n> acldefault(CASE WHEN defaclobjtype = 'S'\n> THEN 's'::\"char\" ELSE defaclobjtype END, defaclrole)\n> ELSE NULL END AS acldefault\n>\n> (although I think it might be better to write ELSE '{}' not ELSE NULL).\n>\n> So I think we don't need to worry about whether acldefault() will stay\n> strict. This patch will only need to work in the back branches.\n\n+1\n\nRegards,\n\n-- \nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Tue, 19 Oct 2021 14:51:36 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "... BTW, I think this patch is not correct yet. What I read in\ncatalogs.sgml is\n\n ... If a global entry is present then\n it <emphasis>overrides</emphasis> the normal hard-wired default privileges\n for the object type. A per-schema entry, if present, represents privileges\n to be <emphasis>added to</emphasis> the global or hard-wired default privileges.\n\nI didn't check the code, but if that last bit is correct, then non-global\nentries aren't necessarily relative to the acldefault privileges either.\n\nI kind of wonder now whether the existing behavior is correct for either\ncase. Why aren't we simply regurgitating the pg_default_acl entries\nverbatim? That is, I think maybe we don't need the acldefault call at\nall; we should just use null/empty as the starting ACL in all cases\nwhen printing pg_default_acl entries. 
Like this:\n\n buildACLQueries(acl_subquery, racl_subquery, initacl_subquery,\n initracl_subquery, \"defaclacl\", \"defaclrole\",\n \"pip.initprivs\",\n- \"CASE WHEN defaclobjtype = 'S' THEN 's' ELSE defaclobjtype END::\\\"char\\\"\",\n+ \"NULL\",\n dopt->binary_upgrade);\n\nI didn't test that. I suspect it will cause some regression test\nchanges, but will they be wrong?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 15:53:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/19/21, 12:54 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> I kind of wonder now whether the existing behavior is correct for either\r\n> case. Why aren't we simply regurgitating the pg_default_acl entries\r\n> verbatim? That is, I think maybe we don't need the acldefault call at\r\n> all; we should just use null/empty as the starting ACL in all cases\r\n> when printing pg_default_acl entries. Like this:\r\n\r\nHm. If we do this, then this command:\r\n\r\n ALTER DEFAULT PRIVILEGES FOR ROLE myrole REVOKE ALL ON FUNCTIONS FROM PUBLIC;\r\n\r\nis dumped as:\r\n\r\n ALTER DEFAULT PRIVILEGES FOR ROLE myrole GRANT ALL ON FUNCTIONS TO myrole;\r\n\r\nThis command is effectively ignored when you apply it, as no entry is\r\nadded to pg_default_acl. I haven't looked too closely into what's\r\nhappening yet, but it does look like there is more to the story.\r\n\r\nNathan\r\n\r\n", "msg_date": "Tue, 19 Oct 2021 21:01:13 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "\"Bossart, Nathan\" <bossartn@amazon.com> writes:\n> On 10/19/21, 12:54 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n>> I kind of wonder now whether the existing behavior is correct for either\n>> case.\n\n> Hm. 
If we do this, then this command:\n> ALTER DEFAULT PRIVILEGES FOR ROLE myrole REVOKE ALL ON FUNCTIONS FROM PUBLIC;\n> is dumped as:\n> ALTER DEFAULT PRIVILEGES FOR ROLE myrole GRANT ALL ON FUNCTIONS TO myrole;\n\n[ pokes at it some more... ] Yeah, I just didn't have my head screwed\non straight. We need the global entries to be dumped as deltas from\nthe proper object-type-specific ACL, while the non-global ones should be\ndumped as grants only, which can be modeled as a delta from an empty\nACL. So the patch should be good as given (though maybe the comment\nneeds more work to clarify this). Sorry for the noise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Oct 2021 17:58:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "I wrote:\n> [ pokes at it some more... ] Yeah, I just didn't have my head screwed\n> on straight. We need the global entries to be dumped as deltas from\n> the proper object-type-specific ACL, while the non-global ones should be\n> dumped as grants only, which can be modeled as a delta from an empty\n> ACL. So the patch should be good as given (though maybe the comment\n> needs more work to clarify this). 
Sorry for the noise.\n\nThis was blocking some other work I'm doing on pg_dump, so I rewrote\nthe comment some more and pushed it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 22 Oct 2021 15:26:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" }, { "msg_contents": "On 10/22/21, 12:27 PM, \"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\r\n> This was blocking some other work I'm doing on pg_dump, so I rewrote\r\n> the comment some more and pushed it.\r\n\r\nThanks!\r\n\r\nNathan\r\n\r\n", "msg_date": "Fri, 22 Oct 2021 20:08:01 +0000", "msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistent behavior of pg_dump/pg_restore on DEFAULT PRIVILEGES" } ]
[ { "msg_contents": "Hello,\n\nIn citus, we have seen the following crash backtraces because of a NULL tupledesc multiple times and we weren't sure if this was related to citus or postgres:\n\n\n#0 equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n417 tupdesc.c: No such file or directory.\n#0 equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n#1 0x000000000085b51f in record_type_typmod_compare (a=<optimized out>, b=<optimized out>, size=<optimized out>) at typcache.c:1761\n#2 0x0000000000869c73 in hash_search_with_hash_value (hashp=0x1c10530, keyPtr=keyPtr@entry=0x7ffcfd3117b8, hashvalue=3194332168, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:987\n#3 0x000000000086a3fd in hash_search (hashp=<optimized out>, keyPtr=keyPtr@entry=0x7ffcfd3117b8, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:911\n#4 0x000000000085d0e1 in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x1b9f3f0) at typcache.c:1801\n#5 0x000000000061832b in BlessTupleDesc (tupdesc=0x1b9f3f0) at execTuples.c:2056\n#6 TupleDescGetAttInMetadata (tupdesc=0x1b9f3f0) at execTuples.c:2081\n#7 0x00007f2701878dee in CreateDistributedExecution (modLevel=ROW_MODIFY_READONLY, taskList=0x1c82398, hasReturning=<optimized out>, paramListInfo=0x1c3e5a0, tupleDescriptor=0x1b9f3f0, tupleStore=<optimized out>, targetPoolSize=16, xactProperties=0x7ffcfd311960, jobIdList=0x0) at executor/adaptive_executor.c:951\n#8 0x00007f270187ba09 in AdaptiveExecutor (scanState=0x1b9eff0) at executor/adaptive_executor.c:676\n#9 0x00007f270187c582 in CitusExecScan (node=0x1b9eff0) at executor/citus_custom_scan.c:182\n#10 0x000000000060c9e2 in ExecProcNode (node=0x1b9eff0) at ../../../src/include/executor/executor.h:239\n#11 ExecutePlan (execute_once=<optimized out>, dest=0x1abfc90, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized 
out>, planstate=0x1b9eff0, estate=0x1b9ed50) at execMain.c:1646\n#12 standard_ExecutorRun (queryDesc=0x1c3e660, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#13 0x00007f27018819bd in CitusExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=true) at executor/multi_executor.c:177\n#14 0x00007f27000adfee in pgss_ExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:891\n#15 0x000000000074f97d in PortalRunSelect (portal=portal@entry=0x1b8ed00, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1abfc90) at pquery.c:929\n#16 0x0000000000750df0 in PortalRun (portal=portal@entry=0x1b8ed00, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=<optimized out>, dest=dest@entry=0x1abfc90, altdest=altdest@entry=0x1abfc90, completionTag=0x7ffcfd312090 \"\") at pquery.c:770\n#17 0x000000000074e745 in exec_execute_message (max_rows=9223372036854775807, portal_name=0x1abf880 \"\") at postgres.c:2090\n#18 PostgresMain (argc=<optimized out>, argv=argv@entry=0x1b4a0e8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4308\n#19 0x00000000006de9d8 in BackendRun (port=0x1b37230, port=0x1b37230) at postmaster.c:4437\n#20 BackendStartup (port=0x1b37230) at postmaster.c:4128\n#21 ServerLoop () at postmaster.c:1704\n#22 0x00000000006df955 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1aba280) at postmaster.c:1377\n#23 0x0000000000487a4e in main (argc=3, argv=0x1aba280) at main.c:228\n\nThis is the issue: https://github.com/citusdata/citus/issues/3825\n\n\nI think this is related to postgres because of the following events:\n\n * In assign_record_type_typmod<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1984> tupledesc will be set to NULL if it is not in the cache and it will be set 
to an actual value in this line<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1998>.\n * It is possible that postgres will error in between these two lines, hence leaving a NULL tupledesc in the cache. For example in find_or_make_matching_shared_tupledesc<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1988>. (Possibly because of OOM)\n * Now there is a NULL tupledesc in the hash table, hence when doing a comparison in record_type_typmod_compare<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1935>, it will crash.\n\nI have manually added a line to error in \"find_or_make_matching_shared_tupledesc\" and I was able to get a similar crash with two subsequent simple SELECT queries. You can see the backtrace in the issue<https://github.com/citusdata/citus/issues/3825#issuecomment-805627864>.\n\nWe should probably do HASH_ENTER<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1974> only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. 
I will share a fix in this thread soon.\n\nBest,\nTalha.\n\n\n\n\n\n\n\n\n\nHello,\n\n\n\n\nIn citus, we have seen the following crash backtraces because of a NULL tupledesc multiple times and we weren't sure if this was related to citus or postgres:\n\n\n\n\n#0  equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n417\ttupdesc.c: No such file or directory.\n#0  equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n#1  0x000000000085b51f in record_type_typmod_compare (a=<optimized out>, b=<optimized out>, size=<optimized out>) at typcache.c:1761\n#2  0x0000000000869c73 in hash_search_with_hash_value (hashp=0x1c10530, keyPtr=keyPtr@entry=0x7ffcfd3117b8, hashvalue=3194332168, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:987\n#3  0x000000000086a3fd in hash_search (hashp=<optimized out>, keyPtr=keyPtr@entry=0x7ffcfd3117b8, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:911\n#4  0x000000000085d0e1 in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x1b9f3f0) at typcache.c:1801\n#5  0x000000000061832b in BlessTupleDesc (tupdesc=0x1b9f3f0) at execTuples.c:2056\n#6  TupleDescGetAttInMetadata (tupdesc=0x1b9f3f0) at execTuples.c:2081\n#7  0x00007f2701878dee in CreateDistributedExecution (modLevel=ROW_MODIFY_READONLY, taskList=0x1c82398, hasReturning=<optimized out>, paramListInfo=0x1c3e5a0, tupleDescriptor=0x1b9f3f0, tupleStore=<optimized out>, targetPoolSize=16, xactProperties=0x7ffcfd311960, jobIdList=0x0) at executor/adaptive_executor.c:951\n#8  0x00007f270187ba09 in AdaptiveExecutor (scanState=0x1b9eff0) at executor/adaptive_executor.c:676\n#9  0x00007f270187c582 in CitusExecScan (node=0x1b9eff0) at executor/citus_custom_scan.c:182\n#10 0x000000000060c9e2 in ExecProcNode (node=0x1b9eff0) at ../../../src/include/executor/executor.h:239\n#11 ExecutePlan (execute_once=<optimized out>, dest=0x1abfc90, direction=<optimized out>, numberTuples=0, 
sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1b9eff0, estate=0x1b9ed50) at execMain.c:1646\n#12 standard_ExecutorRun (queryDesc=0x1c3e660, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#13 0x00007f27018819bd in CitusExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=true) at executor/multi_executor.c:177\n#14 0x00007f27000adfee in pgss_ExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:891\n#15 0x000000000074f97d in PortalRunSelect (portal=portal@entry=0x1b8ed00, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1abfc90) at pquery.c:929\n#16 0x0000000000750df0 in PortalRun (portal=portal@entry=0x1b8ed00, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=<optimized out>, dest=dest@entry=0x1abfc90, altdest=altdest@entry=0x1abfc90, completionTag=0x7ffcfd312090 \"\") at pquery.c:770\n#17 0x000000000074e745 in exec_execute_message (max_rows=9223372036854775807, portal_name=0x1abf880 \"\") at postgres.c:2090\n#18 PostgresMain (argc=<optimized out>, argv=argv@entry=0x1b4a0e8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4308\n#19 0x00000000006de9d8 in BackendRun (port=0x1b37230, port=0x1b37230) at postmaster.c:4437\n#20 BackendStartup (port=0x1b37230) at postmaster.c:4128\n#21 ServerLoop () at postmaster.c:1704\n#22 0x00000000006df955 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1aba280) at postmaster.c:1377\n#23 0x0000000000487a4e in main (argc=3, argv=0x1aba280) at main.c:228\n\n\n\nThis is the issue: https://github.com/citusdata/citus/issues/3825\n\n\n\n\n\n\n\nI think this is related to postgres because of the following events:\n\n\nIn\n assign_record_type_typmod tupledesc will be set to NULL if it is not in the cache and it will be set to an actual value in this\n 
line. It is possible that postgres will error in between these two lines, hence leaving a NULL tupledesc in the cache. For example in find_or_make_matching_shared_tupledesc.\n (Possibly because of OOM). Now there is a NULL tupledesc in the hash table, hence when doing a comparison in record_type_typmod_compare,\n it will crash.\n\n\nI have manually added a line to error in \"find_or_make_matching_shared_tupledesc\" and I was able to get a similar crash with two subsequent simple SELECT queries. You can see the backtrace in\n the issue.\n\n\n\nWe should probably do HASH_ENTER\n only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. I will share a fix in this thread soon.\n\n\nBest,\nTalha.", "msg_date": "Wed, 31 Mar 2021 08:50:00 +0000", "msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>", "msg_from_op": true, "msg_subject": "Crash in record_type_typmod_compare" }, { "msg_contents": ">We should probably do HASH_ENTER<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1974> only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. 
I will share a fix in this thread soon.\n\nI am attaching this patch.\n\n\n\n\n\n\n________________________________\nFrom: Sait Talha Nisanci\nSent: Wednesday, March 31, 2021 11:50 AM\nTo: pgsql-hackers <pgsql-hackers@postgresql.org>\nCc: Metin Doslu <Metin.Doslu@microsoft.com>\nSubject: Crash in record_type_typmod_compare\n\nHello,\n\nIn citus, we have seen the following crash backtraces because of a NULL tupledesc multiple times and we weren't sure if this was related to citus or postgres:\n\n\n#0 equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n417 tupdesc.c: No such file or directory.\n#0 equalTupleDescs (tupdesc1=0x0, tupdesc2=0x1b9f3f0) at tupdesc.c:417\n#1 0x000000000085b51f in record_type_typmod_compare (a=<optimized out>, b=<optimized out>, size=<optimized out>) at typcache.c:1761\n#2 0x0000000000869c73 in hash_search_with_hash_value (hashp=0x1c10530, keyPtr=keyPtr@entry=0x7ffcfd3117b8, hashvalue=3194332168, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:987\n#3 0x000000000086a3fd in hash_search (hashp=<optimized out>, keyPtr=keyPtr@entry=0x7ffcfd3117b8, action=action@entry=HASH_ENTER, foundPtr=foundPtr@entry=0x7ffcfd3117c0) at dynahash.c:911\n#4 0x000000000085d0e1 in assign_record_type_typmod (tupDesc=<optimized out>, tupDesc@entry=0x1b9f3f0) at typcache.c:1801\n#5 0x000000000061832b in BlessTupleDesc (tupdesc=0x1b9f3f0) at execTuples.c:2056\n#6 TupleDescGetAttInMetadata (tupdesc=0x1b9f3f0) at execTuples.c:2081\n#7 0x00007f2701878dee in CreateDistributedExecution (modLevel=ROW_MODIFY_READONLY, taskList=0x1c82398, hasReturning=<optimized out>, paramListInfo=0x1c3e5a0, tupleDescriptor=0x1b9f3f0, tupleStore=<optimized out>, targetPoolSize=16, xactProperties=0x7ffcfd311960, jobIdList=0x0) at executor/adaptive_executor.c:951\n#8 0x00007f270187ba09 in AdaptiveExecutor (scanState=0x1b9eff0) at executor/adaptive_executor.c:676\n#9 0x00007f270187c582 in CitusExecScan (node=0x1b9eff0) at 
executor/citus_custom_scan.c:182\n#10 0x000000000060c9e2 in ExecProcNode (node=0x1b9eff0) at ../../../src/include/executor/executor.h:239\n#11 ExecutePlan (execute_once=<optimized out>, dest=0x1abfc90, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1b9eff0, estate=0x1b9ed50) at execMain.c:1646\n#12 standard_ExecutorRun (queryDesc=0x1c3e660, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364\n#13 0x00007f27018819bd in CitusExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=true) at executor/multi_executor.c:177\n#14 0x00007f27000adfee in pgss_ExecutorRun (queryDesc=0x1c3e660, direction=ForwardScanDirection, count=0, execute_once=<optimized out>) at pg_stat_statements.c:891\n#15 0x000000000074f97d in PortalRunSelect (portal=portal@entry=0x1b8ed00, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0x1abfc90) at pquery.c:929\n#16 0x0000000000750df0 in PortalRun (portal=portal@entry=0x1b8ed00, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=<optimized out>, dest=dest@entry=0x1abfc90, altdest=altdest@entry=0x1abfc90, completionTag=0x7ffcfd312090 \"\") at pquery.c:770\n#17 0x000000000074e745 in exec_execute_message (max_rows=9223372036854775807, portal_name=0x1abf880 \"\") at postgres.c:2090\n#18 PostgresMain (argc=<optimized out>, argv=argv@entry=0x1b4a0e8, dbname=<optimized out>, username=<optimized out>) at postgres.c:4308\n#19 0x00000000006de9d8 in BackendRun (port=0x1b37230, port=0x1b37230) at postmaster.c:4437\n#20 BackendStartup (port=0x1b37230) at postmaster.c:4128\n#21 ServerLoop () at postmaster.c:1704\n#22 0x00000000006df955 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1aba280) at postmaster.c:1377\n#23 0x0000000000487a4e in main (argc=3, argv=0x1aba280) at main.c:228\n\nThis is the issue: 
https://github.com/citusdata/citus/issues/3825\n\n\nI think this is related to postgres because of the following events:\n\n * In assign_record_type_typmod<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1984> tupledesc will be set to NULL if it is not in the cache and it will be set to an actual value in this line<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1998>.\n * It is possible that postgres will error in between these two lines, hence leaving a NULL tupledesc in the cache. For example in find_or_make_matching_shared_tupledesc<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1988>. (Possibly because of OOM)\n * Now there is a NULL tupledesc in the hash table, hence when doing a comparison in record_type_typmod_compare<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1935>, it will crash.\n\nI have manually added a line to error in \"find_or_make_matching_shared_tupledesc\" and I was able to get a similar crash with two subsequent simple SELECT queries. You can see the backtrace in the issue<https://github.com/citusdata/citus/issues/3825#issuecomment-805627864>.\n\nWe should probably do HASH_ENTER<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1974> only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. 
I will share a fix in this thread soon.\n\nBest,\nTalha.", "msg_date": "Wed, 31 Mar 2021 13:26:34 +0000", "msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>", "msg_from_op": true, "msg_subject": "Re: Crash in record_type_typmod_compare" }, { "msg_contents": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com> writes:\n>> We should probably do HASH_ENTER<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1974> only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. I will share a fix in this thread soon.\n> I am attaching this patch.\n\nI see the hazard, but this seems like an expensive way to fix it,\nas it forces two hash searches for every insertion. Couldn't we just\nteach record_type_typmod_compare to say \"not equal\" if it sees a\nnull tupdesc?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 13:10:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Crash in record_type_typmod_compare" }, { "msg_contents": "Hi,\n\nOn 2021-03-31 13:10:50 -0400, Tom Lane wrote:\n> Sait Talha Nisanci <Sait.Nisanci@microsoft.com> writes:\n> >> We should probably do HASH_ENTER<https://github.com/postgres/postgres/blob/1509c6fc29c07d13c9a590fbd6f37c7576f58ba6/src/backend/utils/cache/typcache.c#L1974> only after we have a valid entry so that we don't end up with a NULL entry in the cache even if an intermediate error happens. I will share a fix in this thread soon.\n> > I am attaching this patch.\n> \n> I see the hazard, but this seems like an expensive way to fix it,\n> as it forces two hash searches for every insertion.\n\nObviously not free - but at least it'd be overhead only in the insertion\npath. 
And the bucket should still be in L1 cache for the second\ninsertion...\n\nI doubt that the cost of a separate HASH_ENTER is all that significant\ncompared to find_or_make_matching_shared_tupledesc/CreateTupleDescCopy?\n\nWe do the separate HASH_FIND/HASH_ENTER in plenty other places that are\nmuch hotter than assign_record_type_typmod(),\ne.g. RelationIdGetRelation().\n\nIt could even be that the additional branches in the comparator would\nend up costing more in the aggregate...\n\n\n> Couldn't we just\n> teach record_type_typmod_compare to say \"not equal\" if it sees a\n> null tupdesc?\n\nWon't that lead to an accumulation of dead hash table entries over time?\n\nIt also just seems quite wrong to have hash table entries that cannot\never be found via HASH_FIND/HASH_REMOVE, because\nrecord_type_typmod_compare() returns false once there's a NULL in there.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:29:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Crash in record_type_typmod_compare" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-03-31 13:10:50 -0400, Tom Lane wrote:\n>> Couldn't we just\n>> teach record_type_typmod_compare to say \"not equal\" if it sees a\n>> null tupdesc?\n\n> Won't that lead to an accumulation of dead hash table entries over time?\n\nYeah, if you have repeat failures there, which doesn't seem very likely.\nStill, I take your point that we're doing it the first way in other\nplaces, so maybe inventing a different way here isn't good.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 31 Mar 2021 15:34:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Crash in record_type_typmod_compare" }, { "msg_contents": "Hi,\n\nOn 2021-03-31 13:26:34 +0000, Sait Talha Nisanci wrote:\n> diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c\n> index 
4915ef5934..4757e8fa80 100644\n> --- a/src/backend/utils/cache/typcache.c\n> +++ b/src/backend/utils/cache/typcache.c\n> @@ -1970,18 +1970,16 @@ assign_record_type_typmod(TupleDesc tupDesc)\n> \t\t\tCreateCacheMemoryContext();\n> \t}\n> \n> -\t/* Find or create a hashtable entry for this tuple descriptor */\n> +\t/* Find a hashtable entry for this tuple descriptor */\n> \trecentry = (RecordCacheEntry *) hash_search(RecordCacheHash,\n> \t\t\t\t\t\t\t\t\t\t\t\t(void *) &tupDesc,\n> -\t\t\t\t\t\t\t\t\t\t\t\tHASH_ENTER, &found);\n> +\t\t\t\t\t\t\t\t\t\t\t\tHASH_FIND, &found);\n> \tif (found && recentry->tupdesc != NULL)\n> \t{\n> \t\ttupDesc->tdtypmod = recentry->tupdesc->tdtypmod;\n> \t\treturn;\n> \t}\n> \n> -\t/* Not present, so need to manufacture an entry */\n> -\trecentry->tupdesc = NULL;\n> \toldcxt = MemoryContextSwitchTo(CacheMemoryContext);\n> \n> \t/* Look in the SharedRecordTypmodRegistry, if attached */\n> @@ -1995,6 +1993,10 @@ assign_record_type_typmod(TupleDesc tupDesc)\n> \t}\n> \tensure_record_cache_typmod_slot_exists(entDesc->tdtypmod);\n> \tRecordCacheArray[entDesc->tdtypmod] = entDesc;\n> +\t/* Not present, so need to manufacture an entry */\n> +\trecentry = (RecordCacheEntry *) hash_search(RecordCacheHash,\n> +\t\t\t\t\t\t\t\t\t\t\t\t(void *) &tupDesc,\n> +\t\t\t\t\t\t\t\t\t\t\t\tHASH_ENTER, NULL);\n> \trecentry->tupdesc = entDesc;\n\nISTM that the ensure_record_cache_typmod_slot_exists() should be moved\nto before find_or_make_matching_shared_tupledesc /\nCreateTupleDescCopy. Otherwise we can leak if the CreateTupleDescCopy()\nsucceeds but ensure_record_cache_typmod_slot_exists() fails. 
Conversely,\nif ensure_record_cache_typmod_slot_exists() succeeds, but\nCreateTupleDescCopy() fails, we won't leak, since the former is just\nrepalloc()ing allocations.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 31 Mar 2021 12:49:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Crash in record_type_typmod_compare" }, { "msg_contents": "Hi Andres,\n\nPlease see the updated patch, do you mean something like this? (there might be a simpler way for doing this)\n\nBest,\nTalha.\n________________________________\nFrom: Andres Freund <andres@anarazel.de>\nSent: Wednesday, March 31, 2021 10:49 PM\nTo: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>\nCc: pgsql-hackers <pgsql-hackers@postgresql.org>; Metin Doslu <Metin.Doslu@microsoft.com>\nSubject: [EXTERNAL] Re: Crash in record_type_typmod_compare\n\nHi,\n\nOn 2021-03-31 13:26:34 +0000, Sait Talha Nisanci wrote:\n> diff --git a/src/backend/utils/cache/typcache.c b/src/backend/utils/cache/typcache.c\n> index 4915ef5934..4757e8fa80 100644\n> --- a/src/backend/utils/cache/typcache.c\n> +++ b/src/backend/utils/cache/typcache.c\n> @@ -1970,18 +1970,16 @@ assign_record_type_typmod(TupleDesc tupDesc)\n> CreateCacheMemoryContext();\n> }\n>\n> - /* Find or create a hashtable entry for this tuple descriptor */\n> + /* Find a hashtable entry for this tuple descriptor */\n> recentry = (RecordCacheEntry *) hash_search(RecordCacheHash,\n> (void *) &tupDesc,\n> - HASH_ENTER, &found);\n> + HASH_FIND, &found);\n> if (found && recentry->tupdesc != NULL)\n> {\n> tupDesc->tdtypmod = recentry->tupdesc->tdtypmod;\n> return;\n> }\n>\n> - /* Not present, so need to manufacture an entry */\n> - recentry->tupdesc = NULL;\n> oldcxt = MemoryContextSwitchTo(CacheMemoryContext);\n>\n> /* Look in the SharedRecordTypmodRegistry, if attached */\n> @@ -1995,6 +1993,10 @@ assign_record_type_typmod(TupleDesc tupDesc)\n> }\n> ensure_record_cache_typmod_slot_exists(entDesc->tdtypmod);\n> 
RecordCacheArray[entDesc->tdtypmod] = entDesc;\n> + /* Not present, so need to manufacture an entry */\n> + recentry = (RecordCacheEntry *) hash_search(RecordCacheHash,\n> + (void *) &tupDesc,\n> + HASH_ENTER, NULL);\n> recentry->tupdesc = entDesc;\n\nISTM that the ensure_record_cache_typmod_slot_exists() should be moved\nto before find_or_make_matching_shared_tupledesc /\nCreateTupleDescCopy. Otherwise we can leak if the CreateTupleDescCopy()\nsucceeds but ensure_record_cache_typmod_slot_exists() fails. Conversely,\nif ensure_record_cache_typmod_slot_exists() succeeds, but\nCreateTupleDescCopy() we won't leak, since the former is just\nrepalloc()ing allocations.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 5 Apr 2021 12:07:29 +0000", "msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>", "msg_from_op": true, "msg_subject": "Re: [EXTERNAL] Re: Crash in record_type_typmod_compare" }, { "msg_contents": "On Mon, 2021-04-05 at 12:07 +0000, Sait Talha Nisanci wrote:\n> Hi Andres,\n> \n> Please see the updated patch, do you mean something like this? (there\n> might be a simpler way for doing this)\n> \n\nCommitted with minor revisions.\n\nMy patch also avoids incrementing NextRecordTypmod until we've already\ncalled CreateTupleDescCopy(). Otherwise, an allocation failure could\nleave NextRecordTypmod unnecessarily incremented, which is a tiny leak.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 10 Jul 2021 10:58:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: Crash in record_type_typmod_compare" } ]
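The ordering bug this thread diagnoses, and the fix that Jeff Davis eventually committed, can be shown in miniature: creating the hash entry (HASH_ENTER) before its payload exists means any error during payload construction strands a NULL entry that later lookups crash on, while probing first (HASH_FIND), building the full tuple-descriptor copy, and only then inserting leaves the cache clean on failure. The following is a hypothetical Python sketch, not PostgreSQL code — `AllocError`, `build_tupledesc_copy` and the dict-based cache merely stand in for `elog(ERROR)`, `CreateTupleDescCopy()` and dynahash:

```python
class AllocError(Exception):
    """Stands in for elog(ERROR) raised during descriptor construction."""

def build_tupledesc_copy(tupdesc, fail=False):
    # Stand-in for CreateTupleDescCopy()/find_or_make_matching_shared_tupledesc();
    # raising here models the OOM error Talha injected to reproduce the crash.
    if fail:
        raise AllocError("simulated OOM")
    return dict(tupdesc, tdtypmod=42)

def assign_typmod_buggy(cache, key, tupdesc, fail=False):
    # Old order: HASH_ENTER first, creating an entry with a NULL payload...
    if key not in cache:
        cache[key] = None
    elif cache[key] is not None:
        return cache[key]["tdtypmod"]
    # ...then build the payload; an error here leaves the None entry behind.
    cache[key] = build_tupledesc_copy(tupdesc, fail)
    return cache[key]["tdtypmod"]

def assign_typmod_fixed(cache, key, tupdesc, fail=False):
    # Committed order: HASH_FIND first, HASH_ENTER only once the payload exists.
    entry = cache.get(key)
    if entry is not None:
        return entry["tdtypmod"]
    built = build_tupledesc_copy(tupdesc, fail)  # may raise; cache untouched
    cache[key] = built
    return built["tdtypmod"]

buggy_cache, fixed_cache = {}, {}
for fn, cache in ((assign_typmod_buggy, buggy_cache),
                  (assign_typmod_fixed, fixed_cache)):
    try:
        fn(cache, "rec", {"natts": 2}, fail=True)
    except AllocError:
        pass

poisoned = "rec" in buggy_cache and buggy_cache["rec"] is None  # the NULL tupdesc
clean = "rec" not in fixed_cache                                # a retry will work
```

The buggy variant reproduces the reported symptom: after one failed insertion the cache holds a key whose payload is still `None`, mirroring the crash in `equalTupleDescs` on a NULL tupdesc, while the fixed ordering leaves no trace of the failed attempt.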
[ { "msg_contents": "Hello,\n\nThe wiki page [1] still mentions that : \"On Windows the useful range (for\nshared buffers) is 64MB to 512MB\". The topic showed up in a pgtune\ndiscussion [2].\n\nIs it possible to remove this advice or add that since pg10 it no longer\nholds true [3] ?\n\nBenoit\n\n[1] https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n[2] https://github.com/le0pard/pgtune/discussions/50\n[3]\nhttps://github.com/postgres/postgres/commit/81c52728f82be5303ea16508255e948017f4cd87#diff-0553fde8fa0de14527cb5c14b02adb28bdedf3e458f7347b51fb11e1dade2fa7\n\nHello,The wiki page [1] still mentions that : \"On Windows the useful range (for shared buffers) is 64MB to 512MB\". The topic showed up in a pgtune discussion [2].Is it possible to remove this advice or add that since pg10 it no longer holds true [3] ?Benoit[1] https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server[2] https://github.com/le0pard/pgtune/discussions/50[3] https://github.com/postgres/postgres/commit/81c52728f82be5303ea16508255e948017f4cd87#diff-0553fde8fa0de14527cb5c14b02adb28bdedf3e458f7347b51fb11e1dade2fa7", "msg_date": "Wed, 31 Mar 2021 11:39:39 +0200", "msg_from": "talk to ben <blo.talkto@gmail.com>", "msg_from_op": true, "msg_subject": "Shared buffers advice for windows in the wiki" }, { "msg_contents": "On Wed, 31 Mar 2021 at 22:39, talk to ben <blo.talkto@gmail.com> wrote:\n> Is it possible to remove this advice or add that since pg10 it no longer holds true [3] ?\n\nI've just removed all mention of it from the wiki.\n\nDavid\n\n\n", "msg_date": "Wed, 31 Mar 2021 22:48:20 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Shared buffers advice for windows in the wiki" } ]
[ { "msg_contents": "Hello,\n\nMy name is Sebastian and I am new to this list and community.\nI have been following PostgreSQL for several years and I love the work done\non it, but I never had the chance (time) to join.\n\nI was going through the TODO list and studied the code and the thread\ndiscussing the optional fixes and I think I have a solution to this one\nwhich has the following advantages:\n1. No change to the protocol is needed\n2. Can be implemented in a both forward and backward compatible way\n3. Does not require any shared memory trickery\n4. Is immune to brute force attacks (probably)\n\nIf this is still something we wish to fix I will be happy to share the\ndetails (and implement it) - I don't wish to burden you with the details if\nthere is no real interest in solving this.\n\nCheers\nSebastian\n\nHello,My name is Sebastian and I am new to this list and community.I have been following PostgreSQL for several years and I love the work done on it, but I never had the chance (time) to join.I was going through the TODO list and studied the code and the thread discussing the optional fixes  and I think I have a solution to this one which has the following advantages:1. No change to the protocol is needed2. Can be implemented in a both forward and backward compatible way3. Does not require any shared memory trickery4. 
Is immune to brute force attacks  (probably)If this is still something we wish to fix I will be happy to share the details (and implement it) - I don't wish to burden you with the details if there is no real interest in solving this.CheersSebastian", "msg_date": "Wed, 31 Mar 2021 16:54:24 +0300", "msg_from": "Sebastian Cabot <scabot@gmail.com>", "msg_from_op": true, "msg_subject": "Prevent query cancel packets from being replayed by an attacker (From\n TODO)" }, { "msg_contents": "On Wed, 2021-03-31 at 16:54 +0300, Sebastian Cabot wrote:\n> My name is Sebastian and I am new to this list and community.\n> I have been following PostgreSQL for several years and I love the work done on\n> it, but I never had the chance (time) to join.\n> \n> I was going through the TODO list and studied the code and the thread discussing the\n> optional fixes and I think I have a solution to this one which has the following advantages:\n> 1. No change to the protocol is needed\n> 2. Can be implemented in a both forward and backward compatible way\n> 3. Does not require any shared memory trickery\n> 4. 
Is immune to brute force attacks (probably)\n> \n> If this is still something we wish to fix I will be happy to share the details (and\n> implement it) - I don't wish to burden you with the details if there is no real interest in solving this.\n\nThank you for your willingness to help!\n\nSure, this is the place to discuss your idea, go ahead.\n\nRight now is the end of the final commitfest for v14, so people\nare busy getting patches committed and you may get less echo than normally.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Wed, 31 Mar 2021 16:44:19 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Prevent query cancel packets from being replayed by an attacker\n (From TODO)" }, { "msg_contents": "On Wed, Mar 31, 2021 at 5:44 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Wed, 2021-03-31 at 16:54 +0300, Sebastian Cabot wrote:\n> > My name is Sebastian and I am new to this list and community.\n> > I have been following PostgreSQL for several years and I love the work done on\n> > it, but I never had the chance (time) to join.\n> >\n> > I was going through the TODO list and studied the code and the thread discussing the\n> > optional fixes and I think I have a solution to this one which has the following advantages:\n> > 1. No change to the protocol is needed\n> > 2. Can be implemented in a both forward and backward compatible way\n> > 3. Does not require any shared memory trickery\n> > 4. 
Is immune to brute force attacks (probably)\n> >\n> > If this is still something we wish to fix I will be happy to share the details (and\n> > implement it) - I don't wish to burden you with the details if there is no real interest in solving this.\n>\n> Thank you for your willingness to help!\n>\n> Sure, this is the place to discuss your idea, go ahead.\n>\n> Right now is the end of the final commitfest for v14, so people\n> are busy getting patches committed and you may get less echo than normally.\n>\n> Yours,\n> Laurenz Albe\n>\nThank you Laurenz.\nI will post the details. Hopefully some developers will find the time\nto review the idea.\n\nSummary of the problem:\nTo cancel a query a query cancel message is sent. This message is\nalways sent unencrypted even if SSL is enabled because there is a need\nfor the cancel message to be sent from signals. This opens up the\npossibility of a replay attack since the cancel message contains a\ncancel auth key which does not change after the backend is started.\n\nSummary of the solution proposed in the discussion on the thread:\nhttps://www.postgresql.org/message-id/489C969D.8020800@enterprisedb.com\nNot all the details were agreed upon but the trend was as follows:\n1. The current way of canceling a request shall continue to be\nsupported (at least until the next protocol version update)\n2. A new cancel message format and processing method shall be\nsupported alongside the original and the server shall advertise in the\nstartup runtime parameters whether this new method is supported\n3. A new libpq client that supports the new method shall know to use\nit if the server supports it\n4. The new method involves the client sending a hashed representation\nof the cancel authkey so that the key is never sent in clear text\n5. There will be a mechanism (an integer which is incremented before\nevery cancel message) to generate a new hash for every request\n6. 
When using the new method the postmaster shall check that the\ninteger of a new request is larger than the one used for the last\nrequest\n7. There will be a postmaster configuration parameter to decide\nwhether the new method is enforced or is optional\n\nThe above suggestion is not bad and quite appealing but in reality it\nforms a protocol change. It requires changes to both the server and\nthe client.\n\nDetails for new suggestion:\n\n1. We notice that the incentive for the elaborate solution proposed is\nthat the cancel auth key cannot be regenerated easily (Without\ntouching shared memory in a way that is not trivial with the current\nimplementation especially for non EXEC_BACKEND).\n2. If a new cancelation authkey can be regenerated once a cancel\nmessage was sent then we could just send the new key and no changes to\nlibpq or the protocol are required.\n2.1. One problem with such an approach is that other clients (JDBC?)\nonly read the key at startup and this shall be addressed below\n3. There will be a postmaster configuration parameter to decide\nwhether the new method is enforced or is optional\n\nHere is how the new design will allow generating a new \"random\"\nauthkey without any shared memory trickery:\nWe will add two backend variables:\nint32 original_cancel_key; (To allow cancels from clients that do not\nupdate the key in case the new method is not enforced by the server)\nchar random_bits[32];\n\nPOSTMASTER:\nWhen creating a new backend the original_cancel_key shall have a copy\nof the random key generated. random_bits shall be initialized using\npg_strong_random\n\nWhen a new cancel message request arrives:\nPOSTMASTER:\n(If the new method is not enforced then also a match against the\noriginal_cancel_key shall be accepted)\nIf the message's cancelAuthCode matches the current cancel_key then\n1. Generate a new key by a deterministic algorithm from random_bits\n2. 
regenerate random_bits using a SHA256 of the current key and the\ncurrent random bits\n3. As in today send SIGINT to backend\nBACKEND:\nUpon receiving SIGINT mark that a new key should be generated.\nWhen appropriate in the loop:\n1. Generate a new key by a deterministic algorithm from random_bits\n(same way POSTMASTER did so it will get the exact same key)\n2. regenerate random_bits using a SHA256 of the current key and the\ncurrent random bits (The same way POSTMASTER did so it will get the\nsame randomness)\n3. Send new key to client\n\nSo if the server does not enforce the new method old clients will work\njust the same. For clients using the new libpq or derived\nimplementations the new secure cancel shall be the default whether the\nserver enforces it or not.\n\nI welcome any comments or questions.\n\nThanks\nSebastian\n\n\n", "msg_date": "Wed, 31 Mar 2021 19:31:45 +0300", "msg_from": "Sebastian Cabot <scabot@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Prevent query cancel packets from being replayed by an attacker\n (From TODO)" } ]
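Sebastian's rotation scheme can be illustrated with a small, hypothetical sketch (Python purely for brevity; the SHA-256 derivation and 4-byte key follow his outline but are otherwise illustrative, not a worked-out PostgreSQL patch). The point is that postmaster and backend never need to exchange the new key through shared memory: each holds the same `(cancel_key, random_bits)` pair and applies the same deterministic update after every accepted cancel, so the two sides stay in lockstep while a replayed old key goes stale:

```python
import hashlib
import os

def next_state(cancel_key: bytes, random_bits: bytes):
    # 1. new key derived deterministically from random_bits
    new_key = hashlib.sha256(random_bits).digest()[:4]
    # 2. random_bits re-seeded from the current key + current random bits
    new_bits = hashlib.sha256(cancel_key + random_bits).digest()
    return new_key, new_bits

# Both sides start from the key material generated at backend startup.
seed_key, seed_bits = os.urandom(4), os.urandom(32)
postmaster = (seed_key, seed_bits)
backend = (seed_key, seed_bits)

seen = {postmaster[0]}
for _ in range(5):                     # five cancel requests
    postmaster = next_state(*postmaster)
    backend = next_state(*backend)     # backend repeats the same computation
    assert postmaster == backend       # both sides stay in lockstep
    seen.add(postmaster[0])

# With overwhelming probability every derived key is distinct, so a
# replayed (sniffed) cancel key no longer matches the current one.
rotated = len(seen) == 6
```

Because `next_state` is a pure function of state both processes already hold, the backend can recompute the postmaster's result after each SIGINT and push only the fresh key to the client, as described above.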
[ { "msg_contents": "Hi,\n\nJust found $SUBJECT involving time with time zone and a subselect. I\nstill haven't narrowed down the exact table/index minimal schema but\nif you run this query on the regression database it will crash.\n\n```\nupdate public.brintest_multi set\n  timetzcol = (select tz from generate_series('2021-01-01'::timestamp\nwith time zone, '2021-01-31', '5 days') tz limit 1)\n;\n```\n\nattached a backtrace. Let me know if you need extra information.\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL", "msg_date": "Wed, 31 Mar 2021 12:29:33 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\nIn build_distances():\n\n        a1 = eranges[i].maxval;\n        a2 = eranges[i + 1].minval;\n\nIt seems there was overlap between the successive ranges, leading to\ndelta = -6785000000\n\nFYI\n\nOn Wed, Mar 31, 2021 at 10:30 AM Jaime Casanova <\njcasanov@systemguards.com.ec> wrote:\n\n> Hi,\n>\n> Just found $SUBJECT involving time with time zone and a subselect. 
I\nstill don't have narrowed to the exact table/index minimal schema but\nif you run this query on the regression database it will creash.\n\n```\nupdate public.brintest_multi set\n  timetzcol = (select tz from generate_series('2021-01-01'::timestamp\nwith time zone, '2021-01-31', '5 days') tz limit 1)\n;\n```\n\nattached a backtrace. Let me know if you need extra information.\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL", "msg_date": "Wed, 31 Mar 2021 11:20:20 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 3/31/21 8:20 PM, Zhihong Yu wrote:\n> Hi,\n> In build_distances():\n> \n>         a1 = eranges[i].maxval;\n>         a2 = eranges[i + 1].minval;\n> \n> It seems there was overlap between the successive ranges, leading to\n> delta = -6785000000\n> \n\nI've been unable to reproduce this, so far :-( How exactly did you\nmanage to reproduce it?\n\n\nThe thing is - how could there be an overlap? The way we build expanded\nranges that should not be possible, I think. Can you print the ranges at\nthe end of fill_expanded_ranges? That should shed some light on this.\n\n\nFWIW I suspect those asserts on delta may be a bit problematic due to\nrounding errors. And I found one issue in the inet distance function,\nbecause apparently\n\ntest=# select '10.2.14.243/24'::inet < '10.2.14.231/24'::inet;\n ?column?\n----------\n f\n(1 row)\n\nbut the delta formula currently ignores the mask. But that's a separate\nissue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 31 Mar 2021 20:27:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\n\nI think I found the issue - it's kinda obvious, really. 
We need to\nconsider the timezone, because the \"time\" parts alone may be sorted\ndifferently. The attached patch should fix this, and it also fixes a\nsimilar issue in the inet data type.\n\nAs for why the regression tests did not catch this, it's most likely\nbecause the data is likely generated in \"nice\" ordering, or something\nlike that. I'll see if I can tweak the ordering to trigger these issues\nreliably, and I'll do a bit more randomized testing.\n\nThere's also the question of rounding errors, which I think might cause\nrandom assert failures (but in practice it's harmless, in the worst case\nwe'll merge the ranges a bit differently).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 1 Apr 2021 00:25:19 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\nFor inet data type fix:\n\n+ unsigned char a = addra[i];\n+ unsigned char b = addrb[i];\n+\n+ if (i >= lena)\n+ a = 0;\n+\n+ if (i >= lenb)\n+ b = 0;\n\nShould the length check precede the addra[i] ?\nSomething like:\n\n unsigned char a;\n if (i >= lena) a = 0;\n else a = addra[i];\n\nCheers\n\nOn Wed, Mar 31, 2021 at 3:25 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> I think I found the issue - it's kinda obvious, really. We need to\n> consider the timezone, because the \"time\" parts alone may be sorted\n> differently. The attached patch should fix this, and it also fixes a\n> similar issue in the inet data type.\n>\n> As for why the regression tests did not catch this, it's most likely\n> because the data is likely generated in \"nice\" ordering, or something\n> like that. 
I'll see if I can tweak the ordering to trigger these issues\n> reliably, and I'll do a bit more randomized testing.\n>\n> There's also the question of rounding errors, which I think might cause\n> random assert failures (but in practice it's harmless, in the worst case\n> we'll merge the ranges a bit differently).\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi,For inet data type fix:+       unsigned char a = addra[i];+       unsigned char b = addrb[i];++       if (i >= lena)+           a = 0;++       if (i >= lenb)+           b = 0;Should the length check precede the addra[i] ?Something like:       unsigned char a;       if (i >= lena) a = 0;       else a = addra[i];CheersOn Wed, Mar 31, 2021 at 3:25 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:Hi,\n\nI think I found the issue - it's kinda obvious, really. We need to\nconsider the timezone, because the \"time\" parts alone may be sorted\ndifferently. The attached patch should fix this, and it also fixes a\nsimilar issue in the inet data type.\n\nAs for why the regression tests did not catch this, it's most likely\nbecause the data is likely generated in \"nice\" ordering, or something\nlike that. 
I'll see if I can tweak the ordering to trigger these issues\nreliably, and I'll do a bit more randomized testing.\n\nThere's also the question of rounding errors, which I think might cause\nrandom assert failures (but in practice it's harmless, in the worst case\nwe'll merge the ranges a bit differently).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 31 Mar 2021 15:53:45 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 4/1/21 12:53 AM, Zhihong Yu wrote:\n> Hi,\n> For inet data type fix:\n> \n> +       unsigned char a = addra[i];\n> +       unsigned char b = addrb[i];\n> +\n> +       if (i >= lena)\n> +           a = 0;\n> +\n> +       if (i >= lenb)\n> +           b = 0;\n> \n> Should the length check precede the addra[i] ?\n> Something like:\n> \n>        unsigned char a;\n>        if (i >= lena) a = 0;\n>        else a = addra[i];\n> \n\nI don't think that makes any difference. We know the bytes are there, we\njust want to ignore / reset them in some cases.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 1 Apr 2021 01:10:36 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> Hi,\n>\n> I think I found the issue - it's kinda obvious, really. We need to\n> consider the timezone, because the \"time\" parts alone may be sorted\n> differently. The attached patch should fix this, and it also fixes a\n> similar issue in the inet data type.\n>\n\nah! yeah! obvious... 
if you say so ;)\n\n> As for why the regression tests did not catch this, it's most likely\n> because the data is likely generated in \"nice\" ordering, or something\n> like that. I'll see if I can tweak the ordering to trigger these issues\n> reliably, and I'll do a bit more randomized testing.\n>\n> There's also the question of rounding errors, which I think might cause\n> random assert failures (but in practice it's harmless, in the worst case\n> we'll merge the ranges a bit differently).\n>\n>\n\nI can confirm this fixes the crash in the query I showed and the original case.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL\n\n\n", "msg_date": "Wed, 31 Mar 2021 18:19:47 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\n- delta += (float8) addrb[i] - (float8) addra[i];\n- delta /= 256;\n...\n+ delta /= 255;\n\nMay I know why the divisor was changed ?\n\nThanks\n\nOn Wed, Mar 31, 2021 at 3:25 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> I think I found the issue - it's kinda obvious, really. We need to\n> consider the timezone, because the \"time\" parts alone may be sorted\n> differently. The attached patch should fix this, and it also fixes a\n> similar issue in the inet data type.\n>\n> As for why the regression tests did not catch this, it's most likely\n> because the data is likely generated in \"nice\" ordering, or something\n> like that. 
I'll see if I can tweak the ordering to trigger these issues\n> reliably, and I'll do a bit more randomized testing.\n>\n> There's also the question of rounding errors, which I think might cause\n> random assert failures (but in practice it's harmless, in the worst case\n> we'll merge the ranges a bit differently).\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi,-       delta += (float8) addrb[i] - (float8) addra[i];-       delta /= 256;...+       delta /= 255;May I know why the divisor was changed ?ThanksOn Wed, Mar 31, 2021 at 3:25 PM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:Hi,\n\nI think I found the issue - it's kinda obvious, really. We need to\nconsider the timezone, because the \"time\" parts alone may be sorted\ndifferently. The attached patch should fix this, and it also fixes a\nsimilar issue in the inet data type.\n\nAs for why the regression tests did not catch this, it's most likely\nbecause the data is likely generated in \"nice\" ordering, or something\nlike that. I'll see if I can tweak the ordering to trigger these issues\nreliably, and I'll do a bit more randomized testing.\n\nThere's also the question of rounding errors, which I think might cause\nrandom assert failures (but in practice it's harmless, in the worst case\nwe'll merge the ranges a bit differently).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 31 Mar 2021 18:22:03 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 4/1/21 3:22 AM, Zhihong Yu wrote:\n> Hi,\n> -       delta += (float8) addrb[i] - (float8) addra[i];\n> -       delta /= 256;\n> ...\n> +       delta /= 255;\n> \n> May I know why the divisor was changed ?\n> \n\nYeah, that's a mistake, it should remain 256. 
Consider two subtractions\n\n1.1.2.255 - 1.1.1.0 = [0, 0, 1, 255]\n\n1.1.2.255 - 1.1.0.255 = [0, 0, 2, 0]\n\nWith the divisor being 255 those would be the same (2 * 256), but we\nwant the first one to be a bit smaller. It's also consistent with how\ninet does subtractions:\n\ntest=# select '1.1.2.255'::inet - '1.1.0.255'::inet;\n ?column?\n----------\n 512\n(1 row)\n\ntest=# select '1.1.2.255'::inet - '1.1.1.0'::inet;\n ?column?\n----------\n 511\n(1 row)\n\nSo I'll keep the 256.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 1 Apr 2021 03:39:30 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Thu, Apr 1, 2021 at 11:25 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> As for why the regression tests did not catch this, it's most likely\n> because the data is likely generated in \"nice\" ordering, or something\n> like that. I'll see if I can tweak the ordering to trigger these issues\n> reliably, and I'll do a bit more randomized testing.\n\nFor what little it's worth now that you've cracked it, I can report\nthat make check blows up somewhere in here on a 32 bit system with\n--with-blocksize=32 :-)\n\n\n", "msg_date": "Thu, 1 Apr 2021 15:56:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Wed, Mar 31, 2021 at 6:19 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > I think I found the issue - it's kinda obvious, really. We need to\n> > consider the timezone, because the \"time\" parts alone may be sorted\n> > differently. 
The attached patch should fix this, and it also fixes a\n> > similar issue in the inet data type.\n> >\n>\n> ah! yeah! obvious... if you say so ;)\n>\n> > As for why the regression tests did not catch this, it's most likely\n> > because the data is likely generated in \"nice\" ordering, or something\n> > like that. I'll see if I can tweak the ordering to trigger these issues\n> > reliably, and I'll do a bit more randomized testing.\n> >\n> > There's also the question of rounding errors, which I think might cause\n> > random assert failures (but in practice it's harmless, in the worst case\n> > we'll merge the ranges a bit differently).\n> >\n> >\n>\n> I can confirm this fixes the crash in the query I showed and the original case.\n>\n\nBut I found another, but similar issue.\n\n```\nupdate public.brintest_multi set\n intervalcol = (select pg_catalog.avg(intervalcol) from public.brintest_bloom)\n;\n```\n\nBTW, i can reproduce just by executing \"make installcheck\" and\nimmediately execute that query\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL", "msg_date": "Thu, 1 Apr 2021 01:25:45 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi,\nCan you try this patch ?\n\nThanks\n\ndiff --git a/src/backend/access/brin/brin_minmax_multi.c\nb/src/backend/access/brin/brin_minmax_multi.c\nindex 70109960e8..25d6d2e274 100644\n--- a/src/backend/access/brin/brin_minmax_multi.c\n+++ b/src/backend/access/brin/brin_minmax_multi.c\n@@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)\n delta = 24L * 3600L * delta;\n\n /* and add the time part */\n- delta += result->time / (float8) 1000000.0;\n+ delta += (result->time + result->zone * USECS_PER_SEC) / (float8)\n1000000.0;\n\n Assert(delta >= 0);\n\nOn Wed, Mar 31, 2021 at 11:25 PM Jaime Casanova <\njcasanov@systemguards.com.ec> wrote:\n\n> 
On Wed, Mar 31, 2021 at 6:19 PM Jaime Casanova\n> <jcasanov@systemguards.com.ec> wrote:\n> >\n> > On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra\n> > <tomas.vondra@enterprisedb.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I think I found the issue - it's kinda obvious, really. We need to\n> > > consider the timezone, because the \"time\" parts alone may be sorted\n> > > differently. The attached patch should fix this, and it also fixes a\n> > > similar issue in the inet data type.\n> > >\n> >\n> > ah! yeah! obvious... if you say so ;)\n> >\n> > > As for why the regression tests did not catch this, it's most likely\n> > > because the data is likely generated in \"nice\" ordering, or something\n> > > like that. I'll see if I can tweak the ordering to trigger these issues\n> > > reliably, and I'll do a bit more randomized testing.\n> > >\n> > > There's also the question of rounding errors, which I think might cause\n> > > random assert failures (but in practice it's harmless, in the worst\n> case\n> > > we'll merge the ranges a bit differently).\n> > >\n> > >\n> >\n> > I can confirm this fixes the crash in the query I showed and the\n> original case.\n> >\n>\n> But I found another, but similar issue.\n>\n> ```\n> update public.brintest_multi set\n> intervalcol = (select pg_catalog.avg(intervalcol) from\n> public.brintest_bloom)\n> ;\n> ```\n>\n> BTW, i can reproduce just by executing \"make installcheck\" and\n> immediately execute that query\n>\n> --\n> Jaime Casanova\n> Director de Servicios Profesionales\n> SYSTEMGUARDS - Consultores de PostgreSQL\n>\n\nHi,Can you try this patch ?Thanksdiff --git a/src/backend/access/brin/brin_minmax_multi.c b/src/backend/access/brin/brin_minmax_multi.cindex 70109960e8..25d6d2e274 100644--- a/src/backend/access/brin/brin_minmax_multi.c+++ b/src/backend/access/brin/brin_minmax_multi.c@@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)     delta = 24L * 3600L * delta;     /* and add the time part */-    delta += 
result->time / (float8) 1000000.0;+    delta += (result->time + result->zone * USECS_PER_SEC) / (float8) 1000000.0;     Assert(delta >= 0);On Wed, Mar 31, 2021 at 11:25 PM Jaime Casanova <jcasanov@systemguards.com.ec> wrote:On Wed, Mar 31, 2021 at 6:19 PM Jaime Casanova\n<jcasanov@systemguards.com.ec> wrote:\n>\n> On Wed, Mar 31, 2021 at 5:25 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> >\n> > Hi,\n> >\n> > I think I found the issue - it's kinda obvious, really. We need to\n> > consider the timezone, because the \"time\" parts alone may be sorted\n> > differently. The attached patch should fix this, and it also fixes a\n> > similar issue in the inet data type.\n> >\n>\n> ah! yeah! obvious... if you say so ;)\n>\n> > As for why the regression tests did not catch this, it's most likely\n> > because the data is likely generated in \"nice\" ordering, or something\n> > like that. I'll see if I can tweak the ordering to trigger these issues\n> > reliably, and I'll do a bit more randomized testing.\n> >\n> > There's also the question of rounding errors, which I think might cause\n> > random assert failures (but in practice it's harmless, in the worst case\n> > we'll merge the ranges a bit differently).\n> >\n> >\n>\n> I can confirm this fixes the crash in the query I showed and the original case.\n>\n\nBut I found another, but similar issue.\n\n```\nupdate public.brintest_multi set\n  intervalcol = (select pg_catalog.avg(intervalcol) from public.brintest_bloom)\n;\n```\n\nBTW, i can reproduce just by executing \"make installcheck\" and\nimmediately execute that query\n\n--\nJaime Casanova\nDirector de Servicios Profesionales\nSYSTEMGUARDS - Consultores de PostgreSQL", "msg_date": "Thu, 1 Apr 2021 06:09:47 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 4/1/21 3:09 PM, Zhihong Yu wrote:\n> Hi,\n> Can you try this patch ?\n> \n> Thanks\n> \n> diff 
--git a/src/backend/access/brin/brin_minmax_multi.c\n> b/src/backend/access/brin/brin_minmax_multi.c\n> index 70109960e8..25d6d2e274 100644\n> --- a/src/backend/access/brin/brin_minmax_multi.c\n> +++ b/src/backend/access/brin/brin_minmax_multi.c\n> @@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)\n>      delta = 24L * 3600L * delta;\n> \n>      /* and add the time part */\n> -    delta += result->time / (float8) 1000000.0;\n> +    delta += (result->time + result->zone * USECS_PER_SEC) / (float8)\n> 1000000.0;\n> \n\nThat won't work, because Interval does not have a \"zone\" field, so this\nwon't even compile.\n\nThe problem is that interval comparisons convert the value using 30 days\nper month (see interval_cmp_value), but the formula in this function\nuses 31. So either we can tweak that (seems to fix it for me), or maybe\njust switch to interval_cmp_value directly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 1 Apr 2021 15:22:59 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "Hi, Tomas:\nThanks for the correction.\n\nI think switching to interval_cmp_value() would be better (with a comment\nexplaining why).\n\nCheers\n\nOn Thu, Apr 1, 2021 at 6:23 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> On 4/1/21 3:09 PM, Zhihong Yu wrote:\n> > Hi,\n> > Can you try this patch ?\n> >\n> > Thanks\n> >\n> > diff --git a/src/backend/access/brin/brin_minmax_multi.c\n> > b/src/backend/access/brin/brin_minmax_multi.c\n> > index 70109960e8..25d6d2e274 100644\n> > --- a/src/backend/access/brin/brin_minmax_multi.c\n> > +++ b/src/backend/access/brin/brin_minmax_multi.c\n> > @@ -2161,7 +2161,7 @@\n> brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)\n> > delta = 24L * 3600L * delta;\n> >\n> > /* and add the time part */\n> > - delta 
+= result->time / (float8) 1000000.0;\n> > + delta += (result->time + result->zone * USECS_PER_SEC) / (float8)\n> > 1000000.0;\n> >\n>\n> That won't work, because Interval does not have a \"zone\" field, so this\n> won't even compile.\n>\n> The problem is that interval comparisons convert the value using 30 days\n> per month (see interval_cmp_value), but the formula in this function\n> uses 31. So either we can tweak that (seems to fix it for me), or maybe\n> just switch to interval_cmp_value directly.\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nHi, Tomas:Thanks for the correction.I think switching to interval_cmp_value() would be better (with a comment explaining why).CheersOn Thu, Apr 1, 2021 at 6:23 AM Tomas Vondra <tomas.vondra@enterprisedb.com> wrote:On 4/1/21 3:09 PM, Zhihong Yu wrote:\n> Hi,\n> Can you try this patch ?\n> \n> Thanks\n> \n> diff --git a/src/backend/access/brin/brin_minmax_multi.c\n> b/src/backend/access/brin/brin_minmax_multi.c\n> index 70109960e8..25d6d2e274 100644\n> --- a/src/backend/access/brin/brin_minmax_multi.c\n> +++ b/src/backend/access/brin/brin_minmax_multi.c\n> @@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)\n>      delta = 24L * 3600L * delta;\n> \n>      /* and add the time part */\n> -    delta += result->time / (float8) 1000000.0;\n> +    delta += (result->time + result->zone * USECS_PER_SEC) / (float8)\n> 1000000.0;\n> \n\nThat won't work, because Interval does not have a \"zone\" field, so this\nwon't even compile.\n\nThe problem is that interval comparisons convert the value using 30 days\nper month (see interval_cmp_value), but the formula in this function\nuses 31. 
So either we can tweak that (seems to fix it for me), or maybe\njust switch to interval_cmp_value directly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 1 Apr 2021 06:31:31 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Thu, Apr 01, 2021 at 03:22:59PM +0200, Tomas Vondra wrote:\n> On 4/1/21 3:09 PM, Zhihong Yu wrote:\n> > Hi,\n> > Can you try this patch ?\n> > \n> > Thanks\n> > \n> > diff --git a/src/backend/access/brin/brin_minmax_multi.c\n> > b/src/backend/access/brin/brin_minmax_multi.c\n> > index 70109960e8..25d6d2e274 100644\n> > --- a/src/backend/access/brin/brin_minmax_multi.c\n> > +++ b/src/backend/access/brin/brin_minmax_multi.c\n> > @@ -2161,7 +2161,7 @@ brin_minmax_multi_distance_interval(PG_FUNCTION_ARGS)\n> > � � �delta = 24L * 3600L * delta;\n> > \n> > � � �/* and add the time part */\n> > - � �delta += result->time / (float8) 1000000.0;\n> > + � �delta += (result->time + result->zone * USECS_PER_SEC) / (float8)\n> > 1000000.0;\n> > \n> \n> That won't work, because Interval does not have a \"zone\" field, so this\n> won't even compile.\n> \n> The problem is that interval comparisons convert the value using 30 days\n> per month (see interval_cmp_value), but the formula in this function\n> uses 31. 
So either we can tweak that (seems to fix it for me), or maybe\n> just switch to interval_cmp_value directly.\n> \n\nChanging the formula to use 30-day months fixed it.\n\nand I found another issue; this time it involves autovacuum, which makes it\na little more complicated to reproduce.\n\nCurrently the only stable way to reproduce it is using pgbench:\n\npgbench -i postgres\npsql -c \"CREATE INDEX ON pgbench_history USING brin (tid int4_minmax_multi_ops);\" postgres\npgbench -c2 -j2 -T 300 -n postgres\n\nAttached is a backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Sun, 4 Apr 2021 00:25:50 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 4/4/21 7:25 AM, Jaime Casanova wrote:\n> ...\n> Changing the formula to use 30-day months fixed it.\n> \n\nI've pushed fixes for all the bugs reported in this thread so far\n(mostly distance calculations, ...), and one bug (swapped operator\nparameters in one place) I discovered while working on the fixes.\n\n> and I found another issue; this time it involves autovacuum, which makes it\n> a little more complicated to reproduce.\n> \n> Currently the only stable way to reproduce it is using pgbench:\n> \n> pgbench -i postgres\n> psql -c \"CREATE INDEX ON pgbench_history USING brin (tid int4_minmax_multi_ops);\" postgres\n> pgbench -c2 -j2 -T 300 -n postgres\n> \n\nFixed and pushed too.\n\nTurned out to be a silly bug in forgetting to remember the number of\nranges after deduplication, which sometimes resulted in assert failure.\nIt's a bit hard to trigger because concurrency / good timing is needed\nwhile summarizing the range, requiring a call to the \"union\" function.
Just\nrunning the pgbench did not trigger the issue for me, I had to manually\ncall the brin_summarize_new_values().\n\nFor the record, I did a lot of testing with data randomized in various\nways - the scripts are available here:\n\nhttps://github.com/tvondra/brin-randomized-tests\n\nIt was focused on discovering issues in the distance functions, and\ncomparing results with/without the index. Maybe the next step should be\nadding some changes to the data, which might trigger more issues like\nthis one.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 4 Apr 2021 19:52:50 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "BTW, for the inet data type, I considered simply calling the \"minus\"\nfunction, but that does not work because of this strange behavior:\n\n\nint4=# select '10.1.1.102/32'::inet > '10.1.1.142/24'::inet;\n ?column?\n----------\n t\n(1 row)\n\nint4=# select '10.1.1.102/32'::inet - '10.1.1.142/24'::inet;\n ?column?\n----------\n -40\n(1 row)\n\n\nThat is, (a>b) but then (a-b) < 0. AFAICS it's due to comparator\nconsidering the mask, while the minus ignores it. 
I find it a bit\nstrange, but I assume it's intentional.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 4 Apr 2021 20:01:52 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Sun, Apr 04, 2021 at 07:52:50PM +0200, Tomas Vondra wrote:\n> On 4/4/21 7:25 AM, Jaime Casanova wrote:\n> > \n> > pgbench -i postgres\n> > psql -c \"CREATE INDEX ON pgbench_history USING brin (tid int4_minmax_multi_ops);\" postgres\n> > pgbench -c2 -j2 -T 300 -n postgres\n> > \n> \n> Fixed and pushed too.\n> \n> Turned out to be a silly bug in forgetting to remember the number of\n> ranges after deduplication, which sometimes resulted in assert failure.\n> It's a bit hard to trigger because concurrency / good timing is needed\n> while summarizing the range, requiring a call to \"union\" function. Just\n> running the pgbench did not trigger the issue for me, I had to manually\n> call the brin_summarize_new_values().\n> \n> For the record, I did a lot of testing with data randomized in various\n> ways - the scripts are available here:\n> \n> https://github.com/tvondra/brin-randomized-tests\n> \n> It was focused on discovering issues in the distance functions, and\n> comparing results with/without the index. 
Maybe the next step should be\n> adding some changes to the data, which might trigger more issues like\n> this one.\n> \n\nJust found one more occurrence of this one with this index while an\nautovacuum was running:\n\n\"\"\"\nCREATE INDEX bt_f8_heap_seqno_idx \n ON public.bt_f8_heap \n USING brin (seqno float8_minmax_multi_ops);\n\"\"\"\n\nAttached is a backtrace.\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL", "msg_date": "Thu, 29 Sep 2022 01:53:27 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 9/29/22 08:53, Jaime Casanova wrote:\n> ...\n> \n> Just found one more occurrence of this one with this index while an\n> autovacuum was running:\n> \n> \"\"\"\n> CREATE INDEX bt_f8_heap_seqno_idx \n> ON public.bt_f8_heap \n> USING brin (seqno float8_minmax_multi_ops);\n> \"\"\"\n> Attached is a backtrace.\n\nThanks for the report!\n\nI think I see the issue - brin_minmax_multi_union does not realize the\ntwo summaries could have just one range each, and those can overlap so\nthat merge_overlapping_ranges combines them into a single one.\n\nThis is harmless, except that the assert in build_distances is overly\nstrict. Not sure if we should just remove the assert or only compute the\ndistances with (neranges>1).\n\nDo you happen to have the core dump?
It'd be useful to look at ranges_a\nand ranges_b, to confirm this is indeed what's happening.\n\nIf not, how reproducible is it?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Oct 2022 19:53:34 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Mon, Oct 03, 2022 at 07:53:34PM +0200, Tomas Vondra wrote:\n> On 9/29/22 08:53, Jaime Casanova wrote:\n> > ...\n> > \n> > Just found one more ocurrance of this one with this index while an\n> > autovacuum was running:\n> > \n> > \"\"\"\n> > CREATE INDEX bt_f8_heap_seqno_idx \n> > ON public.bt_f8_heap \n> > USING brin (seqno float8_minmax_multi_ops);\n> > \"\"\"\n> > Attached is a backtrace.\n> \n> Thanks for the report!\n> \n> I think I see the issue - brin_minmax_multi_union does not realize the\n> two summaries could have just one range each, and those can overlap so\n> that merge_overlapping_ranges combines them into a single one.\n> \n> This is harmless, except that the assert int build_distances is overly\n> strict. Not sure if we should just remove the assert or only compute the\n> distances with (neranges>1).\n> \n> Do you happen to have the core dump? 
It'd be useful to look at ranges_a\n> and ranges_b, to confirm this is indeed what's happening.\n> \n\nI do have it.\n\n(gdb) p *ranges_a\n$4 = {\n typid = 701,\n colloid = 0,\n attno = 0,\n cmp = 0x0,\n nranges = 0,\n nsorted = 1,\n nvalues = 1,\n maxvalues = 32,\n target_maxvalues = 32,\n values = 0x55d2ea1987c8\n}\n(gdb) p *ranges_b\n$5 = {\n typid = 701,\n colloid = 0,\n attno = 0,\n cmp = 0x0,\n nranges = 0,\n nsorted = 1,\n nvalues = 1,\n maxvalues = 32,\n target_maxvalues = 32,\n values = 0x55d2ea196da8\n}\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 3 Oct 2022 14:25:49 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 10/3/22 21:25, Jaime Casanova wrote:\n> On Mon, Oct 03, 2022 at 07:53:34PM +0200, Tomas Vondra wrote:\n>> On 9/29/22 08:53, Jaime Casanova wrote:\n>>> ...\n>>>\n>>> Just found one more ocurrance of this one with this index while an\n>>> autovacuum was running:\n>>>\n>>> \"\"\"\n>>> CREATE INDEX bt_f8_heap_seqno_idx \n>>> ON public.bt_f8_heap \n>>> USING brin (seqno float8_minmax_multi_ops);\n>>> \"\"\"\n>>> Attached is a backtrace.\n>>\n>> Thanks for the report!\n>>\n>> I think I see the issue - brin_minmax_multi_union does not realize the\n>> two summaries could have just one range each, and those can overlap so\n>> that merge_overlapping_ranges combines them into a single one.\n>>\n>> This is harmless, except that the assert int build_distances is overly\n>> strict. Not sure if we should just remove the assert or only compute the\n>> distances with (neranges>1).\n>>\n>> Do you happen to have the core dump? 
It'd be useful to look at ranges_a\n>> and ranges_b, to confirm this is indeed what's happening.\n>>\n> \n> I do have it.\n> \n> (gdb) p *ranges_a\n> $4 = {\n> typid = 701,\n> colloid = 0,\n> attno = 0,\n> cmp = 0x0,\n> nranges = 0,\n> nsorted = 1,\n> nvalues = 1,\n> maxvalues = 32,\n> target_maxvalues = 32,\n> values = 0x55d2ea1987c8\n> }\n> (gdb) p *ranges_b\n> $5 = {\n> typid = 701,\n> colloid = 0,\n> attno = 0,\n> cmp = 0x0,\n> nranges = 0,\n> nsorted = 1,\n> nvalues = 1,\n> maxvalues = 32,\n> target_maxvalues = 32,\n> values = 0x55d2ea196da8\n> }\n> \n\nThanks. That mostly confirms my theory. I'd bet that this\n\n(gdb) p ranges_a->values[0]\n(gdb) p ranges_b->values[0]\n\nwill print the same value. I've been able to reproduce this, but it's\npretty difficult to get the timing right (and it requires table with\njust a single value in that BRIN range).\n\nI'll get this fixed in a couple days. Considering the benign nature of\nthis issue (unnecessary assert) I'm not going to rush.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 3 Oct 2022 22:29:38 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Mon, Oct 03, 2022 at 10:29:38PM +0200, Tomas Vondra wrote:\n> On 10/3/22 21:25, Jaime Casanova wrote:\n> > On Mon, Oct 03, 2022 at 07:53:34PM +0200, Tomas Vondra wrote:\n> >> On 9/29/22 08:53, Jaime Casanova wrote:\n> >>> ...\n> >>>\n> >>> Just found one more ocurrance of this one with this index while an\n> >>> autovacuum was running:\n> >>>\n> >>> \"\"\"\n> >>> CREATE INDEX bt_f8_heap_seqno_idx \n> >>> ON public.bt_f8_heap \n> >>> USING brin (seqno float8_minmax_multi_ops);\n> >>> \"\"\"\n> >>> Attached is a backtrace.\n> >>\n> >> Thanks for the report!\n> >>\n> >> I think I see the issue - brin_minmax_multi_union does not realize the\n> >> two summaries could 
have just one range each, and those can overlap so\n> >> that merge_overlapping_ranges combines them into a single one.\n> >>\n> >> This is harmless, except that the assert int build_distances is overly\n> >> strict. Not sure if we should just remove the assert or only compute the\n> >> distances with (neranges>1).\n> >>\n> >> Do you happen to have the core dump? It'd be useful to look at ranges_a\n> >> and ranges_b, to confirm this is indeed what's happening.\n> >>\n> > \n> > I do have it.\n> > \n> > (gdb) p *ranges_a\n> > $4 = {\n> > typid = 701,\n> > colloid = 0,\n> > attno = 0,\n> > cmp = 0x0,\n> > nranges = 0,\n> > nsorted = 1,\n> > nvalues = 1,\n> > maxvalues = 32,\n> > target_maxvalues = 32,\n> > values = 0x55d2ea1987c8\n> > }\n> > (gdb) p *ranges_b\n> > $5 = {\n> > typid = 701,\n> > colloid = 0,\n> > attno = 0,\n> > cmp = 0x0,\n> > nranges = 0,\n> > nsorted = 1,\n> > nvalues = 1,\n> > maxvalues = 32,\n> > target_maxvalues = 32,\n> > values = 0x55d2ea196da8\n> > }\n> > \n> \n> Thanks. That mostly confirms my theory. I'd bet that this\n> \n> (gdb) p ranges_a->values[0]\n> (gdb) p ranges_b->values[0]\n> \n> will print the same value. \n> \n\nyou're right, they are same value\n\n(gdb) p ranges_a->values[0]\n$1 = 4679532294229561068\n(gdb) p ranges_b->values[0]\n$2 = 4679532294229561068\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Mon, 3 Oct 2022 23:26:56 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On Mon, Oct 03, 2022 at 10:29:38PM +0200, Tomas Vondra wrote:\n> I'll get this fixed in a couple days. 
Considering the benign nature of\n> this issue (unnecessary assert) I'm not going to rush.\n\nIs this still an outstanding issue ?\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Nov 2022 17:13:10 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "On 11/9/22 00:13, Justin Pryzby wrote:\n> On Mon, Oct 03, 2022 at 10:29:38PM +0200, Tomas Vondra wrote:\n>> I'll get this fixed in a couple days. Considering the benign nature of\n>> this issue (unnecessary assert) I'm not going to rush.\n> \n> Is this still an outstanding issue ?\n> \n\nYes. I'll get it fixed, but it's harmless in practice (without asserts),\nand I've been focusing on the other issue with broken NULL-handling in\nBRIN indexes.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 10 Nov 2022 13:46:26 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" }, { "msg_contents": "I finally pushed this fix.\n\nIn the end I both relaxed the assert a little bit to allow calling\nbuild_distances for a single range, and added a bail out so that the\ncaller gets regular NULL and not whatever palloc(0) produces.\n\nThanks again for the report!\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 30 Dec 2022 20:53:50 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Crash in BRIN minmax-multi indexes" } ]
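For readers following the BRIN thread above: the fix Tomas describes (relaxing the assert so build_distances may be called for a single merged range, and bailing out with a plain NULL instead of whatever palloc(0) produces) can be sketched roughly as follows. This is a simplified, hypothetical standalone version for illustration only; the real build_distances in PostgreSQL's brin_minmax_multi.c operates on the minmax-multi Ranges struct and the type's comparison support function, not on a bare array of doubles.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Simplified sketch of the build_distances fix discussed above.
 * Given nvalues sorted boundary values, compute the (nvalues - 1) gaps
 * between consecutive values.
 *
 * With a single merged value/range there are no gaps to measure, so
 * instead of tripping an overly strict assert (or calling the
 * equivalent of palloc(0)), report zero distances and return NULL.
 */
double *
build_distances_sketch(const double *values, int nvalues, int *ndistances)
{
	/* relaxed assert: a single value/range is now considered legal */
	assert(nvalues >= 1);

	*ndistances = nvalues - 1;

	/* bail out: the caller gets a regular NULL, not a zero-size chunk */
	if (*ndistances == 0)
		return NULL;

	double *distances = malloc(sizeof(double) * *ndistances);
	for (int i = 0; i < *ndistances; i++)
		distances[i] = values[i + 1] - values[i];

	return distances;
}
```

With this guard in place, the crash scenario from the thread (two summaries that each contain just one value, merged into a single overlapping range) simply yields NULL distances rather than an assertion failure.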
[ { "msg_contents": "Hi,\n\nSo far the extended statistics are applied only at scan level, i.e. when\nestimating selectivity for individual tables. Which is great, but joins\nare a known challenge, so let's try doing something about it ...\n\nKonstantin Knizhnik posted a patch [1] using functional dependencies to\nimprove join estimates in January. It's an interesting approach, but as\nI explained in that thread I think we should try a different approach,\nsimilar to how we use MCV lists without extended stats. We'll probably\nend up considering functional dependencies too, but probably only as a\nfallback (similar to what we do for single-relation estimates).\n\nThis is a PoC demonstrating the approach I envisioned. It's incomplete\nand has various limitations:\n\n- no expression support, just plain attribute references\n- only equality conditions\n- requires MCV lists on both sides\n- inner joins only\n\nAll of this can / should be relaxed later, of course. But for a PoC this\nseems sufficient.\n\nThe basic principle is fairly simple, and mimics what eqjoinsel_inner\ndoes. Assume we have a query:\n\n SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a AND t1.b = t2.b)\n\nIf we have MCV lists on (t1.a,t1.b) and (t2.a,t2.b) then we can use the\nsame logic as eqjoinsel_inner and \"match\" them together. If the MCV list\nis \"larger\" - e.g. 
on (a,b,c) - then it's a bit more complicated, but\nthe general idea is the same.\n\nTo demonstrate this, consider a very simple example with a table that\nhas a lot of dependency between the columns:\n\n==================================================================\n\nCREATE TABLE t (a INT, b INT, c INT, d INT);\nINSERT INTO t SELECT mod(i,100), mod(i,100), mod(i,100), mod(i,100)\n FROM generate_series(1,100000) s(i);\nANALYZE t;\n\nSELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n\nCREATE STATISTICS s (mcv, ndistinct) ON a,b,c,d FROM t;\nANALYZE t;\n\nSELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n\nALTER STATISTICS s SET STATISTICS 10000;\nANALYZE t;\n\nSELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n\n==================================================================\n\nThe results look like this:\n\n- actual rows: 100000000\n- estimated (no stats): 1003638\n- estimated (stats, 100): 100247844\n- estimated (stats, 10k): 100000000\n\nSo, in this case the extended stats help quite a bit, even with the\ndefault statistics target.\n\nHowever, there are other things we can do. For example, we can use\nrestrictions (at relation level) as \"conditions\" to filter the MCV lists,\nand calculate conditional probability. This is useful even if there's\njust a single join condition (on one column), but there are dependencies\nbetween that column and the other filters. Or maybe when there are\nfilters between conditions on the two sides.\n\nConsider for example these two queries:\n\nSELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b)\n WHERE t1.c < 25 AND t2.d < 25;\n\nSELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b)\n WHERE t1.c < 25 AND t2.d > 75;\n\nIn this particular case we know that (a = b = c = d) so the two filters\nare somewhat redundant. 
The regular estimates will ignore that, but with\nMCV we can actually detect that - when we combine the two MCV lists, we\nessentially calculate MCV (a,b,t1.c,t2.d), and use that.\n\n Q1 Q2\n- actual rows: 25000000 0\n- estimated (no stats): 62158 60241\n- estimated (stats, 100): 25047900 1\n- estimated (stats, 10k): 25000000 1\n\nObviously, the accuracy depends on how representative the MCV list is\n(what fraction of the data it represents), and in this case it works\nfairly nicely. A lot of the future work will be about handling cases\nwhen it represents only part of the data.\n\nThe attached PoC patch has a number of FIXME and XXX, describing stuff I\nignored to keep it simple, possible future improvement. And so on.\n\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/71d67391-16a9-3e5e-b5e4-8f7fd32cc1b2@postgrespro.ru\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 31 Mar 2021 19:36:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\n+ * has_matching_mcv\n+ * Check whether the list contains statistic of a given kind\n\nThe method name is find_matching_mcv(). It seems the method initially\nreturned bool but later the return type was changed.\n\n+ StatisticExtInfo *found = NULL;\n\nfound normally is associated with bool return value. Maybe call the\nvariable matching_mcv or something similar.\n\n+ /* skip items eliminated by restrictions on rel2 */\n+ if (conditions2 && !conditions2[j])\n+ continue;\n\nMaybe you can add a counter recording the number of non-skipped items for\nthe inner loop. 
If this counter is 0 after the completion of one iteration,\nwe come out of the outer loop directly.\n\nCheers\n\nOn Wed, Mar 31, 2021 at 10:36 AM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> So far the extended statistics are applied only at scan level, i.e. when\n> estimating selectivity for individual tables. Which is great, but joins\n> are a known challenge, so let's try doing something about it ...\n>\n> Konstantin Knizhnik posted a patch [1] using functional dependencies to\n> improve join estimates in January. It's an interesting approach, but as\n> I explained in that thread I think we should try a different approach,\n> similar to how we use MCV lists without extended stats. We'll probably\n> end up considering functional dependencies too, but probably only as a\n> fallback (similar to what we do for single-relation estimates).\n>\n> This is a PoC demonstrating the approach I envisioned. It's incomplete\n> and has various limitations:\n>\n> - no expression support, just plain attribute references\n> - only equality conditions\n> - requires MCV lists on both sides\n> - inner joins only\n>\n> All of this can / should be relaxed later, of course. But for a PoC this\n> seems sufficient.\n>\n> The basic principle is fairly simple, and mimics what eqjoinsel_inner\n> does. Assume we have a query:\n>\n> SELECT * FROM t1 JOIN t2 ON (t1.a = t2.a AND t1.b = t2.b)\n>\n> If we have MCV lists on (t1.a,t1.b) and (t2.a,t2.b) then we can use the\n> same logic as eqjoinsel_inner and \"match\" them together. If the MCV list\n> is \"larger\" - e.g. 
on (a,b,c) - then it's a bit more complicated, but\n> the general idea is the same.\n>\n> To demonstrate this, consider a very simple example with a table that\n> has a lot of dependency between the columns:\n>\n> ==================================================================\n>\n> CREATE TABLE t (a INT, b INT, c INT, d INT);\n> INSERT INTO t SELECT mod(i,100), mod(i,100), mod(i,100), mod(i,100)\n> FROM generate_series(1,100000) s(i);\n> ANALYZE t;\n>\n> SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n>\n> CREATE STATISTICS s (mcv, ndistinct) ON a,b,c,d FROM t;\n> ANALYZE t;\n>\n> SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n>\n> ALTER STATISTICS s SET STATISTICS 10000;\n> ANALYZE t;\n>\n> SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b);\n>\n> ==================================================================\n>\n> The results look like this:\n>\n> - actual rows: 100000000\n> - estimated (no stats): 1003638\n> - estimated (stats, 100): 100247844\n> - estimated (stats, 10k): 100000000\n>\n> So, in this case the extended stats help quite a bit, even with the\n> default statistics target.\n>\n> However, there are other things we can do. For example, we can use\n> restrictions (at relation level) as \"conditions\" to filter the MCV lits,\n> and calculate conditional probability. This is useful even if there's\n> just a single join condition (on one column), but there are dependencies\n> between that column and the other filters. Or maybe when there are\n> filters between conditions on the two sides.\n>\n> Consider for example these two queries:\n>\n> SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b)\n> WHERE t1.c < 25 AND t2.d < 25;\n>\n> SELECT * FROM t t1 JOIN t t2 ON (t1.a = t2.a AND t1.b = t2.b)\n> WHERE t1.c < 25 AND t2.d > 75;\n>\n> In this particular case we know that (a = b = c = d) so the two filters\n> are somewhat redundant. 
The regular estimates will ignore that, but with\n> MCV we can actually detect that - when we combine the two MCV lists, we\n> essentially calculate MCV (a,b,t1.c,t2.d), and use that.\n>\n> Q1 Q2\n> - actual rows: 25000000 0\n> - estimated (no stats): 62158 60241\n> - estimated (stats, 100): 25047900 1\n> - estimated (stats, 10k): 25000000 1\n>\n> Obviously, the accuracy depends on how representative the MCV list is\n> (what fraction of the data it represents), and in this case it works\n> fairly nicely. A lot of the future work will be about handling cases\n> when it represents only part of the data.\n>\n> The attached PoC patch has a number of FIXME and XXX, describing stuff I\n> ignored to keep it simple, possible future improvement. And so on.\n>\n>\n> regards\n>\n>\n> [1]\n>\n>\n> https://www.postgresql.org/message-id/flat/71d67391-16a9-3e5e-b5e4-8f7fd32cc1b2@postgrespro.ru\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Wed, 31 Mar 2021 16:47:14 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\nHere's a slightly improved / cleaned up version of the PoC patch, \nremoving a bunch of XXX and FIXMEs, adding comments, etc.\n\nThe approach is sound in principle, I think, although there's still a \nbunch of things to address:\n\n1) statext_compare_mcvs only really deals with equijoins / inner joins \nat the moment, as it's based on eqjoinsel_inner. It's probably desirable \nto add support for additional join types (inequality and outer joins).\n\n2) Some of the steps are performed multiple times - e.g. matching base \nrestrictions to statistics, etc. 
Those probably can be cached somehow, \nto reduce the overhead.\n\n3) The logic of picking the statistics to apply is somewhat simplistic, \nand maybe could be improved in some way. OTOH the number of candidate \nstatistics is likely low, so this is not a big issue.\n\n4) statext_compare_mcvs is based on eqjoinsel_inner and makes a bunch of \nassumptions similar to the original, but some of those assumptions may \nbe wrong in the multi-column case, particularly when working with a subset \nof columns. For example (ndistinct - size(MCV)) may not be the number of \ndistinct combinations outside the MCV, when ignoring some columns. Same \nfor nullfract, and so on. I'm not sure we can do much more than pick \nsome reasonable approximation.\n\n5) There are no regression tests at the moment. Clearly a gap.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 14 Jun 2021 19:34:15 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\nattached is an improved version of this patch, addressing some of the\npoints mentioned in my last message:\n\n1) Adds a couple regression tests, testing various join cases with\nexpressions, additional conditions, etc.\n\n2) Adds support for expressions, so the join clauses don't need to\nreference just simple columns. So e.g. this can benefit from extended\nstatistics, when defined on the expressions:\n\n -- CREATE STATISTICS s1 ON (a+1), b FROM t1;\n -- CREATE STATISTICS s2 ON (a+1), b FROM t2;\n\n SELECT * FROM t1 JOIN t2 ON ((t1.a + 1) = (t2.a + 1) AND t1.b = t2.b);\n\n3) Can combine extended statistics and regular (per-column) statistics.\nThe previous version required extended statistics MCV on both sides of\nthe join, but adding extended statistics on both sides may be impractical\n(e.g. 
if one side does not have correlated columns it's silly to have to\nadd it just to make this patch happy).\n\nFor example you may have extended stats on the dimension table but not\nthe fact table, and the patch still can combine those two. Of course, if\nthere's no MCV on either side, we can't do much.\n\nSo this patch works when both sides have extended statistics MCV, or\nwhen one side has extended MCV and the other side regular MCV. It might\nseem silly, but the extended MCV allows considering additional baserel\nconditions (if there are any).\n\n\nexamples\n========\n\nThe table / data is very simple, but hopefully good enough for some\nsimple examples.\n\n create table t1 (a int, b int, c int);\n create table t2 (a int, b int, c int);\n\n insert into t1 select mod(i,50), mod(i,50), mod(i,50)\n from generate_series(1,1000) s(i);\n\n insert into t2 select mod(i,50), mod(i,50), mod(i,50)\n from generate_series(1,1000) s(i);\n\n analyze t1, t2;\n\nFirst, without extended stats (just the first line of explain analyze,\nto keep the message short):\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b);\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=31.00..106.00 rows=400 width=24)\n (actual time=5.426..22.678 rows=20000 loops=1)\n\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c < 25;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=28.50..160.75 rows=10000 width=24)\n (actual time=5.325..19.760 rows=10000 loops=1)\n\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c <\n25 and t2.c > 25;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=24.50..104.75 rows=4800 width=24)\n (actual time=5.618..5.639 rows=0 loops=1)\n\n\nNow, let's create statistics:\n\n create statistics s1 on a, b, c from t1 ;\n create statistics s2 on a, b, c 
from t2 ;\n analyze t1, t2;\n\nand now the same queries again:\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b);\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=31.00..106.00 rows=20000 width=24)\n (actual time=5.448..22.713 rows=20000 loops=1)\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c < 25;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=28.50..160.75 rows=10000 width=24)\n (actual time=5.317..16.680 rows=10000 loops=1)\n\n\nexplain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c <\n25 and t2.c > 25;\n\n QUERY PLAN\n------------------------------------------------------------------------\n Hash Join (cost=24.50..104.75 rows=1 width=24)\n (actual time=5.647..5.667 rows=0 loops=1)\n\n\nThose examples are a bit simplistic, but the improvements are fairly\nclear I think.\n\n\nlimitations & open issues\n=========================\n\nLet's talk about the main general restrictions and open issues in the\ncurrent patch that I can think of at the moment.\n\n1) statistics covering all join clauses\n\nThe patch requires the statistics to cover all the join clauses, mostly\nbecause it simplifies the implementation. This means that to use the\nper-column statistics, there has to be just a single join clause.\n\nAFAICS this could be relaxed and we could use multiple statistics to\nestimate the clauses. But it'd make selection of statistics much more\ncomplicated, because we have to pick \"matching\" statistics on both sides\nof the join. So it seems like an overkill, and most joins have very few\nconditions anyway.\n\n\n2) only equality join clauses\n\nThe other restriction is that at the moment this only supports simple\nequality clauses, combined with AND. 
So for example this is supported\n\n t1 JOIN t2 ON ((t1.a = t2.a) AND (t1.b + 2 = t2.b + 1))\n\nwhile these are not:\n\n t1 JOIN t2 ON ((t1.a = t2.a) OR (t1.b + 2 = t2.b + 1))\n\n t1 JOIN t2 ON ((t1.a - t2.a = 0) AND (t1.b + 2 = t2.b + 1))\n\n t1 JOIN t2 ON ((t1.a = t2.a) AND ((t1.b = t2.b) OR (t1.c = t2.c)))\n\nI'm not entirely sure these restrictions can be relaxed. It's not that\ndifficult to evaluate these cases when matching items between the MCV\nlists, similarly to how we evaluate bitmaps for baserel estimates.\n\nBut I'm not sure what to do about the part not covered by the MCV lists.\nThe eqjoinsel() approach uses ndistinct estimates for that, but that\nonly works for AND clauses, I think. How would that work for OR?\n\nSimilarly, I'm not sure we can do much for non-equality conditions, but\nthose are currently estimated as default selectivity in selfuncs.c.\n\n\n3) estimation by join pairs\n\nAt the moment, the estimates are calculated for pairs of relations, so\nfor example given a query\n\n explain analyze\n select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n join t3 on (t1.b = t3.b and t2.c = t3.c);\n\nwe'll estimate the first join (t1,t2) just fine, but then the second\njoin actually combines (t1,t2,t3). What the patch currently does is it\nsplits it into (t1,t2) and (t2,t3) and estimates those. I wonder if this\nshould actually combine all three MCVs at once - we're pretty much\ncombining the MCVs into one large MCV representing the join result.\n\nBut I haven't done that yet, as it requires the MCVs to be combined\nusing the join clauses (overlap in a way), but I'm not sure how likely\nthat is in practice. In the example it could help, but that's a bit\nartificial example.\n\n\n4) still just inner equi-joins\n\nI haven't done any work on extending this to outer joins etc. 
Adding\nouter and semi joins should not be complicated, mostly copying and\ntweaking what eqjoinsel() does.\n\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 6 Oct 2021 21:33:03 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On Wed, Oct 6, 2021 at 12:33 PM Tomas Vondra <tomas.vondra@enterprisedb.com>\nwrote:\n\n> Hi,\n>\n> attached is an improved version of this patch, addressing some of the\n> points mentioned in my last message:\n>\n> 1) Adds a couple regression tests, testing various join cases with\n> expressions, additional conditions, etc.\n>\n> 2) Adds support for expressions, so the join clauses don't need to\n> reference just simple columns. So e.g. this can benefit from extended\n> statistics, when defined on the expressions:\n>\n> -- CREATE STATISTICS s1 ON (a+1), b FROM t1;\n> -- CREATE STATISTICS s2 ON (a+1), b FROM t2;\n>\n> SELECT * FROM t1 JOIN t2 ON ((t1.a + 1) = (t2.a + 1) AND t1.b = t2.b);\n>\n> 3) Can combine extended statistics and regular (per-column) statistics.\n> The previous version required extended statistics MCV on both sides of\n> the join, but adding extended statistics on both sides may impractical\n> (e.g. if one side does not have correlated columns it's silly to have to\n> add it just to make this patch happy).\n>\n> For example you may have extended stats on the dimension table but not\n> the fact table, and the patch still can combine those two. Of course, if\n> there's no MCV on either side, we can't do much.\n>\n> So this patch works when both sides have extended statistics MCV, or\n> when one side has extended MCV and the other side regular MCV. 
It might\n> seem silly, but the extended MCV allows considering additional baserel\n> conditions (if there are any).\n>\n>\n> examples\n> ========\n>\n> The table / data is very simple, but hopefully good enough for some\n> simple examples.\n>\n> create table t1 (a int, b int, c int);\n> create table t2 (a int, b int, c int);\n>\n> insert into t1 select mod(i,50), mod(i,50), mod(i,50)\n> from generate_series(1,1000) s(i);\n>\n> insert into t2 select mod(i,50), mod(i,50), mod(i,50)\n> from generate_series(1,1000) s(i);\n>\n> analyze t1, t2;\n>\n> First, without extended stats (just the first line of explain analyze,\n> to keep the message short):\n>\n> explain analyze select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b);\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=31.00..106.00 rows=400 width=24)\n> (actual time=5.426..22.678 rows=20000 loops=1)\n>\n>\n> explain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c < 25;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=28.50..160.75 rows=10000 width=24)\n> (actual time=5.325..19.760 rows=10000 loops=1)\n>\n>\n> explain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c <\n> 25 and t2.c > 25;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=24.50..104.75 rows=4800 width=24)\n> (actual time=5.618..5.639 rows=0 loops=1)\n>\n>\n> Now, let's create statistics:\n>\n> create statistics s1 on a, b, c from t1 ;\n> create statistics s2 on a, b, c from t2 ;\n> analyze t1, t2;\n>\n> and now the same queries again:\n>\n> explain analyze select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b);\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=31.00..106.00 rows=20000 width=24)\n> (actual time=5.448..22.713 rows=20000 loops=1)\n>\n> explain analyze select 
* from t1 join t2 on (t1.a = t2.a) where t1.c < 25;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=28.50..160.75 rows=10000 width=24)\n> (actual time=5.317..16.680 rows=10000 loops=1)\n>\n>\n> explain analyze select * from t1 join t2 on (t1.a = t2.a) where t1.c <\n> 25 and t2.c > 25;\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> Hash Join (cost=24.50..104.75 rows=1 width=24)\n> (actual time=5.647..5.667 rows=0 loops=1)\n>\n>\n> Those examples are a bit simplistic, but the improvements are fairly\n> clear I think.\n>\n>\n> limitations & open issues\n> =========================\n>\n> Let's talk about the main general restrictions and open issues in the\n> current patch that I can think of at the moment.\n>\n> 1) statistics covering all join clauses\n>\n> The patch requires the statistics to cover all the join clauses, mostly\n> because it simplifies the implementation. This means that to use the\n> per-column statistics, there has to be just a single join clause.\n>\n> AFAICS this could be relaxed and we could use multiple statistics to\n> estimate the clauses. But it'd make selection of statistics much more\n> complicated, because we have to pick \"matching\" statistics on both sides\n> of the join. So it seems like an overkill, and most joins have very few\n> conditions anyway.\n>\n>\n> 2) only equality join clauses\n>\n> The other restriction is that at the moment this only supports simple\n> equality clauses, combined with AND. So for example this is supported\n>\n> t1 JOIN t2 ON ((t1.a = t2.a) AND (t1.b + 2 = t2.b + 1))\n>\n> while these are not:\n>\n> t1 JOIN t2 ON ((t1.a = t2.a) OR (t1.b + 2 = t2.b + 1))\n>\n> t1 JOIN t2 ON ((t1.a - t2.a = 0) AND (t1.b + 2 = t2.b + 1))\n>\n> t1 JOIN t2 ON ((t1.a = t2.a) AND ((t1.b = t2.b) OR (t1.c = t2.c)))\n>\n> I'm not entirely sure these restrictions can be relaxed. 
It's not that\n> difficult to evaluate these cases when matching items between the MCV\n> lists, similarly to how we evaluate bitmaps for baserel estimates.\n>\n> But I'm not sure what to do about the part not covered by the MCV lists.\n> The eqjoinsel() approach uses ndistinct estimates for that, but that\n> only works for AND clauses, I think. How would that work for OR?\n>\n> Similarly, I'm not sure we can do much for non-equality conditions, but\n> those are currently estimated as default selectivity in selfuncs.c.\n>\n>\n> 3) estimation by join pairs\n>\n> At the moment, the estimates are calculated for pairs of relations, so\n> for example given a query\n>\n> explain analyze\n> select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n> join t3 on (t1.b = t3.b and t2.c = t3.c);\n>\n> we'll estimate the first join (t1,t2) just fine, but then the second\n> join actually combines (t1,t2,t3). What the patch currently does is it\n> splits it into (t1,t2) and (t2,t3) and estimates those. I wonder if this\n> should actually combine all three MCVs at once - we're pretty much\n> combining the MCVs into one large MCV representing the join result.\n>\n> But I haven't done that yet, as it requires the MCVs to be combined\n> using the join clauses (overlap in a way), but I'm not sure how likely\n> that is in practice. In the example it could help, but that's a bit\n> artificial example.\n>\n>\n> 4) still just inner equi-joins\n>\n> I haven't done any work on extending this to outer joins etc. 
Adding\n> outer and semi joins should not be complicated, mostly copying and\n> tweaking what eqjoinsel() does.\n>\n>\n>\n> regards\n>\n> --\n> Tomas Vondra\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\nHi,\n\n+ conditions2 = statext_determine_join_restrictions(root, rel, mcv);\n+\n+ /* if the new statistics covers more conditions, use it */\n+ if (list_length(conditions2) > list_length(conditions1))\n+ {\n+ mcv = stat;\n\nIt seems conditions2 is calculated using mcv, I wonder why mcv is replaced\nby stat (for conditions1 whose length is shorter) ?\n\nCheers", "msg_date": "Wed, 6 Oct 2021 14:03:02 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" },
{ "msg_contents": "On 10/6/21 23:03, Zhihong Yu wrote:\n> Hi,\n> \n> +       conditions2 = statext_determine_join_restrictions(root, rel, mcv);\n> +\n> +       /* if the new statistics covers more conditions, use it */\n> +       if (list_length(conditions2) > list_length(conditions1))\n> +       {\n> +           mcv = stat;\n> \n> It seems conditions2 is calculated using mcv, I wonder why mcv is \n> replaced by stat (for conditions1 whose length is shorter) ?\n> \n\nYeah, that's wrong - it should be the other way around, i.e.\n\n if (list_length(conditions1) > list_length(conditions2))\n\nThere's no test with multiple candidate statistics yet, so this went \nunnoticed :-/\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 7 Oct 2021 00:05:39 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" },
{ "msg_contents": "Hi Tomas:\n\nThis is the exact patch I want, thanks for the patch!\n\nOn Thu, Oct 7, 2021 at 3:33 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n\n\n> 3) estimation by join 
pairs\n>\n> At the moment, the estimates are calculated for pairs of relations, so\n> for example given a query\n>\n> explain analyze\n> select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n> join t3 on (t1.b = t3.b and t2.c = t3.c);\n>\n> we'll estimate the first join (t1,t2) just fine, but then the second\n> join actually combines (t1,t2,t3). What the patch currently does is it\n> splits it into (t1,t2) and (t2,t3) and estimates those.\n\nActually, I can't understand how this works even for a simpler example.\nLet's say we query like this (ONLY using t2's columns to join t3):\n\nselect * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n join t3 on (t2.c = t3.c and t2.d = t3.d);\n\nThen it works well for JoinRel(t1, t2) AND JoinRel(t2, t3). But when it comes\nto JoinRel(t1, t2, t3), we don't maintain the MCV on the join rel, so it\nis hard to make this work. Here I see your solution is splitting it into\n(t1, t2) AND (t2, t3) and estimating those. But how does this help to\nestimate the size of JoinRel(t1, t2, t3)?\n\n> I wonder if this\n> should actually combine all three MCVs at once - we're pretty much\n> combining the MCVs into one large MCV representing the join result.\n>\n\nI guess we can keep the MCVs on the joinrel for these matches. 
Take the above\nquery I provided for example, and suppose the MCV data is as below:\n\nt1(a, b)\n(1, 2) -> 0.1\n(1, 3) -> 0.2\n(2, 3) -> 0.5\n(2, 8) -> 0.1\n\nt2(a, b)\n(1, 2) -> 0.2\n(1, 3) -> 0.1\n(2, 4) -> 0.2\n(2, 10) -> 0.1\n\nAfter t1.a = t2.a AND t1.b = t2.b, we can build the MCV as below\n\n(1, 2, 1, 2) -> 0.1 * 0.2\n(1, 3, 1, 3) -> 0.2 * 0.1\n\nand record the total MCV frequency as (0.1 + 0.2 + 0.5 + 0.1) *\n(0.2 + 0.1 + 0.2 + 0.1)\n\nWith this design, the nitems of the MCV on the joinrel would be less than\nthat of either baserel.\n\nAnd since we handle the eqjoin as well, we can even record the items as\n\n(1, 2) -> 0.1 * 0.2\n(1, 3) -> 0.2 * 0.1;\n\nAs for when we should maintain the JoinRel's MCV data: rather than\nmaintaining it just after the JoinRel size is estimated, we can compute it\nonly when it is needed. For example:\n\nselect * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n join t3 on (t2.c = t3.c and t2.d = t3.d);\n\nwe don't need to maintain the MCV on (t1, t2, t3) since nothing else\nneeds it at all. However,\nI haven't checked the code closely enough to see whether it (lazily\ncomputing the MCV on the joinrel) is convenient to do.\n\n\n> But I haven't done that yet, as it requires the MCVs to be combined\n> using the join clauses (overlap in a way), but I'm not sure how likely\n> that is in practice. In the example it could help, but that's a bit\n> artificial example.\n>\n>\n> 4) still just inner equi-joins\n>\n> I haven't done any work on extending this to outer joins etc. Adding\n> outer and semi joins should not be complicated, mostly copying and\n> tweaking what eqjoinsel() does.\n>\n\nOverall, thanks for the feature; I expect more cases to handle will come up\nduring discussion. To make the review process more efficient,\nI suggest that we split the patch into smaller ones and review/commit them\nseparately once we have roughly finalized the design. 
For example:\n\nPatch 1 -- required both sides to have extended statistics.\nPatch 2 -- required one side to have extended statistics and the other side had\nper-column MCV.\nPatch 3 -- handle the case like WHERE t1.a = t2.a and t1.b = Const.\nPatch 4 -- handle the case for 3+ table joins.\nPatch 5 -- supports the outer join.\n\nI think we can do this if we are sure that each individual patch would work in\nsome cases and would not make any other case worse. If you agree with this,\nI can do that splitting work during my review process.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n\n\n", "msg_date": "Sat, 6 Nov 2021 18:03:23 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Your regression tests include two errors, which appear to be accidental, and\nfixing the error shows that this case is being estimated poorly.\n\n+-- try combining with single-column (and single-expression) statistics\n+DROP STATISTICS join_test_2;\n+ERROR: statistics object \"join_test_2\" does not exist\n...\n+ERROR: statistics object \"join_stats_2\" already exists\n\n-- \nJustin", "msg_date": "Sun, 21 Nov 2021 19:23:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 11/22/21 02:23, Justin Pryzby wrote:\n> Your regression tests include two errors, which appear to be accidental, and\n> fixing the error shows that this case is being estimated poorly.\n> \n> +-- try combining with single-column (and single-expression) statistics\n> +DROP STATISTICS join_test_2;\n> +ERROR: statistics object \"join_test_2\" does not exist\n> ...\n> +ERROR: statistics object \"join_stats_2\" already exists\n> \n\nD'oh, what a silly mistake ...\n\nYou're right that fixing the DROP STATISTICS results in a worse estimate, but \nthat's actually 
expected for a fairly simple reason. The join condition \nhas expressions on both sides, and dropping the statistics means we \ndon't have any MCV for the join_test_2 side. So the optimizer ends up \nnot using the regular estimates, as if there were no extended stats.\n\nA couple lines later the script creates an extended statistics on that \nexpression alone, which fixes this. An expression index would do the \ntrick too.\n\nAttached is a patch fixing the test and also the issue reported by \nZhihong Yu some time ago.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 13 Dec 2021 14:20:40 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 11/6/21 11:03, Andy Fan wrote:\n> Hi Tomas:\n> \n> This is the exact patch I want, thanks for the patch!\n> \n\nGood to hear.\n\n> On Thu, Oct 7, 2021 at 3:33 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n> \n> \n>> 3) estimation by join pairs\n>>\n>> At the moment, the estimates are calculated for pairs of relations, so\n>> for example given a query\n>>\n>> explain analyze\n>> select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n>> join t3 on (t1.b = t3.b and t2.c = t3.c);\n>>\n>> we'll estimate the first join (t1,t2) just fine, but then the second\n>> join actually combines (t1,t2,t3). What the patch currently does is it\n>> splits it into (t1,t2) and (t2,t3) and estimates those.\n> \n> Actually I can't understand how this works even for a simpler example.\n> let's say we query like this (ONLY use t2's column to join t3).\n> \n> select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n> join t3 on (t2.c = t3.c and t2.d = t3.d);\n> \n> Then it works well on JoinRel(t1, t2) AND JoinRel(t2, t3). 
But when comes\n> to JoinRel(t1, t2, t3), we didn't maintain the MCV on join rel, so it\n> is hard to\n> work. Here I see your solution is splitting it into (t1, t2) AND (t2,\n> t3) and estimate\n> those. But how does this help to estimate the size of JoinRel(t1, t2, t3)?\n> \n\nYeah, this is really confusing. The crucial thing to keep in mind is \nthis works with clauses before running setrefs.c, so the clauses \nreference the original relations - *not* the join relation. Otherwise \neven the regular estimation would not work, because where would it get \nthe per-column MCV lists etc.\n\nLet's use a simple case with join clauses referencing just a single \nattribute for each pair or relations. And let's talk about how many join \npairs it'll extract:\n\n t1 JOIN t2 ON (t1.a = t2.a) JOIN t3 ON (t1.b = t3.b)\n\n=> first we join t1/t2, which is 1 join pair (t1,t2)\n=> then we join t1/t2/t3, but the join clause references just 2 rels, so \n1 join pair (t1,t3)\n\nNow a more complicated case, with more complex join clause\n\n t1 JOIN t2 ON (t1.a = t2.a) JOIN t3 ON (t1.b = t3.b AND t2.c = t3.c)\n\n=> first we join t1/t2, which is 1 join pair (t1,t2)\n=> then we join t1/t2/t3, but this time the join clause references all \nthree rels, so we have two join pairs (t1,t3) and (t2,t3) and we can use \nall the statistics.\n\n\n\n>> I wonder if this\n>> should actually combine all three MCVs at once - we're pretty much\n>> combining the MCVs into one large MCV representing the join result.\n>>\n> \n> I guess we can keep the MCVs on joinrel for these matches. 
Take the above\n> query I provided for example, and suppose the MCV data as below:\n> \n> t1(a, b)\n> (1, 2) -> 0.1\n> (1, 3) -> 0.2\n> (2, 3) -> 0.5\n> (2, 8) -> 0.1\n> \n> t2(a, b)\n> (1, 2) -> 0.2\n> (1, 3) -> 0.1\n> (2, 4) -> 0.2\n> (2, 10) -> 0.1\n> \n> After t1.a = t2.a AND t1.b = t2.b, we can build the MCV as below\n> \n> (1, 2, 1, 2) -> 0.1 * 0.2\n> (1, 3, 1, 3) -> 0.2 * 0.1\n> \n> And recording the total mcv frequence as (0.1 + 0.2 + 0.5 + 0.1) *\n> (0.2 + 0.1 + 0.2 + 0.1)\n> \n\nRight, that's about the joint distribution I whole join.\n\n> With this design, the nitems of MCV on joinrel would be less than\n> either of baserel.\n> \n\nActually, I think the number of items can grow, because the matches may \nduplicate some items. For example in your example with (t1.a = t2.a) the \nfirst first (1,2) item in t1 matches (1,2) and (1,3) in t2. And same for \n(1,3) in t1. So that's 4 combinations. Of course, we could aggregate the \nMCV by ignoring columns not used in the query.\n\n> and since we handle the eqjoin as well, we even can record the items as\n> \n> (1, 2) -> 0.1 * 0.2\n> (1, 3) -> 0.2 * 0.1;\n> \n> About when we should maintain the JoinRel's MCV data, rather than\n> maintain this just\n> after the JoinRel size is estimated, we can only estimate it when it\n> is needed. for example:\n> \n> select * from t1 join t2 on (t1.a = t2.a and t1.b = t2.b)\n> join t3 on (t2.c = t3.c and t2.d = t3.d);\n> \n> we don't need to maintain the MCV on (t1, t2, t3) since no others\n> need it at all. However\n> I don't check code too closely to see if it (Lazing computing MVC on\n> joinrel) is convenient\n> to do.\n> \n\nI'm not sure I understand what you're proposing here.\n\nHowever, I think that estimating it for pairs has two advantages:\n\n1) Combining MCVs for k relations requires k for loops. Processing 2 \nrelations at a time limits the amount of CPU we need. 
Of course, this \nassumes the joins are independent, which may or may not be true.\n\n2) It seems fairly easy to combine different types of statistics \n(regular, extended, ...), and also consider the part not represented by \nMCV. It seems much harder when joining more than 2 relations.\n\nI'm also worried about amplification of errors - I suspect attempting to \nbuild the joint MCV for the whole join relation may produce significant \nestimation errors.\n\nFurthermore, I think joins with clauses referencing more than just two \nrelations are fairly uncommon. And we can always improve the feature in \nthis direction in the future.\n\n> \n>> But I haven't done that yet, as it requires the MCVs to be combined\n>> using the join clauses (overlap in a way), but I'm not sure how likely\n>> that is in practice. In the example it could help, but that's a bit\n>> artificial example.\n>>\n>>\n>> 4) still just inner equi-joins\n>>\n>> I haven't done any work on extending this to outer joins etc. Adding\n>> outer and semi joins should not be complicated, mostly copying and\n>> tweaking what eqjoinsel() does.\n>>\n> \n> Overall, thanks for the feature and I am expecting there are more cases\n> to handle during discussion. To make the review process more efficient,\n> I suggest that we split the patch into smaller ones and review/commit them\n> separately if we have finalized the design roughly . For example:\n> \n> Patch 1 -- required both sides to have extended statistics.\n> Patch 2 -- required one side to have extended statistics and the other side had\n> per-column MCV.\n> Patch 3 -- handle the case like WHERE t1.a = t2.a and t1.b = Const;\n> Patch 3 -- handle the case for 3+ table joins.\n> Patch 4 -- supports the outer join.\n> \n> I think we can do this if we are sure that each individual patch would work in\n> some cases and would not make any other case worse. 
If you agree with this,\n> I can do that splitting work during my review process.\n> \n\nI'll consider splitting it like this, but I'm not sure it makes the main \npatch that much smaller.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Mon, 13 Dec 2021 19:25:04 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\nHere's an updated patch, rebased and fixing a couple typos reported by \nJustin Pryzby directly.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 1 Jan 2022 18:21:06 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 2022-01-01 18:21:06 +0100, Tomas Vondra wrote:\n> Here's an updated patch, rebased and fixing a couple typos reported by\n> Justin Pryzby directly.\n\nFWIW, cfbot reports a few compiler warnings:\n\nhttps://cirrus-ci.com/task/6067262669979648?logs=gcc_warning#L505\n[18:52:15.132] time make -s -j${BUILD_JOBS} world-bin\n[18:52:22.697] mcv.c: In function ‘mcv_combine_simple’:\n[18:52:22.697] mcv.c:2787:7: error: ‘reverse’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n[18:52:22.697] 2787 | if (reverse)\n[18:52:22.697] | ^\n[18:52:22.697] mcv.c:2766:27: error: ‘index’ may be used uninitialized in this function [-Werror=maybe-uninitialized]\n[18:52:22.697] 2766 | if (mcv->items[i].isnull[index])\n[18:52:22.697] | ^\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 4 Jan 2022 15:55:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 04, 2022 
at 03:55:50PM -0800, Andres Freund wrote:\n> On 2022-01-01 18:21:06 +0100, Tomas Vondra wrote:\n> > Here's an updated patch, rebased and fixing a couple typos reported by\n> > Justin Pryzby directly.\n> \n> FWIW, cfbot reports a few compiler warnings:\n\nAlso the patch doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_3055.log\n=== Applying patches on top of PostgreSQL commit ID 74527c3e022d3ace648340b79a6ddec3419f6732 ===\n=== applying patch ./0001-Estimate-joins-using-extended-statistics-20220101.patch\npatching file src/backend/optimizer/path/clausesel.c\npatching file src/backend/statistics/extended_stats.c\nHunk #1 FAILED at 30.\nHunk #2 succeeded at 102 (offset 1 line).\nHunk #3 succeeded at 2619 (offset 9 lines).\n1 out of 3 hunks FAILED -- saving rejects to file src/backend/statistics/extended_stats.c.rej\n\n\n", "msg_date": "Wed, 19 Jan 2022 18:18:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On Wed, Jan 19, 2022 at 06:18:09PM +0800, Julien Rouhaud wrote:\n> On Tue, Jan 04, 2022 at 03:55:50PM -0800, Andres Freund wrote:\n> > On 2022-01-01 18:21:06 +0100, Tomas Vondra wrote:\n> > > Here's an updated patch, rebased and fixing a couple typos reported by\n> > > Justin Pryzby directly.\n> > \n> > FWIW, cfbot reports a few compiler warnings:\n> \n> Also the patch doesn't apply anymore:\n> \n> http://cfbot.cputube.org/patch_36_3055.log\n> === Applying patches on top of PostgreSQL commit ID 74527c3e022d3ace648340b79a6ddec3419f6732 ===\n> === applying patch ./0001-Estimate-joins-using-extended-statistics-20220101.patch\n> patching file src/backend/optimizer/path/clausesel.c\n> patching file src/backend/statistics/extended_stats.c\n> Hunk #1 FAILED at 30.\n> Hunk #2 succeeded at 102 (offset 1 line).\n> Hunk #3 succeeded at 2619 (offset 9 lines).\n> 1 out of 3 hunks FAILED -- saving rejects to file 
src/backend/statistics/extended_stats.c.rej\n\nRebased over 269b532ae and muted compiler warnings.\n\nTomas - is this patch viable for pg15 , or should move to the next CF ?\n\nIn case it's useful, I ran this on cirrus with my branch for code coverage.\nhttps://cirrus-ci.com/task/5816731397521408\nhttps://api.cirrus-ci.com/v1/artifact/task/5816731397521408/coverage/coverage/00-index.html\n\nstatext_find_matching_mcv() has poor coverage.\nstatext_clauselist_join_selectivity() has poor coverage for the \"stats2\" case.\n\nIn mcv.c: mcv_combine_extended() and mcv_combine_simple() have poor coverage\nfor the \"else if\" cases (does it matter?)\n\nNot related to this patch:\nbuild_attnums_array() isn't being hit.\n\nSame at statext_is_compatible_clause_internal()\n 1538 0 : *exprs = lappend(*exprs, clause);\n\nstatext_mcv_[de]serialize() aren't being hit for cstrings.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 2 Mar 2022 11:38:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On Wed, Mar 02, 2022 at 11:38:21AM -0600, Justin Pryzby wrote:\n> Rebased over 269b532ae and muted compiler warnings.\n\nAnd attached.", "msg_date": "Wed, 2 Mar 2022 11:39:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "> On Wed, Mar 02, 2022 at 11:38:21AM -0600, Justin Pryzby wrote:\n>> Rebased over 269b532ae and muted compiler warnings.\n\nThank you Justin for the rebase!\n\nHello Tomas,\n\nThanks for the patch! 
Before I review the patch at the code level, I want\nto explain my understanding of this patch first.\n\nBefore this patch, we already used MCV information for eqjoinsel; it\nworks by combining the MCVs on both sides to figure out the mcv_freq\nand then treating the rest equally, but this doesn't work for MCVs in\nextended statistics, and this patch fills that gap. Besides that, since\nextended statistics mean more than one column is involved, if 1+\ncolumns are Const based on RestrictInfo, we can use such information to\nfilter the MCVs we are interested in; that's really cool. \n\nI did some more testing, all of it inner joins so far; all of it\nworks amazingly well, and I am surprised this patch didn't draw enough\nattention. I will test more after I go through the code.\n\nAs for the code level, I reviewed it in a top-down manner and am almost\n40% complete. Here are some findings, just FYI. For efficiency purposes,\nI provide each piece of feedback as an individual commit; after all, I want to\nmake sure my comments are practical, and coding and testing is a good way\nto achieve that. I tried to make each of them as small as possible so\nthat you can reject or accept them conveniently.\n\n0001 is your patch, which I just rebased against the current master. 0006\nis not very relevant to the current patch, and I think it can be committed\nindividually if you are OK with that.\n\nHope this kind of review is helpful.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Tue, 02 Apr 2024 16:23:45 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 4/2/24 10:23, Andy Fan wrote:\n> \n>> On Wed, Mar 02, 2022 at 11:38:21AM -0600, Justin Pryzby wrote:\n>>> Rebased over 269b532ae and muted compiler warnings.\n> \n> Thank you Justin for the rebase!\n> \n> Hello Tomas,\n> \n> Thanks for the patch! Before I review the patch at the code level, I want\n> to explain my understanding of this patch first.\n> \n\nIf you want to work on this patch, that'd be cool. 
Before I review the path at the code level, I want\n> to explain my understanding about this patch first.\n> \n\nIf you want to work on this patch, that'd be cool. A review would be\ngreat, but if you want to maybe take over and try moving it forward,\nthat'd be even better. I don't know when I'll have time to work on it\nagain, but I'd promise to help you with working on it.\n\n> Before this patch, we already use MCV information for the eqjoinsel, it\n> works as combine the MCV on the both sides to figure out the mcv_freq\n> and then treat the rest equally, but this doesn't work for MCV in\n> extended statistics, this patch fill this gap. Besides that, since\n> extended statistics means more than 1 columns are involved, if 1+\n> columns are Const based on RestrictInfo, we can use such information to\n> filter the MCVs we are interesting, that's really cool. \n> \n\nYes, I think that's an accurate description of what the patch does.\n\n> I did some more testing, all of them are inner join so far, all of them\n> works amazing and I am suprised this patch didn't draw enough\n> attention. I will test more after I go though the code.\n> \n\nI think it didn't go forward for a bunch of reasons:\n\n1) I got distracted by something else requiring immediate attention, and\nforgot about this patch.\n\n2) I got stuck on some detail of the patch, unsure which of the possible\nsolutions to try first.\n\n3) Uncertainty about how applicable the patch is in practice.\n\nI suppose it was some combination of these reasons, not sure.\n\n\nAs for the \"practicality\" mentioned in (3), it's been a while since I\nworked on the patch so I don't recall the details, but I think I've been\nthinking mostly about \"start join\" queries, where a big \"fact\" table\njoins to small dimensions. 
And in that case the fact table may have a\nMCV, but the dimensions certainly don't have any (because the join\nhappens on a PK).\n\nBut maybe that's a wrong way to think about it - it was clearly useful\nto consider the case with (per-attribute) MCVs on both sides as worth\nspecial handling. So why not to do that for multi-column MCVs, right?\n\n> At for the code level, I reviewed them in the top-down manner and almost\n> 40% completed. Here are some findings just FYI. For efficiency purpose,\n> I provide each feedback with a individual commit, after all I want to\n> make sure my comment is practical and coding and testing is a good way\n> to archive that. I tried to make each of them as small as possible so\n> that you can reject or accept them convinently.\n> \n> 0001 is your patch, I just rebase them against the current master. 0006\n> is not much relevant with current patch, and I think it can be committed\n> individually if you are OK with that.\n> \n> Hope this kind of review is helpful.\n> \n\nCool! There's obviously no chance to get this into v18, and I have stuff\nto do in this CF. But I'll take a look after that.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 2 Apr 2024 20:22:55 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "\nTomas Vondra <tomas.vondra@enterprisedb.com> writes:\n\n> On 4/2/24 10:23, Andy Fan wrote:\n>> \n>>> On Wed, Mar 02, 2022 at 11:38:21AM -0600, Justin Pryzby wrote:\n>>>> Rebased over 269b532ae and muted compiler warnings.\n>> \n>> Thank you Justin for the rebase!\n>> \n>> Hello Tomas,\n>> \n>> Thanks for the patch! Before I review the path at the code level, I want\n>> to explain my understanding about this patch first.\n>> \n>\n> If you want to work on this patch, that'd be cool. 
A review would be\n> great, but if you want to maybe take over and try moving it forward,\n> that'd be even better. I don't know when I'll have time to work on it\n> again, but I'd promise to help you with working on it.\n\nOK, I'd try to moving it forward.\n\n>\n>> Before this patch, we already use MCV information for the eqjoinsel, it\n>> works as combine the MCV on the both sides to figure out the mcv_freq\n>> and then treat the rest equally, but this doesn't work for MCV in\n>> extended statistics, this patch fill this gap. Besides that, since\n>> extended statistics means more than 1 columns are involved, if 1+\n>> columns are Const based on RestrictInfo, we can use such information to\n>> filter the MCVs we are interesting, that's really cool. \n>> \n>\n> Yes, I think that's an accurate description of what the patch does.\n\nGreat to know that:)\n\n>\n>> I did some more testing, all of them are inner join so far, all of them\n>> works amazing and I am suprised this patch didn't draw enough\n>> attention.\n\n> I think it didn't go forward for a bunch of reasons:\n>\n..\n>\n> 3) Uncertainty about how applicable the patch is in practice.\n>\n> I suppose it was some combination of these reasons, not sure.\n>\n> As for the \"practicality\" mentioned in (3), it's been a while since I\n> worked on the patch so I don't recall the details, but I think I've been\n> thinking mostly about \"start join\" queries, where a big \"fact\" table\n> joins to small dimensions. And in that case the fact table may have a\n> MCV, but the dimensions certainly don't have any (because the join\n> happens on a PK).\n>\n> But maybe that's a wrong way to think about it - it was clearly useful\n> to consider the case with (per-attribute) MCVs on both sides as worth\n> special handling. 
So why not to do that for multi-column MCVs, right?\n\nYes, that's what my current understanding is.\n\nThere are some cases where there are 2+ clauses between two tables AND\nthe rows estimation is bad AND the plan is not the best one. In such\nsituations, I'd think this patch would probably be helpful. The current case\nin hand is PG11, there is no MCV information for extended statistics, so\nI can't even verify manually whether the patch here is useful or not. When I see\nthem next time in a newer version of PG, I can verify it manually to see\nif the rows estimation can be better. \n\n>> At for the code level, I reviewed them in the top-down manner and almost\n>> 40% completed. Here are some findings just FYI. For efficiency purpose,\n>> I provide each feedback with a individual commit, after all I want to\n>> make sure my comment is practical and coding and testing is a good way\n>> to archive that. I tried to make each of them as small as possible so\n>> that you can reject or accept them convinently.\n>> \n>> 0001 is your patch, I just rebase them against the current master. 0006\n>> is not much relevant with current patch, and I think it can be committed\n>> individually if you are OK with that.\n>> \n>> Hope this kind of review is helpful.\n>> \n>\n> Cool! There's obviously no chance to get this into v18, and I have stuff\n> to do in this CF. But I'll take a look after that.\n\nGood to know that. I will continue my work before that. \n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Wed, 03 Apr 2024 09:33:28 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On Tue, Apr 02, 2024 at 04:23:45PM +0800, Andy Fan wrote:\n> \n> 0001 is your patch, I just rebase them against the current master. 
0006\n> is not much relevant with current patch, and I think it can be committed\n> individually if you are OK with that.\n\nYour 002 should also remove listidx to avoid warning\n../src/backend/statistics/extended_stats.c:2879:8: error: variable 'listidx' set but not used [-Werror,-Wunused-but-set-variable]\n\n> Subject: [PATCH v1 2/8] Remove estimiatedcluases and varRelid arguments\n\n> @@ -2939,15 +2939,11 @@ statext_try_join_estimates(PlannerInfo *root, List *clauses, int varRelid,\n> \t\t/* needs to happen before skipping any clauses */\n> \t\tlistidx++;\n> \n> -\t\t/* Skip clauses that were already estimated. */\n> -\t\tif (bms_is_member(listidx, estimatedclauses))\n> -\t\t\tcontinue;\n> -\n\nYour 007 could instead test if relids == NULL:\n\n> Subject: [PATCH v1 7/8] bms_is_empty is more effective than bms_num_members(b)\n>- if (bms_num_members(relids) == 0)\n>+ if (bms_is_empty(relids))\n\ntypos:\n001: s/heuristict/heuristics/\n002: s/grantee/guarantee/\n002: s/estimiatedcluases/estimatedclauses/\n\nIt'd be nice to fix/silence these warnings from 001:\n\n|../src/backend/statistics/extended_stats.c:3151:36: warning: ‘relid’ may be used uninitialized [-Wmaybe-uninitialized]\n| 3151 | if (var->varno != relid)\n| | ^\n|../src/backend/statistics/extended_stats.c:3104:33: note: ‘relid’ was declared here\n| 3104 | int relid;\n| | ^~~~~\n|[1016/1893] Compiling C object src/backend/postgres_lib.a.p/statistics_mcv.c.o\n|../src/backend/statistics/mcv.c: In function ‘mcv_combine_extended’:\n|../src/backend/statistics/mcv.c:2431:49: warning: declaration of ‘idx’ shadows a previous local [-Wshadow=compatible-local]\n\nFYI, I also ran the patch with a $large number of reports without\nobserving any errors or crashes.\n\nI'll try to look harder at the next patch revision.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 12 Apr 2024 13:22:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join 
estimates" }, { "msg_contents": "Hello Tomas!\n\n>>> At for the code level, I reviewed them in the top-down manner and almost\n>>> 40% completed. Here are some findings just FYI. For efficiency purpose,\n>>> I provide each feedback with a individual commit, after all I want to\n>>> make sure my comment is practical and coding and testing is a good way\n>>> to archive that. I tried to make each of them as small as possible so\n>>> that you can reject or accept them convinently.\n>>> \n>>> 0001 is your patch, I just rebase them against the current master. 0006\n>>> is not much relevant with current patch, and I think it can be committed\n>>> individually if you are OK with that.\n>>> \n>>> Hope this kind of review is helpful.\n>>> \n>>\n>> Cool! There's obviously no chance to get this into v18, and I have stuff\n>> to do in this CF. But I'll take a look after that.\n>\n> Good to know that. I will continue my work before that. \n\nI have completed my code level review and modification. These individual\ncommits and messages will probably be helpful for discussion. 
\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 28 Apr 2024 09:43:42 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "\nHello Justin!\n\nJustin Pryzby <pryzby@telsasoft.com> writes:\n\n\n> |../src/backend/statistics/extended_stats.c:3151:36: warning: ‘relid’ may be used uninitialized [-Wmaybe-uninitialized]\n> | 3151 | if (var->varno != relid)\n> | | ^\n> |../src/backend/statistics/extended_stats.c:3104:33: note: ‘relid’ was declared here\n> | 3104 | int relid;\n> | | ^~~~~\n> |[1016/1893] Compiling C object src/backend/postgres_lib.a.p/statistics_mcv.c.o\n> |../src/backend/statistics/mcv.c: In function ‘mcv_combine_extended’:\n> |../src/backend/statistics/mcv.c:2431:49: warning: declaration of ‘idx’ shadows a previous local [-Wshadow=compatible-local]\n\nThanks for the feedback, the warning should be fixed in the latest\nrevision and 's/estimiatedcluases/estimatedclauses/' typo error in the\ncommit message is not fixed since I have to regenerate all the commits\nto fix that. 
We are still in the discussion stage and I think their impact on the\ndiscussion is pretty limited.\n\n> FYI, I also ran the patch with a $large number of reports without\n> observing any errors or crashes.\n\nGood to know that.\n\n> I'll try to look harder at the next patch revision.\n\nThank you!\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Sun, 28 Apr 2024 10:07:01 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On Sun, Apr 28, 2024 at 10:07:01AM +0800, Andy Fan wrote:\n> 's/estimiatedcluases/estimatedclauses/' typo error in the\n> commit message is not fixed since I have to regenerate all the commits\n\nMaybe you know this, but some of these patches need to be squashed.\nRegenerating the patches to address feedback is the usual process.\nWhen they're not squished, it makes it hard to review the content of the\npatches.\n\nFor example:\n[PATCH v1 18/22] Fix error \"unexpected system attribute\" when join with system attr\n..adds .sql regression tests, but the expected .out isn't updated until\n[PATCH v1 19/22] Fix the incorrect comment on extended stats.\n\nThat fixes an elog() in Tomas' original commit, so it should probably be\n002 or 003. It might make sense to keep the first commit separate for\nnow, since it's nice to keep Tomas' original patch \"pristine\" to make\nmore apparent the changes you're proposing.\n\nAnother:\n[PATCH v1 20/22] Add fastpath when combine the 2 MCV like eqjoinsel_inner.\n..doesn't compile without\n[PATCH v1 21/22] When mcv->ndimensions == list_length(clauses), handle it same as\n\nYour 022 patch fixes a typo in your 002 patch, which means that first\none reads a patch with a typo, and then later, a 10 line long patch\nreflowing the comment with a typo fixed.\n\nA good guideline is that each patch should be self-contained, compiling\nand passing tests. 
Which is more difficult with a long stack of\npatches.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 29 Apr 2024 07:39:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "\nHello Justin,\n\nThanks for showing interest in this!\n\n> On Sun, Apr 28, 2024 at 10:07:01AM +0800, Andy Fan wrote:\n>> 's/estimiatedcluases/estimatedclauses/' typo error in the\n>> commit message is not fixed since I have to regenerate all the commits\n>\n> Maybe you know this, but some of these patches need to be squashed.\n> Regenerating the patches to address feedback is the usual process.\n> When they're not squished, it makes it hard to review the content of the\n> patches.\n\nYou might have overlooked the fact that each individual commit is just to\nmake the communication effective (easy to review) and all of them\nwill be merged into 1 commit at the last / during the process of review. \n\nEven so, if something makes it hard to review, I am pretty happy to\nregenerate the patches, but does 's/estimiatedcluases/estimatedclauses/'\nbelong to this category? 
I'm pretty sure that is not the only typo\nerror or inappropriate word; if we need to regenerate the 22 patches\nbecause of that, we would have to regenerate them pretty often.\n\nDo you mind providing more feedback at once so I can merge all of it in\none modification, or do you think the typo error has blocked the review\nprocess?\n\n>\n> For example:\n> [PATCH v1 18/22] Fix error \"unexpected system attribute\" when join with system attr\n> ..adds .sql regression tests, but the expected .out isn't updated until\n> [PATCH v1 19/22] Fix the incorrect comment on extended stats.\n>\n> That fixes an elog() in Tomas' original commit, so it should probably be\n> 002 or 003.\n\nWhich elog are you talking about?\n\n> It might make sense to keep the first commit separate for\n> now, since it's nice to keep Tomas' original patch \"pristine\" to make\n> more apparent the changes you're proposing.\n\nThis is my goal as well; did you find anything I did which breaks this\nrule? That's absolutely not my intention.\n\n> Another:\n> [PATCH v1 20/22] Add fastpath when combine the 2 MCV like eqjoinsel_inner.\n> ..doesn't compile without\n> [PATCH v1 21/22] When mcv->ndimensions == list_length(clauses), handle it same as\n>\n> Your 022 patch fixes a typo in your 002 patch, which means that first\n> one reads a patch with a typo, and then later, a 10 line long patch\n> reflowing the comment with a typo fixed.\n\nI would like to regenerate the 22 patches if you think the typo error\nmakes the review process hard. I can do such things but am not willing to\ndo that often.\n\n>\n> A good guideline is that each patch should be self-contained, compiling\n> and passing tests. 
Which is more difficult with a long stack of\n> patches.\n\nI agree.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Tue, 30 Apr 2024 10:40:54 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 4/3/24 01:22, Tomas Vondra wrote:\n> Cool! There's obviously no chance to get this into v18, and I have stuff\n> to do in this CF. But I'll take a look after that.\nI'm looking at your patch now - an excellent start to an eagerly awaited \nfeature!\nA couple of questions:\n1. I didn't find the implementation of strategy 'c' - estimation by the \nnumber of distinct values. Do you forget it?\n2. Can we add a clauselist selectivity hook into the core (something \nsimilar the code in attachment)? It can allow the development and \ntesting of multicolumn join estimations without patching the core.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional", "msg_date": "Mon, 20 May 2024 15:31:38 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "\nHi Andrei,\n\n> On 4/3/24 01:22, Tomas Vondra wrote:\n>> Cool! There's obviously no chance to get this into v18, and I have stuff\n>> to do in this CF. But I'll take a look after that.\n> I'm looking at your patch now - an excellent start to an eagerly awaited\n> feature!\n> A couple of questions:\n> 1. I didn't find the implementation of strategy 'c' - estimation by the\n> number of distinct values. Do you forget it?\n\nWhat do you mean the \"strategy 'c'\"? \n\n> 2. Can we add a clauselist selectivity hook into the core (something\n> similar the code in attachment)? It can allow the development and\n> testing of multicolumn join estimations without patching the core.\n\nThe idea LGTM. 
But do you want \n\n+\tif (clauselist_selectivity_hook)\n+\t\ts1 = clauselist_selectivity_hook(root, clauses, varRelid, jointype,\n+\n\nrather than\n\n+\tif (clauselist_selectivity_hook)\n+\t\t*return* clauselist_selectivity_hook(root, clauses, ..)\n\n\n?\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Mon, 20 May 2024 16:52:07 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 20/5/2024 15:52, Andy Fan wrote:\n> \n> Hi Andrei,\n> \n>> On 4/3/24 01:22, Tomas Vondra wrote:\n>>> Cool! There's obviously no chance to get this into v18, and I have stuff\n>>> to do in this CF. But I'll take a look after that.\n>> I'm looking at your patch now - an excellent start to an eagerly awaited\n>> feature!\n>> A couple of questions:\n>> 1. I didn't find the implementation of strategy 'c' - estimation by the\n>> number of distinct values. Do you forget it?\n> \n> What do you mean the \"strategy 'c'\"?\nAs described in 0001-* patch:\n* c) No extended stats with MCV. If there are multiple join clauses,\n* we can try using ndistinct coefficients and do what eqjoinsel does.\n\n> \n>> 2. Can we add a clauselist selectivity hook into the core (something\n>> similar the code in attachment)? It can allow the development and\n>> testing of multicolumn join estimations without patching the core.\n> \n> The idea LGTM. 
But do you want\n> \n> +\tif (clauselist_selectivity_hook)\n> +\t\ts1 = clauselist_selectivity_hook(root, clauses, varRelid, jointype,\n> +\n> \n> rather than\n> \n> +\tif (clauselist_selectivity_hook)\n> +\t\t*return* clauselist_selectivity_hook(root, clauses, ..)\nOf course - library may estimate not all the clauses - it is a reason, \nwhy I added input/output parameter 'estimatedclauses' by analogy with \nstatext_clauselist_selectivity.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Mon, 20 May 2024 16:40:31 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 5/20/24 16:40, Andrei Lepikhov wrote:\n> On 20/5/2024 15:52, Andy Fan wrote:\n>> +    if (clauselist_selectivity_hook)\n>> +        *return* clauselist_selectivity_hook(root, clauses, ..)\n> Of course - library may estimate not all the clauses - it is a reason, \n> why I added input/output parameter 'estimatedclauses' by analogy with \n> statext_clauselist_selectivity.\nHere is a polished and a bit modified version of the hook proposed.\nAdditionally, I propose exporting the statext_mcv_clauselist_selectivity \nroutine, likewise dependencies_clauselist_selectivity. This could \npotentially enhance the functionality of the PostgreSQL estimation code.\n\nTo clarify the purpose, I want an optional, loaded as a library, more \nconservative estimation based on distinct statistics. 
Let's provide (a \nbit degenerate) example:\n\nCREATE TABLE is_test(x1 integer, x2 integer, x3 integer, x4 integer);\nINSERT INTO is_test (x1,x2,x3,x4)\n SELECT x%5,x%7,x%11,x%13 FROM generate_series(1,1E3) AS x;\nINSERT INTO is_test (x1,x2,x3,x4)\n SELECT 14,14,14,14 FROM generate_series(1,100) AS x;\nCREATE STATISTICS ist_stat (dependencies,ndistinct)\n ON x1,x2,x3,x4 FROM is_test;\nANALYZE is_test;\nEXPLAIN (ANALYZE, COSTS ON, SUMMARY OFF, TIMING OFF)\nSELECT * FROM is_test WHERE x1=14 AND x2=14 AND x3=14 AND x4=14;\nDROP TABLE is_test CASCADE;\n\nI see:\n(cost=0.00..15.17 rows=3 width=16) (actual rows=100 loops=1)\n\nDependency works great if it is the same for all the data in the \ncolumns. But we get underestimations if we have different laws for \nsubsets of rows. So, if we don't have MCV statistics, sometimes we need \nto pass over dependency statistics and use ndistinct instead.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional", "msg_date": "Tue, 21 May 2024 13:46:08 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "\nAndrei Lepikhov <a.lepikhov@postgrespro.ru> writes:\n\n> On 20/5/2024 15:52, Andy Fan wrote:\n>> Hi Andrei,\n>> \n>>> On 4/3/24 01:22, Tomas Vondra wrote:\n>>>> Cool! There's obviously no chance to get this into v18, and I have stuff\n>>>> to do in this CF. But I'll take a look after that.\n>>> I'm looking at your patch now - an excellent start to an eagerly awaited\n>>> feature!\n>>> A couple of questions:\n>>> 1. I didn't find the implementation of strategy 'c' - estimation by the\n>>> number of distinct values. Do you forget it?\n>> What do you mean the \"strategy 'c'\"?\n> As described in 0001-* patch:\n> * c) No extended stats with MCV. 
If there are multiple join clauses,\n> * we can try using ndistinct coefficients and do what eqjoinsel does.\n\nOK, I didn't pay enough attention to this comment before. and yes, I get\nthe same conclusion as you - there is no implementation of this.\n\nand if so, I think we should remove the comments and do the\nimplementation in the next patch. \n\n>>> 2. Can we add a clauselist selectivity hook into the core (something\n>>> similar the code in attachment)? It can allow the development and\n>>> testing of multicolumn join estimations without patching the core.\n>> The idea LGTM. But do you want\n>> +\tif (clauselist_selectivity_hook)\n>> +\t\ts1 = clauselist_selectivity_hook(root, clauses, varRelid, jointype,\n>> +\n>> rather than\n>> +\tif (clauselist_selectivity_hook)\n>> +\t\t*return* clauselist_selectivity_hook(root, clauses, ..)\n> Of course - library may estimate not all the clauses - it is a reason,\n> why I added input/output parameter 'estimatedclauses' by analogy with\n> statext_clauselist_selectivity.\n\nOK.\n\nDo you think the hook proposal is closely connected with the current\ntopic? IIUC it seems not. So a dedicated thread to explain the problem\nto solve and the proposal and the following discussion should be helpful\nfor both topics. I'm just worried that mixing the two in one thread\nwould make things unnecessarily complex.\n\n-- \nBest Regards\nAndy Fan\n\n\n\n", "msg_date": "Thu, 23 May 2024 10:04:27 +0800", "msg_from": "Andy Fan <zhihuifan1213@163.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 5/23/24 09:04, Andy Fan wrote:\n> Andrei Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> * c) No extended stats with MCV. If there are multiple join clauses,\n>> * we can try using ndistinct coefficients and do what eqjoinsel does.\n> \n> OK, I didn't pay enough attention to this comment before. 
and yes, I get\n> the same conclusion as you - there is no implementation of this.\n> \n> and if so, I think we should remove the comments and do the\n> implementation in the next patch.\nI have an opposite opinion about it:\n1. distinct estimation is a more universal thing - you can use it \nprecisely on any subset of columns.\n2. distinct estimation is faster - it is just a number, you don't need to \ndetoast a huge array of values and compare them one-by-one.\n\nSo, IMO, it is an essential part of join estimation and it should be \nimplemented like in eqjoinsel.\n> Do you think the hook proposal is closely connected with the current\n> topic? IIUC it seems not. So a dedicated thread to explain the problem\n> to solve and the proposal and the following discussion should be helpful\n> for both topics. I'm just worried that mixing the two in one thread\n> would make things unnecessarily complex.\nSure.\n\n-- \nregards,\nAndrei Lepikhov\nPostgres Professional\n\n\n\n", "msg_date": "Thu, 23 May 2024 10:22:09 +0700", "msg_from": "Andrei Lepikhov <a.lepikhov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "Hi,\n\nI finally got to do a review of the reworked patch series. For the most\npart I do like the changes, although I'm not 100% sure about some of\nthem. I do like that the changes have been kept in separate patches,\nwhich makes it much easier to understand what the goal is etc. But it's\nprobably time to start merging some of the patches back into the main\npatch - it's a bit tedious work with 22 patches.\n\nNote: This needs to be added to the next CF, so that we get cfbot\nresults and can focus on it in 2024-07. Also, I'd attach the patches\ndirectly, not as .tar.\n\nI did go through the patches one by one, and did a review for each of\nthem separately. 
I only had a couple hours for this today, so it's not a\nsuper-deep review, more a start for a discussion / asking questions.\n\nFor each patch I added a \"review\" and \"pgindent\" where review is my\ncomments, pgindent is the changes pgindent would do (which we now expect\nto happen before commit). In hindsight I should have skipped the\npgindent, it made it more tedious with little benefit. But I realized\nthat half-way through the series, so it was easier to just continue.\n\nLet me quickly go through the original parts - most of this is already\nin the \"review\" patches, but it's better to quote the main points here\nto start a discussion. I'll omit some of the smaller suggestions, so\nplease look at the 'review' patches.\n\n\nv20240617-0001-Estimate-joins-using-extended-statistics.patch\n\n- rewords a couple comments, particularly for statext_find_matching_mcv\n\n- a couple XXX comments about possibly stale/inaccurate comments\n\n- suggestion to improve statext_determine_join_restrictions, but one\nof the later patches already does the caching\n\n\nv20240617-0004-Remove-estimiatedcluases-and-varRelid-argu.patch\n\n- I'm not sure we actually should do this (esp. the removal of\nestimatedclauses bitmap). It breaks if we add the new hook.\n\n\nv20240617-0007-Remove-SpecialJoinInfo-sjinfo-argument.patch\nv20240617-0009-Remove-joinType-argument.patch\n\n- I'm skeptical about removing these two. Yes, the current code does not\nactually use those fields, but selfuncs.c always passes both jointype\nand sjinfo, so maybe we should do that too for consistency. What happens\nif we end up wanting to call an existing selfuncs function that needs\nthese parameters in the future? 
Say because we want to call the regular\njoin estimator, and then apply some \"correction\" to the result?\n\n\nv20240617-0011-use-the-pre-calculated-RestrictInfo-left-r.patch\n\n- why not to keep the BMS_MULTIPLE check on clause_relids, seems cheap\nso maybe we could do it before the more expensive stuff?\n\n\nv20240617-0014-Fast-path-for-general-clauselist_selectivi.patch\n\n- Does this actually make a meaningful difference?\n\n\nv20240617-0017-a-branch-of-updates-around-JoinPairInfo.patch\n\n- Can we actually assume the clause has a RestrictInfo on top? IIRC\nthere are cases where we can get here without it (e.g. AND clause?).\n\n\nv20240617-0020-Cache-the-result-of-statext_determine_join.patch\n\n- This addresses some of my suggestions in 0001, but I think we don't\nactually need to recalculate both lists in each loop.\n\n\nv20240617-0030-optimize-the-order-of-mcv-equal-function-e.patch\n\n- There's no explanation to support this optimization. I guess I know\nwhat it tries to do, but doesn't it have the same issues with\nunpredictable behavior like the GROUP BY patch, which ended up being reverted\nand reworked?\n\n- modifies .sql test but not the expected output\n\n- The McvProc name seems a bit misleading. I think it's really \"procs\",\nfor example.\n\n\nv20240617-0033-Merge-3-palloc-into-1-palloc.patch\n\n- Not sure. It's presented as an optimization to save on palloc calls,\nbut I doubt that's measurable. Maybe it makes it a little bit more\nreadable, but now I'm not convinced it's worth it.\n\n\nv20240617-0036-Remove-2-pull_varnos-calls-with-rinfo-left.patch\n\n- Again, can we rely on this always getting a RestrictInfo? 
Maybe we do,\nbut it's not obvious to me, so a comment explaining that would be nice.\nAnd maybe an assert to check this.\n\n\nv20240617-0040-some-code-refactor-as-before.patch\n\n- Essentially applies earlier refactorings/tweaks to another place.\n\n- Seems OK (depending on whether we agree on those changes), but it\nseems mostly independent of this patch series. So I'd at least keep it\nin a separate patch.\n\n\nv20240617-0043-Fix-error-unexpected-system-attribute-when.patch\n\n- seems to only tweak the .sql, not expected output\n\n- One of the comments refers to \"above limitation\" but I'm unsure what\nthat's about.\n\n\nv20240617-0048-Add-fastpath-when-combine-the-2-MCV-like-e.patch\nv20240617-0050-When-mcv-ndimensions-list_length-clauses-h.patch\n\n- I'm not sure about one of the optimizations, relying on having a clause\nfor each dimension of the MCV.\n\n\nv20240617-0054-clauselist_selectivity_hook.patch\n\n- I believe this does not work with the earlier patch that removed\nestimatedclauses bitmap from the \"try\" function.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 17 Jun 2024 18:10:45 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 17/6/2024 18:10, Tomas Vondra wrote:\n> Let me quickly go through the original parts - most of this is already\n> in the \"review\" patches, but it's better to quote the main points here\n> to start a discussion. 
I'll omit some of the smaller suggestions, so\n> please look at the 'review' patches.\n> \n> \n> v20240617-0001-Estimate-joins-using-extended-statistics.patch\n> \n> - rewords a couple comments, particularly for statext_find_matching_mcv\n> \n> - a couple XXX comments about possibly stale/inaccurate comments\n> v20240617-0054-clauselist_selectivity_hook.patch\n> \n> - I believe this does not work with the earlier patch that removed\n> estimatedclauses bitmap from the \"try\" function.\nThis patch set is too big to eat at once - it's just challenging to \ninvent examples and counterexamples. Can we see these two patches in the \nmaster and analyse further improvements based on that?\n\nSome thoughts:\nYou remove varRelid. I have thought about replacing this value with \nRelOptInfo, which would allow extensions (remember the selectivity hook) to \nknow about the underlying path tree.\n\nThe first patch is generally ok, and I vote for having it in the master. \nHowever, the most harmful case I see most reports about is parameterised \nJOIN on multiple ANDed clauses. 
In that case, we have a scan filter on \nsomething like the below:\nx = $1 AND y = $2 AND ...\nAs I see it, the current patch doesn't resolve this issue.\n\n-- \nregards, Andrei Lepikhov\n\n\n\n", "msg_date": "Tue, 3 Sep 2024 14:58:05 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" }, { "msg_contents": "On 3/9/2024 14:58, Andrei Lepikhov wrote:\n> On 17/6/2024 18:10, Tomas Vondra wrote:\n> x = $1 AND y = $2 AND ...\n> As I see it, the current patch doesn't resolve this issue.\nLet me explain my previous argument with an example (see the attachment).\n\nThe query is designed to be executed with a parameterised NL join:\n\nEXPLAIN (ANALYZE, TIMING OFF)\nSELECT * FROM test t1 NATURAL JOIN test1 t2 WHERE t2.x1 < 1;\n\nAfter applying the topmost patch from the patchset we can see two \ndifferent estimations (explain tuned a little bit) before and after \nextended statistics:\n\n-- before:\n\n Nested Loop (rows=1) (actual rows=10000 loops=1)\n -> Seq Scan on test1 t2 (rows=100) (actual rows=100 loops=1)\n Filter: (x1 < 1)\n -> Memoize (rows=1) (actual rows=100 loops=100)\n Cache Key: t2.x1, t2.x2, t2.x3, t2.x4\n -> Index Scan using test_x1_x2_x3_x4_idx on test t1\n\t (rows=1 width=404) (actual rows=100 loops=1)\n Index Cond: ((x1 = t2.x1) AND (x2 = t2.x2) AND\n\t\t\t (x3 = t2.x3) AND (x4 = t2.x4))\n\n-- after:\n\n Nested Loop (rows=10000) (actual rows=10000 loops=1)\n -> Seq Scan on test1 t2 (rows=100) (actual rows=100 loops=1)\n Filter: (x1 < 1)\n -> Memoize (rows=1) (actual rows=100 loops=100)\n Cache Key: t2.x1, t2.x2, t2.x3, t2.x4\n -> Index Scan using test_x1_x2_x3_x4_idx on test t1 (rows=1)\n (actual rows=100 loops=1)\n Index Cond: ((x1 = t2.x1) AND (x2 = t2.x2) AND\n (x3 = t2.x3) AND (x4 = t2.x4))\n\nYou can see that the index condition was treated as a join clause and the PNL \nwas estimated correctly by an MCV on both sides.\nBut scan estimation is incorrect.\nMoreover, 
sometimes we don't have MCV at all. And the next step for this \npatch should be implementing a bare estimation using only ndistinct \non each side.\n\nWhat to do with the scan filter? Not sure so far, but it looks like logic \nsimilar to var_eq_non_const() may be used here.\n\n\n-- \nregards, Andrei Lepikhov", "msg_date": "Wed, 4 Sep 2024 16:50:26 +0200", "msg_from": "Andrei Lepikhov <lepihov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using extended statistics to improve join estimates" } ]
[ { "msg_contents": "Hi all,\n\nI started looking into how table scans are handled for table access\nmethods and have discovered a few things that I find odd. I cannot\nfind any material regarding why this particular choice was made (if\nanybody has pointers, I would be very grateful).\n\nI am quite new to PostgreSQL so forgive me if my understanding of the\ncode below is wrong and please clarify what I have misunderstood.\n\nI noted that `scan_begin` accepts a `ScanKey` and my *guess* was that\nthe intention for adding this to the interface was to support primary\nindexes for table access methods (the comment is a little vague, but\nit seems to point to that). However, looking at where `scan_begin` is\ncalled from, I see that it is called from the following methods in\n`tableam.h`:\n\n- `table_beginscan` is always called using zero scan keys and NULL.\n- `table_beginscan_strat` is mostly called with zero keys and NULL,\n with the exception of `systable_beginscan`, which is only for system\n tables. It does use this feature.\n- `table_beginscan_bm` is only called with zero keys and NULL.\n- `table_beginscan_sampling` is only called with zero keys and NULL.\n- `table_beginscan_tid` calls `scan_begin` with zero keys and NULL.\n- `table_beginscan_analyze` calls `scan_begin` with zero keys and NULL.\n- `table_beginscan_catalog` is called with more than one key, but\n AFAICT this is only for catalog tables.\n- `table_beginscan_parallel` calls `scan_begin` with zero keys and NULL.\n\nI draw the conclusion that the scan keys only make sense for a table\naccess method for the odd case where it is used for system tables or\ncatalog tables, so for all practical purposes the scan key cannot be\nused to implement a primary index for general tables.\n\nAs an example of how this is useful, I noticed the work by Heikki and\nAshwin [1], where they return a `TableScanDesc` that contains\ninformation about what columns to scan, which looks very useful. 
Since\nthe function `table_beginscan` in `src/include/access/tableam.h`\naccepts a `ScanKey` as input, this is (AFAICT) what Heikki and Ashwin\nwere exploiting to create a specialized scan for a columnar store.\n\nAnother example of where this can be useful is to optimize access\nduring a sequential scan when you can handle some specific scans very\nefficiently and can \"skip ahead\" many tuples if you know what is being\nlooked for instead of filtering \"late\". Two examples of where this\ncould be useful are:\n\n- An access method that reads data from a remote system and doesn't want\n to transfer all tuples unless necessary.\n- Some sort of log-structured storage with Bloom filters that allows\n you to quickly skip suites that do not have a key.\n\nInterestingly enough, `ScanKey` is generated for `IndexScan` and I\nthink that the same approach could be used for sequential scans: pick\nout the quals that can be used for filtering and offer them to the\ntable access method through the `scan_begin` callback.\n\nThoughts around this?\n\nBest wishes,\nMats Kindahl\n\n[1] https://www.postgresql-archive.org/Zedstore-compressed-in-core-columnar-storage-tp6081536.html\n\n\n", "msg_date": "Wed, 31 Mar 2021 22:10:22 +0200", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "RFC: Table access methods and scans" }, { "msg_contents": "Hi,\n\nOn Wed, 2021-03-31 at 22:10 +0200, Mats Kindahl wrote:\n> As an example of how this is useful, I noticed the work by Heikki and\n> Ashwin [1], where they return a `TableScanDesc` that contains\n> information about what columns to scan, which looks very useful.\n> Since\n> the function `table_beginscan` in `src/include/access/tableam.h`\n> accepts a `ScanKey` as input, this is (AFAICT) what Heikki and Ashwin\n> were exploiting to create a specialized scan for a columnar store.\n\nI don't think ScanKeys are the right place to store information about\nwhat columns would be useful. 
See another thread[2] about that topic.\n\n> Another example of where this can be useful is to optimize access\n> during a sequential scan when you can handle some specific scans very\n> efficiently and can \"skip ahead\" many tuples if you know what is\n> being\n> looked for instead of filtering \"late\". Two examples of where this\n> could be useful are:\n> \n> - An access method that reads data from a remote system and doesn't\n> want\n> to transfer all tuples unless necessary.\n> - Some sort of log-structured storage with Bloom filters that allows\n> you to quickly skip suites that do not have a key.\n\nI agree that would be very conventient for non-heap AMs. There's a very\nold commit[3] that says:\n\n+ /*\n+ * Note that unlike IndexScan, SeqScan never use keys\n+ * in heap_beginscan (and this is very bad) - so, here\n+ * we have not check are keys ok or not.\n+ */\n\nand that language has just been carried forward for decades. I wonder\nif there's any major reason this hasn't been done yet. 
Does it just not\nimprove performance for a heap, or is there some other reason?\n\nRegards,\n\tJeff Davis\n\n[2] \nhttps://www.postgresql.org/message-id/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com\n\n[3] \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e3a1ab764ef2\n\n\n\n\n", "msg_date": "Thu, 03 Jun 2021 17:52:24 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Table access methods and scans" }, { "msg_contents": "Hi Jeff,\n\nOn Fri, Jun 4, 2021 at 2:52 AM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> Hi,\n>\n> On Wed, 2021-03-31 at 22:10 +0200, Mats Kindahl wrote:\n> > As an example of how this is useful, I noticed the work by Heikki and\n> > Ashwin [1], where they return a `TableScanDesc` that contains\n> > information about what columns to scan, which looks very useful.\n> > Since\n> > the function `table_beginscan` in `src/include/access/tableam.h`\n> > accept a `ScanKey` as input, this is (AFAICT) what Heikki and Ashwin\n> > was exploiting to create a specialized scan for a columnar store.\n>\n> I don't think ScanKeys are the right place to store information about\n> what columns would be useful. See another thread[2] about that topic.\n>\n\nYeah, it is not a good example. The examples below are better examples.\nThe scan keys are not sufficient to get all the columns, but AFAICT, it is\nthis callback that is exploited in the patch.\n\n\n>\n> > Another example of where this can be useful is to optimize access\n> > during a sequential scan when you can handle some specific scans very\n> > efficiently and can \"skip ahead\" many tuples if you know what is\n> > being\n> > looked for instead of filtering \"late\". 
Two examples of where this\n> > could be useful are:\n> >\n> > - An access method that reads data from a remote system and doesn't\n> > want\n> > to transfer all tuples unless necessary.\n> > - Some sort of log-structured storage with Bloom filters that allows\n> > you to quickly skip suites that do not have a key.\n>\n> I agree that would be very conventient for non-heap AMs. There's a very\n> old commit[3] that says:\n>\n> + /*\n> + * Note that unlike IndexScan, SeqScan never use keys\n> + * in heap_beginscan (and this is very bad) - so, here\n> + * we have not check are keys ok or not.\n> + */\n>\n> and that language has just been carried forward for decades. I wonder\n> if there's any major reason this hasn't been done yet. Does it just not\n> improve performance for a heap, or is there some other reason?\n>\n\nThat is basically the question. I'm prepared to take a shot at it unless\nthere is a good reason not to.\n\nBest wishes,\nMats Kindahl\n\n\n\n>\n> Regards,\n> Jeff Davis\n>\n> [2]\n>\n> https://www.postgresql.org/message-id/CAE-ML+9RmTNzKCNTZPQf8O3b-UjHWGFbSoXpQa3Wvuc8YBbEQw@mail.gmail.com\n>\n> [3]\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e3a1ab764ef2\n>\n>\n
", "msg_date": "Fri, 4 Jun 2021 08:23:37 +0200", "msg_from": "Mats Kindahl <mats@timescale.com>", "msg_from_op": true, "msg_subject": "Re: RFC: Table access methods and scans" }, { "msg_contents": "On Fri, 2021-06-04 at 08:23 +0200, Mats Kindahl wrote:\n> That is basically the question. 
I'm prepared to take a shot at it\n> unless there is a good reason not to.\n\nSounds good, I can review.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Fri, 04 Jun 2021 11:20:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: RFC: Table access methods and scans" }, { "msg_contents": "Hi,\n\nOn 2021-06-03 17:52:24 -0700, Jeff Davis wrote:\n> I agree that would be very conventient for non-heap AMs. There's a very\n> old commit[3] that says:\n>\n> + /*\n> + * Note that unlike IndexScan, SeqScan never use keys\n> + * in heap_beginscan (and this is very bad) - so, here\n> + * we have not check are keys ok or not.\n> + */\n>\n> and that language has just been carried forward for decades. I wonder\n> if there's any major reason this hasn't been done yet. Does it just not\n> improve performance for a heap, or is there some other reason?\n\nIt's not actually a good idea in general:\n\n- Without substantial refactoring more work is done while holding the\n content lock on the page. Whereas doing it as part of a seqscan only\n requires a buffer pin (and thus allows for concurrent writes to the\n same page)\n\n- It's hard to avoid repeated work with expressions that can't fully be\n evaluated as part of the ScanKey. Expression evaluation generally can\n be a bit smarter about evaluation, e.g. not deforming the tuple\n one-by-one.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Jun 2021 12:09:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: RFC: Table access methods and scans" } ]
[ { "msg_contents": "Hi all,\n\nIt has been mentioned twice for the last couple of days that some of\nthe SSL tests are not really picky with what they check, which can be\nannoying when it comes to the testing of other SSL implementations as\nwe cannot really be sure if an error tells more than \"SSL error\":\nhttps://www.postgresql.org/message-id/20210330151507.GA9536@alvherre.pgsql\nhttps://www.postgresql.org/message-id/e0f0484a1815b26bb99ef9ddc7a110dfd6425931.camel@vmware.com\n\nPlease find attached a patch to tighten a bit all that. The errors\nproduced by OpenSSL down to 1.0.1 are the same. I have noticed one\nextra place where we just check for a FATAL, where the trust\nauthentication failed after a CN mismatch.\n\nThoughts?\n--\nMichael", "msg_date": "Thu, 1 Apr 2021 11:59:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Improve error matching patterns in the SSL tests " }, { "msg_contents": "On Thu, Apr 01, 2021 at 11:59:15AM +0900, Michael Paquier wrote:\n> Please find attached a patch to tighten a bit all that. The errors\n> produced by OpenSSL down to 1.0.1 are the same. I have noticed one\n> extra place where we just check for a FATAL, where the trust\n> authentication failed after a CN mismatch.\n\nSorry for the late reply here. This has been applied as of 8d3a4c3.\n--\nMichael", "msg_date": "Mon, 5 Apr 2021 10:36:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Improve error matching patterns in the SSL tests" } ]
[ { "msg_contents": "Hello, team PostgreSQL!\n\nI am YoungHwan Joo, a student who is interested in GSoC 2021.\nI was drawn to the project \"Develop Performance Farm Benchmarks and\nWebsite\".\n\nI send you the first draft of my proposal as an attachment.\nPlease give me any feedback or review.\n\nI am in the Postgres Slack channel #gsoc2021-students.\n\nThank you.\n\nRegards,\nYoungHwan", "msg_date": "Thu, 1 Apr 2021 17:09:28 +0900", "msg_from": "YoungHwan Joo <rulyox@gmail.com>", "msg_from_op": true, "msg_subject": "[GSoC 2021 Proposal] Develop Performance Farm Benchmarks and Website" }, { "msg_contents": "Hello, everyone.\nThis is the second draft of my GSoC proposal.\n\nI have updated\nSection 6 : \"Collect system metadata\", \"Admin panel inside website\", \"GUI\nimprovements\", \"Dependency removal\"\nSection 7 : \"Benchmark execution through API or Website using a Client\nManager\"\nSection 8 : \"Schedule\"\n\nAny feedback is appreciated.\n\nThank you.\n\nRegards,\nYoungHwan\n\n\nHello, team PostgreSQL!\n>\n> I am YoungHwan Joo, a student who is interested in GSoC 2021.\n> I was drawn to the project \"Develop Performance Farm Benchmarks and\n> Website\".\n>\n> I send you the first draft of my proposal as an attachment.\n> Please give me any feedback or review.\n>\n> I am in the Postgres Slack channel #gsoc2021-students.\n>\n> Thank you.\n>\n> Regards,\n> YoungHwan\n>\n>", "msg_date": "Tue, 6 Apr 2021 22:22:20 +0900", "msg_from": "YoungHwan Joo <rulyox@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [GSoC 2021 Proposal] Develop Performance Farm Benchmarks and\n Website" }, { "msg_contents": "Just a quick heads up - this is being followed up by myself in private messages :) \n\nIf anyone has any other inputs for the proposal, feel free to share!\n\nIlaria\n\n> Am 06.04.2021 um 19:59 schrieb YoungHwan Joo <rulyox@gmail.com>:\n> \n> \n> Hello, team PostgreSQL!\n> \n> I am YoungHwan Joo, a student who is interested in GSoC 2021.\n> I was drawn to the project 
\"Develop Performance Farm Benchmarks and Website\".\n> \n> I send you the first draft of my proposal as an attachment.\n> Please give me any feedback or review.\n> \n> I am in the Postgres Slack channel #gsoc2021-students.\n> \n> Thank you.\n> \n> Regards,\n> YoungHwan\n> \n> <GSoC 2021 Proposal - PostgreSQL - Develop Performance Farm Benchmarks and Website - YoungHwan Joo.pdf>\n\n\n", "msg_date": "Tue, 6 Apr 2021 20:03:02 +0200", "msg_from": "Ilaria <ilaria.battiston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [GSoC 2021 Proposal] Develop Performance Farm Benchmarks and\n Website" }, { "msg_contents": "Hi!\nThis is the third draft of my GSoC proposal.\n\nI have\nAdded some details to Section 6 \"Collect system metadata\".\nRemoved ideas \"Watch for git actions\" and \"Code conventions and testing\"\nfrom Section 7 due to the limited time.\nReorganized my schedule to fit in the additional ideas.\n\nPlease share your thoughts if you have any ideas.\n\nThank you.\n\nRegards,\nYoungHwan\n\n\nHello, everyone.\n> This is the second draft of my GSoC proposal.\n>\n> I have updated\n> Section 6 : \"Collect system metadata\", \"Admin panel inside website\", \"GUI\n> improvements\", \"Dependency removal\"\n> Section 7 : \"Benchmark execution through API or Website using a Client\n> Manager\"\n> Section 8 : \"Schedule\"\n>\n> Any feedback is appreciated.\n>\n> Thank you.\n>\n> Regards,\n> YoungHwan\n>", "msg_date": "Sat, 10 Apr 2021 01:00:06 +0900", "msg_from": "YoungHwan Joo <rulyox@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [GSoC 2021 Proposal] Develop Performance Farm Benchmarks and\n Website" } ]
[ { "msg_contents": "Hi\n\nI've been trying to figure out selinux with sepgsql (which is proving quite\ndifficult as there is an almost total lack of documentation/blogs etc. on\nthe topic) and ran into an issue. Whilst my system had selinux in enforcing\nmode, I mistakenly had sepgsql in permissive mode. I created a table and\nrestricted access to one column to regular users using the label\nsystem_u:object_r:sepgsql_secret_table_t:s0. Because sepgsql was in\npermissive mode, my test user could still access the restricted column.\n\nPostgres logged this:\n\n2021-03-31 17:12:29.713 BST [3917] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column\nname=\"column private of table t1\"\n\nThat's very confusing, because the norm in selinux is to log denials as if\nthe system were in enforcing mode, but then allow the action to proceed\nanyway, when in permissive mode. For example, log entries such as this are\ncreated when my restricted user tries to run an executable from /tmp after\nrunning \"setsebool -P user_exec_content off\":\n\ntype=AVC msg=audit(1617278924.917:484): avc: denied { execute } for\n pid=53036 comm=\"bash\" name=\"ls\" dev=\"dm-0\" ino=319727\nscontext=user_u:user_r:user_t:s0 tcontext=user_u:object_r:user_tmp_t:s0\ntclass=file permissive=1\n\nThe point being to let the admin know what would fail if the system were\nswitched to enforcing mode. 
Whilst that wasn't the point of what I was\ntrying to do, such a message would have indicated to me that I was in\npermissive mode without realising.\n\nIt seems to me that sepgsql should also log the denial, but flag that\npermissive mode is on.\n\nAny reason not to do that?\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com
", "msg_date": "Thu, 1 Apr 2021 13:32:54 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": true, "msg_subject": "sepgsql logging" }, { "msg_contents": "\nOn 4/1/21 8:32 AM, Dave Page wrote:\n> Hi\n>\n> I've been trying to figure out selinux with sepgsql (which is proving\n> quite difficult as there is an almost total lack of\n> documentation/blogs etc. on the topic) and ran into an issue. Whilst\n> my system had selinux in enforcing mode, I mistakenly had sepgsql in\n> permissive mode. I created a table and restricted access to one column\n> to regular users using the label\n> system_u:object_r:sepgsql_secret_table_t:s0. Because sepgsql was in\n> permissive mode, my test user could still access the restricted column.\n>\n> Postgres logged this:\n>\n> 2021-03-31 17:12:29.713 BST [3917] LOG:  SELinux: allowed { select }\n> scontext=user_u:user_r:user_t:s0\n> tcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column\n> name=\"column private of table t1\"\n>\n> That's very confusing, because the norm in selinux is to log denials\n> as if the system were in enforcing mode, but then allow the action to\n> proceed anyway, when in permissive mode. 
For example, log entries such\n> as this are created when my restricted user tries to run an executable\n> from /tmp after running \"setsebool -P user_exec_content off\":\n>\n> type=AVC msg=audit(1617278924.917:484): avc:  denied  { execute } for\n>  pid=53036 comm=\"bash\" name=\"ls\" dev=\"dm-0\" ino=319727\n> scontext=user_u:user_r:user_t:s0\n> tcontext=user_u:object_r:user_tmp_t:s0 tclass=file permissive=1\n>\n> The point being to let the admin know what would fail if the system\n> were switched to enforcing mode. Whilst that wasn't the point of what\n> I was trying to do, such a message would have indicated to me that I\n> was in permissive mode without realising.\n>\n> It seems to me that sepgsql should also log the denial, but flag that\n> permissive mode is on.\n>\n> Any reason not to do that?\n\n\n+1 for doing what selinux does if possible.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 1 Apr 2021 10:19:43 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 4/1/21 8:32 AM, Dave Page wrote:\n>> It seems to me that sepgsql should also log the denial, but flag that\n>> permissive mode is on.\n\n> +1 for doing what selinux does if possible.\n\n+1. 
If selinux itself is doing that, it's hard to see a reason why\nwe should not; and I concur that the info is useful.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 01 Apr 2021 10:23:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "On Thu, Apr 1, 2021 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > On 4/1/21 8:32 AM, Dave Page wrote:\n> >> It seems to me that sepgsql should also log the denial, but flag that\n> >> permissive mode is on.\n>\n> > +1 for doing what selinux does if possible.\n>\n> +1. If selinux itself is doing that, it's hard to see a reason why\n> we should not; and I concur that the info is useful.\n>\n\nThanks both. I'll take a look at the code and see if I can whip up a patch\n(it'll be a week or so as I'm taking some time off for Easter).\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com
", "msg_date": "Thu, 1 Apr 2021 15:30:08 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": true, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "Hi\n\nOn Thu, Apr 1, 2021 at 3:30 PM Dave Page <dpage@pgadmin.org> wrote:\n\n>\n>\n> On Thu, Apr 1, 2021 at 3:23 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>> > On 4/1/21 8:32 AM, Dave Page wrote:\n>> >> It seems to me that sepgsql should also log the denial, but flag that\n>> >> permissive mode is on.\n>>\n>> > +1 for doing what selinux does if possible.\n>>\n>> +1. If selinux itself is doing that, it's hard to see a reason why\n>> we should not; and I concur that the info is useful.\n>>\n>\n> Thanks both. I'll take a look at the code and see if I can whip up a patch\n> (it'll be a week or so as I'm taking some time off for Easter).\n>\n\nAttached is a patch to clean this up. It will log denials as such\nregardless of whether or not either selinux or sepgsql is in permissive\nmode. When either is in permissive mode, it'll add \" permissive=1\" to the\nend of the log messages. 
e.g.\n\nRegular user in permissive mode, with a restricted table column:\n\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table\nname=\"public.tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column uid of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column name of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column mail of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column address of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column salt of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: denied { select 
}\nscontext=user_u:user_r:user_t:s0\ntcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column\nname=\"column phash of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] STATEMENT: SELECT * FROM tb_users;\n\nThe same user/table, but in enforcing mode:\n\n2021-04-14 13:17:21.645 BST [22974] LOG: SELinux: allowed { search }\nscontext=user_u:user_r:user_t:s0\ntcontext=system_u:object_r:sepgsql_schema_t:s0 tclass=db_schema\nname=\"public\" at character 15\n2021-04-14 13:17:21.645 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_table\nname=\"public.tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column uid of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column name of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column mail of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column address of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] 
LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column salt of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] LOG: SELinux: denied { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column\nname=\"column phash of table tb_users\"\n2021-04-14 13:17:21.646 BST [22974] STATEMENT: SELECT * FROM tb_users;\n2021-04-14 13:17:21.646 BST [22974] ERROR: SELinux: security policy\nviolation\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 14 Apr 2021 13:41:46 +0100", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": true, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "On Wed, Apr 14, 2021 at 8:42 AM Dave Page <dpage@pgadmin.org> wrote:\n> Attached is a patch to clean this up. It will log denials as such regardless of whether or not either selinux or sepgsql is in permissive mode. When either is in permissive mode, it'll add \" permissive=1\" to the end of the log messages. e.g.\n\nLooks superficially reasonable on first glance, but I think we should\ntry to get an opinion from someone who knows more about SELinux.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 14 Apr 2021 09:49:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "On Wed, Apr 14, 2021 at 8:42 AM Dave Page <dpage@pgadmin.org> wrote:\r\n> Attached is a patch to clean this up. It will log denials as such\r\n> regardless of whether or not either selinux or sepgsql is in\r\n> permissive mode. When either is in permissive mode, it'll add \"\r\n> permissive=1\" to the end of the log messages. 
e.g.\r\n\r\nDave,\r\n\r\nJust to clarify -- it looks like this patch *only* adds the\r\n\"permissive=1\" part, right? I don't see any changes around denied-vs-\r\nallowed.\r\n\r\nI read the previous posts to mean that you were seeing \"allowed\" when\r\nyou should have been seeing \"denied\". I don't see that behavior --\r\nwithout this patch, I see the correct \"denied\" entries even when\r\nrunning in permissive mode. (It's been a while since the patch was\r\nposted, so I checked to make sure there hadn't been any relevant\r\nchanges in the meantime, and none jumped out at me.)\r\n\r\nThat said, the patch looks good as-is and seems to be working for me on\r\na Rocky 8 VM. (You weren't kidding about the setup difficulty.) Having\r\npermissive mode show up in the logs seems very useful.\r\n\r\nAs an aside, I don't see the \"allowed\" verbiage that sepgsql uses in\r\nany of the SELinux documentation. I do see third-party references to\r\n\"granted\", though, as in e.g.\r\n\r\n avc: granted { execute } for ...\r\n\r\nThat's not something that I think this patch should touch, but it\r\nseemed tangentially relevant for future convergence work.\r\n\r\nOn Wed, 2021-04-14 at 09:49 -0400, Robert Haas wrote:\r\n> Looks superficially reasonable on first glance, but I think we should\r\n> try to get an opinion from someone who knows more about SELinux.\r\n\r\nI am not that someone, but this looks straightforward, it's been\r\nstalled for a while, and I think it should probably go in.\r\n\r\n--Jacob\r\n", "msg_date": "Tue, 11 Jan 2022 00:04:32 +0000", "msg_from": "Jacob Champion <pchampion@vmware.com>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "Hi\n\nOn Tue, Jan 11, 2022 at 12:04 AM Jacob Champion <pchampion@vmware.com>\nwrote:\n\n> On Wed, Apr 14, 2021 at 8:42 AM Dave Page <dpage@pgadmin.org> wrote:\n> > Attached is a patch to clean this up. 
It will log denials as such\n> > regardless of whether or not either selinux or sepgsql is in\n> > permissive mode. When either is in permissive mode, it'll add \"\n> > permissive=1\" to the end of the log messages. e.g.\n>\n> Dave,\n>\n> Just to clarify -- it looks like this patch *only* adds the\n> \"permissive=1\" part, right? I don't see any changes around denied-vs-\n> allowed.\n>\n\nRight. denied-vs-allowed is shown at the beginning of the log line. From my\nearlier output:\n\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: allowed { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=unconfined_u:object_r:sepgsql_table_t:s0 tclass=db_column\nname=\"column salt of table tb_users\" permissive=1\n2021-04-14 13:20:30.401 BST [23073] LOG: SELinux: denied { select }\nscontext=user_u:user_r:user_t:s0\ntcontext=system_u:object_r:sepgsql_secret_table_t:s0 tclass=db_column\nname=\"column phash of table tb_users\" permissive=1\n\n\n>\n> I read the previous posts to mean that you were seeing \"allowed\" when\n> you should have been seeing \"denied\".\n\n\nThat's what I *thought* was happening originally, because I was mistakenly\nin permissive mode (if memory serves).\n\n\n> I don't see that behavior --\n> without this patch, I see the correct \"denied\" entries even when\n> running in permissive mode. (It's been a while since the patch was\n> posted, so I checked to make sure there hadn't been any relevant\n> changes in the meantime, and none jumped out at me.)\n>\n\nRight. The point is that if permissive mode is enabled, access will not be\ndenied. Effectively if you see permissive=1, then \"denied\" really means\n\"would be denied if enforcing mode was enabled\".\n\nThe idea is that you can run a production system in permissive mode to see\nwhat would be denied without breaking things for users. 
You can use that\ninfo to build your policy, and then when you no longer see any unexpected\ndenials in the logs, switch to enforcing mode.\n\n\n>\n> That said, the patch looks good as-is and seems to be working for me on\n> a Rocky 8 VM. (You weren't kidding about the setup difficulty.) Having\n> permissive mode show up in the logs seems very useful.\n>\n> As an aside, I don't see the \"allowed\" verbiage that sepgsql uses in\n> any of the SELinux documentation. I do see third-party references to\n> \"granted\", though, as in e.g.\n>\n> avc: granted { execute } for ...\n>\n> That's not something that I think this patch should touch, but it\n> seemed tangentially relevant for future convergence work.\n>\n\nInteresting. I never spotted that one. I'm not sure it matters much, except\nfor consistency. It's not like the various tools for analyzing SELinux logs\nwould be likely to work on a PostgreSQL log.\n\n\n>\n> On Wed, 2021-04-14 at 09:49 -0400, Robert Haas wrote:\n> > Looks superficially reasonable on first glance, but I think we should\n> > try to get an opinion from someone who knows more about SELinux.\n>\n> I am not that someone, but this looks straightforward, it's been\n> stalled for a while, and I think it should probably go in.\n>\n\nI'd like to see that. Thanks for the review.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n\nHiOn Tue, Jan 11, 2022 at 12:04 AM Jacob Champion <pchampion@vmware.com> wrote:On Wed, Apr 14, 2021 at 8:42 AM Dave Page <dpage@pgadmin.org> wrote:\n> Attached is a patch to clean this up. It will log denials as such\n> regardless of whether or not either selinux or sepgsql is in\n> permissive mode. When either is in permissive mode, it'll add \"\n> permissive=1\" to the end of the log messages. e.g.\n\nDave,\n\nJust to clarify -- it looks like this patch *only* adds the\n\"permissive=1\" part, right? I don't see any changes around denied-vs-\nallowed.Right. 
",
    "msg_date": "Tue, 11 Jan 2022 15:40:36 +0000",
    "msg_from": "Dave Page <dpage@pgadmin.org>",
    "msg_from_op": true,
    "msg_subject": "Re: sepgsql logging"
  },
  {
    "msg_contents": "\nOn 1/11/22 10:40, Dave Page wrote:\n>\n>\n> On Wed, 2021-04-14 at 09:49 -0400, Robert Haas wrote:\n> > Looks superficially reasonable on first glance, but I think we\n> should\n> > try to get an opinion from someone who knows more about SELinux.\n>\n> I am not that someone, but this looks straightforward, it's been\n> stalled for a while, and I think it should probably go in.\n>\n>\n> I'd like to see that. Thanks for the review. \n>\n\nI am not that person either. I agree this looks reasonable, but I also\nwould like the opinion of an expert, if we have one.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
    "msg_date": "Tue, 11 Jan 2022 11:31:19 -0500",
    "msg_from": "Andrew Dunstan <andrew@dunslane.net>",
    "msg_from_op": false,
    "msg_subject": "Re: sepgsql logging"
  },
  {
    "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I am not that person either. 
I agree this looks reasonable, but I also\n> would like the opinion of an expert, if we have one.\n\nI'm not sure we do anymore. Anyway, I tried this on Fedora 35 and\nconfirmed that it compiles and the (very tedious) test process\ndescribed in the sepgsql docs still passes. Looking in the system's\nlogs, it appears that Dave didn't precisely emulate how SELinux\nlogs this setting, because I see messages like\n\nJan 4 12:25:46 nuc1 audit[1754]: AVC avc: denied { setgid } for pid=1754 comm=\"sss_cache\" capability=6 scontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023 tcontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023 tclass=capability permissive=0\n\nSo it looks like their plan is to unconditionally write \"permissive=0\"\nor \"permissive=1\", while Dave's patch just prints nothing in enforcing\nmode. While I can see some virtue in brevity, I think that doing\nexactly what SELinux does is probably a better choice. For one thing,\nit'd remove doubt about whether one is looking at a log from a sepgsql\nversion that logs this or one that doesn't.\n\nOther than that nitpick, I think we should just push this.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 11 Jan 2022 12:55:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "On Tue, Jan 11, 2022 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Dunstan <andrew@dunslane.net> writes:\n> > I am not that person either. I agree this looks reasonable, but I also\n> > would like the opinion of an expert, if we have one.\n>\n> I'm not sure we do anymore. Anyway, I tried this on Fedora 35 and\n> confirmed that it compiles and the (very tedious) test process\n> described in the sepgsql docs still passes. 
Looking in the system's\n> logs, it appears that Dave didn't precisely emulate how SELinux\n> logs this setting, because I see messages like\n>\n> Jan 4 12:25:46 nuc1 audit[1754]: AVC avc: denied { setgid } for\n> pid=1754 comm=\"sss_cache\" capability=6\n> scontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023\n> tcontext=unconfined_u:unconfined_r:useradd_t:s0-s0:c0.c1023\n> tclass=capability permissive=0\n>\n> So it looks like their plan is to unconditionally write \"permissive=0\"\n> or \"permissive=1\", while Dave's patch just prints nothing in enforcing\n> mode. While I can see some virtue in brevity, I think that doing\n> exactly what SELinux does is probably a better choice. For one thing,\n> it'd remove doubt about whether one is looking at a log from a sepgsql\n> version that logs this or one that doesn't.\n>\n> Other than that nitpick, I think we should just push this.\n>\n\nHere's an update that adds the \"permissive=0\" case.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com", "msg_date": "Wed, 12 Jan 2022 10:30:07 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": true, "msg_subject": "Re: sepgsql logging" }, { "msg_contents": "Dave Page <dpage@pgadmin.org> writes:\n> On Tue, Jan 11, 2022 at 5:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So it looks like their plan is to unconditionally write \"permissive=0\"\n>> or \"permissive=1\", while Dave's patch just prints nothing in enforcing\n>> mode. While I can see some virtue in brevity, I think that doing\n>> exactly what SELinux does is probably a better choice. 
For one thing,\n>> it'd remove doubt about whether one is looking at a log from a sepgsql\n>> version that logs this or one that doesn't.\n\n> Here's an update that adds the \"permissive=0\" case.\n\nYou forgot to update the expected-results files :-(.\nDone and pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 12 Jan 2022 14:24:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sepgsql logging" } ]