[
{
"msg_contents": "Hi all, \r\n\r\nI have been going through the multixact code over the last few \r\nweeks, and noticed a potential discrepancy between the need for critical \r\nsections in the creation of new multixacts and the current code.\r\n\r\nAs per the comment in GetNewMultiXact:\r\n\r\n\r\n\t/*\r\n\t * Critical section from here until caller has written the data into the\r\n\t * just-reserved SLRU space; we don't want to error out with a partly\r\n\t * written MultiXact structure. (In particular, failing to write our\r\n\t * start offset after advancing nextMXact would effectively corrupt the\r\n\t * previous MultiXact.)\r\n\t */\r\n\tSTART_CRIT_SECTION()\r\n\r\n\r\nThis makes sense, as we need the multixact state and multixact offset \r\ndata to be consistent, but once we write data into the offsets, we can \r\nend the critical section. Currently we wait until the members data is \r\nalso written before we end the critical section. \r\n\r\n\r\nI’ve attached a patch that moves the end of the critical section into \r\nRecordNewMultiXact, right after we finish writing data into the offsets \r\ncache.\r\n\r\nThis passes regression tests, as well as some custom testing around \r\ninjecting random failures while writing multixact members, and \r\nrestarting.\r\n\r\nI would appreciate any feedback on this.\r\n\r\n\r\nSincerely, \r\n\r\nRishu Bagga, Amazon Web Services (AWS)",
"msg_date": "Wed, 14 Dec 2022 00:14:34 +0000",
"msg_from": "\"Bagga, Rishu\" <bagrishu@amazon.com>",
"msg_from_op": true,
"msg_subject": "Shortening the Scope of Critical Section in Creation of New\n MultiXacts"
},
{
"msg_contents": "Hello.\n\nIn short, the code as-is looks correct.\n\nAt Wed, 14 Dec 2022 00:14:34 +0000, \"Bagga, Rishu\" <bagrishu@amazon.com> wrote in \n>\t * Critical section from here until caller has written the data into the\n>\t * just-reserved SLRU space; we don't want to error out with a partly\n\n\"the data\" here means the whole this multi transaction, which includes\nmembers. We shouldn't exit the critical section at least until the\nvery end of RecordNewMultiXact().\n\n> This makes sense, as we need the multixact state and multixact offset \n> data to be consistent, but once we write data into the offsets, we can \n> end the critical section. Currently we wait until the members data is \n> also written before we end the critical section. \n\nWhy do you think that the members are not a part of a\nmultitransaction? A multitransaction is not complete without them.\n\nAddition to the above, we cannot simply move the END_CRIT_SECTION() to\nthere since RecordNewMultiXact() has another caller that doesn't start\na critical section.\n\nBy the way, I didn't tested this for myself but..\n\n> This passes regression tests\n\nMaybe if you did the same with an assertion-build, you will get an\nassertion-failure.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 14 Dec 2022 14:45:09 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Shortening the Scope of Critical Section in Creation of New\n MultiXacts"
}
] |
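The invariant the two posts above are arguing about can be seen in a toy model: a multixact's members are located by its start offset, and its member count is implied by the *next* multixact's start offset. The sketch below is hypothetical C for illustration only (not PostgreSQL source; all names are made up). It shows both failure modes discussed: an error between reserving the ID and writing the offset corrupts the previous multixact, and an error after the offset write but before the member writes still leaves the new multixact with garbage members, which is Horiguchi's point that the critical section must extend through the end of RecordNewMultiXact().

```c
/* Toy model of multixact creation -- illustrative only, NOT PostgreSQL
 * source.  A multixact's members are found via its start offset, and its
 * member count is implied by the NEXT multixact's start offset, which is
 * why a partially written entry can corrupt more than itself. */

#define MAX_MXACT   16
#define MAX_MEMBERS 64

enum fail_point { NO_FAIL, FAIL_BEFORE_OFFSET, FAIL_BEFORE_MEMBERS };

static int next_mxact  = 1;            /* like MultiXactState->nextMXact  */
static int next_offset = 0;            /* like MultiXactState->nextOffset */
static int offsets[MAX_MXACT];         /* stand-in for the offsets SLRU   */
static int members[MAX_MEMBERS];       /* stand-in for the members SLRU   */

/* Reserve an ID and record its data; `fail` simulates an ERROR thrown at
 * the given point.  Returns the new multixact ID, or -1 on failure. */
int create_multixact(const int *xids, int n, enum fail_point fail)
{
    /* Reservation step: advance the shared counters. */
    int id = next_mxact++;
    int my_offset = next_offset;
    next_offset += n;

    /* Failing here corrupts the PREVIOUS multixact: its member count is
     * offsets[id] - offsets[id - 1], and offsets[id] is never written. */
    if (fail == FAIL_BEFORE_OFFSET)
        return -1;

    offsets[id] = my_offset;

    /* Failing here leaves THIS multixact with garbage members even though
     * its offset is valid -- so ending the critical section right after
     * the offset write is not sufficient. */
    if (fail == FAIL_BEFORE_MEMBERS)
        return -1;

    for (int i = 0; i < n; i++)
        members[my_offset + i] = xids[i];
    return id;
}

/* A reader derives the member count from the neighboring start offset. */
int get_member_count(int id)
{
    int end = (id + 1 == next_mxact) ? next_offset : offsets[id + 1];
    return end - offsets[id];
}
```

Running the failure scenarios against this model reproduces both corruptions the thread describes, without any real SLRU machinery involved.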
[
{
"msg_contents": "Right now, if an unprivileged user issues VACUUM/ANALYZE (without\nspecifying a table), it will emit messages for each relation that it\nskips, including indexes, views, and other objects that can't be a\ndirect target of VACUUM/ANALYZE anyway. Attached patch causes it to\ncheck the type of object first, and then check privileges second.\n\nFound while reviewing the MAINTAIN privilege patch. Implemented with\nhis suggested fix. I intend to commit soon.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 13 Dec 2022 18:29:56 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Avoid extra \"skipping\" messages from VACUUM/ANALYZE"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 06:29:56PM -0800, Jeff Davis wrote:\n> Right now, if an unprivileged user issues VACUUM/ANALYZE (without\n> specifying a table), it will emit messages for each relation that it\n> skips, including indexes, views, and other objects that can't be a\n> direct target of VACUUM/ANALYZE anyway. Attached patch causes it to\n> check the type of object first, and then check privileges second.\n\nThis also seems to be the case when a table name is specified:\n\n\tpostgres=# CREATE TABLE test (a INT);\n\tCREATE TABLE\n\tpostgres=# CREATE INDEX ON test (a);\n\tCREATE INDEX\n\tpostgres=# CREATE ROLE myuser;\n\tCREATE ROLE\n\tpostgres=# SET ROLE myuser;\n\tSET\n\tpostgres=> VACUUM test_a_idx;\n\tWARNING: permission denied to vacuum \"test_a_idx\", skipping it\n\tVACUUM\n\nGranted, this likely won't create as much noise as a database-wide VACUUM,\nbut perhaps we could add a relkind check in expand_vacuum_rel() and swap\nthe checks in vacuum_rel()/analyze_rel(), too. I don't know if it's worth\nthe trouble, though.\n\n> Found while reviewing the MAINTAIN privilege patch. Implemented with\n> his suggested fix. I intend to commit soon.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 13 Dec 2022 19:40:59 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid extra \"skipping\" messages from VACUUM/ANALYZE"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 07:40:59PM -0800, Nathan Bossart wrote:\n> Granted, this likely won't create as much noise as a database-wide VACUUM,\n> but perhaps we could add a relkind check in expand_vacuum_rel() and swap\n> the checks in vacuum_rel()/analyze_rel(), too. I don't know if it's worth\n> the trouble, though.\n\nI looked into this. I don't think adding a check in expand_vacuum_rel() is\nworth much because we'd have to permit all relkinds that can be either\nvacuumed or analyzed, and you have to check the relkind again in\nvacuum_rel()/analyze_rel() anyway. It's easy enough to postpone the\npermissions check in vacuum_rel() so that the relkind messages take\nprecedence, but if we do the same in analyze_rel(), FDWs'\nAnalyzeForeignTable functions will be called prior to checking permissions,\nwhich doesn't seem great. We could move the call to AnalyzeForeignTable\nout of the relkind check to avoid this, but I'm having trouble believing\nit's worth it to reorder the WARNING messages.\n\nUltimately, I think reversing the checks in get_all_vacuum_rels() (as your\npatch does) should eliminate most of the noise, so I filed a commitfest\nentry [0] and marked it as ready-for-committer.\n\n[0] https://commitfest.postgresql.org/41/4094/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Dec 2022 12:01:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid extra \"skipping\" messages from VACUUM/ANALYZE"
}
] |
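The fix settled on in this thread is purely an ordering change: test whether the relation is a kind that VACUUM/ANALYZE can process at all before testing privileges, so that non-vacuumable objects (indexes, views) are skipped silently instead of warning. A minimal C sketch of that reordering, with hypothetical names standing in for the real checks (this is not PostgreSQL source):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the relkind and ACL checks discussed in the
 * thread -- illustrative only, not PostgreSQL source. */
enum relkind { RELKIND_TABLE, RELKIND_INDEX, RELKIND_VIEW };

struct rel { enum relkind kind; bool has_priv; };

static bool vacuumable(enum relkind k) { return k == RELKIND_TABLE; }

/* Pre-patch order: privileges are checked first, so an unprivileged user
 * gets a "skipping" warning even for indexes and views, which could never
 * be vacuumed anyway.  Returns true when a warning would be emitted. */
bool skip_warns_old(const struct rel *r)
{
    if (!r->has_priv)
        return true;              /* WARNING: permission denied ..., skipping */
    return false;                 /* (relkind check happens later) */
}

/* Patched order: object type first, so non-vacuumable relations are
 * skipped silently; only vacuumable relations the user lacks privileges
 * on still warn. */
bool skip_warns_new(const struct rel *r)
{
    if (!vacuumable(r->kind))
        return false;             /* silently skip indexes, views, ... */
    if (!r->has_priv)
        return true;              /* genuine permission problem still warns */
    return false;
}
```

With the new ordering a database-wide VACUUM by an unprivileged role no longer emits one warning per index or view it walks past, which is the noise reduction the patch targets.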
[
{
"msg_contents": "All,\n\nThe recent discussion surrounding aggregates and ORDER BY moved me to look\nover our existing documentation, especially now that we've reworked the\nfunction tables, to see what improvements can be had by simply documenting\nthose functions where ORDER BY may change the user-visible output. I\nskipped range aggregates for the moment but handled the others on the\naggregates page (not window functions). This includes the float types for\nsum and avg.\n\nI added a note just before the table linking back to the syntax chapter and\ndescribing the newly added rules and syntax choice in the table.\n\nThe nuances of floating point math suggest to me that specifying order by\nfor those is in some kind of gray area and so I've marked it optional...any\nsuggestions for wording (or an xref) to explain those nuances or should it\njust be shown non-optional like the others? Or not shown at all?\n\nThe novelty of my examples is up for bikeshedding. I didn't want\nanything too long so a subquery didn't make sense, and I was trying to\navoid duplication as well as multiple lines - hence creating a CTE that can\nbe copied onto all of the example queries to produce the noted result.\n\nI added a DISTINCT example to array_agg because it is the first aggregate\non the page and so hopefully will be seen during a cursory reading. Plus,\narray_agg is the go-to function for doing this kind of experimentation.\n\nDavid J.\n\nThe patch is attached. A screenshot exemplifying the changes is copied\ninline and attached.\n\n[image: image.png]",
"msg_date": "Tue, 13 Dec 2022 19:38:15 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 07:38:15PM -0700, David G. Johnston wrote:\n> All,\n> \n> The recent discussion surrounding aggregates and ORDER BY moved me to look over\n> our existing documentation, especially now that we've reworked the function\n> tables, to see what improvements can be had by simply documenting those\n> functions where ORDER BY may change the user-visible output. I skipped range\n> aggregates for the moment but handled the others on the aggregates page (not\n> window functions). This includes the float types for sum and avg.\n> \n> I added a note just before the table linking back to the syntax chapter and\n> describing the newly added rules and syntax choice in the table.\n> \n> The nuances of floating point math suggest to me that specifying order by for\n> those is in some kind of gray area and so I've marked it optional...any\n> suggestions for wording (or an xref) to explain those nuances or should it just\n> be shown non-optional like the others? Or not shown at all?\n> \n> The novelty of my examples is up for bikeshedding. I didn't want anything too\n> long so a subquery didn't make sense, and I was trying to avoid duplication as\n> well as multiple lines - hence creating a CTE that can be copied onto all of\n> the example queries to produce the noted result.\n> \n> I added a DISTINCT example to array_agg because it is the first aggregate on\n> the page and so hopefully will be seen during a cursory reading. Plus,\n> array_agg is the go-to function for doing this kind of experimentation.\n\nI like this idea, though the examples seemed too detailed so I skipped\nthem. Here is the trimmed-down patch I would like to apply.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Tue, 24 Oct 2023 16:39:00 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 1:39 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Dec 13, 2022 at 07:38:15PM -0700, David G. Johnston wrote:\n> > All,\n> >\n> > The recent discussion surrounding aggregates and ORDER BY moved me to\n> look over\n> > our existing documentation, especially now that we've reworked the\n> function\n> > tables, to see what improvements can be had by simply documenting those\n> > functions where ORDER BY may change the user-visible output. I skipped\n> range\n> > aggregates for the moment but handled the others on the aggregates page\n> (not\n> > window functions). This includes the float types for sum and avg.\n> >\n> > I added a note just before the table linking back to the syntax chapter\n> and\n> > describing the newly added rules and syntax choice in the table.\n> >\n> > The nuances of floating point math suggest to me that specifying order\n> by for\n> > those is in some kind of gray area and so I've marked it optional...any\n> > suggestions for wording (or an xref) to explain those nuances or should\n> it just\n> > be shown non-optional like the others? Or not shown at all?\n> >\n> > The novelty of my examples is up for bikeshedding. I didn't want\n> anything too\n> > long so a subquery didn't make sense, and I was trying to avoid\n> duplication as\n> > well as multiple lines - hence creating a CTE that can be copied onto\n> all of\n> > the example queries to produce the noted result.\n> >\n> > I added a DISTINCT example to array_agg because it is the first\n> aggregate on\n> > the page and so hopefully will be seen during a cursory reading. Plus,\n> > array_agg is the go-to function for doing this kind of experimentation.\n>\n> I like this idea, though the examples seemed too detailed so I skipped\n> them. 
Here is the trimmed-down patch I would like to apply.\n>\n>\nI'd prefer to keep pointing out that the ones documented are those whose\noutputs will vary due to ordering.\n\nI've been sympathetic to the user comments that we don't have enough\nexamples. Just using array_agg for that purpose, showing both DISTINCT and\nORDER BY seems like a fair compromise (removes two from my original\nproposal). The examples in the section we tell them to go see aren't of\nthat great quality. If you strongly dislike having the function table\ncontain the examples we should at least improve the page we are sending\nthem to. (As an aside to this, I've personally always found the syntax\nblock with the 5 syntaxes shown there to be intimidating/hard-to-read).\n\nI'd at least suggest you reconsider the commentary and examples surrounding\njsonb_object_agg.\n\nThe same goes for the special knowledge of floating point behavior for why\nwe've chosen to document avg/sum, something that typically doesn't care\nabout order, as having an optional order by.\n\nDavid J.",
"msg_date": "Tue, 24 Oct 2023 18:45:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Tue, Oct 24, 2023 at 06:45:48PM -0700, David G. Johnston wrote:\n> I'd prefer to keep pointing out that the ones documented are those whose\n> outputs will vary due to ordering.\n\nOkay, I re-added it in the attached patch, and tightened up the text.\n\n> I've been sympathetic to the user comments that we don't have enough examples. \n\nGood point.\n\n> Just using array_agg for that purpose, showing both DISTINCT and ORDER BY seems\n> like a fair compromise (removes two from my original proposal). The examples\n> in the section we tell them to go see aren't of that great quality. If you\n> strongly dislike having the function table contain the examples we should at\n> least improve the page we are sending them to. (As an aside to this, I've\n> personally always found the syntax block with the 5 syntaxes shown there to be\n> intimidating/hard-to-read).\n\nI think you are right that it belongs in the syntax section; we cover\nordering extensively there. We already have queries there, but not\noutput, so I moved the relevant examples to there and replaced the\nexample that had no output.\n\n> I'd at least suggest you reconsider the commentary and examples surrounding\n> jsonb_object_agg.\n\nI moved that as well, and tightened the example.\n\n> The same goes for the special knowledge of floating point behavior for why\n> we've chosen to document avg/sum, something that typically doesn't care about\n> order, as having an optional order by.\n\nThe floating example seems too obscure to mention in our function docs. \nI can put a sentence in the syntax docs, but is there value in\nexplaining that to users? How it that helpful? Example?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Wed, 25 Oct 2023 11:36:41 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 8:36 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Tue, Oct 24, 2023 at 06:45:48PM -0700, David G. Johnston wrote:\n> > I'd prefer to keep pointing out that the ones documented are those whose\n> > outputs will vary due to ordering.\n>\n> Okay, I re-added it in the attached patch, and tightened up the text.\n>\n\nThanks\n\n\n> I think you are right that it belongs in the syntax section; we cover\n> ordering extensively there. We already have queries there, but not\n> output, so I moved the relevant examples to there and replaced the\n> example that had no output.\n>\n\nThanks\n\n\n> > The same goes for the special knowledge of floating point behavior for\n> why\n> > we've chosen to document avg/sum, something that typically doesn't care\n> about\n> > order, as having an optional order by.\n>\n> The floating example seems too obscure to mention in our function docs.\n> I can put a sentence in the syntax docs, but is there value in\n> explaining that to users? How it that helpful? Example?\n>\n>\nYeah, we punt on the entire concept in the data type section:\n\n\"Managing these errors and how they propagate through calculations is the\nsubject of an entire branch of mathematics and computer science and will\nnot be discussed here,\" ...\n\nAlso, I'm now led to believe that the relevant IEEE 754 floating point\naddition is indeed commutative. Given that, I am inclined to simply not\nadd the order by clause at all to those four functions. (actually, you\nalready got rid of the avg()s but the sum()s are still present, so just\nthose two).\n\nDavid J.\n\nOn Wed, Oct 25, 2023 at 8:36 AM Bruce Momjian <bruce@momjian.us> wrote:On Tue, Oct 24, 2023 at 06:45:48PM -0700, David G. 
Johnston wrote:\n> I'd prefer to keep pointing out that the ones documented are those whose\n> outputs will vary due to ordering.\n\nOkay, I re-added it in the attached patch, and tightened up the text.Thanks\n\nI think you are right that it belongs in the syntax section; we cover\nordering extensively there. We already have queries there, but not\noutput, so I moved the relevant examples to there and replaced the\nexample that had no output.Thanks \n> The same goes for the special knowledge of floating point behavior for why\n> we've chosen to document avg/sum, something that typically doesn't care about\n> order, as having an optional order by.\n\nThe floating example seems too obscure to mention in our function docs. \nI can put a sentence in the syntax docs, but is there value in\nexplaining that to users? How it that helpful? Example?Yeah, we punt on the entire concept in the data type section:\"Managing these errors and how they propagate through calculations is the subject of an entire branch of mathematics and computer science and will not be discussed here,\" ...Also, I'm now led to believe that the relevant IEEE 754 floating point addition is indeed commutative. Given that, I am inclined to simply not add the order by clause at all to those four functions. (actually, you already got rid of the avg()s but the sum()s are still present, so just those two).David J.",
"msg_date": "Wed, 25 Oct 2023 16:14:11 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 04:14:11PM -0700, David G. Johnston wrote:\n> Yeah, we punt on the entire concept in the data type section:\n> \n> \"Managing these errors and how they propagate through calculations is the\n> subject of an entire branch of mathematics and computer science and will not be\n> discussed here,\" ...\n> \n> Also, I'm now led to believe that the relevant IEEE 754 floating point addition\n> is indeed commutative. Given that, I am inclined to simply not add the order\n> by clause at all to those four functions. (actually, you already got rid of the\n> avg()s but the sum()s are still present, so just those two).\n\nAh, yes, sum() removed. Updated patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Wed, 25 Oct 2023 19:22:29 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 4:22 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Oct 25, 2023 at 04:14:11PM -0700, David G. Johnston wrote:\n> > Yeah, we punt on the entire concept in the data type section:\n> >\n> > \"Managing these errors and how they propagate through calculations is the\n> > subject of an entire branch of mathematics and computer science and will\n> not be\n> > discussed here,\" ...\n> >\n> > Also, I'm now led to believe that the relevant IEEE 754 floating point\n> addition\n> > is indeed commutative. Given that, I am inclined to simply not add the\n> order\n> > by clause at all to those four functions. (actually, you already got rid\n> of the\n> > avg()s but the sum()s are still present, so just those two).\n>\n> Ah, yes, sum() removed. Updated patch attached.\n>\n>\nThe paragraph leading into the last added example needs to be tweaked:\n\nIf DISTINCT is specified within an aggregate, the data is sorted in\nascending order while extracting unique values. You can add an ORDER BY\nclause, limited to expressions matching the regular arguments of the\naggregate, to sort the output in descending order.\n\n(show existing - DISTINCT only - example here)\n\n<programlisting>\nWITH vals (v) AS ( VALUES (1),(3),(4),(3),(2) )\nSELECT string_agg(DISTINCT v::text, ';' ORDER BY v::text DESC) FROM vals;\n string_agg\n-----------\n 4;3;2;1\n</programlisting>\n\n(existing note)\n\nQuestion: Do you know whether we for certain always sort ascending here to\ncompute the unique values or whether if, say, there is an index on the\ncolumn in descending order (or ascending and traversed backwards) that the\ndata within the aggregate could, with an order by, be returned in\ndescending order? 
If it is ascending, is that part of the SQL Standard\n(since it doesn't even allow an order by to give the user the ability to\ncontrol the output ordering) or does the SQL Standard expect that even a\nrandom order would be fine since there are algorithms that can be used that\ndo not involve sorting the input?\n\nIt seems redundant to first say \"regular arguments\" then negate it in order\nto say \"DISTINCT list\". Using the positive form with \"DISTINCT list\"\nshould get the point across sufficiently and succinctly. It also avoids me\nfeeling like there should be an example of what happens when you do \"sort\non an expression that is not included in the DISTINCT list\".\n\nInterestingly:\n\nWITH vals (v,l) AS ( VALUES (1,'Z'),(3,'D'),(4,'R'),(3,'A'),(2,'T') )\nSELECT string_agg(DISTINCT l, ';' ORDER BY l, ';' DESC) FROM vals;\n\nERROR: in an aggregate with DISTINCT, ORDER BY expressions must appear in\nargument list\nLINE 2: SELECT string_agg(DISTINCT l, ';' ORDER BY l, ';' DESC) FROM...\n\nBut both expressions in the argument list (el and semicolon) do appear in\nthe ORDER BY...\n\nDavid J.",
"msg_date": "Wed, 25 Oct 2023 17:10:17 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 05:10:17PM -0700, David G. Johnston wrote:\n> The paragraph leading into the last added example needs to be tweaked:\n> \n> If DISTINCT is specified within an aggregate, the data is sorted in ascending\n> order while extracting unique values. You can add an ORDER BY clause, limited\n> to expressions matching the regular arguments of the aggregate, to sort the\n> output in descending order.\n> \n> (show existing - DISTINCT only - example here)\n> \n> <programlisting>\n> WITH vals (v) AS ( VALUES (1),(3),(4),(3),(2) )\n> SELECT string_agg(DISTINCT v::text, ';' ORDER BY v::text DESC) FROM vals;\n> string_agg\n> -----------\n> 4;3;2;1\n> </programlisting>\n> \n> (existing note)\n\nI see what you mean. I added an example that doesn't match the existing\nparagraph. I have rewritten the paragraph and used a relevant example;\npatch attached.\n\n> Question: Do you know whether we for certain always sort ascending here to\n> compute the unique values or whether if, say, there is an index on the column\n> in descending order (or ascending and traversed backwards) that the data within\n> the aggregate could, with an order by, be returned in descending order? If it\n> is ascending, is that part of the SQL Standard (since it doesn't even allow an\n> order by to give the user the ability the control the output ordering) or does\n> the SQL Standard expect that even a random order would be fine since there are\n> algorithms that can be used that do not involve sorting the input?\n\nI don't think order is ever guaranteed in the standard without an ORDER\nBY.\n\n> It seems redundant to first say \"regular arguments\" then negate it in order to\n> say \"DISTINCT list\". Using the positive form with \"DISTINCT list\" should get\n> the point across sufficiently and succinctly. 
It also avoids me feeling like\n> there should be an example of what happens when you do \"sort on an expression\n> that is not included in the DISTINCT list\".\n\nAgreed, I rewrote that.\n\n> Interestingly:\n> \n> WITH vals (v,l) AS ( VALUES (1,'Z'),(3,'D'),(4,'R'),(3,'A'),(2,'T') )\n> SELECT string_agg(DISTINCT l, ';' ORDER BY l, ';' DESC) FROM vals;\n> \n> ERROR: in an aggregate with DISTINCT, ORDER BY expressions must appear in\n> argument list\n> LINE 2: SELECT string_agg(DISTINCT l, ';' ORDER BY l, ';' DESC) FROM...\n> \n> But both expressions in the argument list (el and semicolon) do appear in the\n> ORDER BY...\n\nI think ORDER BY has to match DISTINCT columns, while you are using ';'.\nI used a simpler example with array_agg() in my patch to avoid the issue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Wed, 25 Oct 2023 21:57:10 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, 26 Oct 2023 at 13:10, David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Question: Do you know whether we for certain always sort ascending here to compute the unique values or whether if, say, there is an index on the column in descending order (or ascending and traversed backwards) that the data within the aggregate could, with an order by, be returned in descending order?\n\nThe way it's currently coded, we seem to always require ascending\norder. See addTargetToGroupList(). The call to\nget_sort_group_operators() only requests the ltOpr.\n\nA quick test creating an index on a column with DESC shows that we end\nup doing a backwards index scan so that we get the requested ascending\norder:\n\ncreate table b (b text);\ncreate index on b (b desc);\nexplain select string_agg(distinct b,',') from b;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Aggregate (cost=67.95..67.97 rows=1 width=32)\n -> Index Only Scan Backward using b_b_idx on b (cost=0.15..64.55\nrows=1360 width=32)\n(2 rows)\n\nHowever, I think we'd best stay clear of offering any guarantees in\nthe documents about this. If we did that it would be much harder in\nthe future if we wanted to implement the DISTINCT aggregates by\nhashing.\n\nDavid\n\n\n",
"msg_date": "Thu, 26 Oct 2023 15:12:56 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 7:13 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 26 Oct 2023 at 13:10, David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > Question: Do you know whether we for certain always sort ascending here\n> to compute the unique values or whether if, say, there is an index on the\n> column in descending order (or ascending and traversed backwards) that the\n> data within the aggregate could, with an order by, be returned in\n> descending order?\n>\n> The way it's currently coded, we seem to always require ascending\n> order. See addTargetToGroupList(). The call to\n> get_sort_group_operators() only requests the ltOpr.\n>\n> A quick test creating an index on a column with DESC shows that we end\n> up doing a backwards index scan so that we get the requested ascending\n> order:\n>\n> create table b (b text);\n> create index on b (b desc);\n> explain select string_agg(distinct b,',') from b;\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------\n> Aggregate (cost=67.95..67.97 rows=1 width=32)\n> -> Index Only Scan Backward using b_b_idx on b (cost=0.15..64.55\n> rows=1360 width=32)\n> (2 rows)\n>\n> However, I think we'd best stay clear of offering any guarantees in\n> the documents about this. 
If we did that it would be much harder in\n> the future if we wanted to implement the DISTINCT aggregates by\n> hashing.\n>\n> So, I think we are mischaracterizing the Standard here, if only in the\nspecific case of array_agg.\n\nSQL Standard: 4.16.4\n\nEvery unary aggregate function takes an arbitrary <value expression> as the\nargument; most unary aggregate\nfunctions can optionally be qualified with either DISTINCT or ALL.\n\nIf ARRAY_AGG is specified, then an array value with one element formed from\nthe <value expression>\nevaluated for each row that qualifies.\n\nNeither DISTINCT nor ALL are allowed to be specified for VAR_POP, VAR_SAMP,\nSTDDEV_POP, or\nSTDDEV_SAMP; redundant duplicates are not removed when computing these\nfunctions.\n\n10.9\n\n<array aggregate function> ::=\nARRAY_AGG\n<left paren> <value expression> [ ORDER BY <sort specification list> ]\n<right paren>\n\nI would reword the existing note to be something like:\n\nThe SQL Standard defines specific aggregates and their properties,\nincluding which of DISTINCT and/or ORDER BY is allowed. Due to the\nextensible nature of PostgreSQL it accepts either or both clauses for any\naggregate.\n\n From the most recent patch:\n\n <para>\n- If <literal>DISTINCT</literal> is specified in addition to an\n- <replaceable>order_by_clause</replaceable>, then all the\n<literal>ORDER BY</literal>\n- expressions must match regular arguments of the aggregate; that is,\n- you cannot sort on an expression that is not included in the\n- <literal>DISTINCT</literal> list.\n+ If <literal>DISTINCT</literal> is specified with an\n+ <replaceable>order_by_clause</replaceable>, <literal>ORDER\n+ BY</literal> expressions can only reference columns in the\n+ <literal>DISTINCT</literal> list. 
For example:\n+<programlisting>\n+WITH vals (v1, v2) AS ( VALUES (1,'Z'),(3,'D'),(4,'R'),(3,'A'),(2,'T') )\n+SELECT array_agg(DISTINCT v2 ORDER BY v2 DESC) FROM vals;\n+ array_agg\n+-------------\n+ {Z,T,R,D,A}\n+</programlisting>\n\nThe change to a two-column vals was mostly to try and find corner-cases\nthat might need to be addressed. If we don't intend to show the error case\nof DISTINCT v1 ORDER BY v2 then we should go back to the original example\nand just add ORDER BY v DESC. I'm fine with not using string_agg here.\n\n+ For example:\n+<programlisting>\n+WITH vals (v) AS ( VALUES (1),(3),(4),(3),(2) )\n+SELECT array_agg(DISTINCT v ORDER BY v DESC) FROM vals;\n+ array_agg\n+-----------\n+ {4,3,2,1}\n+</programlisting>\n\nWe get enough complaints regarding \"apparent ordering\" that I would like to\nadd:\n\nAs a reminder, while some DISTINCT processing algorithms produce sorted\noutput as a side-effect, only by specifying ORDER BY is the output order\nguaranteed.\n\nDavid J.",
"msg_date": "Wed, 25 Oct 2023 22:34:10 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Wed, Oct 25, 2023 at 10:34:10PM -0700, David G. Johnston wrote:\n> I would reword the existing note to be something like:\n> \n> The SQL Standard defines specific aggregates and their properties, including\n> which of DISTINCT and/or ORDER BY is allowed. Due to the extensible nature of\n> PostgreSQL it accepts either or both clauses for any aggregate.\n\nUh, is this something in my patch or somewhere else? I don't think\nPostgreSQL extensible is an example of syntax flexibility.\n\n> From the most recent patch:\n> \n> <para>\n> - If <literal>DISTINCT</literal> is specified in addition to an\n> - <replaceable>order_by_clause</replaceable>, then all the <literal>ORDER BY\n> </literal>\n> - expressions must match regular arguments of the aggregate; that is,\n> - you cannot sort on an expression that is not included in the\n> - <literal>DISTINCT</literal> list.\n> + If <literal>DISTINCT</literal> is specified with an\n> + <replaceable>order_by_clause</replaceable>, <literal>ORDER\n> + BY</literal> expressions can only reference columns in the\n> + <literal>DISTINCT</literal> list. For example:\n> +<programlisting>\n> +WITH vals (v1, v2) AS ( VALUES (1,'Z'),(3,'D'),(4,'R'),(3,'A'),(2,'T') )\n> +SELECT array_agg(DISTINCT v2 ORDER BY v2 DESC) FROM vals;\n> + array_agg\n> +-------------\n> + {Z,T,R,D,A}\n> +</programlisting>\n> \n> The change to a two-column vals was mostly to try and find corner-cases that\n> might need to be addressed. If we don't intend to show the error case of\n> DISTINCT v1 ORDER BY v2 then we should go back to the original example and just\n> add ORDER BY v DESC. 
I'm fine with not using string_agg here.\n> \n> + For example:\n> +<programlisting>\n> +WITH vals (v) AS ( VALUES (1),(3),(4),(3),(2) )\n> +SELECT array_agg(DISTINCT v ORDER BY v DESC) FROM vals;\n> + array_agg\n> +-----------\n> + {4,3,2,1}\n> +</programlisting>\n\nOkay, good, switched in the attached patch.\n\n> We get enough complaints regarding \"apparent ordering\" that I would like to\n> add:\n> \n> As a reminder, while some DISTINCT processing algorithms produce sorted output\n> as a side-effect, only by specifying ORDER BY is the output order guaranteed.\n\nWell, we need to create a new email thread for this and look at all the\nareas it applies to since this is a much larger issue.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 26 Oct 2023 17:56:44 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
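Editor's sketch, not part of the archived exchange: the restriction discussed above (ORDER BY expressions in an aggregate with DISTINCT must reference the DISTINCT list) exists because deduplication can collapse rows that differ only in the would-be sort column. The Python below uses hypothetical data with a duplicated v2 value, unlike the thread's example, to show why the sort key becomes ill-defined.

```python
# Hypothetical rows where two rows share v2 = 'D' but differ in v1.
# After DISTINCT on v2, "ORDER BY v1" has no single well-defined value
# for the surviving 'D', which is why such queries are rejected.
rows = [(1, 'Z'), (3, 'D'), (4, 'D')]

distinct_v2 = {v2 for _, v2 in rows}

# Map each distinct v2 back to every v1 it came from: 'D' maps to [3, 4].
v1_candidates = {v2: sorted(v1 for v1, w in rows if w == v2)
                 for v2 in distinct_v2}

ambiguous = any(len(c) > 1 for c in v1_candidates.values())
print(ambiguous)  # True: sorting distinct v2 by v1 would be ambiguous
```

This is only an illustration of the semantics; PostgreSQL detects the problem at parse time rather than by inspecting the data.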
{
"msg_contents": "On Thu, Oct 26, 2023 at 2:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Oct 25, 2023 at 10:34:10PM -0700, David G. Johnston wrote:\n> > I would reword the existing note to be something like:\n> >\n> > The SQL Standard defines specific aggregates and their properties,\n> including\n> > which of DISTINCT and/or ORDER BY is allowed. Due to the extensible\n> nature of\n> > PostgreSQL it accepts either or both clauses for any aggregate.\n>\n> Uh, is this something in my patch or somewhere else? I don't think\n> PostgreSQL extensible is an example of syntax flexibility.\n>\n\nhttps://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n\nNote\nThe ability to specify both DISTINCT and ORDER BY in an aggregate function\nis a PostgreSQL extension.\n\nI am pointing out that the first sentence of the existing note above seems\nto be factually incorrect. I tried to make it correct - while explaining\nwhy we differ. Though in truth I'd probably rather just remove the note.\n\n> We get enough complaints regarding \"apparent ordering\" that I would like\n> to\n> > add:\n> >\n> > As a reminder, while some DISTINCT processing algorithms produce sorted\n> output\n> > as a side-effect, only by specifying ORDER BY is the output order\n> guaranteed.\n>\n> Well, we need to create a new email thread for this and look at all the\n> areas is applies to since this is a much larger issue.\n>\n>\nI was hoping to sneak this one in regardless of the bigger picture issues,\nsince this specific combination is guaranteed to output ordered presently.\n\nDavid J.",
"msg_date": "Thu, 26 Oct 2023 15:09:26 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 03:09:26PM -0700, David G. Johnston wrote:\n> On Thu, Oct 26, 2023 at 2:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> On Wed, Oct 25, 2023 at 10:34:10PM -0700, David G. Johnston wrote:\n> > I would reword the existing note to be something like:\n> >\n> > The SQL Standard defines specific aggregates and their properties,\n> including\n> > which of DISTINCT and/or ORDER BY is allowed. Due to the extensible\n> nature of\n> > PostgreSQL it accepts either or both clauses for any aggregate.\n> \n> Uh, is this something in my patch or somewhere else? I don't think\n> PostgreSQL extensible is an example of syntax flexibility.\n> \n> \n> https://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES\n> \n> Note\n> The ability to specify both DISTINCT and ORDER BY in an aggregate function is a\n> PostgreSQL extension.\n> \n> I am pointing out that the first sentence of the existing note above seems to\n> be factually incorrect. I tried to make it correct - while explaining why we\n> differ. Though in truth I'd probably rather just remove the note.\n\nAgreed, removed, patch attached. This is just too complex to specify.\n\n> > We get enough complaints regarding \"apparent ordering\" that I would like\n> to\n> > add:\n> >\n> > As a reminder, while some DISTINCT processing algorithms produce sorted\n> output\n> > as a side-effect, only by specifying ORDER BY is the output order\n> guaranteed.\n> \n> Well, we need to create a new email thread for this and look at all the\n> areas is applies to since this is a much larger issue.\n> \n> I was hoping to sneak this one in regardless of the bigger picture issues,\n> since this specific combination is guaranteed to output ordered presently.\n\nNo sneaking. 
;-) It would be bad to document this unevenly because it\nsets expectations in other parts of the system if we don't mention it.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 26 Oct 2023 18:36:25 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 3:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> No sneaking. ;-) It would be bad to document this unevenly because it\n> sets expectations in other parts of the system if we don't mention it.\n>\n\nAgreed.\n\nLast suggestion, remove the first jsonb_agg example that lacks an order by.\n\n+WITH vals (k, v) AS ( VALUES ('key0','1'), ('key1','3'), ('key1','2') )\n+SELECT jsonb_object_agg(k, v) FROM vals;\n+ jsonb_object_agg\n+----------------------------\n+ {\"key0\": \"1\", \"key1\": \"2\"}\n+\n\nWe shouldn't write an example that relies on the rows being evaluated 1-2-3\nwithout specifying an order by clause.\n\nDavid J.",
"msg_date": "Thu, 26 Oct 2023 15:44:14 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 03:44:14PM -0700, David G. Johnston wrote:\n> On Thu, Oct 26, 2023 at 3:36 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> No sneaking. ;-) It would be bad to document this unevenly because it\n> sets expectations in other parts of the system if we don't mention it.\n> \n> \n> Agreed.\n> \n> Last suggestion, remove the first jsonb_agg example that lacks an order by.\n> \n> +WITH vals (k, v) AS ( VALUES ('key0','1'), ('key1','3'), ('key1','2') )\n> +SELECT jsonb_object_agg(k, v) FROM vals;\n> + jsonb_object_agg\n> +----------------------------\n> + {\"key0\": \"1\", \"key1\": \"2\"}\n> +\n> \n> We shouldn't write an example that relies on the rows being evaluated 1-2-3\n> without specifying an order by clause.\n\nSure, done in the attached patch.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.",
"msg_date": "Thu, 26 Oct 2023 19:03:03 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 4:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n>\n> Sure, done in the attached patch.\n>\n>\nWFM. Thank You!\n\nDavid J.",
"msg_date": "Thu, 26 Oct 2023 16:05:12 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
},
{
"msg_contents": "On Thu, Oct 26, 2023 at 04:05:12PM -0700, David G. Johnston wrote:\n> On Thu, Oct 26, 2023 at 4:03 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> \n> Sure, done in the attached patch.\n> \n> \n> \n> WFM. Thank You!\n\nPatch applied to master.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Fri, 3 Nov 2023 13:05:38 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Document aggregate functions better w.r.t. ORDER BY"
}
]
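Editor's sketch between the archived threads, not part of any message: the final documented example, array_agg(DISTINCT v ORDER BY v DESC) over VALUES (1),(3),(4),(3),(2), deduplicates first and then sorts on the requested key. In Python terms:

```python
# Emulate array_agg(DISTINCT v ORDER BY v [DESC]) for hashable values:
# deduplicate, then sort on the requested key. This mirrors the documented
# semantics from the thread, not PostgreSQL's actual executor code.
def array_agg_distinct_ordered(values, reverse=False):
    return sorted(set(values), reverse=reverse)

# The thread's example data, with ORDER BY v DESC:
vals = [1, 3, 4, 3, 2]
print(array_agg_distinct_ordered(vals, reverse=True))  # [4, 3, 2, 1]
```

The guaranteed ordering here comes from the explicit ORDER BY; as the thread notes, sorted-looking output without it is only an implementation side-effect.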
[
{
"msg_contents": "Hi all,\n(Adding Daniel and Jonathan per recent threads)\n\nWhile investigating on what it would take to extend SCRAM to use new\nhash methods (say like the RFC draft for SCRAM-SHA-512), I have been\nquickly reminded of the limitations created by SCRAM_KEY_LEN, which is\nthe key length that we use in the HMAC and hash computations when\ncreating a SCRAM verifier or when doing a SASL exchange.\n\nBack in v10 when SCRAM was implemented, we have kept the\nimplementation simple and took the decision to rely heavily on buffers\nwith a static size of SCRAM_KEY_LEN during the exchanges. This was a\ngood choice back then, because we did not really have a way to handle\nerrors and there were no need to worry about things like OOMs or even\nSHA computations errors. This was also incorrect in its own ways,\nbecause we failed to go through OpenSSL for the hash/HMAC computations\nwhich would be an issue with FIPS certifications. This has led to the\nintroduction of the new cryptohash and HMAC APIs, and the SCRAM code\nhas been changed so as it is able to know and pass through its layers\nany errors (OOM, hash/MAC computation, OpenSSL issue), as of 87ae969\nand e6bdfd9.\n\nTaking advantage of all the error stack and logic introduced\npreviously, it becomes rather straight-forward to remove the\nhardcoded assumptions behind SHA-256 in the SCRAM code path. Attached\nis a patch that does so:\n- SCRAM_KEY_LEN is removed from all the internal SCRAM routines, this\nis replaced by a logic where the hash type and the key length are\nstored in fe_scram_state for the frontend and scram_state for the\nbackend.\n- The frontend assigns the hash type and the key length depending on\nits choice of SASL mechanism in scram_init()@fe-auth-scram.c.\n- The backend assigns the hash type and the key length based on the\nparsed SCRAM entry from pg_authid, which works nicely when we need to\nhandle a raw password for a SASL exchange, for example. 
That's\nbasically what we do now, but scram_state is just changed to work\nthrough it.\n\nWe have currently on HEAD 68 references to SCRAM_KEY_LEN. This brings\ndown these references to 6, that cannot really be avoided because we\nstill need to handle SCRAM-SHA-256 one way or another:\n- When parsing a SCRAM password from pg_authid (to get a password type\nor fill in scram_state in the backend).\n- For the mock authentication, where SHA-256 is forced. We are going\nto fail anyway, so any hash would be fine as long as we just let the\nuser know about the failure transparently.\n- When initializing the exchange in libpq based on the SASL mechanism\nchoice.\n- scram-common.h itself.\n- When building a verifier in the be-fe interfaces. These could be\nchanged as well but I did not see a point in doing so yet.\n- SCRAM_KEY_LEN is renamed to SCRAM_SHA_256_KEY_LEN to reflect its\ndependency to SHA-256.\n\nWith this patch in place, the internals of SCRAM for SASL are\nbasically able to handle any hash methods. There is much more to\nconsider like how we'd need to treat uaSCRAM (separate code for more\nhash methods or use the same) or HBA entries, but this removes what I\nconsider to be 70%~80% of the pain in terms of extensibility with the\ncurrent code, and this is enough to be a submission on its own to move\ntowards more methods. I am planning to tackle more things in terms of\npluggability after what's done here, btw :)\n\nThis patch passes check-world and the CI is green. I have tested as\nwell the patch with SCRAM verifiers coming from a server initially on\nHEAD, so it looks pretty solid seen from here, being careful of memory\nleaks in the frontend, mainly.\n\nThoughts or comments?\n--\nMichael",
"msg_date": "Wed, 14 Dec 2022 11:38:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Refactor SCRAM code to dynamically handle hash type and key length"
},
{
"msg_contents": "On 14.12.22 03:38, Michael Paquier wrote:\n> This patch passes check-world and the CI is green. I have tested as\n> well the patch with SCRAM verifiers coming from a server initially on\n> HEAD, so it looks pretty solid seen from here, being careful of memory\n> leaks in the frontend, mainly.\n\nThe changes from local arrays to dynamic allocation appear to introduce \nsignificant complexity. I would reconsider that. If we consider your \nreasoning\n\n > While investigating on what it would take to extend SCRAM to use new\n > hash methods (say like the RFC draft for SCRAM-SHA-512), I have been\n > quickly reminded of the limitations created by SCRAM_KEY_LEN, which is\n > the key length that we use in the HMAC and hash computations when\n > creating a SCRAM verifier or when doing a SASL exchange.\n\nthen the obvious fix there is to change the definition of SCRAM_KEY_LEN \nto PG_SHA512_DIGEST_LENGTH, which would be a much smaller and simpler \nchange. We don't have to support arbitrary key sizes, so a fixed-size \narray seems appropriate.\n\n\n\n",
"msg_date": "Wed, 14 Dec 2022 14:39:43 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 02:39:43PM +0100, Peter Eisentraut wrote:\n> On 14.12.22 03:38, Michael Paquier wrote:\n>> While investigating on what it would take to extend SCRAM to use new\n>> hash methods (say like the RFC draft for SCRAM-SHA-512), I have been\n>> quickly reminded of the limitations created by SCRAM_KEY_LEN, which is\n>> the key length that we use in the HMAC and hash computations when\n>> creating a SCRAM verifier or when doing a SASL exchange.\n> \n> then the obvious fix there is to change the definition of SCRAM_KEY_LEN to\n> PG_SHA512_DIGEST_LENGTH, which would be a much smaller and simpler change.\n> We don't have to support arbitrary key sizes, so a fixed-size array seems\n> appropriate.\n\nYeah, I was considering doing that as well for the static arrays, with\nsomething like a Max() to combine but perhaps that's not necessary for\nthe digest lengths anyway. Perhaps I just over-engineered the\napproach.\n\nHowever, that's only half of the picture. The key length and the hash\ntype (or just the hash type to know what's the digest/key length to\nuse but that's more invasive) still need to be sent across the\ninternal routines of SCRAM and attached to the state data of the\nfrontend and the backend or we won't be able to do the hash and HMAC\ncomputations dependent on that.\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 04:59:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 04:59:52AM +0900, Michael Paquier wrote:\n> However, that's only half of the picture. The key length and the hash\n> type (or just the hash type to know what's the digest/key length to\n> use but that's more invasive) still need to be sent across the\n> internal routines of SCRAM and attached to the state data of the\n> frontend and the backend or we won't be able to do the hash and HMAC\n> computations dependent on that.\n\nAttached is a patch to do exactly that, and as a result v2 is half the\nsize of v1:\n- SCRAM_KEY_LEN is now named SCRAM_MAX_KEY_LEN, adding a note that\nthis should be kept in sync as the maximum digest size of the\nsupported hash methods. This is used as the method to size all the\ninternal buffers of the SCRAM routines.\n- SCRAM_SHA_256_KEY_LEN is used to track the key length for\nSCRAM-SHA-256, the one initialized with the state data.\n- No changes in the internals, the buffers are just resized based on\nthe max defined.\n\nI'd like to move on with that in the next couple of days (still need\nto study more the other areas of the code to see what else could be\nmade more pluggable), so let me know if there are any objections..\n--\nMichael",
"msg_date": "Sat, 17 Dec 2022 12:08:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
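Editor's sketch, not part of the archived exchange: the key length carried in the SCRAM state is just the digest length of the hash in use. RFC 5802 defines Hi(password, salt, i) as PBKDF2 with the SCRAM hash, so a standard-library sketch shows why buffers sized with SCRAM_MAX_KEY_LEN must cover the largest supported digest (64 bytes for SHA-512 versus 32 for SHA-256). The salt and password here are made up; 4096 is PostgreSQL's default iteration count for SCRAM-SHA-256.

```python
import hashlib

# RFC 5802's Hi() is PBKDF2-HMAC over the SCRAM hash; the derived key
# length defaults to the digest length of that hash, which is exactly
# what SCRAM_MAX_KEY_LEN must cover for every supported hash method.
salt = b'0123456789abcdef'   # illustrative salt, not a real verifier's
iterations = 4096            # PostgreSQL's default SCRAM iteration count

key256 = hashlib.pbkdf2_hmac('sha256', b'secret', salt, iterations)
key512 = hashlib.pbkdf2_hmac('sha512', b'secret', salt, iterations)

print(len(key256), len(key512))  # 32 64, so the max buffer needs 64 bytes
```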
{
"msg_contents": "On 12/16/22 10:08 PM, Michael Paquier wrote:\r\n> On Thu, Dec 15, 2022 at 04:59:52AM +0900, Michael Paquier wrote:\r\n>> However, that's only half of the picture. The key length and the hash\r\n>> type (or just the hash type to know what's the digest/key length to\r\n>> use but that's more invasive) still need to be sent across the\r\n>> internal routines of SCRAM and attached to the state data of the\r\n>> frontend and the backend or we won't be able to do the hash and HMAC\r\n>> computations dependent on that.\r\n> \r\n> Attached is a patch to do exactly that, and as a result v2 is half the\r\n> size of v1:\r\n> - SCRAM_KEY_LEN is now named SCRAM_MAX_KEY_LEN, adding a note that\r\n> this should be kept in sync as the maximum digest size of the\r\n> supported hash methods. This is used as the method to size all the\r\n> internal buffers of the SCRAM routines.\r\n> - SCRAM_SHA_256_KEY_LEN is used to track the key length for\r\n> SCRAM-SHA-256, the one initialized with the state data.\r\n> - No changes in the internal, the buffers are just resized based on\r\n> the max defined.\r\n\r\nThanks! I looked through this and ran tests. I like the approach overall \r\nand I think this sets us up pretty well for expanding our SCRAM support.\r\n\r\nOnly a couple of minor comments:\r\n\r\n- I noticed a couple of these in \"scram_build_secret\" and \"scram_mock_salt\":\r\n\r\n Assert(hash_type == PG_SHA256);\r\n\r\nPresumably there to ensure 1/ We're setting a hash_type and 2/ as \r\npossibly a reminder to update the assertions if/when we support more \r\ndigests.\r\n\r\nWith the assertion in \"scram_build_secret\", that value is set from the \r\n\"PG_SHA256\" constant anyway, so I don't know if it actually gives us \r\nanything other than a reminder? 
With \"scram_mock_salt\" the value \r\nultimately comes from state (which is currently set from the constant), \r\nso perhaps there is a guard there.\r\n\r\nAt a minimum, I'd suggest a comment around it, especially if it's set up \r\nto be removed at a future date.\r\n\r\n- I do like the \"SCRAM_MAX_KEY_LEN\" change, and I see we're now passing \r\n\"key_length\" around to ensure we're only using the desired number of \r\nbytes. I am a little queasy that once we expand \"SCRAM_MAX_KEY_LEN\" we \r\nrun the risk of having the smaller hashes accidentally use the extra \r\nbytes in their calculations. However, I think that's more a fear than \r\nnot, and we can mitigate the risk with testing.\r\n\r\n> I'd like to move on with that in the next couple of days (still need\r\n> to study more the other areas of the code to see what else could be\r\n> made more pluggable), so let me know if there are any objections..\r\n\r\nNo objections. I think this decreases the lift to supporting more \r\nvariations of SCRAM.\r\n\r\nOnce committed, I'll rebase the server-side SCRAM functions patch with \r\nthis. I may want to rethink the interface for that to allow the digest \r\nto be \"selectable\" (vs. from the function) but I'll discuss on that \r\nthread[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/fce7228e-d0d6-64a1-3dcb-bba85c2fac85@postgresql.org",
"msg_date": "Mon, 19 Dec 2022 14:58:24 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 02:58:24PM -0500, Jonathan S. Katz wrote:\n> With the assertion in \"scram_build_secret\", that value is set from the\n> \"PG_SHA256\" constant anyway, so I don't know if it actually gives us\n> anything other than a reminder? With \"scram_mock_salt\" the value ultimately\n> comes from state (which is currently set from the constant), so perhaps\n> there is a guard there.\n\nYes, these mostly act as reminders to anybody touching this code, so\nI'd like to keep both. For the mock part, we may also want to use\nsomething different than SHA-256.\n\n> At a minimum, I'd suggest a comment around it, especially if it's set up to\n> be removed at a future date.\n\nOkay, sure.\n\n> - I do like the \"SCRAM_MAX_KEY_LEN\" change, and I see we're now passing\n> \"key_length\" around to ensure we're only using the desired number of bytes.\n> I am a little queasy that once we expand \"SCRAM_MAX_KEY_LEN\" we run the risk\n> of having the smaller hashes accidentally use the extra bytes in their\n> calculations. However, I think that's more a fear than not, and we can\n> mitigate the risk with testing.\n\nA few code paths relied on the size of these local buffers, now they\njust use the passed-in key length from the state.\n\n> No objections. I think this decreases the lift to supporting more variations\n> of SCRAM.\n> \n> Once committed, I'll rebase the server-side SCRAM functions patch with this.\n> I may want to rethink the interface for that to allow the digest to be\n> \"selectable\" (vs. from the function) but I'll discuss on that thread[1].\n\nThanks! I have applied what I have here. There are other pieces to\nthink about in this area.\n--\nMichael",
"msg_date": "Tue, 20 Dec 2022 08:58:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 08:58:38AM +0900, Michael Paquier wrote:\n> Thanks! I have applied what I have here.. There are other pieces to\n> think about in this area.\n\nFYI, I have spent a few hours looking at the remaining parts of the\nSCRAM code that could be simplified if a new hash method is added, and\nthis b3bb7d1 has really made things easier. There are a few things\nthat will need more thoughts. Here are my notes, assuming that\nSHA-512 is done:\n1) HBA entries had better use a new keyword for scram-sha-512, implying\na new uaSCRAM512 to combine with the existing uaSCRAM. One reason\nbehind that is to advertise the mechanisms supported back to the\nclient depending on the matching HBA entry.\n2) If a role has a SCRAM-SHA-256 password and the HBA entry matches\nscram-sha-512, the SASL exchange needs to go through the mock process\nwith SHA-512 and fail.\n3) If a role has a SCRAM-SHA-512 password and the HBA entry matches\nscram-sha-256, the SASL exchange needs to go through the mock process\nwith SHA-256 and fail.\n4) The case of MD5 is something that looks a bit tricky at quick\nglance. We know that if the role has a MD5 password stored, we will\nfail anyway. So we could just advertise the SHA-256 mechanisms in\nthis case and map the mock to that?\n5) The mechanism choice in libpq needs to be reworked a bit based on\nwhat the backend sends. There may be no point in advertising all the\nSHA-256 and SHA-512 mechanisms at the same time, I guess.\n\nAttached is a WIP patch that I have played with. This shows the parts\nof the code that would need more thoughts if implementing such\nthings. This works for the cases 1~3 (see the TAP tests). I've given\nup on the MD5 case 4 for now, but perhaps I just missed a simple trick.\n5 in libpq uses dirty tricks. I have marked this CF entry as\ncommitted, and I'll come back to each relevant part on new separate\nthreads.\n--\nMichael",
"msg_date": "Tue, 20 Dec 2022 16:25:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On 12/20/22 2:25 AM, Michael Paquier wrote:\r\n> On Tue, Dec 20, 2022 at 08:58:38AM +0900, Michael Paquier wrote:\r\n>> Thanks! I have applied what I have here.. There are other pieces to\r\n>> think about in this area.\r\n> \r\n> FYI, I have spent a few hours looking at the remaining parts of the\r\n> SCRAM code that could be simplified if a new hash method is added, and\r\n> this b3bb7d1 has really made things easier.\r\n\r\nGreat! Thanks for doing a quick \"stress test\" on this.\r\n\r\n> There are a few things\r\n> that will need more thoughts. Here are my notes, assuming that\r\n> SHA-512 is done:\r\n> 1) HBA entries had better use a new keyword for scram-sha-512, implying\r\n> a new uaSCRAM512 to combine with the existing uaSCRAM. One reason\r\n> behind that is to advertise the mechanisms supported back to the\r\n> client depending on the matching HBA entry.\r\n\r\nThis does seem like a challenge, particularly if we have to support \r\nmultiple different SCRAM hashes.\r\n\r\nPerhaps this can be done with an interface change in HBA. For example, \r\nwe could rename the auth method from \"scram-sha-256\" to \"scram\" and \r\nsupport an option list of hashes (e.g. \"hash=sha-512,sha-256\"). We can \r\nthen advertise the user-selected hashes as part of the handshake.\r\n\r\nFor backwards compatibility, we can take an auth method of \r\n\"scram-sha-256\" to mean \"scram\" + using a sha-256 hash. 
Similarly, if no \r\nhash map is defined, we can default to \"scram-sha-256\".\r\n\r\nAnyway, I understand this point would require more discussion, but \r\nperhaps it is a way to simplify the amount of code we would need to \r\nwrite to support more hashes.\r\n\r\n> 2) If a role has a SCRAM-SHA-256 password and the HBA entry matches\r\n> scram-sha-512, the SASL exchange needs to go through the mock process\r\n> with SHA-512 and fail.\r\n> 3) If a role has a SCRAM-SHA-512 password and the HBA entry matches\r\n> scram-sha-256, the SASL exchange needs to go through the mock process\r\n> with SHA-256 and fail.\r\n\r\n*nods*\r\n\r\n> 4) The case of MD5 is something that looks a bit tricky at quick\r\n> glance. We know that if the role has a MD5 password stored, we will\r\n> fail anyway. So we could just advertise the SHA-256 mechanisms in\r\n> this case and map the mock to that?\r\n\r\nIs this the case where we specify \"md5\" as the auth method but the \r\nuser-password is stored in SCRAM?\r\n\r\n> 5) The mechanism choice in libpq needs to be reworked a bit based on\r\n> what the backend sends. There may be no point in advertising all the\r\n> SHA-256 and SHA-512 mechanisms at the same time, I guess.\r\n\r\nYeah, I think a user may want to select which ones they want to use \r\n(e.g. they may not want to advertise SHA-256).\r\n\r\n> Attached is a WIP patch that I have played with. This shows the parts\r\n> of the code that would need more thoughts if implementing such\r\n> things. This works for the cases 1~3 (see the TAP tests). I've given\r\n> up on the MD5 case 4 for now, but perhaps I just missed a simple trick.\r\n> 5 in libpq uses dirty tricks. I have marked this CF entry as\r\n> committed, and I'll come back to each relevant part on new separate\r\n> threads.\r\n\r\nThanks for starting this.\r\n\r\nJonathan",
"msg_date": "Tue, 20 Dec 2022 20:45:29 -0500",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 08:45:29PM -0500, Jonathan S. Katz wrote:\n> On 12/20/22 2:25 AM, Michael Paquier wrote:\n>> 4) The case of MD5 is something that looks a bit tricky at quick\n>> glance. We know that if the role has a MD5 password stored, we will\n>> fail anyway. So we could just advertise the SHA-256 mechanisms in\n>> this case and map the mock to that?\n> \n> Is this the case where we specify \"md5\" as the auth method but the\n> user-password is stored in SCRAM?\n\nYes. A port storing uaMD5 with a SCRAM password makes the backend use\nSASL for the whole exchange. At quick glance, we could fallback to\nlook at the password of the user sent by the startup packet and\nadvertise the mechanisms based on that because we know that one user\n=> one password currently. I'd need to double-check on the RFCs to\nsee if there is anything specific here to worry about. The recent\nones being worked on may tell more.\n\n>> 5) The mechanism choice in libpq needs to be reworked a bit based on\n>> what the backend sends. There may be no point in advertising all the\n>> SHA-256 and SHA-512 mechanisms at the same time, I guess.\n> \n> Yeah, I think a user may want to select which ones they want to use (e.g.\n> they may not want to advertise SHA-256).\n\nYep, they should be able to do so. libpq should select the strongest\none if the server sends all of them, but things like\nhttps://commitfest.postgresql.org/41/3716/ should be able to enforce\none over the other.\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 14:01:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Refactor SCRAM code to dynamically handle hash type and key\n length"
}
] |
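As a rough illustration of the refactoring discussed in the thread above, here is a hypothetical C sketch. It is not the PostgreSQL code: the names `scram_state`, `scram_key_length`, and `scram_state_init` only loosely mirror the real ones, and the digest math is omitted entirely. It models just the core idea, that the exchange state carries the hash type and its key (digest) length, so buffers sized for `SCRAM_MAX_KEY_LEN` are only ever used up to `key_length` bytes:

```c
#include <assert.h>

/*
 * Hypothetical sketch: the SCRAM exchange state carries the hash type and
 * the corresponding key length instead of hard-coding SHA-256 everywhere.
 */
typedef enum
{
    PG_SHA256,
    PG_SHA512                   /* the future addition the thread discusses */
} pg_hash_type;

#define SCRAM_SHA_256_KEY_LEN 32
#define SCRAM_SHA_512_KEY_LEN 64
#define SCRAM_MAX_KEY_LEN SCRAM_SHA_512_KEY_LEN

typedef struct
{
    pg_hash_type  hash_type;    /* digest negotiated for this exchange */
    int           key_length;   /* bytes of the buffers actually in use */
    unsigned char StoredKey[SCRAM_MAX_KEY_LEN]; /* sized for the largest */
} scram_state;

/* Derive the key length from the hash type instead of hard-coding it. */
static int
scram_key_length(pg_hash_type hash_type)
{
    return (hash_type == PG_SHA512) ? SCRAM_SHA_512_KEY_LEN
                                    : SCRAM_SHA_256_KEY_LEN;
}

static void
scram_state_init(scram_state *state, pg_hash_type hash_type)
{
    state->hash_type = hash_type;
    state->key_length = scram_key_length(hash_type);
    /* reminder-style guard, in the spirit of the assertions discussed */
    assert(state->key_length <= SCRAM_MAX_KEY_LEN);
}
```

In this shape, adding a larger digest later only extends `SCRAM_MAX_KEY_LEN` and the length mapping; code that reads `key_length` bytes from the state is unaffected, which is the property the "smaller hashes must not touch the extra bytes" concern is about.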
[
{
"msg_contents": "The existing permissions for LOCK TABLE are surprising/confusing. For\ninstance, if you have UPDATE privileges on a table, you can lock in any\nmode *except* ACCESS SHARE.\n\n drop table x cascade;\n drop user u1;\n create user u1;\n create table x(i int);\n grant update on x to u1;\n\n set session authorization u1;\n begin;\n lock table x in access exclusive mode; -- succeeds\n commit;\n begin;\n lock table x in share mode; -- succeeds\n commit;\n begin;\n lock table x in access share mode; -- fails\n commit;\n\nI can't think of any reason for this behavior, and I didn't find an\nobvious answer in the last commits to touch that (2ad36c4e44,\nfa2642438f).\n\nPatch attached to simplify it. It uses the philosophy that, if you have\npermissions to lock at a given mode, you should be able to lock at\nstrictly less-conflicting modes as well.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Tue, 13 Dec 2022 18:59:48 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Rework confusing permissions for LOCK TABLE"
},
{
"msg_contents": "On Tue, Dec 13, 2022 at 06:59:48PM -0800, Jeff Davis wrote:\n> I can't think of any reason for this behavior, and I didn't find an\n> obvious answer in the last commits to touch that (2ad36c4e44,\n> fa2642438f).\n\nI can't think of a reason, either.\n\n> Patch attached to simplify it. It uses the philosophy that, if you have\n> permissions to lock at a given mode, you should be able to lock at\n> strictly less-conflicting modes as well.\n\n+1. Your patch looks reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 09:23:15 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework confusing permissions for LOCK TABLE"
},
{
"msg_contents": "I filed a commitfest entry for this so that it doesn't get lost:\n\n\thttps://commitfest.postgresql.org/41/4093\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 30 Dec 2022 10:51:43 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework confusing permissions for LOCK TABLE"
}
] |
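The philosophy of the patch above, that permission to lock at a given mode implies permission at every strictly less-conflicting mode, can be sketched as follows. This is a hypothetical C model, not the committed code: the privilege-to-mode mapping in `privs_for_mode` is illustrative only.

```c
#include <assert.h>
#include <stdbool.h>

/* Bitmask privileges, loosely modeled on AclMode. */
#define ACL_SELECT   (1 << 0)
#define ACL_INSERT   (1 << 1)
#define ACL_UPDATE   (1 << 2)
#define ACL_DELETE   (1 << 3)
#define ACL_TRUNCATE (1 << 4)

/* Lock modes, ordered from least to most conflicting as in lockdefs.h. */
enum lockmode
{
    AccessShareLock = 1,
    RowShareLock,
    RowExclusiveLock,
    ShareUpdateExclusiveLock,
    ShareLock,
    ShareRowExclusiveLock,
    ExclusiveLock,
    AccessExclusiveLock
};

/* Privileges that directly justify taking a given mode (illustrative). */
static int
privs_for_mode(int mode)
{
    if (mode <= RowShareLock)
        return ACL_SELECT;
    if (mode == RowExclusiveLock)
        return ACL_INSERT | ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE;
    return ACL_UPDATE | ACL_DELETE | ACL_TRUNCATE;
}

/*
 * Allow the request if the held privileges justify this mode or any
 * stronger one.  That makes permissions monotonic across lock modes, so
 * UPDATE privilege can no longer take ACCESS EXCLUSIVE while being
 * refused ACCESS SHARE.
 */
static bool
lock_mode_allowed(int held_privs, int mode)
{
    for (int m = mode; m <= AccessExclusiveLock; m++)
    {
        if (held_privs & privs_for_mode(m))
            return true;
    }
    return false;
}
```

Under this rule the surprising case from the first message goes away: a user holding only UPDATE can take ACCESS SHARE, because UPDATE already suffices for the far stronger ACCESS EXCLUSIVE mode.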
[
{
"msg_contents": "There are a number of places where a shell command is constructed with \npercent-placeholders (like %x). First, it's obviously cumbersome to \nhave to open-code this several times. Second, each of those pieces of \ncode silently encodes some edge case behavior, such as what to do with \nunrecognized placeholders. (I remember when I last did one of these, I \nstared very hard at the existing code instances to figure out what they \nwould do.) We now also have a newer instance in basebackup_to_shell.c \nthat has different behavior in such cases. (Maybe it's better, but it \nwould be good to be explicit and consistent about this.)\n\nThis patch factors out this logic into a separate function. I have \ndocumented the \"old\" error handling (which is to not handle them) and \nbrutally converted basebackup_to_shell.c to use that. We could also \nadopt the new behavior; now there is only a single place to change for that.\n\nNote that this is only used for shell commands with placeholders, not \nfor other places with placeholders, such as prompts and log line \nprefixes, which would (IMO) need a different API that wouldn't be quite \nas compact. This is explained in the code comments.",
"msg_date": "Wed, 14 Dec 2022 08:31:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Common function for percent placeholder replacement"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 2:31 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> There are a number of places where a shell command is constructed with\n> percent-placeholders (like %x). First, it's obviously cumbersome to\n> have to open-code this several times. Second, each of those pieces of\n> code silently encodes some edge case behavior, such as what to do with\n> unrecognized placeholders. (I remember when I last did one of these, I\n> stared very hard at the existing code instances to figure out what they\n> would do.) We now also have a newer instance in basebackup_to_shell.c\n> that has different behavior in such cases. (Maybe it's better, but it\n> would be good to be explicit and consistent about this.)\n\nWell, OK, I'll tentatively cast a vote in favor of adopting\nbasebackup_to_shell's approach elsewhere. Or to put that in plain\nEnglish: I think that if the input appears to be malformed, it's\nbetter to throw an error than to guess what the user meant. In the\ncase of basebackup_to_shell there are potentially security\nramifications to the setting involved so it seemed like a bad idea to\ntake a laissez faire approach. But also, just in general, if somebody\nsupplies an ssl_passphrase_command or archive_command with %<something\nunexpected>, I don't really see why we should treat that differently\nthan trying to start the server with work_mem=banana.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 11:09:51 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 08:31:02AM +0100, Peter Eisentraut wrote:\n> +\treturn replace_percent_placeholders(base_command, \"df\", (const char *[]){target_detail, filename});\n\nThis is a \"compound literal\", which I gather is required by C99.\n\nBut I don't think that's currently being exercised, so I wonder if it's\ngoing to break some BF members.\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 14 Dec 2022 11:05:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Dec 14, 2022 at 08:31:02AM +0100, Peter Eisentraut wrote:\n>> +\treturn replace_percent_placeholders(base_command, \"df\", (const char *[]){target_detail, filename});\n\n> This is a \"compound literal\", which I gather is required by C99.\n\n> But I don't think that's currently being exercised, so I wonder if it's\n> going to break some BF members.\n\nIt's pretty illegible, whatever it is. Could we maybe expend a\nfew more keystrokes in favor of readability?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 14 Dec 2022 12:40:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 14.12.22 17:09, Robert Haas wrote:\n> Well, OK, I'll tentatively cast a vote in favor of adopting\n> basebackup_to_shell's approach elsewhere. Or to put that in plain\n> English: I think that if the input appears to be malformed, it's\n> better to throw an error than to guess what the user meant. In the\n> case of basebackup_to_shell there are potentially security\n> ramifications to the setting involved so it seemed like a bad idea to\n> take a laissez faire approach. But also, just in general, if somebody\n> supplies an ssl_passphrase_command or archive_command with %<something\n> unexpected>, I don't really see why we should treat that differently\n> than trying to start the server with work_mem=banana.\n\nI agree. Here is an updated patch with the error checking included.",
"msg_date": "Mon, 19 Dec 2022 09:13:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 14.12.22 18:05, Justin Pryzby wrote:\n> On Wed, Dec 14, 2022 at 08:31:02AM +0100, Peter Eisentraut wrote:\n>> +\treturn replace_percent_placeholders(base_command, \"df\", (const char *[]){target_detail, filename});\n> \n> This is a \"compound literal\", which I gather is required by C99.\n> \n> But I don't think that's currently being exercised, so I wonder if it's\n> going to break some BF members.\n\nWe already use this, for example in pg_dump.\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 09:13:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 2022-Dec-19, Peter Eisentraut wrote:\n\n> On 14.12.22 18:05, Justin Pryzby wrote:\n> > On Wed, Dec 14, 2022 at 08:31:02AM +0100, Peter Eisentraut wrote:\n> > > +\treturn replace_percent_placeholders(base_command, \"df\", (const char *[]){target_detail, filename});\n> > \n> > This is a \"compound literal\", which I gather is required by C99.\n> > \n> > But I don't think that's currently being exercised, so I wonder if it's\n> > going to break some BF members.\n> \n> We already use this, for example in pg_dump.\n\nYeah, we have this\n\n#define ARCHIVE_OPTS(...) &(ArchiveOpts){__VA_ARGS__}\n\nwhich we then use like this\n\n ArchiveEntry(fout,\n dbCatId, /* catalog ID */\n dbDumpId, /* dump ID */\n ARCHIVE_OPTS(.tag = datname,\n .owner = dba,\n .description = \"DATABASE\",\n .section = SECTION_PRE_DATA,\n .createStmt = creaQry->data,\n .dropStmt = delQry->data));\n\nI think the new one is not great. I wish we could do something more\nstraightforward, maybe like\n\n replace_percent_placeholders(base_command,\n PERCENT_OPT(\"f\", filename),\n PERCENT_OPT(\"d\", target_detail));\n\nIs there a performance disadvantage to a variadic implementation?\nAlternatively, have all these macro calls form an array.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nSubversion to GIT: the shortest path to happiness I've ever heard of\n (Alexey Klyukin)\n\n\n",
"msg_date": "Mon, 19 Dec 2022 10:51:14 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 3:13 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> I agree. Here is an updated patch with the error checking included.\n\nNice, but I think something in the error report needs to indicate what\ncaused the problem exactly. As coded, I think the user would have to\nguess which GUC caused the problem. For basebackup_to_shell that might\nnot be too hard since you would have to try to initiate a backup to a\nshell target to trigger the error, but for something that happens at\nserver start, you don't want to have to go search all of\npostgresql.conf for possible causes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 11:38:49 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 19.12.22 10:51, Alvaro Herrera wrote:\n> I think the new one is not great. I wish we could do something more\n> straightforward, maybe like\n> \n> replace_percent_placeholders(base_command,\n> PERCENT_OPT(\"f\", filename),\n> PERCENT_OPT(\"d\", target_detail));\n> \n> Is there a performance disadvantage to a variadic implementation?\n> Alternatively, have all these macro calls form an array.\n\nHow about this new one with variable arguments?\n\n(Still need to think about Robert's comment about lack of error context.)",
"msg_date": "Tue, 20 Dec 2022 06:30:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": ">\n> How about this new one with variable arguments?\n\n\nI like this a lot, but I also see merit in Alvaro's PERCENT_OPT variadic,\nwhich at least avoids the two lists getting out of sync.\n\nInitially, I was going to ask that we have shell-quote-safe equivalents of\nwhatever fixed parameters we baked in, but this allows the caller to do\nthat as needed. It seems we could now just copy quote_identifier and strip\nout the keyword checks to get the desired effect. Has anyone else had a\nneed for quote-safe args in the shell commands?",
"msg_date": "Tue, 20 Dec 2022 16:32:45 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "In general, +1.\n\nOn Tue, Dec 20, 2022 at 06:30:40AM +0100, Peter Eisentraut wrote:\n> (Still need to think about Robert's comment about lack of error context.)\n\nWould adding the name of the GUC be sufficient?\n\n\tereport(ERROR,\n\t\t\t(errmsg(\"could not build %s\", guc_name),\n\t\t\t errdetail(\"string ends unexpectedly after escape character \\\"%%\\\"\")));\n\n> + * A value may be NULL. If the corresponding placeholder is found in the\n> + * input string, the whole function returns NULL.\n\nThis appears to be carried over from BuildRestoreCommand(), and AFAICT it\nis only necessary because pg_rewind doesn't support %r in restore_command.\nIMHO this behavior is counterintuitive and possibly error-prone and should\nresult in an ERROR instead. Since pg_rewind is the only special case, it\ncould independently check for %r before building the command.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:37:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 04.01.23 01:37, Nathan Bossart wrote:\n> In general, +1.\n> \n> On Tue, Dec 20, 2022 at 06:30:40AM +0100, Peter Eisentraut wrote:\n>> (Still need to think about Robert's comment about lack of error context.)\n> \n> Would adding the name of the GUC be sufficient?\n> \n> \tereport(ERROR,\n> \t\t\t(errmsg(\"could not build %s\", guc_name),\n> \t\t\t errdetail(\"string ends unexpectedly after escape character \\\"%%\\\"\")));\n\ndone\n\nThe errors now basically look like an invalid GUC value.\n\n>> + * A value may be NULL. If the corresponding placeholder is found in the\n>> + * input string, the whole function returns NULL.\n> \n> This appears to be carried over from BuildRestoreCommand(), and AFAICT it\n> is only necessary because pg_rewind doesn't support %r in restore_command.\n> IMHO this behavior is counterintuitive and possibly error-prone and should\n> result in an ERROR instead. Since pg_rewind is the only special case, it\n> could independently check for %r before building the command.\n\nYeah, this annoyed me, too. I have now changed it so that a NULL \n\"value\" is the same as an unsupported placeholder. This preserves the \nexisting behavior.\n\n(Having pg_rewind check for %r itself would probably require replicating \nmost of the string processing logic (consider something like \"%%r\"), so \nit didn't seem appealing.)",
"msg_date": "Mon, 9 Jan 2023 09:36:12 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On Mon, Jan 09, 2023 at 09:36:12AM +0100, Peter Eisentraut wrote:\n> On 04.01.23 01:37, Nathan Bossart wrote:\n>> On Tue, Dec 20, 2022 at 06:30:40AM +0100, Peter Eisentraut wrote:\n>> > + * A value may be NULL. If the corresponding placeholder is found in the\n>> > + * input string, the whole function returns NULL.\n>> \n>> This appears to be carried over from BuildRestoreCommand(), and AFAICT it\n>> is only necessary because pg_rewind doesn't support %r in restore_command.\n>> IMHO this behavior is counterintuitive and possibly error-prone and should\n>> result in an ERROR instead. Since pg_rewind is the only special case, it\n>> could independently check for %r before building the command.\n> \n> Yeah, this annoyed me, too. I have now changed it so that a NULL \"value\" is\n> the same as an unsupported placeholder. This preserves the existing\n> behavior.\n> \n> (Having pg_rewind check for %r itself would probably require replicating\n> most of the string processing logic (consider something like \"%%r\"), so it\n> didn't seem appealing.)\n\nSounds good to me.\n\n> +\t\tnativePath = pstrdup(path);\n> +\t\tmake_native_path(nativePath);\n\n> +\t\tnativePath = pstrdup(xlogpath);\n> +\t\tmake_native_path(nativePath);\n\nShould these be freed?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 9 Jan 2023 09:53:57 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 09.01.23 18:53, Nathan Bossart wrote:\n>> +\t\tnativePath = pstrdup(path);\n>> +\t\tmake_native_path(nativePath);\n> \n>> +\t\tnativePath = pstrdup(xlogpath);\n>> +\t\tmake_native_path(nativePath);\n> \n> Should these be freed?\n\ncommitted with that fixed\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 11:09:01 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 11:09:01AM +0100, Peter Eisentraut wrote:\n> committed with that fixed\n\nWhile rebasing my recovery modules patch set, I noticed a couple of small\nthings that might be worth cleaning up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 11 Jan 2023 10:54:34 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Common function for percent placeholder replacement"
},
{
"msg_contents": "On 11.01.23 19:54, Nathan Bossart wrote:\n> On Wed, Jan 11, 2023 at 11:09:01AM +0100, Peter Eisentraut wrote:\n>> committed with that fixed\n> \n> While rebasing my recovery modules patch set, I noticed a couple of small\n> things that might be worth cleaning up.\n\ncommitted, thanks\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 07:40:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Common function for percent placeholder replacement"
}
] |
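The behavior the thread above converged on can be approximated with a short sketch. This is a hypothetical, stripped-down stand-in for the `replace_percent_placeholders()` function the thread discusses, not the committed implementation: `letters[i]` names the placeholder that expands to `values[i]`, `"%%"` produces a literal `'%'`, and an unrecognized placeholder or a trailing `'%'` is rejected (here by returning NULL, whereas the real function raises an error that names the offending GUC):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Replace %x placeholders in instr.  letters[i] expands to values[i];
 * "%%" yields '%'; anything else after '%' is an error (NULL result).
 */
static char *
replace_placeholders(const char *instr, const char *letters,
                     const char *const *values)
{
    size_t  cap = strlen(instr) + 1;
    char   *result = malloc(cap);
    size_t  len = 0;

    for (const char *sp = instr; *sp; sp++)
    {
        char        one[2] = {0, 0};
        const char *append = one;

        if (*sp == '%')
        {
            const char *lp;

            sp++;
            if (*sp == '%')
                one[0] = '%';   /* "%%" escapes a literal percent sign */
            else if (*sp != '\0' && (lp = strchr(letters, *sp)) != NULL)
                append = values[lp - letters];
            else
            {
                free(result);   /* unknown placeholder or trailing '%' */
                return NULL;
            }
        }
        else
            one[0] = *sp;

        /* grow the output buffer as needed (no OOM checking: a sketch) */
        size_t alen = strlen(append);
        while (len + alen + 1 > cap)
            result = realloc(result, cap *= 2);
        memcpy(result + len, append, alen);
        len += alen;
    }
    result[len] = '\0';
    return result;
}
```

A caller can then pass the values as a compound literal, the C99 construct Justin points out in the thread, e.g. `replace_placeholders(cmd, "df", (const char *const[]){target_detail, filename})`.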
[
{
"msg_contents": "Hi,\n\nWhile working on checkpoint related stuff, I have encountered that\nthere is some inconsistency while reporting checkpointer stats. When a\ncheckpoint gets completed, a checkpoint complete message gets logged.\nThis message has a lot of information including the buffers written\n(CheckpointStats.ckpt_bufs_written). This variable gets incremented in\n2 cases. First is in BufferSync() and the second is in\nSlruInternalWritePage(). On the other hand the checkpointer stats\nexposed using pg_stat_bgwriter contains a lot of information including\nbuffers written (PendingCheckpointerStats.buf_written_checkpoints).\nThis variable gets incremented in only one place and that is in\nBufferSync(). So there is inconsistent behaviour between these two\ndata. Please refer to the sample output below.\n\npostgres=# select * from pg_stat_bgwriter;\n checkpoints_timed | checkpoints_req | checkpoint_write_time |\ncheckpoint_sync_time | buffers_checkpoint | buffers_clean |\nmaxwritten_clean | buffers_backend | buffers_backend_fsync |\nbuffers_alloc | stats_reset\n-------------------+-----------------+-----------------------+----------------------+--------------------+---------------+------------------+-----------------+-----------------------+---------------+-------------------------------\n 0 | 1 | 75 |\n 176 | 4702 | 0 | 0 |\n 4656 | 0 | 5023 | 2022-12-14\n07:01:01.494672+00\n\n2022-12-14 07:03:18.052 UTC [6087] LOG: checkpoint starting:\nimmediate force wait\n2022-12-14 07:03:18.370 UTC [6087] LOG: checkpoint complete: wrote\n4705 buffers (28.7%); 0 WAL file(s) added, 0 removed, 4 recycled;\nwrite=0.075 s, sync=0.176 s, total=0.318 s; sync files=34,\nlongest=0.159 s, average=0.006 s; distance=66180 kB, estimate=66180\nkB; lsn=0/5565E38, redo lsn=0/5565E00\n\n\nIn order to fix this, the\nPendingCheckpointerStats.buf_written_checkpoints should be incremented\nin SlruInternalWritePage() similar to\nCheckpointStats.ckpt_bufs_written. 
I have attached the patch for the\nsame. Please share your thoughts.\n\n\nThanks & Regards,\nNitin Jadhav",
"msg_date": "Wed, 14 Dec 2022 13:01:49 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 1:02 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on checkpoint related stuff, I have encountered that\n> there is some inconsistency while reporting checkpointer stats. When a\n> checkpoint gets completed, a checkpoint complete message gets logged.\n> This message has a lot of information including the buffers written\n> (CheckpointStats.ckpt_bufs_written). This variable gets incremented in\n> 2 cases. First is in BufferSync() and the second is in\n> SlruInternalWritePage(). On the other hand the checkpointer stats\n> exposed using pg_stat_bgwriter contains a lot of information including\n> buffers written (PendingCheckpointerStats.buf_written_checkpoints).\n> This variable gets incremented in only one place and that is in\n> BufferSync(). So there is inconsistent behaviour between these two\n> data. Please refer to the sample output below.\n>\n> In order to fix this, the\n> PendingCheckpointerStats.buf_written_checkpoints should be incremented\n> in SlruInternalWritePage() similar to\n> CheckpointStats.ckpt_bufs_written. I have attached the patch for the\n> same. Please share your thoughts.\n\nIndeed PendingCheckpointerStats.buf_written_checkpoints needs to count\nbuffer writes in SlruInternalWritePage(). However, does it need to be\ndone immediately there? The stats will not be visible to the users\nuntil the next pgstat_report_checkpointer(). Incrementing\nbuf_written_checkpoints in BufferSync() makes sense as the\npgstat_report_checkpointer() gets called in there via\nCheckpointWriteDelay() and it becomes visible to the user immediately.\nHave you checked if pgstat_report_checkpointer() gets called while the\ncheckpoint calls SlruInternalWritePage()? 
If not, then you can just\nassign ckpt_bufs_written to buf_written_checkpoints in\nLogCheckpointEnd() like its other friends\ncheckpoint_write_time and checkpoint_sync_time.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 14 Dec 2022 16:54:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 04:54:53PM +0530, Bharath Rupireddy wrote:\n> Indeed PendingCheckpointerStats.buf_written_checkpoints needs to count\n> buffer writes in SlruInternalWritePage(). However, does it need to be\n> done immediately there? The stats will not be visible to the users\n> until the next pgstat_report_checkpointer(). Incrementing\n> buf_written_checkpoints in BufferSync() makes sense as the\n> pgstat_report_checkpointer() gets called in there via\n> CheckpointWriteDelay() and it becomes visible to the user immediately.\n> Have you checked if pgstat_report_checkpointer() gets called while the\n> checkpoint calls SlruInternalWritePage()? If not, then you can just\n> assign ckpt_bufs_written to buf_written_checkpoints in\n> LogCheckpointEnd() like its other friends\n> checkpoint_write_time and checkpoint_sync_time.\n\n /* If part of a checkpoint, count this as a buffer written. */\n if (fdata)\n CheckpointStats.ckpt_bufs_written++;\n+ PendingCheckpointerStats.buf_written_checkpoints++;\nAlso, the proposed patch would touch PendingCheckpointerStats even\nwhen there is no fdata, aka outside the context of a checkpoint or\nshutdown sequence..\n--\nMichael",
"msg_date": "Thu, 15 Dec 2022 06:46:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "> Indeed PendingCheckpointerStats.buf_written_checkpoints needs to count\n> buffer writes in SlruInternalWritePage(). However, does it need to be\n> done immediately there? The stats will not be visible to the users\n> until the next pgstat_report_checkpointer(). Incrementing\n> buf_written_checkpoints in BufferSync() makes sense as the\n> pgstat_report_checkpointer() gets called in there via\n> CheckpointWriteDelay() and it becomes visible to the user immediately.\n> Have you checked if pgstat_report_checkpointer() gets called while the\n> checkpoint calls SlruInternalWritePage()? If not, then you can just\n> assign ckpt_bufs_written to buf_written_checkpoints in\n> LogCheckpointEnd() like its other friends\n> checkpoint_write_time and checkpoint_sync_time.\n\nIn case of an immediate checkpoint, the CheckpointWriteDelay() never\ngets called until the checkpoint is completed. So no issues in this\ncase. CheckpointWriteDelay() comes into picture in case of non\nimmediate checkpoints (i.e. checkpoint timeout is reached or max wal\nsize is reached). If we remove the increment in BufferSync() and\nSlruInternalWritePage() and then just assign ckpt_bufs_written to\nbuf_written_checkpoints in LogCheckpointEnd() then the update will be\navailable after the end of each checkpoint which is not better than\nthe existing behaviour (without patch). If we keep the increment in\nBufferSync() then we have to calculate the remaining buffer\nincremented in SlruInternalWritePage() and then increment\nbuf_written_checkpoints with this number in LogCheckpointEnd(). This\njust makes it complicated and again the buffer incremented in\nSlruInternalWritePage() will get updated at the end of the checkpoint.\nIn the case of checkpoint_write_time and checkpoint_sync_time, it\nmakes sense because this information is based on the entire checkpoint\noperation and it should be done at the end. 
So I feel the patch\nhandles it in a better way even though the\npgstat_report_checkpointer() does not get called immediately but it\nwill be called during the next increment in BufferSync() which is\nbefore the end of the checkpoint. Please share if you have any other\nideas.\n\nOn Wed, Dec 14, 2022 at 4:55 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 1:02 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While working on checkpoint related stuff, I have encountered that\n> > there is some inconsistency while reporting checkpointer stats. When a\n> > checkpoint gets completed, a checkpoint complete message gets logged.\n> > This message has a lot of information including the buffers written\n> > (CheckpointStats.ckpt_bufs_written). This variable gets incremented in\n> > 2 cases. First is in BufferSync() and the second is in\n> > SlruInternalWritePage(). On the other hand the checkpointer stats\n> > exposed using pg_stat_bgwriter contains a lot of information including\n> > buffers written (PendingCheckpointerStats.buf_written_checkpoints).\n> > This variable gets incremented in only one place and that is in\n> > BufferSync(). So there is inconsistent behaviour between these two\n> > data. Please refer to the sample output below.\n> >\n> > In order to fix this, the\n> > PendingCheckpointerStats.buf_written_checkpoints should be incremented\n> > in SlruInternalWritePage() similar to\n> > CheckpointStats.ckpt_bufs_written. I have attached the patch for the\n> > same. Please share your thoughts.\n>\n> Indeed PendingCheckpointerStats.buf_written_checkpoints needs to count\n> buffer writes in SlruInternalWritePage(). However, does it need to be\n> done immediately there? The stats will not be visible to the users\n> until the next pgstat_report_checkpointer(). 
Incrementing\n> buf_written_checkpoints in BufferSync() makes sense as the\n> pgstat_report_checkpointer() gets called in there via\n> CheckpointWriteDelay() and it becomes visible to the user immediately.\n> Have you checked if pgstat_report_checkpointer() gets called while the\n> checkpoint calls SlruInternalWritePage()? If not, then you can just\n> assign ckpt_bufs_written to buf_written_checkpoints in\n> LogCheckpointEnd() like its other friends\n> checkpoint_write_time and checkpoint_sync_time.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 15:42:00 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "> /* If part of a checkpoint, count this as a buffer written. */\n> if (fdata)\n> CheckpointStats.ckpt_bufs_written++;\n> + PendingCheckpointerStats.buf_written_checkpoints++;\n> Also, the proposed patch would touch PendingCheckpointerStats even\n> when there is no fdata, aka outside the context of a checkpoint or\n> shutdown sequence.\n\nSorry. I missed adding braces. Fixed in the v2 patch attached.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Dec 15, 2022 at 3:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 14, 2022 at 04:54:53PM +0530, Bharath Rupireddy wrote:\n> > Indeed PendingCheckpointerStats.buf_written_checkpoints needs to count\n> > buffer writes in SlruInternalWritePage(). However, does it need to be\n> > done immediately there? The stats will not be visible to the users\n> > until the next pgstat_report_checkpointer(). Incrementing\n> > buf_written_checkpoints in BufferSync() makes sense as the\n> > pgstat_report_checkpointer() gets called in there via\n> > CheckpointWriteDelay() and it becomes visible to the user immediately.\n> > Have you checked if pgstat_report_checkpointer() gets called while the\n> > checkpoint calls SlruInternalWritePage()? If not, then you can just\n> > assign ckpt_bufs_written to buf_written_checkpoints in\n> > LogCheckpointEnd() like its other friends\n> > checkpoint_write_time and checkpoint_sync_time.\n>\n> /* If part of a checkpoint, count this as a buffer written. */\n> if (fdata)\n> CheckpointStats.ckpt_bufs_written++;\n> + PendingCheckpointerStats.buf_written_checkpoints++;\n> Also, the proposed patch would touch PendingCheckpointerStats even\n> when there is no fdata, aka outside the context of a checkpoint or\n> shutdown sequence..\n> --\n> Michael",
"msg_date": "Thu, 15 Dec 2022 15:43:35 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "At Wed, 14 Dec 2022 16:54:53 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> Indeed PendingCheckpointerStats.buf_written_checkpoints needs to count\n> buffer writes in SlruInternalWritePage(). However, does it need to be\n> done immediately there? The stats will not be visible to the users\n> until the next pgstat_report_checkpointer(). Incrementing\n> buf_written_checkpoints in BufferSync() makes sense as the\n> pgstat_report_checkpointer() gets called in there via\n> CheckpointWriteDelay() and it becomes visible to the user immediately.\n> Have you checked if pgstat_report_checkpointer() gets called while the\n> checkpoint calls SlruInternalWritePage()? If not, then you can just\n> assign ckpt_bufs_written to buf_written_checkpoints in\n> LogCheckpointEnd() like its other friends\n> checkpoint_write_time and checkpoint_sync_time.\n\nIf I'm getting Bharath correctly, it results in double counting of\nBufferSync. If we want to keep the realtime-reporting nature of\nBufferSync, BufferSync should give up to increment CheckPointerStats'\ncounter. Such separation seems to be a kind of stupid and quite\nbug-prone.\n\nIn the first place I don't like that we count the same things twice.\nCouldn't we count the number only by any one of them?\n\nIf we remove CheckPointerStats.ckpt_bufs_written, CreateCheckPoint can\nget the final number as the difference between the start-end values of\n*the shared stats*. As long as a checkpoint runs on a single process,\ntrace info in BufferSync will work fine. Assuming single process\ncheckpointing there must be no problem to do that. (Anyway the current\nshared stats update for checkpointer is assuming single-process).\n\nOtherwise, in exchange with giving up the realtime nature, we can\ncount the number only by CheckPointerStats.ckpt_bufs_written.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 16 Dec 2022 17:43:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 2:14 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> In the first place I don't like that we count the same things twice.\n> Couldn't we count the number only by any one of them?\n>\n> If we remove CheckPointerStats.ckpt_bufs_written, CreateCheckPoint can\n> get the final number as the difference between the start-end values of\n> *the shared stats*. As long as a checkpoint runs on a single process,\n> trace info in BufferSync will work fine. Assuming single process\n> checkpointing there must be no problem to do that. (Anyway the current\n> shared stats update for checkpointer is assuming single-process).\n\nWhat if someone resets checkpointer shared stats with\npg_stat_reset_shared()? In such a case, the checkpoint complete\nmessage will not have the stats, no?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 18:05:38 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Wed, Dec 14, 2022 at 2:32 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> In order to fix this, the\n> PendingCheckpointerStats.buf_written_checkpoints should be incremented\n> in SlruInternalWritePage() similar to\n> CheckpointStats.ckpt_bufs_written. I have attached the patch for the\n> same. Please share your thoughts.\n\nPresumably we could make this consistent either by counting SLRU\nwrites in both places, or by counting them in neither place. This\nproposal would count them in both places. But why is that the right\nthing to do?\n\nI'm somewhat inclined to think that we should use \"buffers\" to mean\nregular data buffers, and if SLRU buffers also need to be counted, we\nought to make that a separate counter. Or just leave it out\naltogether.\n\nThis is arguable, though, for sure....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 16:08:23 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "> Presumably we could make this consistent either by counting SLRU\n> writes in both places, or by counting them in neither place. This\n> proposal would count them in both places. But why is that the right\n> thing to do?\n>\n> I'm somewhat inclined to think that we should use \"buffers\" to mean\n> regular data buffers, and if SLRU buffers also need to be counted, we\n> ought to make that a separate counter. Or just leave it out\n> altogether.\n>\n> This is arguable, though, for sure....\n\nThanks Robert for sharing your thoughts.\nMy first thought was to just remove counting SLRU buffers, then after\nsome more analysis, I found that the checkpointer is responsible for\nincluding both regular data buffers and SLRU buffers. Please refer to\ndee663f7843902535a15ae366cede8b4089f1144 commit for more information.\nThe part of the commit message is included here [1] for quick\nreference. Hence I concluded to keep the information and added an\nincrement to count SLRU buffers. I am not in favour of making this as\na separate counter as this can be treated as little low level\ninformation and it just adds up in the stats. Please share your\nthoughts.\n\n[1]:\nHoist ProcessSyncRequests() up into CheckPointGuts() to make it clearer\nthat it applies to all the SLRU mini-buffer-pools as well as the main\nbuffer pool. Rearrange things so that data collected in CheckpointStats\nincludes SLRU activity.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Dec 20, 2022 at 2:38 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Dec 14, 2022 at 2:32 AM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> > In order to fix this, the\n> > PendingCheckpointerStats.buf_written_checkpoints should be incremented\n> > in SlruInternalWritePage() similar to\n> > CheckpointStats.ckpt_bufs_written. I have attached the patch for the\n> > same. 
Please share your thoughts.\n>\n> Presumably we could make this consistent either by counting SLRU\n> writes in both places, or by counting them in neither place. This\n> proposal would count them in both places. But why is that the right\n> thing to do?\n>\n> I'm somewhat inclined to think that we should use \"buffers\" to mean\n> regular data buffers, and if SLRU buffers also need to be counted, we\n> ought to make that a separate counter. Or just leave it out\n> altogether.\n>\n> This is arguable, though, for sure....\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 18:33:18 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 8:03 AM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n> Thanks Robert for sharing your thoughts.\n> My first thought was to just remove counting SLRU buffers, then after\n> some more analysis, I found that the checkpointer is responsible for\n> including both regular data buffers and SLRU buffers.\n\nI know that, but what the checkpointer handles and what ought to be\nincluded in the stats are two separate questions.\n\nI think that the SLRU information is potentially useful, but mixing it\nwith the information about regular buffers just seems confusing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 08:18:36 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On 2022-12-20 08:18:36 -0500, Robert Haas wrote:\n> I think that the SLRU information is potentially useful, but mixing it\n> with the information about regular buffers just seems confusing.\n\n+1\n\nAt least for now, it'd be different if/when we manage to move SLRUs to\nthe main buffer pool.\n\n- Andres\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:38:20 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 11:08 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-12-20 08:18:36 -0500, Robert Haas wrote:\n> > I think that the SLRU information is potentially useful, but mixing it\n> > with the information about regular buffers just seems confusing.\n>\n> +1\n>\n> At least for now, it'd be different if/when we manage to move SLRUs to\n> the main buffer pool.\n\n+1 to not count SLRU writes in ckpt_bufs_written. If needed we can\nhave new fields CheckpointStats.ckpt_slru_bufs_written and\nPendingCheckpointerStats.slru_buf_written_checkpoint.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 21 Dec 2022 17:02:39 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "Thanks Robert and Andres for sharing your thoughts.\n\nI have modified the code accordingly and attached the new version of\npatches. patch 0001 fixes the inconsistency in checkpointer stats and\npatch 0002 separates main buffer and SLRU buffer count from checkpoint\ncomplete log message. In 0001, I added a new column to\npg_stat_bgwriter view and named it as slru_buffers_checkpoint and kept\nthe existing column buffers_checkpoint as-is. Should I rename this to\nsomething like main_buffers_checkpoint? Thoughts?\n\nPlease refer to sample checkpoint complete log message[1]. I am not\nquite satisfied with the percentage of buffers written information\nlogged there. The percentage is calculated based on NBuffers in both\nthe cases but I am just worried that are we passing wrong information\nto the user while user may\nthink that the percentage of buffers is based on the total number of\nbuffers available and the percentage of SLRU buffers is based on the\ntotal number of SLRU buffers available.\n\nKindly review and share your comments.\n\n[1]:\n2022-12-21 10:52:25.931 UTC [63530] LOG: checkpoint complete: wrote\n4670 buffers (28.5%), wrote 3 slru buffers (0.0%); 0 WAL file(s)\nadded, 0 removed, 4 recycled; write=0.045 s, sync=0.161 s, total=0.244\ns; sync files=25, longest=0.146 s, average=0.007 s; distance=66130 kB,\nestimate=66130 kB; lsn=0/5557C78, redo lsn=0/5557C40\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Dec 20, 2022 at 11:08 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2022-12-20 08:18:36 -0500, Robert Haas wrote:\n> > I think that the SLRU information is potentially useful, but mixing it\n> > with the information about regular buffers just seems confusing.\n>\n> +1\n>\n> At least for now, it'd be different if/when we manage to move SLRUs to\n> the main buffer pool.\n>\n> - Andres",
"msg_date": "Wed, 21 Dec 2022 17:14:12 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "At Mon, 19 Dec 2022 18:05:38 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Fri, Dec 16, 2022 at 2:14 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > In the first place I don't like that we count the same things twice.\n> > Couldn't we count the number only by any one of them?\n> >\n> > If we remove CheckPointerStats.ckpt_bufs_written, CreateCheckPoint can\n> > get the final number as the difference between the start-end values of\n> > *the shared stats*. As long as a checkpoint runs on a single process,\n> > trace info in BufferSync will work fine. Assuming single process\n> > checkpointing there must be no problem to do that. (Anyway the current\n> > shared stats update for checkpointer is assuming single-process).\n> \n> What if someone resets checkpointer shared stats with\n> pg_stat_reset_shared()? In such a case, the checkpoint complete\n> message will not have the stats, no?\n\nI don't know. I don't believe the stats system doesn't follow such a\nstrict resetting policy.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Dec 2022 11:31:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "At Wed, 21 Dec 2022 17:14:12 +0530, Nitin Jadhav <nitinjadhavpostgres@gmail.com> wrote in \n> [1]:\n> 2022-12-21 10:52:25.931 UTC [63530] LOG: checkpoint complete: wrote\n> 4670 buffers (28.5%), wrote 3 slru buffers (0.0%); 0 WAL file(s)\n> added, 0 removed, 4 recycled; write=0.045 s, sync=0.161 s, total=0.244\n> s; sync files=25, longest=0.146 s, average=0.007 s; distance=66130 kB,\n> estimate=66130 kB; lsn=0/5557C78, redo lsn=0/5557C40\n> \n> Thanks & Regards,\n> Nitin Jadhav\n> \n> On Tue, Dec 20, 2022 at 11:08 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2022-12-20 08:18:36 -0500, Robert Haas wrote:\n> > > I think that the SLRU information is potentially useful, but mixing it\n> > > with the information about regular buffers just seems confusing.\n> >\n> > +1\n> >\n> > At least for now, it'd be different if/when we manage to move SLRUs to\n> > the main buffer pool.\n\nIt sounds reasonable to exclude SRLU write from buffer writes. But I'm\nnot sure its useful to count SLRU writes separately since it is under\nthe noise level of buffer writes in reglular cases and the value\ndoesn't lead to tuning. However I'm not strongly opposed to adding it\neither.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 22 Dec 2022 11:54:28 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 5:15 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> I have modified the code accordingly and attached the new version of\n> patches. patch 0001 fixes the inconsistency in checkpointer stats and\n> patch 0002 separates main buffer and SLRU buffer count from checkpoint\n> complete log message.\n\nIMO, there's no need for 2 separate patches for these changes.\n\n+ (errmsg(\"restartpoint complete: wrote %d buffers (%.1f%%), \"\n+ \"wrote %d slru buffers (%.1f%%); %d WAL\nfile(s) added, \"\n+ \"%d removed, %d recycled; write=%ld.%03d s, \"\n+ \"sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, \"\n+ \"longest=%ld.%03d s, average=%ld.%03d s;\ndistance=%d kB, \"\n+ \"estimate=%d kB; lsn=%X/%X, redo lsn=%X/%X\",\nHm, restartpoint /checkpoint complete message is already too long to\nread and adding slru buffers to it make it further longer. Note that\nwe don't need to add every checkpoint stat to the log message but to\npg_stat_bgwriter. Isn't it enough to show SLRU buffers information in\npg_stat_bgwriter alone?\n\nCan't one look at pg_stat_slru's blks_written\n(pgstat_count_slru_page_written()) to really track the SLRUs written?\nOr is it that one may want to track SLRUs during a checkpoint\nseparately? Is there a real use-case/customer reported issue driving\nthis change?\n\nAfter looking at pg_stat_slru.blks_written, I think the best way is to\njust leave things as-is and let CheckpointStats count slru buffers too\nunless there's really a reported issue that says\npg_stat_slru.blks_written doesn't serve the purpose.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 10:45:08 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "> IMO, there's no need for 2 separate patches for these changes.\n\nI will make it a single patch while sharing the next patch.\n\n\n> + (errmsg(\"restartpoint complete: wrote %d buffers (%.1f%%), \"\n> + \"wrote %d slru buffers (%.1f%%); %d WAL\n> file(s) added, \"\n> + \"%d removed, %d recycled; write=%ld.%03d s, \"\n> + \"sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, \"\n> + \"longest=%ld.%03d s, average=%ld.%03d s;\n> distance=%d kB, \"\n> + \"estimate=%d kB; lsn=%X/%X, redo lsn=%X/%X\",\n> Hm, restartpoint /checkpoint complete message is already too long to\n> read and adding slru buffers to it make it further longer. Note that\n> we don't need to add every checkpoint stat to the log message but to\n> pg_stat_bgwriter. Isn't it enough to show SLRU buffers information in\n> pg_stat_bgwriter alone?\n\nI understand that the log message is too long already but I feel it is\nok since it logs only one time per checkpoint and as discussed\nupthread, SLRU information is potentially useful.\n\n\n> Can't one look at pg_stat_slru's blks_written\n> (pgstat_count_slru_page_written()) to really track the SLRUs written?\n> Or is it that one may want to track SLRUs during a checkpoint\n> separately? Is there a real use-case/customer reported issue driving\n> this change?\n>\n> After looking at pg_stat_slru.blks_written, I think the best way is to\n> just leave things as-is and let CheckpointStats count slru buffers too\n> unless there's really a reported issue that says\n> pg_stat_slru.blks_written doesn't serve the purpose.\n\nThe v1 patch corresponds to what you are suggesting. But the question\nis not about tracking slru buffers, it is about separating this\ninformation from main buffers count during checkpoint. I think there\nis enough discussion done upthread to exclude slru buffers from\nCheckpointStats.ckpt_bufs_written. 
Please share if you still strongly\nfeel that a separate counter is not required.\n\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Jan 27, 2023 at 10:45 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Dec 21, 2022 at 5:15 PM Nitin Jadhav\n> <nitinjadhavpostgres@gmail.com> wrote:\n> >\n> > I have modified the code accordingly and attached the new version of\n> > patches. patch 0001 fixes the inconsistency in checkpointer stats and\n> > patch 0002 separates main buffer and SLRU buffer count from checkpoint\n> > complete log message.\n>\n> IMO, there's no need for 2 separate patches for these changes.\n>\n> + (errmsg(\"restartpoint complete: wrote %d buffers (%.1f%%), \"\n> + \"wrote %d slru buffers (%.1f%%); %d WAL\n> file(s) added, \"\n> + \"%d removed, %d recycled; write=%ld.%03d s, \"\n> + \"sync=%ld.%03d s, total=%ld.%03d s; sync files=%d, \"\n> + \"longest=%ld.%03d s, average=%ld.%03d s;\n> distance=%d kB, \"\n> + \"estimate=%d kB; lsn=%X/%X, redo lsn=%X/%X\",\n> Hm, restartpoint /checkpoint complete message is already too long to\n> read and adding slru buffers to it make it further longer. Note that\n> we don't need to add every checkpoint stat to the log message but to\n> pg_stat_bgwriter. Isn't it enough to show SLRU buffers information in\n> pg_stat_bgwriter alone?\n>\n> Can't one look at pg_stat_slru's blks_written\n> (pgstat_count_slru_page_written()) to really track the SLRUs written?\n> Or is it that one may want to track SLRUs during a checkpoint\n> separately? 
Is there a real use-case/customer reported issue driving\n> this change?\n>\n> After looking at pg_stat_slru.blks_written, I think the best way is to\n> just leave things as-is and let CheckpointStats count slru buffers too\n> unless there's really a reported issue that says\n> pg_stat_slru.blks_written doesn't serve the purpose.\n>\n> --\n> Bharath Rupireddy\n> PostgreSQL Contributors Team\n> RDS Open Source Databases\n> Amazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 19:55:04 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "\nOn 2022-12-21 17:14:12 +0530, Nitin Jadhav wrote:\n> Kindly review and share your comments.\n\nThis doesn't pass the tests, because the regression tests weren't adjusted:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/5937624817336320/testrun/build/testrun/regress/regress/regression.diffs\n\n\n",
"msg_date": "Mon, 13 Feb 2023 16:38:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "> This doesn't pass the tests, because the regression tests weren't adjusted:\n> https://api.cirrus-ci.com/v1/artifact/task/5937624817336320/testrun/build/testrun/regress/regress/regression.diffs\n\nThanks for sharing this. I have fixed this in the patch attached.\n\n\n>> IMO, there's no need for 2 separate patches for these changes.\n> I will make it a single patch while sharing the next patch.\n\nClubbed both patches into one.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Tue, Feb 14, 2023 at 6:08 AM Andres Freund <andres@anarazel.de> wrote:\n>\n>\n> On 2022-12-21 17:14:12 +0530, Nitin Jadhav wrote:\n> > Kindly review and share your comments.\n>\n> This doesn't pass the tests, because the regression tests weren't adjusted:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/5937624817336320/testrun/build/testrun/regress/regress/regression.diffs",
"msg_date": "Sun, 19 Feb 2023 15:27:31 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Sun, 19 Feb 2023 at 04:58, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > This doesn't pass the tests, because the regression tests weren't adjusted:\n> > https://api.cirrus-ci.com/v1/artifact/task/5937624817336320/testrun/build/testrun/regress/regress/regression.diffs\n>\n> Thanks for sharing this. I have fixed this in the patch attached.\n\nAFAICS this patch was marked Waiting on Author due to this feedback on\nFeb 14 but the patch was updated Feb 19 and is still passing today.\nI've marked it Needs Review.\n\nI'm not sure if all of the feedback from Bharath Rupireddy and Kyotaro\nHoriguchi was addressed.\n\nIf there's any specific questions remaining you need feedback on it\nwould be good to ask explicitly. Otherwise if you think it's ready you\ncould mark it Ready for Commit.\n\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 20 Mar 2023 16:01:02 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Sun, 19 Feb 2023 at 15:28, Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > This doesn't pass the tests, because the regression tests weren't adjusted:\n> > https://api.cirrus-ci.com/v1/artifact/task/5937624817336320/testrun/build/testrun/regress/regress/regression.diffs\n>\n> Thanks for sharing this. I have fixed this in the patch attached.\n>\n>\n> >> IMO, there's no need for 2 separate patches for these changes.\n> > I will make it a single patch while sharing the next patch.\n>\n> Clubbed both patches into one.\n\nThe patch does not apply anymore, please post a rebased version of the patch :\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/catalog/system_views.sql.rej\npatching file src/backend/utils/activity/pgstat_checkpointer.c\nHunk #1 FAILED at 52.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/backend/utils/activity/pgstat_checkpointer.c.rej\npatching file src/backend/utils/adt/pgstatfuncs.c\nHunk #1 succeeded at 1217 with fuzz 1 (offset 24 lines).\npatching file src/include/access/xlog.h\nHunk #1 succeeded at 165 (offset 3 lines).\npatching file src/include/catalog/pg_proc.dat\nHunk #1 FAILED at 5680.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/catalog/pg_proc.dat.rej\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 20 Jan 2024 08:10:03 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "On Sat, Jan 20, 2024 at 08:10:03AM +0530, vignesh C wrote:\n> The patch does not apply anymore, please post a rebased version of the patch :\n\nThere is more to it. Some of the columns of pg_stat_bgwriter have\nbeen moved to a different view, aka pg_stat_checkpointer. I have\nmarked the patch as returned with feedback for now, so feel free to\nresubmit if you can get a new version of the patch.\n--\nMichael",
"msg_date": "Tue, 30 Jan 2024 16:50:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "I apologize for not being active on this thread. However, I have now\nreturned to the thread and confirmed that the inconsistency is still\npresent in the latest code. I believe it’s crucial to address this\nissue, and I am currently submitting the v5 version of the patch. The\nv4 version had addressed the feedback from Bharath, Kyotaro, Andres,\nand Robert. The current version has been rebased to incorporate\nVignesh’s suggestions. In response to Michael’s comments, I’ve moved\nthe new ‘slru_written’ column from the ‘pg_stat_bgwriter’ view to the\n‘pg_stat_checkpointer’ in the attached patch.\n\nTo summarize our discussions, we’ve reached a consensus to correct the\nmismatch between the information on buffers written as displayed in\nthe ‘pg_stat_checkpointer’ view and the checkpointer log message.\nWe’ve also agreed to separate the SLRU buffers data from the buffers\nwritten and present the SLRU buffers data in a distinct field.\n\nI have created the new commitfest entry here\nhttps://commitfest.postgresql.org/49/5130/.\nKindly share if any comments.\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\n\nOn Tue, Jan 30, 2024 at 1:20 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 20, 2024 at 08:10:03AM +0530, vignesh C wrote:\n> > The patch does not apply anymore, please post a rebased version of the patch :\n>\n> There is more to it. Some of the columns of pg_stat_bgwriter have\n> been moved to a different view, aka pg_stat_checkpointer. I have\n> marked the patch as returned with feedback for now, so feel free to\n> resubmit if you can get a new version of the patch.\n> --\n> Michael",
"msg_date": "Thu, 18 Jul 2024 12:38:00 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "\n\nOn 2024/07/18 16:08, Nitin Jadhav wrote:\n> I apologize for not being active on this thread. However, I have now\n> returned to the thread and confirmed that the inconsistency is still\n> present in the latest code. I believe it’s crucial to address this\n> issue, and I am currently submitting the v5 version of the patch. The\n> v4 version had addressed the feedback from Bharath, Kyotaro, Andres,\n> and Robert. The current version has been rebased to incorporate\n> Vignesh’s suggestions. In response to Michael’s comments, I’ve moved\n> the new ‘slru_written’ column from the ‘pg_stat_bgwriter’ view to the\n> ‘pg_stat_checkpointer’ in the attached patch.\n> \n> To summarize our discussions, we’ve reached a consensus to correct the\n> mismatch between the information on buffers written as displayed in\n> the ‘pg_stat_checkpointer’ view and the checkpointer log message.\n> We’ve also agreed to separate the SLRU buffers data from the buffers\n> written and present the SLRU buffers data in a distinct field.\n> \n> I have created the new commitfest entry here\n> https://commitfest.postgresql.org/49/5130/.\n> Kindly share if any comments.\n\nThanks for updating the patch!\n\nIn pgstat_checkpointer.c, it looks like you missed adding\nCHECKPOINTER_COMP(slru_written) in pgstat_checkpointer_snapshot_cb().\n\n+ <row>\n+ <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n+ <structfield>slru_written</structfield> <type>bigint</type>\n+ </para>\n+ <para>\n+ Number of SLRU buffers written during checkpoints and restartpoints\n+ </para></entry>\n+ </row>\n\nThis entry should be moved to the pg_stat_checkpointer documentation.\n\n+\t\t\t\t\t\tCheckpointStats.ckpt_slru_written,\n+\t\t\t\t\t\t(double) CheckpointStats.ckpt_slru_written * 100 / NBuffers,\n\nI don't think NBuffers represents the maximum number of SLRU buffers.\nWe might need to calculate this based on specific GUC settings,\nlike transaction_buffers.\n\nRegards,\n\n-- \nFujii 
Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Wed, 18 Sep 2024 22:27:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "Thanks for the review.\n\n> In pgstat_checkpointer.c, it looks like you missed adding\n> CHECKPOINTER_COMP(slru_written) in pgstat_checkpointer_snapshot_cb().\n\nFixed it.\n\n\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>slru_written</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of SLRU buffers written during checkpoints and restartpoints\n> + </para></entry>\n> + </row>\n>\n> This entry should be moved to the pg_stat_checkpointer documentation.\n\nFixed it.\n\n\n> + CheckpointStats.ckpt_slru_written,\n> + (double) CheckpointStats.ckpt_slru_written * 100 / NBuffers,\n>\n> I don't think NBuffers represents the maximum number of SLRU buffers.\n> We might need to calculate this based on specific GUC settings,\n> like transaction_buffers.\n\nGreat observation. Since the SLRU buffers written during a checkpoint\ncan include transaction_buffers, commit_timestamp_buffers,\nsubtransaction_buffers, multixact_member_buffers,\nmultixact_offset_buffers, and serializable_buffers, the total count of\nSLRU buffers should be the sum of all these types. We might need to\nintroduce a global variable, such as total_slru_count, in the\nglobals.c file to hold this sum. The num_slots variable in the\nSlruSharedData structure needs to be accessed from all types of SLRU\nand stored in total_slru_count. This can then be used during logging\nto calculate the percentage of SLRU buffers written. However, I’m\nunsure if this effort is justified. While it makes sense for normal\nbuffers to display the percentage, the additional code required might\nnot provide significant value to users. Therefore, I have removed this\nin the attached v6 patch. 
If it is really required, I am happy to make\nthe above changes and share the updated patch.\n\nBest Regards,\nNitin Jadhav\nAzure Database for PostgreSQL\nMicrosoft\n\nOn Wed, Sep 18, 2024 at 6:57 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2024/07/18 16:08, Nitin Jadhav wrote:\n> > I apologize for not being active on this thread. However, I have now\n> > returned to the thread and confirmed that the inconsistency is still\n> > present in the latest code. I believe it’s crucial to address this\n> > issue, and I am currently submitting the v5 version of the patch. The\n> > v4 version had addressed the feedback from Bharath, Kyotaro, Andres,\n> > and Robert. The current version has been rebased to incorporate\n> > Vignesh’s suggestions. In response to Michael’s comments, I’ve moved\n> > the new ‘slru_written’ column from the ‘pg_stat_bgwriter’ view to the\n> > ‘pg_stat_checkpointer’ in the attached patch.\n> >\n> > To summarize our discussions, we’ve reached a consensus to correct the\n> > mismatch between the information on buffers written as displayed in\n> > the ‘pg_stat_checkpointer’ view and the checkpointer log message.\n> > We’ve also agreed to separate the SLRU buffers data from the buffers\n> > written and present the SLRU buffers data in a distinct field.\n> >\n> > I have created the new commitfest entry here\n> > https://commitfest.postgresql.org/49/5130/.\n> > Kindly share if any comments.\n>\n> Thanks for updating the patch!\n>\n> In pgstat_checkpointer.c, it looks like you missed adding\n> CHECKPOINTER_COMP(slru_written) in pgstat_checkpointer_snapshot_cb().\n>\n> + <row>\n> + <entry role=\"catalog_table_entry\"><para role=\"column_definition\">\n> + <structfield>slru_written</structfield> <type>bigint</type>\n> + </para>\n> + <para>\n> + Number of SLRU buffers written during checkpoints and restartpoints\n> + </para></entry>\n> + </row>\n>\n> This entry should be moved to the pg_stat_checkpointer documentation.\n>\n> + 
CheckpointStats.ckpt_slru_written,\n> + (double) CheckpointStats.ckpt_slru_written * 100 / NBuffers,\n>\n> I don't think NBuffers represents the maximum number of SLRU buffers.\n> We might need to calculate this based on specific GUC settings,\n> like transaction_buffers.\n>\n> Regards,\n>\n> --\n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n>",
"msg_date": "Sun, 22 Sep 2024 17:14:31 +0530",
"msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
},
{
"msg_contents": "\n\nOn 2024/09/22 20:44, Nitin Jadhav wrote:\n>> + CheckpointStats.ckpt_slru_written,\n>> + (double) CheckpointStats.ckpt_slru_written * 100 / NBuffers,\n>>\n>> I don't think NBuffers represents the maximum number of SLRU buffers.\n>> We might need to calculate this based on specific GUC settings,\n>> like transaction_buffers.\n> \n> Great observation. Since the SLRU buffers written during a checkpoint\n> can include transaction_buffers, commit_timestamp_buffers,\n> subtransaction_buffers, multixact_member_buffers,\n> multixact_offset_buffers, and serializable_buffers, the total count of\n> SLRU buffers should be the sum of all these types. We might need to\n> introduce a global variable, such as total_slru_count, in the\n> globals.c file to hold this sum. The num_slots variable in the\n> SlruSharedData structure needs to be accessed from all types of SLRU\n> and stored in total_slru_count. This can then be used during logging\n> to calculate the percentage of SLRU buffers written. However, I’m\n> unsure if this effort is justified. While it makes sense for normal\n> buffers to display the percentage, the additional code required might\n> not provide significant value to users. Therefore, I have removed this\n> in the attached v6 patch.\n\n+1\n\nThanks for updating the patch! It looks good to me.\nBarring any objections, I will commit this patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n\n",
"msg_date": "Tue, 1 Oct 2024 03:33:30 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Inconsistency in reporting checkpointer stats"
}
] |
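The divisor question settled in the thread above — NBuffers (shared_buffers) is the wrong capacity for an SLRU percentage, and the right capacity would be the sum of the per-SLRU GUCs — can be sketched as plain arithmetic. This is an illustrative Python sketch, not PostgreSQL source: the GUC names are the ones Nitin lists upthread, while the numeric values (and the 144-page write count) are invented for the example.

```python
# Illustrative sketch only (plain Python, not PostgreSQL source). It shows
# the accounting discussed in the thread: a checkpoint log percentage must
# be computed against the right capacity, and for SLRU pages that capacity
# would be the sum of the per-SLRU GUCs, not NBuffers (shared_buffers).
# GUC names come from the thread; the numeric values are invented.
SLRU_BUFFER_GUCS = {
    "transaction_buffers": 128,
    "commit_timestamp_buffers": 16,
    "subtransaction_buffers": 64,
    "multixact_member_buffers": 32,
    "multixact_offset_buffers": 16,
    "serializable_buffers": 32,
}

def total_slru_buffers(gucs):
    """Total SLRU capacity: the sum of all per-SLRU buffer settings."""
    return sum(gucs.values())

def pct_written(written, capacity):
    """Percentage of a buffer pool written, as a checkpoint log line reports it."""
    return written * 100.0 / capacity

nbuffers = 16384                                      # shared_buffers, in pages
slru_capacity = total_slru_buffers(SLRU_BUFFER_GUCS)  # 288

# Dividing SLRU writes by NBuffers (the mistake Fujii Masao points out)
# drastically understates the percentage:
print(round(pct_written(144, nbuffers), 2))  # 0.88
print(pct_written(144, slru_capacity))       # 50.0
```

The v6 patch sidesteps the question entirely by dropping the percentage for SLRU writes, which avoids having to maintain a `total_slru_count` across all SLRU types.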
[
{
"msg_contents": "This patch exposes the ICU facility to add custom collation rules to a \nstandard collation. This would allow users to customize any ICU \ncollation to whatever they want. A very simple example from the \ndocumentation/tests:\n\nCREATE COLLATION en_custom\n (provider = icu, locale = 'en', rules = '&a < g');\n\nThis places \"g\" after \"a\" before \"b\". Details about the syntax can be \nfound at \n<https://unicode-org.github.io/icu/userguide/collation/customization/>.\n\nThe code is pretty straightforward. It mainly just records these rules \nin the catalog and feeds them to ICU when creating the collator object.",
"msg_date": "Wed, 14 Dec 2022 10:26:07 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "Patch needed a rebase; no functionality changes.\n\nOn 14.12.22 10:26, Peter Eisentraut wrote:\n> This patch exposes the ICU facility to add custom collation rules to a \n> standard collation. This would allow users to customize any ICU \n> collation to whatever they want. A very simple example from the \n> documentation/tests:\n> \n> CREATE COLLATION en_custom\n> (provider = icu, locale = 'en', rules = '&a < g');\n> \n> This places \"g\" after \"a\" before \"b\". Details about the syntax can be \n> found at \n> <https://unicode-org.github.io/icu/userguide/collation/customization/>.\n> \n> The code is pretty straightforward. It mainly just records these rules \n> in the catalog and feeds them to ICU when creating the collator object.",
"msg_date": "Thu, 5 Jan 2023 16:15:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 20:45, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Patch needed a rebase; no functionality changes.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\n=== Applying patches on top of PostgreSQL commit ID\nd952373a987bad331c0e499463159dd142ced1ef ===\n=== applying patch\n./v2-0001-Allow-tailoring-of-ICU-locales-with-custom-rules.patch\npatching file doc/src/sgml/catalogs.sgml\npatching file doc/src/sgml/ref/create_collation.sgml\npatching file doc/src/sgml/ref/create_database.sgml\nHunk #1 FAILED at 192.\n1 out of 1 hunk FAILED -- saving rejects to file\ndoc/src/sgml/ref/create_database.sgml.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4075.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 08:20:50 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 11.01.23 03:50, vignesh C wrote:\n> On Thu, 5 Jan 2023 at 20:45, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Patch needed a rebase; no functionality changes.\n> \n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nUpdated patch attached.",
"msg_date": "Mon, 16 Jan 2023 12:18:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Mon, 2023-01-16 at 12:18 +0100, Peter Eisentraut wrote:\n> Updated patch attached.\n\nI like that patch. It applies and passes regression tests.\n\nI played with it:\n\n CREATE COLLATION german_phone (LOCALE = 'de-AT', PROVIDER = icu, RULES = '&oe < ö');\n\n SELECT * FROM (VALUES ('od'), ('oe'), ('of'), ('p'), ('ö')) AS q(c)\n ORDER BY c COLLATE german_phone;\n\n c \n ════\n od\n oe\n ö\n of\n p\n (5 rows)\n\nCool so far. Now I created a database with that locale:\n\n CREATE DATABASE teutsch LOCALE_PROVIDER icu ICU_LOCALE german_phone\n LOCALE \"de_AT.utf8\" TEMPLATE template0;\n\nNow the rules are not in \"pg_database\":\n\n SELECT datcollate, daticulocale, daticurules FROM pg_database WHERE datname = 'teutsch';\n\n datcollate │ daticulocale │ daticurules \n ════════════╪══════════════╪═════════════\n de_AT.utf8 │ german_phone │ ∅\n (1 row)\n\nI connect to the database and try:\n\n SELECT * FROM (VALUES ('od'), ('oe'), ('of'), ('p'), ('ö')) AS q(c)\n ORDER BY c COLLATE german_phone;\n\n ERROR: collation \"german_phone\" for encoding \"UTF8\" does not exist\n LINE 1: ... ('oe'), ('of'), ('p'), ('ö')) AS q(c) ORDER BY c COLLATE ge...\n ^\n\nIndeed, the collation isn't there...\n\nI guess that it is not the fault of this patch that the collation isn't there,\nbut I think it is surprising. What good is a database collation that does not\nexist in the database?\n\nWhat might be the fault of this patch, however, is that \"daticurules\" is not\nset in \"pg_database\". Looking at the code, that column seems to be copied\nfrom the template database, but cannot be overridden.\n\nPerhaps this only needs more documentation, but I am confused.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 31 Jan 2023 17:35:12 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "\tLaurenz Albe wrote:\n\n> Cool so far. Now I created a database with that locale:\n> \n> CREATE DATABASE teutsch LOCALE_PROVIDER icu ICU_LOCALE german_phone\n> LOCALE \"de_AT.utf8\" TEMPLATE template0;\n> \n> Now the rules are not in \"pg_database\":\n\nThe parameter after ICU_LOCALE is passed directly to ICU as a locale\nID, as opposed to refering a collation name in the current database.\nThis CREATE DATABASE doesn't fail because ICU accepts pretty much\nanything as a locale ID, ignoring what it can't parse instead of\nerroring out.\n\nI think the way to express what you want should be:\n\nCREATE DATABASE teutsch \n LOCALE_PROVIDER icu\n ICU_LOCALE 'de_AT'\n LOCALE 'de_AT.utf8'\n ICU_RULES '&a < g';\n\nHowever it still leaves \"daticurules\" empty in the destination db,\nbecause of an actual bug in the current patch.\n\nLooking at createdb() in commands.c, it creates this variable:\n\n@@ -711,6 +714,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)\n\tchar\t *dbcollate = NULL;\n\tchar\t *dbctype = NULL;\n\tchar\t *dbiculocale = NULL;\n+\tchar\t *dbicurules = NULL;\n\tchar\t\tdblocprovider = '\\0';\n\tchar\t *canonname;\n\tint\t\t\tencoding = -1;\n\nand then reads it later\n\n@@ -1007,6 +1017,8 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)\n\t\tdblocprovider = src_locprovider;\n\tif (dbiculocale == NULL && dblocprovider == COLLPROVIDER_ICU)\n\t\tdbiculocale = src_iculocale;\n+\tif (dbicurules == NULL && dblocprovider == COLLPROVIDER_ICU)\n+\t\tdbicurules = src_icurules;\n \n\t/* Some encodings are client only */\n\tif (!PG_VALID_BE_ENCODING(encoding))\n\nbut it forgets to assign it in between, so it stays NULL and src_icurules\nis taken instead.\n\n> I guess that it is not the fault of this patch that the collation\n> isn't there, but I think it is surprising. 
What good is a database\n> collation that does not exist in the database?\n\nEven if the above invocation of CREATE DATABASE worked as you\nintuitively expected, by getting the characteristics from the\nuser-defined collation for the destination db, it still wouldn't work to\nrefer\nto COLLATE \"german_phone\" in the destination database.\nThat's because there would be no \"german_phone\" entry in the\npg_collation of the destination db, as it's cloned from the template\ndb, which has no reason to have this collation.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Sat, 04 Feb 2023 14:41:18 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Sat, 2023-02-04 at 14:41 +0100, Daniel Verite wrote:\n> Laurenz Albe wrote:\n> \n> > Cool so far. Now I created a database with that locale:\n> > \n> > CREATE DATABASE teutsch LOCALE_PROVIDER icu ICU_LOCALE german_phone\n> > LOCALE \"de_AT.utf8\" TEMPLATE template0;\n> > \n> > Now the rules are not in \"pg_database\":\n> \n> The parameter after ICU_LOCALE is passed directly to ICU as a locale\n> ID, as opposed to refering a collation name in the current database.\n> This CREATE DATABASE doesn't fail because ICU accepts pretty much\n> anything as a locale ID, ignoring what it can't parse instead of\n> erroring out.\n> \n> I think the way to express what you want should be:\n> \n> CREATE DATABASE teutsch \n> LOCALE_PROVIDER icu\n> ICU_LOCALE 'de_AT'\n> LOCALE 'de_AT.utf8'\n> ICU_RULES '&a < g';\n> \n> However it still leaves \"daticurules\" empty in the destination db,\n> because of an actual bug in the current patch.\n\nI see. Thanks for the explanation.\n\n> > I guess that it is not the fault of this patch that the collation\n> > isn't there, but I think it is surprising. What good is a database\n> > collation that does not exist in the database?\n> \n> Even if the above invocation of CREATE DATABASE worked as you\n> intuitively expected, by getting the characteristics from the\n> user-defined collation for the destination db, it still wouldn't work to\n> refer\n> to COLLATE \"german_phone\" in the destination database.\n> That's because there would be no \"german_phone\" entry in the\n> pg_collation of the destination db, as it's cloned from the template\n> db, which has no reason to have this collation.\n\nThat makes sense. Then I think that the current behavior is buggy:\nYou should not be allowed to specify a collation that does not exist in\nthe template database. Otherwise you end up with something weird.\n\nThis is not the fault of this patch though.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 04 Feb 2023 21:46:16 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 04.02.23 14:41, Daniel Verite wrote:\n> However it still leaves \"daticurules\" empty in the destination db,\n> because of an actual bug in the current patch.\n> \n> Looking at createdb() in commands.c, it creates this variable:\n> \n> @@ -711,6 +714,7 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)\n> \tchar\t *dbcollate = NULL;\n> \tchar\t *dbctype = NULL;\n> \tchar\t *dbiculocale = NULL;\n> +\tchar\t *dbicurules = NULL;\n> \tchar\t\tdblocprovider = '\\0';\n> \tchar\t *canonname;\n> \tint\t\t\tencoding = -1;\n> \n> and then reads it later\n> \n> @@ -1007,6 +1017,8 @@ createdb(ParseState *pstate, const CreatedbStmt *stmt)\n> \t\tdblocprovider = src_locprovider;\n> \tif (dbiculocale == NULL && dblocprovider == COLLPROVIDER_ICU)\n> \t\tdbiculocale = src_iculocale;\n> +\tif (dbicurules == NULL && dblocprovider == COLLPROVIDER_ICU)\n> +\t\tdbicurules = src_icurules;\n> \n> \t/* Some encodings are client only */\n> \tif (!PG_VALID_BE_ENCODING(encoding))\n> \n> but it forgets to assign it in between, so it stays NULL and src_icurules\n> is taken instead.\n\nRight. Here is a new patch with this fixed.",
"msg_date": "Mon, 6 Feb 2023 22:16:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Mon, 2023-02-06 at 22:16 +0100, Peter Eisentraut wrote:\n> Right. Here is a new patch with this fixed.\n\nThanks. I played some more with it, and still are still some missing\nodds and ends:\n\n- There is a new option ICU_RULES to CREATE DATABASE, but it is not\n reflected in \\h CREATE DATABASE. sql_help_CREATE_DATABASE() needs to\n be amended.\n\n- There is no way to show the rules except by querying \"pg_collation\" or\n \"pg_database\". I think it would be good to show the rules with\n \\dO+ and \\l+.\n\n- If I create a collation \"x\" with RULES and then create a database\n with \"ICU_LOCALE x\", the rules are not copied over.\n\n I don't know if that is intended or not, but it surprises me.\n Should that be a WARNING? Or, since creating a database with a collation\n that does not exist in \"template0\" doesn't make much sense (or does it?),\n is there a way to forbid that?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Tue, 14 Feb 2023 17:53:33 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 14.02.23 17:53, Laurenz Albe wrote:\n> On Mon, 2023-02-06 at 22:16 +0100, Peter Eisentraut wrote:\n>> Right. Here is a new patch with this fixed.\n> \n> Thanks. I played some more with it, and still are still some missing\n> odds and ends:\n> \n> - There is a new option ICU_RULES to CREATE DATABASE, but it is not\n> reflected in \\h CREATE DATABASE. sql_help_CREATE_DATABASE() needs to\n> be amended.\n\nFixed.\n\n> - There is no way to show the rules except by querying \"pg_collation\" or\n> \"pg_database\". I think it would be good to show the rules with\n> \\dO+ and \\l+.\n\nFixed. I adjusted the order of the columns a bit, to make the overall \npicture more sensible. The locale provider column is now earlier, since \nit indicates which of the subsequent columns are applicable.\n\n> - If I create a collation \"x\" with RULES and then create a database\n> with \"ICU_LOCALE x\", the rules are not copied over.\n> \n> I don't know if that is intended or not, but it surprises me.\n> Should that be a WARNING? Or, since creating a database with a collation\n> that does not exist in \"template0\" doesn't make much sense (or does it?),\n> is there a way to forbid that?\n\nThis is a misunderstanding of how things work. The value of the \ndatabase ICU_LOCALE attribute is passed to the ICU library. It does not \nrefer to a PostgreSQL collation object.",
"msg_date": "Mon, 20 Feb 2023 10:00:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n[patch v5]\n\nTwo quick comments:\n\n- pg_dump support need to be added for CREATE COLLATION / DATABASE\n\n- there doesn't seem to be a way to add rules to template1.\nIf someone wants to have icu rules and initial contents to their new\ndatabases, I think they need to create a custom template database\n(from template0) for that purpose, in addition to template1.\nFrom a usability standpoint, this is a bit cumbersome, as it's\nnormally the role of template1.\nTo improve on that, shouldn't initdb be able to create template0 with\nrules too?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 20 Feb 2023 17:30:59 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 20.02.23 17:30, Daniel Verite wrote:\n> - pg_dump support need to be added for CREATE COLLATION / DATABASE\n\nI have added that.\n\n> \n> - there doesn't seem to be a way to add rules to template1.\n> If someone wants to have icu rules and initial contents to their new\n> databases, I think they need to create a custom template database\n> (from template0) for that purpose, in addition to template1.\n> From a usability standpoint, this is a bit cumbersome, as it's\n> normally the role of template1.\n> To improve on that, shouldn't initdb be able to create template0 with\n> rules too?\n\nRight, that would be an initdb option. Is that too many initdb options \nthen? It would be easy to add, if we think it's worth it.",
"msg_date": "Wed, 22 Feb 2023 18:35:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Wed, 2023-02-22 at 18:35 +0100, Peter Eisentraut wrote:\n> > - there doesn't seem to be a way to add rules to template1.\n> > If someone wants to have icu rules and initial contents to their new\n> > databases, I think they need to create a custom template database\n> > (from template0) for that purpose, in addition to template1.\n> > From a usability standpoint, this is a bit cumbersome, as it's\n> > normally the role of template1.\n> > To improve on that, shouldn't initdb be able to create template0 with\n> > rules too?\n> \n> Right, that would be an initdb option. Is that too many initdb options \n> then? It would be easy to add, if we think it's worth it.\n\nAn alternative would be to document that you can drop \"template1\" and\ncreate it again using the ICU collation rules you need.\n\nBut I'd prefer an \"initdb\" option.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Thu, 02 Mar 2023 16:39:16 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 02.03.23 16:39, Laurenz Albe wrote:\n> On Wed, 2023-02-22 at 18:35 +0100, Peter Eisentraut wrote:\n>>> - there doesn't seem to be a way to add rules to template1.\n>>> If someone wants to have icu rules and initial contents to their new\n>>> databases, I think they need to create a custom template database\n>>> (from template0) for that purpose, in addition to template1.\n>>> From a usability standpoint, this is a bit cumbersome, as it's\n>>> normally the role of template1.\n>>> To improve on that, shouldn't initdb be able to create template0 with\n>>> rules too?\n>>\n>> Right, that would be an initdb option. Is that too many initdb options\n>> then? It would be easy to add, if we think it's worth it.\n> \n> An alternative would be to document that you can drop \"template1\" and\n> create it again using the ICU collation rules you need.\n> \n> But I'd prefer an \"initdb\" option.\n\nOk, here is a version with an initdb option and also a createdb option \n(to expose the CREATE DATABASE option).\n\nYou can mess with people by setting up your databases like this:\n\ninitdb -D data --locale-provider=icu --icu-rules='&a < c < b < e < d'\n\n;-)",
"msg_date": "Fri, 3 Mar 2023 13:45:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Fri, 2023-03-03 at 13:45 +0100, Peter Eisentraut wrote:\n> You can mess with people by setting up your databases like this:\n> \n> initdb -D data --locale-provider=icu --icu-rules='&a < c < b < e < d'\n> \n> ;-)\n\nWould we be the first major database to support custom collation rules?\nThis sounds useful for testing, experimentation, hacking, etc.\n\nWhat are some of the use cases? Is it helpful to comply with unusual or\noutdated standards or formats? Maybe there are people using special\ndelimiters/terminators and they need them to be treated a certain way\nduring comparisons?\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Tue, 07 Mar 2023 22:06:48 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Tue, 2023-03-07 at 22:06 -0800, Jeff Davis wrote:\n> On Fri, 2023-03-03 at 13:45 +0100, Peter Eisentraut wrote:\n> > You can mess with people by setting up your databases like this:\n> > \n> > initdb -D data --locale-provider=icu --icu-rules='&a < c < b < e < d'\n> > \n> > ;-)\n> \n> Would we be the first major database to support custom collation rules?\n> This sounds useful for testing, experimentation, hacking, etc.\n> \n> What are some of the use cases? Is it helpful to comply with unusual or\n> outdated standards or formats? Maybe there are people using special\n> delimiters/terminators and they need them to be treated a certain way\n> during comparisons?\n\nI regularly see complaints about the sort order; recently this one:\nhttps://postgr.es/m/CAFCRh--xt-J8awOavhB216kom6TQnaP35TTVEQQS5bHH7gMemQ@mail.gmail.com\n\nSo being able to influence the sort order is useful.\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Mar 2023 11:38:00 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On Fri, 2023-03-03 at 13:45 +0100, Peter Eisentraut wrote:\n> Ok, here is a version with an initdb option and also a createdb option \n> (to expose the CREATE DATABASE option).\n> \n> You can mess with people by setting up your databases like this:\n> \n> initdb -D data --locale-provider=icu --icu-rules='&a < c < b < e < d'\n\nLooks good. I cannot get it to misbehave, \"make check-world\" is successful\n(the regression tests misbehave in interesting ways when running\n\"make installcheck\" on a cluster created with non-standard ICU rules, but\nthat can be expected).\n\nI checked the documentation, tested \"pg_dump\" support, everything fine.\n\nI'll mark it as \"ready for committer\".\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Wed, 08 Mar 2023 15:18:47 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
},
{
"msg_contents": "On 08.03.23 15:18, Laurenz Albe wrote:\n> On Fri, 2023-03-03 at 13:45 +0100, Peter Eisentraut wrote:\n>> Ok, here is a version with an initdb option and also a createdb option\n>> (to expose the CREATE DATABASE option).\n>>\n>> You can mess with people by setting up your databases like this:\n>>\n>> initdb -D data --locale-provider=icu --icu-rules='&a < c < b < e < d'\n> \n> Looks good. I cannot get it to misbehave, \"make check-world\" is successful\n> (the regression tests misbehave in interesting ways when running\n> \"make installcheck\" on a cluster created with non-standard ICU rules, but\n> that can be expected).\n> \n> I checked the documentation, tested \"pg_dump\" support, everything fine.\n> \n> I'll mark it as \"ready for committer\".\n\ncommitted\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 17:05:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Allow tailoring of ICU locales with custom rules"
}
] |
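The thread's running example, `rules = '&a < g'`, places "g" immediately after "a" and before "b". A minimal sketch of the resulting order, in plain Python with a hand-built sort key rather than the ICU library (a real collation would come from handing the rule string to ICU, as the committed patch does):

```python
# Illustrative sketch only, in plain Python rather than ICU: it emulates the
# effect of the thread's example tailoring rule '&a < g' by constructing the
# tailored alphabet by hand, just to show the resulting sort order.
def tailored_order(anchor, moved, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Collation order with `moved` reset to sort right after `anchor`."""
    base = [c for c in alphabet if c != moved]
    i = base.index(anchor)
    return base[: i + 1] + [moved] + base[i + 1 :]

ORDER = tailored_order("a", "g")           # ['a', 'g', 'b', 'c', 'd', ...]
RANK = {c: i for i, c in enumerate(ORDER)}

def sort_key(word):
    return [RANK[c] for c in word]

print(sorted(["b", "g", "a", "f"], key=sort_key))  # ['a', 'g', 'b', 'f']
```

This mirrors the behavior of `CREATE COLLATION en_custom (provider = icu, locale = 'en', rules = '&a < g')` from the first message, where "g" sorts between "a" and "b".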
[
{
"msg_contents": "Hi\n\nWhile studying Jeff's new crop of collation patches I noticed in\npassing that check_strxfrm_bug() must surely by now be unnecessary.\nThe buffer overrun bugs were fixed a decade ago, and the relevant\nsystems are way out of support. If you're worried that the bugs might\ncome back, then the test is insufficient: modern versions of both OSes\nhave strxfrm_l(), which we aren't checking. In any case, we also\ncompletely disable this stuff because of bugs and quality problems in\nevery other known implementation, via TRUST_STRXFRM (or rather the\nlack of it). So I think it's time to remove that function; please see\nattached.\n\nJust by the way, if you like slow motion domino runs, check this out:\n\n* Original pgsql-bugs investigation into strxfrm() inconsistencies\n https://www.postgresql.org/message-id/flat/111D0E27-A8F3-4A84-A4E0-B0FB703863DF@s24.com\n\n* I happened to test that on bleeding-edge FreeBSD 11 (wasn't released\nyet), because at that time FreeBSD was in the process of adopting\nillumos's new collation code, and reported teething problems:\n https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=208266\n\n* FreeBSD, DragonFly and illumos's trees were then partially fixed by\nthe authors, but our strcolltest.c still showed some remaining\nproblems in some locales (and it still does on my local FreeBSD\nbattlestation):\n https://github.com/freebsd/freebsd-src/commit/c48dc2a193b9befceda8dfc6f894d73251cc00a4\n https://www.illumos.org/rb/r/402/\n\n* The authors traced the remaining problem to flaws in the Unicode\nproject's CLDR/POSIX data, and the report was accepted:\n https://www.illumos.org/issues/7962\n https://unicode-org.atlassian.net/browse/CLDR-10394\n\nEventually that'll be fixed, and (I guess) trigger at least a CLDR\nminor version bump affecting all downstream consumers (ICU, ...).\nThen... maybe... at least FreeBSD will finally pass that test. I do\nwonder whether other consumer libraries are also confused by that\nproblem source data, and if not, why not; are glibc's problems related\nor just random code or data quality problems in different areas? (I\nalso don't know why a problem in that data should affect strxfrm() and\nstrcoll() differently, but I don't plan to find out owing to an acute\nshortage of round tuits).\n\nBut in the meantime, I still can't recommend turning on TRUST_STRXFRM\non any OS that I know of! The strcolltest.c program certainly still\nfinds fault with glibc 2.36 despite the last update on that redhat\nbugzilla ticket that suggested that the big resync back in 2.28 was\ngoing to fix it.\n\nTo be fair, macOS does actually pass that test for all locales, but\nthe strxfrm() result is too narrow to be useful, according to comments\nin our tree. I would guess that a couple of other OSes with the old\nBerkeley locale code are similar.",
"msg_date": "Thu, 15 Dec 2022 15:22:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "check_strxfrm_bug()"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 3:22 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> ... If you're worried that the bugs might\n> come back, then the test is insufficient: modern versions of both OSes\n> have strxfrm_l(), which we aren't checking.\n\nWith my garbage collector hat on, that made me wonder if there was\nsome more potential cleanup here: could we require locale_t yet? The\nlast straggler systems on our target OS list to add the POSIX locale_t\nstuff were Solaris 11.4 (2018) and OpenBSD 6.2 (2018). Apparently\nit's still too soon: we have two EOL'd OSes in the farm that are older\nthan that. But here's an interesting fact about wrasse, assuming its\nhost is gcc211: it looks like it can't even apply further OS updates\nbecause the hardware[1] is so old that Solaris doesn't support it\nanymore[2].\n\n[1] https://cfarm.tetaneutral.net/machines/list/\n[2] https://support.oracle.com/knowledge/Sun%20Microsystems/2382427_1.html\n\n\n",
"msg_date": "Sun, 18 Dec 2022 10:27:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 10:27 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> With my garbage collector hat on, that made me wonder if there was\n> some more potential cleanup here: could we require locale_t yet? The\n> last straggler systems on our target OS list to add the POSIX locale_t\n> stuff were Solaris 11.4 (2018) and OpenBSD 6.2 (2018). Apparently\n> it's still too soon: we have two EOL'd OSes in the farm that are older\n> than that. But here's an interesting fact about wrasse, assuming its\n> host is gcc211: it looks like it can't even apply further OS updates\n> because the hardware[1] is so old that Solaris doesn't support it\n> anymore[2].\n\nFor the record, now the OpenBSD machines have been upgraded, so now\n\"wrasse\" is the last relevant computer on earth with no POSIX\nlocale_t. Unfortunately there is no reason to think it's going to go\naway soon, so I'm just noting this fact here as a reminder for when it\neventually does...\n\n\n",
"msg_date": "Mon, 17 Apr 2023 08:00:22 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 03:22:59PM +1300, Thomas Munro wrote:\n> While studying Jeff's new crop of collation patches I noticed in\n> passing that check_strxfrm_bug() must surely by now be unnecessary.\n> The buffer overrun bugs were fixed a decade ago, and the relevant\n> systems are way out of support. If you're worried that the bugs might\n> come back, then the test is insufficient: modern versions of both OSes\n> have strxfrm_l(), which we aren't checking. In any case, we also\n> completely disable this stuff because of bugs and quality problems in\n> every other known implementation, via TRUST_STRXFRM (or rather the\n> lack of it). So I think it's time to remove that function; please see\n> attached.\n\nSeems reasonable to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 17 Apr 2023 14:06:21 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Dec 15, 2022 at 03:22:59PM +1300, Thomas Munro wrote:\n>> While studying Jeff's new crop of collation patches I noticed in\n>> passing that check_strxfrm_bug() must surely by now be unnecessary.\n>> The buffer overrun bugs were fixed a decade ago, and the relevant\n>> systems are way out of support. If you're worried that the bugs might\n>> come back, then the test is insufficient: modern versions of both OSes\n>> have strxfrm_l(), which we aren't checking. In any case, we also\n>> completely disable this stuff because of bugs and quality problems in\n>> every other known implementation, via TRUST_STRXFRM (or rather the\n>> lack of it). So I think it's time to remove that function; please see\n>> attached.\n\n> Seems reasonable to me.\n\n+1. I wonder if we should go further and get rid of TRUST_STRXFRM\nand the not-so-trivial amount of code around it (pg_strxfrm_enabled\netc). Carrying that indefinitely in the probably-vain hope that\nthe libraries will become trustworthy seems rather pointless.\nBesides, if such a miracle does occur, we can dig the code out\nof our git history.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 17 Apr 2023 17:48:19 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> +1. I wonder if we should go further and get rid of TRUST_STRXFRM\n> and the not-so-trivial amount of code around it (pg_strxfrm_enabled\n> etc). Carrying that indefinitely in the probably-vain hope that\n> the libraries will become trustworthy seems rather pointless.\n> Besides, if such a miracle does occur, we can dig the code out\n> of our git history.\n\n+1 for getting rid of TRUST_STRXFRM.\n\nICU-based collations (which aren't affected by TRUST_STRXFRM) are\nbecoming the de facto standard (possibly even the de jure standard).\nSo even if we thought that the situation with strxfrm() had improved,\nwe'd still have little motivation to do anything about it.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 17 Apr 2023 15:40:14 -0700",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 03:40:14PM -0700, Peter Geoghegan wrote:\n> On Mon, Apr 17, 2023 at 2:48 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> +1. I wonder if we should go further and get rid of TRUST_STRXFRM\n>> and the not-so-trivial amount of code around it (pg_strxfrm_enabled\n>> etc). Carrying that indefinitely in the probably-vain hope that\n>> the libraries will become trustworthy seems rather pointless.\n>> Besides, if such a miracle does occur, we can dig the code out\n>> of our git history.\n> \n> +1 for getting rid of TRUST_STRXFRM.\n> \n> ICU-based collations (which aren't affected by TRUST_STRXFRM) are\n> becoming the de facto standard (possibly even the de jure standard).\n> So even if we thought that the situation with strxfrm() had improved,\n> we'd still have little motivation to do anything about it.\n\nMakes sense to do some cleanup now as this is new in the tree.\nPerhaps somebody from the RMT would like to comment?\n\nFYI, Jeff has also posted patches to replace this CFLAGS with a GUC:\nhttps://www.postgresql.org/message-id/6ec4ad7f93f255dbb885da0a664d9c77ed4872c4.camel@j-davis.com\n--\nMichael",
"msg_date": "Tue, 18 Apr 2023 08:52:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Tue, Apr 18, 2023 at 11:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n> On Mon, Apr 17, 2023 at 03:40:14PM -0700, Peter Geoghegan wrote:\n> > +1 for getting rid of TRUST_STRXFRM.\n\n+1\n\nThe situation is not improving fast, and requires hard work to follow\non each OS. Clearly, mainstreaming ICU is the way to go. libc\nsupport will always have niche uses, to be compatible with other\nsoftware on the box, but trusting strxfrm doesn't seem to be on the\ncards any time soon.\n\n\n",
"msg_date": "Wed, 19 Apr 2023 13:19:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 4/18/23 9:19 PM, Thomas Munro wrote:\r\n> On Tue, Apr 18, 2023 at 11:52 AM Michael Paquier <michael@paquier.xyz> wrote:\r\n>> On Mon, Apr 17, 2023 at 03:40:14PM -0700, Peter Geoghegan wrote:\r\n>>> +1 for getting rid of TRUST_STRXFRM.\r\n> \r\n> +1\r\n> \r\n> The situation is not improving fast, and requires hard work to follow\r\n> on each OS. Clearly, mainstreaming ICU is the way to go. libc\r\n> support will always have niche uses, to be compatible with other\r\n> software on the box, but trusting strxfrm doesn't seem to be on the\r\n> cards any time soon.\r\n\r\n[RMT hat, personal opinion on RMT]\r\n\r\nTo be clear, is the proposal to remove both \"check_strxfrm_bug\" and \r\n\"TRUST_STRXFRM\"?\r\n\r\nGiven a bunch of folks who have expertise in this area of code all agree \r\nwith removing the above as part of the collation cleanups targeted for \r\nv16, I'm inclined to agree. I don't really see the need for an explicit \r\nRMT action, but based on the consensus this seems OK to add as an open item.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 18 Apr 2023 22:31:15 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 4/18/23 21:19, Thomas Munro wrote:\n> On Tue, Apr 18, 2023 at 11:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> On Mon, Apr 17, 2023 at 03:40:14PM -0700, Peter Geoghegan wrote:\n>> > +1 for getting rid of TRUST_STRXFRM.\n> \n> +1\n> \n> The situation is not improving fast, and requires hard work to follow\n> on each OS. Clearly, mainstreaming ICU is the way to go. libc\n> support will always have niche uses, to be compatible with other\n> software on the box, but trusting strxfrm doesn't seem to be on the\n> cards any time soon.\n\nI have wondered a few times, given the known issues with strxfrm, how is \nthe use in selfuncs.c still ok. Has anyone taken a hard look at that?\n\n-- \nJoe Conway\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n\n",
"msg_date": "Wed, 19 Apr 2023 10:40:11 -0400",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I have wondered a few times, given the known issues with strxfrm, how is \n> the use in selfuncs.c still ok. Has anyone taken a hard look at that?\n\nOn the one hand, we only need approximately-correct results in that\ncode. On the other, the result is fed to convert_string_to_scalar(),\nwhich has a rather naive idea that it's dealing with ASCII text.\nI've seen at least some strxfrm output that isn't even vaguely\ntextual-looking.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 19 Apr 2023 11:01:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Wed, Apr 19, 2023 at 2:31 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> To be clear, is the proposal to remove both \"check_strxfrm_bug\" and\n> \"TRUST_STRXFRM\"?\n>\n> Given a bunch of folks who have expertise in this area of code all agree\n> with removing the above as part of the collation cleanups targeted for\n> v16, I'm inclined to agree. I don't really see the need for an explicit\n> RMT action, but based on the consensus this seems OK to add as an open item.\n\nThanks all. I went ahead and removed check_strxfrm_bug().\n\nI could write a patch to remove the libc strxfrm support, but since\nJeff recently wrote new code in 16 to abstract that stuff, he might\nprefer to look at it?\n\n\n",
"msg_date": "Thu, 20 Apr 2023 13:34:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Thu, 2023-04-20 at 13:34 +1200, Thomas Munro wrote:\n> I could write a patch to remove the libc strxfrm support, but since\n> Jeff recently wrote new code in 16 to abstract that stuff, he might\n> prefer to look at it?\n\n+1 to removing it.\n\nAs far as how it's removed, we could directly check:\n\n if (!collate_c && !(locale && locale->provider == COLLPROVIDER_ICU))\n abbreviate = false;\n\nas it was before, or we could still try to hide it as a detail behind a\nfunction. I don't have a strong opinion there, though I thought it\nmight be good for varlena.c to not know those internal details.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Thu, 20 Apr 2023 08:10:54 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 4/19/23 9:34 PM, Thomas Munro wrote:\r\n> On Wed, Apr 19, 2023 at 2:31 PM Jonathan S. Katz <jkatz@postgresql.org> wrote:\r\n>> To be clear, is the proposal to remove both \"check_strxfrm_bug\" and\r\n>> \"TRUST_STRXFRM\"?\r\n>>\r\n>> Given a bunch of folks who have expertise in this area of code all agree\r\n>> with removing the above as part of the collation cleanups targeted for\r\n>> v16, I'm inclined to agree. I don't really see the need for an explicit\r\n>> RMT action, but based on the consensus this seems OK to add as an open item.\r\n> \r\n> Thanks all. I went ahead and removed check_strxfrm_bug().\r\n\r\nThanks! For housekeeping, I put this into \"Open Items\" and marked it as \r\nresolved.\r\n\r\n> I could write a patch to remove the libc strxfrm support, but since\r\n> Jeff recently wrote new code in 16 to abstract that stuff, he might\r\n> prefer to look at it?\r\n\r\nI believe we'd be qualifying this as an open item too? If so, let's add it.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Sat, 22 Apr 2023 15:14:49 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Mon, Apr 17, 2023 at 8:00 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Dec 18, 2022 at 10:27 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > With my garbage collector hat on, that made me wonder if there was\n> > some more potential cleanup here: could we require locale_t yet? The\n> > last straggler systems on our target OS list to add the POSIX locale_t\n> > stuff were Solaris 11.4 (2018) and OpenBSD 6.2 (2018). Apparently\n> > it's still too soon: we have two EOL'd OSes in the farm that are older\n> > than that. But here's an interesting fact about wrasse, assuming its\n> > host is gcc211: it looks like it can't even apply further OS updates\n> > because the hardware[1] is so old that Solaris doesn't support it\n> > anymore[2].\n>\n> For the record, now the OpenBSD machines have been upgraded, so now\n> \"wrasse\" is the last relevant computer on earth with no POSIX\n> locale_t. Unfortunately there is no reason to think it's going to go\n> away soon, so I'm just noting this fact here as a reminder for when it\n> eventually does...\n\nSince talk of server threads erupted again, I just wanted to note that\nthis system locale API stuff would be on the long list of\nmicro-obstacles. You'd *have* to use the locale_t-based interfaces\n(or come up with replacements using a big ugly lock to serialise all\naccess to the process-global locale, or allow only ICU locale support\nin that build, but those seem like strange lengths to go to just to\nsupport a dead version of Solaris). There are still at least a couple\nof functions that lack XXX_l variants in the standard: mbstowcs() and\nwcstombs() (though we use the non-standard _l variants if we find them\nin <xlocale.h>), but that's OK because we use uselocale() and not\nsetlocale(), because uselocale() is required to be thread-local. The\nuse of setlocale() to set up the per-backend/per-database default\nlocale would have to be replaced with uselocale(). In other words, I\nthink wrasse would not be coming with us on this hypothetical quest.\n\n\n",
"msg_date": "Mon, 12 Jun 2023 10:15:15 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 12/06/2023 01:15, Thomas Munro wrote:\n> There are still at least a couple\n> of functions that lack XXX_l variants in the standard: mbstowcs() and\n> wcstombs() (though we use the non-standard _l variants if we find them\n> in <xlocale.h>), but that's OK because we use uselocale() and not\n> setlocale(), because uselocale() is required to be thread-local.\n\nRight, mbstowcs() and wcstombs() are already thread-safe, that's why \nthere are no _l variants of them.\n\n> The use of setlocale() to set up the per-backend/per-database\n> default locale would have to be replaced with uselocale().\nThis recent bug report is also related to that: \nhttps://www.postgresql.org/message-id/17946-3e84cb577e9551c3%40postgresql.org. \nIn a nutshell, libperl calls uselocale(), which overrides the setting we \ntry set with setlocale().\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 12 Jun 2023 11:48:14 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "The GCC build farm has just received some SPARC hardware new enough to\nrun modern Solaris (hostname gcc106), so if wrasse were moved over\nthere we could finally assume all systems have POSIX 2008 (AKA\nSUSv4)'s locale_t.\n\nIt's slightly annoying that Windows has locale_t but doesn't have\nuselocale(). It does have thread-local locales via another API,\nthough. I wonder how hard it would be to get to a point where all\nsystems have uselocale() too, by supplying a replacement. I noticed\nthat some other projects eg older versions of LLVM libcxx do that. I\nsee from one of their discussions[1] that it worked, except that\nthread-local locales are only available with one of the MinGW C\nruntimes and not another. We'd have to get to the bottom of that.\n\n[1] https://reviews.llvm.org/D40181\n\n\n",
"msg_date": "Wed, 28 Jun 2023 11:03:46 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The GCC build farm has just received some SPARC hardware new enough to\n> run modern Solaris (hostname gcc106), so if wrasse were moved over\n> there we could finally assume all systems have POSIX 2008 (AKA\n> SUSv4)'s locale_t.\n\nThat would look something like this.",
"msg_date": "Wed, 28 Jun 2023 13:02:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Wed, Jun 28, 2023 at 01:02:21PM +1200, Thomas Munro wrote:\n> On Wed, Jun 28, 2023 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The GCC build farm has just received some SPARC hardware new enough to\n> > run modern Solaris (hostname gcc106), so if wrasse were moved over\n> > there we could finally assume all systems have POSIX 2008 (AKA\n> > SUSv4)'s locale_t.\n> \n> That would look something like this.\n\nThis removes thirty-eight ifdefs, most of them located in the middle of\nfunction bodies. That's far more beneficial than most proposals to raise\nminimum requirements. +1 for revoking support for wrasse's OS version.\n(wrasse wouldn't move, but it would stop testing v17+.)\n\n\n",
"msg_date": "Sat, 1 Jul 2023 16:25:45 +0000",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sun, Jul 2, 2023 at 4:25 AM Noah Misch <noah@leadboat.com> wrote:\n> On Wed, Jun 28, 2023 at 01:02:21PM +1200, Thomas Munro wrote:\n> > On Wed, Jun 28, 2023 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > The GCC build farm has just received some SPARC hardware new enough to\n> > > run modern Solaris (hostname gcc106), so if wrasse were moved over\n> > > there we could finally assume all systems have POSIX 2008 (AKA\n> > > SUSv4)'s locale_t.\n> >\n> > That would look something like this.\n>\n> This removes thirty-eight ifdefs, most of them located in the middle of\n> function bodies. That's far more beneficial than most proposals to raise\n> minimum requirements. +1 for revoking support for wrasse's OS version.\n> (wrasse wouldn't move, but it would stop testing v17+.)\n\nGreat. It sounds like I should wait a few days for any other feedback\nand then push this patch. Thanks for looking.\n\n\n",
"msg_date": "Mon, 3 Jul 2023 13:49:02 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sun Jul 2, 2023 at 8:49 PM CDT, Thomas Munro wrote:\n> On Sun, Jul 2, 2023 at 4:25 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Wed, Jun 28, 2023 at 01:02:21PM +1200, Thomas Munro wrote:\n> > > On Wed, Jun 28, 2023 at 11:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > > The GCC build farm has just received some SPARC hardware new enough to\n> > > > run modern Solaris (hostname gcc106), so if wrasse were moved over\n> > > > there we could finally assume all systems have POSIX 2008 (AKA\n> > > > SUSv4)'s locale_t.\n> > >\n> > > That would look something like this.\n> >\n> > This removes thirty-eight ifdefs, most of them located in the middle of\n> > function bodies. That's far more beneficial than most proposals to raise\n> > minimum requirements. +1 for revoking support for wrasse's OS version.\n> > (wrasse wouldn't move, but it would stop testing v17+.)\n>\n> Great. It sounds like I should wait a few days for any other feedback\n> and then push this patch. Thanks for looking.\n\nThe patch looks good to me as well. Happy to rebase my other patch on\nthis one.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Mon, 03 Jul 2023 09:52:23 -0500",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Tue, Jul 4, 2023 at 2:52 AM Tristan Partin <tristan@neon.tech> wrote:\n> The patch looks good to me as well. Happy to rebase my other patch on\n> this one.\n\nThanks. Here is a slightly tidier version. It passes on CI[1]\nincluding the optional extra MinGW64/Meson task, and the\nMinGW64/autoconf configure+build that is in the SanityCheck task.\nThere are two questions I'm hoping to get feedback on: (1) I believe\nthat defining HAVE_MBSTOWCS_L etc in win32_port.h is the best idea\nbecause that is also where we define mbstowcs_l() etc. Does that make\nsense? (2) IIRC, ye olde Solution.pm system might break if I were to\ncompletely remove HAVE_MBSTOWCS_L and HAVE_WCSTOMBS_L from Solution.pm\n(there must be a check somewhere that compares it with pg_config.h.in\nor something like that), but it would also break if I defined them as\n1 there (macro redefinition). Will undef in Solution.pm be\nacceptable (ie define nothing to avoid redefinition, but side-step the\nsanity check)? It's a bit of a kludge, but IIRC we're dropping that\n3rd build system in 17 so maybe that's OK? (Not tested as I don't\nhave Windows and CI doesn't test Solution.pm, so I'd be grateful if\nsomeone who has Windows/Solution.pm setup could try this.)\n\n[1] https://cirrus-ci.com/build/5298278007308288",
"msg_date": "Wed, 5 Jul 2023 10:15:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Wed, Jul 5, 2023 at 10:15 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> [1] https://cirrus-ci.com/build/5298278007308288\n\nThat'll teach me to be impatient. I only waited for compiling to\nfinish after triggering the optional extra MinGW task before sending\nthe above email, figuring that the only risk was there, but then the\npg_upgrade task failed due to mismatched locales. Apparently there is\nsomething I don't understand yet about locale_t support under MinGW.\n\n\n",
"msg_date": "Wed, 5 Jul 2023 11:27:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 05.07.23 00:15, Thomas Munro wrote:\n> On Tue, Jul 4, 2023 at 2:52 AM Tristan Partin <tristan@neon.tech> wrote:\n>> The patch looks good to me as well. Happy to rebase my other patch on\n>> this one.\n> \n> Thanks. Here is a slightly tidier version. It passes on CI[1]\n> including the optional extra MinGW64/Meson task, and the\n> MinGW64/autoconf configure+build that is in the SanityCheck task.\n> There are two questions I'm hoping to get feedback on: (1) I believe\n> that defining HAVE_MBSTOWCS_L etc in win32_port.h is the best idea\n> because that is also where we define mbstowcs_l() etc. Does that make\n> sense? (2) IIRC, ye olde Solution.pm system might break if I were to\n> completely remove HAVE_MBSTOWCS_L and HAVE_WCSTOMBS_L from Solution.pm\n> (there must be a check somewhere that compares it with pg_config.h.in\n> or something like that), but it would also break if I defined them as\n> 1 there (macro redefinition).\n\nI think the correct solution is to set HAVE_MBSTOWCS_L in Solution.pm. \nCompare HAVE_FSEEKO, which is set in Solution.pm with fseeko being \ndefined in win32_port.h.\n\n\n\n",
"msg_date": "Thu, 6 Jul 2023 18:20:26 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Fri, Jul 7, 2023 at 4:20 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I think the correct solution is to set HAVE_MBSTOWCS_L in Solution.pm.\n> Compare HAVE_FSEEKO, which is set in Solution.pm with fseeko being\n> defined in win32_port.h.\n\nIn this version I have it in there, but set it to undef. This way\nconfigure (MinGW), meson (MinGW or MSVC), and Solution.pm all agree.\nThis version passes on CI (not quite sure what I screwed up before).\n\nTo restate the problem I am solving with this Solution.pm change +\nassociated changes in the C: configure+MinGW disabled the libc\nprovider completely before. With this patch that is fixed, and that\nforced me to address the inconsistency (because if you have the libc\nprovider but no _l functions, you hit uselocale() code paths that\nWindows can't do). It was a bug, really, but I don't plan to\nback-patch anything (nobody really uses configure+MinGW builds AFAIK,\nthey exist purely as a developer [in]convenience). But that explains\nwhy I needed to make a change.\n\nThinking about how to bring this all into \"normal\" form -- where\nHAVE_XXX means \"system defines XXX\", not \"system defines XXX || we\ndefine a replacement\" -- leads to the attached. Do you like this one?",
"msg_date": "Sat, 8 Jul 2023 08:30:31 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Fri Jul 7, 2023 at 3:30 PM CDT, Thomas Munro wrote:\n> On Fri, Jul 7, 2023 at 4:20 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > I think the correct solution is to set HAVE_MBSTOWCS_L in Solution.pm.\n> > Compare HAVE_FSEEKO, which is set in Solution.pm with fseeko being\n> > defined in win32_port.h.\n>\n> In this version I have it in there, but set it to undef. This way\n> configure (MinGW), meson (MinGW or MSVC), and Solution.pm all agree.\n> This version passes on CI (not quite sure what I screwed up before).\n>\n> To restate the problem I am solving with this Solution.pm change +\n> associated changes in the C: configure+MinGW disabled the libc\n> provider completely before. With this patch that is fixed, and that\n> forced me to address the inconsistency (because if you have the libc\n> provider but no _l functions, you hit uselocale() code paths that\n> Windows can't do). It was a bug, really, but I don't plan to\n> back-patch anything (nobody really uses configure+MinGW builds AFAIK,\n> they exist purely as a developer [in]convenience). But that explains\n> why I needed to make a change.\n>\n> Thinking about how to bring this all into \"normal\" form -- where\n> HAVE_XXX means \"system defines XXX\", not \"system defines XXX || we\n> define a replacement\" -- leads to the attached. Do you like this one?\n\nShould you wrap the two _l function replacements in HAVE_USELOCALE\ninstead of WIN32?\n\n> +if not cc.has_type('locale_t', prefix: '#include <locale.h>') and cc.has_type('locale_t', prefix: '#include <xlocale.h>')\n\nI wouldn't mind a line break after the 'and'.\n\nOther than these comments, the patch looks fine to me.\n\n-- \nTristan Partin\nNeon (https://neon.tech)\n\n\n",
"msg_date": "Fri, 07 Jul 2023 15:52:36 -0500",
"msg_from": "\"Tristan Partin\" <tristan@neon.tech>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 8:52 AM Tristan Partin <tristan@neon.tech> wrote:\n> Should you wrap the two _l function replacements in HAVE_USELOCALE\n> instead of WIN32?\n\nI find that more confusing, and I'm also not sure if HAVE_USELOCALE is\neven going to survive (based on your nearby thread). I mean, by the\nusual criteria that we applied to a lot of other system library\nfunctions in the 16 cycle, I'd drop it. It's in the standard and all\nrelevant systems have it except Windows which we have to handle with\nspecial pain-in-the-neck logic anyway.\n\n> > +if not cc.has_type('locale_t', prefix: '#include <locale.h>') and cc.has_type('locale_t', prefix: '#include <xlocale.h>')\n>\n> I wouldn't mind a line break after the 'and'.\n\nAh, right, I am still learning what is allowed in this syntax... will do.\n\n> Other than these comments, the patch looks fine to me.\n\nThanks. I will wait a bit to see if Peter has any other comments and\nthen push this. I haven't actually tested with Solution.pm due to\nlack of CI for that, but fingers crossed, since the build systems will\nnow agree, reducing the screw-up surface.\n\n\n",
"msg_date": "Sat, 8 Jul 2023 10:13:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sat, Jul 8, 2023 at 10:13 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks. I will wait a bit to see if Peter has any other comments and\n> then push this. I haven't actually tested with Solution.pm due to\n> lack of CI for that, but fingers crossed, since the build systems will\n> now agree, reducing the screw-up surface.\n\nDone. Let's see what the build farm thinks...\n\n\n",
"msg_date": "Sun, 9 Jul 2023 12:03:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 07.07.23 22:30, Thomas Munro wrote:\n> Thinking about how to bring this all into \"normal\" form -- where\n> HAVE_XXX means \"system defines XXX\", not \"system defines XXX || we\n> define a replacement\"\n\nHAVE_XXX means \"code can use XXX\", doesn't matter how it got there (it \ncould also be a libpgport replacement).\n\nSo I don't think this code is correct. AFAICT, there is nothing right \nnow that can possibly define HAVE_MBSTOWCS_L on Windows/MSVC. Was that \nthe intention?\n\nI think these changes would need to be reverted to make this correct:\n\n-# MSVC has replacements defined in src/include/port/win32_port.h.\n-if cc.get_id() == 'msvc'\n- cdata.set('HAVE_WCSTOMBS_L', 1)\n- cdata.set('HAVE_MBSTOWCS_L', 1)\n-endif\n\n- HAVE_MBSTOWCS_L => 1,\n+ HAVE_MBSTOWCS_L => undef,\n\n- HAVE_WCSTOMBS_L => 1,\n+ HAVE_WCSTOMBS_L => undef,\n\n\n\n",
"msg_date": "Sun, 9 Jul 2023 08:20:14 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 6:20 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> So I don't think this code is correct. AFAICT, there is nothing right\n> now that can possibly define HAVE_MBSTOWCS_L on Windows/MSVC. Was that\n> the intention?\n\nYes, that was my intention. Windows actually doesn't have them. The\nautoconf/MinGW test result was telling the truth. Somehow I had to\nmake the three build systems agree on this. Either by strong-arming\nall three of them to emit a hard-coded claim that it does, or by\nremoving the test that produces a different answer in different build\nsystems. I will happily do it the other way if you insist, which\nwould involve restoring the meson.build and Solultion.pm kludges you\nquoted, but I'd also have to add a compatible kludge to configure.ac.\nIt doesn't seem like an improvement to me but I don't feel strongly\nabout it. In the end, Solution.pm and configure.ac will be vaporised\nby lasers, so we'll be left with 0 or 1 special cases. I don't care\nmuch, but I like 0, it's nice and round.\n\n\n",
"msg_date": "Sun, 9 Jul 2023 18:35:39 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Sun, Jul 9, 2023 at 6:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Jul 9, 2023 at 6:20 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > So I don't think this code is correct. AFAICT, there is nothing right\n> > now that can possibly define HAVE_MBSTOWCS_L on Windows/MSVC. Was that\n> > the intention?\n>\n> Yes, that was my intention. Windows actually doesn't have them.\n\nThinking about that some more... Its _XXX implementations don't deal\nwith UTF-8 the way Unix-based developers would expect, and are\ntherefore just portability hazards, aren't they? What would we gain\nby restoring the advertisement that they are available? Perhaps we\nshould go the other way completely and remove the relevant #defines\nfrom win32_port.h, and fully confine knowledge of them to pg_locale.c?\n It knows how to deal with that. Here is a patch trying this idea\nout, with as slightly longer explanation.",
"msg_date": "Mon, 10 Jul 2023 14:51:37 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On 10.07.23 04:51, Thomas Munro wrote:\n> On Sun, Jul 9, 2023 at 6:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Sun, Jul 9, 2023 at 6:20 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>> So I don't think this code is correct. AFAICT, there is nothing right\n>>> now that can possibly define HAVE_MBSTOWCS_L on Windows/MSVC. Was that\n>>> the intention?\n>>\n>> Yes, that was my intention. Windows actually doesn't have them.\n> \n> Thinking about that some more... Its _XXX implementations don't deal\n> with UTF-8 the way Unix-based developers would expect, and are\n> therefore just portability hazards, aren't they? What would we gain\n> by restoring the advertisement that they are available? Perhaps we\n> should go the other way completely and remove the relevant #defines\n> from win32_port.h, and fully confine knowledge of them to pg_locale.c?\n> It knows how to deal with that. Here is a patch trying this idea\n> out, with as slightly longer explanation.\n\nThis looks sensible to me.\n\nIf we ever need mbstowcs_l() etc. outside of pg_locale.c, then the \nproper way would be to make a mbstowcs_l.c file in src/port/.\n\nBut I like your approach for now because it moves us more firmly into \nthe direction of having it contained in pg_locale.c, instead of having \nsome knowledge global and some local.\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:28:40 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: check_strxfrm_bug()"
},
{
"msg_contents": "On Tue, Jul 11, 2023 at 2:28 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> This looks sensible to me.\n>\n> If we ever need mbstowcs_l() etc. outside of pg_locale.c, then the\n> proper way would be to make a mbstowcs_l.c file in src/port/.\n>\n> But I like your approach for now because it moves us more firmly into\n> the direction of having it contained in pg_locale.c, instead of having\n> some knowledge global and some local.\n\nThanks. Pushed.\n\nThat leaves only one further cleanup opportunity from discussion this\nthread, for future work: a TRUST_STRXFRM-ectomy.\n\n\n",
"msg_date": "Tue, 11 Jul 2023 09:54:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: check_strxfrm_bug()"
}
] |
[
{
"msg_contents": "I'm in the middle of working on making some adjustments to the costs\nof Incremental Sorts and I see the patch I wrote changes the plan in\nthe drop-index-concurrently-1 isolation test.\n\nThe particular plan changed currently expects:\n\n-----------------------------------------------\nSort\n Sort Key: id, data\n -> Index Scan using test_dc_pkey on test_dc\n Filter: ((data)::text = '34'::text)\n\nIt seems fairly clear from reading the test spec that this plan is\nreally meant to be a seq scan plan and the change made in [1] adjusted\nthat without any regard for that.\n\nThat seems to have come around because of how the path generation of\nincremental sorts work. The current incremental sort path generation\nwill put a Sort path atop of the cheapest input path, even if that\ncheapest input path has presorted keys. The test_dc_pkey index\nprovides presorted input for the required sort order. Prior to\nincremental sort, we did not consider paths which only provided\npresorted input to be useful paths, hence we used to get a seq scan\nplan.\n\nI propose the attached which gets rid of the not-so-great casting\nmethod that was originally added to this test to try and force the seq\nscan. It seems a little dangerous to put in hacks like that to force\na particular plan when the resulting plan ends up penalized with a\n(1.0e10) disable_cost. The planner is just not going to be stable\nwhen the plan includes such a large penalty. To force the planner,\nI've added another test step to do set enable_seqscan to true and\nadjusted the permutations to run that just before preparing the seq\nscan query.\n\nI also tried to make it more clear that we want to be running the\nquery twice, once with an index scan and again with a seq scan. 
I'm\nhoping the changes to the prepared query names and the extra comments\nwill help reduce the chances of this getting broken again in the\nfuture.\n\nDavid\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blobdiff;f=src/test/isolation/expected/drop-index-concurrently-1.out;h=8e6adb66bb1479b8d7db2fcf5f70b89acd3af577;hp=75dff56bc46d40aa8eb012543044b7c10d516b7e;hb=d2d8a229bc5;hpb=3c8553547b1493c4afdb80393f4a47dbfa019a79",
"msg_date": "Thu, 15 Dec 2022 18:26:39 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "The drop-index-concurrently-1 isolation test no longer tests what it\n was meant to"
},
{
"msg_contents": "On Thu, 15 Dec 2022 at 18:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> I propose the attached which gets rid of the not-so-great casting\n> method that was originally added to this test to try and force the seq\n> scan. It seems a little dangerous to put in hacks like that to force\n> a particular plan when the resulting plan ends up penalized with a\n> (1.0e10) disable_cost. The planner is just not going to be stable\n> when the plan includes such a large penalty. To force the planner,\n> I've added another test step to do set enable_seqscan to true and\n> adjusted the permutations to run that just before preparing the seq\n> scan query.\n\nPushed and backpatched to 13, where incremental sorts were added.\n\nDavid\n\n\n",
"msg_date": "Fri, 16 Dec 2022 11:42:30 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: The drop-index-concurrently-1 isolation test no longer tests what\n it was meant to"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile investigating the issue reported on pg_hint_plan[1], I realized\nthat stmt_end() callback is not called if an error raised during the\nstatement execution is caught. I've attached the patch to check when\nstmt_beg() and stmt_end() are called. Here is an example:\n\npostgres(1:3220232)=# create or replace function testfn(a text) returns int as\n$$\ndeclare\n x int;\nbegin\n select a::int into x;\n return x;\n exception when others then return 99;\nend;\n$$\nlanguage plpgsql;\nCREATE FUNCTION\n\npostgres(1:3220232)=# select testfn('1');\nNOTICE: stmt_beg toplevel_block\nNOTICE: stmt_beg stmt SQL statement\nNOTICE: stmt_end stmt SQL statement\nNOTICE: stmt_beg stmt RETURN\nNOTICE: stmt_end stmt RETURN\nNOTICE: stmt_end toplevel_block\n testfn\n--------\n 1\n(1 row)\n\npostgres(1:3220232)=# select testfn('x');\nNOTICE: stmt_beg toplevel_block\nNOTICE: stmt_beg stmt SQL statement\nNOTICE: stmt_beg stmt RETURN\nNOTICE: stmt_end stmt RETURN\nNOTICE: stmt_end toplevel_block\n testfn\n--------\n 99\n(1 row)\n\nIn exec_stmt_block(), we call exec_stmts() in a PG_TRY() block and\ncall stmt_beg() and stmt_end() callbacks for each statement executed\nthere. However, if an error is caught during executing a statement, we\njump to PG_CATCH() block in exec_stmt_block() so we don't call\nstmt_end() callback that is supposed to be called in exec_stmts(). To\nfix it, I think we can call stmt_end() callback in PG_CATCH() block.\n\npg_hint_plan increments and decrements a count in stmt_beg() and\nstmt_end() callbacks, respectively[2]. It resets the counter when\nraising an ERROR (not caught). But if an ERROR is caught, the counter\ncould be left as an invalid value.\n\nIs this a bug in plpgsql?\n\nRegards,\n\n[1] https://github.com/ossc-db/pg_hint_plan/issues/93\n[2] https://github.com/ossc-db/pg_hint_plan/blob/master/pg_hint_plan.c#L4870\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 15 Dec 2022 16:24:23 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "čt 15. 12. 2022 v 8:25 odesílatel Masahiko Sawada <sawada.mshk@gmail.com>\nnapsal:\n\n> Hi,\n>\n> While investigating the issue reported on pg_hint_plan[1], I realized\n> that stmt_end() callback is not called if an error raised during the\n> statement execution is caught. I've attached the patch to check when\n> stmt_beg() and stmt_end() are called. Here is an example:\n>\n> postgres(1:3220232)=# create or replace function testfn(a text) returns\n> int as\n> $$\n> declare\n> x int;\n> begin\n> select a::int into x;\n> return x;\n> exception when others then return 99;\n> end;\n> $$\n> language plpgsql;\n> CREATE FUNCTION\n>\n> postgres(1:3220232)=# select testfn('1');\n> NOTICE: stmt_beg toplevel_block\n> NOTICE: stmt_beg stmt SQL statement\n> NOTICE: stmt_end stmt SQL statement\n> NOTICE: stmt_beg stmt RETURN\n> NOTICE: stmt_end stmt RETURN\n> NOTICE: stmt_end toplevel_block\n> testfn\n> --------\n> 1\n> (1 row)\n>\n> postgres(1:3220232)=# select testfn('x');\n> NOTICE: stmt_beg toplevel_block\n> NOTICE: stmt_beg stmt SQL statement\n> NOTICE: stmt_beg stmt RETURN\n> NOTICE: stmt_end stmt RETURN\n> NOTICE: stmt_end toplevel_block\n> testfn\n> --------\n> 99\n> (1 row)\n>\n> In exec_stmt_block(), we call exec_stmts() in a PG_TRY() block and\n> call stmt_beg() and stmt_end() callbacks for each statement executed\n> there. However, if an error is caught during executing a statement, we\n> jump to PG_CATCH() block in exec_stmt_block() so we don't call\n> stmt_end() callback that is supposed to be called in exec_stmts(). To\n> fix it, I think we can call stmt_end() callback in PG_CATCH() block.\n>\n> pg_hint_plan increments and decrements a count in stmt_beg() and\n> stmt_end() callbacks, respectively[2]. It resets the counter when\n> raising an ERROR (not caught). But if an ERROR is caught, the counter\n> could be left as an invalid value.\n>\n> Is this a bug in plpgsql?\n>\n\nI think it is by design. 
There is not any callback that is called after an\nexception.\n\nIt is true, so some callbacks on statement error and function's error can\nbe nice. It can help me to implement profilers, or tracers more simply and\nmore robustly.\n\nBut I am not sure about performance impacts. This is on a critical path.\n\nRegards\n\nPavel\n\n\n\n\n\n> Regards,\n>\n> [1] https://github.com/ossc-db/pg_hint_plan/issues/93\n> [2]\n> https://github.com/ossc-db/pg_hint_plan/blob/master/pg_hint_plan.c#L4870\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n>",
"msg_date": "Thu, 15 Dec 2022 08:41:21 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "At Thu, 15 Dec 2022 08:41:21 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> čt 15. 12. 2022 v 8:25 odesílatel Masahiko Sawada <sawada.mshk@gmail.com>\n> napsal:\n> > Is this a bug in plpgsql?\n> >\n> \n> I think it is by design. There is not any callback that is called after an\n> exception.\n> \n> It is true, so some callbacks on statement error and function's error can\n> be nice. It can help me to implement profilers, or tracers more simply and\n> more robustly.\n> \n> But I am not sure about performance impacts. This is on a critical path.\n\nI didn't searched for, but I guess all of the end-side callback of all\nbegin-end type callbacks are not called on exception. Additional\nPG_TRY level wouldn't be acceptable for performance reasons.\n\nWhat we (pg_hint_plan people) want is any means to know that the\ntop-level function is exited, to reset function nest level. It would\nbe simpler than calling end callback at every nest level.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 16:53:25 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is\n caught"
},
{
"msg_contents": "čt 15. 12. 2022 v 8:53 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> At Thu, 15 Dec 2022 08:41:21 +0100, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote in\n> > čt 15. 12. 2022 v 8:25 odesílatel Masahiko Sawada <sawada.mshk@gmail.com\n> >\n> > napsal:\n> > > Is this a bug in plpgsql?\n> > >\n> >\n> > I think it is by design. There is not any callback that is called after\n> an\n> > exception.\n> >\n> > It is true, so some callbacks on statement error and function's error can\n> > be nice. It can help me to implement profilers, or tracers more simply\n> and\n> > more robustly.\n> >\n> > But I am not sure about performance impacts. This is on a critical path.\n>\n> I didn't searched for, but I guess all of the end-side callback of all\n> begin-end type callbacks are not called on exception. Additional\n> PG_TRY level wouldn't be acceptable for performance reasons.\n>\n> What we (pg_hint_plan people) want is any means to know that the\n> top-level function is exited, to reset function nest level. It would\n> be simpler than calling end callback at every nest level.\n>\n>\nI found some solution based by using fmgr hook\n\nhttps://github.com/okbob/plpgsql_check/commit/9a17e97354a48913de5219048ee3be6f8460bae9\n\nregards\n\nPavel\n\n\nregards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Thu, 15 Dec 2022 09:03:12 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "At Thu, 15 Dec 2022 09:03:12 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in \n> I found some solution based by using fmgr hook\n> \n> https://github.com/okbob/plpgsql_check/commit/9a17e97354a48913de5219048ee3be6f8460bae9\n\nOh! Thanks for the pointer, will look into that.\n\nBy the way, It seems to me that the tool is using\nRegisterResourceReleaseCallback to reset the function nest level. But\nsince there's a case where the mechanism doesn't work, I suspect that\nthe callback can be missed in some cases of error return, which seems\nto be a bug if it is true..\n\n# I haven't confirmed that behavior by myself, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 15 Dec 2022 17:34:41 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is\n caught"
},
{
"msg_contents": "čt 15. 12. 2022 v 9:34 odesílatel Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nnapsal:\n\n> At Thu, 15 Dec 2022 09:03:12 +0100, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote in\n> > I found some solution based by using fmgr hook\n> >\n> >\n> https://github.com/okbob/plpgsql_check/commit/9a17e97354a48913de5219048ee3be6f8460bae9\n>\n> Oh! Thanks for the pointer, will look into that.\n>\n> By the way, It seems to me that the tool is using\n> RegisterResourceReleaseCallback to reset the function nest level. But\n> since there's a case where the mechanism doesn't work, I suspect that\n> the callback can be missed in some cases of error return, which seems\n> to be a bug if it is true..\n>\n> # I haven't confirmed that behavior by myself, though.\n>\n\nit should be executed\n\n/*\n * Register or deregister callback functions for resource cleanup\n *\n * These functions are intended for use by dynamically loaded modules.\n * For built-in modules we generally just hardwire the appropriate calls.\n *\n * Note that the callback occurs post-commit or post-abort, so the callback\n * functions can only do noncritical cleanup.\n */\nvoid\nRegisterResourceReleaseCallback(ResourceReleaseCallback callback, void *arg)\n{\n\nbut it is based on resource owner, so timing can be different than you\nexpect\n\nRegards\n\nPavel\n\n\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>",
"msg_date": "Thu, 15 Dec 2022 12:07:19 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 4:53 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 15 Dec 2022 08:41:21 +0100, Pavel Stehule <pavel.stehule@gmail.com> wrote in\n> > čt 15. 12. 2022 v 8:25 odesílatel Masahiko Sawada <sawada.mshk@gmail.com>\n> > napsal:\n> > > Is this a bug in plpgsql?\n> > >\n> >\n> > I think it is by design. There is not any callback that is called after an\n> > exception.\n> >\n> > It is true, so some callbacks on statement error and function's error can\n> > be nice. It can help me to implement profilers, or tracers more simply and\n> > more robustly.\n> >\n> > But I am not sure about performance impacts. This is on a critical path.\n>\n> I didn't searched for, but I guess all of the end-side callback of all\n> begin-end type callbacks are not called on exception. Additional\n> PG_TRY level wouldn't be acceptable for performance reasons.\n\nI don't think we need additional PG_TRY() for that since exec_stmts()\nis already called in PG_TRY() if there is an exception block. I meant\nto call stmt_end() in PG_CATCH() in exec_stmt_block() (i.e. only when\nan error is caught by the exception block). Currently, if an error is\ncaught, we call stmt_begin() and stmt_end() for statements executed\ninside the exception block but call only stmt_begin() for the\nstatement that raised an error.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 15 Dec 2022 20:50:56 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "čt 15. 12. 2022 v 12:51 odesílatel Masahiko Sawada <sawada.mshk@gmail.com>\nnapsal:\n\n> On Thu, Dec 15, 2022 at 4:53 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Thu, 15 Dec 2022 08:41:21 +0100, Pavel Stehule <\n> pavel.stehule@gmail.com> wrote in\n> > > čt 15. 12. 2022 v 8:25 odesílatel Masahiko Sawada <\n> sawada.mshk@gmail.com>\n> > > napsal:\n> > > > Is this a bug in plpgsql?\n> > > >\n> > >\n> > > I think it is by design. There is not any callback that is called\n> after an\n> > > exception.\n> > >\n> > > It is true, so some callbacks on statement error and function's error\n> can\n> > > be nice. It can help me to implement profilers, or tracers more simply\n> and\n> > > more robustly.\n> > >\n> > > But I am not sure about performance impacts. This is on a critical\n> path.\n> >\n> > I didn't searched for, but I guess all of the end-side callback of all\n> > begin-end type callbacks are not called on exception. Additional\n> > PG_TRY level wouldn't be acceptable for performance reasons.\n>\n> I don't think we need additional PG_TRY() for that since exec_stmts()\n> is already called in PG_TRY() if there is an exception block. I meant\n> to call stmt_end() in PG_CATCH() in exec_stmt_block() (i.e. only when\n> an error is caught by the exception block). Currently, if an error is\n> caught, we call stmt_begin() and stmt_end() for statements executed\n> inside the exception block but call only stmt_begin() for the\n> statement that raised an error.\n>\n\nPG_TRY is used only for STMT_BLOCK, other statements don't use PG_TRY.\n\nI have no idea about possible performance impacts, I never tested it.\nPersonally, I like the possibility of having some error callback function.\nMaybe PG_TRY can be used, only when this callback is used. So there will\nnot be any impact on performance without some extensions that use it.\nUnfortunately, there are two functions necessary. 
Some exceptions can be\nraised after the last statement before the function ends. Changing\nbehaviour of stmt_end or func_end can be problematic, because after an\nexception a lot of internal API is not available, and you should know, so\nthis is that situation. Now anybody knows so at stmt_end function, the code\nis not after an exception.\n\nBut it can be not too easy, because there can be more chained extensions\nthat use dbg API - like PL profiler, PL debugger and plpgsql_check - and\nmaybe others.\n\nRegards\n\nPavel\n\n\n\n>\n> Regards,\n>\n> --\n> Masahiko Sawada\n> Amazon Web Services: https://aws.amazon.com\n>",
"msg_date": "Thu, 15 Dec 2022 14:34:16 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> I don't think we need additional PG_TRY() for that since exec_stmts()\n> is already called in PG_TRY() if there is an exception block. I meant\n> to call stmt_end() in PG_CATCH() in exec_stmt_block() (i.e. only when\n> an error is caught by the exception block). Currently, if an error is\n> caught, we call stmt_begin() and stmt_end() for statements executed\n> inside the exception block but call only stmt_begin() for the\n> statement that raised an error.\n\nI fail to see anything wrong with that. We never completed execution\nof the statement that raised an error, but calling stmt_end for it\nwould imply that we did. I think changing this will break more things\nthan it fixes, completely independently of whatever cost it would add.\n\nOr in other words: the initial complaint describes a bug in pg_hint_plan,\nnot one in plpgsql.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Dec 2022 10:49:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "On Thu, Dec 15, 2022 at 8:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > I don't think we need additional PG_TRY() for that since exec_stmts()\n> > is already called in PG_TRY() if there is an exception block. I meant\n> > to call stmt_end() in PG_CATCH() in exec_stmt_block() (i.e. only when\n> > an error is caught by the exception block). Currently, if an error is\n> > caught, we call stmt_begin() and stmt_end() for statements executed\n> > inside the exception block but call only stmt_begin() for the\n> > statement that raised an error.\n>\n> I fail to see anything wrong with that. We never completed execution\n> of the statement that raised an error, but calling stmt_end for it\n> would imply that we did. I think changing this will break more things\n> than it fixes, completely independently of whatever cost it would add.\n>\n> Or in other words: the initial complaint describes a bug in pg_hint_plan,\n> not one in plpgsql.\n>\n>\nThe OP suggests needing something akin to a \"finally\" callback for\nstatement. While a fine feature request for plpgsql its absence doesn't\nconstitute a bug.\n\nDavid J.",
"msg_date": "Thu, 15 Dec 2022 09:18:04 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 12:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <sawada.mshk@gmail.com> writes:\n> > I don't think we need additional PG_TRY() for that since exec_stmts()\n> > is already called in PG_TRY() if there is an exception block. I meant\n> > to call stmt_end() in PG_CATCH() in exec_stmt_block() (i.e. only when\n> > an error is caught by the exception block). Currently, if an error is\n> > caught, we call stmt_begin() and stmt_end() for statements executed\n> > inside the exception block but call only stmt_begin() for the\n> > statement that raised an error.\n>\n> I fail to see anything wrong with that. We never completed execution\n> of the statement that raised an error, but calling stmt_end for it\n> would imply that we did.\n\nThank you for the comment. Agreed.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 16 Dec 2022 16:23:13 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: plpgsq_plugin's stmt_end() is not called when an error is caught"
}
] |
[
{
"msg_contents": "I would find it useful if the cirrus scripts used make -k (and ninja -k) \nto keep building everything in the presence of errors. For example, I \nonce had some issue in code that caused a bunch of compiler warnings on \nWindows. The cirrus build would only show me the first one, then I had \nto fix, reupload, see the next one, etc. Are there any drawbacks to \nusing these options in this context?\n\n\n",
"msg_date": "Thu, 15 Dec 2022 12:31:26 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "cirrus scripts could use make -k"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-15 12:31:26 +0100, Peter Eisentraut wrote:\n> I would find it useful if the cirrus scripts used make -k (and ninja -k) to\n> keep building everything in the presence of errors. For example, I once had\n> some issue in code that caused a bunch of compiler warnings on Windows. The\n> cirrus build would only show me the first one, then I had to fix, reupload,\n> see the next one, etc. Are there any drawbacks to using these options in\n> this context?\n> \n\n-1 - it makes it much harder to find the first error. To the point of\nthe error output getting big enough that CI task output gets deleted\nmore quickly because of the larger output.\n\nI can see -k 3 or something though.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 23 Dec 2022 03:27:08 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: cirrus scripts could use make -k"
}
] |
[
{
"msg_contents": "Hello Hackers,\n\nI'm trying to get things going again on my temporal tables work, and \nhere is a small patch to move that forward.\n\nIt lets you create exclusion constraints on partitioned tables, similar \nto today's rules for b-tree primary keys & unique constraints:\njust as we permit a PK on a partitioned table when the PK's columns are \na superset of the partition keys, so we could also allow an exclusion \nconstraint when its columns are a superset of the partition keys.\n\nThis patch also requires the matching constraint columns to use equality \ncomparisons (`(foo WITH =)`), so it is really equivalent to the existing \nb-tree rule. Perhaps that is more conservative than necessary, but we \ncan't permit an arbitrary operator, since some might require testing \nrows that fall into other partitions. For example `(foo WITH <>)` would \nobviously cause problems.\n\nThe exclusion constraint may still include other columns beyond the \npartition keys, and those may use equality operators or something else.\n\nThis patch is required to support temporal partitioned tables, because \ntemporal tables use exclusion constraints as their primary key.\nEssentially they are `(id WITH =, valid_at with &&)`. Since the primary \nkey is not a b-tree, partitioning them would be forbidden prior to this \npatch. But now you could partition that table on `id`, and we could \nstill correctly validate the temporal PK without requiring rows from \nother partitions.\n\nThis patch may be helpful beyond just temporal tables (or for DIY \ntemporal tables), so it seems worth submitting it separately.\n\nYours,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Thu, 15 Dec 2022 15:33:34 -0800",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Exclusion constraints on partitioned tables"
},
{
"msg_contents": "Paul Jungwirth <pj@illuminatedcomputing.com> writes:\n> It lets you create exclusion constraints on partitioned tables, similar \n> to today's rules for b-tree primary keys & unique constraints:\n> just as we permit a PK on a partitioned table when the PK's columns are \n> a superset of the partition keys, so we could also allow an exclusion \n> constraint when its columns are a superset of the partition keys.\n\nOK. AFAICS that works in principle.\n\n> This patch also requires the matching constraint columns to use equality \n> comparisons (`(foo WITH =)`), so it is really equivalent to the existing \n> b-tree rule.\n\nThat's not quite good enough: you'd better enforce that it's the same\nequality operator (and same collation, if relevant) as is being used\nin the partition key. Remember that we don't have a requirement that\na datatype have only one equality operator; and these days I think\ncollation can affect equality, too.\n\nAnother problem is that while we can safely assume that we know what\nBTEqualStrategyNumber means in btree, we can NOT assume that we know\nwhat gist opclass strategy numbers mean: each opclass is free to\ndefine those as it sees fit. The part of your patch that is looking\nat RTEqualStrategyNumber seems dangerously broken to me.\n\nIt might work better to consider the operator itself and ask if\nit's equality in the same btree opfamily that's used by the\npartition key. (Hm, do we use btree opfamilies for all types of\npartitioning?)\n\nAnyway, I think something can be made of this, but you need to be less\nfuzzy about matching the equality semantics of the partition key.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 15 Dec 2022 19:12:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On 12/15/22 16:12, Tom Lane wrote:\n>> This patch also requires the matching constraint columns to use equality\n>> comparisons (`(foo WITH =)`), so it is really equivalent to the existing\n>> b-tree rule.\n> \n> That's not quite good enough: you'd better enforce that it's the same\n> equality operator (and same collation, if relevant) as is being used\n> in the partition key.\n> [snip]\n> It might work better to consider the operator itself and ask if\n> it's equality in the same btree opfamily that's used by the\n> partition key.\n\nThank you for taking a look! Here is a comparison on just the operator \nitself.\n\nI included a collation check too, but I'm not sure it's necessary. \nExclusion constraints don't have a collation per se; it comes from the \nindex, and we choose it just a little above in this function. (I'm not \neven sure how to elicit that new error message in a test case.)\n\nI'm not sure what to do about matching the opfamily. In practice an \nexclusion constraint will typically use gist, but the partition key will \nalways use btree/hash. You're saying that the equals operator can be \ninconsistent between those access methods? That is surprising to me, but \nI admit op classes/families are still sinking in. (Even prior to this \npatch, isn't the code for hash-based partitions looking up ptkey_eqop \nvia the hash opfamily, and then comparing it to idx_eqop looked up via \nthe btree opfamily?)\n\nIf partitions can only support btree-based exclusion constraints, you \nstill wouldn't be able to partition a temporal table, because those \nconstraints would always be gist. So I guess what I really want is to \nsupport gist index constraints on partitioned tables.\n\nRegards,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Thu, 15 Dec 2022 21:11:49 -0800",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "Le vendredi 16 décembre 2022, 06:11:49 CET Paul Jungwirth a écrit :\n> On 12/15/22 16:12, Tom Lane wrote:\n> >> This patch also requires the matching constraint columns to use equality\n> >> comparisons (`(foo WITH =)`), so it is really equivalent to the existing\n> >> b-tree rule.\n> > \n> > That's not quite good enough: you'd better enforce that it's the same\n> > equality operator (and same collation, if relevant) as is being used\n> > in the partition key.\n> > [snip]\n> > It might work better to consider the operator itself and ask if\n> > it's equality in the same btree opfamily that's used by the\n> > partition key.\n> \n> Thank you for taking a look! Here is a comparison on just the operator\n> itself.\n> \n\nI've taken a look at the patch, and I'm not sure why you keep the restriction \non the Gist operator being of the RTEqualStrategyNumber strategy. I don't \nthink we have any other place where we expect those strategy numbers to \nmatch. For hash it's different, as the hash-equality is the only operator \nstrategy and as such there is no other way to look at it. Can't we just \nenforce partition_operator == exclusion_operator without adding the \nRTEqualStrategyNumber for the opfamily into the mix ?\n\n\n\n\n\n\n\n",
"msg_date": "Tue, 24 Jan 2023 15:38:13 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On 1/24/23 06:38, Ronan Dunklau wrote:\n> I've taken a look at the patch, and I'm not sure why you keep the restriction\n> on the Gist operator being of the RTEqualStrategyNumber strategy. I don't\n> think we have any other place where we expect those strategy numbers to\n> match. For hash it's different, as the hash-equality is the only operator\n> strategy and as such there is no other way to look at it. Can't we just\n> enforce partition_operator == exclusion_operator without adding the\n> RTEqualStrategyNumber for the opfamily into the mix ?\n\nThank you for taking a look! I did some research on the history of the \ncode here, and I think I understand Tom's concern about making sure the \nindex uses the same equality operator as the partition. I was confused \nabout his remarks about the opfamily, but I agree with you that if the \noperator is the same, we should be okay.\n\nI added the code about RTEqualStrategyNumber because that's what we need \nto find an equals operator when the index is GiST (except if it's using \nan opclass from btree_gist; then it needs to be BTEqual again). But then \nI realized that for exclusion constraints we have already figured out \nthe operator (in RelationGetExclusionInfo) and put it in \nindexInfo->ii_ExclusionOps. So we can just compare against that. This \nworks whether your index uses btree_gist or not.\n\nHere is an updated patch with that change (also rebased).\n\nI also included a more specific error message. If we find a matching \ncolumn in the index but with the wrong operator, we should say so, and \nnot say there is no matching column.\n\nThanks,\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Fri, 17 Mar 2023 09:03:09 -0700",
"msg_from": "Paul Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": true,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "Le vendredi 17 mars 2023, 17:03:09 CET Paul Jungwirth a écrit :\n> I added the code about RTEqualStrategyNumber because that's what we need\n> to find an equals operator when the index is GiST (except if it's using\n> an opclass from btree_gist; then it needs to be BTEqual again). But then\n> I realized that for exclusion constraints we have already figured out\n> the operator (in RelationGetExclusionInfo) and put it in\n> indexInfo->ii_ExclusionOps. So we can just compare against that. This\n> works whether your index uses btree_gist or not.\n> \n> Here is an updated patch with that change (also rebased).\n\nThanks ! This looks fine to me like this.\n\n> \n> I also included a more specific error message. If we find a matching\n> column in the index but with the wrong operator, we should say so, and\n> not say there is no matching column.\n>\n\nI agree that's a nicer improvement. \n\nRegards,\n\n--\nRonan Dunklau\n\n\n\n\n",
"msg_date": "Mon, 20 Mar 2023 09:24:59 +0100",
"msg_from": "Ronan Dunklau <ronan.dunklau@aiven.io>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On 17.03.23 17:03, Paul Jungwirth wrote:\n> Thank you for taking a look! I did some research on the history of the \n> code here, and I think I understand Tom's concern about making sure the \n> index uses the same equality operator as the partition. I was confused \n> about his remarks about the opfamily, but I agree with you that if the \n> operator is the same, we should be okay.\n> \n> I added the code about RTEqualStrategyNumber because that's what we need \n> to find an equals operator when the index is GiST (except if it's using \n> an opclass from btree_gist; then it needs to be BTEqual again). But then \n> I realized that for exclusion constraints we have already figured out \n> the operator (in RelationGetExclusionInfo) and put it in \n> indexInfo->ii_ExclusionOps. So we can just compare against that. This \n> works whether your index uses btree_gist or not.\n> \n> Here is an updated patch with that change (also rebased).\n> \n> I also included a more specific error message. If we find a matching \n> column in the index but with the wrong operator, we should say so, and \n> not say there is no matching column.\n\nThis looks all pretty good to me. A few more comments:\n\nIt seems to me that many of the test cases added in indexing.sql are \nredundant with create_table.sql/alter_table.sql (or vice versa). Is \nthere a reason for this?\n\n\nThis is not really a problem in your patch, but I think in\n\n- if (partitioned && (stmt->unique || stmt->primary))\n+ if (partitioned && (stmt->unique || stmt->primary || \nstmt->excludeOpNames != NIL))\n\nthe stmt->primary is redundant and should be removed. Right now \n\"primary\" is always a subset of \"unique\", but presumably a future patch \nof yours wants to change that.\n\n\nFurthermore, I think it would be more elegant in your patch if you wrote \nstmt->excludeOpNames without the \"== NIL\" or \"!= NIL\", so that it \nbecomes a peer of stmt->unique. 
(I understand some people don't like \nthat style. But it is already used in that file.)\n\n\nI would consider rearranging some of the conditionals more as a \nselection of cases, like \"is it a unique constraint?\", \"else, is it an \nexclusion constraint?\" -- rather than the current \"is it an exclusion \nconstraint?, \"else, various old code\". For example, instead of\n\n if (stmt->excludeOpNames != NIL)\n idx_eqop = indexInfo->ii_ExclusionOps[j];\n else\n idx_eqop = get_opfamily_member(..., eq_strategy);\n\nconsider\n\n if (stmt->unique)\n idx_eqop = get_opfamily_member(..., eq_strategy);\n else if (stmt->excludeOpNames)\n idx_eqop = indexInfo->ii_ExclusionOps[j];\n Assert(idx_eqop);\n\nAlso, I would push the code\n\n if (accessMethodId == BTREE_AM_OID)\n eq_strategy = BTEqualStrategyNumber;\n\nfurther down into the loop, so that you don't have to remember in which \ncases eq_strategy is assigned or not.\n\n(It's also confusing that the eq_strategy variable is used for two \ndifferent things in this function, and that would clean that up.)\n\n\nFinally, this code\n\n+ att = TupleDescAttr(RelationGetDescr(rel),\n+ key->partattrs[i] - 1);\n+ ereport(ERROR,\n+ (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n+ errmsg(\"cannot match partition key \nto index on column \\\"%s\\\" using non-equal operator \\\"%s\\\".\",\n+ NameStr(att->attname), \nget_opname(indexInfo->ii_ExclusionOps[j]))));\n\ncould be simplified by using get_attname().\n\n\nThis is all just a bit of polishing. I think it would be good to go \nafter that.\n\n\n",
"msg_date": "Thu, 6 Jul 2023 10:03:20 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On Thu, Jul 6, 2023 at 1:03 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> This looks all pretty good to me. A few more comments:\n\nThanks for the feedback! New patch attached here. Responses below:\n\n> It seems to me that many of the test cases added in indexing.sql are\n> redundant with create_table.sql/alter_table.sql (or vice versa). Is\n> there a reason for this?\n\nYes, there is some overlap. I think that's just because there was\noverlap before, and I didn't want to delete the old tests completely.\nBut since indexing.sql has a fuller list of tests and is a superset of\nthe others, this new patch removes the redundant tests from\n{create,alter}_table.sql.\n\nBtw speaking of tests, I want to make sure this new feature will still\nwork when you're using btree_gist and and `EXCLUDE WITH (myint =,\nmytsrange &&)` (and not just `(myint4range =, mytsrange &&)`). Some of\nmy early attempts writing this patch worked w/o btree_gist but not w/\n(or vice versa). But as far as I know there's no way to test that in\nregress. I wound up writing a private shell script that just does\nthis:\n\n```\n--------\n-- test against btree_gist since we can't do that in the postgres\nregress test suite:\n\nCREATE EXTENSION btree_gist;\n\ncreate table partitioned (id int, valid_at tsrange, exclude using gist\n(id with =, valid_at with &&)) partition by range (id);\n-- should fail with a good error message:\ncreate table partitioned2 (id int, valid_at tsrange, exclude using\ngist (id with <>, valid_at with &&)) partition by range (id);\n```\n\nIs there some place in the repo to include a test like that? 
It seems\na little funny to put it in the btree_gist suite, but maybe that's the\nright answer.\n\n> This is not really a problem in your patch, but I think in\n>\n> - if (partitioned && (stmt->unique || stmt->primary))\n> + if (partitioned && (stmt->unique || stmt->primary ||\n> stmt->excludeOpNames != NIL))\n>\n> the stmt->primary is redundant and should be removed. Right now\n> \"primary\" is always a subset of \"unique\", but presumably a future patch\n> of yours wants to change that.\n\nDone! I don't think my temporal work changes that primary ⊆ unique. It\ndoes change that some primary/unique constraints will have non-null\nexcludeOpNames, which will require small changes here eventually. But\nthat should be part of the temporal patches, not this one.\n\n> Furthermore, I think it would be more elegant in your patch if you wrote\n> stmt->excludeOpNames without the \"== NIL\" or \"!= NIL\", so that it\n> becomes a peer of stmt->unique. (I understand some people don't like\n> that style. But it is already used in that file.)\n\nDone.\n\n> I would consider rearranging some of the conditionals more as a\n> selection of cases, like \"is it a unique constraint?\", \"else, is it an\n> exclusion constraint?\" -- rather than the current \"is it an exclusion\n> constraint?, \"else, various old code\".\n\nDone.\n\n> Also, I would push the code\n>\n> if (accessMethodId == BTREE_AM_OID)\n> eq_strategy = BTEqualStrategyNumber;\n>\n> further down into the loop, so that you don't have to remember in which\n> cases eq_strategy is assigned or not.\n>\n> (It's also confusing that the eq_strategy variable is used for two\n> different things in this function, and that would clean that up.)\n\nAgreed that it's confusing. 
Done.\n\n> Finally, this code\n>\n> + att = TupleDescAttr(RelationGetDescr(rel),\n> + key->partattrs[i] - 1);\n> + ereport(ERROR,\n> + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> + errmsg(\"cannot match partition key\n> to index on column \\\"%s\\\" using non-equal operator \\\"%s\\\".\",\n> + NameStr(att->attname),\n> get_opname(indexInfo->ii_ExclusionOps[j]))));\n>\n> could be simplified by using get_attname().\n\nOkay, done. I changed the similar error message just below too.\n\n> This is all just a bit of polishing. I think it would be good to go\n> after that.\n\nThanks!\n\n-- \nPaul ~{:-)\npj@illuminatedcomputing.com",
"msg_date": "Sat, 8 Jul 2023 18:21:55 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On 09.07.23 03:21, Paul A Jungwirth wrote:\n>> It seems to me that many of the test cases added in indexing.sql are\n>> redundant with create_table.sql/alter_table.sql (or vice versa). Is\n>> there a reason for this?\n> \n> Yes, there is some overlap. I think that's just because there was\n> overlap before, and I didn't want to delete the old tests completely.\n> But since indexing.sql has a fuller list of tests and is a superset of\n> the others, this new patch removes the redundant tests from\n> {create,alter}_table.sql.\n\nThis looks better.\n\n> Btw speaking of tests, I want to make sure this new feature will still\n> work when you're using btree_gist and and `EXCLUDE WITH (myint =,\n> mytsrange &&)` (and not just `(myint4range =, mytsrange &&)`). Some of\n> my early attempts writing this patch worked w/o btree_gist but not w/\n> (or vice versa). But as far as I know there's no way to test that in\n> regress. I wound up writing a private shell script that just does\n> this:\n\n> Is there some place in the repo to include a test like that? It seems\n> a little funny to put it in the btree_gist suite, but maybe that's the\n> right answer.\n\nI'm not sure what value we would get from testing this with btree_gist, \nbut if we wanted to do that, then adding a new test file to the \nbtree_gist sql/ directory would seem reasonable to me.\n\n(I would make the test a little bit bigger than you had shown, like \ninsert a few values.)\n\nIf you want to do that, please send another patch. Otherwise, I'm ok to \ncommit this one.\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 16:05:18 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 7:05 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> I'm not sure what value we would get from testing this with btree_gist,\n> but if we wanted to do that, then adding a new test file to the\n> btree_gist sql/ directory would seem reasonable to me.\n>\n> (I would make the test a little bit bigger than you had shown, like\n> insert a few values.)\n>\n> If you want to do that, please send another patch. Otherwise, I'm ok to\n> commit this one.\n\nI can get you a patch tonight or tomorrow. I think it's worth it since\nbtree_gist uses different strategy numbers than ordinary gist.\n\nThanks!\nPaul\n\n\n",
"msg_date": "Mon, 10 Jul 2023 08:06:31 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On Mon, Jul 10, 2023 at 8:06 AM Paul A Jungwirth\n<pj@illuminatedcomputing.com> wrote:\n>\n> On Mon, Jul 10, 2023 at 7:05 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > I'm not sure what value we would get from testing this with btree_gist,\n> > but if we wanted to do that, then adding a new test file to the\n> > btree_gist sql/ directory would seem reasonable to me.\n> >\n> > (I would make the test a little bit bigger than you had shown, like\n> > insert a few values.)\n> >\n> > If you want to do that, please send another patch. Otherwise, I'm ok to\n> > commit this one.\n>\n> I can get you a patch tonight or tomorrow. I think it's worth it since\n> btree_gist uses different strategy numbers than ordinary gist.\n\nPatch attached.\n\nRegards,\nPaul",
"msg_date": "Mon, 10 Jul 2023 22:52:05 -0700",
"msg_from": "Paul A Jungwirth <pj@illuminatedcomputing.com>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
},
{
"msg_contents": "On 11.07.23 07:52, Paul A Jungwirth wrote:\n> On Mon, Jul 10, 2023 at 8:06 AM Paul A Jungwirth\n> <pj@illuminatedcomputing.com> wrote:\n>>\n>> On Mon, Jul 10, 2023 at 7:05 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>> I'm not sure what value we would get from testing this with btree_gist,\n>>> but if we wanted to do that, then adding a new test file to the\n>>> btree_gist sql/ directory would seem reasonable to me.\n>>>\n>>> (I would make the test a little bit bigger than you had shown, like\n>>> insert a few values.)\n>>>\n>>> If you want to do that, please send another patch. Otherwise, I'm ok to\n>>> commit this one.\n>>\n>> I can get you a patch tonight or tomorrow. I think it's worth it since\n>> btree_gist uses different strategy numbers than ordinary gist.\n> \n> Patch attached.\n\nLooks good, committed.\n\nI had some second thoughts about the use of get_attname(). It seems the \nprevious code used the dominant style of extracting the attribute name \nfrom the open relation handle, so I kept it that way. That's also more \nefficient than going via the syscache.\n\n\n\n",
"msg_date": "Wed, 12 Jul 2023 09:34:42 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Exclusion constraints on partitioned tables"
}
] |
[
{
"msg_contents": "Over on [1] Pavel reports that PG16's performance is slower than\nPG14's for his ORDER BY aggregate query. This seems to be mainly down\nto how the costing works for Incremental Sort. Now, ever since\n1349d279, the planner will make a plan that's ordered by the GROUP BY\nclause and add on the path key requirements for any ORDER BY\naggregates. I had in mind that if someone already had tuned their\nschema to have a btree index on the GROUP BY clause to allow Group\nAggregate to have presorted rows from the index that the planner would\nlikely just take that index and add an Incremental Sort on top of that\nto obtain the rows in the order required for the aggregate functions.\n\nThe main reason for Pavel's reported regression is due to a cost\n\"pessimism factor\" that was added to incremental sorts where we\nmultiply by 1.5 to reduce the chances of an Incremental Sort plan due\nto uncertainties around the number of tuples per presorted group. We\nassume each group will have the same number of tuples. If there's some\nskew then there will be a larger N factor in the qsort complexity of\nO(N * log2(N)).\n\nOver on [1], I'm proposing that we remove that pessimism factor. If we\nkeep teaching the planner new tricks but cost them pessimistically\nthen we're not taking full advantage of said new tricks. If you\ndisagree with that, then best to raise it on [1].\n\nOn [1], in addition to removing the * 1.5 factor, I also propose that\nwe add a new GUC named \"enable_presorted_aggregate\", which, if turned\noff will make the planner not request that the plan is also ordered by\nthe aggregate function's ORDER BY / DISTINCT clause. The reason I\nthink that this is required is that even with removing the pessimism\nfactor from incremental sort, it's possible the planner will choose to\nperform a full sort rather than an incremental sort on top of some\nexisting index which provides presorted input for only the query's\nGROUP BY clause. 
e.g.\n\nCREATE TABLE ab (a INT, b INT);\nCREATE INDEX ON ab(a);\n\nEXPLAIN SELECT a,array_agg(b ORDER BY b) FROM ab GROUP BY a;\n\nPrevious to 1349d279, the planner, assuming there's a good amount of\nrows in the table, would be very likely to use the index for the GROUP\nBY then the executor would be left to do a sort on \"b\" within\nnodeAgg.c. The equivalent of that post-1349d279 is Index Scan ->\nIncremental Sort -> Group Aggregate (with presorted input). However,\nthe planner could choose to: Seq Scan -> Sort (a,b) -> Group Aggregate\n(with presorted input). So this leaves an opportunity for the planner\nto choose a worse plan.\n\nNormally we add some enable_* GUC to leave an escape hatch when we add\nsome new feature like this. I likely should have done that when I\nadded 1349d279, but I didn't and I want to now.\n\nI mainly just shifted this discussion out of [1] as we normally like\nto debate GUC names and I feel that the discussion over on the other\nthread is buried a little too deep to be visible to most people.\n\nCan anyone think of a better name? Or does anyone see error with my ambition?\n\nDavid\n\n[1] https://www.postgresql.org/message-id/9f61ddbf-2989-1536-b31e-6459370a6baa@postgrespro.ru",
"msg_date": "Fri, 16 Dec 2022 12:47:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add enable_presorted_aggregate GUC"
},
{
"msg_contents": "On Fri, 16 Dec 2022 at 12:47, David Rowley <dgrowleyml@gmail.com> wrote:\n> Normally we add some enable_* GUC to leave an escape hatch when we add\n> some new feature like this. I likely should have done that when I\n> added 1349d279, but I didn't and I want to now.\n>\n> I mainly just shifted this discussion out of [1] as we normally like\n> to debate GUC names and I feel that the discussion over on the other\n> thread is buried a little too deep to be visible to most people.\n>\n> Can anyone think of a better name? Or does anyone see error with my ambition?\n\nI've now pushed this.\n\nI'm still happy to consider other names if anyone has any good ideas,\nbut so far it's all quiet here, so I've assumed that's a good thing.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Dec 2022 22:31:54 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add enable_presorted_aggregate GUC"
}
] |
[
{
"msg_contents": "\nHi, hackers\n\nRecently, I compile PostgreSQL on FreeBSD, I find commit a2a8acd152 introduecs\n__freebsd__ macro, however, I cannot find this macro on FreeBSD 13. There only\nhas __FreeBSD__ macro. Is this a typo?\n\n root@freebsd:~ # uname -a\n FreeBSD freebsd 13.1-RELEASE-p3 FreeBSD 13.1-RELEASE-p3 GENERIC amd64\n root@freebsd:~ # echo | gcc10 -dM -E - | grep -i 'freebsd'\n #define __FreeBSD__ 13\n\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Dec 2022 11:43:36 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Typo macro name on FreeBSD?"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 4:44 PM Japin Li <japinli@hotmail.com> wrote:\n> Recently, I compile PostgreSQL on FreeBSD, I find commit a2a8acd152 introduecs\n> __freebsd__ macro, however, I cannot find this macro on FreeBSD 13. There only\n> has __FreeBSD__ macro. Is this a typo?\n\nYeah, that seems to be my fault. Will fix. Thanks!\n\n\n",
"msg_date": "Fri, 16 Dec 2022 17:25:16 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Typo macro name on FreeBSD?"
},
{
"msg_contents": "\nOn Fri, 16 Dec 2022 at 12:25, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Dec 16, 2022 at 4:44 PM Japin Li <japinli@hotmail.com> wrote:\n>> Recently, I compile PostgreSQL on FreeBSD, I find commit a2a8acd152 introduecs\n>> __freebsd__ macro, however, I cannot find this macro on FreeBSD 13. There only\n>> has __FreeBSD__ macro. Is this a typo?\n>\n> Yeah, that seems to be my fault. Will fix. Thanks!\n\nThanks!\n\n-- \nRegrads,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n",
"msg_date": "Fri, 16 Dec 2022 13:21:59 +0800",
"msg_from": "Japin Li <japinli@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Typo macro name on FreeBSD?"
}
] |
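The `echo | gcc10 -dM -E -` probe quoted in the thread above is the quickest way to list a compiler's predefined macros. As an illustrative sketch (not code from the thread — the function and branch labels are made up for demonstration), the same case-sensitivity pitfall can be shown from the C side: `__FreeBSD__` is the real macro, so a guard spelled `__freebsd__` is silently dead code and the bug in a2a8acd152 produces no compile-time diagnostic.

```c
#include <assert.h>
#include <stdio.h>

/*
 * Illustrative sketch: report the platform seen via predefined macros.
 * FreeBSD defines __FreeBSD__ (expanding to the major release, e.g. 13);
 * there is no lowercase __freebsd__, so the first branch below can never
 * be taken on any platform and the typo goes unnoticed at compile time.
 */
static const char *
platform_name(void)
{
#if defined(__freebsd__)		/* typo: never defined, dead branch */
	return "freebsd (typo branch)";
#elif defined(__FreeBSD__)
	return "FreeBSD";
#elif defined(__linux__)
	return "Linux";
#elif defined(_WIN32)
	return "Windows";
#else
	return "unknown";
#endif
}
```

On FreeBSD 13 this returns "FreeBSD"; the typo branch is unreachable everywhere, which is exactly why such mistakes survive until someone inspects the macro dump by hand.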
[
{
"msg_contents": "Hi Team,\n\nGood Day!\n\nMy name is Vinay Kumar. I am checking with you to see if you can add the\nnumber of rows extracted by the pg_dump utility for tables when using\nverbose mode.\n\nThis will help end users to validate the amount of data extracted.\n\nI understand that pg_class.reltuples can help identify the number of rows\nbut it's only an estimated value.\n\nThanks & Regards,\nVinay Kumar Dumpa\n\nHi Team,Good Day!My name is Vinay Kumar. I am checking with you to see if you can add the number of rows extracted by the pg_dump utility for tables when using verbose mode.This will help end users to validate the amount of data extracted.I understand that pg_class.reltuples can help identify the number of rows but it's only an estimated value.Thanks & Regards,Vinay Kumar Dumpa",
"msg_date": "Fri, 16 Dec 2022 18:03:04 +0530",
"msg_from": "vinay kumar <vnykmr36@gmail.com>",
"msg_from_op": true,
"msg_subject": "Feature request to add rows extracted using pg_dump utility"
}
] |
[
{
"msg_contents": "I was surprised to see that this has been here for a few years (since\n77517ba59) without complaints or inquiries from translators.\n\nsrc/bin/pg_upgrade/option.c: check_required_directory(&old_cluster.bindir, \"PGBINOLD\", false,\nsrc/bin/pg_upgrade/option.c- \"-b\", _(\"old cluster binaries reside\"), false);\nsrc/bin/pg_upgrade/option.c: check_required_directory(&new_cluster.bindir, \"PGBINNEW\", false,\nsrc/bin/pg_upgrade/option.c- \"-B\", _(\"new cluster binaries reside\"), true);\nsrc/bin/pg_upgrade/option.c: check_required_directory(&old_cluster.pgdata, \"PGDATAOLD\", false,\nsrc/bin/pg_upgrade/option.c- \"-d\", _(\"old cluster data resides\"), false);\nsrc/bin/pg_upgrade/option.c: check_required_directory(&new_cluster.pgdata, \"PGDATANEW\", false,\nsrc/bin/pg_upgrade/option.c- \"-D\", _(\"new cluster data resides\"), false);\nsrc/bin/pg_upgrade/option.c: check_required_directory(&user_opts.socketdir, \"PGSOCKETDIR\", true,\nsrc/bin/pg_upgrade/option.c- \"-s\", _(\"sockets will be created\"), false);\n\nsrc/bin/pg_upgrade/option.c: pg_fatal(\"You must identify the directory where the %s.\\n\"\nsrc/bin/pg_upgrade/option.c- \"Please use the %s command-line option or the %s environment variable.\",\nsrc/bin/pg_upgrade/option.c- description, cmdLineOption, envVarName);\n\nDue to incomplete translation, that allows some pretty fancy output,\nlike:\n| You must identify the directory where the residen los binarios del cl�ster antiguo.\n\nThat commit also does this a couple times:\n\n+ _(\" which is an index on \\\"%s.%s\\\"\"),\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 16 Dec 2022 07:24:52 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "(non) translatable string splicing"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 07:24:52AM -0600, Justin Pryzby wrote:\n> Due to incomplete translation, that allows some pretty fancy output,\n> like:\n> | You must identify the directory where the residen los binarios del clúster antiguo.\n> \n> That commit also does this a couple times:\n> \n> + _(\" which is an index on \\\"%s.%s\\\"\"),\n\nUgh. Perhaps we could just simplify the wordings as of \"index on\nblah\", \"index on OID %u\", \"TOAST table for blah\" and \"TOAST table for\nOID %u\" with newlines after each item?\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 13:20:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: (non) translatable string splicing"
},
{
"msg_contents": "On 2022-Dec-16, Justin Pryzby wrote:\n\n> I was surprised to see that this has been here for a few years (since\n> 77517ba59) without complaints or inquiries from translators.\n\nSee https://postgr.es/m/13948.1501733752@sss.pgh.pa.us\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"It takes less than 2 seconds to get to 78% complete; that's a good sign.\nA few seconds later it's at 90%, but it seems to have stuck there. Did\nsomebody make percentages logarithmic while I wasn't looking?\"\n http://smylers.hates-software.com/2005/09/08/1995c749.html\n\n\n",
"msg_date": "Mon, 19 Dec 2022 09:07:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: (non) translatable string splicing"
},
{
"msg_contents": "On Fri, Dec 16, 2022 at 8:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Due to incomplete translation, that allows some pretty fancy output,\n> like:\n> | You must identify the directory where the residen los binarios del clúster antiguo.\n\nI can't see how that could be mejor. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 16:14:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: (non) translatable string splicing"
},
{
"msg_contents": "At Mon, 19 Dec 2022 13:20:55 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Dec 16, 2022 at 07:24:52AM -0600, Justin Pryzby wrote:\n> > Due to incomplete translation, that allows some pretty fancy output,\n> > like:\n> > | You must identify the directory where the residen los binarios del clúster antiguo.\n> > \n> > That commit also does this a couple times:\n> > \n> > + _(\" which is an index on \\\"%s.%s\\\"\"),\n\nFor this specific case I didn't feel a difficulty since it is\ncompatible with \"This is blah\" in that context.\n\n> Ugh. Perhaps we could just simplify the wordings as of \"index on\n> blah\", \"index on OID %u\", \"TOAST table for blah\" and \"TOAST table for\n> OID %u\" with newlines after each item?\n\nI'm fine with just removing \" which \". but I don't understand about\nthe extra newlines.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 21 Dec 2022 16:23:14 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: (non) translatable string splicing"
},
{
"msg_contents": "At Mon, 19 Dec 2022 16:14:04 -0500, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Fri, Dec 16, 2022 at 8:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Due to incomplete translation, that allows some pretty fancy output,\n> > like:\n> > | You must identify the directory where the residen los binarios del clúster antiguo.\n> \n> I can't see how that could be mejor. :-)\n\nIt was quite annoying but not untranslatable. But the \"the\" before\n\"residen\" looks like badly misplaced:p It should be a part of the\ninner text (\"residen los..\").\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 21 Dec 2022 16:32:04 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: (non) translatable string splicing"
}
] |
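Justin's complaint in the thread above is the classic gettext anti-pattern: a sentence assembled at runtime from a fixed frame plus a separately translated fragment, so a half-translated catalog mixes languages mid-sentence and the translator can never fix word order. A minimal sketch of the two forms — the helper names are hypothetical and `_()` is stubbed out rather than doing a real catalog lookup:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stub for gettext(); a real build would return the catalog translation
 * for the active locale, which is exactly how a partially translated
 * catalog yields mixed-language output with the spliced form. */
#define _(msgid) (msgid)

/* Anti-pattern (as in check_required_directory): the frame and the
 * fragment are translated as independent msgids, so agreement and word
 * order across the splice point are out of the translator's control. */
static int
report_spliced(char *buf, size_t len, const char *description)
{
	return snprintf(buf, len,
					_("You must identify the directory where the %s.\n"),
					description);
}

/* Translatable form: one complete sentence per message, so the
 * translator sees and may freely reorder the whole string. */
static int
report_whole(char *buf, size_t len)
{
	return snprintf(buf, len,
					_("You must identify the directory where the old cluster binaries reside.\n"));
}
```

With the whole-sentence form there is one msgid per directory kind; the cost is a few near-duplicate strings, which is generally the accepted trade-off in translation guidelines.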
[
{
"msg_contents": "Hi there,\n\nfirst-time contributor here. I certainly hope I got the patch\ncreation and email workflow right. Let me know if anything can be\nimproved as I`m eager to learn. Regression tests (check) were \nsuccessful on native Win32 MSVC as well as Debian. Here comes the \npatch and corresponding commit text.\n\nDuring archive initialization pg_backup_custom.c determines if the file\npointer it should read from or write to is seekable. pg_dump\nuses this information to rewrite the custom output format's TOC\nenriched with known offsets into the archive on close. pg_restore uses\nseeking to speed up file operations when searching for specific\nblocks within the archive.\n\nThe seekable property of a file pointer is currently checked by\ninvoking ftello and subsequently fseeko. Both calls succeed\non Windows platforms if the underlying file descriptor represents a\nterminal handle or an anonymous or named pipe. Obviously, these type\nof devices do not support seeking. In the case of pg_dump, this\nleads to the TOC being appended to the end of the output when attempting\nto rewrite known offsets. Furthermore, pg_restore may try to seek to\nknown file offsets if the custom format archive's TOC supports it\nand subsequently fails to locate blocks.\n\nThis commit improves the detection of the seekable property by checking\na descriptor's file type (st_mode) and filtering character special\ndevices and pipes. The current customized implementation of fstat on\nWindows platforms (_pgfstat64) erroneously marks terminal and pipe\nhandles as regular files (_S_IFREG). This was improved on by\nutilizing WinAPI functionality (GetFileType) to correctly distinguish\nand flag descriptors based on their native OS handle's file type.\n\nDaniel\n\n---\n src/bin/pg_dump/pg_backup_archiver.c | 12 +++++\n src/include/port/win32_port.h | 6 +++\n src/port/win32stat.c | 68 ++++++++++++++++++++--------\n 3 files changed, 67 insertions(+), 19 deletions(-)",
"msg_date": "Fri, 16 Dec 2022 16:09:21 +0100",
"msg_from": "Daniel Watzinger <daniel.watzinger@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "> On 16 Dec 2022, at 16:09, Daniel Watzinger <daniel.watzinger@gmail.com> wrote:\n\n> first-time contributor here. I certainly hope I got the patch\n> creation and email workflow right. Let me know if anything can be\n> improved as I`m eager to learn.\n\nWelcome! The patch seems to be in binary format or using some form of\nnon-standard encoding? Can you re-send in plain text format?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Fri, 16 Dec 2022 17:06:48 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "Well, this is embarassing. Sorry for the inconvenience. Some part\nof my company's network infrastruture must have mangled the attachment.\nBoth mails were sent using a combination of git format-patch \nand git send-email. However, as this is my first foray into this \nemail-based workflow, I won't rule out a failure on my part. Bear\nwith me and let's try again.",
"msg_date": "Sat, 17 Dec 2022 00:55:24 +0100",
"msg_from": "Daniel Watzinger <daniel.watzinger@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 12:55:24AM +0100, Daniel Watzinger wrote:\n> Well, this is embarassing. Sorry for the inconvenience. Some part\n> of my company's network infrastruture must have mangled the attachment.\n> Both mails were sent using a combination of git format-patch \n> and git send-email. However, as this is my first foray into this \n> email-based workflow, I won't rule out a failure on my part. Bear\n> with me and let's try again. \n\nNo problem. You got the email and the patch format rights!\n\nWe had better make sure that this does not break again 10260c7, and\nthese could not be reproduced with automated tests as they needed a\nWindows terminal. Isn't this issue like the other commit, where the\nautomated testing cannot reproduce any of that because it requires a\nterminal? If not, could it be possible to add some tests to have some\ncoverage? The tests of pg_dump in src/bin/pg_dump/t/ invoke the\ncustom format in a few scenarios already, and these are tested in the\nbuildfarm for a couple of years now, without failing, but perhaps we'd\nneed a small tweak to have a reproducible test case for automation?\n\nThe patch has some formatting problems, see git diff --check for\nexample. This does not prevent looking at the patch.\n\nThe internal implementation of _pgstat64() is used in quite a few\nplaces, so we'd better update this part first, IMO, and then focus on\nthe pg_dump part. Anyway, it looks like you are right here: there is\nnothing for FILE_TYPE_PIPE or FILE_TYPE_CHAR in this WIN32\nimplementation of fstat().\n\nI am amazed to hear that both ftello64() and fseek64() actually\nsucceed if you use a pipe:\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/fseek.html\nCould it be something we should try to make more portable by ourselves\nwith a wrapper for these on WIN32? 
That would not be the first one to\naccomodate our code with POSIX, and who knows what code could be broken\nbecause of that, like external extensions that use fseek64() without\nknowing it.\n\n- if (hFile == INVALID_HANDLE_VALUE || buf == NULL)\n+ if (hFile == INVALID_HANDLE_VALUE || hFile == (HANDLE)-2 || buf == NULL)\nWhat's the -2 for? Perhaps this should have a comment?\n\n+ fileType = GetFileType(hFile);\n+ lastError = GetLastError();\n[...]\n if (fileType == FILE_TYPE_UNKNOWN && lastError != NO_ERROR) {\n+ _dosmaperr(lastError);\n+ return -1;\n }\nSo, the patched code assumes that all the file types classified as\nFILE_TYPE_UNKNOWN when GetFileType() does not fail refer to fileno\nbeing either stdin, stderr or stdout. Perhaps we had better\ncross-check that fileno points to one of these three cases in the\nswitch under FILE_TYPE_UNKNOWN? Could there be other cases where we\nhave FILE_TYPE_UNKNOWN but GetFileType() does not fail?\n\nPer the documentation of GetFileType, FILE_TYPE_REMOTE is unused:\nhttps://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfiletype\nPerhaps it would be safer to fail in this case?\n\n checkSeek(FILE *fp)\n {\n pgoff_t tpos;\n+ struct stat st;\n+\n+ /* Check if this is a terminal descriptor */\n+ if (isatty(fileno(fp))) {\n+ return false;\n+ }\n+\n+ /* Check if this is an unseekable character special device or pipe */\n+ if ((fstat(fileno(fp), &st) == 0) && (S_ISCHR(st.st_mode)\n+ || S_ISFIFO(st.st_mode))) {\n+ return false;\n+ }\nUsing that without a control of WIN32 is disturbing, but that comes\ndown to if we'd want to tackle that within an extra layer of\nfseek()/ftello() in the WIN32 port.\n\nI am adding Juan in CC, as I am sure he'd have comments to offer on\nthis area of the code.\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 16:01:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format\n on Win32"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 8:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> We had better make sure that this does not break again 10260c7, and\n> these could not be reproduced with automated tests as they needed a\n> Windows terminal. Isn't this issue like the other commit, where the\n> automated testing cannot reproduce any of that because it requires a\n> terminal? If not, could it be possible to add some tests to have some\n> coverage? The tests of pg_dump in src/bin/pg_dump/t/ invoke the\n> custom format in a few scenarios already, and these are tested in the\n> buildfarm for a couple of years now, without failing, but perhaps we'd\n> need a small tweak to have a reproducible test case for automation?\n>\n\nI've been able to manually reproduce the problem with:\n\npg_dump --format=custom > custom.dump\npg_restore --file=toc.txt --list custom.dump\npg_dump --format=custom | pg_restore --file=toc.dump --use-list=toc.txt\n\nThe error I get is:\n\npg_restore: error: unsupported version (0.7) in file header\n\nI'm not really sure how to integrate this in a tap test.\n\n\n> The internal implementation of _pgstat64() is used in quite a few\n> places, so we'd better update this part first, IMO, and then focus on\n> the pg_dump part. Anyway, it looks like you are right here: there is\n> nothing for FILE_TYPE_PIPE or FILE_TYPE_CHAR in this WIN32\n> implementation of fstat().\n>\n> I am amazed to hear that both ftello64() and fseek64() actually\n> succeed if you use a pipe:\n> https://pubs.opengroup.org/onlinepubs/9699919799/functions/fseek.html\n> Could it be something we should try to make more portable by ourselves\n> with a wrapper for these on WIN32? 
That would not be the first one to\n> accomodate our code with POSIX, and who knows what code could be broken\n> because of that, like external extensions that use fseek64() without\n> knowing it.\n>\n\nThe error is reproducible in versions previous to win32stat.c, so that\nmight work as bug fix.\n\n\n> - if (hFile == INVALID_HANDLE_VALUE || buf == NULL)\n> + if (hFile == INVALID_HANDLE_VALUE || hFile == (HANDLE)-2 || buf ==\n> NULL)\n> What's the -2 for? Perhaps this should have a comment?\n>\n\n There's a note on _get_osfhandle() [1] about when -2 is returned, but a\ncomment seems appropriate.\n\n+ fileType = GetFileType(hFile);\n> + lastError = GetLastError();\n> [...]\n> if (fileType == FILE_TYPE_UNKNOWN && lastError != NO_ERROR) {\n> + _dosmaperr(lastError);\n> + return -1;\n> }\n> So, the patched code assumes that all the file types classified as\n> FILE_TYPE_UNKNOWN when GetFileType() does not fail refer to fileno\n> being either stdin, stderr or stdout. Perhaps we had better\n> cross-check that fileno points to one of these three cases in the\n> switch under FILE_TYPE_UNKNOWN? Could there be other cases where we\n> have FILE_TYPE_UNKNOWN but GetFileType() does not fail?\n>\n\nI don't think we should set st_mode for FILE_TYPE_UNKNOWN.\n\n\n> Per the documentation of GetFileType, FILE_TYPE_REMOTE is unused:\n>\n> https://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfiletype\n> Perhaps it would be safer to fail in this case?\n>\n\n+1, we don't know what that might involve.\n\n[1]\nhttps://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-osfhandle?view=msvc-170\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Mar 2, 2023 at 8:01 AM Michael Paquier <michael@paquier.xyz> wrote:\nWe had better make sure that this does not break again 10260c7, and\nthese could not be reproduced with automated tests as they needed a\nWindows terminal. 
Isn't this issue like the other commit, where the\nautomated testing cannot reproduce any of that because it requires a\nterminal? If not, could it be possible to add some tests to have some\ncoverage? The tests of pg_dump in src/bin/pg_dump/t/ invoke the\ncustom format in a few scenarios already, and these are tested in the\nbuildfarm for a couple of years now, without failing, but perhaps we'd\nneed a small tweak to have a reproducible test case for automation?I've been able to manually reproduce the problem with:pg_dump --format=custom > custom.dumppg_restore --file=toc.txt --list custom.dumppg_dump --format=custom | pg_restore --file=toc.dump --use-list=toc.txtThe error I get is:pg_restore: error: unsupported version (0.7) in file headerI'm not really sure how to integrate this in a tap test. The internal implementation of _pgstat64() is used in quite a few\nplaces, so we'd better update this part first, IMO, and then focus on\nthe pg_dump part. Anyway, it looks like you are right here: there is\nnothing for FILE_TYPE_PIPE or FILE_TYPE_CHAR in this WIN32\nimplementation of fstat().\n\nI am amazed to hear that both ftello64() and fseek64() actually\nsucceed if you use a pipe:\nhttps://pubs.opengroup.org/onlinepubs/9699919799/functions/fseek.html\nCould it be something we should try to make more portable by ourselves\nwith a wrapper for these on WIN32? That would not be the first one to\naccomodate our code with POSIX, and who knows what code could be broken\nbecause of that, like external extensions that use fseek64() without\nknowing it.The error is reproducible in versions previous to win32stat.c, so that might work as bug fix. - if (hFile == INVALID_HANDLE_VALUE || buf == NULL)\n+ if (hFile == INVALID_HANDLE_VALUE || hFile == (HANDLE)-2 || buf == NULL)\nWhat's the -2 for? Perhaps this should have a comment? There's a note on _get_osfhandle() [1] about when -2 is returned, but a comment seems appropriate. 
+ fileType = GetFileType(hFile);\n+ lastError = GetLastError();\n[...]\n if (fileType == FILE_TYPE_UNKNOWN && lastError != NO_ERROR) {\n+ _dosmaperr(lastError);\n+ return -1;\n }\nSo, the patched code assumes that all the file types classified as\nFILE_TYPE_UNKNOWN when GetFileType() does not fail refer to fileno\nbeing either stdin, stderr or stdout. Perhaps we had better\ncross-check that fileno points to one of these three cases in the\nswitch under FILE_TYPE_UNKNOWN? Could there be other cases where we\nhave FILE_TYPE_UNKNOWN but GetFileType() does not fail?I don't think we should set st_mode for FILE_TYPE_UNKNOWN. Per the documentation of GetFileType, FILE_TYPE_REMOTE is unused:\nhttps://learn.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-getfiletype\nPerhaps it would be safer to fail in this case?+1, we don't know what that might involve. [1] https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/get-osfhandle?view=msvc-170Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 7 Mar 2023 13:36:59 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "On Tue, Mar 7, 2023 at 1:36 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> On Thu, Mar 2, 2023 at 8:01 AM Michael Paquier <michael@paquier.xyz>\n> wrote:\n>\n>>\n>> The internal implementation of _pgstat64() is used in quite a few\n>>\n> places, so we'd better update this part first, IMO, and then focus on\n>> the pg_dump part. Anyway, it looks like you are right here: there is\n>> nothing for FILE_TYPE_PIPE or FILE_TYPE_CHAR in this WIN32\n>> implementation of fstat().\n>>\n>> I am amazed to hear that both ftello64() and fseek64() actually\n>> succeed if you use a pipe:\n>> https://pubs.opengroup.org/onlinepubs/9699919799/functions/fseek.html\n>> Could it be something we should try to make more portable by ourselves\n>> with a wrapper for these on WIN32? That would not be the first one to\n>> accomodate our code with POSIX, and who knows what code could be broken\n>> because of that, like external extensions that use fseek64() without\n>> knowing it.\n>>\n>\n> The error is reproducible in versions previous to win32stat.c, so that\n> might work as bug fix.\n>\n\nI've broken the patch in two:\n1. fixes the detection of unseekable files in checkSeek(), using logic that\nhopefully is backpatchable,\n2. the improvements on file type detection for stat() proposed by the OP.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Fri, 10 Mar 2023 00:12:37 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 12:12:37AM +0100, Juan José Santamaría Flecha wrote:\n> I've broken the patch in two:\n> 1. fixes the detection of unseekable files in checkSeek(), using logic that\n> hopefully is backpatchable,\n> 2. the improvements on file type detection for stat() proposed by the OP.\n\nI am OK with 0002, so I'll try to get this part backpatched down to\nwhere the implementation of stat() has been added. I am not\ncompletely sure that 0001 is the right way forward, though,\nparticularly with the long-term picture.. In the backend, we have one\ncaller of fseeko() as of read_binary_file(), so we would never pass \ndown a pipe to that. However, there could be a risk of some silent\nbreakages on Windows if some new code relies on that?\n\nThere is a total of 11 callers of fseeko() in pg_dump, so rather than\nrelying on checkSeek() to see if it actually works, I'd like to think\nthat we should have a central policy to make this code more\nbullet-proof in the future.\n--\nMichael",
"msg_date": "Fri, 10 Mar 2023 10:37:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format\n on Win32"
},
{
"msg_contents": "On Fri, Mar 10, 2023 at 2:37 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Mar 10, 2023 at 12:12:37AM +0100, Juan José Santamaría Flecha\n> wrote:\n> > I've broken the patch in two:\n> > 1. fixes the detection of unseekable files in checkSeek(), using logic\n> that\n> > hopefully is backpatchable,\n> > 2. the improvements on file type detection for stat() proposed by the OP.\n>\n> I am OK with 0002, so I'll try to get this part backpatched down to\n> where the implementation of stat() has been added. I am not\n> completely sure that 0001 is the right way forward, though,\n> particularly with the long-term picture.. In the backend, we have one\n> caller of fseeko() as of read_binary_file(), so we would never pass\n> down a pipe to that. However, there could be a risk of some silent\n> breakages on Windows if some new code relies on that?\n>\n> There is a total of 11 callers of fseeko() in pg_dump, so rather than\n> relying on checkSeek() to see if it actually works, I'd like to think\n> that we should have a central policy to make this code more\n> bullet-proof in the future.\n>\n\nWFM, making fseek() behaviour more resilient seems like a good improvement\noverall.\n\nShould we open a new thread to make that part more visible?\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Fri, Mar 10, 2023 at 2:37 AM Michael Paquier <michael@paquier.xyz> wrote:On Fri, Mar 10, 2023 at 12:12:37AM +0100, Juan José Santamaría Flecha wrote:\n> I've broken the patch in two:\n> 1. fixes the detection of unseekable files in checkSeek(), using logic that\n> hopefully is backpatchable,\n> 2. the improvements on file type detection for stat() proposed by the OP.\n\nI am OK with 0002, so I'll try to get this part backpatched down to\nwhere the implementation of stat() has been added. I am not\ncompletely sure that 0001 is the right way forward, though,\nparticularly with the long-term picture.. 
In the backend, we have one\ncaller of fseeko() as of read_binary_file(), so we would never pass \ndown a pipe to that. However, there could be a risk of some silent\nbreakages on Windows if some new code relies on that?\n\nThere is a total of 11 callers of fseeko() in pg_dump, so rather than\nrelying on checkSeek() to see if it actually works, I'd like to think\nthat we should have a central policy to make this code more\nbullet-proof in the future.WFM, making fseek() behaviour more resilient seems like a good improvement overall.Should we open a new thread to make that part more visible?Regards,Juan José Santamaría Flecha",
"msg_date": "Mon, 13 Mar 2023 17:49:41 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "On Mon, Mar 13, 2023 at 05:49:41PM +0100, Juan José Santamaría Flecha wrote:\n> WFM, making fseek() behaviour more resilient seems like a good improvement\n> overall.\n\nI have not looked in details, but my guess would be to add a\nwin32seek.c similar to win32stat.c with a port of fseek() that's more\nresilient to the definitions in POSIX.\n\n> Should we open a new thread to make that part more visible?\n\nYes, perhaps it makes sense to do so to attract the correct audience,\nThere may be a few things we are missing.\n\nWhen it comes to pg_dump, both fixes are required, still it seems to\nme that adjusting the fstat() port and the fseek() ports are two\ndifferent bugs, as they influence different parts of the code base\nwhen taken individually (aka this fseek() port for WIN32 would need\nfstat() to properly detect a pipe, as far as I understand).\n\nMeanwhile, I'll go apply and backpatch 0001 to fix the first bug at\nhand with the fstat() port, if there are no objections.\n--\nMichael",
"msg_date": "Tue, 14 Mar 2023 08:40:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format\n on Win32"
},
{
"msg_contents": "I'm sorry I couldn't contribute to the discussion in time. The fix of the\nfstat() Win32 port looks good to me. I agree that there's a need for\nmultiple fseek() ports to address the shortcomings of the MSVC\nfunctionality.\n\nThe documentation event states that \"on devices incapable of seeking, the\nreturn value is undefined\". A simple wrapper using GetFileType() or the new\nfstat(), to filter non-seekable devices before delegation, will probably\nsuffice.\n\nhttps://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fseek-fseeki64?view=msvc-170\n\nRegarding test automation and regression testing, there's a programmatic\nway to simulate how the \"pipe operator\" of cmd.exe and other shells works\nusing CreateProcess and manual \"piping\" by means of various WinAPI\nfunctionality. This is actually how the bug was discovered in the first\ncase. However, existing tests are probably platform-agnostic.\n\n--\nDaniel\n\nOn Tue, Mar 14, 2023 at 12:41 AM Michael Paquier <michael@paquier.xyz>\nwrote:\n\n> On Mon, Mar 13, 2023 at 05:49:41PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > WFM, making fseek() behaviour more resilient seems like a good\n> improvement\n> > overall.\n>\n> I have not looked in details, but my guess would be to add a\n> win32seek.c similar to win32stat.c with a port of fseek() that's more\n> resilient to the definitions in POSIX.\n>\n> > Should we open a new thread to make that part more visible?\n>\n> Yes, perhaps it makes sense to do so to attract the correct audience,\n> There may be a few things we are missing.\n>\n> When it comes to pg_dump, both fixes are required, still it seems to\n> me that adjusting the fstat() port and the fseek() ports are two\n> different bugs, as they influence different parts of the code base\n> when taken individually (aka this fseek() port for WIN32 would need\n> fstat() to properly detect a pipe, as far as I understand).\n>\n> Meanwhile, I'll go apply and backpatch 0001 to fix the 
first bug at\n> hand with the fstat() port, if there are no objections.\n> --\n> Michael\n>",
"msg_date": "Tue, 14 Mar 2023 12:30:18 +0100",
"msg_from": "Daniel Watzinger <daniel.watzinger@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
    "msg_contents": "Please, don't top post.\n\nOn Tue, Mar 14, 2023 at 12:30 PM Daniel Watzinger <\ndaniel.watzinger@gmail.com> wrote:\n\n> I'm sorry I couldn't contribute to the discussion in time. The fix of the\n> fstat() Win32 port looks good to me. I agree that there's a need for\n> multiple fseek() ports to address the shortcomings of the MSVC\n> functionality.\n>\n> The documentation event states that \"on devices incapable of seeking, the\n> return value is undefined\". A simple wrapper using GetFileType() or the new\n> fstat(), to filter non-seekable devices before delegation, will probably\n> suffice.\n>\n>\n> https://learn.microsoft.com/en-us/cpp/c-runtime-library/reference/fseek-fseeki64?view=msvc-170\n>\n\nI have just posted a patch to enforce the detection of unseekable streams\nin the fseek() calls [1], please feel free to review it.\n\n[1]\nhttps://www.postgresql.org/message-id/CAC%2BAXB26a4EmxM2suXxPpJaGrqAdxracd7hskLg-zxtPB50h7A%40mail.gmail.com\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 14 Mar 2023 13:47:09 +0100",
    "msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format on\n Win32"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 01:47:09PM +0100, Juan José Santamaría Flecha wrote:\n> I have just posted a patch to enforce the detection of unseekable streams\n> in the fseek() calls [1], please feel free to review it.\n\nThanks. I have been able to get around 0001 to fix _pgfstat64() and\napplied it down to v14 where this code has been introduced. Now to\nthe part about fseek() and ftello()..\n--\nMichael",
"msg_date": "Wed, 15 Mar 2023 12:58:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump/pg_restore: Fix stdin/stdout handling of custom format\n on Win32"
}
] |
[
{
"msg_contents": "Attached is a new patch series. I think there are enough changes that\nthis has become more of a \"rework\" of the collation code rather than\njust a refactoring. This is a continuation of some prior work[1][2] in\na new thread given its new scope.\n\nBenefits:\n\n1. Clearer division of responsibilities.\n2. More consistent between libc and ICU providers.\n3. Hooks that allow extensions to replace collation provider libraries.\n4. New tests for the collation provider library hooks.\n\nThere are a lot of changes, and still some loose ends, but I believe a\nfew of these patches are close to ready.\n\nThis set of changes does not express an opinion on how we might want to\nsupport multiple provider libraries in core; but whatever we choose, it\nshould be easier to accomplish. Right now, the hooks have limited\ninformation on which to make the choice for a specific version of a\ncollation provider library, but that's because there's limited\ninformation in the catalog. If the discussion here[3] concludes in\nadding collation provider library or library version information to the\ncatalog, we can add additional parameters to the hooks.\n\n[1]\nhttps://postgr.es/m/99aa79cceefd1fe84fda23510494b8fbb7ad1e70.camel@j-davis.com\n[2]\nhttps://postgr.es/m/c4fda90ec6a7568a896f243a38eb273c3b5c3d93.camel@j-davis.com\n[3]\nhttps://postgr.es/m/CA+hUKGLEqMhnpZrgAcisoUeYFGz8W6EWdhtK2h-4QN0iOSFRqw@mail.gmail.com\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Sat, 17 Dec 2022 19:14:23 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Rework of collation code, extensibility"
},
{
"msg_contents": "On Sat, Dec 17, 2022 at 7:14 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> Attached is a new patch series. I think there are enough changes that\n> this has become more of a \"rework\" of the collation code rather than\n> just a refactoring. This is a continuation of some prior work[1][2] in\n> a new thread given its new scope.\n>\n> Benefits:\n>\n> 1. Clearer division of responsibilities.\n> 2. More consistent between libc and ICU providers.\n> 3. Hooks that allow extensions to replace collation provider libraries.\n> 4. New tests for the collation provider library hooks.\n>\n> There are a lot of changes, and still some loose ends, but I believe a\n> few of these patches are close to ready.\n>\n> This set of changes does not express an opinion on how we might want to\n> support multiple provider libraries in core; but whatever we choose, it\n> should be easier to accomplish. Right now, the hooks have limited\n> information on which to make the choice for a specific version of a\n> collation provider library, but that's because there's limited\n> information in the catalog. 
If the discussion here[3] concludes in\n> adding collation provider library or library version information to the\n> catalog, we can add additional parameters to the hooks.\n>\n> [1]\n>\n> https://postgr.es/m/99aa79cceefd1fe84fda23510494b8fbb7ad1e70.camel@j-davis.com\n> [2]\n>\n> https://postgr.es/m/c4fda90ec6a7568a896f243a38eb273c3b5c3d93.camel@j-davis.com\n> [3]\n>\n> https://postgr.es/m/CA+hUKGLEqMhnpZrgAcisoUeYFGz8W6EWdhtK2h-4QN0iOSFRqw@mail.gmail.com\n>\n>\n> --\n> Jeff Davis\n> PostgreSQL Contributor Team - AWS\n>\n> Hi,\nFor pg_strxfrm_libc in v4-0002-Add-pg_strxfrm-and-pg_strxfrm_prefix.patch:\n\n+#ifdef HAVE_LOCALE_T\n+ if (locale)\n+ return strxfrm_l(dest, src, destsize, locale->info.lt);\n+ else\n+#endif\n+ return strxfrm(dest, src, destsize);\n\nIt seems the `else` is not needed (since when the if branch is taken, we\nreturn from the func).\n\n+ /* nul-terminate arguments */\n\nnul-terminate -> null-terminate\n\nFor pg_strnxfrm(), I think `result` can be removed - we directly return the\nresult from pg_strnxfrm_libc or pg_strnxfrm_icu.\n\nCheers",
"msg_date": "Sat, 17 Dec 2022 19:27:20 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
    "msg_contents": "On Sun, Dec 18, 2022 at 10:28 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n> It seems the `else` is not needed (since when the if branch is taken, we\nreturn from the func).\n\nBy that same logic, this review comment is not needed, since compiler\nvendors don't charge license fees by the number of keywords. ;-)\nJoking aside, we don't really have a project style preference for this case.\n\n> nul-terminate -> null-terminate\n\nNUL is a common abbreviation for the zero byte (but not for zero pointers).\nSee the ascii manpage.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sun, 18 Dec 2022 11:54:36 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
    "msg_contents": "On Sat, Dec 17, 2022 at 8:54 PM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n>\n> > nul-terminate -> null-terminate\n>\n> NUL is a common abbreviation for the zero byte (but not for zero\n> pointers). See the ascii manpage.\n>\n> --\n> John Naylor\n> EDB: http://www.enterprisedb.com\n>\n> Ah.\n\n`nul-terminated` does appear in the codebase.\nShould have checked earlier.",
"msg_date": "Sat, 17 Dec 2022 21:12:00 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Sat, 2022-12-17 at 19:14 -0800, Jeff Davis wrote:\n> Attached is a new patch series. I think there are enough changes that\n> this has become more of a \"rework\" of the collation code rather than\n> just a refactoring. This is a continuation of some prior work[1][2]\n> in\n> a new thread given its new scope.\n\nHere's version 5. There are a number of fixes, and better tests, and\nit's passing in CI.\n\nThe libc hook support is still experimental, but what's working is\npassing in CI, even on windows. The challenges with libc hook support\nare:\n\n * It obviously doesn't replace all of libc, so the separation is not\nas clean and there are a number of callers throughout the code that\ndon't necessarily care about specific collations. \n\n * libc relies on setlocale() / uselocale(), which is global state and\nnot as easy to track.\n\n * More platform issues (obviously) and harder to test.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 21 Dec 2022 21:40:57 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On 22.12.22 06:40, Jeff Davis wrote:\n> On Sat, 2022-12-17 at 19:14 -0800, Jeff Davis wrote:\n>> Attached is a new patch series. I think there are enough changes that\n>> this has become more of a \"rework\" of the collation code rather than\n>> just a refactoring. This is a continuation of some prior work[1][2]\n>> in\n>> a new thread given its new scope.\n> \n> Here's version 5. There are a number of fixes, and better tests, and\n> it's passing in CI.\n> \n> The libc hook support is still experimental, but what's working is\n> passing in CI, even on windows. The challenges with libc hook support\n> are:\n> \n> * It obviously doesn't replace all of libc, so the separation is not\n> as clean and there are a number of callers throughout the code that\n> don't necessarily care about specific collations.\n> \n> * libc relies on setlocale() / uselocale(), which is global state and\n> not as easy to track.\n> \n> * More platform issues (obviously) and harder to test.\n\nI'm confused by this patch set.\n\nIt combines some refactoring that was previously posted with partial \nsupport for multiple ICU libraries with partial support for some new \nhooks. Shouldn't those be three separate threads? I think the multiple \nICU libraries already does have a separate thread; how does this relate \nto that work? I don't know what the hooks are supposed to be for? What \nother locale libraries are you thinking about using this way? How can \nwe asses whether these interfaces are sufficient for that? The \nrefactoring patches don't look convincing just by looking at the numbers:\n\n 3 files changed, 406 insertions(+), 247 deletions(-)\n 6 files changed, 481 insertions(+), 150 deletions(-)\n 12 files changed, 400 insertions(+), 323 deletions(-)\n\nMy sense is this is trying to do too many things at once, and those \nthings are each not fully developed yet.\n\n\n\n",
"msg_date": "Wed, 4 Jan 2023 22:46:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Wed, 2023-01-04 at 22:46 +0100, Peter Eisentraut wrote:\n> It combines some refactoring that was previously posted with partial \n> support for multiple ICU libraries with partial support for some new \n> hooks. Shouldn't those be three separate threads?\n\nOriginally they felt more separate to me, too; but as I worked on them\nit seemed better to consider them as a patch series. Whatever is easier\nfor reviewers works for me, though.\n\n> I think the multiple \n> ICU libraries already does have a separate thread; how does this\n> relate \n> to that work?\n\nMultilib ICU support adds complexity, and my hope is that this patch\nset cleans up and organizes things to better prepare for that\ncomplexity.\n\n> I don't know what the hooks are supposed to be for?\n\nI found them very useful for testing during development. One of the\npatches adds a test module for the ICU hook, and I think that's a\nvaluable place to test regardless of whether any other extension uses\nthe hook. Also, if proper multilib support doesn't land in 16, then the\nhooks could be a way to build rudimentary multilib support (or at least\nsome kind of ICU version lockdown) until it does land.\n\nWhen Thomas's work is in place, I expect the hooks to change slightly.\nThe hooks are not meant to set any specific API in stone.\n\n> What \n> other locale libraries are you thinking about using this way? 
How\n> can \n> we asses whether these interfaces are sufficient for that?\n\nI'm not considering any other locale libraries, nor did I see much\ndiscussion of that.\n\n> The \n> refactoring patches don't look convincing just by looking at the\n> numbers:\n> \n> 3 files changed, 406 insertions(+), 247 deletions(-)\n> 6 files changed, 481 insertions(+), 150 deletions(-)\n> 12 files changed, 400 insertions(+), 323 deletions(-)\n\nThe existing code is not great, in my opinion: it doesn't have clear\nAPI boundaries, the comments are insufficient, and lots of special\ncases need to be handled awkwardly by callers. That style is hard to\nbeat when it comes to the raw line count; but it's quite difficult to\nunderstand and work on.\n\nI think my changes are an improvement, but obviously that depends on\nthe opinion of others who are working in this part of the code. What do\nyou think?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 05 Jan 2023 23:04:32 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On 06.01.23 08:04, Jeff Davis wrote:\n> The existing code is not great, in my opinion: it doesn't have clear\n> API boundaries, the comments are insufficient, and lots of special\n> cases need to be handled awkwardly by callers. That style is hard to\n> beat when it comes to the raw line count; but it's quite difficult to\n> understand and work on.\n> \n> I think my changes are an improvement, but obviously that depends on\n> the opinion of others who are working in this part of the code. What do\n> you think?\n\nI think the refactoring that you proposed in the thread \"Refactor to \nintroduce pg_strcoll().\" was on a sensible track. Maybe we should try \nto get that done. The multiple-ICU stuff is still experimental and has \nits own rather impressive thread, so I don't think it's sensible to try \nto sort that out here.\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:08:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Thu, 22 Dec 2022 at 11:11, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Sat, 2022-12-17 at 19:14 -0800, Jeff Davis wrote:\n> > Attached is a new patch series. I think there are enough changes that\n> > this has become more of a \"rework\" of the collation code rather than\n> > just a refactoring. This is a continuation of some prior work[1][2]\n> > in\n> > a new thread given its new scope.\n>\n> Here's version 5. There are a number of fixes, and better tests, and\n> it's passing in CI.\n>\n> The libc hook support is still experimental, but what's working is\n> passing in CI, even on windows. The challenges with libc hook support\n> are:\n>\n> * It obviously doesn't replace all of libc, so the separation is not\n> as clean and there are a number of callers throughout the code that\n> don't necessarily care about specific collations.\n>\n> * libc relies on setlocale() / uselocale(), which is global state and\n> not as easy to track.\n>\n> * More platform issues (obviously) and harder to test.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nc971a5b27ac946e7c94f7f655d321279512c7ee7 ===\n=== applying patch ./v5-0003-Refactor-pg_locale_t-routines.patch\n....\nHunk #1 FAILED at 88.\n...\n1 out of 9 hunks FAILED -- saving rejects to file\nsrc/backend/utils/adt/formatting.c.rej\npatching file src/backend/utils/adt/like.c\nHunk #1 FAILED at 24.\nHunk #2 succeeded at 97 (offset 1 line).\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/utils/adt/like.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4058.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 11 Jan 2023 22:07:22 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Wed, 2022-12-21 at 21:40 -0800, Jeff Davis wrote:\n> Here's version 5. There are a number of fixes, and better tests, and\n> it's passing in CI.\n\nAttached trivial rebase as v6.\n\n> The libc hook support is still experimental\n\nPatches 0006 and 0007 should still be considered experimental and don't\nrequire review right now.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Wed, 11 Jan 2023 15:43:48 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Wed, 2023-01-11 at 15:08 +0100, Peter Eisentraut wrote:\n> I think the refactoring that you proposed in the thread \"Refactor to \n> introduce pg_strcoll().\" was on a sensible track. Maybe we should\n> try \n> to get that done.\n\nThose should be patches 0001-0003 in this thread (now at v6), which are\nall pure refactoring.\n\nLet's consider those patches the topic of this thread and I'll move\n0004-0007 back to the multi-lib ICU thread on the next revision.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:52:26 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 3:44 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Attached trivial rebase as v6.\n\nSome review comments for this v6.\n\nComments on 0001-*:\n\n* I think that 0002-* can be squashed with 0001-*, since there isn't\nany functional reason why you'd want to commit the strcoll() and\nstrxfrm() changes separately.\n\nSometimes it can be useful to break things up, despite the fact that\nit couldn't possibly make sense to commit just one of the resulting\npatches on its own. However, I don't think that that's appropriate\nhere. There is no apparent conceptual boundary that you're\nhighlighting by splitting things up like this. strxfrm() and strcoll()\nare defined in terms of each other -- they're siblings, joined at the\nhip -- so this seems kinda jarring.\n\n* Your commit message for 0001 (and other patches) don't set things up\nby describing what the point is, and what the work anticipates. I\nthink that they should do that.\n\nYou're adding a layer of indirection that's going to set things up for\nlater patches that add a layer of indirection for version ICU\nlibraries (and even libc itself), and some of the details only make\nsense in that context. This isn't just refactoring work that could\njust as easily have happened in some quite different context.\n\n* I'm not sure that pg_strcoll() should be breaking ties itself. We\nbreak ties using strcmp() for historical reasons, but must not do that\nfor deterministic ICU collations, which may be obscured.\n\nThat means that pg_strcoll()'s relationship to pg_strxfrm()'s isn't\nthe same as the well known strcoll()/strxfrm() relationship. That kind\nof makes pg_strcoll() somewhat more than a strcoll() shim, which is\ninconsistent. Another concern is that the deterministic collation\nhandling isn't handled in any one layer, which would have been nice.\n\nDo we need to do things this way? 
What's it adding?\n\n* varstrfastcmp_locale() is no longer capable of calling\nucol_strcollUTF8() through the shim interface, meaning that it has to\ndetermine string length based on NUL-termination, when in principle it\ncould just use the known length of the string.\n\nPresumably this might have performance implications. Have you thought\nabout that?\n\nSome comments on 0002-*:\n\n* I don't see much point in this new varstr_abbrev_convert() variable:\n\n+ const size_t max_prefix_bytes = sizeof(Datum);\n\nvarstr_abbrev_convert() is concerned with packing abbreviated key\nbytes into Datums, so it's perfectly reasonable to deal with\nDatums/sizeof(Datum) directly.\n\n* Having a separate pg_strxfrm_prefix_libc() function just to throw an\nerror doesn't really add much IMV.\n\nComments on 0003-*:\n\nI suggest that some of the very trivial functions you have here (such\nas pg_locale_deterministic()) be made inline functions.\n\nComments on 0006-*:\n\n* get_builtin_libc_library() could be indented in a way that would\nmake it easier to understand.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 13 Jan 2023 11:57:08 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Fri, 2023-01-13 at 11:57 -0800, Peter Geoghegan wrote:\n> You're adding a layer of indirection that's going to set things up\n> for\n> later patches that add a layer of indirection for version ICU\n> libraries (and even libc itself), and some of the details only make\n> sense in that context. This isn't just refactoring work that could\n> just as easily have happened in some quite different context.\n\nRight, well put. I have two goals and felt that they merged into one\npatchset, but I think that caused more confusion.\n\nThe first goal I had was simply that the code was really hard to\nunderstand and work on, and refactoring was justified to improve the\nsituation.\n\nThe second goal, which is somewhat dependent on the first goal, is that\nwe really need an ability to support multiple ICU libraries, and I\nwanted to do some common groundwork that would be needed for any\napproach we choose there, and provide some hooks to get us there. You\nare right that this goal influenced the first goal.\n\nI attached new patches:\n\n v7-0001: pg_strcoll and pg_strxfrm patches combined, your comments\naddressed\n v7-0002: add pg_locale_internal.h (and other refactoring)\n\nI will post the other patches in the other thread.\n\n> That means that pg_strcoll()'s relationship to pg_strxfrm()'s isn't\n> the same as the well known strcoll()/strxfrm() relationship.\n\nThat's a really good point. I changed tiebreaking to be the caller's\nresponsibility.\n\n> * varstrfastcmp_locale() is no longer capable of calling\n> ucol_strcollUTF8() through the shim interface, meaning that it has to\n> determine string length based on NUL-termination, when in principle\n> it\n> could just use the known length of the string.\n\nI think you misread, it still calls ucol_strcollUTF8() when applicable,\nwhich is impoartant because otherwise it would require a conversion to\na UChar string.\n\nucol_strcollUTF8() accepts -1 to mean \"nul-terminated\". 
I did some\nbasic testing and it doesn't seem like it's slower than using the\nlength. If passing the length is faster for some reason, it would\ncomplicate the API because we'd need an entry point that's expecting\nnul-termination and lengths, which is awkward (as Peter E. pointed\nout).\n\n> * I don't see much point in this new varstr_abbrev_convert()\n> variable:\n> \n> + const size_t max_prefix_bytes = sizeof(Datum);\n> \n> varstr_abbrev_convert() is concerned with packing abbreviated key\n> bytes into Datums, so it's perfectly reasonable to deal with\n> Datums/sizeof(Datum) directly.\n\nI felt it was a little clearer amongst the other code, to a casual\nreader, but I suppose it's a style thing. I will change it if you\ninsist.\n\n> * Having a separate pg_strxfrm_prefix_libc() function just to throw\n> an\n> error doesn't really add much IMV.\n\nRemoved.\n\n> Comments on 0003-*:\n> \n> I suggest that some of the very trivial functions you have here (such\n> as pg_locale_deterministic()) be made inline functions.\n\nI'd have to expose the pg_locale_t struct, which didn't seem desirable\nto me. Do you think it's enough of a performance concern to be worth\nsome ugliness there?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Sat, 14 Jan 2023 12:03:48 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Sat, Jan 14, 2023 at 12:03 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> The first goal I had was simply that the code was really hard to\n> understand and work on, and refactoring was justified to improve the\n> situation.\n>\n> The second goal, which is somewhat dependent on the first goal, is that\n> we really need an ability to support multiple ICU libraries, and I\n> wanted to do some common groundwork that would be needed for any\n> approach we choose there, and provide some hooks to get us there. You\n> are right that this goal influenced the first goal.\n\nI don't disagree that it was somewhat independent of the first goal. I\njust think that it makes sense to \"round up to fully dependent\".\nBasically it's not independent enough to be worth talking about as an\nindependent thing, just as a practical matter - it's confusing at the\nlevel of things like the commit message. There is a clear direction\nthat you're going in here from the start, and your intentions in 0001\ndo matter to somebody that's just looking at 0001 in isolation. That\nis my opinion, at least.\n\nThe second goal is a perfectly good enough goal on its own, and one\nthat I am totally supportive of. Making the code clearer is icing on\nthe cake.\n\n> ucol_strcollUTF8() accepts -1 to mean \"nul-terminated\". I did some\n> basic testing and it doesn't seem like it's slower than using the\n> length. If passing the length is faster for some reason, it would\n> complicate the API because we'd need an entry point that's expecting\n> nul-termination and lengths, which is awkward (as Peter E. pointed\n> out).\n\nThat's good. I'm happy to leave it at that. I was only enquiring.\n\n> I felt it was a little clearer amongst the other code, to a casual\n> reader, but I suppose it's a style thing. I will change it if you\n> insist.\n\nI certainly won't insist.\n\n> I'd have to expose the pg_locale_t struct, which didn't seem desirable\n> to me. 
Do you think it's enough of a performance concern to be worth\n> some ugliness there?\n\nI don't know. Quite possibly not. It would be nice to have some data\non that, though.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:18:20 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Tue, 2023-01-17 at 14:18 -0800, Peter Geoghegan wrote:\n> The second goal is a perfectly good enough goal on its own, and one\n> that I am totally supportive of. Making the code clearer is icing on\n> the cake.\n\nAttached v8, which is just a rebase.\n\nTo reiterate: commitfest entry\nhttps://commitfest.postgresql.org/41/3956/ is dependent on these\npatches and is a big part of the motivation for refactoring.\n\n> \n> I don't know. Quite possibly not. It would be nice to have some data\n> on that, though.\n\nI tested with hash aggregation, which might be more dependent on\npg_locale_deterministic() than sorting. I didn't see any significant\ndifference between master and the refactoring branch, so I don't see a\nneed to make that function \"inline\".\n\nI also re-tested sorting and found some interesting results for en-US-\nx-icu on a UTF-8 database (which is I suspect one of the most common\nconfigurations for ICU):\n\n * the refactoring branch is now more than 5% faster, whether using\nabbreviated keys or not\n * disabling abbreviated keys makes sorting 8-10% faster on both\nmaster and the refactoring branch\n\nBoth of these are surprising, and I haven't investigated deeply yet.\nMaybe something about LTO, some intervening patch, or I just made some\nmistakes somewhere (I did this fairly quickly). But as of now, it\ndoesn't look like the refactoring patch hurts anything.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Fri, 20 Jan 2023 12:54:25 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Fri, 2023-01-20 at 12:54 -0800, Jeff Davis wrote:\n> Both of these are surprising, and I haven't investigated deeply yet.\n\nIt's just because autoconf defaults to -O2 and meson to -O3, at least\non my machine. It turns out that, at -O2, master and the refactoring\nbranch are even; but at -O3, both get faster, and the refactoring pulls\nahead by a few percentage points.\n\nAt least that's what's happening for en-US-x-icu on UTF-8 with my test\ndata set. I didn't see much of a difference in other situations, but I\ndidn't retest those other situations this time around.\n\nWe should still look into why disabling abbreviated keys improves\nperformance in some cases. Maybe we need a GUC for that?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 20 Jan 2023 16:34:44 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "Attached v9 and added perf numbers below.\n\nI'm hoping to commit 0002 and 0003 soon-ish, maybe a week or two,\nplease let me know if you want me to hold off. (I won't commit the GUCs\nunless others find them generally useful; they are included here to\nmore easily reproduce my performance tests.)\n\nMy primary motivation is still related to\nhttps://commitfest.postgresql.org/41/3956/ but the combination of\ncleaner code and a performance boost seems like reasonable\njustification for this patch set independently.\n\nThere aren't any clear open items on this patch. Peter Eisentraut asked\nme to focus this thread on the refactoring, which I've done by reducing\nit to 2 patches, and I left multilib ICU up to the other thread. He\nalso questioned the increased line count, but I think the currently-low\nline count is due to bad style. PeterG provided some review comments,\nin particular when to do the tiebreaking, which I addressed.\n\nThis patch has been around for a while, so I'll take a fresh look and\nsee if I see risk areas, and re-run a few sanity checks. 
Of course more\nfeedback would also be welcome.\n\nPERFORMANCE:\n\n======\nSetup:\n======\n\nbase: master with v9-0001 applied (GUCs only)\nrefactor: master with v9-0001, v9-0002, v9-0003 applied\n\nNote that I wasn't able to see any performance difference between the\nbase and master, v9-0001 just adds some GUCs to make testing easier.\n\nglibc 2.35 ICU 70.1\ngcc 11.3.0 LLVM 14.0.0\n\nbuilt with meson (uses -O3)\n\n$ perl text_generator.pl 10000000 10 > /tmp/strings.utf8.txt\n\nCREATE TABLE s (t TEXT);\nCOPY s FROM '/tmp/strings.utf8.txt';\nVACUUM FREEZE s;\nCHECKPOINT;\nSET work_mem='10GB';\nSET max_parallel_workers = 0;\nSET max_parallel_workers_per_gather = 0;\n\n=============\nTest queries:\n=============\n\nEXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"C\";\nEXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en_US\";\nEXPLAIN ANALYZE SELECT t FROM s ORDER BY t COLLATE \"en-US-x-icu\";\n\nTimings are measured as the milliseconds to return the first tuple from\nthe Sort operator (as reported in EXPLAIN ANALYZE). Median of three\nruns.\n\n========\nResults:\n========\n\n base refactor speedup\n\nsort_abbreviated_keys=false:\n C 7377 7273 1.4%\n en_US 35081 35090 0.0%\n en-US-x-ixu 20520 19465 5.4%\n\nsort_abbreviated_keys=true:\n C 8105 8008 1.2%\n en_US 35067 34850 0.6%\n en-US-x-icu 22626 21507 5.2%\n\n===========\nConclusion:\n===========\n\nThese numbers can move +/-1 percentage point, so I'd interpret anything\nless than that as noise. This happens to be the first run where all the\nnumbers favored the refactoring patch, but it is generally consistent\nwith what I had seen before.\n\nThe important part is that, for ICU, it appears to be a substantial\nspeedup when using meson (-O3).\n\nAlso, when/if the multilib ICU support goes in, that may lose some of\nthese gains due to an extra indirect function call.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Thu, 26 Jan 2023 15:47:13 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On 27.01.23 00:47, Jeff Davis wrote:\n> I'm hoping to commit 0002 and 0003 soon-ish, maybe a week or two,\n> please let me know if you want me to hold off. (I won't commit the GUCs\n> unless others find them generally useful; they are included here to\n> more easily reproduce my performance tests.)\n\nI have looked a bit at 0002 and 0003. I like the direction. I'll spend \na bit more time reviewing it in detail. It moves a lot of code around.\n\nI don't know to what extent this depends on the abbreviated key GUC \ndiscussion. Does the rest of this patch set depend on this?\n\n\n\n",
"msg_date": "Tue, 31 Jan 2023 11:40:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Tue, 2023-01-31 at 11:40 +0100, Peter Eisentraut wrote:\n> I don't know to what extent this depends on the abbreviated key GUC \n> discussion. Does the rest of this patch set depend on this?\n\nThe overall refactoring is not dependent logically on the GUC patch. It\nmay require some trivial fixup if you eliminate the GUC patch.\n\nI left it there because it makes exploring/testing easier (at least for\nme), but the GUC patch doesn't need to be committed if there's no\nconsensus.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 31 Jan 2023 15:33:10 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On 01.02.23 00:33, Jeff Davis wrote:\n> On Tue, 2023-01-31 at 11:40 +0100, Peter Eisentraut wrote:\n>> I don't know to what extent this depends on the abbreviated key GUC\n>> discussion. Does the rest of this patch set depend on this?\n> \n> The overall refactoring is not dependent logically on the GUC patch. It\n> may require some trivial fixup if you eliminate the GUC patch.\n> \n> I left it there because it makes exploring/testing easier (at least for\n> me), but the GUC patch doesn't need to be committed if there's no\n> consensus.\n\nI took another closer look at the 0002 and 0003 patches.\n\nThe commit message for 0002 says \"Also remove the TRUST_STRXFRM define\", \nbut I think this is incorrect, as that is done in the 0001 patch.\n\nI don't like that the pg_strnxfrm() function requires these kinds of \nrepetitive error checks:\n\n+ if (rsize != bsize)\n+ elog(ERROR, \"pg_strnxfrm() returned unexpected result\");\n\nThis could be checked inside the function itself, so that the callers \ndon't have to do this themselves every time.\n\nI don't really understand the 0003 patch. It's a lot of churn but I'm \nnot sure that it achieves more clarity or something.\n\nThe new function pg_locale_deterministic() seems sensible. Maybe this \ncould be proposed as a separate patch.\n\nI don't understand the new header pg_locale_internal.h. What is \n\"internal\" and what is not internal? What are we hiding from whom? \nThere are no code comments about this AFAICT.\n\npg_locale_struct has new fields\n\n+ char *collate;\n+ char *ctype;\n\nthat are not explained anywhere.\n\nI think this patch would need a bit more explanation and commenting.\n\n\n\n",
"msg_date": "Mon, 13 Feb 2023 11:35:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "New version attached. Changes:\n\n* I moved the GUC patch to the end (so you can ignore it if it's not\nuseful for review)\n* I cut out the pg_locale_internal.h rearrangement (at least for now,\nit might seem useful after the dust settles on the other changes).\n* I added a separate patch for pg_locale_deterministic().\n* I added a separate patch for a simple cleanup of a USE_ICU special\ncase.\n\nNow the patches are:\n\n 0001: pg_strcoll/pg_strxfrm\n 0002: pg_locale_deterministic()\n 0003: cleanup a USE_ICU special case\n 0004: GUCs (only for testing, not for commit)\n\nResponses to your review comments inline below:\n\nOn Mon, 2023-02-13 at 11:35 +0100, Peter Eisentraut wrote:\n> The commit message for 0002 says \"Also remove the TRUST_STRXFRM\n> define\", \n> but I think this is incorrect, as that is done in the 0001 patch.\n\nFixed.\n\n> I don't like that the pg_strnxfrm() function requires these kinds of \n> repetitive error checks:\n> \n> + if (rsize != bsize)\n> + elog(ERROR, \"pg_strnxfrm() returned unexpected\n> result\");\n> \n> This could be checked inside the function itself, so that the callers\n> don't have to do this themselves every time.\n\nThe current API allows for a pattern like:\n\n /* avoids extra work if existing buffer is big enough */\n len = pg_strxfrm(buf, src, bufSize, loc);\n if (len >= bufSize)\n {\n buf = repalloc(len+1);\n bufSize = len+1;\n len2 = pg_strxfrm(buf, src, bufSize, loc);\n }\n\nThe test for rsize != bsize are just there to check that the underlying\nlibrary calls (strxfrm or getSortKey) behave as documented, and we\nexpect that they'd never be hit. It's hard to move that kind of check\ninto pg_strxfrm() without making it also manage the buffers.\n\nDo you have a more specific suggestion? I'd like to keep the API\nflexible enough that the caller can manage the buffers, like with\nabbreviated keys. 
Perhaps the check can just be removed if we trust\nthat the library functions at least get the size calculation right? Or\nturned into an Assert?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS",
"msg_date": "Mon, 13 Feb 2023 15:45:41 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Mon, 2023-02-13 at 15:45 -0800, Jeff Davis wrote:\n> New version attached. Changes:\n\nThese patches, especially 0001, have been around for a while, and\nthey've received some review attention with no outstanding TODOs that\nI'm aware of.\n\nI plan to commit v10 (or something close to it) soon unless someone has\nadditional feedback.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Mon, 20 Feb 2023 16:38:18 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On 14.02.23 00:45, Jeff Davis wrote:\n> Now the patches are:\n> \n> 0001: pg_strcoll/pg_strxfrm\n> 0002: pg_locale_deterministic()\n> 0003: cleanup a USE_ICU special case\n> 0004: GUCs (only for testing, not for commit)\n\nI haven't read the whole thing again, but this arrangement looks good to \nme. I don't have an opinion on whether 0004 is actually useful.\n\n\n\n",
"msg_date": "Wed, 22 Feb 2023 20:49:15 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Rework of collation code, extensibility"
},
{
"msg_contents": "On Wed, 2023-02-22 at 20:49 +0100, Peter Eisentraut wrote:\n> > On 14.02.23 00:45, Jeff Davis wrote:\n> > > > Now the patches are:\n> > > > \n> > > > 0001: pg_strcoll/pg_strxfrm\n> > > > 0002: pg_locale_deterministic()\n> > > > 0003: cleanup a USE_ICU special case\n> > > > 0004: GUCs (only for testing, not for commit)\n> > \n> > I haven't read the whole thing again, but this arrangement looks\n> > good > to \n> > me. I don't have an opinion on whether 0004 is actually useful.\n\nCommitted with a few revisions after I took a fresh look over the\npatch.\n\nThe most significant was that I found out that we are also hashing the\nNUL byte at the end of the string when the collation is non-\ndeterministic. The refactoring patch doesn't change that of course, but\nthe API from pg_strnxfrm() is more clear and I added comments.\n\nAlso, ICU uses int32_t for string lengths rather than size_t (I'm not\nsure that's a great idea, but that's what ICU does). I clarified the\nboundary by changing the argument types of the ICU-specific static\nfunctions to int32_t, while leaving the API entry points as size_t.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 15:59:04 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": true,
"msg_subject": "Re: Rework of collation code, extensibility"
}
] |
[
{
"msg_contents": "The standard uses XQuery regular expressions, which I believe are subtly \ndifferent from ours. Because of that, I had been hesitant to add some \nstandard functions to our library such as <regex occurrences function>.\n\nWhile looking at commit 6424337073589476303b10f6d7cc74f501b8d9d7 from \nlast year (which will come up soon from somebody else for a different \nreason), I noticed that we added those functions for Oracle \ncompatibility even though the regexp language was not the same.\n\nAre there any objections to me writing a patch to add SQL Standard \nregular expression functions even though they call for XQuery and we \nwould use our own language?\n-- \nVik Fearing\n\n\n",
"msg_date": "Sun, 18 Dec 2022 14:59:56 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Standard REGEX functions"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> Are there any objections to me writing a patch to add SQL Standard \n> regular expression functions even though they call for XQuery and we \n> would use our own language?\n\nYes. If we provide spec-defined syntax it should have spec-defined\nbehavior. I really don't see the value of providing different\nsyntactic sugar for functionality we already have, unless the point\nof it is to be spec-compliant, and what you suggest is exactly not\nthat.\n\nI recall having looked at the points of inconsistency (see 9.7.3.8)\nand thought that we could probably create an option flag for our regex\nengine that would address them, or at least get pretty close. It'd\ntake some work though, especially for somebody who never looked at\nthat code before.\n\nI'd be willing to blow off the locale discrepancies by continuing\nto say that you have to use an appropriate locale, and I think the\nbusiness around varying newline representations is in the way-more-\ntrouble-than-its-worth department. But we should at least match\nthe spec on available escape sequences and flag names. It would\nbe a seriously bad idea, for example, if the default\ndoes-dot-match-newline behavior wasn't per spec.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Dec 2022 09:24:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Standard REGEX functions"
},
{
"msg_contents": "On 12/18/22 15:24, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> Are there any objections to me writing a patch to add SQL Standard\n>> regular expression functions even though they call for XQuery and we\n>> would use our own language?\n> \n> Yes. If we provide spec-defined syntax it should have spec-defined\n> behavior. I really don't see the value of providing different\n> syntactic sugar for functionality we already have, unless the point\n> of it is to be spec-compliant, and what you suggest is exactly not\n> that.\n\nI was expecting this answer and I can't say I disagree with it.\n\n> I recall having looked at the points of inconsistency (see 9.7.3.8)\n\nOh sweet! I was not aware of that section.\n\n> and thought that we could probably create an option flag for our regex\n> engine that would address them, or at least get pretty close. It'd\n> take some work though, especially for somebody who never looked at\n> that code before.\nYeah. If I had the chops to do this, I would have tackled row pattern \nrecognition long ago.\n\nI don't suppose project policy would allow us to use an external \nlibrary. I assume there is one out there that implements XQuery regular \nexpressions.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 00:15:46 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: Standard REGEX functions"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> I don't suppose project policy would allow us to use an external \n> library. I assume there is one out there that implements XQuery regular \n> expressions.\n\nProbably, but is it in C and does it have a compatible license?\n\nThe bigger picture here is that our track record with use of external\nlibraries is just terrible: we've outlived the original maintainers'\ninterest multiple times. (That's how we got to the current situation\nwith the regex code, for example.) So I'd want to see a pretty darn\nvibrant-looking upstream community for any new dependency we adopt.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Dec 2022 18:30:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Standard REGEX functions"
}
] |
[
{
"msg_contents": "Hello!\n\nFound that pg_upgrade test has broken for upgrades from older versions.\nThis happened for two reasons.\n1) In 7b378237a the format of \"aclitem\" changed so upgrade from <=15\nfails with error:\n\"Your installation contains the \"aclitem\" data type in user tables.\nThe internal format of \"aclitem\" changed in PostgreSQL version 16\nso this cluster cannot currently be upgraded... \"\n\nTried to fix it by changing the column type in the upgrade_adapt.sql.\nPlease see the patch attached.\n\n2) In 60684dd83 and b5d63824 there are two changes in the set of specific privileges.\nThe thing is that in the privileges.sql test there is REVOKE DELETE command\nwhich becomes pair of REVOKE ALL and GRANT all specific privileges except DELETE\nin the result dump. Therefore, any change in the set of specific privileges will lead to\na non-zero dumps diff.\nTo avoid this, i propose to replace any specific GRANT and REVOKE in the result dumps with ALL.\nThis also made in the patch attached.\n\nWould be glad to any remarks.\n\nWith best regards,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 19 Dec 2022 03:50:19 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "[BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n> 2) In 60684dd83 and b5d63824 there are two changes in the set of specific privileges.\n> The thing is that in the privileges.sql test there is REVOKE DELETE command\n> which becomes pair of REVOKE ALL and GRANT all specific privileges except DELETE\n> in the result dump. Therefore, any change in the set of specific privileges will lead to\n> a non-zero dumps diff.\n> To avoid this, i propose to replace any specific GRANT and REVOKE in the result dumps with ALL.\n> This also made in the patch attached.\n\nIsn't that likely to mask actual bugs?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Dec 2022 20:56:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 03:50:19AM +0300, Anton A. Melnikov wrote:\n> +-- The internal format of \"aclitem\" changed in PostgreSQL version 16\n> +-- so replace it with text type\n> +\\if :oldpgversion_le15\n> +DO $$\n> +DECLARE\n> + change_aclitem_type TEXT;\n> +BEGIN\n> + FOR change_aclitem_type IN\n> + SELECT 'ALTER TABLE ' || table_schema || '.' ||\n> + table_name || ' ALTER COLUMN ' ||\n> +\t\tcolumn_name || ' SET DATA TYPE text;'\n> + AS change_aclitem_type\n> + FROM information_schema.columns\n> + WHERE data_type = 'aclitem' and table_schema != 'pg_catalog'\n> + LOOP\n> + EXECUTE change_aclitem_type;\n> + END LOOP;\n> +END;\n> +$$;\n> +\\endif\n\nThis is forgetting about materialized views, which is something that\npg_upgrade would also complain about when checking for relations with\naclitems. As far as I can see, the only place in the main regression\ntest suite where we have an aclitem attribute is tab_core_types for\nHEAD and the stable branches, so it would be enough to do this\nchange. Anyway, wouldn't it be better to use the same conditions as\nthe WITH OIDS queries a few lines above, at least for consistency?\n\nNote that check_for_data_types_usage() checks for tables, matviews and\nindexes.\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 12:10:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 08:56:48PM -0500, Tom Lane wrote:\n> \"Anton A. Melnikov\" <aamelnikov@inbox.ru> writes:\n>> 2) In 60684dd83 and b5d63824 there are two changes in the set of specific privileges.\n>> The thing is that in the privileges.sql test there is REVOKE DELETE command\n>> which becomes pair of REVOKE ALL and GRANT all specific privileges except DELETE\n>> in the result dump. Therefore, any change in the set of specific privileges will lead to\n>> a non-zero dumps diff.\n>> To avoid this, i propose to replace any specific GRANT and REVOKE in the result dumps with ALL.\n>> This also made in the patch attached.\n> \n> Isn't that likely to mask actual bugs?\n\n+ # Replace specific privilegies with ALL\n+ $dump_contents =~ s/^(GRANT\\s|REVOKE\\s)(\\S*)\\s/$1ALL /mgx;\nYes, this would silence some diffs in the dumps taken from the old and\nthe new clusters. It seems to me that it is one of the things where\nthe original dumps have better be tweaked, as this does not cause a\nhard failure when running pg_upgrade.\n\nWhile thinking about that, an extra idea popped in my mind as it may\nbe interesting to be able to filter out some of the diffs in some\ncontexts. So what about adding in 002_pg_upgrade.pl a small-ish hook\nin the shape of a new environment variable pointing to a file adds\nsome custom filtering rules?\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 12:16:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "Hello!\n\nDivided patch into two parts: first part refers to the modification of\nthe old dump while the second one relates to dump filtering.\n\n1) v2-0001-Remove-aclitem-from-old-dump.patch\n\nOn 19.12.2022 06:10, Michael Paquier wrote:\n> This is forgetting about materialized views, which is something that\n> pg_upgrade would also complain about when checking for relations with\n> aclitems. As far as I can see, the only place in the main regression\n> test suite where we have an aclitem attribute is tab_core_types for\n> HEAD and the stable branches, so it would be enough to do this\n> change. Anyway, wouldn't it be better to use the same conditions as\n> the WITH OIDS queries a few lines above, at least for consistency?\n> \n> Note that check_for_data_types_usage() checks for tables, matviews and\n> indexes.\n\nFound that 'ALTER ... ALTER COLUMN SET DATA TYPE text'\nis not applicable to materialized views and indexes as well as DROP COLUMN.\nSo couldn't make anything better than drop its in the old dump if they\ncontain at least one column of 'aclitem' type.\n\ni've tested this script with:\nCREATE TABLE acltable AS SELECT 1 AS int, 'postgres=a/postgres'::aclitem AS aclitem;\nCREATE MATERIALIZED VIEW aclmview AS SELECT 'postgres=a/postgres'::aclitem AS aclitem;\nCREATE INDEX aclindex on acltable (int) INCLUDE (aclitem);\nperformed in the regression database before creating the old dump.\n\nThe only thing i haven't been able to find a case when an an 'acltype' column would\nbe preserved in the index when this type was replaced in the parent table.\nSo passing relkind = 'i' is probably redundant.\nIf it is possible to find such a case, it would be very interesting.\n\nAlso made the replacement logic for 'acltype' in the tables more closer\nto above the script that removes OIDs columns. In this script found likewise that\nALTER TABLE ... 
SET WITHOUT OIDS is not applicable to materialized views\nand ALTER MATERIALIZED VIEW doesn't support WITHOUT OIDS clause.\nBesides i couldn't find any legal way to create materialized view with oids in versions 11 or lower.\nCommand 'CREATE MATERIALIZED VIEW' doesn't support WITH OIDS or (OIDS) clause,\nas well as ALTER MATERIALIZED VIEW as mentioned above.\nEven with GUC default_with_oids = true\":\nCREATE TABLE oidtable AS SELECT 1 AS int;\nCREATE MATERIALIZED VIEW oidmv AS SELECT * FROM oidtable;\ngive:\npostgres=# SELECT oid::regclass::text FROM pg_class WHERE relname !~ '^pg_' AND relhasoids;\n oid\n----------\n oidtable\n(1 row)\nSo suggest to exclude the check of materialized views from this DO block.\nWould be grateful for remarks if i didn't consider some cases.\n\n2) v2-0002-Additional-dumps-filtering.patch\n\nOn 19.12.2022 06:16, Michael Paquier wrote:\n> \n> While thinking about that, an extra idea popped in my mind as it may\n> be interesting to be able to filter out some of the diffs in some\n> contexts. So what about adding in 002_pg_upgrade.pl a small-ish hook\n> in the shape of a new environment variable pointing to a file adds\n> some custom filtering rules?\n\nYes. Made a hook that allows to proceed an external text file with additional\nfiltering rules and example of such file. Please take a look on it.\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Thu, 22 Dec 2022 09:59:18 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 09:59:18AM +0300, Anton A. Melnikov wrote:\n> 2) v2-0002-Additional-dumps-filtering.patch\n\n+ # Replace specific privilegies with ALL\n+ $dump_contents =~ s/^(GRANT\\s|REVOKE\\s)(\\S*)\\s/$1ALL /mgx;\nThis should not be in 0002, I guess..\n\n> Yes. Made a hook that allows to proceed an external text file with additional\n> filtering rules and example of such file. Please take a look on it.\n> \n> With the best wishes,\n\nHmm. 0001 does a direct check on aclitem as data type used in an\nattribute, but misses anything related to arrays, domains or even\ncomposite types, not to mention that we'd miss uses of aclitems in\nindex expressions.\n\nThat's basically the kind of thing check_for_data_types_usage() does.\nI am not sure that it is a good idea to provide a limited coverage if\nwe do that for matviews and indexes, and the complexity induced in\nupgrade_adapt.sql is not really appealing either. For now, I have\nfixed the most pressing part for tables to match with the buildfarm\ncode that just drops the aclitem column rather than doing that for all\nthe relations that could have one.\n\nThe part on WITH OIDS has been addressed in its own commit down to\nv12, removing the handling for matviews but adding one for foreign\ntables where the operation is supported.\n--\nMichael",
"msg_date": "Fri, 23 Dec 2022 11:42:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 11:42:39AM +0900, Michael Paquier wrote:\n> Hmm. 0001 does a direct check on aclitem as data type used in an\n> attribute,\n\n> For now, I have fixed the most pressing part for tables to match with\n> the buildfarm\n\n+DO $$\n+ DECLARE\n+ rec text;\n+ col text;\n+ BEGIN\n+ FOR rec in\n+ SELECT oid::regclass::text\n+ FROM pg_class\n+ WHERE relname !~ '^pg_'\n+ AND relkind IN ('r')\n+ ORDER BY 1\n+ LOOP\n+ FOR col in SELECT attname FROM pg_attribute\n+ WHERE attrelid::regclass::text = rec\n+ AND atttypid = 'aclitem'::regtype\n+ LOOP\n+ EXECUTE 'ALTER TABLE ' || quote_ident(rec) || ' ALTER COLUMN ' ||\n+ quote_ident(col) || ' SET DATA TYPE text';\n+ END LOOP;\n+ END LOOP;\n+ END; $$;\n\nThis will do a seq scan around pg_attribute for each relation (currently\n~600)...\n\nHere, that takes a few seconds in a debug build, and I guess it'll be\nmore painful when running under valgrind/discard_caches/antiquated\nhardware/etc.\n\nThis would do a single seqscan:\nSELECT format('ALTER TABLE %I ALTER COLUMN %I TYPE TEXT', attrelid::regclass, attname) FROM pg_attribute WHERE atttypid='aclitem'::regtype; -- AND ...\n\\gexec\n\n-- \nJustin\n\n\n",
"msg_date": "Thu, 22 Dec 2022 21:27:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 09:27:24PM -0600, Justin Pryzby wrote:\n> This would do a single seqscan:\n> SELECT format('ALTER TABLE %I ALTER COLUMN %I TYPE TEXT',\n> attrelid::regclass, attname) FROM pg_attribute WHERE\n> atttypid='aclitem'::regtype; -- AND ...\n> \\gexec\n\nFWIW, I find the use of a FOR loop with a DO block much cleaner to\nfollow in this context, so something like the attached would be able\nto group the two queries and address your point on O(N^2). Do you\nlike that? \n--\nMichael",
"msg_date": "Fri, 23 Dec 2022 17:51:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "Hello!\n\nOn 23.12.2022 06:27, Justin Pryzby wrote:\n> \n> This would do a single seqscan:\n> SELECT format('ALTER TABLE %I ALTER COLUMN %I TYPE TEXT', attrelid::regclass, attname) FROM pg_attribute WHERE atttypid='aclitem'::regtype; -- AND ...\n> \\gexec\n> \n\nTouched a bit on how long it takes to execute different types of queries on my PC.\nAt each measurement, the server restarted with a freshly copied regression database.\n1)\nDO $$\nDECLARE\n change_aclitem_type TEXT;\nBEGIN\n FOR change_aclitem_type IN\n SELECT 'ALTER TABLE ' || table_schema || '.' ||\n table_name || ' ALTER COLUMN ' ||\n\t\tcolumn_name || ' SET DATA TYPE text;'\n AS change_aclitem_type\n FROM information_schema.columns\n WHERE data_type = 'aclitem' and table_schema != 'pg_catalog'\n LOOP\n EXECUTE change_aclitem_type;\n END LOOP;\nEND;\n$$;\n\n2)\nDO $$\n DECLARE\n rec text;\n\tcol text;\n BEGIN\n FOR rec in\n SELECT oid::regclass::text\n FROM pg_class\n WHERE relname !~ '^pg_'\n AND relkind IN ('r')\n ORDER BY 1\n LOOP\n FOR col in SELECT attname FROM pg_attribute\n WHERE attrelid::regclass::text = rec\n AND atttypid = 'aclitem'::regtype\n LOOP\n EXECUTE 'ALTER TABLE ' || quote_ident(rec) || ' ALTER COLUMN ' ||\n quote_ident(col) || ' SET DATA TYPE text';\n END LOOP;\n END LOOP;\n END; $$;\n\n3)\nSELECT format('ALTER TABLE %I ALTER COLUMN %I TYPE TEXT', attrelid::regclass, attname) FROM pg_attribute WHERE atttypid='aclitem'::regtype;\n\\gexec\n\n4) The same as 3) but in the DO block\nDO $$\nDECLARE\n change_aclitem_type TEXT;\nBEGIN\n FOR change_aclitem_type IN\n SELECT 'ALTER TABLE ' || attrelid::regclass || ' ALTER COLUMN ' ||\n\t\tattname || ' TYPE TEXT;'\n AS change_aclitem_type\n FROM pg_attribute\n WHERE atttypid = 'aclitem'::regtype\n LOOP\n EXECUTE change_aclitem_type;\n END LOOP;\nEND;\n$$;\n\nAverage execution time for three times:\n_____________________________________\n|N of query: | 1 | 2 | 3 | 4 |\n|____________________________________\n|Avg time, ms: | 58 | 
1076 | 51 | 33 |\n|____________________________________\n\nRaw results in timing.txt\n\nBest wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Fri, 23 Dec 2022 12:17:18 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "Sorry, didn't get to see the last letter!\n\nOn 23.12.2022 11:51, Michael Paquier wrote:\n> \n> FWIW, I find the use of a FOR loop with a DO block much cleaner to\n> follow in this context, so something like the attached would be able\n> to group the two queries and address your point on O(N^2). Do you\n> like that?\n> --\n> Michael\n\nThe query:\n\n DO $$\n DECLARE\n rec record;\n BEGIN\n FOR rec in\n SELECT oid::regclass::text as rel, attname as col\n FROM pg_class c, pg_attribute a\n WHERE c.relname !~ '^pg_'\n AND c.relkind IN ('r')\n AND a.attrelid = c.oid\n AND a.atttypid = 'aclitem'::regtype\n ORDER BY 1\n LOOP\n EXECUTE 'ALTER TABLE ' || quote_ident(rec.rel) || ' ALTER COLUMN ' ||\n quote_ident(rec.col) || ' SET DATA TYPE text';\n END LOOP;\n END; $$;\n\ngives the average time of 36 ms at the same conditions.\n\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:43:00 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 05:51:28PM +0900, Michael Paquier wrote:\n> On Thu, Dec 22, 2022 at 09:27:24PM -0600, Justin Pryzby wrote:\n> > This would do a single seqscan:\n> > SELECT format('ALTER TABLE %I ALTER COLUMN %I TYPE TEXT',\n> > attrelid::regclass, attname) FROM pg_attribute WHERE\n> > atttypid='aclitem'::regtype; -- AND ...\n> > \\gexec\n> \n> FWIW, I find the use of a FOR loop with a DO block much cleaner to\n> follow in this context, so something like the attached would be able\n> to group the two queries and address your point on O(N^2). Do you\n> like that? \n\nLGTM. Thanks.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 23 Dec 2022 10:39:25 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 10:39:25AM -0600, Justin Pryzby wrote:\n> On Fri, Dec 23, 2022 at 05:51:28PM +0900, Michael Paquier wrote:\n>> FWIW, I find the use of a FOR loop with a DO block much cleaner to\n>> follow in this context, so something like the attached would be able\n>> to group the two queries and address your point on O(N^2). Do you\n>> like that? \n> \n> LGTM. Thanks.\n\nI am a bit busy for the next few days, but I may be able to get that\ndone next Monday.\n--\nMichael",
"msg_date": "Sat, 24 Dec 2022 09:55:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 10:39:25AM -0600, Justin Pryzby wrote:\n> LGTM. Thanks.\n\nDone as of d3c0cc4.\n--\nMichael",
"msg_date": "Mon, 26 Dec 2022 08:02:28 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 12:43:00PM +0300, Anton A. Melnikov wrote:\n> Sorry, didn't get to see the last letter!\n\nNo worries, the result is the same :)\n\nI was looking at 0002 to add a callback to provide custom filtering\nrules.\n\n+ my @ext_filter = split('\\/', $_);\nAre you sure that enforcing a separation with a slash is a good idea?\nWhat if the filters include directory paths or characters that are\nescaped, for example?\n\nRather than introducing a filter.regex, I would tend to just document\nthat in the README with a small example. I have been considering a\nfew alternatives while making this useful in most cases, still my mind\nalways comes back to the simplest thing: to just read each line of\nthe file, chomp it and apply the pattern to the log file..\n--\nMichael",
"msg_date": "Mon, 26 Dec 2022 11:52:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "Hello!\n\nOn 23.12.2022 05:42, Michael Paquier wrote:\n> On Thu, Dec 22, 2022 at 09:59:18AM +0300, Anton A. Melnikov wrote:\n>> 2) v2-0002-Additional-dumps-filtering.patch\n> \n> + # Replace specific privilegies with ALL\n> + $dump_contents =~ s/^(GRANT\\s|REVOKE\\s)(\\S*)\\s/$1ALL /mgx;\n> This should not be in 0002, I guess..\n\nMade a separate patch for it: v3-0001-Fix-dumps-filtering.patch\n\nOn 26.12.2022 05:52, Michael Paquier wrote:\n> On Fri, Dec 23, 2022 at 12:43:00PM +0300, Anton A. Melnikov wrote:\n> I was looking at 0002 to add a callback to provide custom filtering\n> rules.\n> \n> + my @ext_filter = split('\\/', $_);\n> Are you sure that enforcing a separation with a slash is a good idea?\n> What if the filters include directory paths or characters that are\n> escaped, for example?\n> \n> Rather than introducing a filter.regex, I would tend to just document\n> that in the README with a small example. I have been considering a\n> few alternatives while making this useful in most cases, still my mind\n> alrways comes back to the simplest thing we to just read each line of\n> the file, chomp it and apply the pattern to the log file..\n\nThanks for your attention!\nYes, indeed. It will be really simpler.\nMade it in the v3-0002-Add-external-dumps-filtering.patch\n\nWith the best wishes,\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Mon, 26 Dec 2022 09:22:08 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 09:22:08AM +0300, Anton A. Melnikov wrote:\n> Made a separate patch for it: v3-0001-Fix-dumps-filtering.patch\n\nWell, the thing about this part is that it is not needed: the same\ncan be achieved with 0002 in place.\n\n> Yes, indeed. It will be really simpler.\n> Made it in the v3-0002-Add-external-dumps-filtering.patch\n\nI have fixed a few things in the patch, like switching the step\nskipping comments with a regexp, adding one step to ignore empty\nlines, applying a proper indentation and fixing comments here and\nthere (TESTING was incorrect, btw).\n\nIt is worth noting that perlcritic was complaining here, as eval is\ngetting used with a string. I have spent a few days looking at that,\nand I really want a maximum of flexibility in the rules that can be\napplied so I have put a \"no critic\" rule, which is fine by me as this\nextra file is something owned by the user and it would apply only to\ncross-version upgrades.\n\nSo it looks like we are now done here.. With all these pieces in\nplace in the tests, I don't see why it would not be possible to\nautomate the cross-version tests of pg_upgrade.\n--\nMichael",
"msg_date": "Tue, 27 Dec 2022 14:44:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "Hello!\n\nOn 27.12.2022 08:44, Michael Paquier wrote:\n> \n> It is worth noting that perlcritic was complaining here, as eval is\n> getting used with a string. I have spent a few days looking at that,\n> and I really want a maximum of flexibility in the rules that can be\n> applied so I have put a \"no critic\" rule, which is fine by me as this\n> extra file is something owned by the user and it would apply only to\n> cross-version upgrades.\n\nI think it's a very smart decision. Thank you very much!\n\n> So it looks like we are now done here.. With all these pieces in\n> place in the tests, I don't see why it would not be possible to\n> automate the cross-version tests of pg_upgrade.\n\nI've checked the cross-upgrade test from 9.5+ to current master and\nhave found no problem with accuracy up to dumps filtering.\nFor cross-version tests automation one has to write additional\nfiltering rules in the external files.\n\nI would like to try to realize this, better in a separate thread.\nIf there are no other considerations could you close the corresponding\nrecord on the January CF, please?\n\n\nWith the best wishes!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 27 Dec 2022 15:26:10 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 03:26:10PM +0300, Anton A. Melnikov wrote:\n> I would like to try realize this, better in a separate thread.\n\nI don't think that this should be added into the tree, but if you have\nper-version filtering rules, of course feel free to publish that to\nthe lists. I am sure that this could be helpful for others.\n\n> If there are no other considerations could you close the corresponding\n> record on the January CF, please?\n\nIndeed, now marked as committed.\n--\nMichael",
"msg_date": "Tue, 27 Dec 2022 22:50:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
},
{
"msg_contents": "On 27.12.2022 16:50, Michael Paquier wrote:\n\n>> If there are no other considerations could you close the corresponding\n>> record on the January CF, please?\n> \n> Indeed, now marked as committed.\n-\n\nThanks a lot!\n\nMerry Christmas!\n\n-- \nAnton A. Melnikov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n",
"msg_date": "Tue, 27 Dec 2022 16:55:54 +0300",
"msg_from": "\"Anton A. Melnikov\" <aamelnikov@inbox.ru>",
"msg_from_op": true,
"msg_subject": "Re: [BUG] pg_upgrade test fails from older versions."
}
] |
[
{
"msg_contents": "Hi hackers!\n\nI saw a thread in a social network[0] about GROUP BY ALL. The idea seems useful.\nI always was writing something like\n select datname, usename, count(*) from pg_stat_activity group by 1,2;\nand then rewriting to\n select datname, usename, query, count(*) from pg_stat_activity group by 1,2;\nand then \"aaahhhh, add a number at the end\".\n\nWith the proposed feature I can write just\n select datname, usename, count(*) from pg_stat_activity group by all;\n\nPFA very dummy implementation just for a discussion. I think we can\nadd all non-aggregating targets.\n\nWhat do you think?\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.linkedin.com/posts/mosha_duckdb-firebolt-snowflake-activity-7009615821006131200-VQ0o/",
"msg_date": "Sun, 18 Dec 2022 20:19:10 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "GROUP BY ALL"
},
{
"msg_contents": "Andrey Borodin <amborodin86@gmail.com> writes:\n> I saw a thread in a social network[0] about GROUP BY ALL. The idea seems useful.\n\nIsn't that just a nonstandard spelling of SELECT DISTINCT?\n\nWhat would happen if there are aggregate functions in the tlist?\nI'm not especially on board with \"ALL\" meaning \"ALL (oh, but not\naggregates)\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 18 Dec 2022 23:30:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On Sun, Dec 18, 2022 at 8:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I'm not especially on board with \"ALL\" meaning \"ALL (oh, but not\n> aggregates)\".\n\nYes, that's the weak part of the proposal. I even thought about\nrenaming it to \"GROUP BY SOMEHOW\" or even \"GROUP BY SURPRISE ME\".\nI mean I see some cases when it's useful and much less cases when it's\ndangerously ambiguous. E.g. grouping by result of a subquery looks way\ntoo complex and unpredictable. But with simple Vars... what could go\nwrong?\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Sun, 18 Dec 2022 20:40:00 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On Sunday, December 18, 2022, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrey Borodin <amborodin86@gmail.com> writes:\n> > I saw a thread in a social network[0] about GROUP BY ALL. The idea seems\n> useful.\n>\n> Isn't that just a nonstandard spelling of SELECT DISTINCT?\n>\n> What would happen if there are aggregate functions in the tlist?\n> I'm not especially on board with \"ALL\" meaning \"ALL (oh, but not\n> aggregates)\".\n>\n>\nIIUC some systems treat any non-aggregated column as an implicit group by\ncolumn. This proposal is an explicit way to enable that implicit behavior\nin PostgreSQL. It is, as you note, an odd meaning for the word ALL.\n\nWe tend to not accept non-standard usability syntax extensions even if\nothers systems implement them. I don’t see this one ending up being an\nexception…\n\nDavid J.",
"msg_date": "Sun, 18 Dec 2022 21:45:58 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On Sun, 18 Dec 2022 at 23:30, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrey Borodin <amborodin86@gmail.com> writes:\n> > I saw a thread in a social network[0] about GROUP BY ALL. The idea seems\n> useful.\n>\n> Isn't that just a nonstandard spelling of SELECT DISTINCT?\n>\n\nIn a pure relational system, yes; but since Postgres allows duplicate rows,\nboth in actual table data and in intermediate and final result sets, no.\nAlthough I'm pretty sure no aggregates other than count() are useful - any\nother aggregate would always just combine count() copies of the duplicated\nvalue in some way.\n\nWhat would happen if there are aggregate functions in the tlist?\n> I'm not especially on board with \"ALL\" meaning \"ALL (oh, but not\n> aggregates)\".\n>\n\nThe requested behaviour can be accomplished by an invocation something like:\n\nselect (t).*, count(*) from (select (…field1, field2, …) as t from\n…tables…) s group by t;\n\nSo we collect all the required fields as a tuple, group by the tuple, and\nthen unpack it into separate columns in the outer query.",
"msg_date": "Mon, 19 Dec 2022 08:44:39 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On 12/19/22 05:19, Andrey Borodin wrote:\n> Hi hackers!\n> \n> I saw a thread in a social network[0] about GROUP BY ALL. The idea seems useful.\n> I always was writing something like\n> select datname, usename, count(*) from pg_stat_activity group by 1,2;\n> and then rewriting to\n> select datname, usename, query, count(*) from pg_stat_activity group by 1,2;\n> and then \"aaahhhh, add a number at the end\".\n> \n> With the proposed feature I can write just\n> select datname, usename, count(*) from pg_stat_activity group by all;\n\n\nWe already have GROUP BY ALL, but it doesn't do this.\n\n\n> PFA very dummy implementation just for a discussion. I think we can\n> add all non-aggregating targets.\n> \n> What do you think?\n\n\nI think this is a pretty terrible idea. If we want that kind of \nbehavior, we should just allow the GROUP BY to be omitted since without \ngrouping sets, it is kind of redundant anyway.\n\nI don't know what my opinion is on that.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 17:53:46 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 05:53:46PM +0100, Vik Fearing wrote:\n> I think this is a pretty terrible idea. If we want that kind of behavior,\n> we should just allow the GROUP BY to be omitted since without grouping sets,\n> it is kind of redundant anyway.\n> \n> I don't know what my opinion is on that.\n\nThis is a very interesting concept. Because Postgres requires GROUP BY\nof all non-aggregate columns of a target list, Postgres could certainly\nautomatically generate the GROUP BY. However, readers of the query\nmight not easily distinguish function calls from aggregates, so in a way\nthe GROUP BY is for the reader, not for the database server.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Fri, 6 Jan 2023 16:56:11 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: GROUP BY ALL"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 1:56 PM Bruce Momjian <bruce@momjian.us> wrote:\n> Because Postgres requires GROUP BY\n> of all non-aggregate columns of a target list, Postgres could certainly\n> automatically generate the GROUP BY. However, readers of the query\n> might not easily distinguish function calls from aggregates, so in a way\n> the GROUP BY is for the reader, not for the database server.\n>\n\nHow about \"SELECT a,b, count(*) FROM t GROUP AUTOMATICALLY;\" ? And\nthen a shorthand for \"SELECT a,b, count(*) FROM t GROUP;\".\n\nAnyway, the problem is not in clever syntax, but in the fact that it's\nan SQL extension, not a standard...\n\nBest regards, Andrey Borodin.\n\n\n",
"msg_date": "Fri, 6 Jan 2023 16:40:02 -0800",
"msg_from": "Andrey Borodin <amborodin86@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: GROUP BY ALL"
}
] |
[
{
"msg_contents": "Hi all,\n\nWhile looking at a different patch, I have noticed that the error\nmessages produced by pg_basebackup and pg_dump are a bit inconsistent\nwith the others. Why not switch all of them as in the\nattached? This reduces the overall translation effort, using more:\n\"this build does not support compression with %s\"\n\nThoughts or objections?\n--\nMichael",
"msg_date": "Mon, 19 Dec 2022 14:42:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Simplifications for error messages related to compression"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 02:42:13PM +0900, Michael Paquier wrote:\n> Thoughts or objections?\n\nHearing nothing, done..\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 10:43:19 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Simplifications for error messages related to compression"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 10:43:19AM +0900, Michael Paquier wrote:\n> On Mon, Dec 19, 2022 at 02:42:13PM +0900, Michael Paquier wrote:\n> > Thoughts or objections?\n> \n> Hearing nothing, done..\n\n- pg_fatal(\"not built with zlib support\");\n+ pg_fatal(\"this build does not support compression with %s\", \"gzip\");\n\nI tried to say in the other thread that gzip != zlib.\n\nThis message may be better for translation, but (for WriteDataToArchive\net al) the message is now less accurate, and I suspect will cause some\nconfusion.\n\n5e73a6048 introduced a similar user-facing issue: pg_dump -Fc -Z gzip\ndoes not output a gzip.\n\n$ ./tmp_install/usr/local/pgsql/bin/pg_dump -h /tmp -Fc -Z gzip regression |xxd |head\n00000000: 5047 444d 5001 0e00 0408 0101 0100 0000 PGDMP...........\n\nI'm okay with it if you think this is no problem - maybe it's enough to\ndocument that the output is zlib and not gzip.\n\nOtherwise, one idea was to reject \"gzip\" with -Fc. It could accept\nintegers only.\n\nBTW I think it's helpful to include the existing participants when\nforking a thread.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 20 Dec 2022 20:29:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplifications for error messages related to compression"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 08:29:32PM -0600, Justin Pryzby wrote:\n> - pg_fatal(\"not built with zlib support\");\n> + pg_fatal(\"this build does not support compression with %s\", \"gzip\");\n> \n> I tried to say in the other thread that gzip != zlib.\n>\n> This message may be better for translation, but (for WriteDataToArchive\n> et al) the message is now less accurate, and I suspect will cause some\n> confusion.\n\nCompression specifications use this term, so, like bbstreamer_gzip.c,\nthat does not sound like a big difference to me as everything depends\non HAVE_LIBZ, still we use gzip for all the user-facing terms.\n\n> 5e73a6048 introduced a similar user-facing issue: pg_dump -Fc -Z gzip\n> does not output a gzip.\n\nWe've never mentioned any compression method in the past docs, just\nthat things can be compressed.\n\n> $ ./tmp_install/usr/local/pgsql/bin/pg_dump -h /tmp -Fc -Z gzip regression |xxd |head\n> 00000000: 5047 444d 5001 0e00 0408 0101 0100 0000 PGDMP...........\n> \n> I'm okay with it if you think this is no problem - maybe it's enough to\n> document that the output is zlib and not gzip.\n\nPerhaps.\n\n> Otherwise, one idea was to reject \"gzip\" with -Fc. It could accept\n> integers only.\n\nI am not sure what we would gain by doing that, except complications\nwith the code surrounding the handling of compression specifications,\nwhich is a backward-compatible thing as it can handle integer-only\ninputs.\n\n> BTW I think it's helpful to include the existing participants when\n> forking a thread.\n\nErr, okay.. Sorry about that.\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 13:52:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Simplifications for error messages related to compression"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 01:52:21PM +0900, Michael Paquier wrote:\n> On Tue, Dec 20, 2022 at 08:29:32PM -0600, Justin Pryzby wrote:\n> > - pg_fatal(\"not built with zlib support\");\n> > + pg_fatal(\"this build does not support compression with %s\", \"gzip\");\n> > \n> > I tried to say in the other thread that gzip != zlib.\n> >\n> > This message may be better for translation, but (for WriteDataToArchive\n> > et al) the message is now less accurate, and I suspect will cause some\n> > confusion.\n> \n> Compression specifications use this term, so, like bbstreamer_gzip.c,\n\nYes, and its current users (basebackup) output a gzip file, right ?\n\npg_dump -Fc doesn't output a gzip file, but now it's using user-facing\ncompression specifications referring to it as \"gzip\".\n\n> that does not sound like a big difference to me as everything depends\n> on HAVE_LIBZ, still we use gzip for all the user-facing terms.\n\npostgres is using -lz to write both gzip files and non-gzip libz files;\nits associated compiletime constant has nothing to do with which header\nformat is being output.\n\n\t* This file includes two APIs for dealing with compressed data. The first\n\t* provides more flexibility, using callbacks to read/write data from the\n\t* underlying stream. The second API is a wrapper around fopen/gzopen and\n\t* friends, providing an interface similar to those, but abstracts away\n\t* the possible compression. 
Both APIs use libz for the compression, but\n\t* the second API uses gzip headers, so the resulting files can be easily\n\t* manipulated with the gzip utility.\n\n> > 5e73a6048 introduced a similar user-facing issue: pg_dump -Fc -Z gzip\n> > does not output a gzip.\n> \n> We've never mentioned any compression method in the past docs, just\n> that things can be compressed.\n\nWhat do you mean ?\n\nThe commit added:\n+ The compression method can be set to <literal>gzip</literal> or ...\n\nAnd the docs still say:\n\n- For plain text output, setting a nonzero compression level causes\n- the entire output file to be compressed, as though it had been\n- fed through <application>gzip</application>; but the default is not to compress.\n\nIf you tell someone they can write -Z gzip, they'll be justified in\nexpecting to see \"gzip\" as output.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 20 Dec 2022 23:12:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Simplifications for error messages related to compression"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 11:12:22PM -0600, Justin Pryzby wrote:\n> Yes, and its current users (basebackup) output a gzip file, right ?\n> \n> pg_dump -Fc doesn't output a gzip file, but now it's using user-facing\n> compression specifications referring to it as \"gzip\".\n\nNot all of them are compressed either, like the base TOC file.\n\n> If you tell someone they can write -Z gzip, they'll be justified in\n> expecting to see \"gzip\" as output.\n\nThat's the point where my interpretation is different than yours,\nwhere I don't really see as an issue that we do not generate a gzip\nfile all the time in the output. Honestly, I am not sure that there\nis anything to win here by not using the same option interface for all\nthe binaries or have tweaks to make pg_dump cope with that (like using\nzlib as an extra alias). The custom, directory and tar formats of\npg_dumps have their own idea of the files to compress or not (like\nthe base TOC file is never compressed so as one can do a pg_restore\n-l).\n--\nMichael",
"msg_date": "Thu, 22 Dec 2022 14:04:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Simplifications for error messages related to compression"
}
] |
[
{
"msg_contents": "Hi hackers.\n\nI wanted some way to test overlapping transactions from different\npublisher sessions so I could better test the logical replication\n\"parallel apply\" feature being developed in another thread [1]. AFAIK\ncurrently there is no way to do this kind of test except manually\n(e.g. using separate psql sessions).\n\nMeanwhile, using the isolationtester spec tests [2] it is already\npossible to test overlapping transactions on multiple sessions. So\nisolationtester already does almost everything I wanted except that it\ncurrently only knows about a single connection string (either default\nor passed as argument) which every one of the \"spec sessions\" uses. In\ncontrast, for my pub/sub testing, I wanted multiple servers.\n\nThe test_decoding tests [3] have specs a bit similar to this - the\ndifference is that I want to observe subscriber-side apply workers\nexecute and actually do something.\n\n~\n\nMy patch/idea makes a small change to the isolationtester spec\ngrammar. Now each session can optionally specify its own connection\nstring. When specified, this will override any connection string for\nthat session that would otherwise have been used. This is the only\nchange.\n\nWith this change now it is possible to write spec test code like\nbelow. 
Here I have 2 publisher sessions and 1 subscriber session, with\nmy new session ‘conninfo’ specified appropriately for each session\n\n======\n\n# This test assumes there is already setup as follows:\n#\n# PG server for publisher (running on port 7651)\n# - has TABLE tbl\n# - has PUBLICATION pub1\n#\n# PG server for subscriber (running on port 7652)\n# - has TABLE tbl\n# - has SUBSCRIPTION sub1 subscribing to pub1\n#\n\n################\n# Publisher node\n################\nsession ps1\nconninfo \"host=localhost port=7651\"\nsetup\n{\n TRUNCATE TABLE tbl;\n}\nstep ps1_ins { INSERT INTO tbl VALUES (111); }\nstep ps1_sel { SELECT * FROM tbl ORDER BY id; }\nstep ps1_begin { BEGIN; }\nstep ps1_commit { COMMIT; }\nstep ps1_rollback { ROLLBACK; }\n\nsession ps2\nconninfo \"host=localhost port=7651\"\nstep ps2_ins { INSERT INTO tbl VALUES (222); }\nstep ps2_sel { SELECT * FROM tbl ORDER BY id; }\nstep ps2_begin { BEGIN; }\nstep ps2_commit { COMMIT; }\nstep ps2_rollback { ROLLBACK; }\n\n\n#################\n# Subscriber node\n#################\nsession sub\nconninfo \"host=localhost port=7652\"\nsetup\n{\n TRUNCATE TABLE tbl;\n}\nstep sub_sleep { SELECT pg_sleep(3); }\nstep sub_sel { SELECT * FROM tbl ORDER BY id; }\n\n\n#######\n# Tests\n#######\n\n# overlapping tx commits\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps2_commit ps1_commit\nsub_sleep sub_sel\npermutation ps1_begin ps1_ins ps2_begin ps2_ins ps1_commit ps2_commit\nsub_sleep sub_sel\n\n======\n\nBecause there is still some external setup needed to make the 2\nservers (with their own configurations and publication/subscription)\nthis kind of spec test can't be added to the 'isolation_schedule'\nfile. 
But even so, it seems to be working OK, so I think this\nisolationtester enhancement can be an efficient way to write and run\nsome difficult pub/sub regression tests without having to test\neverything entirely manually each time.\n\n\nThoughts?\n\n~~\n\nPSA\n\nv1-0001 - This is the enhancement to add 'conninfo' to the isolationtester.\nv1-0002 - An example of how 'conninfo' can be used (requires external setup)\ntest_init.sh - this is my external setup script, a pre-requisite for the\npub-sub.spec in v1-0002\n\n------\n[1] parallel apply thread -\nhttps://www.postgresql.org/message-id/flat/CAA4eK1%2BwyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw%40mail.gmail.com\n[2] isolation tests -\nhttps://github.com/postgres/postgres/blob/master/src/test/isolation/README\n[3] test_decoding spec tests -\nhttps://github.com/postgres/postgres/tree/master/contrib/test_decoding/specs\n\nKind Regards,\nPeter Smith\nFujitsu Australia",
"msg_date": "Mon, 19 Dec 2022 16:57:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "isolationtester - allow a session specific connection string"
},
{
"msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> My patch/idea makes a small change to the isolationtester spec\n> grammar. Now each session can optionally specify its own connection\n> string. When specified, this will override any connection string for\n> that session that would otherwise have been used. This is the only\n> change.\n\nSurely this cannot work, because isolationtester only runs one\nmonitoring session. How will it detect wait conditions for\nsessions connected to some other postmaster?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 01:04:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: isolationtester - allow a session specific connection string"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 5:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > My patch/idea makes a small change to the isolationtester spec\n> > grammar. Now each session can optionally specify its own connection\n> > string. When specified, this will override any connection string for\n> > that session that would otherwise have been used. This is the only\n> > change.\n>\n> Surely this cannot work, because isolationtester only runs one\n> monitoring session. How will it detect wait conditions for\n> sessions connected to some other postmaster?\n>\n\nYou are right - probably it can't work in a generic sense. But if the\n\"controller session\" (internal session 0) is also configured to use\nthe same conninfo as all my \"publisher\" sessions (the current patch\ncan't do this but it seems only a small change) then all of the\npublisher-side sessions will be monitored like they ought to be --\nwhich is all I really needed I think.\n\n------\nKind Regards,\nPeter Smith\nFujitsu Australia\n\n\n",
"msg_date": "Mon, 19 Dec 2022 17:35:14 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: isolationtester - allow a session specific connection string"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 5:35 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Dec 19, 2022 at 5:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Peter Smith <smithpb2250@gmail.com> writes:\n> > > My patch/idea makes a small change to the isolationtester spec\n> > > grammar. Now each session can optionally specify its own connection\n> > > string. When specified, this will override any connection string for\n> > > that session that would otherwise have been used. This is the only\n> > > change.\n> >\n> > Surely this cannot work, because isolationtester only runs one\n> > monitoring session. How will it detect wait conditions for\n> > sessions connected to some other postmaster?\n> >\n>\n> You are right - probably it can't work in a generic sense. But if the\n> \"controller session\" (internal session 0) is also configured to use\n> the same conninfo as all my \"publisher\" sessions (the current patch\n> can't do this but it seems only a small change) then all of the\n> publisher-side sessions will be monitored like they ought to be --\n> which is all I really needed I think.\n>\n\nPSA v2 of this patch. Now the conninfo can be specified at the *.spec\nfile global scope. This will set the connection string for the\n\"controller\", and this will be used by every other session unless they\ntoo specify a conninfo. For example,\n\n======\n# Set the isolationtester controller's conninfo. User sessions will also use\n# this unless they specify otherwise.\nconninfo \"host=localhost port=7651\"\n\n################\n# Publisher node\n################\nsession ps1\nsetup\n{\n TRUNCATE TABLE tbl;\n}\nstep ps1_ins { INSERT INTO tbl VALUES (111); }\nstep ps1_sel { SELECT * FROM tbl ORDER BY id; }\nstep ps1_begin { BEGIN; }\nstep ps1_commit { COMMIT; }\nstep ps1_rollback { ROLLBACK; }\n\nsession ps2\nstep ps2_ins { INSERT INTO tbl VALUES (222); }\nstep ps2_sel { SELECT * FROM tbl ORDER BY id; }\nstep ps2_begin { BEGIN; }\nstep ps2_commit { COMMIT; }\nstep ps2_rollback { ROLLBACK; }\n\n#################\n# Subscriber node\n#################\nsession sub\nconninfo \"host=localhost port=7652\"\nsetup\n{\n TRUNCATE TABLE tbl;\n}\nstep sub_sleep { SELECT pg_sleep(3); }\nstep sub_sel { SELECT * FROM tbl ORDER BY id; }\n\n...\n\n======\n\nThe above spec file gives:\n\n======\nParsed test spec with 3 sessions\ncontrol connection conninfo 'host=localhost port=7651'\nps1 conninfo 'host=localhost port=7651'\nps2 conninfo 'host=localhost port=7651'\nsub conninfo 'host=localhost port=7652'\n WARNING: session sub is not using same connection as the controller\n...\n======\n\n\nIn this way, IIUC the isolationtester's session locking mechanism can\nwork OK at least for all of my \"publishing\" sessions.\n\n------\n\nKind Regards,\nPeter Smith\nFujitsu Australia",
"msg_date": "Mon, 19 Dec 2022 18:56:42 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: isolationtester - allow a session specific connection string"
}
] |
[
{
"msg_contents": "I found a couple of adjacent weird things:\n\nThere are a bunch of places in the json code that use \nappendBinaryStringInfo() where appendStringInfoString() could be used, e.g.,\n\n appendBinaryStringInfo(buf, \".size()\", 7);\n\nIs there a reason for this? Are we that stretched for performance? I \nfind this kind of code very fragile.\n\nAlso, the argument type of appendBinaryStringInfo() is char *. There is \nsome code that uses this function to assemble some kind of packed binary \nlayout, which requires a bunch of casts because of this. I think \nfunctions taking binary data plus length should take void * instead, \nlike memcpy() for example.\n\nAttached are two patches that illustrate these issues and show proposed \nchanges.",
"msg_date": "Mon, 19 Dec 2022 07:13:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "appendBinaryStringInfo stuff"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-19 07:13:40 +0100, Peter Eisentraut wrote:\n> I found a couple of adjacent weird things:\n> \n> There are a bunch of places in the json code that use\n> appendBinaryStringInfo() where appendStringInfoString() could be used, e.g.,\n> \n> appendBinaryStringInfo(buf, \".size()\", 7);\n> \n> Is there a reason for this? Are we that stretched for performance?\n\nstrlen() isn't that cheap, so it doesn't generally seem unreasonable. I\ndon't think we should add the strlen overhead in places that can\nconceivably be a bottleneck - and some of the jsonb code clearly can be\nthat.\n\n\n> I find this kind of code very fragile.\n\nBut this is obviously an issue.\n\nPerhaps we should make appendStringInfoString() a static inline function\n- most compilers can compute strlen() of a constant string at compile\ntime.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 19 Dec 2022 00:12:41 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Mon, 19 Dec 2022 at 21:12, Andres Freund <andres@anarazel.de> wrote:\n> Perhaps we should make appendStringInfoString() a static inline function\n> - most compilers can compute strlen() of a constant string at compile\n> time.\n\nI had wondered about that, but last time I looked into it there was a\nsmall increase in the size of the binary from doing it. Perhaps it\ndoes not matter, but it's something to consider.\n\nRe-thinking, I wonder if we could use the same macro trick used in\nereport_domain(). Something like:\n\n#ifdef HAVE__BUILTIN_CONSTANT_P\n#define appendStringInfoString(str, s) \\\n __builtin_constant_p(s) ? \\\n appendBinaryStringInfo(str, s, sizeof(s) - 1) : \\\n appendStringInfoStringInternal(str, s)\n#else\n#define appendStringInfoString(str, s) \\\n appendStringInfoStringInternal(str, s)\n#endif\n\nand rename the existing function to appendStringInfoStringInternal.\n\nBecause __builtin_constant_p is a known compile-time constant, it\nshould be folded to just call the corresponding function during\ncompilation.\n\nJust looking at the binary sizes for postgres. I see:\n\nunpatched = 9972128 bytes\ninline function = 9990064 bytes\nmacro trick = 9984968 bytes\n\nI'm currently not sure why the macro trick increases the binary at\nall. I understand why the inline function does.\n\nDavid\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:29:10 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> I'm currently not sure why the macro trick increases the binary at\n> all. I understand why the inline function does.\n\nIn the places where it changes the code at all, you're replacing\n\n\tappendStringInfoString(buf, s);\n\nwith\n\n\tappendBinaryStringInfo(buf, s, n);\n\nEven if n is a constant, the latter surely requires more instructions\nper call site.\n\nWhether this is a win seems to depend on how many of these are\nperformance-critical.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 10:12:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 19.12.22 09:12, Andres Freund wrote:\n>> There are a bunch of places in the json code that use\n>> appendBinaryStringInfo() where appendStringInfoString() could be used, e.g.,\n>>\n>> appendBinaryStringInfo(buf, \".size()\", 7);\n>>\n>> Is there a reason for this? Are we that stretched for performance?\n> strlen() isn't that cheap, so it doesn't generally seem unreasonable. I\n> don't think we should add the strlen overhead in places that can\n> conceivably be a bottleneck - and some of the jsonb code clearly can be\n> that.\n\nAFAICT, the code in question is for the text output of the jsonpath \ntype, which is used ... for barely anything?\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:23:09 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Tue, 20 Dec 2022 at 09:23, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> AFAICT, the code in question is for the text output of the jsonpath\n> type, which is used ... for barely anything?\n\nI think the performance of a type's output function is quite critical.\nI've seen huge performance gains in COPY TO performance from\noptimising output functions in the past (see dad75eb4a and aa2387e2f).\nIt would be good to see some measurements to find out how much adding\nthe strlen calls back in would cost us. If we're unable to measure the\nchange, then maybe the cleanup patch would be nice. If it's going to\nslow COPY TO down 10-20%, then we need to leave this or consider the\ninline function mentioned by Andres or the macro trick mentioned by\nme.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Dec 2022 11:26:26 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> On Tue, 20 Dec 2022 at 09:23, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> AFAICT, the code in question is for the text output of the jsonpath\n>> type, which is used ... for barely anything?\n\n> I think the performance of a type's output function is quite critical.\n\nI think Peter is entirely right to question whether *this* type's\noutput function is performance-critical. Who's got large tables with\njsonpath columns? It seems to me the type would mostly only exist\nas constants within queries.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 17:42:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Tue, 20 Dec 2022 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think Peter is entirely right to question whether *this* type's\n> output function is performance-critical. Who's got large tables with\n> jsonpath columns? It seems to me the type would mostly only exist\n> as constants within queries.\n\nThe patch touches code in the path of jsonb's output function too. I\ndon't think you could claim the same for that.\n\nDavid\n\n\n",
"msg_date": "Tue, 20 Dec 2022 11:48:37 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "\nOn 2022-12-19 Mo 17:48, David Rowley wrote:\n> On Tue, 20 Dec 2022 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think Peter is entirely right to question whether *this* type's\n>> output function is performance-critical. Who's got large tables with\n>> jsonpath columns? It seems to me the type would mostly only exist\n>> as constants within queries.\n> The patch touches code in the path of jsonb's output function too. I\n> don't think you could claim the same for that.\n>\n\nI agree that some of the uses in the jsonpath code could reasonably just\nbe converted to use appendStringInfoString()\n\nThere are 5 uses in the jsonb code where the length param is a compile\ntime constant:\n\nandrew@ub22:adt $ grep appendBinary.*[0-9] jsonb*\njsonb.c: appendBinaryStringInfo(out, \"null\", 4);\njsonb.c: appendBinaryStringInfo(out, \"true\", 4);\njsonb.c: appendBinaryStringInfo(out, \"false\", 5);\njsonb.c: appendBinaryStringInfo(out, \": \", 2);\njsonb.c: appendBinaryStringInfo(out, \" \", 4);\n\nNone of these really bother me much, TBH. In fact the last one is\narguably nicer because it tells you without counting how many spaces\nthere are.\n\nChanging the type of the second argument to appendBinaryStringInfo to\nvoid* seems reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:47:29 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-19 21:29:10 +1300, David Rowley wrote:\n> On Mon, 19 Dec 2022 at 21:12, Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps we should make appendStringInfoString() a static inline function\n> > - most compilers can compute strlen() of a constant string at compile\n> > time.\n> \n> I had wondered about that, but last time I looked into it there was a\n> small increase in the size of the binary from doing it. Perhaps it\n> does not matter, but it's something to consider.\n\nI'd not be too worried about that in this case.\n\n\n> Re-thinking, I wonder if we could use the same macro trick used in\n> ereport_domain(). Something like:\n> \n> #ifdef HAVE__BUILTIN_CONSTANT_P\n> #define appendStringInfoString(str, s) \\\n> __builtin_constant_p(s) ? \\\n> appendBinaryStringInfo(str, s, sizeof(s) - 1) : \\\n> appendStringInfoStringInternal(str, s)\n> #else\n> #define appendStringInfoString(str, s) \\\n> appendStringInfoStringInternal(str, s)\n> #endif\n> \n> and rename the existing function to appendStringInfoStringInternal.\n> \n> Because __builtin_constant_p is a known compile-time constant, it\n> should be folded to just call the corresponding function during\n> compilation.\n\nSeveral compilers can optimize away repeated strlen() calls, even if the\nstring isn't a compile-time constant. So I'm not really convinced that\ntying inlining-strlen to __builtin_constant_p() is a good ida.\n\n> Just looking at the binary sizes for postgres. I see:\n> \n> unpatched = 9972128 bytes\n> inline function = 9990064 bytes\n\nThat seems acceptable to me.\n\n\n> macro trick = 9984968 bytes\n>\n> I'm currently not sure why the macro trick increases the binary at\n> all. I understand why the inline function does.\n\nI think Tom's explanation is on point.\n\n\nI've in the past looked at stringinfo.c being the bottleneck in a bunch\nof places and concluded that we really need to remove the function call\nin the happy path entirely - we should have an enlargeStringInfo() that\nwe can call externally iff needed and then implement the rest of\nappendBinaryStringInfo() etc in an inline function. That allows the\ncompiler to e.g. optimize out the repeated maintenance of the \\0 write\netc.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:45:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 04:47, Andrew Dunstan <andrew@dunslane.net> wrote:\n> jsonb.c: appendBinaryStringInfo(out, \" \", 4);\n>\n> None of these really bother me much, TBH. In fact the last one is\n> arguably nicer because it tells you without counting how many spaces\n> there are.\n\nappendStringInfoSpaces() might be even better.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Dec 2022 10:05:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 10:47 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> There are 5 uses in the jsonb code where the length param is a compile\n> time constant:\n>\n> andrew@ub22:adt $ grep appendBinary.*[0-9] jsonb*\n> jsonb.c: appendBinaryStringInfo(out, \"null\", 4);\n> jsonb.c: appendBinaryStringInfo(out, \"true\", 4);\n> jsonb.c: appendBinaryStringInfo(out, \"false\", 5);\n> jsonb.c: appendBinaryStringInfo(out, \": \", 2);\n> jsonb.c: appendBinaryStringInfo(out, \" \", 4);\n>\n> None of these really bother me much, TBH. In fact the last one is\n> arguably nicer because it tells you without counting how many spaces\n> there are.\n\n+1. There are certainly cases where this kind of style can create\nconfusion, but I have a hard time putting any of these instances into\nthat category. It's obvious at a glance that null is 4 bytes, false is\n5, etc.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 20 Dec 2022 16:43:58 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Tue, 20 Dec 2022 at 11:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> It would be good to see some measurements to find out how much adding\n> the strlen calls back in would cost us.\n\nI tried this out. I'm not pretending I found the best test which\nhighlights how much the performance will change in a real-world case.\nI just wanted to try to get an indication of if changing jsonb's\noutput function to make more use of appendStringInfoString instead of\nappendBinaryStringInfo is likely to affect performance.\n\nAlso, in test 2, I picked a use case that makes quite a bit of use of\nappendStringInfoString already and checked if inlining that function\nwould help improve things. I imagine test 2 really is not\nbottlenecked on appendStringInfoString enough to get a true idea of\nhow much inlining appendStringInfoString could really help (spoiler,\nit helps quite a bit)\n\nTest 1: See if using appendStringInfoString instead of\nappendBinaryStringInfo hinders jsonb output performance.\n\nsetup:\ncreate table jb (j jsonb);\ninsert into jb select row_to_json(pg_class) from pg_class;\nvacuum freeze analyze jb;\n\nbench.sql:\nselect sum(length(j::text)) from jb;\n\nmaster (@3f28bd73):\n$ pgbench -n -T 60 -f bench.sql -M prepared postgres | grep latency\nlatency average = 1.896 ms\nlatency average = 1.885 ms\nlatency average = 1.899 ms\n\n 22.57% postgres [.] escape_json\n 21.83% postgres [.] pg_utf_mblen\n 9.23% postgres [.] JsonbIteratorNext.part.0\n 7.12% postgres [.] AllocSetAlloc\n 4.07% postgres [.] pg_mbstrlen_with_len\n 3.71% postgres [.] JsonbToCStringWorker\n 3.70% postgres [.] fillJsonbValue\n 3.17% postgres [.] appendBinaryStringInfo\n 2.95% postgres [.] enlargeStringInfo\n 2.09% postgres [.] jsonb_put_escaped_value\n 1.89% postgres [.] palloc\n\nmaster + 0001-Use-appendStringInfoString-instead-of-appendBinarySt.patch\n\n$ pgbench -n -T 60 -f bench.sql -M prepared postgres | grep latency\nlatency average = 1.912 ms\nlatency average = 1.912 ms\nlatency average = 1.912 ms (~1% slower)\n\n 22.38% postgres [.] escape_json\n 21.98% postgres [.] pg_utf_mblen\n 9.07% postgres [.] JsonbIteratorNext.part.0\n 5.93% postgres [.] AllocSetAlloc\n 4.11% postgres [.] pg_mbstrlen_with_len\n 3.87% postgres [.] fillJsonbValue\n 3.66% postgres [.] JsonbToCStringWorker\n 2.28% postgres [.] enlargeStringInfo\n 2.15% postgres [.] appendStringInfoString\n 1.98% postgres [.] jsonb_put_escaped_value\n 1.92% postgres [.] palloc\n 1.58% postgres [.] appendBinaryStringInfo\n 1.42% postgres [.] pnstrdup\n\nTest 2: Test if inlining appendStringInfoString helps\n\nbench.sql:\nselect sum(length(pg_get_ruledef(oid))) from pg_rewrite;\n\nmaster (@3f28bd73):\n$ pgbench -n -T 60 -f bench.sql postgres | grep latency\nlatency average = 16.355 ms\nlatency average = 16.290 ms\nlatency average = 16.303 ms\n\nstatic inline appendStringInfoString\n$ pgbench -n -T 60 -f bench.sql postgres | grep latency\nlatency average = 15.690 ms\nlatency average = 15.575 ms\nlatency average = 15.604 ms (~4.4% faster)\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Dec 2022 20:48:39 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> 22.57% postgres [.] escape_json\n\nHmm ... shouldn't we do something like\n\n- appendStringInfoString(buf, \"\\\\b\");\n+ appendStringInfoCharMacro(buf, '\\\\');\n+ appendStringInfoCharMacro(buf, 'b');\n\nand so on in that function? I'm not convinced that this one\nhotspot justifies inlining appendStringInfoString everywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Dec 2022 02:56:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Thu, 22 Dec 2022 at 20:56, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > 22.57% postgres [.] escape_json\n>\n> Hmm ... shouldn't we do something like\n>\n> - appendStringInfoString(buf, \"\\\\b\");\n> + appendStringInfoCharMacro(buf, '\\\\');\n> + appendStringInfoCharMacro(buf, 'b');\n>\n> and so on in that function? I'm not convinced that this one\n> hotspot justifies inlining appendStringInfoString everywhere.\n\nIt improves things slightly:\n\nTest 1 (from earlier)\n\nmaster + escape_json using appendStringInfoCharMacro\n$ pgbench -n -T 60 -f bench.sql -M prepared postgres | grep latency\nlatency average = 1.807 ms\nlatency average = 1.800 ms\nlatency average = 1.812 ms (~4.8% faster than master)\n\n 23.05% postgres [.] pg_utf_mblen\n 22.55% postgres [.] escape_json\n 8.58% postgres [.] JsonbIteratorNext.part.0\n 6.80% postgres [.] AllocSetAlloc\n 4.23% postgres [.] pg_mbstrlen_with_len\n 3.88% postgres [.] JsonbToCStringWorker\n 3.79% postgres [.] fillJsonbValue\n 3.18% postgres [.] appendBinaryStringInfo\n 2.43% postgres [.] enlargeStringInfo\n 2.02% postgres [.] palloc\n 1.61% postgres [.] jsonb_put_escaped_value\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Dec 2022 22:19:16 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 4:19 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> Test 1 (from earlier)\n>\n> master + escape_json using appendStringInfoCharMacro\n> $ pgbench -n -T 60 -f bench.sql -M prepared postgres | grep latency\n> latency average = 1.807 ms\n> latency average = 1.800 ms\n> latency average = 1.812 ms (~4.8% faster than master)\n\n> 23.05% postgres [.] pg_utf_mblen\n\nI get about 20% improvement by adding an ascii fast path in\npg_mbstrlen_with_len, which I think would work with all encodings we\nsupport:\n\n@@ -1064,7 +1064,12 @@ pg_mbstrlen_with_len(const char *mbstr, int limit)\n\n while (limit > 0 && *mbstr)\n {\n- int l = pg_mblen(mbstr);\n+ int l;\n+\n+ if (!IS_HIGHBIT_SET(*mbstr))\n+ l = 1;\n+ else\n+ l = pg_mblen(mbstr);\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 22 Dec 2022 19:20:37 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 19.12.22 23:48, David Rowley wrote:\n> On Tue, 20 Dec 2022 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think Peter is entirely right to question whether *this* type's\n>> output function is performance-critical. Who's got large tables with\n>> jsonpath columns? It seems to me the type would mostly only exist\n>> as constants within queries.\n> \n> The patch touches code in the path of jsonb's output function too. I\n> don't think you could claim the same for that.\n\nOk, let's leave the jsonb output alone. The jsonb output code also \nwon't change a lot, but there is a bunch of stuff for jsonpath on the \nhorizon, so having some more robust coding style to imitate there seems \nuseful. Here is another patch set with the jsonb changes omitted.",
"msg_date": "Fri, 23 Dec 2022 10:04:33 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Fri, 23 Dec 2022 at 22:04, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 19.12.22 23:48, David Rowley wrote:\n> > On Tue, 20 Dec 2022 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think Peter is entirely right to question whether *this* type's\n> >> output function is performance-critical. Who's got large tables with\n> >> jsonpath columns? It seems to me the type would mostly only exist\n> >> as constants within queries.\n> >\n> > The patch touches code in the path of jsonb's output function too. I\n> > don't think you could claim the same for that.\n>\n> Ok, let's leave the jsonb output alone. The jsonb output code also\n> won't change a lot, but there is a bunch of stuff for jsonpath on the\n> horizon, so having some more robust coding style to imitate there seems\n> useful. Here is another patch set with the jsonb changes omitted.\n\nMaybe if there's concern that inlining appendStringInfoString is going\nto bloat the binary too much, then how about we just invent an inlined\nversion of it using some other name that we can use when performance\nmatters? We could then safely replace the offending\nappendBinaryStringInfos from both places without any concern for\nregressing performance.\n\nFWIW, I just did a few compilation runs of our supported versions to\nsee how much postgres binary grew release to release:\n\nbranch postgres binary size growth bytes\nREL_10_STABLE 8230232 0\nREL_11_STABLE 8586024 355792\nREL_12_STABLE 8831664 245640\nREL_13_STABLE 8990824 159160\nREL_14_STABLE 9484848 494024\nREL_15_STABLE 9744680 259832\nmaster 9977896 233216\ninline_asis 10004032 26136\n\n(inlined_asis = inlined appendStringInfoString)\n\nOn the other hand, if we went with inlining the existing function,\nthen it looks to be about 10% of the growth we saw between v14 and\nv15. That seems quite large.\n\nDavid\n\n\n",
"msg_date": "Sat, 24 Dec 2022 02:01:43 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 23.12.22 10:04, Peter Eisentraut wrote:\n> On 19.12.22 23:48, David Rowley wrote:\n>> On Tue, 20 Dec 2022 at 11:42, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I think Peter is entirely right to question whether *this* type's\n>>> output function is performance-critical. Who's got large tables with\n>>> jsonpath columns? It seems to me the type would mostly only exist\n>>> as constants within queries.\n>>\n>> The patch touches code in the path of jsonb's output function too. I\n>> don't think you could claim the same for that.\n> \n> Ok, let's leave the jsonb output alone. The jsonb output code also \n> won't change a lot, but there is a bunch of stuff for jsonpath on the \n> horizon, so having some more robust coding style to imitate there seems \n> useful. Here is another patch set with the jsonb changes omitted.\n\nI have committed these.\n\n\n",
"msg_date": "Fri, 30 Dec 2022 11:19:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 23.12.22 14:01, David Rowley wrote:\n> Maybe if there's concern that inlining appendStringInfoString is going\n> to bloat the binary too much, then how about we just invent an inlined\n> version of it using some other name that we can use when performance\n> matters? We could then safely replace the offending\n> appendBinaryStringInfos from both places without any concern for\n> regressing performance.\n\nThe jsonpath output routines don't appear to be written with deep \nconcern about performance now, so I'm not sure this is the place to \nstart tweaking. For the jsonb parts, there are only a handful of \nstrings this affects (\"true\", \"false\", \"null\"), so using \nappendBinaryStringInfo() there a few times doesn't seem so bad. So I'm \nnot too worried about this altogether.\n\n\n\n",
"msg_date": "Fri, 30 Dec 2022 11:25:23 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 19.12.22 07:13, Peter Eisentraut wrote:\n> Also, the argument type of appendBinaryStringInfo() is char *. There is \n> some code that uses this function to assemble some kind of packed binary \n> layout, which requires a bunch of casts because of this. I think \n> functions taking binary data plus length should take void * instead, \n> like memcpy() for example.\n\nI found a little follow-up for this one: Make the same change to \npq_sendbytes(), which is a thin wrapper around appendBinaryStringInfo(). \n This would allow getting rid of further casts at call sites.",
"msg_date": "Fri, 10 Feb 2023 13:15:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 7:16 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 19.12.22 07:13, Peter Eisentraut wrote:\n> > Also, the argument type of appendBinaryStringInfo() is char *. There is\n> > some code that uses this function to assemble some kind of packed binary\n> > layout, which requires a bunch of casts because of this. I think\n> > functions taking binary data plus length should take void * instead,\n> > like memcpy() for example.\n>\n> I found a little follow-up for this one: Make the same change to\n> pq_sendbytes(), which is a thin wrapper around appendBinaryStringInfo().\n> This would allow getting rid of further casts at call sites.\n>\n\n+1\n\nHas all the benefits that 54a177a948b0a773c25c6737d1cc3cc49222a526 had.\n\nPasses make check-world.",
"msg_date": "Fri, 10 Feb 2023 14:08:35 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: appendBinaryStringInfo stuff"
},
{
"msg_contents": "On 10.02.23 20:08, Corey Huinker wrote:\n> \n> \n> On Fri, Feb 10, 2023 at 7:16 AM Peter Eisentraut \n> <peter.eisentraut@enterprisedb.com \n> <mailto:peter.eisentraut@enterprisedb.com>> wrote:\n> \n> On 19.12.22 07:13, Peter Eisentraut wrote:\n> > Also, the argument type of appendBinaryStringInfo() is char *. \n> There is\n> > some code that uses this function to assemble some kind of packed\n> binary\n> > layout, which requires a bunch of casts because of this. I think\n> > functions taking binary data plus length should take void * instead,\n> > like memcpy() for example.\n> \n> I found a little follow-up for this one: Make the same change to\n> pq_sendbytes(), which is a thin wrapper around\n> appendBinaryStringInfo().\n> This would allow getting rid of further casts at call sites.\n> \n> \n> +1\n> \n> Has all the benefits that 54a177a948b0a773c25c6737d1cc3cc49222a526 had.\n> \n> Passes make check-world.\n\ncommitted, thanks\n\n\n",
"msg_date": "Tue, 14 Feb 2023 13:51:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: appendBinaryStringInfo stuff"
}
] |
[
{
"msg_contents": "I notice that none of the meson files contain copyright notices. Shall I\nadd them?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 08:55:48 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "meson files copyright"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I notice that none of the meson files contain copyright notices. Shall I\n> add them?\n\n+1. Their comment density is pretty awful too --- maybe I'm just\nnot used to meson, but they seem just about completely undocumented.\nAnd there's certainly been no effort to transfer the accumulated wisdom\nof the makefile comments (where it's still relevant, of course).\n\nDon't see any simple fix for that, but copyright notices would\nbe a good idea, and so would file identifiers according to our\nlongstanding practice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 10:20:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On 12/19/22 16:20, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I notice that none of the meson files contain copyright notices. Shall I\n>> add them?\n> \n> +1. Their comment density is pretty awful too --- maybe I'm just\n> not used to meson, but they seem just about completely undocumented.\n> And there's certainly been no effort to transfer the accumulated wisdom\n> of the makefile comments (where it's still relevant, of course).\n> \n> Don't see any simple fix for that, but copyright notices would\n> be a good idea, and so would file identifiers according to our\n> longstanding practice.\n\nPerhaps a bit off-topic, but what is the point of the file identifiers?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 18:18:41 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> Perhaps a bit off-topic, but what is the point of the file identifiers?\n\nIMO, it helps to tell things apart when you've got a bunch of editor\nwindows open on some mighty samey-looking meson.build files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:03:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n> > Perhaps a bit off-topic, but what is the point of the file identifiers?\n>\n> IMO, it helps to tell things apart when you've got a bunch of editor\n> windows open on some mighty samey-looking meson.build files.\n\nOn the other hand, maintaining those identification lines in all of\nour files has a pretty high distributed cost. I never use them to\nfigure out what file I'm editing because my editor can tell me that.\nBut I do have to keep fixing those lines as I create new files. It's\nnot the most annoying thing ever, but I wouldn't mind a bit if I\ndidn't have to do it any more.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Dec 2022 13:33:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On 19.12.22 19:33, Robert Haas wrote:\n> On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Vik Fearing <vik@postgresfriends.org> writes:\n>>> Perhaps a bit off-topic, but what is the point of the file identifiers?\n>>\n>> IMO, it helps to tell things apart when you've got a bunch of editor\n>> windows open on some mighty samey-looking meson.build files.\n> \n> On the other hand, maintaining those identification lines in all of\n> our files has a pretty high distributed cost. I never use them to\n> figure out what file I'm editing because my editor can tell me that.\n> But I do have to keep fixing those lines as I create new files. It's\n> not the most annoying thing ever, but I wouldn't mind a bit if I\n> didn't have to do it any more.\n\nI agree it's not very useful and a bit annoying.\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:09:25 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "\nOn 2022-12-19 Mo 15:09, Peter Eisentraut wrote:\n> On 19.12.22 19:33, Robert Haas wrote:\n>> On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Vik Fearing <vik@postgresfriends.org> writes:\n>>>> Perhaps a bit off-topic, but what is the point of the file\n>>>> identifiers?\n>>>\n>>> IMO, it helps to tell things apart when you've got a bunch of editor\n>>> windows open on some mighty samey-looking meson.build files.\n>>\n>> On the other hand, maintaining those identification lines in all of\n>> our files has a pretty high distributed cost. I never use them to\n>> figure out what file I'm editing because my editor can tell me that.\n>> But I do have to keep fixing those lines as I create new files. It's\n>> not the most annoying thing ever, but I wouldn't mind a bit if I\n>> didn't have to do it any more.\n>\n> I agree it's not very useful and a bit annoying.\n\n\nNot sure I agree the cost is high, but yes it's not quite zero either. I\ncan see a bit more value when it's used with files we have a lot of like\nmeson.build.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 17:10:17 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 09:09:25PM +0100, Peter Eisentraut wrote:\n> On 19.12.22 19:33, Robert Haas wrote:\n> >On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>Vik Fearing <vik@postgresfriends.org> writes:\n> >>>Perhaps a bit off-topic, but what is the point of the file identifiers?\n> >>\n> >>IMO, it helps to tell things apart when you've got a bunch of editor\n> >>windows open on some mighty samey-looking meson.build files.\n> >\n> >On the other hand, maintaining those identification lines in all of\n> >our files has a pretty high distributed cost. I never use them to\n> >figure out what file I'm editing because my editor can tell me that.\n> >But I do have to keep fixing those lines as I create new files. It's\n> >not the most annoying thing ever, but I wouldn't mind a bit if I\n> >didn't have to do it any more.\n> \n> I agree it's not very useful and a bit annoying.\n\nAgreed. To me, it's just one more thing to get wrong.\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:26:50 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "\nOn 2022-12-20 Tu 00:26, Noah Misch wrote:\n> On Mon, Dec 19, 2022 at 09:09:25PM +0100, Peter Eisentraut wrote:\n>> On 19.12.22 19:33, Robert Haas wrote:\n>>> On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>>> Vik Fearing <vik@postgresfriends.org> writes:\n>>>>> Perhaps a bit off-topic, but what is the point of the file identifiers?\n>>>> IMO, it helps to tell things apart when you've got a bunch of editor\n>>>> windows open on some mighty samey-looking meson.build files.\n>>> On the other hand, maintaining those identification lines in all of\n>>> our files has a pretty high distributed cost. I never use them to\n>>> figure out what file I'm editing because my editor can tell me that.\n>>> But I do have to keep fixing those lines as I create new files. It's\n>>> not the most annoying thing ever, but I wouldn't mind a bit if I\n>>> didn't have to do it any more.\n>> I agree it's not very useful and a bit annoying.\n> Agreed. To me, it's just one more thing to get wrong.\n\n\nOK, I think there are enough objections that we can put that aside for\nnow, I will just go and add the copyright notices.\n\n\ncheers\n\n\nandew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 07:14:18 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On 2022-12-19 13:33:46 -0500, Robert Haas wrote:\n> On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Vik Fearing <vik@postgresfriends.org> writes:\n> > > Perhaps a bit off-topic, but what is the point of the file identifiers?\n> >\n> > IMO, it helps to tell things apart when you've got a bunch of editor\n> > windows open on some mighty samey-looking meson.build files.\n> \n> On the other hand, maintaining those identification lines in all of\n> our files has a pretty high distributed cost. I never use them to\n> figure out what file I'm editing because my editor can tell me that.\n> But I do have to keep fixing those lines as I create new files. It's\n> not the most annoying thing ever, but I wouldn't mind a bit if I\n> didn't have to do it any more.\n\n+1\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:25:24 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-19 10:20:45 -0500, Tom Lane wrote:\n> Their comment density is pretty awful too --- maybe I'm just\n> not used to meson, but they seem just about completely undocumented.\n> And there's certainly been no effort to transfer the accumulated wisdom\n> of the makefile comments (where it's still relevant, of course).\n\nI did try to retain comments that seemed useful. E.g.\n\ntoplevel meson.build:\n\n # The separate ldap_r library only exists in OpenLDAP < 2.5, and if we\n # have 2.5 or later, we shouldn't even probe for ldap_r (we might find a\n # library from a separate OpenLDAP installation). The most reliable\n # way to check that is to check for a function introduced in 2.5.\n...\n # We are after Embed's ldopts, but without the subset mentioned in\n # Config's ccdlflags and ldflags. (Those are the choices of those who\n # built the Perl installation, which are not necessarily appropriate\n # for building PostgreSQL.)\n...\n # Functions introduced in OpenSSL 1.1.0. We used to check for\n # OPENSSL_VERSION_NUMBER, but that didn't work with 1.1.0, because LibreSSL\n # defines OPENSSL_VERSION_NUMBER to claim version 2.0.0, even though it\n # doesn't have these OpenSSL 1.1.0 functions. So check for individual\n # functions.\n...\n# Check if the C compiler understands __builtin_$op_overflow(),\n# and define HAVE__BUILTIN_OP_OVERFLOW if so.\n#\n# Check for the most complicated case, 64 bit multiplication, as a\n# proxy for all of the operations. To detect the case where the compiler\n# knows the function but library support is missing, we must link not just\n# compile, and store the results in global variables so the compiler doesn't\n# optimize away the call.\n...\n\nsrc/backend/meson.build:\n...\n# As of 1/2010:\n# The probes.o file is necessary for dtrace support on Solaris, and on recent\n# versions of systemtap. (Older systemtap releases just produce an empty\n# file, but that's okay.) 
However, macOS's dtrace doesn't use it and doesn't\n# even recognize the -G option. So, build probes.o except on macOS.\n# This might need adjustment as other platforms add dtrace support.\n\n\nI'm sure there are a lot of comments that could also have been useful\nthat I missed - but there's also a lot that just didn't seem\nmeaningful. E.g. stuff like\n\n# The following targets are specified in make commands that appear in\n# the make files in our subdirectories. Note that it's important we\n# match the dependencies shown in the subdirectory makefiles!\n# Also, in cases where a subdirectory makefile generates two files in\n# what's really one step, such as bison producing both gram.h and gram.c,\n# we must request making the one that is shown as the secondary (dependent)\n# output, else the timestamp on it might be wrong. By project convention,\n# the .h file is the dependent one for bison output, so we need only request\n# that; but in other cases, request both for safety.\n\nwhich just doesn't apply to meson.\n\n- Andres\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:35:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
},
{
"msg_contents": "On Tue, Dec 20, 2022 at 6:27 PM Noah Misch <noah@leadboat.com> wrote:\n> On Mon, Dec 19, 2022 at 09:09:25PM +0100, Peter Eisentraut wrote:\n> > On 19.12.22 19:33, Robert Haas wrote:\n> > >On Mon, Dec 19, 2022 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >>Vik Fearing <vik@postgresfriends.org> writes:\n> > >>>Perhaps a bit off-topic, but what is the point of the file identifiers?\n> > >>\n> > >>IMO, it helps to tell things apart when you've got a bunch of editor\n> > >>windows open on some mighty samey-looking meson.build files.\n> > >\n> > >On the other hand, maintaining those identification lines in all of\n> > >our files has a pretty high distributed cost. I never use them to\n> > >figure out what file I'm editing because my editor can tell me that.\n> > >But I do have to keep fixing those lines as I create new files. It's\n> > >not the most annoying thing ever, but I wouldn't mind a bit if I\n> > >didn't have to do it any more.\n> >\n> > I agree it's not very useful and a bit annoying.\n>\n> Agreed. To me, it's just one more thing to get wrong.\n\n+1\n\nWe're just cargo-culting the old CVS $Id$ tags.\n\n\n",
"msg_date": "Wed, 21 Dec 2022 12:00:02 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson files copyright"
}
] |
[
{
"msg_contents": "We don't generate SSL certificates for running the SSL tests, but\ninstead use pregenerated certificates that are part of our source code.\nThis patch applies the same policy to the LDAP tests, and in fact simply\nreuses certificates from the SSL test suite by copying them. It won't\nsave much but it should save a handful of cycles at run time.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 19 Dec 2022 09:52:45 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Avoid generating SSL certs for LDAP tests"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> We don't generate SSL certificates for running the SSL tests, but\n> instead use pregenerated certificates that are part of our source code.\n> This patch applies the same policy to the LDAP tests, and in fact simply\n> reuses certificates from the SSL test suite by copying them. It won't\n> save much but it should save a handful of cycles at run time.\n\n+1, but should there be a comment somewhere under test/ssl pointing\nout this external use of the certs?\n\nAlso, I bet this needs some adjustment for VPATH builds.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 19 Dec 2022 10:25:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoid generating SSL certs for LDAP tests"
},
{
"msg_contents": "\nOn 2022-12-19 Mo 10:25, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> We don't generate SSL certificates for running the SSL tests, but\n>> instead use pregenerated certificates that are part of our source code.\n>> This patch applies the same policy to the LDAP tests, and in fact simply\n>> reuses certificates from the SSL test suite by copying them. It won't\n>> save much but it should save a handful of cycles at run time.\n> +1, but should there be a comment somewhere under test/ssl pointing\n> out this external use of the certs?\n\n\nOK, I'll find a place to mention that.\n\n\n> Also, I bet this needs some adjustment for VPATH builds.\t\t\t\n\n\nI have tested it with both a make style vpath build and with meson - it\nworks fine.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 11:04:53 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Avoid generating SSL certs for LDAP tests"
},
{
"msg_contents": "\nOn 2022-12-19 Mo 11:04, Andrew Dunstan wrote:\n> On 2022-12-19 Mo 10:25, Tom Lane wrote:\n>> Andrew Dunstan <andrew@dunslane.net> writes:\n>>> We don't generate SSL certificates for running the SSL tests, but\n>>> instead use pregenerated certificates that are part of our source code.\n>>> This patch applies the same policy to the LDAP tests, and in fact simply\n>>> reuses certificates from the SSL test suite by copying them. It won't\n>>> save much but it should save a handful of cycles at run time.\n>> +1, but should there be a comment somewhere under test/ssl pointing\n>> out this external use of the certs?\n>\n> OK, I'll find a place to mention that.\n\n\nDone.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:05:37 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Avoid generating SSL certs for LDAP tests"
}
] |
[
{
"msg_contents": "There is currently no test for the use of ldapbindpasswd in the\npg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 19 Dec 2022 11:16:08 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Add a test to ldapbindpasswd"
},
{
"msg_contents": "\nOn 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n> There is currently no test for the use of ldapbindpasswd in the\n> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>\n>\n\nThis currently has failures on the cfbot for meson builds on FBSD13 and\nDebian Bullseye, but it's not at all clear why. In both cases it fails\nwhere the ldap server is started.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 1 Jan 2023 09:04:14 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n> > There is currently no test for the use of ldapbindpasswd in the\n> > pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n> >\n> >\n>\n> This currently has failures on the cfbot for meson builds on FBSD13 and\n> Debian Bullseye, but it's not at all clear why. In both cases it fails\n> where the ldap server is started.\n\nI think it's failing when using meson. I guess it fails to fail on\nmacOS only because you need to add a new path for Homebrew/ARM like\ncommit 14d63dd2, so it's skipping (it'd be nice if we didn't need\nanother copy of all that logic). Trying locally... it looks like\nslapd is failing silently, and with some tracing I can see it's\nsending an error message to my syslog daemon, which logged:\n\n2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\nctx failed: -1\n\nAh, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n\ntmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\nldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\ntmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\nportlock slapd.conf\n\nI didn't look closely, but apparently there is something wrong in the\npart that copies certs from the ssl test? Not sure why it works for\nautoconf...\n\n\n",
"msg_date": "Mon, 2 Jan 2023 08:02:24 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "\n\n> On Jan 1, 2023, at 2:03 PM, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n> On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n>>> There is currently no test for the use of ldapbindpasswd in the\n>>> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>>> \n>>> \n>> \n>> This currently has failures on the cfbot for meson builds on FBSD13 and\n>> Debian Bullseye, but it's not at all clear why. In both cases it fails\n>> where the ldap server is started.\n> \n> I think it's failing when using meson. I guess it fails to fail on\n> macOS only because you need to add a new path for Homebrew/ARM like\n> commit 14d63dd2, so it's skipping (it'd be nice if we didn't need\n> another copy of all that logic). Trying locally... it looks like\n> slapd is failing silently, and with some tracing I can see it's\n> sending an error message to my syslog daemon, which logged:\n> \n> 2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\n> ctx failed: -1\n> \n> Ah, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n> \n> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\n> ldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\n> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\n> portlock slapd.conf\n> \n> I didn't look closely, but apparently there is something wrong in the\n> part that copies certs from the ssl test? Not sure why it works for\n> autoconf...\n\nThanks, I see the problem. Will post a revised patch shortly \n\nCheers \n\nAndrew \n\n",
"msg_date": "Sun, 1 Jan 2023 14:58:10 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "On 2023-01-01 Su 14:02, Thomas Munro wrote:\n> On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n>>> There is currently no test for the use of ldapbindpasswd in the\n>>> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>>>\n>>>\n>> This currently has failures on the cfbot for meson builds on FBSD13 and\n>> Debian Bullseye, but it's not at all clear why. In both cases it fails\n>> where the ldap server is started.\n> I think it's failing when using meson. I guess it fails to fail on\n> macOS only because you need to add a new path for Homebrew/ARM like\n> commit 14d63dd2, so it's skipping (it'd be nice if we didn't need\n> another copy of all that logic). Trying locally... it looks like\n> slapd is failing silently, and with some tracing I can see it's\n> sending an error message to my syslog daemon, which logged:\n>\n> 2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\n> ctx failed: -1\n>\n> Ah, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n>\n> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\n> ldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\n> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\n> portlock slapd.conf\n>\n> I didn't look closely, but apparently there is something wrong in the\n> part that copies certs from the ssl test? Not sure why it works for\n> autoconf...\n\n\n\nLet's see how we fare with this patch.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Sun, 1 Jan 2023 18:31:50 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "On 2023-01-01 Su 18:31, Andrew Dunstan wrote:\n> On 2023-01-01 Su 14:02, Thomas Munro wrote:\n>> On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n>>>> There is currently no test for the use of ldapbindpasswd in the\n>>>> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>>>>\n>>>>\n>>> This currently has failures on the cfbot for meson builds on FBSD13 and\n>>> Debian Bullseye, but it's not at all clear why. In both cases it fails\n>>> where the ldap server is started.\n>> I think it's failing when using meson. I guess it fails to fail on\n>> macOS only because you need to add a new path for Homebrew/ARM like\n>> commit 14d63dd2, so it's skipping (it'd be nice if we didn't need\n>> another copy of all that logic). Trying locally... it looks like\n>> slapd is failing silently, and with some tracing I can see it's\n>> sending an error message to my syslog daemon, which logged:\n>>\n>> 2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\n>> ctx failed: -1\n>>\n>> Ah, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n>>\n>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\n>> ldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\n>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\n>> portlock slapd.conf\n>>\n>> I didn't look closely, but apparently there is something wrong in the\n>> part that copies certs from the ssl test? Not sure why it works for\n>> autoconf...\n>\n>\n> Let's see how we fare with this patch.\n>\n>\n\nNot so well :-(. This version tries to make the tests totally\nindependent, as they should be. 
That's an attempt to get the cfbot to go\ngreen, but I am intending to refactor this code substantially so the\ncommon bits are in a module each test file will load.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 2 Jan 2023 09:45:27 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 02, 2023 at 09:45:27AM -0500, Andrew Dunstan wrote:\n>\n> On 2023-01-01 Su 18:31, Andrew Dunstan wrote:\n> > Let's see how we fare with this patch.\n> >\n> >\n>\n> Not so well :-(. This version tries to make the tests totally\n> independent, as they should be. That's an attempt to get the cfbot to go\n> green, but I am intending to refactor this code substantially so the\n> common bits are in a module each test file will load.\n\nFTR you can run the same set of CI tests using your own GH account rather than\nsedning patches, see src/tools/ci/README/\n\n\n",
"msg_date": "Tue, 3 Jan 2023 13:48:48 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "On 2023-01-02 Mo 09:45, Andrew Dunstan wrote:\n> On 2023-01-01 Su 18:31, Andrew Dunstan wrote:\n>> On 2023-01-01 Su 14:02, Thomas Munro wrote:\n>>> On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n>>>>> There is currently no test for the use of ldapbindpasswd in the\n>>>>> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>>>>>\n>>>>>\n>>>> This currently has failures on the cfbot for meson builds on FBSD13 and\n>>>> Debian Bullseye, but it's not at all clear why. In both cases it fails\n>>>> where the ldap server is started.\n>>> I think it's failing when using meson. I guess it fails to fail on\n>>> macOS only because you need to add a new path for Homebrew/ARM like\n>>> commit 14d63dd2, so it's skipping (it'd be nice if we didn't need\n>>> another copy of all that logic). Trying locally... it looks like\n>>> slapd is failing silently, and with some tracing I can see it's\n>>> sending an error message to my syslog daemon, which logged:\n>>>\n>>> 2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\n>>> ctx failed: -1\n>>>\n>>> Ah, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n>>>\n>>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\n>>> ldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\n>>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\n>>> portlock slapd.conf\n>>>\n>>> I didn't look closely, but apparently there is something wrong in the\n>>> part that copies certs from the ssl test? Not sure why it works for\n>>> autoconf...\n>>\n>> Let's see how we fare with this patch.\n>>\n>>\n> Not so well :-(. This version tries to make the tests totally\n> independent, as they should be. 
That's an attempt to get the cfbot to go\n> green, but I am intending to refactor this code substantially so the\n> common bits are in a module each test file will load.\n>\n>\n\nThis version factors out the creation of the LDAP server into a separate\nperl Module. That makes both the existing test script and the new test\nscript a lot shorter, and will be useful for the nearby patch for a hook\nfor the ldapbindpassword.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 4 Jan 2023 16:26:39 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "On 2023-01-04 We 16:26, Andrew Dunstan wrote:\n> On 2023-01-02 Mo 09:45, Andrew Dunstan wrote:\n>> On 2023-01-01 Su 18:31, Andrew Dunstan wrote:\n>>> On 2023-01-01 Su 14:02, Thomas Munro wrote:\n>>>> On Mon, Jan 2, 2023 at 3:04 AM Andrew Dunstan <andrew@dunslane.net> wrote:\n>>>>> On 2022-12-19 Mo 11:16, Andrew Dunstan wrote:\n>>>>>> There is currently no test for the use of ldapbindpasswd in the\n>>>>>> pg_hba.conf file. This patch, mostly the work of John Naylor, remedies that.\n>>>>>>\n>>>>>>\n>>>>> This currently has failures on the cfbot for meson builds on FBSD13 and\n>>>>> Debian Bullseye, but it's not at all clear why. In both cases it fails\n>>>>> where the ldap server is started.\n>>>> I think it's failing when using meson. I guess it fails to fail on\n>>>> macOS only because you need to add a new path for Homebrew/ARM like\n>>>> commit 14d63dd2, so it's skipping (it'd be nice if we didn't need\n>>>> another copy of all that logic). Trying locally... it looks like\n>>>> slapd is failing silently, and with some tracing I can see it's\n>>>> sending an error message to my syslog daemon, which logged:\n>>>>\n>>>> 2023-01-02T07:50:20.853019+13:00 x1 slapd[153599]: main: TLS init def\n>>>> ctx failed: -1\n>>>>\n>>>> Ah, it looks like this test is relying on \"slapd-certs\", which doesn't exist:\n>>>>\n>>>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/001_auth/data/\n>>>> ldap.conf ldappassword openldap-data portlock slapd-certs slapd.conf\n>>>> tmunro@x1:~/projects/postgresql/build$ ls testrun/ldap/002_bindpasswd/data/\n>>>> portlock slapd.conf\n>>>>\n>>>> I didn't look closely, but apparently there is something wrong in the\n>>>> part that copies certs from the ssl test? Not sure why it works for\n>>>> autoconf...\n>>> Let's see how we fare with this patch.\n>>>\n>>>\n>> Not so well :-(. This version tries to make the tests totally\n>> independent, as they should be. 
That's an attempt to get the cfbot to go\n>> green, but I am intending to refactor this code substantially so the\n>> common bits are in a module each test file will load.\n>>\n>>\n> This version factors out the creation of the LDAP server into a separate\n> perl Module. That makes both the existing test script and the new test\n> script a lot shorter, and will be useful for the nearby patch for a hook\n> for the ldapbindpassword.\n>\n>\n\nLooks like I fat fingered this. Here's a version that works.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 4 Jan 2023 17:33:50 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
},
{
"msg_contents": "\nOn 2023-01-04 We 17:33, Andrew Dunstan wrote:\n>\n>> This version factors out the creation of the LDAP server into a separate\n>> perl Module. That makes both the existing test script and the new test\n>> script a lot shorter, and will be useful for the nearby patch for a hook\n>> for the ldapbindpassword.\n>>\n>>\n> Looks like I fat fingered this. Here's a version that works.\n>\n>\n\npushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 23 Jan 2023 08:45:01 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a test to ldapbindpasswd"
}
] |
[
{
"msg_contents": "This patch, mostly the work of John Naylor, provides a hook whereby a\nmodule can modify the ldapbindpasswd before it is handed to the ldap\nserver. This is similar in concept to the ssl_passphrase_callback\nfeature, and allows the user not to have to put the cleartext password\nin the pg_hba.conf file. A trivial test is added which provides an\nexample of such a module.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 19 Dec 2022 11:29:09 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Add a hook to allow modification of the ldapbindpasswd"
},
{
"msg_contents": "On 2022-12-19 Mo 11:29, Andrew Dunstan wrote:\n> This patch, mostly the work of John Naylor, provides a hook whereby a\n> module can modify the ldapbindpasswd before it is handed to the ldap\n> server. This is similar in concept to the ssl_passphrase_callback\n> feature, and allows the user not to have to put the cleartext password\n> in the pg_hba.conf file. A trivial test is added which provides an\n> example of such a module.\n\n\nUpdated to take advantage of refactoring of ldap tests.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Mon, 23 Jan 2023 14:11:54 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
"msg_contents": "The CFBot says this patch is failing but I find it hard to believe\nthis is related to this patch...\n\n2023-03-05 20:56:58.705 UTC [33902][client backend]\n[pg_regress/btree_index][18/750:0] STATEMENT: ALTER INDEX\nbtree_part_idx ALTER COLUMN id SET (n_distinct=100);\n2023-03-05 20:56:58.709 UTC [33902][client backend]\n[pg_regress/btree_index][:0] LOG: disconnection: session time:\n0:00:02.287 user=postgres database=regression host=[local]\n2023-03-05 20:56:58.710 UTC [33889][client backend]\n[pg_regress/join][:0] LOG: disconnection: session time: 0:00:02.289\nuser=postgres database=regression host=[local]\n2023-03-05 20:56:58.749 UTC [33045][postmaster] LOG: server process\n(PID 33898) was terminated by signal 6: Abort trap\n2023-03-05 20:56:58.749 UTC [33045][postmaster] DETAIL: Failed\nprocess was running: SELECT * FROM writetest;\n2023-03-05 20:56:58.749 UTC [33045][postmaster] LOG: terminating any\nother active server processes\n\n\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Mon, 6 Mar 2023 15:16:19 -0500",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
    "msg_contents": "On 2023-03-06 Mo 15:16, Gregory Stark (as CFM) wrote:\n> The CFBot says this patch is failing but I find it hard to believe\n> this is related to this patch...\n>\n> 2023-03-05 20:56:58.705 UTC [33902][client backend]\n> [pg_regress/btree_index][18/750:0] STATEMENT: ALTER INDEX\n> btree_part_idx ALTER COLUMN id SET (n_distinct=100);\n> 2023-03-05 20:56:58.709 UTC [33902][client backend]\n> [pg_regress/btree_index][:0] LOG: disconnection: session time:\n> 0:00:02.287 user=postgres database=regression host=[local]\n> 2023-03-05 20:56:58.710 UTC [33889][client backend]\n> [pg_regress/join][:0] LOG: disconnection: session time: 0:00:02.289\n> user=postgres database=regression host=[local]\n> 2023-03-05 20:56:58.749 UTC [33045][postmaster] LOG: server process\n> (PID 33898) was terminated by signal 6: Abort trap\n> 2023-03-05 20:56:58.749 UTC [33045][postmaster] DETAIL: Failed\n> process was running: SELECT * FROM writetest;\n> 2023-03-05 20:56:58.749 UTC [33045][postmaster] LOG: terminating any\n> other active server processes\n>\n\nYeah. It says it's fine now. Neither of the two recent failures look\nlike they have anything to do with this.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Tue, 7 Mar 2023 09:32:40 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
    "msg_contents": "On 2023-01-23 Mo 14:11, Andrew Dunstan wrote:\n> On 2022-12-19 Mo 11:29, Andrew Dunstan wrote:\n>> This patch, mostly the work of John Naylor, provides a hook whereby a\n>> module can modify the ldapbindpasswd before it is handed to the ldap\n>> server. This is similar in concept to the ssl_passphrase_callback\n>> feature, and allows the user not to have to put the cleartext password\n>> in the pg_hba.conf file. A trivial test is added which provides an\n>> example of such a module.\n>\n> Updated to take advantage of refactoring of ldap tests.\n>\n>\n\npushed.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 15 Mar 2023 16:39:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> pushed.\n\ndrongo is not happy with this, but I'm kind of baffled as to why:\n\n\"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\pgsql.sln\" (default target) (1) ->\n\"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj\" (default target) (60) ->\n(Link target) -> \n ldap_password_func.obj : error LNK2001: unresolved external symbol __imp_ldap_password_hook [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n .\\\\Release\\\\ldap_password_func\\\\ldap_password_func.dll : fatal error LNK1120: 1 unresolved externals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n\nThe only obvious explanation for a link problem would be if the\nvariable's declaration were missing PGDLLIMPORT; but it's not.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 15 Mar 2023 17:50:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
    "msg_contents": "On 2023-03-15 We 17:50, Tom Lane wrote:\n> Andrew Dunstan<andrew@dunslane.net> writes:\n>> pushed.\n> drongo is not happy with this, but I'm kind of baffled as to why:\n>\n> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\pgsql.sln\" (default target) (1) ->\n> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj\" (default target) (60) ->\n> (Link target) ->\n> ldap_password_func.obj : error LNK2001: unresolved external symbol __imp_ldap_password_hook [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n> .\\\\Release\\\\ldap_password_func\\\\ldap_password_func.dll : fatal error LNK1120: 1 unresolved externals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n>\n> The only obvious explanation for a link problem would be if the\n> variable's declaration were missing PGDLLIMPORT; but it's not.\n>\n> \t\t\t\n\n\nUgh. Not batting 1000 today. Will investigate.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 15 Mar 2023 18:18:28 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
"msg_contents": "On Wed, Mar 15, 2023 at 06:18:28PM -0400, Andrew Dunstan wrote:\n> Ugh. Not batting 1000 today. Will investigate.\n\nI have noticed that you forgot a .gitignore in this new path, as well,\nso I have taken the liberty to add one ;)\n\nFWIW, I use git-sh-prompt prompt to detect such things quickly.\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 09:39:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
    "msg_contents": "On 2023-03-15 We 20:39, Michael Paquier wrote:\n> On Wed, Mar 15, 2023 at 06:18:28PM -0400, Andrew Dunstan wrote:\n>> Ugh. Not batting 1000 today. Will investigate.\n> I have noticed that you forgot a .gitignore in this new path, as well,\n> so I have taken the liberty to add one ;)\n\n\nThanks. One benefit of moving to meson is that it would make this sort \nof thing obsolete, since it doesn't pollute the source directory.\n\n\n>\n> FWIW, I use git-sh-prompt prompt to detect such things quickly.\n\n\nI used to use a similar gadget, but I found it occasionally adding a \nsecond or two to return the prompt, so I turned it off. In any case, I \nnormally use vpath builds, so it probably wouldn't have caught this for \nme anyway.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 16 Mar 2023 06:27:59 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
},
{
"msg_contents": "On 2023-03-15 We 18:18, Andrew Dunstan wrote:\n>\n>\n> On 2023-03-15 We 17:50, Tom Lane wrote:\n>> Andrew Dunstan<andrew@dunslane.net> writes:\n>>> pushed.\n>> drongo is not happy with this, but I'm kind of baffled as to why:\n>>\n>> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\pgsql.sln\" (default target) (1) ->\n>> \"c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj\" (default target) (60) ->\n>> (Link target) ->\n>> ldap_password_func.obj : error LNK2001: unresolved external symbol __imp_ldap_password_hook [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n>> .\\\\Release\\\\ldap_password_func\\\\ldap_password_func.dll : fatal error LNK1120: 1 unresolved externals [c:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\ldap_password_func.vcxproj]\n>>\n>> The only obvious explanation for a link problem would be if the\n>> variable's declaration were missing PGDLLIMPORT; but it's not.\n>>\n>> \t\t\t\n>\n>\n> Ugh. Not batting 1000 today. Will investigate.\n>\n>\n>\n\nThe issue was apparently that I had neglected to suppress building the \ntest module on MSVC if not configured to build with LDAP, since the hook \nis only defined in that case. 
I have pushed a fix for that and drongo is \nhappy once more.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Thu, 16 Mar 2023 06:33:27 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Add a hook to allow modification of the ldapbindpasswd"
}
] |
[
{
"msg_contents": "Hi,\nI was going over 002_pg_dump.pl and saw a typo in pgdump_runs.\n\nPlease see the patch.\n\nThanks",
"msg_date": "Mon, 19 Dec 2022 10:53:59 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fixing typo in 002_pg_dump.pl"
},
{
"msg_contents": "On 19.12.22 19:53, Ted Yu wrote:\n> I was going over 002_pg_dump.pl <http://002_pg_dump.pl> and saw a typo \n> in pgdump_runs.\n> \n> Please see the patch.\n\nfixed\n\n\n\n",
"msg_date": "Mon, 19 Dec 2022 21:08:54 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing typo in 002_pg_dump.pl"
},
{
"msg_contents": "On Mon, Dec 19, 2022 at 09:08:54PM +0100, Peter Eisentraut wrote:\n> fixed\n\nThanks!\n--\nMichael",
"msg_date": "Tue, 20 Dec 2022 06:57:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fixing typo in 002_pg_dump.pl"
}
] |
[
{
"msg_contents": "Attached is my work in progress to implement the changes to the CAST()\nfunction as proposed by Vik Fearing.\n\nThis work builds upon the Error-safe User Functions work currently ongoing.\n\nThe proposed changes are as follows:\n\nCAST(expr AS typename)\n continues to behave as before.\n\nCAST(expr AS typename ERROR ON ERROR)\n has the identical behavior as the unadorned CAST() above.\n\nCAST(expr AS typename NULL ON ERROR)\n will use error-safe functions to do the cast of expr, and will return\nNULL if the cast fails.\n\nCAST(expr AS typename DEFAULT expr2 ON ERROR)\n will use error-safe functions to do the cast of expr, and will return\nexpr2 if the cast fails.\n\nThere is an additional FORMAT parameter that I have not yet implemented, my\nunderstanding is that it is largely intended for DATE/TIME field\nconversions, but others are certainly possible.\nCAST(expr AS typename FORMAT fmt DEFAULT expr2 ON ERROR)\n\nWhat is currently working:\n- Any scalar expression that can be evaluated at parse time. These tests\nfrom cast.sql all currently work:\n\nVALUES (CAST('error' AS integer));\nVALUES (CAST('error' AS integer ERROR ON ERROR));\nVALUES (CAST('error' AS integer NULL ON ERROR));\nVALUES (CAST('error' AS integer DEFAULT 42 ON ERROR));\n\nSELECT CAST('{123,abc,456}' AS integer[] DEFAULT '{-789}' ON ERROR) as\narray_test1;\n\n- Scalar values evaluated at runtime.\n\nCREATE TEMPORARY TABLE t(t text);\nINSERT INTO t VALUES ('a'), ('1'), ('b'), (2);\nSELECT CAST(t.t AS integer DEFAULT -1 ON ERROR) AS foo FROM t;\n foo\n-----\n -1\n 1\n -1\n 2\n(4 rows)\n\n\nAlong the way, I made a few design decisions, each of which is up for\ndebate:\n\nFirst, I created OidInputFunctionCallSafe, which is to OidInputFunctionCall\nwhat InputFunctionCallSafe is to InputFunctionCall. 
Given that the only\nplace I ended up using it was stringTypeDatumSafe(), it may be possible to\njust move that code inside stringTypeDatumSafe.\n\nNext, I had a need for FuncExpr, CoerceViaIO, and ArrayCoerce to all report\nif their expr argument failed, and if not, just past the evaluation of\nexpr2. Rather than duplicate this logic in several places, I chose instead\nto modify CoalesceExpr to allow for an error-test mode in addition to its\ndefault null-test mode, and then to provide this altered node with two\nexpressions, the first being the error-safe typecast of expr and the second\nbeing the non-error-safe typecast of expr2.\n\nI still don't have array-to-array casts working, as the changed I would\nlikely need to make to ArrayCoerce get somewhat invasive, so this seemed\nlike a good time to post my work so far and solicit some feedback beyond\nwhat I've already been getting from Jeff Davis and Michael Paquier.\n\nI've sidestepped domains as well for the time being as well as avoiding JIT\nissues entirely.\n\nNo documentation is currently prepared. All but one of the regression test\nqueries work, the one that is currently failing is:\n\nSELECT CAST('{234,def,567}'::text[] AS integer[] DEFAULT '{-1011}' ON\nERROR) as array_test2;\n\nOther quirks:\n- an unaliased CAST ON DEFAULT will return the column name of \"coalesce\",\nwhich internally is true, but obviously would be quite confusing to a user.\n\nAs a side observation, I noticed that the optimizer already tries to\nresolve expressions based on constants and to collapse expression trees\nwhere possible, which makes me wonder if the work done to do the same in\ntransformTypeCast/ and coerce_to_target_type and coerce_type isn't also\nwasted.",
"msg_date": "Mon, 19 Dec 2022 17:56:37 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "CAST(... ON DEFAULT) - WIP build on top of Error-Safe User Functions"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> The proposed changes are as follows:\n> CAST(expr AS typename)\n> continues to behave as before.\n> CAST(expr AS typename ERROR ON ERROR)\n> has the identical behavior as the unadorned CAST() above.\n> CAST(expr AS typename NULL ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return\n> NULL if the cast fails.\n> CAST(expr AS typename DEFAULT expr2 ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return\n> expr2 if the cast fails.\n\nWhile I approve of trying to get some functionality in this area,\nI'm not sure that extending CAST is a great idea, because I'm afraid\nthat the SQL committee will do something that conflicts with it.\nIf we know that they are about to standardize exactly this syntax,\nwhere is that information available? If we don't know that,\nI'd prefer to invent some kind of function or other instead of\nextending the grammar.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Jan 2023 10:57:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Tue, 20 Dec 2022 at 04:27, Corey Huinker <corey.huinker@gmail.com> wrote:\n>\n>\n> Attached is my work in progress to implement the changes to the CAST() function as proposed by Vik Fearing.\n>\n> This work builds upon the Error-safe User Functions work currently ongoing.\n>\n> The proposed changes are as follows:\n>\n> CAST(expr AS typename)\n> continues to behave as before.\n>\n> CAST(expr AS typename ERROR ON ERROR)\n> has the identical behavior as the unadorned CAST() above.\n>\n> CAST(expr AS typename NULL ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return NULL if the cast fails.\n>\n> CAST(expr AS typename DEFAULT expr2 ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return expr2 if the cast fails.\n>\n> There is an additional FORMAT parameter that I have not yet implemented, my understanding is that it is largely intended for DATE/TIME field conversions, but others are certainly possible.\n> CAST(expr AS typename FORMAT fmt DEFAULT expr2 ON ERROR)\n>\n> What is currently working:\n> - Any scalar expression that can be evaluated at parse time. These tests from cast.sql all currently work:\n>\n> VALUES (CAST('error' AS integer));\n> VALUES (CAST('error' AS integer ERROR ON ERROR));\n> VALUES (CAST('error' AS integer NULL ON ERROR));\n> VALUES (CAST('error' AS integer DEFAULT 42 ON ERROR));\n>\n> SELECT CAST('{123,abc,456}' AS integer[] DEFAULT '{-789}' ON ERROR) as array_test1;\n>\n> - Scalar values evaluated at runtime.\n>\n> CREATE TEMPORARY TABLE t(t text);\n> INSERT INTO t VALUES ('a'), ('1'), ('b'), (2);\n> SELECT CAST(t.t AS integer DEFAULT -1 ON ERROR) AS foo FROM t;\n> foo\n> -----\n> -1\n> 1\n> -1\n> 2\n> (4 rows)\n>\n>\n> Along the way, I made a few design decisions, each of which is up for debate:\n>\n> First, I created OidInputFunctionCallSafe, which is to OidInputFunctionCall what InputFunctionCallSafe is to InputFunctionCall. 
Given that the only place I ended up using it was stringTypeDatumSafe(), it may be possible to just move that code inside stringTypeDatumSafe.\n>\n> Next, I had a need for FuncExpr, CoerceViaIO, and ArrayCoerce to all report if their expr argument failed, and if not, just past the evaluation of expr2. Rather than duplicate this logic in several places, I chose instead to modify CoalesceExpr to allow for an error-test mode in addition to its default null-test mode, and then to provide this altered node with two expressions, the first being the error-safe typecast of expr and the second being the non-error-safe typecast of expr2.\n>\n> I still don't have array-to-array casts working, as the changed I would likely need to make to ArrayCoerce get somewhat invasive, so this seemed like a good time to post my work so far and solicit some feedback beyond what I've already been getting from Jeff Davis and Michael Paquier.\n>\n> I've sidestepped domains as well for the time being as well as avoiding JIT issues entirely.\n>\n> No documentation is currently prepared. 
All but one of the regression test queries work, the one that is currently failing is:\n>\n> SELECT CAST('{234,def,567}'::text[] AS integer[] DEFAULT '{-1011}' ON ERROR) as array_test2;\n>\n> Other quirks:\n> - an unaliased CAST ON DEFAULT will return the column name of \"coalesce\", which internally is true, but obviously would be quite confusing to a user.\n>\n> As a side observation, I noticed that the optimizer already tries to resolve expressions based on constants and to collapse expression trees where possible, which makes me wonder if the work done to do the same in transformTypeCast/ and coerce_to_target_type and coerce_type isn't also wasted.\n\nCFBot shows some compilation errors as in [1], please post an updated\nversion for the same:\n[02:53:44.829] time make -s -j${BUILD_JOBS} world-bin\n[02:55:41.164] llvmjit_expr.c: In function ‘llvm_compile_expr’:\n[02:55:41.164] llvmjit_expr.c:928:6: error: ‘v_resnull’ undeclared\n(first use in this function); did you mean ‘v_resnullp’?\n[02:55:41.164] 928 | v_resnull = LLVMBuildLoad(b, v_reserrorp, \"\");\n[02:55:41.164] | ^~~~~~~~~\n[02:55:41.164] | v_resnullp\n[02:55:41.164] llvmjit_expr.c:928:6: note: each undeclared identifier\nis reported only once for each function it appears in\n[02:55:41.164] llvmjit_expr.c:928:35: error: ‘v_reserrorp’ undeclared\n(first use in this function); did you mean ‘v_reserror’?\n[02:55:41.164] 928 | v_resnull = LLVMBuildLoad(b, v_reserrorp, \"\");\n[02:55:41.164] | ^~~~~~~~~~~\n[02:55:41.164] | v_reserror\n[02:55:41.165] make[2]: *** [<builtin>: llvmjit_expr.o] Error 1\n[02:55:41.165] make[2]: *** Waiting for unfinished jobs....\n[02:55:45.495] make[1]: *** [Makefile:42: all-backend/jit/llvm-recurse] Error 2\n[02:55:45.495] make: *** [GNUmakefile:21: world-bin-src-recurse] Error 2\n\n[1] - https://cirrus-ci.com/task/6687753371385856?logs=gcc_warning#L448\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:40:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "\nOn 2023-01-02 Mo 10:57, Tom Lane wrote:\n> Corey Huinker <corey.huinker@gmail.com> writes:\n>> The proposed changes are as follows:\n>> CAST(expr AS typename)\n>> continues to behave as before.\n>> CAST(expr AS typename ERROR ON ERROR)\n>> has the identical behavior as the unadorned CAST() above.\n>> CAST(expr AS typename NULL ON ERROR)\n>> will use error-safe functions to do the cast of expr, and will return\n>> NULL if the cast fails.\n>> CAST(expr AS typename DEFAULT expr2 ON ERROR)\n>> will use error-safe functions to do the cast of expr, and will return\n>> expr2 if the cast fails.\n> While I approve of trying to get some functionality in this area,\n> I'm not sure that extending CAST is a great idea, because I'm afraid\n> that the SQL committee will do something that conflicts with it.\n> If we know that they are about to standardize exactly this syntax,\n> where is that information available? If we don't know that,\n> I'd prefer to invent some kind of function or other instead of\n> extending the grammar.\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:08:56 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Corey Huinker <corey.huinker@gmail.com> writes:\n> > The proposed changes are as follows:\n> > CAST(expr AS typename)\n> > continues to behave as before.\n> > CAST(expr AS typename ERROR ON ERROR)\n> > has the identical behavior as the unadorned CAST() above.\n> > CAST(expr AS typename NULL ON ERROR)\n> > will use error-safe functions to do the cast of expr, and will return\n> > NULL if the cast fails.\n> > CAST(expr AS typename DEFAULT expr2 ON ERROR)\n> > will use error-safe functions to do the cast of expr, and will return\n> > expr2 if the cast fails.\n>\n> While I approve of trying to get some functionality in this area,\n> I'm not sure that extending CAST is a great idea, because I'm afraid\n> that the SQL committee will do something that conflicts with it.\n> If we know that they are about to standardize exactly this syntax,\n> where is that information available? If we don't know that,\n> I'd prefer to invent some kind of function or other instead of\n> extending the grammar.\n>\n> regards, tom lane\n>\n\nI'm going off the spec that Vik presented in\nhttps://www.postgresql.org/message-id/f8600a3b-f697-2577-8fea-f40d3e18bea8@postgresfriends.org\nwhich is his effort to get it through the SQL committee. 
I was\nalready thinking about how to get the SQLServer TRY_CAST() function into\npostgres, so this seemed like the logical next step.\n\nWhile the syntax may change, the underlying infrastructure would remain\nbasically the same: we would need the ability to detect that a typecast had\nfailed, and replace it with the default value, and handle that at parse\ntime, or executor time, and handle array casts where the array has the\ndefault but the underlying elements can't.\n\nIt would be simple to move the grammar changes to their own patch if that\nremoves a barrier for people.",
"msg_date": "Tue, 3 Jan 2023 13:02:36 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "Corey Huinker <corey.huinker@gmail.com> writes:\n> On Mon, Jan 2, 2023 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While I approve of trying to get some functionality in this area,\n>> I'm not sure that extending CAST is a great idea, because I'm afraid\n>> that the SQL committee will do something that conflicts with it.\n\n> I'm going off the spec that Vik presented in\n> https://www.postgresql.org/message-id/f8600a3b-f697-2577-8fea-f40d3e18bea8@postgresfriends.org\n> which is his effort to get it through the SQL committee.\n\nI'm pretty certain that sending something to pgsql-hackers will have\nexactly zero impact on the SQL committee. Is there anything actually\nsubmitted to the committee, and if so what's its status?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 13:14:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On 1/3/23 19:14, Tom Lane wrote:\n> Corey Huinker <corey.huinker@gmail.com> writes:\n>> On Mon, Jan 2, 2023 at 10:57 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> While I approve of trying to get some functionality in this area,\n>>> I'm not sure that extending CAST is a great idea, because I'm afraid\n>>> that the SQL committee will do something that conflicts with it.\n> \n>> I'm going off the spec that Vik presented in\n>> https://www.postgresql.org/message-id/f8600a3b-f697-2577-8fea-f40d3e18bea8@postgresfriends.org\n>> which is his effort to get it through the SQL committee.\n> \n> I'm pretty certain that sending something to pgsql-hackers will have\n> exactly zero impact on the SQL committee. Is there anything actually\n> submitted to the committee, and if so what's its status?\n\nI have not posted my paper to the committee yet, but I plan to do so \nbefore the working group's meeting early February. Just like with \nposting patches here, I cannot guarantee that it will get accepted but I \nwill be arguing for it.\n\nI don't think we should add that syntax until I do get it through the \ncommittee, just in case they change something.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 19:32:58 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> I have not posted my paper to the committee yet, but I plan to do so \n> before the working group's meeting early February. Just like with \n> posting patches here, I cannot guarantee that it will get accepted but I \n> will be arguing for it.\n\n> I don't think we should add that syntax until I do get it through the \n> committee, just in case they change something.\n\nAgreed. So this is something we won't be able to put into v16;\nit'll have to wait till there's something solid from the committee.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 14:15:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 14:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Vik Fearing <vik@postgresfriends.org> writes:\n>\n> > I don't think we should add that syntax until I do get it through the\n> > committee, just in case they change something.\n>\n> Agreed. So this is something we won't be able to put into v16;\n> it'll have to wait till there's something solid from the committee.\n\nI guess I'll mark this Rejected in the CF then. Who knows when the SQL\ncommittee will look at this...\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 28 Mar 2023 14:52:50 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Mon, 19 Dec 2022 at 17:57, Corey Huinker <corey.huinker@gmail.com> wrote:\n\n>\n> Attached is my work in progress to implement the changes to the CAST()\n> function as proposed by Vik Fearing.\n>\n> CAST(expr AS typename NULL ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return\n> NULL if the cast fails.\n>\n> CAST(expr AS typename DEFAULT expr2 ON ERROR)\n> will use error-safe functions to do the cast of expr, and will return\n> expr2 if the cast fails.\n>\n\nIs there any difference between NULL and DEFAULT NULL?",
"msg_date": "Tue, 28 Mar 2023 15:25:05 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 2:53 PM Gregory Stark (as CFM) <stark.cfm@gmail.com>\nwrote:\n\n> On Tue, 3 Jan 2023 at 14:16, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Vik Fearing <vik@postgresfriends.org> writes:\n> >\n> > > I don't think we should add that syntax until I do get it through the\n> > > committee, just in case they change something.\n> >\n> > Agreed. So this is something we won't be able to put into v16;\n> > it'll have to wait till there's something solid from the committee.\n>\n> I guess I'll mark this Rejected in the CF then. Who knows when the SQL\n> committee will look at this...\n>\n> --\n> Gregory Stark\n> As Commitfest Manager\n>\n\nYes, for now. I'm in touch with the pg-people on the committee and will\nresume work when there's something to act upon.",
"msg_date": "Tue, 28 Mar 2023 16:06:20 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 3:25 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Mon, 19 Dec 2022 at 17:57, Corey Huinker <corey.huinker@gmail.com>\n> wrote:\n>\n>>\n>> Attached is my work in progress to implement the changes to the CAST()\n>> function as proposed by Vik Fearing.\n>>\n>> CAST(expr AS typename NULL ON ERROR)\n>> will use error-safe functions to do the cast of expr, and will return\n>> NULL if the cast fails.\n>>\n>> CAST(expr AS typename DEFAULT expr2 ON ERROR)\n>> will use error-safe functions to do the cast of expr, and will return\n>> expr2 if the cast fails.\n>>\n>\n> Is there any difference between NULL and DEFAULT NULL?\n>\n\nWhat I think you're asking is: is there a difference between these two\nstatements:\n\nSELECT CAST(my_string AS integer NULL ON ERROR) FROM my_table;\n\n\nSELECT CAST(my_string AS integer DEFAULT NULL ON ERROR) FROM my_table;\n\n\nAnd as I understand it, the answer would be no, there is no practical\ndifference. The first case is just a convenient shorthand, whereas the\nsecond case tees you up for a potentially complex expression. Before you\nask, I believe the ON ERROR syntax could be made optional. 
As I implemented\nit, both cases create a default expression which then typecast to integer,\nand in both cases that expression would be a const-null, so the optimizer\nsteps would very quickly collapse those steps into a plain old constant.",
"msg_date": "Tue, 28 Mar 2023 16:23:26 -0400",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CAST(... ON DEFAULT) - WIP build on top of Error-Safe User\n Functions"
}
] |
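The ON ERROR semantics discussed in this thread — attempt the cast with error-safe machinery and fall back to the DEFAULT expression, with NULL ON ERROR as the default-NULL shorthand — can be sketched outside the server. A minimal standalone C illustration; the function name and signature here are invented for the demo, and the actual patch would route through the backend's error-safe input functions rather than strtol:

```c
#include <assert.h>
#include <errno.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Behave like CAST(str AS integer DEFAULT dflt ON ERROR): on any parse
 * failure, return the default instead of raising an error.  *used_default
 * reports whether the fallback was taken (a NULL ON ERROR result would be
 * modeled the same way, with a null flag instead of a value).
 */
static int
cast_int_or_default(const char *str, int dflt, bool *used_default)
{
    char   *end;
    long    val;

    errno = 0;
    val = strtol(str, &end, 10);

    /* reject empty input, trailing junk, and out-of-range values */
    if (end == str || *end != '\0' || errno == ERANGE ||
        val < INT_MIN || val > INT_MAX)
    {
        *used_default = true;
        return dflt;
    }
    *used_default = false;
    return (int) val;
}
```

As Corey notes in this thread, `NULL ON ERROR` and `DEFAULT NULL ON ERROR` collapse to the same behavior: the fallback branch simply yields a null.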
[
{
"msg_contents": "The standard has several constructs for creating new types from other \ntypes. I don't mean anything like CREATE TYPE here, I mean things like \nthis:\n\n- ROW(a, b, c), (<explicit row value constructor>)\n- ARRAY[a, b, c], (<array value constructor by enumeration>)\n- PERIOD(a, b), (<period predicand>)\n- MULTISET[a, b, c], (<multiset value constructor by enumeration>)\n- MDARRAY[x(1:3)][a, b, c], (<md-array value constructor by enumeration>)\n\nI am not sure what magic we use for the row value constructor. We \nhandle ARRAY by creating an array type for every non-array type that is \ncreated. Periods are very similar to range types and we have to create \nnew functions such as int4range(a,b) and int8range(a,b) instead of some \nkind of generic RANGE(a, b, '[)') and not worrying about what the type \nis as long as there is a btree opclass for it.\n\nObviously there would have to be an actual type in order to store it in \na table, but what I am most interested in here is being able to create \nthem on the fly. I do not think it is feasible to create N new types \nfor every type like we do for arrays on the off-chance you would want to \nput it in a PERIOD for example.\n\nFor those who know the code much better than I do, what would be a \nplausible way forward to support these containers?\n-- \nVik Fearing\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:24:22 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Container Types"
},
{
"msg_contents": "On Tue, 2022-12-20 at 10:24 +0100, Vik Fearing wrote:\n> Obviously there would have to be an actual type in order to store it\n> in \n> a table, but what I am most interested in here is being able to\n> create \n> them on the fly. I do not think it is feasible to create N new types\n> for every type like we do for arrays on the off-chance you would want\n> to \n> put it in a PERIOD for example.\n\nBy \"on the fly\" do you mean when creating real objects, like a table?\nIn that case it might not be so hard, because we can just create an\nordinary entry in pg_type.\n\nBut for this to be a complete feature, I think we need the container\ntypes to be useful when constructed within a query, too. E.g.\n\n SELECT two_things(v1, v2) FROM foo;\n\nwhere the result of two_things is some new type two_things_int_text\nwhich is based on the types of v1 and v2 and has never been used\nbefore.\n\nI don't think it's reasonable to create a permanent pg_type entry on\nthe fly to answer a read-only query. But we could introduce some notion\nof an ephemeral in-memory pg_type entry with its own OID, and create\nthat on the fly.\n\nOne way to do that might be to reserve some of the system OID space\n(e.g. 15000-16000) for OIDs for temporary catalog entries, and then\nhave some in-memory structure that holds those temporary entries. Any\nlookups in that range would search the in-memory structure instead of\nthe real catalog. All of this is easier said than done, but I think it\ncould work.\n\nWe'd also need to think about how to infer types through a container\ntype, e.g.\n\n SELECT second_thing(two_things(v1,v2)) FROM foo;\n\nshould infer that the return type of second_thing() is the type of v2.\nTo do that, perhaps pg_proc entries can include some kind of type\nsublanguage to do this inference, e.g. \"a, b -> b\" for second_thing(),\nor \"a, b -> a\" for first_thing().\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 25 Oct 2023 15:03:04 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Container Types"
},
{
"msg_contents": "Hi,\n\nOn 2023-10-25 15:03:04 -0700, Jeff Davis wrote:\n> On Tue, 2022-12-20 at 10:24 +0100, Vik Fearing wrote:\n> > Obviously there would have to be an actual type in order to store it\n> > in \n> > a table, but what I am most interested in here is being able to\n> > create \n> > them on the fly. I do not think it is feasible to create N new types\n> > for every type like we do for arrays on the off-chance you would want\n> > to \n> > put it in a PERIOD for example.\n> \n> By \"on the fly\" do you mean when creating real objects, like a table?\n> In that case it might not be so hard, because we can just create an\n> ordinary entry in pg_type.\n> \n> But for this to be a complete feature, I think we need the container\n> types to be useful when constructed within a query, too. E.g.\n> \n> SELECT two_things(v1, v2) FROM foo;\n> \n> where the result of two_things is some new type two_things_int_text\n> which is based on the types of v1 and v2 and has never been used\n> before.\n> \n> I don't think it's reasonable to create a permanent pg_type entry on\n> the fly to answer a read-only query. But we could introduce some notion\n> of an ephemeral in-memory pg_type entry with its own OID, and create\n> that on the fly.\n\nI don't particularly like the idea of an in-memory pg_type entry. But\nI'm not sure we need that anyway - we already have this problem with record\ntypes. We support both named record types (tables and explicitly created\ncomposite types) and ad-hoc ones (created if you write ROW(foo, bar) or\nsomething like that). If a record's typmod is negative, it refers to an\nanonymous row type, if positive it's a named typmod.\n\nWe even have support for sharing such ad-hoc rowtypes across backends for\nparallel query...\n\nI'd look whether you can generalize that infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 25 Oct 2023 16:01:51 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Container Types"
},
{
"msg_contents": "On Wed, 2023-10-25 at 16:01 -0700, Andres Freund wrote:\n> I'd look whether you can generalize that infrastructure.\n\nI had briefly looked at using the record type mechanism before, and it\nseemed like a challenge because it doesn't really work when passing\nthrough a function call:\n\n CREATE TABLE t(a INT, b TEXT);\n INSERT INTO t VALUES(1, 'one');\n CREATE FUNCTION id(RECORD) RETURNS RECORD LANGUAGE plpgsql AS\n $$ BEGIN RETURN $1; END; $$;\n SELECT t.a FROM t; -- 1\n SELECT (id(t)).a FROM t; -- ERROR\n\nBut now that I think about it, that's really a type inference\nlimitation, and that needs to be solved regardless.\n\nAfter the type inference figures out what the right type is, then I\nthink you're right that an OID is not required to track it, and however\nwe do track it should be able to reuse some of the existing\ninfrastructure for dealing with record types.\n\nRegards,\n\tJeff Davis\n\n\n\n",
"msg_date": "Wed, 25 Oct 2023 23:13:31 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Container Types"
}
] |
[
{
"msg_contents": "Yesterday when testing a patch I got annoyed when my test failed. I\ntested it like this:\n\n meson test ldap_password_func/001_mutated_bindpasswd\n\nIt turned out that I needed to do this:\n\n meson test tmp_install ldap_password_func/001_mutated_bindpasswd\n\nThe Makefile equivalent ensures that you have a tmp_install without\nhaving to request to explicitly. It would be nice if we could make meson\ndo the same thing, and also honor NO_TEMP_INSTALL if set.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 20 Dec 2022 07:12:04 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "meson and tmp_install"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-20 07:12:04 -0500, Andrew Dunstan wrote:\n> Yesterday when testing a patch I got annoyed when my test failed. I\n> tested it like this:\n> \n> meson test ldap_password_func/001_mutated_bindpasswd\n> \n> It turned out that I needed to do this:\n> \n> meson test tmp_install ldap_password_func/001_mutated_bindpasswd\n> \n> The Makefile equivalent ensures that you have a tmp_install without\n> having to request to explicitly. It would be nice if we could make meson\n> do the same thing, and also honor NO_TEMP_INSTALL if set.\n\nI would like that too, but there's no easy fix that doesn't have\ndownsides as far as I am aware. We could make the temp install a build\ntarget that the tests depend on, but for historical reasons in meson\nthat means that the 'all' target depends on temp-install. Which isn't\ngreat.\n\nMy current thinking is that we should get away from needing the\ntemporary install and instead allow to run the tests against the build\ndirectory itself. The temp-install adds a fair bit of overhead and\nfailure potential. The only reason we need it is that a) initdb and a\nfew other programs insist that postgres needs to be in the same\ndirectory b) contrib modules currently need to reside in one single\ndirectory.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Dec 2022 09:29:58 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson and tmp_install"
},
{
"msg_contents": "Hi!\n\nDidn't know where to ask, so I've chosen this thread - there is no any\ndocumentation on meson build platform in PostgreSQL docs. Is this\nokay? For me it was a surprise when the meson platform was added,\nand I have had to spend some time sweeping through meson docs\nwhen I'd added new source files, to build Postgres successfully.\n\nThanks!\n\nOn Tue, Dec 20, 2022 at 8:30 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-12-20 07:12:04 -0500, Andrew Dunstan wrote:\n> > Yesterday when testing a patch I got annoyed when my test failed. I\n> > tested it like this:\n> >\n> > meson test ldap_password_func/001_mutated_bindpasswd\n> >\n> > It turned out that I needed to do this:\n> >\n> > meson test tmp_install ldap_password_func/001_mutated_bindpasswd\n> >\n> > The Makefile equivalent ensures that you have a tmp_install without\n> > having to request to explicitly. It would be nice if we could make meson\n> > do the same thing, and also honor NO_TEMP_INSTALL if set.\n>\n> I would like that too, but there's no easy fix that doesn't have\n> downsides as far as I am aware. We could make the temp install a build\n> target that the tests depend on, but for historical reasons in meson\n> that means that the 'all' target depends on temp-install. Which isn't\n> great.\n>\n> My current thinking is that we should get away from needing the\n> temporary install and instead allow to run the tests against the build\n> directory itself. The temp-install adds a fair bit of overhead and\n> failure potential. 
The only reason we need it is that a) initdb and a\n> few other programs insist that postgres needs to be in the same\n> directory b) contrib modules currently need to reside in one single\n> directory.\n>\n> Greetings,\n>\n> Andres Freund\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 20 Dec 2022 21:11:26 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson and tmp_install"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-20 21:11:26 +0300, Nikita Malakhov wrote:\n> Didn't know where to ask, so I've chosen this thread - there is no any\n> documentation on meson build platform in PostgreSQL docs.\n\nThere is now:\nhttps://www.postgresql.org/docs/devel/install-meson.html\n\nNeeds further work, but it's a start.\n\n\n> Is this okay? For me it was a surprise when the meson platform was\n> added\n\nIt's been discussed on the list for a year or so before it was\nadded. It's a large change, so unfortunately it's not something that I\ncould get done in a single day, with perfect docs from the get go.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 20 Dec 2022 10:22:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: meson and tmp_install"
},
{
"msg_contents": "Hi!\n\nThat's great, thanks! Discussion list is very long so I've missed this\ntopic.\nJust a suggestion - I've checked the link above, maybe there should be\nadded a small part on where build files are located and how to add new\nsources for successful build?\n\nOn Tue, Dec 20, 2022 at 9:22 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2022-12-20 21:11:26 +0300, Nikita Malakhov wrote:\n> > Didn't know where to ask, so I've chosen this thread - there is no any\n> > documentation on meson build platform in PostgreSQL docs.\n>\n> There is now:\n> https://www.postgresql.org/docs/devel/install-meson.html\n>\n> Needs further work, but it's a start.\n>\n>\n> > Is this okay? For me it was a surprise when the meson platform was\n> > added\n>\n> It's been discussed on the list for a year or so before it was\n> added. It's a large change, so unfortunately it's not something that I\n> could get done in a single day, with perfect docs from the get go.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Wed, 21 Dec 2022 20:38:12 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: meson and tmp_install"
}
] |
[
{
"msg_contents": "Justin Pryzby <pryzby(at)telsasoft(dot)com> writes:\n> Some modest cleanups I've accumulated\n\nHi Justin.\n\n0001:\nRegarding initializer {0}, the problem is still with old compilers, which\ndon't initialize exactly like memset.\nOnly more modern compilers fill in any \"holes\" that may exist.\nThis means that as old compilers are not supported, this will no longer be\na problem.\nFast and secure solution: memset\n\n+1 to switch from loop to memset, same as many places in the code base.\n\n- /* initialize nulls and values */\n- for (i = 0; i < Natts_pg_constraint; i++)\n- {\n- nulls[i] = false;\n- values[i] = (Datum) NULL;\n- }\n+ memset(nulls, false, sizeof(nulls));\n+ memset(values, 0, sizeof(values));\n\nWhat made me curious about this excerpt is that the Datum values are NULL,\nbut aren't nulls?\nWould it not be?\n+ memset(nulls, true, sizeof(nulls));\n+ memset(values, 0, sizeof(values));\n\nsrc/backend/tcop/pquery.c:\n/* single format specified, use for all columns */\n-int16 format1 = formats[0];\n-\n-for (i = 0; i < natts; i++)\n-portal->formats[i] = format1;\n+ memset(portal->formats, formats[0], natts * sizeof(*portal->formats));\n\n0002:\ncontrib/sslinfo/sslinfo.c\n\nmemset is faster than intercalated stores.\n\nsrc/backend/replication/logical/origin.c\n+1\none store, is better than three.\nbut, should be:\n- memset(nulls, 1, sizeof(nulls));\n+memset(nulls, false, sizeof(nulls));\n\nThe correct style is false, not 0.\n\nsrc/backend/utils/adt/misc.c:\n-1\nIt got worse.\nIt's only one store, which could be avoided by the \"continue\" statement.\n\nsrc/backend/utils/misc/pg_controldata.c:\n+1\n+memset(nulls, false, sizeof(nulls));\n\nor\nnulls[0] = false;\nnulls[1] = false;\nnulls[2] = false;\nnulls[3] = false;\n\nBad style, intercalated stores are worse.\n\n\n0003:\n+1\n\nBut you should reduce the scope of vars:\nRangeTblEntry *rte\nOid userid;\n\n+ if (varno != relid)\n+ {\n+ RangeTblEntry *rte;\n+ Oid 
userid;\n\n0005:\n+1\n\n0006:\n+1\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 20 Dec 2022 10:21:24 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: code cleanups"
}
] |
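A side note on the memset suggestions in this thread: replacing a per-element loop with memset is only equivalent when the byte pattern is uniform. All-false bool arrays and all-zero Datum arrays qualify, but the quoted pquery.c rewrite — `memset(portal->formats, formats[0], ...)` over an int16 array — would only be correct when the value is 0, because memset replicates a single byte. A standalone sketch (array size, type stand-ins, and function names are invented for the demo):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NATTS 8

/* Safe cases: every byte of the desired value is the same (here, zero). */
static void
init_arrays(bool nulls[NATTS], uintptr_t values[NATTS]) /* uintptr_t stands in for Datum */
{
    memset(nulls, 0, NATTS * sizeof(bool));
    memset(values, 0, NATTS * sizeof(uintptr_t));
}

/*
 * Counter-example: memset writes its low byte into every byte of the
 * target, so filling an int16 with 1 produces 0x0101 (257), not 1.
 */
static int16_t
memset_int16(int16_t v)
{
    int16_t out;

    memset(&out, (unsigned char) v, sizeof(out));
    return out;
}
```

So the loop-to-memset conversions for the `nulls[]`/`values[]` arrays are sound, while the `portal->formats` one needs the loop (or a value known to be 0).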
[
{
"msg_contents": "Hello!\n\nThis patch is a new function based on the implementation of to_hex(int).\n\nSince support for octal integer literals was added, to_oct(int) allows\noctal values to be easily stored and returned in query results.\n\n to_oct(0o755) = '755'\n\nThis is probably most useful for storing file system permissions.\n\n--\nEric Radman",
"msg_date": "Tue, 20 Dec 2022 17:08:13 -0500",
"msg_from": "Eric Radman <ericshane@eradman.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add function to_oct"
},
{
"msg_contents": "2022年12月21日(水) 7:08 Eric Radman <ericshane@eradman.com>:\n>\n> Hello!\n>\n> This patch is a new function based on the implementation of to_hex(int).\n>\n> Since support for octal integer literals was added, to_oct(int) allows\n> octal values to be easily stored and returned in query results.\n>\n> to_oct(0o755) = '755'\n>\n> This is probably most useful for storing file system permissions.\n\nSeems like it would be convenient to have. Any reason why there's\nno matching \"to_oct(bigint)\" version?\n\nPatch has been added to the next commitfest [1], thanks.\n\n[1] https://commitfest.postgresql.org/41/4071/\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 21 Dec 2022 08:36:40 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 08:36:40AM +0900, Ian Lawrence Barwick wrote:\n> 2022年12月21日(水) 7:08 Eric Radman <ericshane@eradman.com>:>\n> > Hello!\n> >\n> > This patch is a new function based on the implementation of to_hex(int).\n> >\n> > Since support for octal integer literals was added, to_oct(int) allows\n> > octal values to be easily stored and returned in query results.\n> >\n> > to_oct(0o755) = '755'\n> >\n> > This is probably most useful for storing file system permissions.\n> \n> Seems like it would be convenient to have. Any reason why there's\n> no matching \"to_oct(bigint)\" version?\n\nI couldn't think of a reason someone might want an octal\nrepresentation of a bigint. Certainly it would be easy to add\nif there is value in supporting all of the same argument types.\n\n\n",
"msg_date": "Tue, 20 Dec 2022 20:42:10 -0500",
"msg_from": "Eric Radman <ericshane@eradman.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "2022年12月21日(水) 10:42 Eric Radman <ericshane@eradman.com>:\n>\n> On Wed, Dec 21, 2022 at 08:36:40AM +0900, Ian Lawrence Barwick wrote:\n> > 2022年12月21日(水) 7:08 Eric Radman <ericshane@eradman.com>:>\n> > > Hello!\n> > >\n> > > This patch is a new function based on the implementation of to_hex(int).\n> > >\n> > > Since support for octal integer literals was added, to_oct(int) allows\n> > > octal values to be easily stored and returned in query results.\n> > >\n> > > to_oct(0o755) = '755'\n> > >\n> > > This is probably most useful for storing file system permissions.\n> >\n> > Seems like it would be convenient to have. Any reason why there's\n> > no matching \"to_oct(bigint)\" version?\n>\n> I couldn't think of a reason someone might want an octal\n> representation of a bigint. Certainly it would be easy to add\n> if there is value in supporting all of the same argument types.\n\nYeah, I am struggling to think of a practical application other than\nsymmetry with to_hex().\n\n\nRegards\n\nIan Barwick\n\n\n",
"msg_date": "Wed, 21 Dec 2022 10:56:29 +0900",
"msg_from": "Ian Lawrence Barwick <barwick@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: not tested\nSpec compliant: not tested\nDocumentation: not tested\n\nThis is a mini-review of to_oct :-)\r\n\r\nThe function seems useful to me, and is obviously correct.\r\n\r\nI don't know whether optimization at such a low level is considered in PostgreSQL, but here goes.\r\n\r\nThe calculation of quotient and remainder can be replaced by less costly masking and shifting.\r\n\r\nDefining\r\n\r\n#define OCT_DIGIT_BITS 3\r\n#define OCT_DIGIT_BITMASK 0x7\r\n\r\nthe content of the loop can be replaced by\r\n\r\n\t\t*--ptr = digits[value & OCT_DIGIT_BITMASK];\r\n\t\tvalue >>= OCT_DIGIT_BITS;\r\n\r\nAlso, the check for ptr > buf in the while loop can be removed. The check is superfluous, since buf cannot possibly be exhausted by a 32 bit integer (the maximum octal number being 37777777777).\r\n\r\n\r\nBest regards\r\n\r\nDag Lem\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Thu, 22 Dec 2022 10:08:17 +0000",
"msg_from": "Dag Lem <dag@nimrod.no>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
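Dag Lem's review suggestions can be checked with a standalone sketch (the helper name and output buffer are assumptions for illustration; the patch applies the same change inside the SQL-callable function). Because 8 is a power of two, `% 8` and `/ 8` reduce to a 3-bit mask and shift, and a 12-byte buffer can never be exhausted by a 32-bit input, so no `ptr > buf` check is needed:

```c
#include <stdint.h>
#include <string.h>

#define OCT_DIGIT_BITS 3
#define OCT_DIGIT_BITMASK 0x7

/*
 * Octal conversion using masking and shifting instead of division,
 * as proposed in the review above.
 */
static char *
to_oct_masked(uint32_t value, char *out)
{
	static const char digits[] = "01234567";
	char		buf[12];	/* max is 37777777777 (11 digits), plus NUL */
	char	   *ptr = buf + sizeof(buf) - 1;

	*ptr = '\0';
	do
	{
		/* take the low 3 bits as one octal digit, then shift them out */
		*--ptr = digits[value & OCT_DIGIT_BITMASK];
		value >>= OCT_DIGIT_BITS;
	} while (value);

	return strcpy(out, ptr);
}
```

The masked variant produces the same digits as the division-based loop for every input, since masking the low 3 bits and shifting right by 3 is exactly division with remainder by 8 on unsigned values.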
{
"msg_contents": "On Thu, Dec 22, 2022 at 10:08:17AM +0000, Dag Lem wrote:\n> \n> The calculation of quotient and remainder can be replaced by less costly masking and shifting.\n> \n> Defining\n> \n> #define OCT_DIGIT_BITS 3\n> #define OCT_DIGIT_BITMASK 0x7\n> \n> the content of the loop can be replaced by\n> \n> \t\t*--ptr = digits[value & OCT_DIGIT_BITMASK];\n> \t\tvalue >>= OCT_DIGIT_BITS;\n> \n> Also, the check for ptr > buf in the while loop can be removed. The\n> check is superfluous, since buf cannot possibly be exhausted by a 32\n> bit integer (the maximum octal number being 37777777777).\n\nI integrated these suggestions in the attached -v2 patch and tested\nrange of values manually.\n\nAlso picked an OID > 8000 as suggested by unused_oids.\n\n..Eric",
"msg_date": "Thu, 22 Dec 2022 12:41:24 -0500",
"msg_from": "Eric Radman <ericshane@eradman.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Thu, 22 Dec 2022 at 23:11, Eric Radman <ericshane@eradman.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 10:08:17AM +0000, Dag Lem wrote:\n> >\n> > The calculation of quotient and remainder can be replaced by less costly masking and shifting.\n> >\n> > Defining\n> >\n> > #define OCT_DIGIT_BITS 3\n> > #define OCT_DIGIT_BITMASK 0x7\n> >\n> > the content of the loop can be replaced by\n> >\n> > *--ptr = digits[value & OCT_DIGIT_BITMASK];\n> > value >>= OCT_DIGIT_BITS;\n> >\n> > Also, the check for ptr > buf in the while loop can be removed. The\n> > check is superfluous, since buf cannot possibly be exhausted by a 32\n> > bit integer (the maximum octal number being 37777777777).\n>\n> I integrated these suggestions in the attached -v2 patch and tested\n> range of values manually.\n>\n> Also picked an OID > 8000 as suggested by unused_oids.\n\nFew suggestions\n1) We could use to_oct instead of to_oct32 as we don't have multiple\nimplementations for to_oct\ndiff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat\nindex 98d90d9338..fde0b24563 100644\n--- a/src/include/catalog/pg_proc.dat\n+++ b/src/include/catalog/pg_proc.dat\n@@ -3687,6 +3687,9 @@\n { oid => '2090', descr => 'convert int8 number to hex',\n proname => 'to_hex', prorettype => 'text', proargtypes => 'int8',\n prosrc => 'to_hex64' },\n+{ oid => '8335', descr => 'convert int4 number to oct',\n+ proname => 'to_oct', prorettype => 'text', proargtypes => 'int4',\n+ prosrc => 'to_oct32' },\n\n2) Similarly we could change this to \"to_oct\"\n+/*\n+ * Convert an int32 to a string containing a base 8 (oct) representation of\n+ * the number.\n+ */\n+Datum\n+to_oct32(PG_FUNCTION_ARGS)\n+{\n+ uint32 value = (uint32) PG_GETARG_INT32(0);\n\n3) I'm not sure if AS \"77777777\" is required, but I also noticed it\nis done similarly in to_hex too:\n+--\n+-- test to_oct\n+--\n+select to_oct(256*256*256 - 1) AS \"77777777\";\n+ 77777777\n+----------\n+ 77777777\n+(1 row)\n\n4) You could 
add a commit message for the patch\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 7 Jan 2023 16:32:26 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> Few suggestions\n> 1) We could use to_oct instead of to_oct32 as we don't have multiple\n> implementations for to_oct\n\nThat seems (a) shortsighted and (b) inconsistent with the naming\npattern used for to_hex, so I doubt it'd be an improvement.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 07 Jan 2023 10:29:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On 20.12.22 23:08, Eric Radman wrote:\n> This patch is a new function based on the implementation of to_hex(int).\n> \n> Since support for octal integer literals was added, to_oct(int) allows\n> octal values to be easily stored and returned in query results.\n> \n> to_oct(0o755) = '755'\n> \n> This is probably most useful for storing file system permissions.\n\nNote this subsequent discussion about the to_hex function: \nhttps://www.postgresql.org/message-id/flat/CAEZATCVbkL1ynqpsKiTDpch34%3DSCr5nnau%3DnfNmiy2nM3SJHtw%40mail.gmail.com\n\nAlso, I think there is no \"to binary\" function, so perhaps if we're \ngoing down this road one way or the other, we should probably complete \nthe set.\n\n\n\n",
"msg_date": "Thu, 23 Feb 2023 12:32:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 6:32 AM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 20.12.22 23:08, Eric Radman wrote:\n> > This patch is a new function based on the implementation of to_hex(int).\n> >\n> > Since support for octal integer literals was added, to_oct(int) allows\n> > octal values to be easily stored and returned in query results.\n> >\n> > to_oct(0o755) = '755'\n> >\n> > This is probably most useful for storing file system permissions.\n>\n> Note this subsequent discussion about the to_hex function:\n>\n> https://www.postgresql.org/message-id/flat/CAEZATCVbkL1ynqpsKiTDpch34%3DSCr5nnau%3DnfNmiy2nM3SJHtw%40mail.gmail.com\n>\n> Also, I think there is no \"to binary\" function, so perhaps if we're\n> going down this road one way or the other, we should probably complete\n> the set.\n>\n> The code reads clearly. It works as expected (being an old PDP-11\nguy!)... And the docs make sense and build as well.\nNothing larger than an int gets in. I was \"missing\" the bigint version,\nbut read through ALL of the messages to see (and agree)\nThat that's okay.\nMarked Ready for Committer.\n\nThanks, Kirk",
"msg_date": "Tue, 4 Apr 2023 20:45:36 -0400",
"msg_from": "Kirk Wolak <wolakk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Apr 04, 2023 at 08:45:36PM -0400, Kirk Wolak wrote:\n> Marked Ready for Committer.\n\nI started taking a look at this and ended up adding to_binary() and a\nbigint version of to_oct() for completeness. While I was at it, I moved\nthe base-conversion logic out to a separate static function that\nto_binary/oct/hex all use.\n\n From the other discussion referenced upthread, it sounds like we might want\nto replace to_binary/oct/hex with a more generic base-conversion function.\nMaybe we should try to do that instead.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 25 Jul 2023 16:24:26 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 04:24:26PM -0700, Nathan Bossart wrote:\n> I started taking a look at this and ended up adding to_binary() and a\n> bigint version of to_oct() for completeness. While I was at it, I moved\n> the base-conversion logic out to a separate static function that\n> to_binary/oct/hex all use.\n\nBleh, this patch seems to fail horribly on 32-bit builds. I'll look into\nit soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 25 Jul 2023 17:16:56 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 05:16:56PM -0700, Nathan Bossart wrote:\n> On Tue, Jul 25, 2023 at 04:24:26PM -0700, Nathan Bossart wrote:\n>> I started taking a look at this and ended up adding to_binary() and a\n>> bigint version of to_oct() for completeness. While I was at it, I moved\n>> the base-conversion logic out to a separate static function that\n>> to_binary/oct/hex all use.\n> \n> Bleh, this patch seems to fail horribly on 32-bit builds. I'll look into\n> it soon.\n\nHere's a new version of the patch with the silly mistakes fixed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 25 Jul 2023 20:29:17 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 08:29:17PM -0700, Nathan Bossart wrote:\n> Here's a new version of the patch with the silly mistakes fixed.\n\nIf there are no objections, I'd like to commit this patch soon.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 14 Aug 2023 21:11:29 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On 8/15/23 06:11, Nathan Bossart wrote:\n> On Tue, Jul 25, 2023 at 08:29:17PM -0700, Nathan Bossart wrote:\n>> Here's a new version of the patch with the silly mistakes fixed.\n> \n> If there are no objections, I'd like to commit this patch soon.\n\nI just took a look at this (and the rest of the thread). I am a little \nbit disappointed that we don't have a generic base conversion function, \nbut even if we did I think these specialized functions would still be \nuseful.\n\nNo objection from me.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Tue, 15 Aug 2023 07:58:17 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Jul 25, 2023 at 05:16:56PM -0700, Nathan Bossart wrote:\n> [v4]\n\nIf we're not going to have a general SQL conversion function, here are some\ncomments on the current patch.\n\n+static char *convert_to_base(uint64 value, int base);\n\nNot needed if the definition is above the callers.\n\n+ * Workhorse for to_binary, to_oct, and to_hex. Note that base must be\neither\n+ * 2, 8, or 16.\n\nWhy wouldn't it work with any base <= 16?\n\n- *ptr = '\\0';\n+ Assert(base == 2 || base == 8 || base == 16);\n\n+ *ptr = '\\0';\n\nSpurious whitespace change?\n\n- char buf[32]; /* bigger than needed, but reasonable */\n+ char *buf = palloc(sizeof(uint64) * BITS_PER_BYTE + 1);\n\nWhy is this no longer allocated on the stack? Maybe needs a comment about\nthe size calculation.\n\n+static char *\n+convert_to_base(uint64 value, int base)\n\nOn my machine this now requires a function call and a DIV instruction, even\nthough the patch claims not to support anything but a power of two. It's\ntiny enough to declare it inline so the compiler can specialize for each\ncall site.\n\n+{ oid => '5101', descr => 'convert int4 number to binary',\n\nNeeds to be over 8000.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 15 Aug 2023 13:53:25 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 07:58:17AM +0200, Vik Fearing wrote:\n> On 8/15/23 06:11, Nathan Bossart wrote:\n>> If there are no objections, I'd like to commit this patch soon.\n> \n> I just took a look at this (and the rest of the thread). I am a little bit\n> disappointed that we don't have a generic base conversion function, but even\n> if we did I think these specialized functions would still be useful.\n> \n> No objection from me.\n\nThanks for taking a look. I don't mean for this to preclude a generic base\nconversion function that would supersede the functions added by this patch.\nHowever, I didn't want to hold up $SUBJECT because of something that may or\nmay not happen after lots of discussion. If it does happen within the v17\ndevelopment cycle, we could probably just remove to_oct/binary.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 15 Aug 2023 08:42:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Aug 15, 2023 at 01:53:25PM +0700, John Naylor wrote:\n> If we're not going to have a general SQL conversion function, here are some\n> comments on the current patch.\n\nI appreciate the review.\n\n> +static char *convert_to_base(uint64 value, int base);\n> \n> Not needed if the definition is above the callers.\n\nDone.\n\n> + * Workhorse for to_binary, to_oct, and to_hex. Note that base must be\n> either\n> + * 2, 8, or 16.\n> \n> Why wouldn't it work with any base <= 16?\n\nYou're right. I changed this in v5.\n\n> - *ptr = '\\0';\n> + Assert(base == 2 || base == 8 || base == 16);\n> \n> + *ptr = '\\0';\n> \n> Spurious whitespace change?\n\nI think this might just be a weird artifact of the diff algorithm.\n\n> - char buf[32]; /* bigger than needed, but reasonable */\n> + char *buf = palloc(sizeof(uint64) * BITS_PER_BYTE + 1);\n> \n> Why is this no longer allocated on the stack? Maybe needs a comment about\n> the size calculation.\n\nIt really should be. IIRC I wanted to avoid passing a pre-allocated buffer\nto convert_to_base(), but I don't remember why. I fixed this in v5.\n\n> +static char *\n> +convert_to_base(uint64 value, int base)\n> \n> On my machine this now requires a function call and a DIV instruction, even\n> though the patch claims not to support anything but a power of two. It's\n> tiny enough to declare it inline so the compiler can specialize for each\n> call site.\n\nThis was on my list of things to check before committing. I assumed that\nit would be automatically inlined, but given your analysis, I went ahead\nand added the inline keyword. My compiler took the hint and inlined the\nfunction, and it used SHR instead of DIV instructions. 
The machine code\nfor to_hex32/64 is still a couple of instructions longer than before\n(probably because of the uint64 casts), but I don't know if we need to\nworry about micro-optimizing these functions any further.\n\n> +{ oid => '5101', descr => 'convert int4 number to binary',\n> \n> Needs to be over 8000.\n\nDone.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 15 Aug 2023 10:17:15 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 12:17 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> On Tue, Aug 15, 2023 at 01:53:25PM +0700, John Naylor wrote:\n\n> > - *ptr = '\\0';\n> > + Assert(base == 2 || base == 8 || base == 16);\n> >\n> > + *ptr = '\\0';\n> >\n> > Spurious whitespace change?\n>\n> I think this might just be a weird artifact of the diff algorithm.\n\nDon't believe everything you think. :-)\n\n```\n*ptr = '\\0';\n\ndo\n```\n\nto\n\n```\n*ptr = '\\0';\ndo\n```\n\n> > - char buf[32]; /* bigger than needed, but reasonable */\n> > + char *buf = palloc(sizeof(uint64) * BITS_PER_BYTE + 1);\n> >\n> > Why is this no longer allocated on the stack? Maybe needs a comment\nabout\n> > the size calculation.\n>\n> It really should be. IIRC I wanted to avoid passing a pre-allocated\nbuffer\n> to convert_to_base(), but I don't remember why. I fixed this in v5.\n\nNow I'm struggling to understand why each and every instance has its own\nnominal buffer, passed down to the implementation. All we care about is the\nresult -- is there some reason not to confine the buffer declaration to the\ngeneral implementation?\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 16 Aug 2023 10:35:27 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 10:35:27AM +0700, John Naylor wrote:\n> ```\n> *ptr = '\\0';\n> \n> do\n> ```\n> \n> to\n> \n> ```\n> *ptr = '\\0';\n> do\n> ```\n\nOh, I misunderstood. I thought you meant that there might be a whitespace\nchange on that line, not the surrounding ones. This is fixed in v6.\n\n> Now I'm struggling to understand why each and every instance has its own\n> nominal buffer, passed down to the implementation. All we care about is the\n> result -- is there some reason not to confine the buffer declaration to the\n> general implementation?\n\nWe can do that if we use a static variable, which is what I've done in v6.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 16 Aug 2023 07:24:10 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Wed, Aug 16, 2023 at 9:24 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> On Wed, Aug 16, 2023 at 10:35:27AM +0700, John Naylor wrote:\n\n> > Now I'm struggling to understand why each and every instance has its own\n> > nominal buffer, passed down to the implementation. All we care about is\nthe\n> > result -- is there some reason not to confine the buffer declaration to\nthe\n> > general implementation?\n>\n> We can do that if we use a static variable, which is what I've done in v6.\n\nThat makes it a lexically-scoped global variable, which we don't need\neither. Can we have the internal function allocate on the stack, then\ncall cstring_to_text() on that, returning the text result? That does its\nown palloc.\n\nOr maybe better, save the starting pointer, compute the length at the end,\nand call cstring_to_text_with_len()? (It seems we wouldn't need\nthe nul-terminator then, either.)\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 17 Aug 2023 12:35:54 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 12:35:54PM +0700, John Naylor wrote:\n> That makes it a lexically-scoped global variable, which we don't need\n> either. Can we have the internal function allocate on the stack, then\n> call cstring_to_text() on that, returning the text result? That does its\n> own palloc.\n> \n> Or maybe better, save the starting pointer, compute the length at the end,\n> and call cstring_to_text_with_len()? (It seems we wouldn't need\n> the nul-terminator then, either.)\n\nWorks for me. I did it that way in v7.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 17 Aug 2023 08:26:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Thu, Aug 17, 2023 at 10:26 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n>\n> On Thu, Aug 17, 2023 at 12:35:54PM +0700, John Naylor wrote:\n> > That makes it a lexically-scoped global variable, which we don't need\n> > either. Can we have the internal function allocate on the stack, then\n> > call cstring_to_text() on that, returning the text result? That does its\n> > own palloc.\n> >\n> > Or maybe better, save the starting pointer, compute the length at the\nend,\n> > and call cstring_to_text_with_len()? (It seems we wouldn't need\n> > the nul-terminator then, either.)\n>\n> Works for me. I did it that way in v7.\n\nThis looks nicer, but still doesn't save the starting pointer, and so needs\nto lug around that big honking macro. This is what I mean:\n\nstatic inline text *\nconvert_to_base(uint64 value, int base)\n{\n const char *digits = \"0123456789abcdef\";\n /* We size the buffer for to_binary's longest possible return value. */\n char buf[sizeof(uint64) * BITS_PER_BYTE];\n char * const end = buf + sizeof(buf);\n char *ptr = end;\n\n Assert(base > 1);\n Assert(base <= 16);\n\n do\n {\n *--ptr = digits[value % base];\n value /= base;\n } while (ptr > buf && value);\n\n return cstring_to_text_with_len(ptr, end - ptr);\n}\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 19 Aug 2023 11:41:56 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Thu, 17 Aug 2023 at 16:26, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Works for me. I did it that way in v7.\n>\n\nI note that there are no tests for negative inputs.\n\nDoing a quick test, shows that this changes the current behaviour,\nbecause all inputs are now treated as 64-bit:\n\nHEAD:\n\nselect to_hex((-1234)::int);\n to_hex\n----------\n fffffb2e\n\nWith patch:\n\nselect to_hex((-1234)::int);\n to_hex\n------------------\n fffffffffffffb2e\n\nThe way that negative inputs are handled really should be documented,\nor at least it should include a couple of examples.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 19 Aug 2023 08:35:46 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
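The behaviour change Dean observed follows from C's integer conversion rules: widening a negative int32 directly to uint64 sign-extends into the upper 32 bits, whereas converting through uint32 first truncates to 32 bits, matching the historical to_hex(int) output. A minimal sketch (the helper names are hypothetical, purely for illustration):

```c
#include <stdint.h>

/* Direct widening sign-extends a negative value to all 64 bits. */
static uint64_t
widen_direct(int32_t v)
{
	return (uint64_t) v;
}

/*
 * Casting through uint32 first keeps only the low 32 bits, so the
 * result prints as 8 hex digits rather than 16.
 */
static uint64_t
widen_via_uint32(int32_t v)
{
	return (uint64_t) (uint32_t) v;
}
```

For -1234 this gives 0xfffffffffffffb2e via direct widening but 0xfffffb2e via uint32, which is exactly the difference between the patched and HEAD outputs shown above.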
{
"msg_contents": "On Sat, Aug 19, 2023 at 11:41:56AM +0700, John Naylor wrote:\n> This looks nicer, but still doesn't save the starting pointer, and so needs\n> to lug around that big honking macro. This is what I mean:\n> \n> static inline text *\n> convert_to_base(uint64 value, int base)\n> {\n> const char *digits = \"0123456789abcdef\";\n> /* We size the buffer for to_binary's longest possible return value. */\n> char buf[sizeof(uint64) * BITS_PER_BYTE];\n> char * const end = buf + sizeof(buf);\n> char *ptr = end;\n> \n> Assert(base > 1);\n> Assert(base <= 16);\n> \n> do\n> {\n> *--ptr = digits[value % base];\n> value /= base;\n> } while (ptr > buf && value);\n> \n> return cstring_to_text_with_len(ptr, end - ptr);\n> }\n\nI will use this in v8.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sun, 20 Aug 2023 08:19:37 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Sat, Aug 19, 2023 at 08:35:46AM +0100, Dean Rasheed wrote:\n> I note that there are no tests for negative inputs.\n\nI added some in v8.\n\n> Doing a quick test, shows that this changes the current behaviour,\n> because all inputs are now treated as 64-bit:\n> \n> HEAD:\n> \n> select to_hex((-1234)::int);\n> to_hex\n> ----------\n> fffffb2e\n> \n> With patch:\n> \n> select to_hex((-1234)::int);\n> to_hex\n> ------------------\n> fffffffffffffb2e\n\nGood catch. In v8, I fixed this by first casting the input to uint32 for\nthe 32-bit versions of the functions. This prevents the conversion to\nuint64 from setting the rest of the bits. AFAICT this behavior is pretty\nwell defined in the standard.\n\n> The way that negative inputs are handled really should be documented,\n> or at least it should include a couple of examples.\n\nI used your suggestion and noted that the output is the two's complement\nrepresentation [0].\n\n[0] https://postgr.es/m/CAEZATCVbkL1ynqpsKiTDpch34%3DSCr5nnau%3DnfNmiy2nM3SJHtw%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sun, 20 Aug 2023 08:25:51 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Sun, 20 Aug 2023 at 16:25, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sat, Aug 19, 2023 at 08:35:46AM +0100, Dean Rasheed wrote:\n>\n> > The way that negative inputs are handled really should be documented,\n> > or at least it should include a couple of examples.\n>\n> I used your suggestion and noted that the output is the two's complement\n> representation [0].\n>\n\nHmm, I think just including the doc text update, without the examples\nof positive and negative inputs, might not be sufficient to make the\nmeaning clear to everyone.\n\nSomething else that bothers me slightly is the function naming --\n\"hexadecimal\" gets abbreviated to \"hex\", \"octal\" gets abbreviated to\n\"oct\", but \"binary\" is left as-is. I think it ought to be \"to_bin()\"\non consistency grounds, even though I understand the words \"to bin\"\ncould be interpreted differently. (Looking elsewhere for precedents,\nPython has bin(), oct() and hex() functions.)\n\nAlso, I think the convention is to always list functions\nalphabetically, so to_oct() should really come after to_hex().\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 21 Aug 2023 09:31:37 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Mon, Aug 21, 2023 at 09:31:37AM +0100, Dean Rasheed wrote:\n> Hmm, I think just including the doc text update, without the examples\n> of positive and negative inputs, might not be sufficient to make the\n> meaning clear to everyone.\n\nI added some examples for negative inputs.\n\n> Something else that bothers me slightly is the function naming --\n> \"hexadecimal\" gets abbreviated to \"hex\", \"octal\" gets abbreviated to\n> \"oct\", but \"binary\" is left as-is. I think it ought to be \"to_bin()\"\n> on consistency grounds, even though I understand the words \"to bin\"\n> could be interpreted differently. (Looking elsewhere for precedents,\n> Python has bin(), oct() and hex() functions.)\n\nI renamed it to to_bin().\n\n> Also, I think the convention is to always list functions\n> alphabetically, so to_oct() should really come after to_hex().\n\nI reordered the functions in the docs.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 21 Aug 2023 12:15:28 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Mon, 21 Aug 2023 at 20:15, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> I added some examples for negative inputs.\n>\n> I renamed it to to_bin().\n>\n> I reordered the functions in the docs.\n>\n\nOK, cool. This looks good to me.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 21 Aug 2023 21:10:43 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 3:10 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n>\n> OK, cool. This looks good to me.\n\nLGTM too.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 22 Aug 2023 10:17:45 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On 20.08.23 17:25, Nathan Bossart wrote:\n>> Doing a quick test, shows that this changes the current behaviour,\n>> because all inputs are now treated as 64-bit:\n>>\n>> HEAD:\n>>\n>> select to_hex((-1234)::int);\n>> to_hex\n>> ----------\n>> fffffb2e\n>>\n>> With patch:\n>>\n>> select to_hex((-1234)::int);\n>> to_hex\n>> ------------------\n>> fffffffffffffb2e\n> Good catch. In v8, I fixed this by first casting the input to uint32 for\n> the 32-bit versions of the functions. This prevents the conversion to\n> uint64 from setting the rest of the bits. AFAICT this behavior is pretty\n> well defined in the standard.\n\nWhat standard?\n\nI don't understand the reason for this handling of negative values. I \nwould expect that, say, to_hex(-1234) would return '-' || to_hex(1234).\n\n\n\n",
"msg_date": "Tue, 22 Aug 2023 16:20:02 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 04:20:02PM +0200, Peter Eisentraut wrote:\n> On 20.08.23 17:25, Nathan Bossart wrote:\n>> > Doing a quick test, shows that this changes the current behaviour,\n>> > because all inputs are now treated as 64-bit:\n>> > \n>> > HEAD:\n>> > \n>> > select to_hex((-1234)::int);\n>> > to_hex\n>> > ----------\n>> > fffffb2e\n>> > \n>> > With patch:\n>> > \n>> > select to_hex((-1234)::int);\n>> > to_hex\n>> > ------------------\n>> > fffffffffffffb2e\n>> Good catch. In v8, I fixed this by first casting the input to uint32 for\n>> the 32-bit versions of the functions. This prevents the conversion to\n>> uint64 from setting the rest of the bits. AFAICT this behavior is pretty\n>> well defined in the standard.\n> \n> What standard?\n\nC99\n\n> I don't understand the reason for this handling of negative values. I would\n> expect that, say, to_hex(-1234) would return '-' || to_hex(1234).\n\nFor this patch set, I was trying to retain the current behavior, which is\nto return the two's complement representation. I'm open to changing this\nif there's agreement on what the output should be.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 22 Aug 2023 07:26:08 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On 22.08.23 16:26, Nathan Bossart wrote:\n>> I don't understand the reason for this handling of negative values. I would\n>> expect that, say, to_hex(-1234) would return '-' || to_hex(1234).\n> For this patch set, I was trying to retain the current behavior, which is\n> to return the two's complement representation. I'm open to changing this\n> if there's agreement on what the output should be.\n\nAh, if there is existing behavior then we should probably keep it.\n\n\n",
"msg_date": "Tue, 22 Aug 2023 16:57:02 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
},
{
"msg_contents": "On Tue, Aug 22, 2023 at 04:57:02PM +0200, Peter Eisentraut wrote:\n> On 22.08.23 16:26, Nathan Bossart wrote:\n>> > I don't understand the reason for this handling of negative values. I would\n>> > expect that, say, to_hex(-1234) would return '-' || to_hex(1234).\n>> For this patch set, I was trying to retain the current behavior, which is\n>> to return the two's complement representation. I'm open to changing this\n>> if there's agreement on what the output should be.\n> \n> Ah, if there is existing behavior then we should probably keep it.\n\nCommitted.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 23 Aug 2023 07:54:30 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add function to_oct"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile hacking on a new system catalogue for a nearby thread, it\noccurred to me that syscache.c's table of entries could be made more\nreadable and less error prone. They look like this:\n\n {AttributeRelationId, /* ATTNUM */\n AttributeRelidNumIndexId,\n 2,\n {\n Anum_pg_attribute_attrelid,\n Anum_pg_attribute_attnum,\n 0,\n 0\n },\n 128\n },\n\nDo you think this is better?\n\n [ATTNUM] = {\n AttributeRelationId,\n AttributeRelidNumIndexId,\n {\n Anum_pg_attribute_attrelid,\n Anum_pg_attribute_attnum\n },\n 128\n },\n\nWe could also consider writing eg \".nbuckets = 128\", but it's not a\ncomplicated struct that the eye gets lost in, so I didn't bother with\nthat in the attached.",
"msg_date": "Wed, 21 Dec 2022 11:55:39 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Array initialisation notation in syscache.c"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Do you think this is better?\n\nI'm not at all on board with adding runtime overhead to\nsave maintaining the nkeys fields.\n\nGetting rid of the useless trailing zeroes in the key[] arrays\nis clearly a win, though.\n\nI'm kind of neutral on using \"[N] = \" as a substitute for\nordering the entries correctly. While that does remove\none failure mode, it seems like it adds another (ie\nfailure to provide an entry at all would be masked).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 20 Dec 2022 18:05:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 12:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n> > Do you think this is better?\n>\n> I'm not at all on board with adding runtime overhead to\n> save maintaining the nkeys fields.\n\nI don't see how to do it at compile time without getting the\npreprocessor involved. What do you think about this version?\n\n [ATTNUM] = {\n AttributeRelationId,\n AttributeRelidNumIndexId,\n KEY(Anum_pg_attribute_attrelid,\n Anum_pg_attribute_attnum),\n 128\n },\n\n> I'm kind of neutral on using \"[N] = \" as a substitute for\n> ordering the entries correctly. While that does remove\n> one failure mode, it seems like it adds another (ie\n> failure to provide an entry at all would be masked).\n\nIt fails very early in testing if you do that. Admittedly, the\nassertion is hard to understand, but if I add a new assertion close to\nthe cause with a new comment to say what you did wrong, I think that\nshould be good enough?",
"msg_date": "Wed, 21 Dec 2022 13:33:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 1:33 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> KEY(Anum_pg_attribute_attrelid,\n> Anum_pg_attribute_attnum),\n\nI independently rediscovered that our VA_ARGS_NARGS() macro in c.h\nalways returns 1 on MSVC via trial-by-CI. Derp. Here is the same\npatch, no change from v2, but this time accompanied by Victor Spirin's\nfix, which I found via one of the tab-completion-is-busted-on-Windows\ndiscussions. I can't supply a useful commit message, because I\nhaven't understood why it works, but it does indeed seem to work and\nthis should make cfbot green.",
"msg_date": "Wed, 21 Dec 2022 16:16:27 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On 21.12.22 04:16, Thomas Munro wrote:\n> On Wed, Dec 21, 2022 at 1:33 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> KEY(Anum_pg_attribute_attrelid,\n>> Anum_pg_attribute_attnum),\n> \n> I independently rediscovered that our VA_ARGS_NARGS() macro in c.h\n> always returns 1 on MSVC via trial-by-CI. Derp. Here is the same\n> patch, no change from v2, but this time accompanied by Victor Spirin's\n> fix, which I found via one of the tab-completion-is-busted-on-Windows\n> discussions. I can't supply a useful commit message, because I\n> haven't understood why it works, but it does indeed seem to work and\n> this should make cfbot green.\n\nThis looks like a good improvement to me.\n\n(I have also thought about having this generated from the catalog \ndefinition files somehow, but one step at a time ...)\n\n\n\n",
"msg_date": "Wed, 21 Dec 2022 17:36:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "Hi!\n\nWanted to ask this since I encountered a need for a cache with 5 keys -\nwhy is the syscache index still limited to 4 keys?\n\nThanks!\n\nOn Wed, Dec 21, 2022 at 7:36 PM Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> On 21.12.22 04:16, Thomas Munro wrote:\n> > On Wed, Dec 21, 2022 at 1:33 PM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> >> KEY(Anum_pg_attribute_attrelid,\n> >> Anum_pg_attribute_attnum),\n> >\n> > I independently rediscovered that our VA_ARGS_NARGS() macro in c.h\n> > always returns 1 on MSVC via trial-by-CI. Derp. Here is the same\n> > patch, no change from v2, but this time accompanied by Victor Spirin's\n> > fix, which I found via one of the tab-completion-is-busted-on-Windows\n> > discussions. I can't supply a useful commit message, because I\n> > haven't understood why it works, but it does indeed seem to work and\n> > this should make cfbot green.\n>\n> This looks like a good improvement to me.\n>\n> (I have also thought about having this generated from the catalog\n> definition files somehow, but one step at a time ...)\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Wed, 21 Dec 2022 22:39:41 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "Nikita Malakhov <hukutoc@gmail.com> writes:\n> Wanted to ask this since I encountered a need for a cache with 5 keys -\n> why is the syscache index still limited to 4 keys?\n\nBecause there are no cases requiring 5, so far.\n\n(A unique index with as many as 5 keys seems a bit fishy btw.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Dec 2022 14:45:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 5:36 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> This looks like a good improvement to me.\n\nThanks both. Pushed.\n\n> (I have also thought about having this generated from the catalog\n> definition files somehow, but one step at a time ...)\n\nGood plan.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:06:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "From the light relief department, here is some more variadic macrology:\n\n- tp = SearchSysCache1(TSPARSEROID, ObjectIdGetDatum(prsId));\n+ tp = SearchSysCache(TSPARSEROID, ObjectIdGetDatum(prsId));",
"msg_date": "Fri, 31 Mar 2023 15:16:25 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On 31.03.23 04:16, Thomas Munro wrote:\n> From the light relief department, here is some more variadic macrology:\n> \n> - tp = SearchSysCache1(TSPARSEROID, ObjectIdGetDatum(prsId));\n> + tp = SearchSysCache(TSPARSEROID, ObjectIdGetDatum(prsId));\n\nI'm worried that if we are removing the variants with the explicit \nnumbers, it will make it difficult for extensions to maintain \ncompatibility with previous PG major versions. They would probably have \nto copy much of your syscache.h changes into their own code. Seems messy.\n\n\n\n",
"msg_date": "Thu, 21 Sep 2023 10:19:41 +0200",
"msg_from": "Peter Eisentraut <peter@eisentraut.org>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On Thu, Sep 21, 2023 at 8:19 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> On 31.03.23 04:16, Thomas Munro wrote:\n> > From the light relief department, here is some more variadic macrology:\n> >\n> > - tp = SearchSysCache1(TSPARSEROID, ObjectIdGetDatum(prsId));\n> > + tp = SearchSysCache(TSPARSEROID, ObjectIdGetDatum(prsId));\n>\n> I'm worried that if we are removing the variants with the explicit\n> numbers, it will make it difficult for extensions to maintain\n> compatibility with previous PG major versions. They would probably have\n> to copy much of your syscache.h changes into their own code. Seems messy.\n\nI suppose we could also supply a set of macros with the numbers that\nmap straight onto the numberless ones, with a note that they will be\ndeleted after N releases. But maybe not worth the hassle for such a\ntiny improvement in core code readability. I will withdraw this\nentry. Thanks.\n\n\n",
"msg_date": "Tue, 14 Nov 2023 17:31:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Array initialisation notation in syscache.c"
},
{
"msg_contents": "On 2023-Nov-14, Thomas Munro wrote:\n\n> I suppose we could also supply a set of macros with the numbers that\n> map straight onto the numberless ones, with a note that they will be\n> deleted after N releases.\n\nMaybe just keep compatibility ones with 1 and 2 arguments (the ones most\nused) forever, or 15 years, and drop the rest.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Tue, 14 Nov 2023 15:25:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Array initialisation notation in syscache.c"
}
] |
[
{
"msg_contents": "Hi.\n\nIMO, I think that commit a61b1f7\n<https://github.com/postgres/postgres/commit/a61b1f74823c9c4f79c95226a461f1e7a367764b>,\nhas an oversight.\nCurrently is losing the result of recursion of function\ntranslate_col_privs_multilevel.\n\nOnce the variable result (Bitmapset pointer) is reassigned.\n\nWithout a test case for this patch.\nBut also, do not have a test case for the current thinko in head.\n\nPass regress check.\n\nregards,\nRanier Vilela",
"msg_date": "Tue, 20 Dec 2022 21:14:48 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Avoid lost result of recursion (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 13:15, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> IMO, I think that commit a61b1f7, has an oversight.\n> Currently is losing the result of recursion of function translate_col_privs_multilevel.\n>\n> Once the variable result (Bitmapset pointer) is reassigned.\n>\n> Without a test case for this patch.\n> But also, do not have a test case for the current thinko in head.\n\nhmm, that code looks a bit suspect to me too.\n\nAre you able to write a test that shows the bug which fails before\nyour change and passes after applying it? I don't think it's quite\nenough to claim that your changes pass make check given that didn't\nfail before your change.\n\nAlso, I think it might be better to take the opportunity to rewrite\nthe function to not use recursion. I don't quite see the need for it\nhere and it looks like that might have helped contribute to the\nreported issue. Can't we just write this as a while loop instead of\nhaving the function call itself? It's not as if we need stack space\nfor keeping track of multiple parents. A child relation can only have\n1 parent. It seems to me that we can just walk there by looping.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Dec 2022 14:45:18 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "Thanks David, for looking at this.\n\nOn Tue, Dec 20, 2022 at 10:45 PM David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n> On Wed, 21 Dec 2022 at 13:15, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > IMO, I think that commit a61b1f7, has an oversight.\n> > Currently is losing the result of recursion of function\n> translate_col_privs_multilevel.\n> >\n> > Once the variable result (Bitmapset pointer) is reassigned.\n> >\n> > Without a test case for this patch.\n> > But also, do not have a test case for the current thinko in head.\n>\n> hmm, that code looks a bit suspect to me too.\n>\n> Are you able to write a test that shows the bug which fails before\n> your change and passes after applying it? I don't think it's quite\n> enough to claim that your changes pass make check given that didn't\n> fail before your change.\n>\nNo, unfortunately not yet. Of course that test case would be very nice.\nBut my time for postgres is very limited.\nFor voluntary work, without any payment, I think what I have contributed is\ngood.\n\n\n> Also, I think it might be better to take the opportunity to rewrite\n> the function to not use recursion. I don't quite see the need for it\n> here and it looks like that might have helped contribute to the\n> reported issue. Can't we just write this as a while loop instead of\n> having the function call itself? It's not as if we need stack space\n> for keeping track of multiple parents. A child relation can only have\n> 1 parent. It seems to me that we can just walk there by looping.\n>\nI took a look at the code that deals with the array (append_rel_array) and\nall the loops seem different from each other and out of any pattern.\nUnfortunately, I still can't get this loop to work correctly,\nI need to learn more about Postgres structures and the correct way to\nprocess them.\nIf you can do it, I'd be happy to learn the right way.\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 21 Dec 2022 20:30:42 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 9:45 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> Also, I think it might be better to take the opportunity to rewrite\n> the function to not use recursion. I don't quite see the need for it\n> here and it looks like that might have helped contribute to the\n> reported issue. Can't we just write this as a while loop instead of\n> having the function call itself? It's not as if we need stack space\n> for keeping track of multiple parents. A child relation can only have\n> 1 parent. It seems to me that we can just walk there by looping.\n\n\nMy best guess is that this function is intended to share the same code\npattern as in adjust_appendrel_attrs_multilevel. The recursion is\nneeded as 'rel' can be more than one inheritance level below the top\nparent. I think we can keep the recursion, as in other similar\nfunctions, as long as we make it right, as in attached patch.\n\nThanks\nRichard",
"msg_date": "Thu, 22 Dec 2022 16:18:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Thu, 22 Dec 2022 at 21:18, Richard Guo <guofenglinux@gmail.com> wrote:\n> My best guess is that this function is intended to share the same code\n> pattern as in adjust_appendrel_attrs_multilevel. The recursion is\n> needed as 'rel' can be more than one inheritance level below the top\n> parent. I think we can keep the recursion, as in other similar\n> functions, as long as we make it right, as in attached patch.\n\nI still think we should have a test to ensure this is actually\nworking. Do you want to write one?\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Dec 2022 22:21:51 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 5:22 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 22 Dec 2022 at 21:18, Richard Guo <guofenglinux@gmail.com> wrote:\n> > My best guess is that this function is intended to share the same code\n> > pattern as in adjust_appendrel_attrs_multilevel. The recursion is\n> > needed as 'rel' can be more than one inheritance level below the top\n> > parent. I think we can keep the recursion, as in other similar\n> > functions, as long as we make it right, as in attached patch.\n>\n> I still think we should have a test to ensure this is actually\n> working. Do you want to write one?\n\n\nI agree that we should have a test. According to the code coverage\nreport, the recursion part of this function is never tested. I will\nhave a try to see if I can come up with a proper test case.\n\nThanks\nRichard",
"msg_date": "Fri, 23 Dec 2022 10:21:14 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, 23 Dec 2022 at 15:21, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> On Thu, Dec 22, 2022 at 5:22 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> I still think we should have a test to ensure this is actually\n>> working. Do you want to write one?\n>\n>\n> I agree that we should have a test. According to the code coverage\n> report, the recursion part of this function is never tested. I will\n> have a try to see if I can come up with a proper test case.\n\nI started having a go at writing one yesterday. I only got as far as\nthe following.\nIt looks like it'll need a trigger or something added to the foreign\ntable to hit the code path that would be needed to trigger the issue,\nso it'll need more work. It might be a worthy starting point.\n\nCREATE EXTENSION postgres_fdw;\n\n-- this will need to work the same way as it does in postgres_fdw.sql\nby using current_database()\nCREATE SERVER loopback FOREIGN DATA WRAPPER postgres_fdw OPTIONS\n(dbname 'postgres', port '5432');\n\ncreate table t_gc (a int, b int, c int);\ncreate table t_c (b int, c int, a int) partition by list(a);\ncreate table t_tlp (c int, a int, b int) partition by list (a);\n\nCREATE FOREIGN TABLE ft_tlp (\nc int,\na int,\nb int\n) SERVER loopback OPTIONS (schema_name 'public', table_name 't_tlp');\n\n\nalter table t_c attach partition t_gc for values in (1);\nalter table t_tlp attach partition t_c for values in (1);\n\ncreate role perm_check login;\n\nCREATE USER MAPPING FOR perm_check SERVER loopback OPTIONS (user\n'perm_check', password_required 'false');\n\ngrant update (b) on t_tlp to perm_check;\ngrant update (b) on ft_tlp to perm_check;\n\nset session authorization perm_check;\n\n-- should pass\nupdate ft_tlp set b = 1;\n\n-- should fail\nupdate ft_tlp set a = 1;\nupdate ft_tlp set c = 1;\n\n-- cleanup\n\ndrop foreign table ft_tlp cascade;\ndrop table t_tlp cascade;\ndrop role perm_check;\ndrop server loopback cascade;\ndrop extension postgres_fdw;\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Dec 2022 15:26:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "Hi,\n\nThanks everyone for noticing this. It is indeed very obviously broken. :(\n\nOn Fri, Dec 23, 2022 at 11:26 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> On Fri, 23 Dec 2022 at 15:21, Richard Guo <guofenglinux@gmail.com> wrote:\n> >\n> > On Thu, Dec 22, 2022 at 5:22 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >> I still think we should have a test to ensure this is actually\n> >> working. Do you want to write one?\n> >\n> >\n> > I agree that we should have a test. According to the code coverage\n> > report, the recursion part of this function is never tested. I will\n> > have a try to see if I can come up with a proper test case.\n>\n> I started having a go at writing one yesterday. I only got as far as\n> the following.\n> It looks like it'll need a trigger or something added to the foreign\n> table to hit the code path that would be needed to trigger the issue,\n> so it'll need more work. It might be a worthy starting point.\n\nI was looking at this last night and found that having a generated\ncolumn in the table, but not a trigger, helps hit the buggy code.\nHaving a generated column in the foreign partition prevents a direct\nupdate and thus hitting the following block of\npostgresPlanForeignModify():\n\n    else if (operation == CMD_UPDATE)\n    {\n        int col;\n        RelOptInfo *rel = find_base_rel(root, resultRelation);\n        Bitmapset *allUpdatedCols = get_rel_all_updated_cols(root, rel);\n\n        col = -1;\n        while ((col = bms_next_member(allUpdatedCols, col)) >= 0)\n        {\n            /* bit numbers are offset by FirstLowInvalidHeapAttributeNumber */\n            AttrNumber attno = col + FirstLowInvalidHeapAttributeNumber;\n\n            if (attno <= InvalidAttrNumber) /* shouldn't happen */\n                elog(ERROR, \"system-column update is not supported\");\n            targetAttrs = lappend_int(targetAttrs, attno);\n        }\n    }\n\nIf you add a trigger, which does help with getting a non-direct\nupdate, the code block above this one is executed, so\nget_rel_all_updated_cols() isn't called.\n\nAttached shows a test case I was able to come up with that I can see\nis broken by a61b1f74 though passes after applying Richard's patch.\nWhat's broken is that deparseUpdateSql() outputs a remote UPDATE\nstatement with the wrong SET column list, because the wrong attribute\nnumbers would be added to the targetAttrs list by the above code block\nafter the buggy multi-level translation in get_rel_all_updated_cols().\n\nThanks for writing the patch, Richard.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 23 Dec 2022 12:22:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 12:22 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached shows a test case I was able to come up with that I can see\n> is broken by a61b1f74 though passes after applying Richard's patch.\n\nBTW, I couldn't help but notice in the output of the test case I wrote\nthat a generated column of a foreign table is not actually generated\nlocally, neither when inserting into the foreign table nor when\nupdating it, so it is left NULL when passing the NEW row to the remote\nserver. Behavior is the same irrespective of whether the\ninsert/update is performed directly on the foreign table or indirectly\nvia an insert/update on the parent. If that's documented behavior of\npostgres_fdw, maybe we are fine, but just wanted to mention that it's\nnot related to the bug being discussed here.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:29:56 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, 23 Dec 2022 at 16:22, Amit Langote <amitlangote09@gmail.com> wrote:\n> Attached shows a test case I was able to come up with that I can see\n> is broken by a61b1f74 though passes after applying Richard's patch.\n\nThanks for the test case. I'll look at this now.\n\n+UPDATE rootp SET b = b || 'd' RETURNING a, b, c, d;\n+ a | b | c | d\n+---+------+-----+---\n+ 1 | food | 1.1 |\n\nCoding on an empty stomach I see! :)\n\nDavid\n\n\n",
"msg_date": "Fri, 23 Dec 2022 16:36:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 11:22 AM Amit Langote <amitlangote09@gmail.com>\nwrote:\n\n> Attached shows a test case I was able to come up with that I can see\n> is broken by a61b1f74 though passes after applying Richard's patch.\n> What's broken is that deparseUpdateSql() outputs a remote UPDATE\n> statement with the wrong SET column list, because the wrong attribute\n> numbers would be added to the targetAttrs list by the above code block\n> after the buggy multi-level translation in ger_rel_all_updated_cols().\n\n\nThanks for the test! I looked at this and found that with WCO\nconstraints we can also hit the buggy code. Based on David's test case,\nI came up with the following in the morning.\n\nCREATE FOREIGN TABLE ft_gc (\na int,\nb int,\nc int\n) SERVER loopback OPTIONS (schema_name 'public', table_name 't_gc');\n\nalter table t_c attach partition ft_gc for values in (1);\nalter table t_tlp attach partition t_c for values in (1);\n\nCREATE VIEW rw_view AS SELECT * FROM t_tlp where a < b WITH CHECK OPTION;\n\nexplain (verbose, costs off) update rw_view set c = 42;\n\nCurrently on HEAD, we can see something wrong in the plan.\n\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Update on public.t_tlp\n Foreign Update on public.ft_gc t_tlp_1\n Remote SQL: UPDATE public.t_gc SET b = $2 WHERE ctid = $1 RETURNING a,\nb\n -> Foreign Scan on public.ft_gc t_tlp_1\n Output: 42, t_tlp_1.tableoid, t_tlp_1.ctid, t_tlp_1.*\n Remote SQL: SELECT a, b, c, ctid FROM public.t_gc WHERE ((a < b))\nFOR UPDATE\n(6 rows)\n\nNote that this is wrong: 'UPDATE public.t_gc SET b = $2'.\n\nThanks\nRichard",
"msg_date": "Fri, 23 Dec 2022 12:03:51 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 1:04 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> On Fri, Dec 23, 2022 at 11:22 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Attached shows a test case I was able to come up with that I can see\n>> is broken by a61b1f74 though passes after applying Richard's patch.\n>> What's broken is that deparseUpdateSql() outputs a remote UPDATE\n>> statement with the wrong SET column list, because the wrong attribute\n>> numbers would be added to the targetAttrs list by the above code block\n>> after the buggy multi-level translation in ger_rel_all_updated_cols().\n>\n> Thanks for the test! I looked at this and found that with WCO\n> constraints we can also hit the buggy code.\n\nAh, yes.\n\n /*\n * Try to modify the foreign table directly if (1) the FDW provides\n * callback functions needed for that and (2) there are no local\n * structures that need to be run for each modified row: row-level\n * triggers on the foreign table, stored generated columns, WITH CHECK\n * OPTIONs from parent views.\n */\n direct_modify = false;\n if (fdwroutine != NULL &&\n fdwroutine->PlanDirectModify != NULL &&\n fdwroutine->BeginDirectModify != NULL &&\n fdwroutine->IterateDirectModify != NULL &&\n fdwroutine->EndDirectModify != NULL &&\n withCheckOptionLists == NIL &&\n !has_row_triggers(root, rti, operation) &&\n !has_stored_generated_columns(root, rti))\n direct_modify = fdwroutine->PlanDirectModify(root, node, rti, i);\n\n\n> Based on David's test case,\n> I came up with the following in the morning.\n>\n> CREATE FOREIGN TABLE ft_gc (\n> a int,\n> b int,\n> c int\n> ) SERVER loopback OPTIONS (schema_name 'public', table_name 't_gc');\n>\n> alter table t_c attach partition ft_gc for values in (1);\n> alter table t_tlp attach partition t_c for values in (1);\n>\n> CREATE VIEW rw_view AS SELECT * FROM t_tlp where a < b WITH CHECK OPTION;\n>\n> explain (verbose, costs off) update rw_view set c = 42;\n>\n> Currently on HEAD, we can see 
something wrong in the plan.\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------\n> Update on public.t_tlp\n> Foreign Update on public.ft_gc t_tlp_1\n> Remote SQL: UPDATE public.t_gc SET b = $2 WHERE ctid = $1 RETURNING a, b\n> -> Foreign Scan on public.ft_gc t_tlp_1\n> Output: 42, t_tlp_1.tableoid, t_tlp_1.ctid, t_tlp_1.*\n> Remote SQL: SELECT a, b, c, ctid FROM public.t_gc WHERE ((a < b)) FOR UPDATE\n> (6 rows)\n>\n> Note that this is wrong: 'UPDATE public.t_gc SET b = $2'.\n\nRight, because of the mangled targetAttrs in postgresPlanForeignModify().\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 23 Dec 2022 13:33:43 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, 23 Dec 2022 at 17:04, Richard Guo <guofenglinux@gmail.com> wrote:\n> Thanks for the test! I looked at this and found that with WCO\n> constraints we can also hit the buggy code. Based on David's test case,\n> I came up with the following in the morning.\n\nI ended up going with a WCO option test in the end. I wanted to steer\nclear of having a test that has expected broken results from the\ngenerated column code. Also, I just couldn't help thinking the\ngenerated column test felt like it was just glued on the end and not\nreally in the correct place in the file.\n\nI've put the new WCO test in along with the existing one. I also\nconsidered modifying one of the existing tests to add another\npartitioning level, but I ended up staying clear of that as I felt\nlike it caused a bit more churn than I wanted with an existing test.\nThe test I put together tests for the bug and also checks the WCO\nworks by not updating the row that's outside of the scope of the WCO\nview and updating the row that is in the scope of the view.\n\nI've now pushed your fix plus that test.\n\nIt feels a bit like famine to feast when it comes to tests for this bug today.\n\nDavid\n\n\n",
"msg_date": "Sat, 24 Dec 2022 01:10:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 09:10, David Rowley <dgrowleyml@gmail.com>\nwrote:\n\n>\n> I've now pushed your fix plus that test.\n>\nThank you all for getting involved to resolve this.\n\nregards,\nRanier Vilela",
"msg_date": "Fri, 23 Dec 2022 10:07:14 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 9:10 PM David Rowley <dgrowleyml@gmail.com> wrote\n> On Fri, 23 Dec 2022 at 17:04, Richard Guo <guofenglinux@gmail.com> wrote:\n> > Thanks for the test! I looked at this and found that with WCO\n> > constraints we can also hit the buggy code. Based on David's test case,\n> > I came up with the following in the morning.\n>\n> I ended up going with a WCO option test in the end. I wanted to steer\n> clear of having a test that has expected broken results from the\n> generated column code. Also, I just couldn't help thinking the\n> generated column test felt like it was just glued on the end and not\n> really in the correct place in the file.\n>\n> I've put the new WCO test in along with the existing one. I also\n> considered modifying one of the existing tests to add another\n> partitioning level, but I ended up staying clear of that as I felt\n> like it caused a bit more churn than I wanted with an existing test.\n> The test I put together tests for the bug and also checks the WCO\n> works by not updating the row that's outside of the scope of the WCO\n> view and updating the row that is in the scope of the view.\n>\n> I've now pushed your fix plus that test.\n>\n> It feels a bit like famine to feast when it comes to tests for this bug today.\n\nThanks for working on this.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 26 Dec 2022 17:01:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Avoid lost result of recursion\n (src/backend/optimizer/util/inherit.c)"
}
] |
[
{
"msg_contents": "Hi,\n\nThe goal is to have access to all the tables that are being scanned or will\nbe scanned as a part of the query. Basically, the callback looks like this:\n\ntypedef void (*set_rel_pathlist_hook_type) (PlannerInfo *root,\n                                            RelOptInfo *rel,\n                                            Index rti,\n                                            RangeTblEntry *rte);\n\nNow, the problem is when there is a nested query, the function will be\ncalled once for the parent query and once for the subquery. However, I need\naccess to the whole query in this function. There seems to be no CustomScan\ncallback before this that has the whole query passed to it. Is there any\nway I can get access to the complete query (or all the relations in the\nquery) by using the parameters passed to this function? Or any other\nworkaround?\n\nThank you and happy holidays!",
"msg_date": "Tue, 20 Dec 2022 18:34:32 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Get access to the whole query in CustomScan path callback"
},
{
"msg_contents": "Amin <amin.fallahi@gmail.com> writes:\n> The goal is to have access to all the tables that are being scanned or will\n> be scanned as a part of the query. Basically, the callback looks like this:\n\n> typedef void (*set_rel_pathlist_hook_type) (PlannerInfo *root,\n> RelOptInfo *rel,\n> Index rti,\n> RangeTblEntry *rte);\n\n> Now, the problem is when there is a nested query, the function will be\n> called once for the parent query and once for the subquery. However, I need\n> access to the whole query in this function. There seems to be no CustomScan\n> callback before this that has the whole query passed to it. Is there any\n> way I can get access to the complete query (or all the relations in the\n> query) by using the parameters passed to this function? Or any other\n> workaround?\n\nEverything the planner knows is accessible via the \"root\" pointer.\n\nI very strongly question the idea that a custom scan provider should\nbe doing what you say you want to do, but the info is there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Dec 2022 12:46:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Get access to the whole query in CustomScan path callback"
},
{
"msg_contents": "I cannot find any information related to other relations in the query other\nthan the one which is being scanned in the root pointer. Is there any\nfunction call which can be used to get access to it?\n\nOn Wed, Dec 21, 2022 at 9:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Amin <amin.fallahi@gmail.com> writes:\n> > The goal is to have access to all the tables that are being scanned or\n> will\n> > be scanned as a part of the query. Basically, the callback looks like\n> this:\n>\n> > typedef void (*set_rel_pathlist_hook_type) (PlannerInfo *root,\n> > RelOptInfo *rel,\n> > Index rti,\n> > RangeTblEntry *rte);\n>\n> > Now, the problem is when there is a nested query, the function will be\n> > called once for the parent query and once for the subquery. However, I\n> need\n> > access to the whole query in this function. There seems to be no\n> CustomScan\n> > callback before this that has the whole query passed to it. Is there any\n> > way I can get access to the complete query (or all the relations in the\n> > query) by using the parameters passed to this function? Or any other\n> > workaround?\n>\n> Everything the planner knows is accessible via the \"root\" pointer.\n>\n> I very strongly question the idea that a custom scan provider should\n> be doing what you say you want to do, but the info is there.\n>\n> regards, tom lane\n>",
"msg_date": "Thu, 12 Jan 2023 17:52:21 -0800",
"msg_from": "Amin <amin.fallahi@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Get access to the whole query in CustomScan path callback"
}
] |
[
{
"msg_contents": "Hi, hackers\n\n\nI've found two assertion failures on BuildFarm [1][2].\nThe call stack can be found in [2].\n\nTRAP: failed Assert(\"dlist_is_empty(blocklist)\"), File: \"slab.c\", Line: 564, PID: 16148\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(ExceptionalCondition+0x54)[0x983564]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(SlabAlloc+0x50d)[0x9b554d]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(MemoryContextAlloc+0x4c)[0x9b2b2c]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(ReorderBufferGetChange+0x15)[0x7d62d5]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(heap_decode+0x4d1)[0x7cb071]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(LogicalDecodingProcessRecord+0x63)[0x7ca033]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION[0x7f1962]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION[0x7f42e2]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(exec_replication_command+0xa88)[0x7f4ec8]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(PostgresMain+0x524)[0x847a44]\n2022-12-20 13:18:39.725 UTC [16167:4] 015_stream.pl LOG: disconnection: session time: 0:00:00.019 user=postgres database=postgres host=[local]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION[0x7b57e6]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(PostmasterMain+0xe27)[0x7b6757]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(main+0x683)[0x4c74e3]\n/lib/libc.so.6(__libc_start_main+0xf0)[0x7f5179a8d070]\npostgres: publisher: walsender postgres postgres [local] START_REPLICATION(_start+0x2a)[0x4c75aa]\n2022-12-20 13:18:39.795 UTC [16107:4] LOG: server process (PID 16148) was terminated by signal 6: Aborted\n\n\nThe last 
failure for subscriptionCheck before those for HEAD happened 66 days ago [3]\nand I checked all failures there within 90 days. There is no similar failure.\n\nI'm not sure, but this could be related to a recent commit (d21ded75fd) in [4].\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=butterflyfish&dt=2022-12-20%2013%3A00%3A19\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2022-12-20%2010%3A34%3A39\n[3] - https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&stage=subscriptionCheck&filter=Submit\n[4] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d21ded75fdbc18d68be6e6c172f0f842c50e9263\n\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Wed, 21 Dec 2022 03:28:01 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "assertion failures on BuildFarm that happened in slab.c "
},
{
"msg_contents": "On Wed, Dec 21, 2022 at 03:28:01AM +0000, Takamichi Osumi (Fujitsu) wrote:\n> The last failure for subscriptionCheck before those for HEAD happened 66 days ago [3]\n> and I checked all failures there within 90 days. There is no similar failure.\n> \n> I'm not sure, but this could be related to a recent commit (d21ded75fd) in [4].\n> \n> [1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=butterflyfish&dt=2022-12-20%2013%3A00%3A19\n> [2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=grassquit&dt=2022-12-20%2010%3A34%3A39\n> [3] - https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&stage=subscriptionCheck&filter=Submit\n> [4] - https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d21ded75fdbc18d68be6e6c172f0f842c50e9263\n\nGood catch. Yes, d21ded7 looks to have issues. I am adding David in\nCC.\n--\nMichael",
"msg_date": "Wed, 21 Dec 2022 14:04:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: assertion failures on BuildFarm that happened in slab.c"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 18:04, Michael Paquier <michael@paquier.xyz> wrote:\n> Good catch. Yes, d21ded7 looks to have issues. I am adding David in\n> CC.\n\nThanks. I'll look now.\n\nDavid\n\n\n",
"msg_date": "Wed, 21 Dec 2022 18:07:23 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: assertion failures on BuildFarm that happened in slab.c"
},
{
"msg_contents": "On Wed, 21 Dec 2022 at 16:28, Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n> TRAP: failed Assert(\"dlist_is_empty(blocklist)\"), File: \"slab.c\", Line: 564, PID: 16148\n\n> I'm not sure, but this could be related to a recent commit (d21ded75fd) in [4].\n\nIt was. I've just pushed a fix. Thanks for highlighting the problem.\n\nDavid\n\n\n",
"msg_date": "Thu, 22 Dec 2022 09:59:42 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: assertion failures on BuildFarm that happened in slab.c"
}
] |
[
{
"msg_contents": "Hi,\n\nClient report on a corner case have shown up possible minor \nnon-optimality in procedure of transformation of simple UNION ALL \nstatement tree.\nComplaint is about auto-generated query with 1E4 simple union all's (see \nt.sh to generate a demo script). The reason: in REL_11_STABLE it is \nplanned and executed in a second, but REL_12_STABLE and beyond makes \nmatters worse: planning of such a query needs tons of gigabytes of RAM.\n\nSuperficial study revealed possibly unnecessary operations that could be \navoided:\n1. Walking across a query by calling substitute_phv_relids() even if \nlastPHId shows that no one phv is presented.\n2. Iterative passes along the append_rel_list for replacing vars in the \ntranslated_vars field. I can't grasp real necessity of passing all the \nappend_rel_list during flattening of an union all leaf subquery. No one \ncan reference this leaf, isn't it?\n\nIn attachment you can see some sketch that reduces a number of planner \ncycles/copyings.\n\n-- \nRegards\nAndrey Lepikhov\nPostgres Professional",
"msg_date": "Wed, 21 Dec 2022 09:14:02 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Optimization issue of branching UNION ALL"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> Complaint is about auto-generated query with 1E4 simple union all's (see \n> t.sh to generate a demo script). The reason: in REL_11_STABLE it is \n> planned and executed in a second, but REL_12_STABLE and beyond makes \n> matters worse: planning of such a query needs tons of gigabytes of RAM.\n\nv11 (and prior versions) sucks just as badly. In this example it\naccidentally escapes trouble because it doesn't know how to pull up\na subquery with empty FROM. But if you make the query look like\n\nSELECT 1,1 FROM dual\n UNION ALL\nSELECT 2,2 FROM dual\n UNION ALL\nSELECT 3,3 FROM dual\n...\n\nthen v11 chokes as well. (Seems like we've overlooked the need\nfor check_stack_depth() and CHECK_FOR_INTERRUPTS() here ...)\n\n> Superficial study revealed possibly unnecessary operations that could be \n> avoided:\n> 1. Walking across a query by calling substitute_phv_relids() even if \n> lastPHId shows that no one phv is presented.\n\nYeah, we could do that, and it'd help some.\n\n> 2. Iterative passes along the append_rel_list for replacing vars in the \n> translated_vars field. I can't grasp real necessity of passing all the \n> append_rel_list during flattening of an union all leaf subquery. No one \n> can reference this leaf, isn't it?\n\nAfter thinking about that for awhile, I believe we can go further:\nthe containing_appendrel is actually the *only* part of the upper\nquery that needs to be adjusted. So we could do something like\nthe attached.\n\nThis passes check-world, but I don't have quite enough confidence\nin it to just commit it.\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 21 Dec 2022 20:50:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 9:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> > Superficial study revealed possibly unnecessary operations that could be\n> > avoided:\n> > 1. Walking across a query by calling substitute_phv_relids() even if\n> > lastPHId shows that no one phv is presented.\n>\n> Yeah, we could do that, and it'd help some.\n\n\nI noticed we also check 'parse->hasSubLinks' when we fix PHVs and\nAppendRelInfos in pull_up_simple_subquery. I'm not sure why we have\nthis check. It seems not necessary.\n\nIn remove_result_refs, I don't think we need to check 'lastPHId' again\nbefore calling substitute_phv_relids, since it has been checked a few\nlines earlier.\n\nThanks\nRichard",
"msg_date": "Thu, 22 Dec 2022 10:37:35 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> I noticed we also check 'parse->hasSubLinks' when we fix PHVs and\n> AppendRelInfos in pull_up_simple_subquery. I'm not sure why we have\n> this check. It seems not necessary.\n\nYeah, I was wondering about that too ... maybe it was important\nin some previous state of the code? I didn't do any archeology\nthough.\n\n> In remove_result_refs, I don't think we need to check 'lastPHId' again\n> before calling substitute_phv_relids, since it has been checked a few\n> lines earlier.\n\nOh, duh ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Dec 2022 22:48:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
},
{
"msg_contents": "I wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> I noticed we also check 'parse->hasSubLinks' when we fix PHVs and\n>> AppendRelInfos in pull_up_simple_subquery. I'm not sure why we have\n>> this check. It seems not necessary.\n\n> Yeah, I was wondering about that too ... maybe it was important\n> in some previous state of the code? I didn't do any archeology\n> though.\n\nAfter a bit of \"git blame\"-ing, it appears that that hasSubLinks\ncheck was introduced in e006a24ad, which added a FlattenedSubLink\nnode type and needed to fix them up here:\n\n+\t * We also have to fix the relid sets of any FlattenedSubLink nodes in\n+\t * the parent query. (This could perhaps be done by ResolveNew, but it\n\nThen when I got rid of FlattenedSubLink in e549722a8, I neglected\nto remove that check. So I think maybe we don't need it, but I've\nnot tested.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Dec 2022 23:14:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
},
{
"msg_contents": "On 22/12/2022 06:50, Tom Lane wrote:\n>> 2. Iterative passes along the append_rel_list for replacing vars in the\n>> translated_vars field. I can't grasp real necessity of passing all the\n>> append_rel_list during flattening of an union all leaf subquery. No one\n>> can reference this leaf, isn't it?\n> \n> After thinking about that for awhile, I believe we can go further:\n> the containing_appendrel is actually the *only* part of the upper\n> query that needs to be adjusted. So we could do something like\n> the attached.\n> \n> This passes check-world, but I don't have quite enough confidence\n> in it to just commit it.\nThanks, I have written the letter because of some doubts too. But only \none weak point I could imagine - if someday sql standard will be changed.\nYour code looks better, than previous attempt.\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n\n",
"msg_date": "Thu, 22 Dec 2022 16:59:41 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> Thanks, I have written the letter because of some doubts too. But only \n> one weak point I could imagine - if someday sql standard will be changed.\n\nYeah, if they ever decide that LATERAL should be allowed to reference a\nprevious sub-query of UNION ALL, that'd probably break this. But it'd\nbreak a lot of other code too, so I'm not going to worry about it.\n\nI pushed the main fix to HEAD only, and the recursion checks to\nall branches.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 22 Dec 2022 11:05:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimization issue of branching UNION ALL"
}
] |
[
{
"msg_contents": "The attached patch factors out the guts of float4in so that the new\nfloat4in_internal function is callable without going through the fmgr\ncall sequence. This will make adjusting the seg module's input function\nto handle soft errors simpler. A similar operation was done for float8in\nsome years ago in commit 50861cd683e. The new function has an identical\nargument structure to float8in_internal.\n\nWe could probably call these two internal functions directly in\nnumeric_float4() and numeric_float8() - not sure if it's worth it rght\nnow but we might end up wanting something like that for error-safe casts.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 21 Dec 2022 09:21:55 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "float4in_internal"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> The attached patch factors out the guts of float4in so that the new\n> float4in_internal function is callable without going through the fmgr\n> call sequence. This will make adjusting the seg module's input function\n> to handle soft errors simpler. A similar operation was done for float8in\n> some years ago in commit 50861cd683e. The new function has an identical\n> argument structure to float8in_internal.\n\nLooks reasonable except for one nitpick: the \"out of range\" message\nin the ERANGE case should be kept mentioning real, not the passed\ntype_name, to be equivalent to the way float8in_internal does it.\nI lack enough caffeine to recall exactly why float8in_internal\ndoes it that way, but the comments are exceedingly clear that it was\nintentional, and I'm sure the same rationale would apply here.\n\n(float8in_internal also goes out of its way to show just the part of\nthe string that is the number in that case, but I'm willing to let\nthat pass for now.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Dec 2022 10:33:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: float4in_internal"
},
{
"msg_contents": "\nOn 2022-12-21 We 10:33, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> The attached patch factors out the guts of float4in so that the new\n>> float4in_internal function is callable without going through the fmgr\n>> call sequence. This will make adjusting the seg module's input function\n>> to handle soft errors simpler. A similar operation was done for float8in\n>> some years ago in commit 50861cd683e. The new function has an identical\n>> argument structure to float8in_internal.\n> Looks reasonable except for one nitpick: the \"out of range\" message\n> in the ERANGE case should be kept mentioning real, not the passed\n> type_name, to be equivalent to the way float8in_internal does it.\n> I lack enough caffeine to recall exactly why float8in_internal\n> does it that way, but the comments are exceedingly clear that it was\n> intentional, and I'm sure the same rationale would apply here.\n>\n> (float8in_internal also goes out of its way to show just the part of\n> the string that is the number in that case, but I'm willing to let\n> that pass for now.)\n>\n> \t\t\t\n\n\nThanks for reviewing.\n\nI made both these changes, to keep the two functions more closely aligned.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 21 Dec 2022 17:01:30 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: float4in_internal"
}
] |
[
{
"msg_contents": "I discovered that if you do this:\n\ndiff --git a/contrib/postgres_fdw/sql/postgres_fdw.sql b/contrib/postgres_fdw/sql/postgres_fdw.sql\nindex 2e6f7f4852..2ae231fd90 100644\n--- a/contrib/postgres_fdw/sql/postgres_fdw.sql\n+++ b/contrib/postgres_fdw/sql/postgres_fdw.sql\n@@ -366,7 +366,7 @@ CREATE FUNCTION postgres_fdw_abs(int) RETURNS int AS $$\n BEGIN\n RETURN abs($1);\n END\n-$$ LANGUAGE plpgsql IMMUTABLE;\n+$$ LANGUAGE plpgsql IMMUTABLE STRICT;\n CREATE OPERATOR === (\n LEFTARG = int,\n RIGHTARG = int,\n\none of the plan changes that you get (attached) is that this query:\n\nEXPLAIN (VERBOSE, COSTS OFF)\nSELECT * FROM local_tbl LEFT JOIN (SELECT ft1.* FROM ft1 INNER JOIN ft2 ON (ft1.c1 = ft2.c1 AND ft1.c1 < 100 AND ft1.c1 = postgres_fdw_abs(ft2.c2))) ss ON (local_tbl.c3 = ss.c3) ORDER BY local_tbl.c1 FOR UPDATE OF local_tbl;\n\ncan no longer do the join as a foreign join. There are some other\nchanges that are more explicable, because strictness of the function\nallows the planner to strength-reduce full joins to left joins, but\nwhat happened here?\n\nThe answer is that once postgres_fdw_abs() is marked strict,\nthe EquivalenceClass machinery will group these clauses as an\nEquivalenceClass:\n\n\tft1.c1 = ft2.c1 AND ft1.c1 = postgres_fdw_abs(ft2.c2)\n\nwhich it will then choose to implement as a restriction clause\non ft2\n\tft2.c1 = postgres_fdw_abs(ft2.c2)\nfollowed by a join clause\n\tft1.c1 = ft2.c1\nThis is a good and useful transformation, because it can get rid\nof ft2 rows at the scan level instead of waiting for them to be\njoined. However, because we are treating postgres_fdw_abs() as\nnon-shippable in this particular test case, that means that ft2\nnow has a non-shippable restriction clause, causing foreign_join_ok\nto give up here:\n\n /*\n * If joining relations have local conditions, those conditions are\n * required to be applied before joining the relations. 
Hence the join can\n * not be pushed down.\n */\n if (fpinfo_o->local_conds || fpinfo_i->local_conds)\n return false;\n\nIn the other formulation, \"ft1.c1 = postgres_fdw_abs(ft2.c2)\"\nis a non-shippable join clause, which foreign_join_ok knows how\nto cope with. So this seems like a fairly nasty asymmetry.\n\nI ran into this while experimenting with the next phase in my\nouter-join-vars patch set, in which the restriction that\nbelow-outer-join Equivalence classes contain only strict members\nwill go away. So that breaks this test, and I need to either\nfix postgres_fdw or change the test case.\n\nI experimented with teaching foreign_join_ok to pull up the child rels'\nlocal_conds to be join local_conds if the join is an inner join,\nwhich seems like a legal transformation. I ran into a couple of\nissues though, the hardest of which to solve is that in DML queries\nwe get \"variable not found in subplan target lists\" failures while\ntrying to build some EPQ queries. That's because the pulled-up\ncondition uses a variable that we didn't think we'd need at the join\nlevel. That could possibly be fixed by handling these conditions\ndifferently for the transmitted query than the EPQ query, but I'm\nnot sufficiently familiar with the postgres_fdw code to want to\ntake point on coding that. In any case, this line of thought\nwould lead to several other plan changes in the postgres_fdw\nregression tests, and I'm not sure if any of those would be\nbreaking the intent of the test cases.\n\nOr I could just hack this one test so that it continues to\nnot be an EquivalenceClass case.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 21 Dec 2022 16:57:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "postgres_fdw planning issue: EquivalenceClass changes confuse it"
}
] |
[
{
"msg_contents": "Dear hackers,\n(I added Amit as CC because we discussed in another thread)\n\nThis is a fork thread from time-delayed logical replication [1].\nWhile discussing, we thought that we could extend the condition of walsender shutdown[2][3].\n\nCurrently, walsenders delay the shutdown request until confirming all sent data\nare flushed on remote side. This condition was added in 985bd7[4], which is for\nsupporting clean switchover. Supposing that there is a primary-secondary\nphysical replication system, and do following steps. If any changes are come\nwhile step 2 but the walsender does not confirm the remote flush, the reboot in\nstep 3 may be failed.\n\n1. Stops primary server.\n2. Promotes secondary to new primary.\n3. Reboot (old)primary as new secondary.\n\nIn case of logical replication, however, we cannot support the use-case that\nswitches the role publisher <-> subscriber. Suppose same case as above, additional\ntransactions are committed while doing step2. To catch up such changes subscriber\nmust receive WALs related with trans, but it cannot be done because subscriber\ncannot request WALs from the specific position. In the case, we must truncate all\ndata in new subscriber once, and then create new subscription with copy_data\n= true.\n\nTherefore, I think that we can ignore the condition for shutting down the\nwalsender in logical replication.\n\nThis change may be useful for time-delayed logical replication. The walsender\nwaits the shutdown until all changes are applied on subscriber, even if it is\ndelayed. This causes that publisher cannot be stopped if large delay-time is\nspecified.\n\nPSA the minimal patch for that. I'm not sure whether WalSndCaughtUp should be\nalso omitted or not. 
It seems that changes may affect other parts like\nWalSndWaitForWal(), but we can investigate more about it.\n\n[1]: https://commitfest.postgresql.org/41/3581/\n[2]: https://www.postgresql.org/message-id/TYAPR01MB58661BA3BF38E9798E59AE14F5E19%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n[3]: https://www.postgresql.org/message-id/CAA4eK1LyetktcphdRrufHac4t5DGyhsS2xG2DSOGb7OaOVcDVg%40mail.gmail.com\n[4]: https://github.com/postgres/postgres/commit/985bd7d49726c9f178558491d31a570d47340459\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 22 Dec 2022 05:46:11 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Exit walsender before confirming remote flush in logical replication"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 11:16 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear hackers,\n> (I added Amit as CC because we discussed in another thread)\n>\n> This is a fork thread from time-delayed logical replication [1].\n> While discussing, we thought that we could extend the condition of walsender shutdown[2][3].\n>\n> Currently, walsenders delay the shutdown request until confirming all sent data\n> are flushed on remote side. This condition was added in 985bd7[4], which is for\n> supporting clean switchover. Supposing that there is a primary-secondary\n> physical replication system, and do following steps. If any changes are come\n> while step 2 but the walsender does not confirm the remote flush, the reboot in\n> step 3 may be failed.\n>\n> 1. Stops primary server.\n> 2. Promotes secondary to new primary.\n> 3. Reboot (old)primary as new secondary.\n>\n> In case of logical replication, however, we cannot support the use-case that\n> switches the role publisher <-> subscriber. Suppose same case as above, additional\n> transactions are committed while doing step2. To catch up such changes subscriber\n> must receive WALs related with trans, but it cannot be done because subscriber\n> cannot request WALs from the specific position. In the case, we must truncate all\n> data in new subscriber once, and then create new subscription with copy_data\n> = true.\n>\n> Therefore, I think that we can ignore the condition for shutting down the\n> walsender in logical replication.\n>\n> This change may be useful for time-delayed logical replication. The walsender\n> waits the shutdown until all changes are applied on subscriber, even if it is\n> delayed. 
This causes that publisher cannot be stopped if large delay-time is\n> specified.\n\nI think the current behaviour is an artifact of using the same WAL\nsender code for both logical and physical replication.\n\nI agree with you that the logical WAL sender need not wait for all the\nWAL to be replayed downstream.\n\nI have not reviewed the patch though.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n",
"msg_date": "Thu, 22 Dec 2022 17:29:34 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Thu, 22 Dec 2022 17:29:34 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in \n> On Thu, Dec 22, 2022 at 11:16 AM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> > In case of logical replication, however, we cannot support the use-case that\n> > switches the role publisher <-> subscriber. Suppose same case as above, additional\n..\n> > Therefore, I think that we can ignore the condition for shutting down the\n> > walsender in logical replication.\n...\n> > This change may be useful for time-delayed logical replication. The walsender\n> > waits the shutdown until all changes are applied on subscriber, even if it is\n> > delayed. This causes that publisher cannot be stopped if large delay-time is\n> > specified.\n> \n> I think the current behaviour is an artifact of using the same WAL\n> sender code for both logical and physical replication.\n\nYeah, I don't think we do that for the reason of switchover. On the\nother hand I think the behavior was intentionally taken over since it\nis thought as sensible alone. And I'm afraind that many people already\nrelies on that behavior.\n\n> I agree with you that the logical WAL sender need not wait for all the\n> WAL to be replayed downstream.\n\nThus I feel that it might be a bit outrageous to get rid of that\nbahavior altogether because of a new feature stumbling on it. I'm\nfine doing that only in the apply_delay case, though. However, I have\nanother concern that we are introducing the second exception for\nXLogSendLogical in the common path.\n\nI doubt that anyone wants to use synchronous logical replication with\napply_delay since the sender transaction is inevitablly affected back\nby that delay.\n\nThus how about before entering an apply_delay, logrep worker sending a\nkind of crafted feedback, which reports commit_data.end_lsn as\nflushpos? 
A little tweak is needed in send_feedback() but seems to\nwork..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:21:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 7:51 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 22 Dec 2022 17:29:34 +0530, Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote in\n> > On Thu, Dec 22, 2022 at 11:16 AM Hayato Kuroda (Fujitsu)\n> > <kuroda.hayato@fujitsu.com> wrote:\n> > > In case of logical replication, however, we cannot support the use-case that\n> > > switches the role publisher <-> subscriber. Suppose same case as above, additional\n> ..\n> > > Therefore, I think that we can ignore the condition for shutting down the\n> > > walsender in logical replication.\n> ...\n> > > This change may be useful for time-delayed logical replication. The walsender\n> > > waits the shutdown until all changes are applied on subscriber, even if it is\n> > > delayed. This causes that publisher cannot be stopped if large delay-time is\n> > > specified.\n> >\n> > I think the current behaviour is an artifact of using the same WAL\n> > sender code for both logical and physical replication.\n>\n> Yeah, I don't think we do that for the reason of switchover. On the\n> other hand I think the behavior was intentionally taken over since it\n> is thought as sensible alone.\n>\n\nDo you see it was discussed somewhere? If so, can you please point to\nthat discussion?\n\n> And I'm afraind that many people already\n> relies on that behavior.\n>\n\nBut OTOH, it can also be annoying for users to see some wait during\nthe shutdown which is actually not required.\n\n> > I agree with you that the logical WAL sender need not wait for all the\n> > WAL to be replayed downstream.\n>\n> Thus I feel that it might be a bit outrageous to get rid of that\n> bahavior altogether because of a new feature stumbling on it. I'm\n> fine doing that only in the apply_delay case, though. 
However, I have\n> another concern that we are introducing the second exception for\n> XLogSendLogical in the common path.\n>\n> I doubt that anyone wants to use synchronous logical replication with\n> apply_delay since the sender transaction is inevitablly affected back\n> by that delay.\n>\n> Thus how about before entering an apply_delay, logrep worker sending a\n> kind of crafted feedback, which reports commit_data.end_lsn as\n> flushpos? A little tweak is needed in send_feedback() but seems to\n> work..\n>\n\nHow can we send commit_data.end_lsn before actually committing the\nxact? I think this can lead to a problem because next time (say after\nrestart of walsender) server can skip sending the xact even if it is\nnot committed by the client.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 11:26:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> Thus how about before entering an apply_delay, logrep worker sending a\n> kind of crafted feedback, which reports commit_data.end_lsn as\n> flushpos? A little tweak is needed in send_feedback() but seems to\n> work..\n\nThanks for replying! I tested your saying but it could not work well...\n\nI made PoC based on the latest time-delayed patches [1] for non-streaming case.\nApply workers that are delaying applications send begin_data.final_lsn as recvpos and flushpos in send_feedback().\n\nFollowings were contents of the feedback message I got, and we could see that recv and flush were overwritten.\n\n```\nDEBUG: sending feedback (force 1) to recv 0/1553638, write 0/1553550, flush 0/1553638\nCONTEXT: processing remote data for replication origin \"pg_16390\" during message type \"BEGIN\" in transaction 730, finished at 0/1553638\n```\n\nIn terms of walsender, however, sentPtr seemed to be slightly larger than flushed position on subscriber.\n\n```\n(gdb) p MyWalSnd->sentPtr \n$2 = 22361760\n(gdb) p MyWalSnd->flush\n$3 = 22361656\n(gdb) p *MyWalSnd\n$4 = {pid = 28807, state = WALSNDSTATE_STREAMING, sentPtr = 22361760, needreload = false, write = 22361656, \n flush = 22361656, apply = 22361424, writeLag = 20020343, flushLag = 20020343, applyLag = 20020343, \n sync_standby_priority = 0, mutex = 0 '\\000', latch = 0x7ff0350cbb94, replyTime = 725113263592095}\n```\n\nTherefore I could not shut down the publisher node when applications were delaying.\nDo you have any opinions about them?\n\n```\n$ pg_ctl stop -D data_pub/\nwaiting for server to shut down............................................................... failed\npg_ctl: server does not shut down\n```\n\n[1]: https://www.postgresql.org/message-id/TYCPR01MB83730A3E21E921335F6EFA38EDE89@TYCPR01MB8373.jpnprd01.prod.outlook.com \n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:54:15 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> > Thus how about before entering an apply_delay, logrep worker sending a\n> > kind of crafted feedback, which reports commit_data.end_lsn as\n> > flushpos? A little tweak is needed in send_feedback() but seems to\n> > work..\n> \n> Thanks for replying! I tested your saying but it could not work well...\n> \n> I made PoC based on the latest time-delayed patches [1] for non-streaming case.\n> Apply workers that are delaying applications send begin_data.final_lsn as recvpos\n> and flushpos in send_feedback().\n\nMaybe I misunderstood what you said... I have also found that sentPtr is not the actual sent\nposition, but the starting point of the next WAL. You can see the comment below.\n\n```\n/*\n * How far have we sent WAL already? This is also advertised in\n * MyWalSnd->sentPtr. (Actually, this is the next WAL location to send.)\n */\nstatic XLogRecPtr sentPtr = InvalidXLogRecPtr;\n```\n\nWe must use end_lsn for crafting messages to cheat the walsender, but such records\nare included in COMMIT, not in BEGIN for the non-streaming case.\nBut if workers are delayed in apply_handle_commit(), will they hold locks for database\nobjects for a long time and it causes another issue.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED\n\n\n\n",
"msg_date": "Mon, 26 Dec 2022 12:27:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 11:16 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n\n> In case of logical replication, however, we cannot support the use-case that\n> switches the role publisher <-> subscriber. Suppose same case as above, additional\n> transactions are committed while doing step2. To catch up such changes subscriber\n> must receive WALs related with trans, but it cannot be done because subscriber\n> cannot request WALs from the specific position. In the case, we must truncate all\n> data in new subscriber once, and then create new subscription with copy_data\n> = true.\n>\n> Therefore, I think that we can ignore the condition for shutting down the\n> walsender in logical replication.\n>\n+1 for the idea.\n\n- * Note that if we determine that there's still more data to send, this\n- * function will return control to the caller.\n+ * Note that if we determine that there's still more data to send or we are in\n+ * the physical replication more, this function will return control to the\n+ * caller.\n\nI think in this comment you meant to say\n\n1. \"or we are in physical replication mode and all WALs are not yet replicated\"\n2. Typo /replication more/replication mode\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 27 Dec 2022 13:10:05 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Dilip,\r\n\r\nThanks for checking my proposal!\r\n\r\n> - * Note that if we determine that there's still more data to send, this\r\n> - * function will return control to the caller.\r\n> + * Note that if we determine that there's still more data to send or we are in\r\n> + * the physical replication more, this function will return control to the\r\n> + * caller.\r\n> \r\n> I think in this comment you meant to say\r\n> \r\n> 1. \"or we are in physical replication mode and all WALs are not yet replicated\"\r\n> 2. Typo /replication more/replication mode\r\n\r\nFirstly I considered 2, but I thought 1 seemed to be better.\r\nPSA the updated patch.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Tue, 27 Dec 2022 08:14:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 1:44 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Thanks for checking my proposal!\n>\n> > - * Note that if we determine that there's still more data to send, this\n> > - * function will return control to the caller.\n> > + * Note that if we determine that there's still more data to send or we are in\n> > + * the physical replication more, this function will return control to the\n> > + * caller.\n> >\n> > I think in this comment you meant to say\n> >\n> > 1. \"or we are in physical replication mode and all WALs are not yet replicated\"\n> > 2. Typo /replication more/replication mode\n>\n> Firstly I considered 2, but I thought 1 seemed to be better.\n> PSA the updated patch.\n>\n\nI think even for logical replication we should check whether there is\nany pending WAL (via pq_is_send_pending()) to be sent. Otherwise, what\nis the point to send the done message? Also, the caller of\nWalSndDone() already has that check which is another reason why I\ncan't see why you didn't have the same check in function WalSndDone().\n\nBTW, even after fixing this, I think logical replication will behave\ndifferently when due to some reason (like time-delayed replication)\nsend buffer gets full and walsender is not able to send data. I think\nthis will be less of an issue with physical replication because there\nis a separate walreceiver process to flush the WAL which doesn't wait\nbut the same is not true for logical replication. Do you have any\nthoughts on this matter?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:50:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 2:50 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Dec 27, 2022 at 1:44 PM Hayato Kuroda (Fujitsu)\n> <kuroda.hayato@fujitsu.com> wrote:\n> >\n> > Thanks for checking my proposal!\n> >\n> > > - * Note that if we determine that there's still more data to send, this\n> > > - * function will return control to the caller.\n> > > + * Note that if we determine that there's still more data to send or we are in\n> > > + * the physical replication more, this function will return control to the\n> > > + * caller.\n> > >\n> > > I think in this comment you meant to say\n> > >\n> > > 1. \"or we are in physical replication mode and all WALs are not yet replicated\"\n> > > 2. Typo /replication more/replication mode\n> >\n> > Firstly I considered 2, but I thought 1 seemed to be better.\n> > PSA the updated patch.\n> >\n>\n> I think even for logical replication we should check whether there is\n> any pending WAL (via pq_is_send_pending()) to be sent. Otherwise, what\n> is the point to send the done message? Also, the caller of\n> WalSndDone() already has that check which is another reason why I\n> can't see why you didn't have the same check in function WalSndDone().\n>\n> BTW, even after fixing this, I think logical replication will behave\n> differently when due to some reason (like time-delayed replication)\n> send buffer gets full and walsender is not able to send data. I think\n> this will be less of an issue with physical replication because there\n> is a separate walreceiver process to flush the WAL which doesn't wait\n> but the same is not true for logical replication. Do you have any\n> thoughts on this matter?\n>\n\nIn logical replication, it can happen today as well without\ntime-delayed replication. Basically, say apply worker is waiting to\nacquire some lock that is already acquired by some backend then it\nwill have the same behavior. 
I have not verified this, so you may want\nto check it once.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:55:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> > Firstly I considered 2, but I thought 1 seemed to be better.\r\n> > PSA the updated patch.\r\n> >\r\n> \r\n> I think even for logical replication we should check whether there is\r\n> any pending WAL (via pq_is_send_pending()) to be sent. Otherwise, what\r\n> is the point to send the done message? Also, the caller of\r\n> WalSndDone() already has that check which is another reason why I\r\n> can't see why you didn't have the same check in function WalSndDone().\r\n\r\nI did not have strong opinion around here. Fixed.\r\n\r\n> BTW, even after fixing this, I think logical replication will behave\r\n> differently when due to some reason (like time-delayed replication)\r\n> send buffer gets full and walsender is not able to send data. I think\r\n> this will be less of an issue with physical replication because there\r\n> is a separate walreceiver process to flush the WAL which doesn't wait\r\n> but the same is not true for logical replication. Do you have any\r\n> thoughts on this matter?\r\n\r\nYes, it may happen even if this work is done. And your analysis is correct that\r\nthe receive buffer is rarely full in physical replication because walreceiver\r\nimmediately flush WALs.\r\nI think this is an architectural problem. Maybe we have assumed that the decoded\r\nWALs are consumed in as short time. I do not have good idea, but one approach is\r\nintroducing a new process logical-walreceiver. It will record the decoded WALs to\r\nthe persistent storage and workers consume and then remove them. It may have huge\r\nimpact for other features and should not be accepted...\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Wed, 28 Dec 2022 02:47:56 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> In logical replication, it can happen today as well without\r\n> time-delayed replication. Basically, say apply worker is waiting to\r\n> acquire some lock that is already acquired by some backend then it\r\n> will have the same behavior. I have not verified this, so you may want\r\n> to check it once.\r\n\r\nRight, I could reproduce the scenario with following steps.\r\n\r\n1. Construct pub -> sub logical replication system with streaming = off.\r\n2. Define a table on both nodes.\r\n\r\n```\r\nCREATE TABLE tbl (id int PRIMARY KEY);\r\n```\r\n\r\n3. Execute concurrent transactions.\r\n\r\nTx-1 (on subscriber)\r\nBEGIN;\r\nINSERT INTO tbl SELECT i FROM generate_series(1, 5000) s(i);\r\n\r\n\tTx-2 (on publisher)\r\n\tINSERT INTO tbl SELECT i FROM generate_series(1, 5000) s(i);\r\n\r\n4. Try to shutdown publisher but it will be failed.\r\n\r\n```\r\n$ pg_ctl stop -D publisher\r\nwaiting for server to shut down............................................................... failed\r\npg_ctl: server does not shut down\r\n```\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 28 Dec 2022 02:49:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 8:19 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > In logical replication, it can happen today as well without\n> > time-delayed replication. Basically, say apply worker is waiting to\n> > acquire some lock that is already acquired by some backend then it\n> > will have the same behavior. I have not verified this, so you may want\n> > to check it once.\n>\n> Right, I could reproduce the scenario with following steps.\n>\n> 1. Construct pub -> sub logical replication system with streaming = off.\n> 2. Define a table on both nodes.\n>\n> ```\n> CREATE TABLE tbl (id int PRIMARY KEY);\n> ```\n>\n> 3. Execute concurrent transactions.\n>\n> Tx-1 (on subscriber)\n> BEGIN;\n> INSERT INTO tbl SELECT i FROM generate_series(1, 5000) s(i);\n>\n> Tx-2 (on publisher)\n> INSERT INTO tbl SELECT i FROM generate_series(1, 5000) s(i);\n>\n> 4. Try to shutdown publisher but it will be failed.\n>\n> ```\n> $ pg_ctl stop -D publisher\n> waiting for server to shut down............................................................... failed\n> pg_ctl: server does not shut down\n> ```\n\nThanks for the verification. BTW, do you think we should document this\neither with time-delayed replication or otherwise unless this is\nalready documented?\n\nAnother thing we can investigate here why do we need to ensure that\nthere is no pending send before shutdown.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Dec 2022 09:26:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Thanks for the verification. BTW, do you think we should document this\r\n> either with time-delayed replication or otherwise unless this is\r\n> already documented?\r\n\r\nI think this should be documented in the \"Shutting Down the Server\" section in runtime.sgml\r\nor logical-replication.sgml, but I cannot find such documentation. It will be included in the next version.\r\n\r\n> Another thing we can investigate here why do we need to ensure that\r\n> there is no pending send before shutdown.\r\n\r\nI have not looked into it yet; I will continue next year.\r\nIt seems that walsenders have been sending all data before shutting down since ea5516,\r\ne0b581 and 754baa. \r\nThere were many threads related to streaming replication, so I could not pin down\r\nthe specific message that was written in the commit message of ea5516.\r\n\r\nI have also checked some wiki pages [1][2], but I could not find any design notes about it.\r\n\r\n[1]: https://wiki.postgresql.org/wiki/Streaming_Replication\r\n[2]: https://wiki.postgresql.org/wiki/Synchronous_Replication_9/2010_Proposal\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 28 Dec 2022 09:15:41 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 8:18 AM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit,\n>\n> > > Firstly I considered 2, but I thought 1 seemed to be better.\n> > > PSA the updated patch.\n> > >\n> >\n> > I think even for logical replication we should check whether there is\n> > any pending WAL (via pq_is_send_pending()) to be sent. Otherwise, what\n> > is the point to send the done message? Also, the caller of\n> > WalSndDone() already has that check which is another reason why I\n> > can't see why you didn't have the same check in function WalSndDone().\n>\n> I did not have strong opinion around here. Fixed.\n>\n> > BTW, even after fixing this, I think logical replication will behave\n> > differently when due to some reason (like time-delayed replication)\n> > send buffer gets full and walsender is not able to send data. I think\n> > this will be less of an issue with physical replication because there\n> > is a separate walreceiver process to flush the WAL which doesn't wait\n> > but the same is not true for logical replication. Do you have any\n> > thoughts on this matter?\n>\n> Yes, it may happen even if this work is done. And your analysis is correct that\n> the receive buffer is rarely full in physical replication because walreceiver\n> immediately flush WALs.\n>\n\nOkay, but what happens in the case of physical replication when\nsynchronous_commit = remote_apply? In that case, won't it ensure that\napply has also happened? If so, then shouldn't the time delay feature\nalso cause a similar problem for physical replication as well?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 13 Jan 2023 16:41:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Wed, 28 Dec 2022 09:15:41 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> > Another thing we can investigate here why do we need to ensure that\n> > there is no pending send before shutdown.\n> \n> I have not done yet about it, will continue next year.\n> It seems that walsenders have been sending all data before shutting down since ea5516,\n> e0b581 and 754baa. \n> There were many threads related with streaming replication, so I could not pin\n> the specific message that written in the commit message of ea5516.\n> \n> I have also checked some wiki pages [1][2], but I could not find any design about it.\n> \n> [1]: https://wiki.postgresql.org/wiki/Streaming_Replication\n> [2]: https://wiki.postgresql.org/wiki/Synchronous_Replication_9/2010_Proposal\n\nIf I'm grabbing the discussion here correctly, from my memory it is\nbecause physical replication needs all records that have been written on the\nprimary to be written on the standby for switchover to succeed. It is\nannoying that a normal shutdown occasionally leads to switchover\nfailure. Thus WalSndDone explicitly waits for remote flush/write\nregardless of the setting of synchronous_commit. Thus apply delay\ndoesn't affect shutdown (AFAICS), and that is sufficient since all the\nrecords will be applied at the next startup.\n\nIn logical replication apply precedes write and flush, so we have no\nindication of whether a record is \"replicated\" to the standby other than the\napply LSN. On the other hand, logical replication has no\nbusiness with switchover, so that assurance is useless. Thus I think\nwe can (practically) ignore apply_lsn at shutdown. It seems subtly\nirregular, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 16 Jan 2023 10:28:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Fri, 13 Jan 2023 16:41:08 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> Okay, but what happens in the case of physical replication when\n> synchronous_commit = remote_apply? In that case, won't it ensure that\n> apply has also happened? If so, then shouldn't the time delay feature\n> also cause a similar problem for physical replication as well?\n\nAs written in another mail, WalSndDone doesn't honor\nsynchronous_commit. In other words, AFAICS the walsender finishes without\nwaiting for remote_apply. The unapplied records will be applied at the\nnext startup.\n\nI haven't confirmed that behavior myself, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 16 Jan 2023 10:31:53 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Horiguchi-san, Amit,\n\n> At Fri, 13 Jan 2023 16:41:08 +0530, Amit Kapila <amit.kapila16@gmail.com>\n> wrote in\n> > Okay, but what happens in the case of physical replication when\n> > synchronous_commit = remote_apply? In that case, won't it ensure that\n> > apply has also happened? If so, then shouldn't the time delay feature\n> > also cause a similar problem for physical replication as well?\n> \n> As written in another mail, WalSndDone doesn't honor\n> synchronous_commit. In other words, AFAIS walsender finishes not\n> waiting remote_apply. The unapplied recods will be applied at the\n> next startup.\n> \n> I didn't confirmed that behavior for myself, though..\n\nIf Amit was referring to the case where sending data is pending in physical\nreplication, the walsender cannot stop. But this is not related to\nsynchronous_commit: it happens because the walsender must sweep all pending data before\nshutting down. We can reproduce the situation with:\n\n1. build a streaming replication system\n2. kill -STOP $walreceiver\n3. insert data to the primary server\n4. try to stop the primary server\n\nIf what you said was not related to pending data, the walsender can be stopped even\nif synchronous_commit = remote_apply. As Horiguchi-san said, such a condition\nis not written in WalSndDone() [1]. I think the parameter synchronous_commit does\nnot affect the walsender process much. It just defines when the backend returns the\nresult to the client.\n\nI could check this with the following steps:\n\n1. Built a streaming replication system. PSA the script for that.\n\nPrimary config.\n\n```\nsynchronous_commit = 'remote_apply'\nsynchronous_standby_names = 'secondary'\n```\n\nSecondary config.\n\n```\nrecovery_min_apply_delay = 1d\nprimary_conninfo = 'user=postgres port=$port_N1 application_name=secondary'\nhot_standby = on\n```\n\n2. Inserted data to the primary. 
This waited for the remote apply:\n\npsql -U postgres -p $port_primary -c \"INSERT INTO tbl SELECT generate_series(1, 5000)\"\n\n3. Stopped the primary server from another terminal. It could be done.\nThe terminal from step 2 said:\n\n```\nWARNING: canceling the wait for synchronous replication and terminating connection due to administrator command\nDETAIL: The transaction has already committed locally, but might not have been replicated to the standby.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nconnection to server was lost\n```\n\n[1]: https://github.com/postgres/postgres/blob/master/src/backend/replication/walsender.c#L3121\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 16 Jan 2023 11:08:30 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Horiguchi-san,\n\n> If I'm grabbing the discussion here correctly, in my memory, it is\n> because: physical replication needs all records that have written on\n> primary are written on standby for switchover to succeed. It is\n> annoying that normal shutdown occasionally leads to switchover\n> failure. Thus WalSndDone explicitly waits for remote flush/write\n> regardless of the setting of synchronous_commit.\n\nAFAIK the condition (sentPtr == replicatedPtr) seemed to be introduced for that purpose [1].\nYou meant to say that the condition (!pq_is_send_pending()) has the same motivation, right?\n\n> Thus apply delay\n> doesn't affect shutdown (AFAICS), and that is sufficient since all the\n> records will be applied at the next startup.\n\nI was not clear about the phrase \"next startup\", but I agree that we can shut down the\nwalsender in case of recovery_min_apply_delay > 0 and synchronous_commit = remote_apply.\nThe startup process will not be terminated even if the primary crashes, so I\nthink the process will apply the transaction sooner or later.\n\n> In logical replication apply preceeds write and flush so we have no\n> indication whether a record is \"replicated\" to standby by other than\n> apply LSN. On the other hand, logical recplication doesn't have a\n> business with switchover so that assurarance is useless. Thus I think\n> we can (practically) ignore apply_lsn at shutdown. It seems subtly\n> irregular, though.\n\nAnother consideration is that the condition (!pq_is_send_pending()) ensures that\nthere are no pending messages, including other packets. Currently we force walsenders\nto clean up all messages before shutting down, even if it is a keepalive one.\nI cannot think of any problems caused by this, but I will keep the condition in the case of\nlogical replication.\n\nI updated the patch accordingly. Also, I found that the previous version\ndid not work well in the case of streamed transactions. 
When a streamed transaction\nis committed on the publisher but applying it is delayed on the subscriber, the\nprocess sometimes waits until there is no pending write. This is done in\nProcessPendingWrites(). I added another termination path to the function.\n\n[1]: https://github.com/postgres/postgres/commit/985bd7d49726c9f178558491d31a570d47340459\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Mon, 16 Jan 2023 11:09:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 4:39 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> > In logical replication apply preceeds write and flush so we have no\n> > indication whether a record is \"replicated\" to standby by other than\n> > apply LSN. On the other hand, logical recplication doesn't have a\n> > business with switchover so that assurarance is useless. Thus I think\n> > we can (practically) ignore apply_lsn at shutdown. It seems subtly\n> > irregular, though.\n>\n> Another consideration is that the condition (!pq_is_send_pending()) ensures that\n> there are no pending messages, including other packets. Currently we force walsenders\n> to clean up all messages before shutting down, even if it is a keepalive one.\n> I cannot have any problems caused by this, but I can keep the condition in case of\n> logical replication.\n>\n\nLet me try to summarize the discussion till now. The problem we are\ntrying to solve here is to allow a shutdown to complete when walsender\nis not able to send the entire WAL. Currently, in such cases, the\nshutdown fails. As per our current understanding, this can happen when\n(a) walreceiver/walapply process is stuck (not able to receive more\nWAL) due to locks or some other reason; (b) a long time delay has been\nconfigured to apply the WAL (we don't yet have such a feature for\nlogical replication but the discussion for same is in progress).\n\nBoth reasons mostly apply to logical replication because there is no\nseparate walreceiver process whose job is to just flush the WAL. In\nlogical replication, the process that receives the WAL also applies\nit. So, while applying it can get stuck for a long time waiting for some\nheavy-weight lock to be released by some other long-running\ntransaction by the backend. 
Similarly, if the user has configured a\nlarge value of time-delayed apply, it can lead to the network buffer\nfilling up between the walsender and the receiver/apply process.\n\nThe condition to allow the shutdown to wait for all WAL to be sent has\ntwo parts: (a) it confirms that there is no pending WAL to be sent;\n(b) it confirms that all the WAL sent has been flushed by the client. As\nper our understanding, both these conditions are to allow a clean\nswitchover/failover, which seems to be useful only for physical\nreplication. Logical replication doesn't provide such\nfunctionality.\n\nThe proposed patch tries to eliminate condition (b) for logical\nreplication in the hope that this will allow the shutdown to\ncomplete in most cases. There is no specific reason discussed not to\ndo (a) for logical replication.\n\nNow, to proceed here we have the following options: (1) Fix (b) as\nproposed by the patch and document the risks related to (a); (2) Fix\nboth (a) and (b); (3) Do nothing and document that users need to\nunblock the subscribers to complete the shutdown.\n\nThoughts?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 17 Jan 2023 14:41:00 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
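[Editor's note] The two-part exit condition summarized in the message above can be sketched as a small, self-contained C function. This is a hypothetical illustration added for this archive, not the actual walsender.c code; the function name `can_exit_on_shutdown`, its parameters, and the simplified `XLogRecPtr` typedef are all invented for the sketch.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* simplified stand-in for the real WAL pointer type */

/*
 * Sketch of the shutdown check discussed in the thread:
 *   (a) no WAL is pending in the local send buffer
 *       (pq_is_send_pending() in the real code), and
 *   (b) the remote side has confirmed everything we sent
 *       (sentPtr == replicatedPtr).
 * The proposal is to skip (b) for logical walsenders, whose client both
 * receives and applies WAL and can therefore be stuck behind a lock.
 */
static bool
can_exit_on_shutdown(bool logical, bool send_pending,
                     XLogRecPtr sentPtr, XLogRecPtr replicatedPtr)
{
    if (send_pending)
        return false;               /* condition (a) fails */

    if (!logical && replicatedPtr < sentPtr)
        return false;               /* condition (b), kept for physical repl. */

    return true;
}
```

Under this sketch, a physical walsender with unflushed remote WAL keeps waiting, while a logical walsender in the same state may exit as soon as its local send buffer is drained, which is option (1) above.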
{
"msg_contents": "Dear Amit, hackers,\r\n\r\n> Let me try to summarize the discussion till now. The problem we are\r\n> trying to solve here is to allow a shutdown to complete when walsender\r\n> is not able to send the entire WAL. Currently, in such cases, the\r\n> shutdown fails. As per our current understanding, this can happen when\r\n> (a) walreceiver/walapply process is stuck (not able to receive more\r\n> WAL) due to locks or some other reason; (b) a long time delay has been\r\n> configured to apply the WAL (we don't yet have such a feature for\r\n> logical replication but the discussion for same is in progress).\r\n\r\nThanks for summarizing.\r\nWhile analyzing the stuck cases, I noticed that there are two types of shutdown failures.\r\nThey can be characterized by their back traces. They are shown at the bottom.\r\n\r\nType i)\r\nThe walsender executes WalSndDone(), but cannot satisfy the condition.\r\nIt means that all WALs have been sent to the subscriber but have not been flushed;\r\nsentPtr is not the same as replicatedPtr. This hang can happen when the delayed\r\ntransaction is small or streamed.\r\n\r\nType ii)\r\nThe walsender cannot execute WalSndDone(); it is stuck at ProcessPendingWrites().\r\nIt means that the send buffer became full while replicating a transaction;\r\npq_is_send_pending() returns true and the walsender cannot break out of the loop.\r\nThis hang can happen when the delayed transaction is large, but it is not a streamed one.\r\n\r\nIf we choose modification (1), we can only fix type (i) because pending WALs cause\r\nthe failure. IIUC if we want to shut down walsender processes even in case (ii), we must\r\nchoose (2) and additional fixes are needed.\r\n\r\nBased on the above, I prefer modification (2) because it can rescue more cases. Thoughts?\r\nPSA the patch for it. 
It is almost the same as the previous version, but the comments are updated.\r\n\r\n\r\nAppendinx:\r\n\r\nThe backtrace for type i)\r\n\r\n```\r\n#0 WalSndDone (send_data=0x87f825 <XLogSendLogical>) at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:3111\r\n#1 0x000000000087ed1d in WalSndLoop (send_data=0x87f825 <XLogSendLogical>) at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:2525\r\n#2 0x000000000087d40a in StartLogicalReplication (cmd=0x1f49030) at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:1320\r\n#3 0x000000000087df29 in exec_replication_command (\r\n cmd_string=0x1f15498 \"START_REPLICATION SLOT \\\"sub\\\" LOGICAL 0/0 (proto_version '4', streaming 'on', origin 'none', publication_names '\\\"pub\\\"')\")\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:1830\r\n#4 0x000000000091b032 in PostgresMain (dbname=0x1f4c938 \"postgres\", username=0x1f4c918 \"postgres\")\r\n at ../../PostgreSQL-Source-Dev/src/backend/tcop/postgres.c:4561\r\n#5 0x000000000085390b in BackendRun (port=0x1f3d0b0) at ../../PostgreSQL-Source-Dev/src/backend/postmaster/postmaster.c:4437\r\n#6 0x000000000085322c in BackendStartup (port=0x1f3d0b0) at ../../PostgreSQL-Source-Dev/src/backend/postmaster/postmaster.c:4165\r\n#7 0x000000000084f7a2 in ServerLoop () at ../../PostgreSQL-Source-Dev/src/backend/postmaster/postmaster.c:1762\r\n#8 0x000000000084f0a2 in PostmasterMain (argc=3, argv=0x1f0ff30) at ../../PostgreSQL-Source-Dev/src/backend/postmaster/postmaster.c:1452\r\n#9 0x000000000074a4d6 in main (argc=3, argv=0x1f0ff30) at ../../PostgreSQL-Source-Dev/src/backend/main/main.c:200\r\n```\r\n\r\nThe backtrace for type ii)\r\n\r\n```\r\n#0 ProcessPendingWrites () at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:1438\r\n#1 0x000000000087d635 in WalSndWriteData (ctx=0x1429ce8, lsn=22406440, xid=731, last_write=true)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:1405\r\n#2 
0x0000000000888420 in OutputPluginWrite (ctx=0x1429ce8, last_write=true) at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/logical.c:669\r\n#3 0x00007f022dfe43a7 in pgoutput_change (ctx=0x1429ce8, txn=0x1457d40, relation=0x7f0245075268, change=0x1460ef8)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/pgoutput/pgoutput.c:1491\r\n#4 0x0000000000889125 in change_cb_wrapper (cache=0x142bcf8, txn=0x1457d40, relation=0x7f0245075268, change=0x1460ef8)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/logical.c:1077\r\n#5 0x000000000089507c in ReorderBufferApplyChange (rb=0x142bcf8, txn=0x1457d40, relation=0x7f0245075268, change=0x1460ef8, streaming=false)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/reorderbuffer.c:1969\r\n#6 0x0000000000895866 in ReorderBufferProcessTXN (rb=0x142bcf8, txn=0x1457d40, commit_lsn=23060624, snapshot_now=0x1440150, command_id=0, streaming=false)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/reorderbuffer.c:2245\r\n#7 0x0000000000896348 in ReorderBufferReplay (txn=0x1457d40, rb=0x142bcf8, xid=731, commit_lsn=23060624, end_lsn=23060672, commit_time=727353664342177, \r\n origin_id=0, origin_lsn=0) at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/reorderbuffer.c:2675\r\n#8 0x00000000008963d0 in ReorderBufferCommit (rb=0x142bcf8, xid=731, commit_lsn=23060624, end_lsn=23060672, commit_time=727353664342177, origin_id=0, \r\n origin_lsn=0) at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/reorderbuffer.c:2699\r\n#9 0x00000000008842c7 in DecodeCommit (ctx=0x1429ce8, buf=0x7ffcf03731a0, parsed=0x7ffcf0372fa0, xid=731, two_phase=false)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/decode.c:682\r\n#10 0x0000000000883667 in xact_decode (ctx=0x1429ce8, buf=0x7ffcf03731a0) at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/decode.c:216\r\n#11 0x000000000088338b in LogicalDecodingProcessRecord (ctx=0x1429ce8, 
record=0x142a080)\r\n at ../../PostgreSQL-Source-Dev/src/backend/replication/logical/decode.c:119\r\n#12 0x000000000087f8c7 in XLogSendLogical () at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:3060\r\n#13 0x000000000087ec5a in WalSndLoop (send_data=0x87f825 <XLogSendLogical>) at ../../PostgreSQL-Source-Dev/src/backend/replication/walsender.c:2490\r\n...\r\n```\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Thu, 19 Jan 2023 08:37:47 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Let me try to summarize the discussion till now. The problem we are\n> trying to solve here is to allow a shutdown to complete when walsender\n> is not able to send the entire WAL. Currently, in such cases, the\n> shutdown fails. As per our current understanding, this can happen when\n> (a) walreceiver/walapply process is stuck (not able to receive more\n> WAL) due to locks or some other reason; (b) a long time delay has been\n> configured to apply the WAL (we don't yet have such a feature for\n> logical replication but the discussion for same is in progress).\n>\n> Both reasons mostly apply to logical replication because there is no\n> separate walreceiver process whose job is to just flush the WAL. In\n> logical replication, the process that receives the WAL also applies\n> it. So, while applying it can stuck for a long time waiting for some\n> heavy-weight lock to be released by some other long-running\n> transaction by the backend.\n>\n\nWhile checking the commits and email discussions in this area, I came\nacross the email [1] from Michael where something similar seems to\nhave been discussed. Basically, whether the early shutdown of\nwalsender can prevent a switchover between publisher and subscriber\nbut that part was never clearly answered in that email chain. It might\nbe worth reading the entire discussion [2]. That discussion finally\nlead to the following commit:\n\ncommit c6c333436491a292d56044ed6e167e2bdee015a2\nAuthor: Andres Freund <andres@anarazel.de>\nDate: Mon Jun 5 18:53:41 2017 -0700\n\n Prevent possibility of panics during shutdown checkpoint.\n\n When the checkpointer writes the shutdown checkpoint, it checks\n afterwards whether any WAL has been written since it started and\n throws a PANIC if so. 
At that point, only walsenders are still\n active, so one might think this could not happen, but walsenders can\n also generate WAL, for instance in BASE_BACKUP and logical decoding\n related commands (e.g. via hint bits). So they can trigger this panic\n if such a command is run while the shutdown checkpoint is being\n written.\n\n To fix this, divide the walsender shutdown into two phases. First,\n checkpointer, itself triggered by postmaster, sends a\n PROCSIG_WALSND_INIT_STOPPING signal to all walsenders. If the backend\n is idle or runs an SQL query this causes the backend to shutdown, if\n logical replication is in progress all existing WAL records are\n processed followed by a shutdown.\n...\n...\n\nHere, as mentioned in the commit, we are trying to ensure that before\ncheckpoint writes its shutdown WAL record, we ensure that \"if logical\nreplication is in progress all existing WAL records are processed\nfollowed by a shutdown.\". I think even before this commit, we try to\nsend the entire WAL before shutdown but not completely sure. There was\nno discussion on what happens if the logical walreceiver/walapply\nprocess is waiting on some heavy-weight lock and the network socket\nbuffer is full due to which walsender is not able to process its WAL.\nIs it okay for shutdown to fail in such a case as it is happening now,\nor shall we somehow detect that and shut down the walsender, or we\njust allow logical walsender to always exit immediately as soon as the\nshutdown signal came?\n\nNote: I have added some of the people involved in the previous\nthread's [2] discussion in the hope that they can share their\nthoughts.\n\n[1] - https://www.postgresql.org/message-id/CAB7nPqR3icaA%3DqMv_FuU8YVYH3KUrNMnq_OmCfkzxCHC4fox8w%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/CAHGQGwEsttg9P9LOOavoc9d6VB1zVmYgfBk%3DLjsk-UL9cEf-eA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 20 Jan 2023 16:15:27 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 4:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Let me try to summarize the discussion till now. The problem we are\n> > trying to solve here is to allow a shutdown to complete when walsender\n> > is not able to send the entire WAL. Currently, in such cases, the\n> > shutdown fails. As per our current understanding, this can happen when\n> > (a) walreceiver/walapply process is stuck (not able to receive more\n> > WAL) due to locks or some other reason; (b) a long time delay has been\n> > configured to apply the WAL (we don't yet have such a feature for\n> > logical replication but the discussion for same is in progress).\n> >\n> > Both reasons mostly apply to logical replication because there is no\n> > separate walreceiver process whose job is to just flush the WAL. In\n> > logical replication, the process that receives the WAL also applies\n> > it. So, while applying it can stuck for a long time waiting for some\n> > heavy-weight lock to be released by some other long-running\n> > transaction by the backend.\n> >\n>\n> While checking the commits and email discussions in this area, I came\n> across the email [1] from Michael where something similar seems to\n> have been discussed. Basically, whether the early shutdown of\n> walsender can prevent a switchover between publisher and subscriber\n> but that part was never clearly answered in that email chain. It might\n> be worth reading the entire discussion [2]. That discussion finally\n> lead to the following commit:\n\n\nRight, in the thread the question is raised about whether it makes\nsense for logical replication to send all WALs but there is no\nconclusion on that. 
But I think this patch is mainly about resolving\nthe PANIC due to extra WAL getting generated by walsender during\ncheckpoint processing and that's the reason the behavior of sending\nall the WAL is maintained but only the extra WAL generation stopped\n(before shutdown checkpoint can proceed) using this new state\n\n>\n> commit c6c333436491a292d56044ed6e167e2bdee015a2\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Mon Jun 5 18:53:41 2017 -0700\n>\n> Prevent possibility of panics during shutdown checkpoint.\n>\n> When the checkpointer writes the shutdown checkpoint, it checks\n> afterwards whether any WAL has been written since it started and\n> throws a PANIC if so. At that point, only walsenders are still\n> active, so one might think this could not happen, but walsenders can\n> also generate WAL, for instance in BASE_BACKUP and logical decoding\n> related commands (e.g. via hint bits). So they can trigger this panic\n> if such a command is run while the shutdown checkpoint is being\n> written.\n>\n> To fix this, divide the walsender shutdown into two phases. First,\n> checkpointer, itself triggered by postmaster, sends a\n> PROCSIG_WALSND_INIT_STOPPING signal to all walsenders. If the backend\n> is idle or runs an SQL query this causes the backend to shutdown, if\n> logical replication is in progress all existing WAL records are\n> processed followed by a shutdown.\n> ...\n> ...\n>\n> Here, as mentioned in the commit, we are trying to ensure that before\n> checkpoint writes its shutdown WAL record, we ensure that \"if logical\n> replication is in progress all existing WAL records are processed\n> followed by a shutdown.\". I think even before this commit, we try to\n> send the entire WAL before shutdown but not completely sure.\n\n\nYes, I think that there is no change in that behavior.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 23 Jan 2023 11:09:14 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Dilip, hackers,\r\n\r\nThanks for giving your opinion. I analyzed the relationship with the given commit,\r\nand I think I can keep my patch. What do you think?\r\n\r\n# Abstract\r\n\r\n* Some modifications are needed.\r\n* We cannot roll back the shutdown if walsenders are stuck\r\n* We don't have a good way to detect a stuck process\r\n\r\n# Discussion\r\n\r\nCompared to physical replication, logical replication is more likely to get stuck.\r\nI think the risk should be avoided as much as possible by fixing the code.\r\nThen, if it leads to another failure, we can document a caution for users.\r\n\r\nWhile shutting down the server, the checkpointer sends the SIGUSR1 signal to walsenders.\r\nThis is done after the other processes have exited, so we cannot raise an ERROR and roll back\r\nthe operation if the checkpointer recognizes that a process is stuck at that time.\r\n\r\nWe don't have any feature by which the postmaster can check whether this node is\r\na publisher or not. So if we want to add a mechanism that can check the health\r\nof walsenders before shutting down, we must do that at the top of\r\nprocess_pm_shutdown_request() even if we are not in logical replication.\r\nI think it affects the basis of postgres largely, and in the first place,\r\nPostgreSQL does not have a mechanism to check the health of a process.\r\n\r\nTherefore, I want to adopt the approach that the walsender itself exits immediately when it gets the signal.\r\n\r\n## About the patch - Were the fixes correct?\r\n\r\nIn ProcessPendingWrites() in my patch, the walsender calls WalSndDone() when it gets\r\nthe SIGUSR1 signal. I think this is the right thing to do. 
From the patch [1]:\r\n\r\n```\r\n@@ -1450,6 +1450,10 @@ ProcessPendingWrites(void)\r\n /* Try to flush pending output to the client */\r\n if (pq_flush_if_writable() != 0)\r\n WalSndShutdown();\r\n+\r\n+ /* If we got shut down requested, try to exit the process */\r\n+ if (got_STOPPING)\r\n+ WalSndDone(XLogSendLogical);\r\n }\r\n \r\n /* reactivate latch so WalSndLoop knows to continue */\r\n```\r\n\r\n\r\nPer my analysis, in the case of logical replication, walsenders exit with the following\r\nsteps. Note that a logical walsender does not receive the SIGUSR2 signal; it sets the\r\nflag by itself instead:\r\n\r\n1. postmaster sends a shutdown request to the checkpointer\r\n2. checkpointer sends SIGUSR1 to walsenders and waits\r\n3. when walsenders receive SIGUSR1, they turn got_STOPPING on.\r\n4. walsenders consume all WALs. @XLogSendLogical\r\n5. walsenders turn got_SIGUSR2 on by themselves @XLogSendLogical\r\n6. walsenders recognize the flag is on, so call WalSndDone() @ WalSndLoop\r\n7. proc_exit(0)\r\n8. checkpointer writes the shutdown record\r\n...\r\n\r\nThe type (i) hang, which I reported in -hackers [1], means that processes stop at step 6,\r\nand the type (ii) hang means that processes stop at step 4. In step 4, got_SIGUSR2 is never turned on, so\r\nwe must use the got_STOPPING flag.\r\n\r\n[1]: https://www.postgresql.org/message-id/TYCPR01MB58701A47F35FED0A2B399662F5C49@TYCPR01MB5870.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 27 Jan 2023 04:11:53 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Fri, Jan 20, 2023 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Jan 17, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Let me try to summarize the discussion till now. The problem we are\n> > trying to solve here is to allow a shutdown to complete when walsender\n> > is not able to send the entire WAL. Currently, in such cases, the\n> > shutdown fails. As per our current understanding, this can happen when\n> > (a) walreceiver/walapply process is stuck (not able to receive more\n> > WAL) due to locks or some other reason; (b) a long time delay has been\n> > configured to apply the WAL (we don't yet have such a feature for\n> > logical replication but the discussion for same is in progress).\n> >\n> > Both reasons mostly apply to logical replication because there is no\n> > separate walreceiver process whose job is to just flush the WAL. In\n> > logical replication, the process that receives the WAL also applies\n> > it. So, while applying it can stuck for a long time waiting for some\n> > heavy-weight lock to be released by some other long-running\n> > transaction by the backend.\n> >\n>\n> While checking the commits and email discussions in this area, I came\n> across the email [1] from Michael where something similar seems to\n> have been discussed. Basically, whether the early shutdown of\n> walsender can prevent a switchover between publisher and subscriber\n> but that part was never clearly answered in that email chain. It might\n> be worth reading the entire discussion [2]. That discussion finally\n> lead to the following commit:\n>\n> commit c6c333436491a292d56044ed6e167e2bdee015a2\n> Author: Andres Freund <andres@anarazel.de>\n> Date: Mon Jun 5 18:53:41 2017 -0700\n>\n> Prevent possibility of panics during shutdown checkpoint.\n>\n> When the checkpointer writes the shutdown checkpoint, it checks\n> afterwards whether any WAL has been written since it started and\n> throws a PANIC if so. At that point, only walsenders are still\n> active, so one might think this could not happen, but walsenders can\n> also generate WAL, for instance in BASE_BACKUP and logical decoding\n> related commands (e.g. via hint bits). So they can trigger this panic\n> if such a command is run while the shutdown checkpoint is being\n> written.\n>\n> To fix this, divide the walsender shutdown into two phases. First,\n> checkpointer, itself triggered by postmaster, sends a\n> PROCSIG_WALSND_INIT_STOPPING signal to all walsenders. If the backend\n> is idle or runs an SQL query this causes the backend to shutdown, if\n> logical replication is in progress all existing WAL records are\n> processed followed by a shutdown.\n> ...\n> ...\n>\n> Here, as mentioned in the commit, we are trying to ensure that before\n> checkpoint writes its shutdown WAL record, we ensure that \"if logical\n> replication is in progress all existing WAL records are processed\n> followed by a shutdown.\". I think even before this commit, we try to\n> send the entire WAL before shutdown but not completely sure. There was\n> no discussion on what happens if the logical walreceiver/walapply\n> process is waiting on some heavy-weight lock and the network socket\n> buffer is full due to which walsender is not able to process its WAL.\n> Is it okay for shutdown to fail in such a case as it is happening now,\n> or shall we somehow detect that and shut down the walsender, or we\n> just allow logical walsender to always exit immediately as soon as the\n> shutdown signal came?\n\n+1 to eliminate condition (b) for logical replication.\n\nRegarding (a), as Amit mentioned before[1], I think we should check if\npq_is_send_pending() is false. Otherwise, we will end up terminating\nthe WAL stream without the done message. Which will lead to an error\nmessage \"ERROR: could not receive data from WAL stream: server closed\nthe connection unexpectedly\" on the subscriber even at a clean\nshutdown. In a case where pq_is_send_pending() doesn't become false\nfor a long time, (e.g., the network socket buffer got full due to the\napply worker waiting on a lock), I think users should unblock it by\nthemselves. Or it might be practically better to shutdown the\nsubscriber first in the logical replication case, unlike the physical\nreplication case. I've not studied the time-delayed logical\nreplication patch yet, though.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/CAA4eK1%2BpD654%2BXnrPugYueh7Oh22EBGTr6dA_fS0%2BgPiHayG9A%40mail.gmail.com\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Feb 2023 17:38:36 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 2:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Fri, Jan 20, 2023 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Jan 17, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > Let me try to summarize the discussion till now. The problem we are\n> > > trying to solve here is to allow a shutdown to complete when walsender\n> > > is not able to send the entire WAL. Currently, in such cases, the\n> > > shutdown fails. As per our current understanding, this can happen when\n> > > (a) walreceiver/walapply process is stuck (not able to receive more\n> > > WAL) due to locks or some other reason; (b) a long time delay has been\n> > > configured to apply the WAL (we don't yet have such a feature for\n> > > logical replication but the discussion for same is in progress).\n> > >\n> > > Both reasons mostly apply to logical replication because there is no\n> > > separate walreceiver process whose job is to just flush the WAL. In\n> > > logical replication, the process that receives the WAL also applies\n> > > it. So, while applying it can stuck for a long time waiting for some\n> > > heavy-weight lock to be released by some other long-running\n> > > transaction by the backend.\n> > >\n...\n...\n>\n> +1 to eliminate condition (b) for logical replication.\n>\n> Regarding (a), as Amit mentioned before[1], I think we should check if\n> pq_is_send_pending() is false.\n>\n\nSorry, but your suggestion is not completely clear to me. Do you mean\nto say that for logical replication, we shouldn't wait for all the WAL\nto be successfully replicated but we should ensure to inform the\nsubscriber that XLOG streaming is done (by ensuring\npq_is_send_pending() is false and by calling EndCommand, pq_flush())?\n\n> Otherwise, we will end up terminating\n> the WAL stream without the done message. Which will lead to an error\n> message \"ERROR: could not receive data from WAL stream: server closed\n> the connection unexpectedly\" on the subscriber even at a clean\n> shutdown.\n>\n\nBut will that be a problem? As per docs of shutdown [1] ( “Smart” mode\ndisallows new connections, then waits for all existing clients to\ndisconnect. If the server is in hot standby, recovery and streaming\nreplication will be terminated once all clients have disconnected.),\nthere is no such guarantee. I see that it is required for the\nswitchover in physical replication to ensure that all the WAL is sent\nand replicated but we don't need that for logical replication.\n\n> In a case where pq_is_send_pending() doesn't become false\n> for a long time, (e.g., the network socket buffer got full due to the\n> apply worker waiting on a lock), I think users should unblock it by\n> themselves. Or it might be practically better to shutdown the\n> subscriber first in the logical replication case, unlike the physical\n> replication case.\n>\n\nYeah, will users like such a dependency? And what will they gain by doing so?\n\n\n[1] - https://www.postgresql.org/docs/devel/app-pg-ctl.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 1 Feb 2023 14:58:14 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Wed, 1 Feb 2023 14:58:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Wed, Feb 1, 2023 at 2:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Otherwise, we will end up terminating\n> > the WAL stream without the done message. Which will lead to an error\n> > message \"ERROR: could not receive data from WAL stream: server closed\n> > the connection unexpectedly\" on the subscriber even at a clean\n> > shutdown.\n> >\n> \n> But will that be a problem? As per docs of shutdown [1] ( “Smart” mode\n> disallows new connections, then waits for all existing clients to\n> disconnect. If the server is in hot standby, recovery and streaming\n> replication will be terminated once all clients have disconnected.),\n> there is no such guarantee. I see that it is required for the\n> switchover in physical replication to ensure that all the WAL is sent\n> and replicated but we don't need that for logical replication.\n\n+1\n\nSince publisher is not aware of apply-delay (by this patch), as a\nmatter of fact publisher seems gone before sending EOS in that\ncase. The error message is correctly describing that situation.\n\n> > In a case where pq_is_send_pending() doesn't become false\n> > for a long time, (e.g., the network socket buffer got full due to the\n> > apply worker waiting on a lock), I think users should unblock it by\n> > themselves. Or it might be practically better to shutdown the\n> > subscriber first in the logical replication case, unlike the physical\n> > replication case.\n> >\n> \n> Yeah, will users like such a dependency? And what will they gain by doing so?\n\nIf PostgreSQL required such kind of special care about shutdown at\nfacing a trouble to keep replication consistency, that won't be\nacceptable. The current time-delayed logical replication can be seen\nas a kind of intentional continuous large network lag in this\naspect. And I think the consistency is guaranteed even in such cases.\n\nOn the other hand I don't think the almost all people care about the\nexact progress when facing such troubles, as far as replication\nconsistently is maintained. IMHO that is also true for the\nlogical-delayed-replication case.\n\n> [1] - https://www.postgresql.org/docs/devel/app-pg-ctl.html\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 02 Feb 2023 13:34:23 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wed, Feb 1, 2023 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Feb 1, 2023 at 2:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > On Fri, Jan 20, 2023 at 7:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > On Tue, Jan 17, 2023 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > >\n> > > > Let me try to summarize the discussion till now. The problem we are\n> > > > trying to solve here is to allow a shutdown to complete when walsender\n> > > > is not able to send the entire WAL. Currently, in such cases, the\n> > > > shutdown fails. As per our current understanding, this can happen when\n> > > > (a) walreceiver/walapply process is stuck (not able to receive more\n> > > > WAL) due to locks or some other reason; (b) a long time delay has been\n> > > > configured to apply the WAL (we don't yet have such a feature for\n> > > > logical replication but the discussion for same is in progress).\n> > > >\n> > > > Both reasons mostly apply to logical replication because there is no\n> > > > separate walreceiver process whose job is to just flush the WAL. In\n> > > > logical replication, the process that receives the WAL also applies\n> > > > it. So, while applying it can stuck for a long time waiting for some\n> > > > heavy-weight lock to be released by some other long-running\n> > > > transaction by the backend.\n> > > >\n> ...\n> ...\n> >\n> > +1 to eliminate condition (b) for logical replication.\n> >\n> > Regarding (a), as Amit mentioned before[1], I think we should check if\n> > pq_is_send_pending() is false.\n> >\n>\n> Sorry, but your suggestion is not completely clear to me. Do you mean\n> to say that for logical replication, we shouldn't wait for all the WAL\n> to be successfully replicated but we should ensure to inform the\n> subscriber that XLOG streaming is done (by ensuring\n> pq_is_send_pending() is false and by calling EndCommand, pq_flush())?\n\nYes.\n\n>\n> > Otherwise, we will end up terminating\n> > the WAL stream without the done message. Which will lead to an error\n> > message \"ERROR: could not receive data from WAL stream: server closed\n> > the connection unexpectedly\" on the subscriber even at a clean\n> > shutdown.\n> >\n>\n> But will that be a problem? As per docs of shutdown [1] ( “Smart” mode\n> disallows new connections, then waits for all existing clients to\n> disconnect. If the server is in hot standby, recovery and streaming\n> replication will be terminated once all clients have disconnected.),\n> there is no such guarantee.\n\nIn smart shutdown case, the walsender doesn't exit until it can flush\nthe done message, no?\n\n> I see that it is required for the\n> switchover in physical replication to ensure that all the WAL is sent\n> and replicated but we don't need that for logical replication.\n\nIt won't be a problem in practice in terms of logical replication. But\nI'm concerned that this error could confuse users. Is there any case\nwhere the client gets such an error at the smart shutdown?\n\n>\n> > In a case where pq_is_send_pending() doesn't become false\n> > for a long time, (e.g., the network socket buffer got full due to the\n> > apply worker waiting on a lock), I think users should unblock it by\n> > themselves. Or it might be practically better to shutdown the\n> > subscriber first in the logical replication case, unlike the physical\n> > replication case.\n> >\n>\n> Yeah, will users like such a dependency? And what will they gain by doing so?\n\nIIUC there is no difference between smart shutdown and fast shutdown\nin logical replication walsender, but reading the doc[1], it seems to\nme that in the smart shutdown mode, the server stops existing sessions\nnormally. For example, If the client is psql that gets stuck for some\nreason and the network buffer gets full, the smart shutdown waits for\na backend process to send all results to the client. I think the\nlogical replication walsender should follow this behavior for\nconsistency. One idea is to distinguish smart shutdown and fast\nshutdown also in logical replication walsender so that we disconnect\neven without the done message in fast shutdown mode, but I'm not sure\nit's worthwhile.\n\nRegards,\n\n[1] https://www.postgresql.org/docs/devel/server-shutdown.html\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Feb 2023 14:17:55 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 10:48 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Wed, Feb 1, 2023 at 6:28 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> >\n> > > In a case where pq_is_send_pending() doesn't become false\n> > > for a long time, (e.g., the network socket buffer got full due to the\n> > > apply worker waiting on a lock), I think users should unblock it by\n> > > themselves. Or it might be practically better to shutdown the\n> > > subscriber first in the logical replication case, unlike the physical\n> > > replication case.\n> > >\n> >\n> > Yeah, will users like such a dependency? And what will they gain by doing so?\n>\n> IIUC there is no difference between smart shutdown and fast shutdown\n> in logical replication walsender, but reading the doc[1], it seems to\n> me that in the smart shutdown mode, the server stops existing sessions\n> normally. For example, If the client is psql that gets stuck for some\n> reason and the network buffer gets full, the smart shutdown waits for\n> a backend process to send all results to the client. I think the\n> logical replication walsender should follow this behavior for\n> consistency. One idea is to distinguish smart shutdown and fast\n> shutdown also in logical replication walsender so that we disconnect\n> even without the done message in fast shutdown mode, but I'm not sure\n> it's worthwhile.\n>\n\nThe main problem we want to solve here is to avoid shutdown failing in\ncase walreceiver/applyworker is busy waiting for some lock or for some\nother reason as shown in the email [1]. I haven't tested it but if\nsuch a problem doesn't exist in smart shutdown mode then probably we\ncan allow walsender to wait till all the data is sent. We can once\ninvestigate what it takes to introduce shutdown mode knowledge for\nlogical walsender. OTOH, the docs for smart shutdown says \"If the\nserver is in hot standby, recovery and streaming replication will be\nterminated once all clients have disconnected.\" which to me indicates\nthat it is okay to terminate logical replication connections even in\nsmart mode.\n\n[1] - https://www.postgresql.org/message-id/TYAPR01MB58669CB06F6657ABCEFE6555F5F29%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 11:21:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Thu, Feb 2, 2023 at 10:04 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 1 Feb 2023 14:58:14 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in\n> > On Wed, Feb 1, 2023 at 2:09 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> > >\n> > > Otherwise, we will end up terminating\n> > > the WAL stream without the done message. Which will lead to an error\n> > > message \"ERROR: could not receive data from WAL stream: server closed\n> > > the connection unexpectedly\" on the subscriber even at a clean\n> > > shutdown.\n> > >\n> >\n> > But will that be a problem? As per docs of shutdown [1] ( “Smart” mode\n> > disallows new connections, then waits for all existing clients to\n> > disconnect. If the server is in hot standby, recovery and streaming\n> > replication will be terminated once all clients have disconnected.),\n> > there is no such guarantee. I see that it is required for the\n> > switchover in physical replication to ensure that all the WAL is sent\n> > and replicated but we don't need that for logical replication.\n>\n> +1\n>\n> Since publisher is not aware of apply-delay (by this patch), as a\n> matter of fact publisher seems gone before sending EOS in that\n> case. The error message is correctly describing that situation.\n>\n\nThis can happen even without apply-delay patch. For example, when\napply process is waiting on some lock.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 2 Feb 2023 11:23:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit, Sawada-san,\r\n\r\n> > IIUC there is no difference between smart shutdown and fast shutdown\r\n> > in logical replication walsender, but reading the doc[1], it seems to\r\n> > me that in the smart shutdown mode, the server stops existing sessions\r\n> > normally. For example, If the client is psql that gets stuck for some\r\n> > reason and the network buffer gets full, the smart shutdown waits for\r\n> > a backend process to send all results to the client. I think the\r\n> > logical replication walsender should follow this behavior for\r\n> > consistency. One idea is to distinguish smart shutdown and fast\r\n> > shutdown also in logical replication walsender so that we disconnect\r\n> > even without the done message in fast shutdown mode, but I'm not sure\r\n> > it's worthwhile.\r\n> >\r\n> \r\n> The main problem we want to solve here is to avoid shutdown failing in\r\n> case walreceiver/applyworker is busy waiting for some lock or for some\r\n> other reason as shown in the email [1]. I haven't tested it but if\r\n> such a problem doesn't exist in smart shutdown mode then probably we\r\n> can allow walsender to wait till all the data is sent.\r\n\r\nBased on that idea, I made a PoC patch to introduce the smart shutdown to walsenders.\r\nPSA 0002 patch. 0001 is not changed from v5.\r\nWhen logical walsenders get a shutdown request but their send buffer is full due to\r\nthe delay, they will:\r\n\r\n* wait until pending data has been sent to the subscriber if we are in smart shutdown mode\r\n* exit immediately if we are in fast shutdown mode\r\n\r\nNote that in both cases, walsender does not wait for the remote flush of WALs.\r\n\r\nFor implementing that, I added a new attribute to WalSndCtlData that indicates the\r\nshutdown status. Basically it is zero, but it will be changed by postmaster when\r\nit gets the request.\r\n\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 3 Feb 2023 12:08:48 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Fri, Feb 3, 2023 at 5:38 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> Dear Amit, Sawada-san,\n>\n> > > IIUC there is no difference between smart shutdown and fast shutdown\n> > > in logical replication walsender, but reading the doc[1], it seems to\n> > > me that in the smart shutdown mode, the server stops existing sessions\n> > > normally. For example, If the client is psql that gets stuck for some\n> > > reason and the network buffer gets full, the smart shutdown waits for\n> > > a backend process to send all results to the client. I think the\n> > > logical replication walsender should follow this behavior for\n> > > consistency. One idea is to distinguish smart shutdown and fast\n> > > shutdown also in logical replication walsender so that we disconnect\n> > > even without the done message in fast shutdown mode, but I'm not sure\n> > > it's worthwhile.\n> > >\n> >\n> > The main problem we want to solve here is to avoid shutdown failing in\n> > case walreceiver/applyworker is busy waiting for some lock or for some\n> > other reason as shown in the email [1].\n> >\n\nFor this problem, doesn't using -t (timeout) avoid it? So, if there is\npending WAL, users can always use the -t option to allow the shutdown to\ncomplete. Now, I agree that it is not very clear how much time to\nspecify but a user has some option to allow the shutdown to complete.\nI am not saying that teaching walsenders about shutdown modes is\ncompletely a bad idea but it doesn't seem necessary to allow shutdowns\nto complete.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 4 Feb 2023 16:34:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-02 11:21:54 +0530, Amit Kapila wrote:\n> The main problem we want to solve here is to avoid shutdown failing in\n> case walreceiver/applyworker is busy waiting for some lock or for some\n> other reason as shown in the email [1].\n\nIsn't handling this part of the job of wal_sender_timeout?\n\n\nI don't at all agree that it's ok to just stop replicating changes\nbecause we're blocked on network IO. The patch justifies this with:\n\n> Currently, at shutdown, walsender processes wait to send all pending data and\n> ensure the all data is flushed in remote node. This mechanism was added by\n> 985bd7 for supporting clean switch over, but such use-case cannot be supported\n> for logical replication. This commit remove the blocking in the case.\n\nand at the start of the thread with:\n\n> In case of logical replication, however, we cannot support the use-case that\n> switches the role publisher <-> subscriber. Suppose same case as above, additional\n> transactions are committed while doing step2. To catch up such changes subscriber\n> must receive WALs related with trans, but it cannot be done because subscriber\n> cannot request WALs from the specific position. In the case, we must truncate all\n> data in new subscriber once, and then create new subscription with copy_data\n> = true.\n\nBut that seems a too narrow view to me. Imagine you want to decomission\nthe current primary, and instead start to use the logical standby as the\nprimary. For that you'd obviously want to replicate the last few\nchanges. But with the proposed change, that'd be hard to ever achieve.\n\nNote that even disallowing any writes on the logical primary would make\nit hard to be sure that everything is replicated, because autovacuum,\nbgwriter, checkpointer all can continue to write WAL. Without being able\nto check that the last LSN has indeed been sent out, how do you know\nthat you didn't miss something?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 4 Feb 2023 05:01:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Sat, Feb 4, 2023 at 6:31 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-02 11:21:54 +0530, Amit Kapila wrote:\n> > The main problem we want to solve here is to avoid shutdown failing in\n> > case walreceiver/applyworker is busy waiting for some lock or for some\n> > other reason as shown in the email [1].\n>\n> Isn't handling this part of the job of wal_sender_timeout?\n>\n\nIn some cases, it is not clear whether we can handle it by\nwal_sender_timeout. Consider a case of a time-delayed replica where\nthe applyworker will keep sending some response/alive message so that\nwalsender doesn't timeout in that (during delay) period. In that case,\nbecause walsender won't timeout, the shutdown will fail (with the\nfailed message) even though it will be complete after the walsender is\nable to send all the WAL and shutdown. The time-delayed replica patch\nis still under discussion [1]. Also, for large values of\nwal_sender_timeout, it will wait till the walsender times out and can\nreturn with a failed message.\n\n>\n> I don't at all agree that it's ok to just stop replicating changes\n> because we're blocked on network IO. The patch justifies this with:\n>\n> > Currently, at shutdown, walsender processes wait to send all pending data and\n> > ensure the all data is flushed in remote node. This mechanism was added by\n> > 985bd7 for supporting clean switch over, but such use-case cannot be supported\n> > for logical replication. This commit remove the blocking in the case.\n>\n> and at the start of the thread with:\n>\n> > In case of logical replication, however, we cannot support the use-case that\n> > switches the role publisher <-> subscriber. Suppose same case as above, additional\n> > transactions are committed while doing step2. To catch up such changes subscriber\n> > must receive WALs related with trans, but it cannot be done because subscriber\n> > cannot request WALs from the specific position. In the case, we must truncate all\n> > data in new subscriber once, and then create new subscription with copy_data\n> > = true.\n>\n> But that seems a too narrow view to me. Imagine you want to decomission\n> the current primary, and instead start to use the logical standby as the\n> primary. For that you'd obviously want to replicate the last few\n> changes. But with the proposed change, that'd be hard to ever achieve.\n>\n\nI think that can still be achieved with the idea being discussed which\nis to keep allowing sending the WAL for smart shutdown mode but not\nfor other modes(fast, immediate). I don't know whether it is a good\nidea or not but Kuroda-San has produced a POC patch for it. We can\ninstead choose to improve our docs related to shutdown to explain a\nbit more about the shutdown's interaction with (logical and physical)\nreplication. As of now, it says: (“Smart” mode disallows new\nconnections, then waits for all existing clients to disconnect. If the\nserver is in hot standby, recovery and streaming replication will be\nterminated once all clients have disconnected.)[2]. Here, it is not\nclear that shutdown will wait for sending and flushing all the WALs.\nThe information for fast and immediate modes is even lesser which\nmakes it more difficult to understand what kind of behavior is\nexpected in those modes.\n\n[1] - https://commitfest.postgresql.org/42/3581/\n[2] - https://www.postgresql.org/docs/devel/app-pg-ctl.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 09:59:19 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Hi, \n\nOn February 5, 2023 8:29:19 PM PST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>On Sat, Feb 4, 2023 at 6:31 PM Andres Freund <andres@anarazel.de> wrote:\n>>\n>> On 2023-02-02 11:21:54 +0530, Amit Kapila wrote:\n>> > The main problem we want to solve here is to avoid shutdown failing in\n>> > case walreceiver/applyworker is busy waiting for some lock or for some\n>> > other reason as shown in the email [1].\n>>\n>> Isn't handling this part of the job of wal_sender_timeout?\n>>\n>\n>In some cases, it is not clear whether we can handle it by\n>wal_sender_timeout. Consider a case of a time-delayed replica where\n>the applyworker will keep sending some response/alive message so that\n>walsender doesn't timeout in that (during delay) period. In that case,\n>because walsender won't timeout, the shutdown will fail (with the\n>failed message) even though it will be complete after the walsender is\n>able to send all the WAL and shutdown. The time-delayed replica patch\n>is still under discussion [1]. Also, for large values of\n>wal_sender_timeout, it will wait till the walsender times out and can\n>return with a failed message.\n>\n>>\n>> I don't at all agree that it's ok to just stop replicating changes\n>> because we're blocked on network IO. The patch justifies this with:\n>>\n>> > Currently, at shutdown, walsender processes wait to send all pending data and\n>> > ensure the all data is flushed in remote node. This mechanism was added by\n>> > 985bd7 for supporting clean switch over, but such use-case cannot be supported\n>> > for logical replication. This commit remove the blocking in the case.\n>>\n>> and at the start of the thread with:\n>>\n>> > In case of logical replication, however, we cannot support the use-case that\n>> > switches the role publisher <-> subscriber. Suppose same case as above, additional\n>> > transactions are committed while doing step2. To catch up such changes subscriber\n>> > must receive WALs related with trans, but it cannot be done because subscriber\n>> > cannot request WALs from the specific position. In the case, we must truncate all\n>> > data in new subscriber once, and then create new subscription with copy_data\n>> > = true.\n>>\n>> But that seems a too narrow view to me. Imagine you want to decomission\n>> the current primary, and instead start to use the logical standby as the\n>> primary. For that you'd obviously want to replicate the last few\n>> changes. But with the proposed change, that'd be hard to ever achieve.\n>>\n>\n>I think that can still be achieved with the idea being discussed which\n>is to keep allowing sending the WAL for smart shutdown mode but not\n>for other modes(fast, immediate). I don't know whether it is a good\n>idea or not but Kuroda-San has produced a POC patch for it. We can\n>instead choose to improve our docs related to shutdown to explain a\n>bit more about the shutdown's interaction with (logical and physical)\n>replication. As of now, it says: (“Smart” mode disallows new\n>connections, then waits for all existing clients to disconnect. If the\n>server is in hot standby, recovery and streaming replication will be\n>terminated once all clients have disconnected.)[2]. Here, it is not\n>clear that shutdown will wait for sending and flushing all the WALs.\n>The information for fast and immediate modes is even lesser which\n>makes it more difficult to understand what kind of behavior is\n>expected in those modes.\n>\n>[1] - https://commitfest.postgresql.org/42/3581/\n>[2] - https://www.postgresql.org/docs/devel/app-pg-ctl.html\n>\n\nSmart shutdown is practically unusable. I don't think it makes sense to tie behavior of walsender to it in any way. \n\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Sun, 05 Feb 2023 21:03:03 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Mon, Feb 6, 2023 at 10:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On February 5, 2023 8:29:19 PM PST, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> But that seems a too narrow view to me. Imagine you want to decomission\n> >> the current primary, and instead start to use the logical standby as the\n> >> primary. For that you'd obviously want to replicate the last few\n> >> changes. But with the proposed change, that'd be hard to ever achieve.\n> >>\n> >\n> >I think that can still be achieved with the idea being discussed which\n> >is to keep allowing sending the WAL for smart shutdown mode but not\n> >for other modes(fast, immediate). I don't know whether it is a good\n> >idea or not but Kuroda-San has produced a POC patch for it. We can\n> >instead choose to improve our docs related to shutdown to explain a\n> >bit more about the shutdown's interaction with (logical and physical)\n> >replication. As of now, it says: (“Smart” mode disallows new\n> >connections, then waits for all existing clients to disconnect. If the\n> >server is in hot standby, recovery and streaming replication will be\n> >terminated once all clients have disconnected.)[2]. Here, it is not\n> >clear that shutdown will wait for sending and flushing all the WALs.\n> >The information for fast and immediate modes is even lesser which\n> >makes it more difficult to understand what kind of behavior is\n> >expected in those modes.\n> >\n> >[1] - https://commitfest.postgresql.org/42/3581/\n> >[2] - https://www.postgresql.org/docs/devel/app-pg-ctl.html\n> >\n>\n> Smart shutdown is practically unusable. I don't think it makes sense to tie behavior of walsender to it in any way.\n>\n\nSo, we have the following options: (a) do nothing for this; (b)\nclarify the current behavior in docs. Any suggestions?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 6 Feb 2023 12:23:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-06 12:23:54 +0530, Amit Kapila wrote:\n> On Mon, Feb 6, 2023 at 10:33 AM Andres Freund <andres@anarazel.de> wrote:\n> > Smart shutdown is practically unusable. I don't think it makes sense to tie behavior of walsender to it in any way.\n> >\n> \n> So, we have the following options: (a) do nothing for this; (b)\n> clarify the current behavior in docs. Any suggestions?\n\nb) seems good.\n\nI also think it'd make sense to improve this on a code-level. Just not in the\nwholesale way discussed so far.\n\nHow about we make it an option in START_REPLICATION? Delayed logical rep can\ntoggle that on by default.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 12:34:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Tue, Feb 7, 2023 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-06 12:23:54 +0530, Amit Kapila wrote:\n> > On Mon, Feb 6, 2023 at 10:33 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Smart shutdown is practically unusable. I don't think it makes sense to tie behavior of walsender to it in any way.\n> > >\n> >\n> > So, we have the following options: (a) do nothing for this; (b)\n> > clarify the current behavior in docs. Any suggestions?\n>\n> b) seems good.\n>\n> I also think it'd make sense to improve this on a code-level. Just not in the\n> wholesale way discussed so far.\n>\n> How about we make it an option in START_REPLICATION? Delayed logical rep can\n> toggle that on by default.\n>\n\nWorks for me. So, when this option is set in START_REPLICATION\nmessage, walsender will set some flag and allow itself to exit at\nshutdown without waiting for WAL to be sent?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 7 Feb 2023 09:00:13 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-07 09:00:13 +0530, Amit Kapila wrote:\n> On Tue, Feb 7, 2023 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > How about we make it an option in START_REPLICATION? Delayed logical rep can\n> > toggle that on by default.\n\n> Works for me. So, when this option is set in START_REPLICATION\n> message, walsender will set some flag and allow itself to exit at\n> shutdown without waiting for WAL to be sent?\n\nYes. I think that might be useful in other situations as well, but we don't\nneed to make those configurable initially. But I imagine it'd be useful to set\nthings up so that non-HA physical replicas don't delay shutdown, particularly\nif they're geographically far away.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 6 Feb 2023 20:49:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Andres, Amit,\n\n> On 2023-02-07 09:00:13 +0530, Amit Kapila wrote:\n> > On Tue, Feb 7, 2023 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > > How about we make it an option in START_REPLICATION? Delayed logical\n> rep can\n> > > toggle that on by default.\n> \n> > Works for me. So, when this option is set in START_REPLICATION\n> > message, walsender will set some flag and allow itself to exit at\n> > shutdown without waiting for WAL to be sent?\n> \n> Yes. I think that might be useful in other situations as well, but we don't\n> need to make those configurable initially. But I imagine it'd be useful to set\n> things up so that non-HA physical replicas don't delay shutdown, particularly\n> if they're geographically far away.\n\nBased on the discussion, I made a patch for adding a walsender option\nexit_before_confirming to the START_STREAMING replication command. It can be\nused for both physical and logical replication. I made the patch with\nextendibility - it allows adding further options.\nAnd better naming are very welcome.\n\nFor physical replication, the grammar was slightly changed like a logical one.\nIt can now accept options but currently, only one option is allowed. And it is\nnot used in normal streaming replication. For logical replication, the option is\ncombined with options for the output plugin. Of course, we can modify the API to\nbetter style.\n\n0001 patch was ported from time-delayed logical replication thread[1], which uses\nthe added option. When the min_apply_delay option is specified and publisher seems\nto be PG16 or later, the apply worker sends a START_REPLICATION query with\nexit_before_confirming = true. And the worker will reboot and send START_REPLICATION\nagain when min_apply_delay is changed from zero to a non-zero value or non-zero to zero.\n\nNote that I removed version number because the approach is completely changed.\n\n[1]: https://www.postgresql.org/message-id/TYCPR01MB8373BA483A6D2C924C600968EDDB9@TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 7 Feb 2023 14:41:13 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "> Dear Andres, Amit,\n> \n> > On 2023-02-07 09:00:13 +0530, Amit Kapila wrote:\n> > > On Tue, Feb 7, 2023 at 2:04 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > How about we make it an option in START_REPLICATION? Delayed logical\n> > rep can\n> > > > toggle that on by default.\n> >\n> > > Works for me. So, when this option is set in START_REPLICATION\n> > > message, walsender will set some flag and allow itself to exit at\n> > > shutdown without waiting for WAL to be sent?\n> >\n> > Yes. I think that might be useful in other situations as well, but we don't\n> > need to make those configurable initially. But I imagine it'd be useful to set\n> > things up so that non-HA physical replicas don't delay shutdown, particularly\n> > if they're geographically far away.\n> \n> Based on the discussion, I made a patch for adding a walsender option\n> exit_before_confirming to the START_STREAMING replication command. It can\n> be\n> used for both physical and logical replication. I made the patch with\n> extendibility - it allows adding further options.\n> And better naming are very welcome.\n> \n> For physical replication, the grammar was slightly changed like a logical one.\n> It can now accept options but currently, only one option is allowed. And it is\n> not used in normal streaming replication. For logical replication, the option is\n> combined with options for the output plugin. Of course, we can modify the API to\n> better style.\n> \n> 0001 patch was ported from time-delayed logical replication thread[1], which\n> uses\n> the added option. When the min_apply_delay option is specified and publisher\n> seems\n> to be PG16 or later, the apply worker sends a START_REPLICATION query with\n> exit_before_confirming = true. And the worker will reboot and send\n> START_REPLICATION\n> again when min_apply_delay is changed from zero to a non-zero value or non-zero\n> to zero.\n> \n> Note that I removed version number because the approach is completely changed.\n> \n> [1]:\n> https://www.postgresql.org/message-id/TYCPR01MB8373BA483A6D2C924C60\n> 0968EDDB9@TYCPR01MB8373.jpnprd01.prod.outlook.com\n\nI noticed that previous ones are rejected by cfbot, even if they passed on my environment...\nPSA fixed version.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 7 Feb 2023 16:07:07 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "> I noticed that previous ones are rejected by cfbot, even if they passed on my\n> environment...\n> PSA fixed version.\n\nWhile analyzing more, I found the further bug that forgets initialization.\nPSA new version that could be passed automated tests on my github repository.\nSorry for noise.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Tue, 7 Feb 2023 17:08:54 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "I agree to the direction and thanks for the patch.\n\nAt Tue, 7 Feb 2023 17:08:54 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> > I noticed that previous ones are rejected by cfbot, even if they passed on my\n> > environment...\n> > PSA fixed version.\n> \n> While analyzing more, I found the further bug that forgets initialization.\n> PSA new version that could be passed automated tests on my github repository.\n> Sorry for noise.\n\n0002:\n\nThis patch doesn't seem to offer a means to change the default\nwalsender behavior. We need a subscription option named like\n\"walsender_exit_mode\" to do that.\n\n\n+ConsumeWalsenderOptions(List *options, WalSndData *data)\n\nI wonder if it is the right design to put options for different things\ninto a single list. I rather choose to embed the walsender option in\nthe syntax than needing this function.\n\nK_START_REPLICATION opt_slot opt_physical RECPTR opt_timeline opt_shutdown_mode\n\nK_START_REPLICATION K_SLOTIDENT K_LOGICAL RECPTR opt_shutdown_mode plugin_options\n\nwhere opt_shutdown_mode would be like \"SHUTDOWN_MODE immediate\".\n\n\n======\nIf we go with the current design, I think it is better to share the\noption list rule between the logical and physical START_REPLCIATION\ncommands.\n\nI'm not sure I like the option syntax\n\"exit_before_confirming=<Boolean>\". I imagin that other options may\ncome in future. Thus, how about \"walsender_shutdown_mode=<mode>\",\nwhere the mode is one of \"wait_flush\"(default) and \"immediate\"?\n\n\n+typedef struct\n+{\n+\tbool\t\texit_before_confirming;\n+} WalSndData;\n\nData doesn't seem to represent the variable. Why not WalSndOptions?\n\n\n-\t\t!equal(newsub->publications, MySubscription->publications))\n+\t\t!equal(newsub->publications, MySubscription->publications) ||\n+\t\t(newsub->minapplydelay > 0 && MySubscription->minapplydelay == 0) ||\n+\t\t(newsub->minapplydelay == 0 && MySubscription->minapplydelay > 0))\n\n I slightly prefer the following expression (Others may disagree:p):\n\n ((newsub->minapplydelay == 0) != (MySubscription->minapplydelay == 0))\n\n And I think we need a comment for the term. For example,\n\n /* minapplydelay affects START_REPLICATION option exit_before_confirming */\n\n\n+ * Reads all entrly of the list and consume if needed.\ns/entrly/entries/ ?\n...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Feb 2023 11:27:17 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wed, Feb 8, 2023 at 7:57 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> I agree to the direction and thanks for the patch.\n>\n> At Tue, 7 Feb 2023 17:08:54 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > > I noticed that previous ones are rejected by cfbot, even if they passed on my\n> > > environment...\n> > > PSA fixed version.\n> >\n> > While analyzing more, I found the further bug that forgets initialization.\n> > PSA new version that could be passed automated tests on my github repository.\n> > Sorry for noise.\n>\n> 0002:\n>\n> This patch doesn't seem to offer a means to change the default\n> walsender behavior. We need a subscription option named like\n> \"walsender_exit_mode\" to do that.\n>\n\nI don't think at this stage we need a subscription-level option, we\ncan extend it later if this is really useful for users. For now, we\ncan set this new option when min_apply_delay > 0.\n\n>\n> +ConsumeWalsenderOptions(List *options, WalSndData *data)\n>\n> I wonder if it is the right design to put options for different things\n> into a single list. I rather choose to embed the walsender option in\n> the syntax than needing this function.\n>\n> K_START_REPLICATION opt_slot opt_physical RECPTR opt_timeline opt_shutdown_mode\n>\n> K_START_REPLICATION K_SLOTIDENT K_LOGICAL RECPTR opt_shutdown_mode plugin_options\n>\n> where opt_shutdown_mode would be like \"SHUTDOWN_MODE immediate\".\n>\n\nThe other option could have been that we just add it as a\nplugin_option for logical replication but it doesn't seem to match\nwith the other plugin options. I think it would be better to have it\nas a separate option something like opt_shutdown_immediate and extend\nthe logical replication syntax for now. We can later extend physical\nreplication syntax when we want to expose such an option via physical\nreplication.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 8 Feb 2023 11:06:08 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\nThanks for giving comments!\r\n\r\n> >\r\n> > 0002:\r\n> >\r\n> > This patch doesn't seem to offer a means to change the default\r\n> > walsender behavior. We need a subscription option named like\r\n> > \"walsender_exit_mode\" to do that.\r\n> >\r\n> \r\n> I don't think at this stage we need a subscription-level option, we\r\n> can extend it later if this is really useful for users. For now, we\r\n> can set this new option when min_apply_delay > 0.\r\n\r\nAgreed. I wanted to keep the feature closed for PG16 and then will extend if needed.\r\n\r\n> >\r\n> > +ConsumeWalsenderOptions(List *options, WalSndData *data)\r\n> >\r\n> > I wonder if it is the right design to put options for different things\r\n> > into a single list. I rather choose to embed the walsender option in\r\n> > the syntax than needing this function.\r\n> >\r\n> > K_START_REPLICATION opt_slot opt_physical RECPTR opt_timeline\r\n> opt_shutdown_mode\r\n> >\r\n> > K_START_REPLICATION K_SLOTIDENT K_LOGICAL RECPTR\r\n> opt_shutdown_mode plugin_options\r\n> >\r\n> > where opt_shutdown_mode would be like \"SHUTDOWN_MODE immediate\".\r\n> >\r\n> \r\n> The other option could have been that we just add it as a\r\n> plugin_option for logical replication but it doesn't seem to match\r\n> with the other plugin options. I think it would be better to have it\r\n> as a separate option something like opt_shutdown_immediate and extend\r\n> the logical replication syntax for now. We can later extend physical\r\n> replication syntax when we want to expose such an option via physical\r\n> replication.\r\n\r\nThe main intention for us is to shut down logical walsenders. Therefore, same as above,\r\nI want to develop the feature for logical replication once and then try to extend if we want.\r\nTBH I think adding physicalrep support seems not to be so hard,\r\nbut I want to keep the patch smaller.\r\n\r\nThe new patch will be attached soon in another mail.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Wed, 8 Feb 2023 07:59:09 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Horiguchi-san,\n\nThank you for checking the patch! PSA new version.\n\n> 0002:\n> \n> This patch doesn't seem to offer a means to change the default\n> walsender behavior. We need a subscription option named like\n> \"walsender_exit_mode\" to do that.\n\nAs I said in another mail[1], I'm thinking the feature does not have to be\nused alone for now.\n\n> +ConsumeWalsenderOptions(List *options, WalSndData *data)\n> \n> I wonder if it is the right design to put options for different things\n> into a single list. I rather choose to embed the walsender option in\n> the syntax than needing this function.\n> \n> K_START_REPLICATION opt_slot opt_physical RECPTR opt_timeline\n> opt_shutdown_mode\n> \n> K_START_REPLICATION K_SLOTIDENT K_LOGICAL RECPTR\n> opt_shutdown_mode plugin_options\n> \n> where opt_shutdown_mode would be like \"SHUTDOWN_MODE immediate\".\n\nRight, the option handling was quite bad. I added new syntax opt_shutdown_mode\nto logical replication. And many codes were modified accordingly.\nNote that based on the other discussion, I removed codes\nfor supporting physical replication but tried to keep the extensibility.\n\n> ======\n> If we go with the current design, I think it is better to share the\n> option list rule between the logical and physical START_REPLCIATION\n> commands.\n> \n> I'm not sure I like the option syntax\n> \"exit_before_confirming=<Boolean>\". I imagin that other options may\n> come in future. Thus, how about \"walsender_shutdown_mode=<mode>\",\n> where the mode is one of \"wait_flush\"(default) and \"immediate\"?\n\nSeems better, I changed to from boolean to enumeration.\n\n> +typedef struct\n> +{\n> +\tbool\t\texit_before_confirming;\n> +} WalSndData;\n> \n> Data doesn't seem to represent the variable. Why not WalSndOptions?\n\nThis is inspired by PGOutputData, but I prefer your idea. Fixed.\n\n> -\t\t!equal(newsub->publications, MySubscription->publications))\n> +\t\t!equal(newsub->publications, MySubscription->publications) ||\n> +\t\t(newsub->minapplydelay > 0 &&\n> MySubscription->minapplydelay == 0) ||\n> +\t\t(newsub->minapplydelay == 0 &&\n> MySubscription->minapplydelay > 0))\n> \n> I slightly prefer the following expression (Others may disagree:p):\n> \n> ((newsub->minapplydelay == 0) != (MySubscription->minapplydelay == 0))\n\nI think conditions for the same parameter should be aligned one line,\nSo your posted seems better. Fixed.\n\n> \n> And I think we need a comment for the term. For example,\n> \n> /* minapplydelay affects START_REPLICATION option exit_before_confirming\n> */\n\nAdded just above the condition.\n\n> + * Reads all entrly of the list and consume if needed.\n> s/entrly/entries/ ?\n> ...\n\nThis part is no longer needed.\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866D3EC780D251953BDE7FAF5D89%40TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 8 Feb 2023 08:01:24 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "> \n> Dear Horiguchi-san,\n> \n> Thank you for checking the patch! PSA new version.\n\nPSA rebased patch that supports updated time-delayed patch[1].\n\n[1]: https://www.postgresql.org/message-id/TYAPR01MB5866C11DAF8AB04F3CC181D3F5D89@TYAPR01MB5866.jpnprd01.prod.outlook.com\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Wed, 8 Feb 2023 09:47:08 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Wednesday, February 8, 2023 6:47 PM Hayato Kuroda (Fujitsu) <kuroda.hayato@fujitsu.com> wrote:\n> PSA rebased patch that supports updated time-delayed patch[1].\nHi,\n\nThanks for creating the patch ! Minor review comments on v5-0002.\n\n(1)\n\n+ Decides the condition for exiting the walsender process.\n+ <literal>'wait_flush'</literal>, which is the default, the walsender\n+ will wait for all the sent WALs to be flushed on the subscriber side,\n+ before exiting the process. <literal>'immediate'</literal> will exit\n+ without confirming the remote flush. This may break the consistency\n+ between publisher and subscriber, but it may be useful for a system\n+ that has a high-latency network to reduce the amount of time for\n+ shutdown.\n\n(1-1)\n\nThe first part \"exiting the walsender process\" can be improved.\nProbably, you can say \"the exiting walsender process\" or\n\"Decides the behavior of the walsender process at shutdown\" instread.\n\n(1-2)\n\nAlso, the next sentence can be improved something like\n\"If the shutdown mode is wait_flush, which is the default, the\nwalsender waits for all the sent WALs to be flushed on the subscriber side.\nIf it is immediate, the walsender exits without confirming the remote flush\".\n\n(1-3)\n\nWe don't need to wrap wait_flush and immediate by single quotes\nwithin the literal tag.\n\n(2)\n\n+ /* minapplydelay affects SHUTDOWN_MODE option */\n\nI think we can move this comment to just above the 'if' condition\nand combine it with the existing 'if' conditions comments.\n\n(3) 001_rep_changes.pl\n\n(3-1) Question\n\nIn general, do we add this kind of check when we extend the protocol (STREAM_REPLICATION command) \nor add a new condition for apply worker exit ?\nIn case when we would like to know the restart of the walsender process in TAP tests,\nthen could you tell me why the new test code matches the purpose of this patch ?\n\n(3-2)\n\n+ \"Timed out while waiting for apply to restart after changing min_apply_delay to non-zero value\";\n\nProbably, we can partly change this sentence like below, because we check walsender's pid.\nFROM: \"... while waiting for apply to restart...\"\nTO: \"... while waiting for the walsender to restart...\"\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n",
"msg_date": "Thu, 9 Feb 2023 05:50:05 +0000",
"msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Hi,\n\nOn Wed, Feb 8, 2023 at 6:47 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n> >\n> > Dear Horiguchi-san,\n> >\n> > Thank you for checking the patch! PSA new version.\n>\n> PSA rebased patch that supports updated time-delayed patch[1].\n>\n\nThank you for the patch! Here are some comments on v5 patch:\n\n+/*\n+ * Options for controlling the behavior of the walsender. Options can be\n+ * specified in the START_STREAMING replication command. Currently only one\n+ * option is allowed.\n+ */\n+typedef struct\n+{\n+ WalSndShutdownMode shutdown_mode;\n+} WalSndOptions;\n+\n+static WalSndOptions *my_options = NULL;\n\nI'm not sure we need to have it as a struct at this stage since we\nsupport only one option. I wonder if we can have one value, say\nshutdown_mode, and we can make it a struct when we really need it.\nEven if we use WalSndOptions struct, I don't think we need to\ndynamically allocate it. Since a walsender can start logical\nreplication multiple times in principle, my_options is not freed.\n\n---\n+/*\n+ * Parse given shutdown mode.\n+ *\n+ * Currently two values are accepted - \"wait_flush\" and \"immediate\"\n+ */\n+static void\n+ParseShutdownMode(char *shutdownmode)\n+{\n+ if (pg_strcasecmp(shutdownmode, \"wait_flush\") == 0)\n+ my_options->shutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\n+ else if (pg_strcasecmp(shutdownmode, \"immediate\") == 0)\n+ my_options->shutdown_mode = WALSND_SHUTDOWN_MODE_IMMIDEATE;\n+ else\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"SHUTDOWN_MODE requires \\\"wait_flush\\\" or \\\"immediate\\\"\"));\n+}\n\nI think we should make the error message consistent with other enum\nparameters. How about the message like:\n\nERROR: invalid value shutdown mode: \"%s\"\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Feb 2023 17:33:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Osumi-san,\n\nThank you for reviewing! PSA new version.\n\n> (1)\n> \n> + Decides the condition for exiting the walsender process.\n> + <literal>'wait_flush'</literal>, which is the default, the walsender\n> + will wait for all the sent WALs to be flushed on the subscriber side,\n> + before exiting the process. <literal>'immediate'</literal> will exit\n> + without confirming the remote flush. This may break the consistency\n> + between publisher and subscriber, but it may be useful for a system\n> + that has a high-latency network to reduce the amount of time for\n> + shutdown.\n>\n> (1-1)\n> \n> The first part \"exiting the walsender process\" can be improved.\n> Probably, you can say \"the exiting walsender process\" or\n> \"Decides the behavior of the walsender process at shutdown\" instread.\n\nFixed. Second idea was chosen.\n\n> (1-2)\n> \n> Also, the next sentence can be improved something like\n> \"If the shutdown mode is wait_flush, which is the default, the\n> walsender waits for all the sent WALs to be flushed on the subscriber side.\n> If it is immediate, the walsender exits without confirming the remote flush\".\n\nFixed.\n\n> (1-3)\n> \n> We don't need to wrap wait_flush and immediate by single quotes\n> within the literal tag.\n\nThis style was ported from the SNAPSHOT options part, so I decided to keep.\n\n\n> (2)\n> \n> + /* minapplydelay affects SHUTDOWN_MODE option */\n> \n> I think we can move this comment to just above the 'if' condition\n> and combine it with the existing 'if' conditions comments.\n\nMoved and added some comments.\n\n> (3) 001_rep_changes.pl\n> \n> (3-1) Question\n> \n> In general, do we add this kind of check when we extend the protocol\n> (STREAM_REPLICATION command)\n> or add a new condition for apply worker exit ?\n> In case when we would like to know the restart of the walsender process in TAP\n> tests,\n> then could you tell me why the new test code matches the purpose of this patch ?\n\nThe replication command is not for normal user, so I think we don't have to test itself.\n\nThe check that waits to restart the apply worker was added to improve the robustness.\nI think there is a possibility to fail the test when the apply worker recevies a transaction\nbefore it checks new subscription option. Now the failure can be avoided by\nconfriming to reload pg_subscription and restart.\n\n> (3-2)\n> \n> + \"Timed out while waiting for apply to restart after changing min_apply_delay\n> to non-zero value\";\n> \n> Probably, we can partly change this sentence like below, because we check\n> walsender's pid.\n> FROM: \"... while waiting for apply to restart...\"\n> TO: \"... while waiting for the walsender to restart...\"\n\nRight, fixed.\n\nBest Regards,\nHayato Kuroda\nFUJITSU LIMITED",
"msg_date": "Thu, 9 Feb 2023 10:11:10 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Sawada-san,\r\n\r\nThank you for reviewing!\r\n\r\n> +/*\r\n> + * Options for controlling the behavior of the walsender. Options can be\r\n> + * specified in the START_STREAMING replication command. Currently only one\r\n> + * option is allowed.\r\n> + */\r\n> +typedef struct\r\n> +{\r\n> + WalSndShutdownMode shutdown_mode;\r\n> +} WalSndOptions;\r\n> +\r\n> +static WalSndOptions *my_options = NULL;\r\n> \r\n> I'm not sure we need to have it as a struct at this stage since we\r\n> support only one option. I wonder if we can have one value, say\r\n> shutdown_mode, and we can make it a struct when we really need it.\r\n> Even if we use WalSndOptions struct, I don't think we need to\r\n> dynamically allocate it. Since a walsender can start logical\r\n> replication multiple times in principle, my_options is not freed.\r\n\r\n+1, removed the structure.\r\n\r\n> ---\r\n> +/*\r\n> + * Parse given shutdown mode.\r\n> + *\r\n> + * Currently two values are accepted - \"wait_flush\" and \"immediate\"\r\n> + */\r\n> +static void\r\n> +ParseShutdownMode(char *shutdownmode)\r\n> +{\r\n> + if (pg_strcasecmp(shutdownmode, \"wait_flush\") == 0)\r\n> + my_options->shutdown_mode =\r\n> WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\r\n> + else if (pg_strcasecmp(shutdownmode, \"immediate\") == 0)\r\n> + my_options->shutdown_mode =\r\n> WALSND_SHUTDOWN_MODE_IMMIDEATE;\r\n> + else\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_SYNTAX_ERROR),\r\n> + errmsg(\"SHUTDOWN_MODE requires\r\n> \\\"wait_flush\\\" or \\\"immediate\\\"\"));\r\n> +}\r\n> \r\n> I think we should make the error message consistent with other enum\r\n> parameters. How about the message like:\r\n> \r\n> ERROR: invalid value shutdown mode: \"%s\"\r\n\r\nModified like enum parameters and hint message was also provided.\r\n\r\nNew patch is attached in [1].\r\n\r\n[1]: https://www.postgresql.org/message-id/TYAPR01MB586683FC450662990E356A0EF5D99%40TYAPR01MB5866.jpnprd01.prod.outlook.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Thu, 9 Feb 2023 10:12:03 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Here are my review comments for the v6-0002 patch.\n\n======\nCommit Message\n\n1.\nThis commit extends START_REPLICATION to accept SHUTDOWN_MODE term. Currently,\nit works well only for logical replication.\n\n~\n\n1a.\n\"to accept SHUTDOWN term\" --> \"to include a SHUTDOWN_MODE clause.\"\n\n~\n\n1b.\n\"it works well only for...\" --> do you mean \"it is currently\nimplemented only for...\"\n\n~~~\n\n2.\nWhen 'wait_flush', which is the default, is specified, the walsender will wait\nfor all the sent WALs to be flushed on the subscriber side, before exiting the\nprocess. 'immediate' will exit without confirming the remote flush. This may\nbreak the consistency between publisher and subscriber, but it may be useful\nfor a system that has a high-latency network to reduce the amount of time for\nshutdown. This may be useful to shut down the publisher even when the\nworker is stuck.\n\n~\n\nSUGGESTION\nThe shutdown modes are:\n\n1) 'wait_flush' (the default). In this mode, the walsender will wait\nfor all the sent WALs to be flushed on the subscriber side, before\nexiting the process.\n\n2) 'immediate'. In this mode, the walsender will exit without\nconfirming the remote flush. This may break the consistency between\npublisher and subscriber. This mode might be useful for a system that\nhas a high-latency network (to reduce the amount of time for\nshutdown), or to allow the shutdown of the publisher even when the\nworker is stuck.\n\n======\ndoc/src/sgml/protocol.sgml\n\n3.\n+ <varlistentry>\n+ <term><literal>SHUTDOWN_MODE { 'wait_flush' | 'immediate'\n}</literal></term>\n+ <listitem>\n+ <para>\n+ Decides the behavior of the walsender process at shutdown. If the\n+ shutdown mode is <literal>'wait_flush'</literal>, which is the\n+ default, the walsender waits for all the sent WALs to be flushed\n+ on the subscriber side. If it is <literal>'immediate'</literal>,\n+ the walsender exits without confirming the remote flush.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nThe synopsis said:\n[ SHUTDOWN_MODE shutdown_mode ]\n\nBut then the 'shutdown_mode' term was never mentioned again (??).\nInstead it says:\nSHUTDOWN_MODE { 'wait_flush' | 'immediate' }\n\nIMO the detailed explanation should not say SHUTDOWN_MODE again. It\nshould be writtenmore like this:\n\nSUGGESTION\nshutdown_mode\n\nDetermines the behavior of the walsender process at shutdown. If\nshutdown_mode is 'wait_flush', the walsender waits for all the sent\nWALs to be flushed on the subscriber side. This is the default when\nSHUTDOWN_MODE is not specified.\n\nIf shutdown_mode is 'immediate', the walsender exits without\nconfirming the remote flush.\n\n======\n.../libpqwalreceiver/libpqwalreceiver.c\n\n4.\n+ /* Add SHUTDOWN_MODE option if needed */\n+ if (options->shutdown_mode &&\n+ PQserverVersion(conn->streamConn) >= 160000)\n+ appendStringInfo(&cmd, \" SHUTDOWN_MODE '%s'\",\n+ options->shutdown_mode);\n\nMaybe you can expand on the meaning of \"if needed\".\n\nSUGGESTION\nAdd SHUTDOWN_MODE clause if needed (i.e. if not using the default shutdown_mode)\n\n======\nsrc/backend/replication/logical/worker.c\n\n5. maybe_reread_subscription\n\n+ *\n+ * minapplydelay affects SHUTDOWN_MODE option. 'immediate' shutdown mode\n+ * will be specified if it is set to non-zero, otherwise default mode will\n+ * be set.\n\nReworded this comment slightly and give a reference to ApplyWorkerMain.\n\nSUGGESTION\nTime-delayed logical replication affects the SHUTDOWN_MODE clause. The\n'immediate' shutdown mode will be specified if min_apply_delay is\nnon-zero, otherwise the default shutdown mode will be used. See\nApplyWorkerMain.\n\n~~~\n\n6. ApplyWorkerMain\n+ /*\n+ * time-delayed logical replication does not support tablesync\n+ * workers, so only the leader apply worker can request walsenders to\n+ * exit before confirming remote flush.\n+ */\n\n\"time-delayed\" --> \"Time-delayed\"\n\n======\nsrc/backend/replication/repl_gram.y\n\n7.\n@@ -91,6 +92,7 @@ Node *replication_parse_result;\n %type <boolval> opt_temporary\n %type <list> create_slot_options create_slot_legacy_opt_list\n %type <defelt> create_slot_legacy_opt\n+%type <str> opt_shutdown_mode\n\nThe tab alignment seemed not quite right. Not 100% sure.\n\n~~~\n\n8.\n@@ -270,20 +272,22 @@ start_replication:\n cmd->slotname = $2;\n cmd->startpoint = $4;\n cmd->timeline = $5;\n+ cmd->shutdownmode = NULL;\n $$ = (Node *) cmd;\n }\n\nIt seemed a bit inconsistent. E.g. the cmd->options member was not set\nfor physical replication, so why then set this member?\n\nAlternatively, maybe should set cmd->options = NULL here as well?\n\n======\nsrc/backend/replication/walsender.c\n\n9.\n+/* Indicator for specifying the shutdown mode */\n+typedef enum\n+{\n+ WALSND_SHUTDOWN_MODE_WAIT_FLUSH = 0,\n+ WALSND_SHUTDOWN_MODE_IMMIDEATE\n+} WalSndShutdownMode;\n\n~\n\n9a.\n\"Indicator for specifying\" (??). How about just saying: \"Shutdown modes\"\n\n~\n\n9b.\nTypo: WALSND_SHUTDOWN_MODE_IMMIDEATE ==> WALSND_SHUTDOWN_MODE_IMMEDIATE\n\n~\n\n9c.\nAFAICT the fact that the first enum value is assigned 0 is not really\nof importance. If that is correct, then IMO maybe it's better to\nremove the \"= 0\" because the explicit assignment made me expect that\nit had special meaning, and then it was confusing when I could not\nfind a reason.\n\n~~~\n\n10. ProcessPendingWrites\n\n+ /*\n+ * In this function, there is a possibility that the walsender is\n+ * stuck. It is caused when the opposite worker is stuck and then the\n+ * send-buffer of the walsender becomes full. 
Therefore, we must add\n+ * an additional path for shutdown for immediate shutdown mode.\n+ */\n+ if (shutdown_mode == WALSND_SHUTDOWN_MODE_IMMIDEATE &&\n+ got_STOPPING)\n+ WalSndDone(XLogSendLogical);\n\n10a.\nCan this comment say something like \"receiving worker\" instead of\n\"opposite worker\"?\n\nSUGGESTION\nThis can happen when the receiving worker is stuck, and then the\nsend-buffer of the walsender...\n\n~\n\n10b.\nIMO it makes more sense to check this around the other way. E.g. we\ndon't care what is the shutdown_mode value unless got_STOPPING is\ntrue.\n\nSUGGESTION\nif (got_STOPPING && (shutdown_mode == WALSND_SHUTDOWN_MODE_IMMEDIATE))\n\n~~~\n\n11. WalSndDone\n\n+ * If we are in the immediate shutdown mode, flush location and output\n+ * buffer is not checked. This may break the consistency between nodes,\n+ * but it may be useful for the system that has high-latency network to\n+ * reduce the amount of time for shutdown.\n\nAdd some quotes for the mode.\n\nSUGGESTION\n'immediate' shutdown mode\n\n~~~\n\n12.\n+/*\n+ * Check options for walsender itself and set flags accordingly.\n+ *\n+ * Currently only one option is accepted.\n+ */\n+static void\n+CheckWalSndOptions(const StartReplicationCmd *cmd)\n+{\n+ if (cmd->shutdownmode)\n+ ParseShutdownMode(cmd->shutdownmode);\n+}\n+\n+/*\n+ * Parse given shutdown mode.\n+ *\n+ * Currently two values are accepted - \"wait_flush\" and \"immediate\"\n+ */\n+static void\n+ParseShutdownMode(char *shutdownmode)\n+{\n+ if (pg_strcasecmp(shutdownmode, \"wait_flush\") == 0)\n+ shutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\n+ else if (pg_strcasecmp(shutdownmode, \"immediate\") == 0)\n+ shutdown_mode = WALSND_SHUTDOWN_MODE_IMMIDEATE;\n+ else\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"invalid value for shutdown mode: \\\"%s\\\"\", shutdownmode),\n+ errhint(\"Available values: wait_flush, immediate.\"));\n+}\n\nIMO the ParseShutdownMode function seems unnecessary because it's not\nreally \"parsing\" 
anything and it is only called in one place. I\nsuggest wrapping everything into the CheckWalSndOptions function. The\nend result is still only a simple function:\n\nSUGGESTION\n\nstatic void\nCheckWalSndOptions(const StartReplicationCmd *cmd)\n{\nif (cmd->shutdownmode)\n{\nchar *mode = cmd->shutdownmode;\n\nif (pg_strcasecmp(mode, \"wait_flush\") == 0)\nshutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\nelse if (pg_strcasecmp(mode, \"immediate\") == 0)\nshutdown_mode = WALSND_SHUTDOWN_MODE_IMMEDIATE;\n\nelse\nereport(ERROR,\nerrcode(ERRCODE_SYNTAX_ERROR),\nerrmsg(\"invalid value for shutdown mode: \\\"%s\\\"\", mode),\nerrhint(\"Available values: wait_flush, immediate.\"));\n}\n}\n\n======\nsrc/include/replication/walreceiver.h\n\n13.\n@@ -170,6 +170,7 @@ typedef struct\n * false if physical stream. */\n char *slotname; /* Name of the replication slot or NULL. */\n XLogRecPtr startpoint; /* LSN of starting point. */\n+ char *shutdown_mode; /* Name of specified shutdown name */\n\n union\n {\n~\n\n13a.\nTypo (shutdown name?)\n\nSUGGESTION\n/* The specified shutdown mode string, or NULL. */\n\n~\n\n13b.\nBecause they have the same member names I kept confusing this option\nshutdown_mode with the other enum also called shutdown_mode.\n\nI wonder if is it possible to call this one something like\n'shutdown_mode_str' to make reading the code easier?\n\n~\n\n13c.\nIs this member in the right place? AFAIK this is not even implemented\nfor physical replication. e.g. 
Why isn't this new member part of the\n'logical' sub-structure in the union?\n\n======\nsrc/test/subscription/t/001_rep_changes.pl\n\n14.\n-# Set min_apply_delay parameter to 3 seconds\n+# Check restart on changing min_apply_delay to 3 seconds\n my $delay = 3;\n $node_subscriber->safe_psql('postgres',\n \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay = '${delay}s')\");\n+$node_publisher->poll_query_until('postgres',\n+ \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\napplication_name = 'tap_sub_renamed' AND state = 'streaming';\"\n+ )\n+ or die\n+ \"Timed out while waiting for the walsender to restart after\nchanging min_apply_delay to non-zero value\";\n\nIIUC this test is for verifying that a new walsender worker was\nstarted if the delay was changed from 0 to non-zero. E.g. I think it\nis for it is testing your new logic of the maybe_reread_subscription.\n\nProbably more complete testing also needs to check the other scenarios:\n* min_apply_delay from one non-zero value to another non-zero value\n--> verify a new worker is NOT started.\n* change min_apply_delay from non-zero to zero --> verify a new worker\nIS started\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n",
"msg_date": "Fri, 10 Feb 2023 18:10:53 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Peter,\r\n\r\nThank you for reviewing! PSA new version.\r\n\r\n> ======\r\n> Commit Message\r\n> \r\n> 1.\r\n> This commit extends START_REPLICATION to accept SHUTDOWN_MODE term.\r\n> Currently,\r\n> it works well only for logical replication.\r\n> \r\n> ~\r\n> \r\n> 1a.\r\n> \"to accept SHUTDOWN term\" --> \"to include a SHUTDOWN_MODE clause.\"\r\n\r\nFixed.\r\n\r\n> 1b.\r\n> \"it works well only for...\" --> do you mean \"it is currently\r\n> implemented only for...\"\r\n\r\nFixed.\r\n\r\n> 2.\r\n> When 'wait_flush', which is the default, is specified, the walsender will wait\r\n> for all the sent WALs to be flushed on the subscriber side, before exiting the\r\n> process. 'immediate' will exit without confirming the remote flush. This may\r\n> break the consistency between publisher and subscriber, but it may be useful\r\n> for a system that has a high-latency network to reduce the amount of time for\r\n> shutdown. This may be useful to shut down the publisher even when the\r\n> worker is stuck.\r\n> \r\n> ~\r\n> \r\n> SUGGESTION\r\n> The shutdown modes are:\r\n> \r\n> 1) 'wait_flush' (the default). In this mode, the walsender will wait\r\n> for all the sent WALs to be flushed on the subscriber side, before\r\n> exiting the process.\r\n> \r\n> 2) 'immediate'. In this mode, the walsender will exit without\r\n> confirming the remote flush. This may break the consistency between\r\n> publisher and subscriber. This mode might be useful for a system that\r\n> has a high-latency network (to reduce the amount of time for\r\n> shutdown), or to allow the shutdown of the publisher even when the\r\n> worker is stuck.\r\n> \r\n> ======\r\n> doc/src/sgml/protocol.sgml\r\n> \r\n> 3.\r\n> + <varlistentry>\r\n> + <term><literal>SHUTDOWN_MODE { 'wait_flush' | 'immediate'\r\n> }</literal></term>\r\n> + <listitem>\r\n> + <para>\r\n> + Decides the behavior of the walsender process at shutdown. 
If the\r\n> + shutdown mode is <literal>'wait_flush'</literal>, which is the\r\n> + default, the walsender waits for all the sent WALs to be flushed\r\n> + on the subscriber side. If it is <literal>'immediate'</literal>,\r\n> + the walsender exits without confirming the remote flush.\r\n> + </para>\r\n> + </listitem>\r\n> + </varlistentry>\r\n> \r\n> The synopsis said:\r\n> [ SHUTDOWN_MODE shutdown_mode ]\r\n> \r\n> But then the 'shutdown_mode' term was never mentioned again (??).\r\n> Instead it says:\r\n> SHUTDOWN_MODE { 'wait_flush' | 'immediate' }\r\n> \r\n> IMO the detailed explanation should not say SHUTDOWN_MODE again. It\r\n> should be writtenmore like this:\r\n> \r\n> SUGGESTION\r\n> shutdown_mode\r\n> \r\n> Determines the behavior of the walsender process at shutdown. If\r\n> shutdown_mode is 'wait_flush', the walsender waits for all the sent\r\n> WALs to be flushed on the subscriber side. This is the default when\r\n> SHUTDOWN_MODE is not specified.\r\n> \r\n> If shutdown_mode is 'immediate', the walsender exits without\r\n> confirming the remote flush.\r\n\r\nFixed.\r\n\r\n> .../libpqwalreceiver/libpqwalreceiver.c\r\n> \r\n> 4.\r\n> + /* Add SHUTDOWN_MODE option if needed */\r\n> + if (options->shutdown_mode &&\r\n> + PQserverVersion(conn->streamConn) >= 160000)\r\n> + appendStringInfo(&cmd, \" SHUTDOWN_MODE '%s'\",\r\n> + options->shutdown_mode);\r\n> \r\n> Maybe you can expand on the meaning of \"if needed\".\r\n> \r\n> SUGGESTION\r\n> Add SHUTDOWN_MODE clause if needed (i.e. if not using the default\r\n> shutdown_mode)\r\n\r\nFixed, but not completely same as your suggestion.\r\n\r\n> src/backend/replication/logical/worker.c\r\n> \r\n> 5. maybe_reread_subscription\r\n> \r\n> + *\r\n> + * minapplydelay affects SHUTDOWN_MODE option. 
'immediate' shutdown\r\n> mode\r\n> + * will be specified if it is set to non-zero, otherwise default mode will\r\n> + * be set.\r\n> \r\n> Reworded this comment slightly and give a reference to ApplyWorkerMain.\r\n> \r\n> SUGGESTION\r\n> Time-delayed logical replication affects the SHUTDOWN_MODE clause. The\r\n> 'immediate' shutdown mode will be specified if min_apply_delay is\r\n> non-zero, otherwise the default shutdown mode will be used. See\r\n> ApplyWorkerMain.\r\n\r\nFixed.\r\n\r\n> 6. ApplyWorkerMain\r\n> + /*\r\n> + * time-delayed logical replication does not support tablesync\r\n> + * workers, so only the leader apply worker can request walsenders to\r\n> + * exit before confirming remote flush.\r\n> + */\r\n> \r\n> \"time-delayed\" --> \"Time-delayed\"\r\n\r\nFixed.\r\n\r\n> src/backend/replication/repl_gram.y\r\n> \r\n> 7.\r\n> @@ -91,6 +92,7 @@ Node *replication_parse_result;\r\n> %type <boolval> opt_temporary\r\n> %type <list> create_slot_options create_slot_legacy_opt_list\r\n> %type <defelt> create_slot_legacy_opt\r\n> +%type <str> opt_shutdown_mode\r\n> \r\n> The tab alignment seemed not quite right. Not 100% sure.\r\n\r\nFixed accordingly.\r\n\r\n> 8.\r\n> @@ -270,20 +272,22 @@ start_replication:\r\n> cmd->slotname = $2;\r\n> cmd->startpoint = $4;\r\n> cmd->timeline = $5;\r\n> + cmd->shutdownmode = NULL;\r\n> $$ = (Node *) cmd;\r\n> }\r\n> \r\n> It seemed a bit inconsistent. E.g. the cmd->options member was not set\r\n> for physical replication, so why then set this member?\r\n> \r\n> Alternatively, maybe should set cmd->options = NULL here as well?\r\n\r\nRemoved. 
I checked makeNode() macro, found that palloc0fast() is called there.\r\nThis means that we do not have to initialize unused attributes.\r\n\r\n> src/backend/replication/walsender.c\r\n> \r\n> 9.\r\n> +/* Indicator for specifying the shutdown mode */\r\n> +typedef enum\r\n> +{\r\n> + WALSND_SHUTDOWN_MODE_WAIT_FLUSH = 0,\r\n> + WALSND_SHUTDOWN_MODE_IMMIDEATE\r\n> +} WalSndShutdownMode;\r\n> \r\n> ~\r\n> \r\n> 9a.\r\n> \"Indicator for specifying\" (??). How about just saying: \"Shutdown modes\"\r\n\r\nFixed.\r\n\r\n> 9b.\r\n> Typo: WALSND_SHUTDOWN_MODE_IMMIDEATE ==>\r\n> WALSND_SHUTDOWN_MODE_IMMEDIATE\r\n\r\nReplaced.\r\n\r\n> 9c.\r\n> AFAICT the fact that the first enum value is assigned 0 is not really\r\n> of importance. If that is correct, then IMO maybe it's better to\r\n> remove the \"= 0\" because the explicit assignment made me expect that\r\n> it had special meaning, and then it was confusing when I could not\r\n> find a reason.\r\n\r\nRemoved. This was added for skipping the initialization for previous version,\r\nbut no longer needed.\r\n\r\n> 10. ProcessPendingWrites\r\n> \r\n> + /*\r\n> + * In this function, there is a possibility that the walsender is\r\n> + * stuck. It is caused when the opposite worker is stuck and then the\r\n> + * send-buffer of the walsender becomes full. Therefore, we must add\r\n> + * an additional path for shutdown for immediate shutdown mode.\r\n> + */\r\n> + if (shutdown_mode == WALSND_SHUTDOWN_MODE_IMMIDEATE &&\r\n> + got_STOPPING)\r\n> + WalSndDone(XLogSendLogical);\r\n> \r\n> 10a.\r\n> Can this comment say something like \"receiving worker\" instead of\r\n> \"opposite worker\"?\r\n> \r\n> SUGGESTION\r\n> This can happen when the receiving worker is stuck, and then the\r\n> send-buffer of the walsender...\r\n\r\nChanged.\r\n\r\n> 10b.\r\n> IMO it makes more sense to check this around the other way. E.g. 
we\r\n> don't care what is the shutdown_mode value unless got_STOPPING is\r\n> true.\r\n> \r\n> SUGGESTION\r\n> if (got_STOPPING && (shutdown_mode ==\r\n> WALSND_SHUTDOWN_MODE_IMMEDIATE))\r\n\r\nChanged.\r\n\r\n> 11. WalSndDone\r\n> \r\n> + * If we are in the immediate shutdown mode, flush location and output\r\n> + * buffer is not checked. This may break the consistency between nodes,\r\n> + * but it may be useful for the system that has high-latency network to\r\n> + * reduce the amount of time for shutdown.\r\n> \r\n> Add some quotes for the mode.\r\n> \r\n> SUGGESTION\r\n> 'immediate' shutdown mode\r\n\r\nChanged.\r\n\r\n> 12.\r\n> +/*\r\n> + * Check options for walsender itself and set flags accordingly.\r\n> + *\r\n> + * Currently only one option is accepted.\r\n> + */\r\n> +static void\r\n> +CheckWalSndOptions(const StartReplicationCmd *cmd)\r\n> +{\r\n> + if (cmd->shutdownmode)\r\n> + ParseShutdownMode(cmd->shutdownmode);\r\n> +}\r\n> +\r\n> +/*\r\n> + * Parse given shutdown mode.\r\n> + *\r\n> + * Currently two values are accepted - \"wait_flush\" and \"immediate\"\r\n> + */\r\n> +static void\r\n> +ParseShutdownMode(char *shutdownmode)\r\n> +{\r\n> + if (pg_strcasecmp(shutdownmode, \"wait_flush\") == 0)\r\n> + shutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\r\n> + else if (pg_strcasecmp(shutdownmode, \"immediate\") == 0)\r\n> + shutdown_mode = WALSND_SHUTDOWN_MODE_IMMIDEATE;\r\n> + else\r\n> + ereport(ERROR,\r\n> + errcode(ERRCODE_SYNTAX_ERROR),\r\n> + errmsg(\"invalid value for shutdown mode: \\\"%s\\\"\", shutdownmode),\r\n> + errhint(\"Available values: wait_flush, immediate.\"));\r\n> +}\r\n> \r\n> IMO the ParseShutdownMode function seems unnecessary because it's not\r\n> really \"parsing\" anything and it is only called in one place. I\r\n> suggest wrapping everything into the CheckWalSndOptions function. 
The\r\n> end result is still only a simple function:\r\n> \r\n> SUGGESTION\r\n> \r\n> static void\r\n> CheckWalSndOptions(const StartReplicationCmd *cmd)\r\n> {\r\n> if (cmd->shutdownmode)\r\n> {\r\n> char *mode = cmd->shutdownmode;\r\n> \r\n> if (pg_strcasecmp(mode, \"wait_flush\") == 0)\r\n> shutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\r\n> else if (pg_strcasecmp(mode, \"immediate\") == 0)\r\n> shutdown_mode = WALSND_SHUTDOWN_MODE_IMMEDIATE;\r\n> \r\n> else\r\n> ereport(ERROR,\r\n> errcode(ERRCODE_SYNTAX_ERROR),\r\n> errmsg(\"invalid value for shutdown mode: \\\"%s\\\"\", mode),\r\n> errhint(\"Available values: wait_flush, immediate.\"));\r\n> }\r\n> }\r\n\r\nRemoved.\r\n\r\n> ======\r\n> src/include/replication/walreceiver.h\r\n> \r\n> 13.\r\n> @@ -170,6 +170,7 @@ typedef struct\r\n> * false if physical stream. */\r\n> char *slotname; /* Name of the replication slot or NULL. */\r\n> XLogRecPtr startpoint; /* LSN of starting point. */\r\n> + char *shutdown_mode; /* Name of specified shutdown name */\r\n> \r\n> union\r\n> {\r\n> ~\r\n> \r\n> 13a.\r\n> Typo (shutdown name?)\r\n> \r\n> SUGGESTION\r\n> /* The specified shutdown mode string, or NULL. */\r\n\r\nFixed.\r\n\r\n> 13b.\r\n> Because they have the same member names I kept confusing this option\r\n> shutdown_mode with the other enum also called shutdown_mode.\r\n> \r\n> I wonder if is it possible to call this one something like\r\n> 'shutdown_mode_str' to make reading the code easier?\r\n\r\nChanged.\r\n\r\n> 13c.\r\n> Is this member in the right place? AFAIK this is not even implemented\r\n> for physical replication. e.g. Why isn't this new member part of the\r\n> 'logical' sub-structure in the union?\r\n\r\nI remained for future extendibility, but it seemed not to be needed. 
Moved.\r\n\r\n> ======\r\n> src/test/subscription/t/001_rep_changes.pl\r\n> \r\n> 14.\r\n> -# Set min_apply_delay parameter to 3 seconds\r\n> +# Check restart on changing min_apply_delay to 3 seconds\r\n> my $delay = 3;\r\n> $node_subscriber->safe_psql('postgres',\r\n> \"ALTER SUBSCRIPTION tap_sub_renamed SET (min_apply_delay =\r\n> '${delay}s')\");\r\n> +$node_publisher->poll_query_until('postgres',\r\n> + \"SELECT pid != $oldpid FROM pg_stat_replication WHERE\r\n> application_name = 'tap_sub_renamed' AND state = 'streaming';\"\r\n> + )\r\n> + or die\r\n> + \"Timed out while waiting for the walsender to restart after\r\n> changing min_apply_delay to non-zero value\";\r\n> \r\n> IIUC this test is for verifying that a new walsender worker was\r\n> started if the delay was changed from 0 to non-zero. E.g. I think it\r\n> is for it is testing your new logic of the maybe_reread_subscription.\r\n> \r\n> Probably more complete testing also needs to check the other scenarios:\r\n> * min_apply_delay from one non-zero value to another non-zero value\r\n> --> verify a new worker is NOT started.\r\n> * change min_apply_delay from non-zero to zero --> verify a new worker\r\n> IS started\r\n\r\nHmm. These tests do not improve the coverage, so not sure we should test or not.\r\nMoreover, IIUC we do not have a good way to verify that the worker does not restart.\r\nEven if the old pid is remained in the pg_stat_replication, there is a possibility\r\nthat walsender exits after that. So currently I added only the case that change\r\nmin_apply_delay from non-zero to zero.\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED",
"msg_date": "Fri, 10 Feb 2023 11:54:01 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Fri, Feb 10, 2023 at 5:24 PM Hayato Kuroda (Fujitsu)\n<kuroda.hayato@fujitsu.com> wrote:\n>\n\nCan't we have this option just as a bool (like shutdown_immediate)?\nWhy do we want to keep multiple modes?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 10 Feb 2023 17:45:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Dear Amit,\r\n\r\n> Can't we have this option just as a bool (like shutdown_immediate)?\r\n> Why do we want to keep multiple modes?\r\n\r\nOf course we can use boolean instead, but current style is motivated by the post[1].\r\nThis allows to add another option in future, whereas I do not have idea now.\r\n\r\nI want to ask other reviewers which one is better...\r\n\r\n[1]: https://www.postgresql.org/message-id/20230208.112717.1140830361804418505.horikyota.ntt%40gmail.com\r\n\r\nBest Regards,\r\nHayato Kuroda\r\nFUJITSU LIMITED\r\n\r\n",
"msg_date": "Fri, 10 Feb 2023 12:40:43 +0000",
"msg_from": "\"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "RE: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Fri, 10 Feb 2023 12:40:43 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in \n> Dear Amit,\n> \n> > Can't we have this option just as a bool (like shutdown_immediate)?\n> > Why do we want to keep multiple modes?\n> \n> Of course we can use boolean instead, but current style is motivated by the post[1].\n> This allows to add another option in future, whereas I do not have idea now.\n> \n> I want to ask other reviewers which one is better...\n> \n> [1]: https://www.postgresql.org/message-id/20230208.112717.1140830361804418505.horikyota.ntt%40gmail.com\n\nIMHO I vaguely don't like that we lose a means to specify the default\nbehavior here. And I'm not sure we definitely don't need other than\nflush and immedaite for both physical and logical replication. If it's\nnot the case, I don't object to make it a Boolean.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 13 Feb 2023 10:56:51 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 7:26 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 10 Feb 2023 12:40:43 +0000, \"Hayato Kuroda (Fujitsu)\" <kuroda.hayato@fujitsu.com> wrote in\n> > Dear Amit,\n> >\n> > > Can't we have this option just as a bool (like shutdown_immediate)?\n> > > Why do we want to keep multiple modes?\n> >\n> > Of course we can use boolean instead, but current style is motivated by the post[1].\n> > This allows to add another option in future, whereas I do not have idea now.\n> >\n> > I want to ask other reviewers which one is better...\n> >\n> > [1]: https://www.postgresql.org/message-id/20230208.112717.1140830361804418505.horikyota.ntt%40gmail.com\n>\n> IMHO I vaguely don't like that we lose a means to specify the default\n> behavior here. And I'm not sure we definitely don't need other than\n> flush and immedaite for both physical and logical replication.\n>\n\nIf we can think of any use case that requires its extension then it\nmakes sense to make it a non-boolean option but otherwise, let's keep\nthings simple by having a boolean option.\n\n> If it's\n> not the case, I don't object to make it a Boolean.\n>\n\nThanks.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 08:27:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Here are some comments for patch v7-0002.\n\n======\nCommit Message\n\n1.\nThis commit extends START_REPLICATION to accept SHUTDOWN_MODE clause. It is\ncurrently implemented only for logical replication.\n\n~\n\n\"to accept SHUTDOWN_MODE clause.\" --> \"to accept a SHUTDOWN_MODE clause.\"\n\n======\ndoc/src/sgml/protocol.sgml\n\n2.\nSTART_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ SHUTDOWN_MODE {\n'wait_flush' | 'immediate' } ] [ ( option_name [ option_value ] [,\n...] ) ]\n\n~\n\nIMO this should say shutdown_mode as it did before:\nSTART_REPLICATION SLOT slot_name LOGICAL XXX/XXX [ SHUTDOWN_MODE\nshutdown_mode ] [ ( option_name [ option_value ] [, ...] ) ]\n\n~~~\n\n3.\n+ <varlistentry>\n+ <term><literal>shutdown_mode</literal></term>\n+ <listitem>\n+ <para>\n+ Determines the behavior of the walsender process at shutdown. If\n+ shutdown_mode is <literal>'wait_flush'</literal>, the walsender waits\n+ for all the sent WALs to be flushed on the subscriber side. This is\n+ the default when SHUTDOWN_MODE is not specified. If shutdown_mode is\n+ <literal>'immediate'</literal>, the walsender exits without\n+ confirming the remote flush.\n+ </para>\n+ </listitem>\n+ </varlistentry>\n\nIs the font of the \"shutdown_mode\" correct? I expected it to be like\nthe others (e.g. 
slot_name)\n\n======\nsrc/backend/replication/walsender.c\n\n4.\n+static void\n+CheckWalSndOptions(const StartReplicationCmd *cmd)\n+{\n+ if (cmd->shutdownmode)\n+ {\n+ char *mode = cmd->shutdownmode;\n+\n+ if (pg_strcasecmp(mode, \"wait_flush\") == 0)\n+ shutdown_mode = WALSND_SHUTDOWN_MODE_WAIT_FLUSH;\n+ else if (pg_strcasecmp(mode, \"immediate\") == 0)\n+ shutdown_mode = WALSND_SHUTDOWN_MODE_IMMEDIATE;\n+ else\n+ ereport(ERROR,\n+ errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"invalid value for shutdown mode: \\\"%s\\\"\", mode),\n+ errhint(\"Available values: wait_flush, immediate.\"));\n+ }\n+\n+}\n\nUnnecessary extra whitespace at end of the function.\n\n======\nsrc/include/nodes/replnodes.\n\n5.\n@@ -83,6 +83,7 @@ typedef struct StartReplicationCmd\n char *slotname;\n TimeLineID timeline;\n XLogRecPtr startpoint;\n+ char *shutdownmode;\n List *options;\n } StartReplicationCmd;\n\nIMO I those the last 2 members should have a comment something like:\n/* Only for logical replication */\n\nbecause that will make it more clear why sometimes they are assigned\nand sometimes they are not.\n\n======\nsrc/include/replication/walreceiver.h\n\n6.\nShould the protocol version be bumped (and documented) now that the\nSTART REPLICATION supports a new extended syntax? Or is that done only\nfor messages sent by pgoutput?\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:09:59 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Mon, 13 Feb 2023 08:27:01 +0530, Amit Kapila <amit.kapila16@gmail.com> wrote in \n> On Mon, Feb 13, 2023 at 7:26 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> >\n> > IMHO I vaguely don't like that we lose a means to specify the default\n> > behavior here. And I'm not sure we definitely don't need other than\n> > flush and immedaite for both physical and logical replication.\n> >\n> \n> If we can think of any use case that requires its extension then it\n> makes sense to make it a non-boolean option but otherwise, let's keep\n> things simple by having a boolean option.\n\nWhat do you think about the need for explicitly specifying the\ndefault? I'm fine with specifying the default using a single word,\nsuch as WAIT_FOR_REMOTE_FLUSH.\n\n> > If it's\n> > not the case, I don't object to make it a Boolean.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 14 Feb 2023 10:05:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "On 2023-02-14 10:05:40 +0900, Kyotaro Horiguchi wrote:\n> What do you think about the need for explicitly specifying the\n> default? I'm fine with specifying the default using a single word,\n> such as WAIT_FOR_REMOTE_FLUSH.\n\nWe obviously shouldn't force the option to be present. Why would we want to\nbreak existing clients unnecessarily? Without it the behaviour should be\nunchanged from today's.\n\n\n",
"msg_date": "Mon, 13 Feb 2023 17:13:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "At Mon, 13 Feb 2023 17:13:43 -0800, Andres Freund <andres@anarazel.de> wrote in \n> On 2023-02-14 10:05:40 +0900, Kyotaro Horiguchi wrote:\n> > What do you think about the need for explicitly specifying the\n> > default? I'm fine with specifying the default using a single word,\n> > such as WAIT_FOR_REMOTE_FLUSH.\n> \n> We obviously shouldn't force the option to be present. Why would we want to\n> break existing clients unnecessarily? Without it the behaviour should be\n> unchanged from today's.\n\nI didn't suggest making the option mandatory. I just suggested\nproviding a way to specify the default value explicitly, like in the\nrecent commit 746915c686.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Feb 2023 15:20:59 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
},
{
"msg_contents": "Thanks for everyone's work on this, I am very interested in it getting into\na release. What is the status of this?\n\nMy use case is Patroni - when it needs to do a failover, it shuts down the\nprimary. However, large transactions can cause it to stay in the \"shutting\ndown\" state for a long time, which means your entire HA system is now\nnon-functional. I like the idea of a new flag. I'll test this out soon if\nthe original authors want to make a rebased patch. This thread is old, so\nif I don't hear back in a bit, I'll create and test a new one myself. :)\n\nCheers,\nGreg",
"msg_date": "Tue, 17 Sep 2024 08:29:56 -0400",
"msg_from": "Greg Sabino Mullane <htamfids@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exit walsender before confirming remote flush in logical\n replication"
}
] |
[
{
"msg_contents": "I happened to notice $subject. It happens when we build eqfunctions for\neach grouping set.\n\n /* for each grouping set */\n for (int k = 0; k < phasedata->numsets; k++)\n {\n int length = phasedata->gset_lengths[k];\n\n if (phasedata->eqfunctions[length - 1] != NULL)\n continue;\n\n phasedata->eqfunctions[length - 1] =\n execTuplesMatchPrepare(scanDesc,\n length,\n aggnode->grpColIdx,\n aggnode->grpOperators,\n aggnode->grpCollations,\n (PlanState *) aggstate);\n }\n\nIf it is an empty grouping set, its length will be zero, and accessing\nphasedata->eqfunctions[length - 1] is not right.\n\nI think we can just skip building the eqfunctions for empty grouping\nset.\n\n--- a/src/backend/executor/nodeAgg.c\n+++ b/src/backend/executor/nodeAgg.c\n@@ -3494,6 +3494,10 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)\n {\n int length = phasedata->gset_lengths[k];\n\n+ /* skip empty grouping set */\n+ if (length == 0)\n+ continue;\n+\n if (phasedata->eqfunctions[length - 1] != NULL)\n continue;\n\nThanks\nRichard",
"msg_date": "Thu, 22 Dec 2022 14:02:42 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 2:02 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> I happened to notice $subject. It happens when we build eqfunctions for\n> each grouping set.\n>\n> /* for each grouping set */\n> for (int k = 0; k < phasedata->numsets; k++)\n> {\n> int length = phasedata->gset_lengths[k];\n>\n> if (phasedata->eqfunctions[length - 1] != NULL)\n> continue;\n>\n> phasedata->eqfunctions[length - 1] =\n> execTuplesMatchPrepare(scanDesc,\n> length,\n> aggnode->grpColIdx,\n> aggnode->grpOperators,\n> aggnode->grpCollations,\n> (PlanState *) aggstate);\n> }\n>\n> If it is an empty grouping set, its length will be zero, and accessing\n> phasedata->eqfunctions[length - 1] is not right.\n>\n> I think we can just skip building the eqfunctions for empty grouping\n> set.\n>\n\nAttached is a trivial patch for the fix.\n\nThanks\nRichard",
"msg_date": "Tue, 27 Dec 2022 15:12:17 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Thu, Dec 22, 2022 at 2:02 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>> If it is an empty grouping set, its length will be zero, and accessing\n>> phasedata->eqfunctions[length - 1] is not right.\n\n> Attached is a trivial patch for the fix.\n\nAgreed, that's a latent bug. It's only latent because the word just\nbefore a palloc chunk will never be zero, either in our historical\npalloc code or in v16's shiny new implementation. Nonetheless it\nis a horrible idea for ExecInitAgg to depend on that fact, so I\npushed your fix.\n\nThe thing that I find really distressing here is that it's been\nlike this for years and none of our automated testing caught it.\nYou'd have expected valgrind testing to do so ... but it does not,\nbecause we've never marked that word NOACCESS. Maybe we should\nrethink that? It'd require making mcxt.c do some valgrind flag\nmanipulations so it could access the hdrmask when appropriate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Jan 2023 16:25:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 5:25 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Agreed, that's a latent bug. It's only latent because the word just\n> before a palloc chunk will never be zero, either in our historical\n> palloc code or in v16's shiny new implementation. Nonetheless it\n> is a horrible idea for ExecInitAgg to depend on that fact, so I\n> pushed your fix.\n\n\nThanks for pushing it!\n\n\n> The thing that I find really distressing here is that it's been\n> like this for years and none of our automated testing caught it.\n> You'd have expected valgrind testing to do so ... but it does not,\n> because we've never marked that word NOACCESS. Maybe we should\n> rethink that? It'd require making mcxt.c do some valgrind flag\n> manipulations so it could access the hdrmask when appropriate.\n\n\nYeah, maybe we can do that. It's true that it requires some additional\nwork to access hdrmask, as in the new implementation the palloc'd chunk\nis always prefixed by hdrmask.\n\nBTW, I noticed a typo in the comment of memorychunk.h.\n\n--- a/src/include/utils/memutils_memorychunk.h\n+++ b/src/include/utils/memutils_memorychunk.h\n@@ -5,9 +5,9 @@\n * MemoryContexts may use as a header for chunks of memory they\nallocate.\n *\n * MemoryChunk provides a lightweight header that a MemoryContext can use\nto\n- * store a reference back to the block the which the given chunk is\nallocated\n- * on and also an additional 30-bits to store another value such as the\nsize\n- * of the allocated chunk.\n+ * store a reference back to the block which the given chunk is allocated\non\n+ * and also an additional 30-bits to store another value such as the size\nof\n+ * the allocated chunk.\n\nThanks\nRichard",
"msg_date": "Tue, 3 Jan 2023 17:20:14 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 10:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The thing that I find really distressing here is that it's been\n> like this for years and none of our automated testing caught it.\n> You'd have expected valgrind testing to do so ... but it does not,\n> because we've never marked that word NOACCESS. Maybe we should\n> rethink that? It'd require making mcxt.c do some valgrind flag\n> manipulations so it could access the hdrmask when appropriate.\n\nYeah, that probably could have been improved during the recent change.\nHere's a patch for it.\n\nI'm just doing a final Valgrind run on it now to check for errors.\n\nDavid",
"msg_date": "Thu, 5 Jan 2023 11:17:48 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 6:18 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 3 Jan 2023 at 10:25, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > The thing that I find really distressing here is that it's been\n> > like this for years and none of our automated testing caught it.\n> > You'd have expected valgrind testing to do so ... but it does not,\n> > because we've never marked that word NOACCESS. Maybe we should\n> > rethink that? It'd require making mcxt.c do some valgrind flag\n> > manipulations so it could access the hdrmask when appropriate.\n>\n> Yeah, that probably could have been improved during the recent change.\n> Here's a patch for it.\n\n\nThanks for the patch. With it Valgrind is able to catch the invalid\nread discussed in the initial email of this thread.\n\n VALGRINDERROR-BEGIN\n Invalid read of size 8\n at 0x4DB056: ExecInitAgg\n by 0x4C486A: ExecInitNode\n by 0x4B92B7: InitPlan\n by 0x4B81D7: standard_ExecutorStart\n by 0x4B7F1B: ExecutorStart\n\nI reviewed this patch and have some comments.\n\nIt seems that for MemoryContextMethods in alignedalloc.c the access to\nthe chunk header is not wrapped by VALGRIND_MAKE_MEM_DEFINED and\nVALGRIND_MAKE_MEM_NOACCESS. Should we do that?\n\nIn GenerationFree, I think the VALGRIND_MAKE_MEM_DEFINED should be moved\nto the start of this function, before we call MemoryChunkIsExternal.\n\nIn SlabFree, we should call MemoryChunkGetBlock after the call of\nVALGRIND_MAKE_MEM_DEFINED, just like how we do in SlabRealloc.\n\nIn AllocSetStats, we have a call of MemoryChunkGetValue in Assert. I\nthink we should wrap it with VALGRIND_MAKE_MEM_DEFINED and\nVALGRIND_MAKE_MEM_NOACCESS.\n\nThanks\nRichard",
"msg_date": "Thu, 5 Jan 2023 15:06:21 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 20:06, Richard Guo <guofenglinux@gmail.com> wrote:\n> I reviewed this patch and have some comments.\n\nThanks for looking at this. I think I've fixed all the issues you mentioned.\n\nOne extra thing I noticed was that I had to add a new\nVALGRIND_MAKE_MEM_DEFINED in AllocSetAlloc when grabbing an item off\nthe freelist. I didn't quite manage to figure out why that's needed as\nwhen we do AllocSetFree() we don't mark the pfree'd memory with\nNOACCESS, and it also looks like AllocSetReset() sets the keeper\nblock's memory to NOACCESS, but that function also clears the\nfreelists too, so the freelist chunk is not coming from a recently\nreset context.\n\nI might need to spend a bit more time on this to see if I can figure\nout why this is happening. On the other hand, maybe we should just\nmark pfree'd memory as NOACCESS as that might find another class of\nissues.\n\nDavid",
"msg_date": "Mon, 9 Jan 2023 22:21:05 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 5:21 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 5 Jan 2023 at 20:06, Richard Guo <guofenglinux@gmail.com> wrote:\n> > I reviewed this patch and have some comments.\n>\n> Thanks for looking at this. I think I've fixed all the issues you\n> mentioned.\n>\n> One extra thing I noticed was that I had to add a new\n> VALGRIND_MAKE_MEM_DEFINED in AllocSetAlloc when grabbing an item off\n> the freelist. I didn't quite manage to figure out why that's needed as\n> when we do AllocSetFree() we don't mark the pfree'd memory with\n> NOACCESS, and it also looks like AllocSetReset() sets the keeper\n> block's memory to NOACCESS, but that function also clears the\n> freelists too, so the freelist chunk is not coming from a recently\n> reset context.\n>\n> I might need to spend a bit more time on this to see if I can figure\n> out why this is happening. On the other hand, maybe we should just\n> mark pfree'd memory as NOACCESS as that might find another class of\n> issues.\n\n\nIt occurred to me that this hasn't been applied. Should we add it to\nthe CF to not lose track of it?\n\nThanks\nRichard",
"msg_date": "Mon, 20 Mar 2023 14:18:15 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Mon, 20 Mar 2023 at 19:18, Richard Guo <guofenglinux@gmail.com> wrote:\n> It occurred to me that this hasn't been applied. Should we add it to\n> the CF to not lose track of it?\n\nI have a git branch with it. That'll work for me personally as a\nreminder to come back to it during the v17 cycle.\n\nDavid\n\n\n",
"msg_date": "Wed, 22 Mar 2023 21:36:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 22:21, David Rowley <dgrowleyml@gmail.com> wrote:\n> One extra thing I noticed was that I had to add a new\n> VALGRIND_MAKE_MEM_DEFINED in AllocSetAlloc when grabbing an item off\n> the freelist. I didn't quite manage to figure out why that's needed as\n> when we do AllocSetFree() we don't mark the pfree'd memory with\n> NOACCESS, and it also looks like AllocSetReset() sets the keeper\n> block's memory to NOACCESS, but that function also clears the\n> freelists too, so the freelist chunk is not coming from a recently\n> reset context.\n\nIt seems I didn't look hard enough for NOACCESS marking. It's in\nwipe_mem(). So that explains why the VALGRIND_MAKE_MEM_DEFINED is\nrequired in AllocSetAlloc.\n\nSince this patch really only touches Valgrind macros, I don't really\nfeel like there's a good reason we can't still do this for v16, but\nI'll start another thread to increase visibility to see if anyone else\nthinks differently about that.\n\nDavid\n\n\n",
"msg_date": "Wed, 12 Apr 2023 01:17:55 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: An oversight in ExecInitAgg for grouping sets"
}
] |
[
{
"msg_contents": "Hi All,\nA customer ran a script dropping a few dozens of users in a transaction.\nBefore dropping a user they change the ownership of the tables owned by\nthat user to another user and revoking all the accesses from that user in\nthe same transaction. There were a few thousand tables whose privileges and\nownership was changed by this transaction. Since all of these changes were\nin catalog table, those changes were filtered out\nin ReorderBufferProcessTXN()\nby the following code\n if (!RelationIsLogicallyLogged(relation))\n goto change_done;\n\nI tried to reproduce a similar situation through the attached TAP test. For\n500 users and 1000 tables, we see that the transaction takes significant\ntime but logical decoding does not take much time. So with the default 1\nmin WAL sender and receiver timeout I could not reproduce the timeout.\nBeyond that our TAP test itself times out.\n\nBut I think there's a possibility that the logical receiver will time out\nthis way when decoding a sufficiently large transaction which takes more\nthan the timeout amount of time to decode. So I think we need to\ncall OutputPluginUpdateProgress() after a regular interval (in terms of\ntime or number of changes) to consume any feedback from the subscriber or\nsend a keep-alive message.\n\nFollowing commit\n```\ncommit 87c1dd246af8ace926645900f02886905b889718\nAuthor: Amit Kapila <akapila@postgresql.org>\nDate: Wed May 11 10:12:23 2022 +0530\n\n Fix the logical replication timeout during large transactions.\n\n ```\nfixed a similar problem when the changes were filtered by an output plugin,\nbut in this case the changes are not being handed over to the output plugin\nas well. If we fix it in the core we may not need to handle it in the\noutput plugin as that commit does. The commit does not have a test case\nwhich I could run to reproduce the timeout.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Thu, 22 Dec 2022 18:57:52 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Timeout when changes are filtered out by the core during logical\n replication"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 6:58 PM Ashutosh Bapat\n<ashutosh.bapat@enterprisedb.com> wrote:\n>\n> Hi All,\n> A customer ran a script dropping a few dozens of users in a transaction. Before dropping a user they change the ownership of the tables owned by that user to another user and revoking all the accesses from that user in the same transaction. There were a few thousand tables whose privileges and ownership was changed by this transaction. Since all of these changes were in catalog table, those changes were filtered out in ReorderBufferProcessTXN()\n> by the following code\n> if (!RelationIsLogicallyLogged(relation))\n> goto change_done;\n>\n> I tried to reproduce a similar situation through the attached TAP test. For 500 users and 1000 tables, we see that the transaction takes significant time but logical decoding does not take much time. So with the default 1 min WAL sender and receiver timeout I could not reproduce the timeout. Beyond that our TAp test itself times out.\n>\n> But I think there's a possibility that the logical receiver will time out this way when decoding a sufficiently large transaction which takes more than the timeout amount of time to decode. So I think we need to call OutputPluginUpdateProgress() after a regular interval (in terms of time or number of changes) to consume any feedback from the subscriber or send a keep-alive message.\n>\n\nI don't think it will be a good idea to directly call\nOutputPluginUpdateProgress() from reorderbuffer.c. There is already a\npatch to discuss this problem [1].\n\n> Following commit\n> ```\n> commit 87c1dd246af8ace926645900f02886905b889718\n> Author: Amit Kapila <akapila@postgresql.org>\n> Date: Wed May 11 10:12:23 2022 +0530\n>\n> Fix the logical replication timeout during large transactions.\n>\n> ```\n> fixed a similar problem when the changes were filtered by an output plugin, but in this case the changes are not being handed over to the output plugin as well. 
If we fix it in the core we may not need to handle it in the output plugin as that commit does. The commit does not have a test case which I could run to reproduce the timeout.\n>\n\nIt is not evident how to write a stable test for this because\nestimating how many changes are enough for the configured\nwal_receiver_timeout to\npass on all the buildfarm machines is tricky. If you have good ideas\nthen feel free to propose a test patch.\n\n[1] - https://www.postgresql.org/message-id/OS3PR01MB62751A8063A9A75A096000D89E3F9%40OS3PR01MB6275.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 14:45:28 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Timeout when changes are filtered out by the core during logical\n replication"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 2:45 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Thu, Dec 22, 2022 at 6:58 PM Ashutosh Bapat\n> <ashutosh.bapat@enterprisedb.com> wrote:\n> >\n> > Hi All,\n> > A customer ran a script dropping a few dozens of users in a transaction.\n> Before dropping a user they change the ownership of the tables owned by\n> that user to another user and revoking all the accesses from that user in\n> the same transaction. There were a few thousand tables whose privileges and\n> ownership was changed by this transaction. Since all of these changes were\n> in catalog table, those changes were filtered out in\n> ReorderBufferProcessTXN()\n> > by the following code\n> > if (!RelationIsLogicallyLogged(relation))\n> > goto change_done;\n> >\n> > I tried to reproduce a similar situation through the attached TAP test.\n> For 500 users and 1000 tables, we see that the transaction takes\n> significant time but logical decoding does not take much time. So with the\n> default 1 min WAL sender and receiver timeout I could not reproduce the\n> timeout. Beyond that our TAp test itself times out.\n> >\n> > But I think there's a possibility that the logical receiver will time\n> out this way when decoding a sufficiently large transaction which takes\n> more than the timeout amount of time to decode. So I think we need to call\n> OutputPluginUpdateProgress() after a regular interval (in terms of time or\n> number of changes) to consume any feedback from the subscriber or send a\n> keep-alive message.\n> >\n>\n> I don't think it will be a good idea to directly call\n> OutputPluginUpdateProgress() from reorderbuffer.c. There is already a\n> patch to discuss this problem [1].\n>\n\nYeah. I don't mean to use OutputPluginUpdateProgress() directly. The patch\njust showed that it helps calling it there in some way. Thanks for pointing\nthe other thread. 
I have reviewed the patch on that thread and continue the\ndiscussion there.\n\n\n>\n> > Following commit\n> > ```\n> > commit 87c1dd246af8ace926645900f02886905b889718\n> > Author: Amit Kapila <akapila@postgresql.org>\n> > Date: Wed May 11 10:12:23 2022 +0530\n> >\n> > Fix the logical replication timeout during large transactions.\n> >\n> > ```\n> > fixed a similar problem when the changes were filtered by an output\n> plugin, but in this case the changes are not being handed over to the\n> output plugin as well. If we fix it in the core we may not need to handle\n> it in the output plugin as that commit does. The commit does not have a\n> test case which I could run to reproduce the timeout.\n> >\n>\n> It is not evident how to write a stable test for this because\n> estimating how many changes are enough for the configured\n> wal_receiver_timeout to\n> pass on all the buildfarm machines is tricky. If you have good ideas\n> then feel free to propose a test patch.\n>\n\nWill continue this on the other thread.\n\n-- \nBest Wishes,\nAshutosh",
"msg_date": "Mon, 9 Jan 2023 20:20:47 +0530",
"msg_from": "Ashutosh Bapat <ashutosh.bapat@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Timeout when changes are filtered out by the core during logical\n replication"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI realized a behaviour of logical replication that seems unexpected to me,\nbut I'm not totally sure.\n\nLet's say a new table is created and added into a publication and not\ncreated on subscriber yet. Also \"ALTER SUBSCRIPTION ... REFRESH\nPUBLICATION\" has not been called yet.\nWhat I expect in that case would be that logical replication continues to\nwork as it was working before the new table was created. The new table does\nnot get replicated until \"REFRESH PUBLICATION\" as stated here [1].\nThis is indeed how it actually seems to work. Until we insert a row into\nthe new table.\n\nAfter a new row into the new table, the apply worker gets this change and\ntries to apply it. As expected, it fails since the table does not exist on\nthe subscriber yet. And the worker keeps crashing and can't apply\nany changes for any table.\nThe obvious way to resolve this is creating the table on subscriber as\nwell. After that apply worker will be back to work and skip changes for the\nnew table and move to other changes.\nSince REFRESH PUBLICATION is not called yet, any change for the new table\nwill not be replicated.\n\nIf replication of the new table will not start anyway (until REFRESH\nPUBLICATION), do we really need to have that table on the subscriber for\napply worker to work?\nAFAIU any change on publication would not affect logical replication setup\nuntil the publication gets refreshed on subscriber. 
If this understanding\nis correct, then apply worker should be able to run without needing new\ntables.\nWhat do you think?\n\nAlso; if you agree, then the attached patch attempts to fix this issue.\nIt relies on the info from pg_subscription_rel so that apply worker only\napplies changes for the relations exist in pg_subscription_rel.\nSince new tables wouldn't be in there until the next REFRESH PUBLICATION,\nmissing those tables won't be a problem for existing subscriptions.\n\n[1] https://www.postgresql.org/docs/current/sql-altersubscription.html\n\nThanks,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Thu, 22 Dec 2022 16:46:02 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Apply worker fails if a relation is missing on subscriber even if\n refresh publication has not been refreshed yet"
},
{
"msg_contents": "On Thu, Dec 22, 2022 at 7:16 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n> Hi hackers,\n>\n> I realized a behaviour of logical replication that seems unexpected to me, but not totally sure.\n>\n> Let's say a new table is created and added into a publication and not created on subscriber yet. Also \"ALTER SUBSCRIPTION ... REFRESH PUBLICATION\" has not been called yet.\n> What I expect in that case would be that logical replication continues to work as it was working before the new table was created. The new table does not get replicated until \"REFRESH PUBLICATION\" as stated here [1].\n> This is indeed how it actually seems to work. Until we insert a row into the new table.\n>\n> After a new row into the new table, the apply worker gets this change and tries to apply it. As expected, it fails since the table does not exist on the subscriber yet. And the worker keeps crashing without and can't apply any changes for any table.\n> The obvious way to resolve this is creating the table on subscriber as well. After that apply worker will be back to work and skip changes for the new table and move to other changes.\n> Since REFRESH PUBLICATION is not called yet, any change for the new table will not be replicated.\n>\n> If replication of the new table will not start anyway (until REFRESH PUBLICATION), do we really need to have that table on the subscriber for apply worker to work?\n> AFAIU any change on publication would not affect logical replication setup until the publication gets refreshed on subscriber.\n>\n\nI also have the same understanding but I think if we skip replicating\nsome table due to the reason that the corresponding publication has\nnot been refreshed then it is better to LOG that information instead\nof silently skipping it. 
Along similar lines, personally, I don't see\na very strong reason to not throw the ERROR in the case you mentioned.\nDo you have any use case in mind where the user has added a table to\nthe publication even though she doesn't want it to be replicated? One\nthing that came to my mind is that due to some reason after adding a\ntable to the publication, there is some delay in creating the table on\nthe subscriber and then refreshing the publication and during that\ntime user expects replication to proceed smoothly. But for that isn't\nit better that the user completes the setup on the subscriber before\nperforming operations on such a table? Because say there is some error\nin the subscriber-side setup that the user misses then it would be a\nsurprise for a user to not see the table data. In such a case, an\nERROR/LOG information could be helpful for users.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 23 Dec 2022 12:09:30 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Apply worker fails if a relation is missing on subscriber even if\n refresh publication has not been refreshed yet"
},
{
    "msg_contents": "Hi Amit,\n\nAmit Kapila <amit.kapila16@gmail.com> wrote the following on Fri, Dec 23, 2022\nat 09:39:\n\n> I also have the same understanding but I think if we skip replicating\n> some table due to the reason that the corresponding publication has\n> not been refreshed then it is better to LOG that information instead\n> of silently skipping it.\n\n\nBy skipping it, I mean the apply worker does not try to do anything with\nthe changes for the missing table since the worker simply cannot apply it\nand only fails.\nBut I agree with you about logging it, the patch currently logs such cases\nas warnings instead of errors.\nI can make it LOG instead of WARNING, just wanted to make something\ndifferent than ERROR.\n\n> Do you have any use case in mind where the user has added a table to\n> the publication even though she doesn't want it to be replicated? One\n> thing that came to my mind is that due to some reason after adding a\n> table to the publication, there is some delay in creating the table on\n> the subscriber and then refreshing the publication and during that\n> time user expects replication to proceed smoothly. But for that isn't\n> it better that the user completes the setup on the subscriber before\n> performing operations on such a table? Because say there is some error\n> in the subscriber-side setup that the user misses then it would be a\n> surprise for a user to not see the table data. In such a case, an\n> ERROR/LOG information could be helpful for users.\n>\n\nI don't really see a specific use case for this. The delay between creating\na table on the publisher and then on the subscriber usually may not even be\nlong enough to hurt anything. It just seems unnecessary to me that the apply\nworker goes into a failure loop until someone creates the table on the\nsubscriber, even though the table will not be replicated immediately.\n\nUsers also shouldn't expect such tables to be replicated if they did\nnot refresh the publication. That will not happen with or without this\nchange. So I don't think it would be a surprise when they see their new\ntable has not been replicated yet. This issue will also be visible in the\nlogs, just not as an error.\nAnd if users decide/remember to refresh the publication, they cannot do\nthat anyway if the table is still missing on the subscriber. So the REFRESH\nPUBLICATION command will fail and then users will see an error log.\n\nBest,\n-- \nMelih Mutlu\nMicrosoft",
"msg_date": "Mon, 26 Dec 2022 13:11:06 +0300",
"msg_from": "Melih Mutlu <m.melihmutlu@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Apply worker fails if a relation is missing on subscriber even if\n refresh publication has not been refreshed yet"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 3:41 PM Melih Mutlu <m.melihmutlu@gmail.com> wrote:\n>\n>> Do you have any use case in mind where the user has added a table to\n>> the publication even though she doesn't want it to be replicated? One\n>> thing that came to my mind is that due to some reason after adding a\n>> table to the publication, there is some delay in creating the table on\n>> the subscriber and then refreshing the publication and during that\n>> time user expects replication to proceed smoothly. But for that isn't\n>> it better that the user completes the setup on the subscriber before\n>> performing operations on such a table? Because say there is some error\n>> in the subscriber-side setup that the user misses then it would be a\n>> surprise for a user to not see the table data. In such a case, an\n>> ERROR/LOG information could be helpful for users.\n>\n>\n> I don't really see a specific use case for this. The delay between creating a table on publisher and then on subscriber usually may not be even that long to hurt anything. It just seems unnecessary to me that apply worker goes into a failure loop until someone creates the table on the subscriber, even though the table will not be replicated immediately.\n>\n\nTo avoid the failure loop, users can use disable_on_error subscription\nparameter. I see your point but not sure if it is worth changing the\ncurrent behavior without any specific use case which we want to\naddress with this change.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 26 Dec 2022 16:02:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Apply worker fails if a relation is missing on subscriber even if\n refresh publication has not been refreshed yet"
}
] |
[
{
    "msg_contents": "Hi.\n\nPer Coverity.\n\nThe commit ccff2d2\n<https://github.com/postgres/postgres/commit/ccff2d20ed9622815df2a7deffce8a7b14830965>\nchanged the behavior of the function ArrayGetNItems\nwith the introduction of the function ArrayGetNItemsSafe.\n\nNow ArrayGetNItems may return -1, according to the comment:\n\"instead of throwing an exception. -1 is returned after an error.\"\n\nSo the macro ARRNELEMS can fail entirely with a -1 return,\nresulting in code that uses the result without checking the function return.\n\nLike (contrib/intarray/_int_gist.c):\n{\nint nel;\n\nnel = ARRNELEMS(ent);\nmemcpy(ptr, ARRPTR(ent), nel * sizeof(int32));\n}\n\nSources possibly affected:\ncontrib\\cube\\cube.c\ncontrib\\intarray\\_intbig_gist.c\ncontrib\\intarray\\_int_bool.c\ncontrib\\intarray\\_int_gin.c\ncontrib\\intarray\\_int_gist.c\ncontrib\\intarray\\_int_op.c\ncontrib\\intarray\\_int_tool.c\n\nThoughts?\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 22 Dec 2022 12:35:58 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "ARRNELEMS Out-of-bounds possible errors"
},
{
    "msg_contents": "Hi,\n\nActually, there would be much more sources affected, like\n nbytes += subbytes[outer_nelems];\n subnitems[outer_nelems] = ArrayGetNItems(this_ndims,\n ARR_DIMS(array));\n nitems += subnitems[outer_nelems];\n havenulls |= ARR_HASNULL(array);\n outer_nelems++;\n }\n\nMaybe it is better for most calls like this to keep old behavior, by\npassing a flag\nthat says which behavior is expected by caller?\n\nOn Thu, Dec 22, 2022 at 6:36 PM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Hi.\n>\n> Per Coverity.\n>\n> The commit ccff2d2\n> <https://github.com/postgres/postgres/commit/ccff2d20ed9622815df2a7deffce8a7b14830965>,\n> changed the behavior function ArrayGetNItems,\n> with the introduction of the function ArrayGetNItemsSafe.\n>\n> Now ArrayGetNItems may return -1, according to the comment.\n> \" instead of throwing an exception. -1 is returned after an error.\"\n>\n> So the macro ARRNELEMS can fail entirely with -1 return,\n> resulting in codes failing to use without checking the function return.\n>\n> Like (contrib/intarray/_int_gist.c):\n> {\n> int nel;\n>\n> nel = ARRNELEMS(ent);\n> memcpy(ptr, ARRPTR(ent), nel * sizeof(int32));\n> }\n>\n> Sources possibly affecteds:\n> contrib\\cube\\cube.c\n> contrib\\intarray\\_intbig_gist.c\n> contrib\\intarray\\_int_bool.c\n> contrib\\intarray\\_int_gin.c\n> contrib\\intarray\\_int_gist.c\n> contrib\\intarray\\_int_op.c\n> contrib\\intarray\\_int_tool.c:\n>\n> Thoughts?\n>\n> regards,\n> Ranier Vilela\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Thu, 22 Dec 2022 21:45:35 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
    "msg_contents": "On Thu, Dec 22, 2022 at 15:45, Nikita Malakhov <hukutoc@gmail.com>\nwrote:\n\n> Hi,\n>\n> Actually, there would be much more sources affected, like\n> nbytes += subbytes[outer_nelems];\n> subnitems[outer_nelems] = ArrayGetNItems(this_ndims,\n> ARR_DIMS(array));\n> nitems += subnitems[outer_nelems];\n> havenulls |= ARR_HASNULL(array);\n> outer_nelems++;\n> }\n>\n> Maybe it is better for most calls like this to keep old behavior, by\n> passing a flag\n> that says which behavior is expected by caller?\n>\nI agreed that it is better to keep old behavior.\nEven the value 0 is problematic, with calls like this:\n\nnel = ARRNELEMS(ent);\nmemcpy(ptr, ARRPTR(ent), nel * sizeof(int32));\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 22 Dec 2022 19:20:26 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
    "msg_contents": "Hi,\n\nThe most obvious solution I see is to check all calls and for cases like we\nboth mentioned\nto pass a flag meaning safe or unsafe (for these cases) behavior is\nexpected, like\n\n#define ARRNELEMS(x) ArrayGetNItems( ARR_NDIM(x), ARR_DIMS(x), false)\n\n...\n\nint\nArrayGetNItems(int ndim, const int *dims, bool issafe)\n{\nreturn ArrayGetNItemsSafe(ndim, dims, NULL, issafe);\n}\n\nint\nArrayGetNItemsSafe(int ndim, const int *dims, struct Node *escontext, bool\nissafe)\n{\n...\n\nAnother solution is revision of wrapping code for all calls of\nArrayGetNItems.\nSafe functions is a good idea overall, but a lot of code needs to be\nrevised.\n\nOn Fri, Dec 23, 2022 at 1:20 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n\n> Em qui., 22 de dez. de 2022 às 15:45, Nikita Malakhov <hukutoc@gmail.com>\n> escreveu:\n>\n>> Hi,\n>>\n>> Actually, there would be much more sources affected, like\n>> nbytes += subbytes[outer_nelems];\n>> subnitems[outer_nelems] = ArrayGetNItems(this_ndims,\n>> ARR_DIMS(array));\n>> nitems += subnitems[outer_nelems];\n>> havenulls |= ARR_HASNULL(array);\n>> outer_nelems++;\n>> }\n>>\n>> Maybe it is better for most calls like this to keep old behavior, by\n>> passing a flag\n>> that says which behavior is expected by caller?\n>>\n> I agreed that it is better to keep old behavior.\n> Even the value 0 is problematic, with calls like this:\n>\n> nel = ARRNELEMS(ent);\n> memcpy(ptr, ARRPTR(ent), nel * sizeof(int32));\n>\n> regards,\n> Ranier Vilela\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Fri, 23 Dec 2022 10:57:25 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
"msg_contents": "At Thu, 22 Dec 2022 12:35:58 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi.\n> \n> Per Coverity.\n> \n> The commit ccff2d2\n> <https://github.com/postgres/postgres/commit/ccff2d20ed9622815df2a7deffce8a7b14830965>,\n> changed the behavior function ArrayGetNItems,\n> with the introduction of the function ArrayGetNItemsSafe.\n> \n> Now ArrayGetNItems may return -1, according to the comment.\n> \" instead of throwing an exception. -1 is returned after an error.\"\n\nIf I'm reading the code correctly, it's the definition of\nArrayGetNItems*Safe*. ArrayGetNItems() calls that function with a NULL\nescontext and the NULL turns ereturn() into ereport(). That doesn't\nseem to be changed by the commit.\n\nOf course teaching Coverity not to issue the false warnings would be\nanother actual issue that we should do, maybe.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 23 Dec 2022 17:37:55 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
"msg_contents": "At Fri, 23 Dec 2022 17:37:55 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Thu, 22 Dec 2022 12:35:58 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> > Hi.\n> > \n> > Per Coverity.\n> > \n> > The commit ccff2d2\n> > <https://github.com/postgres/postgres/commit/ccff2d20ed9622815df2a7deffce8a7b14830965>,\n> > changed the behavior function ArrayGetNItems,\n> > with the introduction of the function ArrayGetNItemsSafe.\n> > \n> > Now ArrayGetNItems may return -1, according to the comment.\n> > \" instead of throwing an exception. -1 is returned after an error.\"\n> \n> If I'm reading the code correctly, it's the definition of\n> ArrayGetNItems*Safe*. ArrayGetNItems() calls that function with a NULL\n> escontext and the NULL turns ereturn() into ereport().\n\n> That doesn't seem to be changed by the commit.\n\nNo.. It seems to me that the commit didn't change its behavior in that\nregard.\n\n> Of course teaching Coverity not to issue the false warnings would be\n> another actual issue that we should do, maybe.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 23 Dec 2022 17:40:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
"msg_contents": "Hi,\n\nEven with null context it does not turn to ereport, and returns dummy value\n-\n\n#define errsave_domain(context, domain, ...) \\\ndo { \\\nstruct Node *context_ = (context); \\\npg_prevent_errno_in_scope(); \\\nif (errsave_start(context_, domain)) \\\n__VA_ARGS__, errsave_finish(context_, __FILE__, __LINE__, __func__); \\\n} while(0)\n\n#define errsave(context, ...) \\\nerrsave_domain(context, TEXTDOMAIN, __VA_ARGS__)\n\n/*\n * \"ereturn(context, dummy_value, ...);\" is exactly the same as\n * \"errsave(context, ...); return dummy_value;\". This saves a bit\n * of typing in the common case where a function has no cleanup\n * actions to take after reporting a soft error. \"dummy_value\"\n * can be empty if the function returns void.\n */\n#define ereturn_domain(context, dummy_value, domain, ...) \\\ndo { \\\nerrsave_domain(context, domain, __VA_ARGS__); \\\nreturn dummy_value; \\\n} while(0)\n\n#define ereturn(context, dummy_value, ...) \\\nereturn_domain(context, dummy_value, TEXTDOMAIN, __VA_ARGS__)\n\n\nOn Fri, Dec 23, 2022 at 11:40 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Fri, 23 Dec 2022 17:37:55 +0900 (JST), Kyotaro Horiguchi <\n> horikyota.ntt@gmail.com> wrote in\n> > At Thu, 22 Dec 2022 12:35:58 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > > Hi.\n> > >\n> > > Per Coverity.\n> > >\n> > > The commit ccff2d2\n> > > <\n> https://github.com/postgres/postgres/commit/ccff2d20ed9622815df2a7deffce8a7b14830965\n> >,\n> > > changed the behavior function ArrayGetNItems,\n> > > with the introduction of the function ArrayGetNItemsSafe.\n> > >\n> > > Now ArrayGetNItems may return -1, according to the comment.\n> > > \" instead of throwing an exception. -1 is returned after an error.\"\n> >\n> > If I'm reading the code correctly, it's the definition of\n> > ArrayGetNItems*Safe*. 
ArrayGetNItems() calls that function with a NULL\n> > escontext and the NULL turns ereturn() into ereport().\n>\n> > That doesn't seem to be changed by the commit.\n>\n> No.. It seems to me that the commit didn't change its behavior in that\n> regard.\n>\n> > Of course teaching Coverity not to issue the false warnings would be\n> > another actual issue that we should do, maybe.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Sat, 24 Dec 2022 18:10:47 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
"msg_contents": "Nikita Malakhov <hukutoc@gmail.com> writes:\n> Even with null context it does not turn to ereport, and returns dummy value\n\nRead the code. ArrayGetNItems passes NULL for escontext, therefore\nif there's a problem the ereturn calls in ArrayGetNItemsSafe will\nthrow error, *not* return -1.\n\nNot sure how we could persuade Coverity of that, though,\nif it fails to understand that for itself.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 24 Dec 2022 11:05:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
    "msg_contents": "Hi,\n\nMy bad, I was misled by the unconditional return in ereturn_domain.\nSorry for the noise.\n\n\nOn Sat, Dec 24, 2022 at 7:05 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Nikita Malakhov <hukutoc@gmail.com> writes:\n> > Even with null context it does not turn to ereport, and returns dummy\n> value\n>\n> Read the code. ArrayGetNItems passes NULL for escontext, therefore\n> if there's a problem the ereturn calls in ArrayGetNItemsSafe will\n> throw error, *not* return -1.\n>\n> Not sure how we could persuade Coverity of that, though,\n> if it fails to understand that for itself.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Mon, 26 Dec 2022 21:45:28 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
},
{
    "msg_contents": "On Mon, Dec 26, 2022 at 15:45, Nikita Malakhov <hukutoc@gmail.com>\nwrote:\n\n> Hi,\n>\n> My bad, I was misleaded by unconditional return in ereturn_domain.\n> Sorry for the noise.\n>\nBy no means, the mistake was entirely mine, I apologize to you.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 26 Dec 2022 15:53:55 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: ARRNELEMS Out-of-bounds possible errors"
}
] |
[
{
"msg_contents": ">\n> Hi,\n> I was looking at commit 941aa6a6268a6a66f6895401aad6b5329111d412 .\n>\n> I think it would be better to move the assertion into\n> `index_beginscan_internal` because it is called by index_beginscan\n> and index_beginscan_bitmap\n>\n> In the future, when a new variant of `index_beginscan_XX` is added, the\n> assertion would be effective for the new variant.\n>\n> Please let me know what you think.\n>\n> Cheers\n>",
"msg_date": "Thu, 22 Dec 2022 12:35:59 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: checking snapshot argument for index_beginscan"
},
{
    "msg_contents": "On Fri, 23 Dec 2022 at 00:36, Ted Yu <yuzhihong@gmail.com> wrote:\n>>\n>> Hi,\n>> I was looking at commit 941aa6a6268a6a66f6895401aad6b5329111d412 .\n>>\n>> I think it would be better to move the assertion into `index_beginscan_internal` because it is called by index_beginscan and index_beginscan_bitmap\n>>\n>> In the future, when a new variant of `index_beginscan_XX` is added, the assertion would be effective for the new variant.\n>>\n>> Please let me know what you think.\n\nHi, Ted!\n\nTwo of the four asserts could be combined as you proposed in the patch.\nThe assert added by 941aa6a6268a6a66f6 to index_parallelscan_initialize\nshould remain anyway, as otherwise we can get a segfault in\nSerializeSnapshot, which is called from this function.\nThe assert in index_parallelscan_estimate can be omitted, as there is the\nsame assert inside EstimateSnapshotSpace, which is called from this function.\nI've included this in version 2 of the patch.\n\nNot sure it's worth the effort but IMO the patch is right and can be committed.\n\nKind regards,\nPavel Borisov,\nSupabase.",
"msg_date": "Fri, 23 Dec 2022 01:13:50 +0400",
"msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: checking snapshot argument for index_beginscan"
}
] |
[
{
"msg_contents": "Hi there,\n\nWe'd like to be able to call the lock manager's WaitForLockers() and\nWaitForLockersMultiple() from SQL. Below I describe our use case, but\nbasically I'm wondering if this:\n\n 1. Seems like a reasonable thing to do\n\n 2. Would be of interest upstream\n\n 3. Should be done with a new pg_foo() function (taking an\n oid?), or a whole new SQL command, or something else\n\nIf this sounds promising, we may be able to code this up and submit it.\n\nThe rest of this email describes our use cases and related observations:\n\n\n==== Use Case Background ====\n\nOur use case is inspired by this blog post by Marco Slot (CC'ed) at\nCitus Data: https://www.citusdata.com/blog/2018/06/14/scalable-incremental-data-aggregation/\n. This describes a scheme for correctly aggregating rows given minimal\ncoordination with an arbitrary number of writers while keeping minimal\nadditional state. It relies on two simple facts:\n\n 1. INSERT/UPDATE take their ROW EXCLUSIVE lock on the target\n table before evaluating any column DEFAULT expressions,\n and thus before e.g. calling nextval() on a sequence in\n the DEFAULT expression. And of course, this lock is only\n released when the transaction commits or rolls back.\n\n 2. pg_sequence_last_value() (still undocumented!) can be\n used to obtain an instantaneous upper bound on the\n sequence values that have been returned by nextval(), even\n if the transaction that called nextval() hasn't yet\n committed.\n\nSo, assume we have a table:\n\n create table tbl (\n id bigserial,\n data text\n );\n\nwhich is only ever modified by INSERTs that use DEFAULT for id. 
Then,\na client can process each row exactly once using a loop like this\n(excuse the pseudo-SQL):\n\n min_id := 0;\n while true:\n max_id := pg_sequence_last_value('tbl_id_seq');\n wait_for_writers('tbl'::regclass);\n SELECT\n some_aggregation(data)\n FROM tbl\n WHERE id > min_id AND id <= max_id;\n min_id := max_id;\n\nIn the blog post, the equivalent of wait_for_writers() is implemented\nby taking and immediately releasing a SHARE ROW EXCLUSIVE lock on tbl.\nIt's unclear why this can't be SHARE, since it just needs to conflict\nwith INSERT's ROW EXCLUSIVE, but in any case it's sufficient for\ncorrectness.\n\n(Note that this version only works if the rows committed by the\ntransactions that it waited for are actually visible to the SELECT, so\nfor example, the whole thing can't be within a Repeatable Read or\nSerializable transaction.)\n\n\n==== Why WaitForLockers()? ====\n\nNo new writer can acquire a ROW EXCLUSIVE lock as long as we're\nwaiting to obtain the SHARE lock, even if we only hold it for an\ninstant. If we have to wait a long time, because some existing writer\nholds its ROW EXCLUSIVE lock for a long time, this could noticeably\nreduce overall writer throughput.\n\nBut we don't actually need to obtain a lock at all--and waiting for\ntransactions that already hold conflicting locks is exactly what\nWaitForLockers() / WaitForLockersMultiple() does. Using it instead\nwould prevent any interference with writers.\n\n\n==== Appendix: Extensions and Observations ====\n\nAside from downgrading to SHARE mode and merely waiting instead of\nlocking, we propose a couple other extensions and observations related\nto Citus' scheme. 
These only tangentially motivate our need for\nWaitForLockers(), so you may stop reading here unless the overall\nscheme is of interest.\n\n== Separate client for reading sequences and waiting ==\n\nFirst, in our use case each batch of rows might require extensive\nprocessing as part of a larger operation that doesn't want to block\nwaiting for writers to commit. A simple extension is to separate the\nprocessing from the determination of sequence values. In other words,\nhave a single client that sits in a loop:\n\n while true:\n seq_val := pg_sequence_last_value('tbl_id_seq');\n WaitForLockers('tbl'::regclass, 'SHARE');\n publish(seq_val);\n\nand any number of other clients that use the series of published\nsequence values to do their own independent processing (maintaining\ntheir own additional state).\n\nThis can be extended to multiple tables with WaitForLockersMultiple():\n\n while true:\n seq_val1 := pg_sequence_last_value('tbl1_id_seq');\n seq_val2 := pg_sequence_last_value('tbl2_id_seq');\n WaitForLockersMultiple(\n ARRAY['tbl1', 'tbl2']::regclass[], 'SHARE');\n publish('tbl1', seq_val1);\n publish('tbl2', seq_val2);\n\nWhich is clearly more efficient than locking or waiting for the tables\nin sequence, hence the desire for that function as well.\n\n== Latency ==\n\nThis brings us to a series of observations about latency. If some\nwriters take a long time to commit, some already-committed rows might\nnot be processed for a long time. 
To avoid exacerbating this when\nusing WaitForLockersMultiple(), which obviously has to wait for the\nlast writer of any specified table, it should be used with groups of\ntables that are generally written by the same transactions.\n\nAlso, while in Citus' example the aggregation needs to process each\nrow exactly once, latency can be reduced if a row may be processed\nmore than once and if rows can be processed out of order by sequence\nvalue (id), by simply removing the \"id <= max_id\" term from the WHERE\nclause in the reader. This particularly reduces latency if waiting and\nprocessing are separated as described in the above section.\n\n== Updates and latency ==\n\nIn our application we have some use cases with a table like:\n\n create table tbl (\n id bigint primary key,\n data text,\n mod_idx bigserial\n );\n\nwhere writers do:\n\n INSERT INTO tbl (id, data) VALUES (1, 'foo')\n ON CONFLICT (id) DO UPDATE\n SET data = excluded.data, mod_idx = DEFAULT;\n\nand where the reader's job is to continuously replicate rows within a\nfixed range of id's in an eventually-consistent fashion. Since the\nwriter always bumps mod_idx by setting it to DEFAULT, superficially it\nseems we can use this scheme with mod_idx:\n\n min_mod_idx := 0;\n while true:\n max_mod_idx := pg_sequence_last_value('tbl_mod_idx_seq');\n WaitForLockers('tbl'::regclass, 'SHARE');\n SELECT\n do_replicate(id, data, mod_idx)\n FROM tbl\n WHERE\n id >= my_min_id -- app-specific\n AND id < my_max_id -- app-specific\n AND mod_idx > min_mod_idx\n AND mod_idx <= max_mod_idx\n ORDER BY mod_idx;\n min_mod_idx := max_mod_idx;\n\nThis version replicates all rows eventually (if writers stop),\nreplicates each version of a row at most once (allowing updates to be\nskipped if obviated by a later committed update), and replicates\nchanges in order by mod_idx, which may make bookkeeping easier. 
But\ncontinuous overlapping updates could keep some rows *perpetually* out\nof the reader's reach, leading to *unbounded* latency. If the\napplication can instead tolerate potentially replicating the same\nversion of a row more than once, and replicating changes to different\nrows out of order by mod_idx, latency can be minimized by removing\n\"mod_idx <= max_mod_idx\" from the WHERE clause. (The ORDER BY should\nlikely also be removed, since later batches may contain rows with a\nlower mod_idx.) The remainder of the scheme still ensures that all\nrows are eventually replicated, and limits redundant replication while\nkeeping minimal state.\n\n== Latency tradeoff and advantages ==\n\nIn conclusion, with this scheme there is a tradeoff between minimizing\nlatency and avoiding redundant processing, where (depending on the\nscenario) the amount of latency or redundant processing is related to\nthe maximum amount of time that a writer transaction holds a ROW\nEXCLUSIVE lock on the table. Therefore, this time should be minimized\nwherever possible.\n\nThis tradeoff seems to be an inherent consequence of the minimalist\nadvantages of this scheme:\n\n 1. If we use WaitForLockers(), no additional locks are taken,\n so there's no impact on concurrency of writers\n\n 2. If WaitForLockers() is separated from readers, there's no\n impact on concurrency/waiting of readers\n\n 3. Can be used to guarantee eventual consistency as desired\n\n 4. Keeps O(1) state per table (per reader)--no tracking of\n individual writers or individual row updates\n\n 5. Requires minimal cooperation from writers (just use DEFAULT\n expressions that use nextval())\n\n\n",
"msg_date": "Fri, 23 Dec 2022 02:41:20 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Fri, Dec 23, 2022 at 11:43 AM Will Mortensen <will@extrahop.com> wrote:\n> We'd like to be able to call the lock manager's WaitForLockers() and\n> WaitForLockersMultiple() from SQL. Below I describe our use case, but\n> basically I'm wondering if this:\n>\n> 1. Seems like a reasonable thing to do\n>\n> 2. Would be of interest upstream\n>\n> 3. Should be done with a new pg_foo() function (taking an\n> oid?), or a whole new SQL command, or something else\n\nDefinitely +1 on adding a function/syntax to wait for lockers without\nactually taking a lock. The get sequence value + lock-and-release\napproach is still the only reliable scheme I've found for reliably and\nefficiently processing new inserts in PostgreSQL. I'm wondering\nwhether it could be an option of the LOCK command. (LOCK WAIT ONLY?)\n\nMarco\n\n\n",
"msg_date": "Tue, 10 Jan 2023 10:01:25 +0100",
"msg_from": "Marco Slot <marco.slot@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi Marco, thanks for the reply! Glad to know you'd find it useful too. :-)\n\nOn Tue, Jan 10, 2023 at 1:01 AM Marco Slot <marco.slot@gmail.com> wrote:\n> I'm wondering whether it could be an option of the LOCK command.\n> (LOCK WAIT ONLY?)\n\nI assume that's doable, but just from looking at the docs, it might be\na little confusing. For example, at least if we use\nWaitForLockersMultiple(), waiting for multiple tables would happen in\nparallel (which I think is good), while locking them is documented to\nhappen sequentially. Also, normal LOCK is illegal outside a\ntransaction, but waiting makes perfect sense. (Actually, normal LOCK\nmakes sense too, if the goal was just to wait. :-) )\n\nBy contrast, while LOCK has NOWAIT, and SELECT's locking clause\nhas NOWAIT and SKIP LOCKED, they only change the blocking/failure\nbehavior, while success still means taking the lock and has the same\nsemantics.\n\nBut I'm really no expert on SQL syntax or typical practice for things like\nthis. Anything that works is fine with me. :-)\n\n====\n\nAs a possibly superfluous sidebar, I wanted to correct this part of my\noriginal message:\n\n> On Fri, Dec 23, 2022 at 11:43 AM Will Mortensen <will@extrahop.com> wrote:\n> > pg_sequence_last_value() (still undocumented!) can be used to\n> > obtain an instantaneous upper bound on the sequence values\n> > that have been returned by nextval(), even if the transaction\n> > that called nextval() hasn't yet committed.\n\nThis is true, but not the most important part of making this scheme\nwork: as you mentioned in the Citus blog post, to avoid missing rows,\nwe need (and this gives us) an instantaneous *lower* bound on the\nsequence values that could be used by transactions that commit after\nwe finish waiting (and start processing). 
This doesn't work with\nsequence caching, since without somehow inspecting all sessions'\nsequence caches, rows with arbitrarily old/low cached sequence\nvalues could be committed arbitrarily far into the future, and we'd\nfail to process them.\n\nAs you also implied in the blog post, the upper bound is what\nallows us to also process each row *exactly* once (instead of at\nleast once) and in sequence order, if desired.\n\nSo those are the respective justifications for both arms of the\nWHERE clause: id > min_id AND id <= max_id .\n\nOn Tue, Jan 10, 2023 at 1:01 AM Marco Slot <marco.slot@gmail.com> wrote:\n>\n> On Fri, Dec 23, 2022 at 11:43 AM Will Mortensen <will@extrahop.com> wrote:\n> > We'd like to be able to call the lock manager's WaitForLockers() and\n> > WaitForLockersMultiple() from SQL. Below I describe our use case, but\n> > basically I'm wondering if this:\n> >\n> > 1. Seems like a reasonable thing to do\n> >\n> > 2. Would be of interest upstream\n> >\n> > 3. Should be done with a new pg_foo() function (taking an\n> > oid?), or a whole new SQL command, or something else\n>\n> Definitely +1 on adding a function/syntax to wait for lockers without\n> actually taking a lock. The get sequence value + lock-and-release\n> approach is still the only reliable scheme I've found for reliably and\n> efficiently processing new inserts in PostgreSQL. I'm wondering\n> whether it could be an option of the LOCK command. (LOCK WAIT ONLY?)\n>\n> Marco\n\n\n",
"msg_date": "Wed, 11 Jan 2023 01:59:38 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-10 10:01:25 +0100, Marco Slot wrote:\n> On Fri, Dec 23, 2022 at 11:43 AM Will Mortensen <will@extrahop.com> wrote:\n> > We'd like to be able to call the lock manager's WaitForLockers() and\n> > WaitForLockersMultiple() from SQL. Below I describe our use case, but\n> > basically I'm wondering if this:\n> >\n> > 1. Seems like a reasonable thing to do\n> >\n> > 2. Would be of interest upstream\n> >\n> > 3. Should be done with a new pg_foo() function (taking an\n> > oid?), or a whole new SQL command, or something else\n> \n> Definitely +1 on adding a function/syntax to wait for lockers without\n> actually taking a lock.\n\nI think such a function would still have to integrate enough with the lock\nmanager infrastructure to participate in the deadlock detector. Otherwise I\nthink you'd trivially end up with loads of deadlocks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:33:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi Andres,\n\nOn Wed, Jan 11, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> I think such a function would still have to integrate enough with the lock\n> manager infrastructure to participate in the deadlock detector. Otherwise I\n> think you'd trivially end up with loads of deadlocks.\n\nCould you elaborate on which unusual deadlock concerns arise? To be\nclear, WaitForLockers() is an existing function in lmgr.c\n(https://github.com/postgres/postgres/blob/216a784829c2c5f03ab0c43e009126cbb819e9b2/src/backend/storage/lmgr/lmgr.c#L986),\nand naively it seems like we mostly just need to call it. To my very\nlimited understanding, from looking at the existing callers and the\nimplementation of LOCK, that would look something like this\n(assuming we're in a SQL command like LOCK and calling unmodified\nWaitForLockers() with a single table):\n\n1. Call something like RangeVarGetRelidExtended() with AccessShareLock\nto ensure the table is not dropped and obtain the table oid\n\n2. Use SET_LOCKTAG_RELATION() to construct the lock tag from the oid\n\n3. Call WaitForLockers(), which internally calls GetLockConflicts() and\nVirtualXactLock(). These certainly take plenty of locks of various types,\nand will likely sleep in LockAcquire() waiting for transactions to finish,\nbut there don't seem to be any unusual pre/postconditions, nor do we\nhold any unusual locks already.\n\nObviously a deadlock is possible if transactions end up waiting for each\nother, just as when taking table or row locks, etc., but it seems like this\nwould be detected as usual?\n\n\n",
"msg_date": "Wed, 11 Jan 2023 23:03:30 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "I suppose if it's correct that we need to lock the table first (at least\nin ACCESS SHARE mode), an option to LOCK perhaps makes\nmore sense. Maybe you could specify two modes like:\n\nLOCK TABLE IN _lockmode_ MODE AND THEN WAIT FOR CONFLICTS WITH _waitmode_ MODE;\n\nBut that might be excessive. :-D And I don't know if there's any\nreason to use a _lockmode_ other than ACCESS SHARE.\n\nOn Wed, Jan 11, 2023 at 11:03 PM Will Mortensen <will@extrahop.com> wrote:\n>\n> Hi Andres,\n>\n> On Wed, Jan 11, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think such a function would still have to integrate enough with the lock\n> > manager infrastructure to participate in the deadlock detector. Otherwise I\n> > think you'd trivially end up with loads of deadlocks.\n>\n> Could you elaborate on which unusual deadlock concerns arise? To be\n> clear, WaitForLockers() is an existing function in lmgr.c\n> (https://github.com/postgres/postgres/blob/216a784829c2c5f03ab0c43e009126cbb819e9b2/src/backend/storage/lmgr/lmgr.c#L986),\n> and naively it seems like we mostly just need to call it. To my very\n> limited understanding, from looking at the existing callers and the\n> implementation of LOCK, that would look something like this\n> (assuming we're in a SQL command like LOCK and calling unmodified\n> WaitForLockers() with a single table):\n>\n> 1. Call something like RangeVarGetRelidExtended() with AccessShareLock\n> to ensure the table is not dropped and obtain the table oid\n>\n> 2. Use SET_LOCKTAG_RELATION() to construct the lock tag from the oid\n>\n> 3. Call WaitForLockers(), which internally calls GetLockConflicts() and\n> VirtualXactLock(). 
These certainly take plenty of locks of various types,\n> and will likely sleep in LockAcquire() waiting for transactions to finish,\n> but there don't seem to be any unusual pre/postconditions, nor do we\n> hold any unusual locks already.\n>\n> Obviously a deadlock is possible if transactions end up waiting for each\n> other, just as when taking table or row locks, etc., but it seems like this\n> would be detected as usual?\n\n\n",
"msg_date": "Thu, 12 Jan 2023 00:17:48 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 23:03:30 -0800, Will Mortensen wrote:\n> On Wed, Jan 11, 2023 at 12:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think such a function would still have to integrate enough with the lock\n> > manager infrastructure to participate in the deadlock detector. Otherwise I\n> > think you'd trivially end up with loads of deadlocks.\n>\n> Could you elaborate on which unusual deadlock concerns arise? To be\n> clear, WaitForLockers() is an existing function in lmgr.c\n> (https://github.com/postgres/postgres/blob/216a784829c2c5f03ab0c43e009126cbb819e9b2/src/backend/storage/lmgr/lmgr.c#L986),\n> and naively it seems like we mostly just need to call it.\n\nI know that WaitForLockers() is an existing function :). I'm not sure it's\nentirely suitable for your use case. So I mainly wanted to point out that if\nyou end up writing a separate version of it, you still need to integrate with\nthe deadlock detection. WaitForLockers() does that by actually acquiring a\nlock on the \"transaction\" its waiting for.\n\n\n> To my very limited understanding, from looking at the existing callers and\n> the implementation of LOCK, that would look something like this (assuming\n> we're in a SQL command like LOCK and calling unmodified WaitForLockers()\n> with a single table):\n>\n> 1. Call something like RangeVarGetRelidExtended() with AccessShareLock\n> to ensure the table is not dropped and obtain the table oid\n>\n> 2. Use SET_LOCKTAG_RELATION() to construct the lock tag from the oid\n>\n> 3. Call WaitForLockers(), which internally calls GetLockConflicts() and\n> VirtualXactLock(). These certainly take plenty of locks of various types,\n> and will likely sleep in LockAcquire() waiting for transactions to finish,\n> but there don't seem to be any unusual pre/postconditions, nor do we\n> hold any unusual locks already.\n\nI suspect that keeping the AccessShareLock while doing the WaitForLockers() is\nlikely to increase the deadlock risk noticeably. 
I think for the use case you\nmight get away with resolving the relation names, building the locktags, and\nthen releasing the lock, before calling WaitForLockers. If somebody drops the\ntable or such, you'd presumably still get desired behaviour that way, without\nthe increased deadlock risk.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 12 Jan 2023 11:31:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi Andres,\n\nOn Thu, Jan 12, 2023 at 11:31 AM Andres Freund <andres@anarazel.de> wrote:\n> I know that WaitForLockers() is an existing function :). I'm not sure it's\n> entirely suitable for your use case. So I mainly wanted to point out that if\n> you end up writing a separate version of it, you still need to integrate with\n> the deadlock detection.\n\nI see. What about it seems potentially unsuitable?\n\n> On 2023-01-11 23:03:30 -0800, Will Mortensen wrote:\n> > To my very limited understanding, from looking at the existing callers and\n> > the implementation of LOCK, that would look something like this (assuming\n> > we're in a SQL command like LOCK and calling unmodified WaitForLockers()\n> > with a single table):\n> >\n> > 1. Call something like RangeVarGetRelidExtended() with AccessShareLock\n> > to ensure the table is not dropped and obtain the table oid\n> >\n> > 2. Use SET_LOCKTAG_RELATION() to construct the lock tag from the oid\n> >\n> > 3. Call WaitForLockers(), which internally calls GetLockConflicts() and\n> > VirtualXactLock(). These certainly take plenty of locks of various types,\n> > and will likely sleep in LockAcquire() waiting for transactions to finish,\n> > but there don't seem to be any unusual pre/postconditions, nor do we\n> > hold any unusual locks already.\n>\n> I suspect that keeping the AccessShareLock while doing the WaitForLockers() is\n> likely to increase the deadlock risk noticeably. I think for the use case you\n> might get away with resolving the relation names, building the locktags, and\n> then release the lock, before calling WaitForLockers. If somebody drops the\n> table or such, you'd presumably still get desired behaviour that way, without\n> the increased deaadlock risk.\n\nThat makes sense. I agree it seems fine to just return if e.g. 
the table is\ndropped.\n\nFWIW re: deadlocks in general, I probably didn't highlight it well in my\noriginal email, but the existing solution for this use case (as Marco\ndescribed in his blog post) is to actually lock the table momentarily.\nMarco's blog post uses ShareRowExclusiveLock, but I think ShareLock is\nsufficient for us; in any case, that's stronger than the AccessShareLock that\nwe need to merely wait.\n\nAnd actually locking the table with e.g. ShareLock seems perhaps *more*\nlikely to cause deadlocks (and hurts performance), since it not only waits for\nexisting conflicting lockers (e.g. RowExclusiveLock) as desired, but also\nundesirably blocks other transactions from newly acquiring conflicting locks\nin the meantime. Hence the motivation for this feature. :-)\n\nI'm sure I may be missing something though. Thanks for all your feedback. :-)\n\n\n",
"msg_date": "Thu, 12 Jan 2023 19:21:00 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-12 19:21:00 -0800, Will Mortensen wrote:\n> FWIW re: deadlocks in general, I probably didn't highlight it well in my\n> original email, but the existing solution for this use case (as Marco\n> described in his blog post) is to actually lock the table momentarily.\n> Marco's blog post uses ShareRowExclusiveLock, but I think ShareLock is\n> sufficient for us; in any case, that's stronger than the AccessShareLock that\n> we need to merely wait.\n> \n> And actually locking the table with e.g. ShareLock seems perhaps *more*\n> likely to cause deadlocks (and hurts performance), since it not only waits for\n> existing conflicting lockers (e.g. RowExclusiveLock) as desired, but also\n> undesirably blocks other transactions from newly acquiring conflicting locks\n> in the meantime. Hence the motivation for this feature. :-)\n> \n> I'm sure I may be missing something though. Thanks for all your feedback. :-)\n\n From a deadlock risk pov, it's worse to hold an AccessShareLock and then wait\nfor other transaction to end, than to just wait for ShareRowExclusiveLock,\nwithout holding any locks.\n\nIf you don't hold any locks (*) and wait for a lock, you cannot participate in\na deadlock, because nobody will wait for you. A deadlock is a cycle in the\nlock graph, a node can't participate in a deadlock if it doesn't have any\nincoming edges, and there can't be incoming edges if there's nothing to wait\non.\n\nConsider a scenario like this:\n\ntx 1: acquires RowExclusiveLock on tbl1 to insert rows\ntx 2: acquires AccessShareLock on tbl1\ntx 2: WaitForLockers(ShareRowExclusiveLock, tbl1) ends up waiting for tx1\ntx 1: truncate tbl1 needs an AccessExclusiveLock\n\nBoom, a simple deadlock. 
tx1 can't progress, because it can't get\nAccessExclusiveLock, and tx2 can't progress because tx1 didn't finish.\n\nBut if tx2 directly waited for ShareRowExclusiveLock, there'd not been any\ncycle in the lock graph, and everything would have worked.\n\nRegards,\n\nAndres\n\n(*) If you define holding locks expansive, it's impossible to wait for a lock\nwithout holding a lock, since every transaction holds a lock on its own\nvirtual transactionid. But normally nobody just waits for a transaction that\nhasn't done anything.\n\n\n",
"msg_date": "Thu, 12 Jan 2023 19:49:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi Andres,\n\nOn Thu, Jan 12, 2023 at 7:49 PM Andres Freund <andres@anarazel.de> wrote:\n> Consider a scenario like this:\n>\n> tx 1: acquires RowExclusiveLock on tbl1 to insert rows\n> tx 2: acquires AccessShareLock on tbl1\n> tx 2: WaitForLockers(ShareRowExclusiveLock, tbl1) ends up waiting for tx1\n> tx 1: truncate tbl1 needs an AccessExclusiveLock\n\nOh of course, thanks.\n\nIs it even necessary to take the AccessShareLock? I see that one can call e.g.\nRangeVarGetRelidExtended() with NoLock, and from the comments it seems\nlike that might be OK here?\n\nDid you have any remaining concerns about the suitability of WaitForLockers()\nfor the use case?\n\nAny thoughts on the syntax? It seems like an option to LOCK (like Marco\nsuggested) might be simplest to implement albeit a little tricky to document.\n\nSupporting descendant tables looks straightforward enough (just collect more\nlocktags?). Views look more involved; maybe we can avoid supporting them?\n\n\n",
"msg_date": "Thu, 12 Jan 2023 23:02:46 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Here is a first attempt at a WIP patch. Sorry about the MIME type.\n\nIt doesn't take any locks on the tables, but I'm not super confident\nthat that's safe, so any input would be appreciated.\n\nI omitted view support for simplicity, but if that seems like a\nrequirement I'll see about adding it. I assume we would need to take\nAccessShareLock on views (and release it, per above).\n\nIf the syntax and behavior seem roughly correct I'll work on updating the docs.\n\nThe commit message at the beginning of the .patch has slightly more commentary.\n\nThanks for any and all feedback!",
"msg_date": "Wed, 1 Feb 2023 21:55:28 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Updated patch with more tests and a first attempt at doc updates.\n\nAs the commit message and doc now point out, using\nWaitForLockersMultiple() makes for a behavior difference with actually\nlocking multiple tables, in that the combined set of conflicting locks\nis obtained only once for all tables, rather than obtaining conflicts\nand locking / waiting for just the first table and then obtaining\nconflicts and locking / waiting for the second table, etc. This is\ndefinitely desirable for my use case, but maybe these kinds of\ndifferences illustrate the potential awkwardness of extending LOCK?\n\nThanks again for any and all feedback!",
"msg_date": "Tue, 4 Jul 2023 01:11:05 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Updated docs a bit. I'll see about adding this to the next CF in hopes\nof attracting a reviewer. :-)",
"msg_date": "Wed, 2 Aug 2023 23:30:36 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "I realized that for our use case, we'd ideally wait for holders of\nRowExclusiveLock only, and not e.g. VACUUM holding\nShareUpdateExclusiveLock. Waiting for lockers in a specific mode seems\npossible by generalizing/duplicating WaitForLockersMultiple() and\nGetLockConflicts(), but I'd love to have a sanity check before\nattempting that. Also, I imagine those semantics might be too\ndifferent to make sense as part of the LOCK command.\n\nAlternatively, I had originally been trying to use the pg_locks view,\nwhich obviously provides flexibility in identifying existing lock\nholders. But I couldn't find a way to wait for the locks to be\nreleased / transactions to finish, and I was a little concerned about\nthe performance impact of selecting from it frequently when we only\ncare about a subset of the locks, although I didn't try to assess that\nin our particular application.\n\nIn any case, I'm looking forward to hearing more feedback from\nreviewers and potential users. :-)\n\n\n",
"msg_date": "Sun, 3 Sep 2023 23:16:52 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Sun, Sep 3, 2023 at 11:16 PM Will Mortensen <will@extrahop.com> wrote:\n> I realized that for our use case, we'd ideally wait for holders of\n> RowExclusiveLock only, and not e.g. VACUUM holding\n> ShareUpdateExclusiveLock. Waiting for lockers in a specific mode seems\n> possible by generalizing/duplicating WaitForLockersMultiple() and\n> GetLockConflicts(), but I'd love to have a sanity check before\n> attempting that. Also, I imagine those semantics might be too\n> different to make sense as part of the LOCK command.\n\nWell I attempted it. :-) Here is a new series that refactors\nGetLockConflicts(), generalizes WaitForLockersMultiple(), and adds a\nnew WAIT FOR LOCKERS command.\n\nI first tried extending LOCK further, but the code became somewhat\nunwieldy and the syntax became very confusing. I also thought again\nabout making new pg_foo() functions, but that would seemingly make it\nharder to share code with LOCK, and sharing syntax (to the extent it\nmakes sense) feels very natural. Also, a new SQL command provides\nplenty of doc space. :-) (I'll see about adding more examples later.)\n\nI'll try to edit the title of the CF entry accordingly. Still looking\nforward to any feedback. :-)",
"msg_date": "Sat, 23 Dec 2023 01:47:40 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "I meant to add that the example in the doc is adapted from Marco\nSlot's blog post linked earlier:\nhttps://www.citusdata.com/blog/2018/06/14/scalable-incremental-data-aggregation/\n\n\n",
"msg_date": "Sat, 23 Dec 2023 01:56:37 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Simplified the code and docs, and rewrote the example with more prose\ninstead of PL/pgSQL, which unfortunately made it longer, although it\ncould be truncated. Not really sure what's best...",
"msg_date": "Sat, 6 Jan 2024 02:57:04 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Sat, 2024-01-06 at 02:57 -0800, Will Mortensen wrote:\n> Simplified the code and docs, and rewrote the example with more prose\n> instead of PL/pgSQL, which unfortunately made it longer, although it\n> could be truncated. Not really sure what's best...\n\nI thought about this idea, and I have some doubts.\n\nWAIT FOR LOCKERS only waits for transactions that were holding locks\nwhen the statement started. Transactions that obtailed locks later on\nare ignored. While your original use case is valid, I cannot think of\nany other use case. So it is a special-purpose statement that is only\nuseful for certain processing of append-only tables.\n\nIs it worth creating a new SQL statement for that, which could lead to\na conflict with future editions of the SQL standard? Couldn't we follow\nthe PostgreSQL idiosyncrasy of providing a function with side effects\ninstead?\n\nYours,\nLaurenz Albe\n\n\n",
"msg_date": "Sat, 06 Jan 2024 13:00:08 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Hi Laurenz, thanks for taking a look!\n\nOn Sat, Jan 6, 2024 at 4:00 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> While your original use case is valid, I cannot think of\n> any other use case. So it is a special-purpose statement that is only\n> useful for certain processing of append-only tables.\n\nIt is definitely somewhat niche. :-) But as I mentioned in my\nlongwinded original message, the scheme is easily extended (with some\ntradeoffs) to process updates, if they set a non-primary-key column\nusing a sequence. As for deletions though, our applications handle\nthem separately.\n\n> Is it worth creating a new SQL statement for that, which could lead to\n> a conflict with future editions of the SQL standard? Couldn't we follow\n> the PostgreSQL idiosyncrasy of providing a function with side effects\n> instead?\n\nI would be happy to add a pg_foo() function instead. Here are a few\nthings to figure out:\n\n* To support waiting for lockers in a specified mode vs. conflicting\nwith a specified mode, should there be two functions, or one function\nwith a boolean argument like I used in C?\n\n* Presumably the function(s) would take a regclass[] argument?\n\n* Presumably the lock mode would be specified using strings like\n'ShareLock'? There's no code to parse these AFAICT, but we could add\nit.\n\n* Maybe we could omit LOCK's handling of descendant tables for\nsimplicity? I will have to see how much other code needs to be\nduplicated or shared.\n\nI'll look further into it later this week.\n\n\n",
"msg_date": "Tue, 9 Jan 2024 00:18:28 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Here is a new series adding a single pg_wait_for_lockers() function\nthat takes a boolean argument to control the interpretation of the\nlock mode. It omits LOCK's handling of descendant tables so it\nrequires permissions directly on descendants in order to wait for\nlocks on them. Not sure if that would be a problem for anyone.",
"msg_date": "Thu, 11 Jan 2024 01:51:20 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Thu, 11 Jan 2024 at 15:22, Will Mortensen <will@extrahop.com> wrote:\n>\n> Here is a new series adding a single pg_wait_for_lockers() function\n> that takes a boolean argument to control the interpretation of the\n> lock mode. It omits LOCK's handling of descendant tables so it\n> requires permissions directly on descendants in order to wait for\n> locks on them. Not sure if that would be a problem for anyone.\n\nCFBot shows that there is one warning as in [1]:\npatching file doc/src/sgml/libpq.sgml\n...\n[09:30:40.000] [940/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/storage_lmgr_condition_variable.c.obj\n[09:30:40.000] [941/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/storage_lmgr_deadlock.c.obj\n[09:30:40.000] [942/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/storage_lmgr_lmgr.c.obj\n[09:30:40.000] [943/2212] Compiling C object\nsrc/backend/postgres_lib.a.p/storage_lmgr_lock.c.obj\n[09:30:40.000] c:\\cirrus\\src\\backend\\storage\\lmgr\\lock.c(4084) :\nwarning C4715: 'ParseLockmodeName': not all control paths return a\nvalue\n\nPlease post an updated version for the same.\n\n[1] - https://api.cirrus-ci.com/v1/task/4884224944111616/logs/build.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 18:24:01 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Fri, Jan 26, 2024 at 4:54 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> CFBot shows that there is one warning as in [1]:\n> patching file doc/src/sgml/libpq.sgml\n> ...\n> [09:30:40.000] [943/2212] Compiling C object\n> src/backend/postgres_lib.a.p/storage_lmgr_lock.c.obj\n> [09:30:40.000] c:\\cirrus\\src\\backend\\storage\\lmgr\\lock.c(4084) :\n> warning C4715: 'ParseLockmodeName': not all control paths return a\n> value\n\nThanks Vignesh, I guess the MS compiler doesn't have\n__builtin_constant_p()? So I added an unreachable return, and a\nregression test that exercises this error path.\n\nI also made various other simplifications and minor fixes to the code,\ndocs, and tests.\n\nBack in v5 (with a new SQL command) I had a detailed example in the\ndocs, which I removed when changing to a function, and I'm not sure if\nI should try to add it back now...I could shrink it but it might still\nbe too long for this part of the docs?\n\nAnyway, please see attached.",
"msg_date": "Sun, 28 Jan 2024 19:28:05 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "I guess the output of the deadlock test was unstable, so I simply\nremoved it in v8 here, but I can try to fix it instead if it seems\nimportant to test that.",
"msg_date": "Sun, 28 Jan 2024 20:06:03 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Minor style fix; sorry for the spam.",
"msg_date": "Sun, 28 Jan 2024 23:44:05 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Rebased and fixed conflicts.\n\nFWIW re: Andrey's comment in his excellent CF summary email[0]: we're\ncurrently using vanilla Postgres (via Gentoo) on single nodes, and not\nanything fancy like Citus. The Citus relationship is just that we were\ninspired by Marco's blog post there. We have a variety of clients\nwritten in different languages that generally don't coordinate their\ntable modifications, and Marco's scheme merely requires them to use\nsequences idiomatically, which we can just about manage. :-)\n\nThis feature is then a performance optimization to support this scheme\nwhile avoiding the case where one writer holding a RowExclusiveLock\nblocks the reader from taking a ShareLock which in turn prevents other\nwriters from taking a RowExclusiveLock for a long time. Instead, the\nreader can wait for the first writer without taking any locks or\nblocking later writers. I've illustrated this difference in the\nisolation tests.\n\nStill hoping we can get this into 17. :-)\n\n[0] https://www.postgresql.org/message-id/C8D65462-0888-4484-A72C-C99A94381ECD%40yandex-team.ru",
"msg_date": "Fri, 8 Mar 2024 20:25:36 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "Rebased, fixed a couple typos, and reordered the isolation tests to\nput the most elaborate pair last.",
"msg_date": "Tue, 26 Mar 2024 22:16:47 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "I got some very helpful off-list feedback from Robert Haas that this\nneeded more self-contained explanation/motivation. So here goes. :-)\n\nThis patch set adds a new SQL function pg_wait_for_lockers(), which\nwaits for transactions holding specified table locks to commit or roll\nback. This can be useful with knowledge of the queries in those\ntransactions, particularly for asynchronous and incremental processing\nof inserted/updated rows.\n\nSpecifically, consider a scenario where INSERTs and UPDATEs always set\na serial column to its default value. A client can call\npg_sequence_last_value() + pg_wait_for_lockers() and then take a new\nDB snapshot and know that rows committed after this snapshot will have\nvalues of the serial column greater than the value from\npg_sequence_last_value(). As shown in the example at the end, this\nallows the client to asynchronously and incrementally read\ninserted/updated rows with minimal per-client state, without buffering\nchanges, and without affecting writer transactions.\n\nThere are lots of other ways to support incrementally reading new\nrows, but they don’t have all of those qualities. For example:\n\n* Forcing writers to commit in a specific order (e.g. by serial column\nvalue) would reduce throughput\n\n* Explicitly tracking or coordinating with writers would likely be\nmore complex, impact performance, and/or require much more state\n\n* Methods that are synchronous or buffer/queue changes are problematic\nif readers fall behind\n\nExisting ways to wait for table locks also have downsides:\n\n* Taking a conflicting lock with LOCK blocks new transactions from\ntaking the lock of interest while LOCK waits. And in order to wait for\nwriters holding RowExclusiveLock, we must take ShareLock, which also\nconflicts with ShareUpdateExclusiveLock and therefore unnecessarily\ninterferes with (auto)vacuum. Finally, with multiple tables LOCK locks\nthem one at a time, so it waits (and holds locks) longer than\nnecessary.\n\n* Using pg_locks / pg_lock_status() to identify the transactions\nholding the locks is more expensive since it also returns all other\nlocks in the DB cluster, plus there’s no efficient built-in way to\nwait for the transactions to commit or roll back.\n\nBy contrast, pg_wait_for_lockers() doesn’t block other transactions,\nwaits on multiple tables in parallel, and doesn’t spend time looking\nat irrelevant locks.\n\nThis change is split into three patches for ease of review. The first\ntwo patches modify the existing WaitForLockers() C function and other\nlocking internals to support waiting for lockers in a single lock\nmode, which allows waiting for INSERT/UPDATE without waiting for\nvacuuming. These changes could be omitted at the cost of unnecessary\nwaiting, potentially for a long time with slow vacuums. The third\npatch adds the pg_wait_for_lockers() SQL function, which just calls\nWaitForLockers().\n\nFWIW, another solution might be to directly expose the functions that\nWaitForLockers() calls, namely GetLockConflicts() (generalized to\nGetLockers() in the first patch) to identify the transactions holding\nthe locks, and VirtualXactLock() to wait for each transaction to\ncommit or roll back. That would be more complicated for the client but\ncould be more broadly useful. I could investigate that further if it\nseems preferable.\n\n\n=== Example ===\n\nAssume we have the following table:\n\nCREATE TABLE page_views (\n id bigserial,\n view_time timestamptz\n);\n\nwhich is only ever modified by (potentially concurrent) INSERT\ncommands that assign the default value to the id column. 
We can run\nthe following commands:\n\nSELECT pg_sequence_last_value('page_views_id_seq');\n\n pg_sequence_last_value\n------------------------\n 4\n\nSELECT pg_wait_for_lockers(array['page_views']::regclass[],\n'RowExclusiveLock', FALSE);\n\nNow we know that all rows where id <= 4 have been committed or rolled\nback, and we can observe/process them:\n\nSELECT * FROM page_views WHERE id <= 4;\n\n id | view_time\n----+-------------------------------\n 2 | 2024-01-01 12:34:01.000000-00\n 3 | 2024-01-01 12:34:00.000000-00\n\nLater we can iterate:\n\nSELECT pg_sequence_last_value('page_views_id_seq');\n\n pg_sequence_last_value\n------------------------\n 9\n\nSELECT pg_wait_for_lockers(array['page_views']::regclass[],\n'RowExclusiveLock', FALSE);\n\nWe already observed all the rows where id <= 4, so this time we can\nfilter them out:\n\nSELECT * FROM page_views WHERE id > 4 AND id <= 9;\n\n id | view_time\n----+-------------------------------\n 5 | 2024-01-01 12:34:05.000000-00\n 8 | 2024-01-01 12:34:04.000000-00\n 9 | 2024-01-01 12:34:07.000000-00\n\nWe can continue iterating like this to incrementally observe more\nnewly inserted rows. Note that the only state we persist across\niterations is the value returned by pg_sequence_last_value().\n\nIn this example, we processed inserted rows exactly once. Variations\nare possible for handling updates, as discussed in the original email,\nand I could explain that again better if it would be helpful. :-)",
"msg_date": "Thu, 30 May 2024 00:01:32 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
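The incremental-read loop walked through in the message above can be sketched as a small simulation (illustrative only: it does not talk to PostgreSQL, the helper name `read_new_rows()` is invented, and each "watermark" stands in for a `pg_sequence_last_value()` result taken after `pg_wait_for_lockers()` has already returned, so every row at or below it is either committed or rolled back):

```python
# Minimal in-memory sketch of the exactly-once read scheme described above.
# Assumption: by the time we read, the lockers wait has finished, so rows with
# id <= watermark are final -- committed rows are visible, rolled-back ones absent.

def read_new_rows(committed_rows, watermark, last_seen):
    """Return committed rows with last_seen < id <= watermark, plus the new cursor."""
    batch = [r for r in committed_rows if last_seen < r["id"] <= watermark]
    return sorted(batch, key=lambda r: r["id"]), watermark

# First pass: sequence last value was 4; ids 2 and 3 committed, 1 and 4 rolled back.
table = [{"id": 2}, {"id": 3}]
batch1, cursor = read_new_rows(table, watermark=4, last_seen=0)

# Second pass: sequence advanced to 9; ids 5, 8, 9 committed in the meantime.
table += [{"id": 5}, {"id": 8}, {"id": 9}]
batch2, cursor = read_new_rows(table, watermark=9, last_seen=cursor)
# Each committed row is observed in exactly one batch; the only state carried
# between passes is the cursor, mirroring the message's claim about state.
```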
{
"msg_contents": "I should add that the latest patches remove permissions checks because\npg_locks doesn't have any, and improve the commit messages. Hope I\ndidn't garble anything doing this late after the dev conference. :-)\n\nRobert asked me about other existing functions that could be\nleveraged, such as GetConflictingVirtualXIDs(), but I didn't see any\nwith close-enough semantics that handle fast-path locks as needed for\ntables/relations.\n\n\n",
"msg_date": "Thu, 30 May 2024 00:10:57 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
},
{
"msg_contents": "On Thu, May 30, 2024 at 12:01 AM Will Mortensen <will@extrahop.com> wrote:\n> FWIW, another solution might be to directly expose the functions that\n> WaitForLockers() calls, namely GetLockConflicts() (generalized to\n> GetLockers() in the first patch) to identify the transactions holding\n> the locks, and VirtualXactLock() to wait for each transaction to\n> commit or roll back. That would be more complicated for the client but\n> could be more broadly useful. I could investigate that further if it\n> seems preferable.\n\nWe will look further into this. Since the main advantage over polling\nthe existing pg_locks view would be efficiency, we will try to provide\nmore quantitative evidence/analysis of that. That will probably want\nto be a new thread and CF entry, so I'm withdrawing this one.\n\nThanks again for all the replies, and to Robert for your off-list\nfeedback and letting me bend your ear in Vancouver. :-)\n\n\n",
"msg_date": "Sun, 21 Jul 2024 23:46:51 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "Re: Exposing the lock manager's WaitForLockers() to SQL"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a PoC patch which implements distinct operation in window\naggregates (without order by and for single column aggregation, final\nversion may vary wrt these limitations). Purpose of this PoC is to get\nfeedback on the approach used and corresponding implementation, any\nnitpicking as deemed reasonable.\n\nDistinct operation is mirrored from implementation in nodeAgg. Existing\npartitioning logic determines if row is in partition and when distinct is\nrequired, all tuples for the aggregate column are stored in tuplesort. When\nfinalize_windowaggregate gets called, tuples are sorted and duplicates are\nremoved, followed by calling the transition function on each tuple.\nWhen distinct is not required, the above process is skipped and the\ntransition function gets called directly and nothing gets inserted into\ntuplesort.\nNote: For each partition, tuplesort_begin and tuplesort_end are invoked\nto rinse the tuplesort, so at any time, max tuples in tuplesort is equal to\ntuples in a particular partition.\n\nI have verified it for integer and interval column aggregates (to rule out\nobvious issues related to data types).\n\nSample cases:\n\ncreate table mytable(id int, name text);\ninsert into mytable values(1, 'A');\ninsert into mytable values(1, 'A');\ninsert into mytable values(5, 'B');\ninsert into mytable values(3, 'A');\ninsert into mytable values(1, 'A');\n\nselect avg(distinct id) over (partition by name) from mytable;\n avg\n--------------------\n 2.0000000000000000\n 2.0000000000000000\n 2.0000000000000000\n 2.0000000000000000\n 5.0000000000000000\n\nselect avg(id) over (partition by name) from mytable;\n avg\n--------------------\n 1.5000000000000000\n 1.5000000000000000\n 1.5000000000000000\n 1.5000000000000000\n 5.0000000000000000\n\nselect avg(distinct id) over () from mytable;\n avg\n--------------------\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n 3.0000000000000000\n\nselect avg(distinct id) from mytable;\n avg\n--------------------\n 3.0000000000000000\n\nThis is my first-time contribution. Please let me know if anything can be\nimproved as I'm eager to learn.\n\nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sat, 24 Dec 2022 18:22:03 +0530",
"msg_from": "Ankit Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PoC] Implementation of distinct in Window Aggregates"
},
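The semantics the PoC above targets — `avg(DISTINCT id) OVER (PARTITION BY name)` — can be sketched as a toy model (pure-Python illustration of the expected results only, not derived from the patch's C code; `avg_distinct_over_partition()` is an invented name; the patch does the per-partition deduplication with a tuplesort):

```python
# Toy model: per partition, deduplicate the aggregate input, average the
# distinct values, and give every row in the partition the same result.

def avg_distinct_over_partition(rows, part_key, val_key):
    distinct = {}
    for row in rows:  # collect the distinct aggregate inputs per partition
        distinct.setdefault(row[part_key], set()).add(row[val_key])
    avg = {k: sum(v) / len(v) for k, v in distinct.items()}
    return [avg[row[part_key]] for row in rows]

# Same data as the sample case above, sorted by partition key the way a
# WindowAgg node would see it, so the output matches the email: 2, 2, 2, 2, 5
# (partition A has distinct ids {1, 3} -> 2.0; partition B has {5} -> 5.0).
data = [(1, "A"), (1, "A"), (5, "B"), (3, "A"), (1, "A")]
rows = [{"id": i, "name": n} for i, n in sorted(data, key=lambda t: t[1])]
out = avg_distinct_over_partition(rows, "name", "id")
```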
{
"msg_contents": "\nOn 24/12/22 18:22, Ankit Pandey wrote:\n> Hi,\n>\n> This is a PoC patch which implements distinct operation in window \n> aggregates (without order by and for single column aggregation, final \n> version may vary wrt these limitations). Purpose of this PoC is to get \n> feedback on the approach used and corresponding implementation, any \n> nitpicking as deemed reasonable.\n>\n> Distinct operation is mirrored from implementation in nodeAgg. \n> Existing partitioning logic determines if row is in partition and when \n> distinct is required, all tuples for the aggregate column are stored \n> in tuplesort. When finalize_windowaggregate gets called, tuples are \n> sorted and duplicates are removed, followed by calling the transition \n> function on each tuple.\n> When distinct is not required, the above process is skipped and the \n> transition function gets called directly and nothing gets inserted \n> into tuplesort.\n> Note: For each partition, in tuplesort_begin and tuplesort_end is \n> involved to rinse tuplesort, so at any time, max tuples in tuplesort \n> is equal to tuples in a particular partition.\n>\n> I have verified it for interger and interval column aggregates (to \n> rule out obvious issues related to data types).\n>\n> Sample cases:\n>\n> create table mytable(id int, name text);\n> insert into mytable values(1, 'A');\n> insert into mytable values(1, 'A');\n> insert into mytable values(5, 'B');\n> insert into mytable values(3, 'A');\n> insert into mytable values(1, 'A');\n>\n> select avg(distinct id) over (partition by name) from mytable;\n> avg\n> --------------------\n> 2.0000000000000000\n> 2.0000000000000000\n> 2.0000000000000000\n> 2.0000000000000000\n> 5.0000000000000000\n>\n> select avg(id) over (partition by name) from mytable;\n> avg\n> --------------------\n> 1.5000000000000000\n> 1.5000000000000000\n> 1.5000000000000000\n> 1.5000000000000000\n> 5.0000000000000000\n>\n> select avg(distinct id) over () from mytable;\n> avg\n> 
--------------------\n> 3.0000000000000000\n> 3.0000000000000000\n> 3.0000000000000000\n> 3.0000000000000000\n> 3.0000000000000000\n>\n> select avg(distinct id) from mytable;\n> avg\n> --------------------\n> 3.0000000000000000\n>\n> This is my first-time contribution. Please let me know if anything can be\n> improved as I`m eager to learn.\n>\n> Regards,\n> Ankit Kumar Pandey\n\nHi all,\n\nI know everyone is busy with holidays (well, Happy Holidays!) but I will \nbe glad if someone can take a quick look at this PoC and share thoughts.\n\nThis is my first time contribution so I am pretty sure there will be \nsome very obvious feedbacks (which will help me to move forward with \nthis change).\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Thu, 29 Dec 2022 20:58:58 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
},
{
"msg_contents": "On 29/12/22 20:58, Ankit Kumar Pandey wrote:\n>\n> On 24/12/22 18:22, Ankit Pandey wrote:\n>> Hi,\n>>\n>> This is a PoC patch which implements distinct operation in window \n>> aggregates (without order by and for single column aggregation, final \n>> version may vary wrt these limitations). Purpose of this PoC is to \n>> get feedback on the approach used and corresponding implementation, \n>> any nitpicking as deemed reasonable.\n>>\n>> Distinct operation is mirrored from implementation in nodeAgg. \n>> Existing partitioning logic determines if row is in partition and \n>> when distinct is required, all tuples for the aggregate column are \n>> stored in tuplesort. When finalize_windowaggregate gets called, \n>> tuples are sorted and duplicates are removed, followed by calling the \n>> transition function on each tuple.\n>> When distinct is not required, the above process is skipped and the \n>> transition function gets called directly and nothing gets inserted \n>> into tuplesort.\n>> Note: For each partition, in tuplesort_begin and tuplesort_end is \n>> involved to rinse tuplesort, so at any time, max tuples in tuplesort \n>> is equal to tuples in a particular partition.\n>>\n>> I have verified it for interger and interval column aggregates (to \n>> rule out obvious issues related to data types).\n>>\n>> Sample cases:\n>>\n>> create table mytable(id int, name text);\n>> insert into mytable values(1, 'A');\n>> insert into mytable values(1, 'A');\n>> insert into mytable values(5, 'B');\n>> insert into mytable values(3, 'A');\n>> insert into mytable values(1, 'A');\n>>\n>> select avg(distinct id) over (partition by name) from mytable;\n>> avg\n>> --------------------\n>> 2.0000000000000000\n>> 2.0000000000000000\n>> 2.0000000000000000\n>> 2.0000000000000000\n>> 5.0000000000000000\n>>\n>> select avg(id) over (partition by name) from mytable;\n>> avg\n>> --------------------\n>> 1.5000000000000000\n>> 1.5000000000000000\n>> 1.5000000000000000\n>> 
1.5000000000000000\n>> 5.0000000000000000\n>>\n>> select avg(distinct id) over () from mytable;\n>> avg\n>> --------------------\n>> 3.0000000000000000\n>> 3.0000000000000000\n>> 3.0000000000000000\n>> 3.0000000000000000\n>> 3.0000000000000000\n>>\n>> select avg(distinct id) from mytable;\n>> avg\n>> --------------------\n>> 3.0000000000000000\n>>\n>> This is my first-time contribution. Please let me know if anything \n>> can be\n>> improved as I`m eager to learn.\n>>\n>> Regards,\n>> Ankit Kumar Pandey\n>\n> Hi all,\n>\n> I know everyone is busy with holidays (well, Happy Holidays!) but I \n> will be glad if someone can take a quick look at this PoC and share \n> thoughts.\n>\n> This is my first time contribution so I am pretty sure there will be \n> some very obvious feedbacks (which will help me to move forward with \n> this change).\n>\n>\nUpdated patch with latest master. Last patch was an year old.\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Wed, 4 Jan 2023 18:10:32 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
},
{
"msg_contents": "On 04/01/23 18:10, Ankit Kumar Pandey wrote:\n> On 29/12/22 20:58, Ankit Kumar Pandey wrote:\n> >\n> > On 24/12/22 18:22, Ankit Pandey wrote:\n> >> Hi,\n> >>\n> >> This is a PoC patch which implements distinct operation in window \n> >> aggregates (without order by and for single column aggregation, final \n> >> version may vary wrt these limitations). Purpose of this PoC is to \n> >> get feedback on the approach used and corresponding implementation, \n> >> any nitpicking as deemed reasonable.\n> >>\n> >> Distinct operation is mirrored from implementation in nodeAgg. \n> >> Existing partitioning logic determines if row is in partition and \n> >> when distinct is required, all tuples for the aggregate column are \n> >> stored in tuplesort. When finalize_windowaggregate gets called, \n> >> tuples are sorted and duplicates are removed, followed by calling the \n> >> transition function on each tuple.\n> >> When distinct is not required, the above process is skipped and the \n> >> transition function gets called directly and nothing gets inserted \n> >> into tuplesort.\n> >> Note: For each partition, in tuplesort_begin and tuplesort_end is \n> >> involved to rinse tuplesort, so at any time, max tuples in tuplesort \n> >> is equal to tuples in a particular partition.\n> >>\n> >> I have verified it for interger and interval column aggregates (to \n> >> rule out obvious issues related to data types).\n> >>\n> >> Sample cases:\n> >>\n> >> create table mytable(id int, name text);\n> >> insert into mytable values(1, 'A');\n> >> insert into mytable values(1, 'A');\n> >> insert into mytable values(5, 'B');\n> >> insert into mytable values(3, 'A');\n> >> insert into mytable values(1, 'A');\n> >>\n> >> select avg(distinct id) over (partition by name) from mytable;\n> >> avg\n> >> --------------------\n> >> 2.0000000000000000\n> >> 2.0000000000000000\n> >> 2.0000000000000000\n> >> 2.0000000000000000\n> >> 5.0000000000000000\n> >>\n> >> select avg(id) over 
(partition by name) from mytable;\n> >> avg\n> >> --------------------\n> >> 1.5000000000000000\n> >> 1.5000000000000000\n> >> 1.5000000000000000\n> >> 1.5000000000000000\n> >> 5.0000000000000000\n> >>\n> >> select avg(distinct id) over () from mytable;\n> >> avg\n> >> --------------------\n> >> 3.0000000000000000\n> >> 3.0000000000000000\n> >> 3.0000000000000000\n> >> 3.0000000000000000\n> >> 3.0000000000000000\n> >>\n> >> select avg(distinct id) from mytable;\n> >> avg\n> >> --------------------\n> >> 3.0000000000000000\n> >>\n> >> This is my first-time contribution. Please let me know if anything \n> >> can be\n> >> improved as I`m eager to learn.\n> >>\n> >> Regards,\n> >> Ankit Kumar Pandey\n> >\n> > Hi all,\n> >\n> > I know everyone is busy with holidays (well, Happy Holidays!) but I \n> > will be glad if someone can take a quick look at this PoC and share \n> > thoughts.\n> >\n> > This is my first time contribution so I am pretty sure there will be \n> > some very obvious feedbacks (which will help me to move forward with \n> > this change).\n> >\n> >\n> Updated patch with latest master. Last patch was an year old.\n>\nAttaching patch with rebase from latest HEAD\n\n\nThanks,\n\nAnkit",
"msg_date": "Sun, 12 Mar 2023 12:55:48 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
},
{
"msg_contents": "Attaching updated patch with a fix for an issue in window function.\n\nI have also fixed naming convention of patch as last patch had \nincompatible name.\n\nNote:\n\n1. Pending: Investigation of test cases failures.\n\n\nRegards,\n\nAnkit",
"msg_date": "Sun, 12 Mar 2023 13:47:50 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
},
{
"msg_contents": "On 3/12/23 09:17, Ankit Kumar Pandey wrote:\n> Attaching updated patch with a fix for an issue in window function.\n> \n> I have also fixed naming convention of patch as last patch had \n> incompatible name.\n\nHi,\n\nThis patch does not apply to master. Could you rebase it and submit it \nas one patch which applies directly to master? Maybe I am wrong but the \nlatest version looks like it only applies on top of one of your previous \npatches which makes it hard for the reviewer.\n\nAndreas\n\n\n",
"msg_date": "Tue, 11 Jul 2023 01:06:07 +0200",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
},
{
"msg_contents": "> On 11 Jul 2023, at 01:06, Andreas Karlsson <andreas@proxel.se> wrote:\n> \n> On 3/12/23 09:17, Ankit Kumar Pandey wrote:\n>> Attaching updated patch with a fix for an issue in window function.\n>> I have also fixed naming convention of patch as last patch had incompatible name.\n> \n> Hi,\n> \n> This patch does not apply to master. Could you rebase it and submit it as one patch which applies directly to master? Maybe I am wrong but the latest version looks like it only applies on top of one of your previous patches which makes it hard for the reviewer.\n\nSince no update was posted, the patch was considered a PoC and the thread has\nstalled, I will mark this returned with feedback. Please feel free to reopen a\nnew CF entry when there is a new patch available which addresses Andreas'\nfeedback on patch structure.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 3 Aug 2023 22:35:11 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: [PoC] Implementation of distinct in Window Aggregates"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile looking at one of the todo items in Window functions, namely:\n\nTeach planner to evaluate multiple windows in the optimal order \nCurrently windows are always evaluated in the query-specified order.\n\nFrom the threads, relevant points.\n\nPoint #1\n\n> In the above query Oracle 10g performs 2 sorts, DB2 and Sybase perform 3\n> sorts. We also perform 3.\nand Point #2\n> Teach planner to decide which window to evaluate first based on costs.\n> Currently the first window in the query is evaluated first, there may \n> be no\n> index to help sort the first window, but perhaps there are for other \n> windows\n> in the query. This may allow an index scan instead of a seqscan -> sort.\nRepro:\n\nselect pg_catalog.version();\n\n version \n----------------------------------------------------------------------------------------------------\n PostgreSQL 16devel on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 12.2.0-3ubuntu1) 12.2.0, 64-bit\n(1 row)\n\ncreate table empsalary(depname text, empno int, salary int);\ninsert into empsalary select substr(md5(random()::text), 0, 25), \ngenerate_series(1,10), generate_series(10000,12000);\n\nexplain SELECT depname, SUM(salary) OVER (ORDER BY salary), SUM(salary) \nOVER (ORDER BY empno) FROM empsalary ORDER BY salary;\n\n QUERY PLAN \n--------------------------------------------------------------------------------------\n WindowAgg (cost=289.47..324.48 rows=2001 width=49)\n -> Sort (cost=289.47..294.47 rows=2001 width=41)\n Sort Key: salary\n -> WindowAgg (cost=144.73..179.75 rows=2001 width=41)\n -> Sort (cost=144.73..149.73 rows=2001 width=33)\n Sort Key: empno\n -> Seq Scan on empsalary (cost=0.00..35.01 rows=2001 width=33)\n(7 rows)\n\nAs can be seen, for case #1 the issue looks resolved and only 2 sorts \nare performed.\n\nFor #2, the index column ordering is changed.\n\ncreate index idx_emp on empsalary (empno);\nexplain SELECT depname, SUM(salary) OVER (ORDER BY salary), SUM(salary) \nOVER (ORDER BY empno) FROM empsalary ORDER BY salary;\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n WindowAgg (cost=204.03..239.04 rows=2001 width=49)\n -> Sort (cost=204.03..209.03 rows=2001 width=41)\n Sort Key: salary\n -> WindowAgg (cost=0.28..94.31 rows=2001 width=41)\n -> Index Scan using idx_emp on empsalary (cost=0.28..64.29 rows=2001 width=33)\n(5 rows)\n\nexplain SELECT depname, SUM(salary) OVER (ORDER BY empno), SUM(salary) \nOVER (ORDER BY salary) FROM empsalary ORDER BY salary;\n QUERY PLAN \n------------------------------------------------------------------------------------------------\n WindowAgg (cost=204.03..239.04 rows=2001 width=49)\n -> Sort (cost=204.03..209.03 rows=2001 width=41)\n Sort Key: salary\n -> WindowAgg (cost=0.28..94.31 rows=2001 width=41)\n -> Index Scan using idx_emp on empsalary (cost=0.28..64.29 rows=2001 width=33)\n(5 rows)\n\nIn both cases, an index scan is performed, which means this issue is \nresolved as well.\n\nIs this todo still relevant?\n\nFurther down the threads:\n\n> I do think the patch has probably left some low-hanging fruit on the\n> simpler end of the difficulty spectrum, namely when the window stuff\n> requires only one ordering that could be done either explicitly or\n> by an indexscan. That choice should ideally be done with a proper\n> cost comparison taking any LIMIT into account. I think right now\n> the LIMIT might not be accounted for, or might be considered even\n> when it shouldn't be because another sort is needed anyway.\n\nI am not sure if I understand this fully, but does it mean the proposed todo \n(to use indexes) should be refined to\n\nteach the planner to take the cost model into account while deciding whether \nto use an index in window functions? 
Meaning not always go with the index route \n(modify point #2)?\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 25 Dec 2022 18:34:38 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Todo: Teach planner to evaluate multiple windows in the optimal order"
},
{
"msg_contents": "On Mon, 26 Dec 2022 at 02:04, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Point #1\n>\n> In the above query Oracle 10g performs 2 sorts, DB2 and Sybase perform 3\n> sorts. We also perform 3.\n\nThis shouldn't be too hard to do. See the code in\nselect_active_windows(). You'll likely want to pay attention to the\nDISTINCT pathkeys if they exist and just use the ORDER BY pathkeys if\nthe query has no DISTINCT clause. DISTINCT is evaluated after Window\nand before ORDER BY.\n\nOne idea to implement this would be to adjust the loop in\nselect_active_windows() so that we record any WindowClauses which have\nthe pathkeys contained in the ORDER BY / DISTINCT pathkeys then record\nthose separately and append those onto the end of the actives array\nafter the sort.\n\nI do think you'll likely want to put any WindowClauses which have\npathkeys which are a true subset or true superset of the ORDER BY /\nDISTINCT pathkeys last. If they're a superset then we won't need to\nperform any additional ordering for the DISTINCT / ORDER BY clause.\nIf they're a subset then we might be able to perform an Incremental\nSort, which is likely much cheaper than a full sort. The existing\ncode should handle that part. You just need to make\nselect_active_windows() more intelligent.\n\nYou might also think that we could perform additional optimisations\nand also adjust the ORDER BY clause of a WindowClause if it contains\nthe pathkeys of the DISTINCT / ORDER BY clause. 
For example:\n\nSELECT *,row_number() over (order by a,b) from tab order by a,b,c;\n\nHowever, if you were to adjust the WindowClauses ORDER BY to become\na,b,c then you could produce incorrect results for window functions\nthat change their result based on peer rows.\n\nNote the difference in results from:\n\ncreate table ab(a int, b int);\ninsert into ab select x,y from generate_series(1,5) x, generate_Series(1,5)y;\n\nselect a,b,count(*) over (order by a) from ab order by a,b;\nselect a,b,count(*) over (order by a,b) from ab order by a,b;\n\n> and Point #2\n>\n> Teach planner to decide which window to evaluate first based on costs.\n> Currently the first window in the query is evaluated first, there may be no\n> index to help sort the first window, but perhaps there are for other windows\n> in the query. This may allow an index scan instead of a seqscan -> sort.\n\nWhat Tom wrote about that in the first paragraph of [1] still applies.\nThe problem is that if the query contains many joins that to properly\nfind the cheapest way of executing the query we'd have to perform the\njoin search once for each unique sort order of each WindowClause.\nThat's just not practical to do from a performance standpoint. The\njoin search can be very expensive. There may be something that could\nbe done to better determine the most likely candidate for the first\nWindowClause using some heuristics, but I've no idea what those would\nbe. You should look into point #1 first. Point #2 is significantly\nmore difficult to solve in a way that would be acceptable to the\nproject.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/11535.1230501658%40sss.pgh.pa.us\n\n\n",
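To see concretely why folding the query's ORDER BY columns into a WindowClause's ORDER BY changes results, here is a minimal runnable sketch of the peer-row effect described above. It uses Python's bundled sqlite3 purely as a stand-in engine (an assumption: SQLite 3.25+ with standard window-function semantics, including the default RANGE frame that includes peer rows; PostgreSQL behaves the same way for this query):

```python
import sqlite3

# Build the same 5x5 test data as in the message (generate_series is not
# built into SQLite, so the rows are generated in Python instead).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ab(a int, b int)")
conn.executemany("INSERT INTO ab VALUES (?, ?)",
                 [(x, y) for x in range(1, 6) for y in range(1, 6)])

# count(*) with ORDER BY a: all rows sharing the same a are peers, so the
# running count jumps in steps of 5.
by_a = conn.execute(
    "SELECT a, b, count(*) OVER (ORDER BY a) FROM ab ORDER BY a, b"
).fetchall()

# count(*) with ORDER BY a, b: peers are (a, b) pairs, so the count rises
# one row at a time.
by_ab = conn.execute(
    "SELECT a, b, count(*) OVER (ORDER BY a, b) FROM ab ORDER BY a, b"
).fetchall()

print(by_a[0], by_ab[0])   # first row differs: (1, 1, 5) vs (1, 1, 1)
```

The two result sets diverge on almost every row, which is why the planner cannot silently make a WindowClause's ORDER BY stricter.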
"msg_date": "Tue, 3 Jan 2023 15:51:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 03/01/23 08:21, David Rowley wrote:\n> On Mon, 26 Dec 2022 at 02:04, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> Point #1\n>>\n>> In the above query Oracle 10g performs 2 sorts, DB2 and Sybase perform 3\n>> sorts. We also perform 3.\n> This shouldn't be too hard to do. See the code in\n> select_active_windows(). You'll likely want to pay attention to the\n> DISTINCT pathkeys if they exist and just use the ORDER BY pathkeys if\n> the query has no DISTINCT clause. DISTINCT is evaluated after Window\n> and before ORDER BY.\n>\n> One idea to implement this would be to adjust the loop in\n> select_active_windows() so that we record any WindowClauses which have\n> the pathkeys contained in the ORDER BY / DISTINCT pathkeys then record\n> those separately and append those onto the end of the actives array\n> after the sort.\n>\n> I do think you'll likely want to put any WindowClauses which have\n> pathkeys which are a true subset or true superset of the ORDER BY /\n> DISTINCT pathkeys last. If they're a superset then we won't need to\n> perform any additional ordering for the DISTINCT / ORDER BY clause.\n> If they're a subset then we might be able to perform an Incremental\n> Sort, which is likely much cheaper than a full sort. The existing\n> code should handle that part. You just need to make\n> select_active_windows() more intelligent.\n>\n> You might also think that we could perform additional optimisations\n> and also adjust the ORDER BY clause of a WindowClause if it contains\n> the pathkeys of the DISTINCT / ORDER BY clause. 
For example:\n>\n> SELECT *,row_number() over (order by a,b) from tab order by a,b,c;\n>\n> However, if you were to adjust the WindowClauses ORDER BY to become\n> a,b,c then you could produce incorrect results for window functions\n> that change their result based on peer rows.\n>\n> Note the difference in results from:\n>\n> create table ab(a int, b int);\n> insert into ab select x,y from generate_series(1,5) x, generate_Series(1,5)y;\n>\n> select a,b,count(*) over (order by a) from ab order by a,b;\n> select a,b,count(*) over (order by a,b) from ab order by a,b;\n>\nThanks, let me try this.\n\n\n>> and Point #2\n>>\n>> Teach planner to decide which window to evaluate first based on costs.\n>> Currently the first window in the query is evaluated first, there may be no\n>> index to help sort the first window, but perhaps there are for other windows\n>> in the query. This may allow an index scan instead of a seqscan -> sort.\n> What Tom wrote about that in the first paragraph of [1] still applies.\n> The problem is that if the query contains many joins that to properly\n> find the cheapest way of executing the query we'd have to perform the\n> join search once for each unique sort order of each WindowClause.\n> That's just not practical to do from a performance standpoint. The\n> join search can be very expensive. There may be something that could\n> be done to better determine the most likely candidate for the first\n> WindowClause using some heuristics, but I've no idea what those would\n> be. You should look into point #1 first. Point #2 is significantly\n> more difficult to solve in a way that would be acceptable to the\n> project.\n>\nOkay, leaving this out for now.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:39:41 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Hi,\n\nOn 03/01/23 08:21, David Rowley wrote:\n>\n> I do think you'll likely want to put any WindowClauses which have\n> pathkeys which are a true subset or true superset of the ORDER BY /\n> DISTINCT pathkeys last. If they're a superset then we won't need to\n> perform any additional ordering for the DISTINCT / ORDER BY clause.\n> If they're a subset then we might be able to perform an Incremental\n> Sort, which is likely much cheaper than a full sort. The existing\n> code should handle that part. You just need to make\n> select_active_windows() more intelligent.\n\nI think current implementation does exactly this.\n\n#1. If order by clause in the window function is subset of order by in query\n\ncreate table abcd(a int, b int, c int, d int);\ninsert into abcd select x,y,z,c from generate_series(1,5) x, generate_Series(1,5)y, generate_Series(1,5) z, generate_Series(1,5) c;\nexplain analyze select a,row_number() over (order by b),count(*) over (order by a,b) from abcd order by a,b,c;\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------\n--------\n Incremental Sort (cost=80.32..114.56 rows=625 width=28) (actual time=1.440..3.311 rows=625 loops=1)\n Sort Key: a, b, c\n Presorted Key: a, b\n Full-sort Groups: 13 Sort Method: quicksort Average Memory: 28kB Peak Memory: 28kB\n -> WindowAgg (cost=79.24..91.74 rows=625 width=28) (actual time=1.272..2.567 rows=625 loops=1)\n -> Sort (cost=79.24..80.80 rows=625 width=20) (actual time=1.233..1.296 rows=625 loops=1)\n Sort Key: a, b\n Sort Method: quicksort Memory: 64kB\n -> WindowAgg (cost=39.27..50.21 rows=625 width=20) (actual time=0.304..0.786 rows=625 loops=1)\n -> Sort (cost=39.27..40.84 rows=625 width=12) (actual time=0.300..0.354 rows=625 loops=1)\n Sort Key: b\n Sort Method: quicksort Memory: 54kB\n -> Seq Scan on abcd (cost=0.00..10.25 rows=625 width=12) (actual time=0.021..0.161 rows=625 l\noops=1)\n Planning 
Time: 0.068 ms\n Execution Time: 3.509 ms\n(15 rows)\n\nHere, as the window function (row count) has two cols a, b for its order by, \nan incremental sort is performed for the remaining col in the query,\n\nwhich makes sense.\n\n\n#2. If the order by clause in the window function is a superset of the \norder by in the query\n\nexplain analyze select a,row_number() over (order by a,b,c),count(*) over (order by a,b) from abcd order by a;\n\n                                                       QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n WindowAgg  (cost=39.27..64.27 rows=625 width=28) (actual time=1.089..3.020 rows=625 loops=1)\n   ->  WindowAgg  (cost=39.27..53.34 rows=625 width=20) (actual time=1.024..1.635 rows=625 loops=1)\n         ->  Sort  (cost=39.27..40.84 rows=625 width=12) (actual time=1.019..1.084 rows=625 loops=1)\n               Sort Key: a, b, c\n               Sort Method: quicksort  Memory: 54kB\n               ->  Seq Scan on abcd  (cost=0.00..10.25 rows=625 width=12) (actual time=0.023..0.265 rows=625 loops=1)\n Planning Time: 0.071 ms\n Execution Time: 3.156 ms\n(8 rows)\n\nNo additional sort needs to be performed in this case, as you noted.\n\nOn 03/01/23 08:21, David Rowley wrote:\n> If they're a superset then we won't need to perform any additional \n> ordering for the DISTINCT / ORDER BY clause.\n> If they're a subset then we might be able to perform an Incremental\n> Sort, which is likely much cheaper than a full sort.\n\nSo the question is: how is the current implementation different from the desired one?\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Tue, 3 Jan 2023 19:41:28 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 03:11, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> #2. If order by clause in the Window function is superset of order by in query\n>\n> explain analyze select a,row_number() over (order by a,b,c),count(*) over (order by a,b) from abcd order by a;\n>\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> WindowAgg (cost=39.27..64.27 rows=625 width=28) (actual time=1.089..3.020 rows=625 loops=1)\n> -> WindowAgg (cost=39.27..53.34 rows=625 width=20) (actual time=1.024..1.635 rows=625 loops=1)\n> -> Sort (cost=39.27..40.84 rows=625 width=12) (actual time=1.019..1.084 rows=625 loops=1)\n> Sort Key: a, b, c\n> Sort Method: quicksort Memory: 54kB\n> -> Seq Scan on abcd (cost=0.00..10.25 rows=625 width=12) (actual time=0.023..0.265 rows=625 loops=1)\n> Planning Time: 0.071 ms\n> Execution Time: 3.156 ms\n> (8 rows)\n>\n> No, additional sort is needed to be performed in this case, as you referred.\n\nIt looks like that works by accident. I see no mention of this either\nin the comments or in [1]. What seems to be going on is that\ncommon_prefix_cmp() is coded in such a way that the WindowClauses end\nup ordered by the highest tleSortGroupRef first, resulting in the\nlowest order tleSortGroupRefs being the last WindowAgg to be\nprocessed. We do transformSortClause() before\ntransformWindowDefinitions(), this is where the tleSortGroupRef\nindexes are assigned, so the ORDER BY clause will have a lower\ntleSortGroupRef than the WindowClauses.\n\nIf we don't have one already, then we should likely add a regression\ntest that ensures that this remains true. 
Since it does not seem to\nbe documented in the code anywhere, it seems like something that could\neasily be overlooked if we were to ever refactor that code.\n\nI just tried moving the calls to transformWindowDefinitions() so that\nthey come before transformSortClause() and our regression tests still\npass. That's not great.\n\nWith that change, the following query has an additional sort for the\nORDER BY clause which previously wasn't done.\n\nexplain select a,b,c,row_number() over (order by a) rn1, row_number()\nover(partition by b) rn2, row_number() over (order by c) from abc\norder by b;\n\nDavid\n\n[1] https://www.postgresql.org/message-id/flat/124A7F69-84CD-435B-BA0E-2695BE21E5C2%40yesql.se\n\n\n",
"msg_date": "Wed, 4 Jan 2023 17:02:15 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On 04/01/23 09:32, David Rowley wrote:\n>\n> It looks like that works by accident. I see no mention of this either\n> in the comments or in [1].\n\nThis kind of troubles me because function name \n/select_active_windows///doesn't tell me if its only job is\n\nto reorder window clauses for optimizing sort. From code, I don't see it \ndoing anything else either.\n\n\n> If we don't have one already, then we should likely add a regression\n> test that ensures that this remains true. Since it does not seem to\n> be documented in the code anywhere, it seems like something that could\n> easily be overlooked if we were to ever refactor that code.\n>\nI don't see any tests in windows specific to sorting operation (and in \nwhat order). I will add those.\n\n\nAlso, one thing, consider the following query:\n\nexplain analyze select row_number() over (order by a,b),count(*) over \n(order by a) from abcd order by a,b,c;\n\nIn this case, sorting is done on (a,b) followed by incremental sort on c \nat final stage.\n\nIf we do just one sort: a,b,c at first stage then there won't be need to \ndo another sort (incremental one).\n\n\nNow, I am not sure if which one would be faster: sorting (a,b,c) vs \nsort(a,b) + incremental sort(c)\n\nbecause even though datum sort is fast, there can be n number of combos \nwhere we won't be doing that.\n\nI might be looking at extreme corner cases though but still wanted to share.\n\n\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n\n\n\n\n\nOn 04/01/23 09:32, David Rowley wrote:\n\n\nIt looks like that works by accident. I see no mention of this either\nin the comments or in [1]. \n\n\nThis kind of troubles me because function name select_active_windows\ndoesn't tell me if its only job is\nto reorder window clauses for optimizing sort. From code, I don't\n see it doing anything else either. \n\n\n\nIf we don't have one already, then we should likely add a regression\ntest that ensures that this remains true. 
Since it does not seem to\nbe documented in the code anywhere, it seems like something that could\neasily be overlooked if we were to ever refactor that code.\n\n\n\nI don't see any tests in windows specific to sorting operation\n (and in what order). I will add those.\n\n\nAlso, one thing, consider the following query:\nexplain analyze select row_number() over (order by a,b),count(*)\n over (order by a) from abcd order by a,b,c;\n\nIn this case, sorting is done on (a,b) followed by incremental\n sort on c at final stage.\nIf we do just one sort: a,b,c at first stage then there won't be\n need to do another sort (incremental one).\n\n\n\nNow, I am not sure if which one would be faster: sorting (a,b,c) \n vs sort(a,b) + incremental sort(c)\nbecause even though datum sort is fast, there can be n number of\n combos where we won't be doing that.\nI might be looking at extreme corner cases though but still\n wanted to share.\n\n\n\n\n\n\n\n-- \nRegards,\nAnkit Kumar Pandey",
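As a quick sanity check on the equivalence being debated (this is only a correctness check, not a cost comparison), here is a toy Python model showing that sorting on (a,b) and then finishing c inside each (a,b) group, which is what an Incremental Sort does with (a,b) as the presorted prefix, yields exactly the same row order as one full sort on (a,b,c):

```python
import random
from itertools import groupby

random.seed(42)  # deterministic toy data

# Toy model of the two strategies: one full sort on (a, b, c) versus a
# sort on (a, b) followed by re-sorting each (a, b) group, where only the
# c column still varies.
rows = [(random.randrange(3), random.randrange(3), random.randrange(3))
        for _ in range(200)]

full_sort = sorted(rows)  # one pass: ORDER BY a, b, c

presorted = sorted(rows, key=lambda r: (r[0], r[1]))  # ORDER BY a, b
incremental = []
for _, group in groupby(presorted, key=lambda r: (r[0], r[1])):
    incremental.extend(sorted(group))  # only c still varies inside a group

print(incremental == full_sort)  # True
```

Which strategy is cheaper still depends on group sizes and sort costs; the model only shows the two produce identical output.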
"msg_date": "Wed, 4 Jan 2023 17:37:46 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Attaching test cases for this (+ a small change in the doc).\n\nTested this in one of my WIP branches where I had modified \nselect_active_windows, and it failed\n\nas expected.\n\nPlease let me know if something can be improved in this.\n\n\nRegards,\nAnkit Kumar Pandey",
"msg_date": "Wed, 4 Jan 2023 20:41:27 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 05/01/23 07:48, Vik Fearing wrote:\n> On 1/4/23 13:07, Ankit Kumar Pandey wrote:\n>> Also, one thing, consider the following query:\n>>\n>> explain analyze select row_number() over (order by a,b),count(*) over \n>> (order by a) from abcd order by a,b,c;\n>>\n>> In this case, sorting is done on (a,b) followed by incremental sort \n>> on c at final stage.\n>>\n>> If we do just one sort: a,b,c at first stage then there won't be need \n>> to do another sort (incremental one).\n>\n>\n> This could give incorrect results. Consider the following query:\n>\n> postgres=# select a, b, c, rank() over (order by a, b)\n> from (values (1, 2, 1), (1, 2, 2), (1, 2, 1)) as abcd (a, b, c)\n> order by a, b, c;\n>\n> a | b | c | rank\n> ---+---+---+------\n> 1 | 2 | 1 | 1\n> 1 | 2 | 1 | 1\n> 1 | 2 | 2 | 1\n> (3 rows)\n>\n>\n> If you change the window's ordering like you suggest, you get this \n> different result:\n>\n>\n> postgres=# select a, b, c, rank() over (order by a, b, c)\n> from (values (1, 2, 1), (1, 2, 2), (1, 2, 1)) as abcd (a, b, c)\n> order by a, b, c;\n>\n> a | b | c | rank\n> ---+---+---+------\n> 1 | 2 | 1 | 1\n> 1 | 2 | 1 | 1\n> 1 | 2 | 2 | 3\n> (3 rows)\n>\n>\nWe are already doing something like I mentioned.\n\nConsider this example:\n\nexplain SELECT rank() OVER (ORDER BY a), count(*) OVER (ORDER BY a,b) \nFROM abcd;\n QUERY PLAN\n--------------------------------------------------------------------------\n WindowAgg (cost=83.80..127.55 rows=1250 width=24)\n -> WindowAgg (cost=83.80..108.80 rows=1250 width=16)\n -> Sort (cost=83.80..86.92 rows=1250 width=8)\n Sort Key: a, b\n -> Seq Scan on abcd (cost=0.00..19.50 rows=1250 width=8)\n(5 rows)\n\n\nIf it is okay to do extra sort for first window function (rank) here, \nwhy would it be\n\nany different in case which I mentioned?\n\nMy suggestion rest on assumption that for a window function, say\n\nrank() OVER (ORDER BY a), ordering of columns (other than column 'a') \nshouldn't matter.\n\n\n-- 
\nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Wed, 4 Jan 2023 21:00:50 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On 1/4/23 13:07, Ankit Kumar Pandey wrote:\n> Also, one thing, consider the following query:\n> \n> explain analyze select row_number() over (order by a,b),count(*) over \n> (order by a) from abcd order by a,b,c;\n> \n> In this case, sorting is done on (a,b) followed by incremental sort on c \n> at final stage.\n> \n> If we do just one sort: a,b,c at first stage then there won't be need to \n> do another sort (incremental one).\n\n\nThis could give incorrect results. Consider the following query:\n\npostgres=# select a, b, c, rank() over (order by a, b)\nfrom (values (1, 2, 1), (1, 2, 2), (1, 2, 1)) as abcd (a, b, c)\norder by a, b, c;\n\n a | b | c | rank\n---+---+---+------\n 1 | 2 | 1 | 1\n 1 | 2 | 1 | 1\n 1 | 2 | 2 | 1\n(3 rows)\n\n\nIf you change the window's ordering like you suggest, you get this \ndifferent result:\n\n\npostgres=# select a, b, c, rank() over (order by a, b, c)\nfrom (values (1, 2, 1), (1, 2, 2), (1, 2, 1)) as abcd (a, b, c)\norder by a, b, c;\n\n a | b | c | rank\n---+---+---+------\n 1 | 2 | 1 | 1\n 1 | 2 | 1 | 1\n 1 | 2 | 2 | 3\n(3 rows)\n\n\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Thu, 5 Jan 2023 03:18:24 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 15:18, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 1/4/23 13:07, Ankit Kumar Pandey wrote:\n> > Also, one thing, consider the following query:\n> >\n> > explain analyze select row_number() over (order by a,b),count(*) over\n> > (order by a) from abcd order by a,b,c;\n> >\n> > In this case, sorting is done on (a,b) followed by incremental sort on c\n> > at final stage.\n> >\n> > If we do just one sort: a,b,c at first stage then there won't be need to\n> > do another sort (incremental one).\n>\n>\n> This could give incorrect results.\n\nYeah, this seems to be what I warned against in [1].\n\nIf we wanted to make that work we'd need to do it without adjusting\nthe WindowClause's orderClause so that the peer row checks still\nworked correctly in nodeWindowAgg.c.\n\nAdditionally, it's also not that clear to me that sorting by more\ncolumns in the sort below the WindowAgg would always be a win over\ndoing the final sort for the ORDER BY. What if the WHERE clause (that\ncould not be pushed down before a join) filtered out the vast majority\nof the rows before the ORDER BY. It might be cheaper to do the sort\nthen than to sort by the additional columns earlier.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvp=r1LnEKCmWCYaruMPL-jP4j_sdc8yeFYwaDT1ac5GsQ@mail.gmail.com\n\n\n",
"msg_date": "Thu, 5 Jan 2023 15:30:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> On 1/4/23 13:07, Ankit Kumar Pandey wrote:\n>> Also, one thing, consider the following query:\n>> explain analyze select row_number() over (order by a,b),count(*) over \n>> (order by a) from abcd order by a,b,c;\n>> In this case, sorting is done on (a,b) followed by incremental sort on c \n>> at final stage.\n>> If we do just one sort: a,b,c at first stage then there won't be need to \n>> do another sort (incremental one).\n\n> This could give incorrect results.\n\nMmmm ... your counterexample doesn't really prove that. Yes,\nthe \"rank()\" step must consider only two ORDER BY columns while\ndeciding which rows are peers, but I don't see why it wouldn't\nbe okay if the rows happened to already be sorted by \"c\" within\nthose peer groups.\n\nI don't recall the implementation details well enough to be sure\nhow hard it would be to keep that straight.\n\n\t\t\tregards, tom lane\n\n\n",
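Tom's observation can be checked mechanically: rank() decides peers from (a, b) only, so presenting the rows already sorted by c within each peer group cannot change any rank value. A sketch over Vik's data, again using Python's sqlite3 as a stand-in for an engine with standard window-function semantics (an assumption; this is not a PostgreSQL test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE abcd(a int, b int, c int)")
conn.executemany("INSERT INTO abcd VALUES (?, ?, ?)",
                 [(1, 2, 1), (1, 2, 2), (1, 2, 1)])

# Rows come out sorted by (a, b, c), yet rank() still treats all three
# rows as peers, because only (a, b) is consulted for peer grouping.
rows = conn.execute(
    "SELECT a, b, c, rank() OVER (ORDER BY a, b) "
    "FROM abcd ORDER BY a, b, c"
).fetchall()
print(rows)  # [(1, 2, 1, 1), (1, 2, 1, 1), (1, 2, 2, 1)] - every rank is 1
```

So a stricter physical sort order below the WindowAgg is harmless as long as the peer-group comparison itself keeps using only the WindowClause's own ORDER BY columns.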
"msg_date": "Wed, 04 Jan 2023 21:48:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Additionally, it's also not that clear to me that sorting by more\n> columns in the sort below the WindowAgg would always be a win over\n> doing the final sort for the ORDER BY. What if the WHERE clause (that\n> could not be pushed down before a join) filtered out the vast majority\n> of the rows before the ORDER BY. It might be cheaper to do the sort\n> then than to sort by the additional columns earlier.\n\nThat's certainly a legitimate question to ask, but I don't quite see\nwhere you figure we'd be sorting more rows? WHERE filtering happens\nbefore window functions, which never eliminate any rows. So it seems\nlike a sort just before the window functions must sort the same number\nof rows as one just after them.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Jan 2023 22:12:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 16:12, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Additionally, it's also not that clear to me that sorting by more\n> > columns in the sort below the WindowAgg would always be a win over\n> > doing the final sort for the ORDER BY. What if the WHERE clause (that\n> > could not be pushed down before a join) filtered out the vast majority\n> > of the rows before the ORDER BY. It might be cheaper to do the sort\n> > then than to sort by the additional columns earlier.\n>\n> That's certainly a legitimate question to ask, but I don't quite see\n> where you figure we'd be sorting more rows? WHERE filtering happens\n> before window functions, which never eliminate any rows. So it seems\n> like a sort just before the window functions must sort the same number\n> of rows as one just after them.\n\nYeah, I didn't think the WHERE clause thing out carefully enough. I\nthink it's only the WindowClause's runCondition that could possibly\nfilter any rows between the Sort below the WindowAgg and before the\nORDER BY is evaluated.\n\nDavid\n\n\n",
"msg_date": "Thu, 5 Jan 2023 20:14:49 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 19:14, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> We are already doing something like I mentioned.\n>\n> Consider this example:\n>\n> explain SELECT rank() OVER (ORDER BY a), count(*) OVER (ORDER BY a,b)\n> FROM abcd;\n> QUERY PLAN\n> --------------------------------------------------------------------------\n> WindowAgg (cost=83.80..127.55 rows=1250 width=24)\n> -> WindowAgg (cost=83.80..108.80 rows=1250 width=16)\n> -> Sort (cost=83.80..86.92 rows=1250 width=8)\n> Sort Key: a, b\n> -> Seq Scan on abcd (cost=0.00..19.50 rows=1250 width=8)\n> (5 rows)\n>\n>\n> If it is okay to do extra sort for first window function (rank) here,\n> why would it be\n>\n> any different in case which I mentioned?\n\nWe *can* reuse Sorts where a more strict or equivalent sort order is\navailable. The question is how do we get the final WindowClause to do\nsomething slightly more strict to save having to do anything for the\nORDER BY. One way you might think would be to adjust the\nWindowClause's orderClause to add the additional clauses, but that\ncannot be done because that would cause are_peers() in nodeWindowAgg.c\nto not count some rows as peers when they maybe should be given a less\nstrict orderClause in the WindowClause.\n\nIt might be possible to adjust create_one_window_path() so that when\nprocessing the final WindowClause that it looks at the DISTINCT or\nORDER BY clause to see if we can sort on a few extra columns to save\nhaving to do any further sorting. We just *cannot* make any\nadjustments to the WindowClause's orderClause.\n\nDavid\n\n\n",
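The reuse rule described here can be modeled very simply: a sort order satisfies a required ordering whenever the required keys form a leading prefix of the available keys. The following is a hypothetical sketch (the function name and representation are invented for illustration; PostgreSQL's actual pathkey-matching code is considerably more involved):

```python
def sort_satisfies(available_keys, required_keys):
    # A stream sorted on available_keys also satisfies required_keys when
    # required_keys is a leading prefix of it, e.g. a sort on (a, b, c)
    # can be reused for a required ordering of (a, b).
    return list(available_keys[:len(required_keys)]) == list(required_keys)

print(sort_satisfies(['a', 'b', 'c'], ['a', 'b']))  # True: stricter sort reused
print(sort_satisfies(['a', 'b'], ['a', 'c']))       # False: a fresh sort is needed
```

This is the one-directional relationship in play: a stricter sort below the WindowAgg can serve a weaker requirement above it, but the WindowClause's own orderClause must stay untouched.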
"msg_date": "Thu, 5 Jan 2023 20:23:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 05/01/23 12:53, David Rowley wrote:\n>\n> We *can* reuse Sorts where a more strict or equivalent sort order is\n> available. The question is how do we get the final WindowClause to do\n> something slightly more strict to save having to do anything for the\n> ORDER BY. One way you might think would be to adjust the\n> WindowClause's orderClause to add the additional clauses, but that\n> cannot be done because that would cause are_peers() in nodeWindowAgg.c\n> to not count some rows as peers when they maybe should be given a less\n> strict orderClause in the WindowClause.\nOkay, now I see issue in my approach.\n> It might be possible to adjust create_one_window_path() so that when\n> processing the final WindowClause that it looks at the DISTINCT or\n> ORDER BY clause to see if we can sort on a few extra columns to save\n> having to do any further sorting. We just *cannot* make any\n> adjustments to the WindowClause's orderClause.\n>\nThis is much better solution. I will check\n\ncreate_one_window_path for the same.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Thu, 5 Jan 2023 13:16:15 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Sorry if multiple mails has been sent for this.\n\n\n> On 05/01/23 12:53, David Rowley wrote:\n>>\n>> We *can* reuse Sorts where a more strict or equivalent sort order is\n>> available. The question is how do we get the final WindowClause to do\n>> something slightly more strict to save having to do anything for the\n>> ORDER BY. One way you might think would be to adjust the\n>> WindowClause's orderClause to add the additional clauses, but that\n>> cannot be done because that would cause are_peers() in nodeWindowAgg.c\n>> to not count some rows as peers when they maybe should be given a less\n>> strict orderClause in the WindowClause.\n\nI attempted this in attached patch.\n\n\n#1. No op case\n\n------------------------------------------In patched \nversion-----------------------------------------------\n\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a,b,c) FROM abcd order by a;\n QUERY PLAN\n------------------------------------------\n WindowAgg\n -> Sort\n Sort Key: a, b, c\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(7 rows)\n\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a,b,c) FROM abcd;\n QUERY PLAN\n------------------------------------------\n WindowAgg\n -> Sort\n Sort Key: b\n -> WindowAgg\n -> Sort\n Sort Key: a, b, c\n -> Seq Scan on abcd\n(7 rows)\n\n----------------------------------------------In \nmaster--------------------------------------------------------\n\n\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a,b,c) FROM abcd order by a;\n QUERY PLAN\n------------------------------------------\n WindowAgg\n -> Sort\n Sort Key: a, b, c\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(7 rows)\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a,b,c) FROM abcd;\n QUERY PLAN\n------------------------------------------\n WindowAgg\n -> Sort\n Sort Key: b\n -> WindowAgg\n -> Sort\n Sort Key: a, b, c\n -> Seq Scan on abcd\n(7 rows)\n\nNo change between patched version and master.\n\n\n2. In case where optimization can happen\n\n----------------------------In patched \nversion-------------------------------------------------------\n\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a) FROM abcd order by a,b;\n QUERY PLAN\n------------------------------------------\n WindowAgg\n -> Sort\n Sort Key: a, b\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(7 rows)\n\nexplain (costs off) SELECT rank() OVER (ORDER BY a), count(*) OVER \n(ORDER BY b), count(*) OVER (PARTITION BY a ORDER BY b) FROM abcd order \nby a,b,c,d;\n QUERY PLAN\n------------------------------------------------\n WindowAgg\n -> WindowAgg\n -> Sort\n Sort Key: a, b, c, d\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(8 rows)\n\n-------------------------------------------In \nmaster--------------------------------------------------------\n\nexplain (costs off) SELECT rank() OVER (ORDER BY b), count(*) OVER \n(ORDER BY a) FROM abcd order by a,b;\n QUERY PLAN\n------------------------------------------------\n Incremental Sort\n Sort Key: a, b\n Presorted Key: a\n -> WindowAgg\n -> Sort\n Sort Key: a\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(10 rows)\n\nexplain (costs off) SELECT rank() OVER (ORDER BY a), count(*) OVER \n(ORDER BY b), count(*) OVER (PARTITION BY a ORDER BY b) FROM abcd order \nby a,b,c,d;\n QUERY PLAN\n------------------------------------------------------\n Incremental Sort\n Sort Key: a, b, c, d\n Presorted Key: a, b\n -> WindowAgg\n -> WindowAgg\n -> Sort\n Sort Key: a, b\n -> WindowAgg\n -> Sort\n Sort Key: b\n -> Seq Scan on abcd\n(11 rows)\n\nPatched version removes few sorts.\n\nRegression tests all passed so it is not breaking anything existing.\n\nWe don't have any tests for verifying sorting plan in window functions \n(which would have failed, if present).\n\nPlease let me know any feedbacks (I have added some my own concerns in \nthe comments)\n\nThanks\n\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Fri, 6 Jan 2023 18:41:33 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 04:11, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n> Attaching test cases for this (+ small change in doc).\n>\n> Tested this in one of WIP branch where I had modified\n> select_active_windows and it failed\n>\n> as expected.\n>\n> Please let me know if something can be improved in this.\n\nThanks for writing that.\n\nI had a look over the patch and ended up making some adjustments to\nthe tests. Looking back at 728202b63, I think any tests we add here\nshould be kept alongside the tests added by that commit rather than\ntacked on to the end of the test file. It also makes sense to me just\nto use the same table as the original tests. I also thought the\ncomment in select_active_windows should be in the sort comparator\nfunction instead. I think that's a more likely place to capture the\nattention of anyone making modifications.\n\nI've now pushed the adjusted patch.\n\nDavid\n\n\n",
"msg_date": "Sat, 7 Jan 2023 15:29:55 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Sat, 7 Jan 2023 at 02:11, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> > On 05/01/23 12:53, David Rowley wrote:\n> >>\n> >> We *can* reuse Sorts where a more strict or equivalent sort order is\n> >> available. The question is how do we get the final WindowClause to do\n> >> something slightly more strict to save having to do anything for the\n> >> ORDER BY. One way you might think would be to adjust the\n> >> WindowClause's orderClause to add the additional clauses, but that\n> >> cannot be done because that would cause are_peers() in nodeWindowAgg.c\n> >> to not count some rows as peers when they maybe should be given a less\n> >> strict orderClause in the WindowClause.\n>\n> I attempted this in attached patch.\n\nI had a quick look at this and it's going to need some additional code\nto ensure there are no WindowFuncs in the ORDER BY clause. We can't\nsort on those before we evaluate them.\n\nRight now you get:\n\npostgres=# explain select *,row_number() over (order by oid) rn1 from\npg_class order by oid,rn1;\nERROR: could not find pathkey item to sort\n\nI also don't think there's any point in adding the additional pathkeys\nwhen the input path is already presorted. Have a look at:\n\npostgres=# set enable_seqscan=0;\nSET\npostgres=# explain select *,row_number() over (order by oid) rn1 from\npg_class order by oid,relname;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n WindowAgg (cost=0.43..85.44 rows=412 width=281)\n -> Incremental Sort (cost=0.43..79.26 rows=412 width=273)\n Sort Key: oid, relname\n Presorted Key: oid\n -> Index Scan using pg_class_oid_index on pg_class\n(cost=0.27..60.72 rows=412 width=273)\n(5 rows)\n\nIt would be better to leave this case alone and just do the\nincremental sort afterwards.\n\nYou also don't seem to be considering the fact that the query might\nhave a DISTINCT clause. That's evaluated between window functions and\nthe order by. It would be fairly useless to do a more strict sort when\nthe sort order is going to be obliterated by a Hash Aggregate. Perhaps\nwe can just not do this when the query has a DISTINCT clause.\n\nOn the other hand, there are also a few reasons why we shouldn't do\nthis. I mentioned the WindowClause runConditions earlier here.\n\nThe patched version produces:\n\npostgres=# explain (analyze, costs off) select * from (select\noid,relname,row_number() over (order by oid) rn1 from pg_class order\nby oid,relname) where rn1 < 10;\n QUERY PLAN\n------------------------------------------------------------------------------\n WindowAgg (actual time=0.488..0.497 rows=9 loops=1)\n Run Condition: (row_number() OVER (?) < 10)\n -> Sort (actual time=0.466..0.468 rows=10 loops=1)\n Sort Key: pg_class.oid, pg_class.relname\n Sort Method: quicksort Memory: 67kB\n -> Seq Scan on pg_class (actual time=0.028..0.170 rows=420 loops=1)\n Planning Time: 0.214 ms\n Execution Time: 0.581 ms\n(8 rows)\n\nWhereas master produces:\n\npostgres=# explain (analyze, costs off) select * from (select\noid,relname,row_number() over (order by oid) rn1 from pg_class order\nby oid,relname) where rn1 < 10;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Incremental Sort (actual time=0.506..0.508 rows=9 loops=1)\n Sort Key: pg_class.oid, pg_class.relname\n Presorted Key: pg_class.oid\n Full-sort Groups: 1 Sort Method: quicksort Average Memory: 26kB\nPeak Memory: 26kB\n -> WindowAgg (actual time=0.475..0.483 rows=9 loops=1)\n Run Condition: (row_number() OVER (?) < 10)\n -> Sort (actual time=0.461..0.461 rows=10 loops=1)\n Sort Key: pg_class.oid\n Sort Method: quicksort Memory: 67kB\n -> Seq Scan on pg_class (actual time=0.022..0.178\nrows=420 loops=1)\n Planning Time: 0.245 ms\n Execution Time: 0.594 ms\n(12 rows)\n\n(slightly bad example since oid is unique but...)\n\nIt's not too clear to me that the patched version is a better plan.\nThe bottom level sort, which sorts 420 rows has a more complex\ncomparison to do. Imagine the 2nd column is a long text string. That\nwould make the sort much more expensive. The top-level sort has far\nfewer rows to sort due to the runCondition filtering out anything that\ndoes not match rn1 < 10. The same can be said for a query with a LIMIT\nclause.\n\nI think the patch should also be using pathkeys_contained_in() and\nLists of pathkeys rather than concatenating lists of SortGroupClauses\ntogether. That should allow things to work correctly when a given\npathkey has become redundant due to either duplication or a Const in\nthe Eclass.\n\nAlso, since I seem to be only be able to think of these cases properly\nby actually trying them, I ended up with the attached patch. I opted\nto not do the optimisation when there are runConditions or a LIMIT\nclause. Doing it when there are runConditions really should be a\ncost-based decision, but we've about no hope of having any idea about\nhow many rows will match the runCondition. For the LIMIT case, it's\nalso difficult as it would be hard to get an idea of how many times\nthe additional sort columns would need their comparison function\ncalled. That's only required in a tie-break when the leading columns\nare all equal.\n\nThe attached patch has no tests added. It's going to need some of\nthose. These tests should go directly after the tests added in\na14a58329 and likely use the same table for consistency.\n\nDavid",
"msg_date": "Sat, 7 Jan 2023 17:28:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Thanks for looking into this.\n\nOn 07/01/23 09:58, David Rowley wrote:\n> On Sat, 7 Jan 2023 at 02:11, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>>> On 05/01/23 12:53, David Rowley wrote:\n>>>> We *can* reuse Sorts where a more strict or equivalent sort order is\n>>>> available. The question is how do we get the final WindowClause to do\n>>>> something slightly more strict to save having to do anything for the\n>>>> ORDER BY. One way you might think would be to adjust the\n>>>> WindowClause's orderClause to add the additional clauses, but that\n>>>> cannot be done because that would cause are_peers() in nodeWindowAgg.c\n>>>> to not count some rows as peers when they maybe should be given a less\n>>>> strict orderClause in the WindowClause.\n>> I attempted this in attached patch.\n> I had a quick look at this and it's going to need some additional code\n> to ensure there are no WindowFuncs in the ORDER BY clause. We can't\n> sort on those before we evaluate them.\nOkay I will add this check.\n> I also don't think there's any point in adding the additional pathkeys\n> when the input path is already presorted.\n>\n> It would be better to leave this case alone and just do the\n> incremental sort afterwards.\n\nSo this will be no operation case well.\n\n> You also don't seem to be considering the fact that the query might\n> have a DISTINCT clause.\n\nMajor reason for this was that I am not exactly aware of what distinct \nclause means (especially in\n\ncontext of window functions) and how it is different from other \nsortClauses (partition, order by, group).\n\nComments in parsenodes.h didn't help.\n\n> That's evaluated between window functions and\n> the order by. It would be fairly useless to do a more strict sort when\n> the sort order is going to be obliterated by a Hash Aggregate. Perhaps\n> we can just not do this when the query has a DISTINCT clause.\n>\n> On the other hand, there are also a few reasons why we shouldn't do\n> this. I mentioned the WindowClause runConditions earlier here.\n>\n> The patched version produces:\n>\n> postgres=# explain (analyze, costs off) select * from (select\n> oid,relname,row_number() over (order by oid) rn1 from pg_class order\n> by oid,relname) where rn1 < 10;\n> QUERY PLAN\n> ------------------------------------------------------------------------------\n> WindowAgg (actual time=0.488..0.497 rows=9 loops=1)\n> Run Condition: (row_number() OVER (?) < 10)\n> -> Sort (actual time=0.466..0.468 rows=10 loops=1)\n> Sort Key: pg_class.oid, pg_class.relname\n> Sort Method: quicksort Memory: 67kB\n> -> Seq Scan on pg_class (actual time=0.028..0.170 rows=420 loops=1)\n> Planning Time: 0.214 ms\n> Execution Time: 0.581 ms\n> (8 rows)\n>\n> Whereas master produces:\n>\n> postgres=# explain (analyze, costs off) select * from (select\n> oid,relname,row_number() over (order by oid) rn1 from pg_class order\n> by oid,relname) where rn1 < 10;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------\n> Incremental Sort (actual time=0.506..0.508 rows=9 loops=1)\n> Sort Key: pg_class.oid, pg_class.relname\n> Presorted Key: pg_class.oid\n> Full-sort Groups: 1 Sort Method: quicksort Average Memory: 26kB\n> Peak Memory: 26kB\n> -> WindowAgg (actual time=0.475..0.483 rows=9 loops=1)\n> Run Condition: (row_number() OVER (?) < 10)\n> -> Sort (actual time=0.461..0.461 rows=10 loops=1)\n> Sort Key: pg_class.oid\n> Sort Method: quicksort Memory: 67kB\n> -> Seq Scan on pg_class (actual time=0.022..0.178\n> rows=420 loops=1)\n> Planning Time: 0.245 ms\n> Execution Time: 0.594 ms\n> (12 rows)\n>\n> (slightly bad example since oid is unique but...)\n>\n> It's not too clear to me that the patched version is a better plan.\n> The bottom level sort, which sorts 420 rows has a more complex\n> comparison to do. Imagine the 2nd column is a long text string. That\n> would make the sort much more expensive. The top-level sort has far\n> fewer rows to sort due to the runCondition filtering out anything that\n> does not match rn1 < 10. The same can be said for a query with a LIMIT\n> clause.\n\nYes, this is a fair point. Multiple sort is actually beneficial in cases\n\nlike this, perhaps limits clause and runCondition should be no op too?\n\n> I think the patch should also be using pathkeys_contained_in() and\n> Lists of pathkeys rather than concatenating lists of SortGroupClauses\n> together. That should allow things to work correctly when a given\n> pathkey has become redundant due to either duplication or a Const in\n> the Eclass.\n\nMake sense, I actually duplicated that logic from\n\nmake_pathkeys_for_window. We should make this changes there as well because\n\nif we have SELECT rank() OVER (PARTITION BY a ORDER BY a)\n\n(weird example but you get the idea), it leads to duplicates in \nwindow_sortclauses.\n\n> Also, since I seem to be only be able to think of these cases properly\n> by actually trying them, I ended up with the attached patch. I opted\n> to not do the optimisation when there are runConditions or a LIMIT\n> clause. Doing it when there are runConditions really should be a\n> cost-based decision, but we've about no hope of having any idea about\n> how many rows will match the runCondition. For the LIMIT case, it's\n> also difficult as it would be hard to get an idea of how many times\n> the additional sort columns would need their comparison function\n> called. That's only required in a tie-break when the leading columns\n> are all equal.\n\nAgree with runConditions part but for limit clause, row reduction happens\n\nat the last, so whether we use patched version or master version,\n\nnone of sorts would benefit/degrade from that, right?\n\n\n> The attached patch has no tests added. It's going to need some of\n> those. These tests should go directly after the tests added in\n> a14a58329 and likely use the same table for consistency.\n>\nThanks for the patch. It looks much neater now. I will add cases for this\n\n(after a14a58329). I do have a very general question though. Is it okay\n\nto add comments in test cases? I don't see it much on existing cases\n\nso kind of reluctant to add but it makes intentions much more clear.\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 7 Jan 2023 16:40:05 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 07/01/23 07:59, David Rowley wrote:\n> On Thu, 5 Jan 2023 at 04:11, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> Attaching test cases for this (+ small change in doc).\n>>\n>> Tested this in one of WIP branch where I had modified\n>> select_active_windows and it failed\n>>\n>> as expected.\n>>\n>> Please let me know if something can be improved in this.\n> Thanks for writing that.\n>\n> I had a look over the patch and ended up making some adjustments to\n> the tests. Looking back at 728202b63, I think any tests we add here\n> should be kept alongside the tests added by that commit rather than\n> tacked on to the end of the test file. It also makes sense to me just\n> to use the same table as the original tests. I also thought the\n> comment in select_active_windows should be in the sort comparator\n> function instead. I think that's a more likely place to capture the\n> attention of anyone making modifications.\nThanks, I will look it through.\n> I've now pushed the adjusted patch.\n>\nI can't seem to find updated patch in the attachment, can you please\n\nforward the patch again.\n\nThanks.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 7 Jan 2023 16:44:40 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Your email client seems to be adding additional vertical space to your\nemails. I've removed the additional newlines in the quotes. Are you\nable to fix the client so it does not do that?\n\nOn Sun, 8 Jan 2023 at 00:10, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n> On 07/01/23 09:58, David Rowley wrote:\n> > You also don't seem to be considering the fact that the query might\n> > have a DISTINCT clause.\n>\n> Major reason for this was that I am not exactly aware of what distinct\n> clause means (especially in\n>\n> context of window functions) and how it is different from other\n> sortClauses (partition, order by, group).\n\nI'm talking about the query's DISTINCT clause. i.e SELECT DISTINCT.\nIf you look in the grouping_planner() function, you'll see that\ncreate_distinct_paths() is called between create_window_paths() and\ncreate_ordered_paths().\n\n> Yes, this is a fair point. Multiple sort is actually beneficial in cases\n> like this, perhaps limits clause and runCondition should be no op too?\n\nI'm not sure what you mean by \"no op\". Do you mean not apply the optimization?\n\n> > I think the patch should also be using pathkeys_contained_in() and\n> > Lists of pathkeys rather than concatenating lists of SortGroupClauses\n> > together. That should allow things to work correctly when a given\n> > pathkey has become redundant due to either duplication or a Const in\n> > the Eclass.\n>\n> Make sense, I actually duplicated that logic from\n> make_pathkeys_for_window. We should make this changes there as well because\n> if we have SELECT rank() OVER (PARTITION BY a ORDER BY a)\n> (weird example but you get the idea), it leads to duplicates in\n> window_sortclauses.\n\nIt won't lead to duplicate pathkeys. Look in\nmake_pathkeys_for_sortclauses() and what pathkey_is_redundant() does.\nNotice that it checks if the pathkey already exists in the list before\nappending.\n\n> Agree with runConditions part but for limit clause, row reduction happens\n> at the last, so whether we use patched version or master version,\n> none of sorts would benefit/degrade from that, right?\n\nMaybe you're right. Just be aware that a sort done for a query with an\nORDER BY LIMIT will perform a top-n sort. top-n sorts only need to\nstore the top-n tuples and that can significantly reduce the memory\nrequired for sorting perhaps resulting in the sort fitting in memory\nrather than spilling out to disk.\n\nYou might want to test this by having the leading sort column as an\nINT, and then the 2nd one as a long text column of maybe around two\nkilobytes. Make all the leading column values the same so that the\ncomparison for the text column is always performed. Make the LIMIT\nsmall compared to the total number of rows, that should test the worse\ncase. Check the performance with and without the limitCount != NULL\npart of the patch that disables the optimization for LIMIT.\n\n> Is it okay\n> to add comments in test cases? I don't see it much on existing cases\n> so kind of reluctant to add but it makes intentions much more clear.\n\nI think tests should always have a comment to state what they're\ntesting. Not many people seem to do that, unfortunately. The problem\nwith not stating what the test is testing is that if, for example, the\ntest is checking that the EXPLAIN output is showing a Sort, what if at\nsome point in the future someone adjusts some costing code and the\nplan changes to an Index Scan. If there's no comment to state that\nwe're looking for a Sort plan, then the author of the patch that's\nadjusting the costs might just think it's ok to change the expected\nplan to an Index Scan. I've seen this problem occur even when the\ncomments *do* exist. There's just about no hope of such a test\ncontinuing to do what it's meant to if the comments don't exist.\n\nDavid\n\n\n",
"msg_date": "Sun, 8 Jan 2023 00:58:03 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 07/01/23 17:28, David Rowley wrote:\n> Your email client seems to be adding additional vertical space to your\n> emails. I've removed the additional newlines in the quotes. Are you\n> able to fix the client so it does not do that?\nI have adjusted my mail client, hope it is better now?\n> On Sun, 8 Jan 2023 at 00:10, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> >\n> > On 07/01/23 09:58, David Rowley wrote:\n> > > You also don't seem to be considering the fact that the query might\n> > > have a DISTINCT clause.\n> >\n> > Major reason for this was that I am not exactly aware of what distinct\n> > clause means (especially in\n> >\n> > context of window functions) and how it is different from other\n> > sortClauses (partition, order by, group).\n>\n> I'm talking about the query's DISTINCT clause. i.e SELECT DISTINCT.\n> If you look in the grouping_planner() function, you'll see that\n> create_distinct_paths() is called between create_window_paths() and\n> create_ordered_paths().\n\nYes just saw this and got what you meant.\n\n\n> > Yes, this is a fair point. Multiple sort is actually beneficial in cases\n> > like this, perhaps limits clause and runCondition should be no op too?\n>\n> I'm not sure what you mean by \"no op\". Do you mean not apply the optimization?\nYes, no op = no optimization. Sorry I didn't mention it before.\n> > > I think the patch should also be using pathkeys_contained_in() and\n> > > Lists of pathkeys rather than concatenating lists of SortGroupClauses\n> > > together. That should allow things to work correctly when a given\n> > > pathkey has become redundant due to either duplication or a Const in\n> > > the Eclass.\n> >\n> > Make sense, I actually duplicated that logic from\n> > make_pathkeys_for_window. We should make this changes there as well because\n> > if we have SELECT rank() OVER (PARTITION BY a ORDER BY a)\n> > (weird example but you get the idea), it leads to duplicates in\n> > window_sortclauses.\n>\n> It won't lead to duplicate pathkeys. Look in\n> make_pathkeys_for_sortclauses() and what pathkey_is_redundant() does.\n> Notice that it checks if the pathkey already exists in the list before\n> appending.\n\nOkay I see this, pathkey_is_redundant is much more smarter as well.\n\nReplacing list_concat_copy with list_concat_unique in \nmake_pathkeys_for_window\n\nwon't be of much benefit.\n\n> > Agree with runConditions part but for limit clause, row reduction happens\n> > at the last, so whether we use patched version or master version,\n> > none of sorts would benefit/degrade from that, right?\n>\n> Maybe you're right. Just be aware that a sort done for a query with an\n> ORDER BY LIMIT will perform a top-n sort. top-n sorts only need to\n> store the top-n tuples and that can significantly reduce the memory\n> required for sorting perhaps resulting in the sort fitting in memory\n> rather than spilling out to disk.\n>\n> You might want to test this by having the leading sort column as an\n> INT, and then the 2nd one as a long text column of maybe around two\n> kilobytes. Make all the leading column values the same so that the\n> comparison for the text column is always performed. Make the LIMIT\n> small compared to the total number of rows, that should test the worse\n> case. Check the performance with and without the limitCount != NULL\n> part of the patch that disables the optimization for LIMIT.\n\nI checked this. For limit <<< total number of rows, top-n sort was\n\nperformed but when I changed limit to higher value (or no limit),\n\nquick sort was performed.\n\nTop-n sort was twice as fast.\n\nAlso, tested (first) patch version vs master, top-n sort\n\nwas twice as fast there as well (outputs mentioned below).\n\nCurrent patch version (with limit excluded for optimizations)\n\nexplain (analyze ,costs off) select count(*) over (order by id) from tt \norder by id, name limit 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Limit (actual time=1.718..1.719 rows=1 loops=1)\n -> Incremental Sort (actual time=1.717..1.717 rows=1 loops=1)\n Sort Key: id, name\n Presorted Key: id\n Full-sort Groups: 1 Sort Method: top-N heapsort Average \nMemory: 25kB Peak Memory: 25kB\n -> WindowAgg (actual time=0.028..0.036 rows=6 loops=1)\n -> Sort (actual time=0.017..0.018 rows=6 loops=1)\n Sort Key: id\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on tt (actual time=0.011..0.012 \nrows=6 loops=1)\n Planning Time: 0.069 ms\n Execution Time: 1.799 ms\n\nEarlier patch(which included limit clause for optimizations)\n\nexplain (analyze ,costs off) select count(*) over (order by id) from tt \norder by id, name limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------\n Limit (actual time=3.766..3.767 rows=1 loops=1)\n -> WindowAgg (actual time=3.764..3.765 rows=1 loops=1)\n -> Sort (actual time=3.749..3.750 rows=6 loops=1)\n Sort Key: id, name\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on tt (actual time=0.011..0.013 rows=6 loops=1)\n Planning Time: 0.068 ms\n Execution Time: 3.881 ms\n\nI am just wondering though, why can we not do top-N sort\n\nin optimized version if we include limit clause? Is top-N sort is \nlimited to non strict sorting or\n\ncases last operation before limit is sort? .\n\n> > Is it okay\n> > to add comments in test cases? I don't see it much on existing cases\n> > so kind of reluctant to add but it makes intentions much more clear.\n>\n> I think tests should always have a comment to state what they're\n> testing. Not many people seem to do that, unfortunately. The problem\n> with not stating what the test is testing is that if, for example, the\n> test is checking that the EXPLAIN output is showing a Sort, what if at\n> some point in the future someone adjusts some costing code and the\n> plan changes to an Index Scan. If there's no comment to state that\n> we're looking for a Sort plan, then the author of the patch that's\n> adjusting the costs might just think it's ok to change the expected\n> plan to an Index Scan. I've seen this problem occur even when the\n> comments *do* exist. There's just about no hope of such a test\n> continuing to do what it's meant to if the comments don't exist.\n\nThanks for clarifying this out, I will freely add comments if that helps\n\nto explain things better.\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 7 Jan 2023 18:21:58 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 07/01/23 09:58, David Rowley wrote:\n>\n> The attached patch has no tests added. It's going to need some of\n> those.\n\nWhile writing test cases, I found that optimization do not happen for \ncase #1\n\n(which is prime candidate for such operation) like\n\nEXPLAIN (COSTS OFF)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno, enroll_date\n\nThis happens because mutual exclusiveness of two operands (when number \nof window functions > 1) viz\n\nis_sorted and last activeWindow in the condition:\n\n( !is_sorted && lnext(activeWindows, l) == NULL)\n\nFor 2nd last window function, is_sorted is false and path keys get added.\n\nIn next run (for last window function), is_sorted becomes true and whole \noptimization\n\npart is skipped.\n\nNote: Major issue that if I remove is_sorted from condition, even though\n\npath keys are added, it still do not perform optimization and works same \nas in master/unoptimized case.\n\nPerhaps adding path keys at last window function is not doing trick? \nMaybe we need to add pathkeys\n\nto all window functions which are subset of query's order by \nirrespective of being last or not?\n\n\nCase #2:\n\nFor presorted columns, eg\n\nCREATE INDEX depname_idx ON empsalary(depname);\nSET enable_seqscan=0;\nEXPLAIN (COSTS OFF)\nSELECT empno,\n min(salary) OVER (PARTITION BY depname) depminsalary\nFROM empsalary\nORDER BY depname, empno;\n\nIs this correct plan:\n\na)\n\n QUERY PLAN\n-------------------------------------------------------\n Incremental Sort\n Sort Key: depname, empno\n Presorted Key: depname\n -> WindowAgg\n -> Index Scan using depname_idx on empsalary\n(5 rows)\n\nor this:\n\nb) (Akin to Optimized version)\n\n QUERY PLAN\n-------------------------------------------------------\n WindowAgg\n -> Incremental Sort\n Sort Key: depname, empno\n Presorted Key: depname\n -> Index Scan using depname_idx on empsalary\n(5 rows)\n\nPatched version does (a) because of is_sorted condition.\n\nIf we remove both is_sorted and lnext(activeWindows, l) == NULL conditions,\n\nwe get correct results in these two cases.\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 7 Jan 2023 21:57:07 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On 07/01/23 21:57, Ankit Kumar Pandey wrote:\n> On 07/01/23 09:58, David Rowley wrote:\n> >\n> > The attached patch has no tests added. It's going to need some of\n> > those.\n>\n> While writing test cases, I found that optimization do not happen for\n> case #1\n>\n> (which is prime candidate for such operation) like\n>\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> depname,\n> min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> sum(salary) OVER (PARTITION BY depname) depsalary\n> FROM empsalary\n> ORDER BY depname, empno, enroll_date\n>\n> This happens because mutual exclusiveness of two operands (when number\n> of window functions > 1) viz\n>\n> is_sorted and last activeWindow in the condition:\n>\n> ( !is_sorted && lnext(activeWindows, l) == NULL)\n>\n> For 2nd last window function, is_sorted is false and path keys get added.\n>\n> In next run (for last window function), is_sorted becomes true and whole\n> optimization\n>\n> part is skipped.\n>\n> Note: Major issue that if I remove is_sorted from condition, even though\n>\n> path keys are added, it still do not perform optimization and works same\n> as in master/unoptimized case.\n>\n> Perhaps adding path keys at last window function is not doing trick?\n> Maybe we need to add pathkeys\n>\n> to all window functions which are subset of query's order by\n> irrespective of being last or not?\n>\n>\n> Case #2:\n>\n> For presorted columns, eg\n>\n> CREATE INDEX depname_idx ON empsalary(depname);\n> SET enable_seqscan=0;\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> min(salary) OVER (PARTITION BY depname) depminsalary\n> FROM empsalary\n> ORDER BY depname, empno;\n>\n> Is this correct plan:\n>\n> a)\n>\n> QUERY PLAN\n> -------------------------------------------------------\n> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> WindowAgg\n> -> Index Scan using depname_idx on empsalary\n> (5 rows)\n>\n> or this:\n>\n> b) (Akin to Optimized version)\n>\n> QUERY 
PLAN\n> -------------------------------------------------------\n> WindowAgg\n> -> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> Index Scan using depname_idx on empsalary\n> (5 rows)\n>\n> Patched version does (a) because of is_sorted condition.\n>\n> If we remove both is_sorted and lnext(activeWindows, l) == NULL conditions,\n>\n> we get correct results in these two cases.\n>\n>\nAttached patch with test cases.\n\nFor case #2, test cases still uses (a) as expected output which I don't \nthink is right\n\nand we should revisit. Other than that, only failing case is due to \nissue mentioned in case #1.\n\nThanks\n\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sat, 7 Jan 2023 22:15:49 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Sun, 8 Jan 2023 at 01:52, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> I am just wondering though, why can we not do top-N sort\n> in optimized version if we include limit clause? Is top-N sort is\n> limited to non strict sorting or\n> cases last operation before limit is sort? .\n\nMaybe the sort bound can be pushed down. You'd need to adjust\nExecSetTupleBound() so that it pushes the bound through\nWindowAggState.\n\nDavid\n\n\n",
"msg_date": "Sun, 8 Jan 2023 11:05:15 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "(your email client still seems broken)\n\nOn Sun, 8 Jan 2023 at 05:27, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n>\n> While writing test cases, I found that optimization do not happen for\n> case #1\n>\n> (which is prime candidate for such operation) like\n>\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> depname,\n> min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> sum(salary) OVER (PARTITION BY depname) depsalary\n> FROM empsalary\n> ORDER BY depname, empno, enroll_date\n>\n> This happens because mutual exclusiveness of two operands (when number\n> of window functions > 1) viz\n>\n> is_sorted and last activeWindow in the condition:\n>\n> ( !is_sorted && lnext(activeWindows, l) == NULL)\n>\n> For 2nd last window function, is_sorted is false and path keys get added.\n>\n> In next run (for last window function), is_sorted becomes true and whole\n> optimization\n>\n> part is skipped.\n>\n> Note: Major issue that if I remove is_sorted from condition, even though\n>\n> path keys are added, it still do not perform optimization and works same\n> as in master/unoptimized case.\n>\n> Perhaps adding path keys at last window function is not doing trick?\n> Maybe we need to add pathkeys\n>\n> to all window functions which are subset of query's order by\n> irrespective of being last or not?\n\nYou might need to have another loop before the foreach loop that loops\nbackwards through the WindowClauses and remembers the index of the\nWindowClause which has pathkeys contained in the query's ORDER BY\npathkeys then apply the optimisation from that point in the main\nforeach loop. Also, if the condition within the foreach loop which\nchecks when we want to apply this optimisation is going to be run > 1\ntime, then you should probably have boolean variable that's set\nbefore the loop which saves if we're going to try to apply the\noptimisation. 
That'll save from having to check things like if the\nquery has a LIMIT clause multiple times.\n\n> Case #2:\n>\n> For presorted columns, eg\n>\n> CREATE INDEX depname_idx ON empsalary(depname);\n> SET enable_seqscan=0;\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> min(salary) OVER (PARTITION BY depname) depminsalary\n> FROM empsalary\n> ORDER BY depname, empno;\n>\n> Is this correct plan:\n>\n> a)\n>\n> QUERY PLAN\n> -------------------------------------------------------\n> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> WindowAgg\n> -> Index Scan using depname_idx on empsalary\n> (5 rows)\n>\n> or this:\n>\n> b) (Akin to Optimized version)\n>\n> QUERY PLAN\n> -------------------------------------------------------\n> WindowAgg\n> -> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> Index Scan using depname_idx on empsalary\n> (5 rows)\n>\n> Patched version does (a) because of is_sorted condition.\n\na) looks like the best plan to me. What's the point of pushing the\nsort below the WindowAgg in this case? The point of this optimisation\nis to reduce the number of sorts not to push them as deep into the\nplan as possible. We should only be pushing them down when it can\nreduce the number of sorts. There's no reduction in the number of\nsorts in the above plan.\n\nDavid\n\n\n",
"msg_date": "Sun, 8 Jan 2023 11:26:08 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Sun, 8 Jan 2023 at 05:45, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Attached patch with test cases.\n\nI can look at this in a bit more detail if you find a way to fix the\ncase you mentioned earlier. i.e, push the sort down to the deepest\nWindowAgg that has pathkeys contained in the query's ORDER BY\npathkeys.\n\nEXPLAIN (COSTS OFF)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno, enroll_date;\n QUERY PLAN\n-----------------------------------------------\n Incremental Sort\n Sort Key: depname, empno, enroll_date\n Presorted Key: depname, empno\n -> WindowAgg\n -> WindowAgg\n -> Sort\n Sort Key: depname, empno\n -> Seq Scan on empsalary\n(8 rows)\n\nYou'll also need to pay attention to how the has_runcondition is set.\nIf you start pushing before looking at all the WindowClauses then you\nwon't know if some later WindowClause has a runCondition. Adding an\nadditional backwards foreach loop should allow you to do all the\nrequired prechecks and find the index of the WindowClause which you\nshould start pushing from.\n\nDavid\n\n\n",
"msg_date": "Sun, 8 Jan 2023 11:36:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 08/01/23 03:56, David Rowley wrote:\n\n > (your email client still seems broken)\n\nI am looking at this again, will be changing client for here onward.\n\n> You might need to have another loop before the foreach loop that loops\n> backwards through the WindowClauses and remembers the index of the\n> WindowClause which has pathkeys contained in the query's ORDER BY\n> pathkeys then apply the optimisation from that point in the main\n> foreach loop. Also, if the condition within the foreach loop which\n> checks when we want to apply this optimisation is going to be run > 1\n> time, then you should probably have boolean variable that's set\n> before the loop which saves if we're going to try to apply the\n> optimisation. That'll save from having to check things like if the\n> query has a LIMIT clause multiple times.\n\nThanks, this should do the trick.\n\n> a) looks like the best plan to me. What's the point of pushing the\n> sort below the WindowAgg in this case? The point of this optimisation\n> is to reduce the number of sorts not to push them as deep into the\n> plan as possible. We should only be pushing them down when it can\n> reduce the number of sorts. There's no reduction in the number of\n> sorts in the above plan.\n\nYes, you are right, not in this case. 
I actually mentioned wrong case here,\n\nreal problematic case is:\n\nEXPLAIN (COSTS OFF)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno, enroll_date;\n QUERY PLAN\n-------------------------------------------------------------------\n Incremental Sort\n Sort Key: depname, empno, enroll_date\n Presorted Key: depname, empno\n -> WindowAgg\n -> WindowAgg\n -> Incremental Sort\n Sort Key: depname, empno\n Presorted Key: depname\n -> Index Scan using depname_idx on empsalary\n(9 rows)\n\nHere, it could have sorted on depname, empno, enroll_date.\n\nAgain, as I mentioned before, this is implementation issue. We shouldn't be\n\nskipping optimization if pre-sorted keys are present.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sun, 8 Jan 2023 11:48:05 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\nOn 08/01/23 04:06, David Rowley wrote:\n\n> On Sun, 8 Jan 2023 at 05:45, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> Attached patch with test cases.\n\n> I can look at this in a bit more detail if you find a way to fix the\n> case you mentioned earlier. i.e, push the sort down to the deepest\n> WindowAgg that has pathkeys contained in the query's ORDER BY\n> pathkeys.\n\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> depname,\n> min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> sum(salary) OVER (PARTITION BY depname) depsalary\n> FROM empsalary\n> ORDER BY depname, empno, enroll_date;\n> QUERY PLAN\n> -----------------------------------------------\n> Incremental Sort\n> Sort Key: depname, empno, enroll_date\n> Presorted Key: depname, empno\n> -> WindowAgg\n> -> WindowAgg\n> -> Sort\n> Sort Key: depname, empno\n> -> Seq Scan on empsalary\n> (8 rows)\n>\n> You'll also need to pay attention to how the has_runcondition is set.\n> If you start pushing before looking at all the WindowClauses then you\n> won't know if some later WindowClause has a runCondition. \n\nYes, this should be the main culprit.\n\n\n> Adding an additional backwards foreach loop should allow you to do all the\n> required prechecks and find the index of the WindowClause which you\n> should start pushing from.\n\nThis should do the trick. Thanks for headup, I will update the patch \nwith suggested\n\nchanges and required fixes.\n\n\nRegards,\n\nAnkit\n\n\n\n",
"msg_date": "Sun, 8 Jan 2023 11:59:27 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "Hi David,\n\nPlease find attached patch with addressed issues mentioned before.\n\nThings resolved:\n\n1. Correct position of window function from where order by push down can \nhappen\n\nis determined, this fixes issue mentioned in case #1.\n\n> While writing test cases, I found that optimization do not happen for\n> case #1\n>\n> (which is prime candidate for such operation) like\n>\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> depname,\n> min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> sum(salary) OVER (PARTITION BY depname) depsalary\n> FROM empsalary\n> ORDER BY depname, empno, enroll_date\n\n2. Point #2 as in above discussions\n\n> a) looks like the best plan to me. What's the point of pushing the\n> sort below the WindowAgg in this case? The point of this optimisation\n> is to reduce the number of sorts not to push them as deep into the\n> plan as possible. We should only be pushing them down when it can\n> reduce the number of sorts. There's no reduction in the number of\n> sorts in the above plan.\n\nWorks as mentioned.\n\nAll test cases (newly added and existing ones) are green.\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 8 Jan 2023 15:51:35 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Sun, 8 Jan 2023 at 23:21, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Please find attached patch with addressed issues mentioned before.\n\nHere's a quick review:\n\n1. You can use foreach_current_index(l) to obtain the index of the list element.\n\n2. I'd rather see you looping backwards over the list in the first\nloop. I think it'll be more efficient to loop backwards as you can\njust break out the loop when you see the pathkeys are not contained in\nthe order by pathkreys. When the optimisation does not apply that\nmeans you only need to look at the last item in the list. You could\nmaybe just invent foreach_reverse() for this purpose and put it in\npg_list.h. That'll save having to manually code up the loop.\n\n3. I don't think you should call the variable\nenable_order_by_pushdown. We've a bunch of planner related GUCs that\nstart with enable_. That might cause a bit of confusion. Maybe just\ntry_sort_pushdown.\n\n4. You should widen the scope of orderby_pathkeys and set it within\nthe if (enable_order_by_pushdown) scope. You can reuse this again in\nthe 2nd loop too. Just initialise it to NIL\n\n5. You don't seem to be checking all WindowClauses for a runCondtion.\nIf you do #2 above then you can start that process in the initial\nreverse loop so that you've checked them all by the time you get\naround to that WindowClause again in the 2nd loop.\n\n6. The test with \"+-- Do not perform additional sort if column is\npresorted\". I don't think \"additional\" is the correct word here. I\nthink you want to ensure that we don't push down the ORDER BY below\nthe WindowAgg for this case. There is no ability to reduce the sorts\nhere, only move them around, which we agreed we don't want to do for\nthis case.\n\nAlso, do you want to have a go at coding up the sort bound pushdown\ntoo? It'll require removing the limitCount restriction and adjusting\nExecSetTupleBound() to recurse through a WindowAggState. I think it's\npretty easy. 
You could try it then play around with it to make sure it\nworks and we get the expected performance. You'll likely want to add a\nfew more rows than the last performance test you did or run the query\nwith pgbench. Running a query once that only takes 1-2ms is likely not\na reliable way to test the performance of something.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Jan 2023 00:03:54 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On 1/8/23 11:21, Ankit Kumar Pandey wrote:\n> \n> Please find attached patch with addressed issues mentioned before.\n\n\nI am curious about this plan:\n\n+-- ORDER BY's in multiple Window functions can be combined into one\n+-- if they are subset of QUERY's ORDER BY\n+EXPLAIN (COSTS OFF)\n+SELECT empno,\n+ depname,\n+ min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n+ sum(salary) OVER (PARTITION BY depname) depsalary,\n+ count(*) OVER (ORDER BY enroll_date DESC) c\n+FROM empsalary\n+ORDER BY depname, empno, enroll_date;\n+ QUERY PLAN\n+------------------------------------------------------\n+ WindowAgg\n+ -> WindowAgg\n+ -> Sort\n+ Sort Key: depname, empno, enroll_date\n+ -> WindowAgg\n+ -> Sort\n+ Sort Key: enroll_date DESC\n+ -> Seq Scan on empsalary\n+(8 rows)\n+\n\n\nWhy aren't min() and sum() calculated on the same WindowAgg run?\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 8 Jan 2023 17:06:13 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 08/01/23 21:36, Vik Fearing wrote:\n\n> On 1/8/23 11:21, Ankit Kumar Pandey wrote:\n>> \n>> Please find attached patch with addressed issues mentioned before.\n\n\n> I am curious about this plan:\n\n> +-- ORDER BY's in multiple Window functions can be combined into one\n> +-- if they are subset of QUERY's ORDER BY\n> +EXPLAIN (COSTS OFF)\n> +SELECT empno,\n> + depname,\n> + min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> + sum(salary) OVER (PARTITION BY depname) depsalary,\n> + count(*) OVER (ORDER BY enroll_date DESC) c\n> +FROM empsalary\n> +ORDER BY depname, empno, enroll_date;\n> + QUERY PLAN\n> +------------------------------------------------------\n> + WindowAgg\n> + -> WindowAgg\n> + -> Sort\n> + Sort Key: depname, empno, enroll_date\n> + -> WindowAgg\n> + -> Sort\n> + Sort Key: enroll_date DESC\n> + -> Seq Scan on empsalary\n> +(8 rows)\n> +\n\n\n> Why aren't min() and sum() calculated on the same WindowAgg run?\n\nIsn't that exactly what is happening here? First count() with sort on \nenroll_date is run and\n\nthen min() and sum()?\n\n\nOnly difference between this and plan generated by master(given below) \nis a sort in the end.\n\n QUERY PLAN\n------------------------------------------------------------\n Incremental Sort\n Sort Key: depname, empno, enroll_date\n Presorted Key: depname, empno\n -> WindowAgg\n -> WindowAgg\n -> Sort\n Sort Key: depname, empno\n -> WindowAgg\n -> Sort\n Sort Key: enroll_date DESC\n -> Seq Scan on empsalary\n\nLet me know if I am missing anything. Thanks.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sun, 8 Jan 2023 22:35:08 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 08/01/23 16:33, David Rowley wrote:\n\n> On Sun, 8 Jan 2023 at 23:21, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> Please find attached patch with addressed issues mentioned before.\n>\n> Here's a quick review:\n>\n> 1. You can use foreach_current_index(l) to obtain the index of the list element.\n> \n> 2. I'd rather see you looping backwards over the list in the first\n> loop. I think it'll be more efficient to loop backwards as you can\n> just break out the loop when you see the pathkeys are not contained in\n> the order by pathkreys. When the optimisation does not apply that\n> means you only need to look at the last item in the list. You could\n> maybe just invent foreach_reverse() for this purpose and put it in\n> pg_list.h. That'll save having to manually code up the loop.\n>\n> 3. I don't think you should call the variable\n> enable_order_by_pushdown. We've a bunch of planner related GUCs that\n> start with enable_. That might cause a bit of confusion. Maybe just\n> try_sort_pushdown.\n>\n> 4. You should widen the scope of orderby_pathkeys and set it within\n> the if (enable_order_by_pushdown) scope. You can reuse this again in\n> the 2nd loop too. Just initialise it to NIL\n>\n> 5. You don't seem to be checking all WindowClauses for a runCondtion.\n> If you do #2 above then you can start that process in the initial\n> reverse loop so that you've checked them all by the time you get\n> around to that WindowClause again in the 2nd loop.\n>\n> 6. The test with \"+-- Do not perform additional sort if column is\n> presorted\". I don't think \"additional\" is the correct word here. I\n> think you want to ensure that we don't push down the ORDER BY below\n> the WindowAgg for this case. 
There is no ability to reduce the sorts\n> here, only move them around, which we agreed we don't want to do for\n> this case.\n\n\nI have addressed all points 1-6 in the attached patch.\n\nI have one doubt regarding runCondition, do we only need to ensure\n\nthat window function which has subset sort clause of main query should\n\nnot have runCondition or none of the window functions should not contain\n\nrunCondition? I have gone with later case but wanted to clarify.\n\n\n> Also, do you want to have a go at coding up the sort bound pushdown\n> too? It'll require removing the limitCount restriction and adjusting\n> ExecSetTupleBound() to recurse through a WindowAggState. I think it's\n> pretty easy. You could try it then play around with it to make sure it\n> works and we get the expected performance.\n\nI tried this in the patch but kept getting `retrieved too many tuples in \na bounded sort`.\n\nAdded following code in ExecSetTupleBound which correctly found sortstate\n\nand set bound value.\n\n\telse if(IsA(child_node, WindowAggState))\n\n\t{\n\n\t\tWindowAggState *winstate = (WindowAggState *) child_node;\n\n\t\tif (outerPlanState(winstate))\n\n\t\t\tExecSetTupleBound(tuples_needed, outerPlanState(winstate));\n\n\t}\n\nI think problem is that are not using limit clause inside window \nfunction (which\n\nmay need to scan all tuples) so passing bound value to \nWindowAggState->sortstate\n\nis not working as we might expect. Or maybe I am getting it wrong? I was \ntrying to\n\nhave top-N sort for limit clause if orderby pushdown happens.\n\n> You'll likely want to add a few more rows than the last performance test you did or run the query\n> with pgbench. 
Running a query once that only takes 1-2ms is likely not\n> a reliable way to test the performance of something.\n\nI did some profiling.\n\nCREATE TABLE empsalary1 (\n depname varchar,\n empno bigint,\n salary int,\n enroll_date date\n);\nINSERT INTO empsalary1(depname, empno, salary, enroll_date)\nSELECT string_agg (substr('abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789', ceil (random() * 62)::integer, 1000), '')\n AS depname, generate_series(1, 10000000) AS empno, ceil (random()*10000)::integer AS salary\n, NOW() + (random() * (interval '90 days')) + '30 days' AS enroll_date;\n\n1) Optimization case\n\nEXPLAIN (ANALYZE)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n sum(salary) OVER (PARTITION BY depname) depsalary,\n count(*) OVER (ORDER BY enroll_date DESC) c\nFROM empsalary1\nORDER BY depname, empno, enroll_date;\n\nEXPLAIN (ANALYZE)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname) depminsalary\nFROM empsalary1\nORDER BY depname, empno;\n\n\n\nIn patched version:\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------\n-----------------------\n WindowAgg (cost=93458.04..93458.08 rows=1 width=61) (actual time=149996.006..156756.995 rows=10000000 loops=1)\n -> WindowAgg (cost=93458.04..93458.06 rows=1 width=57) (actual time=108559.126..135892.188 rows=10000000 loops=1)\n -> Sort (cost=93458.04..93458.04 rows=1 width=53) (actual time=108554.213..112564.168 rows=10000000 loops=1)\n Sort Key: depname, empno, enroll_date\n Sort Method: external merge Disk: 645856kB\n -> WindowAgg (cost=93458.01..93458.03 rows=1 width=53) (actual time=30386.551..62357.669 rows=10000000 lo\nops=1)\n -> Sort (cost=93458.01..93458.01 rows=1 width=45) (actual time=23260.104..26313.395 rows=10000000 l\noops=1)\n Sort Key: enroll_date DESC\n Sort Method: external merge Disk: 528448kB\n -> Seq Scan on empsalary1 
(cost=0.00..93458.00 rows=1 width=45) (actual time=0.032..4833.603\nrows=10000000 loops=1)\n Planning Time: 4.693 ms\n Execution Time: 158161.281 ms\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------\n-------------\n WindowAgg (cost=1903015.63..2078015.74 rows=10000006 width=39) (actual time=40565.305..46598.984 rows=10000000 loops=1)\n -> Sort (cost=1903015.63..1928015.65 rows=10000006 width=39) (actual time=23411.837..27467.962 rows=10000000 loops=1)\n Sort Key: depname, empno\n Sort Method: external merge Disk: 528448kB\n -> Seq Scan on empsalary1 (cost=0.00..193458.06 rows=10000006 width=39) (actual time=5.095..5751.675 rows=10000\n000 loops=1)\n Planning Time: 0.099 ms\n Execution Time: 47415.926 ms\n\n\nIn master:\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------\n-------------------------------------\n Incremental Sort (cost=3992645.36..4792645.79 rows=10000006 width=59) (actual time=147130.132..160985.373 rows=10000000\nloops=1)\n Sort Key: depname, empno, enroll_date\n Presorted Key: depname, empno\n Full-sort Groups: 312500 Sort Method: quicksort Average Memory: 28kB Peak Memory: 28kB\n -> WindowAgg (cost=3992645.31..4342645.52 rows=10000006 width=59) (actual time=147129.936..154023.147 rows=10000000 l\noops=1)\n -> WindowAgg (cost=3992645.31..4192645.43 rows=10000006 width=55) (actual time=104665.289..133089.188 rows=1000\n0000 loops=1)\n -> Sort (cost=3992645.31..4017645.33 rows=10000006 width=51) (actual time=104665.257..108710.282 rows=100\n00000 loops=1)\n Sort Key: depname, empno\n Sort Method: external merge Disk: 645856kB\n -> WindowAgg (cost=1971370.63..2146370.74 rows=10000006 width=51) (actual time=28314.300..59737.949\n rows=10000000 loops=1)\n -> Sort (cost=1971370.63..1996370.65 rows=10000006 width=43) (actual time=21190.188..24098.59\n6 rows=10000000 loops=1)\n 
Sort Key: enroll_date DESC\n Sort Method: external merge Disk: 528448kB\n -> Seq Scan on empsalary1 (cost=0.00..193458.06 rows=10000006 width=43) (actual time=0.\n630..5317.862 rows=10000000 loops=1)\n Planning Time: 0.982 ms\n Execution Time: 163369.242 ms\n(16 rows)\n\n QUERY PLAN\n \n--------------------------------------------------------------------------------------------------------------------------\n-------------------\n Incremental Sort (cost=3787573.31..3912573.41 rows=10000006 width=39) (actual time=51547.195..53950.034 rows=10000000 lo\nops=1)\n Sort Key: depname, empno\n Presorted Key: depname\n Full-sort Groups: 1 Sort Method: quicksort Average Memory: 30kB Peak Memory: 30kB\n Pre-sorted Groups: 1 Sort Method: external merge Average Disk: 489328kB Peak Disk: 489328kB\n -> WindowAgg (cost=1903015.63..2078015.74 rows=10000006 width=39) (actual time=33413.954..39771.262 rows=10000000 loo\nps=1)\n -> Sort (cost=1903015.63..1928015.65 rows=10000006 width=39) (actual time=18991.129..21992.353 rows=10000000 lo\nops=1)\n Sort Key: depname\n Sort Method: external merge Disk: 528456kB\n -> Seq Scan on empsalary1 (cost=0.00..193458.06 rows=10000006 width=39) (actual time=1.300..5269.729 rows\n=10000000 loops=1)\n Planning Time: 4.506 ms\n Execution Time: 54768.697 ms\n\n\n2) No optimization case:\n\nEXPLAIN (ANALYZE)\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary\nFROM empsalary1\nORDER BY enroll_date;\n\nPatch:\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------\n Sort (cost=3968191.79..3993191.80 rows=10000006 width=43) (actual \ntime=57863.173..60976.324 rows=10000000 loops=1)\n Sort Key: enroll_date\n Sort Method: external merge Disk: 528448kB\n -> WindowAgg (cost=850613.62..2190279.21 rows=10000006 width=43) \n(actual time=7478.966..42502.541 rows=10000000 loops\n=1)\n -> Gather Merge 
(cost=850613.62..2015279.11 rows=10000006 \nwidth=43) (actual time=7478.935..18037.001 rows=10000\n000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=849613.60..860030.27 rows=4166669 \nwidth=43) (actual time=7349.101..9397.713 rows=3333333 lo\nops=3)\n Sort Key: depname, empno\n Sort Method: external merge Disk: 181544kB\n Worker 0: Sort Method: external merge Disk: 169328kB\n Worker 1: Sort Method: external merge Disk: 177752kB\n -> Parallel Seq Scan on empsalary1 \n(cost=0.00..135124.69 rows=4166669 width=43) (actual time=0.213.\n.2450.635 rows=3333333 loops=3)\n Planning Time: 0.100 ms\n Execution Time: 63341.783 ms\n\nmaster:\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n--------------------------------\n Sort (cost=3968191.79..3993191.80 rows=10000006 width=43) (actual \ntime=54097.880..57000.806 rows=10000000 loops=1)\n Sort Key: enroll_date\n Sort Method: external merge Disk: 528448kB\n -> WindowAgg (cost=850613.62..2190279.21 rows=10000006 width=43) \n(actual time=7075.245..39200.756 rows=10000000 loops\n=1)\n -> Gather Merge (cost=850613.62..2015279.11 rows=10000006 \nwidth=43) (actual time=7075.217..15988.922 rows=10000\n000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (cost=849613.60..860030.27 rows=4166669 \nwidth=43) (actual time=6993.974..8799.701 rows=3333333 lo\nops=3)\n Sort Key: depname, empno\n Sort Method: external merge Disk: 171904kB\n Worker 0: Sort Method: external merge Disk: 178496kB\n Worker 1: Sort Method: external merge Disk: 178224kB\n -> Parallel Seq Scan on empsalary1 \n(cost=0.00..135124.69 rows=4166669 width=43) (actual time=0.044.\n.2683.598 rows=3333333 loops=3)\n Planning Time: 5.718 ms\n Execution Time: 59188.469 ms\n(15 rows)\n\nMaster and patch have same performance as plan is same.\n\n\npgbench (this is to find average performance):\n\ncreate table empsalary2 as select * from empsalary1 limit 
1000;\n-------------------------------------------------------------------\ntest.sql\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n sum(salary) OVER (PARTITION BY depname) depsalary,\n count(*) OVER (ORDER BY enroll_date DESC) c\nFROM empsalary2\nORDER BY depname, empno, enroll_date;\n\nSELECT empno,\n depname,\n min(salary) OVER (PARTITION BY depname) depminsalary\nFROM empsalary2\nORDER BY depname, empno;\n----------------------------------------------------------------------\n\n/usr/local/pgsql/bin/pgbench -d test -c 10 -j 4 -t 1000 -f test.sql\n\nPatch:\n\ntransaction type: test.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 4\nmaximum number of tries: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 55.262 ms\ninitial connection time = 8.480 ms\ntps = 180.957685 (without initial connection time)\n\n\nMaster:\n\ntransaction type: test.sql\nscaling factor: 1\nquery mode: simple\nnumber of clients: 10\nnumber of threads: 4\nmaximum number of tries: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 10000/10000\nnumber of failed transactions: 0 (0.000%)\nlatency average = 60.489 ms\ninitial connection time = 7.069 ms\ntps = 165.320205 (without initial connection time)\n\n\nTPS of the patched version is higher than that of master for the same set of queries where the optimization is performed.\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 8 Jan 2023 22:47:19 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On 1/8/23 18:05, Ankit Kumar Pandey wrote:\n> \n>> On 08/01/23 21:36, Vik Fearing wrote:\n> \n>> On 1/8/23 11:21, Ankit Kumar Pandey wrote:\n>>>\n>>> Please find attached patch with addressed issues mentioned before.\n> \n> \n>> I am curious about this plan:\n> \n>> +-- ORDER BY's in multiple Window functions can be combined into one\n>> +-- if they are subset of QUERY's ORDER BY\n>> +EXPLAIN (COSTS OFF)\n>> +SELECT empno,\n>> + depname,\n>> + min(salary) OVER (PARTITION BY depname ORDER BY empno) \n>> depminsalary,\n>> + sum(salary) OVER (PARTITION BY depname) depsalary,\n>> + count(*) OVER (ORDER BY enroll_date DESC) c\n>> +FROM empsalary\n>> +ORDER BY depname, empno, enroll_date;\n>> + QUERY PLAN\n>> +------------------------------------------------------\n>> + WindowAgg\n>> + -> WindowAgg\n>> + -> Sort\n>> + Sort Key: depname, empno, enroll_date\n>> + -> WindowAgg\n>> + -> Sort\n>> + Sort Key: enroll_date DESC\n>> + -> Seq Scan on empsalary\n>> +(8 rows)\n>> +\n> \n> \n>> Why aren't min() and sum() calculated on the same WindowAgg run?\n> \n> Isn't that exactly what is happening here? First count() with sort on \n> enroll_date is run and\n> \n> then min() and sum()?\n\nNo, there are two passes over the window for those two but I don't see \nthat there needs to be.\n\n> Only difference between this and plan generated by master(given below) \n> is a sort in the end.\n\nThen this is probably not this patch's job to fix.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 8 Jan 2023 20:21:03 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 05:06, Vik Fearing <vik@postgresfriends.org> wrote:\n> +EXPLAIN (COSTS OFF)\n> +SELECT empno,\n> + depname,\n> + min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> + sum(salary) OVER (PARTITION BY depname) depsalary,\n> + count(*) OVER (ORDER BY enroll_date DESC) c\n> +FROM empsalary\n> +ORDER BY depname, empno, enroll_date;\n> + QUERY PLAN\n> +------------------------------------------------------\n> + WindowAgg\n> + -> WindowAgg\n> + -> Sort\n> + Sort Key: depname, empno, enroll_date\n> + -> WindowAgg\n> + -> Sort\n> + Sort Key: enroll_date DESC\n> + -> Seq Scan on empsalary\n\n> Why aren't min() and sum() calculated on the same WindowAgg run?\n\nWe'd need to have an ORDER BY per WindowFunc rather than per\nWindowClause to do that. The problem is when there is no ORDER BY,\nall rows are peers.\n\nThere are likely a bunch more optimisations we could do in that\narea. I think all the builtin window functions (not aggregates being\nused as window functions) don't care about peer rows, so it may be\npossible to merge the WindowClauses when the WindowClause being merged\nonly has window functions that don't care about peer rows. Not for\nthis patch though.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Jan 2023 09:52:07 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 06:17, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> I have addressed all points 1-6 in the attached patch.\n\nA few more things:\n\n1. You're still using the 'i' variable in the foreach loop.\nforeach_current_index() will work.\n\n2. I think the \"index\" variable needs a better name. sort_pushdown_idx maybe.\n\n3. I don't think you need to set \"index\" on every loop. Why not just\nset it to foreach_current_index(l) - 1; break;\n\n4. You're still setting orderby_pathkeys in the foreach loop. That's\nalready been set above and it won't have changed.\n\n5. I don't think there's any need to check pathkeys_contained_in() in\nthe foreach loop anymore. With #3 the index will be -1 if the\noptimisation cannot apply. You could likely also get rid of\ntry_sort_pushdown too and just make the condition \"if\n(sort_pushdown_idx == foreach_current_index(l))\". I'm a little unsure\nwhy there's still the is_sorted check there. Shouldn't that always be\nfalse now that you're looping until the pathkeys don't match in the\nforeach_reverse loop?\n\nCorrect me if I'm wrong as I've not tested, but I think the new code\nin the foreach loop can just become:\n\nif (sort_pushdown_idx == foreach_current_index(l))\n{\n Assert(!is_sorted);\n window_pathkeys = orderby_pathkeys;\n is_sorted = pathkeys_count_contained_in(window_pathkeys,\npath->pathkeys, &presorted_keys);\n}\n\n\n> I have one doubt regarding runCondition, do we only need to ensure\n> that window function which has subset sort clause of main query should\n> not have runCondition or none of the window functions should not contain\n> runCondition? I have gone with later case but wanted to clarify.\n\nActually, maybe it's ok just to check the top-level WindowClause for\nrunConditions. It's only that one that'll filter rows. That probably\nsimplifies the code quite a bit. 
Lower-level runConditions only serve\nto halt the evaluation of WindowFuncs when the runCondition is no\nlonger met.\n\n>\n>\n> > Also, do you want to have a go at coding up the sort bound pushdown\n> > too? It'll require removing the limitCount restriction and adjusting\n> > ExecSetTupleBound() to recurse through a WindowAggState. I think it's\n> > pretty easy. You could try it then play around with it to make sure it\n> > works and we get the expected performance.\n>\n> I tried this in the patch but kept getting `retrieved too many tuples in\n> a bounded sort`.\n>\n> Added following code in ExecSetTupleBound which correctly found sortstate\n>\n> and set bound value.\n>\n> else if(IsA(child_node, WindowAggState))\n>\n> {\n>\n> WindowAggState *winstate = (WindowAggState *) child_node;\n>\n> if (outerPlanState(winstate))\n>\n> ExecSetTupleBound(tuples_needed, outerPlanState(winstate));\n>\n> }\n>\n> I think problem is that are not using limit clause inside window\n> function (which\n> may need to scan all tuples) so passing bound value to\n> WindowAggState->sortstate\n> is not working as we might expect. Or maybe I am getting it wrong? I was\n> trying to\n> have top-N sort for limit clause if orderby pushdown happens.\n\nhmm, perhaps the Limit would have to be put between the WindowAgg and\nSort for it to work. Maybe that's more complexity than it's worth.\n\nDavid\n\n\n",
"msg_date": "Mon, 9 Jan 2023 11:18:58 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 09/01/23 03:48, David Rowley wrote:\n\n> On Mon, 9 Jan 2023 at 06:17, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>> I have addressed all points 1-6 in the attached patch.\n\n> A few more things:\n\n> 1. You're still using the 'i' variable in the foreach loop.\nforeach_current_index() will work.\n\n> 2. I think the \"index\" variable needs a better name. sort_pushdown_idx maybe.\n\nDone these (1 & 2)\n\n> 3. I don't think you need to set \"index\" on every loop. Why not just\nset it to foreach_current_index(l) - 1; break;\n\nConsider this query\n\nEXPLAIN (COSTS OFF)\n\nSELECT empno,\n\n depname,\n\n min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n\n sum(salary) OVER (PARTITION BY depname) depsalary,\n\n count(*) OVER (ORDER BY enroll_date DESC) c\n\nFROM empsalary\n\nORDER BY depname, empno, enroll_date;\n\n\nHere, W1 = min(salary) OVER (PARTITION BY depname ORDER BY empno), W2 = \nsum(salary) OVER (PARTITION BY depname)\n\nW3 = count(*) OVER (ORDER BY enroll_date DESC)\n\n(1,2,3 are winref).\n\n\nactiveWindows = [W3, W1, W2]\n\n\nIf we iterate in reverse and break at the first occurrence, we will\n\nbreak at W2 and add extra keys there, but what we want is to add\n\nkeys at W1 so that they get spilled to W2 (as the existing logic is designed to\n\ncarry over sorted cols first to last).\n\n> 4. You're still setting orderby_pathkeys in the foreach loop. That's\nalready been set above and it won't have changed.\n\n> 5. I don't think there's any need to check pathkeys_contained_in() in\nthe foreach loop anymore. With #3 the index will be -1 if the\noptimisation cannot apply. You could likely also get rid of\ntry_sort_pushdown too and just make the condition \"if\n(sort_pushdown_idx == foreach_current_index(l))\".\n\nDone this.\n\nAdded pathkeys_contained_in as an assert, hope that's okay.\n\n> I'm a little unsure why there's still the is_sorted check there. 
\nShouldn't that always be false now that you're looping until the pathkeys\ndon't match in the foreach_reverse loop?\n\nRemoving is_sorted causes an issue if there is a matching pathkey which is \npresorted,\n\ne.g. this case\n\n-- Do not perform sort pushdown if column is presorted\nCREATE INDEX depname_idx ON empsalary(depname);\nSET enable_seqscan=0;\n\nEXPLAIN (COSTS OFF)\nSELECT empno,\n min(salary) OVER (PARTITION BY depname) depminsalary\nFROM empsalary\nORDER BY depname, empno;\n\nWe can move this to the if (try_sort_pushdown) block but it looks a bit \nugly to me.\n\nNevertheless, it makes sense to have it here; sort_pushdown_idx should \npoint to the exact\n\nwindow function which needs to be modified. Having an extra check (for \nis_sorted) in the 2nd foreach loop\n\nadds ambiguity if we don't add it in the first check.\n\nforeach_reverse(l, activeWindows)\n{\n\tWindowClause *wc = lfirst_node(WindowClause, l);\n\torderby_pathkeys = make_pathkeys_for_sortclauses(root,root->parse->sortClause,root->processed_tlist);\n\twindow_pathkeys = make_pathkeys_for_window(root,wc,root->processed_tlist);\n\tis_sorted = pathkeys_count_contained_in(window_pathkeys,path->pathkeys,&presorted_keys);\n\thas_runcondition |= (wc->runCondition != NIL);\n\tif (!pathkeys_contained_in(window_pathkeys, orderby_pathkeys) || has_runcondition)\n\t\tbreak;\n\tif(!is_sorted)\n\t\tsort_pushdown_idx = foreach_current_index(l);\n}\n\nTests pass on this, so logically it is ok.\n\n> Correct me if I'm wrong as I've not tested, but I think the new code\n> in the foreach loop can just become:\n> \n> if (sort_pushdown_idx == foreach_current_index(l))\n> {\n> Assert(!is_sorted);\n> window_pathkeys = orderby_pathkeys;\n> is_sorted = pathkeys_count_contained_in(window_pathkeys,\n> path->pathkeys, &presorted_keys);\n> }\n\nDepending on where we have is_sorted (as mentioned above) it looks a lot \nlike what you mentioned.\n\nAlso, we can add Assert(pathkeys_contained_in(window_pathkeys, \norderby_pathkeys))\n\n>> I have one doubt 
regarding runCondition, do we only need to ensure\n>> that window function which has subset sort clause of main query should\n>> not have runCondition or none of the window functions should not contain\n>> runCondition? I have gone with later case but wanted to clarify.\n\n> Actually, maybe it's ok just to check the top-level WindowClause for\n> runConditions. It's only that one that'll filter rows. That probably\n> simplifies the code quite a bit. Lower-level runConditions only serve\n> to halt the evaluation of WindowFuncs when the runCondition is no\n> longer met.\n\nOkay, then this approach makes sense.\n\n> hmm, perhaps the Limit would have to be put between the WindowAgg and\n> Sort for it to work. Maybe that's more complexity than it's worth.\n\nYes, not specific to this change. It is more around allowing top-N sort in\n\nwindow functions (in general). Once we have it there, then this could be \ntaken care of.\n\n\nI have attached a patch which fixes 1 & 2 and rearranges is_sorted.\n\nPoint #3 needs to be resolved (and perhaps another way to handle is_sorted)\n\n\nThanks,\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Mon, 9 Jan 2023 14:04:08 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Mon, 9 Jan 2023 at 21:34, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n>\n> > On 09/01/23 03:48, David Rowley wrote:\n> > 3. I don't think you need to set \"index\" on every loop. Why not just\n> set it to foreach_current_index(l) - 1; break;\n>\n> Consider this query\n>\n> EXPLAIN (COSTS OFF)\n> SELECT empno,\n> depname,\n> min(salary) OVER (PARTITION BY depname ORDER BY empno) depminsalary,\n> sum(salary) OVER (PARTITION BY depname) depsalary,\n> count(*) OVER (ORDER BY enroll_date DESC) c\n> FROM empsalary\n> ORDER BY depname, empno, enroll_date;\n>\n>\n> Here, W1 = min(salary) OVER (PARTITION BY depname ORDER BY empno) W2 =\n> sum(salary) OVER (PARTITION BY depname)\n>\n> W3 = count(*) OVER (ORDER BY enroll_date DESC)\n>\n> (1,2,3 are winref).\n>\n>\n> activeWindows = [W3, W1, W2]\n>\n> If we iterate from reverse and break at first occurrence, we will\n> break at W2 and add extra keys there, but what we want it to add\n> keys at W1 so that it gets spilled to W2 (as existing logic is designed to\n> carry over sorted cols first to last).\n\nWe need to keep looping backwards until we find the first WindowClause\nwhich does not contain the pathkeys of the ORDER BY. When we find a\nWindowClause that does not contain the pathkeys of the ORDER BY, then\nwe must set the sort_pushdown_idx to the index of the prior\nWindowClause. I couldn't quite understand why the foreach() loop's\ncondition couldn't just be \"if (foreach_current_index(l) ==\nsort_pushdown_idx)\", but I see that if we don't check if the path is\nalready correctly sorted that we could end up pushing the sort down\nonto the path that's already correctly sorted. We decided we didn't\nwant to move the sort around if it does not reduce the amount of\nsorting.\n\nI had to try this out for myself and I've ended up with the attached\nv6 patch. All the tests you added still pass. 
Although, I didn't\nreally study the tests yet to see if everything we talked about is\ncovered.\n\nIt turned out the sort_pushdown_idx = foreach_current_index(l) - 1;\nbreak; didn't work as if all the WindowClauses have pathkeys contained\nin the order by pathkeys then we don't ever set sort_pushdown_idx. I\nadjusted it to do:\n\nif (pathkeys_contained_in(window_pathkeys, orderby_pathkeys))\n sort_pushdown_idx = foreach_current_index(l);\nelse\n break;\n\nI also fixed up the outdated comments and changed it so we only set\norderby_pathkeys once instead of once per loop in the\nforeach_reverse() loop.\n\nI gave some thought to whether doing foreach_delete_current() is safe\nwithin a foreach_reverse() loop. I didn't try it, but I couldn't see\nany reason why not. It is pretty late here and I'd need to test that\nto be certain. If it turns out not to be safe then we need to document\nthat fact in the comments of the foreach_reverse() macro and the\nforeach_delete_current() macro.\n\nDavid",
"msg_date": "Tue, 10 Jan 2023 01:23:14 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 09/01/23 17:53, David Rowley wrote:\n\n> We need to keep looping backwards until we find the first WindowClause\n> which does not contain the pathkeys of the ORDER BY.\n\nI found the cause of confusion: *first* WindowClause means first from\n\nthe forward direction. Since we are looping backward, I took it to mean first\n\nfrom the last.\n\n\neg:\n\nselect count(*) over (order by a), count(*) over (order by a,b), \ncount(*) over (order by a,b,c) from abcd order by a,b,c,d;\n\nThe first window clause is count(*) over (order by a), which we are using for \norder-by pushdown.\n\n\nThis is in sync with the implementation as well.\n\n\n> I couldn't quite understand why the foreach() loop's\n> condition couldn't just be \"if (foreach_current_index(l) ==\n> sort_pushdown_idx)\", but I see that if we don't check if the path is\n> already correctly sorted that we could end up pushing the sort down\n> onto the path that's already correctly sorted. We decided we didn't\n> want to move the sort around if it does not reduce the amount of\n> sorting.\n\nYes, this was the reason; the current patch handles this without is_sorted \nnow, which is great.\n\n> All the tests you added still pass. Although, I didn't\n> really study the tests yet to see if everything we talked about is\n> covered.\n\nIt covers general cases and exceptions. Also, I did a few additional \ntests. Looked good.\n\n> It turned out the sort_pushdown_idx = foreach_current_index(l) - 1;\n> break; didn't work as if all the WindowClauses have pathkeys contained\n> in the order by pathkeys then we don't ever set sort_pushdown_idx. I\n> adjusted it to do:\n\n> if (pathkeys_contained_in(window_pathkeys, orderby_pathkeys))\n> sort_pushdown_idx = foreach_current_index(l);\n> else\n> break;\n\nYes, that would have been problematic. 
I have verified this case,\n\nand on a related note, I have added a test case that ensures order by pushdown\n\nshouldn't happen if the window function's order by is a superset of the query's \norder by.\n\n> I also fixed up the outdated comments and changed it so we only set\n> orderby_pathkeys once instead of once per loop in the\n> foreach_reverse() loop.\n\nThanks, the code looks a lot neater now (is_sorted is gone and handled in a \nbetter way).\n\n> I gave some thought to whether doing foreach_delete_current() is safe\n> within a foreach_reverse() loop. I didn't try it, but I couldn't see\n> any reason why not. It is pretty late here and I'd need to test that\n> to be certain. If it turns out not to be safe then we need to document\n> that fact in the comments of the foreach_reverse() macro and the\n> foreach_delete_current() macro.\n\nI tested foreach_delete_current inside a foreach_reverse loop.\n\nIt worked fine.\n\n\nI have attached a patch with one extra test case (as mentioned above). \nThe rest of the changes are looking fine.\n\nRan pgbench again and the optimized version still had a lead (168 tps vs 135 \ntps) in performance.\n\n\nDo we have any pending items for this patch now?\n\n-- \n\nRegards,\nAnkit Kumar Pandey",
"msg_date": "Mon, 9 Jan 2023 22:45:12 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 06:15, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Do we have any pending items for this patch now?\n\nI'm just wondering if not trying this when the query has a DISTINCT\nclause is a copout. What I wanted to avoid was doing additional\nsorting work for WindowAgg just to have it destroyed by Hash\nAggregate. I'm now wondering if adding both the original\nslightly-less-sorted path plus the new slightly-more-sorted path then\nif distinct decides to Hash Aggregate then it'll still be able to pick\nthe cheapest input path to do that on. Unfortunately, our sort\ncosting just does not seem to be advanced enough to know that sorting\nby fewer columns might be cheaper, so adding the additional path is\nlikely just going to result in add_path() ditching the old\nslightly-less-sorted path due to the new slightly-more-sorted path\nhaving better pathkeys. So, we'd probably be wasting our time if we\nadded both paths with the current sort costing code.\n\n# explain analyze select * from pg_Class order by relkind,relname;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Sort (cost=36.01..37.04 rows=412 width=273) (actual\ntime=0.544..0.567 rows=412 loops=1)\n Sort Key: relkind, relname\n Sort Method: quicksort Memory: 109kB\n -> Seq Scan on pg_class (cost=0.00..18.12 rows=412 width=273)\n(actual time=0.014..0.083 rows=412 loops=1)\n Planning Time: 0.152 ms\n Execution Time: 0.629 ms\n(6 rows)\n\n\n# explain analyze select * from pg_Class order by relkind;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Sort (cost=36.01..37.04 rows=412 width=273) (actual\ntime=0.194..0.218 rows=412 loops=1)\n Sort Key: relkind\n Sort Method: quicksort Memory: 109kB\n -> Seq Scan on pg_class (cost=0.00..18.12 rows=412 width=273)\n(actual time=0.014..0.083 rows=412 loops=1)\n Planning Time: 
0.143 ms\n Execution Time: 0.278 ms\n(6 rows)\n\nthe total cost is the same for both of these, but the execution time\nseems to vary quite a bit.\n\nMaybe we should try and do this for DISTINCT queries if the\ndistinct_pathkeys match the orderby_pathkeys. That seems a little less\ncopout-ish. If the ORDER BY is the same as the DISTINCT then it seems\nlikely that the ORDER BY might opt to use the Unique path for DISTINCT\nsince it'll already have the correct pathkeys. However, if the ORDER\nBY has fewer columns then it might be cheaper to Hash Aggregate and\nthen sort all over again, especially so when the DISTINCT removes a\nlarge proportion of the rows.\n\nIdeally, our sort costing would just be better, but I think that\nraises the bar a little too high to start thinking of making\nimprovements to that for this patch.\n\nDavid\n\n\n",
"msg_date": "Tue, 10 Jan 2023 18:23:50 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> Ideally, our sort costing would just be better, but I think that\n> raises the bar a little too high to start thinking of making\n> improvements to that for this patch.\n\nIt's trickier than it looks, cf f4c7c410e. But if you just want\nto add a small correction based on number of columns being sorted\nby, that seems within reach. See the comment for cost_sort though.\nAlso, I suppose for incremental sorts we'd want to consider only\nthe number of newly-sorted columns, but I'm not sure if that info\nis readily at hand either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 00:36:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 10/01/23 10:53, David Rowley wrote:\n\n> On Tue, 10 Jan 2023 at 06:15, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> > Do we have any pending items for this patch now?\n>\n> I'm just wondering if not trying this when the query has a DISTINCT\n> clause is a copout. What I wanted to avoid was doing additional\n> sorting work for WindowAgg just to have it destroyed by Hash\n> Aggregate. I'm now wondering if adding both the original\n> slightly-less-sorted path plus the new slightly-more-sorted path then\n> if distinct decides to Hash Aggregate then it'll still be able to pick\n> the cheapest input path to do that on. Unfortunately, our sort\n> costing just does not seem to be advanced enough to know that sorting\n> by fewer columns might be cheaper, so adding the additional path is\n> likely just going to result in add_path() ditching the old\n> slightly-less-sorted path due to the new slightly-more-sorted path\n> having better pathkeys. So, we'd probably be wasting our time if we\n> added both paths with the current sort costing code.\n\n> Maybe we should try and do this for DISTINCT queries if the\n> distinct_pathkeys match the orderby_pathkeys. That seems a little less\n> copout-ish. If the ORDER BY is the same as the DISTINCT then it seems\n> likely that the ORDER BY might opt to use the Unique path for DISTINCT\n> since it'll already have the correct pathkeys. However, if the ORDER\n> BY has fewer columns then it might be cheaper to Hash Aggregate and\n> then sort all over again, especially so when the DISTINCT removes a\n> large proportion of the rows.\n>\n> Ideally, our sort costing would just be better, but I think that\n> raises the bar a little too high to start thinking of making\n> improvements to that for this patch.\n\nLet me take a stab at this. Depending on complexity, we can take\n\na call to address this in current patch or a follow up.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 14:01:52 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 10/01/23 10:53, David Rowley wrote:\n\n> the total cost is the same for both of these, but the execution time\n> seems to vary quite a bit.\n\nThis is really weird, I tried it in different ways (to rule out any issues \ndue to\n\ncaching) and the execution time varied in spite of having the same cost.\n\n> Maybe we should try and do this for DISTINCT queries if the\n> distinct_pathkeys match the orderby_pathkeys. That seems a little less\n> copout-ish. If the ORDER BY is the same as the DISTINCT then it seems\n> likely that the ORDER BY might opt to use the Unique path for DISTINCT\n> since it'll already have the correct pathkeys.\n\n> However, if the ORDER BY has fewer columns then it might be cheaper to Hash Aggregate and\n> then sort all over again, especially so when the DISTINCT removes a\n> large proportion of the rows.\n\nAren't the order by pathkeys always fewer than the distinct pathkeys?\n\ndistinct pathkeys = order by pathkeys + window functions pathkeys\n\nAgain, I got your point, which is that it is okay to push down the order by clause\n\nif distinct is doing a unique sort. 
But the problem is (at least from what I am \nfacing) that\n\ndistinct does not care about the pushed-down sortkeys; it goes with hashagg \nor unique\n\nbased on some other logic (mentioned below).\n\n\nConsider the following (with the distinct clause restriction removed)\n\nif (parse->distinctClause)\n{\n\tList* distinct_pathkeys = make_pathkeys_for_sortclauses(root, parse->distinctClause, root->processed_tlist);\n\tif (!compare_pathkeys(distinct_pathkeys, orderby_pathkeys)==1) // distinct key > order by key\n\t\tskip = true; // this is used to skip order by pushdown\n\n\n}\n\nCASE #1:\n\nexplain (costs off) select distinct a,b, min(a) over (partition by a), sum (a) over (partition by a) from abcd order by a,b;\n QUERY PLAN\n-----------------------------------------------------------\n Sort\n Sort Key: a, b\n -> HashAggregate\n Group Key: a, b, min(a) OVER (?), sum(a) OVER (?)\n -> WindowAgg\n -> Sort\n Sort Key: a, b\n -> Seq Scan on abcd\n(8 rows)\n\nexplain (costs off) select distinct a,b,c, min(a) over (partition by a), sum (a) over (partition by a) from abcd order by a,b,c;\n QUERY PLAN\n--------------------------------------------------------------\n Sort\n Sort Key: a, b, c\n -> HashAggregate\n Group Key: a, b, c, min(a) OVER (?), sum(a) OVER (?)\n -> WindowAgg\n -> Sort\n Sort Key: a, b, c\n -> Seq Scan on abcd\n(8 rows)\n\nNo matter how many columns are pushed down, it does hashagg.\n\nOn the other hand:\n\nCASE #2:\n\nEXPLAIN (costs off) SELECT DISTINCT depname, empno, min(salary) OVER (PARTITION BY depname) depminsalary,sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique\n -> Sort\n Sort Key: depname, empno, (min(salary) OVER (?)), (sum(salary) OVER (?))\n -> WindowAgg\n -> Sort\n Sort Key: depname, empno\n -> Seq Scan on empsalary\n(7 rows)\n\nEXPLAIN (costs off) SELECT DISTINCT depname, empno, enroll_date, min(salary) OVER 
(PARTITION BY depname) depminsalary,sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno, enroll_date;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Unique\n -> Sort\n Sort Key: depname, empno, enroll_date, (min(salary) OVER (?)), (sum(salary) OVER (?))\n -> WindowAgg\n -> Sort\n Sort Key: depname, empno, enroll_date\n -> Seq Scan on empsalary\n(7 rows)\n\nIt keeps doing Unique.\n\nIn both of the cases, compare_pathkeys(distinct_pathkeys, \norderby_pathkeys) returns 1\n\nLooking a bit further, the planner is choosing things correctly.\n\nI could see the cost of unique being higher in the 1st case and lower in the 2nd case.\n\nBut the point is, if the sort for the order by is pushed down, shouldn't there be \nsome discount on the\n\ncost of the Unique sort (so that there is more possibility of it being \nfavorable compared to HashAgg in certain cases)?\n\nAgain, the cost of the Unique node is taken as the cost of the sort node as it is, but\n\nfor HashAgg, a new cost is being computed. 
If we do an incremental sort here \n(for the unique node),\n\nas we have pushed down the order by's, the unique cost could be reduced and our \noptimization could\n\nbe made worthwhile (I assume this is what you intended here) in the case of \ndistinct.\n\nEg:\n\nEXPLAIN SELECT DISTINCT depname, empno, enroll_date, min(salary) OVER (PARTITION BY depname) depminsalary,sum(salary) OVER (PARTITION BY depname) depsalary\nFROM empsalary\nORDER BY depname, empno, enroll_date;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Unique (cost=1.63..1.78 rows=10 width=56)\n -> Sort (cost=1.63..1.66 rows=10 width=56)\n Sort Key: depname, empno, enroll_date, (min(salary) OVER (?)), (sum(salary) OVER (?))\n -> WindowAgg (cost=1.27..1.47 rows=10 width=56)\n -> Sort (cost=1.27..1.29 rows=10 width=48)\n Sort Key: depname, empno, enroll_date\n -> Seq Scan on empsalary (cost=0.00..1.10 rows=10 width=48)\n\ndepname, empno, enroll_date are presorted but a full sort is still \ndone on all columns.\n\n\nAdditionally,\n\n> the total cost is the same for both of these, but the execution time\n> seems to vary quite a bit.\n\nEven if I push down one or two path keys, the end result is the same cost (which \nisn't helping).\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 00:47:25 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, 11 Jan 2023 at 08:17, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n>\n> > On 10/01/23 10:53, David Rowley wrote:\n>\n> > the total cost is the same for both of these, but the execution time\n> > seems to vary quite a bit.\n>\n> This is really weird, I tried it in different ways (to rule out any issues\n> due to\n>\n> caching) and the execution time varied in spite of having the same cost.\n>\n> > Maybe we should try and do this for DISTINCT queries if the\n> > distinct_pathkeys match the orderby_pathkeys. That seems a little less\n> > copout-ish. If the ORDER BY is the same as the DISTINCT then it seems\n> > likely that the ORDER BY might opt to use the Unique path for DISTINCT\n> > since it'll already have the correct pathkeys.\n>\n> > However, if the ORDER BY has fewer columns then it might be cheaper to Hash Aggregate and\n> > then sort all over again, especially so when the DISTINCT removes a\n> > large proportion of the rows.\n>\n> Aren't the order by pathkeys always fewer than the distinct pathkeys?\n\nJust thinking about this again, I remembered why I thought DISTINCT\nwas uninteresting to start with. The problem is that if the query has\nWindowFuncs and also has a DISTINCT clause, then the WindowFunc\nresults *must* be in the DISTINCT clause and, optionally also in the\nORDER BY clause. There's no other place to write WindowFuncs IIRC.\nSince we cannot push down the sort when the stricter version of the\npathkeys has WindowFuncs, we must still perform the additional\nsort if the planner chooses to do a non-hashed DISTINCT. 
The aim of\nthis patch is to reduce the total number of sorts, and I don't think\nthat's possible in this case as you can't have WindowFuncs in the\nORDER BY when they're not in the DISTINCT clause:\n\npostgres=# select distinct relname from pg_Class order by row_number()\nover (order by oid);\nERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list\nLINE 1: select distinct relname from pg_Class order by row_number() ...\n\nAnother type of query which is suboptimal still is when there's a\nDISTINCT and WindowClause but no ORDER BY. We'll reorder the DISTINCT\nclause so that the leading columns of the ORDER BY come first in\ntransformDistinctClause(), but we've nothing to do the same for\nWindowClauses. It can't happen around when transformDistinctClause()\nis called as we've yet to decide the WindowClause evaluation order,\nso if we were to try to make that better it would maybe have to do in\nthe upper planner somewhere. It's possible it's too late by that time\nto adjust the DISTINCT clause.\n\nHere's an example of it.\n\n# set enable_hashagg=0;\n# explain select distinct relname,relkind,count(*) over (partition by\nrelkind) from pg_Class;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Unique (cost=61.12..65.24 rows=412 width=73)\n -> Sort (cost=61.12..62.15 rows=412 width=73)\n Sort Key: relname, relkind, (count(*) OVER (?))\n -> WindowAgg (cost=36.01..43.22 rows=412 width=73)\n -> Sort (cost=36.01..37.04 rows=412 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=0.00..18.12\nrows=412 width=65)\n(7 rows)\n\nWe can simulate the optimisation by swapping the column order in the\ntargetlist. 
Note the planner can use Incremental Sort (at least since\n3c6fc5820, from about 2 hours ago)\n\n# explain select distinct relkind,relname,count(*) over (partition by\nrelkind) from pg_Class;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Unique (cost=41.26..65.32 rows=412 width=73)\n -> Incremental Sort (cost=41.26..62.23 rows=412 width=73)\n Sort Key: relkind, relname, (count(*) OVER (?))\n Presorted Key: relkind\n -> WindowAgg (cost=36.01..43.22 rows=412 width=73)\n -> Sort (cost=36.01..37.04 rows=412 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=0.00..18.12\nrows=412 width=65)\n(8 rows)\n\nNot sure if we should be trying to improve that in this patch. I just\nwanted to identify it as something else that perhaps could be done.\nI'm not really all that sure the above query shape makes much sense in\nthe real world. Would anyone ever want to use DISTINCT on some results\ncontaining WindowFuncs?\n\nDavid\n\n\n",
"msg_date": "Wed, 11 Jan 2023 13:48:16 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 18:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > Ideally, our sort costing would just be better, but I think that\n> > raises the bar a little too high to start thinking of making\n> > improvements to that for this patch.\n>\n> It's trickier than it looks, cf f4c7c410e. But if you just want\n> to add a small correction based on number of columns being sorted\n> by, that seems within reach. See the comment for cost_sort though.\n> Also, I suppose for incremental sorts we'd want to consider only\n> the number of newly-sorted columns, but I'm not sure if that info\n> is readily at hand either.\n\nYeah, I had exactly that in mind when I mentioned about setting the\nbar higher. It seems like a worthy enough goal to improve the sort\ncosts separately from this work. I'm starting to consider if we might\nneed to revisit cost_sort() anyway. There's been quite a number of\nperformance improvements made to sort in the past few years and I\ndon't recall if anything has been done to check if the sort costs are\nstill realistic. I'm aware that it's a difficult problem as the number\nof comparisons is highly dependent on the order of the input rows.\n\nDavid\n\n\n",
"msg_date": "Wed, 11 Jan 2023 13:54:04 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 11/01/23 06:18, David Rowley wrote:\n\n>\n> Not sure if we should be trying to improve that in this patch. I just\n> wanted to identify it as something else that perhaps could be done.\n\nThis could be within reach but still original problem of having hashagg \nremoving\n\nany gains from this remains.\n\n\neg\n\nset enable_hashagg=0;\n\nexplain select distinct relkind, relname, count(*) over (partition by\nrelkind) from pg_Class;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Unique (cost=41.26..65.32 rows=412 width=73)\n -> Incremental Sort (cost=41.26..62.23 rows=412 width=73)\n Sort Key: relkind, relname, (count(*) OVER (?))\n Presorted Key: relkind\n -> WindowAgg (cost=36.01..43.22 rows=412 width=73)\n -> Sort (cost=36.01..37.04 rows=412 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=0.00..18.12 rows=412 width=65)\n(8 rows)\n\nreset enable_hashagg;\nexplain select distinct relkind, relname, count(*) over (partition by\nrelkind) from pg_Class;\n QUERY PLAN\n------------------------------------------------------------------------------\n HashAggregate (cost=46.31..50.43 rows=412 width=73)\n Group Key: relkind, relname, count(*) OVER (?)\n -> WindowAgg (cost=36.01..43.22 rows=412 width=73)\n -> Sort (cost=36.01..37.04 rows=412 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=0.00..18.12 rows=412 width=65)\n(6 rows)\n\nHashAgg has better cost than Unique even with incremental sort (tried \nwith other case\n\nwhere we have more columns pushed down but still hashAgg wins).\n\nexplain select distinct a, b, count(*) over (partition by a order by b) from abcd;\n QUERY PLAN\n--------------------------------------------------------------------------------------\n Unique (cost=345712.12..400370.25 rows=1595 width=16)\n -> Incremental Sort (cost=345712.12..395456.14 rows=655214 width=16)\n Sort Key: a, b, (count(*) OVER (?))\n Presorted Key: a, b\n -> WindowAgg 
(cost=345686.08..358790.36 rows=655214 width=16)\n -> Sort (cost=345686.08..347324.11 rows=655214 width=8)\n Sort Key: a, b\n -> Seq Scan on abcd (cost=0.00..273427.14 rows=655214 width=8)\n\nexplain select distinct a, b, count(*) over (partition by a order by b) from abcd;\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n\n HashAggregate (cost=363704.46..363720.41 rows=1595 width=16)\n\n Group Key: a, b, count(*) OVER (?)\n\n -> WindowAgg (cost=345686.08..358790.36 rows=655214 width=16)\n\n -> Sort (cost=345686.08..347324.11 rows=655214 width=8)\n\n Sort Key: a, b\n\n -> Seq Scan on abcd (cost=0.00..273427.14 rows=655214 width=8)\n\n(6 rows)\n\n\n> I'm not really all that sure the above query shape makes much sense in\n> the real world. Would anyone ever want to use DISTINCT on some results\n> containing WindowFuncs?\n\nThis could still have been good to have if there were no negative impact\n\nand some benefit in few cases but as mentioned before, if hashagg removes\n\nany sort (which happened due to push down), all gains will be lost\n\nand we will be probably worse off than before.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 11:51:06 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, 11 Jan 2023 at 19:21, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> HashAgg has better cost than Unique even with incremental sort (tried\n> with other case\n>\n> where we have more columns pushed down but still hashAgg wins).\n\nI don't think you can claim that one so easily. The two should have\nquite different scaling characteristics which will be more evident\nwith a larger number of input rows. Also, Hash Aggregate makes use of\nwork_mem * hash_mem_multiplier, whereas sort uses work_mem. Consider\na hash_mem_multiplier less than 1.0.\n\n> > I'm not really all that sure the above query shape makes much sense in\n> > the real world. Would anyone ever want to use DISTINCT on some results\n> > containing WindowFuncs?\n>\n> This could still have been good to have if there were no negative impact\n>\n> and some benefit in few cases but as mentioned before, if hashagg removes\n>\n> any sort (which happened due to push down), all gains will be lost\n>\n> and we will be probably worse off than before.\n\nWe could consider adjusting the create_distinct_paths() so that it\nuses some newly invented and less strict pathkey comparison where the\norder of the pathkeys does not matter. It would just care if the\npathkeys were present and return a list of pathkeys not contained so\nthat an incremental sort could be done only on the returned list and a\nUnique on an empty returned list. 
Something like that might be able\nto apply in more cases, for example:\n\nselect distinct b,a from ab where a < 10;\n\nthe distinct pathkeys would be b,a but if there's an index on (a),\nthen we might have a path with pathkeys containing \"a\".\n\nYou can see when we manually swap the order of the DISTINCT clause\nthat we get a more optimal plan (even if they're not costed quite as\naccurately as we might have liked)\n\ncreate table ab(a int, b int);\ncreate index on ab(a);\nset enable_hashagg=0;\nset enable_seqscan=0;\ninsert into ab select x,y from generate_series(1,100)x, generate_Series(1,100)y;\nanalyze ab;\n\n# explain select distinct b,a from ab where a < 10;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=72.20..78.95 rows=611 width=8)\n -> Sort (cost=72.20..74.45 rows=900 width=8)\n Sort Key: b, a\n -> Index Scan using ab_a_idx on ab (cost=0.29..28.04\nrows=900 width=8)\n Index Cond: (a < 10)\n(5 rows)\n\n# explain select distinct a,b from ab where a < 10; -- manually swap\nDISTINCT column order.\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=0.71..60.05 rows=611 width=8)\n -> Incremental Sort (cost=0.71..55.55 rows=900 width=8)\n Sort Key: a, b\n Presorted Key: a\n -> Index Scan using ab_a_idx on ab (cost=0.29..28.04\nrows=900 width=8)\n Index Cond: (a < 10)\n(6 rows)\n\nWe might also want to also consider if Pathkey.pk_strategy and\npk_nulls_first need to be compared too. That makes the check a bit\nmore expensive as Pathkeys are canonical and if those fields vary then\nwe need to perform more than just a comparison by the memory address\nof the pathkey. This very much seems like a separate effort than the\nWindowClause sort reduction work. I think it gives us everything we've\ntalked about extra we might want out of reducing WindowClause sorts\nand more.\n\nDavid\n\n\n",
"msg_date": "Fri, 13 Jan 2023 15:18:20 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 13/01/23 07:48, David Rowley wrote:\n\n> I don't think you can claim that one so easily. The two should have\n> quite different scaling characteristics which will be more evident\n> with a larger number of input rows. Also, Hash Aggregate makes use of\n> work_mem * hash_mem_multiplier, whereas sort uses work_mem. Consider\n> a hash_mem_multiplier less than 1.0.\n\nIn this case, it would make sense to do the pushdown. I will do testing \nwith large data to see for myself.\n\n\n> We could consider adjusting the create_distinct_paths() so that it\n> uses some newly invented and less strict pathkey comparison where the\n> order of the pathkeys does not matter. It would just care if the\n> pathkeys were present and return a list of pathkeys not contained so\n> that an incremental sort could be done only on the returned list and a\n> Unique on an empty returned list. Something like that might be able\n> to apply in more cases, for example:\n\n> select distinct b,a from ab where a < 10;\n\n> the distinct pathkeys would be b,a but if there's an index on (a),\n> then we might have a path with pathkeys containing \"a\".\n\nThis would be a very good improvement.\n\n> even if they're not costed quite as\n> accurately as we might have liked\n\nThis is a very exciting piece actually. Once the current set of optimizations \nmoves ahead,\n\nI will give this a shot. We need to look at cost models for sorting.\n\n\n> We might also want to also consider if Pathkey.pk_strategy and\n> pk_nulls_first need to be compared too. That makes the check a bit\n> more expensive as Pathkeys are canonical and if those fields vary then\n> we need to perform more than just a comparison by the memory address\n> of the pathkey.\n \n> This very much seems like a separate effort than the\n> WindowClause sort reduction work. 
I think it gives us everything we've\n> talked about extra we might want out of reducing WindowClause sorts\n> and more.\n\nI will work on this as a separate patch (against HEAD). It makes much more\n\nsense to look at this as distinct-sort-related optimizations (which the window \nsort optimization\n\ncan benefit from). We may take a call to combine them or apply them in series.\n\nFrom a unit-of-work perspective, I would prefer the latter.\n\n\nAnyway, the forthcoming patch will contain the following:\n\n1. Modify create_distinct_paths with a newly invented and less strict \npathkey comparison where the\n\norder of the pathkeys does not matter.\n\n2. Handle Pathkey.pk_strategy and pk_nulls_first comparison.\n\n3. Test cases\n\n\nThanks\n\n\nRegards,\n\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 11:06:19 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 13/01/23 07:48, David Rowley wrote:\n\n> It would just care if the\n> pathkeys were present and return a list of pathkeys not contained so\n> that an incremental sort could be done only on the returned list and a\n> Unique on an empty returned list. \n\nIn create_final_distinct_paths, presorted keys are determined from\n\ninput_rel->pathlist & needed_pathkeys. Problem with input_rel->pathlist\n\nis that, for index node, useful_pathkeys is stored in \ninput_rel->pathlist but this useful_pathkeys\n\nis determined from truncate_useless_pathkeys(index_pathkeys) which \nremoves index_keys if ordering is different.\n\nHence, input_rel->pathlist returns null for select distinct b,a from ab \nwhere a < 10; even if index is created on a.\n\nIn order to tackle this, I have added index_pathkeys in indexpath node \nitself.\n\nAlthough I started this patch from master, I merged changes to window sort\n\noptimizations.\n\n\nIn patched version:\n\nset enable_hashagg=0;\nset enable_seqscan=0;\n\nexplain select distinct relname,relkind,count(*) over (partition by\nrelkind) from pg_Class;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Unique (cost=10000000039.49..10000000063.73 rows=415 width=73)\n -> Incremental Sort (cost=10000000039.49..10000000060.62 rows=415 width=73)\n Sort Key: relkind, relname, (count(*) OVER (?))\n Presorted Key: relkind\n -> WindowAgg (cost=10000000034.20..10000000041.46 rows=415 width=73)\n -> Sort (cost=10000000034.20..10000000035.23 rows=415 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=10000000000.00..10000000016.15 rows=415 width=65)\n(8 rows)\n\nexplain select distinct b,a from ab where a < 10;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=0.71..60.05 rows=611 width=8)\n -> Incremental Sort (cost=0.71..55.55 rows=900 width=8)\n Sort Key: a, b\n Presorted Key: a\n -> Index Scan 
using ab_a_idx on ab (cost=0.29..28.04 rows=900 width=8)\n Index Cond: (a < 10)\n(6 rows)\n\nexplain select distinct b,a, count(*) over (partition by a) from abcd order by a,b;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Unique (cost=10000021174.63..10000038095.75 rows=60 width=16)\n -> Incremental Sort (cost=10000021174.63..10000036745.75 rows=180000 width=16)\n Sort Key: a, b, (count(*) OVER (?))\n Presorted Key: a, b\n -> WindowAgg (cost=10000020948.87..10000024098.87 rows=180000 width=16)\n -> Sort (cost=10000020948.87..10000021398.87 rows=180000 width=8)\n Sort Key: a, b\n -> Seq Scan on abcd (cost=10000000000.00..10000002773.00 rows=180000 width=8)\n(8 rows)\n\nexplain select distinct a, b, count(*) over (partition by a,b,c) from abcd;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------\n Unique (cost=10000021580.47..10000036629.31 rows=60 width=20)\n -> Incremental Sort (cost=10000021580.47..10000035279.31 rows=180000 width=20)\n Sort Key: a, b, c, (count(*) OVER (?))\n Presorted Key: a, b, c\n -> WindowAgg (cost=10000021561.37..10000025611.37 rows=180000 width=20)\n -> Sort (cost=10000021561.37..10000022011.37 rows=180000 width=12)\n Sort Key: a, b, c\n -> Seq Scan on abcd (cost=10000000000.00..10000002773.00 rows=180000 width=12)\n(8 rows)\n\nexplain select distinct a, b, count(*) over (partition by b,a, c) from abcd;\n\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------\n\n Unique (cost=2041.88..36764.90 rows=60 width=20)\n\n -> Incremental Sort (cost=2041.88..35414.90 rows=180000 width=20)\n\n Sort Key: b, a, c, (count(*) OVER (?))\n\n Presorted Key: b, a, c\n\n -> WindowAgg (cost=1989.94..25746.96 rows=180000 width=20)\n\n -> Incremental Sort (cost=1989.94..22146.96 rows=180000 width=12)\n\n Sort Key: b, a, c\n\n Presorted Key: b\n\n -> Index Scan using 
b_idx on abcd (cost=0.29..7174.62 rows=180000 width=12)\n\n(9 rows)\n\n\nIn master:\n\nexplain select distinct relname,relkind,count(*) over (partition by\nrelkind) from pg_Class;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------\n Unique (cost=10000000059.50..10000000063.65 rows=415 width=73)\n -> Sort (cost=10000000059.50..10000000060.54 rows=415 width=73)\n Sort Key: relname, relkind, (count(*) OVER (?))\n -> WindowAgg (cost=10000000034.20..10000000041.46 rows=415 width=73)\n -> Sort (cost=10000000034.20..10000000035.23 rows=415 width=65)\n Sort Key: relkind\n -> Seq Scan on pg_class (cost=10000000000.00..10000000016.15 rows=415 width=65)\n(7 rows)\n\nexplain select distinct b,a from ab where a < 10;\n\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=72.20..78.95 rows=611 width=8)\n -> Sort (cost=72.20..74.45 rows=900 width=8)\n Sort Key: b, a\n -> Index Scan using ab_a_idx on ab (cost=0.29..28.04 rows=900 width=8)\n Index Cond: (a < 10)\n(5 rows)\n\nexplain select distinct b,a, count(*) over (partition by a) from abcd order by a,b;\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------\n\n Unique (cost=10000023704.77..10000041084.40 rows=60 width=16)\n\n -> Incremental Sort (cost=10000023704.77..10000039734.40 rows=180000 width=16)\n\n Sort Key: a, b, (count(*) OVER (?))\n\n Presorted Key: a\n\n -> WindowAgg (cost=10000020948.87..10000024098.87 rows=180000 width=16)\n\n -> Sort (cost=10000020948.87..10000021398.87 rows=180000 width=8)\n\n Sort Key: a\n\n -> Seq Scan on abcd (cost=10000000000.00..10000002773.00 rows=180000 width=8)\n\n(8 rows)\nexplain select distinct a, b, count(*) over (partition by b,a, c) from abcd;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Unique (cost=45151.33..46951.33 
rows=60 width=20)\n -> Sort (cost=45151.33..45601.33 rows=180000 width=20)\n Sort Key: a, b, (count(*) OVER (?))\n -> WindowAgg (cost=1989.94..25746.96 rows=180000 width=20)\n -> Incremental Sort (cost=1989.94..22146.96 rows=180000 width=12)\n Sort Key: b, a, c\n Presorted Key: b\n -> Index Scan using b_idx on abcd (cost=0.29..7174.62 rows=180000 width=12)\n(8 rows)\n\n\nNote: Composite keys are also handled.\n\ncreate index xy_idx on xyz(x,y);\n\nexplain select distinct x,z,y from xyz;\n QUERY PLAN\n----------------------------------------------------------------------------------\n Unique (cost=0.86..55.97 rows=60 width=12)\n -> Incremental Sort (cost=0.86..51.47 rows=600 width=12)\n Sort Key: x, y, z\n Presorted Key: x, y\n -> Index Scan using xy_idx on xyz (cost=0.15..32.80 rows=600 width=12)\n(5 rows)\n\n\nThere are some cases where different kind of scan happens\n\nexplain select distinct x,y from xyz where y < 10;\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Unique (cost=47.59..51.64 rows=60 width=8)\n -> Sort (cost=47.59..48.94 rows=540 width=8)\n Sort Key: x, y\n -> Bitmap Heap Scan on xyz (cost=12.34..23.09 rows=540 width=8)\n Recheck Cond: (y < 10)\n -> Bitmap Index Scan on y_idx (cost=0.00..12.20 \nrows=540 width=0)\n Index Cond: (y < 10)\n(7 rows)\n\nAs code only checks from IndexPath (at the moment), other scan paths are \nnot covered.\n\nIs it okay to cover these in same way as I did for IndexPath? (with no \nlimitation on this behaviour on certain path types?)\n\nAlso, I am assuming distinct pathkeys can be changed without any issues. \nAs changes are limited to modification in distinct path only,\n\nI don't see this affecting other nodes. 
Test cases are green,\n\nwith a couple of failures in window functions (one which I had added) \nand one very weird:\n\nEXPLAIN (COSTS OFF)\nSELECT DISTINCT\n empno,\n enroll_date,\n depname,\n sum(salary) OVER (PARTITION BY depname order by empno) depsalary,\n min(salary) OVER (PARTITION BY depname order by enroll_date) depminsalary\nFROM empsalary\nORDER BY depname, empno;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Incremental Sort\n Sort Key: depname, empno\n Presorted Key: depname\n -> Unique\n -> Incremental Sort\n Sort Key: depname, enroll_date, empno, (sum(salary) OVER (?)), (min(salary) OVER (?))\n Presorted Key: depname\n -> WindowAgg\n -> Incremental Sort\n Sort Key: depname, empno\n Presorted Key: depname\n -> WindowAgg\n -> Sort\n Sort Key: depname, enroll_date\n -> Seq Scan on empsalary\n(15 rows)\n\nIn above query plan, unique used to come after Incremental sort in the \nmaster.\n\n\nPending:\n\n1. Consider if Pathkey.pk_strategy and pk_nulls_first need to be \ncompared too, this is pending\n\nas I have to look these up and understand them.\n\n2. Test cases (failures and new cases)\n\n3. Improve comments\n\n\nPatch v8 attached.\n\nPlease let me know any review comments, will address these in followup patch\n\nwith pending items.\n\n\nThanks,\n\nAnkit",
"msg_date": "Sun, 15 Jan 2023 23:22:47 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Mon, 16 Jan 2023 at 06:52, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Hence, input_rel->pathlist returns null for select distinct b,a from ab\n> where a < 10; even if index is created on a.\n>\n> In order to tackle this, I have added index_pathkeys in indexpath node\n> itself.\n\nI don't think we should touch this. It could significantly increase\nthe number of indexes that we consider when generating paths on base\nrelations and therefore *significantly* increase the number of paths\nwe consider during the join search. What I had in mind was just\nmaking better use of existing paths to see if we can find a cheaper\nway to perform the DISTINCT. That'll only possibly increase the\nnumber of paths for the distinct upper relation which really only\nincreases the number of paths which are considered in\ncreate_ordered_paths(). That's unlikely to cause much of a slowdown in\nthe planner.\n\n> Although I started this patch from master, I merged changes to window sort\n> optimizations.\n\nI'm seeing these two things as separate patches. I don't think there's\nany need to add further complexity to the patch that tries to reduce\nthe number of sorts for WindowAggs. I think you'd better start a new\nthread for this.\n\n> Also, I am assuming distinct pathkeys can be changed without any issues.\n> As changes are limited to modification in distinct path only,\n\nAs far as I see it, you shouldn't be touching the distinct_pathkeys.\nThose are set in such a way as to minimise the likelihood of an\nadditional sort for the ORDER BY. 
If you've fiddled with that, then I\nimagine this is why the plan below has an additional Incremental Sort\nthat didn't exist before.\n\nI've not looked at your patch, but all I imagine you need to do for it\nis to invent a function in pathkeys.c which is along the lines of what\npathkeys_count_contained_in() does, but returns a List of pathkeys\nwhich are in keys1 but not in keys2 and NIL if keys2 has a pathkey\nthat does not exist as a pathkey in keys1. In\ncreate_final_distinct_paths(), you can then perform an incremental\nsort on any input_path which has a non-empty return list and in\ncreate_incremental_sort_path(), you'll pass presorted_keys as the\nnumber of pathkeys in the path, and the required pathkeys the\ninput_path->pathkeys + the pathkeys returned from the new function.\n\nAs an optimization, you might want to consider that the\ndistinct_pathkeys list might be long and that the new function, if you\ncode the lookup as a nested loop, might be slow. You might want to\nconsider hashing the distinct_pathkeys once in\ncreate_final_distinct_paths(), then for each input_path, perform a\nseries of hash lookups to see which of the input_path->pathkeys are in\nthe hash table. That might require adding two functions to pathkeys.c,\none to build the hash table and then another to probe it and return\nthe remaining pathkeys list. I'd go and make sure it all works as we\nexpect before going to the trouble of trying to optimize this. 
A\nsimple nested loop lookup will allow us to review that this works as\nwe expect.\n\n> EXPLAIN (COSTS OFF)\n> SELECT DISTINCT\n> empno,\n> enroll_date,\n> depname,\n> sum(salary) OVER (PARTITION BY depname order by empno) depsalary,\n> min(salary) OVER (PARTITION BY depname order by enroll_date) depminsalary\n> FROM empsalary\n> ORDER BY depname, empno;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------\n> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> Unique\n> -> Incremental Sort\n> Sort Key: depname, enroll_date, empno, (sum(salary) OVER (?)), (min(salary) OVER (?))\n> Presorted Key: depname\n> -> WindowAgg\n> -> Incremental Sort\n> Sort Key: depname, empno\n> Presorted Key: depname\n> -> WindowAgg\n> -> Sort\n> Sort Key: depname, enroll_date\n> -> Seq Scan on empsalary\n> (15 rows)\n\nDavid\n\n\n",
"msg_date": "Mon, 16 Jan 2023 17:18:32 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 16/01/23 09:48, David Rowley wrote:\n> I don't think we should touch this. It could significantly increase\n> the number of indexes that we consider when generating paths on base\n> relations and therefore *significantly* increase the number of paths\n> we consider during the join search. What I had in mind was just\n> making better use of existing paths to see if we can find a cheaper\n> way to perform the DISTINCT. That'll only possibly increase the\n> number of paths for the distinct upper relation which really only\n> increases the number of paths which are considered in\n> create_ordered_paths(). That's unlikely to cause much of a slowdown in\n> the planner.\n\nOkay, I see the issue. Makes sense.\n\n\n> I'm seeing these two things as separate patches. I don't think there's\n> any need to add further complexity to the patch that tries to reduce\n> the number of sorts for WindowAggs. I think you'd better start a new\n> thread for this.\n\nWill be starting a new thread for this with a separate patch.\n\n\n> As far as I see it, you shouldn't be touching the distinct_pathkeys.\n> Those are set in such a way as to minimise the likelihood of an\n> additional sort for the ORDER BY. If you've fiddled with that, then I\n> imagine this is why the plan below has an additional Incremental Sort\n> that didn't exist before.\n\n> I've not looked at your patch, but all I imagine you need to do for it\n> is to invent a function in pathkeys.c which is along the lines of what\n> pathkeys_count_contained_in() does, but returns a List of pathkeys\n> which are in keys1 but not in keys2 and NIL if keys2 has a pathkey\n> that does not exist as a pathkey in keys1. 
In\n> create_final_distinct_paths(), you can then perform an incremental\n> sort on any input_path which has a non-empty return list and in\n> create_incremental_sort_path(), you'll pass presorted_keys as the\n> number of pathkeys in the path, and the required pathkeys the\n> input_path->pathkeys + the pathkeys returned from the new function.\n\nOkay, this should be straightforward. Let me try this.\n\n> As an optimization, you might want to consider that the\n> distinct_pathkeys list might be long and that the new function, if you\n> code the lookup as a nested loop, might be slow. You might want to\n> consider hashing the distinct_pathkeys once in\n> create_final_distinct_paths(), then for each input_path, perform a\n> series of hash lookups to see which of the input_path->pathkeys are in\n> the hash table. That might require adding two functions to pathkeys.c,\n> one to build the hash table and then another to probe it and return\n> the remaining pathkeys list. I'd go and make sure it all works as we\n> expect before going to the trouble of trying to optimize this. A\n> simple nested loop lookup will allow us to review that this works as\n> we expect.\n\nOkay, makes sense; will start with the nested loop while it is in review and then\n\nthe optimal version once it is all good to go.\n\nThanks,\n\nAnkit\n\n\n\n",
"msg_date": "Mon, 16 Jan 2023 11:38:45 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 06:15, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n>\n> > On 09/01/23 17:53, David Rowley wrote:\n> > I gave some thought to whether doing foreach_delete_current() is safe\n> > within a foreach_reverse() loop. I didn't try it, but I couldn't see\n> > any reason why not. It is pretty late here and I'd need to test that\n> > to be certain. If it turns out not to be safe then we need to document\n> > that fact in the comments of the foreach_reverse() macro and the\n> > foreach_delete_current() macro.\n>\n> I tested foreach_delete_current inside foreach_reverse loop.\n>\n> It worked fine.\n\nI also thought I'd better test that foreach_delete_current() works\nwith foreach_reverse(). I can confirm that it *does not* work\ncorrectly. I guess maybe you only tested the fact that it deleted the\ncurrent item and not that the subsequent loop correctly went to the\nitem directly before the deleted item. There's a problem with that. We\nskip an item.\n\nInstead of fixing that, I think it's likely better just to loop\nbackwards manually with a for() loop, so I've adjusted the patch to\nwork that way. It's quite likely that the additional code in\nforeach() and what was in foreach_reverse() is slower than looping\nmanually due to the additional work those macros do to set the cell to\nNULL when we run out of cells to loop over.\n\n> I have attached patch with one extra test case (as mentioned above).\n> Rest of the changes are looking fine.\n\nI made another pass over the v7 patch and fixed a bug that was\ndisabling the optimization when the deepest WindowAgg had a\nrunCondition. This should have been using llast_node instead of\nlinitial_node. The top-level WindowAgg is the last in the list.\n\nI also made a pass over the tests and added a test case for the\nrunCondition check to make sure we disable the optimization when the\ntop-level WindowAgg has one of those. 
I wasn't sure what your test\ncomments case numbers were meant to represent. They were not aligned\nwith the code comments that document when the optimisation is\ndisabled, they started out aligned, but seemed to go off the rails at\n#3. I've now made them follow the comments in create_one_window_path()\nand made it more clear what we expect the test outcome to be in each\ncase.\n\nI've attached the v9 patch. I feel like this patch is quite\nself-contained and I'm quite happy with it now. If there are no\nobjections soon, I'm planning on pushing it.\n\nDavid",
"msg_date": "Wed, 18 Jan 2023 22:42:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 18/01/23 15:12, David Rowley wrote:\n\n> I also thought I'd better test that foreach_delete_current() works\n> with foreach_reverse(). I can confirm that it *does not* work\n> correctly. I guess maybe you only tested the fact that it deleted the\n> current item and not that the subsequent loop correctly went to the\n> item directly before the deleted item. There's a problem with that. We\n> skip an item.\n\nHmm, not really sure why did I miss that. I tried this again (added \nfollowing in postgres.c above\n\nPortalStart)\n\nList* l = NIL;\nl = lappend(l, 1);\nl = lappend(l, 2);\nl = lappend(l, 3);\nl = lappend(l, 4);\n\nListCell *lc;\nforeach_reverse(lc, l)\n{\n\tif (foreach_current_index(lc) == 2) // delete 3\n\t{\n\t\tforeach_delete_current(l, lc);\n\t}\n}\n\nforeach(lc, l)\n{\n\tint i = (int) lfirst(lc);\n\tereport(LOG,(errmsg(\"%d\", i)));\n}\n\nGot result:\n2023-01-18 20:23:28.115 IST [51007] LOG: 1\n2023-01-18 20:23:28.115 IST [51007] STATEMENT: select pg_backend_pid();\n2023-01-18 20:23:28.115 IST [51007] LOG: 2\n2023-01-18 20:23:28.115 IST [51007] STATEMENT: select pg_backend_pid();\n2023-01-18 20:23:28.115 IST [51007] LOG: 4\n2023-01-18 20:23:28.115 IST [51007] STATEMENT: select pg_backend_pid();\n\nI had expected list_delete_cell to take care of rest.\n\n> Instead of fixing that, I think it's likely better just to loop\n> backwards manually with a for() loop, so I've adjusted the patch to\n> work that way. It's quite likely that the additional code in\n> foreach() and what was in foreach_reverse() is slower than looping\n> manually due to the additional work those macros do to set the cell to\n> NULL when we run out of cells to loop over.\n\nOkay, current version looks fine as well.\n\n> I made another pass over the v7 patch and fixed a bug that was\n> disabling the optimization when the deepest WindowAgg had a\n> runCondition. This should have been using llast_node instead of\n> linitial_node. 
The top-level WindowAgg is the last in the list.\n\n> I also made a pass over the tests and added a test case for the\n> runCondition check to make sure we disable the optimization when the\n> top-level WindowAgg has one of those. \n\n> I wasn't sure what your test comments case numbers were meant to represent. \n> They were not aligned with the code comments that document when the optimisation is\n> disabled, they started out aligned, but seemed to go off the rails at\n> #3. I've now made them follow the comments in create_one_window_path()\n> and made it more clear what we expect the test outcome to be in each\n> case.\n\nThose were just numbering for exceptional cases, making them in sync \nwith comments\n\nwasn't really on my mind, but now they looks better.\n\n> I've attached the v9 patch. I feel like this patch is quite\n> self-contained and I'm quite happy with it now. If there are no\n> objections soon, I'm planning on pushing it.\n\nPatch is already rebased with latest master, tests are all green.\n\nTried some basic profiling and it looked good.\n\n\nI also tried a bit unrealistic case.\n\ncreate table abcdefgh(a int, b int, c int, d int, e int, f int, g int, h int);\n\ninsert into abcdefgh select a,b,c,d,e,f,g,h from\ngenerate_series(1,7) a,\ngenerate_series(1,7) b,\ngenerate_series(1,7) c,\ngenerate_series(1,7) d,\ngenerate_series(1,7) e,\ngenerate_series(1,7) f,\ngenerate_series(1,7) g,\ngenerate_series(1,7) h;\n\nexplain analyze select count(*) over (order by a),\nrow_number() over (partition by a order by b) from abcdefgh order by a,b,c,d,e,f,g,h;\n\nIn patch version\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n---------------\n WindowAgg (cost=1023241.14..1225007.67 rows=5764758 width=48) (actual \ntime=64957.894..81950.352 rows=5764801 loops=1)\n -> WindowAgg (cost=1023241.14..1138536.30 rows=5764758 width=40) \n(actual time=37959.055..60391.799 
rows=5764801 loops\n=1)\n -> Sort (cost=1023241.14..1037653.03 rows=5764758 width=32) \n(actual time=37959.045..52968.791 rows=5764801 loop\ns=1)\n Sort Key: a, b, c, d, e, f, g, h\n Sort Method: external merge Disk: 237016kB\n -> Seq Scan on abcdefgh (cost=0.00..100036.58 \nrows=5764758 width=32) (actual time=0.857..1341.107 rows=57\n64801 loops=1)\n Planning Time: 0.168 ms\n Execution Time: 82748.789 ms\n(8 rows)\n\nIn Master\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n---------------------\n Incremental Sort (cost=1040889.72..1960081.97 rows=5764758 width=48) \n(actual time=23461.815..69654.700 rows=5764801 loop\ns=1)\n Sort Key: a, b, c, d, e, f, g, h\n Presorted Key: a, b\n Full-sort Groups: 49 Sort Method: quicksort Average Memory: 30kB \nPeak Memory: 30kB\n Pre-sorted Groups: 49 Sort Method: external merge Average Disk: \n6688kB Peak Disk: 6688kB\n -> WindowAgg (cost=1023241.14..1225007.67 rows=5764758 width=48) \n(actual time=22729.171..40189.407 rows=5764801 loops\n=1)\n -> WindowAgg (cost=1023241.14..1138536.30 rows=5764758 \nwidth=40) (actual time=8726.562..18268.663 rows=5764801\nloops=1)\n -> Sort (cost=1023241.14..1037653.03 rows=5764758 \nwidth=32) (actual time=8726.551..11291.494 rows=5764801\n loops=1)\n Sort Key: a, b\n Sort Method: external merge Disk: 237016kB\n -> Seq Scan on abcdefgh (cost=0.00..100036.58 \nrows=5764758 width=32) (actual time=0.029..1600.042 r\nows=5764801 loops=1)\n Planning Time: 2.742 ms\n Execution Time: 71172.586 ms\n(13 rows)\n\n\nPatch version is approx 11 sec slower.\n\nPatch version sort took around 15 sec, whereas master sort took ~2 + 46 \n= around 48 sec\n\nBUT somehow master version is faster as Window function took 10 sec less \ntime.\n\nMaybe I am interpreting it wrong but still wanted to bring this to notice.\n\n\nThanks,\n\nAnkit\n\n\n\n\n\n",
"msg_date": "Wed, 18 Jan 2023 22:57:09 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 19 Jan 2023 at 06:27, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Hmm, not really sure why did I miss that. I tried this again (added\n> following in postgres.c above\n>\n> PortalStart)\n>\n> List* l = NIL;\n> l = lappend(l, 1);\n> l = lappend(l, 2);\n> l = lappend(l, 3);\n> l = lappend(l, 4);\n>\n> ListCell *lc;\n> foreach_reverse(lc, l)\n> {\n> if (foreach_current_index(lc) == 2) // delete 3\n> {\n> foreach_delete_current(l, lc);\n> }\n> }\n\nThe problem is that the next item looked at is 1 and the value 2 is skipped.\n\n> I also tried a bit unrealistic case.\n>\n> create table abcdefgh(a int, b int, c int, d int, e int, f int, g int, h int);\n>\n> insert into abcdefgh select a,b,c,d,e,f,g,h from\n> generate_series(1,7) a,\n> generate_series(1,7) b,\n> generate_series(1,7) c,\n> generate_series(1,7) d,\n> generate_series(1,7) e,\n> generate_series(1,7) f,\n> generate_series(1,7) g,\n> generate_series(1,7) h;\n>\n> explain analyze select count(*) over (order by a),\n> row_number() over (partition by a order by b) from abcdefgh order by a,b,c,d,e,f,g,h;\n>\n> In patch version\n\n> Execution Time: 82748.789 ms\n\n> In Master\n\n> Execution Time: 71172.586 ms\n\n> Patch version sort took around 15 sec, whereas master sort took ~2 + 46\n> = around 48 sec\n\n\n> Maybe I am interpreting it wrong but still wanted to bring this to notice.\n\nI think you are misinterpreting the results, but the main point\nremains - it's slower. The explain analyze timing shows the time\nbetween outputting the first row and the last row. 
For sort, there's a\nlot of work that needs to be done before you output the first row.\n\nI looked into this a bit further and using the same table as you, and\nthe attached set of hacks that adjust the ORDER BY path generation to\nsplit a Sort into a Sort and Incremental Sort when the number of\npathkeys to sort by is > 2.\n\nwork_mem = '1GB' in all cases below:\n\nI turned off the timing in EXPLAIN so that wasn't a factor in the\nresults. Generally having more nodes means more timing requests and\nthat's got > 0 overhead.\n\nexplain (analyze,timing off) select * from abcdefgh order by a,b,c,d,e,f,g,h;\n\n7^8 rows\nMaster: Execution Time: 7444.479 ms\nmaster + sort_hacks.diff: Execution Time: 5147.937 ms\n\nSo I'm also seeing Incremental Sort - > Sort faster than a single Sort\nfor 7^8 rows.\n\nWith 5^8 rows:\nmaster: Execution Time: 299.949 ms\nmaster + sort_hacks: Execution Time: 239.447 ms\n\nand 4^8 rows:\nmaster: Execution Time: 62.596 ms\nmaster + sort_hacks: Execution Time: 67.900 ms\n\nSo at 4^8 sort_hacks becomes slower. I suspect this might be to do\nwith having to perform more swaps in the array elements that we're\nsorting by with the full sort. When work_mem is large this array no\nlonger fits in CPU cache and I suspect those swaps become more costly\nwhen there are more cache lines having to be fetched from RAM.\n\nI think we really should fix tuplesort.c so that it batches sorts into\nabout L3 CPU cache-sized chunks in RAM rather than trying to sort much\nlarger arrays.\n\nI'm just unsure if we should write this off as the expected behaviour\nof Sort and continue with the patch or delay the whole thing until we\nmake some improvements to sort. I think more benchmarking is required\nso we can figure out if this is a corner case or a common case. 
On the\nother hand, we already sort WindowClauses with the most strict sort\norder first which results in a full Sort and no additional sort for\nsubsequent WindowClauses that can make use of that sort order. It\nwould be bizarre to reverse the final few lines of common_prefix_cmp\nso that we sort the least strict order first so we end up injecting\nIncremental Sorts into the plan to make it faster!\n\nDavid",
"msg_date": "Thu, 19 Jan 2023 16:28:01 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 19/01/23 08:58, David Rowley wrote:\n\n> The problem is that the next item looked at is 1 and the value 2 is skipped.\n\nOkay, I see the issue.\n\n> I think you are misinterpreting the results, but the main point\n> remains - it's slower. The explain analyze timing shows the time\n> between outputting the first row and the last row. For sort, there's a\n> lot of work that needs to be done before you output the first row.\n\nOkay got it.\n\n> I looked into this a bit further and using the same table as you, and\n> the attached set of hacks that adjust the ORDER BY path generation to\n> split a Sort into a Sort and Incremental Sort when the number of\n> pathkeys to sort by is > 2.\n\n\nOkay, so this is issue with strict sort itself.\n\nI will play around with current patch but it should be fine for\n\ncurrent work and would account for issue in strict sort.\n\n\n> I suspect this might be to do with having to perform more swaps in the \n> array elements that we'resorting by with the full sort. When work_mem is \n> large this array no longer fits in CPU cache and I suspect those swaps become \n> more costly when there are more cache lines having to be fetched from RAM.\n\nLooks like possible reason.\n\n> I think we really should fix tuplesort.c so that it batches sorts into\n> about L3 CPU cache-sized chunks in RAM rather than trying to sort much\n> larger arrays.\n\n\nI assume same thing we are doing for incremental sort and that's why it \nperform\n\nbetter here?\n\nOr is it due to shorter tuple width and thus being able to fit into \nmemory easily?\n\n \n\n> On the\n> other hand, we already sort WindowClauses with the most strict sort\n> order first which results in a full Sort and no additional sort for\n> subsequent WindowClauses that can make use of that sort order. 
It\n> would be bizarre to reverse the final few lines of common_prefix_cmp\n> so that we sort the least strict order first so we end up injecting\n> Incremental Sorts into the plan to make it faster!\n\nI would see this as beneficial when tuple size is too huge for strict sort.\n\n\nBasically\n\ncost(sort(a,b,c,d,e)) > cost(sort(a)+sort(b)+sort(c)+ sort(d,e))\n\nDon't know if cost based decision is realistic or not in current\n\nstate of things.\n\n> I think more benchmarking is required\n> so we can figure out if this is a corner case or a common case\n\nI will do few more benchmarks. I assume all scaling issue with strict sort\n\nto remain as it is but no further issue due to this change itself comes up.\n\n\n> I'm just unsure if we should write this off as the expected behaviour\n\n> of Sort and continue with the patch or delay the whole thing until we\n\n> make some improvements to sort. \n\nUnless we found more issues, we might be good with the former but\n\nyou can make better call on this. As much as I would love my first patch \nto get\n\nmerged, I would want it to be right.\n\n\nThanks,\n\nAnkit\n\n\n\n\n",
"msg_date": "Thu, 19 Jan 2023 10:52:00 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> I think more benchmarking is required\n> so we can figure out if this is a corner case or a common case\n\nI did some more benchmarks:\n\n#1. AIM: Pushdown column whose size is very high\n\ncreate table test(a int, b int, c text);\ninsert into test select a,b,c from generate_series(1,1000)a, generate_series(1,1000)b, repeat(md5(random()::text), 999)c;\n\nexplain (analyze, costs off) select count(*) over (order by a), row_number() over (order by a, b) from test order by a,b,c;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------\n Incremental Sort (actual time=1161.605..6577.141 rows=1000000 loops=1)\n Sort Key: a, b, c\n Presorted Key: a, b\n Full-sort Groups: 31250 Sort Method: quicksort Average Memory: 39kB Peak Memory: 39kB\n -> WindowAgg (actual time=1158.896..5819.460 rows=1000000 loops=1)\n -> WindowAgg (actual time=1154.614..3391.537 rows=1000000 loops=1)\n -> Gather Merge (actual time=1154.602..2404.125 rows=1000000 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n -> Sort (actual time=1118.326..1295.743 rows=333333 loops=3)\n Sort Key: a, b\n Sort Method: external merge Disk: 145648kB\n Worker 0: Sort Method: external merge Disk: 140608kB\n Worker 1: Sort Method: external merge Disk: 132792kB\n -> Parallel Seq Scan on test (actual time=0.018..169.319 rows=333333 loops=3)\n Planning Time: 0.091 ms\n Execution Time: 6816.616 ms\n(17 rows)\n\nPlanner choose faster path correctly (which was not path which had pushed down column).\n\n#2. AIM: Check strict vs incremental sorts wrt to large size data\nPatch version is faster as for external merge sort, disk IO is main bottleneck and if we sort an extra column,\nit doesn't have major impact. 
This is when work mem is very small.\n\nFor larger work_mem, difference between patched version and master is minimal and\nthey both provide somewhat comparable performance.\n\nTried permutation of few cases which we have already covered but I did not see anything alarming in those.\n\n\n> I'm just unsure if we should write this off as the expected behaviour\n\n> of Sort and continue with the patch or delay the whole thing until we\n\n> make some improvements to sort. \n\nI am not seeing other cases where patch version is consistently slower.\n\n\nThanks,\nAnkit\n\n\n\n\n",
"msg_date": "Tue, 24 Jan 2023 23:02:18 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, 25 Jan 2023 at 06:32, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> I am not seeing other cases where patch version is consistently slower.\n\nI just put together a benchmark script to try to help paint a picture\nof under what circumstances reducing the number of sorts slows down\nperformance.\n\nThe benchmark I used creates a table with 2 int4 columns, \"a\" and \"b\".\nI have a single WindowClause that sorts on \"a\" and then ORDER BY a,b.\nIn each test, I change the number of distinct values in \"a\" starting\nwith 1 distinct value then in each subsequent test I multiply that\nnumber by 10 each time all the way up to 1 million. The idea here is\nto control how often the sort that's performing a sort by both columns\nmust call the tiebreak function. I did 3 subtests for each number of\ndistinct rows in \"a\". 1) Unsorted case where the rows are put into the\ntuples store in an order that's not presorted, 2) tuplestore rows are\nalready sorted leading to hitting our qsort fast path. 3) random\norder.\n\nYou can see from the results that the patched version is not looking\nparticularly great. I did a 1 million row test and a 10 million row\ntest. work_mem was 4GB for each, so the sorts never went to disk.\n\nOverall, the 1 million row test on master took 11.411 seconds and the\npatched version took 11.282 seconds, so there was a *small* gain\noverall there. With the 10 million row test, overall, master took\n121.063 seconds, whereas patched took 130.727 seconds. So quite a\nlarge performance regression there.\n\nI'm unsure if 69749243 might be partially to blame here as it favours\nsingle-key sorts. If you look at qsort_tuple_signed_compare(), you'll\nsee that the tiebreak function will be called only when it's needed\nand there are > 1 sort keys. The comparetup function will re-compare\nthe first key all over again. 
If I get some more time I'll run the\ntests again with the sort specialisation code disabled to see if the\nsituation is the same or not.\n\nAnother way to test that would be to have a table with 3 columns and\nalways sort by at least 2.\n\nI've attached the benchmark script that I used and also a copy of the\npatch with a GUC added solely to allow easier benchmarking of patched\nvs unpatched.\n\nI think we need to park this patch until we can figure out what can be\ndone to stop these regressions. We might want to look at 1) Expanding\non what 69749243 did and considering if we want sort specialisations\nthat are specifically for 1 column and another set for multi-columns.\nThe multi-column ones don't need to re-compare key[0] again. 2)\nSorting in smaller batches that better fit into CPU cache. Both of\nthese ideas would require a large amount of testing and discussion.\nFor #1 we're considering other specialisations, for example NOT NULL,\nand we don't want to explode the number of specialisations we have to\ncompile into the binary.\n\nDavid",
"msg_date": "Thu, 26 Jan 2023 15:10:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n> On 26/01/23 07:40, David Rowley wrote:\n\n> You can see from the results that the patched version is not looking\n> particularly great. I did a 1 million row test and a 10 million row\n> test. work_mem was 4GB for each, so the sorts never went to disk.\n\nYes, its lackluster gains are very limited (pretty much when data gets\npushed to disk).\n\n\n> I'm unsure if 69749243 might be partially to blame here as it favours\n> single-key sorts. If you look at qsort_tuple_signed_compare(), you'll\n> see that the tiebreak function will be called only when it's needed\n> and there are > 1 sort keys. The comparetup function will re-compare\n> the first key all over again. If I get some more time I'll run the\n> tests again with the sort specialisation code disabled to see if the\n> situation is the same or not.\n> Another way to test that would be to have a table with 3 columns and\n> always sort by at least 2.\n\nI will need to go through this.\n\n> I've attached the benchmark script that I used and also a copy of the\n> patch with a GUC added solely to allow easier benchmarking of patched\n> vs unpatched.\n\nThis is much relief, will be easier to repro and create more cases. Thanks.\n\n> I think we need to park this patch until we can figure out what can be\n> done to stop these regressions. \n\nMakes sense.\n\n> We might want to look at 1) Expanding\n> on what 69749243 did and considering if we want sort specialisations\n> that are specifically for 1 column and another set for multi-columns.\n> The multi-column ones don't need to re-compare key[0] again. 2)\n> Sorting in smaller batches that better fit into CPU cache. 
Both of\n> these ideas would require a large amount of testing and discussion.\n> For #1 we're considering other specialisations, for example NOT NULL,\n> and we don't want to explode the number of specialisations we have to\n> compile into the binary.\n\nYes, 1 & 2 needs to be addressed before going ahead with this patch.\nDo we any have ongoing thread with #1 and #2?\n\nAlso, we have seen an issue with cost ( 1 sort vs 2 sorts, both having same cost)\nhttps://www.postgresql.org/message-id/CAApHDvo2y9S2AO-BPYo7gMPYD0XE2Lo-KFLnqX80fcftqBCcyw@mail.gmail.com\nThis needs to be investigated too (although might not be related but need to check at least)\n\nI will do some study around things mentioned here and will setup new threads (if they don't exists)\n based on discussions we had here .\n\nThanks,\nAnkit\n\n\n\n",
"msg_date": "Sat, 28 Jan 2023 13:51:09 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
    "msg_contents": "On Sat, Jan 28, 2023 at 3:21 PM Ankit Kumar Pandey <itsankitkp@gmail.com>\nwrote:\n>\n> > On 26/01/23 07:40, David Rowley wrote:\n\n> > We might want to look at 1) Expanding\n> > on what 69749243 did and considering if we want sort specialisations\n> > that are specifically for 1 column and another set for multi-columns.\n> > The multi-column ones don't need to re-compare key[0] again. 2)\n> > Sorting in smaller batches that better fit into CPU cache. Both of\n> > these ideas would require a large amount of testing and discussion.\n> > For #1 we're considering other specialisations, for example NOT NULL,\n> > and we don't want to explode the number of specialisations we have to\n> > compile into the binary.\n>\n> Yes, 1 & 2 needs to be addressed before going ahead with this patch.\n> Do we any have ongoing thread with #1 and #2?\n\nI recently brought up this older thread (mostly about #1), partly because\nof the issues discovered above, and partly because I hope to make progress\non it before feature freeze (likely early April):\n\nhttps://www.postgresql.org/message-id/CAApHDvqXcmzAZDsj8iQs84%2Bdrzo0PgYub_QYzT6Zdj6p4A3A5A%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Sat, 28 Jan 2023 16:01:35 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
    "msg_contents": "\n > On 28/01/23 14:31, John Naylor wrote:\n > I recently brought up this older thread (mostly about #1), partly \nbecause of the issues discovered above, and partly because I hope to \nmake progress on it before feature freeze (likely early April):\n > \nhttps://www.postgresql.org/message-id/CAApHDvqXcmzAZDsj8iQs84%2Bdrzo0PgYub_QYzT6Zdj6p4A3A5A%40mail.gmail.com\n\nThanks John for letting me know. I will keep eye on above thread and \nwill focus on rest of issues.\n\nRegards,\nAnkit\n\n\n",
"msg_date": "Sat, 28 Jan 2023 16:19:59 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "> On 26/01/23 07:40, David Rowley wrote:\n> I just put together a benchmark script to try to help paint a picture\n> of under what circumstances reducing the number of sorts slows down\n> performance. work_mem was 4GB for each, so the sorts never went to disk.\n\nI tried same test suit albeit work_mem = 1 MB to check how sort behaves with frequent\ndisk IO.\nFor 1 million row test, patch version shows some promise but performance degrades in 10 million row test.\nThere is also an anomalies in middle range for 100/1000 random value.\n\nI have also added results of benchmark with sorting fix enabled and table with 3 columns\nand sorting done on > 2 columns. I don't see much improvement vs master here.\nAgain, these results are for work_mem = 1 MB.\n\n\n> Sorting in smaller batches that better fit into CPU cache.\n\nMore reading yielded that we are looking for cache-oblivious\nsorting algorithm.\nOne the papers[1] mentions that in quick sort, whenever we reach size which can fit in cache,\ninstead of partitioning it further, we can do insertion sort there itself.\n\n> Memory-tuned quicksort uses insertion sort to sort small subsets while they are in\n> the cache, instead of partitioning further. This increases the instruction count,\n> but reduces the cache misses\n\nIf this looks step in right direction, I can give it a try and do more reading and experimentation.\n\n[1] http://www.ittc.ku.edu/~jsv/Papers/WAC01.sorting_using_registers.pdf\n\nThanks,\nAnkit",
"msg_date": "Sun, 29 Jan 2023 17:35:15 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Sun, Jan 29, 2023 at 7:05 PM Ankit Kumar Pandey <itsankitkp@gmail.com>\nwrote:\n>\n>\n> > On 26/01/23 07:40, David Rowley wrote:\n\n> > Sorting in smaller batches that better fit into CPU cache.\n>\n> More reading yielded that we are looking for cache-oblivious\n> sorting algorithm.\n\nSince David referred to L3 size as the starting point of a possible\nconfiguration parameter, that's actually cache-conscious.\n\n> One the papers[1] mentions that in quick sort, whenever we reach size\nwhich can fit in cache,\n> instead of partitioning it further, we can do insertion sort there itself.\n\n> > Memory-tuned quicksort uses insertion sort to sort small subsets while\nthey are in\n> > the cache, instead of partitioning further. This increases the\ninstruction count,\n> > but reduces the cache misses\n>\n> If this looks step in right direction, I can give it a try and do more\nreading and experimentation.\n\nI'm not close enough to this thread to guess at the right direction\n(although I hope related work will help), but I have a couple general\nremarks:\n\n1. In my experience, academic papers like to test sorting with\nregister-sized numbers or strings. Our sorttuples are bigger, have\ncomplex comparators, and can fall back to fields in the full tuple.\n2. That paper is over 20 years old. If it demonstrated something genuinely\nuseful, some of those concepts would likely be implemented in the\nreal-world somewhere. Looking for evidence of that might be a good exercise.\n3. 20 year-old results may not carry over to modern hardware.\n4. Open source software is more widespread in the academic world now than\n20 years ago, so papers with code (maybe even the author's github) are much\nmore useful to us in my view.\n5. It's actually not terribly hard to make sorting faster for some specific\ncases -- the hard part is keeping other inputs of interest from regressing.\n6. 
The bigger the change, the bigger the risk of regressing somewhere.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Jan 2023 12:31:47 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "\n > On 30/01/23 11:01, John Naylor wrote:\n\n > Since David referred to L3 size as the starting point of a possible \nconfiguration parameter, that's actually cache-conscious.\n\nOkay, makes sense. I am correcting error on my part.\n\n\n > I'm not close enough to this thread to guess at the right direction \n(although I hope related work will help), but I have a couple general \nremarks:\n\n > 1. In my experience, academic papers like to test sorting with \nregister-sized numbers or strings. Our sorttuples are bigger, have \ncomplex comparators, and can fall back to fields in the full tuple.\n > 2. That paper is over 20 years old. If it demonstrated something \ngenuinely useful, some of those concepts would likely be implemented in \nthe real-world somewhere. Looking for evidence of that might be a good \nexercise.\n > 3. 20 year-old results may not carry over to modern hardware.\n > 4. Open source software is more widespread in the academic world now \nthan 20 years ago, so papers with code (maybe even the author's github) \nare much more useful to us in my view.\n > 5. It's actually not terribly hard to make sorting faster for some \nspecific cases -- the hard part is keeping other inputs of interest from \nregressing.\n > 6. The bigger the change, the bigger the risk of regressing somewhere.\n\nThanks John, these inputs are actually what I was looking for. I will do \nmore research based on these inputs and build up my understanding.\n\n\nRegards,\n\nAnkit\n\n\n\n",
"msg_date": "Mon, 30 Jan 2023 15:02:37 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 9:11 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I'm unsure if 69749243 might be partially to blame here as it favours\n> single-key sorts. If you look at qsort_tuple_signed_compare(), you'll\n> see that the tiebreak function will be called only when it's needed\n> and there are > 1 sort keys. The comparetup function will re-compare\n> the first key all over again. If I get some more time I'll run the\n> tests again with the sort specialisation code disabled to see if the\n> situation is the same or not.\n\n> I've attached the benchmark script that I used and also a copy of the\n> patch with a GUC added solely to allow easier benchmarking of patched\n> vs unpatched.\n\nI've attached a cleaned up v2 (*) of a patch to avoid rechecking the first\ncolumn if a specialized comparator already did so, and the results of the\n\"bench_windowsort\" benchmark. Here, master vs. patch refers to skipping the\nfirst column recheck, and on/off is the dev guc from David's last patch.\n\nIn my test, orderby_windowclause_pushdown caused 6 regressions by itself.\n\nNot rechecking seems to eliminate the regression in 4 cases, and reduce it\nin the other 2 cases. For those 2 cases (10e6 rows, random, mod 10 and\n100), it might be worthwhile to \"zoom in\" with more measurements, but\nhaven't done that yet.\n\n* v1 was here, but I thought it best to keep everything in the same thread,\nand that thread is nominally about a different kind of specialization:\n\nhttps://www.postgresql.org/message-id/CAFBsxsFdFpzyBekxxkiA4vXnLpw-wcaQXz%3DEAP4pzkZMo91-MA%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 14 Feb 2023 11:21:43 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Tue, 14 Feb 2023 at 17:21, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Not rechecking seems to eliminate the regression in 4 cases, and reduce it in the other 2 cases. For those 2 cases (10e6 rows, random, mod 10 and 100), it might be worthwhile to \"zoom in\" with more measurements, but haven't done that yet.\n\nThanks for writing up this patch and for running those tests again.\nI'm surprised to see there's a decent about of truth in the surplus\nrecheck of the first column in tiebreaks (mostly) causing the\nregression. I really would have suspected CPU caching effects to be a\nbigger factor. From looking at your numbers, it looks like it's just\nthe mod=100 test in random and unsorted. fallback-on looks faster than\nmaster-off for random mod=10.\n\nI didn't do a detailed review of the sort patch, but I did wonder\nabout the use of the name \"fallback\" in the new functions. The\ncomment in the following snippet from qsort_tuple_unsigned_compare()\nmakes me think \"tiebreak\" is a better name.\n\n/*\n* No need to waste effort calling the tiebreak function when there are no\n* other keys to sort on.\n*/\nif (state->base.onlyKey != NULL)\n return 0;\n\nreturn state->base.comparetup_fallback(a, b, state);\n\nI think if we fixed this duplicate recompare thing then I'd be much\nmore inclined to push the windowagg sort reduction stuff.\n\nI also wonder if the weirdness I reported in [1] would also disappear\nwith your patch. There's a patch on that thread that hacks up the\nplanner to split multi-column sorts into Sort -> Incremental Sort\nrather than just a single sort.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAApHDvpAO5H_L84kn9gCJ_hihOavtmDjimKYyftjWtF69BJ=8Q@mail.gmail.com\n\n\n",
"msg_date": "Tue, 14 Feb 2023 23:45:53 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Tue, Feb 14, 2023 at 5:46 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I didn't do a detailed review of the sort patch, but I did wonder\n> about the use of the name \"fallback\" in the new functions. The\n> comment in the following snippet from qsort_tuple_unsigned_compare()\n> makes me think \"tiebreak\" is a better name.\n\nI agree \"tiebreak\" is better.\n\n> I also wonder if the weirdness I reported in [1] would also disappear\n> with your patch. There's a patch on that thread that hacks up the\n> planner to split multi-column sorts into Sort -> Incremental Sort\n> rather than just a single sort.\n\nI tried that test (attached in script form) with and without the tiebreaker\npatch and got some improvement:\n\nHEAD:\n\n4 ^ 8: latency average = 113.976 ms\n5 ^ 8: latency average = 783.830 ms\n6 ^ 8: latency average = 3990.351 ms\n7 ^ 8: latency average = 15793.629 ms\n\nSkip rechecking first key:\n\n4 ^ 8: latency average = 107.028 ms\n5 ^ 8: latency average = 732.327 ms\n6 ^ 8: latency average = 3709.882 ms\n7 ^ 8: latency average = 14570.651 ms\n\nI gather that planner hack was just a demonstration, so I didn't test\nit, but if that was a move toward something larger I can run additional\ntests.\n\nThe configuration was (same as yesterday, but forgot to mention then)\n\nturbo off\nshared_buffers = '8GB';\nwork_mem = '4GB';\nmax_parallel_workers = 0;\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 Feb 2023 11:23:15 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, 15 Feb 2023 at 17:23, John Naylor <john.naylor@enterprisedb.com> wrote:\n> HEAD:\n>\n> 4 ^ 8: latency average = 113.976 ms\n> 5 ^ 8: latency average = 783.830 ms\n> 6 ^ 8: latency average = 3990.351 ms\n> 7 ^ 8: latency average = 15793.629 ms\n>\n> Skip rechecking first key:\n>\n> 4 ^ 8: latency average = 107.028 ms\n> 5 ^ 8: latency average = 732.327 ms\n> 6 ^ 8: latency average = 3709.882 ms\n> 7 ^ 8: latency average = 14570.651 ms\n\nThanks for testing that. It's good to see improvements in each of them.\n\n> I gather that planner hack was just a demonstration, so I didn't test it, but if that was a move toward something larger I can run additional tests.\n\nYeah, just a hack. My intention with it was just to prove we had a\nproblem because sometimes Sort -> Incremental Sort was faster than\nSort. Ideally, with your change, we'd see that it's always faster to\ndo the full sort in one go. It would be good to see your patch with\nand without the planner hack patch to ensure sort is now always faster\nthan sort -> incremental sort.\n\nDavid\n\n\n",
"msg_date": "Wed, 15 Feb 2023 21:02:31 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "I wrote:\n> it might be worthwhile to \"zoom in\" with more measurements, but haven't\ndone that yet.\n\nI've attached the script and image for 1 million / random / varying the mod\nby quarter-log intervals. Unfortunately I didn't get as good results as\nyesterday. Immediately going from mod 1 to mod 2, sort pushdown regresses\nsharply and stays regressed up until 10000. The tiebreaker patch helps but\nnever removes the regression.\n\nI suspect that I fat-fingered work_mem yesterday, so next I'll pick a\nbadly-performing mod like 32, then range over work_mem and see if that\nexplains anything, especially whether L3 effects are in fact more important\nin this workload.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 Feb 2023 15:03:22 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 3:02 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Wed, 15 Feb 2023 at 17:23, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n> > HEAD:\n> >\n> > 4 ^ 8: latency average = 113.976 ms\n> > 5 ^ 8: latency average = 783.830 ms\n> > 6 ^ 8: latency average = 3990.351 ms\n> > 7 ^ 8: latency average = 15793.629 ms\n> >\n> > Skip rechecking first key:\n> >\n> > 4 ^ 8: latency average = 107.028 ms\n> > 5 ^ 8: latency average = 732.327 ms\n> > 6 ^ 8: latency average = 3709.882 ms\n> > 7 ^ 8: latency average = 14570.651 ms\n\n> Yeah, just a hack. My intention with it was just to prove we had a\n> problem because sometimes Sort -> Incremental Sort was faster than\n> Sort. Ideally, with your change, we'd see that it's always faster to\n> do the full sort in one go. It would be good to see your patch with\n> and without the planner hack patch to ensure sort is now always faster\n> than sort -> incremental sort.\n\nOkay, here's a rerun including the sort hack, and it looks like incremental\nsort is only ahead with the smallest set, otherwise same or maybe slightly\nslower:\n\nHEAD:\n\n4 ^ 8: latency average = 113.461 ms\n5 ^ 8: latency average = 786.080 ms\n6 ^ 8: latency average = 3948.388 ms\n7 ^ 8: latency average = 15733.348 ms\n\ntiebreaker:\n\n4 ^ 8: latency average = 106.556 ms\n5 ^ 8: latency average = 734.834 ms\n6 ^ 8: latency average = 3640.507 ms\n7 ^ 8: latency average = 14470.199 ms\n\ntiebreaker + incr sort hack:\n\n4 ^ 8: latency average = 93.998 ms\n5 ^ 8: latency average = 740.120 ms\n6 ^ 8: latency average = 3715.942 ms\n7 ^ 8: latency average = 14749.323 ms\n\nAnd as far as this:\n\n> I suspect that I fat-fingered work_mem yesterday, so next I'll pick a\nbadly-performing mod like 32, then range over work_mem and see if that\nexplains anything, especially whether L3 effects are in fact more important\nin this workload.\n\nAttached is a script and image for fixing the input at random / mod32 and\nvarying 
work_mem. There is not a whole lot of variation here: pushdown\nregresses and the tiebreaker patch only helped marginally. I'm still not\nsure why the results from yesterday looked better than today.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 Feb 2023 18:45:01 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, 16 Feb 2023 at 00:45, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Okay, here's a rerun including the sort hack, and it looks like incremental sort is only ahead with the smallest set, otherwise same or maybe slightly slower:\n>\n> HEAD:\n>\n> 4 ^ 8: latency average = 113.461 ms\n> 5 ^ 8: latency average = 786.080 ms\n> 6 ^ 8: latency average = 3948.388 ms\n> 7 ^ 8: latency average = 15733.348 ms\n>\n> tiebreaker:\n>\n> 4 ^ 8: latency average = 106.556 ms\n> 5 ^ 8: latency average = 734.834 ms\n> 6 ^ 8: latency average = 3640.507 ms\n> 7 ^ 8: latency average = 14470.199 ms\n>\n> tiebreaker + incr sort hack:\n>\n> 4 ^ 8: latency average = 93.998 ms\n> 5 ^ 8: latency average = 740.120 ms\n> 6 ^ 8: latency average = 3715.942 ms\n> 7 ^ 8: latency average = 14749.323 ms\n\nSad news :( the sort hacks are still quite a bit faster for 4 ^ 8.\n\nI was fooling around with the attached (very quickly and crudely put\ntogether) patch just there. The idea is to sort during\ntuplesort_puttuple_common() when the memory consumption goes over some\napproximation of L3 cache size in the hope we still have cache lines\nfor the tuples in question still. 
The code is not ideal there as we\ntrack availMem rather than the used mem, so maybe I need to do that\nbetter as we could cross some boundary without actually having done\nvery much, plus we USEMEM for other reasons too.\n\nI found that the patch didn't really help:\n\ncreate table t (a int not null, b int not null);\ninsert into t select (x*random())::int % 100,(x*random())::int % 100\nfrom generate_Series(1,10000000)x;\nvacuum freeze t;\nselect pg_prewarm('t');\nshow work_mem;\n work_mem\n----------\n 4GB\n\nexplain (analyze, timing off) select * from t order by a,b;\n\nmaster:\nExecution Time: 5620.704 ms\nExecution Time: 5506.705 ms\n\npatched:\nExecution Time: 6801.421 ms\nExecution Time: 6762.130 ms\n\nI suspect it's slower because the final sort must sort the entire\narray still without knowledge that portions of it are pre-sorted. It\nwould be very interesting to improve this and do some additional work\nand keep track of the \"memtupsortedto\" index by pushing them onto a\nList each time we cross the availMem boundary, then do then qsort just\nthe final portion of the array in tuplesort_performsort() before doing\na k-way merge on each segment rather than qsorting the entire thing\nagain. I suspect this would be faster when work_mem exceeds L3 by some\nlarge amount.\n\nDavid",
"msg_date": "Thu, 16 Feb 2023 16:02:44 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 10:03 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I suspect it's slower because the final sort must sort the entire\n> array still without knowledge that portions of it are pre-sorted. It\n> would be very interesting to improve this and do some additional work\n> and keep track of the \"memtupsortedto\" index by pushing them onto a\n> List each time we cross the availMem boundary, then do then qsort just\n> the final portion of the array in tuplesort_performsort() before doing\n> a k-way merge on each segment rather than qsorting the entire thing\n> again. I suspect this would be faster when work_mem exceeds L3 by some\n> large amount.\n\nSounds like a reasonable thing to try.\n\nIt seems like in-memory merge could still use abbreviation, unlike external\nmerge.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Thu, Feb 16, 2023 at 10:03 AM David Rowley <dgrowleyml@gmail.com> wrote:> I suspect it's slower because the final sort must sort the entire> array still without knowledge that portions of it are pre-sorted. It> would be very interesting to improve this and do some additional work> and keep track of the \"memtupsortedto\" index by pushing them onto a> List each time we cross the availMem boundary, then do then qsort just> the final portion of the array in tuplesort_performsort() before doing> a k-way merge on each segment rather than qsorting the entire thing> again. I suspect this would be faster when work_mem exceeds L3 by some> large amount.Sounds like a reasonable thing to try.It seems like in-memory merge could still use abbreviation, unlike external merge. --John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Thu, 16 Feb 2023 18:28:23 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 11:23 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n>\n>\n> On Tue, Feb 14, 2023 at 5:46 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> >\n> > I didn't do a detailed review of the sort patch, but I did wonder\n> > about the use of the name \"fallback\" in the new functions. The\n> > comment in the following snippet from qsort_tuple_unsigned_compare()\n> > makes me think \"tiebreak\" is a better name.\n>\n> I agree \"tiebreak\" is better.\n\nHere is v3 with that change. I still need to make sure the tests cover all\ncases, so I'll do that as time permits. Also creating CF entry.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Fri, 30 Jun 2023 13:45:32 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Fri, 30 Jun 2023 at 18:45, John Naylor <john.naylor@enterprisedb.com> wrote:\n> Here is v3 with that change. I still need to make sure the tests cover all cases, so I'll do that as time permits. Also creating CF entry.\n\nThanks for picking this back up again for the v17 cycle. I've reread\nthe entire thread to remind myself where we got to.\n\nI looked over your patch and don't see anything to report aside from\nthe unfinished/undecided part around the tiebreak function for\ntuplesort_begin_index_hash().\n\nI also ran the benchmark script [1] with the patch from [2] and\ncalculated the speedup with [2] with and without your v3 patch. I've\nattached two graphs with the benchmark results. Any value >100%\nindicates that performing the sort for the ORDER BY at the same time\nas the WindowAgg improves performance, whereas anything < 100%\nindicates a regression. The bars in blue show the results without\nyour v3 patch and the red bars show the results with your v3 patch.\nLooking at the remaining regressions it does not really feel like\nwe've found the culprit for the regressions. Certainly, v3 helps, but\nI just don't think it's to the level we'd need to make the window sort\nchanges a good idea.\n\nI'm not sure exactly how best to proceed here. I think the tiebreak\nstuff is worth doing regardless, so maybe that can just go in to\neliminate that as a factor and we or I can continue to see what else\nis to blame.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/attachment/143109/bench_windowsort.sh.txt\n[2] https://www.postgresql.org/message-id/attachment/143112/orderby_windowclause_pushdown_testing_only.patch.txt",
"msg_date": "Mon, 3 Jul 2023 22:14:50 +1200",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
},
{
"msg_contents": "On Mon, Jul 3, 2023 at 5:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Fri, 30 Jun 2023 at 18:45, John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> I looked over your patch and don't see anything to report aside from\n> the unfinished/undecided part around the tiebreak function for\n> tuplesort_begin_index_hash().\n\nI went ahead and added a degenerate function, just for consistency -- might\nalso head off possible alarms from code analysis tools.\n\n> I also ran the benchmark script [1] with the patch from [2] and\n> calculated the speedup with [2] with and without your v3 patch. I've\n> attached two graphs with the benchmark results. Any value >100%\n> indicates that performing the sort for the ORDER BY at the same time\n> as the WindowAgg improves performance, whereas anything < 100%\n> indicates a regression. The bars in blue show the results without\n> your v3 patch and the red bars show the results with your v3 patch.\n> Looking at the remaining regressions it does not really feel like\n> we've found the culprit for the regressions. Certainly, v3 helps, but\n> I just don't think it's to the level we'd need to make the window sort\n> changes a good idea.\n>\n> I'm not sure exactly how best to proceed here. I think the tiebreak\n> stuff is worth doing regardless, so maybe that can just go in to\n> eliminate that as a factor and we or I can continue to see what else\n> is to blame.\n\nThanks for testing again. 
Sounds good, I removed a now-invalidated comment,\npgindent'd, and pushed.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 16 Aug 2023 17:24:33 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Todo: Teach planner to evaluate multiple windows in the optimal\n order"
}
] |
[
{
"msg_contents": "Hi all,\n\n\nThis is patch for TODO item: /Improve ability to display optimizer \nanalysis using OPTIMIZER_DEBUG\n/\n\nAs per as suggestion in the mailing list which is to replace current \nmechanism of getting optimizer log via OPTIMIZER_DEBUG macro\n\nto something more configurable (which doesn't require rebuilding \npostgres from source code). This patch replaces /OPTIMIZER_DEBUG\n/\n\nby introducing a//GUC /show_optimizer_log /which can be configured on \nand off to//display (or hide)//previously generated log from stdout to \npostmaster's log.\n\nPlease check attached patch, any feedback is appreciated.\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 25 Dec 2022 21:50:16 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n> As per as suggestion in the mailing list which is to replace current \n> mechanism of getting optimizer log via OPTIMIZER_DEBUG macro\n> to something more configurable (which doesn't require rebuilding \n> postgres from source code). This patch replaces /OPTIMIZER_DEBUG\n> by introducing a//GUC /show_optimizer_log /which can be configured on \n> and off to//display (or hide)//previously generated log from stdout to \n> postmaster's log.\n\nThe problem with OPTIMIZER_DEBUG is that it's useless. I've\nbeen hacking on the PG planner for well more than twenty years,\nand I do not think I've ever made any use of that code ---\ncertainly not since the turn of the century or so.\nMaking it GUC-accessible isn't going to make it more useful.\n\nThere certainly could be value in having some better trace\nof what the planner is doing, but I honestly don't have much\nof an idea of what that would look like. debug_print_rel\nisn't that, however, because it won't show you anything about\npaths that were considered and immediately rejected by\nadd_path (or never got to add_path at all, because of the\napproximate-initial-cost filters in joinpath.c). We've\nsometimes speculated about logging every path submitted to\nadd_path, but that would likely be too verbose to be very\nhelpful.\n\nAlso, because OPTIMIZER_DEBUG is such a backwater, it's\nnever gotten much polishing for usability. There are large\ngaps in what it can deal with (print_expr is pretty much\na joke for example), yet also lots of redundancy in the\noutput ... and dumping stuff to stdout isn't terribly helpful\nto the average user in the first place. 
These days people\nwould probably also wish that the output could be machine-\nreadable in some way (JSON-formatted, perhaps).\n\nSo I think this whole area would need some pretty serious\nrethinking and attention to detail before we should consider\nclaiming that it's a useful feature.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 25 Dec 2022 13:24:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "\nOn 25/12/22 23:54, Tom Lane wrote:\n>\n> These days people\n> would probably also wish that the output could be machine-\n> readable in some way (JSON-formatted, perhaps).\n\nPerhaps switch to enable logs could be on (standard logging style), \njson, xml etc and print the output\n\nas required?\n\n\n> however, because it won't show you anything about\n> paths that were considered and immediately rejected by\n> add_path (or never got to add_path at all, because of the\n> approximate-initial-cost filters in joinpath.c). We've\n> sometimes speculated about logging every path submitted to\n> add_path, but that would likely be too verbose to be very\n> helpful.\n>\nMaybe we could add verbose option too and this could be one of the \noutput. Are there any more relevant information\n\nthat you could think of which can be included here?\n\n\nAlso, inputs from other hackers are welcomed here.\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Mon, 26 Dec 2022 00:35:40 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Mon, 26 Dec 2022 at 08:05, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> Also, inputs from other hackers are welcomed here.\n\nI'm with Tom on this. I've never once used this feature to try to\nfigure out why a certain plan was chosen or not chosen.\n\nDo you actually have a need for this or are you just trying to tick\noff some TODO items?\n\nI'd really rather not see us compiling all that debug code in by\ndefault unless it's actually going to be useful to a meaningful number\nof people.\n\nDavid\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:08:36 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "\nOn 03/01/23 08:38, David Rowley wrote:\n> I'm with Tom on this. I've never once used this feature to try to\n> figure out why a certain plan was chosen or not chosen.\n>\n> I'd really rather not see us compiling all that debug code in by\n> default unless it's actually going to be useful to a meaningful number\n> of people.\n>\nOkay this makes sense.\n\n\n>\n> Do you actually have a need for this or are you just trying to tick\n> off some TODO items?\n>\nI would say Iatter but reason I picked it up was more on side of \nlearning optimizer better. Currently, I am left with\n\nexplain analyze which does its job but for understanding internal \nworking of optimizer, there are not much alternatives.\n\nAgain, if I know where to put breakpoint, I could see required \npath/states but point of this todo item is\n\nability to do this without need of developer tools.\n\nAlso from the thread,\n\nhttps://www.postgresql.org/message-id/20120821.121611.501104647612634419.t-ishii@sraoss.co.jp\n\n> +1. It would also be popular with our academic users.\n>\nThere could be potential for this as well.\n\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:29:00 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 19:59, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n>\n>\n> On 03/01/23 08:38, David Rowley wrote:\n> > Do you actually have a need for this or are you just trying to tick\n> > off some TODO items?\n> >\n> I would say Iatter but reason I picked it up was more on side of\n> learning optimizer better.\n\nI think it's better you leave this then. I think if someone comes\nalong and demonstrates the feature's usefulness and can sell us having\nit so we can easily enable it by GUC then maybe that's the time to\nconsider it. I don't think ticking off a TODO item is reason enough.\n\n> Also from the thread,\n>\n> https://www.postgresql.org/message-id/20120821.121611.501104647612634419.t-ishii@sraoss.co.jp\n>\n> > +1. It would also be popular with our academic users.\n> >\n> There could be potential for this as well.\n\nI think the argument is best coming from someone who'll actually use it.\n\nDavid\n\n\n",
"msg_date": "Wed, 4 Jan 2023 13:57:28 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "As an end user that spends a lot of time optimizing pretty complicated\nqueries, I'd say that something like this could be useful.\nRight now the optimizer is mostly a black box. Why it chooses one plan or\nthe other, it's a mystery. I have some general ideas about that,\nand I can even read and sometimes debug optimizer's code to dig deeper\n(although it's not always possible to reproduce the same behavior as in the\nproduction system anyway).\nI'm mostly interested to find where exactly the optimizer was wrong and\nwhat would be the best way to fix it. Currently Postgres is not doing a\ngreat job in that department.\nEXPLAIN output can tell you about mispredictions, but the logic of choosing\nparticular plans is still obscure, because the reasons for optimizer's\ndecisions are not visible.\nIf configuring OPTIMIZER_DEBUG through GUC can help with that, I think it\nwould be a useful addition.\nNow, that's general considerations, I'm not somebody who actually uses\nOPTIMIZER_DEBUG regularly (but maybe I would if it's accessible\nthrough GUC),\nI'm just saying that is an area where improvements would be very much\nwelcomed.\n\n-Vladimir\n\nOn Tue, Jan 3, 2023 at 4:57 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Tue, 3 Jan 2023 at 19:59, Ankit Kumar Pandey <itsankitkp@gmail.com>\n> wrote:\n> >\n> >\n> > On 03/01/23 08:38, David Rowley wrote:\n> > > Do you actually have a need for this or are you just trying to tick\n> > > off some TODO items?\n> > >\n> > I would say Iatter but reason I picked it up was more on side of\n> > learning optimizer better.\n>\n> I think it's better you leave this then. I think if someone comes\n> along and demonstrates the feature's usefulness and can sell us having\n> it so we can easily enable it by GUC then maybe that's the time to\n> consider it. 
I don't think ticking off a TODO item is reason enough.\n>\n> > Also from the thread,\n> >\n> >\n> https://www.postgresql.org/message-id/20120821.121611.501104647612634419.t-ishii@sraoss.co.jp\n> >\n> > > +1. It would also be popular with our academic users.\n> > >\n> > There could be potential for this as well.\n>\n> I think the argument is best coming from someone who'll actually use it.\n>\n> David\n>\n>\n>",
"msg_date": "Tue, 3 Jan 2023 19:15:44 -0800",
"msg_from": "Vladimir Churyukin <vladimir@churyukin.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 16:15, Vladimir Churyukin <vladimir@churyukin.com> wrote:\n> As an end user that spends a lot of time optimizing pretty complicated queries, I'd say that something like this could be useful.\n\nI think we really need to at least see that it *is* useful, not that\nit *could be* useful. For example, as an end user, you might not find\nit great that the output is sent to stdout rather than to the window\nthat you execute the query in.\n\n From what I can see here, the motivation to make this a useful feature\nis backwards from what is normal. I think if you're keen to see a\nfeature that allows you better visibility into rejected paths then you\nneed to prove this is it rather than speculating that it might be\nuseful.\n\nThere was a bit of work being done in [1] with the end goal of having\nthe ability for add_path to call a hook function before it outright\nrejects a path. Maybe that would be a better place to put this and\nthen write some contrib module that provides some extended output in\nEXPLAIN. That might require some additional fields so that we could\ncarry forward additional information that we'd like to show in\nEXPLAIN. I imagine it's not ok just to start writing result lines in\nthe planner. The EXPLAIN format must be considered too and explain.c\nseems like the place that should be done. add_path might need to\nbecome a bit more verbose about the reason it rejected a certain path\nfor this to be useful.\n\nDavid\n\n[1] https://commitfest.postgresql.org/39/3599/\n\n\n",
"msg_date": "Wed, 4 Jan 2023 16:41:42 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 7:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 4 Jan 2023 at 16:15, Vladimir Churyukin <vladimir@churyukin.com>\n> wrote:\n> > As an end user that spends a lot of time optimizing pretty complicated\n> queries, I'd say that something like this could be useful.\n>\n> I think we really need to at least see that it *is* useful, not that\n> it *could be* useful. For example, as an end user, you might not find\n> it great that the output is sent to stdout rather than to the window\n> that you execute the query in.\n>\n\nThat's true, as an end user I would expect to see the output as a query\noutput, not in stdout.\n\n\n> From what I can see here, the motivation to make this a useful feature\n> is backwards from what is normal. I think if you're keen to see a\n> feature that allows you better visibility into rejected paths then you\n> need to prove this is it rather than speculating that it might be\n> useful.\n>\n>\nYou can't see people using the feature unless you make it useful. If it's\nnot useful right now (because it's implemented as a compile-time flag with\nstdout prints for example),\nit doesn't mean it's not useful when it becomes more convenient. Probably\nthe best way to find out is to create a *convenient* extension and see if\npeople start using it.\n\n\n> There was a bit of work being done in [1] with the end goal of having\n> the ability for add_path to call a hook function before it outright\n> rejects a path. Maybe that would be a better place to put this and\n> then write some contrib module that provides some extended output in\n> EXPLAIN. That might require some additional fields so that we could\n> carry forward additional information that we'd like to show in\n> EXPLAIN. I imagine it's not ok just to start writing result lines in\n> the planner. The EXPLAIN format must be considered too and explain.c\n> seems like the place that should be done. add_path might need to\n> become a bit more verbose about the reason it rejected a certain path\n> for this to be useful.\n>\n\nI agree, extended EXPLAIN output would be a much better solution than\nwriting into stdout. Can be implemented as an extra EXPLAIN flag, something\nlike EXPLAIN (TRACE).\nOne of the issues here is the result will rather be pretty long (and may\nconsist of multiple parts, so something like returning multiple\nrefcursors might be necessary, so a client can fetch multiple result sets.\nOtherwise it won't be human-readable. Although it's not necessary the\npurpose, if the purpose is to make it machine-readable and create tools to\ninterpret the results, json format and a single resultset would be ok.\nThe result can be represented as a list of trace events that shows profiler\nlogic (the traces can be generated by the hook you mentioned and or by some\nother additional hooks).\nIs that what you were talking about?\nAnother thing, since people react to this TODO item on\nhttps://wiki.postgresql.org/wiki/Todo, maybe it's better to modify\nor remove it, so they don't spend time working on something that is pretty\nmuch a dead end currently?\n\n-Vladimir",
"msg_date": "Tue, 3 Jan 2023 20:39:19 -0800",
"msg_from": "Vladimir Churyukin <vladimir@churyukin.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 17:39, Vladimir Churyukin <vladimir@churyukin.com> wrote:\n>\n> On Tue, Jan 3, 2023 at 7:41 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>> From what I can see here, the motivation to make this a useful feature\n>> is backwards from what is normal. I think if you're keen to see a\n>> feature that allows you better visibility into rejected paths then you\n>> need to prove this is it rather than speculating that it might be\n>> useful.\n>>\n>\n> You can't see people using the feature unless you make it useful. If it's not useful right now (because it's implemented as a compile-time flag with stdout prints for example),\n> it doesn't mean it's not useful when it becomes more convenient. Probably the best way to find out is to create a *convenient* extension and see if people start using it.\n\nI don't think anyone is against making it useful. It's just not\nseemingly useful enough for either Tom or I to make use of it. Nobody\nelse seems to have come along to tell us that it's useful to them.\nSome people only speculated that it might be useful.\n\nAs I said before, if someone wants to make this work, then I think the\nproblem needs to be approached from the opposite direction. i.e they\nhave a plan that they're not happy with and they need to come up with\nsomething that usefully shows the reason why the plan that they expect\nto be better is not chosen. That's not what's happened here. add_path\ndoes not just reject paths based on cost, so ISTM, for this to be\nmeaningful, add_path would need to do something to specify the reason\nthat the path was rejected, i.e a similarly costed path has pathkeys,\nthis one does not.\n\n> I agree, extended EXPLAIN output would be a much better solution than writing into stdout. 
Can be implemented as an extra EXPLAIN flag, something like EXPLAIN (TRACE).\n> One of the issues here is the result will rather be pretty long (and may consist of multiple parts, so something like returning multiple refcursors might be necessary, so a client can fetch multiple result sets.\n> Otherwise it won't be human-readable. Although it's not necessary the purpose, if the purpose is to make it machine-readable and create tools to interpret the results, json format and a single resultset would be ok.\n> The result can be represented as a list of trace events that shows profiler logic (the traces can be generated by the hook you mentioned and or by some other additional hooks).\n> Is that what you were talking about?\n\nThe thing I had in mind was some mode that would record additional\ndetails during planning that could be tagged onto the final plan in\ncreateplan.c so that EXPLAIN could display them. I just think that\nEXPLAIN is the place where people go to learn about _what_ the plan is\nand it might also be the place where they might expect to go to find\nout more details about _why_ that plan was chosen. I by no means have\na fully bakes idea on what that would look like, but I just think that\ndumping a bunch of lines to stdout is not going to be useful to many\npeople and we need to think of something better in order to properly\nmake this useful.\n\n> Another thing, since people react to this TODO item on https://wiki.postgresql.org/wiki/Todo, maybe it's better to modify or remove it, so they don't spend time working on something that is pretty much a dead end currently?\n\nI've just adjusted it based on the discussion that's going on on this thread.\n\nDavid\n\n\n",
"msg_date": "Wed, 4 Jan 2023 18:06:46 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> The thing I had in mind was some mode that would record additional\n> details during planning that could be tagged onto the final plan in\n> createplan.c so that EXPLAIN could display them. I just think that\n> EXPLAIN is the place where people go to learn about _what_ the plan is\n> and it might also be the place where they might expect to go to find\n> out more details about _why_ that plan was chosen. I by no means have\n> a fully bakes idea on what that would look like, but I just think that\n> dumping a bunch of lines to stdout is not going to be useful to many\n> people and we need to think of something better in order to properly\n> make this useful.\n\nThere's a number of problems in this area, but I think the really\nfundamental issue is that for speed reasons the planner wants to\nreject losing plan alternatives as quickly as possible. So we simply\ndon't pursue those alternatives far enough to produce anything that\ncould serve as input for EXPLAIN (in its current form, anyway).\nWhat that means is that a trace of add_path decisions just can't be\nvery useful to an end user: there isn't enough data to present the\ndecisions in a recognizable form, besides which there is too much\nnoise because most of the rejected options are in fact silly.\nSo indeed we find that even hard-core developers aren't interested\nin consuming the data in that form.\n\nAnother issue is that frequently the problem is that we never\nconsidered the desired plan at all, so that even if you had a\nperfectly readable add_path trace it wouldn't show you what you want\nto see. 
This might happen because the planner is simply incapable of\nproducing that plan shape from the given query, but often it happens\nfor reasons like \"function F() used in the query is marked volatile,\nso we didn't flatten a subquery or consider an indexscan or whatever\".\nI'm not sure how we could produce output that would help people\ndiscover that kind of problem ... but I am sure that an add_path\ntrace won't do it.\n\nSo, not only am I pretty down on exposing OPTIMIZER_DEBUG in\nits current form, but I don't really believe that adding hooks\nto add_path would allow an extension to produce anything of value.\nI'd for sure want to see a convincing demonstration to the contrary\nbefore we slow down that hot code path by adding hook calls.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 04 Jan 2023 00:55:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "\nOn 04/01/23 06:27, David Rowley wrote:\n> I think it's better you leave this then. I think if someone comes\n> along and demonstrates the feature's usefulness and can sell us having\n> it so we can easily enable it by GUC then maybe that's the time to\n> consider it. I don't think ticking off a TODO item is reason enough.\n>\nMakes sense, not going further with this.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Wed, 4 Jan 2023 13:51:56 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 9:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> David Rowley <dgrowleyml@gmail.com> writes:\n> > The thing I had in mind was some mode that would record additional\n> > details during planning that could be tagged onto the final plan in\n> > createplan.c so that EXPLAIN could display them. I just think that\n> > EXPLAIN is the place where people go to learn about _what_ the plan is\n> > and it might also be the place where they might expect to go to find\n> > out more details about _why_ that plan was chosen. I by no means have\n> > a fully bakes idea on what that would look like, but I just think that\n> > dumping a bunch of lines to stdout is not going to be useful to many\n> > people and we need to think of something better in order to properly\n> > make this useful.\n>\n> There's a number of problems in this area, but I think the really\n> fundamental issue is that for speed reasons the planner wants to\n> reject losing plan alternatives as quickly as possible. So we simply\n> don't pursue those alternatives far enough to produce anything that\n> could serve as input for EXPLAIN (in its current form, anyway).\n>\n\nThat's not necessarily a fundamental issue for EXPLAIN (well, in theory,\nnot sure if there are fundamental limitations of the current\nimplementation).\nWhen somebody runs EXPLAIN, they don't necessarily care that much about its\nperformance, as long as it returns results in reasonable time.\nSo if the planner does some extra work in that mode to better display why\nthe specific path was chosen, it should probably be ok from the performance\nperspective.\n\n\n> What that means is that a trace of add_path decisions just can't be\n> very useful to an end user: there isn't enough data to present the\n> decisions in a recognizable form, besides which there is too much\n> noise because most of the rejected options are in fact silly.\n> So indeed we find that even hard-core developers aren't interested\n> in consuming the data in that form.\n>\n\nEven if the output is not very human-readable, it still can be useful, if\nthere are tools that consume the output and extract\nmeaningful data while omitting meaningless noise (if the meaningful data\nexists there of course).\n\n\n> Another issue is that frequently the problem is that we never\n> considered the desired plan at all, so that even if you had a\n> perfectly readable add_path trace it wouldn't show you what you want\n> to see. This might happen because the planner is simply incapable of\n> producing that plan shape from the given query, but often it happens\n> for reasons like \"function F() used in the query is marked volatile,\n> so we didn't flatten a subquery or consider an indexscan or whatever\".\n> I'm not sure how we could produce output that would help people\n> discover that kind of problem ... but I am sure that an add_path\n> trace won't do it.\n>\n> So, not only am I pretty down on exposing OPTIMIZER_DEBUG in\n> its current form, but I don't really believe that adding hooks\n> to add_path would allow an extension to produce anything of value.\n> I'd for sure want to see a convincing demonstration to the contrary\n> before we slow down that hot code path by adding hook calls.\n\n\nPardon my ignorance, but I'm curious, how changes in planner code are\ncurrently validated?\nLet's say, you add some extra logic that introduces different paths in some\ncases, or adjust some constants. How do you validate this logic doesn't\nslow down something else dramatically?\nI see some EXPLAIN output checks in regression tests (not that many\nthough), so I'm curious how regressions in planning are currently tested.\nNot the simple ones, when you have a small input and predictable\nplan/output, but something that can happen with more or less real data\ndistribution on medium / large datasets.\n\n-Vladimir Churyukin",
"msg_date": "Wed, 4 Jan 2023 00:52:41 -0800",
"msg_from": "Vladimir Churyukin <vladimir@churyukin.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 1:59 PM Ankit Kumar Pandey <itsankitkp@gmail.com>\nwrote:\n>\n>\n> On 03/01/23 08:38, David Rowley wrote:\n> >\n> > Do you actually have a need for this or are you just trying to tick\n> > off some TODO items?\n> >\n> I would say Iatter but reason I picked it up was more on side of\n> learning optimizer better.\n\nNote that the TODO list has accumulated some cruft over the years. Some\ntime ago I started an effort to remove outdated/undesirable entries, and I\nshould get back to that, but for the present, please take the warning at\nthe top to heart:\n\n\"WARNING for Developers: Unfortunately this list does not contain all the\ninformation necessary for someone to start coding a feature. Some of these\nitems might have become unnecessary since they were added --- others might\nbe desirable but the implementation might be unclear. When selecting items\nlisted below, be prepared to first discuss the value of the feature. Do not\nassume that you can select one, code it and then expect it to be committed.\n\"\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 10:56:22 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "John Naylor <john.naylor@enterprisedb.com> writes:\n> Note that the TODO list has accumulated some cruft over the years. Some\n> time ago I started an effort to remove outdated/undesirable entries, and I\n> should get back to that, but for the present, please take the warning at\n> the top to heart:\n\n> \"WARNING for Developers: Unfortunately this list does not contain all the\n> information necessary for someone to start coding a feature. Some of these\n> items might have become unnecessary since they were added --- others might\n> be desirable but the implementation might be unclear. When selecting items\n> listed below, be prepared to first discuss the value of the feature. Do not\n> assume that you can select one, code it and then expect it to be committed.\n> \"\n\nI think we could make that even stronger: there's basically nothing on\nthe TODO list that isn't problematic in some way. Otherwise it would\nhave been done already. The entries involve large amounts of work,\nor things that are subtler than they might appear, or cases where the\ndesirable semantics aren't clear, or tradeoffs that there's not\nconsensus about, or combinations of those.\n\nIME it's typically a lot more productive to approach things via\n\"scratch your own itch\". If a problem is biting you directly, then\nat least you have some clear idea of what it is that needs to be fixed.\nYou might have to work up to an understanding of how to fix it, but\nyou have a clear goal.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 23:27:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "\n> On 11/01/23 09:57, Tom Lane wrote:\n> IME it's typically a lot more productive to approach things via\n> \"scratch your own itch\". If a problem is biting you directly, then\n> at least you have some clear idea of what it is that needs to be fixed.\n> You might have to work up to an understanding of how to fix it, but\n> you have a clear goal.\n\n\nQuestion is, how newcomers should start contribution if they are not \ncoming with a problem in their hand?\n\nTodo list is possibly first thing anyone, who is willing to contribute \nis going to read and for a new\n\ncontributor, it is not easy to judge situation (if todo item is easy for \nnewcomers or bit involved).\n\nOne way to mitigate this issue is to mention the mailing thread with \ndiscussions under todo items.\n\nIt is done for most of todo items but sometimes pressing issues are left out.\n\n\nThat being said, I think this is part of learning process and okay to \ncome up with ideas and fail.\nPghackers can possibly bring up issues in their approach (if discussion \nfor the issue is not mentioned under\ntodo item).\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Wed, 11 Jan 2023 12:07:58 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 1:38 PM Ankit Kumar Pandey <itsankitkp@gmail.com>\nwrote:\n>\n>\n> > On 11/01/23 09:57, Tom Lane wrote:\n> > IME it's typically a lot more productive to approach things via\n> > \"scratch your own itch\". If a problem is biting you directly, then\n> > at least you have some clear idea of what it is that needs to be fixed.\n> > You might have to work up to an understanding of how to fix it, but\n> > you have a clear goal.\n>\n>\n> Question is, how newcomers should start contribution if they are not\n> coming with a problem in their hand?\n\nI would say find something that gets you excited. Worked for me, at least.\n\n> Todo list is possibly first thing anyone, who is willing to contribute\n> is going to read and for a new\n\nYeah, that's a problem we need to address.\n\n> That being said, I think this is part of learning process and okay to\n> come up with ideas and fail.\n\nOf course it is! A key skill in engineering is to fail as quickly as\npossible, preferably before doing any actual work.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 15:22:57 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Improve ability to display optimizer analysis using\n OPTIMIZER_DEBUG"
}
] |
[
{
"msg_contents": "Hi,\n\nIt looks like assign_checkpoint_completion_target() is defined [1],\nbut never used, because of which CheckPointSegments may miss to\naccount for changed checkpoint_completion_target. I'm attaching a tiny\npatch to fix this.\n\nThoughts?\n\n[1]\ncommit 88e982302684246e8af785e78a467ac37c76dee9\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: Mon Feb 23 18:53:02 2015 +0200\n\n Replace checkpoint_segments with min_wal_size and max_wal_size.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Dec 2022 18:12:34 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make use of assign_checkpoint_completion_target() to calculate\n CheckPointSegments correctly"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 06:12:34PM +0530, Bharath Rupireddy wrote:\n> It looks like assign_checkpoint_completion_target() is defined [1],\n> but never used, because of which CheckPointSegments may miss to\n> account for changed checkpoint_completion_target. I'm attaching a tiny\n> patch to fix this.\n> \n> Thoughts?\n\nOops. It looks like you are right here. This would impact the\ncalculation of CheckPointSegments on reload when\ncheckpoint_completion_target is updated. This is wrong since we've\nswitched to max_wal_size as of 88e9823, so this had better be\nbackpatched all the way down.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 17 Jan 2023 16:01:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make use of assign_checkpoint_completion_target() to calculate\n CheckPointSegments correctly"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 12:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 26, 2022 at 06:12:34PM +0530, Bharath Rupireddy wrote:\n> > It looks like assign_checkpoint_completion_target() is defined [1],\n> > but never used, because of which CheckPointSegments may miss to\n> > account for changed checkpoint_completion_target. I'm attaching a tiny\n> > patch to fix this.\n> >\n> > Thoughts?\n>\n> Oops. It looks like you are right here. This would impact the\n> calculation of CheckPointSegments on reload when\n> checkpoint_completion_target is updated. This is wrong since we've\n> switched to max_wal_size as of 88e9823, so this had better be\n> backpatched all the way down.\n>\n> Thoughts?\n\n+1 to backpatch as setting checkpoint_completion_target will not take\neffect immediately.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 17 Jan 2023 19:55:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Make use of assign_checkpoint_completion_target() to calculate\n CheckPointSegments correctly"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 07:55:53PM +0530, Bharath Rupireddy wrote:\n> On Tue, Jan 17, 2023 at 12:31 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Oops. It looks like you are right here. This would impact the\n>> calculation of CheckPointSegments on reload when\n>> checkpoint_completion_target is updated. This is wrong since we've\n>> switched to max_wal_size as of 88e9823, so this had better be\n>> backpatched all the way down.\n>>\n>> Thoughts?\n> \n> +1 to backpatch as setting checkpoint_completion_target will not take\n> effect immediately.\n\nOkay, applied down to 11. I have double-checked the surroundings to\nsee if there was a similar mistake or something hiding around\nCheckpointSegments but noticed nothing (also did some primary/standby\npgbench-ing with periodic reloads of checkpoint_completion_target,\nwhile on it).\n--\nMichael",
"msg_date": "Thu, 19 Jan 2023 13:16:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Make use of assign_checkpoint_completion_target() to calculate\n CheckPointSegments correctly"
}
] |
[
{
"msg_contents": "Hi,\n\nIsInstallXLogFileSegmentActive() is currently being used for assert\nchecks. How about making it an assert-only function to disable it on\nproduction builds? This can shave an unused function on production\nbuilds. We can easily switch back when there comes a real caller\noutside of xlog.c. I'm attaching a tiny patch for this.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 26 Dec 2022 18:12:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Make IsInstallXLogFileSegmentActive() an assert-only function"
}
] |
[
{
"msg_contents": "Hello.\n\nJust a small story about small data-loss on logical replication.\n\nWe were logically replicating a 4 TB database from\n\n> PostgreSQL 12.12 (Ubuntu 12.12-201-yandex.49163.d86383ed5b) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\nto\n\n> PostgreSQL 14.5 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n\nDatabase includes many tables, but there are A, B and C tables. Tables\nA, B and are C changed in the same transaction (new tuples created in\nB and C with corresponding update of A).\n\nTable A was added to the PUBLICATION from the start. Once initial sync\nwas done, tables B and C were added to PUBLICATION with REFRESH on\nsubscription (to reduce the WAL collection on the source database).\n\nSo, we see in logs:\n\n> -2022-12-13 13:19:55 UTC-63987bfb.2733-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"A\" has started\n> -2022-12-13 14:41:49 UTC-63987bfb.2733-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"A\" has finished\n\n\n> -2022-12-14 08:08:34 UTC-63998482.7d10-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"B\" has started\n> -2022-12-14 10:19:08 UTC-63998482.7d10-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"B\" has finished\n\n\n> -2022-12-14 10:37:47 UTC-6399a77b.1fc-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"C\" has started\n> -2022-12-14 10:48:46 UTC-6399a77b.1fc-LOG: logical replication table synchronization worker for subscription \"cloud_production_main_sub_v4\", table \"C\" has finished\n\n\nAlso, we had to reboot subscription server twice. 
Moreover, we have\nplenty of messages like:\n\n\n> -2022-12-13 15:53:30 UTC-639872fa.1-LOG: background worker \"logical replication worker\" (PID 47960) exited with exit code 1\n> -2022-12-13 21:04:31 UTC-6398e8df.4f7c-LOG: logical replication apply worker for subscription \"cloud_production_main_sub_v4\" has started\n> -2022-12-14 10:19:22 UTC-6398e8df.4f7c-ERROR: could not receive data from WAL stream: SSL SYSCALL error: EOF detected\n\nAdditionally, our HA replica of subscriber was broken and recovered by\nsupport… And logs like this:\n\n> psql-2022-12-14 10:24:18 UTC-63999d2c.2020-WARNING: user requested cancel while waiting for synchronous replication ack.\n> The COMMIT record has already flushed to WAL locally and might not have been replicated to the standby. We must wait here.\n\nFurthermore, we were adding\\removing another table D from publication\nfew times. So, it was a little bit messy story.\n\nAfter all, we got streaming working for the whole database.\n\nBut after some time we realized we have lost 203 records for table B\ncreated from\n\n2022-12-14 09:21:25.705 to\n2022-12-14 09:49:20.664 (after synchronization start, but before finish).\n\nAnd the most tricky part here - A, B and C are changed in the same\ntransaction (related tuples). But tables A and C - are fine, only few\nrecords from B are lost.\n\nWe have compared all other tables record to record - only 203 records\nfrom B are missing. We have restored the server from backup with\npoint-in-time-recovery (to exclude case with application or human\nerror) - the same results. Furthermore, we have tried different\nindexes in search (to exclude issue with broken btree) - the same\nresults.\n\nSo, yes, we understand our replication story was not a classical happy\npath even close. But the result feels a little bit scary.\n\nCurrently, I have access to database and logs - so, feel free to ask\nfor additional debugging information if you like.\n\nThanks a lot,\nMichail.\n\n\n",
"msg_date": "Mon, 26 Dec 2022 17:28:00 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Data loss on logical replication, 12.12 to 14.5, ALTER SUBSCRIPTION"
},
{
"msg_contents": "Hello again.\n\nJust small a fix for:\n\n> 2022-12-14 09:21:25.705 to\n> 2022-12-14 09:49:20.664 (after synchronization start, but before finish).\n\nCorrect values are:\n\n2022-12-14 09:49:31.340\n2022-12-14 09:49:41.683\n\nSo, it looks like we lost about 10s of one of the tables WAL.\n\n\n",
"msg_date": "Mon, 26 Dec 2022 18:19:35 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Mon, Dec 26, 2022 at 8:50 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> Hello again.\n>\n> Just small a fix for:\n>\n> > 2022-12-14 09:21:25.705 to\n> > 2022-12-14 09:49:20.664 (after synchronization start, but before finish).\n>\n> Correct values are:\n>\n> 2022-12-14 09:49:31.340\n> 2022-12-14 09:49:41.683\n>\n> So, it looks like we lost about 10s of one of the tables WAL.\n>\n\n\nIIUC, this is the time when only table B's initial sync was\nin-progress. Table A's initial sync was finished by that time and for\nTable C, it is yet not started. We have some logic where in such\noverlapping phases where we allow apply worker to skip the data which\nwill be later applied by the initial sync worker. During the time of\nthe initial sync of B, are there any other operations on that table\napart from the missing ones? You may want to see the LOGs of\nsubscribers during the initial sync time for any errors or other\nmessages.\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 10:27:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "Hello, Amit!\n\n> IUC, this is the time when only table B's initial sync was\n> in-progress. Table A's initial sync was finished by that time and for\n> Table C, it is yet not started.\nYes, it is correct. C was started too, but unsuccessfully (restarted\nafter, see below).\n\n> During the time of\n> the initial sync of B, are there any other operations on that table\n> apart from the missing ones?\nYes, a lot of them. Tuples created all the time without any pauses,\nonly ~10s interval is gone.\n\n> You may want to see the LOGs of\n> subscribers during the initial sync time for any errors or other\n> messages.\n\nLooks like I have found something interesting. Below, sorted by time\nwith some comments.\n\nRestart of the logical apply worker day before issue.\n\n -2022-12-13 21:04:25 UTC-6398e8d9.4ec3-LOG: logical\nreplication apply worker for subscription\n\"cloud_production_main_sub_v4\" has started\n -2022-12-13 21:04:26 UTC-6398e8d9.4ec3-ERROR: could not start\nWAL streaming: FATAL: out of relcache_callback_list slots CONTEXT:\nslot \"cloud_production_main_sub_v4\", output plugin \"pgoutput\", in the\nstartup callback ERROR: odyssey: c1747b31d0187: remote server\nread/write error s721c052ace56: (null) SSL SYSCALL error: EOF detected\n -2022-12-13 21:04:26 UTC-639872fa.1-LOG: background worker\n\"logical replication worker\" (PID 20163) exited with exit code 1\n -2022-12-13 21:04:31 UTC-6398e8df.4f7c-LOG: logical\nreplication apply worker for subscription\n\"cloud_production_main_sub_v4\" has started\n\nStart of B and C table initial sync (adding tables to the\nsubscription, table A already in streaming mode):\n\n -2022-12-14 08:08:34 UTC-63998482.7d10-LOG: logical\nreplication table synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"B\" has started\n -2022-12-14 08:08:34 UTC-63998482.7d13-LOG: logical\nreplication table synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"C\" has 
started\n\nB is synchronized without any errors:\n\n -2022-12-14 10:19:08 UTC-63998482.7d10-LOG: logical\nreplication table synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"B\" has finished\n\nAfter about 15 seconds, replication worker is restarted because of\nissues with I/O:\n\n -2022-12-14 10:19:22 UTC-6398e8df.4f7c-ERROR: could not\nreceive data from WAL stream: SSL SYSCALL error: EOF detected\n -2022-12-14 10:19:22 UTC-6399a32a.6af3-LOG: logical\nreplication apply worker for subscription\n\"cloud_production_main_sub_v4\" has started\n -2022-12-14 10:19:22 UTC-639872fa.1-LOG: background worker\n\"logical replication worker\" (PID 20348) exited with exit code 1\n\nThen cancel of query (something about insert into public.lsnmover):\n\n -2022-12-14 10:21:03 UTC-63999c6a.f25d-ERROR: canceling\nstatement due to user request\n -2022-12-14 10:21:39 UTC-639997f9.fd6e-ERROR: canceling\nstatement due to user request\n\nAfter small amount of time, synchronous replication seems to be\nbroken, we see tons of:\n\n -2022-12-14 10:24:18 UTC-63999d2c.2020-WARNING: shutdown\nrequested while waiting for synchronous replication ack.\n -2022-12-14 10:24:18 UTC-63999d2c.2020-WARNING: user requested\ncancel while waiting for synchronous\n\nAfter few minutes at 10:36:05 we initiated database restart:\n\n -2022-12-14 10:35:20 UTC-63999c6a.f25d-ERROR: canceling\nstatement due to user request\n -2022-12-14 10:37:25 UTC-6399a765.1-LOG: pgms_stats: Finishing PG_init\n -2022-12-14 10:37:26 UTC-6399a765.1-LOG: listening on IPv4\naddress \"0.0.0.0\", port 5432\n -2022-12-14 10:37:26 UTC-6399a765.1-LOG: listening on IPv6\naddress \"::\", port 5432\n -2022-12-14 10:37:26 UTC-6399a765.1-LOG: starting PostgreSQL\n14.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu\n7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit\n -2022-12-14 10:37:26 UTC-6399a765.1-LOG: listening on Unix\nsocket \"/tmp/.s.PGSQL.5432\"\n -2022-12-14 10:37:26 UTC-6399a766.103-LOG: database system 
was\ninterrupted; last known up at 2022-12-14 10:36:47 UTC\n -2022-12-14 10:37:26 UTC-6399a766.105-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:26 UTC-6399a766.107-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:27 UTC-6399a767.108-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:27 UTC-6399a767.109-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:27 UTC-6399a767.10a-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:27 UTC-6399a767.10b-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:27 UTC-6399a767.113-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: recovered\nreplication state of node 4 to 185F6/CBF4BBB8\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: recovered\nreplication state of node 1 to 18605/FE229E10\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: recovered\nreplication state of node 6 to 185F8/86AEC880\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: database system was\nnot properly shut down; automatic recovery in progress\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: recovered\nreplication state of node 5 to 185F6/CBF4BBB8\n -2022-12-14 10:37:29 UTC-6399a766.103-LOG: redo starts at D76/557F4210\n -2022-12-14 10:37:34 UTC-6399a76e.14a-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:34 UTC-6399a766.103-LOG: redo done at\nD76/BDDBDAE0 system usage: CPU: user: 3.31 s, system: 1.36 s, elapsed:\n4.68 s\n -2022-12-14 10:37:34 UTC-6399a766.103-LOG: checkpoint\nstarting: end-of-recovery immediate\n -2022-12-14 10:37:34 UTC-6399a76e.14b-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:34 UTC-6399a76e.14c-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:45 UTC-6399a779.190-FATAL: the database\nsystem is starting up\n -2022-12-14 10:37:45 UTC-6399a766.103-LOG: checkpoint\ncomplete: wrote 300326 buffers (3.6%);\n -2022-12-14 10:37:45 UTC-6399a779.191-FATAL: the database\nsystem is starting up\n -2022-12-14 
10:37:45 UTC-6399a779.199-LOG: pgms_stats:\nbgworker: pid: 409.\n -2022-12-14 10:37:45 UTC-6399a779.19a-LOG: pg_qs: qs bgworker: pid: 410.\n -2022-12-14 10:37:45 UTC-6399a765.1-LOG: database system is\nready to accept connections\n -2022-12-14 10:37:45 UTC-6399a779.199-LOG: pgms_stats: qs\nbgworker: bgworker started running with host database: cloud_sys\n -2022-12-14 10:37:45 UTC-6399a779.19a-LOG: pg_qs: qs bgworker:\nbgworker started running with host database: cloud_sys\n -2022-12-14 10:37:45 UTC-6399a779.19d-LOG: logical replication\napply worker for subscription \"cloud_production_main_sub_v4\" has\nstarted\n -2022-12-14 10:37:46 UTC-6399a779.19b-LOG: pg_cron scheduler started\n\nInitial sync for C started from scratch (with some other tables):\n\n -2022-12-14 10:37:47 UTC-6399a77b.1f9-LOG: logical replication\ntable synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"E\" has started\n -2022-12-14 10:37:47 UTC-6399a77b.1fa-LOG: logical replication\ntable synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"F\" has started\n -2022-12-14 10:37:47 UTC-6399a77b.1fc-LOG: logical replication\ntable synchronization worker for subscription\n\"cloud_production_main_sub_v4\", table \"C\" has started\n\nAnd a few more flaps with WAL streaming later:\n\n -2022-12-14 10:50:20 UTC-6399a779.19d-ERROR: could not send\ndata to WAL stream: SSL SYSCALL error: EOF detected\n -2022-12-14 10:50:20 UTC-6399a765.1-LOG: background worker\n\"logical replication worker\" (PID 413) exited with exit code 1\n\nProbably a small part of WAL was somehow skipped by logical worker in\nall that mess.\n\nBest regards,\nMichail.\n\n\n",
"msg_date": "Tue, 27 Dec 2022 15:19:01 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 5:49 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n>\n> Probably a small part of WAL was somehow skipped by logical worker in\n> all that mess.\n>\n\nNone of these entries are from the point mentioned by you [1]\nyesterday where you didn't find the corresponding data in the\nsubscriber. How did you identify that the entries corresponding to\nthat timing were missing?\n\n[1] -\n> 2022-12-14 09:49:31.340\n> 2022-12-14 09:49:41.683\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 28 Dec 2022 10:03:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "Hello.\n\n> None of these entries are from the point mentioned by you [1]\n> yesterday where you didn't find the corresponding data in the\n> subscriber. How did you identify that the entries corresponding to\n> that timing were missing?\n\nSome of the before the interval, some after... But the source database\nwas generating a lot of WAL during logical replication\n- some of these log entries from time AFTER completion of initial sync\nof B but (probably) BEFORE finishing B table catch up (entering\nstreaming mode).\n\nJust to clarify, tables A, B and C are updated in the same\ntransaction, something like:\n\nBEGIN;\nUPDATE A SET x = x +1 WHERE id = :id;\nINSERT INTO B(current_time, :id);\nINSERT INTO C(current_time, :id);\nCOMMIT;\n\nOther (non-mentioned) tables also included into this transaction, but\nonly B missed small amount of data.\n\nSo, shortly the story looks like:\n\n* initial sync of A (and other tables) started and completed, they are\nin streaming mode\n* B and C initial sync started (by altering PUBLICATION and SUBSCRIPTION)\n* B sync completed, but new changes are still applying to the tables\nto catch up primary\n* logical replication apply worker is restarted because IO error on WAL receive\n* Postgres killed\n* Postgres restarted\n* C initial sync restarted\n* logical replication apply worker few times restarted because IO\nerror on WAL receive\n* finally every table in streaming mode but with small gap in B\n\nThanks,\nMichail.\n\n\n",
"msg_date": "Wed, 28 Dec 2022 14:22:08 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:52 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> Hello.\n>\n> > None of these entries are from the point mentioned by you [1]\n> > yesterday where you didn't find the corresponding data in the\n> > subscriber. How did you identify that the entries corresponding to\n> > that timing were missing?\n>\n> Some of the before the interval, some after... But the source database\n> was generating a lot of WAL during logical replication\n> - some of these log entries from time AFTER completion of initial sync\n> of B but (probably) BEFORE finishing B table catch up (entering\n> streaming mode).\n>\n...\n...\n>\n> So, shortly the story looks like:\n>\n> * initial sync of A (and other tables) started and completed, they are\n> in streaming mode\n> * B and C initial sync started (by altering PUBLICATION and SUBSCRIPTION)\n> * B sync completed, but new changes are still applying to the tables\n> to catch up primary\n>\n\nThe point which is not completely clear from your description is the\ntiming of missing records. In one of your previous emails, you seem to\nhave indicated that the data missed from Table B is from the time when\nthe initial sync for Table B was in-progress, right? Also, from your\ndescription, it seems there is no error or restart that happened\nduring the time of initial sync for Table B. Is that understanding\ncorrect?\n\n> * logical replication apply worker is restarted because IO error on WAL receive\n> * Postgres killed\n> * Postgres restarted\n> * C initial sync restarted\n> * logical replication apply worker few times restarted because IO\n> error on WAL receive\n> * finally every table in streaming mode but with small gap in B\n>\n\nI am not able to see how these steps can lead to the problem. 
If the\nproblem is reproducible at your end, you might want to increase LOG\nverbosity to DEBUG1 and see if there is additional information in the\nLOGs that can help or it would be really good if there is a\nself-sufficient test to reproduce it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 Jan 2023 18:19:39 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "Hello, Amid.\n\n> The point which is not completely clear from your description is the\n> timing of missing records. In one of your previous emails, you seem to\n> have indicated that the data missed from Table B is from the time when\n> the initial sync for Table B was in-progress, right? Also, from your\n> description, it seems there is no error or restart that happened\n> during the time of initial sync for Table B. Is that understanding\n> correct?\n\nYes and yes.\n* B sync started - 08:08:34\n* lost records are created - 09:49:xx\n* B initial sync finished - 10:19:08\n* I/O error with WAL - 10:19:22\n* SIGTERM - 10:35:20\n\n\"Finished\" here is `logical replication table synchronization worker\nfor subscription \"cloud_production_main_sub_v4\", table \"B\" has\nfinished`.\nAs far as I know, it is about COPY command.\n\n> I am not able to see how these steps can lead to the problem.\n\nOne idea I have here - it is something related to the patch about\nforbidding of canceling queries while waiting for synchronous\nreplication acknowledgement [1].\nIt is applied to Postgres in the cloud we were using [2]. We started\nto see such errors in 10:24:18:\n\n `The COMMIT record has already flushed to WAL locally and might\nnot have been replicated to the standby. 
We must wait here.`\n\nI wonder could it be some tricky race because of downtime of\nsynchronous replica and queries stuck waiting for ACK forever?\n\n> If the problem is reproducible at your end, you might want to increase LOG\n> verbosity to DEBUG1 and see if there is additional information in the\n> LOGs that can help or it would be really good if there is a\n> self-sufficient test to reproduce it.\n\nUnfortunately, it looks like it is really hard to reproduce.\n\nBest regards,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/flat/CALj2ACU%3DnzEb_dEfoLqez5CLcwvx1GhkdfYRNX%2BA4NDRbjYdBg%40mail.gmail.com#8b7ffc8cdecb89de43c0701b4b6b5142\n[2]: https://www.postgresql.org/message-id/flat/CAAhFRxgcBy-UCvyJ1ZZ1UKf4Owrx4J2X1F4tN_FD%3Dfh5wZgdkw%40mail.gmail.com#9c71a85cb6009eb60d0361de82772a50\n\n\n",
"msg_date": "Tue, 3 Jan 2023 11:43:54 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 2:14 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> > The point which is not completely clear from your description is the\n> > timing of missing records. In one of your previous emails, you seem to\n> > have indicated that the data missed from Table B is from the time when\n> > the initial sync for Table B was in-progress, right? Also, from your\n> > description, it seems there is no error or restart that happened\n> > during the time of initial sync for Table B. Is that understanding\n> > correct?\n>\n> Yes and yes.\n> * B sync started - 08:08:34\n> * lost records are created - 09:49:xx\n> * B initial sync finished - 10:19:08\n> * I/O error with WAL - 10:19:22\n> * SIGTERM - 10:35:20\n>\n> \"Finished\" here is `logical replication table synchronization worker\n> for subscription \"cloud_production_main_sub_v4\", table \"B\" has\n> finished`.\n> As far as I know, it is about COPY command.\n>\n> > I am not able to see how these steps can lead to the problem.\n>\n> One idea I have here - it is something related to the patch about\n> forbidding of canceling queries while waiting for synchronous\n> replication acknowledgement [1].\n> It is applied to Postgres in the cloud we were using [2]. We started\n> to see such errors in 10:24:18:\n>\n> `The COMMIT record has already flushed to WAL locally and might\n> not have been replicated to the standby. We must wait here.`\n>\n\nDoes that by any chance mean you are using a non-community version of\nPostgres which has some other changes?\n\n> I wonder could it be some tricky race because of downtime of\n> synchronous replica and queries stuck waiting for ACK forever?\n>\n\nIt is possible but ideally, in that case, the client should request\nsuch a transaction again.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:30:09 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "> Does that by any chance mean you are using a non-community version of\n> Postgres which has some other changes?\n\nIt is a managed Postgres service in the general cloud. Usually, such\nproviders apply some custom minor patches.\nThe only one I know about - about forbidding of canceling queries\nwhile waiting for synchronous replication acknowledgement.\n\n> It is possible but ideally, in that case, the client should request\n> such a transaction again.\n\nI am not sure I get you here.\n\nI'll try to explain what I mean:\n\nThe patch I'm referring to does not allow canceling a query while it\nwaiting acknowledge for ACK for COMMIT message in case of synchronous\nreplication.\nIf synchronous standby is down - query and connection just stuck until\nserver restart (or until standby become available to process ACK).\nTuples changed by such a hanging transaction are not visible by other\ntransactions. It is all done to prevent seeing spurious tuples in case\nof network split.\n\nSo, it seems like we had such a situation during that story because of\nour synchronous standby downtime (before server restart).\nMy thoughts just about the possibility of fact that such transactions\n(waiting for ACK for COMMIT) are handled somehow incorrectly by\nlogical replication engine.\n\nMichail.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 18:20:11 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 8:50 PM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n>\n> > Does that by any chance mean you are using a non-community version of\n> > Postgres which has some other changes?\n>\n> It is a managed Postgres service in the general cloud. Usually, such\n> providers apply some custom minor patches.\n> The only one I know about - about forbidding of canceling queries\n> while waiting for synchronous replication acknowledgement.\n>\n\nOkay, but it would be better to know what all the other changes they have.\n\n> > It is possible but ideally, in that case, the client should request\n> > such a transaction again.\n>\n> I am not sure I get you here.\n>\n> I'll try to explain what I mean:\n>\n> The patch I'm referring to does not allow canceling a query while it\n> waiting acknowledge for ACK for COMMIT message in case of synchronous\n> replication.\n> If synchronous standby is down - query and connection just stuck until\n> server restart (or until standby become available to process ACK).\n> Tuples changed by such a hanging transaction are not visible by other\n> transactions. It is all done to prevent seeing spurious tuples in case\n> of network split.\n>\n> So, it seems like we had such a situation during that story because of\n> our synchronous standby downtime (before server restart).\n> My thoughts just about the possibility of fact that such transactions\n> (waiting for ACK for COMMIT) are handled somehow incorrectly by\n> logical replication engine.\n>\n\nI understood this point yesterday but we do have handling for such\ncases. Say, if the subscriber is down during the time of such\nsynchronous transactions, after the restart, it will request to\nrestart the replication from a point which is prior to such\ntransactions. We ensure this by replication origins. See docs [1] for\nmore information about the same. 
Now, it is possible that there is a\nbug in that mechanism but it is difficult to find it without some\nhints from LOGs or a reproducible test. It is also possible that there\nis another area that has a bug in the Postgres code. But, OTOH, we\ncan't rule out the possibility that it is because of some features\nadded by managed service unless you can reproduce it on the Postgres\nbuild.\n\n[1] - https://www.postgresql.org/docs/devel/replication-origins.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Jan 2023 19:16:52 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Data loss on logical replication, 12.12 to 14.5,\n ALTER SUBSCRIPTION"
}
] |
[
{
"msg_contents": "Hi hackers!\n\nWhile working on Pluggable TOAST [1] we found out that creation\nof new relation with CREATE TABLE AS... or CREATE TABLE LIKE -\nmethod\nstatic ObjectAddress create_ctas_internal(List *attrList, IntoClause *into)\ndoes not receive any metadata from columns or tables used in query\n(if any). It makes sense to pass not only column type and size, but\nall other metadata - like attoptions,base relation OID (and, maybe,\nreloptions), if the column from existing relation was used.\n\nA good example is the creation of new relation from base one where\nsome other Toaster was assigned to a column - it seems reasonable\nthat the same column in new table must have the same Toaster assigned\nas the base one. And we already have a couple of other practical uses\nfor the metadata passed along with column definitions.\n\nAny thoughts or suggestions?\n\n[1] https://commitfest.postgresql.org/41/3490/\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!While working on Pluggable TOAST [1] we found out that creationof new relation with CREATE TABLE AS... or CREATE TABLE LIKE -methodstatic ObjectAddress create_ctas_internal(List *attrList, IntoClause *into)does not receive any metadata from columns or tables used in query(if any). It makes sense to pass not only column type and size, butall other metadata - like attoptions,base relation OID (and, maybe,reloptions), if the column from existing relation was used. A good example is the creation of new relation from base one wheresome other Toaster was assigned to a column - it seems reasonablethat the same column in new table must have the same Toaster assignedas the base one. And we already have a couple of other practical usesfor the metadata passed along with column definitions.Any thoughts or suggestions?[1] https://commitfest.postgresql.org/41/3490/-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/",
"msg_date": "Mon, 26 Dec 2022 22:15:05 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Passing relation metadata to Exec routine"
},
{
"msg_contents": "Nikita Malakhov <hukutoc@gmail.com> writes:\n> While working on Pluggable TOAST [1] we found out that creation\n> of new relation with CREATE TABLE AS... or CREATE TABLE LIKE -\n> method\n> static ObjectAddress create_ctas_internal(List *attrList, IntoClause *into)\n> does not receive any metadata from columns or tables used in query\n> (if any). It makes sense to pass not only column type and size, but\n> all other metadata - like attoptions,base relation OID (and, maybe,\n> reloptions), if the column from existing relation was used.\n\nI am very, very skeptical of the premise here.\n\nCREATE TABLE AS creates a table based on the output of a query.\nThat query could involve arbitrarily complex processing -- joins,\ngrouping, what-have-you. I don't see how it makes sense to\nconsider the properties of the base table(s) in deciding how to\ncreate the output table. I certainly do not think it'd be sane for\nthat to behave differently depending on how complicated the query is.\n\nAs for CREATE TABLE LIKE, the point of that is to create a table\nby copying a *specified* set of properties of a reference table.\nI don't think letting an access method copy some other properties\nbehind the user's back is a great idea. If you've got things that\nit'd make sense to be able to inherit, let's discuss adding more\nLIKE options to support that --- in which case the implementation\nwould presumably pass the data through in a non-back-door fashion.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 26 Dec 2022 16:56:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Passing relation metadata to Exec routine"
},
{
"msg_contents": "Hi Tom!\n\nThank you for your feedback. I agree that for complex columns\ncreated with joins, grouping, etc considering properties of the base\ntable does not make sense at all.\n\nBut for CREATE TABLE LIKE and simple columns that are inherited\nfrom some existing relations - it does, if we consider some advanced\nproperties and from user's perspective - want our new table [columns]\nto behave exactly as the base ones (in some ways like encryption,\nstorage, compression methods, etc). LIKE options is a good idea,\nthank you, but when we CREATE TABLE AS - maybe, we take it\ninto account too?\n\nI agree that passing these parameters in a backdoor fashion is not\nvery transparent and user-friendly, too.\n\nOn Tue, Dec 27, 2022 at 12:56 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Nikita Malakhov <hukutoc@gmail.com> writes:\n> > While working on Pluggable TOAST [1] we found out that creation\n> > of new relation with CREATE TABLE AS... or CREATE TABLE LIKE -\n> > method\n> > static ObjectAddress create_ctas_internal(List *attrList, IntoClause\n> *into)\n> > does not receive any metadata from columns or tables used in query\n> > (if any). It makes sense to pass not only column type and size, but\n> > all other metadata - like attoptions,base relation OID (and, maybe,\n> > reloptions), if the column from existing relation was used.\n>\n> I am very, very skeptical of the premise here.\n>\n> CREATE TABLE AS creates a table based on the output of a query.\n> That query could involve arbitrarily complex processing -- joins,\n> grouping, what-have-you. I don't see how it makes sense to\n> consider the properties of the base table(s) in deciding how to\n> create the output table. I certainly do not think it'd be sane for\n> that to behave differently depending on how complicated the query is.\n>\n> As for CREATE TABLE LIKE, the point of that is to create a table\n> by copying a *specified* set of properties of a reference table.\n> I don't think letting an access method copy some other properties\n> behind the user's back is a great idea. If you've got things that\n> it'd make sense to be able to inherit, let's discuss adding more\n> LIKE options to support that --- in which case the implementation\n> would presumably pass the data through in a non-back-door fashion.\n>\n> regards, tom lane\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/",
"msg_date": "Tue, 27 Dec 2022 15:18:09 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Passing relation metadata to Exec routine"
}
] |
[
{
"msg_contents": "Hi\n\nI got new warning\n\n -o session.bc session.c\nanalyze.c: In function ‘transformStmt’:\nanalyze.c:550:21: warning: ‘sub_rteperminfos’ may be used uninitialized\n[-Wmaybe-uninitialized]\n 550 | List *sub_rteperminfos;\n | ^~~~~~~~~~~~~~~~\n\n<-->if (isGeneralSelect)\n<-->{\n<--><-->sub_rtable = pstate->p_rtable;\n<--><-->pstate->p_rtable = NIL;\n<--><-->sub_rteperminfos = pstate->p_rteperminfos;\n<--><-->pstate->p_rteperminfos = NIL;\n<--><-->sub_namespace = pstate->p_namespace;\n<--><-->pstate->p_namespace = NIL;\n<-->}\n<-->else\n<-->{\n<--><-->sub_rtable = NIL;<-><-->/* not used, but keep compiler quiet */\n<--><-->sub_namespace = NIL;\n --- missing sub_rteperminfos\n<-->}\n\nRegards\n\nPavel",
"msg_date": "Tue, 27 Dec 2022 06:55:30 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "build gcc warning"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> I got new warning\n> analyze.c: In function ‘transformStmt’:\n> analyze.c:550:21: warning: ‘sub_rteperminfos’ may be used uninitialized\n> [-Wmaybe-uninitialized]\n\nA couple of buildfarm animals are warning about that too ... but\nonly a couple.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Dec 2022 01:55:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build gcc warning"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-27 01:55:06 -0500, Tom Lane wrote:\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > I got new warning\n> > analyze.c: In function ‘transformStmt’:\n> > analyze.c:550:21: warning: ‘sub_rteperminfos’ may be used uninitialized\n> > [-Wmaybe-uninitialized]\n> \n> A couple of buildfarm animals are warning about that too ... but\n> only a couple.\n\nI'm a bit confused by gcc getting confused here - the condition for\nsub_rteperminfos getting initialized and used are the same. Most of the time\nthe maybe-uninitialized logic seems to be better than this.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:17:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: build gcc warning"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-12-27 01:55:06 -0500, Tom Lane wrote:\n>> A couple of buildfarm animals are warning about that too ... but\n>> only a couple.\n\n> I'm a bit confused by gcc getting confused here - the condition for\n> sub_rteperminfos getting initialized and used are the same. Most of the time\n> the maybe-uninitialized logic seems to be better than this.\n\nApparently the key phrase there is \"most of the time\" ;-).\n\nI see that we've had an equally \"unnecessary\" initialization of the\nsibling variable sub_rtable for a long time, so the problem's been\nthere for some people before. I made it initialize sub_rteperminfos\nthe same way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Dec 2022 18:10:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build gcc warning"
}
] |
[
{
"msg_contents": "Another aclchk.c refactoring patch, similar to [0] and [1].\n\nRefactor recordExtObjInitPriv(): Instead of half a dozen of \nmostly-duplicate conditional branches, write one common one that can \nhandle most catalogs. We already have all the information we need, such \nas which system catalog corresponds to which catalog table and which \ncolumn is the ACL column.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/95c30f96-4060-2f48-98b5-a4392d3b6066@enterprisedb.com\n[1]: \nhttps://www.postgresql.org/message-id/flat/22c7e802-4e7d-8d87-8b71-cba95e6f4bcf%40enterprisedb.com",
"msg_date": "Tue, 27 Dec 2022 09:56:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 09:56:10AM +0100, Peter Eisentraut wrote:\n> Refactor recordExtObjInitPriv(): Instead of half a dozen of\n> mostly-duplicate conditional branches, write one common one that can handle\n> most catalogs. We already have all the information we need, such as which\n> system catalog corresponds to which catalog table and which column is the\n> ACL column.\n\nThis seems reasonable.\n\n> +\t/* This will error on unsupported classoid. */\n> +\telse if (get_object_attnum_acl(classoid))\n\nnitpick: I would suggest explicitly checking that it isn't\nInvalidAttrNumber instead of relying on it always being 0.\n\n> -\t\t\t classoid == AggregateRelationId ||\n\nI noticed that AggregateRelationId isn't listed in the ObjectProperty\narray, so I think recordExtObjInitPriv() will begin erroring for that\nclassoid instead of ignoring it like we do today.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 11 Jan 2023 16:04:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On 12.01.23 01:04, Nathan Bossart wrote:\n>> -\t\t\t classoid == AggregateRelationId ||\n> I noticed that AggregateRelationId isn't listed in the ObjectProperty\n> array, so I think recordExtObjInitPriv() will begin erroring for that\n> classoid instead of ignoring it like we do today.\n\nHmm, we do have some extensions in contrib that add aggregates (citext, \nintagg). I suspect that the aggregate function is actually registered \ninto the extension via its pg_proc entry, so this wouldn't actually \nmatter. But maybe the commenting should be clearer?\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 18:15:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 12.01.23 01:04, Nathan Bossart wrote:\n> -\t\t\t classoid == AggregateRelationId ||\n>> I noticed that AggregateRelationId isn't listed in the ObjectProperty\n>> array, so I think recordExtObjInitPriv() will begin erroring for that\n>> classoid instead of ignoring it like we do today.\n\n> Hmm, we do have some extensions in contrib that add aggregates (citext, \n> intagg). I suspect that the aggregate function is actually registered \n> into the extension via its pg_proc entry, so this wouldn't actually \n> matter. But maybe the commenting should be clearer?\n\nYeah, I don't believe that AggregateRelationId is used in object\naddresses; we just refer to pg_proc for any kind of function including\naggregates. Note that there is no \"oid\" column in pg_aggregate.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 12 Jan 2023 12:20:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 12:20:50PM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 12.01.23 01:04, Nathan Bossart wrote:\n>> -\t\t\t classoid == AggregateRelationId ||\n>>> I noticed that AggregateRelationId isn't listed in the ObjectProperty\n>>> array, so I think recordExtObjInitPriv() will begin erroring for that\n>>> classoid instead of ignoring it like we do today.\n> \n>> Hmm, we do have some extensions in contrib that add aggregates (citext, \n>> intagg). I suspect that the aggregate function is actually registered \n>> into the extension via its pg_proc entry, so this wouldn't actually \n>> matter. But maybe the commenting should be clearer?\n> \n> Yeah, I don't believe that AggregateRelationId is used in object\n> addresses; we just refer to pg_proc for any kind of function including\n> aggregates. Note that there is no \"oid\" column in pg_aggregate.\n\nGot it, thanks for clarifying.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 09:40:49 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On 12.01.23 18:40, Nathan Bossart wrote:\n> On Thu, Jan 12, 2023 at 12:20:50PM -0500, Tom Lane wrote:\n>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>> On 12.01.23 01:04, Nathan Bossart wrote:\n>>> -\t\t\t classoid == AggregateRelationId ||\n>>>> I noticed that AggregateRelationId isn't listed in the ObjectProperty\n>>>> array, so I think recordExtObjInitPriv() will begin erroring for that\n>>>> classoid instead of ignoring it like we do today.\n>>\n>>> Hmm, we do have some extensions in contrib that add aggregates (citext,\n>>> intagg). I suspect that the aggregate function is actually registered\n>>> into the extension via its pg_proc entry, so this wouldn't actually\n>>> matter. But maybe the commenting should be clearer?\n>>\n>> Yeah, I don't believe that AggregateRelationId is used in object\n>> addresses; we just refer to pg_proc for any kind of function including\n>> aggregates. Note that there is no \"oid\" column in pg_aggregate.\n> \n> Got it, thanks for clarifying.\n\nI have updated the patch as you suggested and split out the aggregate \nissue into a separate patch for clarity.",
"msg_date": "Mon, 16 Jan 2023 12:01:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 12:01:47PM +0100, Peter Eisentraut wrote:\n> I have updated the patch as you suggested and split out the aggregate issue\n> into a separate patch for clarity.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 16 Jan 2023 14:43:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
},
{
"msg_contents": "On 16.01.23 23:43, Nathan Bossart wrote:\n> On Mon, Jan 16, 2023 at 12:01:47PM +0100, Peter Eisentraut wrote:\n>> I have updated the patch as you suggested and split out the aggregate issue\n>> into a separate patch for clarity.\n> \n> LGTM\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 17 Jan 2023 20:16:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Refactor recordExtObjInitPriv()"
}
] |
[
{
"msg_contents": "Here is a patch to add support for underscores in numeric literals, for \nvisual grouping, like\n\n 1_500_000_000\n 0b10001000_00000000\n 0o_1_755\n 0xFFFF_FFFF\n 1.618_034\n\nper SQL:202x draft.\n\nThis adds support in the lexer as well as in the integer type input \nfunctions.\n\nTODO: float/numeric type input support\n\nI did some performance tests similar to what was done in [0] and didn't \nfind any problematic deviations. Other tests would be welcome.\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com",
"msg_date": "Tue, 27 Dec 2022 10:15:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Underscores in numeric literals"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Here is a patch to add support for underscores in numeric literals, for \n> visual grouping, like\n\n> 1_500_000_000\n> 0b10001000_00000000\n> 0o_1_755\n> 0xFFFF_FFFF\n> 1.618_034\n\n> per SQL:202x draft.\n\n> This adds support in the lexer as well as in the integer type input \n> functions.\n> TODO: float/numeric type input support\n\nHmm ... I'm on board with allowing this in SQL if the committee says\nso. I'm not especially on board with accepting it in datatype input\nfunctions. There's been zero demand for that AFAIR. Moreover,\nI don't think we need the inevitable I/O performance hit, nor the\nincreased risk of accepting garbage, nor the certainty of\ninconsistency with other places that don't get converted (because\nthey depend on strtoul() or whatever).\n\nWe already accept that numeric input is different from numeric\nliterals: you can't write Infinity or NaN in SQL without quotes.\nSo I don't see an argument that we have to allow this in numeric\ninput for consistency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 27 Dec 2022 09:55:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 09:55:32AM -0500, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > Here is a patch to add support for underscores in numeric literals, for \n> > visual grouping, like\n> \n> > 1_500_000_000\n> > 0b10001000_00000000\n> > 0o_1_755\n> > 0xFFFF_FFFF\n> > 1.618_034\n> \n> > per SQL:202x draft.\n> \n> > This adds support in the lexer as well as in the integer type input \n> > functions.\n> > TODO: float/numeric type input support\n> \n> Hmm ... I'm on board with allowing this in SQL if the committee says\n> so.\n\n> I'm not especially on board with accepting it in datatype input\n> functions. There's been zero demand for that AFAIR. Moreover,\n> I don't think we need the inevitable I/O performance hit, nor the\n> increased risk of accepting garbage, nor the certainty of\n> inconsistency with other places that don't get converted (because\n> they depend on strtoul() or whatever).\n\n+1 to accept underscores only in literals and leave input functions\nalone.\n\n(When I realized that python3.6 changed to accept things like\nint(\"3_5\"), I felt compelled to write a wrapper to check for embedded\nunderscores and raise an exception in that case. And I'm sure it\naffected performance.)\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 27 Dec 2022 09:16:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "\nOn 2022-12-27 Tu 09:55, Tom Lane wrote:\n> We already accept that numeric input is different from numeric\n> literals: you can't write Infinity or NaN in SQL without quotes.\n> So I don't see an argument that we have to allow this in numeric\n> input for consistency.\n>\n\nThat's almost the same, but not quite, ISTM. Those are things you can't\nsay without quotes, but here unless I'm mistaken you'd be disallowing\nthis style if you use quotes. I get the difficulties with input\nfunctions, but it seems like we'll be building lots of grounds for\nconfusion.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 28 Dec 2022 09:28:11 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On Wed, 28 Dec 2022 at 14:28, Andrew Dunstan <andrew@dunslane.net> wrote:\n>\n> On 2022-12-27 Tu 09:55, Tom Lane wrote:\n> > We already accept that numeric input is different from numeric\n> > literals: you can't write Infinity or NaN in SQL without quotes.\n> > So I don't see an argument that we have to allow this in numeric\n> > input for consistency.\n>\n> That's almost the same, but not quite, ISTM. Those are things you can't\n> say without quotes, but here unless I'm mistaken you'd be disallowing\n> this style if you use quotes. I get the difficulties with input\n> functions, but it seems like we'll be building lots of grounds for\n> confusion.\n>\n\nYeah, it's easy to see why something like 'NaN' needs quotes, but it\nwould be harder to explain why something like 1000_000 mustn't have\nquotes, and couldn't be used as input to COPY.\n\nMy feeling is that we should try to make the datatype input functions\naccept anything that is legal syntax as a numeric literal, even if the\nreverse isn't always possible.\n\nThat said, I think it's very important to minimise any performance\nhit, especially in the existing case of inputs with no underscores.\n\nLooking at the patch's changes to pg_strtointNN(), I think there's\nmore that can be done to reduce that performance hit. As it stands,\nevery input character is checked to see if it's an underscore, and\nthen there's a new check at the end to ensure that the input string\ndoesn't have a trailing underscore. Both of those can be avoided by\nrearranging things a little, as in the attached v2 patch.\n\nIn the v2 patch, each input character is only compared with underscore\nif it's not a digit, so in the case of an input with no underscores or\ntrailing spaces, the new checks for underscores are never executed.\n\nIn addition, if an underscore is seen, it now checks that the next\ncharacter is a digit. 
This eliminates the possibility of two\nunderscores in a row, and also of a trailing underscore, and so there\nis no need for the final check for trailing underscores.\n\nThus, if the input consists only of digits, it never has to test for\nunderscores at all, and the performance hit for this case is\nminimised.\n\nMy other concern with this patch is that the responsibility for\nhandling underscores is distributed over a couple of different places.\nI had the same concern about the non-decimal integer patch, but at the\ntime I couldn't see any way round it. Now that we have soft error\nhandling though, I think that there is a way to improve this,\ncentralising the logic for both underscore and non-decimal handling to\none place for each datatype, reducing code duplication and the chances\nof bugs.\n\nFor example, make_const() in the T_Float case has gained new code to\nparse both the sign and base-prefix of the input, duplicating the\nlogic in pg_strtointNN(). That can now be avoided by having it call\npg_strtoint64_safe() with an ErrorSaveContext, instead of strtoi64().\nIn the process, it would then gain the ability to handle underscores,\nso they wouldn't need to be stripped off elsewhere.\n\nSimilarly, process_integer_literal() could be made to call\npg_strtoint32_safe() with an ErrorSaveContext instead of strtoint(),\nand it then wouldn't need to strip off underscores, or be passed the\nnumber's base, since pg_strtoint32_safe() would handle all of that.\n\nIn addition, I think that strip_underscores() could then go away if\nnumeric_in() were made to handle underscores.\n\nEssentially then, that would move all responsibility for parsing\nunderscores and non-decimal integers to the datatype input functions,\nor their support routines, rather than having it distributed.\n\nRegards,\nDean",
"msg_date": "Wed, 4 Jan 2023 09:28:20 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "Oh, one other minor nit -- in parser/scan.l:\n\n-real ({decinteger}|{numeric})[Ee][-+]?{decdigit}+\n+real ({decinteger}|{numeric})[Ee][-+]?{decinteger}+\n\nthe final \"+\" isn't necessary now.\n\nRegards,\nDean\n\n\n",
"msg_date": "Wed, 4 Jan 2023 09:31:14 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 09:28, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> In addition, I think that strip_underscores() could then go away if\n> numeric_in() were made to handle underscores.\n>\n> Essentially then, that would move all responsibility for parsing\n> underscores and non-decimal integers to the datatype input functions,\n> or their support routines, rather than having it distributed.\n>\n\nHere's an update with those changes.\n\nRegards,\nDean",
"msg_date": "Mon, 23 Jan 2023 20:45:19 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On 23.01.23 21:45, Dean Rasheed wrote:\n> On Wed, 4 Jan 2023 at 09:28, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> In addition, I think that strip_underscores() could then go away if\n>> numeric_in() were made to handle underscores.\n>>\n>> Essentially then, that would move all responsibility for parsing\n>> underscores and non-decimal integers to the datatype input functions,\n>> or their support routines, rather than having it distributed.\n> \n> Here's an update with those changes.\n\nThis looks good to me.\n\nDid you have any thoughts about what to do with the float types? I \nguess we could handle those in a separate patch?\n\n\n\n",
"msg_date": "Tue, 31 Jan 2023 16:28:48 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On Tue, 31 Jan 2023 at 15:28, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Did you have any thoughts about what to do with the float types? I\n> guess we could handle those in a separate patch?\n>\n\nI was assuming that we'd do nothing for float types, because anything\nwe did would necessarily impact their performance.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 31 Jan 2023 16:09:29 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On 31.01.23 17:09, Dean Rasheed wrote:\n> On Tue, 31 Jan 2023 at 15:28, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Did you have any thoughts about what to do with the float types? I\n>> guess we could handle those in a separate patch?\n>>\n> \n> I was assuming that we'd do nothing for float types, because anything\n> we did would necessarily impact their performance.\n\nYeah, as long as we are using strtof() and strtod() we should just leave \nit alone. If we have break that open and hand-code something, we can \nreconsider it.\n\nSo I think you could go ahead with committing your patch and we can \nconsider this topic done for now.\n\n\n\n",
"msg_date": "Thu, 2 Feb 2023 23:39:59 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Underscores in numeric literals"
},
{
"msg_contents": "On Thu, 2 Feb 2023 at 22:40, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 31.01.23 17:09, Dean Rasheed wrote:\n> > On Tue, 31 Jan 2023 at 15:28, Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >> Did you have any thoughts about what to do with the float types? I\n> >> guess we could handle those in a separate patch?\n> >>\n> >\n> > I was assuming that we'd do nothing for float types, because anything\n> > we did would necessarily impact their performance.\n>\n> Yeah, as long as we are using strtof() and strtod() we should just leave\n> it alone. If we have break that open and hand-code something, we can\n> reconsider it.\n>\n> So I think you could go ahead with committing your patch and we can\n> consider this topic done for now.\n>\n\nDone.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 4 Feb 2023 10:29:48 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Underscores in numeric literals"
}
] |
[
{
"msg_contents": "Hi,\n\nHere's a patch that implements the idea of extracting full page images\nfrom WAL records [1] [2] with a function in pg_walinspect. This new\nfunction accepts start and end lsn and returns full page image info\nsuch as WAL record lsn, tablespace oid, database oid, relfile number,\nblock number, fork name and the raw full page (as bytea). I'll\nregister this in the next commitfest.\n\nThoughts?\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d497093cbecccf6df26365e06a5f8f8614b591c8\n[2] https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Dec 2022 17:18:53 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "Hi,\n\nOn 12/27/22 12:48 PM, Bharath Rupireddy wrote:\n> Hi,\n> \n> Here's a patch that implements the idea of extracting full page images\n> from WAL records [1] [2] with a function in pg_walinspect. This new\n> function accepts start and end lsn and returns full page image info\n> such as WAL record lsn, tablespace oid, database oid, relfile number,\n> block number, fork name and the raw full page (as bytea). I'll\n> register this in the next commitfest.\n> \n> Thoughts?\n> \n\nI think it makes sense to somehow align the pg_walinspect functions with the pg_waldump \"features\".\nAnd since [1] added FPI \"extraction\" then +1 for the proposed patch in this thread.\n\n> [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d497093cbecccf6df26365e06a5f8f8614b591c8\n> [2] https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com\n\nI just have a few comments:\n\n+\n+/*\n+ * Get full page images and their info associated with a given WAL record.\n+ */\n\nWhat about adding a few words about compression? (like \"Decompression is applied if necessary\"?)\n\n\n+ /* Full page exists, so let's output it. */\n+ if (!RestoreBlockImage(record, block_id, page))\n\n\"Full page exists, so let's output its info and content.\" instead?\n\n\n+ <para>\n+ Gets raw full page images and their information associated with all the\n+ valid WAL records between <replaceable>start_lsn</replaceable> and\n+ <replaceable>end_lsn</replaceable>. Returns one row per full page image.\n\nWorth to add a few words about decompression too?\n\n\nI'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\nextracted images are the same and matches the one modified right after the checkpoint.\n\nWhat do you think? (could be done later in another patch though).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:49:45 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 8:19 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> I think it makes sense to somehow align the pg_walinspect functions with the pg_waldump \"features\".\n> And since [1] added FPI \"extraction\" then +1 for the proposed patch in this thread.\n>\n> > [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d497093cbecccf6df26365e06a5f8f8614b591c8\n> > [2] https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com\n>\n> I just have a few comments:\n\nThanks for reviewing.\n\n> +\n> +/*\n> + * Get full page images and their info associated with a given WAL record.\n> + */\n>\n>\n> + <para>\n> + Gets raw full page images and their information associated with all the\n> + valid WAL records between <replaceable>start_lsn</replaceable> and\n> + <replaceable>end_lsn</replaceable>. Returns one row per full page image.\n>\n> Worth to add a few words about decompression too?\n\nDone.\n\n> What about adding a few words about compression? (like \"Decompression is applied if necessary\"?)\n>\n>\n> + /* Full page exists, so let's output it. */\n> + if (!RestoreBlockImage(record, block_id, page))\n>\n> \"Full page exists, so let's output its info and content.\" instead?\n\nDone.\n\n> I'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\n> extracted images are the same and matches the one modified right after the checkpoint.\n>\n> What do you think? (could be done later in another patch though).\n\nI think pageinspect can be used here. We can fetch the raw page from\nthe table after the checkpoint and raw FPI from the WAL record logged\nas part of the update. I've tried to do so [1], but I see a slight\ndifference in the raw output. The expectation is that they both be the\nsame. It might be that the update operation logs the FPI with some\nmore info set (prune_xid). I'll try to see why it is so.\n\nI'm attaching the v2 patch for further review.\n\n[1]\nSELECT * FROM page_header(:'page_from_table');\n lsn | checksum | flags | lower | upper | special | pagesize |\nversion | prune_xid\n-----------+----------+-------+-------+-------+---------+----------+---------+-----------\n 0/1891D78 | 0 | 0 | 40 | 8064 | 8192 | 8192 |\n 4 | 0\n(1 row)\n\nSELECT * FROM page_header(:'page_from_wal');\n lsn | checksum | flags | lower | upper | special | pagesize |\nversion | prune_xid\n-----------+----------+-------+-------+-------+---------+----------+---------+-----------\n 0/1891D78 | 0 | 0 | 44 | 8032 | 8192 | 8192 |\n 4 | 735\n(1 row)\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 5 Jan 2023 18:51:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
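Aside: the page_header() fields being compared in the message above can also be decoded client-side from the raw bytea. A minimal sketch of that decode (Python; assumes a little-endian build with default 8kB pages, and is illustrative only — it is not part of the patch under discussion):

```python
import struct

def parse_page_header(page: bytes) -> dict:
    # PageHeaderData layout: pd_lsn stored as two uint32 halves,
    # then pd_checksum, pd_flags, pd_lower, pd_upper, pd_special,
    # pd_pagesize_version (all uint16), and pd_prune_xid (uint32);
    # 24 bytes total, little-endian on a little-endian build.
    (hi, lo, checksum, flags, lower, upper,
     special, psv, prune_xid) = struct.unpack_from("<IIHHHHHHI", page, 0)
    return {
        "lsn": f"{hi:X}/{lo:X}",
        "checksum": checksum,
        "flags": flags,
        "lower": lower,
        "upper": upper,
        "special": special,
        "pagesize": psv & 0xFF00,   # page size is a multiple of 256
        "version": psv & 0x00FF,
        "prune_xid": prune_xid,
    }

# Example: a header with the values page_header() reported above
# for the page pulled from the table.
hdr = struct.pack("<IIHHHHHHI", 0, 0x01891D78, 0, 0, 40, 8064,
                  8192, 8192 | 4, 0)
print(parse_page_header(hdr)["lsn"])  # -> 0/1891D78
```

With something like this, the page fetched from the table and the FPI fetched from the WAL record can be diffed field by field (lower/upper/prune_xid being the ones that differ in the output quoted above).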
{
"msg_contents": "On Thu, 5 Jan 2023 at 18:52, Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 4, 2023 at 8:19 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n> >\n> > I think it makes sense to somehow align the pg_walinspect functions with the pg_waldump \"features\".\n> > And since [1] added FPI \"extraction\" then +1 for the proposed patch in this thread.\n> >\n> > > [1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d497093cbecccf6df26365e06a5f8f8614b591c8\n> > > [2] https://postgr.es/m/CAOxo6XKjQb2bMSBRpePf3ZpzfNTwjQUc4Tafh21=jzjX6bX8CA@mail.gmail.com\n> >\n> > I just have a few comments:\n>\n> Thanks for reviewing.\n>\n> > +\n> > +/*\n> > + * Get full page images and their info associated with a given WAL record.\n> > + */\n> >\n> >\n> > + <para>\n> > + Gets raw full page images and their information associated with all the\n> > + valid WAL records between <replaceable>start_lsn</replaceable> and\n> > + <replaceable>end_lsn</replaceable>. Returns one row per full page image.\n> >\n> > Worth to add a few words about decompression too?\n>\n> Done.\n>\n> > What about adding a few words about compression? (like \"Decompression is applied if necessary\"?)\n> >\n> >\n> > + /* Full page exists, so let's output it. */\n> > + if (!RestoreBlockImage(record, block_id, page))\n> >\n> > \"Full page exists, so let's output its info and content.\" instead?\n>\n> Done.\n>\n> > I'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\n> > extracted images are the same and matches the one modified right after the checkpoint.\n> >\n> > What do you think? (could be done later in another patch though).\n>\n> I think pageinspect can be used here. We can fetch the raw page from\n> the table after the checkpoint and raw FPI from the WAL record logged\n> as part of the update. I've tried to do so [1], but I see a slight\n> difference in the raw output. The expectation is that they both be the\n> same. It might be that the update operation logs the FPI with some\n> more info set (prune_xid). I'll try to see why it is so.\n>\n> I'm attaching the v2 patch for further review.\n\nI felt one of the files was missing in the patch:\n[13:39:03.534] contrib/pg_walinspect/meson.build:19:0: ERROR: File\npg_walinspect--1.0--1.1.sql does not exist.\n\nPlease post an updated version for the same.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 6 Jan 2023 11:24:47 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 6:51 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > I'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\n> > extracted images are the same and matches the one modified right after the checkpoint.\n> >\n> > What do you think? (could be done later in another patch though).\n>\n> I think pageinspect can be used here. We can fetch the raw page from\n> the table after the checkpoint and raw FPI from the WAL record logged\n> as part of the update. I've tried to do so [1], but I see a slight\n> difference in the raw output. The expectation is that they both be the\n> same. It might be that the update operation logs the FPI with some\n> more info set (prune_xid). I'll try to see why it is so.\n>\n> I'm attaching the v2 patch for further review.\n>\n> [1]\n> SELECT * FROM page_header(:'page_from_table');\n> lsn | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> 0/1891D78 | 0 | 0 | 40 | 8064 | 8192 | 8192 |\n> 4 | 0\n> (1 row)\n>\n> SELECT * FROM page_header(:'page_from_wal');\n> lsn | checksum | flags | lower | upper | special | pagesize |\n> version | prune_xid\n> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> 0/1891D78 | 0 | 0 | 44 | 8032 | 8192 | 8192 |\n> 4 | 735\n> (1 row)\n\nUgh, v2 patch missed the new file added, I'm attaching v3 patch for\nfurther review. Sorry for the noise.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 6 Jan 2023 11:47:55 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 11:47 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 5, 2023 at 6:51 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > I'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\n> > > extracted images are the same and matches the one modified right after the checkpoint.\n> > >\n> > > What do you think? (could be done later in another patch though).\n> >\n> > I think pageinspect can be used here. We can fetch the raw page from\n> > the table after the checkpoint and raw FPI from the WAL record logged\n> > as part of the update. I've tried to do so [1], but I see a slight\n> > difference in the raw output. The expectation is that they both be the\n> > same. It might be that the update operation logs the FPI with some\n> > more info set (prune_xid). I'll try to see why it is so.\n> >\n> > I'm attaching the v2 patch for further review.\n> >\n> > [1]\n> > SELECT * FROM page_header(:'page_from_table');\n> > lsn | checksum | flags | lower | upper | special | pagesize |\n> > version | prune_xid\n> > -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> > 0/1891D78 | 0 | 0 | 40 | 8064 | 8192 | 8192 |\n> > 4 | 0\n> > (1 row)\n> >\n> > SELECT * FROM page_header(:'page_from_wal');\n> > lsn | checksum | flags | lower | upper | special | pagesize |\n> > version | prune_xid\n> > -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n> > 0/1891D78 | 0 | 0 | 44 | 8032 | 8192 | 8192 |\n> > 4 | 735\n> > (1 row)\n>\n> Ugh, v2 patch missed the new file added, I'm attaching v3 patch for\n> further review. Sorry for the noise.\n\nI took a stab at how and what gets logged as FPI in WAL records:\n\nOption 1:\nWAL record with FPI contains both the unmodified table page from the\ndisk after checkpoint and new tuple (not applied to the unmodified\npage) and the recovery (redo) applies the new tuple to the unmodified\npage as part of recovery. A bit more WAL is needed to store both\nunmodified page and new tuple data in the WAL record and recovery can\nget slower a bit too as it needs to stitch the modified page.\n\nOption 2:\nWAL record with FPI contains only the modified page (new tuple applied\nto the unmodified page from the disk after checkpoint) and the\nrecovery (redo) just returns the applied block as BLK_RESTORED.\nRecovery can get faster with this approach and less WAL is needed to\nstore just the modified page.\n\nMy earlier understanding was that postgres does option (1), however, I\nwas wrong, option (2) is what actually postgres has implemented for\nthe obvious advantages specified.\n\nI now made the tests a bit stricter in checking the FPI contents\n(tuple values) pulled from the WAL record with raw page contents\npulled from the table using the pageinspect extension. Please see the\nattached v4 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 6 Jan 2023 23:11:26 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "Hi,\n\nOn 1/6/23 6:41 PM, Bharath Rupireddy wrote:\n> On Fri, Jan 6, 2023 at 11:47 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Thu, Jan 5, 2023 at 6:51 PM Bharath Rupireddy\n>> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>>\n>>>> I'm also wondering if it would make sense to extend the test coverage of it (and pg_waldump) to \"validate\" that both\n>>>> extracted images are the same and matches the one modified right after the checkpoint.\n>>>>\n>>>> What do you think? (could be done later in another patch though).\n>>>\n>>> I think pageinspect can be used here. We can fetch the raw page from\n>>> the table after the checkpoint and raw FPI from the WAL record logged\n>>> as part of the update. I've tried to do so [1], but I see a slight\n>>> difference in the raw output. The expectation is that they both be the\n>>> same. It might be that the update operation logs the FPI with some\n>>> more info set (prune_xid). I'll try to see why it is so.\n>>>\n>>> I'm attaching the v2 patch for further review.\n>>>\n>>> [1]\n>>> SELECT * FROM page_header(:'page_from_table');\n>>> lsn | checksum | flags | lower | upper | special | pagesize |\n>>> version | prune_xid\n>>> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n>>> 0/1891D78 | 0 | 0 | 40 | 8064 | 8192 | 8192 |\n>>> 4 | 0\n>>> (1 row)\n>>>\n>>> SELECT * FROM page_header(:'page_from_wal');\n>>> lsn | checksum | flags | lower | upper | special | pagesize |\n>>> version | prune_xid\n>>> -----------+----------+-------+-------+-------+---------+----------+---------+-----------\n>>> 0/1891D78 | 0 | 0 | 44 | 8032 | 8192 | 8192 |\n>>> 4 | 735\n>>> (1 row)\n>>\n>> Ugh, v2 patch missed the new file added, I'm attaching v3 patch for\n>> further review. Sorry for the noise.\n> \n> I took a stab at how and what gets logged as FPI in WAL records:\n> \n> Option 1:\n> WAL record with FPI contains both the unmodified table page from the\n> disk after checkpoint and new tuple (not applied to the unmodified\n> page) and the recovery (redo) applies the new tuple to the unmodified\n> page as part of recovery. A bit more WAL is needed to store both\n> unmodified page and new tuple data in the WAL record and recovery can\n> get slower a bit too as it needs to stitch the modified page.\n> \n> Option 2:\n> WAL record with FPI contains only the modified page (new tuple applied\n> to the unmodified page from the disk after checkpoint) and the\n> recovery (redo) just returns the applied block as BLK_RESTORED.\n> Recovery can get faster with this approach and less WAL is needed to\n> store just the modified page.\n> \n> My earlier understanding was that postgres does option (1), however, I\n> was wrong, option (2) is what actually postgres has implemented for\n> the obvious advantages specified.\n> \n> I now made the tests a bit stricter in checking the FPI contents\n> (tuple values) pulled from the WAL record with raw page contents\n> pulled from the table using the pageinspect extension. Please see the\n> attached v4 patch.\n> \n\nThanks for updating the patch!\n\n+-- Compare FPI from WAL record and page from table, they must be same\n\nI think \"must be the same\" or \"must be identical\" sounds better (but not 100% sure).\n\nExcept this nit, V4 looks good to me.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 09:29:03 +0100",
"msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 09:29:03AM +0100, Drouvot, Bertrand wrote:\n> Thanks for updating the patch!\n> \n> +-- Compare FPI from WAL record and page from table, they must be same\n> \n> I think \"must be the same\" or \"must be identical\" sounds better (but not 100% sure).\n> \n> Except this nit, V4 looks good to me.\n\n+postgres=# SELECT lsn, tablespace_oid, database_oid, relfile_number,\nblock_number, fork_name, length(fpi) > 0 as fpi_ok FROM\npg_get_wal_fpi_info('0/7418E60', '0/7518218');\n\nThis query in the docs is too long IMO. Could you split that across\nmultiple lines for readability?\n\n+ pg_get_wal_fpi_info(start_lsn pg_lsn,\n+ end_lsn pg_lsn,\n+ lsn OUT pg_lsn,\n+ tablespace_oid OUT oid,\n+ database_oid OUT oid,\n+ relfile_number OUT oid,\n+ block_number OUT int8,\n+ fork_name OUT text,\n+ fpi OUT bytea)\nI am a bit surprised by this format, used to define the functions part\nof the module in the docs, while we have examples that actually show\nwhat's printed out. I understand that this comes from the original\ncommit of the module, but the rendered docs are really hard to parse\nas well, no? FWIW, I think that this had better be fixed as well in\nthe docs of v15.. Showing a full set of attributes for the returned\nrecord is fine by me, still if these are too long we could just use\n\\x. For this one, I think that there is little point in showing 14\nrecords, so I would stick with a style similar to pageinspect.\n\n+CREATE FUNCTION pg_get_wal_fpi_info(IN start_lsn pg_lsn,\n+ IN end_lsn pg_lsn,\n+ OUT lsn pg_lsn,\n+ OUT tablespace_oid oid,\nSlight indentation issue here.\n\nUsing \"relfile_number\" would be a first, for what is defined in the\ncode and the docs as a filenode.\n\n+SELECT pg_current_wal_lsn() AS wal_lsn4 \\gset\n+-- Get FPI from WAL record\n+SELECT fpi AS page_from_wal FROM pg_get_wal_fpi_info(:'wal_lsn3', :'wal_lsn4')\n+ WHERE relfile_number = :'sample_tbl_oid' \\gset\nI would be tempted to keep the checks run here minimal with only a\nbasic set of checks on the LSN, without the dependencies to\npageinspect (tuple_data_split and get_raw_page), which would be fine\nenough to check the execution of the function.\n\nFWIW, I am surprised by the design choice behind ValidateInputLSNs()\nto allow data to be gathered until the end of WAL in some cases, but\nto not allow it in others. It is likely too late to come back to this\nchoice for the existing functions in v15 (quoique?), but couldn't it\nbe useful to make this new FPI function work at least with an insanely\nhigh LSN value to make sure that we fetch all the FPIs from a given\nstart position, up to the end of WAL? That looks like a pretty good\ndefault behavior to me, rather than issuing an error when a LSN is\ndefined as in the future.. I am really wondering why we have\nValidateInputLSNs(till_end_of_wal=false) to begin with, while we could\njust allow any LSN value in the future automatically, as we can know\nthe current insert or replay LSNs (depending on the recovery state).\n--\nMichael",
"msg_date": "Wed, 11 Jan 2023 13:37:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 10:07 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> +postgres=# SELECT lsn, tablespace_oid, database_oid, relfile_number,\n> block_number, fork_name, length(fpi) > 0 as fpi_ok FROM\n> pg_get_wal_fpi_info('0/7418E60', '0/7518218');\n>\n> This query in the docs is too long IMO. Could you split that across\n> multiple lines for readability?\n\nDone.\n\n> + pg_get_wal_fpi_info(start_lsn pg_lsn,\n> + end_lsn pg_lsn,\n> + lsn OUT pg_lsn,\n> + tablespace_oid OUT oid,\n> + database_oid OUT oid,\n> + relfile_number OUT oid,\n> + block_number OUT int8,\n> + fork_name OUT text,\n> + fpi OUT bytea)\n> I am a bit surprised by this format, used to define the functions part\n> of the module in the docs, while we have examples that actually show\n> what's printed out. I understand that this comes from the original\n> commit of the module, but the rendered docs are really hard to parse\n> as well, no? FWIW, I think that this had better be fixed as well in\n> the docs of v15.. Showing a full set of attributes for the returned\n> record is fine by me, still if these are too long we could just use\n> \\x.\n\nThanks. I'll work on that separately.\n\n> For this one, I think that there is little point in showing 14\n> records, so I would stick with a style similar to pageinspect.\n\nI've done it that way for pg_get_wal_fpi_info. If this format looks\nokay, I can propose to do the same for other functions (for\nbackpatching too) in a separate thread though.\n\n> +CREATE FUNCTION pg_get_wal_fpi_info(IN start_lsn pg_lsn,\n> + IN end_lsn pg_lsn,\n> + OUT lsn pg_lsn,\n> + OUT tablespace_oid oid,\n> Slight indentation issue here.\n\nDone.\n\n> Using \"relfile_number\" would be a first, for what is defined in the\n> code and the docs as a filenode.\n\nYes, I've changed the column names to be consistent (like in pg_buffercache).\n\n> +SELECT pg_current_wal_lsn() AS wal_lsn4 \\gset\n> +-- Get FPI from WAL record\n> +SELECT fpi AS page_from_wal FROM pg_get_wal_fpi_info(:'wal_lsn3', :'wal_lsn4')\n> + WHERE relfile_number = :'sample_tbl_oid' \\gset\n> I would be tempted to keep the checks run here minimal with only a\n> basic set of checks on the LSN, without the dependencies to\n> pageinspect (tuple_data_split and get_raw_page), which would be fine\n> enough to check the execution of the function.\n\nI understand the concern here that creating dependency between\nextensions just for testing isn't good.\n\nI'm okay to just read the LSN (lsn1) from raw FPI (bytea stream) and\nthe WAL record's LSN (lsn2) and compare them to be lsn2 > lsn1. I'm\nlooking for a way to convert the first 8 bytes from bytea stream to\npg_lsn type, on a quick look I couldn't find direct conversion\nfunctions, however, I'll try to figure out a way.\n\n> FWIW, I am surprised by the design choice behind ValidateInputLSNs()\n> to allow data to be gathered until the end of WAL in some cases, but\n> to not allow it in others. It is likely too late to come back to this\n> choice for the existing functions in v15 (quoique?), but couldn't it\n\nSeparate functions for users passing end_lsn by themselves and users\nletting functions decide the end_lsn (current flush LSN or replay LSN)\nwere chosen for better and easier usability and easier validation of\nuser-entered input lsns.\n\nWe deliberated to have something like below:\npg_get_wal_stats(start_lsn, end_lsn, till_end_of_wal default false);\npg_get_wal_records_info(start_lsn, end_lsn, till_end_of_wal default false);\n\nWe wanted to have better validation of the start_lsn and end_lsn, that\nis, start_lsn < end_lsn and end_lsn mustn't fall into the future when\nusers specify it by themselves (otherwise, one can easily trick the\nserver by passing in the extreme end of the LSN - 0xFFFFFFFFFFFFFFFF).\nAnd, we couldn't find a better way to deal with when till_end_of_wal\nis passed as true (in the above version of the functions).\n\nAnother idea was to have something like below:\npg_get_wal_stats(start_lsn, end_lsn default '0/0');\npg_get_wal_records_info(start_lsn, end_lsn default '0/0');\n\nWhen end_lsn is not entered or entered as invalid lsn, then return the\nstats/info till end of the WAL. Again, we wanted to have some\nvalidation of the user-entered end_lsn.\n\nInstead of cooking multiple behaviours into a single function we opted\nfor till_end_of_wal versions. I still feel this is better unless\nthere's a strong reason against till_end_of_wal versions.\n\n> be useful to make this new FPI function work at least with an insanely\n> high LSN value to make sure that we fetch all the FPIs from a given\n> start position, up to the end of WAL? That looks like a pretty good\n> default behavior to me, rather than issuing an error when a LSN is\n> defined as in the future.. I am really wondering why we have\n> ValidateInputLSNs(till_end_of_wal=false) to begin with, while we could\n> just allow any LSN value in the future automatically, as we can know\n> the current insert or replay LSNs (depending on the recovery state).\n\nHm. How about having pg_get_wal_fpi_info_till_end_of_wal() then?\n\nWith some of the review comments addressed, I'm attaching v5 patch\nherewith. I would like to hear thoughts on the above open points\nbefore writing the v6 patch.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 11 Jan 2023 18:59:18 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
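The input-LSN validation behaviour described in the message above (separate till_end_of_wal variants; start_lsn must be less than end_lsn; end_lsn must not lie in the future) can be sketched roughly as follows. This is illustrative Python pseudocode of the rules as stated in the thread — the names and shape are mine, not the C implementation of ValidateInputLSNs() in pg_walinspect:

```python
def validate_input_lsns(start_lsn, end_lsn, curr_lsn, till_end_of_wal):
    """Sketch of the validation rules described in the thread.

    curr_lsn stands for the current flush LSN (or the last replay LSN
    while in recovery); LSNs are modelled here as plain integers.
    Returns the effective end LSN to read up to.
    """
    if till_end_of_wal:
        # The *_till_end_of_wal() variants let the server pick the end.
        return curr_lsn
    if end_lsn is None:
        raise ValueError("end_lsn must be specified")
    if start_lsn >= end_lsn:
        raise ValueError("WAL start LSN must be less than end LSN")
    if end_lsn > curr_lsn:
        # Rejects "tricking the server" with e.g. 0xFFFFFFFFFFFFFFFF.
        raise ValueError("WAL end LSN is in the future")
    return end_lsn
```

Michael's alternative of treating a NULL (or far-future) end_lsn as "up to the end of WAL" would collapse the first and last branches into one code path instead of two function variants.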
{
"msg_contents": "On Wed, Jan 11, 2023 at 06:59:18PM +0530, Bharath Rupireddy wrote:\n> I've done it that way for pg_get_wal_fpi_info. If this format looks\n> okay, I can propose to do the same for other functions (for\n> backpatching too) in a separate thread though.\n\nMy vote would be to make that happen first, to have in place cleaner\nbasics for the docs. I could just do it and move on..\n\n> We deliberated to have something like below:\n> pg_get_wal_stats(start_lsn, end_lsn, till_end_of_wal default false);\n> pg_get_wal_records_info(start_lsn, end_lsn, till_end_of_wal default false);\n> \n> We wanted to have better validation of the start_lsn and end_lsn, that\n> is, start_lsn < end_lsn and end_lsn mustn't fall into the future when\n> users specify it by themselves (otherwise, one can easily trick the\n> server by passing in the extreme end of the LSN - 0xFFFFFFFFFFFFFFFF).\n> And, we couldn't find a better way to deal with when till_end_of_wal\n> is passed as true (in the above version of the functions).\n>\n> Another idea was to have something like below:\n> pg_get_wal_stats(start_lsn, end_lsn default '0/0');\n> pg_get_wal_records_info(start_lsn, end_lsn default '0/0');\n> \n> When end_lsn is not entered or entered as invalid lsn, then return the\n> stats/info till end of the WAL. Again, we wanted to have some\n> validation of the user-entered end_lsn.\n\nThis reminds me of the slot advancing, where we discarded this case\njust because it is useful to enforce a LSN far in the future.\nHonestly, I cannot think of any case where I would use this level of\nvalidation, especially having *two* functions to decide one behavior\nor the other: this stuff could just use one function and use for\nexample NULL as a setup to enforce the end of WAL, on top of a LSN far\nahead.. But well..\n\n>> be useful to make this new FPI function work at least with an insanely\n>> high LSN value to make sure that we fetch all the FPIs from a given\n>> start position, up to the end of WAL? That looks like a pretty good\n>> default behavior to me, rather than issuing an error when a LSN is\n>> defined as in the future.. I am really wondering why we have\n>> ValidateInputLSNs(till_end_of_wal=false) to begin with, while we could\n>> just allow any LSN value in the future automatically, as we can know\n>> the current insert or replay LSNs (depending on the recovery state).\n> \n> Hm. How about having pg_get_wal_fpi_info_till_end_of_wal() then?\n\nI don't really want to make the interface more bloated with more\nfunctions than necessary, TBH, though I get your point to push for\nconsistency in these functions. This makes me really wonder whether\nwe'd better make relax all the existing functions, but I may get\noutvoted.\n--\nMichael",
"msg_date": "Thu, 12 Jan 2023 14:53:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 11:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 06:59:18PM +0530, Bharath Rupireddy wrote:\n> > I've done it that way for pg_get_wal_fpi_info. If this format looks\n> > okay, I can propose to do the same for other functions (for\n> > backpatching too) in a separate thread though.\n>\n> My vote would be to make that happen first, to have in place cleaner\n> basics for the docs. I could just do it and move on..\n\nSure. Posted a separate patch here -\nhttps://www.postgresql.org/message-id/CALj2ACVGcUpziGgQrcT-1G3dHWQQfWjYBu1YQ2ypv9y86dgogg%40mail.gmail.com\n\n> This reminds me of the slot advancing, where we discarded this case\n> just because it is useful to enforce a LSN far in the future.\n> Honestly, I cannot think of any case where I would use this level of\n> validation, especially having *two* functions to decide one behavior\n> or the other: this stuff could just use one function and use for\n> example NULL as a setup to enforce the end of WAL, on top of a LSN far\n> ahead.. But well..\n\nI understand. I don't mind discussing something like [1] with the\nfollowing behaviour and discarding till_end_of_wal functions\naltogether:\nIf start_lsn is NULL, error out/return NULL.\nIf end_lsn isn't specified, default to NULL, then determine the end_lsn.\nIf end_lsn is specified as NULL, then determine the end_lsn.\nIf end_lsn is specified as non-NULL, then determine if it is greater\nthan start_lsn if yes, go ahead do the job, otherwise error out.\n\nI'll think a bit more on this and perhaps discuss it separately.\n\n> > Hm. How about having pg_get_wal_fpi_info_till_end_of_wal() then?\n>\n> I don't really want to make the interface more bloated with more\n> functions than necessary, TBH, though I get your point to push for\n> consistency in these functions. This makes me really wonder whether\n> we'd better make relax all the existing functions, but I may get\n> outvoted.\n\nI'll keep the FPI extract function simple as proposed in the patch and\nI'll not go write till_end_of_wal version. If needed to get all the\nFPIs till the end of WAL, one can always determine the end_lsn with\npg_current_wal_flush_lsn()/pg_last_wal_replay_lsn() when in recovery,\nand use the proposed function.\n\nNote that I kept the FPI extract function test simple - ensure FPI\ngets generated and check if the new function can fetch it, discarding\nlsn or other FPI sanity checks. Whole FPI sanity check needs\npageinspect and we don't want to create the dependency just for tests.\nAnd checking for FPI lsn with WAL record lsn requires us to fetch lsn\nfrom raw bytea stream. As there's no straightforward way to convert\nraw bytes from bytea to pg_lsn, doing that in a platform-agnostic\nmanner (little-endian and big-endian comes into play here) actually is\na no-go IMO. FWIW, for a little-endian system I had to do [2].\n\nTherefore I stick to the simple test unless there's a better way.\n\nI'm attaching the v6 patch for further review.\n\n[1]\nCREATE FUNCTION pg_get_wal_records_info(IN start_lsn pg_lsn,\n    IN end_lsn pg_lsn DEFAULT NULL,\n    OUT start_lsn pg_lsn,\n    OUT end_lsn pg_lsn,\n    OUT prev_lsn pg_lsn,\n    OUT xid xid,\n    OUT resource_manager text,\n    OUT record_type text,\n    OUT record_length int4,\n    OUT main_data_length int4,\n    OUT fpi_length int4,\n    OUT description text,\n    OUT block_ref text\n)\nRETURNS SETOF record\nAS 'MODULE_PATHNAME', 'pg_get_wal_records_info'\nLANGUAGE C CALLED ON NULL INPUT IMMUTABLE PARALLEL SAFE STRICT PARALLEL SAFE;\n\nCREATE FUNCTION pg_get_wal_stats(IN start_lsn pg_lsn,\n    IN end_lsn pg_lsn DEFAULT NULL,\n    IN per_record boolean DEFAULT false,\n    OUT \"resource_manager/record_type\" text,\n    OUT count int8,\n    OUT count_percentage float8,\n    OUT record_size int8,\n    OUT record_size_percentage float8,\n    OUT fpi_size int8,\n    OUT fpi_size_percentage float8,\n    OUT combined_size int8,\n    OUT combined_size_percentage float8\n)\nRETURNS SETOF record\nAS 'MODULE_PATHNAME', 'pg_get_wal_stats'\nLANGUAGE C CALLED ON NULL INPUT IMMUTABLE PARALLEL SAFE STRICT PARALLEL SAFE;\n\n[2]\n\nselect '\\x00000000b8078901000000002c00601f00200420df020000e09f3800c09f3800a09f380080'::bytea\nAS fpi \\gset\nselect substring(:'fpi' from 3 for 16) AS rlsn \\gset\nselect substring(:'rlsn' from 7 for 2) || substring(:'rlsn' from 5 for\n2) || substring(:'rlsn' from 3 for 2) || substring(:'rlsn' from 1 for\n2) AS hi \\gset\nselect substring(:'rlsn' from 15 for 2) || substring(:'rlsn' from 13\nfor 2) || substring(:'rlsn' from 11 for 2) || substring(:'rlsn' from 9\nfor 2) AS lo \\gset\nselect (:'hi' || '/' || :'lo')::pg_lsn;\n\npostgres=# select (:'hi' || '/' || :'lo')::pg_lsn;\n pg_lsn\n-----------\n 0/18907B8\n(1 row)\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 12 Jan 2023 17:37:40 +0530",
"msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 05:37:40PM +0530, Bharath Rupireddy wrote:\n> I understand. I don't mind discussing something like [1] with the\n> following behaviour and discarding till_end_of_wal functions\n> altogether:\n> If start_lsn is NULL, error out/return NULL.\n> If end_lsn isn't specified, default to NULL, then determine the end_lsn.\n> If end_lsn is specified as NULL, then determine the end_lsn.\n> If end_lsn is specified as non-NULL, then determine if it is greater\n> than start_lsn if yes, go ahead do the job, otherwise error out.\n> \n> I'll think a bit more on this and perhaps discuss it separately.\n\nFWIW, I still find the current interface of the module bloated. So,\nwhile it is possible to stick some pg_current_wal_lsn() calls to\nbypass the error in most cases, enforcing the end of WAL with a NULL\nor larger value would still be something I would push for based on my\nown experience as there would be no need to worry about the latest LSN\nas being two different values in two function contexts. You could\nkeep the functions as STRICT for consistency, and just allow larger\nvalues as a synonym for the end of WAL.\n\nSaying that, the version of pg_get_wal_fpi_info() committed respects\nthe current behavior of the module, with an error on an incorrect end\nLSN.\n\n> I'll keep the FPI extract function simple as proposed in the patch and\n> I'll not go write till_end_of_wal version. If needed to get all the\n> FPIs till the end of WAL, one can always determine the end_lsn with\n> pg_current_wal_flush_lsn()/pg_last_wal_replay_lsn() when in recovery,\n> and use the proposed function.\n\nI was reading the patch this morning, and that's pretty much what I\nwould have done in terms of simplicity with a test checking that at\nleast one FPI has been generated. 
I have shortened a bit the\ndocumentation, tweaked a few comments and applied the whole after\nseeing the result.\n\nOne thing that I have been wondering about is whether it is worth\nadding the block_id from the record in the output, but discarded this\nidea as it could be confused with the physical block number, even if\nthis function is for advanced users.\n--\nMichael",
"msg_date": "Mon, 23 Jan 2023 14:15:15 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add a new pg_walinspect function to extract FPIs from WAL records"
}
] |
[
{
"msg_contents": "Hi!\n\nWhile playing with some unrelated to the topic stuff, I've noticed a\nstrange warning from verify_heapam.c:730:25:\nwarning: ‘xmax_status’ may be used uninitialized in this function.\n\nThis happens only when get_xid_status is inlined, and only in GCC with O3.\nI use a GCC version 11.3.0.\n\nFor the purpose of investigation, I've created a PFA patch to force\nget_xid_status\ninline.\n\n$ CFLAGS=\"-O3\" ./configure -q && make -s -j12 >/dev/null && make -s -j12 -C\ncontrib\nverify_heapam.c: In function ‘check_tuple_visibility’:\nverify_heapam.c:730:25: warning: ‘xmax_status’ may be used uninitialized in\nthis function [-Wmaybe-uninitialized]\n 730 | XidCommitStatus xmax_status;\n | ^~~~~~~~~~~\nverify_heapam.c:909:25: warning: ‘xmin_status’ may be used uninitialized in\nthis function [-Wmaybe-uninitialized]\n 909 | else if (xmin_status != XID_COMMITTED)\n |\n\nI believe, this warning is false positive, since mentioned row is\nunreachable. If xid is invalid, we return from get_xid_status\nXID_INVALID and could not pass\n 770 ········if (HeapTupleHeaderXminInvalid(tuphdr))\n 771 ············return false;·······/* inserter aborted, don't check */\n\nSo, I think this warning is false positive. On the other hand, we could\nsimply init status variable in get_xid_status and\nmake this code more errors/warnings safe. Thoughts?\n\n-- \nBest regards,\nMaxim Orlov.",
"msg_date": "Tue, 27 Dec 2022 18:35:37 +0300",
"msg_from": "Maxim Orlov <orlovmg@gmail.com>",
"msg_from_op": true,
"msg_subject": "False positive warning in verify_heapam.c with GCC 03"
}
] |
[
{
"msg_contents": "While pg_hba.conf has supported the \"all\" keyword since a very long\ntime, pg_ident.conf doesn't have this same functionality. This changes\npermission checking in pg_ident.conf to handle \"all\" differently from\nany other value in the database-username column. If \"all\" is specified\nand the system-user matches the identifier, then the user is allowed to\nauthenticate no matter what user it tries to authenticate as.\n\nThis change makes it much easier to have a certain database\nadministrator peer or cert authentication, that allows connecting as\nany user. Without this change you would need to add a line to\npg_ident.conf for every user that is in the database.\n\nIn some small sense this is a breaking change if anyone is using \"all\"\nas a user currently and has pg_ident.conf rules for it. This seems\nunlikely, since \"all\" was already handled specially in pg_hb.conf.\nAlso it can easily be worked around by quoting the all token in\npg_ident.conf. As long as this is called out in the release notes\nit seems okay to me. However, if others disagree there would\nbe the option of changing the token to \"pg_all\". Since any\npg_ prefixed users are reserved by postgres there can be no user.\nFor now I used \"all\" though to stay consistent with pg_hba.conf.",
"msg_date": "Tue, 27 Dec 2022 15:54:46 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Support using \"all\" for the db user in pg_ident.conf"
},
{
"msg_contents": "On Tue, 27 Dec 2022 at 10:54, Jelte Fennema <Jelte.Fennema@microsoft.com>\nwrote:\n\nThis change makes it much easier to have a certain database\n> administrator peer or cert authentication, that allows connecting as\n> any user. Without this change you would need to add a line to\n> pg_ident.conf for every user that is in the database.\n>\n> In some small sense this is a breaking change if anyone is using \"all\"\n> as a user currently and has pg_ident.conf rules for it. This seems\n> unlikely, since \"all\" was already handled specially in pg_hb.conf.\n> Also it can easily be worked around by quoting the all token in\n> pg_ident.conf. As long as this is called out in the release notes\n> it seems okay to me. However, if others disagree there would\n> be the option of changing the token to \"pg_all\". Since any\n> pg_ prefixed users are reserved by postgres there can be no user.\n> For now I used \"all\" though to stay consistent with pg_hba.conf.\n\n\n+1 from me. I recently was setting up a Vagrant VM for testing and wanted\nto allow the OS user which runs the application to connect to the database\nas whatever user it wants and was surprised to find I had to list all the\npotential target DB users in the pg_ident.conf (in production it uses\npassword authentication and each server gets just the passwords it needs\nstored in ~/.pgpass). I like the idea that both config files would be\nconsistent, although the use of keywords such as \"replication\" in the DB\ncolumn has always made me a bit uncomfortable.\n\nRelated question: is there a reason why pg_ident.conf can't/shouldn't be\nreplaced by a system table? As far as I can tell, it's just a 3-column\ntable, essentially, with all columns in the primary key. 
This latest\nproposal changes that a little; strictly, it should probably introduce a\nsecond table with just two columns identifying which OS users can connect\nas any user, but existing system table style seems to suggest that we would\njust use a special value in the DB user column for \"all\".",
"msg_date": "Tue, 27 Dec 2022 11:21:28 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support using \"all\" for the db user in pg_ident.conf"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 03:54:46PM +0000, Jelte Fennema wrote:\n> This change makes it much easier to have a certain database\n> administrator peer or cert authentication, that allows connecting as\n> any user. Without this change you would need to add a line to\n> pg_ident.conf for every user that is in the database.\n\nThat seems pretty dangerous to me. For one, how does this work in\ncases where we expect the ident entry to be case-sensitive, aka\nauthentication methods where check_ident_usermap() and check_usermap()\nuse case_insensitive = false?\n\nAnyway, it is a bit confusing to see a patch touching parts of the\nident code related to the system-username while it claims to provide a\nmean to shortcut a check on the database-username. If you think that\nsome renames should be done to IdentLine, these ought to be done\nfirst.\n--\nMichael",
"msg_date": "Wed, 28 Dec 2022 09:10:49 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support using \"all\" for the db user in pg_ident.conf"
}
] |
[
{
"msg_contents": "I've attached a patch set that adds the restore_library,\narchive_cleanup_library, and recovery_end_library parameters to allow\narchive recovery via loadable modules. This is a follow-up to the\narchive_library parameter added in v15 [0] [1].\n\nThe motivation behind this change is similar to that of archive_library\n(e.g., robustness, performance). The recovery functions are provided via a\nsimilar interface to archive modules (i.e., an initialization function that\nreturns the function pointers). Also, I've extended basic_archive to work\nas a restore_library, which makes it easy to demonstrate both archiving and\nrecovery via a loadable module in a TAP test.\n\nA few miscellaneous design notes:\n\n* Unlike archive modules, recovery libraries cannot be changed at runtime.\nThere isn't a safe way to unload a library, and archive libraries work\naround this restriction by restarting the archiver process. Since recovery\nlibraries are loaded via the startup and checkpointer processes (which\ncannot be trivially restarted like the archiver), the same workaround is\nnot feasible.\n\n* pg_rewind uses restore_command, but there isn't a straightforward path to\nsupport restore_library. I haven't addressed this in the attached patches,\nbut perhaps this is a reason to allow specifying both restore_command and\nrestore_library at the same time. pg_rewind would use restore_command, and\nthe server would use restore_library.\n\n* I've combined the documentation to create one \"Archive and Recovery\nModules\" chapter. They are similar enough that it felt silly to write a\nseparate chapter for recovery modules. However, I've still split them up\nwithin the chapter, and they have separate initialization functions. 
This\nretains backward compatibility with v15 archive modules, keeps them\nlogically separate, and hopefully hints at the functional differences.\nEven so, if you want to create one library for both archive and recovery,\nthere is nothing stopping you.\n\n* Unlike archive modules, I didn't add any sort of \"check\" or \"shutdown\"\ncallbacks. The recovery_end_library parameter makes a \"shutdown\" callback\nlargely redundant, and I couldn't think of any use-case for a \"check\"\ncallback. However, new callbacks could be added in the future if needed.\n\n* Unlike archive modules, restore_library and recovery_end_library may be\nloaded in single-user mode. I believe this works out-of-the-box, but it's\nan extra thing to be cognizant of.\n\n* If both the library and command parameters for a recovery action are\nspecified, the server should fail to start up, but if a misconfiguration is\ndetected after SIGHUP, we emit a WARNING and continue using the library. I\noriginally thought about emitting an ERROR like the archiver does in this\ncase, but failing recovery and stopping the server felt a bit too harsh.\nI'm curious what folks think about this.\n\n* It could be nice to rewrite pg_archivecleanup for use as an\narchive_cleanup_library, but I don't think that needs to be a part of this\npatch set.\n\n[0] https://postgr.es/m/668D2428-F73B-475E-87AE-F89D67942270%40amazon.com\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5ef1eef\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 27 Dec 2022 11:24:49 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-27 11:24:49 -0800, Nathan Bossart wrote:\n> I've attached a patch set that adds the restore_library,\n> archive_cleanup_library, and recovery_end_library parameters to allow\n> archive recovery via loadable modules. This is a follow-up to the\n> archive_library parameter added in v15 [0] [1].\n\nWhy do we need N parameters for this? To me it seems more sensible to have one\nparameter that then allows a library to implement all these (potentially\noptionally).\n\n\n> * Unlike archive modules, recovery libraries cannot be changed at runtime.\n> There isn't a safe way to unload a library, and archive libraries work\n> around this restriction by restarting the archiver process. Since recovery\n> libraries are loaded via the startup and checkpointer processes (which\n> cannot be trivially restarted like the archiver), the same workaround is\n> not feasible.\n\nI don't think that's a convincing reason to not support configuration\nchanges. Sure, libraries cannot be unloaded, but an unnecessarily loaded\nlibrary is cheap. All that's needed is to redirect the relevant function\ncalls.\n\n\n> * pg_rewind uses restore_command, but there isn't a straightforward path to\n> support restore_library. I haven't addressed this in the attached patches,\n> but perhaps this is a reason to allow specifying both restore_command and\n> restore_library at the same time. pg_rewind would use restore_command, and\n> the server would use restore_library.\n\nThat seems problematic, leading to situations where one might not be able to\nuse restore_command anymore, because it's not feasible to do\nsegment-by-segment restoration.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:11:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 02:11:11PM -0800, Andres Freund wrote:\n> On 2022-12-27 11:24:49 -0800, Nathan Bossart wrote:\n>> I've attached a patch set that adds the restore_library,\n>> archive_cleanup_library, and recovery_end_library parameters to allow\n>> archive recovery via loadable modules. This is a follow-up to the\n>> archive_library parameter added in v15 [0] [1].\n> \n> Why do we need N parameters for this? To me it seems more sensible to have one\n> parameter that then allows a library to implement all these (potentially\n> optionally).\n\nThe main reason is flexibility. Separate parameters allow using a library\nfor one thing and a command for another, or different libraries for\ndifferent things. If that isn't a use-case we wish to support, I don't\nmind combining all three into a single recovery_library parameter.\n\n>> * Unlike archive modules, recovery libraries cannot be changed at runtime.\n>> There isn't a safe way to unload a library, and archive libraries work\n>> around this restriction by restarting the archiver process. Since recovery\n>> libraries are loaded via the startup and checkpointer processes (which\n>> cannot be trivially restarted like the archiver), the same workaround is\n>> not feasible.\n> \n> I don't think that's a convincing reason to not support configuration\n> changes. Sure, libraries cannot be unloaded, but an unnecessarily loaded\n> library is cheap. All that's needed is to redirect the relevant function\n> calls.\n\nThis might leave some stuff around (e.g., GUCs, background workers), but if\nthat isn't a concern, I can adjust it to work as you describe.\n\n>> * pg_rewind uses restore_command, but there isn't a straightforward path to\n>> support restore_library. I haven't addressed this in the attached patches,\n>> but perhaps this is a reason to allow specifying both restore_command and\n>> restore_library at the same time. 
pg_rewind would use restore_command, and\n>> the server would use restore_library.\n> \n> That seems problematic, leading to situations where one might not be able to\n> use restore_command anymore, because it's not feasible to do\n> segment-by-segment restoration.\n\nI'm not following why this would make segment-by-segment restoration\ninfeasible. Would you mind elaborating?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:37:11 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-27 14:37:11 -0800, Nathan Bossart wrote:\n> On Tue, Dec 27, 2022 at 02:11:11PM -0800, Andres Freund wrote:\n> > On 2022-12-27 11:24:49 -0800, Nathan Bossart wrote:\n> >> I've attached a patch set that adds the restore_library,\n> >> archive_cleanup_library, and recovery_end_library parameters to allow\n> >> archive recovery via loadable modules. This is a follow-up to the\n> >> archive_library parameter added in v15 [0] [1].\n> > \n> > Why do we need N parameters for this? To me it seems more sensible to have one\n> > parameter that then allows a library to implement all these (potentially\n> > optionally).\n> \n> The main reason is flexibility. Separate parameters allow using a library\n> for one thing and a command for another, or different libraries for\n> different things. If that isn't a use-case we wish to support, I don't\n> mind combining all three into a single recovery_library parameter.\n\nI think the configuration complexity is a sufficient concern to not go that\ndirection...\n\n\n> >> * Unlike archive modules, recovery libraries cannot be changed at runtime.\n> >> There isn't a safe way to unload a library, and archive libraries work\n> >> around this restriction by restarting the archiver process. Since recovery\n> >> libraries are loaded via the startup and checkpointer processes (which\n> >> cannot be trivially restarted like the archiver), the same workaround is\n> >> not feasible.\n> > \n> > I don't think that's a convincing reason to not support configuration\n> > changes. Sure, libraries cannot be unloaded, but an unnecessarily loaded\n> > library is cheap. All that's needed is to redirect the relevant function\n> > calls.\n> \n> This might leave some stuff around (e.g., GUCs, background workers), but if\n> that isn't a concern, I can adjust it to work as you describe.\n\nYou can still have a shutdown hook re background workers. 
I don't think the\nGUCs matter, given that it's the startup/checkpointer processes.\n\n\n> >> * pg_rewind uses restore_command, but there isn't a straightforward path to\n> >> support restore_library. I haven't addressed this in the attached patches,\n> >> but perhaps this is a reason to allow specifying both restore_command and\n> >> restore_library at the same time. pg_rewind would use restore_command, and\n> >> the server would use restore_library.\n> > \n> > That seems problematic, leading to situations where one might not be able to\n> > use restore_command anymore, because it's not feasible to do\n> > segment-by-segment restoration.\n> \n> I'm not following why this would make segment-by-segment restoration\n> infeasible. Would you mind elaborating?\n\nLatency effects, for example, can make segment-by-segment\nrestoration infeasible performance-wise. On the most extreme end, imagine WAL\narchived to tape or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Dec 2022 14:45:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 02:45:30PM -0800, Andres Freund wrote:\n> On 2022-12-27 14:37:11 -0800, Nathan Bossart wrote:\n>> On Tue, Dec 27, 2022 at 02:11:11PM -0800, Andres Freund wrote:\n>> > On 2022-12-27 11:24:49 -0800, Nathan Bossart wrote:\n>> >> * pg_rewind uses restore_command, but there isn't a straightforward path to\n>> >> support restore_library. I haven't addressed this in the attached patches,\n>> >> but perhaps this is a reason to allow specifying both restore_command and\n>> >> restore_library at the same time. pg_rewind would use restore_command, and\n>> >> the server would use restore_library.\n>> > \n>> > That seems problematic, leading to situations where one might not be able to\n>> > use restore_command anymore, because it's not feasible to do\n>> > segment-by-segment restoration.\n>> \n>> I'm not following why this would make segment-by-segment restoration\n>> infeasible. Would you mind elaborating?\n> \n> Latency effects for example can make it infeasible to do segment-by-segment\n> restoration infeasible performance wise. On the most extreme end, imagine WAL\n> archived to tape or such.\n\nI'm sorry, I'm still lost here. Wouldn't restoration via library tend to\nimprove latency? Is your point that clusters may end up depending on this\nimprovement so much that a shell command would no longer be able to keep\nup? I might be creating a straw man, but this seems like less of a concern\nfor pg_rewind since it isn't meant for continuous, ongoing restoration.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 27 Dec 2022 15:04:28 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Dec 27, 2022 at 02:11:11PM -0800, Andres Freund wrote:\n> On 2022-12-27 11:24:49 -0800, Nathan Bossart wrote:\n>> * Unlike archive modules, recovery libraries cannot be changed at runtime.\n>> There isn't a safe way to unload a library, and archive libraries work\n>> around this restriction by restarting the archiver process. Since recovery\n>> libraries are loaded via the startup and checkpointer processes (which\n>> cannot be trivially restarted like the archiver), the same workaround is\n>> not feasible.\n> \n> I don't think that's a convincing reason to not support configuration\n> changes. Sure, libraries cannot be unloaded, but an unnecessarily loaded\n> library is cheap. All that's needed is to redirect the relevant function\n> calls.\n\nAgreed. That seems worth the cost to switching this stuff to be\nSIGHUP-able.\n\n>> * pg_rewind uses restore_command, but there isn't a straightforward path to\n>> support restore_library. I haven't addressed this in the attached patches,\n>> but perhaps this is a reason to allow specifying both restore_command and\n>> restore_library at the same time. pg_rewind would use restore_command, and\n>> the server would use restore_library.\n> \n> That seems problematic, leading to situations where one might not be able to\n> use restore_command anymore, because it's not feasible to do\n> segment-by-segment restoration.\n\nDo you mean that supporting restore_library in pg_rewind is a hard\nrequirement? I fail to see why this should be the case. Note that\nhaving the possibility to do dlopen() calls in the frontend (aka\nlibpq) for loading some callbacks is something I've been looking for\nin the past. Having consistency in terms of restrictions between\nlibrary and command like for archives would make sense I guess (FWIW,\nI mentioned on the thread of d627ce3 that we'd better not put any\nrestrictions for the archives).\n--\nMichael",
"msg_date": "Wed, 28 Dec 2022 09:26:13 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-27 15:04:28 -0800, Nathan Bossart wrote:\n> I'm sorry, I'm still lost here. Wouldn't restoration via library tend to\n> improve latency? Is your point that clusters may end up depending on this\n> improvement so much that a shell command would no longer be able to keep\n> up?\n\nYes.\n\n\n> I might be creating a straw man, but this seems like less of a concern\n> for pg_rewind since it isn't meant for continuous, ongoing restoration.\n\npg_rewind is in the critical path of a bunch of HA scenarios, so I wouldn't\nsay that restore performance isn't important...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 27 Dec 2022 16:43:07 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Here is a new patch set with the following changes:\n\n* The restore_library, archive_cleanup_library, and recovery_end_library\nparameters are merged into one parameter.\n\n* restore_library can now be changed via SIGHUP. To provide a way for\nmodules to clean up when their callbacks are unloaded, I've introduced an\noptional shutdown callback.\n\n* Parameter misconfigurations are now always ERRORs. I'm less confident\nthat we can get by with just a WARNING now that restore_library can be\nchanged via SIGHUP, and this makes things more consistent with\narchive_library/command.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Dec 2022 11:43:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Here is a rebased patch set for cfbot.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 3 Jan 2023 09:59:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 09:59:17AM -0800, Nathan Bossart wrote:\n> Here is a rebased patch set for cfbot.\n\nI noticed that cfbot's Windows tests are failing because the backslashes in\nthe archive directory path are causing escaping problems. Here is an\nattempt to fix that by converting all backslashes to forward slashes, which\nis what other tests (e.g., 025_stuck_on_old_timeline.pl) do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 3 Jan 2023 11:05:38 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 11:05:38AM -0800, Nathan Bossart wrote:\n> I noticed that cfbot's Windows tests are failing because the backslashes in\n> the archive directory path are causing escaping problems. Here is an\n> attempt to fix that by converting all backslashes to forward slashes, which\n> is what other tests (e.g., 025_stuck_on_old_timeline.pl) do.\n\n+ GetOldestRestartPoint(&restartRedoPtr, &restartTli);\n+ XLByteToSeg(restartRedoPtr, restartSegNo, wal_segment_size);\n+ XLogFileName(lastRestartPointFname, restartTli, restartSegNo,\n+ wal_segment_size);\n+\n+ shell_archive_cleanup(lastRestartPointFname);\n\nHmm. Is passing down the file name used as a cutoff point the best\ninterface for the modules? Perhaps passing down the redo LSN and its\nTLI would be a cleaner approach in terms of flexibility? I agree with\nletting the startup enforce these numbers as that can be easy to mess\nup for plugin authors, leading to critical problems. The same code\npattern is repeated twice for the end-of-recovery callback and the\ncleanup commands when it comes to building the file name. Not\ncritical, still not really nice.\n\n MODULES = basic_archive\n-PGFILEDESC = \"basic_archive - basic archive module\"\n+PGFILEDESC = \"basic_archive - basic archive and recovery module\"\n\"basic_archive\" does not reflect what this module does. Using one\nlibrary simplifies the whole configuration picture and the tests, so\nperhaps something like basic_wal_module, or something like that, would\nbe better long-term?\n--\nMichael",
"msg_date": "Wed, 11 Jan 2023 16:53:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Wed, Jan 11, 2023 at 04:53:39PM +0900, Michael Paquier wrote:\n> Hmm. Is passing down the file name used as a cutoff point the best\n> interface for the modules? Perhaps passing down the redo LSN and its\n> TLI would be a cleaner approach in terms of flexibility? I agree with\n> letting the startup enforce these numbers as that can be easy to mess\n> up for plugin authors, leading to critical problems.\n\nI'm having trouble thinking of any practical advantage of providing the\nredo LSN and TLI. If the main use-case is removing older archives as the\ndocumentation indicates, it seems better to provide the file name so that\nyou can plug it straight into strcmp() to determine whether the file can be\nremoved (i.e., what pg_archivecleanup does). If we provided the LSN and\nTLI instead, you'd either need to convert that into a WAL file name for\nstrcmp(), or you'd need to convert the candidate file name into an LSN and\nTLI and compare against those.\n\n> \"basic_archive\" does not reflect what this module does. Using one\n> library simplifies the whole configuration picture and the tests, so\n> perhaps something like basic_wal_module, or something like that, would\n> be better long-term?\n\nI initially created a separate basic_restore module, but decided to fold it\ninto basic_archive to simplify the patch and tests. I hesitated to rename\nit because it already exists in v15, and since it deals with creating and\nrestoring archive files, the name still seemed somewhat accurate. That\nbeing said, I don't mind renaming it if that's what folks want.\n\nI've attached a new patch set that is rebased over c96de2c. There are no\nother changes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 11 Jan 2023 11:29:01 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 11:29:01AM -0800, Nathan Bossart wrote:\n> I'm having trouble thinking of any practical advantage of providing the\n> redo LSN and TLI. If the main use-case is removing older archives as the\n> documentation indicates, it seems better to provide the file name so that\n> you can plug it straight into strcmp() to determine whether the file can be\n> removed (i.e., what pg_archivecleanup does). If we provided the LSN and\n> TLI instead, you'd either need to convert that into a WAL file name for\n> strcmp(), or you'd need to convert the candidate file name into an LSN and\n> TLI and compare against those.\n\nLogging was one thing that came immediately in mind, to let the module\nknow the redo LSN and TLI the segment name was built from without\nhaving to recalculate it back. If you don't feel strongly about that,\nI am fine to discard this remark. It is not like this hook should be\nset in stone across major releases, in any case.\n\n> I initially created a separate basic_restore module, but decided to fold it\n> into basic_archive to simplify the patch and tests. I hesitated to rename\n> it because it already exists in v15, and since it deals with creating and\n> restoring archive files, the name still seemed somewhat accurate. That\n> being said, I don't mind renaming it if that's what folks want.\n\nI've done that in the past for pg_verify_checksums -> pg_checksums, so\nI would not mind renaming it so as it reflects better its role.\n(Being outvoted is fine for me if this suggestion sounds bad).\n\nSaying that, 0001 seems fine on its own (minus the redo LSN/TLI with\nthe duplication for the segment name build), so I would be tempted to\nget this one done. My gut tells me that we'd better remove the\nduplication and just pass down the two fields to\nshell_archive_cleanup() and shell_recovery_end(), with the segment\nname given to ExecuteRecoveryCommand()..\n--\nMichael",
"msg_date": "Thu, 12 Jan 2023 15:30:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 03:30:40PM +0900, Michael Paquier wrote:\n> On Wed, Jan 11, 2023 at 11:29:01AM -0800, Nathan Bossart wrote:\n>> I initially created a separate basic_restore module, but decided to fold it\n>> into basic_archive to simplify the patch and tests. I hesitated to rename\n>> it because it already exists in v15, and since it deals with creating and\n>> restoring archive files, the name still seemed somewhat accurate. That\n>> being said, I don't mind renaming it if that's what folks want.\n> \n> I've done that in the past for pg_verify_checksums -> pg_checksums, so\n> I would not mind renaming it so as it reflects better its role.\n> (Being outvoted is fine for me if this suggestion sounds bad).\n\nIMHO I don't think there's an urgent need to rename it, but if there's a\nbetter name that people like, I'm happy to do so.\n\n> Saying that, 0001 seems fine on its own (minus the redo LSN/TLI with\n> the duplication for the segment name build), so I would be tempted to\n> get this one done. My gut tells me that we'd better remove the\n> duplication and just pass down the two fields to\n> shell_archive_cleanup() and shell_recovery_end(), with the segment\n> name given to ExecuteRecoveryCommand()..\n\nI moved the duplicated logic to its own function in v6.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 12 Jan 2023 10:17:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 10:17:21AM -0800, Nathan Bossart wrote:\n> On Thu, Jan 12, 2023 at 03:30:40PM +0900, Michael Paquier wrote:\n> IMHO I don't think there's an urgent need to rename it, but if there's a\n> better name that people like, I'm happy to do so.\n\nOkay.\n\n>> Saying that, 0001 seems fine on its own (minus the redo LSN/TLI with\n>> the duplication for the segment name build), so I would be tempted to\n>> get this one done. My gut tells me that we'd better remove the\n>> duplication and just pass down the two fields to\n>> shell_archive_cleanup() and shell_recovery_end(), with the segment\n>> name given to ExecuteRecoveryCommand()..\n> \n> I moved the duplicated logic to its own function in v6.\n\nWhile looking at 0001, I have noticed one issue as of the following\nblock in shell_restore():\n\n+ if (wait_result_is_signal(rc, SIGTERM))\n+ proc_exit(1);\n+\n+ ereport(wait_result_is_any_signal(rc, true) ? FATAL : DEBUG2,\n+ (errmsg(\"could not restore file \\\"%s\\\" from archive: %s\",\n+ file, wait_result_to_str(rc))));\n\nThis block of code would have been executed even if rc == 0, which is\nincorrect because the command would have succeeded. HEAD would not\nhave done that, actually, as RestoreArchivedFile() would return\nbefore. I guess that you have not noticed it because this basically\njust generated incorrect DEBUG2 messages when rc == 0?\n\nOne part that this slightly influences is the order of the reports\nwhen the command succeeds the follow-up stat() fails to check the\nsize, where we reverse these two logs:\nDEBUG2, \"could not restore\"\nLOG/FATAL, \"could not stat file\"\n\nHowever, that does not really change the value of the information\nreported: if the stat() failed, the failure mode is the same except\nthat we don't get the extra DEBUG2/\"could not restore\" message, which\ndoes not really matter except if your elevel is high enough for\nthat and there is always the LOG for that..\n\nOnce this issue was fixed, nothing else stood out, so applied this\npart.\n--\nMichael",
"msg_date": "Mon, 16 Jan 2023 16:36:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 04:36:01PM +0900, Michael Paquier wrote:\n> Once this issue was fixed, nothing else stood out, so applied this\n> part.\n\nThanks! I've attached a rebased version of the rest of the patch set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 16 Jan 2023 14:40:40 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 16, 2023 at 02:40:40PM -0800, Nathan Bossart wrote:\n> On Mon, Jan 16, 2023 at 04:36:01PM +0900, Michael Paquier wrote:\n> > Once this issue was fixed, nothing else stood out, so applied this\n> > part.\n> \n> Thanks! I've attached a rebased version of the rest of the patch set.\n\nWhen it comes to 0002, the only difference between the three code\npaths of shell_recovery_end(), shell_archive_cleanup() and\nshell_restore() is the presence of BuildRestoreCommand(). However\nthis is now just a thin wrapper of replace_percent_placeholders() that\ndoes just one extra make_native_path() for the xlogpath.\n\nCould it be cleaner in the long term to remove entirely\nBuildRestoreCommand() and move the conversion of the xlogpath with\nmake_native_path() one level higher in the stack?\n--\nMichael",
"msg_date": "Tue, 17 Jan 2023 14:32:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 02:32:03PM +0900, Michael Paquier wrote:\n> Could it be cleaner in the long term to remove entirely\n> BuildRestoreCommand() and move the conversion of the xlogpath with\n> make_native_path() one level higher in the stack?\n\nYeah, this seems cleaner. I removed BuildRestoreCommand() in v8.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 17 Jan 2023 10:23:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 10:23:56AM -0800, Nathan Bossart wrote:\n> Yeah, this seems cleaner. I removed BuildRestoreCommand() in v8.\n\n if (*sp == *lp)\n {\n- if (val)\n- {\n- appendStringInfoString(&result, val);\n- found = true;\n- }\n- /* If val is NULL, we will report an error. */\n+ appendStringInfoString(&result, val);\n+ found = true;\n\nIn 0002, this code block has been removed as an effect of the removal\nof BuildRestoreCommand(), because RestoreArchivedFile() needs to\nhandle two flags with two values. The current design has the\nadvantage to warn extension developers with an unexpected\nmanipulation, as well, so I have kept the logic in percentrepl.c\nas-is.\n\nI was wondering also if ExecuteRecoveryCommand() should use a bits32\nfor its two boolean flags, but did not bother as it is static in\nshell_restore.c so ABI does not matter, even if there are three\ncallers of it with 75% of the combinations possible (only false/true\nis not used).\n\nAnd 0002 is applied.\n--\nMichael",
"msg_date": "Wed, 18 Jan 2023 11:29:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 11:29:20AM +0900, Michael Paquier wrote:\n> And 0002 is applied.\n\nThanks. Here's a rebased version of the last patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 17 Jan 2023 20:44:27 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 17, 2023 at 08:44:27PM -0800, Nathan Bossart wrote:\n> Thanks. Here's a rebased version of the last patch.\n\nThanks for the rebase.\n\nThe final state of the documentation is as follows:\n51. Archive and Recovery Modules\n 51.1. Archive Module Initialization Functions\n 51.2. Archive Module Callbacks\n 51.3. Recovery Module Initialization Functions\n 51.4. Recovery Module Callbacks\n\nI am not completely sure whether this grouping is the best thing to\ndo. Wouldn't it be better to move that into two different\nsub-sections instead? One layout suggestion:\n51. WAL Modules\n 51.1. Archive Modules\n 51.1.1. Archive Module Initialization Functions\n 51.1.2. Archive Module Callbacks\n 51.2. Recovery Modules\n 51.2.1. Recovery Module Initialization Functions\n 51.2.2. Recovery Module Callbacks\n\nPutting both of them in the same section sounds like a good idea per\nthe symmetry that one would likely have between the code paths of\narchiving and recovery, so as they share some common knowledge.\n\nThis kinds of comes back to the previous suggestion to rename\nbasic_archive to something like wal_modules, that covers both\narchiving and recovery. I does not seem like this would overlap with\nRMGRs, which is much more internal anyway.\n\n (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"must specify restore_command when standby mode is not enabled\")));\n+ errmsg(\"must specify restore_command or a restore_library that defines \"\n+ \"a restore callback when standby mode is not enabled\")));\nThis is long. Shouldn't this be split into an errdetail() to list all\nthe options at hand?\n\n- if (XLogArchiveLibrary[0] != '\\0' && XLogArchiveCommand[0] != '\\0')\n- ereport(ERROR,\n- (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n- errmsg(\"both archive_command and archive_library set\"),\n- errdetail(\"Only one of archive_command, archive_library may be set.\")));\n+ CheckMutuallyExclusiveGUCs(XLogArchiveLibrary, \"archive_library\",\n+ XLogArchiveCommand, \"archive_command\");\n\nThe introduction of this routine could be a patch on its own, as it\nimpacts the archiving path.\n\n+ CheckMutuallyExclusiveGUCs(restoreLibrary, \"restore_library\",\n+ archiveCleanupCommand, \"archive_cleanup_command\");\n+ if (strcmp(prevRestoreLibrary, restoreLibrary) != 0 ||\n+ strcmp(prevArchiveCleanupCommand, archiveCleanupCommand) != 0)\n+ {\n+ call_recovery_module_shutdown_cb(0, (Datum) 0);\n+ LoadRecoveryCallbacks();\n+ }\n+\n+ pfree(prevRestoreLibrary);\n+ pfree(prevArchiveCleanupCommand);\n\nHm.. The callers of CheckMutuallyExclusiveGUCs() with the new ERROR\npaths they introduce need a close lookup. As far as I can see this\nconcerns four areas depending on the three restore commands\n(restore_command and recovery_end command for startup,\narchive_cleanup_command for the checkpointer):\n- Startup process initialization, as of validateRecoveryParameters()\nwhere the postmaster GUCs for the recovery target are evaluated. This\none is an early stage which is fine.\n- Startup reloading, as of StartupRereadConfig(). This code could\ninvolve a WAL receiver restart depending on a change in the slot\nchange or in primary_conninfo, and force at the same time an ERROR\nbecause of conflicting recovery library and command configuration.\nThis one should be safe because pendingWalRcvRestart would just be\nconsidered later on by the startup process itself while waiting for\nWAL to become available. Still this could deserve a comment? Even if\nthere is a misconfiguration, a reload on a standby would enforce a\nFATAL in the startup process, taking down the whole server.\n- Checkpointer initialization, as of CheckpointerMain(). A\nconfiguration failure in this code path, aka server startup, causes\nthe server to loop infinitely on FATAL with the misconfiguration\nshowing up all the time.. This is a problem.\n- Last comes the checkpointer GUC reloading, as of\nHandleCheckpointerInterrupts(), with a second problem. This\nintroduces a failure path where ConfigReloadPending is processed at\nthe same time as ShutdownRequestPending based on the way it is coded,\ninteracting with what would be a normal shutdown in some cases? And\nactually, if you enforce a misconfiguration on reload, the\ncheckpointer reports an error but it does not enforce a process\nrestart, hence it keeps around the new, incorrect, configuration while\nwaiting for a new checkpoint to happen once restore_library and\narchive_cleanup_command are set. This could lead to surprises, IMO.\nUpgrading to a FATAL in this code path triggers an infinite loop, like\nthe startup path.\n\nIf the archive_cleanup_command ballback of a restore library triggers\na FATAL, it seems to me that it would continuously trigger a server\nrestart, actually. Perhaps that's something to document, in\ncomparison to the safe fallbacks of the shell command where we don't\nforce an ERROR to give priority to the stability of the checkpointer.\n--\nMichael",
"msg_date": "Mon, 23 Jan 2023 11:44:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 11:44:57AM +0900, Michael Paquier wrote:\n> Thanks for the rebase.\n\nThanks for the detailed review.\n\n> The final state of the documentation is as follows:\n> 51. Archive and Recovery Modules\n> 51.1. Archive Module Initialization Functions\n> 51.2. Archive Module Callbacks\n> 51.3. Recovery Module Initialization Functions\n> 51.4. Recovery Module Callbacks\n> \n> I am not completely sure whether this grouping is the best thing to\n> do. Wouldn't it be better to move that into two different\n> sub-sections instead? One layout suggestion:\n> 51. WAL Modules\n> 51.1. Archive Modules\n> 51.1.1. Archive Module Initialization Functions\n> 51.1.2. Archive Module Callbacks\n> 51.2. Recovery Modules\n> 51.2.1. Recovery Module Initialization Functions\n> 51.2.2. Recovery Module Callbacks\n> \n> Putting both of them in the same section sounds like a good idea per\n> the symmetry that one would likely have between the code paths of\n> archiving and recovery, so as they share some common knowledge.\n> \n> This kinds of comes back to the previous suggestion to rename\n> basic_archive to something like wal_modules, that covers both\n> archiving and recovery. I does not seem like this would overlap with\n> RMGRs, which is much more internal anyway.\n\nI updated the documentation as you suggested, and I renamed basic_archive\nto basic_wal_module.\n\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"must specify restore_command when standby mode is not enabled\")));\n> + errmsg(\"must specify restore_command or a restore_library that defines \"\n> + \"a restore callback when standby mode is not enabled\")));\n> This is long. Shouldn't this be split into an errdetail() to list all\n> the options at hand?\n\nShould the errmsg() be something like \"recovery parameters misconfigured\"?\n\n> - if (XLogArchiveLibrary[0] != '\\0' && XLogArchiveCommand[0] != '\\0')\n> - ereport(ERROR,\n> - (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> - errmsg(\"both archive_command and archive_library set\"),\n> - errdetail(\"Only one of archive_command, archive_library may be set.\")));\n> + CheckMutuallyExclusiveGUCs(XLogArchiveLibrary, \"archive_library\",\n> + XLogArchiveCommand, \"archive_command\");\n> \n> The introduction of this routine could be a patch on its own, as it\n> impacts the archiving path.\n\nI moved this to a separate patch.\n\n> - Startup reloading, as of StartupRereadConfig(). This code could\n> involve a WAL receiver restart depending on a change in the slot\n> change or in primary_conninfo, and force at the same time an ERROR\n> because of conflicting recovery library and command configuration.\n> This one should be safe because pendingWalRcvRestart would just be\n> considered later on by the startup process itself while waiting for\n> WAL to become available. Still this could deserve a comment? Even if\n> there is a misconfiguration, a reload on a standby would enforce a\n> FATAL in the startup process, taking down the whole server.\n\nDo you think the parameter checks should go before the WAL receiver restart\nlogic?\n\n> - Checkpointer initialization, as of CheckpointerMain(). A\n> configuration failure in this code path, aka server startup, causes\n> the server to loop infinitely on FATAL with the misconfiguration\n> showing up all the time.. This is a problem.\n\nPerhaps this is a reason to move the parameter check in CheckpointerMain()\nto after the sigsetjmp() block. This should avoid full server restarts.\nOnly the checkpointer process would loop with the ERROR.\n\n> - Last comes the checkpointer GUC reloading, as of\n> HandleCheckpointerInterrupts(), with a second problem. This\n> introduces a failure path where ConfigReloadPending is processed at\n> the same time as ShutdownRequestPending based on the way it is coded,\n> interacting with what would be a normal shutdown in some cases? And\n> actually, if you enforce a misconfiguration on reload, the\n> checkpointer reports an error but it does not enforce a process\n> restart, hence it keeps around the new, incorrect, configuration while\n> waiting for a new checkpoint to happen once restore_library and\n> archive_cleanup_command are set. This could lead to surprises, IMO.\n> Upgrading to a FATAL in this code path triggers an infinite loop, like\n> the startup path.\n\nIf we move the parameter check in CheckpointerMain() as described above,\nthe checkpointer should be unable to proceed with an incorrect\nconfiguration. For the normal shutdown part, do you think the\nShutdownRequestPending block should be moved to before the\nConfigReloadPending block in HandleCheckpointerInterrupts()?\n\n> If the archive_cleanup_command ballback of a restore library triggers\n> a FATAL, it seems to me that it would continuously trigger a server\n> restart, actually. Perhaps that's something to document, in\n> comparison to the safe fallbacks of the shell command where we don't\n> force an ERROR to give priority to the stability of the checkpointer.\n\nI'm not sure it's worth documenting that ereport(FATAL, ...) in the\ncheckpointer process will cause a server restart. In most cases, an\nextension author would use ERROR, which, if we make the aforementioned\nchanges, would at most cause the checkpointer to effectively restart. This\nis similar to archive modules where an ERROR causes only the archiver\nprocess to restart. Also, we document that recovery libraries are loaded\nin the startup and checkpointer processes, so IMO it should be relatively\napparent that doing something like FATAL or proc_exit() is bad.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 23 Jan 2023 13:44:28 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 23, 2023 at 01:44:28PM -0800, Nathan Bossart wrote:\n> On Mon, Jan 23, 2023 at 11:44:57AM +0900, Michael Paquier wrote:\n> I updated the documentation as you suggested, and I renamed basic_archive\n> to basic_wal_module.\n\nThanks. The renaming of basic_archive to basic_wal_module looked\nfine, so applied.\n\nWhile looking at the docs, I found a bit confusing that the main\nsection of the WAL modules included the full description for the\narchive modules, so I have moved that into the sect2 dedicated to the\narchive modules instead, as of the attached. 0004 has been updated in\nconsequence, with details about the recovery bits within its own\nsect2.\n\n>> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> - errmsg(\"must specify restore_command when standby mode is not enabled\")));\n>> + errmsg(\"must specify restore_command or a restore_library that defines \"\n>> + \"a restore callback when standby mode is not enabled\")));\n>> This is long. Shouldn't this be split into an errdetail() to list all\n>> the options at hand?\n> \n> Should the errmsg() be something like \"recovery parameters misconfigured\"?\n\nHmm. Here is an idea:\n- errmsg: \"must specify restore option when standby mode is not enabled\"\n- errdetail: \"Either restore_command or restore_library need to be\nspecified.\"\n\n>> - if (XLogArchiveLibrary[0] != '\\0' && XLogArchiveCommand[0] != '\\0')\n>> - ereport(ERROR,\n>> - (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n>> - errmsg(\"both archive_command and archive_library set\"),\n>> - errdetail(\"Only one of archive_command, archive_library may be set.\")));\n>> + CheckMutuallyExclusiveGUCs(XLogArchiveLibrary, \"archive_library\",\n>> + XLogArchiveCommand, \"archive_command\");\n>> \n>> The introduction of this routine could be a patch on its own, as it\n>> impacts the archiving path.\n> \n> I moved this to a separate patch.\n\nWhile pondering about that, I found a bit sad that this only works for\nstring GUCs, while it could be possible to do the same kind of checks\nfor the other GUC types with a more generic routine? Not enum,\nobviously, but int, float, bool and real, with the a failure if both\nGUCs are set to non-default values? Also, and I may be missing\nsomething here, do we really need to pass the value of the parameters\nto check? Wouldn't it be enough to check for the case where both\nparameters are set to their non-default values after reloading?\n\n>> - Startup reloading, as of StartupRereadConfig(). This code could\n>> involve a WAL receiver restart depending on a change in the slot\n>> change or in primary_conninfo, and force at the same time an ERROR\n>> because of conflicting recovery library and command configuration.\n>> This one should be safe because pendingWalRcvRestart would just be\n>> considered later on by the startup process itself while waiting for\n>> WAL to become available. Still this could deserve a comment? Even if\n>> there is a misconfiguration, a reload on a standby would enforce a\n>> FATAL in the startup process, taking down the whole server.\n> \n> Do you think the parameter checks should go before the WAL receiver restart\n> logic?\n\nYeah, switching the order makes the logic more robust IMO.\n\n>> - Checkpointer initialization, as of CheckpointerMain(). A\n>> configuration failure in this code path, aka server startup, causes\n>> the server to loop infinitely on FATAL with the misconfiguration\n>> showing up all the time.. This is a problem.\n> \n> Perhaps this is a reason to move the parameter check in CheckpointerMain()\n> to after the sigsetjmp() block. This should avoid full server restarts.\n> Only the checkpointer process would loop with the ERROR.\n\nThe loop part is annoying.. I've never been a fan of adding this\ncross-value checks for the archiver modules in the first place, and it\nwould make things much simpler in the checkpointer if we need to think\nabout that as we want these values to be reloadable. Perhaps this\ncould just be an exception where we just give priority on one over the\nother archive_cleanup_command? The startup process has a well-defined\nsequence after a failure, while the checkpointer is designed to remain\nrobust.\n\n>> - Last comes the checkpointer GUC reloading, as of\n>> HandleCheckpointerInterrupts(), with a second problem. This\n>> introduces a failure path where ConfigReloadPending is processed at\n>> the same time as ShutdownRequestPending based on the way it is coded,\n>> interacting with what would be a normal shutdown in some cases? And\n>> actually, if you enforce a misconfiguration on reload, the\n>> checkpointer reports an error but it does not enforce a process\n>> restart, hence it keeps around the new, incorrect, configuration while\n>> waiting for a new checkpoint to happen once restore_library and\n>> archive_cleanup_command are set. This could lead to surprises, IMO.\n>> Upgrading to a FATAL in this code path triggers an infinite loop, like\n>> the startup path.\n> \n> If we move the parameter check in CheckpointerMain() as described above,\n> the checkpointer should be unable to proceed with an incorrect\n> configuration. For the normal shutdown part, do you think the\n> ShutdownRequestPending block should be moved to before the\n> ConfigReloadPending block in HandleCheckpointerInterrupts()?\n\nI would not touch this order. This could influence the setup a\nshutdown checkpoint relies on, for one.\n\n>> If the archive_cleanup_command ballback of a restore library triggers\n>> a FATAL, it seems to me that it would continuously trigger a server\n>> restart, actually. Perhaps that's something to document, in\n>> comparison to the safe fallbacks of the shell command where we don't\n>> force an ERROR to give priority to the stability of the checkpointer.\n> \n> I'm not sure it's worth documenting that ereport(FATAL, ...) in the\n> checkpointer process will cause a server restart. In most cases, an\n> extension author would use ERROR, which, if we make the aforementioned\n> changes, would at most cause the checkpointer to effectively restart. This\n> is similar to archive modules where an ERROR causes only the archiver\n> process to restart. Also, we document that recovery libraries are loaded\n> in the startup and checkpointer processes, so IMO it should be relatively\n> apparent that doing something like FATAL or proc_exit() is bad.\n\nOkay. Fine by me. This could always be amended later, as required.\n\n+ if (recoveryRestoreCommand[0] == '\\0')\n+ RecoveryContext.restore_cb = NULL;\n+ if (archiveCleanupCommand[0] == '\\0')\n+ RecoveryContext.archive_cleanup_cb = NULL;\n+ if (recoveryEndCommand[0] == '\\0')\n+ RecoveryContext.recovery_end_cb = NULL;\n\nCould it be cleaner to put this knowledge directly in shell_restore.c\nwith a fast-exit path after entering each callback? It does not\nstrike me as a good thing to sprinkle more than necessary the\nknowledge about the commands.\n\nAnother question that popped in my mind: could it be better to have\ntwo different shutdown callbacks for the checkpointer and the startup\nprocess? Having some tests for both, like shell_archive.c, would be\nnice, actually.\n--\nMichael",
"msg_date": "Wed, 25 Jan 2023 16:34:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Jan 25, 2023 at 04:34:21PM +0900, Michael Paquier wrote:\n> The loop part is annoying.. I've never been a fan of adding this\n> cross-value checks for the archiver modules in the first place, and it\n> would make things much simpler in the checkpointer if we need to think\n> about that as we want these values to be reloadable. Perhaps this\n> could just be an exception where we just give priority on one over the\n> other archive_cleanup_command? The startup process has a well-defined\n> sequence after a failure, while the checkpointer is designed to remain\n> robust.\n\nYeah, there are some problems here. If we ERROR, we'll just bounce back to\nthe sigsetjmp() block once a second, and we'll never pick up configuration\nreloads, shutdown signals, etc. If we FATAL, we'll just rapidly restart\nover and over. Given the dicussion about misconfigured archiving\nparameters [0], I doubt folks will be okay with giving priority to one or\nthe other.\n\nI'm currently thinking that the checkpointer should set a flag and clear\nthe recovery callbacks when a misconfiguration is detected. Anytime the\ncheckpointer tries to use the archive-cleanup callback, a WARNING would be\nemitted. This is similar to an approach I proposed for archiving\nmisconfigurations (that we didn't proceed with) [1]. Given the\naformentioned problems, this approach might be more suitable for the\ncheckpointer than it is for the archiver.\n\nThoughts?\n\n[0] https://postgr.es/m/9ee5d180-2c32-a1ca-d3d7-63a723f68d9a%40enterprisedb.com\n[1] https://postgr.es/m/20220914222736.GA3042279%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 26 Jan 2023 21:40:58 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 09:40:58PM -0800, Nathan Bossart wrote:\n> On Wed, Jan 25, 2023 at 04:34:21PM +0900, Michael Paquier wrote:\n>> The loop part is annoying.. I've never been a fan of adding this\n>> cross-value checks for the archiver modules in the first place, and it\n>> would make things much simpler in the checkpointer if we need to think\n>> about that as we want these values to be reloadable. Perhaps this\n>> could just be an exception where we just give priority on one over the\n>> other archive_cleanup_command? The startup process has a well-defined\n>> sequence after a failure, while the checkpointer is designed to remain\n>> robust.\n> \n> Yeah, there are some problems here. If we ERROR, we'll just bounce back to\n> the sigsetjmp() block once a second, and we'll never pick up configuration\n> reloads, shutdown signals, etc. If we FATAL, we'll just rapidly restart\n> over and over. Given the dicussion about misconfigured archiving\n> parameters [0], I doubt folks will be okay with giving priority to one or\n> the other.\n> \n> I'm currently thinking that the checkpointer should set a flag and clear\n> the recovery callbacks when a misconfiguration is detected. Anytime the\n> checkpointer tries to use the archive-cleanup callback, a WARNING would be\n> emitted. This is similar to an approach I proposed for archiving\n> misconfigurations (that we didn't proceed with) [1]. Given the\n> aformentioned problems, this approach might be more suitable for the\n> checkpointer than it is for the archiver.\n\nThe more I think about this, the more I wonder whether we really need to\ninclude archive_cleanup_command and recovery_end_command in recovery\nmodules. Another weird thing with the checkpointer is that the\nrestore_library will stay loaded long after recovery is finished, and it'll\nbe loaded regardless of whether recovery is required in the first place.\nOf course, that typically won't cause any problems, and we could wait until\nwe need to do archive cleanup to load the library (and call its shutdown\ncallback when recovery is finished), but this strikes me as potentially\nmore complexity than the feature is worth. Perhaps we should just focus on\ncovering the restore_command functionality for now and add the rest later.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 15:28:21 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Jan 26, 2023 at 09:40:58PM -0800, Nathan Bossart wrote:\n> Yeah, there are some problems here. If we ERROR, we'll just bounce back to\n> the sigsetjmp() block once a second, and we'll never pick up configuration\n> reloads, shutdown signals, etc. If we FATAL, we'll just rapidly restart\n> over and over. Given the dicussion about misconfigured archiving\n> parameters [0], I doubt folks will be okay with giving priority to one or\n> the other.\n> \n> I'm currently thinking that the checkpointer should set a flag and clear\n> the recovery callbacks when a misconfiguration is detected. Anytime the\n> checkpointer tries to use the archive-cleanup callback, a WARNING would be\n> emitted. This is similar to an approach I proposed for archiving\n> misconfigurations (that we didn't proceed with) [1]. Given the\n> aformentioned problems, this approach might be more suitable for the\n> checkpointer than it is for the archiver.\n\nSo, by doing that, archive_library would be ignored. What should be\nthe checkpointer do when a aconfiguration error is found on\nmisconfiguration? Would archive_cleanup_command be equally ignored or\ncould there be a risk to see it used by the checkpointer?\n\nIgnoring it would be fine as the user intended the use of a library,\nrather than enforcing its use via a value one did not really want.\nSo, on top of cleaning the callbacks, archive_cleanup_command should\nalso be cleaned up in the checkpointer? Issuing one WARNING per\ncheckpoint would be indeed much better than looping over and over,\nimpacting the system reliability.\n\nThoughts or comments from anyone?\n--\nMichael",
"msg_date": "Sat, 28 Jan 2023 08:31:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-27 15:28:21 -0800, Nathan Bossart wrote:\n> The more I think about this, the more I wonder whether we really need to\n> include archive_cleanup_command and recovery_end_command in recovery\n> modules.\n\nI think it would be hard to write a good module that isn't just implementing\nthe existing commands without it. Needing to clean up archives and reacting to\nthe end of recovery seems a pretty core task.\n\n\n\n> Another weird thing with the checkpointer is that the restore_library will\n> stay loaded long after recovery is finished, and it'll be loaded regardless\n> of whether recovery is required in the first place.\n\nI don't see a problem with that. And I suspect we might even end up there\nfor other reasons.\n\nI was briefly wondering whether it'd be worth trying to offload things like\narchive_cleanup_command from checkpointer to a different process, for\nrobustness. But given that it's pretty much required for performance that the\nmodule runs in the startup process, that ship probably has sailed.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 16:09:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-25 16:34:21 +0900, Michael Paquier wrote:\n> diff --git a/src/include/access/xlogarchive.h b/src/include/access/xlogarchive.h\n> index 299304703e..71c9b88165 100644\n> --- a/src/include/access/xlogarchive.h\n> +++ b/src/include/access/xlogarchive.h\n> @@ -30,9 +30,45 @@ extern bool XLogArchiveIsReady(const char *xlog);\n> extern bool XLogArchiveIsReadyOrDone(const char *xlog);\n> extern void XLogArchiveCleanup(const char *xlog);\n> \n> -extern bool shell_restore(const char *file, const char *path,\n> -\t\t\t\t\t\t const char *lastRestartPointFileName);\n> -extern void shell_archive_cleanup(const char *lastRestartPointFileName);\n> -extern void shell_recovery_end(const char *lastRestartPointFileName);\n> +/*\n> + * Recovery module callbacks\n> + *\n> + * These callback functions should be defined by recovery libraries and\n> + * returned via _PG_recovery_module_init(). For more information about the\n> + * purpose of each callback, refer to the recovery modules documentation.\n> + */\n> +typedef bool (*RecoveryRestoreCB) (const char *file, const char *path,\n> +\t\t\t\t\t\t\t\t const char *lastRestartPointFileName);\n> +typedef void (*RecoveryArchiveCleanupCB) (const char *lastRestartPointFileName);\n> +typedef void (*RecoveryEndCB) (const char *lastRestartPointFileName);\n> +typedef void (*RecoveryShutdownCB) (void);\n\nI think the signature of these forces bad coding practices, because there's no\nway to have state within a recovery module (as there's no parameter for it).\n\n\nIt's possible we would eventually support multiple modules, e.g. restoring\nfrom shorter term file based archiving and from longer term archiving in some\nblob store. 
Then we'll regret not having a varible for this.\n\n\n> +typedef struct RecoveryModuleCallbacks\n> +{\n> +\tRecoveryRestoreCB restore_cb;\n> +\tRecoveryArchiveCleanupCB archive_cleanup_cb;\n> +\tRecoveryEndCB recovery_end_cb;\n> +\tRecoveryShutdownCB shutdown_cb;\n> +} RecoveryModuleCallbacks;\n> +\n> +extern RecoveryModuleCallbacks RecoveryContext;\n\nI think that'll typically be interpreteted as a MemoryContext by readers.\n\nAlso, why is this a global var? Exported too?\n\n\n> +/*\n> + * Type of the shared library symbol _PG_recovery_module_init that is looked up\n> + * when loading a recovery library.\n> + */\n> +typedef void (*RecoveryModuleInit) (RecoveryModuleCallbacks *cb);\n\nI think this is a bad way to return callbacks. This way the\nRecoveryModuleCallbacks needs to be modifiable, which makes the job for the\ncompiler harder (and isn't the greatest for security).\n\nI strongly encourage you to follow the model used e.g. by tableam. The init\nfunction should return a pointer to a *constant* struct. Which is compile-time\ninitialized with the function pointers.\n\nSee the bottom of heapam_handler.c for how that ends up looking.\n\n\n> +void\n> +LoadRecoveryCallbacks(void)\n> +{\n> +\tRecoveryModuleInit init;\n> +\n> +\t/*\n> +\t * If the shell command is enabled, use our special initialization\n> +\t * function. 
Otherwise, load the library and call its\n> +\t * _PG_recovery_module_init().\n> +\t */\n> +\tif (restoreLibrary[0] == '\\0')\n> +\t\tinit = shell_restore_init;\n> +\telse\n> +\t\tinit = (RecoveryModuleInit)\n> +\t\t\tload_external_function(restoreLibrary, \"_PG_recovery_module_init\",\n> +\t\t\t\t\t\t\t\t false, NULL);\n\nWhy a special rule for shell, instead of just defaulting the GUC to it?\n\n\n> +\t/*\n> +\t * If using shell commands, remove callbacks for any commands that are not\n> +\t * set.\n> +\t */\n> +\tif (restoreLibrary[0] == '\\0')\n> +\t{\n> +\t\tif (recoveryRestoreCommand[0] == '\\0')\n> +\t\t\tRecoveryContext.restore_cb = NULL;\n> +\t\tif (archiveCleanupCommand[0] == '\\0')\n> +\t\t\tRecoveryContext.archive_cleanup_cb = NULL;\n> +\t\tif (recoveryEndCommand[0] == '\\0')\n> +\t\t\tRecoveryContext.recovery_end_cb = NULL;\n\nI'd just mandate that these are implemented and that the module has to handle\nif it doesn't need to do anything.\n\n\n\n> +\t/*\n> +\t * Check for invalid combinations of the command/library parameters and\n> +\t * load the callbacks.\n> +\t */\n> +\tCheckMutuallyExclusiveGUCs(restoreLibrary, \"restore_library\",\n> +\t\t\t\t\t\t\t recoveryRestoreCommand, \"restore_command\");\n> +\tCheckMutuallyExclusiveGUCs(restoreLibrary, \"restore_library\",\n> +\t\t\t\t\t\t\t recoveryEndCommand, \"recovery_end_command\");\n> +\tbefore_shmem_exit(call_recovery_module_shutdown_cb, 0);\n> +\tLoadRecoveryCallbacks();\n\nThis kind of sequence is duplicated into several places.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 16:23:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 04:09:39PM -0800, Andres Freund wrote:\n> I think it would be hard to write a good module that isn't just implementing\n> the existing commands without it. Needing to clean up archives and reacting to\n> the end of recovery seems a pretty core task.\n\nFWIW, recovery_end_command is straight-forward as it is done by the\nstartup process, so that's an easy take. You could split the cake\ninto two parts, as well, aka first focus on restore_command and\nrecovery_end_command as a first step, then we could try to figure out \nhow archive_cleanup_command would fit in this picture with the\ncheckpointer or a different process. There are a bunch of deployments\nwhere WAL archive retention is controlled by the age of the backups,\nnot by the backend deciding when these should be removed as a\ncheckpoint runs depending on the computed redo LSN, so recovery\nmodules would still be useful with just coverage for restore_command\nand recovery_end_command.\n\n> I was briefly wondering whether it'd be worth trying to offload things like\n> archive_cleanup_command from checkpointer to a different process, for\n> robustness. But given that it's pretty much required for performance that the\n> module runs in the startup process, that ship probably has sailed.\n\nYeah, agreed that this could be interesting. That could leverage the\nwork of the checkpointer. Nathan has proposed a patch for that\nrecently, as far as I recall, to offload some tasks from the startup\nand checkpointer processes:\nhttps://commitfest.postgresql.org/41/3448/\n\nSo that pretty much goes into the same area?\n--\nMichael",
"msg_date": "Sat, 28 Jan 2023 09:24:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 04:23:19PM -0800, Andres Freund wrote:\n>> +typedef bool (*RecoveryRestoreCB) (const char *file, const char *path,\n>> +\t\t\t\t\t\t\t\t const char *lastRestartPointFileName);\n>> +typedef void (*RecoveryArchiveCleanupCB) (const char *lastRestartPointFileName);\n>> +typedef void (*RecoveryEndCB) (const char *lastRestartPointFileName);\n>> +typedef void (*RecoveryShutdownCB) (void);\n> \n> I think the signature of these forces bad coding practices, because there's no\n> way to have state within a recovery module (as there's no parameter for it).\n> \n> It's possible we would eventually support multiple modules, e.g. restoring\n> from shorter term file based archiving and from longer term archiving in some\n> blob store. Then we'll regret not having a varible for this.\n\nAre you suggesting that we add a \"void *arg\" to each one of these? Or put\nthe arguments into a struct? Or something else?\n\n>> +extern RecoveryModuleCallbacks RecoveryContext;\n> \n> I think that'll typically be interpreteted as a MemoryContext by readers.\n\nHow about RecoveryCallbacks?\n\n> Also, why is this a global var? Exported too?\n\nIt's needed in xlog.c, xlogarchive.c, and xlogrecovery.c. Would you rather\nit be static to xlogarchive.c and provide accessors for the others?\n\n>> +/*\n>> + * Type of the shared library symbol _PG_recovery_module_init that is looked up\n>> + * when loading a recovery library.\n>> + */\n>> +typedef void (*RecoveryModuleInit) (RecoveryModuleCallbacks *cb);\n> \n> I think this is a bad way to return callbacks. This way the\n> RecoveryModuleCallbacks needs to be modifiable, which makes the job for the\n> compiler harder (and isn't the greatest for security).\n> \n> I strongly encourage you to follow the model used e.g. by tableam. The init\n> function should return a pointer to a *constant* struct. 
Which is compile-time\n> initialized with the function pointers.\n> \n> See the bottom of heapam_handler.c for how that ends up looking.\n\nHm. I used the existing strategy for archive modules and logical decoding\noutput plugins here. I think it would be weird for the archive module and\nrecovery module interfaces to look so different, but if that's okay, I can\nchange it.\n\n>> +void\n>> +LoadRecoveryCallbacks(void)\n>> +{\n>> +\tRecoveryModuleInit init;\n>> +\n>> +\t/*\n>> +\t * If the shell command is enabled, use our special initialization\n>> +\t * function. Otherwise, load the library and call its\n>> +\t * _PG_recovery_module_init().\n>> +\t */\n>> +\tif (restoreLibrary[0] == '\\0')\n>> +\t\tinit = shell_restore_init;\n>> +\telse\n>> +\t\tinit = (RecoveryModuleInit)\n>> +\t\t\tload_external_function(restoreLibrary, \"_PG_recovery_module_init\",\n>> +\t\t\t\t\t\t\t\t false, NULL);\n> \n> Why a special rule for shell, instead of just defaulting the GUC to it?\n\nI'm not following this one. The default value of the restore_library GUC\nis an empty string, which means that the shell commands should be used.\n\n>> +\t/*\n>> +\t * If using shell commands, remove callbacks for any commands that are not\n>> +\t * set.\n>> +\t */\n>> +\tif (restoreLibrary[0] == '\\0')\n>> +\t{\n>> +\t\tif (recoveryRestoreCommand[0] == '\\0')\n>> +\t\t\tRecoveryContext.restore_cb = NULL;\n>> +\t\tif (archiveCleanupCommand[0] == '\\0')\n>> +\t\t\tRecoveryContext.archive_cleanup_cb = NULL;\n>> +\t\tif (recoveryEndCommand[0] == '\\0')\n>> +\t\t\tRecoveryContext.recovery_end_cb = NULL;\n> \n> I'd just mandate that these are implemented and that the module has to handle\n> if it doesn't need to do anything.\n\nWouldn't this just force module authors to write empty functions for the\nparts they don't need?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 16:59:10 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-27 16:59:10 -0800, Nathan Bossart wrote:\n> On Fri, Jan 27, 2023 at 04:23:19PM -0800, Andres Freund wrote:\n> >> +typedef bool (*RecoveryRestoreCB) (const char *file, const char *path,\n> >> +\t\t\t\t\t\t\t\t const char *lastRestartPointFileName);\n> >> +typedef void (*RecoveryArchiveCleanupCB) (const char *lastRestartPointFileName);\n> >> +typedef void (*RecoveryEndCB) (const char *lastRestartPointFileName);\n> >> +typedef void (*RecoveryShutdownCB) (void);\n> > \n> > I think the signature of these forces bad coding practices, because there's no\n> > way to have state within a recovery module (as there's no parameter for it).\n> > \n> > It's possible we would eventually support multiple modules, e.g. restoring\n> > from shorter term file based archiving and from longer term archiving in some\n> > blob store. Then we'll regret not having a varible for this.\n> \n> Are you suggesting that we add a \"void *arg\" to each one of these?\n\nYes. Or pass a pointer to a struct with a \"private_data\" style field to all\nof them.\n\n\n\n> >> +extern RecoveryModuleCallbacks RecoveryContext;\n> > \n> > I think that'll typically be interpreteted as a MemoryContext by readers.\n> \n> How about RecoveryCallbacks?\n\nCallbacks is better than Context here imo, but I think 'Recovery' makes it\nsound like this actually performs WAL replay or such. Seems like it should be\nRestoreCallbacks at the very least?\n\n\n> > Also, why is this a global var? Exported too?\n> \n> It's needed in xlog.c, xlogarchive.c, and xlogrecovery.c. Would you rather\n> it be static to xlogarchive.c and provide accessors for the others?\n\nMaybe? 
Something about this feels wrong to me, but I can't entirely put my\nfinger on it.\n\n\n> >> +/*\n> >> + * Type of the shared library symbol _PG_recovery_module_init that is looked up\n> >> + * when loading a recovery library.\n> >> + */\n> >> +typedef void (*RecoveryModuleInit) (RecoveryModuleCallbacks *cb);\n> > \n> > I think this is a bad way to return callbacks. This way the\n> > RecoveryModuleCallbacks needs to be modifiable, which makes the job for the\n> > compiler harder (and isn't the greatest for security).\n> > \n> > I strongly encourage you to follow the model used e.g. by tableam. The init\n> > function should return a pointer to a *constant* struct. Which is compile-time\n> > initialized with the function pointers.\n> > \n> > See the bottom of heapam_handler.c for how that ends up looking.\n> \n> Hm. I used the existing strategy for archive modules and logical decoding\n> output plugins here.\n\nUnfortunately I didn't realize the problem when I was designing the output\nplugin interface. But there's probably too many users of it out there now to\nchange it.\n\nThe interface does at least provide a way to have its own \"per instance\"\nstate, via the startup callback and LogicalDecodingContext->output_plugin_private.\n\n\nThe worst interface in this area is index AMs - the handler returns a pointer\nto a palloced struct with callbacks. That then is copied into a new allocation\nin the relcache entry. We have hundreds to thousands of copies of what\nbthandler() sets up in memory. Without any sort of benefit.\n\n\n> I think it would be weird for the archive module and\n> recovery module interfaces to look so different, but if that's okay, I can\n> change it.\n\nI'm a bit sad about the archive module case - I wonder if we should change it\nnow, there can't be many users of it out there. 
And I think it's more likely\nthat we'll eventually want multiple archiving scripts to run concurrently -\nwhich will be quite hard with the current interface (no private state).\n\nJust btw: It's imo a bit awkward for the definition of the archiving plugin\ninterface to be in pgarch.h: \"Exports from postmaster/pgarch.c\" doesn't\ndescribe that well. A dedicated header seems cleaner.\n\n\n> \n> >> +void\n> >> +LoadRecoveryCallbacks(void)\n> >> +{\n> >> +\tRecoveryModuleInit init;\n> >> +\n> >> +\t/*\n> >> +\t * If the shell command is enabled, use our special initialization\n> >> +\t * function. Otherwise, load the library and call its\n> >> +\t * _PG_recovery_module_init().\n> >> +\t */\n> >> +\tif (restoreLibrary[0] == '\\0')\n> >> +\t\tinit = shell_restore_init;\n> >> +\telse\n> >> +\t\tinit = (RecoveryModuleInit)\n> >> +\t\t\tload_external_function(restoreLibrary, \"_PG_recovery_module_init\",\n> >> +\t\t\t\t\t\t\t\t false, NULL);\n> > \n> > Why a special rule for shell, instead of just defaulting the GUC to it?\n> \n> I'm not following this one. The default value of the restore_library GUC\n> is an empty string, which means that the shell commands should be used.\n\nI was wondering why we implement \"shell\" via a separate mechanism from\nrestore_library. I.e. 
a) why doesn't restore_library default to 'shell',\ninstead of an empty string, b) why aren't restore_command et al implemented\nusing a restore module.\n\n\n> >> +\t/*\n> >> +\t * If using shell commands, remove callbacks for any commands that are not\n> >> +\t * set.\n> >> +\t */\n> >> +\tif (restoreLibrary[0] == '\\0')\n> >> +\t{\n> >> +\t\tif (recoveryRestoreCommand[0] == '\\0')\n> >> +\t\t\tRecoveryContext.restore_cb = NULL;\n> >> +\t\tif (archiveCleanupCommand[0] == '\\0')\n> >> +\t\t\tRecoveryContext.archive_cleanup_cb = NULL;\n> >> +\t\tif (recoveryEndCommand[0] == '\\0')\n> >> +\t\t\tRecoveryContext.recovery_end_cb = NULL;\n> > \n> > I'd just mandate that these are implemented and that the module has to handle\n> > if it doesn't need to do anything.\n> \n> Wouldn't this just force module authors to write empty functions for the\n> parts they don't need?\n\nYes. But what's the point of a restore library that doesn't implement a\nrestore command? Making some/all callbacks mandatory and validating mandatory\ncallbacks are set, during load, IME makes it easier to evolve the interface\nover time, because problems become immediately apparent, rather than having to\nwait for a certain callback to be hit.\n\n\nIt's not actually clear to me why another restore library shouldn't be able to\nuse restore_command etc, given that we have the parameters. One quite useful\nmodule would be a version of the \"shell\" interface that runs multiple restore\ncommands in parallel, assuming we'll need subsequent files as well.\n\nThe fact that restore_command are not run in parallel, and that many useful\nrestore commands have a fair bit of latency, is an issue. So a shell_parallel\nrestore library would e.g. be useful?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 27 Jan 2023 17:55:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 05:55:42PM -0800, Andres Freund wrote:\n> On 2023-01-27 16:59:10 -0800, Nathan Bossart wrote:\n>> I think it would be weird for the archive module and\n>> recovery module interfaces to look so different, but if that's okay, I can\n>> change it.\n> \n> I'm a bit sad about the archive module case - I wonder if we should change it\n> now, there can't be many users of it out there. And I think it's more likely\n> that we'll eventually want multiple archiving scripts to run concurrently -\n> which will be quite hard with the current interface (no private state).\n\nI'm open to that. IIUC it wouldn't require too many changes to existing\narchive modules, and if it gets us closer to batching or parallelism, it's\nprobably worth doing sooner than later.\n\n> I was wondering why we implement \"shell\" via a separate mechanism from\n> restore_library. I.e. a) why doesn't restore_library default to 'shell',\n> instead of an empty string, b) why aren't restore_command et al implemented\n> using a restore module.\n\nI think that's the long-term idea. For archive modules, there were\nconcerns about backward compatibility [0].\n\n[0] https://postgr.es/m/CABUevEx8cKy%3D%2BYQU_3NaeXnZV2bSB7Lk6EE%2B-FEcmE4JO4V1hg%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 27 Jan 2023 20:17:46 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 08:17:46PM -0800, Nathan Bossart wrote:\n> On Fri, Jan 27, 2023 at 05:55:42PM -0800, Andres Freund wrote:\n>> On 2023-01-27 16:59:10 -0800, Nathan Bossart wrote:\n>>> I think it would be weird for the archive module and\n>>> recovery module interfaces to look so different, but if that's okay, I can\n>>> change it.\n>> \n>> I'm a bit sad about the archive module case - I wonder if we should change it\n>> now, there can't be many users of it out there. And I think it's more likely\n>> that we'll eventually want multiple archiving scripts to run concurrently -\n>> which will be quite hard with the current interface (no private state).\n> \n> I'm open to that. IIUC it wouldn't require too many changes to existing\n> archive modules, and if it gets us closer to batching or parallelism, it's\n> probably worth doing sooner than later.\n\nHere is a work-in-progress patch set for adjusting the archive modules\ninterface. Is this roughly what you had in mind?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 27 Jan 2023 22:27:29 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 10:27:29PM -0800, Nathan Bossart wrote:\n> Here is a work-in-progress patch set for adjusting the archive modules\n> interface. Is this roughly what you had in mind?\n\nI have been catching up with what is happening here. I can get\nbehind the idea to use the term \"callbacks\" vs \"context\" for clarity,\nto move the callback definitions into their own header, and to add\nextra arguments to the callback functions for some private data.\n\n-void\n-_PG_archive_module_init(ArchiveModuleCallbacks *cb)\n+const ArchiveModuleCallbacks *\n+_PG_archive_module_init(void **arg)\n {\n AssertVariableIsOfType(&_PG_archive_module_init, ArchiveModuleInit);\n \n- cb->check_configured_cb = basic_archive_configured;\n- cb->archive_file_cb = basic_archive_file;\n+ (*arg) = (void *) AllocSetContextCreate(TopMemoryContext,\n+ \"basic_archive\",\n+ ALLOCSET_DEFAULT_SIZES);\n+\n+ return &basic_archive_callbacks;\n\nNow, I find this part, where we use a double pointer to allow the\nmodule initialization to create and give back a private area, rather\nconfusing, and I think that this could be bug-prone, as well. Once\nyou incorporate some data within the set of callbacks, isn't this\nstuff close to a \"state\" data, or just something that we could call\nonly an \"ArchiveModule\"? Could it make more sense to have\n_PG_archive_module_init return a structure with everything rather than\na separate in/out argument? Here is the idea, simply:\ntypedef struct ArchiveModule {\n\tArchiveCallbacks *routines;\n\tvoid *private_data;\n\t/* Potentially more here, like some flags? */\n} ArchiveModule;\n\nThat would be closer to the style of xlogreader.h, for example.\n\nAll these choices should be documented in archive_module.h, at the\nend.\n--\nMichael",
"msg_date": "Mon, 30 Jan 2023 16:51:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 04:51:38PM +0900, Michael Paquier wrote:\n> Now, I find this part, where we use a double pointer to allow the\n> module initialization to create and give back a private area, rather\n> confusing, and I think that this could be bug-prone, as well. Once\n> you incorporate some data within the set of callbacks, isn't this\n> stuff close to a \"state\" data, or just something that we could call\n> only an \"ArchiveModule\"? Could it make more sense to have\n> _PG_archive_module_init return a structure with everything rather than\n> a separate in/out argument? Here is the idea, simply:\n> typedef struct ArchiveModule {\n> \tArchiveCallbacks *routines;\n> \tvoid *private_data;\n> \t/* Potentially more here, like some flags? */\n> } ArchiveModule;\n\nYeah, we could probably invent an ArchiveModuleContext struct. I think\nthis is similar to how LogicalDecodingContext is used.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:38:20 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-30 16:51:38 +0900, Michael Paquier wrote:\n> On Fri, Jan 27, 2023 at 10:27:29PM -0800, Nathan Bossart wrote:\n> > Here is a work-in-progress patch set for adjusting the archive modules\n> > interface. Is this roughly what you had in mind?\n> \n> I have been catching up with what is happening here. I can get\n> behind the idea to use the term \"callbacks\" vs \"context\" for clarity,\n> to move the callback definitions into their own header, and to add\n> extra arguments to the callback functions for some private data.\n> \n> -void\n> -_PG_archive_module_init(ArchiveModuleCallbacks *cb)\n> +const ArchiveModuleCallbacks *\n> +_PG_archive_module_init(void **arg)\n> {\n> AssertVariableIsOfType(&_PG_archive_module_init, ArchiveModuleInit);\n> \n> - cb->check_configured_cb = basic_archive_configured;\n> - cb->archive_file_cb = basic_archive_file;\n> + (*arg) = (void *) AllocSetContextCreate(TopMemoryContext,\n> + \"basic_archive\",\n> + ALLOCSET_DEFAULT_SIZES);\n> +\n> + return &basic_archive_callbacks;\n\n> Now, I find this part, where we use a double pointer to allow the\n> module initialization to create and give back a private area, rather\n> confusing, and I think that this could be bug-prone, as well.\n\nI don't think _PG_archive_module_init() should actually allocate a memory\ncontext and do other similar initializations. Instead it should just return\n'const ArchiveModuleCallbacks*', typically a single line.\n\nAllocations etc should happen in one of the callbacks. That way we can\nactually have multiple instances of a module.\n\n\n> Once\n> you incorporate some data within the set of callbacks, isn't this\n> stuff close to a \"state\" data, or just something that we could call\n> only an \"ArchiveModule\"? Could it make more sense to have\n> _PG_archive_module_init return a structure with everything rather than\n> a separate in/out argument? 
Here is the idea, simply:\n> typedef struct ArchiveModule {\n> \tArchiveCallbacks *routines;\n> \tvoid *private_data;\n> \t/* Potentially more here, like some flags? */\n> } ArchiveModule;\n\nI don't like this much. This still basically ends up with the module callbacks\nnot being sufficient to instantiate an archive module.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 30 Jan 2023 11:48:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 11:48:10AM -0800, Andres Freund wrote:\n> I don't think _PG_archive_module_init() should actually allocate a memory\n> context and do other similar initializations. Instead it should just return\n> 'const ArchiveModuleCallbacks*', typically a single line.\n> \n> Allocations etc should happen in one of the callbacks. That way we can\n> actually have multiple instances of a module.\n\nI think we'd need to invent a startup callback for archive modules for this\nto work, but that's easy enough.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 30 Jan 2023 12:04:22 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Jan 30, 2023 at 12:04:22PM -0800, Nathan Bossart wrote:\n> On Mon, Jan 30, 2023 at 11:48:10AM -0800, Andres Freund wrote:\n>> I don't think _PG_archive_module_init() should actually allocate a memory\n>> context and do other similar initializations. Instead it should just return\n>> 'const ArchiveModuleCallbacks*', typically a single line.\n>> \n>> Allocations etc should happen in one of the callbacks. That way we can\n>> actually have multiple instances of a module.\n> \n> I think we'd need to invent a startup callback for archive modules for this\n> to work, but that's easy enough.\n\nIf you don't return some (void *) pointing to a private area that\nwould be stored by the backend, allocated as part of the loading path,\nI agree that an extra callback is what makes the most sense,\npresumably called around the beginning of PgArchiverMain(). Doing\nthis kind of one-time action in the file callback woud be weird..\n--\nMichael",
"msg_date": "Tue, 31 Jan 2023 08:13:11 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 08:13:11AM +0900, Michael Paquier wrote:\n> On Mon, Jan 30, 2023 at 12:04:22PM -0800, Nathan Bossart wrote:\n>> On Mon, Jan 30, 2023 at 11:48:10AM -0800, Andres Freund wrote:\n>>> I don't think _PG_archive_module_init() should actually allocate a memory\n>>> context and do other similar initializations. Instead it should just return\n>>> 'const ArchiveModuleCallbacks*', typically a single line.\n>>> \n>>> Allocations etc should happen in one of the callbacks. That way we can\n>>> actually have multiple instances of a module.\n>> \n>> I think we'd need to invent a startup callback for archive modules for this\n>> to work, but that's easy enough.\n> \n> If you don't return some (void *) pointing to a private area that\n> would be stored by the backend, allocated as part of the loading path,\n> I agree that an extra callback is what makes the most sense,\n> presumably called around the beginning of PgArchiverMain(). Doing\n> this kind of one-time action in the file callback woud be weird..\n\nOkay, here is a new patch set with the aforementioned adjustments and\ndocumentation updates.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 31 Jan 2023 15:30:13 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Jan 31, 2023 at 03:30:13PM -0800, Nathan Bossart wrote:\n> Okay, here is a new patch set with the aforementioned adjustments and\n> documentation updates.\n\nSo, it looks like you have addressed the feedback received here, as\nof:\n- Rename of Context to Callback.\n- Move of the definition into their own header. \n- Introduction of a callback for the startup initialization.\n- Pass down a private state to each callback.\n\nI have a few minor comments.\n\n+/*\n+ * Since the logic for archiving via a shell command is in the core server\n+ * and does not need to be loaded via a shared library, it has a special\n+ * initialization function.\n+ */\n+extern const ArchiveModuleCallbacks *shell_archive_init(void);\nStoring that in archive_module.h is not incorrect, still feels a bit\nunnatural. I would have used a separate header for clarity. It may\nnot sound like a big deal, but we may want this separation if\narchive_module.h is used in some frontend code in the future. Perhaps\nthat will never be the case, but I've seen many fancy (as in useful)\nproposals in the past when it comes to such things.\n\n static bool\n-shell_archive_configured(void)\n+shell_archive_configured(void *private_state)\n {\n return XLogArchiveCommand[0] != '\\0';\nMaybe check that in this context private_state should be NULL? The\nother two callbacks could use an assert, as well.\n\n- <function>_PG_archive_module_init</function>. This function is passed a\n- struct that needs to be filled with the callback function pointers for\n- individual actions.\n+ <function>_PG_archive_module_init</function>. This function must return a\n+ struct filled with the callback function pointers for individual actions.\n\nWorth mentioning the name of the structure, as of \"This function must\nreturn a structure ArchiveModuleCallbacks filled with..\"\n\n+ The <function>startup_cb</function> callback is called shortly after the\n+ module is loaded. This callback can be used to perform any additional\n+ initialization required. If the archive module needs a state, it should\n+ return a pointer to the state. This pointer will be passed to each of the\n+ module's other callbacks via the <literal>void *private_state</literal>\n+ argument.\n\nNot sure about the complexity of two sentences here. This could\nsimply be:\nThis function can return a pointer to an area of memory dedicated to\nthe state of the archive module loaded. This pointer is passed to\neach of the module's other callbacks as the argument\n<literal>private_state</literal>.\n\nSide note: it looks like there is nothing in archive-modules.sgml\ntelling that these modules are only loaded by the archiver process.\n--\nMichael",
"msg_date": "Wed, 1 Feb 2023 13:57:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-31 15:30:13 -0800, Nathan Bossart wrote:\n\n\n> +/*\n> + * basic_archive_startup\n> + *\n> + * Creates the module's memory context.\n> + */\n> +void *\n> +basic_archive_startup(void)\n> +{\n> +\treturn (void *) AllocSetContextCreate(TopMemoryContext,\n> +\t\t\t\t\t\t\t\t\t\t \"basic_archive\",\n> +\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n> }\n\nI'd make basic_archive's private data a struct, with a member for the\ncontext, but it's not that important.\n\nI'd also be inclined to do the same for the private_state you're passing\naround for each module. Even if it's just to reduce the number of\nfunctions accepting void * - losing compiler type checking isn't great.\n\nSo maybe an ArchiveModuleState { void *private_data } that's passed to\nbasic_archive_startup() and all the other callbacks.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Feb 2023 03:54:26 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 03:54:26AM -0800, Andres Freund wrote:\n> I'd make basic_archive's private data a struct, with a member for the\n> context, but it's not that important.\n> \n> I'd also be inclined to do the same for the private_state you're passing\n> around for each module. Even if it's just to reduce the number of\n> functions accepting void * - losing compiler type checking isn't great.\n> \n> So maybe an ArchiveModuleState { void *private_data } that's passed to\n> basic_archive_startup() and all the other callbacks.\n\nHere's a new patch set in which I've attempted to address this feedback and\nMichael's feedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Feb 2023 12:15:29 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-01 12:15:29 -0800, Nathan Bossart wrote:\n> Here's a new patch set in which I've attempted to address this feedback and\n> Michael's feedback.\n\nLooks better!\n\n\n> @@ -25,12 +34,14 @@ extern PGDLLIMPORT char *XLogArchiveLibrary;\n> * For more information about the purpose of each callback, refer to the\n> * archive modules documentation.\n> */\n> -typedef bool (*ArchiveCheckConfiguredCB) (void);\n> -typedef bool (*ArchiveFileCB) (const char *file, const char *path);\n> -typedef void (*ArchiveShutdownCB) (void);\n> +typedef void (*ArchiveStartupCB) (ArchiveModuleState *state);\n> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n> +typedef bool (*ArchiveFileCB) (const char *file, const char *path, ArchiveModuleState *state);\n> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n\nPersonally I'd always pass ArchiveModuleState *state as the first arg,\nbut it's not important.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 1 Feb 2023 13:06:06 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 01:06:06PM -0800, Andres Freund wrote:\n> On 2023-02-01 12:15:29 -0800, Nathan Bossart wrote:\n>> Here's a new patch set in which I've attempted to address this feedback and\n>> Michael's feedback.\n> \n> Looks better!\n\nThanks!\n\n>> @@ -25,12 +34,14 @@ extern PGDLLIMPORT char *XLogArchiveLibrary;\n>> * For more information about the purpose of each callback, refer to the\n>> * archive modules documentation.\n>> */\n>> -typedef bool (*ArchiveCheckConfiguredCB) (void);\n>> -typedef bool (*ArchiveFileCB) (const char *file, const char *path);\n>> -typedef void (*ArchiveShutdownCB) (void);\n>> +typedef void (*ArchiveStartupCB) (ArchiveModuleState *state);\n>> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n>> +typedef bool (*ArchiveFileCB) (const char *file, const char *path, ArchiveModuleState *state);\n>> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n> \n> Personally I'd always pass ArchiveModuleState *state as the first arg,\n> but it's not important.\n\nYeah, that's nicer. cfbot is complaining about a missing #include, so I\nneed to send a new revision anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Feb 2023 13:23:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 01:23:26PM -0800, Nathan Bossart wrote:\n> Yeah, that's nicer. cfbot is complaining about a missing #include, so I\n> need to send a new revision anyway.\n\nOkay, the changes done here look straight-forward seen from here. I\ngot one small-ish comment.\n\n+basic_archive_startup(ArchiveModuleState *state)\n+{\n+ BasicArchiveData *data = palloc0(sizeof(BasicArchiveData));\n\nPerhaps this should use MemoryContextAlloc() rather than a plain\npalloc(). This should not matter based on the position where the\nstartup callback is called, still that may be a pattern worth\nencouraging.\n--\nMichael",
"msg_date": "Thu, 2 Feb 2023 14:34:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 02:34:17PM +0900, Michael Paquier wrote:\n> Okay, the changes done here look straight-forward seen from here. I\n> got one small-ish comment.\n> \n> +basic_archive_startup(ArchiveModuleState *state)\n> +{\n> + BasicArchiveData *data = palloc0(sizeof(BasicArchiveData));\n> \n> Perhaps this should use MemoryContextAlloc() rather than a plain\n> palloc(). This should not matter based on the position where the\n> startup callback is called, still that may be a pattern worth\n> encouraging.\n\nGood call.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 2 Feb 2023 11:37:00 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 02, 2023 at 11:37:00AM -0800, Nathan Bossart wrote:\n> On Thu, Feb 02, 2023 at 02:34:17PM +0900, Michael Paquier wrote:\n>> Okay, the changes done here look straight-forward seen from here. I\n>> got one small-ish comment.\n>> \n>> +basic_archive_startup(ArchiveModuleState *state)\n>> +{\n>> + BasicArchiveData *data = palloc0(sizeof(BasicArchiveData));\n>> \n>> Perhaps this should use MemoryContextAlloc() rather than a plain\n>> palloc(). This should not matter based on the position where the\n>> startup callback is called, still that may be a pattern worth\n>> encouraging.\n> \n> Good call.\n\n+ ArchiveModuleCallbacks struct filled with the callback function pointers for\nThis needs a structname markup.\n\n+ can use <literal>state->private_data</literal> to store it.\nAnd here it would be structfield.\n\nAs far as I can see, all the points raised about this redesign seem to\nhave been addressed. Andres, any comments?\n--\nMichael",
"msg_date": "Sat, 4 Feb 2023 11:59:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Sat, Feb 04, 2023 at 11:59:20AM +0900, Michael Paquier wrote:\n> + ArchiveModuleCallbacks struct filled with the callback function pointers for\n> This needs a structname markup.\n> \n> + can use <literal>state->private_data</literal> to store it.\n> And here it would be structfield.\n\nfixed\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 4 Feb 2023 10:14:36 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Sat, Feb 04, 2023 at 10:14:36AM -0800, Nathan Bossart wrote:\n> On Sat, Feb 04, 2023 at 11:59:20AM +0900, Michael Paquier wrote:\n>> + ArchiveModuleCallbacks struct filled with the callback function pointers for\n>> This needs a structname markup.\n>> \n>> + can use <literal>state->private_data</literal> to store it.\n>> And here it would be structfield.\n> \n> fixed\n\nAndres, did you have the chance to look at that? I did look at it,\nbut it may not address all the points you may have in mind.\n--\nMichael",
"msg_date": "Wed, 8 Feb 2023 16:23:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 16:23:34 +0900, Michael Paquier wrote:\n> On Sat, Feb 04, 2023 at 10:14:36AM -0800, Nathan Bossart wrote:\n> > On Sat, Feb 04, 2023 at 11:59:20AM +0900, Michael Paquier wrote:\n> >> + ArchiveModuleCallbacks struct filled with the callback function pointers for\n> >> This needs a structname markup.\n> >> \n> >> + can use <literal>state->private_data</literal> to store it.\n> >> And here it would be structfield.\n> > \n> > fixed\n> \n> Andres, did you have the chance to look at that? I did look at it,\n> but it may not address all the points you may have in mind.\n\nYes, I think this looks pretty good now.\n\nOne minor thing: I don't think we really need the AssertVariableIsOfType() for\nanything but the Init() one?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 08:27:13 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 08:27:13AM -0800, Andres Freund wrote:\n> One minor thing: I don't think we really need the AssertVariableIsOfType() for\n> anything but the Init() one?\n\nThis is another part that was borrowed from logical decoding output\nplugins. I'm not sure this adds much since f2b73c8 (\"Add central\ndeclarations for dlsym()ed symbols\"). Perhaps we should remove all of\nthese assertions for functions that now have central declarations.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 8 Feb 2023 09:27:05 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 09:27:05 -0800, Nathan Bossart wrote:\n> On Wed, Feb 08, 2023 at 08:27:13AM -0800, Andres Freund wrote:\n> > One minor thing: I don't think we really need the AssertVariableIsOfType() for\n> > anything but the Init() one?\n> \n> This is another part that was borrowed from logical decoding output\n> plugins.\n\nI know :(. It was needed in an earlier version of the output plugin interface,\nwhere all the different callbacks were looked up via dlsym(), but should have\nbeen removed after that.\n\n\n> I'm not sure this adds much since f2b73c8 (\"Add central\n> declarations for dlsym()ed symbols\"). Perhaps we should remove all of\n> these assertions for functions that now have central declarations.\n\nMost of them weren't needed even before that.\n\nAnd yes, I'd be for a patch to remove all of those assertions.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 09:33:44 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 09:33:44AM -0800, Andres Freund wrote:\n> And yes, I'd be for a patch to remove all of those assertions.\n\ndone\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Feb 2023 09:57:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-08 09:57:56 -0800, Nathan Bossart wrote:\n> On Wed, Feb 08, 2023 at 09:33:44AM -0800, Andres Freund wrote:\n> > And yes, I'd be for a patch to remove all of those assertions.\n> \n> done\n\nIf you'd reorder it so that 0004 applies independently from the other changes,\nI'd just push that now.\n\n\nI was remembering additional AssertVariableIsOfType(), but it looks like we\nactually did remember to take them out when redesigning the output plugin\ninterface...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Feb 2023 10:24:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 10:24:18AM -0800, Andres Freund wrote:\n> If you'd reorder it so that 0004 applies independently from the other changes,\n> I'd just push that now.\n\ndone\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 8 Feb 2023 10:55:44 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On 2023-02-08 10:55:44 -0800, Nathan Bossart wrote:\n> done\n\nPushed. Thanks!\n\n\n",
"msg_date": "Wed, 8 Feb 2023 21:16:19 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 09:16:19PM -0800, Andres Freund wrote:\n> Pushed. Thanks!\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 9 Feb 2023 09:24:58 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 9 Feb 2023 11:39:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-09 11:39:17 -0800, Nathan Bossart wrote:\n> rebased for cfbot\n\nI think this is nearly ready. Michael, are you planning to commit this?\n\nPersonally I'd probably squash these into a single commit.\n\n\n> diff --git a/doc/src/sgml/archive-modules.sgml b/doc/src/sgml/archive-modules.sgml\n> index ef02051f7f..2db1b19216 100644\n> --- a/doc/src/sgml/archive-modules.sgml\n> +++ b/doc/src/sgml/archive-modules.sgml\n> @@ -47,23 +47,30 @@\n> normal library search path is used to locate the library. To provide the\n> required archive module callbacks and to indicate that the library is\n> actually an archive module, it needs to provide a function named\n> - <function>_PG_archive_module_init</function>. This function is passed a\n> - struct that needs to be filled with the callback function pointers for\n> - individual actions.\n> + <function>_PG_archive_module_init</function>. This function must return an\n> + <structname>ArchiveModuleCallbacks</structname> struct filled with the\n> + callback function pointers for individual actions.\n\nI'd probably mention that this should typically be of server lifetime / a\n'static const' struct. Tableam documents this as follows:\n\n The result of the function must be a pointer to a struct of type\n <structname>TableAmRoutine</structname>, which contains everything that the\n core code needs to know to make use of the table access method. The return\n value needs to be of server lifetime, which is typically achieved by\n defining it as a <literal>static const</literal> variable in global\n scope\n\n\n> +\n> + <note>\n> + <para>\n> + <varname>archive_library</varname> is only loaded in the archiver process.\n> + </para>\n> + </note>\n> </sect1>\n\nThat's not really related to any of the changes here, right?\n\nI'm not sure it's a good idea to document that. We e.g. probably should allow\nthe library to check that the configuration is correct, at postmaster start,\nrather than later, at runtime.\n\n\n> <sect1 id=\"archive-module-callbacks\">\n> @@ -73,6 +80,20 @@ typedef void (*ArchiveModuleInit) (struct ArchiveModuleCallbacks *cb);\n> The server will call them as required to process each individual WAL file.\n> </para>\n> \n> + <sect2 id=\"archive-module-startup\">\n> + <title>Startup Callback</title>\n> + <para>\n> + The <function>startup_cb</function> callback is called shortly after the\n> + module is loaded. This callback can be used to perform any additional\n> + initialization required. If the archive module needs to have a state, it\n> + can use <structfield>state->private_data</structfield> to store it.\n\ns/needs to have a state/has state/?\n\n\n> @@ -83,7 +104,7 @@ typedef void (*ArchiveModuleInit) (struct ArchiveModuleCallbacks *cb);\n> assumes the module is configured.\n> \n> <programlisting>\n> -typedef bool (*ArchiveCheckConfiguredCB) (void);\n> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n> </programlisting>\n> \n> If <literal>true</literal> is returned, the server will proceed with\n\nHm. I wonder if ArchiveCheckConfiguredCB() should actually work without the\nstate. We're not really doing anything yet, at that point, so it shouldn't\nreally need state?\n\nThe reason I'm wondering is that I think we should consider calling this from\nthe GUC assignment hook, at least in postmaster. Which would make it more\nconvenient to not have state, I think?\n\n\n\n> @@ -128,7 +149,7 @@ typedef bool (*ArchiveFileCB) (const char *file, const char *path);\n> these situations.\n> \n> <programlisting>\n> -typedef void (*ArchiveShutdownCB) (void);\n> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n> </programlisting>\n> </para>\n> </sect2>\n\nPerhaps mention that this needs to free state it allocated in the\nArchiveModuleState, or it'll be leaked?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 9 Feb 2023 12:18:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 12:18:55PM -0800, Andres Freund wrote:\n> On 2023-02-09 11:39:17 -0800, Nathan Bossart wrote:\n> Personally I'd probably squash these into a single commit.\n\ndone\n\n> I'd probably mention that this should typically be of server lifetime / a\n> 'static const' struct. Tableam documents this as follows:\n\ndone\n\n>> + <note>\n>> + <para>\n>> + <varname>archive_library</varname> is only loaded in the archiver process.\n>> + </para>\n>> + </note>\n>> </sect1>\n> \n> That's not really related to any of the changes here, right?\n> \n> I'm not sure it's a good idea to document that. We e.g. probably should allow\n> the library to check that the configuration is correct, at postmaster start,\n> rather than later, at runtime.\n\nremoved\n\n>> + <sect2 id=\"archive-module-startup\">\n>> + <title>Startup Callback</title>\n>> + <para>\n>> + The <function>startup_cb</function> callback is called shortly after the\n>> + module is loaded. This callback can be used to perform any additional\n>> + initialization required. If the archive module needs to have a state, it\n>> + can use <structfield>state->private_data</structfield> to store it.\n> \n> s/needs to have a state/has state/?\n\ndone\n\n>> <programlisting>\n>> -typedef bool (*ArchiveCheckConfiguredCB) (void);\n>> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n>> </programlisting>\n>> \n>> If <literal>true</literal> is returned, the server will proceed with\n> \n> Hm. I wonder if ArchiveCheckConfiguredCB() should actually work without the\n> state. We're not really doing anything yet, at that point, so it shouldn't\n> really need state?\n> \n> The reason I'm wondering is that I think we should consider calling this from\n> the GUC assignment hook, at least in postmaster. Which would make it more\n> convenient to not have state, I think?\n\nI agree that this callback should typically not need the state, but I'm not\nsure whether it fits into the assignment hook for archive_library. This\ncallback is primarily meant for situations when you have archiving enabled,\nbut your module isn't configured yet (e.g., archive_command is empty). In\nthis case, we keep the WAL around, but we don't try to archive it until\nthis hook returns true. It's up to each module to define that criteria. I\ncan imagine someone introducing a GUC in their archive module that\ntemporarily halts archiving via this callback. In that case, calling it\nvia a GUC assignment hook probably won't work. In fact, checking whether\narchive_command is empty in that hook might not work either.\n\n>> <programlisting>\n>> -typedef void (*ArchiveShutdownCB) (void);\n>> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n>> </programlisting>\n>> </para>\n>> </sect2>\n> \n> Perhaps mention that this needs to free state it allocated in the\n> ArchiveModuleState, or it'll be leaked?\n\ndone\n\nI left this out originally because the archiver exits shortly after calling\nthis. However, if you have DSM segments or something, it's probably wise\nto make sure those are cleaned up. And I suppose we might not always exit\nimmediately after this callback, so establishing the habit of freeing the\nstate could be a good idea. In addition to updating this part of the docs,\nI wrote a shutdown callback for basic_archive that frees its state.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 9 Feb 2023 14:48:26 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 12:18:55PM -0800, Andres Freund wrote:\n> I think this is nearly ready. Michael, are you planning to commit this?\n\nI could take a stab at it, now if you feel strongly about doing it\nyourself, of course feel free :)\n\n> Personally I'd probably squash these into a single commit.\n\nSame impression here. Agreed that all these had better be merged\ntogether, still keeping them separated made their review so much\neasier.\n--\nMichael",
"msg_date": "Fri, 10 Feb 2023 12:07:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 09, 2023 at 02:48:26PM -0800, Nathan Bossart wrote:\n> On Thu, Feb 09, 2023 at 12:18:55PM -0800, Andres Freund wrote:\n>>> <programlisting>\n>>> -typedef bool (*ArchiveCheckConfiguredCB) (void);\n>>> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n>>> </programlisting>\n>>> \n>>> If <literal>true</literal> is returned, the server will proceed with\n>> \n>> Hm. I wonder if ArchiveCheckConfiguredCB() should actually work without the\n>> state. We're not really doing anything yet, at that point, so it shouldn't\n>> really need state?\n>> \n>> The reason I'm wondering is that I think we should consider calling this from\n>> the GUC assignment hook, at least in postmaster. Which would make it more\n>> convenient to not have state, I think?\n> \n> I agree that this callback should typically not need the state, but I'm not\n> sure whether it fits into the assignment hook for archive_library. This\n> callback is primarily meant for situations when you have archiving enabled,\n> but your module isn't configured yet (e.g., archive_command is empty). In\n> this case, we keep the WAL around, but we don't try to archive it until\n> this hook returns true. It's up to each module to define that criteria. I\n> can imagine someone introducing a GUC in their archive module that\n> temporarily halts archiving via this callback. In that case, calling it\n> via a GUC assignment hook probably won't work. In fact, checking whether\n> archive_command is empty in that hook might not work either.\n\nKeeping the state in the configure check callback does not strike me\nas a problem, FWIW.\n\n>>> <programlisting>\n>>> -typedef void (*ArchiveShutdownCB) (void);\n>>> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n>>> </programlisting>\n>>> </para>\n>>> </sect2>\n>> \n>> Perhaps mention that this needs to free state it allocated in the\n>> ArchiveModuleState, or it'll be leaked?\n> \n> done\n> \n> I left this out originally because the archiver exits shortly after calling\n> this. However, if you have DSM segments or something, it's probably wise\n> to make sure those are cleaned up. And I suppose we might not always exit\n> immediately after this callback, so establishing the habit of freeing the\n> state could be a good idea. In addition to updating this part of the docs,\n> I wrote a shutdown callback for basic_archive that frees its state.\n\nThis makes sense to me. Still, DSM segments had better be cleaned up\nwith dsm_backend_shutdown().\n\n+ basic_archive_context = data->context;\n+ if (CurrentMemoryContext == basic_archive_context)\n+ MemoryContextSwitchTo(TopMemoryContext);\n+\n+ if (MemoryContextIsValid(basic_archive_context))\n+ MemoryContextDelete(basic_archive_context);\n\nThis is a bit confusing, because it means that we enter in the\nshutdown callback with one context, but exit it under\nTopMemoryContext. Are you sure that this will be OK when there could\nbe multiple callbacks piled up with before_shmem_exit()? shmem_exit()\nhas nothing specific to memory contexts.\n\nIs putting the new headers in src/include/postmaster/ the best move in\nthe long term? Perhaps that's fine, but I was wondering whether a new\nlocation like archive/ would make more sense. pg_arch.h being in the\npostmaster section is fine.\n--\nMichael",
"msg_date": "Mon, 13 Feb 2023 16:37:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 04:37:10PM +0900, Michael Paquier wrote:\n> + basic_archive_context = data->context;\n> + if (CurrentMemoryContext == basic_archive_context)\n> + MemoryContextSwitchTo(TopMemoryContext);\n> +\n> + if (MemoryContextIsValid(basic_archive_context))\n> + MemoryContextDelete(basic_archive_context);\n> \n> This is a bit confusing, because it means that we enter in the\n> shutdown callback with one context, but exit it under\n> TopMemoryContext. Are you sure that this will be OK when there could\n> be multiple callbacks piled up with before_shmem_exit()? shmem_exit()\n> has nothing specific to memory contexts.\n\nWell, we can't free the memory context while we are in it, so we have to\nswitch to another one. I agree that this is a bit confusing, though.\n\nOn second thought, I'm not sure it's important to make sure the state is\nfreed in the shutdown callback. It's only called just before the archiver\nprocess exits, so we're not really at risk of leaking anything. I suppose\nwe might not always restart the archiver in this case, but I also don't\nanticipate that behavior changing in the near future. I think this\ncallback is more useful for things like shutting down background workers.\n\nI went ahead and removed the shutdown callback from basic_archive and the\nnote about leaking from the documentation.\n\n> Is putting the new headers in src/include/postmaster/ the best move in\n> the long term? Perhaps that's fine, but I was wondering whether a new\n> location like archive/ would make more sense. pg_arch.h being in the\n> postmaster section is fine.\n\nI moved them to archive/.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 14:56:47 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-13 14:56:47 -0800, Nathan Bossart wrote:\n> On Mon, Feb 13, 2023 at 04:37:10PM +0900, Michael Paquier wrote:\n> > + basic_archive_context = data->context;\n> > + if (CurrentMemoryContext == basic_archive_context)\n> > + MemoryContextSwitchTo(TopMemoryContext);\n> > +\n> > + if (MemoryContextIsValid(basic_archive_context))\n> > + MemoryContextDelete(basic_archive_context);\n> > \n> > This is a bit confusing, because it means that we enter in the\n> > shutdown callback with one context, but exit it under\n> > TopMemoryContext. Are you sure that this will be OK when there could\n> > be multiple callbacks piled up with before_shmem_exit()? shmem_exit()\n> > has nothing specific to memory contexts.\n> \n> Well, we can't free the memory context while we are in it, so we have to\n> switch to another one. I agree that this is a bit confusing, though.\n\nWhy would we be in that memory context? I'd just add an assert documenting\nwe're not.\n\n\n> On second thought, I'm not sure it's important to make sure the state is\n> freed in the shutdown callback. It's only called just before the archiver\n> process exits, so we're not really at risk of leaking anything. I suppose\n> we might not always restart the archiver in this case, but I also don't\n> anticipate that behavior changing in the near future. I think this\n> callback is more useful for things like shutting down background workers.\n\nI think it's crucial. Otherwise we're just ossifying the design that there's\njust one archive module active at a time.\n\n\n> I went ahead and removed the shutdown callback from basic_archive and the\n> note about leaking from the documentation.\n\n-1\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 13 Feb 2023 15:37:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 03:37:33PM -0800, Andres Freund wrote:\n> On 2023-02-13 14:56:47 -0800, Nathan Bossart wrote:\n>> On Mon, Feb 13, 2023 at 04:37:10PM +0900, Michael Paquier wrote:\n>> > + basic_archive_context = data->context;\n>> > + if (CurrentMemoryContext == basic_archive_context)\n>> > + MemoryContextSwitchTo(TopMemoryContext);\n>> > +\n>> > + if (MemoryContextIsValid(basic_archive_context))\n>> > + MemoryContextDelete(basic_archive_context);\n>> > \n>> > This is a bit confusing, because it means that we enter in the\n>> > shutdown callback with one context, but exit it under\n>> > TopMemoryContext. Are you sure that this will be OK when there could\n>> > be multiple callbacks piled up with before_shmem_exit()? shmem_exit()\n>> > has nothing specific to memory contexts.\n>> \n>> Well, we can't free the memory context while we are in it, so we have to\n>> switch to another one. I agree that this is a bit confusing, though.\n> \n> Why would we be in that memory context? I'd just add an assert documenting\n> we're not.\n> \n> \n>> On second thought, I'm not sure it's important to make sure the state is\n>> freed in the shutdown callback. It's only called just before the archiver\n>> process exits, so we're not really at risk of leaking anything. I suppose\n>> we might not always restart the archiver in this case, but I also don't\n>> anticipate that behavior changing in the near future. I think this\n>> callback is more useful for things like shutting down background workers.\n> \n> I think it's crucial. Otherwise we're just ossifying the design that there's\n> just one archive module active at a time.\n> \n> \n>> I went ahead and removed the shutdown callback from basic_archive and the\n>> note about leaking from the documentation.\n> \n> -1\n\nOkay. I've added it back in v12 with the suggested adjustment for the\nmemory context stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 16:55:58 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 04:55:58PM -0800, Nathan Bossart wrote:\n> Okay. I've added it back in v12 with the suggested adjustment for the\n> memory context stuff.\n\nSorry for then noise, cfbot alerted me to a missing #include, which I've\nadded in v13.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 13 Feb 2023 17:02:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Mon, Feb 13, 2023 at 05:02:37PM -0800, Nathan Bossart wrote:\n> Sorry for then noise, cfbot alerted me to a missing #include, which I've\n> added in v13.\n\n+ basic_archive_context = data->context;\n+ Assert(CurrentMemoryContext != basic_archive_context);\n\nSo this is what it means to document that we are not in the memory\ncontext we are freeing here. That seems good enough to me in this\ncontext. Tracking if one of CurrentMemoryContext's parents is the\nmemory context that would be deleted would be another thing, but this\ndoes not apply here.\n\nI may tweak a bit the comments, but nothing more. And I don't think I\nhave more to add. Andres, do you have anything you would like to\nmention?\n--\nMichael",
"msg_date": "Wed, 15 Feb 2023 15:38:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Wed, Feb 15, 2023 at 03:38:21PM +0900, Michael Paquier wrote:\n> On Mon, Feb 13, 2023 at 05:02:37PM -0800, Nathan Bossart wrote:\n>> Sorry for then noise, cfbot alerted me to a missing #include, which I've\n>> added in v13.\n\nSorry for more noise. I noticed that I missed updating the IDENTIFICATION\nline for shell_archive.c. That's the only change in v14.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 Feb 2023 10:44:07 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-15 10:44:07 -0800, Nathan Bossart wrote:\n> On Wed, Feb 15, 2023 at 03:38:21PM +0900, Michael Paquier wrote:\n> > I may tweak a bit the comments, but nothing more. And I don't think I\n> > have more to add. Andres, do you have anything you would like to\n> > mention?\n\nJust some minor comments below. None of them needs to be addressed.\n\n\n> @@ -144,10 +170,12 @@ basic_archive_configured(void)\n> * Archives one file.\n> */\n> static bool\n> -basic_archive_file(const char *file, const char *path)\n> +basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)\n> {\n> \tsigjmp_buf\tlocal_sigjmp_buf;\n\nNot related the things changed here, but this should never have been pushed\ndown into individual archive modules. There's absolutely no way that we're\ngoing to keep this up2date and working correctly in random archive\nmodules. And it would break if archive modules are ever called outside of\npgarch.c.\n\n\n> +static void\n> +basic_archive_shutdown(ArchiveModuleState *state)\n> +{\n> +\tBasicArchiveData *data = (BasicArchiveData *) (state->private_data);\n\nThe parens around (state->private_data) are imo odd.\n\n> +\tbasic_archive_context = data->context;\n> +\tAssert(CurrentMemoryContext != basic_archive_context);\n> +\n> +\tif (MemoryContextIsValid(basic_archive_context))\n> +\t\tMemoryContextDelete(basic_archive_context);\n\nI guess I'd personally be paranoid and clean data->context after\nthis. Obviously doesn't matter right now, but at some later date it could be\nthat we'd error out after this point, and re-entered the shutdown callback.\n\n\n> +\n> +/*\n> + * Archive module callbacks\n> + *\n> + * These callback functions should be defined by archive libraries and returned\n> + * via _PG_archive_module_init(). 
ArchiveFileCB is the only required callback.\n> + * For more information about the purpose of each callback, refer to the\n> + * archive modules documentation.\n> + */\n> +typedef void (*ArchiveStartupCB) (ArchiveModuleState *state);\n> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n> +typedef bool (*ArchiveFileCB) (ArchiveModuleState *state, const char *file, const char *path);\n> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n> +\n> +typedef struct ArchiveModuleCallbacks\n> +{\n> +\tArchiveStartupCB startup_cb;\n> +\tArchiveCheckConfiguredCB check_configured_cb;\n> +\tArchiveFileCB archive_file_cb;\n> +\tArchiveShutdownCB shutdown_cb;\n> +} ArchiveModuleCallbacks;\n\nIf you wanted you could just define the callback types in the struct now, as\nwe don't need asserts for the types.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 11:29:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Thanks for reviewing.\n\nOn Thu, Feb 16, 2023 at 11:29:56AM -0800, Andres Freund wrote:\n> On 2023-02-15 10:44:07 -0800, Nathan Bossart wrote:\n>> @@ -144,10 +170,12 @@ basic_archive_configured(void)\n>> * Archives one file.\n>> */\n>> static bool\n>> -basic_archive_file(const char *file, const char *path)\n>> +basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)\n>> {\n>> \tsigjmp_buf\tlocal_sigjmp_buf;\n> \n> Not related the things changed here, but this should never have been pushed\n> down into individual archive modules. There's absolutely no way that we're\n> going to keep this up2date and working correctly in random archive\n> modules. And it would break if archive modules are ever called outside of\n> pgarch.c.\n\nYeah. IIRC I did briefly try to avoid this, but the difficulty was that\neach module will have its own custom cleanup logic. There's no requirement\nthat a module creates an exception handler, but I imagine that any\nsufficiently complex one will. In any case, I agree that it's worth trying\nto pull this out of the individual modules.\n\n>> +static void\n>> +basic_archive_shutdown(ArchiveModuleState *state)\n>> +{\n>> +\tBasicArchiveData *data = (BasicArchiveData *) (state->private_data);\n> \n> The parens around (state->private_data) are imo odd.\n\nOops, removed.\n\n>> +\tbasic_archive_context = data->context;\n>> +\tAssert(CurrentMemoryContext != basic_archive_context);\n>> +\n>> +\tif (MemoryContextIsValid(basic_archive_context))\n>> +\t\tMemoryContextDelete(basic_archive_context);\n> \n> I guess I'd personally be paranoid and clean data->context after\n> this. Obviously doesn't matter right now, but at some later date it could be\n> that we'd error out after this point, and re-entered the shutdown callback.\n\nDone.\n\n>> +/*\n>> + * Archive module callbacks\n>> + *\n>> + * These callback functions should be defined by archive libraries and returned\n>> + * via _PG_archive_module_init(). 
ArchiveFileCB is the only required callback.\n>> + * For more information about the purpose of each callback, refer to the\n>> + * archive modules documentation.\n>> + */\n>> +typedef void (*ArchiveStartupCB) (ArchiveModuleState *state);\n>> +typedef bool (*ArchiveCheckConfiguredCB) (ArchiveModuleState *state);\n>> +typedef bool (*ArchiveFileCB) (ArchiveModuleState *state, const char *file, const char *path);\n>> +typedef void (*ArchiveShutdownCB) (ArchiveModuleState *state);\n>> +\n>> +typedef struct ArchiveModuleCallbacks\n>> +{\n>> +\tArchiveStartupCB startup_cb;\n>> +\tArchiveCheckConfiguredCB check_configured_cb;\n>> +\tArchiveFileCB archive_file_cb;\n>> +\tArchiveShutdownCB shutdown_cb;\n>> +} ArchiveModuleCallbacks;\n> \n> If you wanted you could just define the callback types in the struct now, as\n> we don't need asserts for the types.\n\nThis crossed my mind. I thought it was nice to have a declaration for each\ncallback that we can copy into the docs, but I'm sure we could do without\nit, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 16 Feb 2023 12:15:12 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 12:15:12 -0800, Nathan Bossart wrote:\n> Thanks for reviewing.\n> \n> On Thu, Feb 16, 2023 at 11:29:56AM -0800, Andres Freund wrote:\n> > On 2023-02-15 10:44:07 -0800, Nathan Bossart wrote:\n> >> @@ -144,10 +170,12 @@ basic_archive_configured(void)\n> >> * Archives one file.\n> >> */\n> >> static bool\n> >> -basic_archive_file(const char *file, const char *path)\n> >> +basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)\n> >> {\n> >> \tsigjmp_buf\tlocal_sigjmp_buf;\n> > \n> > Not related the things changed here, but this should never have been pushed\n> > down into individual archive modules. There's absolutely no way that we're\n> > going to keep this up2date and working correctly in random archive\n> > modules. And it would break if archive modules are ever called outside of\n> > pgarch.c.\n> \n> Yeah. IIRC I did briefly try to avoid this, but the difficulty was that\n> each module will have its own custom cleanup logic.\n\nIt can use PG_TRY/CATCH for that, if the top-level sigsetjmp is in pgarch.c.\nOr you could add a cleanup callback to the API, to be called after the\ntop-level cleanup in pgarch.c.\n\n\nI'm quite baffled by:\n\t\t/* Close any files left open by copy_file() or compare_files() */\n\t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n\nin basic_archive_file(). It seems *really* off to call AtEOSubXact_Files()\ncompletely outside the context of a transaction environment. And it only does\nthe thing you want because you pass parameters that aren't actually valid in\nthe normal use in AtEOSubXact_Files(). I really don't understand how that's\nsupposed to be ok.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:17:54 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 01:17:54PM -0800, Andres Freund wrote:\n> On 2023-02-16 12:15:12 -0800, Nathan Bossart wrote:\n>> On Thu, Feb 16, 2023 at 11:29:56AM -0800, Andres Freund wrote:\n>> > On 2023-02-15 10:44:07 -0800, Nathan Bossart wrote:\n>> >> @@ -144,10 +170,12 @@ basic_archive_configured(void)\n>> >> * Archives one file.\n>> >> */\n>> >> static bool\n>> >> -basic_archive_file(const char *file, const char *path)\n>> >> +basic_archive_file(ArchiveModuleState *state, const char *file, const char *path)\n>> >> {\n>> >> \tsigjmp_buf\tlocal_sigjmp_buf;\n>> > \n>> > Not related the things changed here, but this should never have been pushed\n>> > down into individual archive modules. There's absolutely no way that we're\n>> > going to keep this up2date and working correctly in random archive\n>> > modules. And it would break if archive modules are ever called outside of\n>> > pgarch.c.\n>> \n>> Yeah. IIRC I did briefly try to avoid this, but the difficulty was that\n>> each module will have its own custom cleanup logic.\n> \n> It can use PG_TRY/CATCH for that, if the top-level sigsetjmp is in pgarch.c.\n> Or you could add a cleanup callback to the API, to be called after the\n> top-level cleanup in pgarch.c.\n\nYeah, that seems workable.\n\n> I'm quite baffled by:\n> \t\t/* Close any files left open by copy_file() or compare_files() */\n> \t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n> \n> in basic_archive_file(). It seems *really* off to call AtEOSubXact_Files()\n> completely outside the context of a transaction environment. And it only does\n> the thing you want because you pass parameters that aren't actually valid in\n> the normal use in AtEOSubXact_Files(). I really don't understand how that's\n> supposed to be ok.\n\nHm. Should copy_file() and compare_files() have PG_FINALLY blocks that\nattempt to close the files instead? 
What would you recommend?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 16 Feb 2023 13:58:10 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Thu, Feb 16, 2023 at 01:17:54PM -0800, Andres Freund wrote:\n> On 2023-02-16 12:15:12 -0800, Nathan Bossart wrote:\n>> On Thu, Feb 16, 2023 at 11:29:56AM -0800, Andres Freund wrote:\n>>> Not related the things changed here, but this should never have been pushed\n>>> down into individual archive modules. There's absolutely no way that we're\n>>> going to keep this up2date and working correctly in random archive\n>>> modules. And it would break if archive modules are ever called outside of\n>>> pgarch.c.\n\nHmm, yes. That's a bad idea to copy the error handling stack. The\nmaintenance of this code could quickly go wrong. All that had better\nbe put into their own threads, IMO, to bring more visibility on these\nsubjects.\n \n> I'm quite baffled by:\n> \t\t/* Close any files left open by copy_file() or compare_files() */\n> \t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n> \n> in basic_archive_file(). It seems *really* off to call AtEOSubXact_Files()\n> completely outside the context of a transaction environment. And it only does\n> the thing you want because you pass parameters that aren't actually valid in\n> the normal use in AtEOSubXact_Files(). I really don't understand how that's\n> supposed to be ok.\n\nAs does this part, probably with a backpatch..\n\nSaying that, I have spent more time on the revamped version of the\narchive modules and it was already doing a lot, so I have applied\nit as it covered all the points discussed..\n--\nMichael",
"msg_date": "Fri, 17 Feb 2023 17:01:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-16 13:58:10 -0800, Nathan Bossart wrote:\n> On Thu, Feb 16, 2023 at 01:17:54PM -0800, Andres Freund wrote:\n> > I'm quite baffled by:\n> > \t\t/* Close any files left open by copy_file() or compare_files() */\n> > \t\tAtEOSubXact_Files(false, InvalidSubTransactionId, InvalidSubTransactionId);\n> > \n> > in basic_archive_file(). It seems *really* off to call AtEOSubXact_Files()\n> > completely outside the context of a transaction environment. And it only does\n> > the thing you want because you pass parameters that aren't actually valid in\n> > the normal use in AtEOSubXact_Files(). I really don't understand how that's\n> > supposed to be ok.\n> \n> Hm. Should copy_file() and compare_files() have PG_FINALLY blocks that\n> attempt to close the files instead? What would you recommend?\n\nI don't fully now, it's not entirely clear to me what the goals here were. I\nthink you'd likely need to do a bit of infrastructure work to do this\nsanely. So far we just didn't have the need to handle files being released in\na way like you want to do there.\n\nI suspect a good direction would be to use resource owners. Add a separate set\nof functions that release files on resource owner release. Most of the\ninfrastructure is there already, for temporary files\n(c.f. OpenTemporaryFile()).\n\nThen that resource owner could be reset in case of error.\n\n\nI'm not even sure that erroring out is a reasonable way to implement\ncopy_file(), compare_files(), particularly because you want to return via a\nreturn code from basic_archive_files().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 17 Feb 2023 11:41:32 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 05:01:47PM +0900, Michael Paquier wrote:\n> All that had better\n> be put into their own threads, IMO, to bring more visibility on these\n> subjects.\n\nI created a new thread for these [0].\n\n> Saying that, I have spent more time on the revamped version of the\n> archive modules and it was already doing a lot, so I have applied\n> it as it covered all the points discussed..\n\nThanks!\n\n[0] https://postgr.es/m/20230217215624.GA3131134%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 17 Feb 2023 14:32:43 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Here is a new revision of the restore modules patch set. In this patch\nset, the interface looks similar to the recent archive modules redesign,\nand there are separate callbacks for retrieving different types of files.\nI've attempted to address all the feedback I've received, but there was a\nlot scattered across different threads, so it's possible I've missed\nsomething. Note that 0001 is the stopgap fix for restore_command that's\nbeing tracked elsewhere [0]. I was careful to avoid repeating the recent\nmistake with the SIGTERM handling.\n\nThis patch set is still a little rough around the edges, but I wanted to\npost it in case folks had general thoughts about the structure, interface,\netc. This implementation restores files synchronously one-by-one just like\narchive modules, but in the future, I would like to add\nasynchronous/parallel/batching support. My intent is for this work to move\nus closer to that.\n\n[0] https://postgr.es/m/20230214174755.GA1348509%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 17 Feb 2023 14:53:42 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Here is a rebased version of the restore modules patch set. I swapped the\npatch for the stopgap fix for restore_command with the latest version [0],\nand I marked the restore/ headers as installable (as was recently done for\narchive/ [1]). There are no other changes.\n\n[0] https://postgr.es/m/20230301224751.GA1823946%40nathanxps13\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=6ad5793\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 10 Mar 2023 14:02:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "I noticed that the new TAP test for basic_archive was failing\nintermittently for cfbot. It looks like the query for checking that the\npost-backup WAL is restored sometimes executes before archive recovery is\ncomplete (because hot_standby is on). To fix this, I adjusted the test to\nuse poll_query_until instead. There are no other changes in v14.\n\nI first tried to set hot_standby to off on the restored node so that the\nquery wouldn't run until archive recovery completed. This seemed like it\nwould work because start() useѕ \"pg_ctl --wait\", which has the following\nnote in the docs:\n\n\tStartup is considered complete when the PID file indicates that the\n\tserver is ready to accept connections.\n\nHowever, that's not what happens when hot_standby is off. In that case,\nthe postmaster.pid file is updated with PM_STATUS_STANDBY once recovery\nstarts, which wait_for_postmaster_start() interprets as \"ready.\" I see\nthis was reported before [0], but that discussion fizzled out. IIUC it was\ndone this way to avoid infinite waits when hot_standby is off and standby\nmode is enabled. I could be missing something obvious, but that doesn't\nseem necessary when hot_standby is off and recovery mode is enabled because\nrecovery should end at some point (never mind the halting problem). I'm\nstill digging into this and may spin off a new thread if I can conjure up a\nproposal.\n\n[0] https://postgr.es/m/CAMkU%3D1wrMqPggnEfszE-c3PPLmKgRK17_qr7tmxBECYEbyV-4Q%40mail.gmail.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 14 Mar 2023 21:13:09 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Tue, Mar 14, 2023 at 09:13:09PM -0700, Nathan Bossart wrote:\n> However, that's not what happens when hot_standby is off. In that case,\n> the postmaster.pid file is updated with PM_STATUS_STANDBY once recovery\n> starts, which wait_for_postmaster_start() interprets as \"ready.\" I see\n> this was reported before [0], but that discussion fizzled out. IIUC it was\n> done this way to avoid infinite waits when hot_standby is off and standby\n> mode is enabled. I could be missing something obvious, but that doesn't\n> seem necessary when hot_standby is off and recovery mode is enabled because\n> recovery should end at some point (never mind the halting problem). I'm\n> still digging into this and may spin off a new thread if I can conjure up a\n> proposal.\n> \n> [0] https://postgr.es/m/CAMkU%3D1wrMqPggnEfszE-c3PPLmKgRK17_qr7tmxBECYEbyV-4Q%40mail.gmail.com\n\nThese days, knowing hot_standby is on by default, and that users would\nrecover up to the end-of-backup record of just use read replicas, do\nwe have a strong case for keeping this GUC parameter at all? It does\nnot strike me that we really need to change a five-year-old behavior\nif there has been few complaints about it. I agree that it is\nconfusing as it stands, but the long-term simplifications may be worth\nit in the recovery code (aka less booleans needed to track the flow of\nthe startup process, and less confusion around that).\n--\nMichael",
"msg_date": "Thu, 16 Mar 2023 10:10:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "On Fri, Feb 17, 2023 at 11:41:32AM -0800, Andres Freund wrote:\n> I don't fully now, it's not entirely clear to me what the goals here were. I\n> think you'd likely need to do a bit of infrastructure work to do this\n> sanely. So far we just didn't have the need to handle files being released in\n> a way like you want to do there.\n> \n> I suspect a good direction would be to use resource owners. Add a separate set\n> of functions that release files on resource owner release. Most of the\n> infrastructure is there already, for temporary files\n> (c.f. OpenTemporaryFile()).\n\nYes, perhaps. I've had good experience with these when it comes to\navoid leakages when releasing resources, particularly for resources\nallocated by external libraries (cough, OpenSSL, cough). And there\nwas some work to make these more scalable, for example.\n\nAt this stage of the CF, it seems pretty clear to me that this should\nbe pushed to v17, so moved to next CF.\n--\nMichael",
"msg_date": "Thu, 6 Apr 2023 09:34:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 17 Apr 2023 09:19:14 -0700",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 23 Oct 2023 15:08:01 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 11 Jan 2024 10:56:33 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Here is another rebase. Given the size of this patch set and the lack of\nreview, I am going to punt this one to v18.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 6 Mar 2024 10:46:46 -0600",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 15 Mar 2024 09:16:07 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "another rebase for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 26 Mar 2024 21:40:50 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 10 Apr 2024 16:51:49 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 15 May 2024 16:10:39 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "rebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 20 May 2024 21:52:34 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
},
{
"msg_contents": "Given the lack of interest, I plan to mark the commitfest entry for this\npatch set as \"Withdrawn\" shortly.\n\n-- \nnathan\n\n\n",
"msg_date": "Fri, 19 Jul 2024 14:37:08 -0500",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: recovery modules"
}
] |
[
{
"msg_contents": "This patch is aimed at being smarter about cases where we have\nredundant GROUP BY entries, for example\n\nSELECT ... WHERE a.x = b.y GROUP BY a.x, b.y;\n\nIt's clearly not necessary to perform grouping using both columns.\nGrouping by either one alone would produce the same results,\nassuming compatible equality semantics. I'm not sure how often\nsuch cases arise in the wild; but we have about ten of them in our\nregression tests, which makes me think it's worth the trouble to\nde-duplicate as long as it doesn't cost too much. And it doesn't,\nbecause PathKey construction already detects exactly this sort of\nredundancy. We need only do something with the knowledge.\n\nWe can't simply make the planner replace parse->groupClause with\na shortened list of non-redundant columns, because it's possible\nthat we prove all the columns redundant, as in\n\nSELECT ... WHERE a.x = 1 GROUP BY a.x;\n\nIf we make parse->groupClause empty then some subsequent tests\nwill think no grouping was requested, leading to incorrect results.\nSo what I've done in the attached is to invent a new PlannerInfo\nfield processed_groupClause to hold the shortened list, and then run\naround and use that instead of parse->groupClause where appropriate.\n\n(Another way could be to invent a bool hasGrouping to remember whether\ngroupClause was initially nonempty, analogously to hasHavingQual.\nI rejected that idea after finding that there were still a few\nplaces where it's advantageous to use the original full list.)\n\nBeyond that, there's not too much to this patch. I had to fix\nnodeAgg.c to not crash when grouping on zero columns, and I spent\nsome effort on refactoring the grouping-clause preprocessing\nlogic in planner.c because it seemed to me to have gotten rather\nunintelligible. I didn't add any new test cases, because the changes\nin existing results seem to sufficiently prove that it works.\n\nI'll stick this in the January CF.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 27 Dec 2022 17:18:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Removing redundant grouping columns"
},
{
"msg_contents": "I wrote:\n> This patch is aimed at being smarter about cases where we have\n> redundant GROUP BY entries, for example\n> SELECT ... WHERE a.x = b.y GROUP BY a.x, b.y;\n\nThe cfbot didn't like this, because of a variable that wasn't\nused in non-assert builds. Fixed in v2.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 27 Dec 2022 18:24:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 6:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> This patch is aimed at being smarter about cases where we have\n> redundant GROUP BY entries, for example\n>\n> SELECT ... WHERE a.x = b.y GROUP BY a.x, b.y;\n>\n> It's clearly not necessary to perform grouping using both columns.\n> Grouping by either one alone would produce the same results,\n> assuming compatible equality semantics. I'm not sure how often\n> such cases arise in the wild; but we have about ten of them in our\n> regression tests, which makes me think it's worth the trouble to\n> de-duplicate as long as it doesn't cost too much. And it doesn't,\n> because PathKey construction already detects exactly this sort of\n> redundancy. We need only do something with the knowledge.\n\n\nWhile we are here, I wonder if we can do the same trick for\ndistinctClause, to cope with cases like\n\n select distinct a.x, b.y from a, b where a.x = b.y;\n\nAnd there is case from regression test 'select_distinct.sql' that can\nbenefit from this optimization.\n\n --\n -- Check mentioning same column more than once\n --\n\n EXPLAIN (VERBOSE, COSTS OFF)\n SELECT count(*) FROM\n (SELECT DISTINCT two, four, two FROM tenk1) ss;\n\nThanks\nRichard\n\nOn Wed, Dec 28, 2022 at 6:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:This patch is aimed at being smarter about cases where we have\nredundant GROUP BY entries, for example\n\nSELECT ... WHERE a.x = b.y GROUP BY a.x, b.y;\n\nIt's clearly not necessary to perform grouping using both columns.\nGrouping by either one alone would produce the same results,\nassuming compatible equality semantics. I'm not sure how often\nsuch cases arise in the wild; but we have about ten of them in our\nregression tests, which makes me think it's worth the trouble to\nde-duplicate as long as it doesn't cost too much. And it doesn't,\nbecause PathKey construction already detects exactly this sort of\nredundancy. We need only do something with the knowledge. 
While we are here, I wonder if we can do the same trick fordistinctClause, to cope with cases like select distinct a.x, b.y from a, b where a.x = b.y;And there is case from regression test 'select_distinct.sql' that canbenefit from this optimization. -- -- Check mentioning same column more than once -- EXPLAIN (VERBOSE, COSTS OFF) SELECT count(*) FROM (SELECT DISTINCT two, four, two FROM tenk1) ss;ThanksRichard",
"msg_date": "Fri, 30 Dec 2022 16:05:55 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> On Wed, Dec 28, 2022 at 6:18 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This patch is aimed at being smarter about cases where we have\n>> redundant GROUP BY entries, for example\n>> SELECT ... WHERE a.x = b.y GROUP BY a.x, b.y;\n\n> While we are here, I wonder if we can do the same trick for\n> distinctClause, to cope with cases like\n> select distinct a.x, b.y from a, b where a.x = b.y;\n\nWe do that already, no?\n\nregression=# create table foo (x int, y int);\nCREATE TABLE\nregression=# explain select distinct * from foo where x = 1;\n QUERY PLAN \n-----------------------------------------------------------------\n Unique (cost=38.44..38.50 rows=11 width=8)\n -> Sort (cost=38.44..38.47 rows=11 width=8)\n Sort Key: y\n -> Seq Scan on foo (cost=0.00..38.25 rows=11 width=8)\n Filter: (x = 1)\n(5 rows)\n\nregression=# explain select distinct * from foo where x = y;\n QUERY PLAN \n-----------------------------------------------------------------\n Unique (cost=38.44..38.50 rows=11 width=8)\n -> Sort (cost=38.44..38.47 rows=11 width=8)\n Sort Key: x\n -> Seq Scan on foo (cost=0.00..38.25 rows=11 width=8)\n Filter: (x = y)\n(5 rows)\n\nBut if you do\n\nregression=# explain select * from foo where x = y group by x, y;\n QUERY PLAN \n-----------------------------------------------------------------\n Group (cost=38.44..38.52 rows=11 width=8)\n Group Key: x, y\n -> Sort (cost=38.44..38.47 rows=11 width=8)\n Sort Key: x\n -> Seq Scan on foo (cost=0.00..38.25 rows=11 width=8)\n Filter: (x = y)\n(6 rows)\n\nthen you can see that the Sort step knows it need only consider\none column even though the Group step considers both.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Dec 2022 11:32:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "I wrote:\n> Richard Guo <guofenglinux@gmail.com> writes:\n>> While we are here, I wonder if we can do the same trick for\n>> distinctClause, to cope with cases like\n>> select distinct a.x, b.y from a, b where a.x = b.y;\n\n> We do that already, no?\n\nOh, wait, I see what you mean: we are smart in code paths that rely\non distinct_pathkeys, but not in the hash-based code paths. Right,\nthat can be fixed the same way. 0001 attached is the same as before,\n0002 adds similar logic for the distinctClause.\n\nThe plan change in expected/pg_trgm.out is surprising at first\nglance, but I believe it's correct: the item that is being\ndropped is a parameterless STABLE function, so its value is not\nsupposed to change for the duration of the scan.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 30 Dec 2022 16:02:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Sat, 31 Dec 2022 at 02:32, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Richard Guo <guofenglinux@gmail.com> writes:\n> >> While we are here, I wonder if we can do the same trick for\n> >> distinctClause, to cope with cases like\n> >> select distinct a.x, b.y from a, b where a.x = b.y;\n>\n> > We do that already, no?\n>\n> Oh, wait, I see what you mean: we are smart in code paths that rely\n> on distinct_pathkeys, but not in the hash-based code paths. Right,\n> that can be fixed the same way. 0001 attached is the same as before,\n> 0002 adds similar logic for the distinctClause.\n>\n> The plan change in expected/pg_trgm.out is surprising at first\n> glance, but I believe it's correct: the item that is being\n> dropped is a parameterless STABLE function, so its value is not\n> supposed to change for the duration of the scan.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nff23b592ad6621563d3128b26860bcb41daf9542 ===\n=== applying patch ./v3-0002-remove-redundant-DISTINCT.patch\n....\nHunk #4 FAILED at 4704.\n....\n1 out of 10 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/plan/planner.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4083.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sat, 14 Jan 2023 12:32:47 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nYeah, sideswiped by 3c6fc5820 apparently. No substantive change needed.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 14 Jan 2023 16:23:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Sun, Jan 15, 2023 at 5:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> vignesh C <vignesh21@gmail.com> writes:\n> > The patch does not apply on top of HEAD as in [1], please post a rebased\n> patch:\n>\n> Yeah, sideswiped by 3c6fc5820 apparently. No substantive change needed.\n\n\nI looked through these two patches and they look good to me.\n\nBTW, another run of rebase is needed, due to da5800d5fa.\n\nThanks\nRichard\n\nOn Sun, Jan 15, 2023 at 5:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:vignesh C <vignesh21@gmail.com> writes:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nYeah, sideswiped by 3c6fc5820 apparently. No substantive change needed. I looked through these two patches and they look good to me.BTW, another run of rebase is needed, due to da5800d5fa.ThanksRichard",
"msg_date": "Tue, 17 Jan 2023 15:15:37 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "I wrote:\n> Yeah, sideswiped by 3c6fc5820 apparently. No substantive change needed.\n\nAnd immediately sideswiped by da5800d5f.\n\nIf nobody has any comments on this, I'm going to go ahead and push\nit. The value of the improvement is rapidly paling in comparison\nto the patch's maintenance effort.\n\n\t\t\tregards, tom lane",
"msg_date": "Tue, 17 Jan 2023 17:51:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Wed, Jan 18, 2023 at 6:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Yeah, sideswiped by 3c6fc5820 apparently. No substantive change needed.\n>\n> And immediately sideswiped by da5800d5f.\n\n\nYeah, I noticed this too yesterday. I reviewed through these two\npatches yesterday and I think they are in good shape now.\n\nThanks\nRichard\n\nOn Wed, Jan 18, 2023 at 6:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> Yeah, sideswiped by 3c6fc5820 apparently. No substantive change needed.\n\nAnd immediately sideswiped by da5800d5f. Yeah, I noticed this too yesterday. I reviewed through these twopatches yesterday and I think they are in good shape now.ThanksRichard",
"msg_date": "Wed, 18 Jan 2023 09:55:13 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Wed, 18 Jan 2023 at 14:55, Richard Guo <guofenglinux@gmail.com> wrote:\n> Yeah, I noticed this too yesterday. I reviewed through these two\n> patches yesterday and I think they are in good shape now.\n\nI'm currently reviewing the two patches.\n\nDavid\n\n\n",
"msg_date": "Wed, 18 Jan 2023 14:56:12 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "On Wed, 18 Jan 2023 at 11:51, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> If nobody has any comments on this, I'm going to go ahead and push\n> it. The value of the improvement is rapidly paling in comparison\n> to the patch's maintenance effort.\n\nNo objections from me.\n\nDavid\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:34:48 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Removing redundant grouping columns"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> No objections from me.\n\nPushed, thanks for looking at it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Jan 2023 12:39:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Removing redundant grouping columns"
}
] |
[
{
"msg_contents": "Hi All,\n\nIf no one has volunteered for the upcoming (January 2023) commitfest.\nI would like to volunteer for it.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 28 Dec 2022 10:52:38 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "CFM for 2023-01"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 10:52:38AM +0530, vignesh C wrote:\n> If no one has volunteered for the upcoming (January 2023) commitfest.\n> I would like to volunteer for it.\n\nIf you want to be up to the task, that would be great, of course. For\nnow, I have switched the CF as in progress.\n--\nMichael",
"msg_date": "Tue, 3 Jan 2023 11:22:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: CFM for 2023-01"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 07:52, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 28, 2022 at 10:52:38AM +0530, vignesh C wrote:\n> > If no one has volunteered for the upcoming (January 2023) commitfest.\n> > I would like to volunteer for it.\n>\n> If you want to be up to the task, that would be great, of course. For\n> now, I have switched the CF as in progress.\n\nThanks, I will start working on this.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 08:16:43 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: CFM for 2023-01"
}
] |
[
{
"msg_contents": "Hi,\n\nRebased the SQL/JSON patches over the latest HEAD. I've decided to\nkeep the same division of code into individual commits as that\nmentioned in the revert commit 2f2b18bd3f, squashing fixup commits in\nthat list into the appropriate feature commits.\n\nThe main difference from the patches as they were committed into v15\nis that JsonExpr evaluation no longer needs to use sub-transactions,\nthanks to the work done recently to handle type errors softly. I've\nmade the new code pass an ErrorSaveContext into the type-conversion\nrelated functions as needed and also added an ExecEvalExprSafe() to\nevaluate sub-expressions of JsonExpr that might contain expressions\nthat call type-conversion functions, such as CoerceViaIO contained in\nJsonCoercion nodes. ExecExprEvalSafe() is based on one of the patches\nthat Nikita Glukhov had submitted in a previous discussion about\nredesigning SQL/JSON expression evaluation [1]. Though, I think that\nnew interface will become unnecessary after I have finished rebasing\nmy patches to remove subsidiary ExprStates of JsonExprState that we\nhad also discussed back in [2].\n\nAdding this to January CF.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/message-id/c3b315b6-1e9f-6aa4-8708-daa19cf3f1a3%40postgrespro.ru\n[2] https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de",
"msg_date": "Wed, 28 Dec 2022 16:28:29 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "SQL/JSON revisited"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi,\n>\n> Rebased the SQL/JSON patches over the latest HEAD. I've decided to\n> keep the same division of code into individual commits as that\n> mentioned in the revert commit 2f2b18bd3f, squashing fixup commits in\n> that list into the appropriate feature commits.\n>\n> The main difference from the patches as they were committed into v15\n> is that JsonExpr evaluation no longer needs to use sub-transactions,\n> thanks to the work done recently to handle type errors softly. I've\n> made the new code pass an ErrorSaveContext into the type-conversion\n> related functions as needed and also added an ExecEvalExprSafe() to\n> evaluate sub-expressions of JsonExpr that might contain expressions\n> that call type-conversion functions, such as CoerceViaIO contained in\n> JsonCoercion nodes. ExecExprEvalSafe() is based on one of the patches\n> that Nikita Glukhov had submitted in a previous discussion about\n> redesigning SQL/JSON expression evaluation [1]. Though, I think that\n> new interface will become unnecessary after I have finished rebasing\n> my patches to remove subsidiary ExprStates of JsonExprState that we\n> had also discussed back in [2].\n>\n> Adding this to January CF.\n\nDone.\n\nhttps://commitfest.postgresql.org/41/4086/\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:31:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nThe Postgres Pro documentation team prepared another SQL/JSON \ndocumentation patch (attached), to apply on top of \nv1-0009-Documentation-for-SQL-JSON-features.patch.\nThe new patch:\n- Fixes minor typos\n- Does some rewording agreed with Nikita Glukhov\n- Updates Docbook markup to make tags consistent across SQL/JSON \ndocumentation and across func.sgml, and in particular, consistent with \nthe XMLTABLE function, which resembles SQL/JSON functions pretty much.\n\n-- \nElena Indrupskaya\nLead Technical Writer\nPostgres Professional http://www.postgrespro.com\n\nOn 28.12.2022 10:28, Amit Langote wrote:\n> Hi,\n>\n> Rebased the SQL/JSON patches over the latest HEAD. I've decided to\n> keep the same division of code into individual commits as that\n> mentioned in the revert commit 2f2b18bd3f, squashing fixup commits in\n> that list into the appropriate feature commits.\n>\n> The main difference from the patches as they were committed into v15\n> is that JsonExpr evaluation no longer needs to use sub-transactions,\n> thanks to the work done recently to handle type errors softly. I've\n> made the new code pass an ErrorSaveContext into the type-conversion\n> related functions as needed and also added an ExecEvalExprSafe() to\n> evaluate sub-expressions of JsonExpr that might contain expressions\n> that call type-conversion functions, such as CoerceViaIO contained in\n> JsonCoercion nodes. ExecExprEvalSafe() is based on one of the patches\n> that Nikita Glukhov had submitted in a previous discussion about\n> redesigning SQL/JSON expression evaluation [1]. Though, I think that\n> new interface will become unnecessary after I have finished rebasing\n> my patches to remove subsidiary ExprStates of JsonExprState that we\n> had also discussed back in [2].\n>\n> Adding this to January CF.\n>",
"msg_date": "Tue, 10 Jan 2023 15:51:04 +0300",
"msg_from": "Elena Indrupskaya <e.indrupskaya@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "\nOn 2023-01-10 Tu 07:51, Elena Indrupskaya wrote:\n> Hi,\n>\n> The Postgres Pro documentation team prepared another SQL/JSON\n> documentation patch (attached), to apply on top of\n> v1-0009-Documentation-for-SQL-JSON-features.patch.\n> The new patch:\n> - Fixes minor typos\n> - Does some rewording agreed with Nikita Glukhov\n> - Updates Docbook markup to make tags consistent across SQL/JSON\n> documentation and across func.sgml, and in particular, consistent with\n> the XMLTABLE function, which resembles SQL/JSON functions pretty much.\n>\n\nThat's nice, but please don't post incremental patches like this. It\nupsets the cfbot. (I wish there were a way to tell the cfbot to ignore\npatches)\n\nAlso, I'm fairly certain that a good many of your changes are not\naccording to project style. The rule as I understand it is that\n<parameter> is used for things that are parameters and <replaceable> is\nonly used for things that are not parameters. (I'm not sure where that's\ndocumented other than the comment on commit 47046763c3, but it's what I\nattempted to do with the earlier doc tidy up.)\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 10:03:20 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Tags in the patch follow the markup of the XMLTABLE function:\n\n<function>XMLTABLE</function> (\n <optional> <literal>XMLNAMESPACES</literal> ( \n<replaceable>namespace_uri</replaceable> <literal>AS</literal> \n<replaceable>namespace_name</replaceable> <optional>, ...</optional> ), \n</optional>\n <replaceable>row_expression</replaceable> \n<literal>PASSING</literal> <optional><literal>BY</literal> \n{<literal>REF</literal>|<literal>VALUE</literal>}</optional> \n<replaceable>document_expression</replaceable> \n<optional><literal>BY</literal> \n{<literal>REF</literal>|<literal>VALUE</literal>}</optional>\n <literal>COLUMNS</literal> <replaceable>name</replaceable> { \n<replaceable>type</replaceable> <optional><literal>PATH</literal> \n<replaceable>column_expression</replaceable></optional> \n<optional><literal>DEFAULT</literal> \n<replaceable>default_expression</replaceable></optional> \n<optional><literal>NOT NULL</literal> | <literal>NULL</literal></optional>\n | <literal>FOR ORDINALITY</literal> }\n <optional>, ...</optional>\n) <returnvalue>setof record</returnvalue>\n\nIn the above, as well as in the signatures of SQL/JSON functions, there \nare no exact parameter names; otherwise, they should have been followed \nby the <type> tag, which is not the case. There are no parameter names \nin the functions' code either. Therefore, <replaceable> tags seem more \nappropriate, according to the comment to commit 47046763c3.\n\nSorry for upsetting your bot. :(\n\n-- \nElena Indrupskaya\nLead Technical Writer\nPostgres Professional http://www.postgrespro.com\n> On 2023-01-10 Tu 07:51, Elena Indrupskaya wrote:\n>> Hi,\n>>\n>> The Postgres Pro documentation team prepared another SQL/JSON\n>> documentation patch (attached), to apply on top of\n>> v1-0009-Documentation-for-SQL-JSON-features.patch.\n>> The new patch:\n>> - Fixes minor typos\n>> - Does some rewording agreed with Nikita Glukhov\n>> - Updates Docbook markup to make tags consistent across SQL/JSON\n>> documentation and across func.sgml, and in particular, consistent with\n>> the XMLTABLE function, which resembles SQL/JSON functions pretty much.\n>>\n> That's nice, but please don't post incremental patches like this. It\n> upsets the cfbot. (I wish there were a way to tell the cfbot to ignore\n> patches)\n>\n> Also, I'm fairly certain that a good many of your changes are not\n> according to project style. The rule as I understand it is that\n> <parameter> is used for things that are parameters and <replaceable> is\n> only used for things that are not parameters. (I'm not sure where that's\n> documented other than the comment on commit 47046763c3, but it's what I\n> attempted to do with the earlier doc tidy up.)\n>\n>\n> cheers\n>\n>\n> andrew\n>\n> --\n> Andrew Dunstan\n> EDB: https://www.enterprisedb.com\n>\n\n\n",
"msg_date": "Wed, 11 Jan 2023 10:02:02 +0300",
"msg_from": "Elena Indrupskaya <e.indrupskaya@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 2:02 PM Elena Indrupskaya <\ne.indrupskaya@postgrespro.ru> wrote:\n>\n> Sorry for upsetting your bot. :(\n\nWhat I do in these cases is save the incremental patch as a .txt file --\nthat way people can read it, but the cf bot doesn't try to launch a CI run.\nAnd if I forget that detail, well it's not a big deal, it happens sometimes.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Wed, Jan 11, 2023 at 2:02 PM Elena Indrupskaya <e.indrupskaya@postgrespro.ru> wrote:>> Sorry for upsetting your bot. :(What I do in these cases is save the incremental patch as a .txt file -- that way people can read it, but the cf bot doesn't try to launch a CI run. And if I forget that detail, well it's not a big deal, it happens sometimes.--John NaylorEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 11 Jan 2023 14:18:21 +0700",
"msg_from": "John Naylor <john.naylor@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Hi,\n>\n> Rebased the SQL/JSON patches over the latest HEAD. I've decided to\n> keep the same division of code into individual commits as that\n> mentioned in the revert commit 2f2b18bd3f, squashing fixup commits in\n> that list into the appropriate feature commits.\n>\n> The main difference from the patches as they were committed into v15\n> is that JsonExpr evaluation no longer needs to use sub-transactions,\n> thanks to the work done recently to handle type errors softly. I've\n> made the new code pass an ErrorSaveContext into the type-conversion\n> related functions as needed and also added an ExecEvalExprSafe() to\n> evaluate sub-expressions of JsonExpr that might contain expressions\n> that call type-conversion functions, such as CoerceViaIO contained in\n> JsonCoercion nodes. ExecExprEvalSafe() is based on one of the patches\n> that Nikita Glukhov had submitted in a previous discussion about\n> redesigning SQL/JSON expression evaluation [1]. Though, I think that\n> new interface will become unnecessary after I have finished rebasing\n> my patches to remove subsidiary ExprStates of JsonExprState that we\n> had also discussed back in [2].\n\nAnd I've just finished doing that. In the attached updated 0004,\nwhich adds the JsonExpr node, its evaluation code is now broken into\nExprEvalSteps to handle the subsidiary JsonCoercion and JsonBehavior\nexpression nodes that previously used ExprState for recursive\nevaluation. Andres didn't like the latter as previously discussed at\n[1].\n\nI've also attached the patch that Elena has proposed as the patch\n0011. I haven't managed to review it yet, though once I do, I'll\nmerge it into the main documentation patch 0009. Thanks Elena.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de",
"msg_date": "Tue, 17 Jan 2023 22:31:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Tue, 17 Jan 2023 at 19:01, Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Dec 28, 2022 at 4:28 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > Rebased the SQL/JSON patches over the latest HEAD. I've decided to\n> > keep the same division of code into individual commits as that\n> > mentioned in the revert commit 2f2b18bd3f, squashing fixup commits in\n> > that list into the appropriate feature commits.\n> >\n> > The main difference from the patches as they were committed into v15\n> > is that JsonExpr evaluation no longer needs to use sub-transactions,\n> > thanks to the work done recently to handle type errors softly. I've\n> > made the new code pass an ErrorSaveContext into the type-conversion\n> > related functions as needed and also added an ExecEvalExprSafe() to\n> > evaluate sub-expressions of JsonExpr that might contain expressions\n> > that call type-conversion functions, such as CoerceViaIO contained in\n> > JsonCoercion nodes. ExecExprEvalSafe() is based on one of the patches\n> > that Nikita Glukhov had submitted in a previous discussion about\n> > redesigning SQL/JSON expression evaluation [1]. Though, I think that\n> > new interface will become unnecessary after I have finished rebasing\n> > my patches to remove subsidiary ExprStates of JsonExprState that we\n> > had also discussed back in [2].\n>\n> And I've just finished doing that. In the attached updated 0004,\n> which adds the JsonExpr node, its evaluation code is now broken into\n> ExprEvalSteps to handle the subsidiary JsonCoercion and JsonBehavior\n> expression nodes that previously used ExprState for recursive\n> evaluation. Andres didn't like the latter as previously discussed at\n> [1].\n>\n> I've also attached the patch that Elena has proposed as the patch\n> 0011. I haven't managed to review it yet, though once I do, I'll\n> merge it into the main documentation patch 0009. Thanks Elena.\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n37e267335068059ac9bd4ec5d06b493afb4b73e8 ===\n=== applying patch ./v2-0001-Common-SQL-JSON-clauses.patch\n....\ncan't find file to patch at input line 717\nPerhaps you used the wrong -p or --strip option?\nThe text leading up to this was:\n--------------------------\n|diff --git a/src/backend/utils/misc/queryjumble.c\nb/src/backend/utils/misc/queryjumble.c\n|index 328995a7dc..2361845a62 100644\n|--- a/src/backend/utils/misc/queryjumble.c\n|+++ b/src/backend/utils/misc/queryjumble.c\n--------------------------\nNo file to patch. Skipping patch.\n1 out of 1 hunk ignored\n\n[1] - http://cfbot.cputube.org/patch_41_4086.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 27 Jan 2023 19:57:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Fri, Jan 27, 2023 at 11:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> On Tue, 17 Jan 2023 at 19:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> > And I've just finished doing that. In the attached updated 0004,\n> > which adds the JsonExpr node, its evaluation code is now broken into\n> > ExprEvalSteps to handle the subsidiary JsonCoercion and JsonBehavior\n> > expression nodes that previously used ExprState for recursive\n> > evaluation. Andres didn't like the latter as previously discussed at\n> > [1].\n> >\n> > I've also attached the patch that Elena has proposed as the patch\n> > 0011. I haven't managed to review it yet, though once I do, I'll\n> > merge it into the main documentation patch 0009. Thanks Elena.\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nThanks for the heads up. Here's a rebased version.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 30 Jan 2023 15:39:12 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nOn Mon, Jan 30, 2023 at 3:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Fri, Jan 27, 2023 at 11:27 PM vignesh C <vignesh21@gmail.com> wrote:\n> > On Tue, 17 Jan 2023 at 19:01, Amit Langote <amitlangote09@gmail.com> wrote:\n> > > And I've just finished doing that. In the attached updated 0004,\n> > > which adds the JsonExpr node, its evaluation code is now broken into\n> > > ExprEvalSteps to handle the subsidiary JsonCoercion and JsonBehavior\n> > > expression nodes that previously used ExprState for recursive\n> > > evaluation. Andres didn't like the latter as previously discussed at\n> > > [1].\n> > >\n> > > I've also attached the patch that Elena has proposed as the patch\n> > > 0011. I haven't managed to review it yet, though once I do, I'll\n> > > merge it into the main documentation patch 0009. Thanks Elena.\n> >\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n>\n> Thanks for the heads up. Here's a rebased version.\n\nRebased again over queryjumble overhaul.\n\nI decided to squash what was \"[PATCH v3 01/11] Common SQL/JSON\nclauses\" into \"[PATCH v3 02/11] SQL/JSON constructors\", because I\nnoticed \"useless productions\" warnings against its gram.y additions\nwhen building just 0001.\n\nI also looked at squashing \"[PATCH v3 11/11] Proposed reworking of\nSQL/JSON documentaion\" into \"[PATCH v3 09/11] Documentation for\nSQL/JSON features\", but didn't, again, because I am still not sure\nwhich one of <parameter> and <replaceable> is correct for the SQL/JSON\nfunction constructs. Maybe it's the latter looking at the markup for\nsome text on [1], such as exists ( path_expression ) → boolean, but\nAndrew sounded doubtful about that upthread.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n[1] https://www.postgresql.org/docs/15/functions-json.html",
"msg_date": "Mon, 20 Feb 2023 16:35:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi Amit and Andrew,\n\nRegarding not squashing [PATCH v3 11/11] Proposed reworking of\nSQL/JSON documentaion, here is exactly what Tom Lane wrote in the comment to commit 47046763c3:\n\nUse <parameter>\n consistently for things that are in fact names of parameters (including\n OUT parameters), reserving <replaceable> for things that aren't.\n\nFollowing this, <parameter> tags should be replaced with <replaceable> because\nthe SQL/JSON functions' code does not explicitly specify those tagged variables\nas function parameters. Doesn't it convince you to look at the patch again? Thank you.\n\nOn 20.02.2023 10:35, Amit Langote wrote:\n>> no parameter names in the functions' code either\n> Hi,\n>\n> On Mon, Jan 30, 2023 at 3:39 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Fri, Jan 27, 2023 at 11:27 PM vignesh C <vignesh21@gmail.com> wrote:\n>>> On Tue, 17 Jan 2023 at 19:01, Amit Langote <amitlangote09@gmail.com> wrote:\n>>>> And I've just finished doing that. In the attached updated 0004,\n>>>> which adds the JsonExpr node, its evaluation code is now broken into\n>>>> ExprEvalSteps to handle the subsidiary JsonCoercion and JsonBehavior\n>>>> expression nodes that previously used ExprState for recursive\n>>>> evaluation. Andres didn't like the latter as previously discussed at\n>>>> [1].\n>>>>\n>>>> I've also attached the patch that Elena has proposed as the patch\n>>>> 0011. I haven't managed to review it yet, though once I do, I'll\n>>>> merge it into the main documentation patch 0009. Thanks Elena.\n>>> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n>> Thanks for the heads up. Here's a rebased version.\n> Rebased again over queryjumble overhaul.\n>\n> I decided to squash what was \"[PATCH v3 01/11] Common SQL/JSON\n> clauses\" into \"[PATCH v3 02/11] SQL/JSON constructors\", because I\n> noticed \"useless productions\" warnings against its gram.y additions\n> when building just 0001.\n>\n> I also looked at squashing \"[PATCH v3 11/11] Proposed reworking of\n> SQL/JSON documentaion\" into \"[PATCH v3 09/11] Documentation for\n> SQL/JSON features\", but didn't, again, because I am still not sure\n> which one of <parameter> and <replaceable> is correct for the SQL/JSON\n> function constructs. Maybe it's the latter looking at the markup for\n> some text on [1], such as exists ( path_expression ) → boolean, but\n> Andrew sounded doubtful about that upthread.\n>\n\n\n",
"msg_date": "Mon, 20 Feb 2023 12:47:54 +0300",
"msg_from": "e.indrupskaya@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Op 20-02-2023 om 08:35 schreef Amit Langote:\n>>\n> \n> Rebased again over queryjumble overhaul.\n>\n\nHi,\n\n\nBut the following statement is a problem. It does not crash but it goes \noff, half-freezing the machine, and only comes back after fanatic \nCtrl-C'ing.\n\nselect json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object \non error);\n\nCan you have a look?\n\nThanks,\n\nErik Rijkers\n\n\n\nPS\nLog doesn't really have anything interesting:\n\n2023-02-20 14:57:06.073 CET 1336 LOG: server process (PID 1493) was \nterminated by signal 9: Killed\n2023-02-20 14:57:06.073 CET 1336 DETAIL: Failed process was running: \nselect json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object \non error);\n2023-02-20 14:57:06.359 CET 1336 LOG: terminating any other active \nserver processes\n2023-02-20 14:57:06.667 CET 1336 LOG: all server processes terminated; \nreinitializing\n2023-02-20 14:57:11.870 CET 1556 LOG: database system was interrupted; \nlast known up at 2023-02-20 14:44:43 CET\n\n\n",
"msg_date": "Mon, 20 Feb 2023 15:41:36 +0100",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-20 16:35:52 +0900, Amit Langote wrote:\n> Subject: [PATCH v4 03/10] SQL/JSON query functions\n> +/*\n> + * Evaluate a JSON error/empty behavior result.\n> + */\n> +static Datum\n> +ExecEvalJsonBehavior(JsonBehavior *behavior, bool *is_null)\n> +{\n> +\t*is_null = false;\n> +\n> +\tswitch (behavior->btype)\n> +\t{\n> +\t\tcase JSON_BEHAVIOR_EMPTY_ARRAY:\n> +\t\t\treturn JsonbPGetDatum(JsonbMakeEmptyArray());\n> +\n> +\t\tcase JSON_BEHAVIOR_EMPTY_OBJECT:\n> +\t\t\treturn JsonbPGetDatum(JsonbMakeEmptyObject());\n> +\n> +\t\tcase JSON_BEHAVIOR_TRUE:\n> +\t\t\treturn BoolGetDatum(true);\n> +\n> +\t\tcase JSON_BEHAVIOR_FALSE:\n> +\t\t\treturn BoolGetDatum(false);\n> +\n> +\t\tcase JSON_BEHAVIOR_NULL:\n> +\t\tcase JSON_BEHAVIOR_UNKNOWN:\n> +\t\t\t*is_null = true;\n> +\t\t\treturn (Datum) 0;\n> +\n> +\t\tcase JSON_BEHAVIOR_DEFAULT:\n> +\t\t\t/* Always handled in the caller. */\n> +\t\t\tAssert(false);\n> +\t\t\treturn (Datum) 0;\n> +\n> +\t\tdefault:\n> +\t\t\telog(ERROR, \"unrecognized SQL/JSON behavior %d\", behavior->btype);\n> +\t\t\treturn (Datum) 0;\n> +\t}\n> +}\n\nDoes this actually need to be evaluated at expression eavluation time?\nCouldn't we just emit the proper constants in execExpr.c?\n\n> +/* ----------------------------------------------------------------\n> + *\t\tExecEvalJson\n> + * ----------------------------------------------------------------\n> + */\n> +void\n> +ExecEvalJson(ExprState *state, ExprEvalStep *op, ExprContext *econtext)\n\nPointless comment.\n\n\n> +{\n> +\tJsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> +\tJsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> +\tJsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> +\tJsonExpr *jexpr = jsestate->jsexpr;\n> +\tDatum\t\titem;\n> +\tDatum\t\tres = (Datum) 0;\n> +\tJsonPath *path;\n> +\tbool\t\tthrowErrors = jexpr->on_error->btype == JSON_BEHAVIOR_ERROR;\n> +\n> +\t*op->resnull = true;\t\t/* until we get a result */\n> +\t*op->resvalue = (Datum) 0;\n> +\n> +\titem = pre_eval->formatted_expr.value;\n> +\tpath = DatumGetJsonPathP(pre_eval->pathspec.value);\n> +\n> +\t/* Reset JsonExprPostEvalState for this evaluation. */\n> +\tmemset(post_eval, 0, sizeof(*post_eval));\n> +\n> +\tres = ExecEvalJsonExpr(op, econtext, path, item, op->resnull,\n> +\t\t\t\t\t\t !throwErrors ? &post_eval->error : NULL);\n> +\n> +\t*op->resvalue = res;\n> +}\n\nI really don't like having both ExecEvalJson() and ExecEvalJsonExpr(). There's\nreally no way to know what which version does, just based on the name.\n\n\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n\nThis stuff adds quite a bit of complexity to the parser. Do we realy need like\na dozen new rules here?\n\n\n> +json_behavior_empty_array:\n> +\t\t\tEMPTY_P ARRAY\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> +\t\t\t/* non-standard, for Oracle compatibility only */\n> +\t\t\t| EMPTY_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> +\t\t;\n\nDo we really want to add random oracle compat crud here?\n\n\n> +/*\n> + * Evaluate a JSON path variable caching computed value.\n> + */\n> +int\n> +EvalJsonPathVar(void *cxt, char *varName, int varNameLen,\n> +\t\t\t\tJsonbValue *val, JsonbValue *baseObject)\n\nMissing static?\n\n> +{\n> +\tJsonPathVariableEvalContext *var = NULL;\n> +\tList\t *vars = cxt;\n> +\tListCell *lc;\n> +\tint\t\t\tid = 1;\n> +\n> +\tif (!varName)\n> +\t\treturn list_length(vars);\n> +\n> +\tforeach(lc, vars)\n> +\t{\n> +\t\tvar = lfirst(lc);\n> +\n> +\t\tif (!strncmp(var->name, varName, varNameLen))\n> +\t\t\tbreak;\n> +\n> +\t\tvar = NULL;\n> +\t\tid++;\n> +\t}\n> +\n> +\tif (!var)\n> +\t\treturn -1;\n> +\n> +\t/*\n> +\t * When belonging to a JsonExpr, path variables are computed with the\n> +\t * JsonExpr's ExprState (var->estate is NULL), so don't need to be computed\n> +\t * here. In some other cases, such as when the path variables belonging\n> +\t * to a JsonTable instead, those variables must be evaluated on their own,\n> +\t * without the enclosing JsonExpr itself needing to be evaluated, so must\n> +\t * be handled here.\n> +\t */\n> +\tif (var->estate && !var->evaluated)\n> +\t{\n> +\t\tAssert(var->econtext != NULL);\n> +\t\tvar->value = ExecEvalExpr(var->estate, var->econtext, &var->isnull);\n> +\t\tvar->evaluated = true;\n\nUh, so this continues to do recursive expression evaluation, as\nExecEvalJsonExpr()->JsonPathQuery()->executeJsonPath(EvalJsonPathVar)\n\nI'm getting grumpy here. This is wrong, has been pointed out many times. The\nonly thing that changes is that the point of recursion is moved around.\n\n\n> +\n> +/*\n> + * ExecEvalExprSafe\n> + *\n> + * Like ExecEvalExpr(), though this allows the caller to pass an\n> + * ErrorSaveContext to declare its intenion to catch any errors that occur when\n> + * executing the expression, such as when calling type input functions that may\n> + * be present in it.\n> + */\n> +static inline Datum\n> +ExecEvalExprSafe(ExprState *state,\n> +\t\t\t\t ExprContext *econtext,\n> +\t\t\t\t bool *isNull,\n> +\t\t\t\t Node *escontext,\n> +\t\t\t\t bool *error)\n\nAfaict there's no caller of this?\n\n\n> \n> +/*\n> + * ExecInitExprWithCaseValue\n> + *\n> + * This is the same as ExecInitExpr, except the caller passes the Datum and\n> + * bool pointers that it would like the ExprState.innermost_caseval\n> + * and ExprState.innermost_casenull, respectively, to be set to. 
That way,\n> + * it can pass an input value to evaluate the expression via a CaseTestExpr.\n> + */\n> +ExprState *\n> +ExecInitExprWithCaseValue(Expr *node, PlanState *parent,\n> +\t\t\t\t\t\t Datum *caseval, bool *casenull)\n> +{\n> +\tExprState *state;\n> +\tExprEvalStep scratch = {0};\n> +\n> +\t/* Special case: NULL expression produces a NULL ExprState pointer */\n> +\tif (node == NULL)\n> +\t\treturn NULL;\n> +\n> +\t/* Initialize ExprState with empty step list */\n> +\tstate = makeNode(ExprState);\n> +\tstate->expr = node;\n> +\tstate->parent = parent;\n> +\tstate->ext_params = NULL;\n> +\tstate->innermost_caseval = caseval;\n> +\tstate->innermost_casenull = casenull;\n> +\n> +\t/* Insert EEOP_*_FETCHSOME steps as needed */\n> +\tExecInitExprSlots(state, (Node *) node);\n> +\n> +\t/* Compile the expression proper */\n> +\tExecInitExprRec(node, state, &state->resvalue, &state->resnull);\n> +\n> +\t/* Finally, append a DONE step */\n> +\tscratch.opcode = EEOP_DONE;\n> +\tExprEvalPushStep(state, &scratch);\n> +\n> +\tExecReadyExpr(state);\n> +\n> +\treturn state;\n\n> +struct JsonTableJoinState\n> +{\n> +\tunion\n> +\t{\n> +\t\tstruct\n> +\t\t{\n> +\t\t\tJsonTableJoinState *left;\n> +\t\t\tJsonTableJoinState *right;\n> +\t\t\tbool\t\tadvanceRight;\n> +\t\t}\t\t\tjoin;\n> +\t\tJsonTableScanState scan;\n> +\t}\t\t\tu;\n> +\tbool\t\tis_join;\n> +};\n\nA join state that unions the join member with a scan, and has a is_join field?\n\n\n> +/*\n> + * JsonTableInitOpaque\n> + *\t\tFill in TableFuncScanState->opaque for JsonTable processor\n> + */\n> +static void\n> +JsonTableInitOpaque(TableFuncScanState *state, int natts)\n> +{\n> +\tJsonTableContext *cxt;\n> +\tPlanState *ps = &state->ss.ps;\n> +\tTableFuncScan *tfs = castNode(TableFuncScan, ps->plan);\n> +\tTableFunc *tf = tfs->tablefunc;\n> +\tJsonExpr *ci = castNode(JsonExpr, tf->docexpr);\n> +\tJsonTableParent *root = castNode(JsonTableParent, tf->plan);\n> +\tList\t *args = NIL;\n> +\tListCell *lc;\n> 
+\tint\t\t\ti;\n> +\n> +\tcxt = palloc0(sizeof(JsonTableContext));\n> +\tcxt->magic = JSON_TABLE_CONTEXT_MAGIC;\n> +\n> +\tif (ci->passing_values)\n> +\t{\n> +\t\tListCell *exprlc;\n> +\t\tListCell *namelc;\n> +\n> +\t\tforboth(exprlc, ci->passing_values,\n> +\t\t\t\tnamelc, ci->passing_names)\n> +\t\t{\n> +\t\t\tExpr\t *expr = (Expr *) lfirst(exprlc);\n> +\t\t\tString\t *name = lfirst_node(String, namelc);\n> +\t\t\tJsonPathVariableEvalContext *var = palloc(sizeof(*var));\n> +\n> +\t\t\tvar->name = pstrdup(name->sval);\n> +\t\t\tvar->typid = exprType((Node *) expr);\n> +\t\t\tvar->typmod = exprTypmod((Node *) expr);\n> +\t\t\tvar->estate = ExecInitExpr(expr, ps);\n> +\t\t\tvar->econtext = ps->ps_ExprContext;\n> +\t\t\tvar->mcxt = CurrentMemoryContext;\n> +\t\t\tvar->evaluated = false;\n> +\t\t\tvar->value = (Datum) 0;\n> +\t\t\tvar->isnull = true;\n> +\n> +\t\t\targs = lappend(args, var);\n> +\t\t}\n> +\t}\n> +\n> +\tcxt->colexprs = palloc(sizeof(*cxt->colexprs) *\n> +\t\t\t\t\t\t list_length(tf->colvalexprs));\n> +\n> +\tJsonTableInitScanState(cxt, &cxt->root, root, NULL, args,\n> +\t\t\t\t\t\t CurrentMemoryContext);\n> +\n> +\ti = 0;\n> +\n> +\tforeach(lc, tf->colvalexprs)\n> +\t{\n> +\t\tExpr\t *expr = lfirst(lc);\n> +\n> +\t\tcxt->colexprs[i].expr =\n> +\t\t\tExecInitExprWithCaseValue(expr, ps,\n> +\t\t\t\t\t\t\t\t\t &cxt->colexprs[i].scan->current,\n> +\t\t\t\t\t\t\t\t\t &cxt->colexprs[i].scan->currentIsNull);\n> +\n> +\t\ti++;\n> +\t}\n> +\n> +\tstate->opaque = cxt;\n> +}\n\nEvaluating N expressions for a json table isn't a good approach, both memory\nand CPU efficiency wise.\n\nWhy don't you just emit the proper expression directly, insted of the\nCaseTestExpr stuff, that you then separately evaluate?\n\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Feb 2023 09:24:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 11:41 PM Erik Rijkers <er@xs4all.nl> wrote:\n> Op 20-02-2023 om 08:35 schreef Amit Langote:\n> > Rebased again over queryjumble overhaul.\n> But the following statement is a problem. It does not crash but it goes\n> off, half-freezing the machine, and only comes back after fanatic\n> Ctrl-C'ing.\n>\n> select json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object\n> on error);\n>\n> Can you have a look?\n\nThanks for the test case. It caused ExecInterpExpr() to enter an\ninfinite loop, which I've fixed in the attached updated version. I've\nalso merged Elena's documentation changes; I can see that\n<replaceable> is more correct.\n\nNow looking at Andres' comments, though, posting a version containing\na fix for the above case so Erik may continue the testing in the\nmeantime.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 21 Feb 2023 12:09:15 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Tue, Feb 21, 2023 at 12:09 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Feb 20, 2023 at 11:41 PM Erik Rijkers <er@xs4all.nl> wrote:\n> > Op 20-02-2023 om 08:35 schreef Amit Langote:\n> > > Rebased again over queryjumble overhaul.\n> > But the following statement is a problem. It does not crash but it goes\n> > off, half-freezing the machine, and only comes back after fanatic\n> > Ctrl-C'ing.\n> >\n> > select json_query(jsonb '[3,4]', '$[*]' returning bigint[] empty object\n> > on error);\n> >\n> > Can you have a look?\n>\n> Thanks for the test case. It caused ExecInterpExpr() to enter an\n> infinite loop, which I've fixed in the attached updated version. I've\n> also merged Elena's documentation changes; I can see that\n> <replaceable> is more correct.\n\nOops, I hadn't actually done the latter. Will do when posting the next version.\n\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 21 Feb 2023 14:40:09 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Thanks a lot for taking a look at this and sorry about the delay in response.\n\nOn Tue, Feb 21, 2023 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-20 16:35:52 +0900, Amit Langote wrote:\n> > Subject: [PATCH v4 03/10] SQL/JSON query functions\n> > +/*\n> > + * Evaluate a JSON error/empty behavior result.\n> > + */\n> > +static Datum\n> > +ExecEvalJsonBehavior(JsonBehavior *behavior, bool *is_null)\n> > +{\n> > + *is_null = false;\n> > +\n> > + switch (behavior->btype)\n> > + {\n> > + case JSON_BEHAVIOR_EMPTY_ARRAY:\n> > + return JsonbPGetDatum(JsonbMakeEmptyArray());\n> > +\n> > + case JSON_BEHAVIOR_EMPTY_OBJECT:\n> > + return JsonbPGetDatum(JsonbMakeEmptyObject());\n> > +\n> > + case JSON_BEHAVIOR_TRUE:\n> > + return BoolGetDatum(true);\n> > +\n> > + case JSON_BEHAVIOR_FALSE:\n> > + return BoolGetDatum(false);\n> > +\n> > + case JSON_BEHAVIOR_NULL:\n> > + case JSON_BEHAVIOR_UNKNOWN:\n> > + *is_null = true;\n> > + return (Datum) 0;\n> > +\n> > + case JSON_BEHAVIOR_DEFAULT:\n> > + /* Always handled in the caller. 
*/\n> > + Assert(false);\n> > + return (Datum) 0;\n> > +\n> > + default:\n> > + elog(ERROR, \"unrecognized SQL/JSON behavior %d\", behavior->btype);\n> > + return (Datum) 0;\n> > + }\n> > +}\n>\n> Does this actually need to be evaluated at expression eavluation time?\n> Couldn't we just emit the proper constants in execExpr.c?\n\nYes, done that way in the updated patch.\n\n> > +/* ----------------------------------------------------------------\n> > + * ExecEvalJson\n> > + * ----------------------------------------------------------------\n> > + */\n> > +void\n> > +ExecEvalJson(ExprState *state, ExprEvalStep *op, ExprContext *econtext)\n>\n> Pointless comment.\n\nRemoved this function altogether in favor of merging the body with\nExecEvalJsonExpr(), which does have a more sensible comment\n\n> > +{\n> > + JsonExprState *jsestate = op->d.jsonexpr.jsestate;\n> > + JsonExprPreEvalState *pre_eval = &jsestate->pre_eval;\n> > + JsonExprPostEvalState *post_eval = &jsestate->post_eval;\n> > + JsonExpr *jexpr = jsestate->jsexpr;\n> > + Datum item;\n> > + Datum res = (Datum) 0;\n> > + JsonPath *path;\n> > + bool throwErrors = jexpr->on_error->btype == JSON_BEHAVIOR_ERROR;\n> > +\n> > + *op->resnull = true; /* until we get a result */\n> > + *op->resvalue = (Datum) 0;\n> > +\n> > + item = pre_eval->formatted_expr.value;\n> > + path = DatumGetJsonPathP(pre_eval->pathspec.value);\n> > +\n> > + /* Reset JsonExprPostEvalState for this evaluation. */\n> > + memset(post_eval, 0, sizeof(*post_eval));\n> > +\n> > + res = ExecEvalJsonExpr(op, econtext, path, item, op->resnull,\n> > + !throwErrors ? &post_eval->error : NULL);\n> > +\n> > + *op->resvalue = res;\n> > +}\n>\n> I really don't like having both ExecEvalJson() and ExecEvalJsonExpr(). 
There's\n> really no way to know what which version does, just based on the name.\n\nYes, having two functions is no longer necessary.\n\n> > --- a/src/backend/parser/gram.y\n> > +++ b/src/backend/parser/gram.y\n>\n> This stuff adds quite a bit of complexity to the parser. Do we realy need like\n> a dozen new rules here?\n>>\n> > +json_behavior_empty_array:\n> > + EMPTY_P ARRAY { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> > + /* non-standard, for Oracle compatibility only */\n> > + | EMPTY_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> > + ;\n>\n> Do we really want to add random oracle compat crud here?\n\nHmm, sorry, but I haven't familiarized myself with the grammar side of\nthings as much as I perhaps should have, so I am not sure whether a\nmore simplified grammar would suffice for offering a\nstandard-compliant functionality.\n\nMaybe we could take out the oracle-compatibility bit, but I'd\nappreciate it if someone who has been involved with SQL/JSON from the\nbeginning can comment on the above 2 points.\n\n> > +/*\n> > + * Evaluate a JSON path variable caching computed value.\n> > + */\n> > +int\n> > +EvalJsonPathVar(void *cxt, char *varName, int varNameLen,\n> > + JsonbValue *val, JsonbValue *baseObject)\n>\n> Missing static?\n\nFixed.\n\n> > +{\n> > + JsonPathVariableEvalContext *var = NULL;\n> > + List *vars = cxt;\n> > + ListCell *lc;\n> > + int id = 1;\n> > +\n> > + if (!varName)\n> > + return list_length(vars);\n> > +\n> > + foreach(lc, vars)\n> > + {\n> > + var = lfirst(lc);\n> > +\n> > + if (!strncmp(var->name, varName, varNameLen))\n> > + break;\n> > +\n> > + var = NULL;\n> > + id++;\n> > + }\n> > +\n> > + if (!var)\n> > + return -1;\n> > +\n> > + /*\n> > + * When belonging to a JsonExpr, path variables are computed with the\n> > + * JsonExpr's ExprState (var->estate is NULL), so don't need to be computed\n> > + * here. 
In some other cases, such as when the path variables belonging\n> > + * to a JsonTable instead, those variables must be evaluated on their own,\n> > + * without the enclosing JsonExpr itself needing to be evaluated, so must\n> > + * be handled here.\n> > + */\n> > + if (var->estate && !var->evaluated)\n> > + {\n> > + Assert(var->econtext != NULL);\n> > + var->value = ExecEvalExpr(var->estate, var->econtext, &var->isnull);\n> > + var->evaluated = true;\n>\n> Uh, so this continues to do recursive expression evaluation, as\n> ExecEvalJsonExpr()->JsonPathQuery()->executeJsonPath(EvalJsonPathVar)\n>\n> I'm getting grumpy here. This is wrong, has been pointed out many times. The\n> only thing that changes is that the point of recursion is moved around.\n\nActually, these JSON path vars, along with other sub-expressions of\nJsonExpr, *are* computed non-recursively as ExprEvalSteps of the\nJsonExprState, at least in the cases where the vars are to be computed\nas part of evaluating the JsonExpr itself. So, the code path you've\nshown above perhaps as a hypothetical doesn't really exist, though\nthere *is* an instance where these path vars are computed *outside*\nthe context of evaluating the parent JsonExpr, such as in\nJsonTableResetContextItem(). Maybe there's a cleaner way of doing\nthat though...\n\n> > +\n> > +/*\n> > + * ExecEvalExprSafe\n> > + *\n> > + * Like ExecEvalExpr(), though this allows the caller to pass an\n> > + * ErrorSaveContext to declare its intenion to catch any errors that occur when\n> > + * executing the expression, such as when calling type input functions that may\n> > + * be present in it.\n> > + */\n> > +static inline Datum\n> > +ExecEvalExprSafe(ExprState *state,\n> > + ExprContext *econtext,\n> > + bool *isNull,\n> > + Node *escontext,\n> > + bool *error)\n>\n> Afaict there's no caller of this?\n\nOops, removed. 
This was used in a previous version of the patch that\nstill had nested ExprStates inside JsonExprState.\n\n> > +/*\n> > + * ExecInitExprWithCaseValue\n> > + *\n> > + * This is the same as ExecInitExpr, except the caller passes the Datum and\n> > + * bool pointers that it would like the ExprState.innermost_caseval\n> > + * and ExprState.innermost_casenull, respectively, to be set to. That way,\n> > + * it can pass an input value to evaluate the expression via a CaseTestExpr.\n> > + */\n> > +ExprState *\n> > +ExecInitExprWithCaseValue(Expr *node, PlanState *parent,\n> > + Datum *caseval, bool *casenull)\n> > +{\n> > + ExprState *state;\n> > + ExprEvalStep scratch = {0};\n> > +\n> > + /* Special case: NULL expression produces a NULL ExprState pointer */\n> > + if (node == NULL)\n> > + return NULL;\n> > +\n> > + /* Initialize ExprState with empty step list */\n> > + state = makeNode(ExprState);\n> > + state->expr = node;\n> > + state->parent = parent;\n> > + state->ext_params = NULL;\n> > + state->innermost_caseval = caseval;\n> > + state->innermost_casenull = casenull;\n> > +\n> > + /* Insert EEOP_*_FETCHSOME steps as needed */\n> > + ExecInitExprSlots(state, (Node *) node);\n> > +\n> > + /* Compile the expression proper */\n> > + ExecInitExprRec(node, state, &state->resvalue, &state->resnull);\n> > +\n> > + /* Finally, append a DONE step */\n> > + scratch.opcode = EEOP_DONE;\n> > + ExprEvalPushStep(state, &scratch);\n> > +\n> > + ExecReadyExpr(state);\n> > +\n> > + return state;\n>\n> > +struct JsonTableJoinState\n> > +{\n> > + union\n> > + {\n> > + struct\n> > + {\n> > + JsonTableJoinState *left;\n> > + JsonTableJoinState *right;\n> > + bool advanceRight;\n> > + } join;\n> > + JsonTableScanState scan;\n> > + } u;\n> > + bool is_join;\n> > +};\n>\n> A join state that unions the join member with a scan, and has a is_join field?\n\nYeah, I agree that's not the best form for what it is. 
I've replaced\nthat with the following:\n\n+/* Structures for JSON_TABLE execution */\n+\n+typedef enum JsonTablePlanStateType\n+{\n+ JSON_TABLE_SCAN_STATE = 0,\n+ JSON_TABLE_JOIN_STATE\n+} JsonTablePlanStateType;\n+\n+typedef struct JsonTablePlanState\n+{\n+ JsonTablePlanStateType type;\n+\n+ struct JsonTablePlanState *parent;\n+ struct JsonTablePlanState *nested;\n+} JsonTablePlanState;\n+\n+typedef struct JsonTableScanState\n+{\n+ JsonTablePlanState plan;\n+\n+ MemoryContext mcxt;\n+ JsonPath *path;\n+ List *args;\n+ JsonValueList found;\n+ JsonValueListIterator iter;\n+ Datum current;\n+ int ordinal;\n+ bool currentIsNull;\n+ bool outerJoin;\n+ bool errorOnError;\n+ bool advanceNested;\n+ bool reset;\n+} JsonTableScanState;\n+\n+typedef struct JsonTableJoinState\n+{\n+ JsonTablePlanState plan;\n+\n+ JsonTablePlanState *left;\n+ JsonTablePlanState *right;\n+ bool advanceRight;\n+} JsonTableJoinState;\n\nI considered using NodeTag but decided not to, because this stuff is\nlocal to jsonpath_exec.c.\n\n> > + * JsonTableInitOpaque\n> > + * Fill in TableFuncScanState->opaque for JsonTable processor\n> > + */\n> > +static void\n> > +JsonTableInitOpaque(TableFuncScanState *state, int natts)\n> > +{\n> > + JsonTableContext *cxt;\n> > + PlanState *ps = &state->ss.ps;\n> > + TableFuncScan *tfs = castNode(TableFuncScan, ps->plan);\n> > + TableFunc *tf = tfs->tablefunc;\n> > + JsonExpr *ci = castNode(JsonExpr, tf->docexpr);\n> > + JsonTableParent *root = castNode(JsonTableParent, tf->plan);\n> > + List *args = NIL;\n> > + ListCell *lc;\n> > + int i;\n> > +\n> > + cxt = palloc0(sizeof(JsonTableContext));\n> > + cxt->magic = JSON_TABLE_CONTEXT_MAGIC;\n> > +\n> > + if (ci->passing_values)\n> > + {\n> > + ListCell *exprlc;\n> > + ListCell *namelc;\n> > +\n> > + forboth(exprlc, ci->passing_values,\n> > + namelc, ci->passing_names)\n> > + {\n> > + Expr *expr = (Expr *) lfirst(exprlc);\n> > + String *name = lfirst_node(String, namelc);\n> > + JsonPathVariableEvalContext *var = 
palloc(sizeof(*var));\n> > + var->name = pstrdup(name->sval);\n> > + var->typid = exprType((Node *) expr);\n> > + var->typmod = exprTypmod((Node *) expr);\n> > + var->estate = ExecInitExpr(expr, ps);\n> > + var->econtext = ps->ps_ExprContext;\n> > + var->mcxt = CurrentMemoryContext;\n> > + var->evaluated = false;\n> > + var->value = (Datum) 0;\n> > + var->isnull = true;\n> > +\n> > + args = lappend(args, var);\n> > + }\n> > + }\n> > +\n> > + cxt->colexprs = palloc(sizeof(*cxt->colexprs) *\n> > + list_length(tf->colvalexprs));\n> > +\n> > + JsonTableInitScanState(cxt, &cxt->root, root, NULL, args,\n> > + CurrentMemoryContext);\n> > +\n> > + i = 0;\n> > +\n> > + foreach(lc, tf->colvalexprs)\n> > + {\n> > + Expr *expr = lfirst(lc);\n> > +\n> > + cxt->colexprs[i].expr =\n> > + ExecInitExprWithCaseValue(expr, ps,\n> > + &cxt->colexprs[i].scan->current,\n> > + &cxt->colexprs[i].scan->currentIsNull);\n> > +\n> > + i++;\n> > + }\n> > +\n> > + state->opaque = cxt;\n> > +}\n>\n> Why don't you just emit the proper expression directly, insted of the\n> CaseTestExpr stuff, that you then separately evaluate?\n\nI suppose you mean emitting the expression that supplies the value\ngiven by scan->current and scan->currentIsNull into the same ExprState\nthat holds the steps for a given colvalexpr. If so, I don't really\nsee a way of doing that given the current model of JSON_TABLE\nexecution. The former is computed as part of\nTableFuncRoutine.FetchRow(scan), which sets scan.current (and\ncurrentIsNull) and the latter is computed as part of\nTableFuncRoutine.GetValue(scan, colnum).\n\n> Evaluating N expressions for a json table isn't a good approach, both memory\n> and CPU efficiency wise.\n\nAre you referring to JsonTableInitOpaque() initializing all these\nsub-expressions of JsonTableParent, especially colvalexprs, using N\n*independent* ExprStates? 
That could perhaps be made to work by\nmaking JsonTableParent be an expression recognized by execExpr.c, so\nthat a single ExprState can store the steps for all its\nsub-expressions, much like JsonExpr is. I'll give that a try, though\nI wonder if the semantics of making this work in a single\nExecEvalExpr() call will mismatch that of the current way, because\ndifferent sub-expressions are currently evaluated under different APIs\nof TableFuncRoutine.\n\nIn the meantime, I'm attaching a version of the patchset with a few\nthings fixed as mentioned above.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 27 Feb 2023 16:45:49 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 4:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Feb 21, 2023 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > Evaluating N expressions for a json table isn't a good approach, both memory\n> > and CPU efficiency wise.\n>\n> Are you referring to JsonTableInitOpaque() initializing all these\n> sub-expressions of JsonTableParent, especially colvalexprs, using N\n> *independent* ExprStates? That could perhaps be made to work by\n> making JsonTableParent be an expression recognized by execExpr.c, so\n> that a single ExprState can store the steps for all its\n> sub-expressions, much like JsonExpr is. I'll give that a try, though\n> I wonder if the semantics of making this work in a single\n> ExecEvalExpr() call will mismatch that of the current way, because\n> different sub-expressions are currently evaluated under different APIs\n> of TableFuncRoutine.\n\nI was looking at this and realized that using N ExprStates for various\nsubsidiary expressions is not something specific to the JSON_TABLE\nimplementation. I mean we already have a bunch of ExprStates being\ncreated in ExecInitTableFuncScan():\n\n scanstate->ns_uris =\n ExecInitExprList(tf->ns_uris, (PlanState *) scanstate);\n scanstate->docexpr =\n ExecInitExpr((Expr *) tf->docexpr, (PlanState *) scanstate);\n scanstate->rowexpr =\n ExecInitExpr((Expr *) tf->rowexpr, (PlanState *) scanstate);\n scanstate->colexprs =\n ExecInitExprList(tf->colexprs, (PlanState *) scanstate);\n scanstate->coldefexprs =\n ExecInitExprList(tf->coldefexprs, (PlanState *) scanstate);\n\nOr maybe you're worried about jsonpath_exec.c using so many ExprStates\n*privately* to put into TableFuncScanState.opaque?\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 20:13:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 4:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Tue, Feb 21, 2023 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > Uh, so this continues to do recursive expression evaluation, as\n> > ExecEvalJsonExpr()->JsonPathQuery()->executeJsonPath(EvalJsonPathVar)\n> >\n> > I'm getting grumpy here. This is wrong, has been pointed out many times. The\n> > only thing that changes is that the point of recursion is moved around.\n>\n> Actually, these JSON path vars, along with other sub-expressions of\n> JsonExpr, *are* computed non-recursively as ExprEvalSteps of the\n> JsonExprState, at least in the cases where the vars are to be computed\n> as part of evaluating the JsonExpr itself. So, the code path you've\n> shown above perhaps as a hypothetical doesn't really exist, though\n> there *is* an instance where these path vars are computed *outside*\n> the context of evaluating the parent JsonExpr, such as in\n> JsonTableResetContextItem(). Maybe there's a cleaner way of doing\n> that though...\n>\n> > > + * JsonTableInitOpaque\n> > > + * Fill in TableFuncScanState->opaque for JsonTable processor\n> > > + */\n> > > +static void\n> > > +JsonTableInitOpaque(TableFuncScanState *state, int natts)\n> > > +{\n> > > + JsonTableContext *cxt;\n> > > + PlanState *ps = &state->ss.ps;\n> > > + TableFuncScan *tfs = castNode(TableFuncScan, ps->plan);\n> > > + TableFunc *tf = tfs->tablefunc;\n> > > + JsonExpr *ci = castNode(JsonExpr, tf->docexpr);\n> > > + JsonTableParent *root = castNode(JsonTableParent, tf->plan);\n> > > + List *args = NIL;\n> > > + ListCell *lc;\n> > > + int i;\n> > > +\n> > > + cxt = palloc0(sizeof(JsonTableContext));\n> > > + cxt->magic = JSON_TABLE_CONTEXT_MAGIC;\n> > > +\n> > > + if (ci->passing_values)\n> > > + {\n> > > + ListCell *exprlc;\n> > > + ListCell *namelc;\n> > > +\n> > > + forboth(exprlc, ci->passing_values,\n> > > + namelc, ci->passing_names)\n> > > + {\n> > > + Expr *expr = (Expr *) 
lfirst(exprlc);\n> > > + String *name = lfirst_node(String, namelc);\n> > > + JsonPathVariableEvalContext *var = palloc(sizeof(*var));\n> > > +\n> > > + var->name = pstrdup(name->sval);\n> > > + var->typid = exprType((Node *) expr);\n> > > + var->typmod = exprTypmod((Node *) expr);\n> > > + var->estate = ExecInitExpr(expr, ps);\n> > > + var->econtext = ps->ps_ExprContext;\n> > > + var->mcxt = CurrentMemoryContext;\n> > > + var->evaluated = false;\n> > > + var->value = (Datum) 0;\n> > > + var->isnull = true;\n> > > +\n> > > + args = lappend(args, var);\n> > > + }\n> > > + }\n> > > +\n> > > + cxt->colexprs = palloc(sizeof(*cxt->colexprs) *\n> > > + list_length(tf->colvalexprs));\n> > > +\n> > > + JsonTableInitScanState(cxt, &cxt->root, root, NULL, args,\n> > > + CurrentMemoryContext);\n> > > +\n> > > + i = 0;\n> > > +\n> > > + foreach(lc, tf->colvalexprs)\n> > > + {\n> > > + Expr *expr = lfirst(lc);\n> > > +\n> > > + cxt->colexprs[i].expr =\n> > > + ExecInitExprWithCaseValue(expr, ps,\n> > > + &cxt->colexprs[i].scan->current,\n> > > + &cxt->colexprs[i].scan->currentIsNull);\n> > > +\n> > > + i++;\n> > > + }\n> > > +\n> > > + state->opaque = cxt;\n> > > +}\n> >\n> > Why don't you just emit the proper expression directly, insted of the\n> > CaseTestExpr stuff, that you then separately evaluate?\n>\n> I suppose you mean emitting the expression that supplies the value\n> given by scan->current and scan->currentIsNull into the same ExprState\n> that holds the steps for a given colvalexpr. If so, I don't really\n> see a way of doing that given the current model of JSON_TABLE\n> execution. The former is computed as part of\n> TableFuncRoutine.FetchRow(scan), which sets scan.current (and\n> currentIsNull) and the letter is computer as part of\n> TableFuncRoutine.GetValue(scan, colnum).\n\nI looked around for another way to pass the value of evaluating one\nexpression (JsonTableParent.path) as input to the evaluation of\nanother (an expression in TableFunc.colvalexprs). 
The only thing that\ncame to mind is to use PARAM_EXEC parameters instead of CaseTestExpr\nplaceholders, though I'm not sure whether that is simpler or whether\nthat would really make things better?\n\n> > Evaluating N expressions for a json table isn't a good approach, both memory\n> > and CPU efficiency wise.\n>\n> Are you referring to JsonTableInitOpaque() initializing all these\n> sub-expressions of JsonTableParent, especially colvalexprs, using N\n> *independent* ExprStates? That could perhaps be made to work by\n> making JsonTableParent be an expression recognized by execExpr.c, so\n> that a single ExprState can store the steps for all its\n> sub-expressions, much like JsonExpr is. I'll give that a try, though\n> I wonder if the semantics of making this work in a single\n> ExecEvalExpr() call will mismatch that of the current way, because\n> different sub-expressions are currently evaluated under different APIs\n> of TableFuncRoutine.\n\nHmm, the idea to turn JSON_TABLE into a single expression turned out\nto be a non-starter after all, because, unlike JsonExpr, it can\nproduce multiple values. So there must be an ExprState for computing\neach column of its output rows. As I mentioned in my other reply,\nTableFuncScanState has a list of ExprStates anyway for\nTableFunc.colexprs. What we could do is move the ExprStates of\nTableFunc.colvalexprs into TableFuncScanState instead of making that\npart of the JSON_TABLE opaque state, as I've done in the attached\nupdated patch.\n\nI also found a way to not require ExecInitExprWithCaseValue() for the\ninitialization of those expressions by moving the responsibility of\npassing the value of CaseTestExpr placeholder contained in those\nexpressions to the time of evaluating the expressions rather than\ninitialization time.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 28 Feb 2023 20:36:33 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 8:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Mon, Feb 27, 2023 at 4:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > On Tue, Feb 21, 2023 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Evaluating N expressions for a json table isn't a good approach, both memory\n> > > and CPU efficiency wise.\n> >\n> > Are you referring to JsonTableInitOpaque() initializing all these\n> > sub-expressions of JsonTableParent, especially colvalexprs, using N\n> > *independent* ExprStates? That could perhaps be made to work by\n> > making JsonTableParent be an expression recognized by execExpr.c, so\n> > that a single ExprState can store the steps for all its\n> > sub-expressions, much like JsonExpr is. I'll give that a try, though\n> > I wonder if the semantics of making this work in a single\n> > ExecEvalExpr() call will mismatch that of the current way, because\n> > different sub-expressions are currently evaluated under different APIs\n> > of TableFuncRoutine.\n>\n> Hmm, the idea to turn JSON_TABLE into a single expression turned out\n> to be a non-starter after all, because, unlike JsonExpr, it can\n> produce multiple values. So there must be an ExprState for computing\n> each column of its output rows. As I mentioned in my other reply,\n> TableFuncScanState has a list of ExprStates anyway for\n> TableFunc.colexprs. What we could do is move the ExprStates of\n> TableFunc.colvalexprs into TableFuncScanState instead of making that\n> part of the JSON_TABLE opaque state, as I've done in the attached\n> updated patch.\n\nHere's another version in which I've also moved the ExprStates of\nPASSING args into TableFuncScanState instead of keeping them in\nJSON_TABLE opaque state. That means all the subsidiary ExprStates of\nTableFuncScanState are now initialized only once during\nExecInitTableFuncScan(). 
Previously, JSON_TABLE related ones would be\ninitialized on every call of JsonTableInitOpaque().\n\nI've also done some cosmetic changes such as renaming the\nJsonTableContext to JsonTableParseContext in parse_jsontable.c and to\nJsonTableExecContext in jsonpath_exec.c.\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Mon, 6 Mar 2023 12:18:39 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "\nOn 2023-03-05 Su 22:18, Amit Langote wrote:\n> On Tue, Feb 28, 2023 at 8:36 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>> On Mon, Feb 27, 2023 at 4:45 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> On Tue, Feb 21, 2023 at 2:25 AM Andres Freund <andres@anarazel.de> wrote:\n>>>> Evaluating N expressions for a json table isn't a good approach, both memory\n>>>> and CPU efficiency wise.\n>>> Are you referring to JsonTableInitOpaque() initializing all these\n>>> sub-expressions of JsonTableParent, especially colvalexprs, using N\n>>> *independent* ExprStates? That could perhaps be made to work by\n>>> making JsonTableParent be an expression recognized by execExpr.c, so\n>>> that a single ExprState can store the steps for all its\n>>> sub-expressions, much like JsonExpr is. I'll give that a try, though\n>>> I wonder if the semantics of making this work in a single\n>>> ExecEvalExpr() call will mismatch that of the current way, because\n>>> different sub-expressions are currently evaluated under different APIs\n>>> of TableFuncRoutine.\n>> Hmm, the idea to turn JSON_TABLE into a single expression turned out\n>> to be a non-starter after all, because, unlike JsonExpr, it can\n>> produce multiple values. So there must be an ExprState for computing\n>> each column of its output rows. As I mentioned in my other reply,\n>> TableFuncScanState has a list of ExprStates anyway for\n>> TableFunc.colexprs. What we could do is move the ExprStates of\n>> TableFunc.colvalexprs into TableFuncScanState instead of making that\n>> part of the JSON_TABLE opaque state, as I've done in the attached\n>> updated patch.\n> Here's another version in which I've also moved the ExprStates of\n> PASSING args into TableFuncScanState instead of keeping them in\n> JSON_TABLE opaque state. That means all the subsidiary ExprStates of\n> TableFuncScanState are now initialized only once during\n> ExecInitTableFuncScan(). 
Previously, JSON_TABLE related ones would be\n> initialized on every call of JsonTableInitOpaque().\n>\n> I've also done some cosmetic changes such as renaming the\n> JsonTableContext to JsonTableParseContext in parse_jsontable.c and to\n> JsonTableExecContext in jsonpath_exec.c.\n>\n>\n\nHi, I have just spent some time going through the first five patches \n(i.e. those that precede the JSONTABLE patches) and Andres's comments in\n\n<https://postgr.es/m/20230220172456.q3oshnvfk3wyhm5l@awork3.anarazel.de>\n\n\nAFAICT there are only two possible matters of concern that remain, both \nregarding the grammar.\n\n\nFirst is this general complaint:\n\n\n> This stuff adds quite a bit of complexity to the parser. Do we realy need like\n> a dozen new rules here?\n\nI mentioned that more than a year ago, I think, without anybody taking \nthe matter up, so I didn't pursue it. I guess I should have.\n\nThere are probably some fairly easy opportunities to reduce the number \nof non-terminals introduced here (e.g. I think json_aggregate_func could \npossibly be expanded in place without introducing \njson_object_aggregate_constructor and json_array_aggregate_constructor). \nI'm going to make an attempt at that, at least to pick some low hanging \nfruit. 
But in the end I think we are going to be left with a significant \nexpansion of the grammar rules, more or less forced on us by the way the \nSQL Standards Committee rather profligately invents new ways of \ncontorting the grammar.\n\nSecond is this complaint:\n\n\n> +json_behavior_empty_array:\n> +\t\t\tEMPTY_P ARRAY\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> +\t\t\t/* non-standard, for Oracle compatibility only */\n> +\t\t\t| EMPTY_P\t\t{ $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> +\t\t;\n> Do we really want to add random oracle compat crud here?\n>\n\nI think this case is pretty harmless, and it literally involves one line \nof code, so I'm inclined to leave it.\n\nThese both seem like things not worth holding up progress for, and I \nthink it would be good to get these patches committed as soon as \npossible. My intention is to commit them (after some grammar \nadjustments) plus their documentation in the next few days. That would \nleave the JSONTABLE patches still to go. They are substantial, but a far \nmore manageable chunk of work for some committer (not me) once we get \nthis foundational piece in.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:40:05 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 08.03.23 22:40, Andrew Dunstan wrote:\n> There are probably some fairly easy opportunities to reduce the number \n> of non-terminals introduced here (e.g. I think json_aggregate_func could \n> possibly be expanded in place without introducing \n> json_object_aggregate_constructor and json_array_aggregate_constructor). \n> I'm going to make an attempt at that, at least to pick some low hanging \n> fruit. But in the end I think we are going to be left with a significant \n> expansion of the grammar rules, more or less forced on us by the way the \n> SQL Standards Committee rather profligately invents new ways of \n> contorting the grammar.\n\nI browsed these patches, and I agree that the grammar is the thing that \nsticks out as something that could be tightened up a bit. Try to reduce \nthe number of different symbols, and check that the keywords are all in \nalphabetical order.\n\nThere are also various bits of code that are commented out, in some \ncases because they can't be implemented, in some cases without \nexplanation. I think these should all be removed. Otherwise, whoever \nneeds to touch this code next would be under some sort of obligation to \nkeep the commented-out code up to date with surrounding changes, which \nwould be awkward. We can find better ways to explain missing \nfunctionality and room for improvement.\n\nAlso, perhaps we can find better names for the new test files. Like, \nwhat does \"sqljson.sql\" mean, as opposed to, say, \"json.sql\"? Maybe \nsomething like \"json_functions\", \"json_expressions\", etc. would be \nclearer. (Starting it with \"json\" would also group the files better.)\n\n> These both seem like things not worth holding up progress for, and I \n> think it would be good to get these patches committed as soon as \n> possible. 
My intention is to commit them (after some grammar \n> adjustments) plus their documentation in the next few days.\n\nIf possible, the documentation for each incremental part should be part \nof that patch, not a separate all-in-one patch.\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 14:08:27 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nOn Thu, Mar 9, 2023 at 10:08 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 08.03.23 22:40, Andrew Dunstan wrote:\n> > These both seem like things not worth holding up progress for, and I\n> > think it would be good to get these patches committed as soon as\n> > possible. My intention is to commit them (after some grammar\n> > adjustments) plus their documentation in the next few days.\n>\n> If possible, the documentation for each incremental part should be part\n> of that patch, not a separate all-in-one patch.\n\nHere's a version that includes documentation of the individual bits in\ntheir own commits. I've also merged the patch to add the PLAN clause\nto JSON_TABLE into the patch that adds JSON_TABLE itself.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Wed, 15 Mar 2023 21:49:49 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-03-15 We 08:49, Amit Langote wrote:\n> Hi,\n>\n> On Thu, Mar 9, 2023 at 10:08 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>> On 08.03.23 22:40, Andrew Dunstan wrote:\n>>> These both seem like things not worth holding up progress for, and I\n>>> think it would be good to get these patches committed as soon as\n>>> possible. My intention is to commit them (after some grammar\n>>> adjustments) plus their documentation in the next few days.\n>> If possible, the documentation for each incremental part should be part\n>> of that patch, not a separate all-in-one patch.\n> Here's a version that includes documentation of the individual bits in\n> their own commits. I've also merged the patch to add the PLAN clause\n> to JSON_TABLE into the patch that adds JSON_TABLE itself.\n\n\nHi, I have taken these and done some surgery to reduce the explosion on \ngrammar symbols. The attached set is just Amit's patches with some of \nthis surgery done - nothing other than gram.y has been touched. Patches \n2 and 5 in the series could be sanely squashed onto patches 1 and 4 \nrespectively. I haven't done anything significant yet with the JSONTABLE \npatch, there is probably some more low hanging fruit there, and possibly \nsome still in the earlier patches.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Thu, 16 Mar 2023 21:14:35 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Mar-16, Andrew Dunstan wrote:\n\n> Hi, I have taken these and done some surgery to reduce the explosion on\n> grammar symbols. The attached set is just Amit's patches with some of this\n> surgery done - nothing other than gram.y has been touched. Patches 2 and 5\n> in the series could be sanely squashed onto patches 1 and 4 respectively. I\n> haven't done anything significant yet with the JSONTABLE patch, there is\n> probably some more low hanging fruit there, and possibly some still in the\n> earlier patches.\n\nHello,\n\nIt looks as if the grammar for this was originally written following the\nSQL standard's description to the letter. AFAICS reducing the number of\nnonterminals as you have done is a good thing. So I started from that\npoint (0001+0002) to see what else is missing to make that independently\ncommittable. One thing I noticed is that a number of grammar hacks are\nnot necessary until the IS JSON patch, so I've removed them from 0001\n(the constructors patch) in order to make things easier to comprehend.\nWe can put them back together with IS JSON. For the time being, 0001 is\nalready large enough.\n\nSo here's v11 of this (0001+0002 plus some changes of my own). At this\npoint, the main thing I'm unhappy about is the fact that the\ndocumentation addition puts the new contents at the end of the chapter,\nwhich makes no sense. So we now have:\n\n9.16.1. Processing and Creating JSON Data\n9.16.2. The SQL/JSON Path Language\n9.16.3. SQL/JSON Functions and Expressions\n\nwhere the standard functions are in 9.16.3 and describe functions that\nare for creating JSON data, so they should naturally be in 9.16.1. 
I'll\nsee about reformulating the whole chapter so that it makes sense.\n\nI added an ECPG test file, to make sure that the weird grammar\nproductions parse correctly.\n\nThere are other minor things too, which I'll see about.\n\nOnce I get this one done, I'll rebase and repost the rest of the series.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Puedes vivir sólo una vez, pero si lo haces bien, una vez es suficiente\"",
"msg_date": "Wed, 22 Mar 2023 13:18:46 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Docs amended as I threatened. Other than that, this has required more\nchanges than I thought it would. For one thing, I've continued to\nadjust the grammar a little bit, hopefully without breaking anything (at\nleast, the tests continue to work). Even so, I was unable to get bison\nto accept the 'KEY name VALUES blah' syntax; it might be a\nfun/challenging project to change the productions that we use for JSON\nnames and values:\n\n+json_name_and_value:\n+/* Supporting this syntax seems to require major surgery\n+ KEY c_expr VALUE_P json_value_expr\n+ { $$ = makeJsonKeyValue($2, $4); }\n+ |\n+*/\n+ c_expr VALUE_P json_value_expr\n+ { $$ = makeJsonKeyValue($1, $3); }\n+ |\n+ a_expr ':' json_value_expr\n+ { $$ = makeJsonKeyValue($1, $3); }\n+ ;\n\nIf we uncomment the KEY bit there, a bunch of conflicts emerge. Also,\nthe fact that we have a_expr on the third one but c_expr on the second\nbothers me on consistency grounds; but really we should have a separate\nproduction for things that can be JSON field names.\n\n(I also removed json_object_constructor_args_opt and json_object_args as\nseparate productions, because I didn't see that they got us anything.)\n\nI also noticed that we had a \"string of skipped keys\" thing in json.c\nthat was supposed to be used to detect keys already used but with only\nNULL values; however, that stuff had already been rewritten by Nikita on\nJuly 2020 to use a hash table, so the string itself was being built but\nuselessly so AFAICS. Removed that one.\n\nI added a bunch of comments in several places, and postponed addition of\na couple of structs that are not needed for this part of the features.\nSome of these will have to come back with the IS JSON support (0002 in\nthe original set).\n\nAnyway, barring objections or further problems, I intend to get this one\npushed early tomorrow. 
For the curious, I've pushed it here\nhttps://github.com/alvherre/postgres/tree/sqljson-constructors\nand the tests are currently running here:\nhttps://cirrus-ci.com/build/5468150918021120\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"La experiencia nos dice que el hombre peló millones de veces las patatas,\npero era forzoso admitir la posibilidad de que en un caso entre millones,\nlas patatas pelarían al hombre\" (Ijon Tichy)",
"msg_date": "Mon, 27 Mar 2023 20:54:56 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "I ran sqlsmith on this patch for a short while, and reduced one of its\nappalling queries to this:\n\npostgres=# SELECT jsonb_object_agg_unique_strict('', null::xid8);\nERROR: unexpected jsonb type as object key\n\npostgres=# \\errverbose \nERROR: XX000: unexpected jsonb type as object key\nUBICACIÓN: JsonbIteratorNext, jsonb_util.c:958\n\nAs you know, it's considered bad if elog()s are reachable, user-facing errors.\n\n2023-03-27 15:46:47.351 CDT client backend[13361] psql ERROR: unexpected jsonb type as object key\n2023-03-27 15:46:47.351 CDT client backend[13361] psql BACKTRACE: \n postgres: pryzbyj postgres [local] SELECT(JsonbIteratorNext+0x1e5) [0x5638fa11ba82]\n postgres: pryzbyj postgres [local] SELECT(+0x4ff951) [0x5638fa114951]\n postgres: pryzbyj postgres [local] SELECT(JsonbToCString+0x12) [0x5638fa116584]\n postgres: pryzbyj postgres [local] SELECT(jsonb_out+0x24) [0x5638fa1165ad]\n postgres: pryzbyj postgres [local] SELECT(FunctionCall1Coll+0x51) [0x5638fa1ef585]\n postgres: pryzbyj postgres [local] SELECT(OutputFunctionCall+0x15) [0x5638fa1f067d]\n postgres: pryzbyj postgres [local] SELECT(+0xe7ef7) [0x5638f9cfcef7]\n postgres: pryzbyj postgres [local] SELECT(+0x2b4271) [0x5638f9ec9271]\n postgres: pryzbyj postgres [local] SELECT(standard_ExecutorRun+0x146) [0x5638f9ec9402]\n\nWhat might indicate a worse problem is that with debug_discard_caches=1, it\ndoes something different:\n\npostgres=# \\errverbose \nERROR: XX000: invalid jsonb scalar type\nUBICACIÓN: convertJsonbScalar, jsonb_util.c:1865\n\n2023-03-27 15:51:21.788 CDT client backend[15939] psql ERROR: invalid jsonb scalar type\n2023-03-27 15:51:21.788 CDT client backend[15939] psql CONTEXT: parallel worker\n2023-03-27 15:51:21.788 CDT client backend[15939] psql BACKTRACE: \n postgres: pryzbyj postgres [local] SELECT(ThrowErrorData+0x2a6) [0x5638fa1ec8f3]\n postgres: pryzbyj postgres [local] SELECT(+0x194820) [0x5638f9da9820]\n postgres: pryzbyj postgres [local] 
SELECT(HandleParallelMessages+0x15d) [0x5638f9daac95]\n postgres: pryzbyj postgres [local] SELECT(ProcessInterrupts+0x906) [0x5638fa094873]\n postgres: pryzbyj postgres [local] SELECT(+0x2d202b) [0x5638f9ee702b]\n postgres: pryzbyj postgres [local] SELECT(+0x2d2206) [0x5638f9ee7206]\n postgres: pryzbyj postgres [local] SELECT(+0x2d245a) [0x5638f9ee745a]\n postgres: pryzbyj postgres [local] SELECT(+0x2bbcec) [0x5638f9ed0cec]\n postgres: pryzbyj postgres [local] SELECT(+0x2b4240) [0x5638f9ec9240]\n postgres: pryzbyj postgres [local] SELECT(standard_ExecutorRun+0x146) [0x5638f9ec9402]\n\n+valgrind indicates this:\n\n==14095== Use of uninitialised value of size 8\n==14095== at 0x60D1C9: convertJsonbScalar (jsonb_util.c:1822)\n==14095== by 0x60D44F: convertJsonbObject (jsonb_util.c:1741)\n==14095== by 0x60D630: convertJsonbValue (jsonb_util.c:1611)\n==14095== by 0x60D903: convertToJsonb (jsonb_util.c:1565)\n==14095== by 0x60F272: JsonbValueToJsonb (jsonb_util.c:117)\n==14095== by 0x60A504: jsonb_object_agg_finalfn (jsonb.c:2057)\n==14095== by 0x3D0806: finalize_aggregate (nodeAgg.c:1119)\n==14095== by 0x3D2210: finalize_aggregates (nodeAgg.c:1353)\n==14095== by 0x3D2E7F: agg_retrieve_direct (nodeAgg.c:2512)\n==14095== by 0x3D32DC: ExecAgg (nodeAgg.c:2172)\n==14095== by 0x3C3CEB: ExecProcNodeFirst (execProcnode.c:464)\n==14095== by 0x3BC23F: ExecProcNode (executor.h:272)\n==14095== by 0x3BC23F: ExecutePlan (execMain.c:1633)\n\nAnd then it shows a different error:\n2023-03-27 16:00:10.072 CDT standalone backend[14095] ERROR: unknown type of jsonb container to convert\n\nIn the docs:\n\n+ The <parameter>key</parameter> can not be null. If the\n+ <parameter>value</parameter> is null then the entry is skipped,\n\ns/can not/cannot/\nThe \",\" is dangling.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 27 Mar 2023 16:18:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Tue, Mar 28, 2023 at 6:18 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I ran sqlsmith on this patch for a short while, and reduced one of its\n> appalling queries to this:\n>\n> postgres=# SELECT jsonb_object_agg_unique_strict('', null::xid8);\n> ERROR: unexpected jsonb type as object key\n\nI think this may have to do with the following changes to\nuniqueifyJsonbObject() that the patch makes:\n\n@@ -1936,7 +1942,7 @@ lengthCompareJsonbPair(const void *a, const void\n*b, void *binequal)\n * Sort and unique-ify pairs in JsonbValue object\n */\n static void\n-uniqueifyJsonbObject(JsonbValue *object)\n+uniqueifyJsonbObject(JsonbValue *object, bool unique_keys, bool skip_nulls)\n {\n bool hasNonUniq = false;\n\n@@ -1946,15 +1952,32 @@ uniqueifyJsonbObject(JsonbValue *object)\n qsort_arg(object->val.object.pairs, object->val.object.nPairs,\nsizeof(JsonbPair),\n lengthCompareJsonbPair, &hasNonUniq);\n\n- if (hasNonUniq)\n+ if (hasNonUniq && unique_keys)\n+ ereport(ERROR,\n+ errcode(ERRCODE_DUPLICATE_JSON_OBJECT_KEY_VALUE),\n+ errmsg(\"duplicate JSON object key value\"));\n+\n+ if (hasNonUniq || skip_nulls)\n {\n- JsonbPair *ptr = object->val.object.pairs + 1,\n- *res = object->val.object.pairs;\n+ JsonbPair *ptr,\n+ *res;\n+\n+ while (skip_nulls && object->val.object.nPairs > 0 &&\n+ object->val.object.pairs->value.type == jbvNull)\n+ {\n+ /* If skip_nulls is true, remove leading items with null */\n+ object->val.object.pairs++;\n+ object->val.object.nPairs--;\n+ }\n+\n+ ptr = object->val.object.pairs + 1;\n+ res = object->val.object.pairs;\n\nThe code below the while loop does not take into account the\npossibility that object->val.object.pairs would be pointing to garbage\nwhen object->val.object.nPairs is 0.\n\nAttached delta patch that applies on top of Alvaro's v12-0001 fixes\nthe case for me:\n\npostgres=# SELECT jsonb_object_agg_unique_strict('', null::xid8);\n jsonb_object_agg_unique_strict\n--------------------------------\n {}\n(1 
row)\n\npostgres=# SELECT jsonb_object_agg_unique_strict('1', null::xid8);\n jsonb_object_agg_unique_strict\n--------------------------------\n {}\n(1 row)\n\nSELECT jsonb_object_agg_unique_strict('1', '1'::xid8);\n jsonb_object_agg_unique_strict\n--------------------------------\n {\"1\": \"1\"}\n(1 row)\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 28 Mar 2023 12:29:23 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 27.03.23 20:54, Alvaro Herrera wrote:\n> Even so, I was unable to get bison\n> to accept the 'KEY name VALUES blah' syntax; it might be a\n> fun/challenging project to change the productions that we use for JSON\n> names and values:\n> \n> +json_name_and_value:\n> +/* Supporting this syntax seems to require major surgery\n> + KEY c_expr VALUE_P json_value_expr\n> + { $$ = makeJsonKeyValue($2, $4); }\n> + |\n> +*/\n> + c_expr VALUE_P json_value_expr\n> + { $$ = makeJsonKeyValue($1, $3); }\n> + |\n> + a_expr ':' json_value_expr\n> + { $$ = makeJsonKeyValue($1, $3); }\n> + ;\n> \n> If we uncomment the KEY bit there, a bunch of conflicts emerge.\n\nThis is a known bug in the SQL standard. Because KEY is a non-reserved \nkeyword, writing\n\n KEY (x) VALUE y\n\nis ambiguous because KEY could be the keyword for this clause or a \nfunction call key(x).\n\nIt's ok to leave it like this for now.\n\n\n\n",
"msg_date": "Tue, 28 Mar 2023 08:07:23 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Op 3/27/23 om 20:54 schreef Alvaro Herrera:\n> Docs amended as I threatened. Other than that, this has required more\n\n > [v12-0001-SQL-JSON-constructors.patch]\n > [v12-0001-delta-uniqueifyJsonbObject-bugfix.patch]\n\nIn doc/src/sgml/func.sgml, some minor stuff:\n\n'which specify the data type returned' should be\n'which specifies the data type returned'\n\nIn the json_arrayagg() description, it says:\n'If ABSENT ON NULL is specified, any NULL values are omitted.'\nThat's true, but as omitting NULL values is the default (i.e., also \nwithout that clause) maybe it's better to say:\n'Any NULL values are omitted unless NULL ON NULL is specified'\n\n\nI've found no bugs in functionality.\n\nThanks,\n\nErik Rijkers\n\n\n",
"msg_date": "Tue, 28 Mar 2023 15:40:24 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Mar-28, Erik Rijkers wrote:\n\n> In the json_arrayagg() description, it says:\n> 'If ABSENT ON NULL is specified, any NULL values are omitted.'\n> That's true, but as omitting NULL values is the default (i.e., also without\n> that clause) maybe it's better to say:\n> 'Any NULL values are omitted unless NULL ON NULL is specified'\n\nDoh, somehow I misread your report and modified the json_object()\ndocumentation instead after experimenting with it (so now the\nABSENT/NULL ON NULL clause is inconsistently described everywhere).\nWould you mind submitting a patch fixing this mistake?\n\n... and pushed it now, after some more meddling.\n\nI'll rebase the rest of the series now.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Entristecido, Wutra (canción de Las Barreras)\necha a Freyr a rodar\ny a nosotros al mar\"\n\n\n",
"msg_date": "Wed, 29 Mar 2023 12:27:58 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Op 3/29/23 om 12:27 schreef Alvaro Herrera:\n> On 2023-Mar-28, Erik Rijkers wrote:\n> \n>> In the json_arrayagg() description, it says:\n>> 'If ABSENT ON NULL is specified, any NULL values are omitted.'\n>> That's true, but as omitting NULL values is the default (i.e., also without\n>> that clause) maybe it's better to say:\n>> 'Any NULL values are omitted unless NULL ON NULL is specified'\n> \n> Doh, somehow I misread your report and modified the json_object()\n> documentation instead after experimenting with it (so now the\n> ABSENT/NULL ON NULL clause is inconsistenly described everywhere).\n> Would you mind submitting a patch fixing this mistake?\n\nI think the json_object text was OK. Attached are some changes where \nthey were needed IMHO.\n\nErik\n\n> \n> ... and pushed it now, after some more meddling.\n> \n> I'll rebase the rest of the series now.\n>",
"msg_date": "Wed, 29 Mar 2023 14:49:40 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\n29.03.2023 13:27, Alvaro Herrera wrote:\n> ... and pushed it now, after some more meddling.\n>\n> I'll rebase the rest of the series now.\n\nPlease look at the several minor issues/inconsistencies,\nI've spotted in the commit:\n\n1) s/JSON_ARRRAYAGG/JSON_ARRAYAGG/\n\n2)\ncheck_key_uniqueness vs check_unique\nIIUC, these are different names of the same entity.\n\n3)\nelog(ERROR, \"invalid JsonConstructorExprType %d\", ctor->type);\nvs\nelog(ERROR, \"invalid JsonConstructorExpr type %d\", ctor->type);\nI'd choose the latter spelling as the JsonConstructorExprType entity does not exist.\n\n4)\nIn the block:\n else\n {\n res = (Datum) 0;\n elog(ERROR, \"invalid JsonConstructorExpr type %d\", ctor->type);\n }\nres is assigned but never used.\n\n5)\n(expr [FORMAT json_format]) ->? (expr [FORMAT JsonFormat])\n(json_format not found anywhere else)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Wed, 29 Mar 2023 21:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Mar-29, Alexander Lakhin wrote:\n\n> Hi,\n> \n> 29.03.2023 13:27, Alvaro Herrera wrote:\n> > ... and pushed it now, after some more meddling.\n> > \n> > I'll rebase the rest of the series now.\n> \n> Please look at the several minor issues/inconsistencies,\n> I've spotted in the commit:\n\nThanks, I'll look at this tomorrow.\n\nIn the meantime, here's the next two patches of the series: IS JSON and\nthe \"query\" functions. I think this is as much as I can get done for\nthis release, so the last two pieces of functionality would have to wait\nfor 17. I still need to clean these up some more. These are not\nthoroughly tested either; 0001 compiles and passes regression tests, but\nI didn't verify 0003 other than there being no Git conflicts and bison\ndoesn't complain.\n\nAlso, notable here is that I realized that I need to backtrack on my\nchange of the WITHOUT_LA: the original patch had it for TIME (in WITHOUT\nTIME ZONE), and I changed to be for UNIQUE. But now that I've done\n\"JSON query functions\" I realize that it needed to be the other way for\nthe WITHOUT ARRAY WRAPPER clause too. So 0002 reverts that choice.\n\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Crear es tan difícil como ser libre\" (Elsa Triolet)",
"msg_date": "Wed, 29 Mar 2023 20:17:08 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Here's v14 of the IS JSON predicate. I find no further problems here\nand intend to get it pushed later today.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Siempre hay que alimentar a los dioses, aunque la tierra esté seca\" (Orual)",
"msg_date": "Fri, 31 Mar 2023 18:49:40 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Mar-29, Alvaro Herrera wrote:\n\n> In the meantime, here's the next two patches of the series: IS JSON and\n> the \"query\" functions. I think this is as much as I can get done for\n> this release, so the last two pieces of functionality would have to wait\n> for 17. I still need to clean these up some more. These are not\n> thoroughly tested either; 0001 compiles and passes regression tests, but\n> I didn't verify 0003 other than there being no Git conflicts and bison\n> doesn't complain.\n> \n> Also, notable here is that I realized that I need to backtrack on my\n> change of the WITHOUT_LA: the original patch had it for TIME (in WITHOUT\n> TIME ZONE), and I changed to be for UNIQUE. But now that I've done\n> \"JSON query functions\" I realize that it needed to be the other way for\n> the WITHOUT ARRAY WRAPPER clause too. So 0002 reverts that choice.\n\nSo I pushed 0001 on Friday, and here are 0002 (which I intend to push\nshortly, since it shouldn't be controversial) and the \"JSON query\nfunctions\" patch as 0003. After looking at it some more, I think there\nare some things that need to be addressed by one of the authors:\n\n- the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n I think we could make that stuff use something similar to\n ConstraintAttributeSpec with an accompanying post-processing function.\n That would reduce the number of ad-hoc hacks, which seem excessive.\n\n- the changes in formatting.h have no explanation whatsoever. At the\n very least, the new function should have a comment in the .c file.\n (And why is it at end of file? I bet there's a better location)\n\n- some nasty hacks are being used in the ECPG grammar with no tests at\n all. It's easy to add a few lines to the .pgc file I added in prior\n commits.\n\n- Some functions in jsonfuncs.c have changed from throwing hard errors\n into soft ones. 
I think this deserves more commentary.\n\n- func.sgml: The new functions are documented in a separate table for no\n reason that I can see. Needs to be merged into one of the existing\n tables. I didn't actually review the docs.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"I'm impressed how quickly you are fixing this obscure issue. I came from \nMS SQL and it would be hard for me to put into words how much of a better job\nyou all are doing on [PostgreSQL].\"\n Steve Midgley, http://archives.postgresql.org/pgsql-sql/2008-08/msg00000.php",
"msg_date": "Mon, 3 Apr 2023 19:16:47 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi Alvaro,\n\n03.04.2023 20:16, Alvaro Herrera wrote:\n\n> So I pushed 0001 on Friday, and here are 0002 (which I intend to push\n> shortly, since it shouldn't be controversial) and the \"JSON query\n> functions\" patch as 0003. After looking at it some more, I think there\n> are some things that need to be addressed by one of the authors:\n>\n> - the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n> I think we could make that stuff use something similar to\n> ConstraintAttributeSpec with an accompanying post-processing function.\n> That would reduce the number of ad-hoc hacks, which seem excessive.\n>\n> - the changes in formatting.h have no explanation whatsoever. At the\n> very least, the new function should have a comment in the .c file.\n> (And why is it at end of file? I bet there's a better location)\n>\n> - some nasty hacks are being used in the ECPG grammar with no tests at\n> all. It's easy to add a few lines to the .pgc file I added in prior\n> commits.\n>\n> - Some functions in jsonfuncs.c have changed from throwing hard errors\n> into soft ones. I think this deserves more commentary.\n>\n> - func.sgml: The new functions are documented in a separate table for no\n> reason that I can see. Needs to be merged into one of the existing\n> tables. 
I didn't actually review the docs.\n\nPlease take a look at the following minor issues in\nv15-0002-SQL-JSON-query-functions.patch:\n1)\ns/addreess/address/\n\n2)\nECPGColLabelCommon gone with 83f1c7b74, but is still mentioned in ecpg.trailer.\n\n3)\ns/ExecEvalJsonCoercion/ExecEvalJsonExprCoercion/ ?\n(there is no ExecEvalJsonCoercion() function)\n\n4)\njson_table mentioned in func.sgml:\n <xref linkend=\"functions-sqljson-querying\"/> details the SQL/JSON\n functions that can be used to query JSON data, except\n for <function>json_table</function>.\n\nbut if JSON_TABLE not going to be committed in v16, maybe remove that reference\nto it.\n\nThere is also a reference to JSON_TABLE in src/backend/parser/README:\nparse_jsontable.c handle JSON_TABLE\n(It was added with 9853bf6ab and survived the revert of SQL JSON last\nyear somehow.)\n\nBest regards,\nAlexander\n\n\n",
"msg_date": "Mon, 3 Apr 2023 23:00:00 +0300",
"msg_from": "Alexander Lakhin <exclusion@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Tue, Apr 4, 2023 at 2:16 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2023-Mar-29, Alvaro Herrera wrote:\n> > In the meantime, here's the next two patches of the series: IS JSON and\n> > the \"query\" functions. I think this is as much as I can get done for\n> > this release, so the last two pieces of functionality would have to wait\n> > for 17. I still need to clean these up some more. These are not\n> > thoroughly tested either; 0001 compiles and passes regression tests, but\n> > I didn't verify 0003 other than there being no Git conflicts and bison\n> > doesn't complain.\n> >\n> > Also, notable here is that I realized that I need to backtrack on my\n> > change of the WITHOUT_LA: the original patch had it for TIME (in WITHOUT\n> > TIME ZONE), and I changed to be for UNIQUE. But now that I've done\n> > \"JSON query functions\" I realize that it needed to be the other way for\n> > the WITHOUT ARRAY WRAPPER clause too. So 0002 reverts that choice.\n>\n> So I pushed 0001 on Friday, and here are 0002 (which I intend to push\n> shortly, since it shouldn't be controversial) and the \"JSON query\n> functions\" patch as 0003. After looking at it some more, I think there\n> are some things that need to be addressed by one of the authors:\n>\n> - the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n> I think we could make that stuff use something similar to\n> ConstraintAttributeSpec with an accompanying post-processing function.\n> That would reduce the number of ad-hoc hacks, which seem excessive.\n\nDo you mean the solution involving the JsonBehavior node?\n\n> - the changes in formatting.h have no explanation whatsoever. At the\n> very least, the new function should have a comment in the .c file.\n> (And why is it at end of file? I bet there's a better location)\n>\n> - some nasty hacks are being used in the ECPG grammar with no tests at\n> all. 
It's easy to add a few lines to the .pgc file I added in prior\n> commits.\n>\n> - Some functions in jsonfuncs.c have changed from throwing hard errors\n> into soft ones. I think this deserves more commentary.\n>\n> - func.sgml: The new functions are documented in a separate table for no\n> reason that I can see. Needs to be merged into one of the existing\n> tables. I didn't actually review the docs.\n\nI made the jsonfuncs.c changes to use soft error handling when needed,\nso I took a stab at that; attached a delta patch, which also fixes the\nproblematic comments mentioned by Alexander in his comments 1 and 3.\n\nI'll need to spend some time to address other points.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com",
"msg_date": "Tue, 4 Apr 2023 20:09:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Apr-04, Amit Langote wrote:\n\n> On Tue, Apr 4, 2023 at 2:16 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > - the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n> > I think we could make that stuff use something similar to\n> > ConstraintAttributeSpec with an accompanying post-processing function.\n> > That would reduce the number of ad-hoc hacks, which seem excessive.\n> \n> Do you mean the solution involving the JsonBehavior node?\n\nRight. It has spilled as the separate on_behavior struct in the core\nparser %union in addition to the raw jsbehavior, which is something\nwe've gone 30 years without having, and I don't see why we should start\nnow.\n\nThis stuff is terrible:\n\njson_exists_error_clause_opt:\n json_exists_error_behavior ON ERROR_P { $$ = $1; } \n | /* EMPTY */ { $$ = NULL; }\n ;\n\njson_exists_error_behavior:\n ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n | TRUE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE, NULL); }\n | FALSE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE, NULL); }\n | UNKNOWN { $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN, NULL); }\n ;\n\njson_value_behavior:\n NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n | ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n ;\n\njson_value_on_behavior_clause_opt:\n json_value_behavior ON EMPTY_P\n { $$.on_empty = $1; $$.on_error = NULL; }\n | json_value_behavior ON EMPTY_P json_value_behavior ON ERROR_P\n { $$.on_empty = $1; $$.on_error = $4; }\n | json_value_behavior ON ERROR_P\n { $$.on_empty = NULL; $$.on_error = $1; }\n | /* EMPTY */\n { $$.on_empty = NULL; $$.on_error = NULL; }\n ;\n\njson_query_behavior:\n ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR, NULL); }\n | NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL); }\n | EMPTY_P ARRAY { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n /* 
non-standard, for Oracle compatibility only */\n | EMPTY_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n | EMPTY_P OBJECT_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL); }\n | DEFAULT a_expr { $$ = makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n ;\n\njson_query_on_behavior_clause_opt:\n json_query_behavior ON EMPTY_P\n { $$.on_empty = $1; $$.on_error = NULL; }\n | json_query_behavior ON EMPTY_P json_query_behavior ON ERROR_P\n { $$.on_empty = $1; $$.on_error = $4; }\n | json_query_behavior ON ERROR_P\n { $$.on_empty = NULL; $$.on_error = $1; }\n | /* EMPTY */\n { $$.on_empty = NULL; $$.on_error = NULL; }\n ;\n\nSurely this can be made cleaner.\n\nBy the way -- that comment about clauses being non-standard, can you\nspot exactly *which* clauses that comment applies to?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El número de instalaciones de UNIX se ha elevado a 10,\ny se espera que este número aumente\" (UPM, 1972)\n\n\n",
"msg_date": "Tue, 4 Apr 2023 14:36:25 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi hackers!\n\nThe latest SQL standard contains dot notation for JSON. Are there any plans\nto include it into Pg 16?\nOr maybe we should start a separate thread for it?\n\nThanks!\n\n\nOn Tue, Apr 4, 2023 at 3:36 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2023-Apr-04, Amit Langote wrote:\n>\n> > On Tue, Apr 4, 2023 at 2:16 AM Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>\n> > > - the gram.y solution to the \"ON ERROR/ON EMPTY\" clauses is quite ugly.\n> > > I think we could make that stuff use something similar to\n> > > ConstraintAttributeSpec with an accompanying post-processing\n> function.\n> > > That would reduce the number of ad-hoc hacks, which seem excessive.\n> >\n> > Do you mean the solution involving the JsonBehavior node?\n>\n> Right. It has spilled as the separate on_behavior struct in the core\n> parser %union in addition to the raw jsbehavior, which is something\n> we've gone 30 years without having, and I don't see why we should start\n> now.\n>\n> This stuff is terrible:\n>\n> json_exists_error_clause_opt:\n> json_exists_error_behavior ON ERROR_P { $$ = $1; }\n> | /* EMPTY */ { $$ = NULL; }\n> ;\n>\n> json_exists_error_behavior:\n> ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR,\n> NULL); }\n> | TRUE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_TRUE,\n> NULL); }\n> | FALSE_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_FALSE,\n> NULL); }\n> | UNKNOWN { $$ = makeJsonBehavior(JSON_BEHAVIOR_UNKNOWN,\n> NULL); }\n> ;\n>\n> json_value_behavior:\n> NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL, NULL);\n> }\n> | ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR,\n> NULL); }\n> | DEFAULT a_expr { $$ =\n> makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n> ;\n>\n> json_value_on_behavior_clause_opt:\n> json_value_behavior ON EMPTY_P\n> { $$.on_empty = $1; $$.on_error =\n> NULL; }\n> | json_value_behavior ON EMPTY_P json_value_behavior ON ERROR_P\n> { $$.on_empty = $1; $$.on_error = $4; }\n> | json_value_behavior ON 
ERROR_P\n> { $$.on_empty = NULL; $$.on_error =\n> $1; }\n> | /* EMPTY */\n> { $$.on_empty = NULL; $$.on_error =\n> NULL; }\n> ;\n>\n> json_query_behavior:\n> ERROR_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_ERROR,\n> NULL); }\n> | NULL_P { $$ = makeJsonBehavior(JSON_BEHAVIOR_NULL,\n> NULL); }\n> | EMPTY_P ARRAY { $$ =\n> makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> /* non-standard, for Oracle compatibility only */\n> | EMPTY_P { $$ =\n> makeJsonBehavior(JSON_BEHAVIOR_EMPTY_ARRAY, NULL); }\n> | EMPTY_P OBJECT_P { $$ =\n> makeJsonBehavior(JSON_BEHAVIOR_EMPTY_OBJECT, NULL); }\n> | DEFAULT a_expr { $$ =\n> makeJsonBehavior(JSON_BEHAVIOR_DEFAULT, $2); }\n> ;\n>\n> json_query_on_behavior_clause_opt:\n> json_query_behavior ON EMPTY_P\n> { $$.on_empty = $1; $$.on_error =\n> NULL; }\n> | json_query_behavior ON EMPTY_P json_query_behavior ON ERROR_P\n> { $$.on_empty = $1; $$.on_error = $4; }\n> | json_query_behavior ON ERROR_P\n> { $$.on_empty = NULL; $$.on_error =\n> $1; }\n> | /* EMPTY */\n> { $$.on_empty = NULL; $$.on_error =\n> NULL; }\n> ;\n>\n> Surely this can be made cleaner.\n>\n> By the way -- that comment about clauses being non-standard, can you\n> spot exactly *which* clauses that comment applies to?\n>\n> --\n> Álvaro Herrera PostgreSQL Developer —\n> https://www.EnterpriseDB.com/\n> \"El número de instalaciones de UNIX se ha elevado a 10,\n> y se espera que este número aumente\" (UPM, 1972)\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nThe Russian Postgres Company\nhttps://postgrespro.ru/",
"msg_date": "Tue, 4 Apr 2023 22:40:16 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 4/4/23 3:40 PM, Nikita Malakhov wrote:\r\n> Hi hackers!\r\n> \r\n> The latest SQL standard contains dot notation for JSON. Are there any \r\n> plans to include it into Pg 16?\r\n> Or maybe we should start a separate thread for it?\r\n\r\nI would recommend starting a new thread to discuss the dot notation.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Tue, 4 Apr 2023 15:44:05 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-04-04 Tu 08:36, Alvaro Herrera wrote:\n>\n> Surely this can be made cleaner.\n>\n> By the way -- that comment about clauses being non-standard, can you\n> spot exactly *which* clauses that comment applies to?\n>\n\nSadly, I don't think we're going to be able to make further progress \nbefore feature freeze. Thanks to Alvaro for advancing us a way down the \nfield. I hope we can get the remainder committed in the July CF.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com",
"msg_date": "Tue, 4 Apr 2023 16:05:47 -0400",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-Apr-04, Andrew Dunstan wrote:\n\n> On 2023-04-04 Tu 08:36, Alvaro Herrera wrote:\n> > \n> > Surely this can be made cleaner.\n> > \n> > By the way -- that comment about clauses being non-standard, can you\n> > spot exactly *which* clauses that comment applies to?\n> \n> Sadly, I don't think we're going to be able to make further progress before\n> feature freeze. Thanks to Alvaro for advancing us a way down the field. I\n> hope we can get the remainder committed in the July CF.\n\nOkay, I've marked the CF entry as committed then.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Wed, 5 Apr 2023 09:23:29 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nIS JSON is documented as:\n\nexpression IS [ NOT ] JSON\n [ { VALUE | SCALAR | ARRAY | OBJECT } ]\n [ { WITH | WITHOUT } UNIQUE [ KEYS ] ]\n\nwhich is fine but 'VALUE' is nowhere mentioned\n(except in the commit-message as: IS JSON [VALUE] )\n\nUnless I'm mistaken 'VALUE' does indeed not change an IS JSON statement, \nso to document we could simply insert this line (as in the attached):\n\n\"The VALUE key word is optional noise.\"\n\nSomewhere in its text in func.sgml, which is now:\n\n\"This predicate tests whether expression can be parsed as JSON, possibly \nof a specified type. If SCALAR or ARRAY or OBJECT is specified, the \ntest is whether or not the JSON is of that particular type. If WITH \nUNIQUE KEYS is specified, then any object in the expression is also \ntested to see if it has duplicate keys.\"\n\n\nErik Rijkers",
"msg_date": "Wed, 12 Apr 2023 06:43:23 +0200",
"msg_from": "Erik Rijkers <er@xs4all.nl>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited (documentation)"
},
{
"msg_contents": "On Wed, 5 Apr 2023 at 09:53, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n>\n> Okay, I've marked the CF entry as committed then.\n>\n\nThis was marked as committed in the 2023-03 commitfest, however there are\nstill patches missing (for example the JSON_TABLE one).\nHowever, I can not see an entry in the current 2023-07 Commitfest.\nI think it would be a good idea for a new entry in the current commitfest,\njust to not forget about the not-yet-committed features.\n\nThanks!\nMatthias",
"msg_date": "Wed, 3 May 2023 18:51:56 +0200",
"msg_from": "Matthias Kurz <m.kurz@irregular.at>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On 2023-May-03, Matthias Kurz wrote:\n\n> On Wed, 5 Apr 2023 at 09:53, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> > Okay, I've marked the CF entry as committed then.\n> \n> This was marked as commited in the 2023-03 commitfest, however there are\n> still patches missing (for example the JSON_TABLE one).\n> However, I can not see an entry in the current 2023-07 Commitfest.\n> I think it would be a good idea for a new entry in the current commitfest,\n> just to not forget about the not-yet-commited features.\n\nYeah ... These remaining patches have to be rebased, and a few things\nfixed (I left a few review comments somewhere). I would suggest to\nstart a new thread with updated patches, and then a new commitfest entry\ncan be created with those.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Las mujeres son como hondas: mientras más resistencia tienen,\n más lejos puedes llegar con ellas\" (Jonas Nightingale, Leap of Faith)\n\n\n",
"msg_date": "Wed, 3 May 2023 20:17:32 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "On Wed, 3 May 2023 at 20:17, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n>\n> I would suggest to start a new thread with updated patches, and then a new\n> commitfest entry can be created with those.\n>\n\nWhoever starts that new thread, please link it here, I am keen to\nfollow it ;) Thanks a lot!\nThanks a lot for all your hard work btw, it's highly appreciated!\n\nBest,\nMatthias",
"msg_date": "Wed, 3 May 2023 20:58:02 +0200",
"msg_from": "Matthias Kurz <m.kurz@irregular.at>",
"msg_from_op": false,
"msg_subject": "Re: SQL/JSON revisited"
},
{
"msg_contents": "Hi,\n\nOn Thu, May 4, 2023 at 3:58 AM Matthias Kurz <m.kurz@irregular.at> wrote:\n> On Wed, 3 May 2023 at 20:17, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> I would suggest to start a new thread with updated patches, and then a new commitfest entry can be created with those.\n>\n> Whoever starts that new thread, please link link it here, I am keen to follow it ;) Thanks a lot!\n> Thanks a lot for all your hard work btw, it's highly appreciated!\n\nJust created a new thread:\n\nhttps://www.postgresql.org/message-id/CA%2BHiwqE4XTdfb1nW%3DOjoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg%40mail.gmail.com\n\nCF entry: https://commitfest.postgresql.org/43/4377/\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 19 Jun 2023 17:35:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: SQL/JSON revisited"
}
] |
[
{
"msg_contents": "@Michael\n\n> Anyway, it is a bit confusing to see a patch touching parts of the\n> ident code related to the system-username while it claims to provide a\n> mean to shortcut a check on the database-username. \n\nThat's totally fair, I attached a new iteration of this patchset where this\nrefactoring and the new functionality are split up in two patches.\n\n> That seems pretty dangerous to me. For one, how does this work in\n> cases where we expect the ident entry to be case-sensitive, aka\n> authentication methods where check_ident_usermap() and check_usermap()\n> use case_insensitive = false?\n\nI'm not sure if I'm understanding your concern correctly, but\nthe system username will still be compared case sensitively if requested.\nThe only thing this changes is that: before comparing the pg_role\ncolumn to the requested pg_role case sensitively, it now checks if \nthe value of the pg_role column is lowercase \"all\". If that's the case, \nthen the pg_role column is not compared to the requested\npg_role anymore, and instead access is granted.\n\n\n@Isaac\n\n> is there a reason why pg_ident.conf can't/shouldn't be replaced by a system table?\n\nI'm not sure of the exact reason, maybe someone else can clarify this.\nBut even if it could be replaced by a system table, I think that should\nbe a separate patch from this patch.",
"msg_date": "Wed, 28 Dec 2022 09:11:05 +0000",
"msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Support using \"all\" for the db user in pg_ident.conf"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 09:11:05AM +0000, Jelte Fennema wrote:\n> That's totally fair, I attached a new iteration of this patchset where this\n> refactoring and the new functionality are split up in two patches.\n\nThe confusion that 0001 is addressing is fair (cough, fc579e1, cough),\nstill I am wondering whether we could do a bit better to be more\nconsistent with the lines of the ident file, as of:\n- renaming \"pg_role\" to \"pg_user\", as we use pg-username or\ndatabase-username when referring to it in the docs or the conf sample\nfile.\n- renaming \"systemuser\" to \"system_user_token\" to outline that this is\nnot a simple string but an AuthToken with potentially a regexp?\n- Changing the order of the elements in IdentLine to map with the\nactual ident lines: usermap, systemuser and pg_user.\n\n> I'm not sure if I'm understanding your concern correctly, but\n> the system username will still be compared case sensitively if requested.\n> The only thing this changes is that: before comparing the pg_role\n> column to the requested pg_role case sensitively, it now checks if \n> the value of the pg_role column is lowercase \"all\". If that's the case, \n> then the pg_role column is not compared to the requested\n> pg|role anymore, and instead access is granted.\n\nYeah, still my spider sense reacts on this proposal, and I think that\nI know why.. In what is your proposal different from the following\nentry in pg_ident.conf? As of:\nmapname /^(.*)$ \\1\n\nThis would allow everything, and use for the PG user the same user as\nthe one provided by the client. That's what your proposal with \"all\"\ndoes, no? The heart of the problem is that you still require PG roles\nthat have a 1:1 mapping the user names provided by the clients.\n--\nMichael",
"msg_date": "Wed, 11 Jan 2023 14:27:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Support using \"all\" for the db user in pg_ident.conf"
}
] |
[
{
"msg_contents": "Most callers of BufFileRead() want to check whether they read the full \nspecified length. Checking this at every call site is very tedious. \nThis patch provides additional variants BufFileReadExact() and \nBufFileReadMaybeEOF() that include the length checks.\n\nI considered changing BufFileRead() itself, but this function is also \nused in extensions, and so changing the behavior like this would create \na lot of problems there. The new names are analogous to the existing \nLogicalTapeReadExact().",
"msg_date": "Wed, 28 Dec 2022 11:47:02 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:17 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Most callers of BufFileRead() want to check whether they read the full\n> specified length. Checking this at every call site is very tedious.\n> This patch provides additional variants BufFileReadExact() and\n> BufFileReadMaybeEOF() that include the length checks.\n>\n> I considered changing BufFileRead() itself, but this function is also\n> used in extensions, and so changing the behavior like this would create\n> a lot of problems there. The new names are analogous to the existing\n> LogicalTapeReadExact().\n>\n\n+1 for the new APIs. I have noticed that some of the existing places\nuse %m and the file path in messages which are not used in the new\ncommon function.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Mon, 2 Jan 2023 17:43:29 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On Wed, 28 Dec 2022 at 16:17, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> Most callers of BufFileRead() want to check whether they read the full\n> specified length. Checking this at every call site is very tedious.\n> This patch provides additional variants BufFileReadExact() and\n> BufFileReadMaybeEOF() that include the length checks.\n>\n> I considered changing BufFileRead() itself, but this function is also\n> used in extensions, and so changing the behavior like this would create\n> a lot of problems there. The new names are analogous to the existing\n> LogicalTapeReadExact().\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\ne351f85418313e97c203c73181757a007dfda6d0 ===\n=== applying patch\n./0001-Add-BufFileRead-variants-with-short-read-and-EOF-det.patch\npatching file src/backend/access/gist/gistbuildbuffers.c\n...\nHunk #1 FAILED at 38.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/include/storage/buffile.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4089.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:03:30 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On 02.01.23 13:13, Amit Kapila wrote:\n> On Wed, Dec 28, 2022 at 4:17 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> Most callers of BufFileRead() want to check whether they read the full\n>> specified length. Checking this at every call site is very tedious.\n>> This patch provides additional variants BufFileReadExact() and\n>> BufFileReadMaybeEOF() that include the length checks.\n>>\n>> I considered changing BufFileRead() itself, but this function is also\n>> used in extensions, and so changing the behavior like this would create\n>> a lot of problems there. The new names are analogous to the existing\n>> LogicalTapeReadExact().\n>>\n> \n> +1 for the new APIs. I have noticed that some of the existing places\n> use %m and the file path in messages which are not used in the new\n> common function.\n\nThe existing uses of %m are wrong. This was already fixed once in \n7897e3bb902c557412645b82120f4d95f7474906, but the affected areas of code \nwere apparently developed at around the same time and didn't get the \nfix. So I have attached a separate patch to fix this first, which could \nbe backpatched.\n\nThe original patch is then rebased on top of that. I have adjusted the \nerror message to include the file set name if available.\n\nWhat this doesn't keep is the purpose of the temp file in some cases, \nlike \"hash-join temporary file\". We could maybe make this an additional \nargument or an error context, but it seems cumbersome in any case. \nThoughts?",
"msg_date": "Fri, 6 Jan 2023 13:48:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 6:18 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 02.01.23 13:13, Amit Kapila wrote:\n> > On Wed, Dec 28, 2022 at 4:17 PM Peter Eisentraut\n> > <peter.eisentraut@enterprisedb.com> wrote:\n> >>\n> >> Most callers of BufFileRead() want to check whether they read the full\n> >> specified length. Checking this at every call site is very tedious.\n> >> This patch provides additional variants BufFileReadExact() and\n> >> BufFileReadMaybeEOF() that include the length checks.\n> >>\n> >> I considered changing BufFileRead() itself, but this function is also\n> >> used in extensions, and so changing the behavior like this would create\n> >> a lot of problems there. The new names are analogous to the existing\n> >> LogicalTapeReadExact().\n> >>\n> >\n> > +1 for the new APIs. I have noticed that some of the existing places\n> > use %m and the file path in messages which are not used in the new\n> > common function.\n>\n> The existing uses of %m are wrong. This was already fixed once in\n> 7897e3bb902c557412645b82120f4d95f7474906, but the affected areas of code\n> were apparently developed at around the same time and didn't get the\n> fix. So I have attached a separate patch to fix this first, which could\n> be backpatched.\n>\n\n+1. The patch is not getting applied due to a recent commit.\n\n> The original patch is then rebased on top of that. I have adjusted the\n> error message to include the file set name if available.\n>\n> What this doesn't keep is the purpose of the temp file in some cases,\n> like \"hash-join temporary file\". We could maybe make this an additional\n> argument or an error context, but it seems cumbersome in any case.\n>\n\nYeah, we can do that but not sure if it is worth doing any of those\nbecause there are already other places that don't use the exact\ncontext.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Jan 2023 11:50:15 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On 10.01.23 07:20, Amit Kapila wrote:\n>> The existing uses of %m are wrong. This was already fixed once in\n>> 7897e3bb902c557412645b82120f4d95f7474906, but the affected areas of code\n>> were apparently developed at around the same time and didn't get the\n>> fix. So I have attached a separate patch to fix this first, which could\n>> be backpatched.\n>>\n> +1. The patch is not getting applied due to a recent commit.\n> \n>> The original patch is then rebased on top of that. I have adjusted the\n>> error message to include the file set name if available.\n>>\n>> What this doesn't keep is the purpose of the temp file in some cases,\n>> like \"hash-join temporary file\". We could maybe make this an additional\n>> argument or an error context, but it seems cumbersome in any case.\n>>\n> Yeah, we can do that but not sure if it is worth doing any of those\n> because there are already other places that don't use the exact\n> context.\n\nOk, updated patches attached.",
"msg_date": "Thu, 12 Jan 2023 10:14:11 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On Thu, Jan 12, 2023 at 2:44 PM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 10.01.23 07:20, Amit Kapila wrote:\n> > Yeah, we can do that but not sure if it is worth doing any of those\n> > because there are already other places that don't use the exact\n> > context.\n>\n> Ok, updated patches attached.\n>\n\nBoth the patches look good to me.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Sat, 14 Jan 2023 11:31:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
},
{
"msg_contents": "On 14.01.23 07:01, Amit Kapila wrote:\n> On Thu, Jan 12, 2023 at 2:44 PM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 10.01.23 07:20, Amit Kapila wrote:\n>>> Yeah, we can do that but not sure if it is worth doing any of those\n>>> because there are already other places that don't use the exact\n>>> context.\n>>\n>> Ok, updated patches attached.\n> \n> Both the patches look good to me.\n\nCommitted, and the first part backpatched.\n\n\n\n",
"msg_date": "Mon, 16 Jan 2023 11:16:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Add BufFileRead variants with short read and EOF detection"
}
] |
[
{
"msg_contents": "Most backend code doesn't actually need the variable-length data types \nsupport (TOAST support) in postgres.h. So I figured we could try to put \nit into a separate header file. That makes postgres.h more manageable, \nand it avoids including a bunch of complicated unused stuff everywhere. \nI picked \"varatt.h\" as the name. Then we could either\n\n1) Include varatt.h in postgres.h, similar to elog.h and palloc.h. That \nway we clean up the files a bit but don't change any external interfaces.\n\n2) Just let everyone who needs it include the new file.\n\n3) Compromise: You can avoid most \"damage\" by having fmgr.h include \nvaratt.h. That satisfies most data types and extension code. That way, \nthere are only a few places that need an explicit include of varatt.h.\n\nI went with the last option in my patch.\n\nThoughts?",
"msg_date": "Wed, 28 Dec 2022 14:07:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "split TOAST support out of postgres.h"
},
{
"msg_contents": "On Wed, 28 Dec 2022 at 08:07, Peter Eisentraut <\npeter.eisentraut@enterprisedb.com> wrote:\n\n> Most backend code doesn't actually need the variable-length data types\n> support (TOAST support) in postgres.h. So I figured we could try to put\n> it into a separate header file. That makes postgres.h more manageable,\n> and it avoids including a bunch of complicated unused stuff everywhere.\n> I picked \"varatt.h\" as the name. Then we could either\n>\n[…]\n\n> I went with the last option in my patch.\n>\n> Thoughts?\n\n\nThis is a bit of a bikeshed suggestion, but I'm wondering if you considered\ncalling it toast.h? Only because the word is so distinctive within\nPostgres; everybody knows exactly to what it refers.\n\nI definitely agree with the principle of organizing and splitting up the\nheader files. Personally, I don't mind importing a bunch of headers if I'm\nusing a bunch of subsystems so I would be OK with needing to import this\nnew header if I need it.\n",
"msg_date": "Wed, 28 Dec 2022 09:07:12 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> ... Then we could either\n\n> 1) Include varatt.h in postgres.h, similar to elog.h and palloc.h. That \n> way we clean up the files a bit but don't change any external interfaces.\n\n> 2) Just let everyone who needs it include the new file.\n\n> 3) Compromise: You can avoid most \"damage\" by having fmgr.h include \n> varatt.h. That satisfies most data types and extension code. That way, \n> there are only a few places that need an explicit include of varatt.h.\n\n> I went with the last option in my patch.\n\nI dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\nso widely, I doubt it is buying very much in terms of reducing header\nfootprint. How bad is it to do #2?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Dec 2022 10:07:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Hi,\n\nI've thought about this while working on Pluggable TOAST and 64-bit\nTOAST value ID myself. Agree with #2 to seem the best of the above.\nThere are not so many places where a new header should be included.\n\nOn Wed, Dec 28, 2022 at 6:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> > ... Then we could either\n>\n> > 1) Include varatt.h in postgres.h, similar to elog.h and palloc.h. That\n> > way we clean up the files a bit but don't change any external interfaces.\n>\n> > 2) Just let everyone who needs it include the new file.\n>\n> > 3) Compromise: You can avoid most \"damage\" by having fmgr.h include\n> > varatt.h. That satisfies most data types and extension code. That way,\n> > there are only a few places that need an explicit include of varatt.h.\n>\n> > I went with the last option in my patch.\n>\n> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n> so widely, I doubt it is buying very much in terms of reducing header\n> footprint. How bad is it to do #2?\n>\n>             regards, tom lane\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n",
"msg_date": "Thu, 29 Dec 2022 10:39:34 +0300",
"msg_from": "Nikita Malakhov <hukutoc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 09:07:12 -0500, Isaac Morland wrote:\n> This is a bit of a bikeshed suggestion, but I'm wondering if you considered\n> calling it toast.h? Only because the word is so distinctive within\n> Postgres; everybody knows exactly to what it refers.\n\nWe have a bunch of toast*.h files already. The new header should pretty much\nonly contain the types, given how widely the header is going to be\nincluded. So maybe toast_type.h?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Dec 2022 09:15:55 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Thu, 29 Dec 2022 at 18:16, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2022-12-28 09:07:12 -0500, Isaac Morland wrote:\n> > This is a bit of a bikeshed suggestion, but I'm wondering if you considered\n> > calling it toast.h? Only because the word is so distinctive within\n> > Postgres; everybody knows exactly to what it refers.\n>\n> We have a bunch of toast*.h files already. The new header should pretty much\n> only contain the types, given how widely the header is going to be\n> included. So maybe toast_type.h?\n\nMy 2 cents: I don't think that toast_anything.h is appropriate,\nbecause even though the varatt infrastructure does enable\nexternally-stored oversized attributes (which is the essence of\nTOAST), this is not the only (or primary) use of the type.\n\nExample: Indexes do not (can not?) support toasted values, but\ngenerally do support variable length attributes that would be pulled\nin with varatt.h. I don't see why we'd call the headers of\nvariable-length attributes after one small - but not insignifcant -\nuse case.\n\nKind regards,\n\nMatthias van de Meent\n\n\n",
"msg_date": "Thu, 29 Dec 2022 18:23:34 +0100",
"msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> On Thu, 29 Dec 2022 at 18:16, Andres Freund <andres@anarazel.de> wrote:\n>> We have a bunch of toast*.h files already. The new header should pretty much\n>> only contain the types, given how widely the header is going to be\n>> included. So maybe toast_type.h?\n\n> My 2 cents: I don't think that toast_anything.h is appropriate,\n> because even though the varatt infrastructure does enable\n> externally-stored oversized attributes (which is the essence of\n> TOAST), this is not the only (or primary) use of the type.\n\n+1 ... varatt.h sounded fine to me. I'd suggest varlena.h except\nwe have one already.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 12:47:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On 28.12.22 16:07, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> ... Then we could either\n> \n>> 1) Include varatt.h in postgres.h, similar to elog.h and palloc.h. That\n>> way we clean up the files a bit but don't change any external interfaces.\n> \n>> 2) Just let everyone who needs it include the new file.\n> \n>> 3) Compromise: You can avoid most \"damage\" by having fmgr.h include\n>> varatt.h. That satisfies most data types and extension code. That way,\n>> there are only a few places that need an explicit include of varatt.h.\n> \n>> I went with the last option in my patch.\n> \n> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n> so widely, I doubt it is buying very much in terms of reducing header\n> footprint. How bad is it to do #2?\n\nSee this incremental patch set.\n\nIt seems like maybe there is some intermediate abstraction that a lot of \nthese places should be using that we haven't thought of yet.",
"msg_date": "Fri, 30 Dec 2022 12:53:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 28.12.22 16:07, Tom Lane wrote:\n>> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n>> so widely, I doubt it is buying very much in terms of reducing header\n>> footprint. How bad is it to do #2?\n\n> See this incremental patch set.\n\nWow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\nI think that bears out my feeling that fmgr.h wasn't a great location:\nI count 117 #includes of that, many of which are in .h files themselves\nso that many more .c files would be required to read them.\n\n(You did check that this passes cpluspluscheck/headerscheck, right?)\n\n> It seems like maybe there is some intermediate abstraction that a lot of \n> these places should be using that we haven't thought of yet.\n\nHmm. Perhaps, but I think I'm content with this version of the patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 30 Dec 2022 11:50:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "\nOn 2022-12-30 Fr 11:50, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 28.12.22 16:07, Tom Lane wrote:\n>>> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n>>> so widely, I doubt it is buying very much in terms of reducing header\n>>> footprint. How bad is it to do #2?\n>> See this incremental patch set.\n> Wow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\n> I think that bears out my feeling that fmgr.h wasn't a great location:\n> I count 117 #includes of that, many of which are in .h files themselves\n> so that many more .c files would be required to read them.\n>\n> (You did check that this passes cpluspluscheck/headerscheck, right?)\n>\n>> It seems like maybe there is some intermediate abstraction that a lot of \n>> these places should be using that we haven't thought of yet.\n> Hmm. Perhaps, but I think I'm content with this version of the patch.\n\n\nLooked good to me too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 08:48:15 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On 30.12.22 17:50, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> On 28.12.22 16:07, Tom Lane wrote:\n>>> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n>>> so widely, I doubt it is buying very much in terms of reducing header\n>>> footprint. How bad is it to do #2?\n> \n>> See this incremental patch set.\n> \n> Wow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\n> I think that bears out my feeling that fmgr.h wasn't a great location:\n> I count 117 #includes of that, many of which are in .h files themselves\n> so that many more .c files would be required to read them.\n\ncommitted\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 06:07:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 06:07:49AM +0100, Peter Eisentraut wrote:\n> On 30.12.22 17:50, Tom Lane wrote:\n> >Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> >>On 28.12.22 16:07, Tom Lane wrote:\n> >>>I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n> >>>so widely, I doubt it is buying very much in terms of reducing header\n> >>>footprint. How bad is it to do #2?\n> >\n> >>See this incremental patch set.\n> >\n> >Wow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\n> >I think that bears out my feeling that fmgr.h wasn't a great location:\n> >I count 117 #includes of that, many of which are in .h files themselves\n> >so that many more .c files would be required to read them.\n> \n> committed\n\nSET_VARSIZE alone appears in 74 pgxn distributions, so I predict extension\nbreakage en masse. I would revert this.\n\n\n",
"msg_date": "Mon, 9 Jan 2023 23:39:49 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On 10.01.23 08:39, Noah Misch wrote:\n> On Tue, Jan 10, 2023 at 06:07:49AM +0100, Peter Eisentraut wrote:\n>> On 30.12.22 17:50, Tom Lane wrote:\n>>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>> On 28.12.22 16:07, Tom Lane wrote:\n>>>>> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is included\n>>>>> so widely, I doubt it is buying very much in terms of reducing header\n>>>>> footprint. How bad is it to do #2?\n>>>\n>>>> See this incremental patch set.\n>>>\n>>> Wow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\n>>> I think that bears out my feeling that fmgr.h wasn't a great location:\n>>> I count 117 #includes of that, many of which are in .h files themselves\n>>> so that many more .c files would be required to read them.\n>>\n>> committed\n> \n> SET_VARSIZE alone appears in 74 pgxn distributions, so I predict extension\n> breakage en masse. I would revert this.\n\nWell, that was sort of my thinking, but people seemed to like this. I'm \nhappy to consider alternatives.\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 09:48:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 3:48 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> >>> Wow, 41 files requiring varatt.h is a lot fewer than I would have guessed.\n> >>> I think that bears out my feeling that fmgr.h wasn't a great location:\n> >>> I count 117 #includes of that, many of which are in .h files themselves\n> >>> so that many more .c files would be required to read them.\n> >>\n> >> committed\n> >\n> > SET_VARSIZE alone appears in 74 pgxn distributions, so I predict extension\n> > breakage en masse. I would revert this.\n>\n> Well, that was sort of my thinking, but people seemed to like this. I'm\n> happy to consider alternatives.\n\nI don't think that the number of extensions that get broken is really\nthe right metric. It's not fantastic to break large numbers of\nextensions, of course, but if the solution is merely to add an #if\nPG_VERSION_NUM >= whatever #include \"newstuff\" #endif then I don't\nthink it's really an issue. If an extension doesn't have an author who\ncan do at least that much updating when a new PG release comes out,\nthen it's basically unmaintained, and I just don't feel that bad about\nbreaking unmaintained extensions now and then, even annually.\n\nOf course, if we go and remove something that's very widely used and\nfor which there's no simple workaround, that sucks. Say, removing\nLWLocks entirely. But we don't usually do that sort of thing unless\nthere's a good reason and significant benefits.\n\nI don't think it would be very nice to do something like this in a\nminor release. But in a new major release, I think it's fine. I've\nbeen on the hook to maintain extensions in the face of these kinds of\nchanges at various times over the years, and it's never taken me much\ntime.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 08:12:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 10, 2023 at 3:48 AM Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>> SET_VARSIZE alone appears in 74 pgxn distributions, so I predict extension\n>>> breakage en masse. I would revert this.\n\n>> Well, that was sort of my thinking, but people seemed to like this. I'm\n>> happy to consider alternatives.\n\n> I don't think it would be very nice to do something like this in a\n> minor release. But in a new major release, I think it's fine. I've\n> been on the hook to maintain extensions in the face of these kinds of\n> changes at various times over the years, and it's never taken me much\n> time.\n\nYeah, that was my thinking. We could never do any header refactoring\nat all if the standard is \"will some extension author need to add a #if\".\nIn practice, we make bigger adjustments than this all the time,\nboth in header layout and in individual function APIs.\n\nNow, there is a fair question whether splitting this code out of\npostgres.h is worth any trouble at all. TBH my initial reaction\nhad been \"no\". But once we found that only 40-ish backend files\nneed to read this new header, I became a \"yes\" vote because it\nseems clear that there will be a total-compilation-time benefit.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 10 Jan 2023 09:46:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 9:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now, there is a fair question whether splitting this code out of\n> postgres.h is worth any trouble at all. TBH my initial reaction\n> had been \"no\". But once we found that only 40-ish backend files\n> need to read this new header, I became a \"yes\" vote because it\n> seems clear that there will be a total-compilation-time benefit.\n\nI wasn't totally about this, either, but I think on balance it's\nprobably a smart thing to do. I always found it kind of weird that we\nput that stuff in postgres.h. It seems to privilege the TOAST\nmechanism to an undue degree; what's the argument, for example, that\nTOAST macros are more generally relevant than CHECK_FOR_INTERRUPTS()\nor LWLockAcquire or HeapTuple? It felt to me like we'd just decided\nthat one subsystem gets to go into the main header file and everybody\nelse just had to have their own headers.\n\nOne thing that's particularly awkward about that is that if you want\nto write some front-end code that knows about how varlenas are stored\non disk, it was very awkward with the old structure. You're not\nsupposed to include \"postgres.h\" into frontend code, but if the stuff\nyou need is defined there then what else can you do? So I generally\nthink that anything that's in postgres.h should have a strong claim to\nbe not only widely-needed in the backend, but also never needed at all\nin any other executable.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 12:00:46 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 12:00:46PM -0500, Robert Haas wrote:\n> On Tue, Jan 10, 2023 at 9:46 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Now, there is a fair question whether splitting this code out of\n> > postgres.h is worth any trouble at all. TBH my initial reaction\n> > had been \"no\". But once we found that only 40-ish backend files\n> > need to read this new header, I became a \"yes\" vote because it\n> > seems clear that there will be a total-compilation-time benefit.\n\nA time claim with no benchmarks is a red flag. I've chosen to run one:\n\nexport CCACHE_DISABLE=1\nchange=d952373a987bad331c0e499463159dd142ced1ef\nfor commit in $change $change^; do\n  echo === git checkout $commit\n  git checkout $commit\n  for n in `seq 1 200`; do make -j20 clean; env time make -j20; done\ndone\n\nResults:\n\ncommit median mean count\nd952373a987bad331c0e499463159dd142ced1ef 49.35 49.37 200\nd952373a987bad331c0e499463159dd142ced1ef^ 49.33 49.36 200\n\nThat is to say, the patch made the build a bit slower, not faster. That's\nwith GCC 4.8.5 (RHEL 7). I likely should have interleaved the run types, but\nin any case the speed win didn't show up.\n\n> I wasn't totally about this, either, but I think on balance it's\n> probably a smart thing to do. I always found it kind of weird that we\n> put that stuff in postgres.h. It seems to privilege the TOAST\n> mechanism to an undue degree; what's the argument, for example, that\n> TOAST macros are more generally relevant than CHECK_FOR_INTERRUPTS()\n> or LWLockAcquire or HeapTuple? It felt to me like we'd just decided\n> that one subsystem gets to go into the main header file and everybody\n> else just had to have their own headers.\n> \n> One thing that's particularly awkward about that is that if you want\n> to write some front-end code that knows about how varlenas are stored\n> on disk, it was very awkward with the old structure. You're not\n> supposed to include \"postgres.h\" into frontend code, but if the stuff\n> you need is defined there then what else can you do? So I generally\n> think that anything that's in postgres.h should have a strong claim to\n> be not only widely-needed in the backend, but also never needed at all\n> in any other executable.\n\nIf the patch had just made postgres.h include varatt.h, like it does elog.h,\nI'd consider that change a nonnegative. Grouping things is nice, even if it\nmakes compilation a bit slower. That also covers your frontend use case. How\nabout that?\n\nI agree fixing any one extension is easy enough. Thinking back to the\nhtup_details.h refactor, I found the aggregate pain unreasonable relative to\nalleged benefits, even though each individual extension wasn't too bad.\nSET_VARSIZE is used more (74 pgxn distributions) than htup_details.h (62).\n\n\n",
"msg_date": "Tue, 10 Jan 2023 22:14:43 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 1:14 AM Noah Misch <noah@leadboat.com> wrote:\n> If the patch had just made postgres.h include varatt.h, like it does elog.h,\n> I'd consider that change a nonnegative. Grouping things is nice, even if it\n> makes compilation a bit slower. That also covers your frontend use case. How\n> about that?\n\nI'm not direly opposed to that, but I'm also unconvinced that having\nthe varatt.h stuff is important enough relative to other things to\njustify giving it a privileged place forever.\n\n> I agree fixing any one extension is easy enough. Thinking back to the\n> htup_details.h refactor, I found the aggregate pain unreasonable relative to\n> alleged benefits, even though each individual extension wasn't too bad.\n> SET_VARSIZE is used more (74 pgxn distributions) than htup_details.h (62).\n\nWhat annoyed me about that refactoring is that, in most cases where I\nhad been including htup.h, I had to change it to include\nhtup_details.h. In the main source tree, too, we've got 31 places that\ninclude access/htup.h and 241 that include access/htup_details.h. It's\nhard to argue that it was worth splitting the file given those\nnumbers, and in fact I think that change was a mistake. But that\ndoesn't mean every such change is a mistake.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 12 Jan 2023 10:30:44 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: split TOAST support out of postgres.h"
},
{
"msg_contents": "On 10.01.23 09:48, Peter Eisentraut wrote:\n> On 10.01.23 08:39, Noah Misch wrote:\n>> On Tue, Jan 10, 2023 at 06:07:49AM +0100, Peter Eisentraut wrote:\n>>> On 30.12.22 17:50, Tom Lane wrote:\n>>>> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>>>>> On 28.12.22 16:07, Tom Lane wrote:\n>>>>>> I dunno, #3 seems kind of unprincipled. Also, since fmgr.h is \n>>>>>> included\n>>>>>> so widely, I doubt it is buying very much in terms of reducing header\n>>>>>> footprint. How bad is it to do #2?\n>>>>\n>>>>> See this incremental patch set.\n>>>>\n>>>> Wow, 41 files requiring varatt.h is a lot fewer than I would have \n>>>> guessed.\n>>>> I think that bears out my feeling that fmgr.h wasn't a great location:\n>>>> I count 117 #includes of that, many of which are in .h files themselves\n>>>> so that many more .c files would be required to read them.\n>>>\n>>> committed\n>>\n>> SET_VARSIZE alone appears in 74 pgxn distributions, so I predict \n>> extension\n>> breakage en masse. I would revert this.\n> \n> Well, that was sort of my thinking, but people seemed to like this. I'm \n> happy to consider alternatives.\n\nGiven the subsequent discussion, I'll keep it as is for now but consider \nit a semi-open item. It's easy to change.\n\n\n\n",
"msg_date": "Thu, 12 Jan 2023 17:19:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: split TOAST support out of postgres.h"
}
] |
[
{
"msg_contents": "I took another look at the code coverage situation around freezing\nfollowing pushing the page-level freezing patch earlier today. I\nspotted an issue that I'd missed up until now: certain sanity checks\nin heap_prepare_freeze_tuple() call TransactionIdDidCommit() more\noften than really seems necessary.\n\nTheoretically this is an old issue that dates back to commit\n699bf7d05c, as opposed to an issue in the page-level freezing patch.\nBut that's not really true in a real practical sense. In practice the\ncalls to TransactionIdDidCommit() will happen far more frequently\nfollowing today's commit 1de58df4fe (since we're using OldestXmin as\nthe cutoff that gates performing freeze_xmin/freeze_xmax processing --\nnot FreezeLimit).\n\nNo regressions related to clog lookups by VACUUM were apparent from my\nperformance validation of the page-level freezing work, but I suspect\nthat the increase in TransactionIdDidCommit() calls will cause\nnoticeable regressions with the right/wrong workload and/or\nconfiguration. The page-level freezing work is expected to reduce clog\nlookups in VACUUM in general, which is bound to have been a\nconfounding factor.\n\nI see no reason why we can't just condition the XID sanity check calls\nto TransactionIdDidCommit() (for both the freeze_xmin and the\nfreeze_xmax callsites) on a cheap tuple hint bit precheck not being\nenough. We only need an expensive call to TransactionIdDidCommit()\nwhen the precheck doesn't immediately indicate that the tuple xmin\nlooks committed when that's what the sanity check expects to see (or\nthat the tuple's xmax looks aborted, in the case of the callsite where\nthat's what we expect to see).\n\nAttached patch shows what I mean. A quick run of the standard\nregression tests (with custom instrumentation) shows that this patch\neliminates 100% of all relevant calls to TransactionIdDidCommit(), for\nboth the freeze_xmin and the freeze_xmax callsites.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 28 Dec 2022 15:24:28 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 15:24:28 -0800, Peter Geoghegan wrote:\n> I took another look at the code coverage situation around freezing\n> following pushing the page-level freezing patch earlier today. I\n> spotted an issue that I'd missed up until now: certain sanity checks\n> in heap_prepare_freeze_tuple() call TransactionIdDidCommit() more\n> often than really seems necessary.\n> \n> Theoretically this is an old issue that dates back to commit\n> 699bf7d05c, as opposed to an issue in the page-level freezing patch.\n> But that's not really true in a real practical sense. In practice the\n> calls to TransactionIdDidCommit() will happen far more frequently\n> following today's commit 1de58df4fe (since we're using OldestXmin as\n> the cutoff that gates performing freeze_xmin/freeze_xmax processing --\n> not FreezeLimit).\n\nHm. But we still only do the check when we actually freeze, rather than just\nduring the pre-check in heap_tuple_should_freeze(). So we'll only incur the\nincreased overhead when we also do more WAL logging etc. Correct?\n\n\n> I took another look at the code coverage situation around freezing\n> I see no reason why we can't just condition the XID sanity check calls\n> to TransactionIdDidCommit() (for both the freeze_xmin and the\n> freeze_xmax callsites) on a cheap tuple hint bit precheck not being\n> enough. We only need an expensive call to TransactionIdDidCommit()\n> when the precheck doesn't immediately indicate that the tuple xmin\n> looks committed when that's what the sanity check expects to see (or\n> that the tuple's xmax looks aborted, in the case of the callsite where\n> that's what we expect to see).\n\nHm. I dimply recall that we had repeated cases where the hint bits were set\nwrongly due to some of the multixact related bugs. I think I was trying to be\nparanoid about not freezing stuff in those situations, since it can lead to\nreviving dead tuples, which obviously is bad.\n\n\n> Attached patch shows what I mean. A quick run of the standard\n> regression tests (with custom instrumentation) shows that this patch\n> eliminates 100% of all relevant calls to TransactionIdDidCommit(), for\n> both the freeze_xmin and the freeze_xmax callsites.\n\nThere's practically no tuple-level concurrent activity in a normal regression\ntest. So I'm a bit doubtful that is an interesting metric. It'd be a tad more\ninteresting to run tests, including isolationtester and pgbench, against a\nrunning cluster.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:20:17 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > Theoretically this is an old issue that dates back to commit\n> > 699bf7d05c, as opposed to an issue in the page-level freezing patch.\n> > But that's not really true in a real practical sense. In practice the\n> > calls to TransactionIdDidCommit() will happen far more frequently\n> > following today's commit 1de58df4fe (since we're using OldestXmin as\n> > the cutoff that gates performing freeze_xmin/freeze_xmax processing --\n> > not FreezeLimit).\n>\n> Hm. But we still only do the check when we actually freeze, rather than just\n> during the pre-check in heap_tuple_should_freeze(). So we'll only incur the\n> increased overhead when we also do more WAL logging etc. Correct?\n\nYes, that's how it worked up until today's commit 1de58df4fe.\n\nI don't have strong feelings about back patching a fix, but this does\nseem like something that I should fix now, on HEAD.\n\n> Hm. I dimply recall that we had repeated cases where the hint bits were set\n> wrongly due to some of the multixact related bugs. I think I was trying to be\n> paranoid about not freezing stuff in those situations, since it can lead to\n> reviving dead tuples, which obviously is bad.\n\nI think that it's a reasonable check, and I'm totally in favor of\nkeeping it (or something very close, at least).\n\n> There's practically no tuple-level concurrent activity in a normal regression\n> test. So I'm a bit doubtful that is an interesting metric. It'd be a tad more\n> interesting to run tests, including isolationtester and pgbench, against a\n> running cluster.\n\nEven on HEAD, with page-level freezing in place, we're only going to\ntest XIDs that are < OldestXmin, that appear on pages tha VACUUM\nactually scans in the first place. Just checking tuple-level hint bits\nshould be effective. But even if it isn't (for whatever reason) then\nit's similar to cases where our second heap pass has to do clog\nlookups in heap_page_is_all_visible() just because hint bits couldn't\nbe set earlier on, back when lazy_scan_prune() processed the same page\nduring VACUUM's initial heap pass.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:37:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On 2022-12-28 16:37:27 -0800, Peter Geoghegan wrote:\n> On Wed, Dec 28, 2022 at 4:20 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Theoretically this is an old issue that dates back to commit\n> > > 699bf7d05c, as opposed to an issue in the page-level freezing patch.\n> > > But that's not really true in a real practical sense. In practice the\n> > > calls to TransactionIdDidCommit() will happen far more frequently\n> > > following today's commit 1de58df4fe (since we're using OldestXmin as\n> > > the cutoff that gates performing freeze_xmin/freeze_xmax processing --\n> > > not FreezeLimit).\n> >\n> > Hm. But we still only do the check when we actually freeze, rather than just\n> > during the pre-check in heap_tuple_should_freeze(). So we'll only incur the\n> > increased overhead when we also do more WAL logging etc. Correct?\n> \n> Yes, that's how it worked up until today's commit 1de58df4fe.\n> \n> I don't have strong feelings about back patching a fix, but this does\n> seem like something that I should fix now, on HEAD.\n>\n> > Hm. I dimply recall that we had repeated cases where the hint bits were set\n> > wrongly due to some of the multixact related bugs. I think I was trying to be\n> > paranoid about not freezing stuff in those situations, since it can lead to\n> > reviving dead tuples, which obviously is bad.\n> \n> I think that it's a reasonable check, and I'm totally in favor of\n> keeping it (or something very close, at least).\n\nI don't quite follow - one paragraph up you say we should fix something, and\nthen here you seem to say we should continue not to rely on the hint bits?\n\n\n",
"msg_date": "Wed, 28 Dec 2022 16:43:15 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Wed, Dec 28, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Hm. I dimply recall that we had repeated cases where the hint bits were set\n> > > wrongly due to some of the multixact related bugs. I think I was trying to be\n> > > paranoid about not freezing stuff in those situations, since it can lead to\n> > > reviving dead tuples, which obviously is bad.\n> >\n> > I think that it's a reasonable check, and I'm totally in favor of\n> > keeping it (or something very close, at least).\n>\n> I don't quite follow - one paragraph up you say we should fix something, and\n> then here you seem to say we should continue not to rely on the hint bits?\n\nI didn't mean that we should continue to not rely on the hint bits. Is\nthat really all that the test is for? I think of it as a general sanity check.\n\nThe important call to avoid with page-level freezing is the xmin call to\nTransactionIdDidCommit(), not the xmax call. The xmax call only occurs\nwhen VACUUM prepares to freeze a tuple that was updated by an updater\n(not a locker) that aborted. While the xmin calls will now take place with most\nunfrozen tuples.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 28 Dec 2022 22:36:53 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 22:36:53 -0800, Peter Geoghegan wrote:\n> On Wed, Dec 28, 2022 at 4:43 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Hm. I dimply recall that we had repeated cases where the hint bits were set\n> > > > wrongly due to some of the multixact related bugs. I think I was trying to be\n> > > > paranoid about not freezing stuff in those situations, since it can lead to\n> > > > reviving dead tuples, which obviously is bad.\n> > >\n> > > I think that it's a reasonable check, and I'm totally in favor of\n> > > keeping it (or something very close, at least).\n> >\n> > I don't quite follow - one paragraph up you say we should fix something, and\n> > then here you seem to say we should continue not to rely on the hint bits?\n> \n> I didn't mean that we should continue to not rely on the hint bits. Is\n> that really all that the test is for? I think of it as a general sanity check.\n\nI do think we wanted to avoid reviving actually-dead tuples (present due to\nthe multixact and related bugs). And I'm worried about giving that checking\nup, I've seen it hit too many times. Both in the real world and during\ndevelopment.\n\nSomewhat of a tangent: I've previously wondered if we should have a small\nhash-table based clog cache. The current one-element cache doesn't suffice in\na lot of scenarios, but it wouldn't take a huge cache to end up filtering most\nclog accesses.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Dec 2022 09:21:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Somewhat of a tangent: I've previously wondered if we should have a small\n> hash-table based clog cache. The current one-element cache doesn't suffice in\n> a lot of scenarios, but it wouldn't take a huge cache to end up filtering most\n> clog accesses.\n\nI've wondered about that too. The one-element cache was a good hack\nin its day, but it looks a bit under-engineered by our current\nstandards. (Also, maybe it'd be plausible to have a one-element\ncache in front of a small hash?)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 29 Dec 2022 12:25:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 9:21 AM Andres Freund <andres@anarazel.de> wrote:\n> I do think we wanted to avoid reviving actually-dead tuples (present due to\n> the multixact and related bugs). And I'm worried about giving that checking\n> up, I've seen it hit too many times. Both in the real world and during\n> development.\n\nI could just move the same tests from heap_prepare_freeze_tuple() to\nheap_freeze_execute_prepared(), without changing any of the details.\nThat would mean that the TransactionIdDidCommit() calls would only\ntake place with tuples that actually get frozen, which is more or less\nhow it worked before now.\n\nheap_prepare_freeze_tuple() will now often prepare freeze plans that\njust get discarded by lazy_scan_prune(). My concern is the impact on\ntables/pages that almost always discard prepared freeze plans, and so\nrequire many TransactionIdDidCommit() calls that really aren't\nnecessary.\n\n> Somewhat of a tangent: I've previously wondered if we should have a small\n> hash-table based clog cache. The current one-element cache doesn't suffice in\n> a lot of scenarios, but it wouldn't take a huge cache to end up filtering most\n> clog accesses.\n\nI imagine that the one-element cache works alright in some scenarios,\nbut then suddenly doesn't work so well, even though not very much has\nchanged. Behavior like that makes the problems difficult to analyze,\nand easy to miss. I'm suspicious of that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 29 Dec 2022 09:43:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-29 09:43:39 -0800, Peter Geoghegan wrote:\n> On Thu, Dec 29, 2022 at 9:21 AM Andres Freund <andres@anarazel.de> wrote:\n> > I do think we wanted to avoid reviving actually-dead tuples (present due to\n> > the multixact and related bugs). And I'm worried about giving that checking\n> > up, I've seen it hit too many times. Both in the real world and during\n> > development.\n>\n> I could just move the same tests from heap_prepare_freeze_tuple() to\n> heap_freeze_execute_prepared(), without changing any of the details.\n\nThat might work, yes.\n\n\n> That would mean that the TransactionIdDidCommit() calls would only\n> take place with tuples that actually get frozen, which is more or less\n> how it worked before now.\n>\n> heap_prepare_freeze_tuple() will now often prepare freeze plans that\n> just get discarded by lazy_scan_prune(). My concern is the impact on\n> tables/pages that almost always discard prepared freeze plans, and so\n> require many TransactionIdDidCommit() calls that really aren't\n> necessary.\n\nIt seems somewhat wrong that we discard all the work that\nheap_prepare_freeze_tuple() did. Yes, we force freezing to actually happen in\na bunch of important cases (e.g. creating a new multixact), but even so,\ne.g. GetMultiXactIdMembers() is awfully expensive to do for nought. Nor is\njust creating the freeze plan free.\n\nI think the better approach might be to make heap_tuple_should_freeze() more\npowerful and to only create the freeze plan when actually freezing.\n\n\nI wonder how often it'd be worthwhile to also do opportunistic freezing during\nlazy_vacuum_heap_page(), given that we already will WAL log (and often issue\nan FPI).\n\n\n> > Somewhat of a tangent: I've previously wondered if we should have a small\n> > hash-table based clog cache. The current one-element cache doesn't suffice in\n> > a lot of scenarios, but it wouldn't take a huge cache to end up filtering most\n> > clog accesses.\n>\n> I imagine that the one-element cache works alright in some scenarios,\n> but then suddenly doesn't work so well, even though not very much has\n> changed. Behavior like that makes the problems difficult to analyze,\n> and easy to miss. I'm suspicious of that.\n\nI think there's a lot of situations where it flat out doesn't work - even if\nyou just have an inserting and a deleting transaction, we'll often end up not\nhitting the 1-element cache due to looking up two different xids in a roughly\nalternating pattern...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 29 Dec 2022 12:00:34 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > I could just move the same tests from heap_prepare_freeze_tuple() to\n> > heap_freeze_execute_prepared(), without changing any of the details.\n>\n> That might work, yes.\n\nAttached patch shows how that could work.\n\n> It seems somewhat wrong that we discard all the work that\n> heap_prepare_freeze_tuple() did. Yes, we force freezing to actually happen in\n> a bunch of important cases (e.g. creating a new multixact), but even so,\n> e.g. GetMultiXactIdMembers() is awfully expensive to do for nought. Nor is\n> just creating the freeze plan free.\n\nI'm not sure what you mean by that. I believe that the page-level\nfreezing changes do not allow FreezeMultiXactId() to call\nGetMultiXactIdMembers() any more often than before. Are you concerned\nabout a regression, or something more general than that?\n\nThe only case that we *don't* force xmax freezing in\nFreezeMultiXactId() is the FRM_NOOP case. Note in particular that we\nwill reliably force freezing for any Multi < OldestMxact (not <\nMultiXactCutoff).\n\n> I wonder how often it'd be worthwhile to also do opportunistic freezing during\n> lazy_vacuum_heap_page(), given that we already will WAL log (and often issue\n> an FPI).\n\nYeah, we don't actually need a cleanup lock for that. It might also\nmake sense to teach lazy_scan_prune() to anticipate what will happen\nlater on, in lazy_vacuum_heap_page(), so that it can freeze based on\nthe same observation about the cost. (It already does a certain amount\nof this kind of thing, in fact.)\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 29 Dec 2022 12:20:39 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > It seems somewhat wrong that we discard all the work that\n> > heap_prepare_freeze_tuple() did. Yes, we force freezing to actually happen in\n> > a bunch of important cases (e.g. creating a new multixact), but even so,\n> > e.g. GetMultiXactIdMembers() is awfully expensive to do for nought. Nor is\n> > just creating the freeze plan free.\n>\n> I'm not sure what you mean by that. I believe that the page-level\n> freezing changes do not allow FreezeMultiXactId() to call\n> GetMultiXactIdMembers() any more often than before. Are you concerned\n> about a regression, or something more general than that?\n\nHere's an idea that seems like it could ameliorate the issue:\n\nWhen we're looping through members from GetMultiXactIdMembers(), and\nseeing if we can get away with !need_replace/FRM_NOOP processing, why\nnot also check if there are any XIDs >= OldestXmin among the members?\nIf not (if they're all < OldestXmin), then we should prefer to go\nfurther with processing the Multi now -- FRM_NOOP processing isn't\nactually cheaper.\n\nWe'll already know that a second pass over the multi really isn't\nexpensive. And that it will actually result in FRM_INVALIDATE_XMAX\nprocessing, which is ideal. Avoiding a second pass is really just\nabout avoiding FRM_RETURN_IS_MULTI (and avoiding FRM_RETURN_IS_XID,\nperhaps, though to a much lesser degree).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 29 Dec 2022 12:50:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 12:50 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > It seems somewhat wrong that we discard all the work that\n> > > heap_prepare_freeze_tuple() did. Yes, we force freezing to actually happen in\n> > > a bunch of important cases (e.g. creating a new multixact), but even so,\n> > > e.g. GetMultiXactIdMembers() is awfully expensive to do for nought. Nor is\n> > > just creating the freeze plan free.\n\n> Here's an idea that seems like it could ameliorate the issue:\n>\n> When we're looping through members from GetMultiXactIdMembers(), and\n> seeing if we can get away with !need_replace/FRM_NOOP processing, why\n> not also check if there are any XIDs >= OldestXmin among the members?\n> If not (if they're all < OldestXmin), then we should prefer to go\n> further with processing the Multi now -- FRM_NOOP processing isn't\n> actually cheaper.\n\nAttached patch shows what I mean here.\n\nI think that there is a tendency for OldestMxact to be held back by a\ndisproportionate amount (relative to OldestXmin) in the presence of\nlong running transactions and concurrent updates from short-ish\ntransactions. The way that we maintain state in shared memory to\ncompute OldestMxact (OldestMemberMXactId/OldestVisibleMXactId) seems\nto be vulnerable to that kind of thing. I'm thinking of scenarios\nwhere MultiXactIdSetOldestVisible() gets called from a long-running\nxact, at the first point that it examines any multi, just to read\nsomething. That effectively makes the long-running xact hold back\nOldestMxact, even when it doesn't hold back OldestXmin. A read-only\ntransaction that runs in READ COMMITTED could do this -- it'll call\nOldestVisibleMXactId() and \"lock in\" the oldest visible Multi that it\nneeds to continue to see as running, without clearing that until much\nlater (until AtEOXact_MultiXact() is called at xact commit/abort). And\nwithout doing anything to hold back OldestXmin by the same amount, or\nfor the same duration.\n\nThat's the kind of scenario that the patch might make a difference in\n-- because it exploits the fact that OldestXmin can be a lot less\nvulnerable than OldestMxact is to being held back by long running\nxacts. Does that seem plausible to you?\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 29 Dec 2022 15:10:54 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
},
{
"msg_contents": "On Thu, Dec 29, 2022 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Dec 29, 2022 at 12:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I could just move the same tests from heap_prepare_freeze_tuple() to\n> > > heap_freeze_execute_prepared(), without changing any of the details.\n> >\n> > That might work, yes.\n>\n> Attached patch shows how that could work.\n\nMy plan is to push something like this next week, barring objections.\nNote that I've inverted the xmax \"!TransactionIdDidCommit()\" test --\nit is now \"TransactionIdDidAbort()\" instead. I believe that this makes\nthe test more likely to catch problems, since we really should expect\nrelevant xmax XIDs to have aborted, specifically -- since the xmax\nXIDs in question are always < OldestXmin. (There is a need to use a\n\"!TransactionIdDidCommit()\" test specifically in nearby code in\nFreezeMultiXactId(), because that code has to also deal with\nin-progress XIDs that are multi members, but that's not the case\nhere.)\n\nI'm also going to create a CF entry for the other patch posted to this\nthread -- the enhancement to FreezeMultiXactId() that aims to get the\nmost out of any expensive calls to GetMultiXactIdMembers(). That\napproach seems quite promising, and relatively simple. I'm not\nparticularly worried about wasting a call to GetMultiXactIdMembers()\nduring VACUUM, though. I'm more concerned about the actual impact of\nnot doing our best to outright remove Multis during VACUUM. The\napplication itself can experience big performance cliffs from SLRU\nbuffer misses, which VACUUM should do its utmost to prevent. That is\nan occasional source of big problems [1].\n\nI'm particularly concerned about the possibility that having an\nupdater XID will totally change the characteristics of how multis are\nprocessed inside FreezeMultiXactId(). That seems like it might be a\nreally sharp edge. I believe that the page-level freezing patch has\nalready ameliorated the problem, since it made us much less reliant on\nthe case where GetMultiXactIdMembers() returns \"nmembers <= 0\" for a\nMulti that happens to be HEAP_XMAX_IS_LOCKED_ONLY(). But we can and\nshould go further than that.\n\n[1] https://buttondown.email/nelhage/archive/notes-on-some-postgresql-implementation-details/\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 30 Dec 2022 12:23:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding unnecessary clog lookups while freezing"
}
] |
[
{
"msg_contents": "Fix markup indentation and add a mention of MERGE.",
"msg_date": "Wed, 28 Dec 2022 16:02:58 -0800",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] two minor fixes to MVCC docs"
},
{
"msg_contents": "Trivial fix to make the indentation consistent.",
"msg_date": "Mon, 19 Jun 2023 23:30:52 -0700",
"msg_from": "Will Mortensen <will@extrahop.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] doc: fix markup indentation in MVCC"
},
{
"msg_contents": "\nMy apologies for not processing your December patch earlier, and I just\nsaw your partial patch post today. Applied to master and PG 15 where\nMERGE was added.\n\n---------------------------------------------------------------------------\n\nOn Wed, Dec 28, 2022 at 04:02:58PM -0800, Will Mortensen wrote:\n> Fix markup indentation and add a mention of MERGE.\n\n> From 46977fbe5fa0a26ef77938a8fe30b9def062e8f8 Mon Sep 17 00:00:00 2001\n> From: Will Mortensen <will@extrahop.com>\n> Date: Sat, 27 Aug 2022 17:07:11 -0700\n> Subject: [PATCH 1/6] doc: fix markup indentation in MVCC\n> \n> ---\n> doc/src/sgml/mvcc.sgml | 16 ++++++++--------\n> 1 file changed, 8 insertions(+), 8 deletions(-)\n> \n> diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml\n> index 337f6dd429..69b01d01b9 100644\n> --- a/doc/src/sgml/mvcc.sgml\n> +++ b/doc/src/sgml/mvcc.sgml\n> @@ -109,8 +109,8 @@\n> dirty read\n> <indexterm><primary>dirty read</primary></indexterm>\n> </term>\n> - <listitem>\n> - <para>\n> + <listitem>\n> + <para>\n> A transaction reads data written by a concurrent uncommitted transaction.\n> </para>\n> </listitem>\n> @@ -121,8 +121,8 @@\n> nonrepeatable read\n> <indexterm><primary>nonrepeatable read</primary></indexterm>\n> </term>\n> - <listitem>\n> - <para>\n> + <listitem>\n> + <para>\n> A transaction re-reads data it has previously read and finds that data\n> has been modified by another transaction (that committed since the\n> initial read).\n> @@ -135,8 +135,8 @@\n> phantom read\n> <indexterm><primary>phantom read</primary></indexterm>\n> </term>\n> - <listitem>\n> - <para>\n> + <listitem>\n> + <para>\n> A transaction re-executes a query returning a set of rows that satisfy a\n> search condition and finds that the set of rows satisfying the condition\n> has changed due to another recently-committed transaction.\n> @@ -149,8 +149,8 @@\n> serialization anomaly\n> <indexterm><primary>serialization anomaly</primary></indexterm>\n> </term>\n> - <listitem>\n> - <para>\n> + <listitem>\n> + <para>\n> The result of successfully committing a group of transactions\n> is inconsistent with all possible orderings of running those\n> transactions one at a time.\n> -- \n> 2.25.1\n> \n\n> From 7eaec62fd8665ba761114e8238f95f0f47924a21 Mon Sep 17 00:00:00 2001\n> From: Will Mortensen <will@extrahop.com>\n> Date: Sat, 27 Aug 2022 17:54:11 -0700\n> Subject: [PATCH 2/6] doc: add mention of MERGE in MVCC\n> \n> ---\n> doc/src/sgml/mvcc.sgml | 6 +++---\n> 1 file changed, 3 insertions(+), 3 deletions(-)\n> \n> diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml\n> index 69b01d01b9..512e8b710d 100644\n> --- a/doc/src/sgml/mvcc.sgml\n> +++ b/doc/src/sgml/mvcc.sgml\n> @@ -1750,9 +1750,9 @@ SELECT pg_advisory_lock(q.id) FROM\n> changes in the table. A repeatable read transaction's snapshot is actually\n> frozen at the start of its first query or data-modification command\n> (<literal>SELECT</literal>, <literal>INSERT</literal>,\n> - <literal>UPDATE</literal>, or <literal>DELETE</literal>), so\n> - it is possible to obtain locks explicitly before the snapshot is\n> - frozen.\n> + <literal>UPDATE</literal>, <literal>DELETE</literal>, or\n> + <literal>MERGE</literal>), so it is possible to obtain locks explicitly\n> + before the snapshot is frozen.\n> </para>\n> </sect2>\n> </sect1>\n> -- \n> 2.25.1\n> \n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n",
"msg_date": "Tue, 20 Jun 2023 16:26:39 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] two minor fixes to MVCC docs"
}
] |
[
{
"msg_contents": "Since a few days ago, the windows/meson task has been spewing messages for each tap\ntest:\n\n| Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.\n\nI guess because the image is updated to use meson v1.0.0.\nhttps://github.com/mesonbuild/meson/commit/b7a5c384a1f1ba80c09904e7ef4f5160bdae3345\n\nmesonbuild/mtest.py- if version is None:\nmesonbuild/mtest.py: self.warnings.append('Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.')\nmesonbuild/mtest.py- version = 12\n\n-- \nJustin\n\n\n",
"msg_date": "Wed, 28 Dec 2022 18:35:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "windows/meson cfbot warnings"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 18:35:38 -0600, Justin Pryzby wrote:\n> Since a few days ago, the windows/meson task has been spewing messages for each tap\n> test:\n> \n> | Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.\n> \n> I guess because the image is updated to use meson v1.0.0.\n> https://github.com/mesonbuild/meson/commit/b7a5c384a1f1ba80c09904e7ef4f5160bdae3345\n> \n> mesonbuild/mtest.py- if version is None:\n> mesonbuild/mtest.py: self.warnings.append('Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.')\n> mesonbuild/mtest.py- version = 12\n\nI think the change is somewhat likely to be reverted in the next meson minor\nversion. It apparently came about due to somewhat odd language in the tap-14\nspec... So I think we should just wait for now.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Dec 2022 17:44:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows/meson cfbot warnings"
},
{
"msg_contents": "Hi,\n\nOn 2022-12-28 17:44:47 -0800, Andres Freund wrote:\n> On 2022-12-28 18:35:38 -0600, Justin Pryzby wrote:\n> > Since a few days ago, the windows/meson task has been spewing messages for each tap\n> > test:\n> > \n> > | Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.\n> > \n> > I guess because the image is updated to use meson v1.0.0.\n> > https://github.com/mesonbuild/meson/commit/b7a5c384a1f1ba80c09904e7ef4f5160bdae3345\n> > \n> > mesonbuild/mtest.py- if version is None:\n> > mesonbuild/mtest.py: self.warnings.append('Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.')\n> > mesonbuild/mtest.py- version = 12\n> \n> I think the change is somewhat likely to be reverted in the next meson minor\n> version. It apparently came about due to somewhat odd language in the tap-14\n> spec... So I think we should just wait for now.\n\nFWIW, that did happen in 1.0.1, and the output is now clean again.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 25 Feb 2023 10:29:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows/meson cfbot warnings"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 10:29:33AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2022-12-28 17:44:47 -0800, Andres Freund wrote:\n> > On 2022-12-28 18:35:38 -0600, Justin Pryzby wrote:\n> > > Since a few days ago, the windows/meson task has been spewing messages for each tap\n> > > test:\n> > > \n> > > | Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.\n> > > \n> > > I guess because the image is updated to use meson v1.0.0.\n> > > https://github.com/mesonbuild/meson/commit/b7a5c384a1f1ba80c09904e7ef4f5160bdae3345\n> > > \n> > > mesonbuild/mtest.py- if version is None:\n> > > mesonbuild/mtest.py: self.warnings.append('Unknown TAP version. The first line MUST be `TAP version <int>`. Assuming version 12.')\n> > > mesonbuild/mtest.py- version = 12\n> > \n> > I think the change is somewhat likely to be reverted in the next meson minor\n> > version. It apparently came about due to somewhat odd language in the tap-14\n> > spec... So I think we should just wait for now.\n> \n> FWIW, that did happen in 1.0.1, and the output is now clean again.\n\nUnrelated, but something else changed and now there's this.\n\nhttps://cirrus-ci.com/task/6202242768830464\n\n[20:10:34.310] c:\\cirrus>call sh -c 'if grep \": warning \" build.txt; then exit 1; fi; exit 0' \n[20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n[20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n[20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n[20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n\n\n",
"msg_date": "Sat, 25 Feb 2023 16:45:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: windows/meson cfbot warnings"
},
{
"msg_contents": "Hi,\n\nOn 2023-02-25 16:45:38 -0600, Justin Pryzby wrote:\n> Unrelated, but something else changed and now there's this.\n> \n> https://cirrus-ci.com/task/6202242768830464\n> \n> [20:10:34.310] c:\\cirrus>call sh -c 'if grep \": warning \" build.txt; then exit 1; fi; exit 0' \n> [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n\nHm, odd.\n\nThere's a bit more context about the warning in the output:\n\n[21:43:58.782] [1509/2165] Compiling C object src/pl/plpython/plpython3.dll.p/plpy_exec.c.obj\n[21:43:58.782] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n[21:43:58.924] src/pl/plpython/plpython3.dll.p/meson_pch-c.c: note: see previous definition of 'MS_WIN64'\n\nI suspect one would have to look at the external source files to find where\nit's also defined. The way the warning is triggered it seems to have to be\npredefined somewhere in/below postgres.h.\n\nIt might be that we'll get more information after disabling precompiled headers.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 26 Feb 2023 09:54:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: windows/meson cfbot warnings"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 09:54:25AM -0800, Andres Freund wrote:\n> On 2023-02-25 16:45:38 -0600, Justin Pryzby wrote:\n> > Unrelated, but something else changed and now there's this.\n> > \n> > https://cirrus-ci.com/task/6202242768830464\n> > \n> > [20:10:34.310] c:\\cirrus>call sh -c 'if grep \": warning \" build.txt; then exit 1; fi; exit 0' \n> > [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> > [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> > [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> > [20:10:34.397] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> \n> Hm, odd.\n> \n> There's a bit more context about the warning in the output:\n> \n> [21:43:58.782] [1509/2165] Compiling C object src/pl/plpython/plpython3.dll.p/plpy_exec.c.obj\n> [21:43:58.782] C:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n> [21:43:58.924] src/pl/plpython/plpython3.dll.p/meson_pch-c.c: note: see previous definition of 'MS_WIN64'\n\nI don't remember right now what I did to get the output, but it's like:\n\n[18:33:22.351] c:\\python\\Include\\pyconfig.h(117): warning C4005: 'MS_WIN64': macro redefinition\n[18:33:22.351] c:\\python\\Include\\pyconfig.h(117): note: 'MS_WIN64' previously declared on the command line\n\nI found that this is caused by changes in meson between 1.0.0 and 1.0.1.\n\nProbably one of these.\nhttps://github.com/mesonbuild/meson/commit/aa69cf04484309f82d2da64c433539d2f6f2fa82\nhttps://github.com/mesonbuild/meson/commit/9bf718fcee0d9e30b5de2d6a2f154aa417aa8d4c\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 25 Mar 2023 15:11:07 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: windows/meson cfbot warnings"
}
] |
[
{
"msg_contents": "Hi Sir.I’m Computer engineering student from Egypt Interested in Database Management Systems.Can I know details about the list of ideas for 2023 projects or how to prepare myself to be ready with the required knowledge?Please if you can help me don't ignore my email.Sincerely.Diaa Badr.\n",
"msg_date": "Thu, 29 Dec 2022 04:10:38 +0200",
"msg_from": "diaa <diaabadr82@gmail.com>",
"msg_from_op": true,
"msg_subject": "GSOC2023"
},
{
"msg_contents": "Hi,\n\nOn 12/28/22 21:10, diaa wrote:\n> *Hi Sir.*\n> I’m Computer engineering student from Egypt Interested in Database Management\n> Systems.\n> Can I know details about the list of ideas for 2023 projects or how to prepare\n> myself to be ready with the required knowledge?\n\n\nThanks for your interest in GSoC 2023 !\n\n\nThe GSoC timeline is described here [1], and the PostgreSQL projects - \nif approved - will be listed here [2].\n\n\nSo, check back in February and see which projects has been proposed.\n\n\n[1] https://developers.google.com/open-source/gsoc/timeline\n\n[2] https://wiki.postgresql.org/wiki/GSoC_2023\n\n\nBest regards,\n\n Jesper\n\n\n\n\n",
"msg_date": "Wed, 4 Jan 2023 08:02:22 -0500",
"msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: GSOC2023"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\npsql \\copy command can stream data from client host just like the normal\ncopy command can do on server host. Let's assume we want to stream a\nlocal data file from psql:\npgsql =# \\copy tbl from '/tmp/datafile' (format 'text');\nIf there's error inside the data file, \\copy will still stream the\nwhole data file\nbefore it reports the error. This is undesirable If the data file is very large,\nor it's an infinite pipe. The normal copy command which reads file on the\nserver host can report error immediately as expected.\n\nThe problem seems to be pqParseInput3(). When error occurs in server\nbackend, it will send 'E' packet back to client. During \\copy command, the\nconnection's asyncStatus is PGASYNC_COPY_IN, any 'E' packet will\nget ignored by this path:\n\nelse if (conn->asyncStatus != PGASYNC_BUSY)\n{\n/* If not IDLE state, just wait ... */\nif (conn->asyncStatus != PGASYNC_IDLE)\nreturn;\n\nSo the client can't detect the error sent back by server.\n\nI've attached a patch to demonstrate one way to workaround this. Save\nthe error via pqGetErrorNotice3() if the conn is PGASYNC_COPY_IN\nstatus. The client code(psql) can detect the error via PQerrorMessage().\nProbably still lots of details need to be considered but should be good\nenough to start this discussion. Any thoughts on this issue?\n\nBest regards\nPeifeng Qiu",
"msg_date": "Thu, 29 Dec 2022 12:18:33 +0900",
"msg_from": "Peifeng Qiu <pgsql@qiupf.dev>",
"msg_from_op": true,
"msg_subject": "psql: stop at error immediately during \\copy"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nI originally suggested $ЅUBJECT as part of the thread that ultimately led\nto the addition of PROCESS_TOAST [0], but we decided not to proceed with\nit. Recently, this idea came up again [1], so I thought I'd give it\nanother try.\n\nThe motivation for adding this option is to make it easier to VACUUM only a\nrelation's TOAST table. At the moment, you need to find the TOAST table by\nexamining a relation's reltoastrelid, and you need USAGE on the pg_toast\nschema. This option could also help make it possible to call only\nvac_update_datfrozenxid() without processing any relations, as discussed\nelsewhere [2].\n\nThe demand for all these niche VACUUM options is likely limited, but it\ndoes seem like there are some useful applications. If a new option is out\nof the question, perhaps this functionality could be added to the existing\nPROCESS_TOAST option.\n\n[0] https://postgr.es/m/BA8951E9-1524-48C5-94AF-73B1F0D7857F%40amazon.com\n[1] https://postgr.es/m/20221215191246.GA252861%40nathanxps13\n[2] https://postgr.es/m/20221229213719.GA301584%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 29 Dec 2022 16:00:28 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "rebased for cfbot\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Fri, 6 Jan 2023 21:07:24 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, 2022-12-29 at 16:00 -0800, Nathan Bossart wrote:\n> The motivation for adding this option is to make it easier to VACUUM\n> only a\n> relation's TOAST table. At the moment, you need to find the TOAST\n> table by\n> examining a relation's reltoastrelid, and you need USAGE on the\n> pg_toast\n> schema. This option could also help make it possible to call only\n> vac_update_datfrozenxid() without processing any relations, as\n> discussed\n> elsewhere [2].\n\nFor completeness, did you consider CLUSTER and REINDEX options as well?\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 15:24:09 -0800",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 03:24:09PM -0800, Jeff Davis wrote:\n> For completeness, did you consider CLUSTER and REINDEX options as well?\n\nI have not, but I can put together patches for those as well.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 13 Jan 2023 15:30:15 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Sat, 7 Jan 2023 at 10:37, Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> rebased for cfbot\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\nd540a02a724b9643205abce8c5644a0f0908f6e3 ===\n=== applying patch ./v2-0001-add-PROCESS_MAIN-to-VACUUM.patch\npatching file src/backend/commands/vacuum.c\n....\nHunk #8 FAILED at 2097.\n1 out of 8 hunks FAILED -- saving rejects to file\nsrc/backend/commands/vacuum.c.rej\n\n[1] - http://cfbot.cputube.org/patch_41_4088.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Thu, 19 Jan 2023 17:28:25 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 05:28:25PM +0530, vignesh C wrote:\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n\nrebased\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 19 Jan 2023 11:08:07 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Fri, Jan 13, 2023 at 03:30:15PM -0800, Nathan Bossart wrote:\n> On Fri, Jan 13, 2023 at 03:24:09PM -0800, Jeff Davis wrote:\n> > For completeness, did you consider CLUSTER and REINDEX options as well?\n> \n> I have not, but I can put together patches for those as well.\n\nAre you planning to do that here, on this thread ?\n\nIt seems like a good idea - it would allow simplifying an existing or\nfuture scripts which needs to reindex a toast table, saving the trouble\nof looking up the name of the toast table, and a race condition and\nserver error log if the table itself were processed. I can imagine that\nmight happen if a separate process used TRUNCATE, for example.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 20 Feb 2023 10:31:11 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Feb 20, 2023 at 10:31:11AM -0600, Justin Pryzby wrote:\n> On Fri, Jan 13, 2023 at 03:30:15PM -0800, Nathan Bossart wrote:\n>> On Fri, Jan 13, 2023 at 03:24:09PM -0800, Jeff Davis wrote:\n>> > For completeness, did you consider CLUSTER and REINDEX options as well?\n>> \n>> I have not, but I can put together patches for those as well.\n> \n> Are you planning to do that here, on this thread ?\n\nYes, I just haven't made time for it yet. IIRC I briefly looked into\nCLUSTER and decided that it was probably not worth the effort, but I still\nthink it's worth adding this option to REINDEX.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 20 Feb 2023 09:14:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Jan 19, 2023 at 11:08:07AM -0800, Nathan Bossart wrote:\n> rebased\n\nPROCESS_TOAST has that:\n /* sanity check for PROCESS_TOAST */\n if ((params->options & VACOPT_FULL) != 0 &&\n (params->options & VACOPT_PROCESS_TOAST) == 0)\n ereport(ERROR,\n (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n errmsg(\"PROCESS_TOAST required with VACUUM FULL\")));\n[...]\n- if (params->options & VACOPT_FULL)\n+ if (params->options & VACOPT_FULL &&\n+ params->options & VACOPT_PROCESS_MAIN)\n {\n\nShouldn't we apply the same rule for PROCESS_MAIN? One of the\nregression tests added means that FULL takes priority over\nPROCESS_MAIN=FALSE, which is a bit confusing IMO. \n\n@@ -190,6 +190,7 @@ typedef struct VacAttrStats\n #define VACOPT_DISABLE_PAGE_SKIPPING 0x80 /* don't skip any pages */\n #define VACOPT_SKIP_DATABASE_STATS 0x100 /* skip vac_update_datfrozenxid() */\n #define VACOPT_ONLY_DATABASE_STATS 0x200 /* only vac_update_datfrozenxid() */\n+#define VACOPT_PROCESS_MAIN 0x400 /* process main relation */\n\nPerhaps the options had better be reorganized so as PROCESS_MAIN is\njust before PROCESS_TOAST?\n\n+-- PROCESS_MAIN option\n+VACUUM (PROCESS_MAIN FALSE) vactst;\n+VACUUM (PROCESS_MAIN FALSE, PROCESS_TOAST FALSE) vactst;\n+VACUUM (PROCESS_MAIN FALSE, FULL) vactst;\n\nThinking a bit here. This set of tests does not make sure that the\nmain relation and/or the toast relation have been actually processed. \npg_stat_user_tables does not track what's happening on the toast\nrelations. So... What about adding some tests in 100_vacuumdb.pl\nthat rely on vacuumdb --verbose and check the logs produced? We\nshould make sure that the toast or the main relation are processed,\nby tracking, for example, logs like vacuuming \"schema.table\". When\nFULL is involved, we may want to track the changes on relfilenodes\ndepending on what's wanted.\n--\nMichael",
"msg_date": "Wed, 1 Mar 2023 15:31:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "Thanks for taking a look.\n\nOn Wed, Mar 01, 2023 at 03:31:48PM +0900, Michael Paquier wrote:\n> PROCESS_TOAST has that:\n> /* sanity check for PROCESS_TOAST */\n> if ((params->options & VACOPT_FULL) != 0 &&\n> (params->options & VACOPT_PROCESS_TOAST) == 0)\n> ereport(ERROR,\n> (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> errmsg(\"PROCESS_TOAST required with VACUUM FULL\")));\n> [...]\n> - if (params->options & VACOPT_FULL)\n> + if (params->options & VACOPT_FULL &&\n> + params->options & VACOPT_PROCESS_MAIN)\n> {\n> \n> Shouldn't we apply the same rule for PROCESS_MAIN? One of the\n> regression tests added means that FULL takes priority over\n> PROCESS_MAIN=FALSE, which is a bit confusing IMO. \n\nI don't think so. We disallow FULL without PROCESS_TOAST because there\npresently isn't a way to VACUUM FULL the main relation without rebuilding\nits TOAST table. However, FULL without PROCESS_MAIN can be used to run\nVACUUM FULL on only the TOAST table.\n\n> @@ -190,6 +190,7 @@ typedef struct VacAttrStats\n> #define VACOPT_DISABLE_PAGE_SKIPPING 0x80 /* don't skip any pages */\n> #define VACOPT_SKIP_DATABASE_STATS 0x100 /* skip vac_update_datfrozenxid() */\n> #define VACOPT_ONLY_DATABASE_STATS 0x200 /* only vac_update_datfrozenxid() */\n> +#define VACOPT_PROCESS_MAIN 0x400 /* process main relation */\n> \n> Perhaps the options had better be reorganized so as PROCESS_MAIN is\n> just before PROCESS_TOAST?\n\nSure.\n\n> +-- PROCESS_MAIN option\n> +VACUUM (PROCESS_MAIN FALSE) vactst;\n> +VACUUM (PROCESS_MAIN FALSE, PROCESS_TOAST FALSE) vactst;\n> +VACUUM (PROCESS_MAIN FALSE, FULL) vactst;\n> \n> Thinking a bit here. This set of tests does not make sure that the\n> main relation and/or the toast relation have been actually processed. \n> pg_stat_user_tables does not track what's happening on the toast\n> relations. So... What about adding some tests in 100_vacuumdb.pl\n> that rely on vacuumdb --verbose and check the logs produced? 
We\n> should make sure that the toast or the main relation are processed,\n> by tracking, for example, logs like vacuuming \"schema.table\". When\n> FULL is involved, we may want to track the changes on relfilenodes\n> depending on what's wanted.\n\nThat seems like a good idea.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Wed, 1 Mar 2023 09:26:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On 2023-Mar-01, Michael Paquier wrote:\n\n> +-- PROCESS_MAIN option\n> +VACUUM (PROCESS_MAIN FALSE) vactst;\n> +VACUUM (PROCESS_MAIN FALSE, PROCESS_TOAST FALSE) vactst;\n> +VACUUM (PROCESS_MAIN FALSE, FULL) vactst;\n> \n> Thinking a bit here. This set of tests does not make sure that the\n> main relation and/or the toast relation have been actually processed. \n> pg_stat_user_tables does not track what's happening on the toast\n> relations. So... What about adding some tests in 100_vacuumdb.pl\n> that rely on vacuumdb --verbose and check the logs produced? We\n> should make sure that the toast or the main relation are processed,\n> by tracking, for example, logs like vacuuming \"schema.table\". When\n> FULL is involved, we may want to track the changes on relfilenodes\n> depending on what's wanted.\n\nMaybe instead of reading the log, read values from pg_stat_all_tables.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n\n",
"msg_date": "Wed, 1 Mar 2023 19:09:53 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 07:09:53PM +0100, Alvaro Herrera wrote:\n> On 2023-Mar-01, Michael Paquier wrote:\n> \n>> +-- PROCESS_MAIN option\n>> +VACUUM (PROCESS_MAIN FALSE) vactst;\n>> +VACUUM (PROCESS_MAIN FALSE, PROCESS_TOAST FALSE) vactst;\n>> +VACUUM (PROCESS_MAIN FALSE, FULL) vactst;\n>> \n>> Thinking a bit here. This set of tests does not make sure that the\n>> main relation and/or the toast relation have been actually processed. \n>> pg_stat_user_tables does not track what's happening on the toast\n>> relations. So... What about adding some tests in 100_vacuumdb.pl\n>> that rely on vacuumdb --verbose and check the logs produced? We\n>> should make sure that the toast or the main relation are processed,\n>> by tracking, for example, logs like vacuuming \"schema.table\". When\n>> FULL is involved, we may want to track the changes on relfilenodes\n>> depending on what's wanted.\n> \n> Maybe instead of reading the log, read values from pg_stat_all_tables.\n\nHere is an attempt at that. Thanks for the idea.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 11:13:44 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 07:09:53PM +0100, Alvaro Herrera wrote:\n> Maybe instead of reading the log, read values from pg_stat_all_tables.\n\nAh, right. I was looking at pg_stat_user_tables yesterday, and forgot\nthat pg_stat_all_tables tracks toast tables, so it should be fine to\ndo some validation with that.\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 10:03:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 4:13 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Wed, Mar 01, 2023 at 07:09:53PM +0100, Alvaro Herrera wrote:\n> > On 2023-Mar-01, Michael Paquier wrote:\n> >\n> >> +-- PROCESS_MAIN option\n> >> +VACUUM (PROCESS_MAIN FALSE) vactst;\n> >> +VACUUM (PROCESS_MAIN FALSE, PROCESS_TOAST FALSE) vactst;\n> >> +VACUUM (PROCESS_MAIN FALSE, FULL) vactst;\n> >>\n> >> Thinking a bit here. This set of tests does not make sure that the\n> >> main relation and/or the toast relation have been actually processed.\n> >> pg_stat_user_tables does not track what's happening on the toast\n> >> relations. So... What about adding some tests in 100_vacuumdb.pl\n> >> that rely on vacuumdb --verbose and check the logs produced? We\n> >> should make sure that the toast or the main relation are processed,\n> >> by tracking, for example, logs like vacuuming \"schema.table\". When\n> >> FULL is involved, we may want to track the changes on relfilenodes\n> >> depending on what's wanted.\n> >\n> > Maybe instead of reading the log, read values from pg_stat_all_tables.\n>\n> Here is an attempt at that. Thanks for the idea.\n\nI've reviewed the v4 patch. Here is a minor comment:\n\n+SELECT * FROM vactst_vacuum_counts;\n+ left | vacuum_count\n+----------+--------------\n+ pg_toast | 1\n+ vactst | 0\n+(2 rows)\n\n+CREATE VIEW vactst_vacuum_counts AS\n+ SELECT left(s.relname, 8), s.vacuum_count\n+ FROM pg_stat_all_tables s\n+ LEFT JOIN pg_class c ON s.relid = c.reltoastrelid\n+ WHERE c.relname = 'vactst' OR s.relname = 'vactst'\n+ ORDER BY s.relname;\n\nCutting the toast relation name to 'pg_toast' is a bit confusing to me\nas we have the pg_toast schema. 
How about using the following query\ninstead to improve the readability?\n\n SELECT\n CASE WHEN c.relname IS NULL THEN\n s.relname\n ELSE\n 'toast for ' || c.relname\n END as relname,\n s.vacuum_count\n FROM pg_stat_all_tables s\n LEFT JOIN pg_class c ON s.relid = c.reltoastrelid\n WHERE c.relname = 'vactst' OR s.relname = 'vactst'\n\nWe will get like:\n\n SELECT * FROM vactst_vacuum_counts;\n relname | vacuum_count\n------------------+--------------\n toast for vactst | 0\n vactst | 1\n (2 rows)\n\nThe rest looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 12:58:32 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 12:58:32PM +0900, Masahiko Sawada wrote:\n> Cutting the toast relation name to 'pg_toast' is a bit confusing to me\n> as we have the pg_toast schema. How about using the following query\n> instead to improve the readability?\n> \n> SELECT\n> CASE WHEN c.relname IS NULL THEN\n> s.relname\n> ELSE\n> 'toast for ' || c.relname\n> END as relname,\n> s.vacuum_count\n> FROM pg_stat_all_tables s\n> LEFT JOIN pg_class c ON s.relid = c.reltoastrelid\n> WHERE c.relname = 'vactst' OR s.relname = 'vactst'\n\nAnother tweak that I have learnt to like is to apply a filter with\nregexp_replace(), see 090_reindexdb.pl:\nregexp_replace(b.indname::text, '(pg_toast.pg_toast_)\\\\d+(_index)', '\\\\1<oid>\\\\2')\n\nIf you make that part of the view definition, the result is the same,\nso that's up to which solution one prefers.\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 14:21:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 02:21:08PM +0900, Michael Paquier wrote:\n> On Thu, Mar 02, 2023 at 12:58:32PM +0900, Masahiko Sawada wrote:\n>> Cutting the toast relation name to 'pg_toast' is a bit confusing to me\n>> as we have the pg_toast schema. How about using the following query\n>> instead to improve the readability?\n>> \n>> SELECT\n>> CASE WHEN c.relname IS NULL THEN\n>> s.relname\n>> ELSE\n>> 'toast for ' || c.relname\n>> END as relname,\n>> s.vacuum_count\n>> FROM pg_stat_all_tables s\n>> LEFT JOIN pg_class c ON s.relid = c.reltoastrelid\n>> WHERE c.relname = 'vactst' OR s.relname = 'vactst'\n> \n> Another tweak that I have learnt to like is to apply a filter with\n> regexp_replace(), see 090_reindexdb.pl:\n> regexp_replace(b.indname::text, '(pg_toast.pg_toast_)\\\\d+(_index)', '\\\\1<oid>\\\\2')\n> \n> If you make that part of the view definition, the result is the same,\n> so that's up to which solution one prefers.\n\nHere's a new version of the patch that uses Sawada-san's suggestion.\nThanks for taking a look.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 21:26:42 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Mar 2, 2023 at 2:26 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Thu, Mar 02, 2023 at 02:21:08PM +0900, Michael Paquier wrote:\n> > On Thu, Mar 02, 2023 at 12:58:32PM +0900, Masahiko Sawada wrote:\n> >> Cutting the toast relation name to 'pg_toast' is a bit confusing to me\n> >> as we have the pg_toast schema. How about using the following query\n> >> instead to improve the readability?\n> >>\n> >> SELECT\n> >> CASE WHEN c.relname IS NULL THEN\n> >> s.relname\n> >> ELSE\n> >> 'toast for ' || c.relname\n> >> END as relname,\n> >> s.vacuum_count\n> >> FROM pg_stat_all_tables s\n> >> LEFT JOIN pg_class c ON s.relid = c.reltoastrelid\n> >> WHERE c.relname = 'vactst' OR s.relname = 'vactst'\n> >\n> > Another tweak that I have learnt to like is to apply a filter with\n> > regexp_replace(), see 090_reindexdb.pl:\n> > regexp_replace(b.indname::text, '(pg_toast.pg_toast_)\\\\d+(_index)', '\\\\1<oid>\\\\2')\n> >\n> > If you make that part of the view definition, the result is the same,\n> > so that's up to which solution one prefers.\n>\n> Here's a new version of the patch that uses Sawada-san's suggestion.\n> Thanks for taking a look.\n\nThank you for updating the patch. The patch looks good to me.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 2 Mar 2023 14:38:49 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 09:26:37AM -0800, Nathan Bossart wrote:\n> Thanks for taking a look.\n> \n> On Wed, Mar 01, 2023 at 03:31:48PM +0900, Michael Paquier wrote:\n> > PROCESS_TOAST has that:\n> > /* sanity check for PROCESS_TOAST */\n> > if ((params->options & VACOPT_FULL) != 0 &&\n> > (params->options & VACOPT_PROCESS_TOAST) == 0)\n> > ereport(ERROR,\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"PROCESS_TOAST required with VACUUM FULL\")));\n> > [...]\n> > - if (params->options & VACOPT_FULL)\n> > + if (params->options & VACOPT_FULL &&\n> > + params->options & VACOPT_PROCESS_MAIN)\n> > {\n> > \n> > Shouldn't we apply the same rule for PROCESS_MAIN? One of the\n> > regression tests added means that FULL takes priority over\n> > PROCESS_MAIN=FALSE, which is a bit confusing IMO. \n> \n> I don't think so. We disallow FULL without PROCESS_TOAST because there\n> presently isn't a way to VACUUM FULL the main relation without rebuilding\n> its TOAST table. However, FULL without PROCESS_MAIN can be used to run\n> VACUUM FULL on only the TOAST table.\n\n- if (params->options & VACOPT_FULL)\n+ if (params->options & VACOPT_FULL &&\n+ params->options & VACOPT_PROCESS_MAIN)\nPerhaps this is a bit under-parenthesized, while reading through it\nonce again..\n\n+ {\n+ /* we force VACOPT_PROCESS_MAIN so vacuum_rel() processes it */\n+ bool force_opt = ((params->options & VACOPT_PROCESS_MAIN) == 0);\n+\n+ params->options |= VACOPT_PROCESS_MAIN;\n vacuum_rel(toast_relid, NULL, params, true);\n+ if (force_opt)\n+ params->options &= ~VACOPT_PROCESS_MAIN;\nZigzagging with the centralized VacuumParams is a bit inelegant.\nPerhaps it would be neater to copy VacuumParams and append\nVACOPT_PROCESS_MAIN on the copy?\n\nAn extra question: should we check the behavior of the commands when\napplying a list of relations to VACUUM?\n--\nMichael",
"msg_date": "Thu, 2 Mar 2023 14:53:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Thu, Mar 02, 2023 at 02:53:02PM +0900, Michael Paquier wrote:\n> - if (params->options & VACOPT_FULL)\n> + if (params->options & VACOPT_FULL &&\n> + params->options & VACOPT_PROCESS_MAIN)\n> Perhaps this is a bit under-parenthesized, while reading through it\n> once again..\n\nfixed\n\n> \n> + {\n> + /* we force VACOPT_PROCESS_MAIN so vacuum_rel() processes it */\n> + bool force_opt = ((params->options & VACOPT_PROCESS_MAIN) == 0);\n> +\n> + params->options |= VACOPT_PROCESS_MAIN;\n> vacuum_rel(toast_relid, NULL, params, true);\n> + if (force_opt)\n> + params->options &= ~VACOPT_PROCESS_MAIN;\n> Zigzagging with the centralized VacuumParams is a bit inelegant.\n> Perhaps it would be neater to copy VacuumParams and append\n> VACOPT_PROCESS_MAIN on the copy?\n\ndone\n\n> An extra question: should we check the behavior of the commands when\n> applying a list of relations to VACUUM?\n\nI don't feel a strong need for that, especially now that we aren't\nmodifying params anymore.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 1 Mar 2023 22:53:59 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Wed, Mar 01, 2023 at 10:53:59PM -0800, Nathan Bossart wrote:\n> I don't feel a strong need for that, especially now that we aren't\n> modifying params anymore.\n\nThat was mostly OK for me, so applied after tweaking a couple of\nplaces in the tests (extra explanations, for one), the comments and\nthe code.\n\nThe part that improved the tests of PROCESS_TOAST was useful on its\nown, so I have done that separately as 46d490a. Another thing that I\nhave found incorrect is the need for two pg_stat_reset() calls that\nwould reflect on the whole database, in the middle of a test running\nin parallel of other things. As far as I understand, you have added\nthat to provide fresh data after a single command while relying on\nvactst, but it is possible to get the same amount of coverage by\nrelying on cumulated counts, and that gets even more solid with all\nthese commands run on an independent table.\n--\nMichael",
"msg_date": "Mon, 6 Mar 2023 16:51:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 04:51:46PM +0900, Michael Paquier wrote:\n> That was mostly OK for me, so applied after tweaking a couple of\n> places in the tests (extra explanations, for one), the comments and\n> the code.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 09:37:23 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 09:37:23AM -0800, Nathan Bossart wrote:\n> On Mon, Mar 06, 2023 at 04:51:46PM +0900, Michael Paquier wrote:\n> > That was mostly OK for me, so applied after tweaking a couple of\n> > places in the tests (extra explanations, for one), the comments and\n> > the code.\n\nI noticed in vacuum_rel() in vacuum.c where table_relation_vacuum() is\ncalled, 4211fbd84 changes the else into an else if [1]. I understand\nafter reading the commit and re-reading the code why that is now, but I\nwas initially confused. I was thinking it might be nice to have a\ncomment mentioning why there is no else case here (i.e. that the main\ntable relation will be vacuumed on the else if branch).\n\n- Melanie\n\n[1] https://github.com/postgres/postgres/blob/master/src/backend/commands/vacuum.c#L2078\n\n\n",
"msg_date": "Mon, 6 Mar 2023 14:40:09 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 02:40:09PM -0500, Melanie Plageman wrote:\n> I noticed in vacuum_rel() in vacuum.c where table_relation_vacuum() is\n> called, 4211fbd84 changes the else into an else if [1]. I understand\n> after reading the commit and re-reading the code why that is now, but I\n> was initially confused. I was thinking it might be nice to have a\n> comment mentioning why there is no else case here (i.e. that the main\n> table relation will be vacuumed on the else if branch).\n\nThis was a hack to avoid another level of indentation for that whole block\nof code, but based on your comment, it might be better to just surround\nthis entire section with an \"if (params->options & VACOPT_PROCESS_MAIN)\"\ncheck. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 12:27:34 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 3:27 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Mar 06, 2023 at 02:40:09PM -0500, Melanie Plageman wrote:\n> > I noticed in vacuum_rel() in vacuum.c where table_relation_vacuum() is\n> > called, 4211fbd84 changes the else into an else if [1]. I understand\n> > after reading the commit and re-reading the code why that is now, but I\n> > was initially confused. I was thinking it might be nice to have a\n> > comment mentioning why there is no else case here (i.e. that the main\n> > table relation will be vacuumed on the else if branch).\n>\n> This was a hack to avoid another level of indentation for that whole block\n> of code, but based on your comment, it might be better to just surround\n> this entire section with an \"if (params->options & VACOPT_PROCESS_MAIN)\"\n> check. WDYT?\n\nI think that would be clearer.\n\n- Melanie\n\n\n",
"msg_date": "Mon, 6 Mar 2023 15:48:28 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 03:48:28PM -0500, Melanie Plageman wrote:\n> On Mon, Mar 6, 2023 at 3:27 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Mon, Mar 06, 2023 at 02:40:09PM -0500, Melanie Plageman wrote:\n>> > I noticed in vacuum_rel() in vacuum.c where table_relation_vacuum() is\n>> > called, 4211fbd84 changes the else into an else if [1]. I understand\n>> > after reading the commit and re-reading the code why that is now, but I\n>> > was initially confused. I was thinking it might be nice to have a\n>> > comment mentioning why there is no else case here (i.e. that the main\n>> > table relation will be vacuumed on the else if branch).\n>>\n>> This was a hack to avoid another level of indentation for that whole block\n>> of code, but based on your comment, it might be better to just surround\n>> this entire section with an \"if (params->options & VACOPT_PROCESS_MAIN)\"\n>> check. WDYT?\n> \n> I think that would be clearer.\n\nHere's a patch. Thanks for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 6 Mar 2023 13:13:37 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 01:13:37PM -0800, Nathan Bossart wrote:\n> On Mon, Mar 06, 2023 at 03:48:28PM -0500, Melanie Plageman wrote:\n> > On Mon, Mar 6, 2023 at 3:27 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> On Mon, Mar 06, 2023 at 02:40:09PM -0500, Melanie Plageman wrote:\n> >> > I noticed in vacuum_rel() in vacuum.c where table_relation_vacuum() is\n> >> > called, 4211fbd84 changes the else into an else if [1]. I understand\n> >> > after reading the commit and re-reading the code why that is now, but I\n> >> > was initially confused. I was thinking it might be nice to have a\n> >> > comment mentioning why there is no else case here (i.e. that the main\n> >> > table relation will be vacuumed on the else if branch).\n> >>\n> >> This was a hack to avoid another level of indentation for that whole block\n> >> of code, but based on your comment, it might be better to just surround\n> >> this entire section with an \"if (params->options & VACOPT_PROCESS_MAIN)\"\n> >> check. WDYT?\n> > \n> > I think that would be clearer.\n> \n> Here's a patch. 
Thanks for reviewing.\n\n> diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> index 580f966499..fb1ef28fa9 100644\n> --- a/src/backend/commands/vacuum.c\n> +++ b/src/backend/commands/vacuum.c\n> @@ -2060,23 +2060,25 @@ vacuum_rel(Oid relid, RangeVar *relation, VacuumParams *params, bool skip_privs)\n\nI would move this comment inside of the outer if statement since it is\ndistinguishing between the two branches of the inner if statement.\n\nAlso, I would still consider putting a comment above that reminds us that\nVACOPT_PROCESS_MAIN is the default and will vacuum the main relation.\n\n> \t/*\n> \t * Do the actual work --- either FULL or \"lazy\" vacuum\n> \t */\n> -\tif ((params->options & VACOPT_FULL) &&\n> -\t\t(params->options & VACOPT_PROCESS_MAIN))\n> +\tif (params->options & VACOPT_PROCESS_MAIN)\n> \t{\n> -\t\tClusterParams cluster_params = {0};\n> +\t\tif (params->options & VACOPT_FULL)\n> +\t\t{\n> +\t\t\tClusterParams cluster_params = {0};\n> \n> -\t\t/* close relation before vacuuming, but hold lock until commit */\n> -\t\trelation_close(rel, NoLock);\n> -\t\trel = NULL;\n> +\t\t\t/* close relation before vacuuming, but hold lock until commit */\n> +\t\t\trelation_close(rel, NoLock);\n> +\t\t\trel = NULL;\n> \n> -\t\tif ((params->options & VACOPT_VERBOSE) != 0)\n> -\t\t\tcluster_params.options |= CLUOPT_VERBOSE;\n> +\t\t\tif ((params->options & VACOPT_VERBOSE) != 0)\n> +\t\t\t\tcluster_params.options |= CLUOPT_VERBOSE;\n> \n> -\t\t/* VACUUM FULL is now a variant of CLUSTER; see cluster.c */\n> -\t\tcluster_rel(relid, InvalidOid, &cluster_params);\n> +\t\t\t/* VACUUM FULL is now a variant of CLUSTER; see cluster.c */\n> +\t\t\tcluster_rel(relid, InvalidOid, &cluster_params);\n> +\t\t}\n> +\t\telse\n> +\t\t\ttable_relation_vacuum(rel, params, vac_strategy);\n> \t}\n> -\telse if (params->options & VACOPT_PROCESS_MAIN)\n> -\t\ttable_relation_vacuum(rel, params, vac_strategy);\n> \n> \t/* Roll back any GUC changes executed by index 
functions */\n> \tAtEOXact_GUC(false, save_nestlevel);\n\n\n- Melanie\n\n\n",
"msg_date": "Mon, 6 Mar 2023 17:09:58 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 05:09:58PM -0500, Melanie Plageman wrote:\n> I would move this comment inside of the outer if statement since it is\n> distinguishing between the two branches of the inner if statement.\n\nOops, done.\n\n> Also, I would still consider putting a comment above that reminds us that\n> VACOPT_PROCESS_MAIN is the default and will vacuum the main relation.\n\nI tried adding something along these lines.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 6 Mar 2023 14:43:10 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 5:43 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Mon, Mar 06, 2023 at 05:09:58PM -0500, Melanie Plageman wrote:\n> > I would move this comment inside of the outer if statement since it is\n> > distinguishing between the two branches of the inner if statement.\n>\n> Oops, done.\n>\n> > Also, I would still consider putting a comment above that reminds us that\n> > VACOPT_PROCESS_MAIN is the default and will vacuum the main relation.\n>\n> I tried adding something along these lines.\n\nLGTM.\n\n\n",
"msg_date": "Mon, 6 Mar 2023 18:12:36 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 06:12:36PM -0500, Melanie Plageman wrote:\n> LGTM.\n\n- * Do the actual work --- either FULL or \"lazy\" vacuum\n+ * If PROCESS_MAIN is set (the default), it's time to vacuum the main\n+ * relation. Otherwise, we can skip this part. If required, we'll process\n+ * the TOAST table later.\n\nShould we mention that this part could be used for a toast table once\nwe've already looped once through vacuum_rel() when toast_relid was\nset? VACOPT_PROCESS_MAIN is enforced a few lines down the road,\nstill..\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 09:20:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 09:20:12AM +0900, Michael Paquier wrote:\n> - * Do the actual work --- either FULL or \"lazy\" vacuum\n> + * If PROCESS_MAIN is set (the default), it's time to vacuum the main\n> + * relation. Otherwise, we can skip this part. If required, we'll process\n> + * the TOAST table later.\n> \n> Should we mention that this part could be used for a toast table once\n> we've already looped once through vacuum_rel() when toast_relid was\n> set? VACOPT_PROCESS_MAIN is enforced a few lines down the road,\n> still..\n\nThat did cross my mind, but I was worried that trying to explain all that\nhere could cause confusion.\n\n\tIf PROCESS_MAIN is set (the default), it's time to vacuum the main\n\trelation. Otherwise, we can skip this part. If processing the TOAST\n\ttable is required (e.g., PROCESS_TOAST is set), we'll force\n\tPROCESS_MAIN to be set when we recurse to the TOAST table so that it\n\tgets processed here.\n\nHow does that sound?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 6 Mar 2023 16:59:49 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 06, 2023 at 04:59:49PM -0800, Nathan Bossart wrote:\n> That did cross my mind, but I was worried that trying to explain all that\n> here could cause confusion.\n> \n> \tIf PROCESS_MAIN is set (the default), it's time to vacuum the main\n> \trelation. Otherwise, we can skip this part. If processing the TOAST\n> \ttable is required (e.g., PROCESS_TOAST is set), we'll force\n> \tPROCESS_MAIN to be set when we recurse to the TOAST table so that it\n> \tgets processed here.\n> \n> How does that sound?\n\nSounds clear to me, thanks! Melanie, what do you think?\n--\nMichael",
"msg_date": "Tue, 7 Mar 2023 12:45:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Mon, Mar 6, 2023 at 10:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Mar 06, 2023 at 04:59:49PM -0800, Nathan Bossart wrote:\n> > That did cross my mind, but I was worried that trying to explain all that\n> > here could cause confusion.\n> >\n> > If PROCESS_MAIN is set (the default), it's time to vacuum the main\n> > relation. Otherwise, we can skip this part. If processing the TOAST\n> > table is required (e.g., PROCESS_TOAST is set), we'll force\n> > PROCESS_MAIN to be set when we recurse to the TOAST table so that it\n> > gets processed here.\n> >\n> > How does that sound?\n>\n> Sounds clear to me, thanks! Melanie, what do you think?\n\nYes, sounds clear to me also!\n\n- Melanie\n\n\n",
"msg_date": "Tue, 7 Mar 2023 12:39:29 -0500",
"msg_from": "Melanie Plageman <melanieplageman@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 12:39:29PM -0500, Melanie Plageman wrote:\n> Yes, sounds clear to me also!\n\nHere is an updated patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Tue, 7 Mar 2023 12:55:08 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
},
{
"msg_contents": "On Tue, Mar 07, 2023 at 12:55:08PM -0800, Nathan Bossart wrote:\n> On Tue, Mar 07, 2023 at 12:39:29PM -0500, Melanie Plageman wrote:\n>> Yes, sounds clear to me also!\n> \n> Here is an updated patch.\n\nFine by me, so done. (I have cut a few words from the comment,\nwithout changing its meaning.)\n--\nMichael",
"msg_date": "Wed, 8 Mar 2023 09:57:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: add PROCESS_MAIN to VACUUM"
}
] |
[
{
"msg_contents": "This allows MERGE to UPDATE or DELETE target rows where there is no\nmatching source row. In addition, it allows the existing \"WHEN NOT\nMATCHED\" syntax to include an optional \"BY TARGET\" to make its meaning\nmore explicit. E.g.,\n\nMERGE INTO tgt USING src ON ...\n WHEN NOT MATCHED BY SOURCE THEN UPDATE/DELETE ...\n WHEN NOT MATCHED BY TARGET THEN INSERT ...\n\nAFAIK, this is not part of the standard (though I only have a very old\ndraft copy). It is supported by at least 2 other major DB vendors\nthough, and I think it usefully rounds off the set of possible MERGE\nactions.\n\nAttached is a WIP patch. I haven't updated the docs yet, and there are\nprobably a few other things to tidy up and test, but the basic\nfunctionality is there.\n\nRegards,\nDean",
"msg_date": "Fri, 30 Dec 2022 16:56:17 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Fri, 30 Dec 2022 at 16:56, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Attached is a WIP patch.\n>\n\nUpdated patch attached, now with updated docs and some other minor tidying up.\n\nRegards,\nDean",
"msg_date": "Mon, 2 Jan 2023 18:24:10 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "I haven't read this patch other than superficially; I suppose the\nfeature it's introducing is an OK one to have as an extension to the\nstandard. (I hope the community members that are committee members \nwill propose this extension to become part of the standard.)\n\nOn 2023-Jan-02, Dean Rasheed wrote:\n\n> --- a/src/backend/optimizer/prep/preptlist.c\n> +++ b/src/backend/optimizer/prep/preptlist.c\n> @@ -157,15 +157,14 @@ preprocess_targetlist(PlannerInfo *root)\n> \t\t\t/*\n> \t\t\t * Add resjunk entries for any Vars used in each action's\n> \t\t\t * targetlist and WHEN condition that belong to relations other\n> -\t\t\t * than target. Note that aggregates, window functions and\n> -\t\t\t * placeholder vars are not possible anywhere in MERGE's WHEN\n> -\t\t\t * clauses. (PHVs may be added later, but they don't concern us\n> -\t\t\t * here.)\n> +\t\t\t * than target. Note that aggregates and window functions are not\n> +\t\t\t * possible anywhere in MERGE's WHEN clauses, but PlaceHolderVars\n> +\t\t\t * may have been added by subquery pullup.\n> \t\t\t */\n> \t\t\tvars = pull_var_clause((Node *)\n> \t\t\t\t\t\t\t\t list_concat_copy((List *) action->qual,\n> \t\t\t\t\t\t\t\t\t\t\t\t\taction->targetList),\n> -\t\t\t\t\t\t\t\t 0);\n> +\t\t\t\t\t\t\t\t PVC_INCLUDE_PLACEHOLDERS);\n\nHmm, is this new because of NOT MATCHED BY SOURCE, or is it something\nthat can already be hit by existing features of MERGE? 
In other words\n-- is this a bug fix that should be backpatched ahead of introducing NOT\nMATCHED BY SOURCE?\n\n> @@ -127,10 +143,12 @@ transformMergeStmt(ParseState *pstate, M\n> \t */\n> \tis_terminal[0] = false;\n> \tis_terminal[1] = false;\n> +\tis_terminal[2] = false;\n\nI think these 0/1/2 should be replaced by the values of MergeMatchKind.\n\n> +\t/* Join type required */\n> +\tif (left_join && right_join)\n> +\t\tqry->mergeJoinType = JOIN_FULL;\n> +\telse if (left_join)\n> +\t\tqry->mergeJoinType = JOIN_LEFT;\n> +\telse if (right_join)\n> +\t\tqry->mergeJoinType = JOIN_RIGHT;\n> +\telse\n> +\t\tqry->mergeJoinType = JOIN_INNER;\n\nOne of the review comments that MERGE got initially was that parse\nanalysis was not a place to \"do query optimization\", in the sense that\nthe original code was making a decision whether to make an outer or\ninner join based on the set of WHEN clauses that appear in the command.\nThat's how we ended up with transform_MERGE_to_join and\nmergeUseOuterJoin instead. This new code is certainly not the same, but\nit makes me a bit unconfortable. Maybe it's OK, though.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Wed, 4 Jan 2023 12:57:58 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 11:03, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> I haven't read this patch other than superficially; I suppose the\n> feature it's introducing is an OK one to have as an extension to the\n> standard. (I hope the community members that are committee members\n> will propose this extension to become part of the standard.)\n>\n\nThanks for looking!\n\n> > --- a/src/backend/optimizer/prep/preptlist.c\n> > +++ b/src/backend/optimizer/prep/preptlist.c\n> > @@ -157,15 +157,14 @@ preprocess_targetlist(PlannerInfo *root)\n> > /*\n> > * Add resjunk entries for any Vars used in each action's\n> > * targetlist and WHEN condition that belong to relations other\n> > - * than target. Note that aggregates, window functions and\n> > - * placeholder vars are not possible anywhere in MERGE's WHEN\n> > - * clauses. (PHVs may be added later, but they don't concern us\n> > - * here.)\n> > + * than target. Note that aggregates and window functions are not\n> > + * possible anywhere in MERGE's WHEN clauses, but PlaceHolderVars\n> > + * may have been added by subquery pullup.\n> > */\n> > vars = pull_var_clause((Node *)\n> > list_concat_copy((List *) action->qual,\n> > action->targetList),\n> > - 0);\n> > + PVC_INCLUDE_PLACEHOLDERS);\n>\n> Hmm, is this new because of NOT MATCHED BY SOURCE, or is it something\n> that can already be hit by existing features of MERGE? 
In other words\n> -- is this a bug fix that should be backpatched ahead of introducing NOT\n> MATCHED BY SOURCE?\n>\n\nIt's new because of NOT MATCHED BY SOURCE, and I also found that I had\nto make the same change in the MERGE INTO view patch, in the case\nwhere the target view is simple enough to allow subquery pullup, but\nalso had INSTEAD OF triggers causing the pullup to happen in the\nplanner rather than the rewriter.\n\nI couldn't think of a way that it could happen with the existing MERGE\ncode though, so I don't think it's a bug that needs fixing and\nback-patching.\n\n> > @@ -127,10 +143,12 @@ transformMergeStmt(ParseState *pstate, M\n> > */\n> > is_terminal[0] = false;\n> > is_terminal[1] = false;\n> > + is_terminal[2] = false;\n>\n> I think these 0/1/2 should be replaced by the values of MergeMatchKind.\n>\n\nAgreed.\n\n> > + /* Join type required */\n> > + if (left_join && right_join)\n> > + qry->mergeJoinType = JOIN_FULL;\n> > + else if (left_join)\n> > + qry->mergeJoinType = JOIN_LEFT;\n> > + else if (right_join)\n> > + qry->mergeJoinType = JOIN_RIGHT;\n> > + else\n> > + qry->mergeJoinType = JOIN_INNER;\n>\n> One of the review comments that MERGE got initially was that parse\n> analysis was not a place to \"do query optimization\", in the sense that\n> the original code was making a decision whether to make an outer or\n> inner join based on the set of WHEN clauses that appear in the command.\n> That's how we ended up with transform_MERGE_to_join and\n> mergeUseOuterJoin instead. This new code is certainly not the same, but\n> it makes me a bit unconfortable. Maybe it's OK, though.\n>\n\nYeah I agree, it's a bit ugly. Perhaps a better solution would be to\ndo away with that field entirely and just make the decision in\ntransform_MERGE_to_join() by examining the action list again. 
That\nwould require making MergeAction's \"matched\" field a MergeMatchKind\nrather than a bool, but maybe that's not so bad, since retaining that\ninformation might prove useful one day.\n\nRegards,\nDean\n\n\n",
"msg_date": "Thu, 5 Jan 2023 13:21:09 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Thu, 5 Jan 2023 at 13:21, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 5 Jan 2023 at 11:03, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > > + /* Join type required */\n> > > + if (left_join && right_join)\n> > > + qry->mergeJoinType = JOIN_FULL;\n> > > + else if (left_join)\n> > > + qry->mergeJoinType = JOIN_LEFT;\n> > > + else if (right_join)\n> > > + qry->mergeJoinType = JOIN_RIGHT;\n> > > + else\n> > > + qry->mergeJoinType = JOIN_INNER;\n> >\n> > One of the review comments that MERGE got initially was that parse\n> > analysis was not a place to \"do query optimization\", in the sense that\n> > the original code was making a decision whether to make an outer or\n> > inner join based on the set of WHEN clauses that appear in the command.\n> > That's how we ended up with transform_MERGE_to_join and\n> > mergeUseOuterJoin instead. This new code is certainly not the same, but\n> > it makes me a bit unconfortable. Maybe it's OK, though.\n> >\n>\n> Yeah I agree, it's a bit ugly. Perhaps a better solution would be to\n> do away with that field entirely and just make the decision in\n> transform_MERGE_to_join() by examining the action list again.\n>\n\nAttached is an updated patch taking that approach, allowing\nmergeUseOuterJoin to be removed from the Query node, which I think is\nprobably a good thing.\n\nAside from that, it includes a few additional comment updates in the\nexecutor that I'd missed, and psql tab completion support.\n\nRegards,\nDean",
"msg_date": "Sat, 7 Jan 2023 12:54:50 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "Rebased version attached.\n\nRegards,\nDean",
"msg_date": "Tue, 10 Jan 2023 14:43:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Tue, 10 Jan 2023 at 14:43, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached.\n>\n\nRebased version, following 8eba3e3f02 and 5d29d525ff.\n\nRegards,\nDean",
"msg_date": "Sat, 21 Jan 2023 11:05:08 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Sat, Jan 21, 2023 at 3:05 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Tue, 10 Jan 2023 at 14:43, Dean Rasheed <dean.a.rasheed@gmail.com>\n> wrote:\n> >\n> > Rebased version attached.\n> >\n>\n> Rebased version, following 8eba3e3f02 and 5d29d525ff.\n>\n> Regards,\n> Dean\n>\nHi,\nIn transform_MERGE_to_join :\n\n+ if (action->matchKind ==\nMERGE_WHEN_NOT_MATCHED_BY_SOURCE)\n+ tgt_only_tuples = true;\n+ if (action->matchKind ==\nMERGE_WHEN_NOT_MATCHED_BY_TARGET)\n\nThere should be an `else` in front of the second `if`.\nWhen tgt_only_tuples and src_only_tuples are both true, we can come out of\nthe loop.\n\nCheers\n\nOn Sat, Jan 21, 2023 at 3:05 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:On Tue, 10 Jan 2023 at 14:43, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached.\n>\n\nRebased version, following 8eba3e3f02 and 5d29d525ff.\n\nRegards,\nDeanHi,In transform_MERGE_to_join :+ if (action->matchKind == MERGE_WHEN_NOT_MATCHED_BY_SOURCE)+ tgt_only_tuples = true;+ if (action->matchKind == MERGE_WHEN_NOT_MATCHED_BY_TARGET)There should be an `else` in front of the second `if`.When tgt_only_tuples and src_only_tuples are both true, we can come out of the loop.Cheers",
"msg_date": "Sat, 21 Jan 2023 06:17:36 -0800",
"msg_from": "Ted Yu <yuzhihong@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Sat, 21 Jan 2023 at 14:18, Ted Yu <yuzhihong@gmail.com> wrote:\n>\n> On Sat, Jan 21, 2023 at 3:05 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>>\n>> Rebased version, following 8eba3e3f02 and 5d29d525ff.\n>>\n\nAnother rebased version attached.\n\n> In transform_MERGE_to_join :\n>\n> + if (action->matchKind == MERGE_WHEN_NOT_MATCHED_BY_SOURCE)\n> + tgt_only_tuples = true;\n> + if (action->matchKind == MERGE_WHEN_NOT_MATCHED_BY_TARGET)\n>\n> There should be an `else` in front of the second `if`.\n> When tgt_only_tuples and src_only_tuples are both true, we can come out of the loop.\n>\n\nI decided not to do that. Adding an \"else\" doesn't change the code\nthat the compiler generates, and IMO it's slightly more readable\nwithout it, since it keeps the line length shorter, and the test\nconditions aligned, but that's a matter of opinion / personal\npreference.\n\nI think adding extra logic to exit the loop early if both\ntgt_only_tuples and src_only_tuples are true would be a premature\noptimisation, increasing the code size for no real benefit. In\npractice, there are unlikely to be more than a few merge actions in\nthe list.\n\nRegards,\nDean",
"msg_date": "Tue, 7 Feb 2023 10:28:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On 1/4/23 12:57, Alvaro Herrera wrote:\n> I haven't read this patch other than superficially; I suppose the\n> feature it's introducing is an OK one to have as an extension to the\n> standard. (I hope the community members that are committee members\n> will propose this extension to become part of the standard.)\n\n\nI have been doing some research on this, reading the original papers \nthat introduced the feature and its improvements.\n\nI don't see anything that ever considered what this patch proposes, even \nthough SQL Server has it. (The initial MERGE didn't even have DELETE!)\n\nSOURCE and TARGET are not currently keywords, but the only things that \ncan come after MATCHED are THEN and AND, so I don't foresee any issues \nwith us implementing this before the committee accepts such a change \nproposal. I also don't see how the committee could possibly change the \nsemantics of this, and two implementations having it is a good argument \nfor getting it in.\n\nWe should be cautious in doing something differently from SQL Server \nhere, and I would appreciate any differences being brought to my \nattention so I can incorporate them into a specification, even if that \nmeans resorting to the hated \"implementation-defined\".\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 26 Feb 2023 02:13:57 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "I see the PlaceHolderVar issue turned out to be a pre-existing bug after all.\nRebased version attached.\n\nRegards,\nDean",
"msg_date": "Sun, 19 Mar 2023 09:10:15 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On 2023-Mar-19, Dean Rasheed wrote:\n\n> diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y\n> new file mode 100644\n> index efe88cc..e1ebc8d\n> --- a/src/backend/parser/gram.y\n> +++ b/src/backend/parser/gram.y\n\n> +merge_when_tgt_matched:\n> +\t\t\tWHEN MATCHED\t\t\t\t\t{ $$ = MERGE_WHEN_MATCHED; }\n> +\t\t\t| WHEN NOT MATCHED BY SOURCE\t{ $$ = MERGE_WHEN_NOT_MATCHED_BY_SOURCE; }\n> +\t\t;\n\nI think a one-line comment on why this \"matched\" production matches \"NOT\nMATCHED BY\" would be useful. I think you have a big one in\ntransformMergeStmt already.\n\n\n> +\t\t\t/* Combine it with the action's WHEN condition */\n> +\t\t\tif (action->qual == NULL)\n> +\t\t\t\taction->qual = (Node *) ntest;\n> +\t\t\telse\n> +\t\t\t\taction->qual =\n> +\t\t\t\t\t(Node *) makeBoolExpr(AND_EXPR,\n> +\t\t\t\t\t\t\t\t\t\t list_make2(ntest, action->qual),\n> +\t\t\t\t\t\t\t\t\t\t -1);\n\nHmm, I think ->qual is already in implicit-and form, so do you really\nneed to makeBoolExpr, or would it be sufficient to append this new\ncondition to the list?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n",
"msg_date": "Tue, 21 Mar 2023 11:28:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 10:28, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > + /* Combine it with the action's WHEN condition */\n> > + if (action->qual == NULL)\n> > + action->qual = (Node *) ntest;\n> > + else\n> > + action->qual =\n> > + (Node *) makeBoolExpr(AND_EXPR,\n> > + list_make2(ntest, action->qual),\n> > + -1);\n>\n> Hmm, I think ->qual is already in implicit-and form, so do you really\n> need to makeBoolExpr, or would it be sufficient to append this new\n> condition to the list?\n>\n\nNo, this has come directly from transformWhereClause() in the parser,\nso it's an expression tree, not a list. Transforming to implicit-and\nform doesn't happen until later.\n\nLooking at it with fresh eyes though, I realise that I could have just written\n\n action->qual = make_and_qual((Node *) ntest, action->qual);\n\nwhich is equivalent, but more concise.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 21 Mar 2023 12:24:31 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On 2023-Mar-21, Dean Rasheed wrote:\n\n> Looking at it with fresh eyes though, I realise that I could have just written\n> \n> action->qual = make_and_qual((Node *) ntest, action->qual);\n> \n> which is equivalent, but more concise.\n\nNice.\n\nI have no further observations about this patch.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 21 Mar 2023 13:26:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Tue, 21 Mar 2023 at 12:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Mar-21, Dean Rasheed wrote:\n>\n> > Looking at it with fresh eyes though, I realise that I could have just written\n> >\n> > action->qual = make_and_qual((Node *) ntest, action->qual);\n> >\n> > which is equivalent, but more concise.\n>\n> Nice.\n>\n> I have no further observations about this patch.\n>\n\nLooking at this one afresh, it seems that the change to make Vars\nouter-join aware broke it -- the Var in the qual to test whether the\nsource row is null needs to be marked as nullable by the join added by\ntransform_MERGE_to_join(). That's something that needs to be done in\ntransform_MERGE_to_join(), so it makes more sense to add the new qual\nthere rather than in transformMergeStmt().\n\nAlso, now that MERGE has ruleutils support, it's clear that adding the\nqual in transformMergeStmt() isn't right anyway, since it would then\nappear in the deparsed output.\n\nSo attached is an updated patch doing that, which seems neater all\nround, since adding the qual is closely related to the join-type\nchoice, which is now a decision taken entirely in\ntransform_MERGE_to_join(). This requires a new \"mergeSourceRelation\"\nfield on the Query structure, but as before, it does away with the\n\"mergeUseOuterJoin\" field.\n\nI've also updated the ruleutils support. In the absence of any WHEN\nNOT MATCHED BY SOURCE actions, this will output not-matched actions\nsimply as \"WHEN NOT MATCHED\" for backwards compatibility, and to be\nSQL-standard-compliant. If there are any WHEN NOT MATCHED BY SOURCE\nactions though, I think it's preferable to output explicit \"BY SOURCE\"\nand \"BY TARGET\" qualifiers for all not-matched actions, to make the\nmeaning clearer.\n\nRegards,\nDean",
"msg_date": "Sat, 1 Jul 2023 13:33:40 +0100",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\nbeen no activity on this thread for 6+ months.\n\nIs anything else planned? Can you post something to elicit more\ninterest in the latest patch? Otherwise, if nothing happens then the\nCF entry will be closed (\"Returned with feedback\") at the end of this\nCF.\n\n======\n[1] https://commitfest.postgresql.org/46/4092/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 13:10:09 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Sat, 1 Jul 2023 at 18:04, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 21 Mar 2023 at 12:26, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Mar-21, Dean Rasheed wrote:\n> >\n> > > Looking at it with fresh eyes though, I realise that I could have just written\n> > >\n> > > action->qual = make_and_qual((Node *) ntest, action->qual);\n> > >\n> > > which is equivalent, but more concise.\n> >\n> > Nice.\n> >\n> > I have no further observations about this patch.\n> >\n>\n> Looking at this one afresh, it seems that the change to make Vars\n> outer-join aware broke it -- the Var in the qual to test whether the\n> source row is null needs to be marked as nullable by the join added by\n> transform_MERGE_to_join(). That's something that needs to be done in\n> transform_MERGE_to_join(), so it makes more sense to add the new qual\n> there rather than in transformMergeStmt().\n>\n> Also, now that MERGE has ruleutils support, it's clear that adding the\n> qual in transformMergeStmt() isn't right anyway, since it would then\n> appear in the deparsed output.\n>\n> So attached is an updated patch doing that, which seems neater all\n> round, since adding the qual is closely related to the join-type\n> choice, which is now a decision taken entirely in\n> transform_MERGE_to_join(). This requires a new \"mergeSourceRelation\"\n> field on the Query structure, but as before, it does away with the\n> \"mergeUseOuterJoin\" field.\n>\n> I've also updated the ruleutils support. In the absence of any WHEN\n> NOT MATCHED BY SOURCE actions, this will output not-matched actions\n> simply as \"WHEN NOT MATCHED\" for backwards compatibility, and to be\n> SQL-standard-compliant. If there are any WHEN NOT MATCHED BY SOURCE\n> actions though, I think it's preferable to output explicit \"BY SOURCE\"\n> and \"BY TARGET\" qualifiers for all not-matched actions, to make the\n> meaning clearer.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\nf2bf8fb04886e3ea82e7f7f86696ac78e06b7e60 ===\n=== applying patch ./support-merge-when-not-matched-by-source-v8.patch\n...\npatching file doc/src/sgml/ref/merge.sgml\nHunk #5 FAILED at 409.\nHunk #9 FAILED at 673.\n2 out of 9 hunks FAILED -- saving rejects to file\ndoc/src/sgml/ref/merge.sgml.rej\n..\npatching file src/include/nodes/parsenodes.h\nHunk #1 succeeded at 175 (offset -8 lines).\nHunk #2 succeeded at 1657 (offset -6 lines).\nHunk #3 succeeded at 1674 (offset -6 lines).\nHunk #4 FAILED at 1696.\n1 out of 4 hunks FAILED -- saving rejects to file\nsrc/include/nodes/parsenodes.h.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4092.log\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Fri, 26 Jan 2024 20:29:20 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Mon, 22 Jan 2024 at 02:10, Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> Hi, this patch was marked in CF as \"Needs Review\" [1], but there has\n> been no activity on this thread for 6+ months.\n>\n> Is anything else planned? Can you post something to elicit more\n> interest in the latest patch? Otherwise, if nothing happens then the\n> CF entry will be closed (\"Returned with feedback\") at the end of this\n> CF.\n>\n\nI think it has had a decent amount of review and all the review\ncomments have been addressed. I'm not quite sure from Alvaro's last\ncomment whether he was implying that he thought it was ready for\ncommit.\n\nLooking back through the thread, the general sentiment seems to be in\nfavour of adding this feature, and I still think it's worth doing, but\nI haven't managed to find much time to progress it recently.\n\nRegards,\nDean\n\n\n",
"msg_date": "Fri, 26 Jan 2024 15:48:31 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 14:59, vignesh C <vignesh21@gmail.com> wrote:\n>\n> CFBot shows that the patch does not apply anymore as in [1]:\n>\n\nRebased version attached.\n\nRegards,\nDean",
"msg_date": "Fri, 26 Jan 2024 15:48:44 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On 2024-Jan-26, Dean Rasheed wrote:\n\n> I think it has had a decent amount of review and all the review\n> comments have been addressed. I'm not quite sure from Alvaro's last\n> comment whether he was implying that he thought it was ready for\n> commit.\n\nWell, firstly this is clearly a feature we want to have, even though\nit's non-standard, because people use it and other implementations have\nit. (Eh, so maybe somebody should be talking to the SQL standard\ncommittee about it). As for code quality, I didn't do a comprehensive\nreview, but I think it is quite reasonable. Therefore, my inclination\nwould be to get it committed soonish, and celebrate it widely so that\npeople can test it soon and complain if they see something they don't\nlike.\n\nI have to say that I find the idea of booting patches as Returned with\nFeedback just because of inactivity (as opposed to unresponsive authors)\nrather wrong-headed, and I wish we didn't do it.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Fri, 26 Jan 2024 16:57:29 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Fri, 26 Jan 2024 at 15:57, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> Well, firstly this is clearly a feature we want to have, even though\n> it's non-standard, because people use it and other implementations have\n> it. (Eh, so maybe somebody should be talking to the SQL standard\n> committee about it). As for code quality, I didn't do a comprehensive\n> review, but I think it is quite reasonable. Therefore, my inclination\n> would be to get it committed soonish, and celebrate it widely so that\n> people can test it soon and complain if they see something they don't\n> like.\n>\n\nThanks. I have been going over this patch again, and for the most\npart, I'm pretty happy with it.\n\nOne thing that's bothering me though is what happens if a row being\nmerged is concurrently updated. Specifically, if a concurrent update\ncauses a formerly matching row to no longer match the join condition,\nand there are both NOT MATCHED BY SOURCE and NOT MATCHED BY TARGET\nactions, so that it's doing a full join between the source and target\nrelations. In this case, when the EPQ mechanism rescans the subplan\nnode, there will be 2 possible output tuples (one with source null,\nand one with target null), and EvalPlanQual() will just return the\nfirst one, which is a more-or-less arbitrary choice, depending on the\ntype of join (hash/merge), and (for a mergejoin) the values of the\ninner and outer join keys. Thus, it may execute a NOT MATCHED BY\nSOURCE action, or a NOT MATCHED BY TARGET action, and it's difficult\nto predict which.\n\nArguably it's not worth worrying too much about what happens in a\ncorner-case concurrent update like this, when MERGE is already\ninconsistent under other concurrent update scenarios, but I don't like\nhaving unpredictable results like this, which can depend on the plan\nchosen.\n\nI think the best (and probably simplest) solution is to always opt for\na NOT MATCHED BY TARGET action in this case, so then the result is\npredictable, and we can document what is expected to happen.\n\nRegards,\nDean",
"msg_date": "Mon, 29 Jan 2024 10:07:42 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Mon, 29 Jan 2024 at 10:07, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> One thing that's bothering me though is what happens if a row being\n> merged is concurrently updated. Specifically, if a concurrent update\n> causes a formerly matching row to no longer match the join condition,\n> and there are both NOT MATCHED BY SOURCE and NOT MATCHED BY TARGET\n> actions, so that it's doing in full join between the source and target\n> relations. In this case, when the EPQ mechanism rescans the subplan\n> node, there will be 2 possible output tuples (one with source null,\n> and one with target null), and EvalPlanQual() will just return the\n> first one, which is a more-or-less arbitrary choice, depending on the\n> type of join (hash/merge), and (for a mergejoin) the values of the\n> inner and outer join keys. Thus, it may execute a NOT MATCHED BY\n> SOURCE action, or a NOT MATCHED BY TARGET action, and it's difficult\n> to predict which.\n>\n\nI set out to rebase this on top of 5f2e179bd3 (support for MERGE into\nviews), and ended up hacking on it quite a bit. Aside from some\ncosmetic stuff, I made 3 bigger changes:\n\n1). It turned out that simply rebasing this didn't work for NOT\nMATCHED BY SOURCE actions on an auto-updatable view. This was due to\nthe fact that transformMergeStmt() puts the quals from a MERGE's join\ncondition temporarily into query->jointree->quals, as if they were\nnormal WHERE quals. That's a problem, because when the rewriter\nexpands a target auto-updatable view with its own WHERE quals, they\nend up getting added to the same overall set of WHERE quals, which\ntransform_MERGE_to_join() then attaches to the JoinExpr that it\nconstructs. That's not a problem for the INNER/RIGHT joins used\nwithout this patch, but for the LEFT/FULL joins produced when there\nare NOT MATCHED BY SOURCE actions, it produces incorrect results,\nbecause the view's WHERE quals on the target relation need to be\nunderneath the JoinExpr, not on it, to work correctly when the source\nrow is null.\n\nTo fix that, I added a new Query->mergeJoinCondition field to keep the\nMERGE join quals separate from the query's WHERE quals during query\nrewriting. That seems like a good separation to have on general\ngrounds anyway, but it's crucial to make this patch work properly. I\nadded a few more tests and this now seems to work well.\n\n\n2). Having added Query->mergeJoinCondition, it then made more sense to\nuse that in the executor to distinguish MATCHED candidate rows from\nNOT MATCHED BY SOURCE ones, rather than hacking each individual\naction's quals. This avoids an additional qual check for every action.\nThe executor now builds 3 lists of actions (one per match kind), and\nExecMergeMatched() decides at the start which list it needs to scan,\ndepending on whether or not the candidate row matches the join quals.\nThat seems somewhat neater, and helped with the next point.\n\nI'm not entirely happy with this though, since it means that the join\nquals get checked a second time when there are NOT MATCHED BY SOURCE\nactions. It would be better if it could somehow get that information\nout of the underlying join node, but I'm not sure how to do that.\n\n\n3). Thinking more about what to do if a concurrent update turns a\nmatched candidate row into a not matched one, and there are both NOT\nMATCHED BY SOURCE and NOT MATCHED BY TARGET actions, I think the right\nthing to do is to execute one action of each kind, as would happen if\nthe source and target rows had started out not matching. That's much\nbetter than arbitrarily preferring one kind of NOT MATCHED action over\nthe other.\n\nThat turned out to be relatively easy to achieve -- if\nExecMergeMatched() detects a concurrent update that causes the join\nquals to no longer pass when they used to, it switches from the\nMATCHED list of actions to the NOT MATCHED BY SOURCE list, before\nrescanning and executing the first qualifying action. Then it returns\nfalse instead of true, to cause ExecMerge() to call\nExecMergeNotMatched(), so that it also executes a NOT MATCHED BY\nTARGET action. I extended the isolation tests to test that, and the\nresults look quite good.\n\nThat'll need a little tweaking if MERGE gets RETURNING support, since\nit won't then be able to execute two actions in a single call to the\nModifyTable node. I think that should be fairly easy to deal with\nthough, just by setting a flag on the node to indicate that there is a\npending NOT MATCHED BY TARGET action to execute the next time it gets\ncalled.\n\nRegards,\nDean",
"msg_date": "Mon, 4 Mar 2024 09:44:55 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "Rebased version attached.\n\nRegards,\nDean",
"msg_date": "Wed, 13 Mar 2024 14:32:11 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Wed, 13 Mar 2024 at 14:32, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached.\n>\n\nRebased version attached, on top of c649fa24a4 (MERGE ... RETURNING support).\n\nAside from some cosmetic stuff, I've updated several tests to test\nthis together with RETURNING.\n\nThe updated isolation test tests the new interesting case where a\nconcurrent update causes a matched case to become not matched, and\nthere are both NOT MATCHED BY SOURCE and NOT MATCHED BY TARGET actions\nto execute, and RETURNING is specified so that it is forced to defer\nthe NOT MATCHED BY TARGET action until the next invocation of\nExecModifyTable(), in order to return the rows from both not matched\nactions.\n\nI also tried to tidy up ExecMergeMatched() a little --- since we know\nthat it's only ever called with matched = true, it's simpler to just\nAssert that at the top, and then only touch it in the few cases where\nit needs to be changed to false.\n\nA lot of the updates are comment updates, to try to make it clearer\nhow concurrent updates are handled, since that's a little more complex\nwith this patch.\n\nRegards,\nDean",
"msg_date": "Mon, 18 Mar 2024 08:59:28 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Mon, 18 Mar 2024 at 08:59, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached, on top of c649fa24a4 (MERGE ... RETURNING support).\n>\n\nTrivial rebase forced by 6185c9737c.\n\nRegards,\nDean",
"msg_date": "Thu, 21 Mar 2024 09:35:18 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
},
{
"msg_contents": "On Thu, 21 Mar 2024 at 09:35, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Trivial rebase forced by 6185c9737c.\n>\n\nI think it would be good to get this committed.\n\nIt has had a decent amount of review, at least up to v9, but a number\nof things have changed since then:\n\n1). Concurrent update behaviour -- now if a concurrent update causes a\nmatched candidate row to no longer match the join condition, it will\nexecute the first qualifying NOT MATCHED BY SOURCE action, and then\nthe first qualifying NOT MATCHED [BY TARGET] action. I.e., it may\nexecute 2 actions, which makes sense because if the rows had started\nout not matching, the full join would have output 2 rows.\n\n2). ResultRelInfo now has 3 lists of actions, one per match kind.\nPreviously I was putting the NOT MATCHED BY SOURCE actions in the same\nlist as the MATCHED actions, since they are both handled by\nExecMergeMatched(). However, to achieve (1) above, it turned out to be\neasier to have 3 separate lists, and this makes some other code a\nlittle neater.\n\n3). I've added a new field Query.mergeJoinCondition so that\ntransformMergeStmt() no longer puts the join conditions in\nqry->jointree->quals. That's necessary to make it work correctly on an\nauto-updatable view which might have its own quals, but it also seems\nneater anyway.\n\n4). To distinguish the MATCHED case from NOT MATCHED BY SOURCE case in\nthe executor, it now uses the join condition (previously it added a\n\"source IS [NOT] NULL\" clause to each merge action). This has the\nadvantage that it involves just one qual check per candidate row,\nrather than one for each action. On the downside, it's checking the\njoin condition twice (since the source subplan's join node already\nchecked it), but I couldn't see an easy way round that. (It only does\nthis if there are both MATCHED and NOT MATCHED BY SOURCE actions, so\nit's not making any existing queries worse.)\n\n5). To support (4), I added new fields\nModifyTablePath.mergeJoinConditions, ModifyTable.mergeJoinConditions\nand ResultRelInfo.ri_MergeJoinCondition, since the attribute numbers\nin the join condition might vary by partition.\n\n6). I got rid of Query.mergeSourceRelation, which is no longer needed,\nbecause of (4). (And as before, it also gets rid of\nQuery.mergeUseOuterJoin, since the parser is no longer making the\ndecision about what kind of join to build.)\n\n7). To support (1), I added a new field\nModifyTableState.mt_merge_pending_not_matched, because if it has to\nexecute 2 actions following a concurrent update, and there is a\nRETURNING clause, it has to defer the second action until the next\ncall to ExecModifyTable().\n\n8). I've added isolation tests to test (1).\n\n9). I've added a lot more regression tests.\n\n10). I've made a lot of comment changes in nodeModifyTable.c,\nespecially relating to the discussion around concurrent updates.\n\nOverall, I feel like this is in pretty good shape, and it manages to\nmake a few code simplifications that look quite nice.\n\nRegards,\nDean",
"msg_date": "Tue, 26 Mar 2024 08:08:52 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: MERGE ... WHEN NOT MATCHED BY SOURCE"
}
] |
[
{
"msg_contents": "Hi hackers.\nI'm studying the source code about creation of initial logical decoding snapshot. What confused me is why we must process 3 xl_running_xacts before we get to the consistent state. I think we only need 2 xl_running_xacts.\nI think we can get to consistent state when we meet the 2nd xl_running_xact with its oldestRunningXid > 1st xl_running_xact's nextXid, this means the active transactions in 1st xl_running_xact all had committed, and we have all the logs of transactions who will commit afterwards, so there is consistent state in this time point and we can export a snapshot.\nI had read the discussion in [0] and the comment of commit '955a684', but I haven't got a detailed explanation about why we need 4 stages during creation of initial logical decoding snapshot but not 3 stages.\nMy recent job is relevant to logical decoding so I want to figure this problem out, I'm very grateful if you can answer me, thanks.\n\n[0] https://www.postgresql.org/message-id/flat/f37e975c-908f-858e-707f-058d3b1eb214%402ndquadrant.com\n\n--\nBest regards\nChong Wang\nGreenplum DataFlow team",
"msg_date": "Fri, 30 Dec 2022 18:26:46 +0000",
"msg_from": "Chong Wang <chongwa@vmware.com>",
"msg_from_op": true,
"msg_subject": "Question about initial logical decoding snapshot"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 11:57 PM Chong Wang <chongwa@vmware.com> wrote:\n>\n> I'm studying the source code about creation of initial logical decoding snapshot. What confused me is that why must we process 3 xl_running_xacts before we get to the consistent state. I think we only need 2 xl_running_xacts.\n>\n> I think we can get to consistent state when we meet the 2nd xl_running_xact with its oldestRunningXid > 1st xl_running_xact's nextXid, this means the active transactions in 1st xl_running_xact all had commited, and we have all the logs of transactions who will commit afterwards, so there is consistent state in this time point and we can export a snapshot.\n>\n\nYeah, we will have logs for all transactions in such a case but I\nthink we won't have a valid snapshot by that time. Consider a case\nthat there are two transactions 723,724 in the 2nd xl_running_xact\nrecord for which we have waited to finish and then consider that point\nas a consistent point and exported that snapshot. It is quite possible\nthat by that time the commit record of one or more of those xacts (say\n724) wouldn't have been encountered by decoding process and that means\nit won't be recorded in the xip list of the snapshot (we do that in\nDecodeCommit->SnapBuildCommitTxn). So, during export in function\nSnapBuildInitialSnapshot(), we will consider 723 as committed and 724\nas running. This could lead to inconsistent data on the client\nside that imports such a snapshot and uses it for copy and further\nreplicating the other xacts.\n\nOTOH, currently, before marking snapshot state as consistent we wait\nfor these xacts to finish and for another xl_running_xact where\noldestRunningXid >= builder->next_phase_at to appear which means the\ncommit for both 723 and 724 would have appeared in the snapshot.\n\nDoes that make sense to you, or am I missing something here?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:44:23 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about initial logical decoding snapshot"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 4:44 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Dec 30, 2022 at 11:57 PM Chong Wang <chongwa@vmware.com> wrote:\n> >\n> > I'm studying the source code about creation of initial logical decoding snapshot. What confused me is that why must we process 3 xl_running_xacts before we get to the consistent state. I think we only need 2 xl_running_xacts.\n> >\n> > I think we can get to consistent state when we meet the 2nd xl_running_xact with its oldestRunningXid > 1st xl_running_xact's nextXid, this means the active transactions in 1st xl_running_xact all had commited, and we have all the logs of transactions who will commit afterwards, so there is consistent state in this time point and we can export a snapshot.\n> >\n>\n> Yeah, we will have logs for all transactions in such a case but I\n> think we won't have a valid snapshot by that time. Consider a case\n> that there are two transactions 723,724 in the 2nd xl_running_xact\n> record for which we have waited to finish and then consider that point\n> as a consistent point and exported that snapshot. It is quite possible\n> that by that time the commit record of one or more of those xacts (say\n> 724) wouldn't have been encountered by decoding process and that means\n> it won't be recorded in the xip list of the snapshot (we do that in\n> DecodeCommit->SnapBuildCommitTxn). So, during export in function\n> SnapBuildInitialSnapshot(), we will consider 723 as committed and 724\n> as running. This could not lead to inconsistent data on the client\n> side that imports such a snapshot and use it for copy and further\n> replicating the other xacts.\n>\n> OTOH, currently, before marking snapshot state as consistent we wait\n> for these xacts to finish and for another xl_running_xact where\n> oldestRunningXid >= builder->next_phase_at to appear which means the\n> commit for both 723 and 724 would have appeared in the snapshot.\n>\n> Does that makes sense to you or am, I missing something here?\n>\n\nYou can also refer to the discussion in the thread [1] which is\nrelated to your question.\n\n[1] - https://www.postgresql.org/message-id/c94be044-818f-15e3-1ad3-7a7ae2dfed0a%40iki.fi\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:49:04 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about initial logical decoding snapshot"
},
{
"msg_contents": "Hello,\n\nI was curious as to why we need 3rd running_xact and wanted to learn\nmore about it, so I have made a few changes to come up with a patch\nwhich builds the snapshot in 2 running_xacts. The motive is to run the\ntests to see the failures/issues with this approach to understand the\nneed of reading 3rd running_xact to build a consistent snapshot. On\nthis patch, I have got one test-failure which is\ntest_decoding/twophase_snapshot.\n\nApproach:\nWhen we start building a snapshot, on the occurrence of first\nrunning_xact, move the state from START to BUILDING and wait for all\nin-progress transactions to finish. On the second running_xact where\nwe find oldestRunningXid >= 1st xl_running_xact's nextXid, move to\nCONSISTENT state. So, it means all the transactions started before\nBUILDING state are now finished and all the new transactions that are\ncurrently in progress are the ones that are started after BUILDING\nstate and thus have enough info to be decoded.\n\nFailure analysis for twophase_snapshot test:\nAfter the patch application, test-case fails because slot is created\nsooner and 'PREPARE TRANSACTION test1' is available as result of first\n'pg_logical_slot_get_changes' itself. Intent of this testcase is to\nsee how two-phase txn is handled when snapshot-build completes in 3\nstages (BUILDING-->FULL-->CONSISTENT). Originally, the PREPARED txn is\nstarted between FULL and CONSISTENT stage and thus as per the current\ncode logic, 'DecodePrepare' will skip it. Please see code in\nDecodePrepare:\n\n /* We can't start streaming unless a consistent state is reached. */\n if (SnapBuildCurrentState(builder) < SNAPBUILD_CONSISTENT)\n {\n ReorderBufferSkipPrepare(ctx->reorder, xid);\n return;\n }\n\nSo first 'pg_logical_slot_get_changes' will not show these changes.\nOnce we do 'commit prepared' after CONSISTENT state is reached, it\nwill be available for next 'pg_logical_slot_get_changes' to consume.\n\nOn the other hand, after the current patch, since we reach consistent\nstate sooner, so with the same test-case, PREPARED transaction now\nends up starting after CONSISTENT state and thus will be available to\nbe consumed by first 'pg_logical_slot_get_changes' itself. This makes\nthe testcase fail.\n\nPlease note that in the patch, I have maintained 'WAIT for all running\ntransactions to end' even after reaching CONSISTENT state. I have\ntried running tests even after removing that WAIT after CONSISTENT,\nwith that, we get one more test failure which is\ntest_decoding/ondisk_startup. The reason for failure here is the same\nas previous case i.e., since we reach CONSISTENT state earlier,\nslot-creation finishes faster and thus we see a slight change in result\nfor this test. ('step s1init completed' seen earlier in log file).\n\nBoth the failing tests here are written in such a way that they align\nwith the 3-phase snapshot build process. Otherwise, I do not see any\nlogical issues yet with this approach based on the test-cases\navailable so far.\n\nSo, I still have not gotten clarity on why we need 3rd running_xact\nhere. In code, I see a comment in SnapBuildFindSnapshot() which says\n\"c) ...But for older running transactions no viable snapshot exists\nyet, so CONSISTENT will only be reached once all of those have\nfinished.\" This comment refers to txns started between BUILDING and\nFULL state. I do not understand it fully. I am not sure what tests I\nneed to run on the patch to reproduce this issue where we do not have\na viable snapshot when we go by two running_xacts only.\n\nAny thoughts/comments are most welcome. Attached the patch for review.\n\nThanks\nShveta\n\nOn Fri, Dec 30, 2022 at 11:57 PM Chong Wang <chongwa@vmware.com> wrote:\n>\n> Hi hackers.\n>\n> I'm studying the source code about creation of initial logical decoding snapshot. What confused me is that why must we process 3 xl_running_xacts before we get to the consistent state. I think we only need 2 xl_running_xacts.\n>\n> I think we can get to consistent state when we meet the 2nd xl_running_xact with its oldestRunningXid > 1st xl_running_xact's nextXid, this means the active transactions in 1st xl_running_xact all had commited, and we have all the logs of transactions who will commit afterwards, so there is consistent state in this time point and we can export a snapshot.\n>\n> I had read the discussion in [0] and the comment of commit '955a684', but I haven't got a detailed explanation about why we need 4 stages during creation of initial logical decoding snapshot but not 3 stages.\n>\n> My rencent job is relevant to logical decoding so I want to figure this problem out, I'm very grateful if you can answer me, thanks.\n>\n>\n>\n> [0] https://www.postgresql.org/message-id/flat/f37e975c-908f-858e-707f-058d3b1eb214%402ndquadrant.com\n>\n>\n>\n> --\n>\n> Best regards\n>\n> Chong Wang\n>\n> Greenplum DataFlow team",
"msg_date": "Wed, 18 Jan 2023 16:48:49 +0530",
"msg_from": "shveta malik <shveta.malik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Question about initial logical decoding snapshot"
}
] |
[
{
"msg_contents": "",
"msg_date": "Fri, 30 Dec 2022 17:12:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "typos"
},
{
"msg_contents": "On Fri, Dec 30, 2022 at 05:12:57PM -0600, Justin Pryzby wrote:\n\n # Use larger ccache cache, as this task compiles with multiple compilers /\n # flag combinations\n- CCACHE_MAXSIZE: \"1GB\"\n+ CCACHE_MAXSIZE: \"1G\"\n\nIn 0006, I am not sure how much this matters. Perhaps somebody more\nfluent with Cirrus, though, has a different opinion..\n\n * pointer to this structure. The information here must be sufficient to\n * properly initialize each new TableScanDesc as workers join the scan, and it\n- * must act as a information what to scan for those workers.\n+ * must provide information what to scan for those workers.\n\nThis comment in 0009 is obviously incorrect, but I am not sure whether\nyour new suggestion is an improvement. Do workers provide such\ninformation or has this structure some information that the workers\nrely on?\n\nNot sure that the whitespace issue in 0021 for the header of inval.c\nis worth caring about.\n\n0014 and 0013 do not reduce the translation workload, as the messages\ninclude some stuff specific to the GUC names accessed to, or some\nspecific details about the code paths triggered.\n\nThe rest has been applied where they matter.\n--\nMichael",
"msg_date": "Tue, 3 Jan 2023 16:28:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 12:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 30, 2022 at 05:12:57PM -0600, Justin Pryzby wrote:\n>\n> # Use larger ccache cache, as this task compiles with multiple compilers /\n> # flag combinations\n> - CCACHE_MAXSIZE: \"1GB\"\n> + CCACHE_MAXSIZE: \"1G\"\n>\n> In 0006, I am not sure how much this matters.\n>\n\nThe other places in that file use M, so maybe, this is more consistent.\n\nOne minor comment:\n- spoken in Belgium (BE), with a <acronym>UTF-8</acronym> character set\n+ spoken in Belgium (BE), with a <acronym>UTF</acronym>-8 character set\n\nShouldn't this be <acronym>UTF8</acronym> as we are using in func.sgml?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 3 Jan 2023 13:03:01 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 01:03:01PM +0530, Amit Kapila wrote:\n> One minor comment:\n> - spoken in Belgium (BE), with a <acronym>UTF-8</acronym> character set\n> + spoken in Belgium (BE), with a <acronym>UTF</acronym>-8 character set\n> \n> Shouldn't this be <acronym>UTF8</acronym> as we are using in func.sgml?\n\nYeah, I was wondering as well why this change is not worse, which is\nwhy I left it out of 33ab0a2. There is an acronym for UTF in\nacronym.sgml, which makes sense to me, but that's the only place where \nthis is used. To add more on top of that, the docs basically need\nonly UTF8, and we have three references to UTF-16, none of them using\nthe <acronym> markup.\n--\nMichael",
"msg_date": "Tue, 3 Jan 2023 17:41:58 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On 03.01.23 09:41, Michael Paquier wrote:\n> On Tue, Jan 03, 2023 at 01:03:01PM +0530, Amit Kapila wrote:\n>> One minor comment:\n>> - spoken in Belgium (BE), with a <acronym>UTF-8</acronym> character set\n>> + spoken in Belgium (BE), with a <acronym>UTF</acronym>-8 character set\n>>\n>> Shouldn't this be <acronym>UTF8</acronym> as we are using in func.sgml?\n> \n> Yeah, I was wondering as well why this change is not worse, which is\n> why I left it out of 33ab0a2. There is an acronym for UTF in\n> acronym.sgml, which makes sense to me, but that's the only place where\n> this is used. To add more on top of that, the docs basically need\n> only UTF8, and we have three references to UTF-16, none of them using\n> the <acronym> markup.\n\nThe thing is called \"UTF-8\". Here, we are not talking about the \nPostgreSQL identifier.\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:16:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 04:28:29PM +0900, Michael Paquier wrote:\n> On Fri, Dec 30, 2022 at 05:12:57PM -0600, Justin Pryzby wrote:\n> \n> # Use larger ccache cache, as this task compiles with multiple compilers /\n> # flag combinations\n> - CCACHE_MAXSIZE: \"1GB\"\n> + CCACHE_MAXSIZE: \"1G\"\n> \n> In 0006, I am not sure how much this matters. Perhaps somebody more\n> fluent with Cirrus, though, has a different opinion..\n\nIt's got almost nothing to do with cirrus. It's an environment\nvariable, and we're using a suffix other than what's\nsupported/documented by ccache, which only happens to work.\n\n> 0014 and 0013 do not reduce the translation workload, as the messages\n> include some stuff specific to the GUC names accessed to, or some\n> specific details about the code paths triggered.\n\nIt seems to matter because otherwise the translators sometimes re-type\nthe view name, which (not surprisingly) can get messed up, which is how\nI mentioned having noticed this.\n\nOn Tue, Jan 03, 2023 at 05:41:58PM +0900, Michael Paquier wrote:\n> On Tue, Jan 03, 2023 at 01:03:01PM +0530, Amit Kapila wrote:\n> > One minor comment:\n> > - spoken in Belgium (BE), with a <acronym>UTF-8</acronym>\n> > character set\n> > + spoken in Belgium (BE), with a <acronym>UTF</acronym>-8\n> > character set\n> > \n> > Shouldn't this be <acronym>UTF8</acronym> as we are using in\n> > func.sgml?\n> \n> Yeah, I was wondering as well why this change is not worse, which is\n> why I left it out of 33ab0a2. There is an acronym for UTF in\n> acronym.sgml, which makes sense to me, but that's the only place where \n> this is used. 
To add more on top of that, the docs basically need\n> only UTF8, and we have three references to UTF-16, none of them using\n> the <acronym> markup.\n\nI changed it for consistency, as it's the only thing that says <>UTF-8<>\nanywhere, and charset.sgml already says <>UTF<>-8 elsewhere.\n\nAlternately, I suggest to change charset to say <>UTF8<> in both places.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 3 Jan 2023 15:39:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 03, 2023 at 03:39:22PM -0600, Justin Pryzby wrote:\n> On Tue, Jan 03, 2023 at 04:28:29PM +0900, Michael Paquier wrote:\n> > On Fri, Dec 30, 2022 at 05:12:57PM -0600, Justin Pryzby wrote:\n> > \n> > # Use larger ccache cache, as this task compiles with multiple compilers /\n> > # flag combinations\n> > - CCACHE_MAXSIZE: \"1GB\"\n> > + CCACHE_MAXSIZE: \"1G\"\n> > \n> > In 0006, I am not sure how much this matters. Perhaps somebody more\n> > fluent with Cirrus, though, has a different opinion..\n> \n> It's got almost nothing to do with cirrus. It's an environment\n> variable, and we're using a suffix other than what's\n> supported/documented by ccache, which only happens to work.\n> \n> > 0014 and 0013 do not reduce the translation workload, as the messages\n> > include some stuff specific to the GUC names accessed to, or some\n> > specific details about the code paths triggered.\n> \n> It seems to matter because otherwise the translators sometimes re-type\n> the view name, which (not surprisingly) can get messed up, which is how\n> I mentioned having noticed this.\n> \n> On Tue, Jan 03, 2023 at 05:41:58PM +0900, Michael Paquier wrote:\n> > On Tue, Jan 03, 2023 at 01:03:01PM +0530, Amit Kapila wrote:\n> > > One minor comment:\n> > > - spoken in Belgium (BE), with a <acronym>UTF-8</acronym>\n> > > character set\n> > > + spoken in Belgium (BE), with a <acronym>UTF</acronym>-8\n> > > character set\n> > > \n> > > Shouldn't this be <acronym>UTF8</acronym> as we are using in\n> > > func.sgml?\n> > \n> > Yeah, I was wondering as well why this change is not worse, which is\n> > why I left it out of 33ab0a2. There is an acronym for UTF in\n> > acronym.sgml, which makes sense to me, but that's the only place where \n> > this is used. 
To add more on top of that, the docs basically need\n> > only UTF8, and we have three references to UTF-16, none of them using\n> > the <acronym> markup.\n> \n> I changed it for consistency, as it's the only thing that says <>UTF-8<>\n> anywhere, and charset.sgml already says <>UTF<>-8 elsewhere.\n> \n> Alternately, I suggest to change charset to say <>UTF8<> in both places.\n\nAs attached.\nThis also fixes \"specualtive\" in Amit's recent commit.\n\n-- \nJustin",
"msg_date": "Mon, 9 Jan 2023 22:57:22 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 10:27 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Jan 03, 2023 at 03:39:22PM -0600, Justin Pryzby wrote:\n> > On Tue, Jan 03, 2023 at 04:28:29PM +0900, Michael Paquier wrote:\n> > > On Fri, Dec 30, 2022 at 05:12:57PM -0600, Justin Pryzby wrote:\n> > >\n> > > # Use larger ccache cache, as this task compiles with multiple compilers /\n> > > # flag combinations\n> > > - CCACHE_MAXSIZE: \"1GB\"\n> > > + CCACHE_MAXSIZE: \"1G\"\n> > >\n> > > In 0006, I am not sure how much this matters. Perhaps somebody more\n> > > fluent with Cirrus, though, has a different opinion..\n> >\n> > It's got almost nothing to do with cirrus. It's an environment\n> > variable, and we're using a suffix other than what's\n> > supported/documented by ccache, which only happens to work.\n> >\n> > > 0014 and 0013 do not reduce the translation workload, as the messages\n> > > include some stuff specific to the GUC names accessed to, or some\n> > > specific details about the code paths triggered.\n> >\n> > It seems to matter because otherwise the translators sometimes re-type\n> > the view name, which (not surprisingly) can get messed up, which is how\n> > I mentioned having noticed this.\n> >\n> > On Tue, Jan 03, 2023 at 05:41:58PM +0900, Michael Paquier wrote:\n> > > On Tue, Jan 03, 2023 at 01:03:01PM +0530, Amit Kapila wrote:\n> > > > One minor comment:\n> > > > - spoken in Belgium (BE), with a <acronym>UTF-8</acronym>\n> > > > character set\n> > > > + spoken in Belgium (BE), with a <acronym>UTF</acronym>-8\n> > > > character set\n> > > >\n> > > > Shouldn't this be <acronym>UTF8</acronym> as we are using in\n> > > > func.sgml?\n> > >\n> > > Yeah, I was wondering as well why this change is not worse, which is\n> > > why I left it out of 33ab0a2. There is an acronym for UTF in\n> > > acronym.sgml, which makes sense to me, but that's the only place where\n> > > this is used. 
To add more on top of that, the docs basically need\n> > > only UTF8, and we have three references to UTF-16, none of them using\n> > > the <acronym> markup.\n> >\n> > I changed it for consistency, as it's the only thing that says <>UTF-8<>\n> > anywhere, and charset.sgml already says <>UTF<>-8 elsewhere.\n> >\n> > Alternately, I suggest to change charset to say <>UTF8<> in both places.\n>\n> As attached.\n> This also fixes \"specualtive\" in Amit's recent commit.\n>\n\nThanks for noticing this. I'll take care of this and some other typo\npatches together.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Jan 2023 12:24:40 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 12:24:40PM +0530, Amit Kapila wrote:\n> Thanks for noticing this. I'll take care of this and some other typo\n> patches together.\n\nDoes this include 0010? I was just looking at the whole set and this\none looked like a cleanup worth on its own so I was going to handle\nit, until I saw your update. If you are also looking at that, I won't\nstand in your way, of course :) \n--\nMichael",
"msg_date": "Tue, 10 Jan 2023 16:48:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 1:18 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 10, 2023 at 12:24:40PM +0530, Amit Kapila wrote:\n> > Thanks for noticing this. I'll take care of this and some other typo\n> > patches together.\n>\n> Does this include 0010? I was just looking at the whole set and this\n> one looked like a cleanup worth on its own so I was going to handle\n> it, until I saw your update. If you are also looking at that, I won't\n> stand in your way, of course :)\n>\n\nI have not yet started, so please go ahead.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Tue, 10 Jan 2023 13:55:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 01:55:56PM +0530, Amit Kapila wrote:\n> I have not yet started, so please go ahead.\n\nOkay, I have looked at that and fixed the whole new things, including\nthe typo you have introduced. 0001~0004 have been left out, as of the\nsame reasons as upthread.\n--\nMichael",
"msg_date": "Wed, 11 Jan 2023 15:26:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "Some more accumulated/new typos.",
"msg_date": "Wed, 8 Feb 2023 09:56:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: typos"
},
{
"msg_contents": "On Wed, Feb 08, 2023 at 09:56:44AM -0600, Justin Pryzby wrote:\n> Some more accumulated/new typos.\n\n0001 has been a debate for a long time, and it depends on the way SQL\nis spelled. For reference:\n$ git grep -i \" an sql\" -- *.c | wc -l\n63\n$ git grep -i \" a sql\" -- *.c | wc -l\n135\n\n0005 can indeed fix a lot of confusion around the spaces after an\n\"else if\" block. Is that something that could be automated with the\nindentation, though? Same remark for 0009 and 0010.\n\nApplied 0002, 0003, 0004, 0006, after rewording a bit 0003 to mention\nthe compression type.\n--\nMichael",
"msg_date": "Thu, 9 Feb 2023 14:45:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: typos"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Feb 08, 2023 at 09:56:44AM -0600, Justin Pryzby wrote:\n>> Some more accumulated/new typos.\n\n> 0005 can indeed fix a lot of confusion around the spaces after an\n> \"else if\" block. Is that something that could be automated with the\n> indentation, though? Same remark for 0009 and 0010.\n\nI see your point about 0005, but I've never seen pgindent remove\nvertical whitespace once it's been added. Not sure what it'd take\nto teach it to do so, or whether we'd like the results.\n\nI'd reject 0009 and 0010 altogether --- they don't add any readability\nthat's worth the potential increase in back-patch problems.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Feb 2023 01:36:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: typos"
}
] |
[
{
"msg_contents": "Hi,\n\n\nThis is in reference to BUG #5705 and corresponding todo item: Fix \n/contrib/btree_gist's implementation of inet indexing\n\nIssue: SELECT '1.255.255.200/8'::inet < '1.0.0.0'::inet didn't worked \nwith index.\n\nI am not able to repro this issue.\n\nSteps:\n\nSELECT '1.255.255.200/8'::inet < '1.0.0.0'::inet;\n ?column?\n----------\n t\n(1 row)\n\nCREATE TABLE inet_test (a inet);\nINSERT INTO inet_test VALUES ('1.255.255.200/8');\n\nSELECT * FROM inet_test WHERE a < '1.0.0.0'::inet;\n a\n-----------------\n 1.255.255.200/8\n(1 row)\n\nEXPLAIN ANALYZE SELECT * FROM inet_test WHERE a < '1.0.0.0'::inet;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------\n Seq Scan on inet_test (cost=0.00..1.01 rows=1 width=32) (actual \ntime=0.032..0.033 rows=1 loops=1)\n Filter: (a < '1.0.0.0'::inet)\n Planning Time: 0.040 ms\n Execution Time: 0.049 ms\n(4 rows)\n\nUPDATE pg_opclass SET opcdefault=true WHERE opcname = 'inet_ops';\n\nCREATE INDEX inet_test_idx ON inet_test USING gist (a);\nSET enable_seqscan = false;\n\nSELECT * FROM inet_test WHERE a < '1.0.0.0'::inet;\n a\n-----------------\n 1.255.255.200/8\n\n## This was expected to return 0 rows as in bug report\n\nEXPLAIN analyze SELECT * FROM inet_test WHERE a < '1.0.0.0'::inet;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------\n-----\n Index Only Scan using inet_test_idx on inet_test (cost=0.12..8.14 \nrows=1 width=32) (actual time=0.024..0.025 rows=1 loop\ns=1)\n Index Cond: (a < '1.0.0.0'::inet)\n Heap Fetches: 1\n Planning Time: 0.056 ms\n Execution Time: 0.044 ms\n(5 rows)\n\nGist index works fine as opposed to issue reported in the bug. Bug \nshould be marked as resolved and todo item can be removed.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 14:02:03 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Request for removal of BUG #5705 from todo items as no repro"
},
{
"msg_contents": "Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n> This is in reference to BUG #5705 and corresponding todo item: Fix \n> /contrib/btree_gist's implementation of inet indexing\n\n> I am not able to repro this issue.\n\nYou didn't test it right: the complaint is about the btree_gist\nextension, not the in-core inet opclass, which didn't even\nexist when this bug was filed. AFAICS btree_gist is still\nbroken. See\n\nhttps://www.postgresql.org/message-id/flat/201010112055.o9BKtZf7011251%40wwwmaster.postgresql.org\n\nThe commit message for f23a5630e may also be informative:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=f23a5630e\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 31 Dec 2022 13:02:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for removal of BUG #5705 from todo items as no repro"
},
{
"msg_contents": "\nOn 31/12/22 23:32, Tom Lane wrote:\n> Ankit Kumar Pandey <itsankitkp@gmail.com> writes:\n>> This is in reference to BUG #5705 and corresponding todo item: Fix\n>> /contrib/btree_gist's implementation of inet indexing\n>> I am not able to repro this issue.\n> You didn't test it right: the complaint is about the btree_gist\n> extension, not the in-core inet opclass, which didn't even\n> exist when this bug was filed. AFAICS btree_gist is still\n> broken. See\n>\n> https://www.postgresql.org/message-id/flat/201010112055.o9BKtZf7011251%40wwwmaster.postgresql.org\n>\n> The commit message for f23a5630e may also be informative:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=f23a5630e\n>\n> \t\t\tregards, tom lane\n\nHi,\n\nSorry I missed this. Thanks for the pointer, I will check this again \nproperly.\n\n-- \nRegards,\nAnkit Kumar Pandey\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 23:36:49 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for removal of BUG #5705 from todo items as no repro"
}
] |
[
{
"msg_contents": "Changes\n\n * add a new script |manage_alerts.pl| that lets the user enable or\n disable alerts for an animal\n This is especially useful in the case of animals that have stopped\n running for some reason.\n * check if a branch is up to date before trying to run it\n This only applies if the |branches_to_build| setting is a keyword\n rather than a list of branches. It reduces the number of useless\n calls to |git pull| to almost zero.\n * require Perl version 5.14 or later\n This should not be a problem, as it's more than 10 years old.\n * add |--avoid-ts-collisions| command line parameter\n This is for specialized uses, and imposes a penalty of a few seconds\n per run. |run_branches.pl| already does this, so it's not required for\n normal operations.\n * run TAP tests for |src/interfaces| subdirectories\n * add amcheck and extension upgrade tests to cross version upgrade testing\n * adjust to changes in postgres code, file locations, etc.\n * assorted minor bug fixes and tweaks\n\n\nThe release can be downloaded from\n\n<https://github.com/PGBuildFarm/client-code/releases/tag/REL_15> or\n<https://buildfarm.postgresql.org/downloads>\n\nUpgrading is highly recommended.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 10:02:32 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Announcing Release 15 of the PostgreSQL Buildfarm client"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 10:02:32AM -0500, Andrew Dunstan wrote:\n> * check if a branch is up to date before trying to run it\n> This only applies if the |branches_to_build| setting is a keyword\n> rather than a list of branches. It reduces the number of useless\n> calls to |git pull| to almost zero.\n\nThis new reliance on buildfarm.postgresql.org/branches_of_interest.json is\ntrouble for non-SSL buildfarm animals.\nhttp://buildfarm.postgresql.org/branches_of_interest.txt has an exemption to\nallow serving over plain http, but the json URL just redirects the client to\nhttps. Can the json file get the same exemption-from-redirect that the txt\nfile has?\n\n\n",
"msg_date": "Sat, 31 Dec 2022 17:55:51 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Announcing Release 15 of the PostgreSQL Buildfarm client"
},
{
"msg_contents": "\nOn 2022-12-31 Sa 20:55, Noah Misch wrote:\n> On Sat, Dec 31, 2022 at 10:02:32AM -0500, Andrew Dunstan wrote:\n>> * check if a branch is up to date before trying to run it\n>> This only applies if the |branches_to_build| setting is a keyword\n>> rather than a list of branches. It reduces the number of useless\n>> calls to |git pull| to almost zero.\n> This new reliance on buildfarm.postgresql.org/branches_of_interest.json is\n> trouble for non-SSL buildfarm animals.\n> http://buildfarm.postgresql.org/branches_of_interest.txt has an exemption to\n> allow serving over plain http, but the json URL just redirects the client to\n> https. Can the json file get the same exemption-from-redirect that the txt\n> file has?\n\n\nI didn't realize there were animals left other than mine which had this\nissue. I asked the admins some weeks ago to fix this (I don't have\nprivilege to do so), but have not had a response yet. The temporary\nworkaround is to use a list of named branches, e.g. instead of 'ALL' use\n[qw(REL_11_STABLE REL_12_STABLE REL_13_STABLE REL_14_STABLE\nREL_15_STABLE HEAD)]\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sat, 31 Dec 2022 21:11:04 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Announcing Release 15 of the PostgreSQL Buildfarm client"
},
{
"msg_contents": "\nOn 2022-12-31 Sa 21:11, Andrew Dunstan wrote:\n> On 2022-12-31 Sa 20:55, Noah Misch wrote:\n>> On Sat, Dec 31, 2022 at 10:02:32AM -0500, Andrew Dunstan wrote:\n>>> * check if a branch is up to date before trying to run it\n>>> This only applies if the |branches_to_build| setting is a keyword\n>>> rather than a list of branches. It reduces the number of useless\n>>> calls to |git pull| to almost zero.\n>> This new reliance on buildfarm.postgresql.org/branches_of_interest.json is\n>> trouble for non-SSL buildfarm animals.\n>> http://buildfarm.postgresql.org/branches_of_interest.txt has an exemption to\n>> allow serving over plain http, but the json URL just redirects the client to\n>> https. Can the json file get the same exemption-from-redirect that the txt\n>> file has?\n>\n> I didn't realize there were animals left other than mine which had this\n> issue. I asked the admins some weeks ago to fix this (I don't have\n> privilege to do so), but have not had a response yet. The temporary\n> workaround is to use a list of named branches, e.g. instead of 'ALL' use\n> [qw(REL_11_STABLE REL_12_STABLE REL_13_STABLE REL_14_STABLE\n> REL_15_STABLE HEAD)]\n>\n>\n\n\nLooks like this is fixed now (Thanks Magnus!), the workaround should no\nlonger be necessary.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Sun, 1 Jan 2023 18:34:13 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Announcing Release 15 of the PostgreSQL Buildfarm client"
},
{
"msg_contents": "\nOn 2022-12-31 Sa 10:02, Andrew Dunstan wrote:\n> Changes\n>\n>\n> * check if a branch is up to date before trying to run it\n> This only applies if the |branches_to_build| setting is a keyword\n> rather than a list of branches. It reduces the number of useless\n> calls to |git pull| to almost zero.\n\n\nOccasionally things go wrong. It turns out this code was a bit too eager\nand ignored the force_every settings in the config file.\n\nThere's a hot fix at\n<https://github.com/PGBuildFarm/client-code/commit/c9693f86d9bd07b470bb2a106055b5801cd613ec>\nand I will push out a new release shortly.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 10:17:58 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Announcing Release 15 of the PostgreSQL Buildfarm client"
},
{
"msg_contents": "\nOn 2022-12-31 Sa 10:02, Andrew Dunstan wrote:\n> Changes\n>\n>\n> * check if a branch is up to date before trying to run it\n> This only applies if the |branches_to_build| setting is a keyword\n> rather than a list of branches. It reduces the number of useless\n> calls to |git pull| to almost zero.\n\n\nOccasionally things go wrong. It turns out this code was a bit too eager\nand ignored the force_every settings in the config file.\n\nThere's a hot fix at\n<https://github.com/PGBuildFarm/client-code/commit/c9693f86d9bd07b470bb2a106055b5801cd613ec>\nand I will push out a new release shortly.\n\nSystems that might well be affect include:\n\nalabio\ncalliphoridae\nculicidae\ndesmoxytes\ndragonet\nflaviventris\nfrancolin\ngerenuk\ngrassquit\nguaibasaurus\nhamerkop\nidiacanthus\nkestrel\nkomodoensis\nmassasauga\nmylodon\nolingo\npetalura\nphycodurus\npiculet\npogona\nrorqual\nserinus\nskink\nsnakefly\ntamandua\nxenodermus\n\nPlease check if your animals are affected. If you don't have any\nforce_every settings you won't be.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Tue, 10 Jan 2023 10:21:50 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: Announcing Release 15 of the PostgreSQL Buildfarm client"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is patch for todo item: Add overlaps geometric operators that \nignore point overlaps\n\nIssue:\n\nSELECT circle '((0,0), 1)' && circle '((2,0),1) returns True\n\nExpectation: In above case, both figures touch other but do not overlap \n(i.e. touching != overlap). Hence, it should return false.\n\nCause:\n\nLess than or equal check between distance of center and sum of radius\n\nDatum\ncircle_overlap(PG_FUNCTION_ARGS)\n{\n CIRCLE *circle1 = PG_GETARG_CIRCLE_P(0);\n CIRCLE *circle2 = PG_GETARG_CIRCLE_P(1);\n\n PG_RETURN_BOOL(FPle(point_dt(&circle1->center, &circle2->center),\n float8_pl(circle1->radius, circle2->radius)));\n}\n\nPossible fix:\n\n# Don't check for <= , just < would suffice.\n\nDatum\ncircle_overlap(PG_FUNCTION_ARGS)\n{\n CIRCLE *circle1 = PG_GETARG_CIRCLE_P(0);\n CIRCLE *circle2 = PG_GETARG_CIRCLE_P(1);\n\n PG_RETURN_BOOL(FPlt(point_dt(&circle1->center, &circle2->center),\n float8_pl(circle1->radius, circle2->radius)));\n}\n\nsame for boxes as well.\n\nResults:\n\nBefore:\n\nselect box '((0,0),(1,1))' && box '((0,1), (1,2))';\n ?column?\n----------\n t\n(1 row)\n\nWith patch:\n\nselect box '((0,1),(1,1))' && box '((1,1), (1,2))';\n ?column?\n----------\n f\n(1 row)\n\nBring box slightly ( > EPSILON) inside the other box\n\nselect box '((0,0),(1,1.0001))' && box '((0,1), (1,2))';\n ?column?\n----------\n t\n(1 row)\n\nsimilar for circle.\n\n\nNow, as per as discussion \n(https://www.postgresql.org/message-id/20100322175532.GG26428%40fetter.org) \nand corresponding change in docs, \nhttps://www.postgresql.org/docs/15/functions-geometry.html, it mentions\n\n`Do these objects overlap? (One point in common makes this true.) `. \nDoes this means current behavior is correct? 
Or do we still need the \nproposed change (if so, with proper updates in docs)?\n\nIf current behavior is correct, this todo item might need some update \n(unless I missed anything) otherwise any suggestion is welcomed.\n\nAlso, I did some search around this and there is general sense of \ndifferentiation between overlap and touch of geometric figures. I am not \nable to find any function which can determine if two geometric figures \ntouch each\n\nother at a point (and if there is real use case of this).\n\nIn any case, patch attached for a reference. Any feedback is welcomed.\n\n\n-- \nRegards,\nAnkit Kumar Pandey",
"msg_date": "Sun, 1 Jan 2023 01:13:24 +0530",
"msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Add overlaps geometric operators that ignore point overlaps"
},
{
"msg_contents": "Hello.\n\nAt Sun, 1 Jan 2023 01:13:24 +0530, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote in \n> This is patch for todo item: Add overlaps geometric operators that\n> ignore point overlaps\n> \n> Issue:\n> \n> SELECT circle '((0,0), 1)' && circle '((2,0),1) returns True\n> \n> Expectation: In above case, both figures touch other but do not\n> overlap (i.e. touching != overlap). Hence, it should return false.\n\nThis may be slightly off from the common definition in other geometric\nprocessing systems, it is the established behavior of PostgreSQL that\nshould already have users.\n\nAbout the behavior itself, since it seems to me that the words \"touch\"\nand \"overlap\" have no rigorous mathematical definitions, that depends\non definition. The following discussion would be mere a word play..\n\nIf circle ((0,0),1) means a circumference, i.e. a set of points\ndescribed as \"x^2 + y^2 = 1\" (or it may be a disc containing the area\ninside (<=) here) and \"overlap\" means \"share at least a point\", the\ntwo circles are overlapping. This seems to be our current stand point\nand what is expressed in the doc.\n\nIf it meant the area exclusively inside the outline (i.e. x^2 + y^2 <\n1), the two circles could be said touching but not overlapping. Or,\nif circle is defined as \"(<)= 1\" but \"overlap\" meant \"share at least\nan area\", they could be said not overlapping but touching? (I'm not\nsure about the border between a point and an area here and the\ndistinction would be connected with the annoying EPSILON..) The same\ndiscussion holds for boxes or other shapes.\n\n> Now, as per as discussion\n> (https://www.postgresql.org/message-id/20100322175532.GG26428%40fetter.org)\n> and corresponding change in docs,\n> https://www.postgresql.org/docs/15/functions-geometry.html, it\n> mentions\n> \n> `Do these objects overlap? (One point in common makes this true.)\n> `. Does this means current behavior is correct? 
Or do we still need\n> the proposed change (if so, with proper updates in docs)?\n> \n> If current behavior is correct, this todo item might need some update\n> (unless I missed anything) otherwise any suggestion is welcomed.\n\nI read the todo description as we may want *another set* of operators\nto do that, not to change the current behavior of the existing\noperators.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 11 Jan 2023 11:13:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Add overlaps geometric operators that ignore point\n overlaps"
}
] |
[
{
"msg_contents": "Attached are two patches, each of which fixes two historical buglets\naround VACUUM's approach to setting bits in the visibility map.\n(Whether or not this is actually refactoring work or hardening work is\ndebatable, I suppose.)\n\nThe first patch makes sure that the snapshotConflictHorizon cutoff\n(XID cutoff for recovery conflicts) is never a special XID, unless\nthat XID is InvalidTransactionId, which is interpreted as a\nsnapshotConflictHorizon value that will never need a recovery conflict\n(per the general convention for snapshotConflictHorizon values\nexplained above ResolveRecoveryConflictWithSnapshot). This patch\nestablishes a hard rule that snapshotConflictHorizon values can never\nbe a special XID value, unless it's InvalidTransactionId. An assertion\nenforces the rule for us in REDO routines (at the point that they call\nResolveRecoveryConflictWithSnapshot with the WAL record's\nsnapshotConflictHorizon XID value).\n\nThe second patch makes sure that VACUUM can never set a page\nall-frozen in the visibility map without also setting the same page\nall-visible in the same call to visibilitymap_set() -- regardless of\nwhat we think we know about the current state of the all-visible bit\nin the VM.\n\nThe second patch adjusts one of the visibilitymap_set() calls in\nvacuumlazy.c that would previously sometimes set a page's all-frozen\nbit without also setting its all-visible bit. This could allow VACUUM\nto leave a page all-frozen but not all-visible in the visibility map\n(since the value of all_visible_according_to_vm can go stale). I think\nthat this should be treated as a basic visibility map invariant: an\nall-frozen page must also be all-visible, by definition, so why should\nit be physically possible for the VM to give a contradictory picture\nof the all_visible/all_frozen status of any one page? 
Assertions are\nadded that more or less make this rule into an invariant.\namcheck/pg_visibility coverage might make sense too, but I haven't\ndone that here.\n\nThe second patch also adjusts a later visibilitymap_set() call site\n(the one used just after heap vacuuming runs in the final heap pass)\nin roughly the same way. It no longer reads from the visibility map to\nsee what bits need to be changed. The existing approach here seems\nrather odd. The whole point of calling lazy_vacuum_heap_page() is to\nset LP_DEAD items referenced by VACUUM's dead_items array to LP_UNUSED\n-- there has to have been at least one LP_DEAD item on the page for us\nto end up here (which a Postgres 14 era assertion verifies for us). So\nwe already know perfectly well that the visibility map shouldn't\nindicate that the page is all-visible yet -- why bother asking the VM?\nAnd besides, any call to visibilitymap_set() will only modify the VM\nwhen it directly observes that the bits have changed -- so why even\nattempt to duplicate that on the caller side?\n\nIt seems to me that the visibilitymap_get_status() call inside\nlazy_vacuum_heap_page() is actually abused to work as a substitute for\nvisibilitymap_pin(). Why not use top-level visibilitymap_pin() calls\ninstead, just like we do it in the first heap pass? That's how it's\ndone in the second patch; it adds a visibilitymap_pin() call in\nlazy_vacuum_heap_rel()'s blkno-wise loop. That gives us parity between\nthe first and second heap pass, which seems like a clear\nmaintainability win -- everybody can pass the\nalready-pinned/already-setup vmbuffer by value.\n\n-- \nPeter Geoghegan",
"msg_date": "Sat, 31 Dec 2022 16:53:29 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Sat, Dec 31, 2022 at 4:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> The first patch makes sure that the snapshotConflictHorizon cutoff\n> (XID cutoff for recovery conflicts) is never a special XID, unless\n> that XID is InvalidTransactionId, which is interpreted as a\n> snapshotConflictHorizon value that will never need a recovery conflict\n> (per the general convention for snapshotConflictHorizon values\n> explained above ResolveRecoveryConflictWithSnapshot).\n\nPushed this just now.\n\nAttached is another very simple refactoring patch for vacuumlazy.c. It\nmakes vacuumlazy.c save the result of get_database_name() in vacrel,\nwhich matches what we already do with things like\nget_namespace_name().\n\nWould be helpful if I could get a +1 on\nv1-0002-Never-just-set-the-all-frozen-bit-in-VM.patch, which is\nsomewhat more substantial than the others.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 2 Jan 2023 10:31:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 10:31 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Would be helpful if I could get a +1 on\n> v1-0002-Never-just-set-the-all-frozen-bit-in-VM.patch, which is\n> somewhat more substantial than the others.\n\nThere has been no response on this thread for over a full week at this\npoint. I'm CC'ing Robert now, since the bug is from his commit\na892234f83.\n\nAttached revision of the \"don't unset all-visible bit while unsetting\nall-frozen bit\" patch adds some assertions that verify that\nvisibility_cutoff_xid is InvalidTransactionId as expected when we go\nto set any page all-frozen in the VM. It also broadens an existing\nnearby test for corruption, which gives us some chance of detecting\nand repairing corruption of this sort that might have slipped in in\nthe field.\n\nMy current plan is to commit something like this in another week or\nso, barring any objections.\n\n-- \nPeter Geoghegan",
"msg_date": "Sun, 8 Jan 2023 14:39:19 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-08 14:39:19 -0800, Peter Geoghegan wrote:\n> One of the calls to visibilitymap_set() during VACUUM's initial heap\n> pass could unset a page's all-visible bit during the process of setting\n> the same page's all-frozen bit.\n\nHow? visibilitymap_set() just adds flags, it doesn't remove any already\nexisting bits:\n\n\t\tmap[mapByte] |= (flags << mapOffset);\n\nIt'll afaict lead to potentially unnecessary WAL records though, which does\nseem buggy:\n\tif (flags != (map[mapByte] >> mapOffset & VISIBILITYMAP_VALID_BITS))\n\nhere we check for *equivalence*, but then below we just or-in flags. So\nvisibilitymap_set() with just one of the flags bits set in the parameters,\nbut both set in the page would end up WAL logging unnecessarily.\n\n\n\n\n> @@ -2388,8 +2398,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)\n> static void\n> lazy_vacuum_heap_rel(LVRelState *vacrel)\n> {\n> -\tint\t\t\tindex;\n> -\tBlockNumber vacuumed_pages;\n> +\tint\t\t\tindex = 0;\n> +\tBlockNumber vacuumed_pages = 0;\n> \tBuffer\t\tvmbuffer = InvalidBuffer;\n> \tLVSavedErrInfo saved_err_info;\n> \n> @@ -2406,42 +2416,42 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)\n> \t\t\t\t\t\t\t VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> \t\t\t\t\t\t\t InvalidBlockNumber, InvalidOffsetNumber);\n> \n> -\tvacuumed_pages = 0;\n> -\n> -\tindex = 0;\n\n\n> @@ -2473,12 +2484,12 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)\n> */\n> static int\n> lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer,\n> -\t\t\t\t\t int index, Buffer *vmbuffer)\n> +\t\t\t\t\t int index, Buffer vmbuffer)\n> {\n> \tVacDeadItems *dead_items = vacrel->dead_items;\n> \tPage\t\tpage = BufferGetPage(buffer);\n> \tOffsetNumber unused[MaxHeapTuplesPerPage];\n> -\tint\t\t\tuncnt = 0;\n> +\tint\t\t\tnunused = 0;\n> \tTransactionId visibility_cutoff_xid;\n> \tbool\t\tall_frozen;\n> \tLVSavedErrInfo saved_err_info;\n> @@ -2508,10 +2519,10 @@ lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, 
Buffer buffer,\n> \n> \t\tAssert(ItemIdIsDead(itemid) && !ItemIdHasStorage(itemid));\n> \t\tItemIdSetUnused(itemid);\n> -\t\tunused[uncnt++] = toff;\n> +\t\tunused[nunused++] = toff;\n> \t}\n> \n> -\tAssert(uncnt > 0);\n> +\tAssert(nunused > 0);\n> \n> \t/* Attempt to truncate line pointer array now */\n> \tPageTruncateLinePointerArray(page);\n> @@ -2527,13 +2538,13 @@ lazy_vacuum_heap_page(LVRelState *vacrel, BlockNumber blkno, Buffer buffer,\n> \t\txl_heap_vacuum xlrec;\n> \t\tXLogRecPtr\trecptr;\n> \n> -\t\txlrec.nunused = uncnt;\n> +\t\txlrec.nunused = nunused;\n> \n> \t\tXLogBeginInsert();\n> \t\tXLogRegisterData((char *) &xlrec, SizeOfHeapVacuum);\n> \n> \t\tXLogRegisterBuffer(0, buffer, REGBUF_STANDARD);\n> -\t\tXLogRegisterBufData(0, (char *) unused, uncnt * sizeof(OffsetNumber));\n> +\t\tXLogRegisterBufData(0, (char *) unused, nunused * sizeof(OffsetNumber));\n> \n> \t\trecptr = XLogInsert(RM_HEAP2_ID, XLOG_HEAP2_VACUUM);\n> \n\nYou have plenty of changes like this, which are afaict entirely unrelated to\nthe issue the commit is fixing, in here. It just makes it hard to review the\npatch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 8 Jan 2023 15:53:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> How?\n\nSee the commit message for the scenario I have in mind, which involves\na concurrent HOT update that aborts.\n\nWe're vulnerable to allowing \"all-frozen but not all-visible\"\ninconsistencies because of two factors: this business with not passing\nVISIBILITYMAP_ALL_VISIBLE along with VISIBILITYMAP_ALL_FROZEN to\nvisibilitymap_set(), *and* the use of all_visible_according_to_vm to\nset the VM (a local variable that can go stale). We sort of assume\nthat all_visible_according_to_vm cannot go stale here without our\ndetecting it. That's almost always the case, but it's not quite\nguaranteed.\n\n> visibilitymap_set() just adds flags, it doesn't remove any already\n> existing bits:\n\nI know. The concrete scenario I have in mind is very subtle (if the\nproblem was this obvious I'm sure somebody would have noticed it by\nnow, since we do hit this visibilitymap_set() call site reasonably\noften). A concurrent HOT update will certainly clear all the bits for\nus, which is enough.\n\n> You have plenty of changes like this, which are afaict entirely unrelated to\n> the issue the commit is fixing, in here. It just makes it hard to review the\n> patch.\n\nI didn't think that it was that big of a deal to tweak the style of\none or two details in and around lazy_vacuum_heap_rel() in passing,\nfor consistency with lazy_scan_heap(), since the patch already needs\nto do some of that. I do take your point, though.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 8 Jan 2023 16:27:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 4:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> We're vulnerable to allowing \"all-frozen but not all-visible\"\n> inconsistencies because of two factors: this business with not passing\n> VISIBILITYMAP_ALL_VISIBLE along with VISIBILITYMAP_ALL_FROZEN to\n> visibilitymap_set(), *and* the use of all_visible_according_to_vm to\n> set the VM (a local variable that can go stale). We sort of assume\n> that all_visible_according_to_vm cannot go stale here without our\n> detecting it. That's almost always the case, but it's not quite\n> guaranteed.\n\nOn further reflection even v2 won't repair the page-level\nPD_ALL_VISIBLE flag in passing in this scenario. ISTM that on HEAD we\nmight actually leave the all-frozen bit set in the VM, while both the\nall-visible bit and the page-level PD_ALL_VISIBLE bit remain unset.\nAgain, all due to the approach we take with\nall_visible_according_to_vm, which can go stale independently of both\nthe VM bit being unset and the PD_ALL_VISIBLE bit being unset (in my\nexample problem scenario).\n\nFWIW I don't have this remaining problem in my VACUUM\nfreezing/scanning strategies patchset. It just gets rid of\nall_visible_according_to_vm altogether, which makes things a lot\nsimpler at the point that we set VM bits at the end of lazy_scan_heap\n-- there is nothing left that can become stale. Quite a lot of the\ncode is just removed; there is exactly one call to visibilitymap_set()\nat the end of lazy_scan_heap with the patchset, that does everything\nwe need.\n\nThe patchset also has logic for setting PD_ALL_VISIBLE when it needs\nto be set, which isn't (and shouldn't) be conditioned on whether we're\ndoing a \"all visible -> all frozen \" transition or a \"neither -> all\nvisible\" transition. What it actually needs to be conditioned on is\nwhether it's unset now, and so needs to be set in passing, as part of\nsetting one or both VM bits -- simple as that.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sun, 8 Jan 2023 18:43:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Sun, Jan 8, 2023 at 6:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On further reflection even v2 won't repair the page-level\n> PD_ALL_VISIBLE flag in passing in this scenario. ISTM that on HEAD we\n> might actually leave the all-frozen bit set in the VM, while both the\n> all-visible bit and the page-level PD_ALL_VISIBLE bit remain unset.\n> Again, all due to the approach we take with\n> all_visible_according_to_vm, which can go stale independently of both\n> the VM bit being unset and the PD_ALL_VISIBLE bit being unset (in my\n> example problem scenario).\n\nAttached is v3, which explicitly checks the need to set the\nPD_ALL_VISIBLE flag at the relevant visibilitymap_set() call site. It\nalso has improved comments.\n\n-- \nPeter Geoghegan",
"msg_date": "Mon, 9 Jan 2023 10:16:03 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-08 16:27:59 -0800, Peter Geoghegan wrote:\n> On Sun, Jan 8, 2023 at 3:53 PM Andres Freund <andres@anarazel.de> wrote:\n> > How?\n> \n> See the commit message for the scenario I have in mind, which involves\n> a concurrent HOT update that aborts.\n\nI looked at it. I specifically was wondering about this part of it:\n> One of the calls to visibilitymap_set() during VACUUM's initial heap\n> pass could unset a page's all-visible bit during the process of setting\n> the same page's all-frozen bit.\n\nWhich I just don't see as possible, due to visibilitymap_set() simply never\nunsetting bits.\n\nI think that's just an imprecise formulation though - the problem is that we\ncan call visibilitymap_set() with just VISIBILITYMAP_ALL_FROZEN, even though\nVISIBILITYMAP_ALL_VISIBLE was concurrently unset.\n\nISTM that we ought to update all_visible_according_to_vm from\nPageIsAllVisible() once we've locked the page. Even if we avoid this specific\ncase, it seems a recipe for future bugs to have a potentially outdated\nvariable around.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 11:44:43 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-09 10:16:03 -0800, Peter Geoghegan wrote:\n> Attached is v3, which explicitly checks the need to set the PD_ALL_VISIBLE\n> flag at the relevant visibilitymap_set() call site. It also has improved\n> comments.\n\nAfaict we'll need to backpatch this all the way?\n\n\n> From e7788ebdb589fb7c6f866cf53658cc369f9858b5 Mon Sep 17 00:00:00 2001\n> From: Peter Geoghegan <pg@bowt.ie>\n> Date: Sat, 31 Dec 2022 15:13:01 -0800\n> Subject: [PATCH v3] Don't accidentally unset all-visible bit in VM.\n> \n> One of the calls to visibilitymap_set() during VACUUM's initial heap\n> pass could unset a page's all-visible bit during the process of setting\n> the same page's all-frozen bit.\n\nAs just mentioned upthread, this just seems wrong.\n\n\n> This could happen in the event of a\n> concurrent HOT update from a transaction that aborts soon after. Since\n> the all_visible_according_to_vm local variable that lazy_scan_heap works\n> off of when setting the VM doesn't reflect the current state of the VM,\n> and since visibilitymap_set() just requested that the all-frozen bit get\n> set in one case, there was a race condition. Heap pages could initially\n> be all-visible just as all_visible_according_to_vm is established, then\n> not be all-visible after the update, and then become eligible to be set\n> all-visible once more following pruning by VACUUM. 
There is no reason\n> why VACUUM can't remove a concurrently aborted heap-only tuple right\n> away, and so no reason why such a page won't be able to reach the\n> relevant visibilitymap_set() call site.\n\nDo you have a reproducer for this?\n\n\n> @@ -1120,8 +1123,8 @@ lazy_scan_heap(LVRelState *vacrel)\n> \t\t * got cleared after lazy_scan_skip() was called, so we must recheck\n> \t\t * with buffer lock before concluding that the VM is corrupt.\n> \t\t */\n> -\t\telse if (all_visible_according_to_vm && !PageIsAllVisible(page)\n> -\t\t\t\t && VM_ALL_VISIBLE(vacrel->rel, blkno, &vmbuffer))\n> +\t\telse if (all_visible_according_to_vm && !PageIsAllVisible(page) &&\n> +\t\t\t\t visibilitymap_get_status(vacrel->rel, blkno, &vmbuffer) != 0)\n> \t\t{\n> \t\t\telog(WARNING, \"page is not marked all-visible but visibility map bit is set in relation \\\"%s\\\" page %u\",\n> \t\t\t\t vacrel->relname, blkno);\n\nHm. The message gets a bit less accurate with the change. Perhaps OK? OTOH, it\nmight be useful to know what bit was wrong when debugging problems.\n\n\n> @@ -1164,12 +1167,34 @@ lazy_scan_heap(LVRelState *vacrel)\n> \t\t\t\t !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))\n> \t\t{\n> \t\t\t/*\n> -\t\t\t * We can pass InvalidTransactionId as the cutoff XID here,\n> -\t\t\t * because setting the all-frozen bit doesn't cause recovery\n> -\t\t\t * conflicts.\n> +\t\t\t * Avoid relying on all_visible_according_to_vm as a proxy for the\n> +\t\t\t * page-level PD_ALL_VISIBLE bit being set, since it might have\n> +\t\t\t * become stale -- even when all_visible is set in prunestate.\n> +\t\t\t *\n> +\t\t\t * Consider the example of a page that starts out all-visible and\n> +\t\t\t * then has a tuple concurrently deleted by an xact that aborts.\n> +\t\t\t * The page will be all_visible_according_to_vm, and will have\n> +\t\t\t * all_visible set in prunestate. 
It will nevertheless not have\n> +\t\t\t * PD_ALL_VISIBLE set by here (plus neither VM bit will be set).\n> +\t\t\t * And so we must check if PD_ALL_VISIBLE needs to be set.\n> \t\t\t */\n> +\t\t\tif (!PageIsAllVisible(page))\n> +\t\t\t{\n> +\t\t\t\tPageSetAllVisible(page);\n> +\t\t\t\tMarkBufferDirty(buf);\n> +\t\t\t}\n> +\n> +\t\t\t/*\n> +\t\t\t * Set the page all-frozen (and all-visible) in the VM.\n> +\t\t\t *\n> +\t\t\t * We can pass InvalidTransactionId as our visibility_cutoff_xid,\n> +\t\t\t * since a snapshotConflictHorizon sufficient to make everything\n> +\t\t\t * safe for REDO was logged when the page's tuples were frozen.\n> +\t\t\t */\n> +\t\t\tAssert(!TransactionIdIsValid(prunestate.visibility_cutoff_xid));\n> \t\t\tvisibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,\n> \t\t\t\t\t\t\t vmbuffer, InvalidTransactionId,\n> +\t\t\t\t\t\t\t VISIBILITYMAP_ALL_VISIBLE\t|\n> \t\t\t\t\t\t\t VISIBILITYMAP_ALL_FROZEN);\n> \t\t}\n> \n> @@ -1311,7 +1336,11 @@ lazy_scan_skip(LVRelState *vacrel, Buffer *vmbuffer, BlockNumber next_block,\n> \n> \t\t/* DISABLE_PAGE_SKIPPING makes all skipping unsafe */\n> \t\tif (!vacrel->skipwithvm)\n> +\t\t{\n> +\t\t\t/* Caller shouldn't rely on all_visible_according_to_vm */\n> +\t\t\t*next_unskippable_allvis = false;\n> \t\t\tbreak;\n> +\t\t}\n> \n> \t\t/*\n> \t\t * Aggressive VACUUM caller can't skip pages just because they are\n> @@ -1818,7 +1847,11 @@ retry:\n> \t\t\t * cutoff by stepping back from OldestXmin.\n> \t\t\t */\n> \t\t\tif (prunestate->all_visible && prunestate->all_frozen)\n> +\t\t\t{\n> +\t\t\t\t/* Using same cutoff when setting VM is now unnecessary */\n> \t\t\t\tsnapshotConflictHorizon = prunestate->visibility_cutoff_xid;\n> +\t\t\t\tprunestate->visibility_cutoff_xid = InvalidTransactionId;\n> +\t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\t/* Avoids false conflicts when hot_standby_feedback in use */\n> @@ -2388,8 +2421,8 @@ lazy_vacuum_all_indexes(LVRelState *vacrel)\n> static void\n> 
lazy_vacuum_heap_rel(LVRelState *vacrel)\n> {\n> -\tint\t\t\tindex;\n> -\tBlockNumber vacuumed_pages;\n> +\tint\t\t\tindex = 0;\n> +\tBlockNumber vacuumed_pages = 0;\n> \tBuffer\t\tvmbuffer = InvalidBuffer;\n> \tLVSavedErrInfo saved_err_info;\n> \n> @@ -2406,42 +2439,42 @@ lazy_vacuum_heap_rel(LVRelState *vacrel)\n> \t\t\t\t\t\t\t VACUUM_ERRCB_PHASE_VACUUM_HEAP,\n> \t\t\t\t\t\t\t InvalidBlockNumber, InvalidOffsetNumber);\n> \n> -\tvacuumed_pages = 0;\n> -\n> -\tindex = 0;\n> \twhile (index < vacrel->dead_items->num_items)\n> \t{\n> -\t\tBlockNumber tblk;\n> +\t\tBlockNumber blkno;\n> \t\tBuffer\t\tbuf;\n> \t\tPage\t\tpage;\n> \t\tSize\t\tfreespace;\n> \n> \t\tvacuum_delay_point();\n\nI still think such changes are inappropriate for a bugfix, particularly one\nthat needs to be backpatched.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 9 Jan 2023 11:57:29 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 11:44 AM Andres Freund <andres@anarazel.de> wrote:\n> I think that's just an imprecise formulation though - the problem is that we\n> can call visibilitymap_set() with just VISIBILITYMAP_ALL_FROZEN, even though\n> VISIBILITYMAP_ALL_VISIBLE was concurrently unset.\n\nThat's correct.\n\nYou're right that my description of the problem from the commit\nmessage was confusing. But we're on the same page about the problem\nnow.\n\n> ISTM that we ought to update all_visible_according_to_vm from\n> PageIsAllVisible() once we've locked the page. Even if we avoid this specific\n> case, it seems a recipe for future bugs to have a potentially outdated\n> variable around.\n\nI basically agree, but some of the details are tricky.\n\nAs I mentioned already, my work on visibility map snapshots just gets\nrid of all_visible_according_to_vm, which is my preferred long term\napproach. We will very likely need to keep all_visible_according_to_vm\nas a cache for performance reasons for as long as we have\nSKIP_PAGES_THRESHOLD.\n\nCan we just update all_visible_according_to_vm using\nPageIsAllVisible(), without making all_visible_according_to_vm\nsignificantly less useful as a cache? Maybe. Not sure offhand.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 12:04:15 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 1:32 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Sat, Dec 31, 2022 at 4:53 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The first patch makes sure that the snapshotConflictHorizon cutoff\n> > (XID cutoff for recovery conflicts) is never a special XID, unless\n> > that XID is InvalidTransactionId, which is interpreted as a\n> > snapshotConflictHorizon value that will never need a recovery conflict\n> > (per the general convention for snapshotConflictHorizon values\n> > explained above ResolveRecoveryConflictWithSnapshot).\n>\n> Pushed this just now.\n>\n> Attached is another very simple refactoring patch for vacuumlazy.c. It\n> makes vacuumlazy.c save the result of get_database_name() in vacrel,\n> which matches what we already do with things like\n> get_namespace_name().\n>\n> Would be helpful if I could get a +1 on\n> v1-0002-Never-just-set-the-all-frozen-bit-in-VM.patch, which is\n> somewhat more substantial than the others.\n\nI feel that you should at least have a reproducer for these problems\nposted to the thread, and ideally a regression test, before committing\nthings. I think it's very hard to understand what the problems are\nright now.\n\nI don't particularly have a problem with the idea of 0001, because if\nwe use InvalidTransactionId to mean that there cannot be any\nconflicts, we do not need FrozenTransactionId to mean the same thing.\nPicking one or the other makes sense. Perhaps we would need two values\nif we both needed a value that meant \"conflict with nothing\" and also\na value that meant \"conflict with everything,\" but in that case I\nsuppose we would want FrozenTransactionId to be the one that meant\nconflict with nothing, since it logically precedes all other XIDs, and\nconflicts are with XIDs that precede the value in the record. However,\nI don't find the patch very clear, either. 
It doesn't update any\ncomments, not even this one:\n\n /*\n * It's possible that we froze tuples and made the page's XID cutoff\n * (for recovery conflict purposes) FrozenTransactionId. This is okay\n * because visibility_cutoff_xid will be logged by our caller in a\n * moment.\n */\n- Assert(cutoff == FrozenTransactionId ||\n+ Assert(!TransactionIdIsValid(cutoff) ||\n cutoff == prunestate->visibility_cutoff_xid);\n\nIsn't the comment now incorrect as a direct result of the changes in the patch?\n\nAs for 0002, I agree that it's bad if we can get into a state where\nthe all-frozen bit is set and the all-visible bit is not. I'm not\ncompletely sure what concrete harm that will cause, but it does not\nseem good. But I also *entirely* agree with Andres that patches shouldn't\nrun around adjusting nearby code - e.g. variable names - in ways that\naren't truly necessary. That just makes life harder, not only for\nanyone who wants to review the patch now, but also for future readers\nwho may need to understand what the patch changed and why.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 9 Jan 2023 15:51:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 11:57 AM Andres Freund <andres@anarazel.de> wrote:\n> Afaict we'll need to backpatch this all the way?\n\nI thought that we probably wouldn't need to, at first. But I now think\nthat we really have to.\n\nI didn't realize that affected visibilitymap_set() calls could\ngenerate useless set-VM WAL records until you pointed it out. That's\nfar more likely to happen than the race condition that I described --\nit has nothing at all to do with concurrency. That's what clinches it\nfor me.\n\n> > One of the calls to visibilitymap_set() during VACUUM's initial heap\n> > pass could unset a page's all-visible bit during the process of setting\n> > the same page's all-frozen bit.\n>\n> As just mentioned upthread, this just seems wrong.\n\nI don't know why this sentence ever made sense to me. Anyway, it's not\nimportant now.\n\n> Do you have a reproducer for this?\n\nNo, but I'm quite certain that the race can happen.\n\nIf it's important to have a reproducer then I can probably come up\nwith one. I could likely figure out a way to write an isolation test\nthat reliably triggers the issue. It would have to work by playing\ngames with cleanup lock/buffer pin waits, since that's the only thing\nthat the test can hook into to make things happen in just the\nright/wrong order.\n\n> > elog(WARNING, \"page is not marked all-visible but visibility map bit is set in relation \\\"%s\\\" page %u\",\n> > vacrel->relname, blkno);\n>\n> Hm. The message gets a bit less accurate with the change. Perhaps OK? OTOH, it\n> might be useful to know what bit was wrong when debugging problems.\n\nTheoretically it might change again, if we call\nvisibilitymap_get_status() again. Maybe I should just broaden the\nerror message a bit instead?\n\n> I still think such changes are inappropriate for a bugfix, particularly one\n> that needs to be backpatched.\n\nI'll remove the changes that are inessential in the next revision. 
I\nwouldn't have done it if I'd fully understood the seriousness of the\nissue from the start.\n\nIf you're really concerned about diff size then I should point out\nthat the changes to lazy_vacuum_heap_rel() aren't strictly necessary,\nand probably shouldn't be backpatched. I deemed that in scope because\nit's part of the same overall problem of updating the visibility map\nbased on potentially stale information. It makes zero sense to check\nwith the visibility map before updating it when we already know that\nthe page is all-visible. I mean, are we trying to avoid the work of\nneedlessly updating the visibility map in cases where its state was\ncorrupt, but then became uncorrupt (relative to the heap page) by\nmistake?\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 12:58:08 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I feel that you should at least have a reproducer for these problems\n> posted to the thread, and ideally a regression test, before committing\n> things. I think it's very hard to understand what the problems are\n> right now.\n\nHard to understand relative to what, exactly? We're talking about a\nvery subtle race condition here.\n\nI'll try to come up with a reproducer, but I *utterly* reject your\nassertion that it's a hard requirement, sight unseen. Why should those\nbe the parameters of the discussion?\n\nFor one thing I'm quite confident that I'm right, with or without a\nreproducer. And my argument isn't all that hard to follow, if you have\nrelevant expertise, and actually take the time. But even this is\nunlikely to matter much. Even if I somehow turn out to have been\ncompletely wrong about the race condition, it is still self-evident\nthat the problem of uselessly WAL logging non-changes to the VM\nexists. That doesn't require any concurrent access at all. It's a\nnatural consequence of calling visibilitymap_set() with\nVISIBILITYMAP_ALL_FROZEN-only flags. You need only look at the code\nfor 2 minutes to see it.\n\n> I don't particularly have a problem with the idea of 0001, because if\n> we use InvalidTransactionId to mean that there cannot be any\n> conflicts, we do not need FrozenTransactionId to mean the same thing.\n> Picking one or the other makes sense.\n\nWe've already picked one, many years ago -- InvalidTransactionId. This\nis a long established convention, common to all REDO routines that are\ncapable of creating granular conflicts.\n\nI already committed 0001 over a week ago. We were calling\nResolveRecoveryConflictWithSnapshot with FrozenTransactionId arguments\nbefore now, which was 100% guaranteed to be a waste of cycles. 
I saw\nno need to wait more than a few days for a +1, given that this\nparticular issue was so completely clear cut.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 14:59:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 12:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I didn't realize that affected visibilitymap_set() calls could\n> generate useless set-VM WAL records until you pointed it out. That's\n> far more likely to happen than the race condition that I described --\n> it has nothing at all to do with concurrency. That's what clinches it\n> for me.\n\nI didn't spend as much time on this as I'd like to so far, but I think\nthat this concern about visibilitymap_set() actually turns out to not\napply. The visibilitymap_set() call in question is gated by a\n\"!VM_ALL_FROZEN()\", which is enough to avoid the problem with writing\nuseless VM set records.\n\nThat doesn't make me doubt my original concern about races where the\nall-frozen bit can be set, without setting the all-visible bit, and\nwithout accounting for the fact that it changed underneath us. That\nscenario will have !VM_ALL_FROZEN(), so that won't save us. (And we\nwon't test VM_ALL_VISIBLE() or PD_ALL_VISIBLE in a way that is\nsufficient to realize that all_visible_according_to_vm is stale.\nprunestate.all_visible being set doesn't reliably indicate that it's not stale,\nbut lazy_scan_heap incorrectly believes that it really does work that way.)\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 9 Jan 2023 22:28:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 5:59 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jan 9, 2023 at 12:51 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I feel that you should at least have a reproducer for these problems\n> > posted to the thread, and ideally a regression test, before committing\n> > things. I think it's very hard to understand what the problems are\n> > right now.\n>\n> Hard to understand relative to what, exactly? We're talking about a\n> very subtle race condition here.\n>\n> I'll try to come up with a reproducer, but I *utterly* reject your\n> assertion that it's a hard requirement, sight unseen. Why should those\n> be the parameters of the discussion?\n>\n> For one thing I'm quite confident that I'm right, with or without a\n> reproducer. And my argument isn't all that hard to follow, if you have\n> relevant expertise, and actually take the time.\n\nLook, I don't want to spend time arguing about what seem to me to be\nbasic principles of good software engineering. When I don't put test\ncases into my patches, people complain at me and tell me that I'm a\nbad software engineer because I didn't include test cases. Your\nargument here seems to be that you're such a good software engineer\nthat you don't need any test cases to know what the bug is or that\nyou've fixed it correctly. That seems like a surprising argument, but\neven if it's true, test cases can have considerable value to future\ncode authors, because it allows them to avoid reintroducing bugs that\nhave previously been fixed. In my opinion, it's not worth trying to\nhave automated test cases for absolutely every bug we fix, because\nmany of them would be really hard to develop and executing all of them\nevery time we do anything would be unduly time-consuming. But I can't\nremember the last time before this that someone wanted to commit a\npatch for a data corruption issue without even providing a test case\nthat other people can run manually. 
If you think that is or ought to\nbe standard practice, I can only say that I disagree.\n\nI don't particularly appreciate the implication that I either lack\nrelevant or expertise or don't actually take time, either. I spent an\nhour yesterday looking at your patches yesterday and didn't feel I was\nvery close to understanding 0002 in that time. I feel that if the\npatches were better-written, with relevant comments and test cases and\nreally good commit messages and a lack of extraneous changes, I\nbelieve I probably would have gotten a lot further in the same amount\nof time. There is certainly an alternate explanation, which is that I\nam stupid. I'm inclined to think that's not the correct explanation,\nbut most stupid people believe that they aren't, so that doesn't\nreally prove anything.\n\n> But even this is\n> unlikely to matter much. Even if I somehow turn out to have been\n> completely wrong about the race condition, it is still self-evident\n> that the problem of uselessly WAL logging non-changes to the VM\n> exists. That doesn't require any concurrent access at all. It's a\n> natural consequence of calling visibilitymap_set() with\n> VISIBILITYMAP_ALL_FROZEN-only flags. You need only look at the code\n> for 2 minutes to see it.\n\nApparently not, because I spent more time than that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 13:50:14 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 10:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Look, I don't want to spend time arguing about what seem to me to be\n> basic principles of good software engineering. When I don't put test\n> cases into my patches, people complain at me and tell me that I'm a\n> bad software engineer because I didn't include test cases. Your\n> argument here seems to be that you're such a good software engineer\n> that you don't need any test cases to know what the bug is or that\n> you've fixed it correctly.\n\nThat's not what I said. This is a straw man.\n\nWhat I actually said was that there is no reason to declare up front\nthat the only circumstances under which a fix could be committed is\nwhen a clean repro is available. I never said that a test case has\nlittle or no value, and I certainly didn't assert that we definitely\ndon't need a test case to proceed with a commit -- since I am not in\nthe habit of presumptuously attaching conditions to such things well\nin advance.\n\n> I don't particularly appreciate the implication that I either lack\n> relevant or expertise or don't actually take time, either.\n\nThe implication was only that you didn't take the time. Clearly you\nhave the expertise. Obviously you're very far from stupid.\n\nI have been unable to reproduce the problem, and think it's possible\nthat the issue cannot be triggered in practice. Though only through\nsheer luck. Here's why that is:\n\nWhile pruning will remove aborted dead tuples, freezing will not\nremove the xmax of an aborted update unless the XID happens to be <\nOldestXmin. With my problem scenario, the page will be all_visible in\nprunestate, but not all_frozen -- so it dodges the relevant\nvisibilitymap_set() call site.\n\nThat just leaves inserts that abort, I think. An aborted insert will\nbe totally undone by pruning, but that does still leave behind an\nLP_DEAD item that needs to be vacuumed in the second heap pass. 
This\nmeans that we can only set the page all-visible/all-frozen in the VM\nin the second heap pass, which also dodges the relevant\nvisibilitymap_set() call site.\n\nIn summary, I think that there is currently no way that we can have\nthe VM (or the PD_ALL_VISIBLE flag) concurrently unset, while leaving\nthe page all_frozen. It can happen and leave the page all_visible, but\nnot all_frozen, due to these very fine details. (Assuming I haven't\nmissed another path to the problem with aborted Multis or something,\nbut looks like I haven't.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Jan 2023 11:47:41 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 11:47 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> In summary, I think that there is currently no way that we can have\n> the VM (or the PD_ALL_VISIBLE flag) concurrently unset, while leaving\n> the page all_frozen. It can happen and leave the page all_visible, but\n> not all_frozen, due to these very fine details. (Assuming I haven't\n> missed another path to the problem with aborted Multis or something,\n> but looks like I haven't.)\n\nActually, FreezeMultiXactId() can fully remove an xmax that has some\nmember XIDs >= OldestXmin, provided FRM_NOOP processing isn't\npossible, at least when no individual member is still running. Doesn't\nhave to involve transaction aborts at all.\n\nLet me go try to break it that way...\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Jan 2023 12:08:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 2:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> What I actually said was that there is no reason to declare up front\n> that the only circumstances under which a fix could be committed is\n> when a clean repro is available. I never said that a test case has\n> little or no value, and I certainly didn't assert that we definitely\n> don't need a test case to proceed with a commit -- since I am not in\n> the habit of presumptuously attaching conditions to such things well\n> in advance.\n\nI don't understand what distinction you're making. It seems like\nhair-splitting to me. We should be able to reproduce problems like\nthis reliably, at least with the aid of a debugger and some\nbreakpoints, before we go changing the code. The risk of being wrong\nis quite high because the code is subtle, and the consequences of\nbeing wrong are potentially very bad because the code is critical to\ndata integrity. If the reproducer doesn't require a debugger or other\nextreme contortions, then we should consider reducing it to a TAP test\nthat can be committed. If you agree with that, then I'm not sure what\nyour last email was complaining about. If you disagree, then I don't\nknow why.\n\n> I have been unable to reproduce the problem, and think it's possible\n> that the issue cannot be triggered in practice. Though only through\n> sheer luck. Here's why that is:\n>\n> While pruning will remove aborted dead tuples, freezing will not\n> remove the xmax of an aborted update unless the XID happens to be <\n> OldestXmin. With my problem scenario, the page will be all_visible in\n> prunestate, but not all_frozen -- so it dodges the relevant\n> visibilitymap_set() call site.\n>\n> That just leaves inserts that abort, I think. An aborted insert will\n> be totally undone by pruning, but that does still leave behind an\n> LP_DEAD item that needs to be vacuumed in the second heap pass. 
This\n> means that we can only set the page all-visible/all-frozen in the VM\n> in the second heap pass, which also dodges the relevant\n> visibilitymap_set() call site.\n\nI guess I'm not very sure that this is sheer luck. It seems like we\ncould equally well suppose that the people who wrote the code\ncorrectly understood the circumstances under which we needed to avoid\ncalling visibilitymap_set(), and wrote the code in a way that\naccomplished that purpose. Maybe there's contrary evidence or maybe it\nis actually broken somehow, but that's not currently clear to me.\n\nFor the purposes of clarifying my understanding, is this the code\nyou're principally worried about?\n\n /*\n * If the all-visible page is all-frozen but not marked as such yet,\n * mark it as all-frozen. Note that all_frozen is only valid if\n * all_visible is true, so we must check both prunestate fields.\n */\n else if (all_visible_according_to_vm && prunestate.all_visible &&\n prunestate.all_frozen &&\n !VM_ALL_FROZEN(vacrel->rel, blkno, &vmbuffer))\n {\n /*\n * We can pass InvalidTransactionId as the cutoff XID here,\n * because setting the all-frozen bit doesn't cause recovery\n * conflicts.\n */\n visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,\n vmbuffer, InvalidTransactionId,\n VISIBILITYMAP_ALL_FROZEN);\n }\n\nOr maybe this one?\n\n if (PageIsAllVisible(page))\n {\n uint8 flags = 0;\n uint8 vm_status = visibilitymap_get_status(vacrel->rel,\n blkno, vmbuffer);\n\n /* Set the VM all-frozen bit to flag, if needed */\n if ((vm_status & VISIBILITYMAP_ALL_VISIBLE) == 0)\n flags |= VISIBILITYMAP_ALL_VISIBLE;\n if ((vm_status & VISIBILITYMAP_ALL_FROZEN) == 0 && all_frozen)\n flags |= VISIBILITYMAP_ALL_FROZEN;\n\n Assert(BufferIsValid(*vmbuffer));\n if (flags != 0)\n visibilitymap_set(vacrel->rel, blkno, buffer, InvalidXLogRecPtr,\n *vmbuffer, visibility_cutoff_xid, flags);\n }\n\nThese are the only two call sites in vacuumlazy.c where I can see\nthere being a theoretical risk of 
the kind of problem that you're\ndescribing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 15:18:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 12:19 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I don't understand what distinction you're making. It seems like\n> hair-splitting to me. We should be able to reproduce problems like\n> this reliably, at least with the aid of a debugger and some\n> breakpoints, before we go changing the code.\n\nSo we can *never* change something defensively, on the basis of a\nsuspected or theoretical hazard, either in backbranches or just on\nHEAD? Not under any circumstances, ever?\n\n> The risk of being wrong\n> is quite high because the code is subtle, and the consequences of\n> being wrong are potentially very bad because the code is critical to\n> data integrity. If the reproducer doesn't require a debugger or other\n> extreme contortions, then we should consider reducing it to a TAP test\n> that can be committed. If you agree with that, then I'm not sure what\n> your last email was complaining about.\n\nI was complaining about your prescribing conditions on proceeding with\na commit, based on an understanding of things that you yourself\nacknowledged as incomplete. I cannot imagine how you read that as an\nunwillingness to test the issue, especially given that I agreed to\nwork on that before you chimed in.\n\n> > I have been unable to reproduce the problem, and think it's possible\n> > that the issue cannot be triggered in practice. Though only through\n> > sheer luck. Here's why that is:\n\n> I guess I'm not very sure that this is sheer luck.\n\nThat's just my characterization. 
Other people can make up their own minds.\n\n> For the purposes of clarifying my understanding, is this the code\n> you're principally worried about?\n\n> visibilitymap_set(vacrel->rel, blkno, buf, InvalidXLogRecPtr,\n> vmbuffer, InvalidTransactionId,\n> VISIBILITYMAP_ALL_FROZEN);\n\nObviously I meant this call site, since it's the only one that passes\nVISIBILITYMAP_ALL_FROZEN as its flags, without also passing\nVISIBILITYMAP_ALL_VISIBLE -- in vacuumlazy.c, and in general.\n\nThe other visibilitymap_set() callsite that you quoted is from the\nsecond heap pass, where LP_DEAD items are vacuumed and become\nLP_UNUSED items. That isn't buggy, but it is a silly approach, in that\nit cares about what the visibility map says about the page being\nall-visible, as if it might take a dissenting view that needs to be\ntaken into consideration (obviously we know what's going on with the\npage because we just scanned it ourselves, and determined that it was\nat least all-visible).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Jan 2023 12:41:53 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 12:08 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually, FreezeMultiXactId() can fully remove an xmax that has some\n> member XIDs >= OldestXmin, provided FRM_NOOP processing isn't\n> possible, at least when no individual member is still running. Doesn't\n> have to involve transaction aborts at all.\n>\n> Let me go try to break it that way...\n\nAttached patch shows how this could break.\n\nIt adds an assertion that checks that the expected\nPD_ALL_VISIBLE/VM_ALL_VISIBLE() conditions hold at the right point. It\nalso comments out FreezeMultiXactId()'s FRM_NOOP handling.\n\nThe FRM_NOOP case is really just an optimization, and shouldn't be\nneeded for correctness. This is amply demonstrated by running \"meson\ntest\" with the patch applied, which will pass without incident.\n\nI can get the PD_ALL_VISIBLE assertion to fail by following this\nprocedure with the patch applied:\n\n* Run a plain VACUUM to set all the pages from a table all-visible,\nbut not all-frozen.\n\n* Set a breakpoint that will hit after all_visible_according_to_vm is\nset to true, for an interesting blkno.\n\n* Run VACUUM FREEZE. We need FREEZE in order to be able to hit the\nrelevant visibilitymap_set() call site (the one that passes\nVISIBILITYMAP_ALL_FROZEN as its flags, without also passing\nVISIBILITYMAP_ALL_VISIBLE).\n\nNow all_visible_according_to_vm is set to true, but we don't have a\nlock/pin on the same heap page just yet.\n\n* Acquire several non-conflicting row locks on a row on the block in\nquestion, so that a new multi is allocated.\n\n* End every session whose XID is stored in our multi (commit/abort).\n\n* Within GDB, continue from before -- observe assertion failure.\n\nObviously this scenario doesn't demonstrate the presence of a bug --\nnot quite. But it does prove that we rely on FRM_NOOP to not allow the\nVM to become corrupt, which just doesn't make any sense, and can't\nhave been intended. At a minimum, it strongly suggests that the\ncurrent approach is very fragile.\n\n-- \nPeter Geoghegan",
"msg_date": "Tue, 10 Jan 2023 16:39:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 4:39 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> * Run VACUUM FREEZE. We need FREEZE in order to be able to hit the\n> relevant visibilitymap_set() call site (the one that passes\n> VISIBILITYMAP_ALL_FROZEN as its flags, without also passing\n> VISIBILITYMAP_ALL_VISIBLE).\n>\n> Now all_visible_according_to_vm is set to true, but we don't have a\n> lock/pin on the same heap page just yet.\n>\n> * Acquire several non-conflicting row locks on a row on the block in\n> question, so that a new multi is allocated.\n\nForgot to mention that there needs to be a HOT update mixed in with\nthese SELECT ... FOR SHARE row lockers, too, which must abort once its\nXID has been added to a multi. Obviously heap_lock_tuple() won't ever\nunset VISIBILITYMAP_ALL_VISIBLE or PD_ALL_VISIBLE (it only ever clears\nVISIBILITYMAP_ALL_FROZEN) -- so we need a heap_update() to clear all\nof these status bits.\n\nThis enables the assertion to fail because:\n\n* Pruning can get rid of the aborted successor heap-only tuple right\naway, so it is not going to block us from setting the page all_visible\n(that just leaves the original tuple to consider).\n\n* The original tuple's xmax is a Multi, so it won't automatically be\nineligible for freezing because it's > OldestXmin in this scenario.\n\n* FreezeMultiXactId() processing will completely remove xmax, without\ncaring too much about cutoffs like OldestXmin -- it only cares about\nwhether each individual XID needs to be kept or not.\n\n(Granted, FreezeMultiXactId() will only remove xmax like this because\nI deliberately removed its FRM_NOOP handling, but that is a very\ndelicate thing to rely on, especially from such a great distance. I\ncan't imagine that it doesn't fail on HEAD for any reason beyond sheer\nluck.)\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 10 Jan 2023 17:36:29 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
},
{
"msg_contents": "On Mon, Jan 9, 2023 at 12:58 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Jan 9, 2023 at 11:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > Afaict we'll need to backpatch this all the way?\n>\n> I thought that we probably wouldn't need to, at first. But I now think\n> that we really have to.\n\nAttached is v4. This is almost the same as v3. The only notable change\nis in how the issue is explained in comments, and in the commit\nmessage.\n\nI have revised my opinion on this question once more. In light of what\nhas come to light about the issue from recent testing, I lean towards\na HEAD-only commit once again. What do you think?\n\nI still hope to be able to commit this on my original timeline (on\nMonday or so), without the issue taking up too much more of\neverybody's time.\n\nThanks\n-- \nPeter Geoghegan",
"msg_date": "Wed, 11 Jan 2023 19:46:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: Fixing a couple of buglets in how VACUUM sets visibility map bits"
}
] |
[
{
"msg_contents": "It has always annoyed me that we can't write '+infinity' for dates and \ntimestamps and get the OCD satisfaction of making our queries line up \nwith '-infinity'.\n\nI wrote a fix for that some time ago but apparently never posted it. I \nwas reminded of it by jian he in the Infinite Interval thread, and so \nhere it is.\n-- \nVik Fearing",
"msg_date": "Sun, 1 Jan 2023 03:10:23 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "+infinity for dates and timestamps"
},
{
"msg_contents": "On 1/1/23 03:10, Vik Fearing wrote:\n> It has always annoyed me that we can't write '+infinity' for dates and \n> timestamps and get the OCD satisfaction of making our queries line up \n> with '-infinity'.\n> \n> I wrote a fix for that some time ago but apparently never posted it. I \n> was reminded of it by jian he in the Infinite Interval thread, and so \n> here it is.\n\nHmm. Somehow the .out test files were not included.\n\nFixed with attached.\n-- \nVik Fearing",
"msg_date": "Sun, 1 Jan 2023 03:31:51 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: +infinity for dates and timestamps"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> It has always annoyed me that we can't write '+infinity' for dates and \n> timestamps and get the OCD satisfaction of making our queries line up \n> with '-infinity'.\n\n+1, since it works for numerics it should work for these types too.\n(I didn't read the patch though.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Jan 2023 00:24:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: +infinity for dates and timestamps"
},
{
"msg_contents": "Vik Fearing <vik@postgresfriends.org> writes:\n> Hmm. Somehow the .out test files were not included.\n> Fixed with attached.\n\nSomehow you'd managed to duplicate some of the other changes,\nso the cfbot still didn't like that :-(\n\nAnyway, pushed with cosmetic changes. Notably, I left out the\ndocumentation changes after observing that we don't document\n\"+infinity\" separately for the numeric types. Given the lack of\ncomplaints about that I think it's fine to do the same here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 01 Jan 2023 14:21:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: +infinity for dates and timestamps"
},
{
"msg_contents": "On 1/1/23 20:21, Tom Lane wrote:\n> Vik Fearing <vik@postgresfriends.org> writes:\n>> Hmm. Somehow the .out test files were not included.\n>> Fixed with attached.\n> \n> Somehow you'd managed to duplicate some of the other changes,\n> so the cfbot still didn't like that :-(\n> \n> Anyway, pushed with cosmetic changes. Notably, I left out the\n> documentation changes after observing that we don't document\n> \"+infinity\" separately for the numeric types. Given the lack of\n> complaints about that I think it's fine to do the same here.\n\nThanks, Tom! No objections to your changes.\n-- \nVik Fearing\n\n\n\n",
"msg_date": "Sun, 1 Jan 2023 20:22:58 +0100",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": true,
"msg_subject": "Re: +infinity for dates and timestamps"
}
] |
[
{
"msg_contents": "Hi,\n\nI was looking using enable_timeout_every() in another place with Lukas\njust now, and noticed the fin_time argument. It seems odd for an\ninterval firing interface to get an absolute timestamp as an\nargument. The only in-tree user of enable_timeout_every() computes\nfin_time explicitly using the interval time:\n\n\tstartup_progress_phase_start_time = GetCurrentTimestamp();\n\tfin_time = TimestampTzPlusMilliseconds(startup_progress_phase_start_time,\n\t\t\t\t\t\t\t\t\t\t log_startup_progress_interval);\n\tenable_timeout_every(STARTUP_PROGRESS_TIMEOUT, fin_time,\n\t\t\t\t\t\t log_startup_progress_interval);\n\nIn https://postgr.es/m/CA%2BTgmoYqSF5sCNrgTom9r3Nh%3Dat4WmYFD%3DgsV-omStZ60S0ZUQ%40mail.gmail.com\nRobert said:\n> Apparently not, but here's a v2 anyway. In this version I made\n> enable_timeout_every() a three-argument function, so that the caller\n> can specify both the first time at which the timeout routine should be\n> called and the interval between them, instead of only the latter. That\n> seems to be more convenient for this use case, and is more powerful in\n> general.\n\nWhat is the use case for an absolute start time plus a relative\ninterval?\n\nISTM that this will just lead to every caller ending up with a\ncalculation like the startup.c piece quoted above.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 1 Jan 2023 16:36:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "enable_timeout_every() and fin_time"
},
{
"msg_contents": "On Sun, Jan 1, 2023 at 7:36 PM Andres Freund <andres@anarazel.de> wrote:\n> What is the use case for an absolute start time plus a relative\n> interval?\n\nThe code snippet that you indicate has the important side effect of\nchanging the global variable startup_progress_phase_start_time, which\nis used by has_startup_progress_timeout_expired. Without the fin_time\nargument, the timeout machinery would have to call\nGetCurrentTimestamp() separately, and the caller wouldn't know what\nanswer it got. The result would be that the progress reports would\nindicate an elapsed time relative to one timestamp, but the time at\nwhich those progress reports were printed would be relative to a\nslightly different timestamp.\n\nMaybe nobody would notice such a minor discrepancy, but I wanted to avoid it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 13:33:34 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_timeout_every() and fin_time"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 13:33:34 -0500, Robert Haas wrote:\n> On Sun, Jan 1, 2023 at 7:36 PM Andres Freund <andres@anarazel.de> wrote:\n> > What is the use case for an absolute start time plus a relative\n> > interval?\n> \n> The code snippet that you indicate has the important side effect of\n> changing the global variable startup_progress_phase_start_time, which\n> is used by has_startup_progress_timeout_expired. Without the fin_time\n> argument, the timeout machinery would have to call\n> GetCurrentTimestamp() separately, and the caller wouldn't know what\n> answer it got. The result would be that the progress reports would\n> indicate an elapsed time relative to one timestamp, but the time at\n> which those progress reports were printed would be relative to a\n> slightly different timestamp.\n\n> Maybe nobody would notice such a minor discrepancy, but I wanted to avoid it.\n\nDoesn't that discrepancy already exist as the code stands, because\nstartup_progress_phase_start_time is also set in\nhas_startup_progress_timeout_expired()? I realize that was an example, but the\nissue seems broader: After the first \"firing\", the next timeout will be\ncomputed relative to an absolute time gathered in timestamp.c.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:14:00 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: enable_timeout_every() and fin_time"
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n> Doesn't that discrepancy already exist as the code stands, because\n> startup_progress_phase_start_time is also set in\n> has_startup_progress_timeout_expired()?\n\nI don't think it is, actually.\n\n> I realize that was an example, but the\n> issue seems broader: After the first \"firing\", the next timeout will be\n> computed relative to an absolute time gathered in timestamp.c.\n\nWe're computing the time since the start of the current phase, not the\ntime since the last timeout. So I don't see how this is relevant.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 3 Jan 2023 15:30:52 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: enable_timeout_every() and fin_time"
}
] |
[
{
"msg_contents": "Hi,\n\nSince EXPLAIN ANALYZE with TIMING ON still carries noticeable overhead on\nmodern hardware (despite time sources being faster), I'd like to propose a\nnew setting EXPLAIN ANALYZE, called \"TIMING SAMPLING\", as compared to\nTIMING ON.\n\nThis new timing mode uses a timer on a fixed recurring frequency (e.g. 100\nor 1000 Hz) to gather a sampled timestamp on a predefined schedule, instead\nof getting the time on-demand when InstrStartNode/InstrStopNode is called.\nTo implement the timer, we can use the existing timeout infrastructure,\nwhich is backed by a wall clock timer (ITIMER_REAL).\n\nConceptually this is inspired by how sampling profilers work (e.g. \"perf\"),\nbut it ties into the existing per-plan node instrumentation done by EXPLAIN\nANALYZE, and simply provides a lower accuracy version of the total time for\neach plan node.\n\nIn EXPLAIN output this is marked as \"sampled time\", and scaled to the total\nwall clock time (to adjust for the sampling undercounting):\n\n=# EXPLAIN (ANALYZE, BUFFERS, TIMING SAMPLING, SAMPLEFREQ 100) SELECT ...;\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=201747.90..201748.00 rows=10 width=12) (actual\nsampled time=5490.974 rows=9 loops=1)\n ...\n -> Hash Join (cost=0.23..199247.90 rows=499999 width=4) (actual\nsampled time=3738.619 rows=9000000 loops=1)\n ...\n -> Seq Scan on large (cost=0.00..144247.79 rows=9999979 width=4)\n(actual sampled time=1004.671 rows=10000001 loops=1)\n ...\n -> Hash (cost=0.10..0.10 rows=10 width=4) (actual sampled\ntime=0.000 rows=10 loops=1)\n ...\n Execution Time: 5491.475 ms\n---\n\nIn simple query tests like this on my local machine, this shows a\nconsistent benefit over TIMING ON (and behaves close to ANALYZE with TIMING\nOFF), whilst providing a \"good enough\" accuracy to identify which part of\nthe query was problematic.\n\nAttached is a 
prototype patch for early feedback on the concept, with tests\nand documentation to come in a follow-up. Since the January commitfest is\nstill marked as open I'll register it there, but note that my assumption is\nthis is *not* Postgres 16 material.\n\nAs an open item, note that in the patch the requested sampling frequency is\nnot yet passed to parallel workers (it always defaults to 1000 Hz when\nsampling is enabled). Also, note the timing frequency is limited to a\nmaximum of 1000 Hz (1ms) due to current limitations of the timeout\ninfrastructure.\n\nWith thanks to Andres Freund for help on refining the idea, collaborating\non early code and finding the approach to hook into the timeout API.\n\nThanks,\nLukas\n\n-- \nLukas Fittl",
"msg_date": "Mon, 2 Jan 2023 03:36:04 -0800",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "Nice addition! And the code looks pretty straight forward.\n\nThe current patch triggers warnings:\nhttps://cirrus-ci.com/task/6016013976731648 Looks like you need to add void\nas the argument.\n\nDo you have some performance comparison between TIMING ON and TIMING\nSAMPLING?\n\nIn InstrStartSampling there's logic to increase/decrease the frequency of\nan already existing timer. It's not clear to me when this can occur. I'd\nexpect sampling frequency to remain constant throughout an explain plan. If\nit's indeed needed, I think a code comment would be useful to explain why\nthis edge case is necessary.\n\nOn Fri, 6 Jan 2023 at 09:41, Lukas Fittl <lukas@fittl.com> wrote:\n\n> Hi,\n>\n> Since EXPLAIN ANALYZE with TIMING ON still carries noticeable overhead on\n> modern hardware (despite time sources being faster), I'd like to propose a\n> new setting EXPLAIN ANALYZE, called \"TIMING SAMPLING\", as compared to\n> TIMING ON.\n>\n> This new timing mode uses a timer on a fixed recurring frequency (e.g. 
100\n> or 1000 Hz) to gather a sampled timestamp on a predefined schedule, instead\n> of getting the time on-demand when InstrStartNode/InstrStopNode is called.\n> To implement the timer, we can use the existing timeout infrastructure,\n> which is backed by a wall clock timer (ITIMER_REAL).\n>\n> Conceptually this is inspired by how sampling profilers work (e.g.\n> \"perf\"), but it ties into the existing per-plan node instrumentation done\n> by EXPLAIN ANALYZE, and simply provides a lower accuracy version of the\n> total time for each plan node.\n>\n> In EXPLAIN output this is marked as \"sampled time\", and scaled to the\n> total wall clock time (to adjust for the sampling undercounting):\n>\n> =# EXPLAIN (ANALYZE, BUFFERS, TIMING SAMPLING, SAMPLEFREQ 100) SELECT ...;\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=201747.90..201748.00 rows=10 width=12) (actual\n> sampled time=5490.974 rows=9 loops=1)\n> ...\n> -> Hash Join (cost=0.23..199247.90 rows=499999 width=4) (actual\n> sampled time=3738.619 rows=9000000 loops=1)\n> ...\n> -> Seq Scan on large (cost=0.00..144247.79 rows=9999979\n> width=4) (actual sampled time=1004.671 rows=10000001 loops=1)\n> ...\n> -> Hash (cost=0.10..0.10 rows=10 width=4) (actual sampled\n> time=0.000 rows=10 loops=1)\n> ...\n> Execution Time: 5491.475 ms\n> ---\n>\n> In simple query tests like this on my local machine, this shows a\n> consistent benefit over TIMING ON (and behaves close to ANALYZE with TIMING\n> OFF), whilst providing a \"good enough\" accuracy to identify which part of\n> the query was problematic.\n>\n> Attached is a prototype patch for early feedback on the concept, with\n> tests and documentation to come in a follow-up. 
Since the January\n> commitfest is still marked as open I'll register it there, but note that my\n> assumption is this is *not* Postgres 16 material.\n>\n> As an open item, note that in the patch the requested sampling frequency\n> is not yet passed to parallel workers (it always defaults to 1000 Hz when\n> sampling is enabled). Also, note the timing frequency is limited to a\n> maximum of 1000 Hz (1ms) due to current limitations of the timeout\n> infrastructure.\n>\n> With thanks to Andres Freund for help on refining the idea, collaborating\n> on early code and finding the approach to hook into the timeout API.\n>\n> Thanks,\n> Lukas\n>\n> --\n> Lukas Fittl\n>\n",
"msg_date": "Fri, 6 Jan 2023 10:19:16 +0100",
"msg_from": "Jelte Fennema <me@jeltef.nl>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "Nice idea.\n\nOn 1/6/23 10:19, Jelte Fennema wrote:\n> Do you have some performance comparison between TIMING ON and TIMING \n> SAMPLING?\n\n+1 to see some numbers compared to TIMING ON.\n\nMostly I'm wondering if the sampling based approach gains us enough to \nbe worth it, once the patch to use RDTSC hopefully landed (see [1]). I \nbelieve that with the RDTSC patch the overhead of TIMING ON is lower \nthan the overhead of using ANALYZE with TIMING OFF in the first place. \nHence, to be really useful, it would be great if we could on top of \nTIMING SAMPLING also lower the overhead of ANALYZE itself further (e.g. \nby using a fast path for the default EXPLAIN (ANALYZE, TIMING ON / \nSAMPLING)). Currently, InstrStartNode() and InstrStopNode() have a ton \nof branches and without all the typically deactivated code the \nimplementation would be very small and could be placed in an inlinable \nfunction.\n\n[1] \nhttps://www.postgresql.org/message-id/flat/20200612232810.f46nbqkdhbutzqdg%40alap3.anarazel.de\n\n-- \nDavid Geier\n(ServiceNow)\n\n\n\n",
"msg_date": "Fri, 13 Jan 2023 09:11:06 +0100",
"msg_from": "David Geier <geidav.pg@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-13 09:11:06 +0100, David Geier wrote:\n> Mostly I'm wondering if the sampling based approach gains us enough to be\n> worth it, once the patch to use RDTSC hopefully landed (see [1]).\n\nWell, I'm not sure we have a path forward on it. There's portability and\naccuracy concerns. But more importantly:\n\n> I believe that with the RDTSC patch the overhead of TIMING ON is lower than\n> the overhead of using ANALYZE with TIMING OFF in the first place. Hence, to\n> be really useful, it would be great if we could on top of TIMING SAMPLING\n> also lower the overhead of ANALYZE itself further (e.g. by using a fast path\n> for the default EXPLAIN (ANALYZE, TIMING ON / SAMPLING)). Currently,\n> InstrStartNode() and InstrStopNode() have a ton of branches and without all\n> the typically deactivated code the implementation would be very small and\n> could be placed in an inlinable function.\n\nYes, I think SAMPLING could get rid of most of the instrumentation overhead -\nat the cost of a bit less detail in the explain, of course. Which could make\nit a lot more feasible to enable something like auto_explain.log_timing in\nbusy workloads.\n\nFor the sampling mode we don't really need something like\nInstrStart/StopNode. We just need a pointer to node currently executing - not\nfree to set, but still a heck of a lot cheaper than InstrStopNode(), even\nwithout ->need_timer etc. Then the timer just needs to do\n instr->sampled_total += (now - last_sample)\n last_sample = now\n\n\nI've been thinking that we should consider making more of the instrumentation\ncode work like that. The amount of work we're doing in InstrStart/StopNode()\nhas steadily risen. When buffer usage and WAL usage are enabled, we're\nexecuting over 1000 instructions! 
And each single Instrumentation node is ~450\nbytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\nembedded.\n\nIf we instead have InstrStartNode() set up a global pointer to the\nInstrumentation node, we can make the instrumentation code modify both the\n\"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\ncurrent_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\nchanges - right now nodes \"automatically\" include the IO/WAL incurred in child\nnodes, but that's just a small bit of additional summin-up to be done during\nEXPLAIN.\n\n\nSeparately, I think we should consider re-ordering Instrumentation so that\nbufusage_start, walusage_start are after the much more commonly used\nelements. We're forcing ntuples, nloops, .. onto separate cachelines, even\nthough they're accounted for unconditionally.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sun, 15 Jan 2023 12:22:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 1:19 AM Jelte Fennema <me@jeltef.nl> wrote:\n\n> Nice addition! And the code looks pretty straight forward.\n>\n\nThanks for reviewing!\n\nThe current patch triggers warnings:\n> https://cirrus-ci.com/task/6016013976731648 Looks like you need to add\n> void as the argument.\n>\n\nFixed in v2 attached. This also adds a simple regression test, as well as\nfixes the parallel working handling.\n\nDo you have some performance comparison between TIMING ON and TIMING\n> SAMPLING?\n>\n\nHere are some benchmarks of auto_explain overhead on my ARM-based M1\nMacbook for the following query run with pgbench on a scale factor 100 data\nset:\n\nSELECT COUNT(*) FROM pgbench_branches JOIN pgbench_accounts USING (bid)\nJOIN pgbench_tellers USING (bid) WHERE bid = 42;\n\n(the motivation is to use a query that is more complex than the standard\npgbench select-only test query)\n\navg latency (best of 3), -T 300, -c 4, -s 100, shared_buffers 2GB, fsync\noff, max_parallel_workers_per_gather 0:\n\nmaster, log_timing = off: 871 ms (878 / 877 / 871)\npatch, log_timing = off: 869 ms (882 / 880 / 869)\npatch, log_timing = on: 890 ms (917 / 930 / 890)\npatch, log_timing = sampling, samplefreq = 1000: 869 ms (887 / 869 / 894)\n\nAdditionally, here is Andres' benchmark from [1], with the sampling option\nadded:\n\n% psql -Xc 'DROP TABLE IF EXISTS t; CREATE TABLE t AS SELECT * FROM\ngenerate_series(1, 100000) g(i);' postgres && pgbench -n -r -t 100 -f\n<(echo -e \"SELECT COUNT(*) FROM t;EXPLAIN (ANALYZE, TIMING OFF) SELECT\nCOUNT(*) FROM t;EXPLAIN (ANALYZE, TIMING SAMPLING) SELECT COUNT(*) FROM\nt;EXPLAIN (ANALYZE, TIMING ON) SELECT COUNT(*) FROM t;\") postgres |grep '^ '\nDROP TABLE\nSELECT 100000\n 3.507 0 SELECT COUNT(*) FROM t;\n 3.476 0 EXPLAIN (ANALYZE, TIMING OFF) SELECT COUNT(*)\nFROM t;\n 3.576 0 EXPLAIN (ANALYZE, TIMING SAMPLING) SELECT\nCOUNT(*) FROM t;\n 5.096 0 EXPLAIN (ANALYZE, TIMING ON) SELECT COUNT(*)\nFROM t;\n\nMy pg_test_timing data for 
reference:\n\n% pg_test_timing\nTesting timing overhead for 3 seconds.\nPer loop time including overhead: 23.65 ns\nHistogram of timing durations:\n < us % of total count\n 1 97.64472 123876325\n 2 2.35421 2986658\n 4 0.00022 277\n 8 0.00016 202\n 16 0.00064 815\n 32 0.00005 64\n\nIn InstrStartSampling there's logic to increase/decrease the frequency of\n> an already existing timer. It's not clear to me when this can occur. I'd\n> expect sampling frequency to remain constant throughout an explain plan. If\n> it's indeed needed, I think a code comment would be useful to explain why\n> this edge case is necessary.\n>\n\nClarified in a code comment in v2. This is needed for handling nested\nstatements which could have different sampling frequencies for each nesting\nlevel, i.e. a function might want to sample it's queries at a higher\nfrequency than its caller.\n\nThanks,\nLukas\n\n[1] https://postgr.es/m/20230116213913.4oseovlzvc2674z7%40awork3.anarazel.de\n\n-- \nLukas Fittl",
"msg_date": "Tue, 17 Jan 2023 02:50:40 -0800",
"msg_from": "Lukas Fittl <lukas@fittl.com>",
"msg_from_op": true,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "\n\n\nOn 1/15/23 21:22, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-13 09:11:06 +0100, David Geier wrote:\n>> Mostly I'm wondering if the sampling based approach gains us enough to be\n>> worth it, once the patch to use RDTSC hopefully landed (see [1]).\n> \n> Well, I'm not sure we have a path forward on it. There's portability and\n> accuracy concerns. But more importantly:\n> \n>> I believe that with the RDTSC patch the overhead of TIMING ON is lower than\n>> the overhead of using ANALYZE with TIMING OFF in the first place. Hence, to\n>> be really useful, it would be great if we could on top of TIMING SAMPLING\n>> also lower the overhead of ANALYZE itself further (e.g. by using a fast path\n>> for the default EXPLAIN (ANALYZE, TIMING ON / SAMPLING)). Currently,\n>> InstrStartNode() and InstrStopNode() have a ton of branches and without all\n>> the typically deactivated code the implementation would be very small and\n>> could be placed in an inlinable function.\n> \n> Yes, I think SAMPLING could get rid of most of the instrumentation overhead -\n> at the cost of a bit less detail in the explain, of course. Which could make\n> it a lot more feasible to enable something like auto_explain.log_timing in\n> busy workloads.\n> \n> For the sampling mode we don't really need something like\n> InstrStart/StopNode. We just need a pointer to node currently executing - not\n> free to set, but still a heck of a lot cheaper than InstrStopNode(), even\n> without ->need_timer etc. Then the timer just needs to do\n> instr->sampled_total += (now - last_sample)\n> last_sample = now\n> \n\nI don't understand why we would even use timestamps, in this case? AFAIK\n\"sampling profilers\" simply increment a counter for the executing node,\nand then approximate the time as proportional to the count.\n\nThat also does not have issues with timestamp \"rounding\" - considering\ne.g. sample rate 1000Hz, that's 1ms between samples. 
And it's quite\npossible the node completes within 1ms, in which case\n\n (now - last_sample)\n\nends up being 0 (assuming I correctly understand the code).\n\nAnd I don't think there's any particularly good way to correct this.\n\nIt seems ExplainNode() attempts to do some correction, but I doubt\nthat's very reliable, as these fast nodes will have sampled_total=0, so\nno matter what you multiply this with, it'll still be 0.\n\n> \n> I've been thinking that we should consider making more of the instrumentation\n> code work like that. The amount of work we're doing in InstrStart/StopNode()\n> has steadily risen. When buffer usage and WAL usage are enabled, we're\n> executing over 1000 instructions! And each single Instrumentation node is ~450\n> bytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\n> embedded.\n> \n> If we instead have InstrStartNode() set up a global pointer to the\n> Instrumentation node, we can make the instrumentation code modify both the\n> \"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\n> current_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\n> changes - right now nodes \"automatically\" include the IO/WAL incurred in child\n> nodes, but that's just a small bit of additional summin-up to be done during\n> EXPLAIN.\n> \n\nThat's certainly one way to implement that. I wonder if we could make\nthat work without the global pointer, but I can't think of any.\n\n> \n> Separately, I think we should consider re-ordering Instrumentation so that\n> bufusage_start, walusage_start are after the much more commonly used\n> elements. We're forcing ntuples, nloops, .. onto separate cachelines, even\n> though they're accounted for unconditionally.\n> \n\n+1 to that\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Jan 2023 15:52:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 15:52:07 +0100, Tomas Vondra wrote:\n> I don't understand why we would even use timestamps, in this case? AFAIK\n> \"sampling profilers\" simply increment a counter for the executing node,\n> and then approximate the time as proportional to the count.\n\nThe timer interrupt distances aren't all that evenly spaced, particularly\nunder load, and are easily distorted by having to wait for IO, an lwlock ...\n\n\n> That also does not have issues with timestamp \"rounding\" - considering\n> e.g. sample rate 1000Hz, that's 1ms between samples. And it's quite\n> possible the node completes within 1ms, in which case\n> \n> (now - last_sample)\n> \n> ends up being 0 (assuming I correctly understand the code).\n\nThat part should be counting in nanoseconds, I think? Unless I misunderstand\nsomething?\n\nWe already compute the timestamp inside timeout.c, but don't yet pass that to\ntimeout handlers. I think there's others re-computing timestamps.\n\n\n> And I don't think there's any particularly good way to correct this.\n> \n> It seems ExplainNode() attempts to do some correction, but I doubt\n> that's very reliable, as these fast nodes will have sampled_total=0, so\n> no matter what you multiply this with, it'll still be 0.\n\nThat's just the scaling to the \"actual time\" that you're talking about above,\nno?\n\n\n> > I've been thinking that we should consider making more of the instrumentation\n> > code work like that. The amount of work we're doing in InstrStart/StopNode()\n> > has steadily risen. When buffer usage and WAL usage are enabled, we're\n> > executing over 1000 instructions! 
And each single Instrumentation node is ~450\n> > bytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\n> > embedded.\n> > \n> > If we instead have InstrStartNode() set up a global pointer to the\n> > Instrumentation node, we can make the instrumentation code modify both the\n> > \"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\n> > current_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\n> > changes - right now nodes \"automatically\" include the IO/WAL incurred in child\n> > nodes, but that's just a small bit of additional summin-up to be done during\n> > EXPLAIN.\n> > \n> \n> That's certainly one way to implement that. I wonder if we could make\n> that work without the global pointer, but I can't think of any.\n\nI don't see a realistic way at least. We could pass down an\n\"InstrumentationContext\" through everything that needs to do IO and WAL. But\nthat seems infeasible at this point.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 09:02:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "On 1/17/23 18:02, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-17 15:52:07 +0100, Tomas Vondra wrote:\n>> I don't understand why we would even use timestamps, in this case? AFAIK\n>> \"sampling profilers\" simply increment a counter for the executing node,\n>> and then approximate the time as proportional to the count.\n> \n> The timer interrupt distances aren't all that evenly spaced, particularly\n> under load, and are easily distorted by having to wait for IO, an lwlock ...\n> \n\nOK, so the difference is that these events (I/O, lwlocks) may block\nsignals, and after signals get unblocked we only get a single event for\neach signal. Yeah, the timestamp handles that case better.\n\n> \n>> That also does not have issues with timestamp \"rounding\" - considering\n>> e.g. sample rate 1000Hz, that's 1ms between samples. And it's quite\n>> possible the node completes within 1ms, in which case\n>>\n>> (now - last_sample)\n>>\n>> ends up being 0 (assuming I correctly understand the code).\n> \n> That part should be counting in nanoseconds, I think? Unless I misunderstand\n> something?\n> \n\nThe higher precision does not help, because both values come from the\n*sampled* timestamp (i.e. the one updated from the signal handler). So\nif the node happens to execute between two signals, the values are going\nto be the same, and the difference is 0.\n\nPerhaps for many executions it works out, because some executions will\ncross the boundary, and the average will converge to the right value.\n\n> We already compute the timestamp inside timeout.c, but don't yet pass that to\n> timeout handlers. 
I think there's others re-computing timestamps.\n> \n> \n>> And I don't think there's any particularly good way to correct this.\n>>\n>> It seems ExplainNode() attempts to do some correction, but I doubt\n>> that's very reliable, as these fast nodes will have sampled_total=0, so\n>> no matter what you multiply this with, it'll still be 0.\n> \n> That's just the scaling to the \"actual time\" that you're talking about above,\n> no?\n> \n\nMaybe, not sure.\n\n> \n>>> I've been thinking that we should consider making more of the instrumentation\n>>> code work like that. The amount of work we're doing in InstrStart/StopNode()\n>>> has steadily risen. When buffer usage and WAL usage are enabled, we're\n>>> executing over 1000 instructions! And each single Instrumentation node is ~450\n>>> bytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\n>>> embedded.\n>>>\n>>> If we instead have InstrStartNode() set up a global pointer to the\n>>> Instrumentation node, we can make the instrumentation code modify both the\n>>> \"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\n>>> current_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\n>>> changes - right now nodes \"automatically\" include the IO/WAL incurred in child\n>>> nodes, but that's just a small bit of additional summin-up to be done during\n>>> EXPLAIN.\n>>>\n>>\n>> That's certainly one way to implement that. I wonder if we could make\n>> that work without the global pointer, but I can't think of any.\n> \n> I don't see a realistic way at least. We could pass down an\n> \"InstrumentationContext\" through everything that needs to do IO and WAL. But\n> that seems infeasible at this point.\n> \n\nWhy infeasible?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Jan 2023 19:00:02 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "Hi,\n\nOn 2023-01-17 19:00:02 +0100, Tomas Vondra wrote:\n> On 1/17/23 18:02, Andres Freund wrote:\n> > On 2023-01-17 15:52:07 +0100, Tomas Vondra wrote:\n> >> That also does not have issues with timestamp \"rounding\" - considering\n> >> e.g. sample rate 1000Hz, that's 1ms between samples. And it's quite\n> >> possible the node completes within 1ms, in which case\n> >>\n> >> (now - last_sample)\n> >>\n> >> ends up being 0 (assuming I correctly understand the code).\n> > \n> > That part should be counting in nanoseconds, I think? Unless I misunderstand\n> > something?\n> > \n> \n> The higher precision does not help, because both values come from the\n> *sampled* timestamp (i.e. the one updated from the signal handler). So\n> if the node happens to execute between two signals, the values are going\n> to be the same, and the difference is 0.\n\nIn that case there simply wasn't any sample for the node, and a non-timestamp\nbased sample counter wouldn't do anything different?\n\nIf you're worried about the case where a timer does fire during execution of\nthe node, but exactly once, that should provide a difference between the last\nsampled timestamp and the current time. It'll attribute a bit too much to the\nin-progress nodes, but well, that's sampling for you.\n\n\nI think a \"hybrid\" explain mode might be worth thinking about. Use the\n\"current\" sampling method for the first execution of a node, and for the first\nfew milliseconds of a query (or perhaps the first few timestamp\nacquisitions). That provides an accurate explain analyze for short queries,\nwithout a significant slowdown. Then switch to sampling, which provides decent\nattribution for a bit longer running queries.\n\n\n\n> >>> I've been thinking that we should consider making more of the instrumentation\n> >>> code work like that. The amount of work we're doing in InstrStart/StopNode()\n> >>> has steadily risen. 
When buffer usage and WAL usage are enabled, we're\n> >>> executing over 1000 instructions! And each single Instrumentation node is ~450\n> >>> bytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\n> >>> embedded.\n> >>>\n> >>> If we instead have InstrStartNode() set up a global pointer to the\n> >>> Instrumentation node, we can make the instrumentation code modify both the\n> >>> \"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\n> >>> current_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\n> >>> changes - right now nodes \"automatically\" include the IO/WAL incurred in child\n> >>> nodes, but that's just a small bit of additional summin-up to be done during\n> >>> EXPLAIN.\n> >>>\n> >>\n> >> That's certainly one way to implement that. I wonder if we could make\n> >> that work without the global pointer, but I can't think of any.\n> > \n> > I don't see a realistic way at least. We could pass down an\n> > \"InstrumentationContext\" through everything that needs to do IO and WAL. But\n> > that seems infeasible at this point.\n\n> Why infeasible?\n\nPrimarily the scale of the change. We'd have to pass down the context into all\ntable/index AM functions. And into a lot of bufmgr.c, xlog.c functions,\nwhich'd require their callers to have access to the context. That's hundreds\nif not thousands places.\n\nAdding that many function parameters might turn out to be noticable runtime\nwise, due to increased register movement. I think for a number of the places\nwhere we currently don't, we ought to use by-reference struct for the\nnot-always-used parameters, that then also could contain this context.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 17 Jan 2023 10:46:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "On 1/17/23 19:46, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-17 19:00:02 +0100, Tomas Vondra wrote:\n>> On 1/17/23 18:02, Andres Freund wrote:\n>>> On 2023-01-17 15:52:07 +0100, Tomas Vondra wrote:\n>>>> That also does not have issues with timestamp \"rounding\" - considering\n>>>> e.g. sample rate 1000Hz, that's 1ms between samples. And it's quite\n>>>> possible the node completes within 1ms, in which case\n>>>>\n>>>> (now - last_sample)\n>>>>\n>>>> ends up being 0 (assuming I correctly understand the code).\n>>>\n>>> That part should be counting in nanoseconds, I think? Unless I misunderstand\n>>> something?\n>>>\n>>\n>> The higher precision does not help, because both values come from the\n>> *sampled* timestamp (i.e. the one updated from the signal handler). So\n>> if the node happens to execute between two signals, the values are going\n>> to be the same, and the difference is 0.\n> \n> In that case there simply wasn't any sample for the node, and a non-timestamp\n> based sample counter wouldn't do anything different?\n> \n\nYeah, you're right.\n\n> If you're worried about the case where a timer does fire during execution of\n> the node, but exactly once, that should provide a difference between the last\n> sampled timestamp and the current time. It'll attribute a bit too much to the\n> in-progress nodes, but well, that's sampling for you.\n> \n> \n> I think a \"hybrid\" explain mode might be worth thinking about. Use the\n> \"current\" sampling method for the first execution of a node, and for the first\n> few milliseconds of a query (or perhaps the first few timestamp\n> acquisitions). That provides an accurate explain analyze for short queries,\n> without a significant slowdown. Then switch to sampling, which provides decent\n> attribution for a bit longer running queries.\n> \n\nYeah, this is essentially the sampling I imagined when I first read the\nsubject of this thread. 
It samples which node executions to measure (and\nthen measures those accurately), while these patches sample timestamps.\n\n> \n> \n>>>>> I've been thinking that we should consider making more of the instrumentation\n>>>>> code work like that. The amount of work we're doing in InstrStart/StopNode()\n>>>>> has steadily risen. When buffer usage and WAL usage are enabled, we're\n>>>>> executing over 1000 instructions! And each single Instrumentation node is ~450\n>>>>> bytes, to a good degree due to having 2 BufUsage and 2 WalUsage structs\n>>>>> embedded.\n>>>>>\n>>>>> If we instead have InstrStartNode() set up a global pointer to the\n>>>>> Instrumentation node, we can make the instrumentation code modify both the\n>>>>> \"global\" counters (pgBufferUsage, pgWalUsage) and, if set,\n>>>>> current_instr->{pgBufferUsage, pgWalUsage}. That'll require some larger\n>>>>> changes - right now nodes \"automatically\" include the IO/WAL incurred in child\n>>>>> nodes, but that's just a small bit of additional summin-up to be done during\n>>>>> EXPLAIN.\n>>>>>\n>>>>\n>>>> That's certainly one way to implement that. I wonder if we could make\n>>>> that work without the global pointer, but I can't think of any.\n>>>\n>>> I don't see a realistic way at least. We could pass down an\n>>> \"InstrumentationContext\" through everything that needs to do IO and WAL. But\n>>> that seems infeasible at this point.\n> \n>> Why infeasible?\n> \n> Primarily the scale of the change. We'd have to pass down the context into all\n> table/index AM functions. And into a lot of bufmgr.c, xlog.c functions,\n> which'd require their callers to have access to the context. That's hundreds\n> if not thousands places.\n> \n> Adding that many function parameters might turn out to be noticable runtime\n> wise, due to increased register movement. 
I think for a number of the places\n> where we currently don't, we ought to use by-reference struct for the\n> not-always-used parameters, that then also could contain this context.\n> \n\nOK, I haven't realized we'd have to pass it to that many places.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 17 Jan 2023 20:51:39 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "On Tue, 17 Jan 2023 at 14:52, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 1/17/23 19:46, Andres Freund wrote:\n>\n> > I think a \"hybrid\" explain mode might be worth thinking about. Use the\n> > \"current\" sampling method for the first execution of a node, and for the first\n> > few milliseconds of a query (or perhaps the first few timestamp\n> > acquisitions). That provides an accurate explain analyze for short queries,\n> > without a significant slowdown. Then switch to sampling, which provides decent\n> > attribution for a bit longer running queries.\n> >\n>\n> Yeah, this is essentially the sampling I imagined when I first read the\n> subject of this thread. It samples which node executions to measure (and\n> then measures those accurately), while these patches sample timestamps.\n\nThat sounds interesting. Fwiw my first thought would be to implement\nit a bit differently. Always have a timer running sampling right\nfrom the start, but also if there are less than, say, 1000 samples for\na node then measure the actual start/finish time.\n\nSo for any given node once you've hit enough samples to get a decent\nestimate you stop checking the time. That way any fast or rarely\ncalled nodes still have accurate measurements even if they get few\nsamples and any long or frequently called nodes stop getting\ntimestamps and just use timer counts.\n\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 14 Mar 2023 14:37:57 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
},
{
"msg_contents": "This thread has been stale since January with no movement at all during the\nMarch CF, and according to the CFBot it stopped building at all ~ 14 weeks ago.\n\nI'm marking this returned with feedback, it can be resubmitted for a future CF\nif someone decides to pick it up.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 3 Jul 2023 18:01:37 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Sampling-based timing for EXPLAIN ANALYZE"
}
] |
[
{
"msg_contents": "Re-reading my latest MERGE patch, I realised there is a trivial,\npre-existing bug in the check for unreachable WHEN clauses, which\nmeans it won't spot an unreachable WHEN clause if it doesn't have an\nAND condition.\n\nSo the checks need to be re-ordered, as in the attached.\n\nRegards,\nDean",
"msg_date": "Mon, 2 Jan 2023 12:13:59 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Bug in check for unreachable MERGE WHEN clauses"
},
{
"msg_contents": "On Mon, 2 Jan 2023 at 12:13, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Re-reading my latest MERGE patch, I realised there is a trivial,\n> pre-existing bug in the check for unreachable WHEN clauses, which\n> means it won't spot an unreachable WHEN clause if it doesn't have an\n> AND condition.\n>\n> So the checks need to be re-ordered, as in the attached.\n>\n\nPushed and back-patched.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 10 Jan 2023 14:23:25 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Bug in check for unreachable MERGE WHEN clauses"
}
] |
[
{
"msg_contents": "I've been wondering if it might be a good idea to have a third parameter\nfor pg_input_error_message() which would default to false, but which if\ntrue would cause it to emit the detail and hint fields, if any, as well\nas the message field from the error_data.\n\nThoughts?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n",
"msg_date": "Mon, 2 Jan 2023 10:30:23 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "verbose mode for pg_input_error_message?"
},
{
"msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I've been wondering if it might be a good idea to have a third parameter\n> for pg_input_error_message() which would default to false, but which if\n> true would cause it to emit the detail and hint fields, if any, as well\n> as the message field from the error_data.\n\nI don't think that just concatenating those strings would make for a\npleasant API. More sensible, perhaps, to have a separate function\nthat returns a record. Or we could redefine the existing function\nthat way, but I suspect that \"just the primary error\" will be a\nprincipal use-case.\n\nBeing able to get the SQLSTATE is likely to be interesting too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Jan 2023 10:44:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On 2023-01-02 Mo 10:44, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I've been wondering if it might be a good idea to have a third parameter\n>> for pg_input_error_message() which would default to false, but which if\n>> true would cause it to emit the detail and hint fields, if any, as well\n>> as the message field from the error_data.\n> I don't think that just concatenating those strings would make for a\n> pleasant API. More sensible, perhaps, to have a separate function\n> that returns a record. Or we could redefine the existing function\n> that way, but I suspect that \"just the primary error\" will be a\n> principal use-case.\n>\n> Being able to get the SQLSTATE is likely to be interesting too.\n>\n> \t\t\t\n\n\nOK, here's a patch along those lines.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com",
"msg_date": "Wed, 4 Jan 2023 16:18:59 -0500",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Wed, Jan 04, 2023 at 04:18:59PM -0500, Andrew Dunstan wrote:\n> On 2023-01-02 Mo 10:44, Tom Lane wrote:\n>> I don't think that just concatenating those strings would make for a\n>> pleasant API. More sensible, perhaps, to have a separate function\n>> that returns a record. Or we could redefine the existing function\n>> that way, but I suspect that \"just the primary error\" will be a\n>> principal use-case.\n>>\n>> Being able to get the SQLSTATE is likely to be interesting too.\n> \n> OK, here's a patch along those lines.\n\nMy vote would be to redefine the existing pg_input_error_message() function\nto return a record, but I recognize that this would inflate the patch quite\na bit due to all the existing uses in the tests. If this is the only\nargument against this approach, I'm happy to help with the patch.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Tue, 10 Jan 2023 15:41:12 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Tue, Jan 10, 2023 at 03:41:12PM -0800, Nathan Bossart wrote:\n> My vote would be to redefine the existing pg_input_error_message() function\n> to return a record, but I recognize that this would inflate the patch quite\n> a bit due to all the existing uses in the tests. If this is the only\n> argument against this approach, I'm happy to help with the patch.\n\nHere's an attempt at this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Feb 2023 10:40:48 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 10:40:48AM -0800, Nathan Bossart wrote:\n> On Tue, Jan 10, 2023 at 03:41:12PM -0800, Nathan Bossart wrote:\n>> My vote would be to redefine the existing pg_input_error_message() function\n>> to return a record, but I recognize that this would inflate the patch quite\n>> a bit due to all the existing uses in the tests. If this is the only\n>> argument against this approach, I'm happy to help with the patch.\n> \n> Here's an attempt at this.\n\nThis seems to have made cfbot angry. Will post a new version shortly.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Thu, 23 Feb 2023 11:30:38 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 11:30:38AM -0800, Nathan Bossart wrote:\n> Will post a new version shortly.\n\nAs promised...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 23 Feb 2023 13:47:23 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Thu, Feb 23, 2023 at 4:47 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Thu, Feb 23, 2023 at 11:30:38AM -0800, Nathan Bossart wrote:\n> > Will post a new version shortly.\n>\n> As promised...\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\n\nLooks good to me, passes make check-world. Thanks for slogging through this.",
"msg_date": "Fri, 24 Feb 2023 17:36:42 -0500",
"msg_from": "Corey Huinker <corey.huinker@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Fri, Feb 24, 2023 at 05:36:42PM -0500, Corey Huinker wrote:\n> Looks good to me, passes make check-world. Thanks for slogging through this.\n\nFWIW, I agree that switching pg_input_error_message() to return a row\nwould be nicer in the long-run than just getting an error message\nbecause it has the merit to be extensible at will with all the data\nwe'd like to attach to it (I suspect that getting more fields is not\nmuch likely, but who knows..).\n\npg_input_error_message() does not strike me as a good function name,\nthough, because it now returns much more than an error message.\nHence, couldn't something like pg_input_error() be better, because\nmore generic?\n--\nMichael",
"msg_date": "Sat, 25 Feb 2023 13:39:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 01:39:21PM +0900, Michael Paquier wrote:\n> pg_input_error_message() does not strike me as a good function name,\n> though, because it now returns much more than an error message.\n> Hence, couldn't something like pg_input_error() be better, because\n> more generic?\n\nI personally think the existing name is fine. It returns the error\nmessage, which includes the primary, detail, and hint messages. Also, I'm\nnot sure that pg_input_error() is descriptive enough. That being said, I'm\nhappy to run the sed command to change the name to whatever folks think is\nbest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Sat, 25 Feb 2023 15:29:19 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sat, Feb 25, 2023 at 01:39:21PM +0900, Michael Paquier wrote:\n>> pg_input_error_message() does not strike me as a good function name,\n>> though, because it now returns much more than an error message.\n>> Hence, couldn't something like pg_input_error() be better, because\n>> more generic?\n\n> I personally think the existing name is fine. It returns the error\n> message, which includes the primary, detail, and hint messages. Also, I'm\n> not sure that pg_input_error() is descriptive enough. That being said, I'm\n> happy to run the sed command to change the name to whatever folks think is\n> best.\n\nMaybe pg_input_error_info()? I tend to agree with Michael that as\nsoon as you throw things like the SQLSTATE code into it, \"message\"\nseems not very apropos. I'm not dead set on that position, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 25 Feb 2023 20:07:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 08:07:33PM -0500, Tom Lane wrote:\n> Maybe pg_input_error_info()? I tend to agree with Michael that as\n> soon as you throw things like the SQLSTATE code into it, \"message\"\n> seems not very apropos. I'm not dead set on that position, though.\n\npg_input_error_info() seems more descriptive to me. I changed the name to\nthat in v4.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Sat, 25 Feb 2023 20:58:17 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Sat, Feb 25, 2023 at 08:58:17PM -0800, Nathan Bossart wrote:\n> pg_input_error_info() seems more descriptive to me. I changed the name to\n> that in v4.\n\nerror_info() is fine by me. My recent history is poor lately when it\ncomes to name new things.\n\n+ values[0] = CStringGetTextDatum(escontext.error_data->message);\n+\n+ if (escontext.error_data->detail != NULL)\n+ values[1] = CStringGetTextDatum(escontext.error_data->detail);\n+ else\n+ isnull[1] = true;\n+\n+ if (escontext.error_data->hint != NULL)\n+ values[2] = CStringGetTextDatum(escontext.error_data->hint);\n+ else\n+ isnull[2] = true;\n+\n+ values[3] = CStringGetTextDatum(\n+ unpack_sql_state(escontext.error_data->sqlerrcode));\n\nI am OK with this data set as well. If somebody makes a case about\nmore fields in ErrorData, we could always consider these separately.\n\nFWIW, I would like to change some of the regression tests as we are\nbikeshedding the whole.\n\n+SELECT pg_input_error_info(repeat('too_long', 32), 'rainbow');\nFor example, we could use the expanded display for this case in\nenum.sql.\n\n -- test non-error-throwing API\n SELECT str as jsonpath,\n pg_input_is_valid(str,'jsonpath') as ok,\n- pg_input_error_message(str,'jsonpath') as errmsg\n+ pg_input_error_info(str,'jsonpath') as errmsg\nThis case in jsonpath.sql is actually wrong, because we have more than\njust the error message.\n\nFor the others, I would make the choice of expanding the calls of\npg_input_error_info() rather than just showing row outputs, though I\nagree that this part is minor.\n--\nMichael",
"msg_date": "Sun, 26 Feb 2023 14:35:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Sun, Feb 26, 2023 at 02:35:22PM +0900, Michael Paquier wrote:\n> For the others, I would make the choice of expanding the calls of\n> pg_input_error_info() rather than just showing row outputs, though I\n> agree that this part is minor.\n\nWhile bike-shedding all the regression tests, I have noticed that\nfloat4-misrounded-input.out was missing a refresh (the query was\nright, not the output). The rest was pretty much OK for me, still I\nfound all the errmsg aliases a bit out of context as the function is\nnow extended with more attributes, so I have painted a couple of\nLATERALs over that.\n\nAre you OK with the attached?\n--\nMichael",
"msg_date": "Mon, 27 Feb 2023 15:37:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 03:37:37PM +0900, Michael Paquier wrote:\n> Are you OK with the attached?\n\nI found a couple of more small changes required to make cfbot happy.\nOtherwise, it looks good to me.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Mon, 27 Feb 2023 11:25:01 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Mon, Feb 27, 2023 at 11:25:01AM -0800, Nathan Bossart wrote:\n> I found a couple of more small changes required to make cfbot happy.\n> Otherwise, it looks good to me.\n\nThanks, I have confirmed the spots the CI was complaining about, so\napplied. There was an extra place that was not right in xml_2.out as\nreported by prion, parula and snakefly because of a bad copy-paste, so\nfixed as well.\n--\nMichael",
"msg_date": "Tue, 28 Feb 2023 09:01:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
},
{
"msg_contents": "On Tue, Feb 28, 2023 at 09:01:48AM +0900, Michael Paquier wrote:\n> On Mon, Feb 27, 2023 at 11:25:01AM -0800, Nathan Bossart wrote:\n>> I found a couple of more small changes required to make cfbot happy.\n>> Otherwise, it looks good to me.\n> \n> Thanks, I have confirmed the spots the CI was complaining about, so\n> applied. There was an extra place that was not right in xml_2.out as\n> reported by prion, parula and snakefly because of a bad copy-paste, so\n> fixed as well.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Mon, 27 Feb 2023 16:11:56 -0800",
"msg_from": "Nathan Bossart <nathandbossart@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: verbose mode for pg_input_error_message?"
}
] |
[
{
"msg_contents": "Hi all,\n\nA question, may I wrong.\n\nI've a Rocky Linux 8 with OpenSSL 1.1.1 FIPS and Intel cpu with aes \nsupport (cat /proc/cpuinfo | grep aes)\n\nTest made with openssl gives me a huge performance with aes enabled vs not:\n\n\"openssl speed -elapsed -evp aes-128-cbc\" is about 5 time faster than \n\"openssl speed -elapsed aes-128-cbc\" or another \"software calculated \ntest\", eg. \"openssl speed -elapsed bf-cbc\"\n\nSo OpenSSL is ok.\n\nPostgresql 15 is compiled with openssl:\n\nselect name, setting from pg_settings where name = 'ssl_library';\n name | setting\n-------------+---------\n ssl_library | OpenSSL\n(1 row)\n\nSo, a test with pgcrypto:\n\nselect pgp_sym_encrypt(data::text, 'pwd') --default to aes128\nfrom generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, \n'1 hour'::interval) data\n\nvs\n\nselect pgp_sym_encrypt(data::text, 'pwd','cipher-algo=bf') -- blowfish\nfrom generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, \n'1 hour'::interval) data\n\nIn my test both queries execution is similar....aes-128 was expected \nabout 5 time faster.\n\nSo, why?\n\nPgcrypto use OpenSSL as backend, so, does it explicit force software aes \ncalculation instead of AES-NI cpu ones?\n\nThanksfor support.\n\nBest regards,\n\nAgharta\n\n\n\n\n\n\n",
"msg_date": "Mon, 2 Jan 2023 17:57:38 +0100",
"msg_from": "\"agharta82@gmail.com\" <agharta82@gmail.com>",
"msg_from_op": true,
"msg_subject": "Is OpenSSL AES-NI not available in pgcrypto?"
},
{
"msg_contents": "On 02.01.23 17:57, agharta82@gmail.com wrote:\n> select pgp_sym_encrypt(data::text, 'pwd') --default to aes128\n> from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, \n> '1 hour'::interval) data\n> \n> vs\n> \n> select pgp_sym_encrypt(data::text, 'pwd','cipher-algo=bf') -- blowfish\n> from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, \n> '1 hour'::interval) data\n> \n> In my test both queries execution is similar....aes-128 was expected \n> about 5 time faster.\n> \n> So, why?\n> \n> Pgcrypto use OpenSSL as backend, so, does it explicit force software aes \n> calculation instead of AES-NI cpu ones?\n\nI suspect it is actually using AES hardware support, but all the other \noverhead of pgcrypto makes the difference not noticeable.\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:54:31 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: Is OpenSSL AES-NI not available in pgcrypto?"
},
{
"msg_contents": "Hi,\n\nI see, I was hoping that wasn't the case.\n\nThanks a lot for your support.\n\nMy best regards,\n\nAgharta\n\n\nIl 03/01/23 16:54, Peter Eisentraut ha scritto:\n> On 02.01.23 17:57, agharta82@gmail.com wrote:\n>> select pgp_sym_encrypt(data::text, 'pwd') --default to aes128\n>> from generate_series('2022-01-01'::timestamp, \n>> '2022-12-31'::timestamp, '1 hour'::interval) data\n>>\n>> vs\n>>\n>> select pgp_sym_encrypt(data::text, 'pwd','cipher-algo=bf') -- blowfish\n>> from generate_series('2022-01-01'::timestamp, \n>> '2022-12-31'::timestamp, '1 hour'::interval) data\n>>\n>> In my test both queries execution is similar....aes-128 was expected \n>> about 5 time faster.\n>>\n>> So, why?\n>>\n>> Pgcrypto use OpenSSL as backend, so, does it explicit force software \n>> aes calculation instead of AES-NI cpu ones?\n>\n> I suspect it is actually using AES hardware support, but all the other \n> overhead of pgcrypto makes the difference not noticeable.\n>\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:07:40 +0100",
"msg_from": "\"agharta82@gmail.com\" <agharta82@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Is OpenSSL AES-NI not available in pgcrypto?"
},
{
"msg_contents": "On Mon, Jan 2, 2023 at 05:57:38PM +0100, agharta82@gmail.com wrote:\n> So, a test with pgcrypto:\n> \n> select pgp_sym_encrypt(data::text, 'pwd') --default to aes128\n> from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, '1\n> hour'::interval) data\n> \n> vs\n> \n> select pgp_sym_encrypt(data::text, 'pwd','cipher-algo=bf') -- blowfish\n> from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, '1\n> hour'::interval) data\n\nTo see the difference, I think you need to construct a single large\nquery that calls many pgcrypto functions, with a small return result, so\nthe network, parsing, and optimizer overhead are minimal compared to the\nOpenSSL overhread.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n\n",
"msg_date": "Fri, 6 Jan 2023 21:13:42 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Is OpenSSL AES-NI not available in pgcrypto?"
},
{
"msg_contents": "Hi Bruce,\nThanks for reply.\n\nI've give up: i've found a slide in percona site about pgcrypto that said\nthe developers of plugin intentionally introduces time consuming code to\nprevent brute force attacks.\n\nMy queries involves pgcrypto only in a small number of record (about 2000),\nso at the end the execution time remains the same....sadly.\n\nNow my hopes are now in TDE. Hope to see that feature in PostgrSQL soon.\n\nMany thanks again for support to all!\n\nHave a nice day,\nAgharta\n\n\nIl sab 7 gen 2023, 03:13 Bruce Momjian <bruce@momjian.us> ha scritto:\n\n> On Mon, Jan 2, 2023 at 05:57:38PM +0100, agharta82@gmail.com wrote:\n> > So, a test with pgcrypto:\n> >\n> > select pgp_sym_encrypt(data::text, 'pwd') --default to aes128\n> > from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, '1\n> > hour'::interval) data\n> >\n> > vs\n> >\n> > select pgp_sym_encrypt(data::text, 'pwd','cipher-algo=bf') -- blowfish\n> > from generate_series('2022-01-01'::timestamp, '2022-12-31'::timestamp, '1\n> > hour'::interval) data\n>\n> To see the difference, I think you need to construct a single large\n> query that calls many pgcrypto functions, with a small return result, so\n> the network, parsing, and optimizer overhead are minimal compared to the\n> OpenSSL overhread.\n>\n> --\n> Bruce Momjian <bruce@momjian.us> https://momjian.us\n> EDB https://enterprisedb.com\n>\n> Embrace your flaws. They make you human, rather than perfect,\n> which you will never be.\n>\n",
"msg_date": "Sat, 7 Jan 2023 06:59:25 +0100",
"msg_from": "agharta agharta <agharta82@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Is OpenSSL AES-NI not available in pgcrypto?"
}
] |
[
{
"msg_contents": "I see in v15 there is a note that there is a new category for \"char\"\nhowever it is categorized as \"internal use\"\n\nI would think that char and char(n) would be used by external programs as a\nuser type.\n\nDave Cramer",
"msg_date": "Mon, 2 Jan 2023 14:09:40 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Why is char an internal-use category"
},
{
"msg_contents": "Dave Cramer <davecramer@gmail.com> writes:\n> I see in v15 there is a note that there is a new category for \"char\"\n> however it is categorized as \"internal use\"\n> I would think that char and char(n) would be used by external programs as a\n> user type.\n\n\"char\" (with quotes) is not at all the same type as char without\nquotes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 02 Jan 2023 14:44:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Why is char an internal-use category"
}
] |
[
{
"msg_contents": "Hi,\n\nThis is a follow-up to commit d2a44904 from the 2022-11 CF [1]\nThe TAP tests were left out with the suggestion to use Perl instead of\ncat (Unix) / findstr (Windows) as the program to pipe into.\n\nPFA a patch implementing that suggestion.\n\n\n[1] https://commitfest.postgresql.org/40/4000/\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Mon, 02 Jan 2023 22:32:03 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "TAP tests for psql \\g piped into program"
},
{
"msg_contents": "On 02.01.23 22:32, Daniel Verite wrote:\n> This is a follow-up to commit d2a44904 from the 2022-11 CF [1]\n> The TAP tests were left out with the suggestion to use Perl instead of\n> cat (Unix) / findstr (Windows) as the program to pipe into.\n> \n> PFA a patch implementing that suggestion.\n\nThe perl binary refactoring in this patch caught my attention, since I \nran into this issue in another patch as well. I'm always happy to \nconsider a refactoring, but I think in this case I wouldn't do it.\n\nIf you grep for PostgreSQL::Test::Utils::windows_os, you'll find quite a \nfew pieces of code that somehow fix up paths for Windows. By hiding the \nPerl stuff in a function, we give the illusion that you don't have to \nworry about it and it's all taken care of in the test library. But you \nhave to worry about it in the very next line in \n025_stuck_on_old_timeline.pl! We should handle this all on the same \nlevel: either in the test code or in the test library. It would be \nuseful to work toward a general \"prepare path for shell\" routine. But \nuntil we have that, I don't think this is sufficient progress.\n\nSo for your patch, I would just do the path adjustment ad hoc in-line. \nIt's just one additional line.\n\n\n\n",
"msg_date": "Wed, 8 Mar 2023 16:13:44 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests for psql \\g piped into program"
},
{
"msg_contents": "Peter Eisentraut wrote:\n\n> So for your patch, I would just do the path adjustment ad hoc in-line. \n> It's just one additional line.\n\nHere's the patch updated that way.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite",
"msg_date": "Wed, 29 Mar 2023 20:39:00 +0200",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: TAP tests for psql \\g piped into program"
},
{
"msg_contents": "On 29/03/2023 21:39, Daniel Verite wrote:\n> Peter Eisentraut wrote:\n> \n>> So for your patch, I would just do the path adjustment ad hoc in-line.\n>> It's just one additional line.\n> \n> Here's the patch updated that way.\n\nCommitted, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n",
"msg_date": "Mon, 2 Oct 2023 11:46:48 +0300",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: TAP tests for psql \\g piped into program"
}
] |
[
{
"msg_contents": "Hi,\n\nAttached is a patch: contrib_v1.patch\n\nIt modifies Appendix F, the contrib directory.\n\nIt adds brief text into the titles shown in the\ntable of contents so it's easier to tell what\neach module does. It also suffixes [trusted] or [obsolete]\non the relevant titles.\n\nI added the word \"extension\" into the appendix title\nbecause I always have problems scanning through the\nappendix and finding the one to do with extensions.\n\nThe sentences describing what the modules are and how\nto build them have been reworked. Some split in 2,\nsome words removed or replaced, etc.\n\nI introduced the word \"component\" because the appendix\nhas build instructions for command line programs as well\nas extensions and libraries loaded with shared_preload_libraries().\nThis involved removing most occurrences of the word\n\"module\", although it is left in the section title.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Mon, 2 Jan 2023 18:00:15 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 03.01.2023 at 01:00, Karl O. Pinc wrote:\n> Attached is a patch: contrib_v1.patch\n>\n> It modifies Appendix F, the contrib directory.\n\nReview:\n\nThe patch applies cleanly (1334b79a35 - 2023-01-14 18:05:09 +0900).\n\nIt adds a brief explanatory part to the headers of all contrib modules\nwhich I consider as very useful, especially when looking at the TOC in\ncontrib.html where currently newcomers would need to click through all\nthe links to even get an idea what the various modules do.\nThe explanatory parts added make sense to me, althogh I'm not an expert\nin all the different contrib modules.\n\nAppendix F. now reads as \"Additional Supplied Modules and Extensions\"\ninstead of \"Appendix F. Additional Supplied Modules\" which IMHO proprely\nreflects what it is about. The original title probably comes from the\npre-extension-era.\n\nThere is also some minor rewording of sentences in contrib.sgml that in\ngeneral looks like an improvment to me.\n\nIn conclusion I cannot see why this patch should not be applied in it's\ncurrent form so I deem it ready for commiter.\n\nRegards,\nBrar\n\n\n\n",
"msg_date": "Sun, 15 Jan 2023 07:11:30 +0100",
"msg_from": "Brar Piening <brar@gmx.de>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Sun, 15 Jan 2023 07:11:30 +0100\nBrar Piening <brar@gmx.de> wrote:\n\n> On 03.01.2023 at 01:00, Karl O. Pinc wrote:\n> > Attached is a patch: contrib_v1.patch\n> >\n> > It modifies Appendix F, the contrib directory. \n> \n> Review:\n\n> It adds a brief explanatory part to the headers of all contrib modules\n\n> The explanatory parts added make sense to me, althogh I'm not an\n> expert in all the different contrib modules.\n\nNeither am I. I read the beginning of each module's docs and\nmade a best-effort. There may sometimes be a better\nsummary phrase to describe a module/extension.\n\nThanks for the review.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 15 Jan 2023 07:35:21 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-02, Karl O. Pinc wrote:\n\n> Hi,\n> \n> Attached is a patch: contrib_v1.patch\n> \n> It modifies Appendix F, the contrib directory.\n> \n> It adds brief text into the titles shown in the\n> table of contents so it's easier to tell what\n> each module does. It also suffixes [trusted] or [obsolete]\n> on the relevant titles.\n\nThis looks a good idea to me. I'm not 100% sold on having the \"trusted\"\nor \"obsolete\" marker on the titles themselves, though. Not sure what\nalternative do we have, though, other than leave them out completely.\n\nThere's a typo \"equalivent\" in two places.\n\nIn passwordcheck, I would say just \"check for weak passwords\" or maybe\n\"verify password strength\".\n\npg_buffercache is missing. Maybe \"-- inspect state of the Postgres\nbuffer cache\".\n\nFor pg_stat_statements I suggest \"track statistics of planning and\nexecution of SQL queries\"\n\nFor sepgsql, as I understand it is strictly SELinux based, not just\n\"-like\". So this needs rewording: \"label-based, SELinux-like, mandatory\naccess control\". Maybe \"SELinux-based implementation of mandatory\naccess control for row-level security\".\n\nxml -- typo \"qeurying\"\n\n> The sentences describing what the modules are and how\n> to build them have been reworked. Some split in 2,\n> some words removed or replaced, etc.\n> \n> I introduced the word \"component\" because the appendix\n> has build instructions for command line programs as well\n> as extensions and libraries loaded with shared_preload_libraries().\n> This involved removing most occurrences of the word\n> \"module\", although it is left in the section title.\n\nI haven't read this part yet.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"But static content is just dynamic content that isn't moving!\"\n http://smylers.hates-software.com/2007/08/15/fe244d0c.html\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:25:57 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Not related to this patch: it's very annoying that in the PDF output,\neach section in the appendix doesn't start on a blank page -- which\nmeans that the doc page for many modules starts in the middle of a page\nwere the previous one ends. This is very ugly. And then you get to\ndblink, which contains a bunch of reference pages for the functions it\nprovides, and those *do* start a new page each. So it looks quite\ninconsistent.\n\nI wonder if we can tweak something in the stylesheet to include a page\nbreak.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The Postgresql hackers have what I call a \"NASA space shot\" mentality.\n Quite refreshing in a world of \"weekend drag racer\" developers.\"\n(Scott Marlowe)\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:30:45 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 18 Jan 2023 13:30:45 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Not related to this patch: it's very annoying that in the PDF output,\n> each section in the appendix doesn't start on a blank page -- which\n> means that the doc page for many modules starts in the middle of a\n> page were the previous one ends.\n<snip>\n> I wonder if we can tweak something in the stylesheet to include a page\n> break.\n\nWould this be something to be included in this patch?\n(If I can figure it out.)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 18 Jan 2023 09:50:12 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-18, Karl O. Pinc wrote:\n\n> On Wed, 18 Jan 2023 13:30:45 +0100\n> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> > Not related to this patch: it's very annoying that in the PDF output,\n> > each section in the appendix doesn't start on a blank page -- which\n> > means that the doc page for many modules starts in the middle of a\n> > page were the previous one ends.\n> <snip>\n> > I wonder if we can tweak something in the stylesheet to include a page\n> > break.\n> \n> Would this be something to be included in this patch?\n> (If I can figure it out.)\n\nNo, I think we should do that change separately. I just didn't think a\nparenthical complain was worth a separate thread for it; but if you do\ncreate a patch, please do create a new thread (unless the current patch\nin this one is committed already.)\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Ninguna manada de bestias tiene una voz tan horrible como la humana\" (Orual)\n\n\n",
"msg_date": "Wed, 18 Jan 2023 18:34:47 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 18 Jan 2023 13:25:57 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-02, Karl O. Pinc wrote:\n\n> > Attached is a patch: contrib_v1.patch\n> > \n> > It modifies Appendix F, the contrib directory.\n> > \n> > It adds brief text into the titles shown in the\n> > table of contents so it's easier to tell what\n> > each module does. It also suffixes [trusted] or [obsolete]\n> > on the relevant titles. \n\n> <snip>\n> I'm not 100% sold on having the\n> \"trusted\" or \"obsolete\" marker on the titles themselves, though. Not\n> sure what alternative do we have, though, other than leave them out\n> completely.\n\nThe alternative would be to have a separate table with modules\nfor rows and \"trusted\" and \"obsolete\" columns. It seems like\nmore of a maintenance hassle than having the markers in the titles.\n\nLet me know if you want a table. I do like having a place\nto look to over all the modules to see what is \"trusted\" or \"obsolete\".\n\nI suppose there could just be a table, with module names, descriptions,\nand trusted and obsolete flags. Instead of a table of contents\nfor the modules the module names in the table could be links. But\nthat'd involve suppressing the table of contents showing all the\nmodule names. And has the problem of possible mis-match between\nthe modules listed in the table and the modules that exist.\n\n> There's a typo \"equalivent\" in two places.\n\nFixed.\n\n> In passwordcheck, I would say just \"check for weak passwords\" or maybe\n> \"verify password strength\".\n\nI used \"verify password strength\".\n\n> \n> pg_buffercache is missing. 
Maybe \"-- inspect state of the Postgres\n> buffer cache\".\n\nI used \"inspect Postgres buffer cache state\"\n\n> For pg_stat_statements I suggest \"track statistics of planning and\n> execution of SQL queries\"\n\nI had written \"track SQL query planning and execution statistics\".\nChanged to: \"track statistics of SQL planning and execution\"\n\nI don't really care. If you want your version I'll submit another\npatch.\n\n> For sepgsql, as I understand it is strictly SELinux based, not just\n> \"-like\". So this needs rewording: \"label-based, SELinux-like,\n> mandatory access control\". Maybe \"SELinux-based implementation of\n> mandatory access control for row-level security\".\n\nChanged to: \"SELinux-based row-level security mandatory access control\"\n\n> xml -- typo \"qeurying\"\n\nFixed.\n\nI have also made the patch put each module on a separate\npage when producing PDF documents. This did produce one warning,\nwhich seems unrelated to me. The pdf seems right. I also tried \njust \"make\", to be sure I didn't break anything unrelated. Seemed \nto work. So..., works for me.\n\nNew patch attached: contrib_v2.patch\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Wed, 18 Jan 2023 13:01:18 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 18 Jan 2023 18:34:47 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-18, Karl O. Pinc wrote:\n> \n> > On Wed, 18 Jan 2023 13:30:45 +0100\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> > \n> > > Not related to this patch: it's very annoying that in the PDF\n> > > output, each section in the appendix doesn't start on a blank\n> > > page -- which means that the doc page for many modules starts in\n> > > the middle of a page were the previous one ends. \n> > <snip> \n> > > I wonder if we can tweak something in the stylesheet to include a\n> > > page break. \n> > \n> > Would this be something to be included in this patch?\n> > (If I can figure it out.) \n> \n> No, I think we should do that change separately. I just didn't think\n> a parenthical complain was worth a separate thread for it; but if you\n> do create a patch, please do create a new thread (unless the current\n> patch in this one is committed already.)\n\nOops. Already sent a revised patch that includes starting each\nmodule on a new page, for PDF output. I'll wait to rip that\nout after review and start a new thread if necessary.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 18 Jan 2023 13:06:05 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-18, Karl O. Pinc wrote:\n\n> Oops. Already sent a revised patch that includes starting each\n> module on a new page, for PDF output. I'll wait to rip that\n> out after review and start a new thread if necessary.\n\nHere's my review in the form of a delta patch.\n\n\nI didn't find that a thing called \"ISN\" actually exists. Is there a\nreference to that?\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"How strange it is to find the words \"Perl\" and \"saner\" in such close\nproximity, with no apparent sense of irony. I doubt that Larry himself\ncould have managed it.\" (ncm, http://lwn.net/Articles/174769/)",
"msg_date": "Thu, 19 Jan 2023 13:35:17 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Thu, 19 Jan 2023 13:35:17 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-18, Karl O. Pinc wrote:\n> \n> > Oops. Already sent a revised patch that includes starting each\n> > module on a new page, for PDF output. I'll wait to rip that\n> > out after review and start a new thread if necessary.\n\n(I have not removed the PDF page breaks from the latest patch.)\n\n> Here's my review in the form of a delta patch.\n\nLove it.\n\n> I didn't find that a thing called \"ISN\" actually exists. Is there a\n> reference to that?\n\nMaybe. I came across it somewhere and it seemed useful. It's\nan initialism for International Standard Number.\nhttps://en.wikipedia.org/wiki/International_Standard_Number\nIt's the same ISN as in the file name, \"isn.sgml\".\n\nI've frobbed the ISN related text in my response patch.\n(And added a line break to btree-gin.)\n\nAttached are 2 patches, a regular and a delta from your v4 review:\n\ncontrib_v5-delta.patch.txt\ncontrib_v5.patch.txt\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Thu, 19 Jan 2023 11:03:53 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Thu, 19 Jan 2023 11:03:53 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Attached are 2 patches, a regular and a delta from your v4 review:\n> \n> contrib_v5-delta.patch.txt\n> contrib_v5.patch.txt\n\nI left your appendix title unchanged: \"Additional Supplied \nExtensions and Modules\". \n\nI had put \"Extensions\" after\n\"Modules\", because, apparently, things that come last in the\nsentence are most remembered by the reader. My impression is that\nmore people are looking for extensions than modules.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 19 Jan 2023 12:02:05 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-19, Karl O. Pinc wrote:\n\n> On Thu, 19 Jan 2023 11:03:53 -0600\n> \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> \n> > Attached are 2 patches, a regular and a delta from your v4 review:\n> > \n> > contrib_v5-delta.patch.txt\n> > contrib_v5.patch.txt\n> \n> I left your appendix title unchanged: \"Additional Supplied \n> Extensions and Modules\". \n> \n> I had put \"Extensions\" after\n> \"Modules\", because, apparently, things that come last in the\n> sentence are most remembered by the reader. My impression is that\n> more people are looking for extensions than modules.\n\nHmm, I didn't know that. I guess I can put it back. My own instinct is\nto put the most important stuff first, not last, but if research says to\ndo otherwise, fine, let's do that.\n\nI went over all the titles again. There were a couple of mistakes\nand inconsistencies, which I've fixed to the best of my knowledge.\nI'm happy with 0001 now and will push shortly unless there are\ncomplaints.\n\nI'm still unsure of the [trusted]/[obsolete] marker, so I split that out\nto commit 0002. I would like to see more support for that before\npushing that one.\n\nI also put the page-split bits to another page, because it seems a bit\ntoo clumsy. I hope somebody with more docbook-fu can comment: maybe\nthere's a way to fix it more generally somehow?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Update: super-fast reaction on the Postgres bugs mailing list. The report\nwas acknowledged [...], and a fix is under discussion.\nThe wonders of open-source !\"\n https://twitter.com/gunnarmorling/status/1596080409259003906",
"msg_date": "Fri, 20 Jan 2023 12:42:31 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Fri, 20 Jan 2023 12:42:31 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-19, Karl O. Pinc wrote:\n> \n> > On Thu, 19 Jan 2023 11:03:53 -0600\n> > \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> > \n> > > Attached are 2 patches, a regular and a delta from your v4 review:\n> > > \n> > > contrib_v5-delta.patch.txt\n> > > contrib_v5.patch.txt \n> > \n> > I left your appendix title unchanged: \"Additional Supplied \n> > Extensions and Modules\". \n> > \n> > I had put \"Extensions\" after\n> > \"Modules\", because, apparently, things that come last in the\n> > sentence are most remembered by the reader. My impression is that\n> > more people are looking for extensions than modules. \n> \n> Hmm, I didn't know that. I guess I can put it back. My own instinct\n> is to put the most important stuff first, not last, but if research\n> says to do otherwise, fine, let's do that.\n\nA quick google on the subject tells me that I can't figure out a good\nquick google. I believe it's from the book at bottom. Memorability\ngoes \"end\", \"beginning\", \"middle\". IIRC.\n\n> I went over all the titles again. There were a couple of mistakes\n> and inconsistencies, which I've fixed to the best of my knowledge.\n> I'm happy with 0001 now and will push shortly unless there are\n> complaints.\n> \n> I'm still unsure of the [trusted]/[obsolete] marker, so I split that\n> out to commit 0002. I would like to see more support for that before\n> pushing that one.\n> \n> I also put the page-split bits to another page, because it seems a bit\n> too clumsy. \n\nAll the above sounds good to me.\n\n> I hope somebody with more docbook-fu can comment: maybe\n> there's a way to fix it more generally somehow?\n\nWhat would the general solution be? There could be a forced page\nbreak at the beginning of _every_ sect1. For PDFs. That seems\na bit much, but maybe not. 
The only other thing I can think of\nthat's \"general\" would be to force a page break for sect1-s\nthat are in an appendix. Is any of this wanted? (Or technically\n\"better\"?)\n\nThanks for the help.\n\n ----\n\nWriting for Readers\nBy George R. Bramer, Dorothy Sedley · 1981\n\nAbout this edition\nISBN:9780675080453, 0675080452\nPage count:532\nPublished:1981\nFormat:Hardcover\nPublisher:C.E. Merrill Publishing Company\nOriginal from:Pennsylvania State University\nDigitized:July 15, 2009\nLanguage:English\nAuthor:George R. Bramer, Dorothy Sedley\n\nIt's part of a wave of reaction against Strunk & White,\nwhere they started basing writing on research into reading.\n(If it's the right book.)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Fri, 20 Jan 2023 06:26:00 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-20, Karl O. Pinc wrote:\n\n> On Fri, 20 Jan 2023 12:42:31 +0100\n> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Hmm, I didn't know that. I guess I can put it back. My own instinct\n> > is to put the most important stuff first, not last, but if research\n> > says to do otherwise, fine, let's do that.\n> \n> A quick google on the subject tells me that I can't figure out a good\n> quick google. I believe it's from the book at bottom. Memorability\n> goes \"end\", \"beginning\", \"middle\". IIRC.\n\nAh well. I just put it back the way you had it.\n\n> > I hope somebody with more docbook-fu can comment: maybe\n> > there's a way to fix it more generally somehow?\n> \n> What would the general solution be?\n\nI don't know, I was thinking that perhaps at the start of the appendix\nwe could have some kind of marker that says \"in this chapter, the\n<sect1>s all get a page break\", then a marker to stop that at the end of\nthe appendix. Or a tweak to the stylesheet, \"when inside an appendix,\nall <sect1>s get a pagebreak\", in a way that doesn't affect the other\nchapters.\n\nThe <?hard-pagebreak?> solution looks really ugly to me (in the source\ncode I mean), but I suppose if we discover no other way to do it, we\ncould do it like that.\n\n> There could be a forced page break at the beginning of _every_ sect1.\n> That seems a bit much, but maybe not. The only other thing I can\n> think of that's \"general\" would be to force a page break for sect1-s\n> that are in an appendix. Is any of this wanted? (Or technically\n> \"better\"?)\n\nI wouldn't want to changing the behavior of all the <sect1>s in the\nwhole documentation. 
Though if you want to try and garner support to do\nthat, I won't oppose it, particularly since it only matters for PDF.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n",
"msg_date": "Fri, 20 Jan 2023 20:12:03 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Ah, I wanted to attach the two remaining patches and forgot. Here they\nare.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/",
"msg_date": "Fri, 20 Jan 2023 20:12:38 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Fri, 20 Jan 2023 20:12:03 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-20, Karl O. Pinc wrote:\n> \n> > On Fri, 20 Jan 2023 12:42:31 +0100\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> wrote: \n> \n> > > Hmm, I didn't know that. I guess I can put it back. My own\n> > > instinct is to put the most important stuff first, not last, but\n> > > if research says to do otherwise, fine, let's do that. \n> > \n> > A quick google on the subject tells me that I can't figure out a\n> > good quick google. I believe it's from the book at bottom.\n> > Memorability goes \"end\", \"beginning\", \"middle\". IIRC. \n> \n> Ah well. I just put it back the way you had it.\n> \n> > > I hope somebody with more docbook-fu can comment: maybe\n> > > there's a way to fix it more generally somehow? \n> > \n> > What would the general solution be? \n> \n> I don't know, I was thinking that perhaps at the start of the appendix\n> we could have some kind of marker that says \"in this chapter, the\n> <sect1>s all get a page break\", then a marker to stop that at the end\n> of the appendix. Or a tweak to the stylesheet, \"when inside an\n> appendix, all <sect1>s get a pagebreak\", in a way that doesn't affect\n> the other chapters.\n> \n> The <?hard-pagebreak?> solution looks really ugly to me (in the source\n> code I mean), but I suppose if we discover no other way to do it, we\n> could do it like that.\n\nI can do a forced page break for sect1-s in the pdf stylesheet just \nfor the contrib appendix (appendix F) by looking for a parent \nwith an id of \"contrib\". That would work, but seems like a kludge.\n(Otherwise, you look for a parent of \"appendix\" and force the page\nbreak in all appendixes.)\n\nI'll send a patch.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Fri, 20 Jan 2023 13:33:46 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Fri, 20 Jan 2023 20:12:38 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Ah, I wanted to attach the two remaining patches and forgot. \n\nAttached are 2 alternatives:\n(They touch separate files so the ordering is meaningless.)\n\n\nv8-0001-List-trusted-and-obsolete-extensions.patch\n\nInstead of putting [trusted] and [obsolete] in the titles\nof the modules, like v7 does, add a list of them into the text.\n\n\nv8-0002-Page-break-before-sect1-in-contrib-appendix-when-pdf.patch\n\nThis frobs the PDF style sheet so that when sect1 is used\nin the appendix for the contrib directory, there is a page\nbreak before every sect1. This puts each module/extension\nonto a separate page, but only for the contrib appendix.\n\nAside from hardcoding the \"contrib\" id, which I suppose isn't\ntoo bad since it's publicly exposed as a HTML anchor (or URL \ncomponent?) and unlikely to change, this also means that the \ncontrib documentation can't use <section> instead of <sect1>.\n\nSometimes I think I only know enough XSLT to get into trouble.\nWhile v8 is \"right\", I can't say if it is a good idea/good practice.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Fri, 20 Jan 2023 14:22:25 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Fri, 20 Jan 2023 14:22:25 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> v8-0001-List-trusted-and-obsolete-extensions.patch\n> \n> Instead of putting [trusted] and [obsolete] in the titles\n> of the modules, like v7 does, add a list of them into the text.\n\nThe list is inline. It might be worthwhile experimenting\nwith a tabular list, like that produced by:\n\n <simplelist type=\"vert\" columns=\"4\">\n\nBut only for the list of trusted extensions. There's not\nenough obsolete extensions to do anything but inline. (IMO)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Fri, 20 Jan 2023 14:54:38 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Attached are 2 v9 patch versions. I don't think I like them.\nI think the v8 versions are better. But I thought it\nwouldn't hurt to show them to you.\n\nOn Fri, 20 Jan 2023 14:22:25 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Attached are 2 alternatives:\n> (They touch separate files so the ordering is meaningless.)\n> \n> \n> v8-0001-List-trusted-and-obsolete-extensions.patch\n> \n> Instead of putting [trusted] and [obsolete] in the titles\n> of the modules, like v7 does, add a list of them into the text.\n\nv9 puts the list in vertical format, 5 columns.\n\nBut the column spacing in HTML is ugly, and I don't\nsee a parameter to set to change it. I suppose we could\ndo more work on the stylesheets, but this seems excessive.\n\nIt looks good in PDF, but the page break in the middle\nof the paragraph is ugly. (US-Letter) Again (without forcing a hard\npage break by frobbing the stylesheet and adding a processing\ninstruction), I don't see a a good way to fix the page break.\n\n(sagehill.net says that soft page breaks don't work. I didn't\ntry it.)\n\n> v8-0002-Page-break-before-sect1-in-contrib-appendix-when-pdf.patch\n> \n> This frobs the PDF style sheet so that when sect1 is used\n> in the appendix for the contrib directory, there is a page\n> break before every sect1. This puts each module/extension\n> onto a separate page, but only for the contrib appendix.\n> \n> Aside from hardcoding the \"contrib\" id, which I suppose isn't\n> too bad since it's publicly exposed as a HTML anchor (or URL \n> component?) and unlikely to change, this also means that the \n> contrib documentation can't use <section> instead of <sect1>.\n\nv9 supports using <section> instead of just <sect1>. But\nI don't know that it's worth it -- the appendix is committed\nto sect* entities. Once you start with sect* the stylesheet\ndoes not allow \"section\" use to be interspersed. 
All the \nsect*s would have to be changed to \"section\" throughout \nthe appendix and I don't see that happening.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Sat, 21 Jan 2023 08:11:43 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Sat, 21 Jan 2023 08:11:43 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Attached are 2 v9 patch versions. I don't think I like them.\n> I think the v8 versions are better. But I thought it\n> wouldn't hurt to show them to you.\n> \n> On Fri, 20 Jan 2023 14:22:25 -0600\n> \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> \n> > Attached are 2 alternatives:\n> > (They touch separate files so the ordering is meaningless.)\n> > \n> > \n> > v8-0001-List-trusted-and-obsolete-extensions.patch\n> > \n> > Instead of putting [trusted] and [obsolete] in the titles\n> > of the modules, like v7 does, add a list of them into the text. \n> \n> v9 puts the list in vertical format, 5 columns.\n> \n> But the column spacing in HTML is ugly, and I don't\n> see a parameter to set to change it. I suppose we could\n> do more work on the stylesheets, but this seems excessive.\n\nCome to think of it, this should be fixed by using CSS\nwith a\n\n table.simplelist\n\nselector. Or something along those lines. But I don't\nhave a serious interest in proceeding further. A inline\nlist seems good enough, even if it does not stand out\nin a visual scan of the page. There is a certain amount\nof visual-standout due to all the hyperlinks next to each\nother in the inline presentation.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 22 Jan 2023 08:09:03 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Sun, 22 Jan 2023 08:09:03 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> On Sat, 21 Jan 2023 08:11:43 -0600\n> \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> \n> > Attached are 2 v9 patch versions. I don't think I like them.\n> > I think the v8 versions are better. But I thought it\n> > wouldn't hurt to show them to you.\n> > \n> > On Fri, 20 Jan 2023 14:22:25 -0600\n> > \"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n> > \n> > > Attached are 2 alternatives:\n> > > (They touch separate files so the ordering is meaningless.)\n> > > \n> > > \n> > > v8-0001-List-trusted-and-obsolete-extensions.patch\n> > > \n> > > Instead of putting [trusted] and [obsolete] in the titles\n> > > of the modules, like v7 does, add a list of them into the text.\n> > > \n> > \n> > v9 puts the list in vertical format, 5 columns.\n> > \n> > But the column spacing in HTML is ugly, and I don't\n> > see a parameter to set to change it. I suppose we could\n> > do more work on the stylesheets, but this seems excessive. \n> \n> Come to think of it, this should be fixed by using CSS\n> with a\n> \n> table.simplelist\n\nActually, this CSS, added to doc/src/sgml/stylesheet.css,\nmakes the column spacing look pretty good:\n\n/* Adequate spacing between columns in a simplelist non-inline table */\n.simplelist td { padding-left: 2em; padding-right: 2em; }\n\n(No point in specifying table, since td only shows up in tables.)\n\nNote that the default simplelist type value is \"vert\", causing a 1\ncolumn vertical display. There are a number of these in the\ndocumenation. I kind of like what the above css does to these\nlayouts. 
An example would be the layout in\ndoc/src/sgml/html/datatype-boolean.html, which is the \"Data Types\"\nsection \"Boolean Type\" sub-section.\n\nFor other places affected see: grep -l doc/src/sgml/*.sgml simplelist\n\n\nAttached are 2 patches:\n\nv10-0001-List-trusted-and-obsolete-extensions.patch\n\nList trusted extenions in 4 columns, with the CSS altered\nto put spacing between vertical columns. I changed this\nfrom the 5 columns of v9 because with 5 columns there\nwas a little bit of overflow into the right hand margin\nof a US-letter PDF. The PDF still has an ugly page\nbreak right before the table. To avoid that use the v8\nversion, which presents the list inline.\n\nv10-0002-Page-break-before-sect1-in-contrib-appendix-when-pdf.patch\n\nThis is exactly like the v8 version. See my comments earlier\nabout v8 v.s. v9.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Sun, 22 Jan 2023 14:42:46 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Sun, 22 Jan 2023 14:42:46 -0600\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> Attached are 2 patches:\n> \n> v10-0001-List-trusted-and-obsolete-extensions.patch\n> \n> List trusted extenions in 4 columns, with the CSS altered\n> to put spacing between vertical columns.\n\nIn theory, a number of other simplelist presentations\ncould benefit from this. For example, in the Data Types\nBoolean Type section the true truth values are\npresently listed vertically, like so:\n\ntrue\nyes\non\n1\n\nInstead they could still be listed 'type=\"vert\"' (the default),\nbut with 'columns=\"4\"', to produce something like:\n\n true yes on 1\n\nThis stands out just as much, but takes less space\non the page.\n\nLikewise, perhaps some tables are tables instead of\nsimplelists just because putting simplelists into\ncolumns was so ugly.\n\nI'll leave such modifications to others, at least for\nnow.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sun, 22 Jan 2023 15:29:41 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Jan-22, Karl O. Pinc wrote:\n\n> Actually, this CSS, added to doc/src/sgml/stylesheet.css,\n> makes the column spacing look pretty good:\n> \n> /* Adequate spacing between columns in a simplelist non-inline table */\n> .simplelist td { padding-left: 2em; padding-right: 2em; }\n\nOkay, this looks good to me too. However, for it to actually work, we\nneed to patch the corresponding CSS file in the pgweb repository too.\nI'll follow up in the other mailing list.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 9 Mar 2023 10:22:49 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Hello pgsql-web,\n\nWe're looking to improve the contrib docs with a list of trusted\nextensions. In order for that look more presentable, Karl has come up\nwith the idea of using a <simplelist> table with multiple columns. That\nwould normally look quite terrible because the cell contents are too\nclose to one another, so he came up with the idea of increasing the\npadding, as shown in this patch.\n\nI think this is good thing, as it can help us use tabular <simplelist>\nin other places too.\n\nThis change requires to change main Postgres doc/src/sgml/stylesheet.css \nas Karl suggested here:\n\nOn 2023-Jan-22, Karl O. Pinc wrote:\n\n> Actually, this CSS, added to doc/src/sgml/stylesheet.css,\n> makes the column spacing look pretty good:\n> \n> /* Adequate spacing between columns in a simplelist non-inline table */\n> table.simplelist td { padding-left: 2em; padding-right: 2em; }\n> \n> (No point in specifying table, since td only shows up in tables.)\n> \n> Note that the default simplelist type value is \"vert\", causing a 1\n> column vertical display. There are a number of these in the\n> documenation. I kind of like what the above css does to these\n> layouts. An example would be the layout in\n> doc/src/sgml/html/datatype-boolean.html, which is the \"Data Types\"\n> section \"Boolean Type\" sub-section.\n\n... but in addition it needs the pgweb CSS to be updated to match, as in\nthe attached patch.\n\nWhat do you think?\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"The eagle never lost so much time, as\nwhen he submitted to learn of the crow.\" (William Blake)",
"msg_date": "Thu, 9 Mar 2023 10:27:28 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
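[Editor's illustration — not part of the archived thread.] The <simplelist> rendering discussed above turns a flat list of members into a multi-column table whose cells the CSS padding then spaces apart. As a rough sketch of the layout itself (illustrative Python, not the actual DocBook XSLT; the function name is made up), here is how a vertical simplelist distributes members into columns filled top to bottom:

```python
import math

def simplelist_vert(members, ncols):
    """Arrange members into ncols columns, filling each column
    top-to-bottom -- the layout used for a vertical simplelist
    rendered as a table. Short columns are padded with empty cells."""
    nrows = math.ceil(len(members) / ncols)
    cols = [members[i * nrows:(i + 1) * nrows] for i in range(ncols)]
    # Transpose the columns into table rows.
    return [[col[r] if r < len(col) else "" for col in cols]
            for r in range(nrows)]
```

With five members and two columns this yields three rows, e.g. `[['a', 'd'], ['b', 'e'], ['c', '']]`; the per-cell padding proposed above is what keeps such adjacent columns readable.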
{
"msg_contents": "> On 9 Mar 2023, at 10:27, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> ... but in addition it needs the pgweb CSS to be updated to match, as in\n> the attached patch.\n> \n> What do you think?\n\nLGTM.\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Thu, 9 Mar 2023 10:35:18 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 3/9/23 4:35 AM, Daniel Gustafsson wrote:\r\n>> On 9 Mar 2023, at 10:27, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\r\n> \r\n>> ... but in addition it needs the pgweb CSS to be updated to match, as in\r\n>> the attached patch.\r\n>>\r\n>> What do you think?\r\n> \r\n> LGTM.\r\n\r\nI'm OK with the change, I'm not OK with the comment around it because \r\n\"Simplelist\" doesn't really give meaning to it AFAICT. Maybe:\r\n\r\n/** Additional formatting for \"simplelist\" structures */\r\n#docContent table.simplelist td {\r\n\tpadding-left: 2em;\r\n\tpadding-right: 2em;\r\n}\r\n\r\nJonathan",
"msg_date": "Thu, 9 Mar 2023 09:34:05 -0800",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Mar-09, Jonathan S. Katz wrote:\n\n> I'm OK with the change, I'm not OK with the comment around it because\n> \"Simplelist\" doesn't really give meaning to it AFAICT. Maybe:\n> \n> /** Additional formatting for \"simplelist\" structures */\n> #docContent table.simplelist td {\n> \tpadding-left: 2em;\n> \tpadding-right: 2em;\n> }\n\nUh, absolutely. Here's a complete patch (in case you wanted that),\nthanks.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\"Las cosas son buenas o malas segun las hace nuestra opinión\" (Lisias)",
"msg_date": "Thu, 9 Mar 2023 19:36:47 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Thu, 9 Mar 2023 10:22:49 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Jan-22, Karl O. Pinc wrote:\n> \n> > Actually, this CSS, added to doc/src/sgml/stylesheet.css,\n> > makes the column spacing look pretty good:\n> Okay, this looks good to me too. However, for it to actually work, we\n> need to patch the corresponding CSS file in the pgweb repository too.\n> I'll follow up in the other mailing list.\n\nDo you also like the page breaking in the PDF for each\ncontributed package, per the\nv10-0002-Page-break-before-sect1-in-contrib-appendix-when-pdf.patch\nof\nhttps://www.postgresql.org/message-id/20230122144246.0ff87372%40slate.karlpinc.com\n?\n\nNo need to reply if I don't need to do anything. (I didn't\nwant the patch to get lost.)\n\nThanks for the review.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Sat, 11 Mar 2023 16:31:28 -0600",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 3/9/23 1:36 PM, Alvaro Herrera wrote:\r\n> On 2023-Mar-09, Jonathan S. Katz wrote:\r\n> \r\n>> I'm OK with the change, I'm not OK with the comment around it because\r\n>> \"Simplelist\" doesn't really give meaning to it AFAICT. Maybe:\r\n>>\r\n>> /** Additional formatting for \"simplelist\" structures */\r\n>> #docContent table.simplelist td {\r\n>> \tpadding-left: 2em;\r\n>> \tpadding-right: 2em;\r\n>> }\r\n> \r\n> Uh, absolutely. Here's a complete patch (in case you wanted that),\r\n> thanks.\r\n\r\nLGTM -- pushed. If it's not reflected within a few hours please let me \r\nknow and I'll force clear the caches.\r\n\r\nThanks,\r\n\r\nJonathan",
"msg_date": "Mon, 13 Mar 2023 09:26:45 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Mon, 13 Mar 2023 at 09:28, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n>\n> > Uh, absolutely. Here's a complete patch (in case you wanted that),\n> > thanks.\n>\n> LGTM -- pushed. If it's not reflected within a few hours please let me\n> know and I'll force clear the caches.\n\nI think this means the patch is committed? I'll update the commitfest\nentry as committed but if there's more to be done feel free to fix.\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:49:23 -0400",
"msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles,\n tweaked sentences"
},
{
"msg_contents": "On Tue, 14 Mar 2023 at 13:49, Gregory Stark (as CFM)\n<stark.cfm@gmail.com> wrote:\n>\n> On Mon, 13 Mar 2023 at 09:28, Jonathan S. Katz <jkatz@postgresql.org> wrote:\n> >\n> > > Uh, absolutely. Here's a complete patch (in case you wanted that),\n> > > thanks.\n> >\n> > LGTM -- pushed. If it's not reflected within a few hours please let me\n> > know and I'll force clear the caches.\n>\n> I think this means the patch is committed? I'll update the commitfest\n> entry as committed but if there's more to be done feel free to fix.\n\nHum. Jonathon Katz isn't listed as a committer -- is this a web site\nchange or something else? Or do we need to add Jonathon?\n\n-- \ngreg\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:51:36 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles,\n tweaked sentences"
},
{
"msg_contents": "Sorry, having read the whole thread I think it's clear. The source\ntree patch was committed by Alvaro H in\na7e584a7d68a9a2bcc7efaf442262771f9044248 and then Katz pushed the\npgweb change. So I gather this is resolved now and I've marked it\ncommitted by Alvaro.\n\n\n",
"msg_date": "Tue, 14 Mar 2023 13:57:57 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles,\n tweaked sentences"
},
{
"msg_contents": "On Tue, 14 Mar 2023 13:57:57 -0400\nGreg Stark <stark@mit.edu> wrote:\n\n> Sorry, having read the whole thread I think it's clear. The source\n> tree patch was committed by Alvaro H in\n> a7e584a7d68a9a2bcc7efaf442262771f9044248 and then Katz pushed the\n> pgweb change. So I gather this is resolved now and I've marked it\n> committed by Alvaro.\n\nThere remains an un-committed patch from this thread/commitfest\nentry:\nv10-0002-Page-break-before-sect1-in-contrib-appendix-when-pdf.patch\n\nWhen generating the PDF docs it starts each contrib entry on\na separate page.\n\nFrom:\nhttps://www.postgresql.org/message-id/20230122144246.0ff87372%40slate.karlpinc.com\n\nI've re-attached the patch. Nobody has commented directly\non this particular patch, although there was a \"looks ok\" reply \nto the email.\n\nI don't know what the policy is now that the commitfest entry\nis closed. Perhaps Alvaro was planning on committing it?\n\nPlease let me know if I should open up a new\ncommitfest entry or if there is something else I need to do.\n\nThanks for the help.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein",
"msg_date": "Tue, 14 Mar 2023 16:29:19 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Mar-14, Karl O. Pinc wrote:\n\n> On Tue, 14 Mar 2023 13:57:57 -0400\n> Greg Stark <stark@mit.edu> wrote:\n> \n> > Sorry, having read the whole thread I think it's clear. The source\n> > tree patch was committed by Alvaro H in\n> > a7e584a7d68a9a2bcc7efaf442262771f9044248 and then Katz pushed the\n> > pgweb change. So I gather this is resolved now and I've marked it\n> > committed by Alvaro.\n\nActually, the 0001 patch hadn't been fully committed yet ... I had only\nadded the CSS tweaks. I have now pushed the addition of the tables to\nthe SGML sources, with minor tag changes: I found that with the\n<simplelist> outside of any <para>, the list was too close to the next\nparagraph, which looked a bit ugly. I put the list inside the <para>\nthat explains what the list is. It looks good with PDF, website-HTML\nand plain-HTML rendering now; didn't look at other output formats.\nSo, the CF entry being marked committed is now correct as far as I'm\nconcerned.\n\n> There remains an un-committed patch from this thread/commitfest\n> entry:\n\n> diff --git a/doc/src/sgml/stylesheet-fo.xsl b/doc/src/sgml/stylesheet-fo.xsl\n> index 0c4dff92c4..68a46f9e24 100644\n> --- a/doc/src/sgml/stylesheet-fo.xsl\n> +++ b/doc/src/sgml/stylesheet-fo.xsl\n\n> +<!-- Every sect1 in the contrib appendix gets a page break -->\n> +<xsl:template match=\"id('contrib')/sect1\">\n> + <fo:block break-after='page'/>\n> + <xsl:apply-imports/>\n> +</xsl:template>\n\nYeah, I think this one achieves what I wanted and isn't a maintenance\nburden, but I would like to hear from other CSS people. I guess I could\njust commit it and see what complaints I get (probably none).\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n“Cuando no hay humildad las personas se degradan” (A. Christie)\n\n\n",
"msg_date": "Wed, 15 Mar 2023 09:41:27 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 15 Mar 2023 09:41:27 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> Actually, the 0001 patch hadn't been fully committed yet ... I had\n> only added the CSS tweaks. I have now pushed the addition of the\n> tables to the SGML sources, with minor tag changes: I found that with\n> the <simplelist> outside of any <para>, the list was too close to the\n> next paragraph, which looked a bit ugly. I put the list inside the\n> <para> that explains what the list is. It looks good with PDF,\n> website-HTML and plain-HTML rendering now; didn't look at other\n> output formats. So, the CF entry being marked committed is now\n> correct as far as I'm concerned.\n\nThanks for noticing that. (I'd always vaguely wondered about\nlists being inside v.s. outside of paragraphs. There must be other\nplaces in the docs where this matters. ?)\n\n> > There remains an un-committed patch from this thread/commitfest\n> > entry: \n> \n> > diff --git a/doc/src/sgml/stylesheet-fo.xsl\n> > b/doc/src/sgml/stylesheet-fo.xsl index 0c4dff92c4..68a46f9e24 100644\n> > --- a/doc/src/sgml/stylesheet-fo.xsl\n> > +++ b/doc/src/sgml/stylesheet-fo.xsl \n> \n> > +<!-- Every sect1 in the contrib appendix gets a page break -->\n> > +<xsl:template match=\"id('contrib')/sect1\">\n> > + <fo:block break-after='page'/>\n> > + <xsl:apply-imports/>\n> > +</xsl:template> \n> \n> Yeah, I think this one achieves what I wanted and isn't a maintenance\n> burden, but I would like to hear from other CSS people. I guess I\n> could just commit it and see what complaints I get (probably none).\n\nFWIW, this patch is not to CSS. It's XSLT and affects only the PDF\ngeneration.\n\n(The patch is a response to your \"aside\" and remarks regarding a\nseparate thread, here:\nhttps://www.postgresql.org/message-id/20230118173447.aegjdk3girgkqu2g%40alvherre.pgsql\n)\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 15 Mar 2023 08:55:21 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Mar-15, Karl O. Pinc wrote:\n\n> On Wed, 15 Mar 2023 09:41:27 +0100\n> Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Yeah, I think this one achieves what I wanted and isn't a maintenance\n> > burden, but I would like to hear from other CSS people. I guess I\n> > could just commit it and see what complaints I get (probably none).\n> \n> FWIW, this patch is not to CSS. It's XSLT and affects only the PDF\n> generation.\n\nYeah, I misspoke. I was aware it's XSLT, a territory I'm wholly\nunfamiliar with. I have pushed it now nonetheless. Thank you!\n\n> (The patch is a response to your \"aside\" and remarks regarding a\n> separate thread, here:\n> https://www.postgresql.org/message-id/20230118173447.aegjdk3girgkqu2g%40alvherre.pgsql\n> )\n\nRight :-)\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"El destino baraja y nosotros jugamos\" (A. Schopenhauer)\n\n\n",
"msg_date": "Mon, 20 Mar 2023 14:16:45 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Mon, 20 Mar 2023 14:16:45 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Mar-15, Karl O. Pinc wrote:\n\n> Yeah, I misspoke. I was aware it's XSLT, a territory I'm wholly\n> unfamiliar with. I have pushed it now nonetheless. Thank you!\n\nThank you for all your help with this.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Mon, 20 Mar 2023 12:32:19 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Hi,\n\nThere seems to be a problem with the html generated\nfor the public-facing Postgresql docs.\n\nI'm looking at the contrib page in the devel docs,\non the pg website:\n\nhttps://www.postgresql.org/docs/devel/contrib.html\n\nThe simplelist holding the list of trusted extensions,\nin doc/src/sgml/contrib.sgml, is inside the paragraph.\n\nBut when I look at the delivered html, I see the table\noutside of the paragraph. And the vertical spacing\nlooks poor as a result.\n\nAlvaro moved the simplelist into the paragraph to\nfix just this problem.\n\nBuilding HEAD (of master) on my computer (Debian 11.6)\nwhat I see is the generation of two paragraphs,\none with the leading text and a second that contains\nthe simplelist table. (Er, why 2 paragraphs instead\nof just one paragraph with both text and table in\nit I can't say.)\n\nThe html built on my computer has vertical spacing\nthat looks good.\n\nCould it be that the build system has out-of-date\ndocbook xslt? In any case, it looks like what's\nproduced for the world to see is different from what\nAlvaro and I are seeing.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\nP.S. I don't know what html Alvero is generating,\nbut his looks good and what's on postgresql.org does\nnot look good. So I assume that he's getting something\ndifferent from what the public sees now.\n\n\n",
"msg_date": "Mon, 20 Mar 2023 15:20:38 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Hi,\n\nI rebuilt the HEAD (master) html with:\n\n make STYLE=website html\n\nand what I see locally is still different from\nwhat is on postgresql.org. \n\nSo the build system does indeed seem to be generating\n\"different html\" that looks un-good compared to what\nAlvaro and I are seeing when we build locally.\n\n\nOn Mon, 20 Mar 2023 15:20:38 -0500\n\"Karl O. Pinc\" <kop@karlpinc.com> wrote:\n\n> There seems to be a problem with the html generated\n> for the public-facing Postgresql docs.\n> \n> I'm looking at the contrib page in the devel docs,\n> on the pg website:\n> \n> https://www.postgresql.org/docs/devel/contrib.html\n> \n> The simplelist holding the list of trusted extensions,\n> in doc/src/sgml/contrib.sgml, is inside the paragraph.\n> \n> But when I look at the delivered html, I see the table\n> outside of the paragraph. And the vertical spacing\n> looks poor as a result.\n> \n> Alvaro moved the simplelist into the paragraph to\n> fix just this problem.\n> \n> Building HEAD (of master) on my computer (Debian 11.6)\n> what I see is the generation of two paragraphs,\n> one with the leading text and a second that contains\n> the simplelist table. (Er, why 2 paragraphs instead\n> of just one paragraph with both text and table in\n> it I can't say.)\n> \n> The html built on my computer has vertical spacing\n> that looks good.\n> \n> Could it be that the build system has out-of-date\n> docbook xslt? In any case, it looks like what's\n> produced for the world to see is different from what\n> Alvaro and I are seeing.\n> \n> Regards,\n> \n> Karl <kop@karlpinc.com>\n> Free Software: \"You don't pay back, you pay forward.\"\n> -- Robert A. Heinlein\n> \n> P.S. I don't know what html Alvero is generating,\n> but his looks good and what's on postgresql.org does\n> not look good. 
So I assume that he's getting something\n> different from what the public sees now.\n\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 22 Mar 2023 22:23:38 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Mar-22, Karl O. Pinc wrote:\n\n> Hi,\n> \n> I rebuilt the HEAD (master) html with:\n> \n> make STYLE=website html\n> \n> and what I see locally is still different from\n> what is on postgresql.org. \n> \n> So the build system does indeed seem to be generating\n> \"different html\" that looks un-good compared to what\n> Alvaro and I are seeing when we build locally.\n\nHah, you're right -- the website is missing the closing </p>. Weird.\nIt is definitely possible that the website is using outdated XSLT\nstylesheets. For example, at the top of the page in my local build I\nsee this:\n\n<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"><html xmlns=\"http://www.w3.org/1999/xhtml\"><head>\n\nwhereas the website only says\n\n<!doctype html>\n<html lang=\"en\">\n <head>\n\nI don't to waste time investigating that, though. \n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Thu, 23 Mar 2023 10:45:51 +0100",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "Is this the pgsql-www list the right place to report\nthis so it does not get forgotten? (If so, no need to reply.)\n\nOn Thu, 23 Mar 2023 10:45:51 +0100\nAlvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> On 2023-Mar-22, Karl O. Pinc wrote:\n\n> > I rebuilt the HEAD (master) html with:\n> > \n> > make STYLE=website html\n> > \n> > and what I see locally is still different from\n> > what is on postgresql.org. \n> > \n> > So the build system does indeed seem to be generating\n> > \"different html\" that looks un-good compared to what\n> > Alvaro and I are seeing when we build locally. \n> \n> Hah, you're right -- the website is missing the closing </p>. Weird.\n> It is definitely possible that the website is using outdated XSLT\n> stylesheets.\n<snip>\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 29 Mar 2023 12:07:38 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 3/23/23 5:45 AM, Alvaro Herrera wrote:\r\n> On 2023-Mar-22, Karl O. Pinc wrote:\r\n> \r\n>> Hi,\r\n>>\r\n>> I rebuilt the HEAD (master) html with:\r\n>>\r\n>> make STYLE=website html\r\n>>\r\n>> and what I see locally is still different from\r\n>> what is on postgresql.org.\r\n>>\r\n>> So the build system does indeed seem to be generating\r\n>> \"different html\" that looks un-good compared to what\r\n>> Alvaro and I are seeing when we build locally.\r\n> \r\n> Hah, you're right -- the website is missing the closing </p>. Weird.\r\n> It is definitely possible that the website is using outdated XSLT\r\n> stylesheets. For example, at the top of the page in my local build I\r\n> see this:\r\n> \r\n> <?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\r\n> <!DOCTYPE html PUBLIC \"-//W3C//DTD XHTML 1.0 Transitional//EN\" \"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd\"><html xmlns=\"http://www.w3.org/1999/xhtml\"><head>\r\n> \r\n> whereas the website only says\r\n> \r\n> <!doctype html>\r\n> <html lang=\"en\">\r\n> <head>\r\n\r\nThe above doctype correct for the web. That's the HTML5 doctype tag.\r\n\r\nI haven't gone through the the doc loading process in awhile, but what \r\nhappens is that the docs build with the HTML generated (make html) and \r\nthen are processed and \"uploaded\" to the website through this code[1]. \r\nIt's possible that somewhere in the \"HTML tidy\" process something may be \r\nremoved.\r\n\r\nLooking at the current state of the contrib page, I'm not sure what the \r\nrendering is that you expect. I can see us adding more margin to the \r\nbottom of the table as that looks close together, but I'm not sure I \r\nunderstand what other issues there are?\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://git.postgresql.org/gitweb/?p=pgweb.git;a=blob;f=tools/docs/docload.py;hb=HEAD",
"msg_date": "Wed, 29 Mar 2023 13:17:33 -0400",
"msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 29 Mar 2023 13:17:33 -0400\n\"Jonathan S. Katz\" <jkatz@postgresql.org> wrote:\n\n> The above doctype correct for the web. That's the HTML5 doctype tag.\n\nThat might be the difference, since (IIRC) the locally built site\nis xhtml. Which does not seem right -- I'd expect them to be\nthe same. Otherwise doc patch developers can't tell what\nthey are producing.\n\n> Looking at the current state of the contrib page, I'm not sure what\n> the rendering is that you expect. I can see us adding more margin to\n> the bottom of the table as that looks close together, but I'm not\n> sure I understand what other issues there are?\n\nThat's pretty much the issue. The visual presentation differs\nbetween the public pages and my locally generated pages.\nUnderlying this is a difference in the generated html,\nthe \"real\" issue IMO.\n\nBut first, a summary:\n\nAt the bottom of\nhttps://www.postgresql.org/docs/devel/contrib.html\nthere is the text: \"The following extensions are trusted\nin a default installation:\" After this there is a table.\n\nOn the public site the bottom of the table is \"too close\" to\nthe top of the next paragraph. Not so when locally building\nthe html.\n\nThe difference, when I look using browser-based web development\ntools, is that the locally generated table is in a paragraph\nbut on the public page the table is not.\n\n\nThis accounts for the difference in presentation, which was\na deliberate choice in the docbook source sgml. The simplelist\nelement was moved inside the para element to produce a \"standard\"\nvertical spacing between it and the next paragraph.\n\nI suppose that even though the docbook DTD allows a simplelist\nin a para there's no guarantee that whether one does so matters?\nAnd/or maybe there's no guarantee that the presentation is the\nsame when generating xhtml v.s. html5? (This would seem wrong\nto me, but I suppose there could be reasons.) 
Or there could be\na bug in one or the other set of html-producing style sheets.\n\nI see that in html5 the table element is allowed only in flow\ncontext, not phrasing context. So maybe tables can't be put\ninto paragraphs? I'd still think that the html5 stylesheets could\nwrap tables that are \"supposed to be\" in paragraphs in div\nelements with a class that allows them to be styled like\nhtml5 p tags. (Perhaps this is the \"fix\"?? Or just throw\nan empty paragraph after such tables to sidestep CSS?)\n\nSo there may be a problem somewhere in the html generation.\n\nIn any case, isn't it a problem that html5 is produced for\npublic consumption and xhtml produced when independently generating\nthe docs?\n\nSorry to run-on. Thinking out loud.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Wed, 29 Mar 2023 13:23:33 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
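[Editor's illustration — not part of the archived thread.] Karl's observation above — that in HTML5 a table is flow content and so cannot sit inside a paragraph — is exactly the rule an HTML5 tree builder applies: a `<table>` start tag encountered while a `<p>` is open implicitly closes the paragraph, which is why the table ends up as a sibling of the paragraph rather than inside it. A minimal sketch of that single rule (deliberately naive tag scanning; not a real HTML parser):

```python
import re

def html5_autoclose_p(markup):
    """Insert the implicit </p> that an HTML5 tree builder adds when a
    <table> start tag appears while a <p> element is still open.
    Models only this one rule; a real parser does far more."""
    out, p_open = [], False
    for tok in re.split(r'(<[^>]+>)', markup):
        if tok == '<p>':
            p_open = True
        elif tok == '</p>':
            p_open = False
        elif tok == '<table>' and p_open:
            out.append('</p>')   # a paragraph cannot contain a table
            p_open = False
        out.append(tok)
    return ''.join(out)
```

Feeding it XHTML-style markup with a table inside a paragraph shows the paragraph being closed early — consistent with the "missing" `</p>` Karl and Alvaro observed in the website's HTML5 output.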
{
"msg_contents": "On Sun, Jan 22, 2023 at 02:42:46PM -0600, Karl O. Pinc wrote:\n> v10-0001-List-trusted-and-obsolete-extensions.patch\n\n> + <para id=\"contrib-obsolete\">\n> + These modules and extensions are obsolete:\n> +\n> + <simplelist type=\"inline\">\n> + <member><xref linkend=\"intagg\"/></member>\n> + <member><xref linkend=\"xml2\"/></member>\n> + </simplelist>\n> + </para>\n\nCommit a013738 incorporated this change. Since xml2 is the only in-tree way\nto use XSLT from SQL, I think xml2 is not obsolete. Some individual\nfunctions, e.g. xml_valid(), are obsolete. (There are years-old threats to\nrender the module obsolete, but this has never happened.)\n\n\n",
"msg_date": "Wed, 29 Mar 2023 21:32:05 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Wed, 29 Mar 2023 21:32:05 -0700\nNoah Misch <noah@leadboat.com> wrote:\n\n> On Sun, Jan 22, 2023 at 02:42:46PM -0600, Karl O. Pinc wrote:\n> > v10-0001-List-trusted-and-obsolete-extensions.patch \n> \n> > + <para id=\"contrib-obsolete\">\n> > + These modules and extensions are obsolete:\n> > +\n> > + <simplelist type=\"inline\">\n> > + <member><xref linkend=\"intagg\"/></member>\n> > + <member><xref linkend=\"xml2\"/></member>\n> > + </simplelist>\n> > + </para> \n> \n> Commit a013738 incorporated this change. Since xml2 is the only\n> in-tree way to use XSLT from SQL, I think xml2 is not obsolete. Some\n> individual functions, e.g. xml_valid(), are obsolete. (There are\n> years-old threats to render the module obsolete, but this has never\n> happened.)\n\nYour point seems valid but this is above my station.\nI have no idea as to how to best resolve this, or even how to make the\nresolution happen now that the change has been committed.\nSomeone who knows more than me about the situation is needed\nto change the phrasing, or re-categorize, or rework the xml2\nmodule docs, or come up with new categories of obsolescence-like \nstates, or provide access to libxslt from PG, or something.\n\nI am invested in the patch and appreciate being cc-ed.\n\nRegards,\n\nKarl <kop@karlpinc.com>\nFree Software: \"You don't pay back, you pay forward.\"\n -- Robert A. Heinlein\n\n\n",
"msg_date": "Thu, 30 Mar 2023 01:27:05 -0500",
"msg_from": "\"Karl O. Pinc\" <kop@karlpinc.com>",
"msg_from_op": true,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On Thu, Mar 30, 2023 at 01:27:05AM -0500, Karl O. Pinc wrote:\n> On Wed, 29 Mar 2023 21:32:05 -0700\n> Noah Misch <noah@leadboat.com> wrote:\n> \n> > On Sun, Jan 22, 2023 at 02:42:46PM -0600, Karl O. Pinc wrote:\n> > > v10-0001-List-trusted-and-obsolete-extensions.patch \n> > \n> > > + <para id=\"contrib-obsolete\">\n> > > + These modules and extensions are obsolete:\n> > > +\n> > > + <simplelist type=\"inline\">\n> > > + <member><xref linkend=\"intagg\"/></member>\n> > > + <member><xref linkend=\"xml2\"/></member>\n> > > + </simplelist>\n> > > + </para> \n> > \n> > Commit a013738 incorporated this change. Since xml2 is the only\n> > in-tree way to use XSLT from SQL, I think xml2 is not obsolete. Some\n> > individual functions, e.g. xml_valid(), are obsolete. (There are\n> > years-old threats to render the module obsolete, but this has never\n> > happened.)\n> \n> Your point seems valid but this is above my station.\n> I have no idea as to how to best resolve this, or even how to make the\n> resolution happen now that the change has been committed.\n\nI'm inclined to just remove <para id=\"contrib-obsolete\">. While intagg is\nindeed obsolete, having a one-entry list seems like undue weight.\n\n\n",
"msg_date": "Sun, 9 Apr 2023 11:50:50 -0700",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
},
{
"msg_contents": "On 2023-Apr-09, Noah Misch wrote:\n\n> On Thu, Mar 30, 2023 at 01:27:05AM -0500, Karl O. Pinc wrote:\n\n> > Your point seems valid but this is above my station.\n> > I have no idea as to how to best resolve this, or even how to make the\n> > resolution happen now that the change has been committed.\n> \n> I'm inclined to just remove <para id=\"contrib-obsolete\">. While intagg is\n> indeed obsolete, having a one-entry list seems like undue weight.\n\nI agree, let's just remove that. The list of trusted modules is clearly\nuseful, but the list of obsoletes one isn't terribly interesting.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El miedo atento y previsor es la madre de la seguridad\" (E. Burke)\n\n\n",
"msg_date": "Sun, 9 Apr 2023 20:52:34 +0200",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": false,
"msg_subject": "Re: Doc: Rework contrib appendix -- informative titles, tweaked\n sentences"
}
]
[
{
"msg_contents": "Hi,\n\nHere's a draft patch to tackle a couple of TODOs in the RADIUS code in auth.c.\n\nThe first change is to replace select() with a standard latch loop\nthat responds to interrupts, postmaster death etc promptly. It's not\nreally too much of a big deal because the timeout was only 3 seconds\n(hardcoded), but it's not good to have places that ignore ProcSignal,\nand it's good to move code to our modern pattern for I/O multiplexing.\n\nWe know from experience that we have to crank timeouts up to be able\nto run tests reliably on slow/valgrind/etc systems, so the second\nchange is to change the timeout to a GUC, as also requested by a\ncomment. One good side-effect is that it becomes easy and fast to\ntest the timed-out code path too, with a small value. While adding\nthe GUC I couldn't help wondering why RADIUS even needs a timeout\nseparate from authentication_timeout; another way to go here would be\nto remove it completely, but that'd be a policy change (removing the 3\nsecond timeout we always had). Thoughts?\n\nThe patch looks bigger than it really is because it changes the\nindentation level.\n\nBut first, some basic tests to show that it works. We can test before\nand after the change and have a non-zero level of confidence about\nwhacking the code around. Like existing similar tests, you need to\ninstall an extra package (FreeRADIUS) and opt in with\nPG_EXTRA_TESTS=radius. I figured out how to do that for our 3 CI\nUnixen, so cfbot should run the tests and pass once I add this to the\nMarch commitfest. FreeRADIUS claims to work on Windows too, but I\ndon't know how to set that up; maybe someday someone will fix that for\nall the PG_EXTRA_TESTS tests. I've also seen this work on a Mac with\nMacPorts. There's only one pathname in there that's a wild guess:\nnon-Debianoid Linux systems; if you know the answer there please LMK.",
"msg_date": "Tue, 3 Jan 2023 16:11:46 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "RADIUS tests and improvements"
},
{
"msg_contents": "On 1/3/23 04:11, Thomas Munro wrote:\n> Here's a draft patch to tackle a couple of TODOs in the RADIUS code in auth.c.\n\nNice to see someone working on this! I know of one company which could \nhave used the configurable timeout for radius because the 3 second \ntimeout is too short for 2FA. I think they ended up using PAM or some \nother solution in the end, but I am not 100% sure.\n\n> [...] While adding\n> the GUC I couldn't help wondering why RADIUS even needs a timeout\n> separate from authentication_timeout; another way to go here would be\n> to remove it completely, but that'd be a policy change (removing the 3\n> second timeout we always had). Thoughts?\n\nIt was some time since I last looked at the code but my impression was \nthat the reason for having a separate timeout is that you can try the \nnext server after the first one timed out (multiple radius servers are \nallowed). But I wonder if that really is a useful feature or if someone \njust was too clever or it just was an accidental feature.\n\nAndreas\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:03:55 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "On 1/3/23 22:03, Andreas Karlsson wrote:\n> On 1/3/23 04:11, Thomas Munro wrote:\n>> Here's a draft patch to tackle a couple of TODOs in the RADIUS code in \n>> auth.c.\n> \n> Nice to see someone working on this!.\n\nAnother thing: shouldn't we set some wait event to indicate that we are \nwaiting the RADIUS server or is that pointless during authentication \nsince there are no queries running anyway?\n\nAndreas\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:07:44 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 10:07 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> Another thing: shouldn't we set some wait event to indicate that we are\n> waiting the RADIUS server or is that pointless during authentication\n> since there are no queries running anyway?\n\nI initially added a wait_event value, but I couldn't see it\nanywhere... there is no entry in pg_stat_activity for a backend that\nis in that phase of authentication, so I just set it to zero.\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:16:11 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 10:03 AM Andreas Karlsson <andreas@proxel.se> wrote:\n> On 1/3/23 04:11, Thomas Munro wrote:\n> > [...] While adding\n> > the GUC I couldn't help wondering why RADIUS even needs a timeout\n> > separate from authentication_timeout; another way to go here would be\n> > to remove it completely, but that'd be a policy change (removing the 3\n> > second timeout we always had). Thoughts?\n>\n> It was some time since I last looked at the code but my impression was\n> that the reason for having a separate timeout is that you can try the\n> next server after the first one timed out (multiple radius servers are\n> allowed). But I wonder if that really is a useful feature or if someone\n> just was too clever or it just was an accidental feature.\n\nAh! Thanks, now that makes sense.\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:18:20 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "On 1/3/23 22:16, Thomas Munro wrote:\n> On Wed, Jan 4, 2023 at 10:07 AM Andreas Karlsson <andreas@proxel.se> wrote:\n>> Another thing: shouldn't we set some wait event to indicate that we are\n>> waiting the RADIUS server or is that pointless during authentication\n>> since there are no queries running anyway?\n> \n> I initially added a wait_event value, but I couldn't see it\n> anywhere... there is no entry in pg_stat_activity for a backend that\n> is in that phase of authentication, so I just set it to zero.\n\nThanks for the explanation, that makes a lot of sense!\n\nAndreas\n\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:21:13 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "New improved version:\n\n* fixed stupid misuse of PG_FINALLY() (oops, must have been thinking\nof another language)\n* realised that it was strange to have a GUC for the timeout, and made\na new HBA parameter instead\n* added documentation for that\n* used TimestampDifferenceMilliseconds() instead of open-coded TimestampTz maths\n\nI don't exactly love the PG_TRY()/PG_CATCH() around the\nCHECK_FOR_INTERRUPTS(). In fact this kind of CFI-with-cleanup problem\nhas been haunting me across several projects. For cases that memory\ncontexts and resource owners can't help with, I don't currently know\nwhat else to do here. Better ideas welcome. If I just let that\nsocket leak because I know this backend will soon exit, I'd expect a\nknock at the door from the programming police.\n\nI don't actually know why we have\nsrc/test/authentication/t/...{password,sasl,peer}..., but then\nsrc/test/{kerberos,ldap,ssl}/t/001_auth.pl. For this one, I just\ncopied the second style, creating src/test/radius/t/001_auth.pl. I\ncan't explain why it should be like that, though. If I propose\nanother test for PAM, where should it go?",
"msg_date": "Sat, 4 Mar 2023 14:23:10 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "On Sat, Mar 04, 2023 at 02:23:10PM +1300, Thomas Munro wrote:\n> I don't exactly love the PG_TRY()/PG_CATCH() around the\n> CHECK_FOR_INTERRUPTS().\n\n> In fact this kind of CFI-with-cleanup problem\n> has been haunting me across several projects. For cases that memory\n> contexts and resource owners can't help with, I don't currently know\n> what else to do here. Better ideas welcome.\n\nLike adding a Open/CloseSocket() in fd.c to control the leaks?\n\n> If I just let that\n> socket leak because I know this backend will soon exit, I'd expect a\n> knock at the door from the programming police.\n\nHmm. It seems to me that you'd better have two patches instead of one\nhere? First, one to introduce the new parameter to control the\ntimeout, and a second to improve the responsiveness with a\nWaitLatch()? If the CFI proves to be an issue, it would be sad to\nhave to revert the configuration part, which is worth on its own.\n\n> I don't actually know why we have\n> src/test/authentication/t/...{password,sasl,peer}..., but then\n> src/test/{kerberos,ldap,ssl}/t/001_auth.pl. For this one, I just\n> copied the second style, creating src/test/radius/t/001_auth.pl. I\n> can't explain why it should be like that, though. If I propose\n> another test for PAM, where should it go?\n\nMy take would be to keep the number of directories in src/test/ to a\nminimum in the long run. Still, this is a case-by-case, as it depends\non if a set of tests needs an expanded set of modules, configuration\nfiles and/or multiple scripts. ssl has its own set of configuration\nfiles with its module, so it makes sense to be independent. ldap has\nits LdapServer.pm with a configuration file, again I'm OK with a\nseparate case. 
Kerberos has its own README, but IMO it could also be\nmoved to src/test/authentication/ as it has a simple structure, with\nits requirements moved into a different README.\n\nWhat this patch set does for the RADIUS test is simple enough in\nstructure that I would also add it in src/test/authentication/. That\nmeans less Make-fu and less Meson-fu.\n\nIn 0001, PG_TEST_EXTRA requires radius for the test. This needs an\nupdate of regress.sgml where the values available are listed. I think\nthat you'd better document that freeradius is required for the test in\none of the README (either create a new one in radius/, or add this\ninformation to the one in authentication, as you feel).\n--\nMichael",
"msg_date": "Mon, 20 Mar 2023 15:18:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: RADIUS tests and improvements"
},
{
"msg_contents": "Hi Thomas,\n\nHave you have a chance to look at and address the feedback given in this\nthread?\n\n--\nDaniel Gustafsson\n\n\n\n",
"msg_date": "Mon, 10 Jul 2023 11:08:35 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: RADIUS tests and improvements"
}
] |
[
{
"msg_contents": "Hi All,\n\nJust a reminder that Commitfest 2023-01 has started.\nThere are many patches based on the latest run from [1] which require\na) Rebased on top of head b) Fix compilation failures c) Fix test\nfailure, please have a look and rebase it so that it is easy for the\nreviewers and committers:\n1. TAP output format for pg_regress\n2. Add BufFileRead variants with short read and EOF detection\n3. Add SHELL_EXIT_CODE variable to psql\n4. Add foreign-server health checks infrastructure\n5. Add last_vacuum_index_scans in pg_stat_all_tables\n6. Add index scan progress to pg_stat_progress_vacuum\n7. Add the ability to limit the amount of memory that can be allocated\nto backends.\n8. Add tracking of backend memory allocated to pg_stat_activity\n9. CAST( ... ON DEFAULT)\n10. CF App: add \"Returned: Needs more interest\" close status\n11. CI and test improvements\n12. Cygwin cleanup\n13. Expand character set for ltree labels\n14. Fix tab completion MERGE\n15. Force streaming every change in logical decoding\n16. More scalable multixacts buffers and locking\n17. Move SLRU data into the regular buffer pool\n18. Move extraUpdatedCols out of RangeTblEntry\n19.New [relation] options engine\n20. Optimizing Node Files Support\n21. PGDOCS - Stats views and functions not in order?\n22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n23. Parallelize correlated subqueries that execute within each worker\n24. Pluggable toaster\n25. Prefetch the next tuple's memory during seqscans\n26. Pulling up direct-correlated ANY_SUBLINK\n27. Push aggregation down to base relations and joins\n28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n29. Refactor relation extension, faster COPY\n30. Remove NEW placeholder entry from stored view query range table\n31. TDE key management patches\n32. Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n33. Windows filesystem support improvements\n34. making relfilenodes 56 bit\n35. 
postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n36.recovery modules\n\nCommitfest status as of now:\nNeeds review: 177\nWaiting on Author: 47\nReady for Committer: 20\nCommitted: 31\nWithdrawn: 4\nRejected: 0\nReturned with Feedback: 0\nTotal: 279\n\nWe will be needing more members to actively review the patches to get\nmore patches to the committed state. I would like to remind you that\neach patch submitter is expected to review at least one patch from\nanother submitter during the CommitFest, those members who have not\npicked up patch for review please pick someone else's patch to review\nas soon as you can.\nI'll send out reminders this week to get your patches rebased and\nupdate the status of the patch accordingly.\n\n[1] - http://cfbot.cputube.org/\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Tue, 3 Jan 2023 13:13:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "[Commitfest 2023-01] has started"
},
{
"msg_contents": "On Tue, 3 Jan 2023 at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n>\n> Hi All,\n>\n> Just a reminder that Commitfest 2023-01 has started.\n> There are many patches based on the latest run from [1] which require\n> a) Rebased on top of head b) Fix compilation failures c) Fix test\n> failure, please have a look and rebase it so that it is easy for the\n> reviewers and committers:\n> 1. TAP output format for pg_regress\n> 2. Add BufFileRead variants with short read and EOF detection\n> 3. Add SHELL_EXIT_CODE variable to psql\n> 4. Add foreign-server health checks infrastructure\n> 5. Add last_vacuum_index_scans in pg_stat_all_tables\n> 6. Add index scan progress to pg_stat_progress_vacuum\n> 7. Add the ability to limit the amount of memory that can be allocated\n> to backends.\n> 8. Add tracking of backend memory allocated to pg_stat_activity\n> 9. CAST( ... ON DEFAULT)\n> 10. CF App: add \"Returned: Needs more interest\" close status\n> 11. CI and test improvements\n> 12. Cygwin cleanup\n> 13. Expand character set for ltree labels\n> 14. Fix tab completion MERGE\n> 15. Force streaming every change in logical decoding\n> 16. More scalable multixacts buffers and locking\n> 17. Move SLRU data into the regular buffer pool\n> 18. Move extraUpdatedCols out of RangeTblEntry\n> 19.New [relation] options engine\n> 20. Optimizing Node Files Support\n> 21. PGDOCS - Stats views and functions not in order?\n> 22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n> 23. Parallelize correlated subqueries that execute within each worker\n> 24. Pluggable toaster\n> 25. Prefetch the next tuple's memory during seqscans\n> 26. Pulling up direct-correlated ANY_SUBLINK\n> 27. Push aggregation down to base relations and joins\n> 28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> 29. Refactor relation extension, faster COPY\n> 30. Remove NEW placeholder entry from stored view query range table\n> 31. TDE key management patches\n> 32. 
Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n> 33. Windows filesystem support improvements\n> 34. making relfilenodes 56 bit\n> 35. postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> 36.recovery modules\n>\n> Commitfest status as of now:\n> Needs review: 177\n> Waiting on Author: 47\n> Ready for Committer: 20\n> Committed: 31\n> Withdrawn: 4\n> Rejected: 0\n> Returned with Feedback: 0\n> Total: 279\n>\n> We will be needing more members to actively review the patches to get\n> more patches to the committed state. I would like to remind you that\n> each patch submitter is expected to review at least one patch from\n> another submitter during the CommitFest, those members who have not\n> picked up patch for review please pick someone else's patch to review\n> as soon as you can.\n> I'll send out reminders this week to get your patches rebased and\n> update the status of the patch accordingly.\n>\n> [1] - http://cfbot.cputube.org/\n\nHi Hackers,\n\nHere's a quick status report after the first week (I think only about\n9 commits happened during the week, the rest were pre-CF activity):\n\nstatus | 3rd Jan | w1\n-------------------------+-----------+-----\nNeeds review: | 177 | 149\nWaiting on Author: | 47 | 60\nReady for Committer: | 20 | 23\nCommitted: | 31 | 40\nWithdrawn: | 4 | 7\nRejected: | 0 | 0\nReturned with Feedback: | 0 | 0\nTotal: | 279 | 279\n\nHere is a list of \"Needs review\" entries for which there has not been\nmuch communication on the thread and needs help in proceeding further.\nPlease pick one of these and help us on how to proceed further:\npgbench: using prepared BEGIN statement in a pipeline could cause an\nerror | Yugo Nagata\nFix dsa_free() to re-bin segment | Dongming Liu\npg_rewind: warn when checkpoint hasn't happened after promotion | James Coleman\nWork around non-atomic read of read of control file on ext4 | Thomas Munro\nRethinking the implementation of ts_headline | Tom Lane\nFix 
GetWALAvailability function code comments for WALAVAIL_REMOVED\nreturn value | sirisha chamarti\nFunction to log backtrace of postgres processes | vignesh C, Bharath Rupireddy\ndisallow HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_LOCKED_ONLY | Nathan Bossart\nNew hooks in the connection path | Bertrand Drouvot\nCheck consistency of GUC defaults between .sample.conf and\npg_settings.boot_val | Justin Pryzby\nAdd <<none>> support to sepgsql_restorecon | Joe Conway\npg_stat_statements and \"IN\" conditions | Dmitry Dolgov\nPatch to implement missing join selectivity estimation for range types\n| Zhicheng Luo, Maxime Schoemans, Diogo Repas, Mahmoud SAKR\nOperation log for major operations | Dmitry Koval\nConsider parallel for LATERAL subqueries having LIMIT/OFFSET | James Coleman\nUsing each rel as both outer and inner for anti-joins | Richard Guo\npartIndexlist for partitioned tables uniqueness | Arne Roland\nIn-place persistence change of a relation (fast ALTER TABLE ... SET\nLOGGED with wal_level=minimal) | Kyotaro Horiguchi\nSpeed up releasing of locks | Andres Freund, David Rowley\nnbtree performance improvements through specialization on key shape |\nMatthias van de Meent\nAdd sortsupport for range types and btree_gist | Christoph Heiss\nasynchronous execution support for Custom Scan | KaiGai Kohei, kazutaka onishi\n\nHere is a list of \"Ready for Committer\" entries for which there has\nnot been much communication on the thread and needs help in proceeding\nfurther. 
If any of the committers has some time to spare, please help\nus on these:\nFix assertion failure with barriers in parallel hash join | Thomas\nMunro, Melanie Plageman\npg_dump - read data for some options from external file | Pavel Stehule\nAdd non-blocking version of PQcancel | Jelte Fennema\nreduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\npg_stat_statements: Track statement entry timestamp | Andrei Zubkov\nAdd Amcheck option for checking unique constraints in btree indexes |\nMaxim Orlov, Pavel Borisov, Anastasia Lubennikova\nIntroduce a new view for checkpointer related stats | Bharath Rupireddy\nParallel Hash Full Join | Melanie Plageman\nUse fadvise in wal replay | Kirill Reshke, Jakub Wartak\npg_receivewal fail to streams when the partial file to write is not\nfully initialized present in the wal receiver directory | Bharath\nRupireddy, SATYANARAYANA NARLAPURAM\nLet libpq reject unexpected authentication requests | Jacob Champion\nSupport % wildcard in extension upgrade scripts | Sandro Santilli\nTAP output format for pg_regress | Daniel Gustafsson\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\n\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing, and I'll highlight a different set of patches\nnext week.\n\nI have pinged to patch owners who have submitted one or more patches\nbut have not picked any of the patches for review.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 8 Jan 2023 21:00:10 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Sun, 8 Jan 2023 at 21:00, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Tue, 3 Jan 2023 at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > Hi All,\n> >\n> > Just a reminder that Commitfest 2023-01 has started.\n> > There are many patches based on the latest run from [1] which require\n> > a) Rebased on top of head b) Fix compilation failures c) Fix test\n> > failure, please have a look and rebase it so that it is easy for the\n> > reviewers and committers:\n> > 1. TAP output format for pg_regress\n> > 2. Add BufFileRead variants with short read and EOF detection\n> > 3. Add SHELL_EXIT_CODE variable to psql\n> > 4. Add foreign-server health checks infrastructure\n> > 5. Add last_vacuum_index_scans in pg_stat_all_tables\n> > 6. Add index scan progress to pg_stat_progress_vacuum\n> > 7. Add the ability to limit the amount of memory that can be allocated\n> > to backends.\n> > 8. Add tracking of backend memory allocated to pg_stat_activity\n> > 9. CAST( ... ON DEFAULT)\n> > 10. CF App: add \"Returned: Needs more interest\" close status\n> > 11. CI and test improvements\n> > 12. Cygwin cleanup\n> > 13. Expand character set for ltree labels\n> > 14. Fix tab completion MERGE\n> > 15. Force streaming every change in logical decoding\n> > 16. More scalable multixacts buffers and locking\n> > 17. Move SLRU data into the regular buffer pool\n> > 18. Move extraUpdatedCols out of RangeTblEntry\n> > 19.New [relation] options engine\n> > 20. Optimizing Node Files Support\n> > 21. PGDOCS - Stats views and functions not in order?\n> > 22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n> > 23. Parallelize correlated subqueries that execute within each worker\n> > 24. Pluggable toaster\n> > 25. Prefetch the next tuple's memory during seqscans\n> > 26. Pulling up direct-correlated ANY_SUBLINK\n> > 27. Push aggregation down to base relations and joins\n> > 28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> > 29. 
Refactor relation extension, faster COPY\n> > 30. Remove NEW placeholder entry from stored view query range table\n> > 31. TDE key management patches\n> > 32. Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n> > 33. Windows filesystem support improvements\n> > 34. making relfilenodes 56 bit\n> > 35. postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> > 36.recovery modules\n> >\n> > Commitfest status as of now:\n> > Needs review: 177\n> > Waiting on Author: 47\n> > Ready for Committer: 20\n> > Committed: 31\n> > Withdrawn: 4\n> > Rejected: 0\n> > Returned with Feedback: 0\n> > Total: 279\n> >\n> > We will be needing more members to actively review the patches to get\n> > more patches to the committed state. I would like to remind you that\n> > each patch submitter is expected to review at least one patch from\n> > another submitter during the CommitFest, those members who have not\n> > picked up patch for review please pick someone else's patch to review\n> > as soon as you can.\n> > I'll send out reminders this week to get your patches rebased and\n> > update the status of the patch accordingly.\n> >\n> > [1] - http://cfbot.cputube.org/\n>\n> Hi Hackers,\n>\n> Here's a quick status report after the first week (I think only about\n> 9 commits happened during the week, the rest were pre-CF activity):\n>\n> status | 3rd Jan | w1\n> -------------------------+-----------+-----\n> Needs review: | 177 | 149\n> Waiting on Author: | 47 | 60\n> Ready for Committer: | 20 | 23\n> Committed: | 31 | 40\n> Withdrawn: | 4 | 7\n> Rejected: | 0 | 0\n> Returned with Feedback: | 0 | 0\n> Total: | 279 | 279\n>\n> Here is a list of \"Needs review\" entries for which there has not been\n> much communication on the thread and needs help in proceeding further.\n> Please pick one of these and help us on how to proceed further:\n> pgbench: using prepared BEGIN statement in a pipeline could cause an\n> error | Yugo Nagata\n> Fix dsa_free() to re-bin 
segment | Dongming Liu\n> pg_rewind: warn when checkpoint hasn't happened after promotion | James Coleman\n> Work around non-atomic read of read of control file on ext4 | Thomas Munro\n> Rethinking the implementation of ts_headline | Tom Lane\n> Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> return value | sirisha chamarti\n> Function to log backtrace of postgres processes | vignesh C, Bharath Rupireddy\n> disallow HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_LOCKED_ONLY | Nathan Bossart\n> New hooks in the connection path | Bertrand Drouvot\n> Check consistency of GUC defaults between .sample.conf and\n> pg_settings.boot_val | Justin Pryzby\n> Add <<none>> support to sepgsql_restorecon | Joe Conway\n> pg_stat_statements and \"IN\" conditions | Dmitry Dolgov\n> Patch to implement missing join selectivity estimation for range types\n> | Zhicheng Luo, Maxime Schoemans, Diogo Repas, Mahmoud SAKR\n> Operation log for major operations | Dmitry Koval\n> Consider parallel for LATERAL subqueries having LIMIT/OFFSET | James Coleman\n> Using each rel as both outer and inner for anti-joins | Richard Guo\n> partIndexlist for partitioned tables uniqueness | Arne Roland\n> In-place persistence change of a relation (fast ALTER TABLE ... SET\n> LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> Speed up releasing of locks | Andres Freund, David Rowley\n> nbtree performance improvements through specialization on key shape |\n> Matthias van de Meent\n> Add sortsupport for range types and btree_gist | Christoph Heiss\n> asynchronous execution support for Custom Scan | KaiGai Kohei, kazutaka onishi\n>\n> Here is a list of \"Ready for Committer\" entries for which there has\n> not been much communication on the thread and needs help in proceeding\n> further. 
If any of the committers has some time to spare, please help\n> us on these:\n> Fix assertion failure with barriers in parallel hash join | Thomas\n> Munro, Melanie Plageman\n> pg_dump - read data for some options from external file | Pavel Stehule\n> Add non-blocking version of PQcancel | Jelte Fennema\n> reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> pg_stat_statements: Track statement entry timestamp | Andrei Zubkov\n> Add Amcheck option for checking unique constraints in btree indexes |\n> Maxim Orlov, Pavel Borisov, Anastasia Lubennikova\n> Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> Parallel Hash Full Join | Melanie Plageman\n> Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> pg_receivewal fail to streams when the partial file to write is not\n> fully initialized present in the wal receiver directory | Bharath\n> Rupireddy, SATYANARAYANA NARLAPURAM\n> Let libpq reject unexpected authentication requests | Jacob Champion\n> Support % wildcard in extension upgrade scripts | Sandro Santilli\n> TAP output format for pg_regress | Daniel Gustafsson\n>\n> If you have submitted a patch and it's in \"Waiting for author\" state,\n> please aim to get it to \"Needs review\" state soon if you can, as\n> that's where people are most likely to be looking for things to\n> review.\n>\n> I have pinged most threads that are in \"Needs review\" state and don't\n> apply, compile warning-free, or pass check-world. 
I'll do some more\n> of that sort of thing, and I'll highlight a different set of patches\n> next week.\n\nHi Hackers,\n\nHere's a quick status report after the second week, there has been 13\nentries which were committed in the last week:\n\nstatus | 3rd Jan | w1 | w2\n-------------------------+-----------+-------+-----\nNeeds review: | 177 | 149 | 128\nWaiting on Author: | 47 | 60 | 64\nReady for Committer: | 20 | 23 | 26\nCommitted: | 31 | 40 | 53\nWithdrawn: | 4 | 7 | 7\nRejected: | 0 | 0 | 0\nReturned with Feedback: | 0 | 0 | 1\nTotal: | 279 | 279 | 279\n\nHere is a few different patches which \"Needs review\", please pick one\nof these and help us in proceeding further:\n1) Add semi-join pushdown to postgres_fdw | Alexander Pyhalov\n2) pg_upgrade test failure | Thomas Munro\n3) Fix progress report of CREATE INDEX for nested partitioned tables |\nIlya Gladyshev\n4) Non-replayable WAL records through overflows and >MaxAllocSize\nlengths | Matthias van de Meent\n5) Add sslmode \"no-clientcert\" to avoid auth failure in md5/scram\nconnections | Jim Jones\n6) Add SHELL_EXIT_CODE variable to psql | Corey Huinker\n7) Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\nreturn value | sirisha chamarthi\n8) New strategies for freezing, advancing relfrozenxid early | Peter Geoghegan\n9) Lockless queue of waiters based on atomic operations for LWLock |\nAlexander Korotkov, Pavel Borisov\n10) Refactor relation extension, faster COPY | Andres Freund\n11) Add system view tracking shared buffer actions | Melanie Plageman\n12) Add index scan progress to pg_stat_progress_vacuum | Sami Imseih\n13) HOT chain validation in verify_heapam() | Himanshu Upadhyaya\n14) Periodic burst growth of the checkpoint_req counter on replica. 
|\nAnton Melnikov\n15) Add EXPLAIN option GENERIC_PLAN for parameterized queries | Laurenz Albe\n16) More scalable multixacts buffers and locking | Kyotaro Horiguchi ,\nAndrey Borodin , Ivan Lazarev\n17) In-place persistence change of a relation (fast ALTER TABLE ...\nSET LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n18) Reducing planning time when tables have many partitions | Yuya Watari\n19) ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast\ntables | Justin Pryzby\n20) Reduce timing overhead of EXPLAIN ANALYZE using rdtsc | Andres\nFreund, Lukas Fittl, David Geier\n\nHere is a few different patches which are in \"Ready for Committer\"\nstate, if any of the committers has some time to spare, please have a\nlook:\n1) Support load balancing in libpq | Jelte Fennema\n2) Use the system CA pool for certificate verification |Jacob\nChampion, Thomas Habets\n3) PG DOCS - pub/sub - specifying optional parameters without values.\n| Peter Smith\n4) Doc: Rework contrib appendix -- informative titles, tweaked\nsentences | Karl Pinc\n5) Amcheck verification of GiST and GIN | Andrey Borodin, Heikki\nLinnakangas, Grigory Kryachko\n6) Introduce a new view for checkpointer related stats | Bharath Rupireddy\n7) Faster pglz compression | Andrey Borodin, tinsane\n8) AcquireExecutorLocks() and run-time pruning | Amit Langote\n9) Parallel Aggregates for string_agg and array_agg | David Rowley\n10) Simplify standby state machine a bit in\nWaitForWALToBecomeAvailable() | Bharath Rupireddy\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\n\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing, and I'll highlight a different set of patches\nnext week.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 15 Jan 2023 23:02:49 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Sun, 15 Jan 2023 at 23:02, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, 8 Jan 2023 at 21:00, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Tue, 3 Jan 2023 at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > Hi All,\n> > >\n> > > Just a reminder that Commitfest 2023-01 has started.\n> > > There are many patches based on the latest run from [1] which require\n> > > a) Rebased on top of head b) Fix compilation failures c) Fix test\n> > > failure, please have a look and rebase it so that it is easy for the\n> > > reviewers and committers:\n> > > 1. TAP output format for pg_regress\n> > > 2. Add BufFileRead variants with short read and EOF detection\n> > > 3. Add SHELL_EXIT_CODE variable to psql\n> > > 4. Add foreign-server health checks infrastructure\n> > > 5. Add last_vacuum_index_scans in pg_stat_all_tables\n> > > 6. Add index scan progress to pg_stat_progress_vacuum\n> > > 7. Add the ability to limit the amount of memory that can be allocated\n> > > to backends.\n> > > 8. Add tracking of backend memory allocated to pg_stat_activity\n> > > 9. CAST( ... ON DEFAULT)\n> > > 10. CF App: add \"Returned: Needs more interest\" close status\n> > > 11. CI and test improvements\n> > > 12. Cygwin cleanup\n> > > 13. Expand character set for ltree labels\n> > > 14. Fix tab completion MERGE\n> > > 15. Force streaming every change in logical decoding\n> > > 16. More scalable multixacts buffers and locking\n> > > 17. Move SLRU data into the regular buffer pool\n> > > 18. Move extraUpdatedCols out of RangeTblEntry\n> > > 19.New [relation] options engine\n> > > 20. Optimizing Node Files Support\n> > > 21. PGDOCS - Stats views and functions not in order?\n> > > 22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n> > > 23. Parallelize correlated subqueries that execute within each worker\n> > > 24. Pluggable toaster\n> > > 25. Prefetch the next tuple's memory during seqscans\n> > > 26. 
Pulling up direct-correlated ANY_SUBLINK\n> > > 27. Push aggregation down to base relations and joins\n> > > 28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> > > 29. Refactor relation extension, faster COPY\n> > > 30. Remove NEW placeholder entry from stored view query range table\n> > > 31. TDE key management patches\n> > > 32. Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n> > > 33. Windows filesystem support improvements\n> > > 34. making relfilenodes 56 bit\n> > > 35. postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> > > 36.recovery modules\n> > >\n> > > Commitfest status as of now:\n> > > Needs review: 177\n> > > Waiting on Author: 47\n> > > Ready for Committer: 20\n> > > Committed: 31\n> > > Withdrawn: 4\n> > > Rejected: 0\n> > > Returned with Feedback: 0\n> > > Total: 279\n> > >\n> > > We will be needing more members to actively review the patches to get\n> > > more patches to the committed state. I would like to remind you that\n> > > each patch submitter is expected to review at least one patch from\n> > > another submitter during the CommitFest, those members who have not\n> > > picked up patch for review please pick someone else's patch to review\n> > > as soon as you can.\n> > > I'll send out reminders this week to get your patches rebased and\n> > > update the status of the patch accordingly.\n> > >\n> > > [1] - http://cfbot.cputube.org/\n> >\n> > Hi Hackers,\n> >\n> > Here's a quick status report after the first week (I think only about\n> > 9 commits happened during the week, the rest were pre-CF activity):\n> >\n> > status | 3rd Jan | w1\n> > -------------------------+-----------+-----\n> > Needs review: | 177 | 149\n> > Waiting on Author: | 47 | 60\n> > Ready for Committer: | 20 | 23\n> > Committed: | 31 | 40\n> > Withdrawn: | 4 | 7\n> > Rejected: | 0 | 0\n> > Returned with Feedback: | 0 | 0\n> > Total: | 279 | 279\n> >\n> > Here is a list of \"Needs review\" entries for which there has not 
been\n> > much communication on the thread and needs help in proceeding further.\n> > Please pick one of these and help us on how to proceed further:\n> > pgbench: using prepared BEGIN statement in a pipeline could cause an\n> > error | Yugo Nagata\n> > Fix dsa_free() to re-bin segment | Dongming Liu\n> > pg_rewind: warn when checkpoint hasn't happened after promotion | James Coleman\n> > Work around non-atomic read of read of control file on ext4 | Thomas Munro\n> > Rethinking the implementation of ts_headline | Tom Lane\n> > Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> > return value | sirisha chamarti\n> > Function to log backtrace of postgres processes | vignesh C, Bharath Rupireddy\n> > disallow HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_LOCKED_ONLY | Nathan Bossart\n> > New hooks in the connection path | Bertrand Drouvot\n> > Check consistency of GUC defaults between .sample.conf and\n> > pg_settings.boot_val | Justin Pryzby\n> > Add <<none>> support to sepgsql_restorecon | Joe Conway\n> > pg_stat_statements and \"IN\" conditions | Dmitry Dolgov\n> > Patch to implement missing join selectivity estimation for range types\n> > | Zhicheng Luo, Maxime Schoemans, Diogo Repas, Mahmoud SAKR\n> > Operation log for major operations | Dmitry Koval\n> > Consider parallel for LATERAL subqueries having LIMIT/OFFSET | James Coleman\n> > Using each rel as both outer and inner for anti-joins | Richard Guo\n> > partIndexlist for partitioned tables uniqueness | Arne Roland\n> > In-place persistence change of a relation (fast ALTER TABLE ... 
SET\n> > LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> > Speed up releasing of locks | Andres Freund, David Rowley\n> > nbtree performance improvements through specialization on key shape |\n> > Matthias van de Meent\n> > Add sortsupport for range types and btree_gist | Christoph Heiss\n> > asynchronous execution support for Custom Scan | KaiGai Kohei, kazutaka onishi\n> >\n> > Here is a list of \"Ready for Committer\" entries for which there has\n> > not been much communication on the thread and needs help in proceeding\n> > further. If any of the committers has some time to spare, please help\n> > us on these:\n> > Fix assertion failure with barriers in parallel hash join | Thomas\n> > Munro, Melanie Plageman\n> > pg_dump - read data for some options from external file | Pavel Stehule\n> > Add non-blocking version of PQcancel | Jelte Fennema\n> > reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> > pg_stat_statements: Track statement entry timestamp | Andrei Zubkov\n> > Add Amcheck option for checking unique constraints in btree indexes |\n> > Maxim Orlov, Pavel Borisov, Anastasia Lubennikova\n> > Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> > Parallel Hash Full Join | Melanie Plageman\n> > Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> > pg_receivewal fail to streams when the partial file to write is not\n> > fully initialized present in the wal receiver directory | Bharath\n> > Rupireddy, SATYANARAYANA NARLAPURAM\n> > Let libpq reject unexpected authentication requests | Jacob Champion\n> > Support % wildcard in extension upgrade scripts | Sandro Santilli\n> > TAP output format for pg_regress | Daniel Gustafsson\n> >\n> > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > please aim to get it to \"Needs review\" state soon if you can, as\n> > that's where people are most likely to be looking for things to\n> > review.\n> >\n> > I have pinged most threads that are 
in \"Needs review\" state and don't\n> > apply, compile warning-free, or pass check-world. I'll do some more\n> > of that sort of thing, and I'll highlight a different set of patches\n> > next week.\n>\n> Hi Hackers,\n>\n> Here's a quick status report after the second week, there has been 13\n> entries which were committed in the last week:\n>\n> status | 3rd Jan | w1 | w2\n> -------------------------+-----------+-------+-----\n> Needs review: | 177 | 149 | 128\n> Waiting on Author: | 47 | 60 | 64\n> Ready for Committer: | 20 | 23 | 26\n> Committed: | 31 | 40 | 53\n> Withdrawn: | 4 | 7 | 7\n> Rejected: | 0 | 0 | 0\n> Returned with Feedback: | 0 | 0 | 1\n> Total: | 279 | 279 | 279\n>\n> Here is a few different patches which \"Needs review\", please pick one\n> of these and help us in proceeding further:\n> 1) Add semi-join pushdown to postgres_fdw | Alexander Pyhalov\n> 2) pg_upgrade test failure | Thomas Munro\n> 3) Fix progress report of CREATE INDEX for nested partitioned tables |\n> Ilya Gladyshev\n> 4) Non-replayable WAL records through overflows and >MaxAllocSize\n> lengths | Matthias van de Meent\n> 5) Add sslmode \"no-clientcert\" to avoid auth failure in md5/scram\n> connections | Jim Jones\n> 6) Add SHELL_EXIT_CODE variable to psql | Corey Huinker\n> 7) Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> return value | sirisha chamarthi\n> 8) New strategies for freezing, advancing relfrozenxid early | Peter Geoghegan\n> 9) Lockless queue of waiters based on atomic operations for LWLock |\n> Alexander Korotkov, Pavel Borisov\n> 10) Refactor relation extension, faster COPY | Andres Freund\n> 11) Add system view tracking shared buffer actions | Melanie Plageman\n> 12) Add index scan progress to pg_stat_progress_vacuum | Sami Imseih\n> 13) HOT chain validation in verify_heapam() | Himanshu Upadhyaya\n> 14) Periodic burst growth of the checkpoint_req counter on replica. 
|\n> Anton Melnikov\n> 15) Add EXPLAIN option GENERIC_PLAN for parameterized queries | Laurenz Albe\n> 16) More scalable multixacts buffers and locking | Kyotaro Horiguchi ,\n> Andrey Borodin , Ivan Lazarev\n> 17) In-place persistence change of a relation (fast ALTER TABLE ...\n> SET LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> 18) Reducing planning time when tables have many partitions | Yuya Watari\n> 19) ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast\n> tables | Justin Pryzby\n> 20) Reduce timing overhead of EXPLAIN ANALYZE using rdtsc | Andres\n> Freund, Lukas Fittl, David Geier\n>\n> Here is a few different patches which are in \"Ready for Committer\"\n> state, if any of the committers has some time to spare, please have a\n> look:\n> 1) Support load balancing in libpq | Jelte Fennema\n> 2) Use the system CA pool for certificate verification |Jacob\n> Champion, Thomas Habets\n> 3) PG DOCS - pub/sub - specifying optional parameters without values.\n> | Peter Smith\n> 4) Doc: Rework contrib appendix -- informative titles, tweaked\n> sentences | Karl Pinc\n> 5) Amcheck verification of GiST and GIN | Andrey Borodin, Heikki\n> Linnakangas, Grigory Kryachko\n> 6) Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> 7) Faster pglz compression | Andrey Borodin, tinsane\n> 8) AcquireExecutorLocks() and run-time pruning | Amit Langote\n> 9) Parallel Aggregates for string_agg and array_agg | David Rowley\n> 10) Simplify standby state machine a bit in\n> WaitForWALToBecomeAvailable() | Bharath Rupireddy\n>\n> If you have submitted a patch and it's in \"Waiting for author\" state,\n> please aim to get it to \"Needs review\" state soon if you can, as\n> that's where people are most likely to be looking for things to\n> review.\n>\n> I have pinged most threads that are in \"Needs review\" state and don't\n> apply, compile warning-free, or pass check-world. 
I'll do some more\n> of that sort of thing, and I'll highlight a different set of patches\n> next week.\n\nHi Hackers,\n\nHere's a quick status report after the third week, there has been 7\nentries which were committed in the last week:\nstatus | 3rd Jan | w1 | w2 | w3\n-------------------------+-----------+-------+-------+-------\nNeeds review: | 177 | 149 | 128 | 118\nWaiting on Author: | 47 | 60 | 64 | 65\nReady for Committer: | 20 | 23 | 26 | 26\nCommitted: | 31 | 40 | 53 | 60\nWithdrawn: | 4 | 7 | 7 | 8\nRejected: | 0 | 0 | 0 | 0\nReturned with Feedback: | 0 | 0 | 1 | 1\nTotal: | 279 | 279 | 279 | 279\n\nHere are a few patches which \"Needs review\", please pick one of these\nand help us in proceeding further:\n1) Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\nshowing metadata ...) | Justin Pryzby\n2) Fix pg_rewind race condition just after promotion | Heikki Linnakangas\n3) warn if GUC set to an invalid shared library | Justin Pryzby\n4) GUC for temporary disabling event triggers | Daniel Gustafsson\n5) Teach autovacuum.c to launch workers to advance table age without\nattendant antiwraparound cancellation behavior | Peter Geoghegan\n6) recovery modules | Nathan Bossart\n7) Add a new pg_walinspect function to extract FPIs from WAL records |\nBharath Rupireddy\n8) CI and test improvements | Justin Pryzby\n9) Test for function error in logrep worker | Anton Melnikov\n10) Allow tests to pass in OpenSSL FIPS mode | Peter Eisentraut\n11) Add a test for ldapbindpasswd | Andrew Dunstan, John Naylor\n12) Add TAP tests for psql \\g piped into program | Daniel Vérité\n13) Support MERGE ... 
WHEN NOT MATCHED BY SOURCE | Dean Rasheed\n14) add PROCESS_MAIN to VACUUM | Nathan Bossart\n15) SQL/JSON | Amit Langote, Nikita Glukhov\n16) Support MERGE into views | Dean Rasheed\n17) Exclusion constraints on partitioned tables | Paul Jungwirth\n18) Ability to reference other extensions by schema in extension\nscripts | Regina Obe\n19) COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns | Mingli Zhang\n20) Add support for DEFAULT specification in COPY FROM | Israel Barth\n\nHere is a few patches which are in \"Ready for Committer\" state, if any\nof the committers has some time to spare, please have a look:\n1) Transaction timeout | Andrey Borodin\n2) ANY_VALUE aggregate | Vik Fearing\n3) POC: Lock updated tuples in tuple_update() and tuple_delete() |\nAlexander Korotkov\n4) Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n5) PG DOCS - pub/sub - specifying optional parameters without values.\n| Peter Smith\n6) Doc: Rework contrib appendix -- informative titles, tweaked\nsentences | Karl Pinc\n7) Fix assertion failure with barriers in parallel hash join | Thomas\nMunro, Melanie Plageman\n8) On client login event trigger | Konstantin Knizhnik, Greg\nNancarrow, Mikhail Gribkov\n9) reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n10) Support % wildcard in extension upgrade scripts | Sandro Santilli\n\nIf you have submitted a patch and it's in \"Waiting for author\" state,\nplease aim to get it to \"Needs review\" state soon if you can, as\nthat's where people are most likely to be looking for things to\nreview.\n\nI have pinged most threads that are in \"Needs review\" state and don't\napply, compile warning-free, or pass check-world. I'll do some more\nof that sort of thing, and I'll highlight a different set of patches\nnext week.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 22 Jan 2023 22:00:03 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Sun, 22 Jan 2023 at 22:00, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, 15 Jan 2023 at 23:02, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, 8 Jan 2023 at 21:00, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Tue, 3 Jan 2023 at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > Hi All,\n> > > >\n> > > > Just a reminder that Commitfest 2023-01 has started.\n> > > > There are many patches based on the latest run from [1] which require\n> > > > a) Rebased on top of head b) Fix compilation failures c) Fix test\n> > > > failure, please have a look and rebase it so that it is easy for the\n> > > > reviewers and committers:\n> > > > 1. TAP output format for pg_regress\n> > > > 2. Add BufFileRead variants with short read and EOF detection\n> > > > 3. Add SHELL_EXIT_CODE variable to psql\n> > > > 4. Add foreign-server health checks infrastructure\n> > > > 5. Add last_vacuum_index_scans in pg_stat_all_tables\n> > > > 6. Add index scan progress to pg_stat_progress_vacuum\n> > > > 7. Add the ability to limit the amount of memory that can be allocated\n> > > > to backends.\n> > > > 8. Add tracking of backend memory allocated to pg_stat_activity\n> > > > 9. CAST( ... ON DEFAULT)\n> > > > 10. CF App: add \"Returned: Needs more interest\" close status\n> > > > 11. CI and test improvements\n> > > > 12. Cygwin cleanup\n> > > > 13. Expand character set for ltree labels\n> > > > 14. Fix tab completion MERGE\n> > > > 15. Force streaming every change in logical decoding\n> > > > 16. More scalable multixacts buffers and locking\n> > > > 17. Move SLRU data into the regular buffer pool\n> > > > 18. Move extraUpdatedCols out of RangeTblEntry\n> > > > 19.New [relation] options engine\n> > > > 20. Optimizing Node Files Support\n> > > > 21. PGDOCS - Stats views and functions not in order?\n> > > > 22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n> > > > 23. 
Parallelize correlated subqueries that execute within each worker\n> > > > 24. Pluggable toaster\n> > > > 25. Prefetch the next tuple's memory during seqscans\n> > > > 26. Pulling up direct-correlated ANY_SUBLINK\n> > > > 27. Push aggregation down to base relations and joins\n> > > > 28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> > > > 29. Refactor relation extension, faster COPY\n> > > > 30. Remove NEW placeholder entry from stored view query range table\n> > > > 31. TDE key management patches\n> > > > 32. Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n> > > > 33. Windows filesystem support improvements\n> > > > 34. making relfilenodes 56 bit\n> > > > 35. postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> > > > 36.recovery modules\n> > > >\n> > > > Commitfest status as of now:\n> > > > Needs review: 177\n> > > > Waiting on Author: 47\n> > > > Ready for Committer: 20\n> > > > Committed: 31\n> > > > Withdrawn: 4\n> > > > Rejected: 0\n> > > > Returned with Feedback: 0\n> > > > Total: 279\n> > > >\n> > > > We will be needing more members to actively review the patches to get\n> > > > more patches to the committed state. 
I would like to remind you that\n> > > > each patch submitter is expected to review at least one patch from\n> > > > another submitter during the CommitFest, those members who have not\n> > > > picked up patch for review please pick someone else's patch to review\n> > > > as soon as you can.\n> > > > I'll send out reminders this week to get your patches rebased and\n> > > > update the status of the patch accordingly.\n> > > >\n> > > > [1] - http://cfbot.cputube.org/\n> > >\n> > > Hi Hackers,\n> > >\n> > > Here's a quick status report after the first week (I think only about\n> > > 9 commits happened during the week, the rest were pre-CF activity):\n> > >\n> > > status | 3rd Jan | w1\n> > > -------------------------+-----------+-----\n> > > Needs review: | 177 | 149\n> > > Waiting on Author: | 47 | 60\n> > > Ready for Committer: | 20 | 23\n> > > Committed: | 31 | 40\n> > > Withdrawn: | 4 | 7\n> > > Rejected: | 0 | 0\n> > > Returned with Feedback: | 0 | 0\n> > > Total: | 279 | 279\n> > >\n> > > Here is a list of \"Needs review\" entries for which there has not been\n> > > much communication on the thread and needs help in proceeding further.\n> > > Please pick one of these and help us on how to proceed further:\n> > > pgbench: using prepared BEGIN statement in a pipeline could cause an\n> > > error | Yugo Nagata\n> > > Fix dsa_free() to re-bin segment | Dongming Liu\n> > > pg_rewind: warn when checkpoint hasn't happened after promotion | James Coleman\n> > > Work around non-atomic read of read of control file on ext4 | Thomas Munro\n> > > Rethinking the implementation of ts_headline | Tom Lane\n> > > Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> > > return value | sirisha chamarti\n> > > Function to log backtrace of postgres processes | vignesh C, Bharath Rupireddy\n> > > disallow HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_LOCKED_ONLY | Nathan Bossart\n> > > New hooks in the connection path | Bertrand Drouvot\n> > > Check consistency of GUC defaults 
between .sample.conf and\n> > > pg_settings.boot_val | Justin Pryzby\n> > > Add <<none>> support to sepgsql_restorecon | Joe Conway\n> > > pg_stat_statements and \"IN\" conditions | Dmitry Dolgov\n> > > Patch to implement missing join selectivity estimation for range types\n> > > | Zhicheng Luo, Maxime Schoemans, Diogo Repas, Mahmoud SAKR\n> > > Operation log for major operations | Dmitry Koval\n> > > Consider parallel for LATERAL subqueries having LIMIT/OFFSET | James Coleman\n> > > Using each rel as both outer and inner for anti-joins | Richard Guo\n> > > partIndexlist for partitioned tables uniqueness | Arne Roland\n> > > In-place persistence change of a relation (fast ALTER TABLE ... SET\n> > > LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> > > Speed up releasing of locks | Andres Freund, David Rowley\n> > > nbtree performance improvements through specialization on key shape |\n> > > Matthias van de Meent\n> > > Add sortsupport for range types and btree_gist | Christoph Heiss\n> > > asynchronous execution support for Custom Scan | KaiGai Kohei, kazutaka onishi\n> > >\n> > > Here is a list of \"Ready for Committer\" entries for which there has\n> > > not been much communication on the thread and needs help in proceeding\n> > > further. 
If any of the committers has some time to spare, please help\n> > > us on these:\n> > > Fix assertion failure with barriers in parallel hash join | Thomas\n> > > Munro, Melanie Plageman\n> > > pg_dump - read data for some options from external file | Pavel Stehule\n> > > Add non-blocking version of PQcancel | Jelte Fennema\n> > > reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> > > pg_stat_statements: Track statement entry timestamp | Andrei Zubkov\n> > > Add Amcheck option for checking unique constraints in btree indexes |\n> > > Maxim Orlov, Pavel Borisov, Anastasia Lubennikova\n> > > Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> > > Parallel Hash Full Join | Melanie Plageman\n> > > Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> > > pg_receivewal fail to streams when the partial file to write is not\n> > > fully initialized present in the wal receiver directory | Bharath\n> > > Rupireddy, SATYANARAYANA NARLAPURAM\n> > > Let libpq reject unexpected authentication requests | Jacob Champion\n> > > Support % wildcard in extension upgrade scripts | Sandro Santilli\n> > > TAP output format for pg_regress | Daniel Gustafsson\n> > >\n> > > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > > please aim to get it to \"Needs review\" state soon if you can, as\n> > > that's where people are most likely to be looking for things to\n> > > review.\n> > >\n> > > I have pinged most threads that are in \"Needs review\" state and don't\n> > > apply, compile warning-free, or pass check-world. 
I'll do some more\n> > > of that sort of thing, and I'll highlight a different set of patches\n> > > next week.\n> >\n> > Hi Hackers,\n> >\n> > Here's a quick status report after the second week, there has been 13\n> > entries which were committed in the last week:\n> >\n> > status | 3rd Jan | w1 | w2\n> > -------------------------+-----------+-------+-----\n> > Needs review: | 177 | 149 | 128\n> > Waiting on Author: | 47 | 60 | 64\n> > Ready for Committer: | 20 | 23 | 26\n> > Committed: | 31 | 40 | 53\n> > Withdrawn: | 4 | 7 | 7\n> > Rejected: | 0 | 0 | 0\n> > Returned with Feedback: | 0 | 0 | 1\n> > Total: | 279 | 279 | 279\n> >\n> > Here is a few different patches which \"Needs review\", please pick one\n> > of these and help us in proceeding further:\n> > 1) Add semi-join pushdown to postgres_fdw | Alexander Pyhalov\n> > 2) pg_upgrade test failure | Thomas Munro\n> > 3) Fix progress report of CREATE INDEX for nested partitioned tables |\n> > Ilya Gladyshev\n> > 4) Non-replayable WAL records through overflows and >MaxAllocSize\n> > lengths | Matthias van de Meent\n> > 5) Add sslmode \"no-clientcert\" to avoid auth failure in md5/scram\n> > connections | Jim Jones\n> > 6) Add SHELL_EXIT_CODE variable to psql | Corey Huinker\n> > 7) Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> > return value | sirisha chamarthi\n> > 8) New strategies for freezing, advancing relfrozenxid early | Peter Geoghegan\n> > 9) Lockless queue of waiters based on atomic operations for LWLock |\n> > Alexander Korotkov, Pavel Borisov\n> > 10) Refactor relation extension, faster COPY | Andres Freund\n> > 11) Add system view tracking shared buffer actions | Melanie Plageman\n> > 12) Add index scan progress to pg_stat_progress_vacuum | Sami Imseih\n> > 13) HOT chain validation in verify_heapam() | Himanshu Upadhyaya\n> > 14) Periodic burst growth of the checkpoint_req counter on replica. 
|\n> > Anton Melnikov\n> > 15) Add EXPLAIN option GENERIC_PLAN for parameterized queries | Laurenz Albe\n> > 16) More scalable multixacts buffers and locking | Kyotaro Horiguchi ,\n> > Andrey Borodin , Ivan Lazarev\n> > 17) In-place persistence change of a relation (fast ALTER TABLE ...\n> > SET LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> > 18) Reducing planning time when tables have many partitions | Yuya Watari\n> > 19) ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast\n> > tables | Justin Pryzby\n> > 20) Reduce timing overhead of EXPLAIN ANALYZE using rdtsc | Andres\n> > Freund, Lukas Fittl, David Geier\n> >\n> > Here is a few different patches which are in \"Ready for Committer\"\n> > state, if any of the committers has some time to spare, please have a\n> > look:\n> > 1) Support load balancing in libpq | Jelte Fennema\n> > 2) Use the system CA pool for certificate verification |Jacob\n> > Champion, Thomas Habets\n> > 3) PG DOCS - pub/sub - specifying optional parameters without values.\n> > | Peter Smith\n> > 4) Doc: Rework contrib appendix -- informative titles, tweaked\n> > sentences | Karl Pinc\n> > 5) Amcheck verification of GiST and GIN | Andrey Borodin, Heikki\n> > Linnakangas, Grigory Kryachko\n> > 6) Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> > 7) Faster pglz compression | Andrey Borodin, tinsane\n> > 8) AcquireExecutorLocks() and run-time pruning | Amit Langote\n> > 9) Parallel Aggregates for string_agg and array_agg | David Rowley\n> > 10) Simplify standby state machine a bit in\n> > WaitForWALToBecomeAvailable() | Bharath Rupireddy\n> >\n> > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > please aim to get it to \"Needs review\" state soon if you can, as\n> > that's where people are most likely to be looking for things to\n> > review.\n> >\n> > I have pinged most threads that are in \"Needs review\" state and don't\n> > apply, compile warning-free, or pass 
check-world. I'll do some more\n> > of that sort of thing, and I'll highlight a different set of patches\n> > next week.\n>\n> Hi Hackers,\n>\n> Here's a quick status report after the third week, there has been 7\n> entries which were committed in the last week:\n> status | 3rd Jan | w1 | w2 | w3\n> -------------------------+-----------+-------+-------+-------\n> Needs review: | 177 | 149 | 128 | 118\n> Waiting on Author: | 47 | 60 | 64 | 65\n> Ready for Committer: | 20 | 23 | 26 | 26\n> Committed: | 31 | 40 | 53 | 60\n> Withdrawn: | 4 | 7 | 7 | 8\n> Rejected: | 0 | 0 | 0 | 0\n> Returned with Feedback: | 0 | 0 | 1 | 1\n> Total: | 279 | 279 | 279 | 279\n>\n> Here are a few patches which \"Needs review\", please pick one of these\n> and help us in proceeding further:\n> 1) Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\n> showing metadata ...) | Justin Pryzby\n> 2) Fix pg_rewind race condition just after promotion | Heikki Linnakangas\n> 3) warn if GUC set to an invalid shared library | Justin Pryzby\n> 4) GUC for temporary disabling event triggers | Daniel Gustafsson\n> 5) Teach autovacuum.c to launch workers to advance table age without\n> attendant antiwraparound cancellation behavior | Peter Geoghegan\n> 6) recovery modules | Nathan Bossart\n> 7) Add a new pg_walinspect function to extract FPIs from WAL records |\n> Bharath Rupireddy\n> 8) CI and test improvements | Justin Pryzby\n> 9) Test for function error in logrep worker | Anton Melnikov\n> 10) Allow tests to pass in OpenSSL FIPS mode | Peter Eisentraut\n> 11) Add a test for ldapbindpasswd | Andrew Dunstan, John Naylor\n> 12) Add TAP tests for psql \\g piped into program | Daniel Vérité\n> 13) Support MERGE ... 
WHEN NOT MATCHED BY SOURCE | Dean Rasheed\n> 14) add PROCESS_MAIN to VACUUM | Nathan Bossart\n> 15) SQL/JSON | Amit Langote, Nikita Glukhov\n> 16) Support MERGE into views | Dean Rasheed\n> 17) Exclusion constraints on partitioned tables | Paul Jungwirth\n> 18) Ability to reference other extensions by schema in extension\n> scripts | Regina Obe\n> 19) COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns | Mingli Zhang\n> 20) Add support for DEFAULT specification in COPY FROM | Israel Barth\n>\n> Here is a few patches which are in \"Ready for Committer\" state, if any\n> of the committers has some time to spare, please have a look:\n> 1) Transaction timeout | Andrey Borodin\n> 2) ANY_VALUE aggregate | Vik Fearing\n> 3) POC: Lock updated tuples in tuple_update() and tuple_delete() |\n> Alexander Korotkov\n> 4) Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> 5) PG DOCS - pub/sub - specifying optional parameters without values.\n> | Peter Smith\n> 6) Doc: Rework contrib appendix -- informative titles, tweaked\n> sentences | Karl Pinc\n> 7) Fix assertion failure with barriers in parallel hash join | Thomas\n> Munro, Melanie Plageman\n> 8) On client login event trigger | Konstantin Knizhnik, Greg\n> Nancarrow, Mikhail Gribkov\n> 9) reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> 10) Support % wildcard in extension upgrade scripts | Sandro Santilli\n>\n> If you have submitted a patch and it's in \"Waiting for author\" state,\n> please aim to get it to \"Needs review\" state soon if you can, as\n> that's where people are most likely to be looking for things to\n> review.\n>\n> I have pinged most threads that are in \"Needs review\" state and don't\n> apply, compile warning-free, or pass check-world. 
I'll do some more\n> of that sort of thing, and I'll highlight a different set of patches\n> next week.\n\nHi,\n\nHere's a quick status report after the fourth week, with just a few days to go:\nstatus | 3rd Jan | w1 | w2 | w3 | w4\n-------------------------+-----------+-------+-------+-------+-------\nNeeds review: | 177 | 149 | 128 | 118 | 112\nWaiting on Author: | 47 | 60 | 64 | 65 | 64\nReady for Committer: | 20 | 23 | 26 | 26 | 24\nCommitted: | 31 | 40 | 53 | 60 | 65\nWithdrawn: | 4 | 7 | 7 | 8 | 12\nRejected: | 0 | 0 | 0 | 0 | 0\nReturned with Feedback: | 0 | 0 | 1 | 1 | 1\nTotal: | 279 | 279 | 279 | 279 | 279\n\nThere have been 5 patches that were committed in the last week.\nI will be updating the patches in the next couple of days before the\ncommitfest is closed. For patches waiting on author and hasn't had any\nupdates, I'm planning to mark\nas returned with feedback. Anything that is clearly making good\nprogress but isn't yet ready for committer, I'm going to move to the\nnext CF. It would be of great help if the patch owner or reviewer can\nhelp in moving the patch in the appropriate direction. Feel free to\nchange the state if I have updated the patch state wrongly.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Sun, 29 Jan 2023 22:05:23 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Sun, 29 Jan 2023 at 22:05, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, 22 Jan 2023 at 22:00, vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > On Sun, 15 Jan 2023 at 23:02, vignesh C <vignesh21@gmail.com> wrote:\n> > >\n> > > On Sun, 8 Jan 2023 at 21:00, vignesh C <vignesh21@gmail.com> wrote:\n> > > >\n> > > > On Tue, 3 Jan 2023 at 13:13, vignesh C <vignesh21@gmail.com> wrote:\n> > > > >\n> > > > > Hi All,\n> > > > >\n> > > > > Just a reminder that Commitfest 2023-01 has started.\n> > > > > There are many patches based on the latest run from [1] which require\n> > > > > a) Rebased on top of head b) Fix compilation failures c) Fix test\n> > > > > failure, please have a look and rebase it so that it is easy for the\n> > > > > reviewers and committers:\n> > > > > 1. TAP output format for pg_regress\n> > > > > 2. Add BufFileRead variants with short read and EOF detection\n> > > > > 3. Add SHELL_EXIT_CODE variable to psql\n> > > > > 4. Add foreign-server health checks infrastructure\n> > > > > 5. Add last_vacuum_index_scans in pg_stat_all_tables\n> > > > > 6. Add index scan progress to pg_stat_progress_vacuum\n> > > > > 7. Add the ability to limit the amount of memory that can be allocated\n> > > > > to backends.\n> > > > > 8. Add tracking of backend memory allocated to pg_stat_activity\n> > > > > 9. CAST( ... ON DEFAULT)\n> > > > > 10. CF App: add \"Returned: Needs more interest\" close status\n> > > > > 11. CI and test improvements\n> > > > > 12. Cygwin cleanup\n> > > > > 13. Expand character set for ltree labels\n> > > > > 14. Fix tab completion MERGE\n> > > > > 15. Force streaming every change in logical decoding\n> > > > > 16. More scalable multixacts buffers and locking\n> > > > > 17. Move SLRU data into the regular buffer pool\n> > > > > 18. Move extraUpdatedCols out of RangeTblEntry\n> > > > > 19.New [relation] options engine\n> > > > > 20. Optimizing Node Files Support\n> > > > > 21. 
PGDOCS - Stats views and functions not in order?\n> > > > > 22. POC: Lock updated tuples in tuple_update() and tuple_delete()\n> > > > > 23. Parallelize correlated subqueries that execute within each worker\n> > > > > 24. Pluggable toaster\n> > > > > 25. Prefetch the next tuple's memory during seqscans\n> > > > > 26. Pulling up direct-correlated ANY_SUBLINK\n> > > > > 27. Push aggregation down to base relations and joins\n> > > > > 28. Reduce timing overhead of EXPLAIN ANALYZE using rdtsc\n> > > > > 29. Refactor relation extension, faster COPY\n> > > > > 30. Remove NEW placeholder entry from stored view query range table\n> > > > > 31. TDE key management patches\n> > > > > 32. Use AF_UNIX for tests on Windows (ie drop fallback TCP code)\n> > > > > 33. Windows filesystem support improvements\n> > > > > 34. making relfilenodes 56 bit\n> > > > > 35. postgres_fdw: commit remote (sub)transactions in parallel during pre-commit\n> > > > > 36.recovery modules\n> > > > >\n> > > > > Commitfest status as of now:\n> > > > > Needs review: 177\n> > > > > Waiting on Author: 47\n> > > > > Ready for Committer: 20\n> > > > > Committed: 31\n> > > > > Withdrawn: 4\n> > > > > Rejected: 0\n> > > > > Returned with Feedback: 0\n> > > > > Total: 279\n> > > > >\n> > > > > We will be needing more members to actively review the patches to get\n> > > > > more patches to the committed state. 
I would like to remind you that\n> > > > > each patch submitter is expected to review at least one patch from\n> > > > > another submitter during the CommitFest, those members who have not\n> > > > > picked up patch for review please pick someone else's patch to review\n> > > > > as soon as you can.\n> > > > > I'll send out reminders this week to get your patches rebased and\n> > > > > update the status of the patch accordingly.\n> > > > >\n> > > > > [1] - http://cfbot.cputube.org/\n> > > >\n> > > > Hi Hackers,\n> > > >\n> > > > Here's a quick status report after the first week (I think only about\n> > > > 9 commits happened during the week, the rest were pre-CF activity):\n> > > >\n> > > > status | 3rd Jan | w1\n> > > > -------------------------+-----------+-----\n> > > > Needs review: | 177 | 149\n> > > > Waiting on Author: | 47 | 60\n> > > > Ready for Committer: | 20 | 23\n> > > > Committed: | 31 | 40\n> > > > Withdrawn: | 4 | 7\n> > > > Rejected: | 0 | 0\n> > > > Returned with Feedback: | 0 | 0\n> > > > Total: | 279 | 279\n> > > >\n> > > > Here is a list of \"Needs review\" entries for which there has not been\n> > > > much communication on the thread and needs help in proceeding further.\n> > > > Please pick one of these and help us on how to proceed further:\n> > > > pgbench: using prepared BEGIN statement in a pipeline could cause an\n> > > > error | Yugo Nagata\n> > > > Fix dsa_free() to re-bin segment | Dongming Liu\n> > > > pg_rewind: warn when checkpoint hasn't happened after promotion | James Coleman\n> > > > Work around non-atomic read of read of control file on ext4 | Thomas Munro\n> > > > Rethinking the implementation of ts_headline | Tom Lane\n> > > > Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> > > > return value | sirisha chamarti\n> > > > Function to log backtrace of postgres processes | vignesh C, Bharath Rupireddy\n> > > > disallow HEAP_XMAX_COMMITTED and HEAP_XMAX_IS_LOCKED_ONLY | Nathan Bossart\n> > > > New hooks in 
the connection path | Bertrand Drouvot\n> > > > Check consistency of GUC defaults between .sample.conf and\n> > > > pg_settings.boot_val | Justin Pryzby\n> > > > Add <<none>> support to sepgsql_restorecon | Joe Conway\n> > > > pg_stat_statements and \"IN\" conditions | Dmitry Dolgov\n> > > > Patch to implement missing join selectivity estimation for range types\n> > > > | Zhicheng Luo, Maxime Schoemans, Diogo Repas, Mahmoud SAKR\n> > > > Operation log for major operations | Dmitry Koval\n> > > > Consider parallel for LATERAL subqueries having LIMIT/OFFSET | James Coleman\n> > > > Using each rel as both outer and inner for anti-joins | Richard Guo\n> > > > partIndexlist for partitioned tables uniqueness | Arne Roland\n> > > > In-place persistence change of a relation (fast ALTER TABLE ... SET\n> > > > LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> > > > Speed up releasing of locks | Andres Freund, David Rowley\n> > > > nbtree performance improvements through specialization on key shape |\n> > > > Matthias van de Meent\n> > > > Add sortsupport for range types and btree_gist | Christoph Heiss\n> > > > asynchronous execution support for Custom Scan | KaiGai Kohei, kazutaka onishi\n> > > >\n> > > > Here is a list of \"Ready for Committer\" entries for which there has\n> > > > not been much communication on the thread and needs help in proceeding\n> > > > further. 
If any of the committers has some time to spare, please help\n> > > > us on these:\n> > > > Fix assertion failure with barriers in parallel hash join | Thomas\n> > > > Munro, Melanie Plageman\n> > > > pg_dump - read data for some options from external file | Pavel Stehule\n> > > > Add non-blocking version of PQcancel | Jelte Fennema\n> > > > reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> > > > pg_stat_statements: Track statement entry timestamp | Andrei Zubkov\n> > > > Add Amcheck option for checking unique constraints in btree indexes |\n> > > > Maxim Orlov, Pavel Borisov, Anastasia Lubennikova\n> > > > Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> > > > Parallel Hash Full Join | Melanie Plageman\n> > > > Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> > > > pg_receivewal fail to streams when the partial file to write is not\n> > > > fully initialized present in the wal receiver directory | Bharath\n> > > > Rupireddy, SATYANARAYANA NARLAPURAM\n> > > > Let libpq reject unexpected authentication requests | Jacob Champion\n> > > > Support % wildcard in extension upgrade scripts | Sandro Santilli\n> > > > TAP output format for pg_regress | Daniel Gustafsson\n> > > >\n> > > > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > > > please aim to get it to \"Needs review\" state soon if you can, as\n> > > > that's where people are most likely to be looking for things to\n> > > > review.\n> > > >\n> > > > I have pinged most threads that are in \"Needs review\" state and don't\n> > > > apply, compile warning-free, or pass check-world. 
I'll do some more\n> > > > of that sort of thing, and I'll highlight a different set of patches\n> > > > next week.\n> > >\n> > > Hi Hackers,\n> > >\n> > > Here's a quick status report after the second week, there has been 13\n> > > entries which were committed in the last week:\n> > >\n> > > status | 3rd Jan | w1 | w2\n> > > -------------------------+-----------+-------+-----\n> > > Needs review: | 177 | 149 | 128\n> > > Waiting on Author: | 47 | 60 | 64\n> > > Ready for Committer: | 20 | 23 | 26\n> > > Committed: | 31 | 40 | 53\n> > > Withdrawn: | 4 | 7 | 7\n> > > Rejected: | 0 | 0 | 0\n> > > Returned with Feedback: | 0 | 0 | 1\n> > > Total: | 279 | 279 | 279\n> > >\n> > > Here is a few different patches which \"Needs review\", please pick one\n> > > of these and help us in proceeding further:\n> > > 1) Add semi-join pushdown to postgres_fdw | Alexander Pyhalov\n> > > 2) pg_upgrade test failure | Thomas Munro\n> > > 3) Fix progress report of CREATE INDEX for nested partitioned tables |\n> > > Ilya Gladyshev\n> > > 4) Non-replayable WAL records through overflows and >MaxAllocSize\n> > > lengths | Matthias van de Meent\n> > > 5) Add sslmode \"no-clientcert\" to avoid auth failure in md5/scram\n> > > connections | Jim Jones\n> > > 6) Add SHELL_EXIT_CODE variable to psql | Corey Huinker\n> > > 7) Fix GetWALAvailability function code comments for WALAVAIL_REMOVED\n> > > return value | sirisha chamarthi\n> > > 8) New strategies for freezing, advancing relfrozenxid early | Peter Geoghegan\n> > > 9) Lockless queue of waiters based on atomic operations for LWLock |\n> > > Alexander Korotkov, Pavel Borisov\n> > > 10) Refactor relation extension, faster COPY | Andres Freund\n> > > 11) Add system view tracking shared buffer actions | Melanie Plageman\n> > > 12) Add index scan progress to pg_stat_progress_vacuum | Sami Imseih\n> > > 13) HOT chain validation in verify_heapam() | Himanshu Upadhyaya\n> > > 14) Periodic burst growth of the checkpoint_req counter on replica. 
|\n> > > Anton Melnikov\n> > > 15) Add EXPLAIN option GENERIC_PLAN for parameterized queries | Laurenz Albe\n> > > 16) More scalable multixacts buffers and locking | Kyotaro Horiguchi ,\n> > > Andrey Borodin , Ivan Lazarev\n> > > 17) In-place persistence change of a relation (fast ALTER TABLE ...\n> > > SET LOGGED with wal_level=minimal) | Kyotaro Horiguchi\n> > > 18) Reducing planning time when tables have many partitions | Yuya Watari\n> > > 19) ALTER TABLE and CLUSTER fail to use a BulkInsertState for toast\n> > > tables | Justin Pryzby\n> > > 20) Reduce timing overhead of EXPLAIN ANALYZE using rdtsc | Andres\n> > > Freund, Lukas Fittl, David Geier\n> > >\n> > > Here is a few different patches which are in \"Ready for Committer\"\n> > > state, if any of the committers has some time to spare, please have a\n> > > look:\n> > > 1) Support load balancing in libpq | Jelte Fennema\n> > > 2) Use the system CA pool for certificate verification |Jacob\n> > > Champion, Thomas Habets\n> > > 3) PG DOCS - pub/sub - specifying optional parameters without values.\n> > > | Peter Smith\n> > > 4) Doc: Rework contrib appendix -- informative titles, tweaked\n> > > sentences | Karl Pinc\n> > > 5) Amcheck verification of GiST and GIN | Andrey Borodin, Heikki\n> > > Linnakangas, Grigory Kryachko\n> > > 6) Introduce a new view for checkpointer related stats | Bharath Rupireddy\n> > > 7) Faster pglz compression | Andrey Borodin, tinsane\n> > > 8) AcquireExecutorLocks() and run-time pruning | Amit Langote\n> > > 9) Parallel Aggregates for string_agg and array_agg | David Rowley\n> > > 10) Simplify standby state machine a bit in\n> > > WaitForWALToBecomeAvailable() | Bharath Rupireddy\n> > >\n> > > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > > please aim to get it to \"Needs review\" state soon if you can, as\n> > > that's where people are most likely to be looking for things to\n> > > review.\n> > >\n> > > I have pinged most threads that are in \"Needs 
review\" state and don't\n> > > apply, compile warning-free, or pass check-world. I'll do some more\n> > > of that sort of thing, and I'll highlight a different set of patches\n> > > next week.\n> >\n> > Hi Hackers,\n> >\n> > Here's a quick status report after the third week, there has been 7\n> > entries which were committed in the last week:\n> > status | 3rd Jan | w1 | w2 | w3\n> > -------------------------+-----------+-------+-------+-------\n> > Needs review: | 177 | 149 | 128 | 118\n> > Waiting on Author: | 47 | 60 | 64 | 65\n> > Ready for Committer: | 20 | 23 | 26 | 26\n> > Committed: | 31 | 40 | 53 | 60\n> > Withdrawn: | 4 | 7 | 7 | 8\n> > Rejected: | 0 | 0 | 0 | 0\n> > Returned with Feedback: | 0 | 0 | 1 | 1\n> > Total: | 279 | 279 | 279 | 279\n> >\n> > Here are a few patches which \"Needs review\", please pick one of these\n> > and help us in proceeding further:\n> > 1) Avoid hiding shared filesets in pg_ls_tmpdir (pg_ls_* functions for\n> > showing metadata ...) | Justin Pryzby\n> > 2) Fix pg_rewind race condition just after promotion | Heikki Linnakangas\n> > 3) warn if GUC set to an invalid shared library | Justin Pryzby\n> > 4) GUC for temporary disabling event triggers | Daniel Gustafsson\n> > 5) Teach autovacuum.c to launch workers to advance table age without\n> > attendant antiwraparound cancellation behavior | Peter Geoghegan\n> > 6) recovery modules | Nathan Bossart\n> > 7) Add a new pg_walinspect function to extract FPIs from WAL records |\n> > Bharath Rupireddy\n> > 8) CI and test improvements | Justin Pryzby\n> > 9) Test for function error in logrep worker | Anton Melnikov\n> > 10) Allow tests to pass in OpenSSL FIPS mode | Peter Eisentraut\n> > 11) Add a test for ldapbindpasswd | Andrew Dunstan, John Naylor\n> > 12) Add TAP tests for psql \\g piped into program | Daniel Vérité\n> > 13) Support MERGE ... 
WHEN NOT MATCHED BY SOURCE | Dean Rasheed\n> > 14) add PROCESS_MAIN to VACUUM | Nathan Bossart\n> > 15) SQL/JSON | Amit Langote, Nikita Glukhov\n> > 16) Support MERGE into views | Dean Rasheed\n> > 17) Exclusion constraints on partitioned tables | Paul Jungwirth\n> > 18) Ability to reference other extensions by schema in extension\n> > scripts | Regina Obe\n> > 19) COPY FROM enable FORCE_NULL/FORCE_NOT_NULL on all columns | Mingli Zhang\n> > 20) Add support for DEFAULT specification in COPY FROM | Israel Barth\n> >\n> > Here is a few patches which are in \"Ready for Committer\" state, if any\n> > of the committers has some time to spare, please have a look:\n> > 1) Transaction timeout | Andrey Borodin\n> > 2) ANY_VALUE aggregate | Vik Fearing\n> > 3) POC: Lock updated tuples in tuple_update() and tuple_delete() |\n> > Alexander Korotkov\n> > 4) Use fadvise in wal replay | Kirill Reshke, Jakub Wartak\n> > 5) PG DOCS - pub/sub - specifying optional parameters without values.\n> > | Peter Smith\n> > 6) Doc: Rework contrib appendix -- informative titles, tweaked\n> > sentences | Karl Pinc\n> > 7) Fix assertion failure with barriers in parallel hash join | Thomas\n> > Munro, Melanie Plageman\n> > 8) On client login event trigger | Konstantin Knizhnik, Greg\n> > Nancarrow, Mikhail Gribkov\n> > 9) reduce impact of lengthy startup and checkpoint tasks | Nathan Bossart\n> > 10) Support % wildcard in extension upgrade scripts | Sandro Santilli\n> >\n> > If you have submitted a patch and it's in \"Waiting for author\" state,\n> > please aim to get it to \"Needs review\" state soon if you can, as\n> > that's where people are most likely to be looking for things to\n> > review.\n> >\n> > I have pinged most threads that are in \"Needs review\" state and don't\n> > apply, compile warning-free, or pass check-world. 
I'll do some more\n> > of that sort of thing, and I'll highlight a different set of patches\n> > next week.\n>\n> Hi,\n>\n> Here's a quick status report after the fourth week, with just a few days to go:\n> status | 3rd Jan | w1 | w2 | w3 | w4\n> -------------------------+-----------+-------+-------+-------+-------\n> Needs review: | 177 | 149 | 128 | 118 | 112\n> Waiting on Author: | 47 | 60 | 64 | 65 | 64\n> Ready for Committer: | 20 | 23 | 26 | 26 | 24\n> Committed: | 31 | 40 | 53 | 60 | 65\n> Withdrawn: | 4 | 7 | 7 | 8 | 12\n> Rejected: | 0 | 0 | 0 | 0 | 0\n> Returned with Feedback: | 0 | 0 | 1 | 1 | 1\n> Total: | 279 | 279 | 279 | 279 | 279\n>\n> There have been 5 patches that were committed in the last week.\n> I will be updating the patches in the next couple of days before the\n> commitfest is closed. For patches waiting on author and hasn't had any\n> updates, I'm planning to mark\n> as returned with feedback. Anything that is clearly making good\n> progress but isn't yet ready for committer, I'm going to move to the\n> next CF. It would be of great help if the patch owner or reviewer can\n> help in moving the patch in the appropriate direction. Feel free to\n> change the state if I have updated the patch state wrongly.\n\nHere are the final numbers at the end of the commitfest:\nstatus | 3rd Jan | w1 | w2 | w3 | w4 | final\n-------------------------+-----------+-------+-------+-------+-------+-------\nCommitted: | 31 | 40 | 53 | 60 | 65 | 70\nMoved to Next CF | 0 | 0 | 0 | 0 | 0 | 183\nWithdrawn: | 4 | 7 | 7 | 8 | 12 | 14\nReturned with Feedback: | 0 | 0 | 1 | 1 | 1 | 12\nTotal: | 279 | 279 | 279 | 279 | 279 | 279\n\nI don't have permissions to close the commitfest, could one of them\nhelp in closing the commitfest.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 1 Feb 2023 08:13:57 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
    "msg_contents": "On Wed, Feb 1, 2023 at 10:44 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> I don't have permissions to close the commitfest, could one of them\n> help in closing the commitfest.\n\nIt's technically 17:53 at Anywhere on Earth, so we usually wait for\nthe day to be over before doing so. But since you already took care\ntriaging all CF entries I closed the CF.\n\nThanks for being the CFM!\n\n\n",
"msg_date": "Wed, 1 Feb 2023 13:54:50 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Wed, Feb 01, 2023 at 01:54:50PM +0800, Julien Rouhaud wrote:\n> On Wed, Feb 1, 2023 at 10:44 AM vignesh C <vignesh21@gmail.com> wrote:\n>> I don't have permissions to close the commitfest, could one of them\n>> help in closing the commitfest.\n\nWow. Thanks for looking at all these entries!\n--\nMichael",
"msg_date": "Wed, 1 Feb 2023 15:07:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [Commitfest 2023-01] has started"
},
{
"msg_contents": "On Wed, 1 Feb 2023 at 11:25, Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Feb 1, 2023 at 10:44 AM vignesh C <vignesh21@gmail.com> wrote:\n> >\n> > I don't have permissions to close the commitfest, could one of them\n> > help in closing the commitfest.\n>\n> It's technically 17:53 at Anywhere on Earth, so we usually wait for\n> the day to be over before doing so. But since you already took care\n> triaging all CF entries I closed to CF.\n\nI had updated the entries at 31st EOD India time, probably I should\nhave waited for the other parts of the world too. I will take care of\nthis next time. Thanks for closing the CF.\n\nRegards,\nVignesh\n\n\n",
"msg_date": "Wed, 1 Feb 2023 18:31:16 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [Commitfest 2023-01] has started"
}
] |
[
{
    "msg_contents": "include/pg_config.h\n14: #define ALIGNOF_PG_INT128_TYPE 16\n355: #define MAXIMUM_ALIGNOF 8\n374: #define PG_INT128_TYPE __int128\n\n/include/c.h\n507: /*\n508: * 128-bit signed and unsigned integers\n509: * There currently is only limited support for such types.\n510: * E.g. 128bit literals and snprintf are not supported; but math is.\n511: * Also, because we exclude such types when choosing MAXIMUM_ALIGNOF,\n512: * it must be possible to coerce the compiler to allocate them on no\n513: * more than MAXALIGN boundaries.\n514: */\n515: #if defined(PG_INT128_TYPE)\n516: #if defined(pg_attribute_aligned) || ALIGNOF_PG_INT128_TYPE <=\nMAXIMUM_ALIGNOF\n517: #define HAVE_INT128 1\n518:\n519: typedef PG_INT128_TYPE int128\n520: #if defined(pg_attribute_aligned)\n521: pg_attribute_aligned(MAXIMUM_ALIGNOF)\n522: #endif\n523: ;\n524:\n525: typedef unsigned PG_INT128_TYPE uint128\n526: #if defined(pg_attribute_aligned)\n527: pg_attribute_aligned(MAXIMUM_ALIGNOF)\n528: #endif\n529: ;\n530:\n531: #endif\n532: #endif\n533:\n\n\nHi.\nI am slightly confused by the int128 type. I thought the 128 bit integer\nmeans range type will be upto 2 ^ 127 - 1.\nNow just copy the above code and test the int128 range.\nint128 can only up to 9223372036854775807 (2 ^ 63 -1).\n\nalso\nFile: /home/jian/helloc/pg/pg_interval/include/pg_config_ext.h\n6: /* Define to the name of a signed 64-bit integer type. */\n7: #define PG_INT64_TYPE long int\nI also thought that 64-bit means range up to 2 ^ 63 -1. Obviously I was\nwrong.\n\nSo when we say \"128 bit\" what does it actually mean?\n\n-- \n I recommend David Deutsch's <<The Beginning of Infinity>>\n\n Jian",
"msg_date": "Tue, 3 Jan 2023 15:57:10 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "128-bit integers can range only up to (2 ^ 63 -1)"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> writes:\n> I am slightly confused by the int128 type. I thought the 128 bit integer\n> means range type will be upto 2 ^ 127 - 1.\n> Now just copy the above code and test the int128 range.\n> int128 can only up to 9223372036854775807 (2 ^ 63 -1).\n\nWhat's your grounds for claiming that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 10:20:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 128-bit integers can range only up to (2 ^ 63 -1)"
},
{
    "msg_contents": "On Tue, Jan 3, 2023 at 8:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> jian he <jian.universality@gmail.com> writes:\n> > I am slightly confused by the int128 type. I thought the 128 bit integer\n> > means range type will be upto 2 ^ 127 - 1.\n> > Now just copy the above code and test the int128 range.\n> > int128 can only up to 9223372036854775807 (2 ^ 63 -1).\n>\n> What's your grounds for claiming that?\n>\n> regards, tom lane\n>\n\n\nI did something like int128 a1 = 9223372036854775807 +\n1;\nI also did something like int128 a1 = (int128)9223372036854775807000;\nI misread the warning. I should do the cast first.\n\nThe second expression has a warning. I guess because\n\n> There is no support in GCC for expressing an integer constant of type\n> __int128 for targets with long long integer less than 128 bits wide.\n>\nhttps://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html",
"msg_date": "Tue, 3 Jan 2023 21:42:13 +0530",
"msg_from": "jian he <jian.universality@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: 128-bit integers can range only up to (2 ^ 63 -1)"
},
{
"msg_contents": "jian he <jian.universality@gmail.com> writes:\n> On Tue, Jan 3, 2023 at 8:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What's your grounds for claiming that?\n\n> I did something like int128 a1 = 9223372036854775807 +\n> 1;\n\nWell, that's going to do the arithmetic in (probably) long int.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 03 Jan 2023 11:17:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 128-bit integers can range only up to (2 ^ 63 -1)"
}
] |
[
{
"msg_contents": "Delay commit status checks until freezing executes.\n\npg_xact lookups are relatively expensive. Move the xmin/xmax commit\nstatus checks from the point that freeze plans are prepared to the point\nthat they're actually executed. Otherwise we'll repeat many commit\nstatus checks whenever multiple successive VACUUM operations scan the\nsame pages and decide against freezing each time, which is a waste of\ncycles.\n\nOversight in commit 1de58df4, which added page-level freezing.\n\nAuthor: Peter Geoghegan <pg@bowt.ie>\nDiscussion: https://postgr.es/m/CAH2-WzkZpe4K6qMfEt8H4qYJCKc2R7TPvKsBva7jc9w7iGXQSw@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/79d4bf4eff14d8967b10ad4c60039c1b9b0cf66e\n\nModified Files\n--------------\nsrc/backend/access/heap/heapam.c | 89 ++++++++++++++++++++++++++++------------\nsrc/include/access/heapam.h | 9 ++++\n2 files changed, 71 insertions(+), 27 deletions(-)",
"msg_date": "Tue, 03 Jan 2023 19:23:41 +0000",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 19:23:41 +0000, Peter Geoghegan wrote:\n> Delay commit status checks until freezing executes.\n> \n> pg_xact lookups are relatively expensive. Move the xmin/xmax commit\n> status checks from the point that freeze plans are prepared to the point\n> that they're actually executed. Otherwise we'll repeat many commit\n> status checks whenever multiple successive VACUUM operations scan the\n> same pages and decide against freezing each time, which is a waste of\n> cycles.\n> \n> Oversight in commit 1de58df4, which added page-level freezing.\n> \n> Author: Peter Geoghegan <pg@bowt.ie>\n> Discussion: https://postgr.es/m/CAH2-WzkZpe4K6qMfEt8H4qYJCKc2R7TPvKsBva7jc9w7iGXQSw@mail.gmail.com\n\nThere's some changes from TransactionIdDidCommit() to !TransactionIdDidAbort()\nthat don't look right to me. If the server crashed while xid X was\nin-progress, TransactionIdDidCommit(X) will return false, but so will\nTransactionIdDidAbort(X). So besides moving when the check happens you also\nchanged what's being checked in a more substantial way.\n\n\nAlso, why did you change when MarkBufferDirty() happens? Previously it\nhappened before we modify the page contents, now after. That's probably fine\n(it's the order suggested in transam/README), but seems like a mighty subtle\nthing to change at the same time as something unrelated, particularly without\neven mentioning it?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Jan 2023 16:54:33 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 4:54 PM Andres Freund <andres@anarazel.de> wrote:\n> There's some changes from TransactionIdDidCommit() to !TransactionIdDidAbort()\n> that don't look right to me. If the server crashed while xid X was\n> in-progress, TransactionIdDidCommit(X) will return false, but so will\n> TransactionIdDidAbort(X). So besides moving when the check happens you also\n> changed what's being checked in a more substantial way.\n\nI did point this out on the thread. I made this change with the\nintention of making the check more robust. Apparently this was\nmisguided.\n\nWhere is the behavior that you describe documented, if anywhere?\n\n> Also, why did you change when MarkBufferDirty() happens? Previously it\n> happened before we modify the page contents, now after. That's probably fine\n> (it's the order suggested in transam/README), but seems like a mighty subtle\n> thing to change at the same time as something unrelated, particularly without\n> even mentioning it?\n\nI changed it because the new order is idiomatic. I didn't think that\nthis was particularly worth mentioning, or even subtle. The logic from\nheap_execute_freeze_tuple() only performs simple in-place\nmodifications.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:15:44 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "(Pruning -committers from the list, since cross-posting to -hackers\nresulted in this being held up for moderation.)\n\nOn Tue, Jan 3, 2023 at 5:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 3, 2023 at 4:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > There's some changes from TransactionIdDidCommit() to !TransactionIdDidAbort()\n> > that don't look right to me. If the server crashed while xid X was\n> > in-progress, TransactionIdDidCommit(X) will return false, but so will\n> > TransactionIdDidAbort(X). So besides moving when the check happens you also\n> > changed what's being checked in a more substantial way.\n>\n> I did point this out on the thread. I made this change with the\n> intention of making the check more robust. Apparently this was\n> misguided.\n>\n> Where is the behavior that you describe documented, if anywhere?\n\nWhen the server crashes, and we have a problem case, what does\nTransactionLogFetch()/TransactionIdGetStatus() (which are the guts of\nboth TransactionIdDidCommit and TransactionIdDidAbort) report about\nthe XID?\n\n> > Also, why did you change when MarkBufferDirty() happens? Previously it\n> > happened before we modify the page contents, now after. That's probably fine\n> > (it's the order suggested in transam/README), but seems like a mighty subtle\n> > thing to change at the same time as something unrelated, particularly without\n> > even mentioning it?\n>\n> I changed it because the new order is idiomatic. I didn't think that\n> this was particularly worth mentioning, or even subtle. The logic from\n> heap_execute_freeze_tuple() only performs simple in-place\n> modifications.\n\nI'm including this here because presumably -hackers will have missed\nit due to the moderation hold-up issue.\n\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 17:54:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 17:54:37 -0800, Peter Geoghegan wrote:\n> (Pruning -committers from the list, since cross-posting to -hackers\n> resulted in this being held up for moderation.)\n\nI still think these moderation rules are deeply unhelpful...\n\n\n> On Tue, Jan 3, 2023 at 5:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Tue, Jan 3, 2023 at 4:54 PM Andres Freund <andres@anarazel.de> wrote:\n> > > There's some changes from TransactionIdDidCommit() to !TransactionIdDidAbort()\n> > > that don't look right to me. If the server crashed while xid X was\n> > > in-progress, TransactionIdDidCommit(X) will return false, but so will\n> > > TransactionIdDidAbort(X). So besides moving when the check happens you also\n> > > changed what's being checked in a more substantial way.\n> >\n> > I did point this out on the thread. I made this change with the\n> > intention of making the check more robust. Apparently this was\n> > misguided.\n> >\n> > Where is the behavior that you describe documented, if anywhere?\n\nI don't know - I think there's a explicit comment somewhere, but I couldn't\nfind it immediately. There's a bunch of indirect references to in in\nheapam_visibility.c, with comments like \"it must have aborted or\ncrashed\".\n\nThe reason for the behaviour is that we do not have any mechanism for going\nthrough the clog and aborting all in-progress-during-crash transactions. So\nwe'll end up with the clog for all in-progress-during-crash transaction being\nzero / TRANSACTION_STATUS_IN_PROGRESS.\n\nIMO it's almost always wrong to use TransactionIdDidAbort().\n\n\n> When the server crashes, and we have a problem case, what does\n> TransactionLogFetch()/TransactionIdGetStatus() (which are the guts of\n> both TransactionIdDidCommit and TransactionIdDidAbort) report about\n> the XID?\n\nDepends a bit on the specifics, but mostly TRANSACTION_STATUS_IN_PROGRESS.\n\n\n\n> > > Also, why did you change when MarkBufferDirty() happens? 
Previously it\n> > > happened before we modify the page contents, now after. That's probably fine\n> > > (it's the order suggested in transam/README), but seems like a mighty subtle\n> > > thing to change at the same time as something unrelated, particularly without\n> > > even mentioning it?\n> >\n> > I changed it because the new order is idiomatic. I didn't think that\n> > this was particularly worth mentioning, or even subtle. The logic from\n> > heap_execute_freeze_tuple() only performs simple in-place\n> > modifications.\n\nI think changes in how WAL logging is done are just about always worth\nmentioning in a commit message...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Jan 2023 19:56:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 7:56 PM Andres Freund <andres@anarazel.de> wrote:\n> I still think these moderation rules are deeply unhelpful...\n\nYes, it is rather annoying.\n\n> I don't know - I think there's a explicit comment somewhere, but I couldn't\n> find it immediately. There's a bunch of indirect references to in in\n> heapam_visibility.c, with comments like \"it must have aborted or\n> crashed\".\n\nI think that that's a far cry from any kind of documentation...\n\n> The reason for the behaviour is that we do not have any mechanism for going\n> through the clog and aborting all in-progress-during-crash transactions. So\n> we'll end up with the clog for all in-progress-during-crash transaction being\n> zero / TRANSACTION_STATUS_IN_PROGRESS.\n\nI find this astonishing. Why isn't there a prominent comment that\nadvertises that TransactionIdDidAbort() just doesn't work reliably?\n\n> IMO it's almost always wrong to use TransactionIdDidAbort().\n\nI didn't think that there was any general guarantee about\nTransactionIdDidAbort() working after a crash. But this is an on-disk\nXID, taken from some tuple's xmax, which must have a value <\nOldestXmin.\n\n> I think changes in how WAL logging is done are just about always worth\n> mentioning in a commit message...\n\nI agree with that as a general statement, but I never imagined that\nthis was a case that such a statement could apply to.\n\nI will try to remember to put something about similar changes in any\nfuture commit messages, in the unlikely event that I ever end up\nmoving MarkBufferDirty() around in some existing critical section in\nthe future.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 20:29:53 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 8:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I find this astonishing. Why isn't there a prominent comment that\n> advertises that TransactionIdDidAbort() just doesn't work reliably?\n\nI pushed a fix for this now.\n\nWe should add a comment about this issue to TransactionIdDidAbort()\nheader comments, but I didn't do that part yet.\n\nThanks for the report.\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 21:50:48 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 20:29:53 -0800, Peter Geoghegan wrote:\n> On Tue, Jan 3, 2023 at 7:56 PM Andres Freund <andres@anarazel.de> wrote:\n> > I don't know - I think there's a explicit comment somewhere, but I couldn't\n> > find it immediately. There's a bunch of indirect references to in in\n> > heapam_visibility.c, with comments like \"it must have aborted or\n> > crashed\".\n> \n> I think that that's a far cry from any kind of documentation...\n\nAgreed - not sure if there never were docs, or whether they were accidentally\nremoved. This stuff has been that way for a long time.\n\nI'd say a comment above TransactionIdDidAbort() referencing an overview\ncomment at the top of the file? I think it might be worth moving the comment\nfrom heapam_visibility.c to transam.c?\n\n\n> > The reason for the behaviour is that we do not have any mechanism for going\n> > through the clog and aborting all in-progress-during-crash transactions. So\n> > we'll end up with the clog for all in-progress-during-crash transaction being\n> > zero / TRANSACTION_STATUS_IN_PROGRESS.\n> \n> I find this astonishing. Why isn't there a prominent comment that\n> advertises that TransactionIdDidAbort() just doesn't work reliably?\n\nArguably it works reliably, just more narrowly than one might think. Treating\n\"crashed transactions\" as a distinct state from explicit aborts.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:33:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 10:33 PM Andres Freund <andres@anarazel.de> wrote:\n> I'd say a comment above TransactionIdDidAbort() referencing an overview\n> comment at the top of the file? I think it might be worth moving the comment\n> from heapam_visibility.c to transam.c?\n\nWhat comments in heapam_visibility.c should we be referencing here? I\ndon't see anything about it there. I have long been aware that those\nroutines deduce that a transaction must have aborted, but surely\nthat's not nearly enough. That's merely not being broken, without any\nexplanation given as to why.\n\n> > I find this astonishing. Why isn't there a prominent comment that\n> > advertises that TransactionIdDidAbort() just doesn't work reliably?\n>\n> Arguably it works reliably, just more narrowly than one might think. Treating\n> \"crashed transactions\" as a distinct state from explicit aborts.\n\nThat's quite a stretch. There are numerous comments that pretty much\nimply that TransactionIdDidCommit/TransactionIdDidAbort are very\nsimilar, for example any discussion of how you need to call\nTransactionIdIsInProgress first before calling either of the other\ntwo.\n\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:41:35 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-03 22:41:35 -0800, Peter Geoghegan wrote:\n> On Tue, Jan 3, 2023 at 10:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > I'd say a comment above TransactionIdDidAbort() referencing an overview\n> > comment at the top of the file? I think it might be worth moving the comment\n> > from heapam_visibility.c to transam.c?\n> \n> What comments in heapam_visibility.c should we be referencing here? I\n> don't see anything about it there. I have long been aware that those\n> routines deduce that a transaction must have aborted, but surely\n> that's not nearly enough. That's merely not being broken, without any\n> explanation given as to why.\n\nIMO the comment at the top mentioning why the TransactionIdIsInProgress()\ncalls are crucial / need to be done first would be considerably more likely to\nbe found in transam.c than heapam_visibility.c. And it'd make sense to have\nthe explanation of why TransactionIdDidAbort() isn't the same as\n!TransactionIdDidCommit(), even for !TransactionIdIsInProgress() xacts, near\nthe explanation for doing TransactionIdIsInProgress() first.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:47:40 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 10:47 PM Andres Freund <andres@anarazel.de> wrote:\n> IMO the comment at the top mentioning why the TransactionIdIsInProgress()\n> calls are crucial / need to be done first would be considerably more likely to\n> be found in transam.c than heapam_visibility.c.\n\nYeah, but they're duplicated anyway. For example in the transam\nREADME. Plus we have references to these same comments from other\nfiles, such as heapam.c, which mentions heapam_visibility.c by name as\nwhere you go to learn more about this issue.\n\n> And it'd make sense to have\n> the explanation of why TransactionIdDidAbort() isn't the same as\n> !TransactionIdDidCommit(), even for !TransactionIdIsInProgress() xacts, near\n> the explanation for doing TransactionIdIsInProgress() first.\n\nI think that we should definitely have a comment directly over\nTransactionIdDidAbort(). Though I wouldn't mind reorganizing these\nother comments, or making the comment over TransactionIdDidAbort()\nmostly just point to the other comments.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Tue, 3 Jan 2023 22:52:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 1:53 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that we should definitely have a comment directly over\n> TransactionIdDidAbort(). Though I wouldn't mind reorganizing these\n> other comments, or making the comment over TransactionIdDidAbort()\n> mostly just point to the other comments.\n\nYeah, I think it would be good to have a comment there. As Andres\nsays, it is almost always wrong to use this function, and we should\nmake that more visible. Possibly we should even rename the function,\nlike TransactionIdKnownToHaveAborted().\n\nBut that having been said, I'm kind of astonished that you didn't know\nabout this already. The freezing behavior is in general extremely hard\nto get right, and I guess I feel if you don't understand how the\nunderlying functions work, including things like performance\nconsiderations and which functions return fully reliable results, I do\nnot think you should be committing your own patches in this area.\nThere is probably a lot of potential benefit in improving the way this\nstuff works, but there is also a heck of a lot of danger of creating\nsubtle data corrupting bugs that could easily take years to find.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:02:59 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 7:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> But that having been said, I'm kind of astonished that you didn't know\n> about this already. The freezing behavior is in general extremely hard\n> to get right, and I guess I feel if you don't understand how the\n> underlying functions work, including things like performance\n> considerations\n\nI was the one that reported the issue with CLOG lookups in the first place.\n\n> and which functions return fully reliable results, I do\n> not think you should be committing your own patches in this area.\n\nMy mistake here had nothing to do with my own goals. I was trying to\nbe diligent by hardening an existing check in passing, and it\nbackfired.\n\n> There is probably a lot of potential benefit in improving the way this\n> stuff works, but there is also a heck of a lot of danger of creating\n> subtle data corrupting bugs that could easily take years to find.\n\nIt's currently possible for VACUUM to set the all-frozen bit while\nunsetting the all-visible bit, due to a race condition [1]. This is\nyour long standing bug. So apparently nobody is qualified to commit\npatches in this area.\n\nAbout a year ago, there was a massive argument over some earlier work\nin the same general area, by me. Being the subject of a pile-on on\nthis mailing list is something that I find deeply upsetting and\ndemoralizing. I just cannot take much more of it. At the same time,\nI've made quite an investment in the pending patches, and think that\nit's something that I have to see through.\n\nIf I am allowed to finish what I've started, then I will stop all new\nwork on VACUUM. I'll go back to working on B-Tree indexing. Nobody is\nasking me to focus on VACUUM, and there are plenty of other things\nthat I could be doing that don't seem to lead to these situations.\n\n[1] https://postgr.es/m/CAH2-WznuNGSzF8v6OsgjaC5aYsb3cZ6HW6MLm30X0d65cmSH6A@mail.gmail.com\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 4 Jan 2023 09:59:37 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-04 09:59:37 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 4, 2023 at 7:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > and which functions return fully reliable results, I do\n> > not think you should be committing your own patches in this area.\n> \n> My mistake here had nothing to do with my own goals. I was trying to\n> be diligent by hardening an existing check in passing, and it\n> backfired.\n\nWhen moving code around I strongly suggest to make as much of a diff to be\n\"move only\". I find\n git diff --color-moved=dimmed-zebra --color-moved-ws=ignore-all-space\nquite helpful for that.\n\nBeing able to see just the \"really changed\" lines makes it a lot easier to see\nthe crucial parts of a change.\n\n\n> > There is probably a lot of potential benefit in improving the way this\n> > stuff works, but there is also a heck of a lot of danger of creating\n> > subtle data corrupting bugs that could easily take years to find.\n> \n> It's currently possible for VACUUM to set the all-frozen bit while\n> unsetting the all-visible bit, due to a race condition [1]. This is\n> your long standing bug. So apparently nobody is qualified to commit\n> patches in this area.\n\nThat's a non-sequitur. Bugs are a fact of programming.\n\n\n> About a year ago, there was a massive argument over some earlier work\n> in the same general area, by me. Being the subject of a pile-on on\n> this mailing list is something that I find deeply upsetting and\n> demoralizing. I just cannot take much more of it. At the same time,\n> I've made quite an investment in the pending patches, and think that\n> it's something that I have to see through.\n\nI'm, genuinely!, sorry that you feel piled on. That wasn't, and isn't, my\ngoal. I think the area of code desperately needs work. I complained because I\ndidn't like the process and was afraid of the consequences and the perceived\nneed on my part to do post-commit reviews.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 4 Jan 2023 10:41:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 10:41 AM Andres Freund <andres@anarazel.de> wrote:\n> > It's currently possible for VACUUM to set the all-frozen bit while\n> > unsetting the all-visible bit, due to a race condition [1]. This is\n> > your long standing bug. So apparently nobody is qualified to commit\n> > patches in this area.\n>\n> That's a non-sequitur. Bugs are a fact of programming.\n\nI agree.\n\n> > About a year ago, there was a massive argument over some earlier work\n> > in the same general area, by me. Being the subject of a pile-on on\n> > this mailing list is something that I find deeply upsetting and\n> > demoralizing. I just cannot take much more of it. At the same time,\n> > I've made quite an investment in the pending patches, and think that\n> > it's something that I have to see through.\n>\n> I'm, genuinely!, sorry that you feel piled on. That wasn't, and isn't, my\n> goal.\n\nApology accepted.\n\nI am making a simple, practical point here, too: I'm much too selfish\na person to continue to put myself in this position. I have nothing to\nprove, and have little to gain over what I'd get out of working in\nvarious other areas. I wasn't hired by my current employer to work on\nVACUUM in particular. In the recent past I have found ways to be very\nproductive in other areas, without any apparent risk of protracted,\nstressful fights -- which is something that I plan on getting back to\nsoon. I just don't have the stomach for this. It just isn't worth it\nto me.\n\n> I think the area of code desperately needs work. 
I complained because I\n> didn't like the process and was afraid of the consequences and the perceived\n> need on my part to do post-commit reviews.\n\nThe work that I did in 15 (in particular commit 0b018fab, the \"oldest\nextant XID\" commit) really isn't very useful without the other patches\nin place -- it was always supposed to be one piece of a larger whole.\nIt enables the freezing stuff because VACUUM now \"gets credit\" for\nproactive freezing in a way that it didn't before. The motivating\nexamples wiki page shows examples of this [1].\n\nOnce the later patches are in place, the 15/16 work on VACUUM will be\ncomplete, and I can walk away from working on VACUUM having delivered\na very useful improvement to performance stability -- a good outcome\nfor everybody. If you and Robert can find a way to accommodate that,\nthen in all likelihood we won't need to have any more heated and\nprotracted arguments like the one from early in 2022. I will be quite\nhappy to get back to working on B-Tree, likely the skip scan work.\n\n [1] https://wiki.postgresql.org/wiki/Freezing/skipping_strategies_patch:_motivating_examples\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 4 Jan 2023 13:05:33 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 11:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jan 4, 2023 at 7:03 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > But that having been said, I'm kind of astonished that you didn't know\n> > about this already. The freezing behavior is in general extremely hard\n> > to get right, and I guess I feel if you don't understand how the\n> > underlying functions work, including things like performance\n> > considerations\n>\n> I was the one that reported the issue with CLOG lookups in the first place.\n>\n> > and which functions return fully reliable results, I do\n> > not think you should be committing your own patches in this area.\n>\n> My mistake here had nothing to do with my own goals. I was trying to\n> be diligent by hardening an existing check in passing, and it\n> backfired.\n>\n> > There is probably a lot of potential benefit in improving the way this\n> > stuff works, but there is also a heck of a lot of danger of creating\n> > subtle data corrupting bugs that could easily take years to find.\n>\n> It's currently possible for VACUUM to set the all-frozen bit while\n> unsetting the all-visible bit, due to a race condition [1]. This is\n> your long standing bug. So apparently nobody is qualified to commit\n> patches in this area.\n>\n> About a year ago, there was a massive argument over some earlier work\n> in the same general area, by me. Being the subject of a pile-on on\n> this mailing list is something that I find deeply upsetting and\n> demoralizing. I just cannot take much more of it. At the same time,\n> I've made quite an investment in the pending patches, and think that\n> it's something that I have to see through.\n>\n> If I am allowed to finish what I've started, then I will stop all new\n> work on VACUUM.\n>\n\n+1 for you to continue your work in this area. 
Personally, I don't\nfeel you need to stop working in VACUUM especially now that you have\nbuilt a good knowledge in this area and have a grip over the\nimprovement areas. AFAICS, the main takeaway is to get a review of\none's own work which I see in your case that Jeff is already doing in\nthe main project work. So, continuing with that and having some more\nreviews should avoid such complaints. It is always possible that you,\nme, or anyone can miss something important even after detailed reviews\nby others but I think the chances will be much lower.\n\nYou are an extremely valuable person for this project and I wish that\nyou continue working with the same enthusiasm.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Jan 2023 12:28:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 10:59 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> You are an extremely valuable person for this project and I wish that\n> you continue working with the same enthusiasm.\n\nThank you, Amit. Knowing that my efforts are appreciated by colleagues\ndoes make it easier to persevere.\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Thu, 5 Jan 2023 10:57:05 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Tue, Jan 3, 2023 at 10:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > And it'd make sense to have\n> > the explanation of why TransactionIdDidAbort() isn't the same as\n> > !TransactionIdDidCommit(), even for !TransactionIdIsInProgress() xacts, near\n> > the explanation for doing TransactionIdIsInProgress() first.\n>\n> I think that we should definitely have a comment directly over\n> TransactionIdDidAbort(). Though I wouldn't mind reorganizing these\n> other comments, or making the comment over TransactionIdDidAbort()\n> mostly just point to the other comments.\n\nWhat do you think of the attached patch, which revises comments over\nTransactionIdDidAbort, and adds something about it to the top of\nheapam_visbility.c?\n\n-- \nPeter Geoghegan",
"msg_date": "Fri, 6 Jan 2023 11:16:00 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-06 11:16:00 -0800, Peter Geoghegan wrote:\n> On Tue, Jan 3, 2023 at 10:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > And it'd make sense to have\n> > > the explanation of why TransactionIdDidAbort() isn't the same as\n> > > !TransactionIdDidCommit(), even for !TransactionIdIsInProgress() xacts, near\n> > > the explanation for doing TransactionIdIsInProgress() first.\n> >\n> > I think that we should definitely have a comment directly over\n> > TransactionIdDidAbort(). Though I wouldn't mind reorganizing these\n> > other comments, or making the comment over TransactionIdDidAbort()\n> > mostly just point to the other comments.\n> \n> What do you think of the attached patch, which revises comments over\n> TransactionIdDidAbort, and adds something about it to the top of\n> heapam_visbility.c?\n\nMostly looks good to me. I think it'd be good to add a reference to the\nheapam_visbility.c? comment to the top of transam.c (or move it).\n\n\n> *\t\tAssumes transaction identifier is valid and exists in clog.\n> + *\n> + *\t\tNot all transactions that must be treated as aborted will be\n> + *\t\texplicitly marked as such in clog. Transactions that were in\n> + *\t\tprogress during a crash are never reported as aborted by us.\n> */\n> bool\t\t\t\t\t\t\t/* true if given transaction aborted */\n> TransactionIdDidAbort(TransactionId transactionId)\n\nI think it's currently very likely to be true, but I'd weaken the \"never\" a\nbit nonetheless. I think it'd also be good to point to what to do instead. How\nabout:\n Note that TransactionIdDidAbort() returns true only for explicitly aborted\n transactions, as transactions implicitly aborted due to a crash will\n commonly still appear to be in-progress in the clog. Most of the time\n TransactionIdDidCommit(), with a preceding TransactionIdIsInProgress()\n check, should be used instead of TransactionIdDidAbort().\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 7 Jan 2023 13:47:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 1:47 PM Andres Freund <andres@anarazel.de> wrote:\n> > What do you think of the attached patch, which revises comments over\n> > TransactionIdDidAbort, and adds something about it to the top of\n> > heapam_visbility.c?\n>\n> Mostly looks good to me. I think it'd be good to add a reference to the\n> heapam_visbility.c? comment to the top of transam.c (or move it).\n\nMakes sense.\n\n> I think it's currently very likely to be true, but I'd weaken the \"never\" a\n> bit nonetheless. I think it'd also be good to point to what to do instead. How\n> about:\n> Note that TransactionIdDidAbort() returns true only for explicitly aborted\n> transactions, as transactions implicitly aborted due to a crash will\n> commonly still appear to be in-progress in the clog. Most of the time\n> TransactionIdDidCommit(), with a preceding TransactionIdIsInProgress()\n> check, should be used instead of TransactionIdDidAbort().\n\nThat does seem better.\n\nDo we need to do anything about this to the \"pg_xact and pg_subtrans\"\nsection of the transam README? Also, does amcheck's get_xid_status()\nneed a reference to these rules?\n\nFWIW, I found an existing comment about this rule in the call to\nTransactionIdAbortTree() from RecordTransactionAbort() -- which took\nme quite a while to find. So you might have been remembering that\ncomment before.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 7 Jan 2023 15:41:29 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-07 15:41:29 -0800, Peter Geoghegan wrote:\n> Do we need to do anything about this to the \"pg_xact and pg_subtrans\"\n> section of the transam README?\n\nProbably a good idea, although it doesn't neatly fit right now.\n\n\n> Also, does amcheck's get_xid_status() need a reference to these rules?\n\nDon't think so? Whad made you ask?\n\n\n> FWIW, I found an existing comment about this rule in the call to\n> TransactionIdAbortTree() from RecordTransactionAbort() -- which took\n> me quite a while to find. So you might have been remembering that\n> comment before.\n\nPossible, my memory is vague enough that it's hard to be sure...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Sat, 7 Jan 2023 19:25:49 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Sat, Jan 7, 2023 at 7:25 PM Andres Freund <andres@anarazel.de> wrote:\n> Probably a good idea, although it doesn't neatly fit right now.\n\nI'll leave it for now.\n\nAttached is v2, which changes things based on your feedback. Would\nlike to get this out of the way soon.\n\n> > Also, does amcheck's get_xid_status() need a reference to these rules?\n>\n> Don't think so? Whad made you ask?\n\nJust the fact that it seems to more or less follow the protocol\ndescribed at the top of heapam_visibility.c. Not very important,\nthough.\n\n-- \nPeter Geoghegan",
"msg_date": "Wed, 11 Jan 2023 14:29:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "Hi,\n\nOn 2023-01-11 14:29:25 -0800, Peter Geoghegan wrote:\n> On Sat, Jan 7, 2023 at 7:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > Probably a good idea, although it doesn't neatly fit right now.\n> \n> I'll leave it for now.\n> \n> Attached is v2, which changes things based on your feedback. Would\n> like to get this out of the way soon.\n\nMakes sense. It's clearly an improvement.\n\n\n> + * We can't use TransactionIdDidAbort here because it won't treat transactions\n> + * that were in progress during a crash as aborted by now. We determine that\n> + * transactions aborted/crashed through process of elimination instead.\n\ns/by now//?\n\n\n> * When using an MVCC snapshot, we rely on XidInMVCCSnapshot rather than\n> * TransactionIdIsInProgress, but the logic is otherwise the same: do not\n> diff --git a/src/backend/access/transam/transam.c b/src/backend/access/transam/transam.c\n> index 3a28dcc43..7629904bb 100644\n> --- a/src/backend/access/transam/transam.c\n> +++ b/src/backend/access/transam/transam.c\n> @@ -110,7 +110,8 @@ TransactionLogFetch(TransactionId transactionId)\n> *\t\t transaction tree.\n> *\n> * See also TransactionIdIsInProgress, which once was in this module\n> - * but now lives in procarray.c.\n> + * but now lives in procarray.c, as well as comments at the top of\n> + * heapam_visibility.c that explain how everything fits together.\n> * ----------------------------------------------------------------\n> */\n\n+1\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:06:56 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
},
{
"msg_contents": "On Wed, Jan 11, 2023 at 3:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > + * We can't use TransactionIdDidAbort here because it won't treat transactions\n> > + * that were in progress during a crash as aborted by now. We determine that\n> > + * transactions aborted/crashed through process of elimination instead.\n>\n> s/by now//?\n\nDid it that way in the commit I pushed just now.\n\nThanks\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Wed, 11 Jan 2023 15:32:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Delay commit status checks until freezing executes."
}
] |
[
{
"msg_contents": "Hi,\n\nAs noted in [1], and also commented on by other previously, we use libpq in\nblocking mode in libpqwalreceiver, postgres_fdw, etc. This is done knowingly,\nsee e.g. comments like:\n\t/*\n\t * Submit the query. Since we don't use non-blocking mode, this could\n\t * theoretically block. In practice, since we don't send very long query\n\t * strings, the risk seems negligible.\n\t */\n\nbut I don't understand why we do it. It seems like it'd be a fairly small\namount of additional code to just do it right, given that we do so for calls\nto PQgetResult() etc?\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20230103200520.di5hjebqvi72coql%40awork3.anarazel.de\n\n\n",
"msg_date": "Tue, 3 Jan 2023 12:30:11 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Why are we using blocking libpq in the backend?"
}
] |
[
{
"msg_contents": "Hi,\n\nI realized that pg_locks view shows the transaction id of a\nspeculative token lock in the database field:\n\npostgres(1:509389)=# select * from pg_locks where locktype = 'spectoken';\n locktype | database | relation | page | tuple | virtualxid |\ntransactionid | classid | objid | objsubid | virtualtransaction | pid\n | mode | granted | fastpath | waitstart\n-----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+---------------+---------+----------+-----------\n spectoken | 741 | | | | |\n | 3 | 0 | 0 | 3/5 | 509314 |\nExclusiveLock | t | f |\n(1 row)\n\nIt seems to be confusing and the user won't get the result even if\nthey search it by transactionid = 741. So I've attached the patch to\nfix it. With the patch, the pg_locks views shows like:\n\n locktype | database | relation | page | tuple | virtualxid |\ntransactionid | classid | objid | objsubid | virtualtransaction | pid\n | mode | granted | fastpath | waitstart\n-----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+---------------+---------+----------+-----------\n spectoken | | | | | |\n 746 | | 1 | | 3/4 | 535618 |\nExclusiveLock | t | f |\n(1 row)\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Wed, 4 Jan 2023 15:45:34 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix showing XID of a spectoken lock in an incorrect field of pg_locks\n view."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 12:16 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> It seems to be confusing and the user won't get the result even if\n> they search it by transactionid = 741. So I've attached the patch to\n> fix it. With the patch, the pg_locks views shows like:\n>\n> locktype | database | relation | page | tuple | virtualxid |\n> transactionid | classid | objid | objsubid | virtualtransaction | pid\n> | mode | granted | fastpath | waitstart\n> -----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+---------------+---------+----------+-----------\n> spectoken | | | | | |\n> 746 | | 1 | | 3/4 | 535618 |\n> ExclusiveLock | t | f |\n> (1 row)\n>\n\nIs it a good idea to display spec token as objid, if so, how will\nusers know? Currently for Advisory locks, we display values in\nclassid, objid, objsubid different than the original meaning of fields\nbut those are explained in docs [1]. Wouldn't it be better to mention\nthis in docs?\n\n[1] - https://www.postgresql.org/docs/devel/view-pg-locks.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 4 Jan 2023 15:12:37 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix showing XID of a spectoken lock in an incorrect field of\n pg_locks view."
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 6:42 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 4, 2023 at 12:16 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > It seems to be confusing and the user won't get the result even if\n> > they search it by transactionid = 741. So I've attached the patch to\n> > fix it. With the patch, the pg_locks views shows like:\n> >\n> > locktype | database | relation | page | tuple | virtualxid |\n> > transactionid | classid | objid | objsubid | virtualtransaction | pid\n> > | mode | granted | fastpath | waitstart\n> > -----------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+--------+---------------+---------+----------+-----------\n> > spectoken | | | | | |\n> > 746 | | 1 | | 3/4 | 535618 |\n> > ExclusiveLock | t | f |\n> > (1 row)\n> >\n>\n> Is it a good idea to display spec token as objid, if so, how will\n> users know? Currently for Advisory locks, we display values in\n> classid, objid, objsubid different than the original meaning of fields\n> but those are explained in docs [1]. Wouldn't it be better to mention\n> this in docs?\n\nAgreed. Attached the updated patch.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com",
"msg_date": "Thu, 5 Jan 2023 15:15:41 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix showing XID of a spectoken lock in an incorrect field of\n pg_locks view."
},
{
"msg_contents": "On Thu, Jan 5, 2023 at 11:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> Agreed. Attached the updated patch.\n>\n\nThanks, the patch looks good to me. I think it would be probably good\nto backpatch this but it has the potential to break some monitoring\nscripts which were using the wrong columns for transaction id and spec\ntoken number. As this is not a very critical issue and is not reported\ntill now, so it may be better to leave backpatching it. What do you\nthink?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 6 Jan 2023 17:29:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Fix showing XID of a spectoken lock in an incorrect field of\n pg_locks view."
},
{
"msg_contents": "On Fri, Jan 6, 2023 at 9:00 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Thu, Jan 5, 2023 at 11:46 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n> >\n> > Agreed. Attached the updated patch.\n> >\n>\n> Thanks, the patch looks good to me. I think it would be probably good\n> to backpatch this but it has the potential to break some monitoring\n> scripts which were using the wrong columns for transaction id and spec\n> token number.\n\nRight.\n\n> As this is not a very critical issue and is not reported\n> till now, so it may be better to leave backpatching it. What do you\n> think?\n\nConsidering the compatibility, I'm inclined to agree not to backpatch\nit. If someone complains about the current behavior in back branches\nin the future, we can backpatch it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n",
"msg_date": "Fri, 6 Jan 2023 21:30:04 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Fix showing XID of a spectoken lock in an incorrect field of\n pg_locks view."
}
]
[
{
"msg_contents": "Hi hackers,\n\nI saw a problem related to column list.\n\nThere's a restriction that different column lists for same table can't be used\nin the publications of single subscription. But we will get unexpected errors in\nsome cases because the dropped columns are not ignored.\n\nFor example:\n-- publisher\ncreate table tbl1 (a int primary key, b int, c int);\ncreate publication pub1 for table tbl1(a,b);\ncreate publication pub2 for table tbl1;\nalter table tbl1 drop column c;\n\n-- subscriber\ncreate table tbl1 (a int primary key, b int, c int);\ncreate subscription sub connection 'dbname=postgres port=5432' publication pub1, pub2;\n\n-- publisher\ninsert into tbl1 values (1,2);\n\nThe publisher and subscriber will report the error:\nERROR: cannot use different column lists for table \"public.tbl1\" in different publications\n\nThis is caused by:\na. walsender (in pgoutput_column_list_init())\n    /*\n     * If column list includes all the columns of the table,\n     * set it to NULL.\n     */\n    if (bms_num_members(cols) == RelationGetNumberOfAttributes(relation))\n    {\n        bms_free(cols);\n        cols = NULL;\n    }\n\nThe returned value of RelationGetNumberOfAttributes() contains dropped columns.\n\nb. table synchronization (in fetch_remote_table_info())\n    appendStringInfo(&cmd,\n                     \"SELECT DISTINCT\"\n                     \"  (CASE WHEN (array_length(gpt.attrs, 1) = c.relnatts)\"\n                     \"   THEN NULL ELSE gpt.attrs END)\"\n                     \"  FROM pg_publication p,\"\n                     \" LATERAL pg_get_publication_tables(p.pubname) gpt,\"\n                     \" pg_class c\"\n                     \" WHERE gpt.relid = %u AND c.oid = gpt.relid\"\n                     \"   AND p.pubname IN ( %s )\",\n                     lrel->remoteid,\n                     pub_names.data);\n\nIf the result of the above SQL contains more than one tuple, an error will be\nreport (cannot use different column lists for table ...). In this SQL, attrs is\nNULL if `array_length(gpt.attrs, 1) = c.relnatts`, but `c.relnatts` contains\ndropped columns, what we want is the count of alive columns.\n\nI tried to fix them in the attached patch.\n\nRegards,\nShi yu",
"msg_date": "Wed, 4 Jan 2023 06:58:00 +0000",
"msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>",
"msg_from_op": true,
"msg_subject": "Ignore dropped columns when checking the column list in logical\n replication"
},
{
"msg_contents": "On Wed, Jan 4, 2023 at 12:28 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> I tried to fix them in the attached patch.\n>\n\nDon't we need a similar handling for generated columns? We don't send\nthose to the subscriber side, see checks in proto.c.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Thu, 5 Jan 2023 16:18:32 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ignore dropped columns when checking the column list in logical\n replication"
}
]
[
{
"msg_contents": "When trying Valgrind I came across some compiling warnings with\nUSE_VALGRIND defined and --enable-cassert not configured. This is\nmainly because in this case we have MEMORY_CONTEXT_CHECKING defined\nwhile USE_ASSERT_CHECKING not defined.\n\naset.c: In function ‘AllocSetFree’:\naset.c:1027:10: warning: unused variable ‘chunk_size’ [-Wunused-variable]\n Size chunk_size = block->endptr - (char *) pointer;\n ^\ngeneration.c: In function ‘GenerationFree’:\ngeneration.c:633:8: warning: variable ‘chunksize’ set but not used\n[-Wunused-but-set-variable]\n Size chunksize;\n ^\n\nAttach a trivial patch for the fix.\n\nThanks\nRichard",
"msg_date": "Wed, 4 Jan 2023 15:11:02 +0800",
"msg_from": "Richard Guo <guofenglinux@gmail.com>",
"msg_from_op": true,
"msg_subject": "Some compiling warnings"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 20:11, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n> When trying Valgrind I came across some compiling warnings with\n> USE_VALGRIND defined and --enable-cassert not configured. This is\n> mainly because in this case we have MEMORY_CONTEXT_CHECKING defined\n> while USE_ASSERT_CHECKING not defined.\n\n> Attach a trivial patch for the fix.\n\nI've just pushed that. Thanks.\n\nDavid\n\n\n",
"msg_date": "Thu, 5 Jan 2023 12:57:40 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Some compiling warnings"
}
]
[
{
"msg_contents": "Hi hackers,\r\n\r\nI came across a problem on how to improve the performance of queries with GROUP BY clause when the grouping columns have much duplicate data. For example:\r\n\r\ncreate table t1(i1) as select 1 from generate_series(1,10000);\r\ncreate table t2(i2) as select 2 from generate_series(1,10000);\r\n\r\nselect i1,i2 from t1, t2 group by i1,i2;\r\n i1 | i2\r\n----+----\r\n  1 |  2\r\n\r\n            QUERY PLAN\r\n-----------------------------------\r\n HashAggregate\r\n   Group Key: t1.i1, t2.i2\r\n   Batches: 1  Memory Usage: 24kB\r\n   ->  Nested Loop\r\n         ->  Seq Scan on t1\r\n         ->  Materialize\r\n               ->  Seq Scan on t2\r\n Planning Time: 0.067 ms\r\n Execution Time: 15864.585 ms\r\n\r\n\r\nThe plan is apparently inefficient, since the hash aggregate goes after the Cartesian product. We could expect the query's performance get much improved if the HashAggregate node can be pushed down to the SCAN node. For example, the plan may looks like:\r\n\r\n           expected QUERY PLAN\r\n----------------------------------------\r\n Group\r\n   Group Key: t1.i1, t2.i2\r\n   ->  Sort\r\n         Sort Key: t1.i1, t2.i2\r\n         ->  Nested Loop\r\n               ->  HashAggregate\r\n                     Group Key: t1.i1\r\n                     ->  Seq Scan on t1\r\n               ->  HashAggregate\r\n                     Group Key: t2.i2\r\n                     ->  Seq Scan on t2\r\n\r\nMoreover, queries with expressions as GROUP BY columns may also take advantage of this feature, e.g.\r\n\r\nselect i1+i2 from t1, t2 group by i1+i2;\r\n ?column?\r\n----------\r\n        3\r\n\r\n           expected QUERY PLAN\r\n----------------------------------------\r\n Group\r\n   Group Key: ((t1.i1 + t2.i2))\r\n   ->  Sort\r\n         Sort Key: ((t1.i1 + t2.i2))\r\n         ->  Nested Loop\r\n               ->  HashAggregate\r\n                     Group Key: t1.i1\r\n                     ->  Seq Scan on t1\r\n               ->  HashAggregate\r\n                     Group Key: t2.i2\r\n                     ->  Seq Scan on t2\r\n\r\nIs someone has suggestions on this?\r\n",
"msg_date": "Wed, 4 Jan 2023 10:21:30 +0000",
"msg_from": "Spring Zhong <spring.zhong@openpie.com>",
"msg_from_op": true,
"msg_subject": "grouping pushdown"
},
{
"msg_contents": "On Wed, 4 Jan 2023 at 23:21, Spring Zhong <spring.zhong@openpie.com> wrote:\n> The plan is apparently inefficient, since the hash aggregate goes after the Cartesian product. We could expect the query's performance get much improved if the HashAggregate node can be pushed down to the SCAN node.\n\n> Is someone has suggestions on this?\n\nI think this is being worked on. See [1].\n\nDavid\n\n[1] https://commitfest.postgresql.org/41/3764/\n\n\n",
"msg_date": "Thu, 5 Jan 2023 08:40:59 +1300",
"msg_from": "David Rowley <dgrowleyml@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: grouping pushdown"
},
{
"msg_contents": "David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 4 Jan 2023 at 23:21, Spring Zhong <spring.zhong@openpie.com> wrote:\n> > The plan is apparently inefficient, since the hash aggregate goes after the Cartesian product. We could expect the query's performance get much improved if the HashAggregate node can be pushed down to the SCAN node.\n> \n> > Is someone has suggestions on this?\n> \n> I think this is being worked on. See [1].\n\nWell, the current version of that patch requires the query to contain at least\none aggregate. It shouldn't be a big deal to modify it. However note that this\nfeature pushes the aggregate/grouping only to one side of the join (\"fake\"\naggregate count(*) added to the query):\n\nSET enable_agg_pushdown TO on;\n\nEXPLAIN select i1,i2, count(*) from t1, t2 group by i1,i2;\n QUERY PLAN \n--------------------------------------------------------------------------------\n Finalize GroupAggregate (cost=440.02..440.04 rows=1 width=16)\n Group Key: t1.i1, t2.i2\n -> Sort (cost=440.02..440.02 rows=1 width=16)\n Sort Key: t1.i1, t2.i2\n -> Nested Loop (cost=195.00..440.01 rows=1 width=16)\n -> Partial HashAggregate (cost=195.00..195.01 rows=1 width=12)\n Group Key: t1.i1\n -> Seq Scan on t1 (cost=0.00..145.00 rows=10000 width=4)\n -> Seq Scan on t2 (cost=0.00..145.00 rows=10000 width=4)\n\nIf both sides should be grouped, finalization of the partial aggregates would\nbe more difficult, and I'm not sure it'd be worth the effort.\n\n> [1] https://commitfest.postgresql.org/41/3764/\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n",
"msg_date": "Thu, 05 Jan 2023 08:51:08 +0100",
"msg_from": "Antonin Houska <ah@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: grouping pushdown"
}
]